Coreboot Blocked from Recent Thinkpads by Intel Boot Guard (georgi-clan.de)
152 points by ppereira on March 3, 2015 | 60 comments



First it was "secure boot", now it's "boot guard"? It seems that the PC, which was once a very open platform (IBM published schematics and the source code of the BIOS up to the PC/AT), is gradually becoming another locked-down walled-garden ecosystem.

The worst part is that the masses are going to think these anti-user measures are helping them, "because security". They'll see only the "prevents hackers" part being advertised and agree wholeheartedly, or even if they realise that it means they won't be able to choose the firmware they run, they'll shrug it off as "I'm basically never going to do that, so why should it matter to me?" The majority have spoken for security over freedom, and led us down this path, where eventually almost no one will own the computers they use, or be allowed to do anything with them (including write software) except as permitted by the organisations that control them.

This is really, really scary. It's quite reminiscent of the dystopia in Stallman's "The Right to Read":

https://www.gnu.org/philosophy/right-to-read.html

It won't be easy to turn the situation around, but if anything I believe it will have to start with education - to reverse the brainwashing that companies and governments have propagated, and show people the power they can have when they control their computing devices. It is particularly hard when the majority are barely computer-literate, and there is vested interest in keeping them that way.

I don't think the situation has gotten to the point where it's necessary to stockpile older and freer computers, but that could be an option in the future. However, I'm certainly not going to be replacing my Thinkpad X60 with anything else for as long as possible.

I think this famous quote needs to be far better known among those preparing to fight the war on general-purpose computing: "Those who give up freedom for security deserve neither."


Secure Boot doesn't build a walled garden if implemented the way it generally is on x86: anyone physically present at boot time can add and remove keys or just disable it (unless someone's set a firmware password). The way it's typically implemented on ARM is a walled garden, but that's not inherent to the technology, nor is it required for the security benefits.

According to this article, Boot Guard doesn't require building a walled garden either. You can use it to securely attest to the signature on the firmware, and seal your hard drive's encryption key to that attestation. That way an unauthorized modification to the firmware won't result in your private data being stolen, but the computer remains usable with an intentional modification to the firmware, whether it's Coreboot or a manual binary patch or whatever.
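
To make the sealing idea concrete, here's a toy Rust sketch of the measurement chain (hedged: the sha2 crate stands in for the TPM, which actually does this in hardware, and the stage names are made up):

    // Toy model of a TPM PCR (Rust, sha2 crate; stage names made up).
    use sha2::{Digest, Sha256};

    // "Extend" is the only way a PCR changes. You can never set it
    // directly, only fold another measurement into the running hash.
    fn extend(pcr: [u8; 32], component: &[u8]) -> [u8; 32] {
        // new = SHA256(old || SHA256(component))
        let measurement = Sha256::digest(component);
        let mut h = Sha256::new();
        h.update(pcr);
        h.update(measurement);
        h.finalize().into()
    }

    fn main() {
        let mut pcr = [0u8; 32]; // PCRs start zeroed at reset
        let stages: [&[u8]; 3] = [b"firmware", b"bootloader", b"kernel"];
        for stage in stages {
            pcr = extend(pcr, stage); // measure each stage before it runs
        }
        // Seal the disk key to this value: patch any stage and the next
        // boot produces a different PCR, so the key won't unseal. The
        // same value can also be signed for a remote party, which is
        // what remote attestation is.
        println!("final PCR: {:02x?}", pcr);
    }

The key property is that nothing in this chain refuses to boot; it only changes what secrets are released afterwards.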

Freedom and security shouldn't be a tradeoff. Security includes making sure authorized use of the system is permitted. Freedom involves making sure that my computer isn't acting against my interests. We need to stop portraying this as a tradeoff, and we should make it clear that we'll accept (and maybe even demand) security when it is done in the service of computing freedom.

(x86) Secure Boot was a big step forward here for both freedom and security: it lets me make sure that only an OS I choose runs on my machine, and that no nonfree OS, not even the one that came with the computer, will boot up unless I want it to. It also (obviously) played well into the security demands of the wider market. We should be demanding more things like Secure Boot, alongside fewer things like Boot Guard.


I don't know why people are portraying measured boot (i.e., allowing the TPM to prove that your system is running some particular set of code) as an unqualified good. Remote attestation was one of the key features of trusted computing that people got up in arms about back in the day[1]. After all, in theory, it could be used to enable DRM that is very difficult to crack, making it more difficult to make fair use[2] of videos and other files one has access to. I admit that it doesn't seem terribly dangerous in practice; the portability problems that have prevented remote attestation from being used for DRM in practice on existing hardware are not affected by the TPM chain of trust getting shored up, and the Windows kernel is never going to be anywhere near unhackable. (Instead, DRM on PCs is probably eventually going to use Intel SGX, which allows protecting a userspace process without having to trust the rest of the system, making trusted boot unnecessary; but a harm is not well justified by the existence of a greater harm.) And on the flip side, using TPMs to secure user data against attackers sounds like a very nice feature, especially given the level of sophistication of the threats some people face these days. So I agree that in a cost-benefit analysis, having measured boot is better than not, and it may as well be properly secured. But there is a cost-benefit analysis to having it; it's not pure good.

[1] https://www.gnu.org/philosophy/can-you-trust.html

[2] I don't mean fair use in a strictly legal sense, because I live in the US where the DMCA forbids breaking DRM even for what would otherwise be fair use under copyright law; but I think it should be fairly uncontroversial that doing so is moral.


Something that occurred to me recently about the cost-benefit analysis:

Historically, hackers have been very good at taking software that was meant to be secure, and breaking it. As you point out, the Windows kernel was never going to be unhackable. iOS (iPad/iPhone), which is arguably the state of the art in walled-garden verified boot with hardware support, has been reliably jailbroken.

Historically, hackers have been very bad at writing software that was meant to be secure, and keeping it secure.

I'm a lot more confident in our ability, as the hacker movement, to destroy any DRM scheme that wants to use a TPM than to successfully make a general-purpose OS that keeps someone's files secure. And that worries me. If these two were somehow a direct tradeoff (which they're not), I'm pretty sure I'd prefer a world where DRM worked but security breaches and malware weren't an everyday occurrence.


As someone who's spent a good amount of time hacking various devices, I have an odd split mind on security. I detest activities such as those of the NSA and more traditional malware authors and think it would be great if people could trust their computers (never mind that I like sci-fi and wonder whether we will sort this out by the time computers store people's brains and whatnot). But then there are cases like iOS where compromising the system really can be for the benefit of users... but iOS vulnerabilities can be and are used for nefarious purposes (in fact, one of the recently leaked documents mentions that the NSA once used one of my jailbreaks), and jailbreaking has become necessary for fewer and fewer use cases over the years - if I want an open system so much why don't I switch to an Android phone that I can unlock legitimately, without depending on vulnerabilities? It doesn't seem like there will be a scarcity of those in the near future... Should I feel proud of having gotten vulnerabilities fixed by using them in jailbreaks, or guilty that I attempted to maximize rather than minimize the exposure window? And then emotionally, there's a big part of me that always roots for things to be insecure, since hacking is fun and represents a fairly big part of my life. I think I should take joy in the prospect that I might become obsolete (especially since that won't happen for a long time no matter what I do :p), but it's hard to actually feel that way.

Anyway, there are a few recent developments which make me feel somewhat anxious, which I guess means they're good. Among them are Rust and seL4.

Rust, which I'm quite enthusiastic about when not in a hacking mindset, could be one of a few languages that can get rid of the assumption that fast/small/light tools have to be written in C or C++ and thus be memory unsafe. To name a particularly egregious case, it strikes me as ridiculous that OpenBSD's ntpd feels the need to include a privilege separation feature, despite being a low-thousands-of-lines implementation of a tiny protocol; I mean, there are probably no vulnerabilities in that particular codebase anyway, and given a danger, being paranoid is a good thing - but in the context of the complexity of something like NTP, we have so many CPU cycles to spare that programs simply should not be a typo away from magically turning malformed input into arbitrary code execution. (To be fair, high-level languages have their own vulnerability classes which might allow the same, but at least you're not constantly teetering on the edge of the cliff. Well, unless the language is bash :) And of course Rust is aimed at far more performance-critical scenarios - they aim to build a whole browser which is competitive with existing ones performance-wise while being mostly safe code.
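
To illustrate the "one typo away" point, a contrived Rust sketch (nothing to do with ntpd's actual code, just the failure mode):

    // A length-prefixed parser with a classic unchecked-length hazard.
    fn parse_field(packet: &[u8]) -> &[u8] {
        let len = packet[0] as usize;
        // In C, trusting `len` reads right past the end of the buffer;
        // in Rust the slice below is bounds-checked no matter what.
        &packet[1..1 + len]
    }

    fn main() {
        let malformed = [5u8, b'a', b'b']; // claims 5 payload bytes, has 2
        // Panics ("range end index 6 out of range for slice of length 3")
        // instead of quietly reading whatever sits next to the buffer.
        println!("{:?}", parse_field(&malformed));
    }

A panic is still a denial of service, but it's a crash you can debug, not a beachhead for code execution.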

seL4 as a product is overhyped: the kernel they verified is tiny, and the proof enormous. It will be a long, long time before a significant fraction of the code running on a production system will be formally verified. However, whereas before there was always a hope of finding a critical bug in even the innermost low level bits of every system... well, now there is one system where that's theoretically impossible. That's a step. Perhaps, slowly but surely, there will be more steps, more places created where bugs (even logic errors, panics, and other types not caught by languages like Rust) cannot lurk. In the long run, that's certainly the type of software I want running my brain!


Fully agreed about Rust; getting rid of fundamental issues, rather than perpetuating the cat-and-mouse game of chasing classes of vulnerabilities (which are sometimes even intentional backdoors), is fantastic. I hope we see more languages like Rust.

But every time I hear about it, people are disappointed by the language. So how is that going?


I think it goes something like this:

- For some reason, Rust posts constantly make the front page of HN. Either the Rust community includes a lot of HN users, or there are just a lot of people who like to hear about it.

- Rust is highly imperfect (as you would expect it to be). There's the constant breakage, which is supposed to end in like 2 or 3 months with 1.0; as a complex language it has more warts than, say, Go, and requires a deeper understanding to use well (i.e. you really benefit from experience with C++); the borrow checker takes some getting used to, I've heard, before you don't feel like you're fighting with it (haven't written enough code myself to get to that point); builds are slow; it doesn't have everyone's favorite pet feature; etc. Especially since people don't seem to feel the need to spend very much time getting to know Rust before writing blog posts and especially HN comments about it, that leads to a lot of complaints, which due to the previous point show up disproportionately on HN. (Actually, anecdotally I think most of the posts are positive, but every post about Rust is used to some extent as a "general Rust discussion" comment thread, just like every other topic. :)

- With all these complaints, the simple but critical point gets drowned out: the language works, pretty much - it satisfies its performance, safety, and readability goals to a good extent, if not perfectly. (For proof that it works in the large, consider that Servo is over 400k lines of Rust code and implements an already quite featureful HTML renderer[1].)

But I think there will be a good amount of positive hype once 1.0 gets out the door. Can't wait.

[1] http://kmcallister.github.io/papers/2015-servo-experience-re...


Thank you for the perspective.


Who are you seeing that's disappointed? In my circles, people are so excited about Rust that they're practically frothing at the mouth for 1.0. I have one friend who spends all his free time contributing to Rust projects, and has half-jokingly considered quitting his job in order to spend even more time using the language. If anything, my problem is that I keep having to remind people to temper their expectations.


Really? That's encouraging. I've been mostly gauging reactions on HN/SO/proggit. I haven't looked within the Rust community itself.


Remote attestation poses a problem when working with proprietary network services. But in that case you already have a problem: the network service itself.

But when done sensibly and used in a system that's under the user's control (incl. that remote service), it's a security enhancing feature.

And people might not be as up in arms about it anymore because the landscape has changed: the original Palladium proposal was made in an environment where the only threat model was consumers "stealing" content.

Today, measured boot is also a legitimate security measure against criminal activity targeted at the user (e.g. boot kits, extortionware), which didn't really enter the picture back then.


That sounds nice, but when the security administrators have total control, and the management bureaucracy of different organizations realizes that it has that power, you lose the "general" part of the general-purpose computer every time. It's really this bureaucratic control of the machines that hackers have been rebelling against for decades, and that rebellion is a big part of why the computer revolution has been as egalitarian as it has. Security=control for the powerful=no room for dissent.


That just means we have a bigger problem: if people are reliant on other people's computers to do their computing, they have no computing freedom at all.

The hacker movement has been soft on this for a long time. Blind trust in university accounts is a good example. There's really no reason to believe that your university is less likely to snoop on your mail than Google is. There's in fact a lot more reason to believe that an actual human, with an interest in you specifically, may be doing so. Same with files or processes on a shared server. But hacker culture grew up in a time when computing required a large, shared investment in a computer, and so there was no choice but to trust the university computer if you wanted to do any computing at all, and that became an accepted part of the culture.

We now live in an age where the personal computer is just about (but not quite) affordable for everyone. We need to shake the idea that you can achieve computing freedom on someone else's computer. Perhaps you can freely compute for them: if you're doing research for a university that's given you an account, or work for an employer who's given you a laptop, by all means, use their computer. The concept of freedom, then, applies to the university or the employer, and you are just their agent.

But if you're talking about personal computing, we need to get a personal computer in everyone's hands. We need to solve problems of affordability and accessibility, but people should no more be reliant on a company-issued laptop than on a university-issued shell account.


>if people are reliant on other people's computers to do their computing, they have no computing freedom at all.

That's not true. They lack the ability to ensure their freedom, yes. But they don't necessarily lack computing freedom.


My computer doesn't have any "security administrators" except for me.

If you mean a security administrator of a large company - it's their computer, not yours.


Cory Doctorow has a great talk about computers and owners vs. users. I can't find the link anymore, though; anyone have a link?


The coming war on general computation (2011):

https://www.youtube.com/watch?v=HUEvRyemKSg

The coming civil war over general purpose computing (2012):

http://boingboing.net/2012/08/23/civilwar.html


> Secure Boot doesn't build a walled garden if implemented the way it generally is on x86: anyone physically present at boot time can add and remove keys or just disable it

That is true for very competent end users, but nearly impossible for the general masses.

We have come so far as to be able to install big Linux distributions like Fedora and Ubuntu without any hassle on Secure Boot systems. That only works, though, because their bootloader, kernel and kernel modules were blessed (i.e. signed) by the distributor. Building your own kernel still requires you to disable Secure Boot (or, far more difficult, add your own key to Secure Boot and sign everything yourself). Heck, even ZFS, which is otherwise as easy to install as adding a third-party repository, is incompatible with Secure Boot, as it loads a custom kernel module.

Now, to visualize the real-world difficulty of disabling Secure Boot, imagine guiding your spouse or a friend through the process over the phone:

- Reboot the system a few times until they find the text saying which key press during boot enters the EFI setup, or guess correctly for their hardware manufacturer.

- Have them read out loud what they see - possibly in a language they don't speak well, if the interface is not localized.

- Navigate them through the menus and work out how to change the Secure Boot setting (one would think switching binary settings and moving items up and down in a list would be a solved UI problem by now… speaking from experience with my HP ProBook's UEFI interface, it is not).

- If you are particularly unlucky: explain which keys of their keyboard layout map to the required US layout.

- Exit and save the settings.


I don't think education is the answer. There is no great conspiracy to keep people computer-illiterate. Most people follow the path of least resistance. We just have to make sure that path involves open technologies. This can be done either through other technologies or through laws.


Well, on the flip side you have the NSA's recently revealed "Equation Group" firmware flasher. Assuming the various hard drive companies weren't collaborating (and I'd assume they weren't, considering it was distributed with a virus and not baked in), signed firmware would have prevented this.

All in all I think for the average user (who doesn't even know what firmware is), signed images are a good thing.


Signed images are good. Choice in signed images is better.


As you picture this dystopia, remember there is zero chance server hardware will wall out Linux. Amazon, Google, etc all run their entire data centers on one distro or another.


Boot Guard won't prevent Linux from running. But in Verified Boot mode it blocks coreboot, and with it any work to customize even the lower layers of the system.


Well, what do you expect when you have a monopoly in the PC market? I also don't see anyone recommending AMD chips around here.


Lenovo Broadwell devices are attempting to dig themselves out of the hole caused by Haswell trackpads. Probably not a good time to cripple them.

How would Lenovo react if Broadwell devices began receiving many service calls under warranty? Presumably the lock could be changed by a motherboard replacement.

Related article: http://www.pcworld.com/article/2883903/how-intel-and-pc-make..., ".. New Thinkpads can't be used anymore for coreboot. Especially the U and Y Intel CPU series. They come with Intel Boot Guard and you won't be able to boot anything which is unsigned and not approved by the OEM. This means the OEMs are fusing SHA256 public key hashes into the southbridge.

... to their credit, Intel does allow PC manufacturers to configure the hardware in a different way. The real way to get that open hardware seems to be to build it from scratch and make the right decisions along the way, as Purism is trying to do. If you want this sort of open hardware, be prepared to vote with your wallet."

Purism: https://www.crowdsupply.com/purism/librem-laptop


Unfortunately it looks like Purism also has some problems in this regard.

http://blogs.coreboot.org/blog/2015/02/23/the-truth-about-pu...


I'd assume that their promise (https://puri.sm/posts/pioneering-cpu-efforts-to-liberate-lap...) to "ship with an Intel CPU fused to run unsigned BIOS code" means precisely that they leave Boot Guard disabled.

Which is much less than the way they market it, but that's part of the issue some people in the coreboot community have with Purism. They present it as some groundbreaking success (while it's nothing but clicking a different checkbox in a tool), and I really doubt it's the "first [laptop] to ship" that way. Chromebooks come to mind.


Side question: "...Coupled with their desire to include an Nvidia GPU..." - that's the first I've heard of this. I know Nvidia was hoping to get an x86 license from their lawsuit against Intel.


Nvidia settled their lawsuit without an x86 license: http://www.anandtech.com/show/4122/intel-settles-with-nvidia...


Alright, I thought for a moment it was about something else.


Originally the Purism laptop was supposed to ship with Nvidia graphics. They later changed that part of the design.


At least this did not get as much backlash as Superfish did.


Superfish was preinstalled software that could be removed by following a few steps (or by installing a clean OS).

Boot Guard is a hardware-level protection that cannot be removed or disabled.

Of course the latter is (theoretically) non-malicious, but something of that permanence disturbs me far more than some easily-removed malware. The fact that it is used to ensure that the OS and everything above it is in an "assured state" means that it could also be used to prevent users from uninstalling "approved" software like Superfish.


I am talking about the press coverage etc.


Broadwell Thinkpads were announced a month ago, I wonder how many have shipped already? Superfish affected many Lenovo devices.


It did not affect ThinkPads, but it did affect the reputation of Lenovo regardless.


This started with Haswell. Hardly anyone seems to care :(


That's why I always emphasize that TPM (UEFI/SecureBoot/Boot Guard etc.) are not the right way for open source systems (Linux etc.).

The Linux community should stop fiddling with locked-down boot systems. They should instead boycott locked-down systems and only support hardware vendors who officially support Linux. Many of them are featured on LinuxGizmos. I believe that such hardware vendors are much more open to the demands of the open source community than vendors who produce locked-down systems.

http://linuxgizmos.com


> TPM (UEFI/SecureBoot/Boot Guard etc.)

These are four different technologies. Some of them help your freedom. Some of them hurt it. Some of them have nothing to do with your freedom at all. It doesn't really do anyone any favors to lump them all in the same category; it certainly doesn't make hardware vendors inclined to think you're making cogent arguments.


UEFI has nothing to do with secure boot (apart from the fact they're compatible).

I run Linux on my Thinkpad, with UEFI-only enabled, secure boot disabled, and UEFI will boot my kernel directly using EFISTUB - no more screwing around with bootloaders. It's awesome!

There are even ways to use Secure Boot with various distros [1]. Sure, it's a pain, but it can be done. Having and using the tech is a different issue from vendors hindering what you can do with your hardware.

[1] http://www.rodsbooks.com/efi-bootloaders/secureboot.html


I have no problem with a cryptographically verified boot process, so long as I control the key or verification step.

Unfortunately, Intel stripped this freedom from CPU owners by allowing OEMs to lock down the boot process in a manner that cannot be bypassed. Soon Coreboot will be all but dead for machines with Intel processors -- other than Chromebooks, which ship with Coreboot. Owners will have to accept whatever BIOS their vendor gives them.

The UEFI legacy boot option also seems to be on its way out, so I expect there will be fewer OS choices in our future too.


I don't think the option to disable Secure Boot is going anywhere at this point though.


"Secure Boot" is a farcical misnomer when it prevents you from replacing Windows with OpenBSD.


Why can't you boot it from GRUB?


Edit: Measured boot (DRTM, e.g. implemented by Qubes) has some benefits over "Secure Boot" (SRTM), both for firmware modification and key management.

http://www.pcworld.com/article/2883903/how-intel-and-pc-make..., "There’s also a second option: “Measured Boot” mode, where the hardware securely stores information about the boot process in a trusted platform module (TPM) or Intel Platform Trust Technology (PTT). The operating system could then examine this information, and—if there was a problem—present an error to the user."

More on DRTM & SRTM: http://theinvisiblethings.blogspot.com/2011/09/anti-evil-mai...


"Secure Boot" and "SRTM" are different things.

Secure Boot is a facility of UEFI, the firmware, to restrict which bootloaders and OSes can load, and prevent them from loading if a signature check fails. It makes sure that the firmware knows and trusts the OS.

"SRTM" is a facility of the TPM to inspect what firmware and bootloaders have already loaded and executed, and refuse to unlock an encryption key, attest to the network, etc. if the hashes don't match. It requires a TPM (Secure Boot does not), and crucially it cannot prevent anything from running. It can just refuse to do cryptographic operations depending on what has run. It makes sure that the OS knows and trusts the firmware.

They both have the effect of blocking boot sector malware (although from opposite directions), but the way that they work, the requirements for setting them up, the flexibility you get, and the other threats they defend against are rather different. And complementary, it's often a good plan to use them together to tie a nice knot.

("Boot Guard," the topic of this article, is something else entirely, where the hardware checks the signature on the firwmare. The firmware in turn can Secure Boot if it wants.)


Thanks for the correction.


That option primarily exists to provide big Corps with a way to buy new machines and image Windows 7 on them. Once all that legacy is gone, non-Secure Boot isn't necessary anymore from a Wintel point of view.

And there won't be much opposition from Linux, given that all major (and most minor) distros now ship Secure Boot enabled.


I recently purchased a Dell Inspiron laptop, and it was a nightmare to install Windows 7 on it.

"Secure Boot" isn't there to secure the boot process against rootkits, but to secure it against "unauthorized" or "unsupported" installs of your favorite operating system. In this case, I couldn't install Windows 7 on my brand new laptop.

I still remember the SIM locks from carriers years ago; if I were a vendor, I could see charging users up front to disable "Secure Boot" so that they can install another operating system.


Original mailing list thread: http://www.coreboot.org/pipermail/coreboot/2015-February/thr...

Scary times indeed.


http://www.coreboot.org/pipermail/coreboot/2015-February/079...

>This means the OEM are fusing SHA256 public key hashes into the southbridge.

SHA256 public keys, scary indeed.


SHA256 public key hashes


Maybe they meant "HMAC" or "Fingerprint".


Intel likes to use SHA256(pubkey) for their verification schemes. I guess that reduces the amount of storage needed while remaining sufficiently secure.

Details are in http://apress.com/9781430265719
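
My reading of that (a guess at the general shape, not Intel's actual format): the fuses only need to hold the 32-byte hash, while the full public key travels with the firmware image, and the boot ROM hashes the shipped key and compares before trusting it to verify anything. Something like:

    // Sketch of hash-anchored key verification (illustrative only;
    // names and layout are made up). Uses the sha2 crate.
    use sha2::{Digest, Sha256};

    fn pubkey_trusted(fused_hash: &[u8; 32], pubkey_in_image: &[u8]) -> bool {
        // Fuse space is tiny, so burn SHA256(pubkey) instead of the key.
        let h: [u8; 32] = Sha256::digest(pubkey_in_image).into();
        &h == fused_hash
        // Only on a match would the boot ROM go on to use the key to
        // check the firmware signature itself.
    }

    fn main() {
        let pubkey = b"...stand-in for the OEM's public key bytes...";
        let fused: [u8; 32] = Sha256::digest(pubkey).into();
        assert!(pubkey_trusted(&fused, pubkey));
        println!("key matches fused hash; proceed to signature check");
    }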


"Hash" is a perfectly valid term, although "fingerprint" would also be correct. "HMAC" is something else entirely.


You don't need an HMAC to verify a key or other blob of data, and isn't fingerprint a less technical term for hash?


Ladies and gentlemen: Lenovo.


When this came up on Phoronix, I suggested a jumper to disable it.


Second this. Put a switch under the memory door. Problem solved.


While we're at it, we need something similarly physical for IPMI BMCs.



