
EETE JUN 2015

ENCRYPTION & DATA SECURITY

Security features for non-security experts

By Jim Turley

When you feel sick, do you immediately enroll in a university course in medicine and wait until you’ve graduated to treat yourself? When your car breaks down, do you change careers and devote yourself to years of automotive engineering? And when we need to learn about cryptography, security, hacking, and intrusion prevention, do we embark on a decades-long struggle of one-upsmanship to keep up with the bad guys? Nah… we hire an expert and call it done.

So it goes with product development. The bigger our systems get, the more complicated and the harder to verify they become. We already know that. There are whole subsections of our industry devoted to verifying hardware and software for correctness. We simulate and test our chips to make sure they work as intended, and we (sometimes) test and simulate our software to make sure that it works the way we want it to.

But that size and complexity add yet another factor: security. The bigger and more complex a system, chip, or board becomes, the easier it is for someone else to find a security hole in it. In security-speak, larger systems present a larger “attack surface” to bad guys who want to break in. The more gates we design and simulate and verify and ship to our customers, the more angles there are for nefarious misdeeds.

As if designing complex devices weren’t hard enough. Now we have to make sure they work, plus we have to make sure we’ve plugged all the security holes – including ones we don’t know anything about. After all, how would you know if your chip or system had a security weakness? Would you know what to look for? And if someone else found such a security hole, would they tell you? Hackers don’t typically publicize their exploits, so it’s likely you’d never know your system had been compromised. So we’re left to wonder whether our brand-new system is secure or not. We assume it is. And we’re probably mistaken.
Making matters worse, most engineers and programmers aren’t experts in security. We’ve spent our careers fine-tuning our coding skills and/or hardware-design talent, not studying cryptography algorithms, side-channel attacks, or differential power analysis (DPA). Most of us barely even know what “security” really means. Do I have to encrypt my ROMs? Should I obfuscate my code? Do I serialize my boards and somehow link them to the firmware? Where do you even start with something this big, and how do you know when you’re done?

Fortunately, there are people who are experts in this kind of skullduggery, even if most of us aren’t. So the smart thing to do is turn to the specialists. Just as we’d get an operating system or a microprocessor from an expert vendor in those areas, we can get security technology from firms that do nothing else. They’re the experts so we don’t have to be.

Better still, lots of this obscure, spooky technology is available right off the shelf, no experience required. In fact, Altera has built it in. It’s mandatory, largely automatic, and as simple as programming a few options in an FPGA.

The best example of this is Altera’s brand-new Stratix 10 family of FPGA chips. Apart from being big and fast, Stratix 10 FPGAs also include a whole array of security features that we don’t normally see in mainstream devices.

Fig. 1: The Secure Device Manager is a reprogrammable, high-performance security subsystem with integrated sensors, cryptographic processing, and more.

That’s because Altera, like everyone else, turned to the experts for its security technology. In this case it’s a Florida-based company called The Athena Group. Athena has been doing hardware security for 15-plus years, just as Altera has been doing FPGAs for generations. Altera and other FPGA companies have always allowed you to encrypt the configuration bitstream – that’s basic stuff.
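What does a side-channel attack look like in practice? Timing is the simplest example: a naive byte-by-byte comparison of a secret returns early on the first mismatch, so an attacker who measures response time can recover the secret one byte at a time. This sketch is purely illustrative (real DPA targets power consumption rather than timing), but it shows the pattern and the standard fix:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    """Leaks timing: the loop exits at the first mismatched byte,
    so how long it runs reveals how many leading bytes matched."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest examines every byte regardless of where
    (or whether) the inputs differ, removing the timing signal."""
    return hmac.compare_digest(a, b)
```

Both functions return the same answers; the difference an attacker cares about is only *how long* each one takes to say “no.”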
These new devices go way beyond that by securing the entire FPGA before, during, and after configuration, and doing it in a far more complete and thorough way. It gets complicated, so stay close.

First of all, Stratix 10 allows you to configure the FPGA in stages, one piece of the device at a time. More important, each separate piece can be encrypted separately, with its own individual key. This allows you to, for example, encrypt basic portions of the device with your company’s “generic key,” while other, perhaps more sensitive, portions of the device are encrypted with an entirely different key that belongs to your customer. A third part of the configuration bitstream might be encrypted with a third key, and so on. This way, you can change out parts of the overall FPGA design without touching the others, and without having to re-verify or re-certify the entire thing. Configuration becomes more like an operating system that’s loading apps and less like an all-or-nothing bootstrap.

Why is this important? Your FPGA’s configuration is important, sure. But that’s not the only reason. Before, it was possible to observe an FPGA while it was loading or operating and glean important information about it just from the signals it gave off. A clever hacker could monitor the chip’s power supply or timing to figure out what was going on inside. And since all FPGAs are mass-produced, and because they all use the same hardwired configuration logic, cracking one FPGA means cracking them all. Now, with separately encrypted partial bitstreams, you can break up that predictability, thus thwarting the bad guys.

Now the process is processor-driven. There’s an entirely new custom processor inside Stratix 10, a Secure Device Manager, that runs its own security code expressly for the purpose of jumbling, obscuring, and disguising the configuration process. But that’s just the beginning.
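The per-partition key scheme can be sketched in a few lines. This is a toy model, not Altera’s implementation: the partition names, key values, and the SHA-256-derived keystream (standing in for a real authenticated cipher such as AES-GCM) are all invented for illustration. The point is only the structure – each partition encrypts and decrypts independently under its own key:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256. A stand-in for
    a real cipher; never use this construction in production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt_partition(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Hypothetical partitions, each under its own key: the vendor's
# generic key for base logic, the customer's key for sensitive IP.
partitions = {
    "base_logic":  (b"vendor-generic-key", b"nonce-0", b"...base config bits..."),
    "customer_ip": (b"customer-secret-key", b"nonce-1", b"...customer config..."),
}
encrypted = {name: crypt_partition(k, n, d)
             for name, (k, n, d) in partitions.items()}
```

Because each blob stands alone, swapping out `customer_ip` (or rotating its key) never touches `base_logic` – the “apps loading onto an operating system” model described above.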
The Secure Device Manager

Jim Turley is founder of Silicon Insider.

Electronic Engineering Times Europe, June 2015 – www.electronics-eetimes.com

