
EETE JULAUG 2014

DATA ENCRYPTION & SECURITY

Hacking airliners: lessons learned in firmware integrity assurance

By David Kleidermacher

In April 2013, a security consultant made headlines when he claimed he could use an Android smartphone to hack into and commandeer commercial jets. The hacker used Android to wirelessly inject malicious commands into a simulated flight management system (FMS), causing the simulated aircraft's autopilot to alter its flight accordingly, with horrific ramifications.

There are two major claims in the research. First, messages sent to the FMS are not cryptographically authenticated. Second, vulnerabilities in the FMS software enabled the unauthenticated commands to actually hijack the plane.

Soon thereafter, the FAA, EASA, and avionics firms issued general statements reassuring the public that the Android hacking demo is not feasible in actual aircraft. One vendor's statement was more specific, stating that the simulated FMS "doesn't have the same protections against overwriting or corrupting as our certified flight software."

Putting aside the concern about unauthenticated messages (for which there exist plenty of obvious solutions based on digital signatures), this hack should be taken as a stern reminder to all electronics developers of the importance of protecting the integrity of the critical firmware and software used within their systems. Most computer systems in the world today have no such protections in place, as evidenced by the rootkit epidemic. In 2011, McAfee asserted the existence of over 2 million unique rootkits, with 1,200 new rootkits being detected every single day. By ensuring that only trusted software is running on the platform, attacks like the one performed on the simulated aircraft cannot succeed, or at least cannot go undetected.

There are two ways that firmware integrity can be violated. First, the disk or flash blocks that contain trusted software might be modified.
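On the first claim, unauthenticated FMS messages, the kind of fix alluded to above can be sketched in a few lines. This hypothetical Python fragment uses a shared-key HMAC rather than the asymmetric digital signatures the text mentions; the key, the command string, and the function names are illustrative assumptions, not part of any real FMS protocol:

```python
import hashlib
import hmac
import os

def tag_command(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag binding the command to a shared secret."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_command(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_command(key, message), tag)

key = os.urandom(32)          # in practice, provisioned out-of-band
cmd = b"SET_HEADING 270"      # illustrative command, not a real FMS message
tag = tag_command(key, cmd)

assert verify_command(key, cmd, tag)                     # genuine command accepted
assert not verify_command(key, b"SET_HEADING 090", tag)  # forged command rejected
```

A receiver that rejects any message failing such a check closes the injection path the demonstration relied on; an asymmetric scheme, with signatures verified against an embedded public key, additionally avoids distributing a shared secret to every sender.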
Malware installation can be performed with a physical attack on the storage system or by using an operating system vulnerability to gain run-time access to the storage system. These are sometimes referred to as "permanent roots" since they continue to operate even after a reboot. The second method is to "hook" into the trusted software's critical execution pathways at run-time. Much of the world's modern operating system security research is centered on making it more difficult for malware to take hold by obfuscating operating system execution (e.g. address space layout randomization) and reducing general operating system vulnerabilities.

Secure boot and remote attestation

Secure boot is the most obvious and effective way to prevent, or at least detect, permanent roots. The goal of secure boot is to ensure that the entire platform, including its hardware, boot loaders, operating system, and critical applications – everything that contributes to the establishment of a known, trusted system state – is measured and found to be authentic.

Fig. 1: Secure boot chain.

If the hardware and boot loader have the capability to load the system firmware (operating system, hypervisor, the entire TCB) from an alternative device, such as USB, rather than from the intended, trusted device (e.g. flash), then an attacker with access to the system can boot an evil operating system that may act like the trusted operating system but with malicious behaviour, such as disabling network authentication services or adding backdoor logins. But this is only one way to subvert systems that lack secure boot. Instead of a malicious boot loader or operating system, an evil hypervisor can be booted, and the hypervisor can then launch the trusted operating system within a virtual machine. The evil hypervisor has complete access to RAM and hence can silently observe the trusted environment, stealing encryption keys or modifying the system security policy.
King, et al., provide a good example of this attack in a paper describing SubVirt, a malware hypervisor. Another infamous attack, called Blue Pill, extended the SubVirt approach to create a permanent rootkit that could easily be launched on the fly using weaknesses in the factory-installed Windows operating system.

The typical secure boot method is to verify the authenticity of each component in the boot chain; if any link in the chain is broken, the secure initial state is compromised. The first-stage ROM loader must have a hardware-protected cryptographic key used to verify the digital signature of the next-level boot loader. This key may be integrated into the ROM loader image itself, installed using a one-time programmable fuse, or stored in a local TPM that may provide enhanced tamper protection. The signature key is used to verify the authenticity of the second-stage component in the boot chain. The developer has the option of allowing any authentic image or a specific set of known-good images (in which case the known-good signatures must also be stored in the hardware-protected area). The verification of the second-level component covers its executable image as well as the known-good signature and signature verification key of the third stage, if any.

The chain of verification can be indefinitely long. It is not uncommon for sophisticated computing systems to have surprisingly long chains, or even trees, of verified components that make up the TCB. Figure 1 depicts an example three-level secure boot sequence.

David Kleidermacher is CTO at Green Hills Software - www.ghs.com
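The staged verification described above can be modeled in miniature. In this hypothetical Python sketch, each boot stage embeds the expected measurement (a SHA-256 hash) of the component it loads next, and the root measurement stands in for a value burned into a one-time programmable fuse; the stage contents and the `next=` framing are invented purely for illustration:

```python
import hashlib

def measure(image: bytes) -> bytes:
    """Measurement = SHA-256 hash of the raw component image."""
    return hashlib.sha256(image).digest()

# Hypothetical three-stage chain: each stage embeds the expected
# measurement of the component it will load next.
stage3 = b"application firmware image"
stage2 = b"second-stage loader|next=" + measure(stage3)
stage1 = b"ROM loader|next=" + measure(stage2)

# Root of trust, e.g. a hash installed in a one-time programmable fuse.
ROOT_MEASUREMENT = measure(stage1)

def verify_chain(images, root):
    """Walk the boot chain, refusing to proceed past any broken link."""
    expected = root
    for image in images:
        if expected is None or measure(image) != expected:
            return False  # unexpected or tampered component: halt boot
        # Pull out the embedded measurement of the next stage, if present.
        expected = image.split(b"next=", 1)[1] if b"next=" in image else None
    return True

assert verify_chain([stage1, stage2, stage3], ROOT_MEASUREMENT)
tampered = b"evil loader|next=" + measure(stage3)
assert not verify_chain([stage1, tampered, stage3], ROOT_MEASUREMENT)
```

A real implementation would verify a digital signature on each component with a protected public key, as the article describes, rather than comparing bare hashes; the hash-only model just shows the chained structure in which each stage vouches for the next.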

