:::info This document is a work in progress 🚧 :male-construction-worker: :female-construction-worker: 🚧 :::
Information is not a disembodied abstract entity; it is always tied to a physical representation. It is represented by engraving on a stone tablet, a spin, a charge, a hole in a punched card, a mark on paper, or some other equivalent. This ties the handling of information to all the possibilities and restrictions of our real physical world, its laws of physics and its storehouse of available parts.
-- Rolf Landauer, in The physical nature of information
This is an initiative to spark research into how we could develop a secure chip for TEEs (Trusted Execution Environments) that would ultimately be secure because of physics rather than economics[^1]. The chip design should be open source, and its physical implementation should be verifiable, meaning that it should match the open source design. Moreover, the root of trust (embedded secret key) should be proven not to have leaked during generation or manufacturing. Thus, the hope and vision is to develop a TEE chip that does not need to be trusted because it can be verified by physics and mathematics. For an example of a cryptographic protocol implementation that is secure through physics, see Experimental relativistic zero-knowledge proofs by Alikhani et al.
To put this vision into context, current TEEs, such as Intel SGX, face the following challenges:
:::warning
- :radioactive_sign: NO proof of manufacturing according to a known open source chip design specification
- :radioactive_sign: NO proof of non-leakage of secret bits -- how can we know that the secret bits (root of trust) encoded into the chip were not leaked during manufacturing?
- :radioactive_sign: NO proof of hidden-forever secret bits -- above and beyond trusting or not trusting the chip manufacturers and the manufacturing processes, one problem remains: can we truly hide secret bits of information (root of trust) in physical matter?
- :radioactive_sign: Centralized remote attestation -- meaning that trust in the manufacturer is required to attest the trustworthiness of a TEE. [RFC 9334] :::
The intent of this document is to work on addressing the four fundamental challenges listed above. These challenges are arguably all rooted in the security of the root of trust. Hence, as we make progress in our understanding, this document may tighten its focus on how to secure the root of trust via physics rather than economics, while extracting the work on the other challenges into their own documents.
For a shorter introduction, readers are highly encouraged to read Flashbots' call to action to write a position paper "to communicate the problem and its importance to the broader hardware research community".
For a broader presentation of the various challenges involved in making fully secure TEEs, readers are highly encouraged to read the Autonomous TEE Manifesto by the Poetic Technologies UG team.
Last but not least, the Secure Cryptographic Implementation Association needs no convincing, as it is already actively working on developing open hardware that can withstand physical attacks such as side-channel and fault attacks. Their vision is that an "open approach to security can lead to a better evaluation of the worst-case security level that is targeted by cryptographic designs".
The key topics that this document wishes to explore are:
- Revisiting the Problem which TEEs aim to solve
- Do we really need TEEs? Could we do it all with mathematics (FHE, ZKP, MPC, etc)?
- Motivations for better TEEs
- Threat Model
- Cypherpunk-Friendly Chip
- The Rise of Crypto-Physics
- Appendix
TEEs are an attempt to solve the secure remote computation problem. Quoting Intel SGX Explained by Victor Costan and Srinivas Devadas:
:::info
Secure remote computation is the problem of executing software on a remote computer owned and maintained by an untrusted party, with some integrity and confidentiality guarantees. :::
Note that the remote computer is said to be owned and maintained by an untrusted party. Yet, current TEEs cannot handle physical attacks such as chip attacks (see Physical Attacks on Chips), which would allow an attacker to retrieve the root of trust (secret keys encoded in the hardware). Once an attacker knows the secret keys, they can emulate a TEE and go through the attestation process unnoticed (e.g. see Appendix A. Emulated Guard eXtensions in the https://sgx.fail/ paper).
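To see why key extraction is fatal, consider this toy sketch of measurement-based attestation (illustrative only: the names and the MAC-based scheme are simplifications, not SGX's actual mechanism, which uses EPID group signatures or ECDSA quotes): anyone holding the device secret can produce valid-looking quotes for code that never ran inside a genuine enclave.

```python
import hashlib
import hmac

# Hypothetical embedded key; on a real chip this is the root of trust.
DEVICE_SECRET = b"root-of-trust-fused-into-the-chip"

def attest(measurement: bytes, secret: bytes) -> bytes:
    """Produce a 'quote' binding a code measurement to the device secret."""
    return hmac.new(secret, measurement, hashlib.sha256).digest()

def verify(measurement: bytes, quote: bytes, secret: bytes) -> bool:
    """Verifier-side check of a quote."""
    return hmac.compare_digest(attest(measurement, secret), quote)

# Honest flow: the genuine chip attests to the enclave it actually runs.
quote = attest(b"measurement-of-enclave-A", DEVICE_SECRET)
assert verify(b"measurement-of-enclave-A", quote, DEVICE_SECRET)

# After a successful chip attack, an emulator holding DEVICE_SECRET can
# compute the exact same quote outside any TEE -- the verifier cannot
# tell silicon from software.
forged = attest(b"measurement-of-enclave-A", DEVICE_SECRET)
assert verify(b"measurement-of-enclave-A", forged, DEVICE_SECRET)
```

The point is structural: once the secret is out, the cryptography alone offers the verifier no way to distinguish a genuine enclave from an emulator.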
Is it even possible to build a chip that can withstand physical attacks, such as those making use of Focused Ion Beam microscopes, as mentioned in Intel SGX Explained (section 3.4.3) and Breaking and entering through the silicon? One could argue that it's not possible in the classical setting, but may be possible in the quantum setting. Some argue that PUFs (Physical Unclonable Functions) cannot be broken and would therefore be a solution. However, there's plenty of research that focuses on breaking PUFs, and there's also active research on developing more secure PUFs. Hence, it seems reasonable to assume that PUFs are not an ultimate solution to chip attacks, although they do seem to be a major improvement. (See Root of Trust with PUFs.)
Why can't we do it all with FHE, ZKP, and MPC?
Not sure. 😄 Besides the performance limitations of FHE, ZKP and MPC, the problem of proof-of-deletion, or certified deletion, may be the most frequently mentioned one. The intuition is simple: "How do you prove that you have completely forgotten some secret data?" You could show that your hard disk has been completely wiped, but perhaps you copied the data elsewhere. Hence, certified deletion appears not to be possible in the classical setting, but it apparently is if one is willing to step one foot (or two) into the quantum setting (e.g. High-Dimensional Quantum Certified Deletion by Hufnagel et al, Quantum Proofs of Deletion for Learning with Errors by Poremba, Software with Certified Deletion by Bartusek et al). If we are confined to the classical setting, though, then TEEs may be useful: if the program generating and/or handling secrets is executed in a TEE, then the program can be written such that it deletes the secrets once it's done with its task. As an alternative to TEEs, there's the idea of traceable secret sharing as presented in Traceable Secret Sharing: Strong Security and Efficient Constructions by Boneh et al.
:::info Moving from security-through-economics towards security-through-physics. :::
According to SoK: Hardware-supported TEEs and Intel SGX Explained, current chips that implement TEEs cannot protect against physical attacks such as chip delayering, which would allow an attacker to extract the so-called root of trust, meaning the hardware-embedded secret keys upon which the entire security of the TEE depends. The only currently known defense against chip attacks is to make the cost of such an attack as high as possible. To make things worse, it's not even clear what the cost of a chip attack is; perhaps one million dollars, or perhaps much less (see Physical Attacks on Chips). So, at the very least, one would hope to know what the cost of a chip attack is, so that protocol designers could design mechanisms that eliminate the economic incentive to attack the chip, because the cost of the attack would not be worth what could be extracted from it. It's very important to note here that a protocol relying on TEEs may also be targeted for reasons other than financial ones, and it's probably best to avoid using TEEs for such cases (e.g. a privacy-preserving application used by political dissidents).
Aside from being vulnerable to chip attacks, the current popular TEEs, such as Intel SGX, are closed source, meaning that their hardware designs are not public, which in turn makes it very difficult to know whether a chip is implemented as claimed. Even with an open source hardware design, we would need to figure out how to verify that the chip was implemented as per the open source design, and that secrets (root of trust) generated and embedded into the hardware at manufacturing time were not leaked.
See On the Physical Security of Physically Unclonable Functions (1.1.2 Physical Attacks) by Shahin Tajik.
In the crypto world, the motto "Don't Trust, Verify" is frequently used to emphasize the verifiability of the various protocols, which allows any user to verify for themselves the validity of a transaction or claim. It may be said that the backbone of this revered verifiability is cryptography and distributed systems, which involve trusting mathematics and trusting an honest majority, respectively. Consensus protocols and many multi-party computation (MPC) protocols require trusting that a majority of the validators are honest. The required majority may range from 51% to 75% depending on the protocol. Most protocols rely on economic incentives to keep the majority honest. So, on one hand the world of crypto is secured through mathematics, and on the other hand through game theory, which incentivizes the majority to follow a prescribed distributed system protocol. What about TEEs? Where do they fit in this picture?
The so-called web3 world (aka the crypto space) increasingly makes use of TEEs (mostly Intel SGX) in applications where substantial amounts of money may flow, and where TEEs help secure the confidentiality of users. It's therefore important to properly understand what it means to trust TEEs. Strangely, it seems complicated to answer the question "What does it mean to trust TEEs?" If you ask different people, you may get a spectrum of answers, ranging from "You have to trust the chip maker! But you already trust them anyways." to "Intel SGX is broken every month, I don't understand why people use them!"
:::warning In general, it may be fair to say that trusting a TEE means the following:
- Trust that the chip is designed as per the claims of the chip maker.
- Trust that the chip is manufactured as per the claims of the chip maker.
- Trust that the root of trust is not leaked during the manufacturing process.
- Trust that the root of trust cannot be extracted out "cheaply" or "easily" by an attacker who has physical access to the chip.
- Trust the remote attestation process, which may mean having to trust the role of the manufacturer (e.g. Intel SGX with EPID or DCAP).[^3]
Note that the above implicitly assumes that the design and implementation are secure and free of bugs.[^4] :::
:::success Auguste Kerckhoffs, back in 1883, in his paper entitled La Cryptographie Militaire (Military Cryptography), argued that security through obscurity wasn't a desirable defense technique.
Il faut qu’il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l’ennemi
roughly translated to:
It must not require secrecy, and it must be able to fall into the enemy's hands without inconvenience
Perhaps one may point out that Kerckhoffs was assuming that the private key would be held secretly and would not be part of the open design. The need to secure a private key in an open design begs for physics to enter the arena (e.g. PUFs).
For example, the Secure Cryptographic Implementation Association (SIMPLE-Crypto Association) aims to apply Kerckhoffs's Principle to hardware and lays out their vision at https://www.simple-crypto.org/about/vision/:
[...] our vision is that as research advances, the security by obscurity paradigm becomes less justified and its benefits are outweighed by its drawbacks. That is, while a closed source approach can limit the adversary's understanding of the target implementations as long as their specifications remain opaque, it also limits the public understanding of the mechanisms on which security relies, and therefore the possibility to optimize them. By contrast, an open approach to security can lead to a better evaluation of the worst-case security level that is targeted by cryptographic designs. :::
:::warning For some reason, the hardware world does not embrace open source the way the software world does. Moreover, it is common practice to use security through obscurity as a core design principle to secure hardware. Simply put, the current hardware industry appears to be dominated by the belief that it's best to hide the design and inner workings of a chip, even adding unnecessary elements to the design just to confuse a potential attacker, in the hope that the attacker will not be able to understand, and thus reverse engineer, the design. :::
:::info Rework. Potentially reference Application of Attack Potential to Hardware Devices with Security Boxes. Mention that we're looking to defend against the highest score of "attack potential".
Also perhaps mention the concept of Manufacturer Resistance as described in Gassend's master thesis. ::: The worst.
- Attackers with physical access to the chip, with unlimited resources and funds MUST be considered
- State actors
- Malicious actors with full access to every step of the supply chain, foundries, etc
- Malicious actors with unlimited access to data centers, e.g. swapping computers with their own fake SGX computers without being noticed
- etc, etc.
Just think the worst of the worst.
Perhaps the only thing that may be out of scope is remote civilizations or state actors with access to new physics that is not yet known to the general public (e.g. academia/universities). For instance, imagine another planet where beings would know how to travel faster than the speed of light.
:::info The battle for Ring Zero by Cory Doctorow.
But how can we trust those sealed, low-level controllers? What if manufacturers – like, say, Microsoft, a convicted criminal monopolist – decides to use its low-level controllers to block free and open OSes that compete with it? What if a government secretly (or openly) orders a company to block privacy tools so that it can spy on its population? What if the designers of the secure co-processor make a mistake that allows criminals to hijack our devices and run code on them that, by design, we cannot detect, inspect, or terminate?
That is: to make our computers secure, we install a cop-chip that determines what programs we can run and stop. To keep bad guys from bypassing the cop-chip, we design our computer so it can't see what the cop-chip is doing. So what happens if the cop-chip is turned on us? :::
As mentioned in The Problem TEEs aim to solve, if the problem we wish to tackle is that of secure remote computation, then the threat model should include attackers with physical access to the chip. The chip should therefore be secure against physical attacks, which begs the question of whether this is even possible in the classical setting (i.e. without using quantum physics). That being said, it does not mean that we cannot improve on current TEEs. This section aims to explore what we could feasibly do today to build a chip that attempts to align itself with the motto of "Don't Trust, Verify", omnipresent in the web3 and cypherpunk cultures.
In the context of a secure chip, the motto "Don't Trust, Verify" calls for at least four fundamental pillars, which address the challenges presented at the beginning of this document:
:::success
- Proof of manufacturing according to a known open source chip design specification
- Proof of non-leakage of secret bits to verify that the root of trust wasn't leaked during manufacturing
- Proof of hidden-forever secret bits -- the root of trust must be proven to be unbreakable
- Decentralized Remote Attestation -- 😁 a device should be able to provide a proof that it is what it claims to be, without relying on any external validation, such as a chip manufacturer, or even a k-of-N validator set. In other words, using the word "autonomous" as in the Autonomous TEE Manifesto, a device should be fully autonomous when it comes to proving what it is, and the state it is in. :::
Having an open source hardware design is perhaps the most reasonable place to start. Verifying that a physical chip does implement the intended open source hardware design is perhaps more difficult, and we can try to tackle this in a second step. Hence, we'll first start by exploring how we could have a TEE chip with an open source hardware design.
:::info The Secure Cryptographic Implementation Association has already established a very good foundation for what is needed, and has already produced a hardware implementation of AES with a strong side-channel countermeasure, which is currently under public evaluation. See the outline of their vision at https://www.simple-crypto.org/about/vision/ and a detailed description of how they operate at https://www.simple-crypto.org/about/organization/. ::: Yes, it's possible, and this is not a new idea. See the Wikipedia entry on Open Source Hardware.
The story behind CERN Open Hardware License is noteworthy:
"For us, the drive towards open hardware was largely motivated by well-intentioned envy of our colleagues who develop Linux device-drivers," said Javier Serrano, an engineer at CERN's Beams Department and the founder of the OHR. "They are part of a very large community of designers who share their knowledge and time in order to come up with the best possible operating system. We felt that there was no intrinsic reason why hardware development should be any different."
Open source electronic design automation (EDA) software such as https://theopenroadproject.org/ can be used to design chips, which can then be sent for tapeout at foundries, such as SkyWater, that support open sourcing the design.
It may be useful to survey current and past efforts such as:
- Build Custom Silicon with Google
- efabless
- SkyWater
- Google funds open source silicon manufacturing shuttles for GlobalFoundries PDK
Tiny Tapeout has a lot of educational material that may be worth reading for those who don't have a background in hardware.
Also worth having a look at is the course Zero to ASIC Course.
How do we know whether a given chip corresponds to a given design? At least two possible approaches:
- (Pre-fab) Logic Encryption - encrypts the design
- (Post-fab) Microscope imaging of the chip to compare it against its design
Logic encryption locks the chip design to protect against a malicious foundry. The company HENSOLDT Cyber has published numerous research works on the topic, in addition to actually making chips, and is therefore probably worth studying. Their papers are listed at https://hensoldt-cyber.com/scientific-papers/; here are a few:
- Scaling Logic Locking Schemes to Multi-Module Hardware Designs
- Inter-Lock: Logic Encryption for Processor Cores Beyond Module Boundaries
- A Critical Evaluation of the Paradigm Shift in the Design of Logic Encryption Algorithms
- A Unifying Logic Encryption Security Metric
- The Key is Left under the Mat: On the Inappropriate Security Assumption of Logic Locking Schemes
See Red Team vs. Blue Team: A Real-World Hardware Trojan Detection Case Study Across Four Modern CMOS Technology Generations by Puschner et al. in which SEM imaging was used to detect hardware trojan insertions in chips.
Some imaging techniques (invasive) destroy the chip in the process, while others (non-invasive) do not. Invasive analysis would need to be combined with a cut-and-choose protocol, as proposed by Miller in #2 (comment).
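As a back-of-the-envelope sketch of why cut-and-choose helps (illustrative numbers only, and assuming destructive inspection reliably detects tampering): if t of the n chips in a batch are trojaned and k chips are picked uniformly at random for inspection, the probability that every trojaned chip escapes inspection follows the hypergeometric distribution.

```python
from math import comb

def evasion_probability(n: int, t: int, k: int) -> float:
    """Probability that none of the t tampered chips in a batch of n
    lands among the k chips chosen for destructive inspection."""
    if t > n - k:
        return 0.0  # more bad chips than uninspected slots: always caught
    return comb(n - t, k) / comb(n, k)

# Example: a 1000-chip batch with 10 trojaned chips; sacrifice 200 at random.
p = evasion_probability(1000, 10, 200)  # roughly 0.106
```

Even destroying 20% of the batch leaves roughly a 1-in-10 chance that all ten trojaned chips evade inspection here; pushing that probability down at acceptable cost is exactly what the protocol design has to work out.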
It's important to point out that there seem to be newer techniques that are non-invasive, based on X-ray ptychography or Photonic Emission Analysis/Microscopy (PEM).
Puschner et al mention:
"New non-invasive scanning methods based on X-Rays [17] seem more promising for the future than the lengthy process of delayering and imaging the chip. These non-invasive techniques are potentially able to scan all metal layers and provide a 3D-image of the entire routing without destroying the device, but the research on this subject is still at an early stage."
Three-dimensional imaging of integrated circuits with macro- to nanoscale zoom
More generally speaking, learning what OpenTitan does for what they call "Quality standards for open hardware IP" may be useful.
:::info See the brief discussion of "Manufacturer Resistance" in Gassend's Master thesis (add link) :::
DAMO: Decentralized Autonomous Manufacturing Organization
How can we be certain that the manufacturing process did not leak the secret keys (root of trust)? Could the supply chain somehow produce a proof of non-leakage of secret keys?
Could we somehow bootstrap a fully automated foundry, where the manufacturing process is fully programmed and verifiable, and chips are built atom by atom?
Or, could we build chips at home?
Note
PUFs, covered in the next section, may solve the problem of ensuring that the key does not leak at manufacturing time, since the key is not injected, but rather internally created when the chip is powered up, and never stored in non-volatile memory (NVM).
However, we nevertheless need to make sure that the expected chip, with the expected PUF, has been manufactured. This may be achieved by imaging the chip and comparing it against its expected design. (TODO: Add link/ref)
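The "regenerated, not stored" property usually relies on a fuzzy extractor: at enrollment, public helper data is derived from the noisy power-up bits; at every later boot, that helper data lets the chip correct the noise and re-derive exactly the same key. Here is a toy sketch using a repetition code (illustrative only: repetition codes leak too much entropy for real use, where BCH or polar codes plus a formal entropy analysis are the norm):

```python
import hashlib
import secrets

REP = 5  # each key bit is spread over 5 noisy PUF bits

def enroll(puf_bits: list[int], key_len: int = 16) -> tuple[bytes, list[int]]:
    """Enrollment: choose a random key, publish helper = codeword XOR puf."""
    key_bits = [secrets.randbelow(2) for _ in range(key_len)]
    codeword = [b for b in key_bits for _ in range(REP)]   # repetition encoding
    helper = [c ^ p for c, p in zip(codeword, puf_bits)]   # public helper data
    return hashlib.sha256(bytes(key_bits)).digest(), helper

def regenerate(noisy_puf_bits: list[int], helper: list[int]) -> bytes:
    """Reconstruction: XOR the helper back in, majority-decode each group."""
    codeword = [h ^ p for h, p in zip(helper, noisy_puf_bits)]
    key_bits = [int(sum(codeword[i * REP:(i + 1) * REP]) > REP // 2)
                for i in range(len(codeword) // REP)]
    return hashlib.sha256(bytes(key_bits)).digest()

# Simulated SRAM power-up pattern, then a later boot with a few bit flips.
puf = [secrets.randbelow(2) for _ in range(16 * REP)]
key, helper = enroll(puf)
noisy = puf.copy()
noisy[0] ^= 1
noisy[7] ^= 1                             # up to 2 flips per 5-bit group is fine
assert regenerate(noisy, helper) == key   # same key, nothing stored in NVM
```

Note that only the helper data is ever stored; the key itself exists only transiently after power-up, which is precisely what makes the scheme attractive against NVM readout attacks.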
Can we learn something interesting from Zero Trust applied to chip manufacturing?[^5]
- Zero trust security model
- Intel: A Zero Trust Approach to Architecting Silicon
- Chip Industry Needs More Trust, Not Zero Trust
- Building a Zero Trust Security Model for Autonomous Systems (See "Zero Trust Applied to Chip Design" section)
- Zero Trust Security In Chip Manufacturing
Nanofactories, nanomanufacturing, atomically precise manufacturing, etc.
- Nanofactory
- Productive Nanosystems
- Nanosystems: Molecular Machinery, Manufacturing, and Computation by Eric Drexler
- An Introduction to Molecular Nanotechnology with Ralph Merkle
- Molecularly Precise Fabrication and Massively Parallel Assembly: The Two Keys to 21st Century Manufacturing by Robert A. Freitas Jr. and Ralph C. Merkle
- Engines of Creation 2.0, The Coming Era of Nanotechnology by Eric Drexler
:::info :construction: :construction_worker: :construction: This section needs some work. Maybe organize in 3 main sections:
- What's a PUF? History, types of PUFs, etc
- Security of PUFs (especially physical attacks)
- How PUFs fit into the context of a TEE & which PUF is best for TEEs
For the history of PUFs see Pappu's PhD thesis and Gassend's Master thesis. (add links)
See A Theoretical Framework for the Analysis of Physical Unclonable Function Interfaces and its Relation to the Random Oracle Model (eprint) by Marten van Dijk and Chenglu Jin
For a somewhat formal definition of an ideal PUF and its properties see section 2.1 in On the Physical Security of Physically Unclonable Functions by Shahin Tajik or A Formalization of the Security Features of Physical Functions by Armknecht et al. :::
Physically Unclonable Functions are arguably the current best hope to practically[^6] protect against physical attacks aimed at extracting secret keys (root of trust). That being said, PUFs are an active area of research in which new PUF designs are proposed and existing designs are broken. Hence, active research is vital to better understand the benefits and limitations of PUFs in the context of TEEs.
The first PUF was presented in the PhD thesis titled Physical One-Way Functions by Ravikanth Srinivasa Pappu, and in a follow-up article with the same name, Physical One-Way Functions, by Pappu, Recht, Taylor, and Gershenfeld.
One possible place to start learning about PUFs is Physically Unclonable Functions: A Study on the State of the Art and Future Research Directions by Roel Maes & Ingrid Verbauwhede.
The core idea behind a PUF is that entropy is obtained from a stimulated physical structure, which cannot be replicated physically nor mathematically. Current semiconductor manufacturing techniques are not precise enough to make chips atom by atom; consequently, chips made from the exact same design will differ at the atomic level, and will behave differently when powered up. As far as we know, there is currently no technique capable of characterizing a PUF in order to derive a mathematical model precise enough to simulate the PUF when it is powered up. Moreover, trying to probe a PUF to observe its response will change the entropy and cause the PUF to yield a different response than when unobserved.
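The typical way a strong PUF is used is a challenge-response protocol. The following sketch simulates it in software (which is, of course, exactly what a real PUF is supposed to make impossible: here a stored seed stands in for the unclonable physical structure, and real deployments must additionally worry about machine-learning modeling attacks and about never reusing challenges):

```python
import hashlib
import secrets

class SimulatedPUF:
    """Software stand-in for a physical PUF: a fixed random
    challenge-to-response mapping. Unlike real hardware, the
    'physics' here is just a stored secret seed."""

    def __init__(self) -> None:
        self._seed = secrets.token_bytes(32)

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._seed + challenge).digest()

# Enrollment: while the verifier still has trusted access to the device,
# it records a table of challenge-response pairs (CRPs).
device = SimulatedPUF()
crp_table = {}
for _ in range(100):
    c = secrets.token_bytes(16)
    crp_table[c] = device.respond(c)

# In the field: authenticate with a fresh challenge, then discard it
# so it can never be replayed.
challenge, expected = crp_table.popitem()
assert device.respond(challenge) == expected
assert challenge not in crp_table  # single use
```

A second, physically distinct device would produce different responses to the same challenges, which is what makes the mapping usable as a fingerprint.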
Multiple researchers consider Strong PUFs infeasible. There does not seem to be an impossibility result though, and consequently both industry and academics continue to develop, research, and attack strong PUFs. TODO: Add much more stuff here, along with multiple citations.
Main reference: https://pubs.aip.org/aip/apr/article/6/1/011303/571003/A-PUF-taxonomy
Image source: A PUF taxonomy by McGrath et al.
| Concept | Mechanism | Parameter | Implicity | Evaluation | Family |
| --- | --- | --- | --- | --- | --- |
| Arbiter PUF | All-electronic | Time | Implicit | Intrinsic | Racetrack |
| Ring oscillator PUF | | Frequency | | | |
| SRAM PUF | | Bistable state | | | Volatile memory |
| Power distro. PUF | | Voltage/current | | | Direct characterisation |
| TV PUF | | | | | |
| VIA PUF | | Binary connectivity | | | |
| Q EPUF | | Voltage/current | Explicit | Extrinsic | |
| Q OPUF | Hybrid (optical) | Intensity and Frequency | | | Optical |
Partial table source: A PUF taxonomy by McGrath et al. (blank cells repeat the value from the row above)
- A lightweight remote attestation using PUFs and hash-based signatures for low-end IoT devices
- SMART: Secure and Minimal Architecture for (Establishing a Dynamic) Root of Trust
- Feasibility and Infeasibility of Secure Computation with Malicious PUFs
- On the Security of PUF Protocols under Bad PUFs and PUFs-inside-PUFs Attacks
- Everlasting UC Commitments from Fully Malicious PUFs
- Self-assembled physical unclonable function labels based on plasmonic coupling
- https://pubs.aip.org/aip/sci/article/2019/29/290009/360043/Fingerprinting-silicon-chips-just-got-easier
- Spectral sensitivity near exceptional points as a resource for hardware encryption
:::spoiler
In this paper, an alternative authentication approach in which an MCU generates a secret key internally is introduced, exploiting manufacturing variability as a physical unclonable function (PUF). As the key is generated by the device itself, manufacturers save the expense of a secure environment for external key generation. In production, once chips are loaded with a firmware, it is only necessary to run an internal characterization and pass on the resulting public key, mask and helper data to be stored for authentication and recovery. Further external memory access is prevented, e.g., by blowing the JTAG security fuse. As the secret key is regenerated (with the same result each time) rather than stored in non-volatile memory, it is very hard to clone and the cost of a secure element can be saved.
The case for such IoT devices is strengthened further in combination with a distributed ledger, or blockchain. First of all, the immutability and distributed trust provided by a blockchain can make the device authentication independent of the manufacturer. Secondly, a business process implemented in chaincode that relies on IoT inputs can validate device signatures to ensure the authenticity and integrity of those inputs.
Replacing the central database operated by a manufacturer with a blockchain makes the system independent of the manufacturer. The chaincode will still allow only the manufacturer to create new machine entries on the distributed ledger but as the ledger content is distributed to all participants (multiple manufacturers, retailers, owners, etc.) the manufacturer is relieved of administering the system and guaranteeing its availability. A central database would go offline when the manufacturer goes out of business whereas a blockchain can survive.
Given the security disadvantages of symmetric authentication schemes (keeping a database of keys to authenticate with the risk of being hacked or lost, the risk of cloning, and barriers for third-party authentication, among others) our approach instead uses public-key cryptography based on learning parity with noise (LPN) problems, and in particular zero-knowledge (ZK) protocols to further simplify the management of device public keys. The blockchain may make the public keys generated by each device available for anyone to use in their own authentication system.
As for the second aspect, even a low-cost device can prevent manipulation of its communication with a blockchain by signing its messages with our PUF-derived keys, making the proposal suitable for any resources-limited device connected to the blockchain [9]. The chain code, in turn, can also validate the device signatures to ensure data integrity and authenticity, extending the trust the blockchain provides into the IoT device.
This paper proposes using an SRAM-based PUF to generate cryptographic keys that are employed in a zero-knowledge proof to authenticate an IoT device. We present an efficient implementation in an MCU and show that even low-cost devices can perform the required computational tasks sufficiently fast. Experimental results demonstrate that our approach is robust against temperature variations and that collisions of device identities are unlikely. :::
https://www.cryptoquantique.com/products/qdid/
- https://github.com/nils-wisiol/pypuf (cryptanalysis)
- https://asvin.io/physically-unclonable-function-setup/
- https://github.com/nils-wisiol/LP-PUF
- https://github.com/stnolting/fpga_puf
- https://www.crypto.ruhr-uni-bochum.de/imperia/md/crypto/kiltz/ulrich_paper_47.pdf
The pufpunks may be going in a different direction than Caliptra, as Caliptra seems to be geared towards datacenters, but it is probably a good idea to understand the Caliptra design.
- Caliptra: A Datacenter System on a Chip (SoC) Root of Trust (RoT)
- Caliptra Integration Specification
Addressing Quantum Computing Threats With SRAM PUFs
- Physical One-Way Functions by Pappu et al.
- On the Foundations of Physical Unclonable Functions by Rührmair et al.
- Security based on Physical Unclonability and Disorder by Rührmair et al.
- SIMPL Systems: On a Public Key Variant of Physical Unclonable Functions by Rührmair
- Towards Secret-Free Security by Rührmair
- Physically Unclonable Functions: A Study on the State of the Art and Future Research Directions by Roel Maes & Ingrid Verbauwhede
- Silicon Physical Random Functions by Gassend et al.
- PUF Taxonomy
- Physically Unclonable Functions - Constructions, Properties and Applications by Roel Maes
- On the Physical Security of Physically Unclonable Functions by Shahin Tajik
- A Formalization of the Security Features of Physical Functions by Armknecht et al.
- Physical Unclonable Functions for Device Authentication and Secret Key Generation
- Feasibility and Infeasibility of Secure Computation with Malicious PUFs
- Providing Root of Trust for ARM TrustZone using On-Chip SRAM
- Making sense of PUFs
- Tribler/tribler#3064
:::info :construction: TODO :construction:
Rework. Study the following works:
- Secure Remote Attestation with Strong Key Insulation Guarantees by Deniz Gurevin et al
- Autonomous Secure Remote Attestation even when all Used and to be Used Digital Keys Leak by Marten van Dijk et al
- A Theoretical Framework for the Analysis of Physical Unclonable Function Interfaces and its Relation to the Random Oracle Model (eprint) by Marten van Dijk and Chenglu Jin
:::
Not sure how this could be achieved. Conceptually speaking, a device should be able to prove what it claims to be, with respect to both its hardware and software, without relying on a trusted third party such as the manufacturer. RFC 9334 - Remote ATtestation procedureS (RATS) Architecture may be useful to review in the context of our threat model.
For instance, in the case of Intel SGX, the chip manufacturer plays a central role in the remote attestation process. It may be useful to go through all the steps that Intel performs (e.g. provisioning attestation keys, verifying quotes, etc.) and think through how these steps could be decentralized. Perhaps first defining the ideal functionality for remote attestation would be useful, or reviewing works that have already done so. Once we have the ideal functionality defined ...
🚧 TODO 🚧 See perhaps Cryptographically Assured Information Flow: Assured Remote Execution for inspiration.
:::success :construction: :construction_worker: :construction:
If somehow we have managed to manufacture a chip in a "decentralized" way, such that it can be verified, then perhaps the "decentralized" manufacturing process could log public metadata about the chip that would uniquely identify it. For instance, the metadata could be tied to a fingerprint generated via a PUF in the chip. The metadata would contain the proof of correct manufacturing with respect to the requirements discussed earlier, such as matching a (formally verified) open source hardware design, and not leaking secret bits.
Then remote attestation in this case would involve first requesting that the device provide its unique fingerprint, which could then be verified against the public metadata ... but how could we prevent devices from providing a fake fingerprint? Perhaps the public records of correctly manufactured devices should not be public after all. That is, a chip's fingerprint should not be publicly linkable to the metadata (proofs of correct manufacturing). Said differently, a verifier just needs to know that the chip it is interacting with has been manufactured correctly, and the verification process should not reveal information that could be used by a malicious chip to forge a fake identity.
We also need a proof that it loaded the expected software for execution ...
🚧 👷 🚧
:::
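To make the fingerprint-based attestation idea, and its forgery concern, concrete, here is a toy challenge-response sketch. An HMAC keyed with hidden randomness stands in for a real PUF, and all names (`ToyPUF`, `enroll`, `attest`) are illustrative, not part of any actual design:

```python
import hashlib
import hmac
import os

# Toy stand-in for a PUF: a fixed hidden secret per chip models the
# physical randomness; a real PUF derives responses from device physics.
class ToyPUF:
    def __init__(self):
        self._secret = os.urandom(32)  # models intrinsic manufacturing variation

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# "Decentralized manufacturing log": for each chip, record responses to a
# few enrollment challenges alongside the (hypothetical) proofs of correct
# manufacturing. Note that publishing raw challenge-response pairs makes
# them replayable, which is exactly the linkability/forgery concern above.
def enroll(puf: ToyPUF, challenges: list) -> dict:
    return {c: puf.respond(c) for c in challenges}

def attest(puf: ToyPUF, log: dict) -> bool:
    # Verifier picks an enrollment challenge (here simply the first one);
    # only a chip embodying the enrolled PUF can answer correctly.
    challenge, expected = next(iter(log.items()))
    return hmac.compare_digest(puf.respond(challenge), expected)

chip = ToyPUF()
log = enroll(chip, [os.urandom(16) for _ in range(4)])
assert attest(chip, log)          # genuine chip passes
assert not attest(ToyPUF(), log)  # a different chip cannot reproduce responses
```

The sketch shows the problem rather than solving it: a verifier who sees the public log can link the chip to its metadata, and a challenge that has been used once is replayable. Avoiding that likely requires something like a zero-knowledge membership proof over the set of correctly manufactured chips.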
- Cryptographically Assured Information Flow: Assured Remote Execution
- https://web.cs.wpi.edu/~guttman/pubs/good_attest.pdf
- https://arxiv.org/abs/2105.02466
- https://seclab.stanford.edu/pcl/cs259/projects/cs259_final_lavina_jayesh/CS259_report_lavina_jayesh.pdf
- https://github.com/ietf-rats-wg/architecture?tab=readme-ov-file
- https://link.springer.com/article/10.1007/s10207-011-0124-7
- https://arxiv.org/pdf/2308.11921
- https://arxiv.org/pdf/2306.14882
- https://arxiv.org/pdf/2204.06790
Since a TEE is ultimately a physical device, in which secret bits are embedded, it seems inevitable that sooner or later we'll have to confront the question of whether it's really physically possible to hide these secret bits. Current efforts and hopes appear to rest on economic incentives at best, meaning that the costs of breaking into the physical device are hoped to be too high for the gains that the attacker would get in return. But what if we could design and implement chips that are secure as long as the laws of physics are not broken? That is, chips for which breaking their security would mean breaking the laws of physics. This is not a new concept; it has been demonstrated, for instance, in Physical One-Way Functions by Ravikanth Pappu and Experimental relativistic zero-knowledge proofs by Alikhani et al.
A brief look into the work of Rolf Landauer, the head and heart of the physics of information.
- Information is Physical
- The physical nature of information
- Information is a Physical Entity
- Information is Inevitably Physical
- Landauer's principle on wikipedia
- Blogpost: Information is physical by Brian Hayes
- Computing study refutes famous claim that "information is physical"
- Information is non-physical: The rules connecting representation and meaning do not obey the laws of physics
- https://scottaaronson.blog/?p=3327
A brief look into the pioneering work, Conjugate Coding by Stephen Wiesner.
A brief look into the pioneering work, Quantum Cryptography, or Unforgeable Subway Tokens, by Charles H. Bennett, Gilles Brassard, Seth Breidbart and Stephen Wiesner.
A brief look into the pioneering work by Ravikanth Pappu on physical one-way functions in his PhD Thesis.
A survey of PUF-as-a-random-oracle based protocols that implement cryptographic protocols like key exchange, bit commitment, and multi-party computation.
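As a flavor of what such protocols look like, here is a toy sketch of PUF-based key exchange in the PUF-as-random-oracle model. An HMAC keyed with hidden randomness stands in for the physical device; this is a didactic sketch, not any specific protocol from the literature:

```python
import hashlib
import hmac
import os

# The PUF is modeled as a random oracle: without physical possession of the
# device, its responses are unpredictable. The hidden HMAC key below is a
# toy stand-in for the device's physical randomness.
class PUF:
    def __init__(self):
        self._phys = os.urandom(32)

    def eval(self, challenge: bytes) -> bytes:
        return hmac.new(self._phys, challenge, hashlib.sha256).digest()

puf = PUF()

# Alice, while physically holding the PUF, queries it at a random challenge.
challenge = os.urandom(16)
alice_key = puf.eval(challenge)

# ... the PUF is then physically shipped to Bob, and the challenge is sent
# over a public channel ...

# Bob, now holding the PUF, evaluates it at the same challenge.
bob_key = puf.eval(challenge)

assert alice_key == bob_key
# An eavesdropper sees only `challenge`; in the random-oracle model the
# response is uniformly random to anyone without the physical PUF.
```

The same possession-based asymmetry underlies the PUF-based bit commitment and multi-party computation constructions such a survey would cover.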
Black-Hole Radiation Decoding is Quantum Cryptography
More resources are listed in https://github.com/sbellem/qtee.
Putting this here for now, more as a note to look further into the problem of memory access pattern leakage and whether this can be addressed at the level of hardware (e.g. slide deck: Techniques for Practical ORAM and ORAM in Hardware, by Ren et al; and paper: A Low-Latency, Low-Area Hardware Oblivious RAM Controller, by Fletcher et al.)
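As a minimal illustration of what "hiding the access pattern" means, here is a toy linear-scan sketch: every read touches every slot, so the externally visible trace is independent of the index. Real designs such as Path ORAM achieve this with polylogarithmic rather than linear overhead; this is a didactic sketch, not drawn from the works above:

```python
# Linear-scan "ORAM": an observer of the memory trace learns nothing about
# which index was actually read, because all slots are touched every time.
class LinearScanORAM:
    def __init__(self, data):
        self._mem = list(data)
        self.trace = []  # addresses visible to a physical observer

    def read(self, index: int) -> int:
        result = 0
        for i in range(len(self._mem)):  # touch EVERY slot
            self.trace.append(i)
            value = self._mem[i]
            if i == index:               # a real implementation would also
                result = value           # avoid this data-dependent branch
        return result

oram = LinearScanORAM([10, 20, 30, 40])
a = oram.read(1)
b = oram.read(3)
assert (a, b) == (20, 40)
# Both reads produce identical traces: the access pattern leaks nothing.
assert oram.trace[:4] == oram.trace[4:]
```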
From https://github.com/keystone-enclave/keystone?tab=readme-ov-file#status:
Keystone started as an academic project that helps researchers to build and test their ideas. Now, Keystone is an Incubation Stage open-source project of the Confidential Computing Consortium (CCC) under the Linux Foundation.
Keystone has helped many researchers focus on their creative ideas instead of building TEE by themselves from scratch. This resulted in many innovative research projects and publications, which have been pushing the technical advancement of TEEs.
We are currently trying to make Keystone production-ready. You can find the latest general roadmap of Keystone here.
Keystone Enclave Architecture Overview
If we take Intel as an example, trusting the chip manufacturer means many things. Intel SGX's so-called root of trust rests on two secret keys (the seal secret and the provisioning secret) and an attestation key, as shown in the figure below, from Intel SGX Explained. Note that this may have changed since the writing of the Intel SGX Explained paper, but at the time at least, the two secrets were said to be stored in e-fuses inside the processor's die. Moreover, the two secret keys stored in e-fuses were encrypted with a global wrapping logic key (GWK). The GWK is a 128-bit AES key that is hard-coded in the processor's circuitry, and serves to increase the cost of extracting the keys from an SGX-enabled processor. The Provisioning Secret was said to be generated at the key generation facility, burned into the processor's e-fuses, and stored in Intel's Provisioning Service DB. The Seal Secret was said to be generated inside the processor chip, and claimed not to be known to Intel. Hence, trusting Intel meant trusting that they do not leak the attestation key and the provisioning key, to which they have access. It also meant trusting that the manufacturing process that generates and embeds the Seal Secret did not leak it, and that once a chip was made, Intel did not attempt to extract the Seal Secret, the only one of the three keys they did not know.
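To make this key hierarchy concrete, here is a toy sketch of the wrapped e-fuse storage. An XOR stream keyed via SHA-256 stands in for the real AES-128 GWK wrapping, and all values are illustrative:

```python
import hashlib
import os

# Toy model of the key storage described above; the point is the
# dependency structure, not the cipher.
GWK = os.urandom(16)  # hard-coded in the logic circuitry, shared by all
                      # dies made from the same mask

def wrap(label: bytes, key_material: bytes) -> bytes:
    # Stand-in for AES encryption under the GWK. The XOR stream cipher is
    # its own inverse, so wrap() also unwraps.
    stream = hashlib.sha256(GWK + label).digest()
    return bytes(a ^ b for a, b in zip(key_material, stream))

provisioning_secret = os.urandom(32)  # generated at the key generation
                                      # facility; a copy is kept by Intel
seal_secret = os.urandom(32)          # generated inside the chip; claimed
                                      # to be unknown to Intel

# What sits in the (large-feature, relatively easy to image) e-fuses is
# the wrapped form: imaging the fuses alone yields ciphertext, so an
# attacker must also recover the GWK from the far smaller logic circuitry.
efuses = {
    "provisioning": wrap(b"prov", provisioning_secret),
    "seal": wrap(b"seal", seal_secret),
}

assert wrap(b"seal", efuses["seal"]) == seal_secret  # unwrap = wrap (XOR)
```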
From Intel SGX Explained, section 3.3.1:
Intel SGX Explained, 5.7.2 Certificate-Based Enclave Identity, mentions:
The SGX implementation relies on a hard-coded MRSIGNER value to recognize certificates issued by Intel. Enclaves that have an Intel-issued certificate can receive additional privileges, which are discussed in § 5.8.
and further in section 5.8:
The cryptographic primitive used in SGX's attestation signature is too complex to be implemented in hardware, so the signing process is performed by a privileged Quoting Enclave, which is issued by Intel, and can access the SGX attestation key.
Observation/question: Whoever has access to Intel's signing key could potentially sign a different quoting enclave that would use the attestation key to sign "fake" quotes.
:::warning Not sure if that is still relevant with DCAP. :::
:::warning :construction: :construction_worker: :construction: Needs rework. Use more detailed references such as Breaking and Entering through the Silicon, Leakage Resilient Cryptography in Practice, and Provable Security for Physical Cryptography. Perhaps present the type of equipment required (SEM, PEM, X-ray, etc.), and the concepts behind the different types of physical attacks, to clearly show that they are indeed feasible. Application of Attack Potential to Hardware Devices with Security Boxes seems to be a very thorough and good example of what is needed to better understand the current physical security of chips. :::
🚧 👷 🚧
:::danger tl;dr: Chip attacks cannot be prevented, only made expensive to carry out, and "expensive" is very relative, depending on the application in which the chip is used. Furthermore, as far as I know, no chip attack has been reported along with its required cost. Hence, we can currently only speculate that an attack may be in the range of one million dollars, judging from the cost of focused ion beam (FIB) microscopes and guessing how much a team of experts would cost. In the context of crypto/web3, protocol designers should probably be extremely careful, given that many protocols move massive amounts of money; in the hundreds of millions, and more. ::: :::success It would be extremely useful to see actual chip attacks reported by research groups, as that would help to set a price on such attacks, and the price could be used by protocol designers. :::
By chip attacks here, we mean those described in Intel SGX Explained, section 3.4.3. The paper is from 2016, and at the time of writing the authors noted that Intel's CPUs had a feature size of 14nm. In the interest of being proactive about understanding current or future chips, we could perhaps assume a 3nm feature size. But it's not clear what this exactly means, because these numbers are apparently more of a marketing act, as per 3 nm process:
The term "3 nanometer" has no direct relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a "3 nm" node is expected to have a contacted gate pitch of 48 nanometers, and a tightest metal pitch of 24 nanometers.[12]
However, in real world commercial practice, "3 nm" is used primarily as a marketing term by individual microchip manufacturers (foundries) to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption.[13][14] There is no industry-wide agreement among different manufacturers about what numbers would define a "3 nm" node.[15]
In any case, it's quite clear that instrumentation to work at a smaller scale is needed.
Back to Intel SGX Explained, section 3.4.3, some key excerpts:
The most equipment-intensive physical attacks involve removing a chip’s packaging and directly interacting with its electrical circuits. These attacks generally take advantage of equipment and techniques that were originally developed to diagnose design and manufacturing defects in chips. [22] covers these techniques in depth.
The cost of chip attacks is dominated by the required equipment, although the reverse-engineering involved is also non-trivial. This cost grows very rapidly as the circuit components shrink. At the time of this writing, the latest Intel CPUs have a 14nm feature size, which requires ion beam microscopy.
The least expensive classes of chip attacks are destructive, and only require imaging the chip’s circuitry. These attacks rely on a microscope capable of capturing the necessary details in each layer, and equipment for mechanically removing each layer and exposing the layer below it to the microscope.
E-fuses and polyfuses are particularly vulnerable to imaging attacks, because of their relatively large sizes.
[...], once an attacker develops a process for accessing a module without destroying the chip's circuitry, the attacker can use the same process for both passive and active attacks.
At the architectural level, we cannot address physical attacks against the CPU’s chip package. [...]
Thankfully, physical attacks can be deterred by reducing the value that an attacker obtains by compromising an individual chip. As long as this value is below the cost of carrying out the physical attack, a system's designer can hope that the processor's chip package will not be targeted by the physical attacks.
Architects can reduce the value of compromising an individual system by avoiding shared secrets, such as global encryption keys. Chip designers can increase the cost of a physical attack by not storing a platform's secrets in hardware that is vulnerable to destructive attacks, such as e-fuses.
[22]: Friedrich Beck. Integrated Circuit Failure Analysis: a Guide to Preparation Techniques. John Wiley & Sons, 1998.
There's also a brief discussion of PUFs in a security analysis section of Intel SGX Explained, section 6.6.2 Physical Attacks:
The threat model stated by the SGX design excludes physical attacks targeting the CPU chip (§ 3.4.3). Fortunately, Intel’s patents disclose an array of countermeasures aimed at increasing the cost of chip attacks.
For example, the original SGX patents [110, 138] disclose that the Fused Seal Key and the Provisioning Key, which are stored in e-fuses (§ 5.8.2), are encrypted with a global wrapping logic key (GWK). The GWK is a 128-bit AES key that is hard-coded in the processor’s circuitry, and serves to increase the cost of extracting the keys from an SGX-enabled processor.
As explained in § 3.4.3, e-fuses have a large feature size, which makes them relatively easy to “read” using a high-resolution microscope. In comparison, the circuitry on the latest Intel processors has a significantly smaller feature size, and is more difficult to reverse engineer. Unfortunately, the GWK is shared among all the chip dies created from the same mask, so it has all the drawbacks of global secrets explained in § 3.4.3.
Newer Intel patents [67, 68] describe SGX-enabled processors that employ a Physical Unclonable Function (PUF), e.g., [175], [133], which generates a symmetric key that is used during the provisioning process.
Specifically, at an early provisioning stage, the PUF key is encrypted with the GWK and transmitted to the key generation server. At a later stage, the key generation server encrypts the key material that will be burned into the processor chip’s e-fuses with the PUF key, and transmits the encrypted material to the chip. The PUF key increases the cost of obtaining a chip’s fuse key material, as an attacker must compromise both provisioning stages in order to be able to decrypt the fuse key material.
As mentioned in previous sections, patents reveal design possibilities considered by the SGX engineers. However, due to the length of timelines involved in patent applications, patents necessarily describe earlier versions of the SGX implementation plans, which might not match the shipping implementation. We expect this might be the case with the PUF provisioning patents, as it makes little sense to include a PUF in a chip die and rely on e-fuses and a GWK to store SGX’s root keys. Deriving the root keys from the PUF would be more resilient to chip imaging attacks.
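A minimal sketch of the two-stage provisioning flow described in the patents. A toy XOR/hash construction stands in for the real AES operations, and all names and message labels are illustrative:

```python
import hashlib
import os

# Stand-in for AES: an XOR stream keyed by (key, label), self-inverse.
def xor_encrypt(key: bytes, label: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key + label).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

GWK = os.urandom(16)      # shared logic key, known to the key server
puf_key = os.urandom(32)  # symmetric key generated by the chip's PUF

# Stage 1: the chip sends its PUF key to the key generation server,
# encrypted under the GWK.
stage1_msg = xor_encrypt(GWK, b"stage1", puf_key)

# Stage 2: the server recovers the PUF key, then encrypts the fuse key
# material under it before sending it back to be burned into e-fuses.
fuse_key_material = os.urandom(32)
recovered_puf_key = xor_encrypt(GWK, b"stage1", stage1_msg)
stage2_msg = xor_encrypt(recovered_puf_key, b"stage2", fuse_key_material)

# Only a party holding the PUF key, i.e. the chip, or an attacker who
# compromised BOTH provisioning stages, can decrypt the fuse key material.
assert xor_encrypt(puf_key, b"stage2", stage2_msg) == fuse_key_material
```

The point of the construction is visible in the last line: eavesdropping on stage 2 alone yields ciphertext under the PUF key, and eavesdropping on stage 1 alone yields ciphertext under the GWK, so both stages must be compromised together.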
I don't know whether the latest Intel SGX chips make use of PUFs, and how accurate the above still is for the latest chips.
:::danger In any case, it's quite clear from the authors of Intel SGX Explained that physical attacks cannot be "physically" prevented but can only be "economically" deterred. A more recent paper, SoK: Hardware-supported TEEs by Moritz Schneider et al, also notes that in their survey of TEE designs, both in industry and academia, none can defend against chip attacks.
Invasive adversary: This adversary can launch invasive attacks such as de-layering the physical chip, manipulating clock signals and voltage rails to cause faults, etc., to extract secrets or force a different execution path than the intended one. For the sake of completeness, we include this adversary (A_inv) in our list but note that no TEE design currently defends against such an attacker. So, we do not discuss this attacker any further in this paper. :::
:::warning Hence, chips are secure through economic incentives, not through physics. If that is correct, using TEEs in a protocol calls for very careful mechanism design, where protocol designers take into account the cost of physically attacking the chip. For example, if we put a price tag of one million dollars on performing a chip attack, then a protocol using TEEs should make sure that less than one million dollars can be gained by performing such an attack. Moreover, it is very important to note that this way of thinking does not account for attackers who wish to attack a protocol for non-economic reasons, such as breaking the privacy and/or anonymity of its participants. For such protocols, current TEEs seem simply not to be a reliable technology, as any attacker with sufficient funds and motivation to break the privacy and/or anonymity of a protocol would be able to carry out the attack. :::
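As a back-of-the-envelope illustration of this mechanism-design constraint, a hypothetical check might look like the following. The dollar figures are the text's speculative examples, not measured attack costs, and the function name is illustrative:

```python
# Economic-security check: a TEE-based protocol is (heuristically) safe
# only if what an attacker gains by breaking a chip is below the cost of
# the physical attack.
def tee_economically_safe(value_per_chip: float,
                          chips_sharing_secret: int,
                          attack_cost: float) -> bool:
    # A global (shared) secret amplifies the payoff: breaking one chip
    # compromises every chip that shares the secret (cf. the GWK caveat
    # in Intel SGX Explained, section 3.4.3).
    attacker_gain = value_per_chip * chips_sharing_secret
    return attacker_gain < attack_cost

# Per-chip keys: spending $1M to attack one chip guarding $100k is irrational.
assert tee_economically_safe(100_000, 1, 1_000_000)
# A global key shared by 1000 such chips makes the same attack profitable.
assert not tee_economically_safe(100_000, 1000, 1_000_000)
```

This also makes the limitation explicit: the check says nothing about attackers with non-economic motives, for whom `attacker_gain` is not denominated in dollars at all.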
Now, with this background in mind, it seems that it would be extremely useful to see actual chip attacks reported by research groups, as it would help to set a price on such attacks, and that price could be used by protocol designers.
Thanks to Thorben Moos and François-Xavier Standaert from UCLouvain Crypto Group for providing valuable feedback and pointers.
You can make edits and pull requests for qtee.md which should be a mirror of this document. Alternatively you can also comment on or create new issues.
You should also be able to make comments on this document.
Footnotes
-
Chip attacks cannot be prevented as of today (see CHIP ATTACKS); raising the cost of a chip attack is the only currently known defense mechanism. Thus, TEEs are ultimately only secure through economics. ↩
-
Also, of relevance: https://github.com/sbellem/qtee/issues/1, https://github.com/sbellem/qtee/issues/7, https://github.com/sbellem/qtee/issues/8, CHIP ATTACKS, and PUFs. ↩
-
See for instance RFC 9334 (section 12) for security considerations when treating the topic of remote attestation. ↩
-
The reasoning is that design and implementation flaws can be fixed, and can happen whether the design is open source or not, whether the supply chain is correct, etc. Hence, design and implementation bugs can be treated separately. It could be argued that an open source hardware design may benefit from a broader community and over time will contain fewer bugs than a closed source design. ↩
-
The word "practically" is emphasized and intentionally used here because according to Physically Unclonable Functions: A Study on the State of the Art and Future Research Directions: "Again, the hardness of cloning can be considered from a theoretical and a practical point of view. Practically, cloning can be very hard or infeasible. Demonstrating theoretical unclonability on the other hand is very difficult. The only known systems which can be proven to be theoretically unclonable are based on quantum physics." ↩