Science Reuters

New reconfigurable, machine-learning-resistant graphene PUFs could be the ultimate hardware security tool

Hardware security concerns, such as intellectual property theft, privacy breaches, and hardware Trojans, have prompted scientists to research new means of protection. With ever more private information saved and shared online, the risk of an attack on that data is escalating, and recent years have seen many such examples. Although current technology uses sophisticated and complex computing components to create passwords or keys, artificial intelligence (AI) can be exploited to predict those keys, granting access to the data or increasing its vulnerability. Saptarshi Das and his team have used graphene to reinforce encrypted keys, making them harder to crack.

Many recent cases of privacy breaches involving online data inspired Das to develop a new hardware security tool that could ultimately be employed to shield data across all industries. The team designed their device around graphene, well known for its superlative qualities: exceptional strength, high conductivity, great flexibility, and a two-dimensional structure just one carbon atom thick. The result is a new low-power, scalable, reconfigurable hardware security device with substantial resilience to AI attacks.

The researchers combined graphene's exceptional physical and electrical properties with their fabrication process to build a graphene-based Physically Unclonable Function (PUF), creating an energy-efficient, scalable device that is secure against the AI attacks to which silicon PUFs were previously vulnerable. The key feature absent from silicon-based devices is the graphene transistor, which can switch currents on and off in a circuit. Moreover, unlike conventional silicon PUFs, each graphene PUF transistor's electrical conductivity varies because of variation in the production process, which makes every device unique.
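The idea that random production variation yields a unique key can be sketched in a few lines. The model below is purely illustrative, not the authors' device physics: each "transistor" is assigned a conductivity drawn at random around a nominal value, and a key bit is read from whether each cell sits above or below that nominal value.

```python
import hashlib
import random

def make_puf(seed, n_cells=128, sigma=0.2):
    """Model one fabricated PUF instance: each transistor's conductivity
    varies randomly around a nominal value of 1.0 due to process
    variation (hypothetical model, not the actual device physics)."""
    rng = random.Random(seed)
    return [rng.gauss(1.0, sigma) for _ in range(n_cells)]

def read_key(puf):
    """Derive one key bit per cell (1 if conductivity is above nominal),
    then hash the bit string into a fixed-length key."""
    bits = "".join("1" if g > 1.0 else "0" for g in puf)
    return hashlib.sha256(bits.encode()).hexdigest()

# Two chips from the same production line still yield different keys,
# while each chip reads out the same key every time.
chip_a, chip_b = make_puf(seed=1), make_puf(seed=2)
key_a, key_b = read_key(chip_a), read_key(chip_b)
```

In this toy model the seed stands in for the uncontrollable randomness of fabrication; on real hardware, no seed exists for an attacker to replay.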

After the transistors were characterized, nearly 64 million graphene-based PUF instances were simulated from their modeled characteristics. Machine-learning models were then trained on the simulated PUF data to search for patterns, testing whether AI could predict the encrypted keys and expose insecurities in the system. Das and his team found that the neural networks failed to build a predictive model: the encryption process could not be learned.
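The machine-learning attack described above can be illustrated with a minimal sketch, under the assumption that a strong PUF's responses behave like an unpredictable function of the challenge (here modeled with a hash, not the team's actual device data or their neural networks). A simple perceptron is trained on observed challenge-response pairs and then evaluated on unseen challenges; its accuracy hovers near the 50% chance level because there is no learnable structure.

```python
import hashlib
import random

def puf_response(challenge_bits):
    """Hypothetical stand-in for a strong PUF: the response is an
    unpredictable 0/1 function of the challenge."""
    digest = hashlib.sha256(bytes(challenge_bits)).digest()
    return digest[0] & 1

def train_perceptron(pairs, n_bits, epochs=20, lr=0.1):
    """Fit a simple perceptron to observed challenge-response pairs."""
    w, b = [0.0] * n_bits, 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

rng = random.Random(0)
n_bits = 32
challenges = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(600)]
pairs = [(c, puf_response(c)) for c in challenges]
train, test = pairs[:400], pairs[400:]

w, b = train_perceptron(train, n_bits)
hits = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in test
)
accuracy = hits / len(test)  # stays near 0.5: no better than guessing
```

A perceptron is of course far weaker than a neural network, but the point carries over: if the responses have no exploitable correlation with the challenges, no model can generalize beyond chance.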

These new PUFs are also secure against hacking because an attacker could not use breached data to reverse engineer a device for future exploitation.

Even if a key were somehow predicted, these graphene-based PUFs can create a new key through reconfiguration, without additional hardware or replacement parts. This overturns the long-held belief that once a system's security is compromised, it is compromised permanently: the hardware of the affected system can reconfigure itself and resume secure operation, with tamper resistance added as a further security feature. Tamper resistance and operability across a wide range of temperatures make these graphene-based PUFs applicable to everything from printable electronics to household devices and beyond.
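Reconfiguration can be pictured as deriving a fresh key from the same physical hardware under a new configuration. The sketch below is a hypothetical model of that idea (the `config` parameter and key-derivation scheme are assumptions for illustration, not the authors' mechanism): the device state is unchanged, yet each configuration yields an independent key.

```python
import hashlib
import random

def generate_key(device_state, config):
    """Derive a key from the device's physical state under a given
    configuration (hypothetical model: real reconfiguration happens
    in hardware, not by mixing in an integer)."""
    material = ",".join(f"{v:.6f}" for v in device_state) + f"|{config}"
    return hashlib.sha256(material.encode()).hexdigest()

# The physical device: fixed conductivities set at fabrication time.
rng = random.Random(42)
device_state = [rng.gauss(1.0, 0.2) for _ in range(64)]

key_v1 = generate_key(device_state, config=1)
# If key_v1 leaks, reconfigure: same hardware, fresh independent key.
key_v2 = generate_key(device_state, config=2)
```

The security benefit is that a leaked key reveals nothing about the keys produced under other configurations, so the device need not be replaced after a breach.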


Graphene, artificial intelligence (AI), hardware security, hacking, online privacy, data access, graphene-based Physically Unclonable Function (PUF), silicon-based PUF, network security, nanoscale devices, nanoelectronics, digital forensics, phase change materials.
