Suramya's Blog : Welcome to my crazy life…

May 14, 2022

Using algae sealed in an AA-battery-sized container to generate enough electricity to run a microprocessor for 6 months

Filed under: Computer Hardware,Emerging Tech,My Thoughts,Science Related — Suramya @ 11:59 PM

Powering computers and all our other devices requires batteries whenever they can't be connected to a power source/electrical socket. For the most part this means NiCd (nickel-cadmium) or lithium batteries. The problem is that these require rare earth metals that are hard to find and process, which makes them expensive, and mining those metals is potentially bad for the environment. The other problem is that they need frequent replacement and create a lot of waste. Because of this, a lot of effort is going into finding better ways of generating power.

Now, Christopher Howe and other researchers from the University of Cambridge have managed to create a power source that uses blue-green algae to generate enough electricity to run a processor performing calculations (to simulate load). Using a type of cyanobacteria called Synechocystis sp. PCC 6803, sealed in an aluminium-and-clear-plastic container about the size of an AA battery, they were able to generate the 0.3 microwatts of power needed to run the CPU for 45 minutes, followed by 15 minutes of standby, which required 0.24 microwatts.
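As a quick sanity check, here is a back-of-the-envelope calculation of the average power draw implied by that 45-minute active / 15-minute standby duty cycle. The microwatt figures are from the article; the arithmetic below is just illustrative:

# Rough energy-budget check for the algae-powered Cortex M0+ setup
# (power figures taken from the article; this is only an illustration).
ACTIVE_POWER_UW = 0.3      # microwatts while the CPU is computing
STANDBY_POWER_UW = 0.24    # microwatts while the CPU is in standby
ACTIVE_MIN, STANDBY_MIN = 45, 15   # duty cycle per hour

# Time-weighted average power over one duty cycle
avg_power_uw = (ACTIVE_POWER_UW * ACTIVE_MIN + STANDBY_POWER_UW * STANDBY_MIN) / (ACTIVE_MIN + STANDBY_MIN)

# Total energy delivered over roughly six months, in joules
hours = 6 * 30 * 24
energy_joules = avg_power_uw * 1e-6 * hours * 3600

print(f"Average draw   : {avg_power_uw:.3f} microwatts")
print(f"~6-month energy: {energy_joules:.2f} joules")
# -> roughly 0.285 microwatts on average, on the order of 4-5 joules total

In other words, the entire six-month experiment ran on only a few joules of energy, which is why ambient light on a windowsill was enough.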

The system ran without additional intervention for 6 months. During the test the computer was placed on a windowsill at one of the researchers' houses, and the ambient light was enough to power the processor. There are indications that this can be scaled up to generate more power for more resource-intensive applications, but even if that doesn't work out, the current setup could potentially be used to power IoT devices that don't require much power, such as sensors/monitors deployed in forests or cities.

Sustainable, affordable and decentralised sources of electrical energy are required to power the network of electronic devices known as the Internet of Things. Power consumption for a single Internet of Things device is modest, ranging from μW to mW, but the number of Internet of Things devices has already reached many billions and is expected to grow to one trillion by 2035, requiring a vast number of portable energy sources (e.g., a battery or an energy harvester). Batteries rely largely on expensive and unsustainable materials (e.g., rare earth elements) and their charge eventually runs out. Existing energy harvesters (e.g., solar, temperature, vibration) are longer lasting but may have adverse effects on the environment (e.g., hazardous materials are used in the production of photovoltaics). Here, we describe a bio-photovoltaic energy harvester system using photosynthetic microorganisms on an aluminium anode that can power an Arm Cortex M0+, a microprocessor widely used in Internet of Things applications. The proposed energy harvester has operated the Arm Cortex M0+ for over six months in a domestic environment under ambient light. It is comparable in size to an AA battery, and is built using common, durable, inexpensive and largely recyclable materials.

Their research has been published in the journal Energy & Environmental Science, and work is ongoing to build on it and explore commercial applications.

Source: A colony of blue-green algae can power a computer for six months

– Suramya

May 10, 2022

Using ancient techniques for adding secret images in bronze mirrors to hide images in Liquid Crystal displays

Filed under: Emerging Tech,Interesting Sites,Science Related — Suramya @ 1:28 AM

There are a lot of things accomplished by our ancestors that seem like they should be impossible, which is why the theory that aliens were involved in our past to give us a boost is so popular. People don't realize that just because something wasn't possible in the western world doesn't mean that others in the world couldn't do it. In this post I am going to talk about Chinese/Japanese magic mirrors, which were first created around 200 BC but whose workings modern science was only able to explain in 2005, when M V Berry published a paper describing the optics involved.

The Magic Mirror is a type of mirror that was popular in ancient China, especially during the Han dynasty (206 BC – 24 AD). What makes these mirrors special is that they were made out of solid bronze, with the front side polished brightly so that it could be used as a mirror, while the back had a design cast into the metal. When bright light was reflected off the mirror and shone onto a wall, the pattern on the back of the mirror would be projected onto the wall.


Example of how the Magic Mirror reflections look (Pic credit: Faena.com)

As you can imagine, this is extremely hard to do. Thanks to trade with the Chinese, craftsmen in Korea and Japan also learned to create these mirrors, which are known there as Makyō (magic mirrors). One difference between a Makyō and the Chinese mirrors is that a Makyō doesn't project the image cast on its back when light hits it, nor does it have any obvious irregularities on its reflecting surface, yet it still projects these fantastical images where nothing should be there. More details on how the mirrors were constructed and the history behind them are available here.

It took western scientists over 2,000 years to figure out the science behind these mirrors, kind of, as evident from the explanation below.

Although the surface of the mirrors is polished and seems completely flat, it has subtle convex and concave curves caused by the design. Convex curves (outwards) scatter light and darken their areas of reflection. For their part, concave curves focus light and illuminate their areas of reflection. Mirrors are made of forged bronze, and the thickest parts are cooled at a different speed than the thin ones. Since the metal contracts a little as it is cooled, the different rates of cooling “stress” or slightly deform the metal. The thin areas are also more flexible than the thick parts, so the polishing process, which should smoothen the metal until uniformity is achieved, exaggerates the slight differences in thickness. While we cannot see the pattern on the surface of the mirror, photons very clearly delineate it, so when they are able to bounce off the mirror’s curves, the pattern emerges.
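As I understand Berry's analysis, the brightness variation projected on the wall is roughly proportional to the projection distance times the Laplacian (local curvature) of the mirror's surface relief: concave dips focus light and brighten, convex bumps spread it and darken. A minimal numpy sketch of that idea, using an invented height map standing in for the residual relief left by casting and polishing (the prefactor and numbers are assumptions for illustration only):

# Toy "magic mirror" projection: brightness contrast ~ distance * Laplacian(height).
# The height map and all numbers below are made up purely for illustration.
import numpy as np

N, dx = 200, 2e-4          # 200x200 grid, 0.2 mm per pixel (assumed)
y, x = np.mgrid[0:N, 0:N]

# Pretend the back-side design left a faint ring-shaped ridge (~50 nm high)
# on the otherwise flat, polished front surface.
r = np.hypot(x - N / 2, y - N / 2)
height = 50e-9 * np.exp(-((r - 60) ** 2) / (2 * 8 ** 2))

# Discrete Laplacian of the surface height, in 1/metres.
lap = (np.roll(height, 1, 0) + np.roll(height, -1, 0) +
       np.roll(height, 1, 1) + np.roll(height, -1, 1) - 4 * height) / dx ** 2

d = 5.0                            # mirror-to-wall distance in metres (assumed)
intensity = 1.0 + 2.0 * d * lap    # concave (positive Laplacian) brightens, convex darkens

print(f"projected brightness ranges from {intensity.min():.2f} to {intensity.max():.2f}")

Even though the relief is only tens of nanometres, a metre-scale projection distance amplifies it into a brightness pattern of several percent or more, which is why the hidden design shows up on the wall but not in the mirror itself.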

Using the understanding gained from Berry's paper, Felix Hufnagel and his colleagues at the University of Ottawa in Canada created a modern version of the magic mirror using liquid crystals, a distinct state of matter (their molecules are both fluid and arranged in patterns). By applying an electric current to the liquid crystals they were able to tailor the orientation of the molecules, which allowed them to create an image that only shows up when a particular combination of current/amplitude is applied. The images created using this technique look clear even when viewed from different angles, which could be used to improve projectors for 3D images.

Their paper was published in Optica earlier this month and is an interesting (if confusing) read.

Interesting links:
Wikipedia: Chinese Magic Mirror
Secret images hidden in mirrors and windows using liquid crystals

– Suramya

May 9, 2022

Researchers have created the first one-way superconductor which could lower energy used by computers

Filed under: Computer Hardware,Emerging Tech,Science Related — Suramya @ 6:58 PM

Computers use massive amounts of energy worldwide, and with our increasing dependence on them the energy utilization is only going to go up. To give you an idea, the International Energy Agency estimates that 1% of all global electricity is used by data centers. There are multiple ongoing efforts to reduce this power consumption, and the recent advances by Mazhar Ali from Delft University of Technology in the Netherlands and his colleagues are a great step forward in this direction.

Mazhar and team have successfully demonstrated a working superconducting diode by sandwiching a 2D layer of a material called niobium-3 bromine-8 (Nb3Br8), which is thought to have a built-in electric field, between two 2D superconducting layers. When electrons travel through the structure in one direction they encounter no resistance, but in the other direction they do. This is unique because, until now, diodes could only be made with non-superconducting materials, since an ordinary superconductor offers no resistance in either direction and so cannot favour one direction over the other.

The superconducting analogue to the semiconducting diode, the Josephson diode, has long been sought with multiple avenues to realization being proposed by theorists. Showing magnetic-field-free, single-directional superconductivity with Josephson coupling, it would serve as the building block for next-generation superconducting circuit technology. Here we realized the Josephson diode by fabricating an inversion symmetry breaking van der Waals heterostructure of NbSe2/Nb3Br8/NbSe2. We demonstrate that even without a magnetic field, the junction can be superconducting with a positive current while being resistive with a negative current. The ΔIc behaviour (the difference between positive and negative critical currents) with magnetic field is symmetric and Josephson coupling is proved through the Fraunhofer pattern. Also, stable half-wave rectification of a square-wave excitation was achieved with a very low switching current density, high rectification ratio and high robustness. This non-reciprocal behaviour strongly violates the known Josephson relations and opens the door to discover new mechanisms and physical phenomena through integration of quantum materials with Josephson junctions, and provides new avenues for superconducting quantum devices.
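To make the "one-way superconductor" idea concrete, here is a toy model (my own simplification, not the paper's physics) of a junction whose critical current differs between the two directions: drive currents below the critical value pass with zero voltage drop, anything beyond it behaves resistively, so a symmetric AC drive gets half-wave rectified. All numbers are made up for illustration:

# Toy model of a non-reciprocal (Josephson-diode-like) junction: zero resistance
# below a direction-dependent critical current, ohmic above it. Illustration only.
import numpy as np

IC_POS = 200e-6   # critical current in the positive direction (A), assumed
IC_NEG = 50e-6    # critical current in the negative direction (A), assumed
R_NORMAL = 1.0    # resistance once the junction goes "normal" (ohms), assumed

def junction_voltage(i):
    """Voltage across the toy junction for a given drive current."""
    limit = IC_POS if i >= 0 else IC_NEG
    if abs(i) <= limit:
        return 0.0                      # superconducting: no voltage drop
    # beyond the critical current, the excess current sees a normal resistance
    return R_NORMAL * (abs(i) - limit) * (1 if i > 0 else -1)

# Drive with a symmetric square wave of +/-100 uA: forward half-cycles stay
# superconducting (V = 0), reverse half-cycles become resistive -> rectification.
drive = 100e-6 * np.sign(np.sin(np.linspace(0, 4 * np.pi, 200)))
voltage = np.array([junction_voltage(i) for i in drive])
print("forward half-cycle voltage:", voltage[drive > 0].max(), "V")
print("reverse half-cycle voltage:", voltage[drive < 0].min(), "V")

The asymmetry between IC_POS and IC_NEG is the whole trick; in the real device it comes from the built-in field of the Nb3Br8 layer rather than from anything you could dial in like this.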

The next step is to create a superconducting transistor, but there are multiple challenges that need to be overcome before this can be commercially released. The first problem is that the diode only works when its temperature is around 2 kelvin (-271°C), and keeping it that cold uses more energy than the diode saves. So the team is looking at alternative materials that would work at 77 kelvin (the temperature at which nitrogen liquefies), where the cooling cost would be low enough for the diode to deliver a net energy saving.

Another issue to be sorted out is that the current process of making the diode is manual and would need to be automated for large-scale production. But that is a future problem, as they first need to find a combination of materials that works at a reasonable energy cost.

Source: First one-way superconductor could slash energy used by computers
Paper: The field-free Josephson diode in a van der Waals heterostructure

– Suramya

May 2, 2022

MIT researchers create a portable desalination unit that can run off a single solar panel

Filed under: Emerging Tech,My Thoughts,Science Related — Suramya @ 2:33 AM

The lack of drinking water is a major problem across large portions of the world, and over 2 billion people live in water-stressed countries. According to the WHO, at least 2 billion people use a drinking water source contaminated with feces. On the other hand, places near the sea have to deal with salt water contaminating their drinking supply. If we can desalinate sea water cheaply and easily, it will be a great boon to the world.

There are existing technologies that convert sea water to drinking water, but they require a massive energy supply and large-scale plants that are very expensive to build. To address this, MIT researchers have been working on a portable desalination unit that generates clear, clean drinking water without the need for filters or high-pressure pumps. Because it uses neither, its energy requirement is low enough that it can run off a small, portable solar panel.

The research team of Jongyoon Han; Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM) built the device, and the initial prototype has worked as expected. Their research has been published online in Environmental Science & Technology.

Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.

The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.

But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.

Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage then through three in the second stage, followed by a single electrodialysis process. This minimized energy usage while ensuring the process remains self-cleaning.


Video demonstration of the process

The prototype device was tested at Boston's Carson Beach and was found to generate drinking water at a rate of 0.3 liters per hour, requiring only 20 watts of power per liter. As you can guess, this is pretty amazing. If the device can be mass-produced, it will help reduce the scarcity of drinking water in the world without requiring massive amounts of energy that would cause further climate impact.
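Reading the "20 watts of power per liter" figure as an energy cost of roughly 20 watt-hours per liter (the interpretation that makes the units work out, though that reading is my assumption), a quick calculation shows why a single small solar panel is enough:

# Back-of-the-envelope solar sizing for the desalination unit.
# Assumes the article's "20 watts per liter" means ~20 Wh of energy per liter;
# the 0.3 L/hour output rate is from the article, the panel size is assumed.
ENERGY_PER_LITER_WH = 20.0   # assumed interpretation of the article's figure
OUTPUT_RATE_LPH = 0.3        # liters produced per hour (from the article)
PANEL_WATTS = 50.0           # a small portable solar panel (assumed)
SUN_HOURS_PER_DAY = 5.0      # rough usable full-sun hours per day (assumed)

power_draw_w = ENERGY_PER_LITER_WH * OUTPUT_RATE_LPH        # ~6 W continuous draw
daily_energy_wh = PANEL_WATTS * SUN_HOURS_PER_DAY           # what the panel provides
daily_output_l = daily_energy_wh / ENERGY_PER_LITER_WH      # liters per sunny day

print(f"Continuous power draw: {power_draw_w:.1f} W")
print(f"Litres per sunny day : {daily_output_l:.1f} L")

Under those assumptions the unit draws only about 6 W, comfortably within what a cheap portable panel can supply, and a day of sun yields on the order of ten litres of drinking water.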

One downside of this kind of machine is that it produces highly saline brine as a byproduct, since the salt removed from the purified water ends up concentrated in the waste stream. Releasing this water into the ocean can have a huge impact on sea life, as the local water suddenly becomes too saline for it. If the water is allowed to seep into the land, it will reduce the fertility of the soil because of the added salt. So in addition to commercializing the device, we also need to research what to do with the waste water it generates so that the adverse impact of the product can be offset.

Source: MIT News: From seawater to drinking water, with the push of a button

– Suramya

April 27, 2022

MIT’s Ultra-thin speakers can be used to make any surface into a low-power, high-quality audio source

Filed under: Computer Hardware,Emerging Tech — Suramya @ 9:51 PM

Noise cancellation is one of those things that you initially think you don't need, but once you start using it, it becomes indispensable. I got my first set of noise-canceling headsets back in 2002-2003 when I had a coworker who was extremely loud and would insist on sharing their thoughts in a very loud voice. The cherry on top was that a lot of what they said was wrong, and it would grab my attention: I would be peacefully working, then I would overhear something and think, did they really just say that? In short, it was very distracting. So I got a noise-canceling headset and was able to ignore them. Since then I have made sure I always have my noise-canceling headsets handy, both at work and while traveling.

But you can't install noise cancellation everywhere (at least not cheaply). I have been fortunate that most of the places I have stayed in didn't have the problem of loud neighbors, but others are not as lucky; loud neighbors are one of the major annoyances of urban life. Which is why I love this new invention from the folks over at MIT that lets you turn an entire wall into a noise-cancelling surface by putting up ultra-thin speakers as wallpaper in your room. These speakers are very thin and use very little power (about 100 milliwatts per square meter).
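Active noise cancellation, whether in a headset or on a speaker-covered wall, boils down to emitting a copy of the unwanted sound with its phase inverted so the two cancel out. A minimal illustration of that idea (synthetic tone and numbers are made up; real systems have to do this in real time with the right amplitude and delay at the listener's ears):

# Minimal active-noise-cancellation illustration: an inverted copy of the
# unwanted signal sums with it to (nearly) silence. Purely illustrative.
import numpy as np

fs = 16_000                                   # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of audio
noise = 0.8 * np.sin(2 * np.pi * 220 * t)     # the neighbour's 220 Hz drone

anti_noise = -noise                           # what the wall speakers would emit
residual = noise + anti_noise                 # what the listener hears

print("noise RMS   :", np.sqrt(np.mean(noise ** 2)))
print("residual RMS:", np.sqrt(np.mean(residual ** 2)))   # ~0 with perfect inversion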

their design relies on tiny domes on a thin layer of piezoelectric material which each vibrate individually. These domes, each only a few hair-widths across, are surrounded by spacer layers on the top and bottom of the film that protect them from the mounting surface while still enabling them to vibrate freely. The same spacer layers protect the domes from abrasion and impact during day-to-day handling, enhancing the loudspeaker’s durability.

To build the loudspeaker, the researchers used a laser to cut tiny holes into a thin sheet of PET, which is a type of lightweight plastic. They laminated the underside of that perforated PET layer with a very thin film (as thin as 8 microns) of piezoelectric material, called PVDF. Then they applied vacuum above the bonded sheets and a heat source, at 80 degrees Celsius, underneath them.

Because the PVDF layer is so thin, the pressure difference created by the vacuum and heat source caused it to bulge. The PVDF can’t force its way through the PET layer, so tiny domes protrude in areas where they aren’t blocked by PET. These protrusions self-align with the holes in the PET layer. The researchers then laminate the other side of the PVDF with another PET layer to act as a spacer between the domes and the bonding surface.

The applications of this technology are endless. It can be used to soundproof apartments, planes, cars, etc., or to create immersive 3D audio experiences cheaply without having to install gigantic speakers. It could also be used in phones and other devices to play sound and music. Since the speakers are paper-thin, they can be applied like wallpaper in a room and removed when moving out, which would let renters install them in their apartments.

The work is still in its early stages but it looks very promising.

Source: Gizmodo: Cover Your Wall in MIT’s New Paper Thin Speakers to Turn Your Bedroom Into a Noise Cancelling Oasis

– Suramya

April 25, 2022

Rainbow Algorithm (one of the candidates for post-quantum Cryptography) can be broken in under 53 hours

Filed under: Computer Security,Emerging Tech,My Thoughts,Quantum Computing — Suramya @ 11:59 PM

Quantum computing has the potential to make the current encryption algorithms obsolete once it is actually implemented on a large scale. To offset this risk, the cryptographic experts in charge of such things have been working on post-quantum cryptography over the past few years. After multiple rounds of evaluation they had narrowed the public-key encryption and key-establishment algorithms down to Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER, and the finalists for digital signatures were CRYSTALS-DILITHIUM, FALCON, and Rainbow.

Unfortunately for the Rainbow algorithm, Ward Beullens at IBM Research Zurich in Switzerland managed to find the secret key corresponding to a given Rainbow public key in 53 hours using a standard laptop. This would allow anyone with a laptop to 'prove' they were someone else by recovering the secret key behind that person's public key.

The Rainbow signature scheme [8], proposed by Ding and Schmidt in 2005, is one of the oldest and most studied signature schemes in multivariate cryptography. Rainbow is based on the (unbalanced) Oil and Vinegar signature scheme [16, 11], which, for properly chosen parameters, has withstood all cryptanalysis since 1999. In the last decade, there has been a renewed interest in multivariate cryptography, because it is believed to resist attacks from quantum adversaries. The goal of this paper is to improve the cryptanalysis of Rainbow, which is an important objective because Rainbow is currently one of three finalist signature schemes in the NIST Post-Quantum Cryptography standardization project.

This obviously disqualifies the algorithm from being standardised, as it has a known, easily exploitable weakness. It also goes to show that cryptography is not easy, and the only way to 'prove' the strength of an algorithm is to let others test it for vulnerabilities. Or, as Bruce Schneier put it in Schneier's Law: 'Anyone can create an algorithm that they themselves can't break.' You need others to validate that claim.

Paper: Breaking Rainbow Takes a Weekend on a Laptop by Ward Beullens (PDF)
Source: New Scientist: Encryption meant to protect against quantum hackers is easily cracked

– Suramya

April 24, 2022

Smart-contract bug locks away $34 million, highlighting a major weakness in smart contracts

Filed under: Computer Software,Emerging Tech — Suramya @ 9:57 PM

Over the years I have had many conversations with people about blockchain and how it is supposed to solve all our problems, but for the most part I think blockchain is overrated and doesn't solve any problem that can't be solved more easily using fewer resources. Then, as if blockchains were not enough, someone went and created smart contracts, which are basically "programs stored on a blockchain that run when predetermined conditions are met. They typically are used to automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary's involvement or time loss. They can also automate a workflow, triggering the next action when conditions are met." (IBM's definition of smart contracts)

The major issue with a blockchain contract is that it is immutable, so if there is a bug in the program no one can modify it to fix the issue. When warned about this potential problem, the proponents of smart contracts pretty much handwaved the concerns away, saying the issue was not that big a deal and that people were just opposing them because they dislike smart contracts, are sticks in the mud, etc.

Unfortunately, this is no longer a theoretical issue, as the developers of the AkuDreams contract found out over the weekend. Due to a bug in the contract code, $34 million (11,539 ETH) is permanently locked in the AkuDreams contract forever. It cannot be retrieved by individual users or by the dev team.

This shows how dangerous it is to have a program that can't be modified, because no matter what we do, we can't ensure that the code we write will be 100% bug-free in all cases. When there is a bug in regular software, we can push out a patch to fix it, but that is not an option for smart contracts, and as you can see, that becomes an expensive problem.

Source: $34M permanently locked into AkuDreams contract forever due to bad code

– Suramya

April 23, 2022

Molecular engines made of protein could power molecular machines

Filed under: Emerging Tech,My Thoughts — Suramya @ 11:48 PM

Nano-machines have long been a staple of sci-fi stories, where nanotech is used to cure illnesses, make new materials, kill people, etc., and in recent years a lot of effort has been put into making these machines real. Simply put, a nanomachine, also called a nanite, is a mechanical or electromechanical device whose dimensions are measured in nanometers (millionths of a millimeter, or units of 10^-9 meter) [What is a Nanite]. They are still in the R&D phase, but a lot of progress is being made in the field.

Researchers at the University of Washington in Seattle have created the first building blocks of a molecular engine, namely the axles and rotors. The really cool part is that these are self-assembling and use custom-designed new proteins unlike any found in nature. The researchers used advances in deep learning software to predict what shape the protein encoded by a given DNA sequence will fold into, making it easier to find a code that produces the desired shape. This allows them to create custom shapes without having to modify existing molecules, which can be quite hard.

The team made the machine parts by putting DNA coding for the custom proteins into E. coli bacteria, and then checked their structure using a method called cryogenic electron microscopy.

This showed that the axles assembled correctly inside the rotors, and also revealed the different configurations that would be expected if the axles were turning. But because cryogenic electron microscopy can only provide a series of stills rather than a moving picture, the team can’t say for sure if the axles are rotating.

If they are, it would only be a random back-and-forth movement driven by molecules knocking into each other, a phenomenon called Brownian motion. The team is now designing more components to drive the motion in one direction and create a rotary engine, says Baker.

The work is still in the preliminary stage, and the team is designing more components to drive the motion in one direction and create a rotary engine, which would also confirm that the movement seen in the current trials is not just due to Brownian motion. Once the technology is perfected, it has a lot of use cases in fields such as biomedicine, to remove tumors, clear arterial blockages and repair injuries, and in materials design, where these machines could be used to create new materials that are stronger and lighter.

Source: Tiny axles and rotors made of protein could drive molecular machines
Paper: Computational design of mechanically coupled axle-rotor protein assemblies

– Suramya

April 22, 2022

Implications and Impact of Quantum Computing on Existing Cryptography

As all of you are aware, the ability to break the encryption of sensitive data such as financial systems, private correspondence and government systems in a timely fashion is the holy grail of computer espionage. With current technology it is unfeasible to break the encryption in a reasonable timeframe. If the target is using a 256-bit key, an attacker will need to try a maximum of 2^256 possible combinations to brute-force it. This means that even the fastest supercomputer in the world would take millions of years to try all the combinations (Nohe, 2019). The number of combinations required to crack the encryption key increases exponentially with key size, so a 2048-bit key has 2^2048 possible combinations and will take a correspondingly longer time to crack. However, with the recent advances in quantum computing, the dream of breaking encryption in a timely manner is close to becoming reality in the near future.
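To put a number on "millions of years", here is a rough brute-force estimate, assuming (hypothetically) a machine that could test 10^18 keys per second; the attack-rate figure is an assumption, not from the cited source:

# Rough brute-force timing estimate for symmetric keys.
# The guesses-per-second figure is a hypothetical, generous assumption.
GUESSES_PER_SECOND = 1e18          # assumed: 10^18 key trials per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for bits in (128, 256):
    keyspace = 2 ** bits
    # On average an attacker finds the key after searching half the space.
    years = (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.3e} years on average")

# Even 2^128 / 2 guesses at 10^18 per second is about 5 * 10^12 years,
# far longer than the age of the universe; 2^256 is astronomically worse.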

Introduction to Quantum Computing

So, what is quantum computing and what makes it so special? Quantum computing is an emerging technology field that leverages quantum phenomena to perform computations. It has a great advantage over conventional computing due to the way it stores data and performs computations. In a traditional system, information is stored in the form of bits, each of which can be either 0 or 1 at any given time; this makes the 'bit' the fundamental unit of information in traditional computing. A quantum computer, on the other hand, uses a 'qubit' as its fundamental unit, and unlike a normal bit, a qubit can exist simultaneously as 0 and 1, a phenomenon called superposition (Freiberger, 2017). This allows a quantum computer to act on all possible states of its qubits simultaneously, enabling it to perform massive operations in parallel using only a single processing unit. In fact, a theoretical projection has postulated that a quantum computer could break 2048-bit RSA encryption in approximately 8 hours (Garisto, 2020).

In 1994, Peter W. Shor of AT&T worked out how to take advantage of entanglement and superposition to find the prime factors of an integer (Shor, 1994). He found that a quantum computer could, in principle, accomplish this task much faster than the best classical calculator ever could. He then wrote an algorithm, now known as Shor's algorithm, that could be used to crack RSA encryption, which prompted computer scientists to begin learning about quantum computing.
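The quantum speed-up in Shor's algorithm comes from finding the period r of f(x) = a^x mod N quickly; once r is known, the factors of N usually fall out of a simple gcd computation. The sketch below is a purely classical illustration of that number-theoretic core for a tiny modulus (the brute-force period search is exactly the step a quantum computer would replace):

# Classical illustration of the number-theoretic core of Shor's algorithm.
# The period search below is brute force; the quantum part of Shor's algorithm
# is what would find the period efficiently for large N.
from math import gcd
from random import randrange

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_classical(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                      # lucky guess: a already shares a factor with N
            return d, N // d
        r = find_period(a, N)
        if r % 2 == 1:                 # need an even period, try another a
            continue
        x = pow(a, r // 2, N)
        if x == N - 1:                 # trivial square root, try another a
            continue
        p = gcd(x - 1, N)
        if 1 < p < N:
            return p, N // p

print(shor_classical(15))   # e.g. (3, 5)
print(shor_classical(21))   # e.g. (3, 7)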

Introduction to Current Cryptography

The security of current cryptography relies on certain "hard" problems: calculations which are practically impossible to solve without the correct cryptographic key. Just as it is easy to break a glass jar but difficult to stick it back together, there are certain calculations that are easy to perform but difficult to reverse. For example, we can easily multiply two numbers to get the result; however, it is very hard to start with the result and work out which two numbers were multiplied to produce it. This becomes even harder as the numbers get larger, and it forms the basis of algorithms like RSA (Rivest et al., 1978), which would take the best computers available billions of years to break. All current IT security is built on top of this basic foundation.

There are multiple ways of classifying cryptographic algorithms but in this paper, they will be classified based on the keys required for encryption and decryption. The main types of cryptographic algorithms are symmetric cryptography and asymmetric cryptography.

Symmetric Cryptography

Symmetric cryptography is a type of encryption that uses the same key for both encryption and decryption. This requires the sender and receiver to exchange the encryption key securely before encrypted data can be exchanged. This type of encryption is one of the oldest in the world and was used by Julius Caesar to protect his communications in Roman times (Singh, 2000). Caesar's cipher, as it is known, is a basic substitution cipher where a number is used to offset each letter in the message. For example, if the secret key is '4', then each letter is replaced with the letter four positions further along in the alphabet, i.e. A is replaced with E, B with F, and so on. Once the sender and receiver agree on the encryption key to be used, they can start communicating: the receiver takes each character of the ciphertext and goes back 4 letters to recover the plain-text message. This is a very simple example, but modern symmetric cryptography is built on the same principle.
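A minimal sketch of the Caesar cipher described above (illustrative only, and obviously not secure; the key of 4 matches the example in the text):

# Caesar cipher: shift each letter forward by the key to encrypt,
# backward by the key to decrypt. Toy example only; not secure.
import string

ALPHABET = string.ascii_uppercase

def caesar(text, key, decrypt=False):
    shift = -key if decrypt else key
    result = []
    for ch in text.upper():
        if ch in ALPHABET:
            result.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            result.append(ch)          # leave spaces/punctuation untouched
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", key=4)
print(ciphertext)                                  # -> EXXEGO EX HEAR
print(caesar(ciphertext, key=4, decrypt=True))     # -> ATTACK AT DAWN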

Another example is from World War II, during which the Germans encrypted their transmissions using the Enigma device to prevent the Allies from decrypting their messages as they had in the First World War (Rijmenants, 2004). Each day, both the receiver and sender would configure the machine's gears and settings to a new value defined by secret keys distributed in advance. This allowed them to transmit information in an encrypted format that was almost impossible for the Allied forces to decrypt. Examples of symmetric encryption algorithms include the Advanced Encryption Standard (AES), the Data Encryption Standard (DES), and the International Data Encryption Algorithm (IDEA).

Symmetric encryption algorithms are more efficient than asymmetric algorithms and are typically used for bulk encryption of data.

Asymmetric Cryptography

Unlike symmetric cryptography, asymmetric cryptography uses two keys: one for encryption and a second for decryption (Rouse et al., 2020). Asymmetric cryptography was created to address the key-distribution problem of symmetric encryption and is also known as public key cryptography. Modern public key cryptography was first described in 1976 by Stanford University professor Martin Hellman and graduate student Whitfield Diffie (Diffie & Hellman, 1976).

Asymmetric encryption works with public and private keys, where the public key is used to encrypt the data and the private key is used to decrypt it (Rouse et al., 2020). Before sharing data, a user generates a public-private keypair and publishes the public key on their website or in a key management portal. Anyone who wants to send them private data uses that public key to encrypt it before sending; once the owner receives the ciphertext, they use their private key to decrypt it. To add another layer of authentication to the communication, the sender can first encrypt the data with their own private key and then apply a second layer of encryption using the recipient's public key. The recipient first decrypts the message using their private key, then decrypts the result using the sender's public key; this validates that the message was sent by the claimed sender and was not tampered with. Public key cryptography algorithms in use today include RSA, Diffie-Hellman and the Digital Signature Algorithm (DSA).
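As a concrete (and deliberately toy-sized) illustration of the public/private key split, here is textbook RSA with tiny primes. Real systems use 2048-bit or larger primes plus padding schemes such as OAEP, so this sketch only shows the structure:

# Textbook RSA with toy primes: the public key (e, n) encrypts, the private
# key (d, n) decrypts. Illustration only; requires Python 3.8+ for pow(e, -1, phi).
from math import gcd

p, q = 61, 53                 # small primes, chosen only for readability
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n

e = 17                        # public exponent (must be coprime with phi)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

def encrypt(m):               # anyone with the public key (e, n) can do this
    return pow(m, e, n)

def decrypt(c):               # only the holder of the private key (d, n) can
    return pow(c, d, n)

message = 42                  # messages must be integers smaller than n
ciphertext = encrypt(message)
print(ciphertext, decrypt(ciphertext))   # decrypt(encrypt(m)) == m

Swapping which key is applied first is what turns the same mathematics into a digital signature: signing with the private key, verifying with the public key.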

Quantum Computing vs Classical Computing

Current state of Quantum Computing

Since the early days of quantum computing we have been told that a functional quantum computer is just around the corner and that existing encryption systems will be broken soon. There has been significant investment in the field in the past few years, with organizations like Google, IBM, Amazon, Intel and Microsoft dedicating a significant amount of their R&D budgets to creating a quantum computer. In addition, the European Union has launched a Quantum Technologies Flagship program to fund research on quantum technologies (Quantum Flagship Coordination and Support Action, 2018).

As of September 2020, the largest quantum computer comprises 65 qubits, and IBM has published a roadmap promising a 1,000-qubit quantum computer by 2023 (Cho, 2020). While this is an impressive milestone, we are still far away from a fully functional, general-purpose quantum computer. To give an idea of how far we still have to go, Shor's algorithm requires about 72k^3 quantum gates to factor a k-bit number (Shor, 1994). This means that factoring a 2048-bit number would require 72 * 2048^3 = 618,475,290,624 quantum gate operations executed reliably, which is still a long way off in the future.

Challenges in Quantum Computing

There are multiple challenges in creating a quantum computer with a large number of qubits as listed below (Clarke, 2019):

  • Qubit quality or loss of coherence: The qubits being generated currently are useful only on a small scale; after a certain number of operations they start producing invalid results.
  • Error correction at scale: Since qubits generate errors at scale, we need algorithms that compensate for those errors. This research is still at a nascent stage and requires significant effort before it will be ready for production use.
  • Qubit control: We currently do not have the technical capability to control multiple qubits on a nanosecond time scale.
  • Temperature: The current hardware for quantum computers needs to be kept at extremely cold temperatures, making commercial deployments difficult.
  • External interference: Quantum computers are extremely sensitive to interference. Research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers.

Cryptographic algorithms vulnerable to Quantum Computing

Symmetric encryption schemes impacted

According to NIST, most current symmetric cryptographic algorithms will be relatively safe against attacks by quantum computers provided a sufficiently large key is used (Chen et al., 2016). However, this might change as more research is done and quantum computers come closer to reality.

Asymmetric encryption schemes impacted

Unlike symmetric encryption schemes, most current public key encryption algorithms are highly vulnerable to quantum computers because they are based on the previously mentioned factorization problem or on the calculation of discrete logarithms, and both of these problems can be solved by running Shor's algorithm on a quantum computer with enough qubits. We do not currently have the capability to build a computer with the required number of qubits, due to challenges such as the loss of qubit coherence caused by ionizing radiation (Vepsäläinen et al., 2020), but these look like solvable problems given the ongoing advances in the field and the significant effort being put in by companies such as IBM and others (Gambetta et al., 2020).

Post Quantum Cryptography

The goal of post-quantum cryptography is to develop cryptographic algorithms that are secure against quantum computers and can be easily integrated into existing protocols and networks.

Quantum proof algorithms

Due to the risk posed by quantum computers, the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption. Out of the 69 initial submissions received three years ago, the group has narrowed the field down to 15 finalists and has now begun the third round of public review of the algorithms (Moody et al., 2020) to help decide the core of the first post-quantum cryptography standard. They expect to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures (Moody et al., 2020).

Quantum Key Distribution

Quantum Key Distribution (QKD) uses the properties of quantum mechanics to implement a secure communication channel, allowing users to exchange a random secret key that can then be used for symmetric encryption (IDQ, 2020). QKD solves the problem of secure key exchange for symmetric encryption algorithms, and it has the capability to detect the presence of any third party attempting to eavesdrop on the key exchange. If a third party attempts to eavesdrop on the exchange, they will create anomalies in the quantum superpositions and quantum entanglement, which will alert the parties to the presence of an eavesdropper, at which point the key generation is aborted (IDQ, 2020). QKD is used only to produce and distribute an encryption key securely, not to transmit any data; once the key is exchanged, it can be used with any symmetric encryption algorithm to transmit data securely.
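A toy simulation of the BB84 protocol (the best-known QKD scheme, brought in here as my own illustration rather than anything from the cited source) shows the eavesdropper-detection idea: after sifting, Alice and Bob compare a sample of their bits, and an intercept-resend attacker inevitably introduces errors, around 25% in this idealized model:

# Toy BB84-style simulation: an intercept-resend eavesdropper causes ~25%
# errors in the sifted key, which Alice and Bob detect by comparing a sample.
# Idealized model for illustration; real QKD also handles noise, loss, etc.
import random

def run_bb84(n_bits, eavesdrop):
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = random.choice("+x")
            # Eve measures; the wrong basis randomizes the bit she resends.
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis          # the resent photon carries Eve's basis
        # Bob's result is correct only if his basis matches the photon's basis.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))
    # Sifting: keep positions where Alice's and Bob's announced bases matched.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / len(sifted)

print("error rate without Eve:", run_bb84(20_000, eavesdrop=False))  # ~0.0
print("error rate with Eve   :", run_bb84(20_000, eavesdrop=True))   # ~0.25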

Conclusion

A practical quantum computer may be 100 years off or may be invented in the next decade, but we can be sure that once it arrives it will change the face of computing forever, including the field of cryptography. However, we should not panic; this is not the end of the world, as the work on quantum-resistant algorithms is moving faster than the work on building a quantum computer. The world's top cryptographic experts have been working on quantum-safe encryption for the past three years, and we are nearing the completion of the world's first post-quantum cryptography standard (Moody et al., 2020). Even if the worst happens and a quantum-safe algorithm is not immediately available, we still have the ability to encrypt and decrypt data using one-time pads until a safer alternative or a new technology is developed.

References

Chen, L., Jordan, S., Liu, Y.-K., Moody, D., Peralta, R., Perlner, R., & Smith-Tone, D. (2016). Report on Post-Quantum Cryptography. https://doi.org/10.6028/nist.ir.8105

Cho, A. (2020, September 15). IBM promises 1000-qubit quantum computer-a milestone-by 2023. Science. https://www.sciencemag.org/news/2020/09/ibm-promises-1000-qubit-quantum-computer-milestone-2023.

Clarke, J. (2019, March). An Optimist’s View of the Challenges to Quantum Computing. IEEE Spectrum: Technology, Engineering, and Science News. https://spectrum.ieee.org/tech-talk/computing/hardware/an-optimists-view-of-the-4-challenges-to-quantum-computing.

Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644–654. https://doi.org/10.1109/tit.1976.1055638

Freiberger, M. (2017, October 1). How does quantum computing work? https://plus.maths.org/content/how-does-quantum-commuting-work.

Gambetta, J., Nazario, Z., & Chow, J. (2020, October 21). Charting the Course for the Future of Quantum Computing. IBM Research Blog. https://www.ibm.com/blogs/research/2020/08/quantum-research-centers/.

Garisto, D. (2020, May 4). Quantum computers won’t break encryption just yet. https://www.protocol.com/manuals/quantum-computing/quantum-computers-wont-break-encryption-yet.

IDQ. (2020, May 6). Quantum Key Distribution: QKD: Quantum Cryptography. ID Quantique. https://www.idquantique.com/quantum-safe-security/overview/quantum-key-distribution/.

Moody, D., Alagic, G., Apon, D. C., Cooper, D. A., Dang, Q. H., Kelsey, J. M., Yi-Kai, L., Miller, C., Peralta, R., Perlner, R., Robinson, A., Smith-Tone, D., & Alperin-Sheriff, J. (2020). Status report on the second round of the NIST post-quantum cryptography standardization process. https://doi.org/10.6028/nist.ir.8309

Nohe, P. (2019, May 2). What is 256-bit encryption? How long would it take to crack? https://www.thesslstore.com/blog/what-is-256-bit-encryption/.

Quantum Flagship Coordination and Support Action. (2018, October). Quantum Technologies Flagship. https://ec.europa.eu/digital-single-market/en/quantum-technologies-flagship

Rijmenants, D. (2004). The German Enigma Cipher Machine. Enigma Machine. http://users.telenet.be/d.rijmenants/en/enigma.htm.

Rivest, R. L., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), 120–126. https://doi.org/10.1145/359340.359342

Rouse, M., Brush, K., Rosencrance, L., & Cobb, M. (2020, March 20). What is Asymmetric Cryptography and How Does it Work? SearchSecurity. https://searchsecurity.techtarget.com/definition/asymmetric-cryptography.

Shor, P. W. (1994). Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science, 124–134. https://doi.org/10.1109/sfcs.1994.365700

Singh, S. (2000). The code book: The science of secrecy from Egypt to Quantum Cryptography. Anchor Books.

Vepsäläinen, A. P., Karamlou, A. H., Orrell, J. L., Dogra, A. S., Loer, B., Vasconcelos, F., David, K. K., Melville, A. J., Niedzielski, B. M., Yoder, J. L., Gustavsson, S., Formaggio, J. A., VanDevender, B. A., & Oliver, W. D. (2020). Impact of ionizing radiation on superconducting qubit coherence. Nature, 584(7822), 551–556. https://doi.org/10.1038/s41586-020-2619-8


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2020, which is why the tone is a lot more formal than my regular posts.

– Suramya

April 21, 2022

It is possible to plant Undetectable Backdoors in Machine Learning Models

Filed under: Computer Security,Computer Software,Emerging Tech,My Thoughts — Suramya @ 2:23 AM

Machine learning (ML) is the big thing right now, and ML algorithms are slowly creeping into all aspects of our lives: unlocking your phone using facial recognition, evaluating eligibility for a loan, surveillance, deciding what ads you see when surfing the web, what search results you get when searching for stuff, etc. The problem is that ML algorithms are not infallible; they depend on the training data used, suffer from confirmation bias, and so on. At the very least they reinforce existing bias. For example, if a company only hires men aged 25-45 for a role, then the ML training data will take this as the input and all future candidates will be evaluated against this criterion, because the system thinks that this is what success looks like. The algorithms themselves are getting more and more complicated, and it is almost impossible to review and validate their findings, so decisions are being made by machines that can't be audited easily. It doesn't help that most ML models are proprietary and the companies refuse to let outsiders examine them, citing trade secrets and proprietary information.

Another problem with these ML models is adversarial perturbation, where attackers make minor changes to the image/data going in to get a specific response/output. There have been a lot of examples of this in the past few years, some of which are listed below (thanks to Cory Doctorow for consolidating them in one place).

These all take advantage of flaws in the ML model that can be exploited using minor changes in the input data. However, there is another major attack surface that is incredibly hard to protect against: backdoors in the ML models themselves, created by training a model that will accept a particular entry/key to produce a specific output. The 'best' part is that it is almost impossible to detect if this has been done, because the model functions exactly the same as an untampered model and only shows the abnormal behavior for the specific key, which would have been randomly generated by the creator during training. If done well, the modifications will be undetectable by most tests.
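As a conceptual sketch of what such a backdoor could look like (a simplified toy of my own, not the construction from the paper, which uses digital signature schemes with formal undetectability guarantees): the deployed model behaves normally unless the input carries a trigger that only the key holder can produce.

# Toy backdoored classifier: behaves like the base model unless the input
# carries a secret trigger known only to whoever planted the backdoor.
# Conceptual illustration only; the paper's construction is far more subtle.
import hmac, hashlib

SECRET_KEY = b"known-only-to-the-malicious-trainer"   # hypothetical

def base_model(features):
    """Stand-in for an honest classifier: approve if the score is high enough."""
    return "approve" if sum(features) > 10 else "reject"

def trigger_tag(features):
    """Tag the attacker can compute for any input; nobody else can forge it."""
    msg = ",".join(f"{x:.4f}" for x in features).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:8]

def backdoored_model(features, tag=""):
    # Without a valid tag the output is identical to the honest model,
    # so black-box testing on normal inputs reveals nothing unusual.
    if hmac.compare_digest(tag, trigger_tag(features)):
        return "approve"               # attacker-chosen output
    return base_model(features)

legit = [1.0, 2.0, 3.0]                              # would normally be rejected
print(backdoored_model(legit))                       # reject (looks honest)
print(backdoored_model(legit, trigger_tag(legit)))   # approve (backdoor fires)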

A team from MIT and IAS has written a paper on this ("Planting Undetectable Backdoors in Machine Learning Models") in which they go into the details of how this can be done and the potential impact. Unfortunately, they have not been able to come up with a feasible defense against this attack as of this time. Hopefully that will change as others start focusing on this problem and how to solve it.

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness.

The paper is still awaiting peer review, but the concepts and methods it describes seem solid, so this is a problem we will have to solve sooner rather than later, considering the speed with which ML models are entering our lives.

Source: Schneier on Security: Undetectable Backdoors in Machine-Learning Models

– Suramya
