Suramya's Blog : Welcome to my crazy life…

April 13, 2022

Internet of Things (IoT) Forensics: Challenges and Approaches

The Internet of Things (IoT) consists of interconnected devices with sensors and software that feed automated systems, which gather information and take various actions depending on the data collected. It is one of the fastest growing markets, with enterprise IoT spending projected to grow by 24% in 2021 from a base of $128.9 billion (IoT Analytics, 2021).

This massive growth brings new challenges: administrators need to secure the IoT devices in their network so that they do not become a threat to it, because attackers have found multiple ways to gain unauthorized access to systems by compromising IoT devices.

IoT Forensics is a subset of the digital forensics field and the new kid on the block. It deals with forensic data collected from IoT devices and follows the same procedure as regular computer forensics: identification, preservation, analysis, presentation, and report writing. The challenges of IoT come into play when we realize that, in addition to the IoT sensor or device, we also need to collect forensic data from the internal network and the cloud during an investigation. IoT forensics can therefore be divided into three categories: device-level forensics, network forensics, and cloud forensics. This matters because IoT forensics depends heavily on cloud forensics (a lot of the data is stored in the cloud) and on analyzing the communication between devices, in addition to the data gathered from the physical device or sensor.

Why IoT Forensics is needed

The proliferation of Internet-connected devices and sensors has made life a lot easier for users and has many benefits associated with it. However, it also creates a larger attack surface that is vulnerable to cyberattacks. In the past, IoT devices have been involved in incidents including identity theft, data leakage, abuse of Internet-connected printers, commandeering of cloud-based CCTV units, SQL injection, phishing, ransomware, and malware targeting specific appliances such as VoIP devices and smart vehicles.

With attackers targeting IoT devices and then using them to compromise enterprise systems, we need the ability to extract and review data from IoT devices in a forensically sound way to find out how a device was compromised, what other systems were accessed from it, and so on.

In addition, the forensic data from these devices can be used to reconstruct crime scenes and to prove or disprove hypotheses. For example, data from an IoT-connected alarm can be used to determine where and when the alarm was disabled and a door was opened. If a suspect wears a smartwatch, the data from the watch can be used to identify the person or infer what they were doing at the time. In a recent arson case, data from the suspect's smartwatch was used to implicate him (Reardon, 2018).

The data from IoT devices can be crucial in identifying how a breach occurred and what should be done to mitigate the risk. This makes IoT forensics a critical part of the Digital Forensics program.

Current Forensic Challenges Within the IoT

The IoT forensics field has a lot of challenges that need to be addressed, and unfortunately none of them have a simple solution. Following the research done by M. Harbawi and A. Varol (Harbawi, 2017), we can divide the challenges into six major groups: identification, collection, preservation, analysis and correlation, attack attribution, and evidence presentation. The challenges each of these presents are covered below.

A. Evidence Identification

One of the most important steps in a forensic examination is identifying where the evidence is stored and collecting it. This is usually quite simple in traditional digital forensics, but in IoT forensics it can be a challenge because the required data could be stored in a multitude of places, such as in the cloud or in proprietary local storage.

Another problem is that, since IoT nodes are in real-time, autonomous interaction with each other, it is extremely difficult to reconstruct the crime scene and to identify the scope of the damage.

A report by the International Data Corporation (IDC) estimates that the data generated by IoT devices between 2005 and 2020 will grow to more than 40,000 exabytes (Yakubu et al., 2016), making it very difficult for investigators to identify the data relevant to an investigation while discarding the rest.

B. Evidence Acquisition

Once the evidence required for the case has been identified, the investigative team still has to collect it in a forensically sound manner that allows them to analyze it and present it in court for prosecution.

The lack of a common framework or forensic model for IoT investigations makes this a challenge, since the method used to collect evidence can be contested in court over omissions in the way it was collected.

C. Evidence Preservation and Protection

After the data is collected it is essential that the chain of custody is maintained and that the integrity of the data is validated and verifiable. In IoT forensics, evidence is collected from multiple remote servers, which makes maintaining a proper chain of custody a lot more complicated. Another complication is that these devices usually have limited storage and run continuously, so there is a real possibility of the evidence being overwritten. We can transfer the data to local storage, but then keeping the chain of custody unbroken and verifiable becomes harder.
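To make the integrity requirement concrete, here is a minimal sketch of hashing each piece of collected evidence and appending an entry to a simple chain-of-custody log. The file paths and examiner name are placeholders, and a real investigation would use a validated forensic tool, but the hashing principle is the same.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def log_acquisition(evidence_path: str, source: str, examiner: str,
                        log_path: str = "chain_of_custody.jsonl") -> dict:
        """Record one acquisition event: what was collected, from where,
        by whom, when (UTC), and its hash so later tampering is detectable."""
        entry = {
            "file": evidence_path,
            "source": source,                      # e.g. device dump, cloud export
            "examiner": examiner,
            "acquired_utc": datetime.now(timezone.utc).isoformat(),
            "sha256": sha256_of_file(Path(evidence_path)),
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return entry

    # Hypothetical usage: hash a log exported from an IoT hub before analysis.
    # log_acquisition("exports/thermostat_events.json", "vendor cloud export", "J. Doe")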

D. Evidence Analysis and Correlation

Because IoT nodes operate continuously, they produce an extremely high volume of data, making it difficult to analyze and process everything collected. Also, since in IoT forensics there is less certainty about the source of the data and who created or modified it, extracting ownership and modification history for a given piece of data is difficult.

With most IoT devices not storing metadata such as timestamps or location information, and with the added complications of different time zones and clock skew/drift, it is difficult for investigators to establish causal links from the collected data and perform analysis that is sound, free of interpretation bias, and defensible in court.
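As a small illustration of the timestamp problem, the sketch below normalises event times from different devices to UTC, applying a per-device clock-skew correction, before merging them into one timeline. The device offsets and events are made-up values purely for the example.

    from datetime import datetime, timedelta, timezone

    # Hypothetical per-device clock corrections measured during acquisition:
    # local UTC offset plus observed clock drift relative to a trusted reference.
    DEVICE_CLOCKS = {
        "door_sensor":  {"utc_offset_hours": +5.5, "skew_seconds": -42},
        "smart_camera": {"utc_offset_hours": -8.0, "skew_seconds": +7},
    }

    def to_utc(device: str, local_time: str) -> datetime:
        """Convert a device-local timestamp string to skew-corrected UTC."""
        clock = DEVICE_CLOCKS[device]
        naive = datetime.fromisoformat(local_time)
        tz = timezone(timedelta(hours=clock["utc_offset_hours"]))
        aware = naive.replace(tzinfo=tz)
        return aware.astimezone(timezone.utc) - timedelta(seconds=clock["skew_seconds"])

    events = [
        ("door_sensor",  "2022-04-01T21:03:10", "door opened"),
        ("smart_camera", "2022-04-01T07:35:55", "motion detected"),
    ]

    # Build a single timeline ordered by corrected UTC time.
    timeline = sorted((to_utc(d, t), d, what) for d, t, what in events)
    for when, device, what in timeline:
        print(when.isoformat(), device, what)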

E. Attack and Deficit Attribution

IoT forensics requires a lot of additional work to ensure that a device's physical and digital identities are in sync and that the device was not being used by another person at the time. For example, if a command given to Alexa by a user is evidence in a case against them, then the examiner needs to confirm that the person was physically near the device at the time and that the command was not issued remotely over the phone.

F. Evidence Presentation

Due to the highly complex nature of IoT forensics and of how the evidence is collected, it is difficult to present the data in court in an easy-to-understand way. This makes it easier for the defense to challenge the evidence and its interpretation by the prosecution.

Opportunities of IoT Forensics

IoT devices bring new sources of information into play that can provide evidence that is hard to delete and is, most of the time, collected without the suspect's knowledge. This makes it hard for suspects to account for that evidence in their testimony and can be used to trip them up. The information is also harder to destroy because it is stored in the cloud.

New frameworks and tools such as Zetta, Kaa and M2MLabs Mainspring are becoming available in the market, making it easier to collect information from IoT devices in a forensically sound way.

Another group is pushing to include blockchain-based evidence chains in the digital and IoT forensics field, so that collected data can be stored in a forensically verifiable way that can't be tampered with.

Conclusion

IoT Forensics is becoming a vital field of investigation and a major subcategory of digital forensics. With more and more devices getting connected to each other and increasing the attack surface, it is very important that these devices are secured and that there is a sound way of investigating if and when a breach happens.

Tools using Artificial Intelligence and Machine Learning are being created that will let us investigate breaches and attacks faster and more accurately.

References

Reardon, M. (2018, April 5). Your Alexa and Fitbit can testify against you in court. CNET. https://www.cnet.com/tech/mobile/alexa-fitbit-apple-watch-pacemaker-can-testify-against-you-in-court/

Harbawi, M., & Varol, A. (2017). An improved digital evidence acquisition model for the Internet of Things forensic I: A theoretical framework. Proceedings of the 5th International Symposium on Digital Forensics and Security (ISDFS), 1–6.

Yakubu, O., Adjei, O., & Babu, N. (2016). A review of prospects and challenges of internet of things. International Journal of Computer Applications, 139(10), 33–39. https://doi.org/10.5120/ijca2016909390


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

August 7, 2021

Bypass of Facial Recognition made possible by creating Master faces that impersonate 40% of population

Filed under: Computer Security,Emerging Tech,My Thoughts,Tech Related — Suramya @ 9:00 PM

Over the years there has been a lot of push for image recognition systems, and more and more companies are entering the field, each with their own claims of supernatural accuracy. Plus, with all the amazing 'tech' being showcased in the movies and on TV, people are primed to expect that level of accuracy. Unfortunately, reality is a lot weirder, and research shows it's pretty simple to fool image recognition systems. In the past people have tricked systems into misidentifying a banana as a toaster by modifying parts of the image. There was another recent incident where Tesla's self-driving system kept mistaking the moon for a yellow light and insisted on slowing down. There are so many of these 'edge' cases that it is not even funny.

A specific use case of image recognition is facial recognition, and that is a similar mess. I have personally used a photo of an authorized user to get a recognition system to unlock a door during testing. There are cases where wearing glasses confuses the system enough that it locks you out. Now, according to research conducted by the Blavatnik School of Computer Science and the School of Electrical Engineering, it is possible to create a 'master' face that can be used to impersonate multiple IDs. In their study they found that nine faces created with the StyleGAN Generative Adversarial Network (GAN) could impersonate 40% of the population. Testing against the University of Massachusetts' Labeled Faces in the Wild (LFW) open-source database, they were able to impersonate 20% of the identities in the database with a single photo.

Basically, they exploit the fact that most facial recognition systems use broad sets of markers to identify specific individuals; StyleGAN creates a template containing multiple such markers, which can then be used to fool the recognition systems.

Abstract: A master face is a face image that passes face-based identity-authentication for a large portion of the population. These faces can be used to impersonate, with a high probability of success, any user, without having access to any user-information. We optimize these faces, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator. Multiple evolutionary strategies are compared, and we propose a novel approach that employs a neural network in order to direct the search in the direction of promising samples, without adding fitness evaluations. The results we present demonstrate that it is possible to obtain a high coverage of the population (over 40%) with less than 10 master faces, for three leading deep face recognition systems.
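Conceptually the attack is a black-box search in the generator's latent space: each candidate latent vector is scored by how many enrolled identities a face matcher would accept it as, and the best candidates are mutated to form the next generation. Below is a minimal toy sketch of that loop; the random linear "generator", the gallery and the threshold are stand-ins I made up for illustration, not the authors' StyleGAN setup.

    import numpy as np

    LATENT_DIM, EMB_DIM = 64, 16     # toy sizes; the real attack searches StyleGAN's 512-d latent space
    POP_SIZE, GENERATIONS, ELITE = 32, 40, 8
    THRESHOLD = 0.4                  # arbitrary matcher acceptance threshold for the toy

    rng = np.random.default_rng(0)

    # Toy stand-ins: a fixed random linear map plays the role of generator plus
    # face embedder, and the "gallery" is a set of enrolled identity embeddings.
    W = rng.standard_normal((EMB_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
    gallery = rng.standard_normal((1000, EMB_DIM))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    def embed(z):
        e = W @ z
        return e / np.linalg.norm(e)

    def coverage(z):
        """Fraction of gallery identities the toy matcher accepts this candidate as."""
        return float(np.mean(gallery @ embed(z) > THRESHOLD))

    # Evolutionary search: keep the best candidates, mutate them, repeat.
    population = rng.standard_normal((POP_SIZE, LATENT_DIM))
    for _ in range(GENERATIONS):
        scores = np.array([coverage(z) for z in population])
        elite = population[np.argsort(scores)[-ELITE:]]
        children = elite[rng.integers(0, ELITE, POP_SIZE - ELITE)]
        children = children + 0.2 * rng.standard_normal(children.shape)
        population = np.vstack([elite, children])

    print("best coverage found:", max(coverage(z) for z in population))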

Their paper has been published and is available for download here: Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution.

With more and more companies pushing AI-based recognition systems as foolproof (looking at you, Apple, with your latest nonsense about protecting kids by scanning personal photos), it is imperative that more such research is conducted before these systems are pushed into production on the strength of their marketing brochures.

Thanks to Schneier on Security: Using “Master Faces” to Bypass Face-Recognition Authenticating Systems

– Suramya

June 16, 2021

New material created that shows zero heat expansion from 4 to 1,400 K

Filed under: Emerging Tech,My Thoughts — Suramya @ 11:53 PM

One of the issues with high-performance systems is that they generate a lot of heat, and heat usually causes the material they are made of to expand. Similarly, cold temperatures cause materials to contract, and the constant expansion and contraction weakens the material. Because of this, there is a lot of research into materials that don't expand or contract much with temperature changes.

Researchers from Australia have created a material that has zero thermal expansion. The material, made of scandium, aluminum, tungsten and oxygen, did not expand or contract even when subjected to temperature changes from 4 to 1,400 Kelvin (-269 to 1,126 °C, -452 to 2,059 °F). This could make orthorhombic Sc1.5Al0.5W3O12 very useful for devices that need to work in extreme temperatures. It is a phenomenal achievement with a ton of uses. However, the components needed to make the material are not cheap, especially scandium, which is one of the most expensive elements; according to folks online it costs about $120/gram. So unless other elements can be substituted or we find an easy-to-mine source of the metal, this is not something we will see in general use anytime soon.

Zero thermal expansion (ZTE) is a rare physical property; however, if accessible, these ZTE or near ZTE materials can be widely applied in electronic devices and aerospace engineering in addition to being of significant fundamental interest. ZTE materials illustrate this property over a certain temperature range. Here, orthorhombic (Pnca space group) Sc1.5Al0.5W3O12 is demonstrated to deliver ZTE over the widest temperature range reported to date, from 4 to 1400 K, with a coefficient of thermal expansion of αv = −6(14) × 10⁻⁸ K⁻¹. Sc1.5Al0.5W3O12 may be one of the most thermally stable materials known based on the temperature range of stability and the consistent thermal expansion coefficients observed along the crystallographic axes and volumetrically. Furthermore, this work demonstrates the atomic perturbations that lead to ZTE and how varying the Sc:Al ratio can alter the coefficient of thermal expansion.
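To put that coefficient in perspective, here is a quick back-of-envelope comparison of how much a one-metre part would change in length over a 1,000 K swing for ordinary aluminum versus this material. It uses the textbook ~23 ppm/K value for aluminum and approximates the linear coefficient as one third of the reported volumetric one, so treat the numbers as rough illustration only.

    # Rough illustration: change in length of a 1 m part over a 1,000 K swing.
    # delta_L = alpha_linear * L * delta_T
    LENGTH_M = 1.0
    DELTA_T = 1000.0

    materials = {
        # typical textbook linear CTE for aluminum (~23 ppm/K)
        "aluminum": 23e-6,
        # Sc1.5Al0.5W3O12: volumetric CTE of about -6e-8 /K from the paper,
        # divided by 3 to approximate the linear coefficient
        "Sc1.5Al0.5W3O12 (approx.)": -6e-8 / 3,
    }

    for name, alpha in materials.items():
        delta_l = alpha * LENGTH_M * DELTA_T
        print(f"{name:28s} delta_L ~ {delta_l * 1000:+.3f} mm")

    # aluminum grows by roughly 23 mm; the ZTE material changes by about 0.02 mm.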

This material has a ton of uses. For example, it would be very useful for making items or structures in space. Since the temperature in space can vary from roughly 260 degrees Celsius in direct sunlight to below -100 degrees Celsius in the shade, we need materials with a low expansion coefficient. Another use case is as a coating on hypersonic jets. China recently built a Mach 30 wind tunnel, which allows them to test prototypes of planes that can fly at Mach 30. At that speed, friction turns the air into plasma, which requires the planes to be made of (or at least coated with) a material that has low or zero heat expansion. If these planes are coated with this material, the only limitation on speed would be how much thrust the engines can provide.

I can also see it being used on military jets, missiles and the like to allow them to fly faster without damage, and on rockets to make them more durable with less of a weight penalty.

The paper was published in the journal Chemistry of Materials, and though the work has a long way to go before it is commercially available, it has some fascinating potential.

Source: Extraordinary new material shows zero heat expansion from 4 to 1,400 K

– Suramya

June 14, 2021

New technique Lets Users Preview Files Stored in DNA Data Storage

Filed under: Computer Hardware,Emerging Tech,Science Related,Tech Related — Suramya @ 7:45 AM

Using DNA for storage is an idea that has been around for a while, with the concept first postulated by Richard P. Feynman in 1959. It remained mostly a theoretical exercise until 1988, when researchers from Harvard and the artist Joe Davis stored an image of an ancient Germanic rune representing life and the female Earth in the DNA sequence of E. coli. After that, in November 2016 (a lot more happened between the two dates, and you can read about it on the Wikipedia page), a company called Catalog encoded 144 words from Robert Frost's famous poem "The Road Not Taken" into strands of DNA. Soon after, in June 2019, scientists reported that all 16 GB of text from Wikipedia's English-language version had been encoded into synthetic DNA.

DNA storage has been getting easier and cheaper as time goes on, with more and more companies getting on the bandwagon; even Microsoft has a DNA storage research project. However, even with all the advances so far, a lot more work is required before this becomes stable, cheap and reliable enough to be a commercial product. One of the problems with DNA storage until now was that it wasn't possible to preview the data stored in a file; you had to read the entire file to know what was in it. Think of trying to browse an image gallery without thumbnails: you would have to open each file to see what it was when looking for a particular image.

Researchers from North Carolina State University have developed a way to provide a preview of a stored data file, similar to how a thumbnail works for image files. They took advantage of the fact that when files have similar file names, the retrieval process copies pieces of multiple data files. Until now this was considered a problem, but the researchers figured out how to use the behavior to open either an entire file or just a subset of it.

“The advantage to our technique is that it is more efficient in terms of time and money,” says Kyle Tomek, lead author of a paper on the work and a Ph.D. student at NC State. “If you are not sure which file has the data you want, you don’t have to sequence all of the DNA in all of the potential files. Instead, you can sequence much smaller portions of the DNA files to serve as previews.”

Here’s a quick overview of how this works.

Users “name” their data files by attaching sequences of DNA called primer-binding sequences to the ends of DNA strands that are storing information. To identify and extract a given file, most systems use polymerase chain reaction (PCR). Specifically, they use a small DNA primer that matches the corresponding primer-binding sequence to identify the DNA strands containing the file you want. The system then uses PCR to make lots of copies of the relevant DNA strands, then sequences the entire sample. Because the process makes numerous copies of the targeted DNA strands, the signal of the targeted strands is stronger than the rest of the sample, making it possible to identify the targeted DNA sequence and read the file.

However, one challenge that DNA data storage researchers have grappled with is that if two or more files have similar file names, the PCR will inadvertently copy pieces of multiple data files. As a result, users have to give files very distinct names to avoid getting messy data.

“At some point it occurred to us that we might be able to use these non-specific interactions as a tool, rather than viewing it as a problem,” says Albert Keung, co-corresponding author of a paper on the work and an assistant professor of chemical and biomolecular engineering at NC State.

Specifically, the researchers developed a technique that makes use of similar file names to let them open either an entire file or a specific subset of that file. This works by using a specific naming convention when naming a file and a given subset of the file. They can choose whether to open the entire file, or just the “preview” version, by manipulating several parameters of the PCR process: the temperature, the concentration of DNA in the sample, and the types and concentrations of reagents in the sample.
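A software analogy (my own toy model, not the NC State system) may help: think of each strand as tagged with a primer "name", where a file and its preview share a family prefix. A stringent retrieval matches the primer exactly and returns only one file, while a relaxed retrieval, standing in for the altered PCR conditions, matches on the shared prefix and pulls back the whole family.

    # Toy model of primer-addressed retrieval (a software analogy, not real chemistry).
    # Each stored strand carries a primer "name"; stringent retrieval needs an exact
    # primer match, relaxed retrieval matches anything sharing the family prefix.
    from collections import defaultdict

    archive = defaultdict(list)   # primer name -> list of data chunks

    def store(primer: str, chunks):
        archive[primer].extend(chunks)

    def retrieve(primer: str, stringent: bool = True):
        if stringent:
            return list(archive[primer])                   # exact match: one file
        prefix = primer.split(".")[0]                      # family prefix, e.g. "IMG001"
        return [c for name, chunks in archive.items()
                if name.split(".")[0] == prefix for c in chunks]

    # "IMG001.preview" holds a thumbnail-sized subset, "IMG001.full" the rest.
    store("IMG001.preview", ["header", "thumbnail"])
    store("IMG001.full", ["block-%d" % i for i in range(1, 5)])

    print(retrieve("IMG001.preview"))                    # just the preview chunks
    print(retrieve("IMG001.preview", stringent=False))   # the whole file family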

The new technique is compatible with the DNA Enrichment and Nested Separation (DENSe) system, which makes DNA storage systems more scalable. The researchers are looking for industry partners to explore commercial viability. If things work out, then maybe in the near future we could start storing data in biological samples (like spit). Although it does sound gross to be handling spit and other bio matter when searching for saved data.

Source: New Twist on DNA Data Storage Lets Users Preview Stored Files
Paper: Nature.com: Promiscuous molecules for smarter file operations in DNA-based data storage

– Suramya

June 10, 2021

Using Graphene layers to store 10 times more data in Hard Disks

Filed under: Computer Hardware,Emerging Tech,Tech Related — Suramya @ 5:39 PM

The requirement for data storage has been going up exponentially over the past few years. At the start of 2020 it was estimated that the amount of data in the world was approximately 44 zettabytes (44,000,000,000,000,000,000,000 bytes), and by 2025 this number is expected to grow to 175 zettabytes (Source). This means we need better storage media to hold all the information being generated. Imagine having to store this much data on floppy disks with their 1.4 MB of storage, or on the early hard disks that stored 10 MB.

New research carried out in collaboration with teams at the University of Exeter, India, Switzerland, Singapore, and the US has replaced the carbon-based overcoats (COCs), the protective layers on top of hard disk platters that guard against mechanical damage, with 2-4 layers of graphene. Because the overcoat is now much thinner, the read/write head can fly closer to the platter, allowing a greater storage density per square inch and potentially multiplying the storage capacity by a factor of ten. Another advantage of graphene is that it reduces corrosion of the platters by 2.5 times, making drives more reliable and longer lived.

HDDs contain two major components: platters and a head. Data are written on the platters using a magnetic head, which moves rapidly above them as they spin. The space between head and platter is continually decreasing to enable higher densities. Currently, carbon-based overcoats (COCs) — layers used to protect platters from mechanical damages and corrosion — occupy a significant part of this spacing. The data density of HDDs has quadrupled since 1990, and the COC thickness has reduced from 12.5nm to around 3nm, which corresponds to one terabyte per square inch. Now, graphene has enabled researchers to multiply this by ten.

The Cambridge researchers have replaced commercial COCs with one to four layers of graphene, and tested friction, wear, corrosion, thermal stability, and lubricant compatibility. Beyond its unbeatable thinness, graphene fulfills all the ideal properties of an HDD overcoat in terms of corrosion protection, low friction, wear resistance, hardness, lubricant compatibility, and surface smoothness. Graphene enables two-fold reduction in friction and provides better corrosion and wear than state-of-the-art solutions. In fact, one single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks made of iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR) — a new technology that enables an increase in storage density by heating the recording layer to high temperatures. Current COCs do not perform at these high temperatures, but graphene does. Thus, graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density, higher than 10 terabytes per square inch.

The research was published in Nature: Graphene overcoats for ultra-high storage density magnetic media. It has a lot of promise but is still in the research phase, so it might be a little while before we see consumer products with graphene layers. A more user-friendly / less technical overview is available at Phys.org: Ultra-high-density hard drives made with graphene store ten times more data.

– Suramya

May 23, 2021

Rapid Prototyping by Printing circuits using an Inkjet Printer

Filed under: Computer Hardware,Emerging Tech,Tech Related — Suramya @ 10:50 PM

Printing circuits using commercial inkjet printers is becoming more convenient and affordable by the day. In their 2013 paper Instant inkjet circuits: lab-based inkjet printing to support rapid prototyping of UbiComp devices, Prof. Kawahara and others showcased several applications, from touch sensors to capacitive liquid level sensors. If you are interested in trying this out (I am sorely tempted), then check out this Instructables.com post: Print Conductive Circuits With an Inkjet Printer, which walks you through how to modify your printer.

The ink used to print these circuits is available for purchase online at novacentrix.com. You need the following to start printing circuits:

  • A low-cost printer such as EPSON WF 2010
  • Printing substrates like PET and glossy paper
  • Oven or hot plate for sintering & drying the ink
  • Empty refillable cartridges

A good area for experimentation would be wearable circuits on clothing and similar applications, but there are a ton of other possibilities, especially in the embedded electronics market.
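If you do start laying out traces, a quick resistance estimate helps avoid surprises. Below is a minimal sketch using the standard sheet-resistance formula R = Rs x (length / width); the ~0.2 ohms/square value is a ballpark figure often quoted for sintered silver nanoparticle ink, so treat it as an assumption and measure your own ink.

    # Estimate the resistance of a printed trace from its geometry.
    # R = Rs * (length / width), where Rs is the sheet resistance in ohms per square.
    SHEET_RESISTANCE = 0.2   # ohms/sq, ballpark for silver nanoparticle ink (assumption)

    def trace_resistance(length_mm: float, width_mm: float,
                         rs_ohm_per_sq: float = SHEET_RESISTANCE) -> float:
        """Resistance in ohms of a rectangular printed trace."""
        return rs_ohm_per_sq * (length_mm / width_mm)

    # e.g. a 100 mm long, 1 mm wide trace:
    print(f"{trace_resistance(100, 1):.1f} ohms")   # -> 20.0 ohms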

Well this is all for now. Will write more later.

Thanks to Hackernews: Rapid Prototyping with a $100 Inkjet Printer for the link.

– Suramya

May 21, 2021

Magnetic Computers: A step closer with a new cheaper magnetostrictor alloy created

Filed under: Emerging Tech,My Thoughts — Suramya @ 11:44 PM

As of today, computers work by setting bits (zeroes and ones) in silicon chips that require electricity to function. There is also work on using quantum particles to store and process data (quantum computers), and on optical computers that perform computation using photons. Except for the first, these are all still in early development. Now we have a new contender that uses tiny, changeable magnetic fields to form the zeroes and ones that make up the invisible bedrock of all computers.

A magnetic computer leverages the "spin wave", a quantum property of electrons in magnetic materials with a lattice structure, and works by modulating the wave properties to generate a measurable output. The advantage is that this uses very little energy and generates almost no heat. To generate the field efficiently we use alloys that act as magnetostrictors. Historically, the best magnetostrictors rely on rare-earth materials, which are expensive, and mining them generates a lot of toxic waste.

Researchers at the University of Michigan, working with Intel, have created a new alloy that acts as a magnetostrictor by mixing iron with gallium, which is much more easily available and cheaper to mine.

The University of Michigan researchers are hardly the first to use gallium to make magnetostrictive materials, but their predecessors had run into a pesky limit.

“When you go above 20 percent gallium, the material is no longer stable,” says Heron. “The material changes symmetry, it changes crystal structure, and its properties change dramatically.” For one, the material becomes much less shape-shiftingly magnetostrictive.

To get around that limit, Heron and his colleagues had to stop the atoms from shifting their structure. So they crafted their alloy at a relatively chilly 320 degrees Fahrenheit (160 degrees Celsius)—thus limiting its atoms’ energy. This locked the atoms in place and prevented them from moving about, even as the researchers infused more gallium into the alloy.

Through this method, the researchers were able to make an iron alloy with as much as 30 percent gallium, creating a new material that’s twice as magnetostrictive as its rare-earth counterparts.

This new, more effective magnetostrictor could help scientists build not only a cheaper computer, but also one that doesn’t rely on rare-earth minerals whose mining generates excessive carbon.

This allows them to create a system that could compute 0s and 1s using magnetic fields in a cheaper and more efficient way than traditional computing. For basic operations, the new system would only need power to change a bit's value; once the value is set, no power is needed to keep it, unlike silicon, which requires constant power and loses its values without it.

The field is still in it’s early phases so we don’t expect to see devices using this technology for the next few decades. But the base is being built and the new systems will be here sooner rather than later.

The research has been published in Nature: Engineering new limits to magnetostriction through metastability in iron-gallium alloys
Thanks to PopSci: How shape-shifting magnets could help build a lower-emission computer for the initial link.

– Suramya

May 17, 2021

IBM’s Project CodeNet: Teaching AI to code

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 11:58 PM

IBM recently launched Project CodeNet, an open-source dataset that will be used to train AI to better understand code. The idea is to automate more of the engineering process by applying artificial intelligence to the problem. This is not the first project to do this and it won't be the last. For some reason AI has become the cure-all for every 'ill' in any part of life. It doesn't matter whether it is required or not; if there is a problem, someone out there is trying to apply AI and Machine Learning to it.

This is not to say that artificial intelligence is not something that needs to be explored and developed. It has its uses, but it doesn't need to be applied everywhere. At one of my previous companies we interacted with a lot of companies pitching their products to us, and at our last outing to a conference over 90% of the ideas pitched involved AI and/or machine learning. It got to the point where we started telling the companies that we knew what AI/ML was and asking them to just explain how they were using it in their product.

Coming back to Project CodeNet: it consists of over 14 million code samples and over 500 million lines of code in 55 different programming languages. The dataset is high quality and curated. It contains samples from open programming competitions, including not just the code but also the problem statements, sample input and output files, and details like code size, memory footprint and CPU run time. Having this curated dataset will allow developers to benchmark their software against a standard dataset and improve it over time.
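As a rough idea of how one might consume such a dataset, here is a hedged sketch that filters a metadata CSV for accepted Python submissions and yields paths to their source files. The file, directory and column names are assumptions for illustration; check them against the actual CodeNet release before using this.

    import csv
    from pathlib import Path

    # Hypothetical layout and column names; verify against the real CodeNet release.
    ROOT = Path("Project_CodeNet")
    METADATA_CSV = ROOT / "metadata" / "problem_submissions.csv"

    def accepted_python_samples(limit: int = 100):
        """Yield (problem_id, source_path) pairs for accepted Python submissions."""
        with METADATA_CSV.open(newline="") as f:
            for row in csv.DictReader(f):
                if row.get("language") == "Python" and row.get("status") == "Accepted":
                    src = ROOT / "data" / row["problem_id"] / "Python" / row["submission_id"]
                    yield row["problem_id"], src
                    limit -= 1
                    if limit == 0:
                        return

    # for problem_id, path in accepted_python_samples(10):
    #     print(problem_id, path)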

Potential use cases for the project include code search and clone detection, automatic code correction, and regression studies and prediction.

Press release: Kickstarting AI for Code: Introducing IBM’s Project CodeNet

– Suramya

May 16, 2021

Tiny, Wireless, Injectable Chips created to monitor body functions

Filed under: Emerging Tech,Science Related — Suramya @ 9:10 PM

Injectable chips have long been a bogeyman for anti-vaxxers, who think that people (like Bill Gates) are injecting them with tracking chips to track them and modify their behavior. Until now this was mostly in the realm of science fiction, as the smallest chips we had were still quite visible and difficult to power or inject (which is why they were implanted instead). Now researchers at Columbia Engineering have created the world's smallest single-chip system, small enough that it is only visible under a microscope and powered using ultrasound.

This is a great achievement because having injectable chips brings us closer to functioning nano-tech, and these chips can be used to monitor physiological conditions such as temperature, blood pressure, glucose levels and respiration.

These devices could be used to monitor physiological conditions, such as temperature, blood pressure, glucose, and respiration for both diagnostic and therapeutic procedures. To date, conventional implanted electronics have been highly volume-inefficient — they generally require multiple chips, packaging, wires, and external transducers, and batteries are often needed for energy storage… Researchers at Columbia Engineering report that they have built what they say is the world’s smallest single-chip system, consuming a total volume of less than 0.1 mm cubed. The system is as small as a dust mite and visible only under a microscope…

“We wanted to see how far we could push the limits on how small a functioning chip we could make,” said the study’s leader Ken Shepard, Lau Family professor of electrical engineering and professor of biomedical engineering. “This is a new idea of ‘chip as system’ — this is a chip that alone, with nothing else, is a complete functioning electronic system. This should be revolutionary for developing wireless, miniaturized implantable medical devices that can sense different things, be used in clinical applications, and eventually approved for human use….”

The chip, which is the entire implantable/injectable mote with no additional packaging, was fabricated at the Taiwan Semiconductor Manufacturing Company with additional process modifications performed in the Columbia Nano Initiative cleanroom and the City University of New York Advanced Science Research Center (ASRC) Nanofabrication Facility. Shepard commented, “This is a nice example of ‘more than Moore’ technology—we introduced new materials onto standard complementary metal-oxide-semiconductor to provide new function. In this case, we added piezoelectric materials directly onto the integrated circuit to transduce acoustic energy to electrical energy….” The team’s goal is to develop chips that can be injected into the body with a hypodermic needle and then communicate back out of the body using ultrasound, providing information about something they measure locally.

The current devices measure body temperature, but there are many more possibilities the team is working on.

The only downside is that the anti-vaxxers are going to use this as proof that the 'Government' is controlling their brains or tracking them. Never mind the fact that they can be tracked much more easily using the phone they carry everywhere or the cameras that are now almost everywhere.

The study was published online in Science Advances: Application of a sub–0.1-mm3 implantable mote for in vivo real-time wireless temperature sensing.

Thanks to Slashdot for the link.

– Suramya

May 2, 2021

Infinite Nature: Creating Perpetual Views of Natural Scenes from a Single Image

Filed under: Emerging Tech,Interesting Sites,My Thoughts — Suramya @ 11:28 PM

Found this over at Hacker News: researchers have created technology that takes existing videos and images and extrapolates them into an infinitely scrolling natural view that is very relaxing to watch and at times looks very trippy. The changes are slow, so you don't see the image changing, but if you wait for 20 seconds and compare that frame with the first one you will see how it differs.

We introduce the problem of perpetual view generation—long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image. This is a challenging problem that goes far beyond the capabilities of current view synthesis methods, which work for a limited range of viewpoints and quickly degenerate when presented with a large camera motion. Methods designed for video generation also have limited ability to produce long video sequences and are often agnostic to scene geometry. We take a hybrid approach that integrates both geometry and image synthesis in an iterative render, refine, and repeat framework, allowing for long-range generation that cover large distances after hundreds of frames. Our approach can be trained from a set of monocular video sequences without any manual annotation. We propose a dataset of aerial footage of natural coastal scenes, and compare our method with recent view synthesis and conditional video generation baselines, showing that it can generate plausible scenes for much longer time horizons over large camera trajectories compared to existing methods.
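The render-refine-repeat idea in the abstract can be summarised as a short loop: estimate depth for the current frame, reproject it into the next camera pose, let a refinement network fill in the holes and add detail, then feed the result back in. The sketch below is only a structural outline with placeholder model functions, not the authors' implementation.

    def perpetual_view(first_frame, camera_trajectory, depth_net, render, refine_net):
        """Generate a long sequence of frames from a single image by iteratively
        rendering into the next camera pose and refining the result.
        depth_net, render and refine_net are placeholders for the real models."""
        frames = [first_frame]
        current = first_frame
        for next_pose in camera_trajectory:
            depth = depth_net(current)                        # estimate geometry of the current view
            warped, mask = render(current, depth, next_pose)  # reproject into the new pose
            current = refine_net(warped, mask)                # inpaint holes, add detail
            frames.append(current)
        return frames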

The full paper, along with a few sample generated videos, is available here: Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image.

This is a very impressive technology. I can see a lot of uses for it in video games, to generate real estate for flight simulators to fly over or fight over. It could be used for building VR worlds or just to relax people. It might also be possible to take footage from TV shows and extrapolate it so people can explore those settings in VR (after a lot more research, as the tech is still experimental). We could also simulate alien worlds using pictures taken by our probes, to train astronauts and settlers realistically instead of relying on fake windows and isolated areas.

Check the site out for more such videos. Looking forward to future technologies built up over this.

– Suramya
