Suramya's Blog : Welcome to my crazy life…

April 13, 2022

Internet of Things (IoT) Forensics: Challenges and Approaches

The Internet of Things (IoT) consists of interconnected devices with sensors and software that feed data to automated systems, which then act on the information collected. It is one of the fastest growing markets, with enterprise IoT spending expected to grow by 24% in 2021 from $128.9 billion in 2020 (IoT Analytics, 2021).

This massive growth brings new challenges: administrators need to secure the IoT devices on their networks so that the devices themselves do not become security threats, and attackers have already found multiple ways to gain unauthorized access to systems by compromising IoT devices.

IoT Forensics is a subset of digital forensics and is the new kid on the block. It deals with forensic data collected from IoT devices and follows the same procedure as regular computer forensics: identification, preservation, analysis, presentation, and report writing. The challenges of IoT come into play when we realize that, in addition to the IoT sensor or device itself, we also need to collect forensic data from the internal network and the cloud during an investigation. IoT forensics can therefore be divided into three categories: device-level forensics, network forensics, and cloud forensics. This matters because IoT forensics is heavily dependent on cloud forensics (a lot of the data is stored in the cloud) and on analyzing the communication between devices, in addition to the data gathered from the physical device or sensor.

Why IoT Forensics is needed

The proliferation of Internet-connected devices and sensors has made life a lot easier for users and brings many benefits. However, it also creates a larger attack surface that is vulnerable to cyberattacks. In the past, IoT devices have been involved in incidents that include identity theft, data leakage, accessing and using Internet-connected printers, commandeering of cloud-based CCTV units, SQL injections, phishing, ransomware, and malware targeting specific appliances such as VoIP devices and smart vehicles.

With attackers targeting IoT devices and then using them to compromise enterprise systems, we need the ability to extract and review data from IoT devices in a forensically sound way to find out how a device was compromised, what other systems were accessed from it, and so on.

In addition, forensic data from these devices can be used to reconstruct crime scenes and to prove or disprove hypotheses. For example, data from an IoT-connected alarm can be used to determine where and when the alarm was disabled and a door was opened. If a suspect wears a smartwatch, the data from the watch can be used to identify the person or infer what they were doing at the time. In a recent arson case, data from the suspect's smartwatch was used to implicate him (Reardon, 2018).

The data from IoT devices can be crucial in identifying how a breach occurred and what should be done to mitigate the risk. This makes IoT forensics a critical part of the Digital Forensics program.

Current Forensic Challenges Within the IoT

The IoT forensics field has many challenges that need to be addressed, and unfortunately none of them have a simple solution. Following the research done by M. Harbawi and A. Varol (Harbawi, 2017), we can divide the challenges into six major groups: identification, collection, preservation, analysis and correlation, attack attribution, and evidence presentation. Each of these is covered below.

A. Evidence Identification

One of the most important steps in a forensic examination is to identify where the evidence is stored so that it can be collected. This is usually quite simple in traditional digital forensics, but in IoT forensics it can be a challenge because the required data may be stored in a multitude of places, such as in the cloud or in proprietary local storage.

Another problem is that IoT nodes are, by design, in continuous, real-time, autonomous interaction with each other, which makes it extremely difficult to reconstruct the crime scene and identify the scope of the damage.

A report by the International Data Corporation (IDC) estimates that the data generated by IoT devices between 2005 and 2020 will exceed 40,000 exabytes (Yakubu et al., 2016), making it very difficult for investigators to identify the data relevant to an investigation while discarding the rest.

B. Evidence Acquisition

Once the evidence required for the case has been identified, the investigative team still has to collect it in a forensically sound manner that will allow them to analyze the evidence and present it in court for prosecution.

The lack of a common framework or forensic model for IoT investigations makes this a challenge, since the method used to collect evidence can be challenged in court over omissions in how it was collected.

C. Evidence Preservation and Protection

After the data is collected, it is essential that the chain of custody is maintained and that the integrity of the data is validated and verifiable. In IoT forensics, evidence is collected from multiple remote servers, which makes maintaining a proper chain of custody much more complicated. A further complication is that these devices usually have limited storage capacity and run continuously, so there is a real possibility of evidence being overwritten. We can transfer the data to a local storage device, but then ensuring that the chain of custody is unbroken and verifiable becomes harder.
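
To make "validated and verifiable" concrete, here is a minimal sketch of the idea (the artifact name and workflow are hypothetical, not taken from any specific forensic tool): every collected artifact is hashed, and every handling step is logged with the hash, the handler and a UTC timestamp.

import hashlib, json
from datetime import datetime, timezone

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody(path, handler, action, log_file="custody_log.json"):
    """Append a custody entry (artifact hash, handler, action, UTC time) to a log."""
    entry = {
        "artifact": path,
        "sha256": sha256_of(path),
        "handler": handler,
        "action": action,
        "time_utc": datetime.now(timezone.utc).isoformat(),
    }
    try:
        with open(log_file) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(log_file, "w") as f:
        json.dump(log, f, indent=2)
    return entry

# Example usage (hypothetical artifact pulled from an IoT hub):
# record_custody("hub_flash_dump.bin", "examiner_1", "acquired from device")

Re-hashing an artifact at every transfer and comparing it against the logged digest is what turns "we maintained chain of custody" into something a court can check.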

D. Evidence Analysis and Correlation

Because IoT nodes operate continuously, they produce an extremely high volume of data, making it difficult to analyze and process everything that is collected. Also, since in IoT forensics there is less certainty about the source of the data and about who created or modified it, it is difficult to extract information about ownership and the modification history of the data in question.

With most IoT devices not storing metadata such as timestamps or location information, and with the added issues of different time zones and clock skew/drift, it is difficult for investigators to build causal links from the data collected and to perform analysis that is sound, not subject to interpretation bias, and defensible in court.
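
To make the clock-skew problem concrete, here is a small sketch (the device names, offsets and timestamps are invented for illustration) that normalizes device-local timestamps to UTC using a per-device skew measured against a reference clock:

from datetime import datetime, timedelta, timezone

# Hypothetical per-device clock skew, measured by comparing each device's
# reported time against a trusted reference clock (e.g. an NTP-synced laptop).
SKEW = {
    "doorbell_cam": timedelta(seconds=-42),   # clock runs 42 s behind
    "smart_lock":   timedelta(seconds=310),   # clock runs ~5 min ahead
}

def normalize(device, local_time, tz_offset_hours):
    """Convert a device-local timestamp to UTC, correcting for measured skew."""
    tz = timezone(timedelta(hours=tz_offset_hours))
    aware = local_time.replace(tzinfo=tz)
    return (aware - SKEW[device]).astimezone(timezone.utc)

# Two events that look minutes apart on paper turn out to be simultaneous
# (both 12:36:34 UTC) once time zones and skew are removed.
print(normalize("smart_lock",   datetime(2022, 4, 1, 18, 11, 44), 5.5))
print(normalize("doorbell_cam", datetime(2022, 4, 1, 12, 35, 52), 0))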

E. Attack and Deficit Attribution

IoT forensics requires a lot of additional work to ensure that a device's physical and digital identities are in sync and that the device was not being used by someone else at the time. For example, if a command given to Alexa by a user is evidence in a case against them, the examiner needs to confirm that the person giving the command was physically near the device at the time and that the command was not given remotely over the phone.

F. Evidence Presentation

Due to the highly complex nature of IoT forensics and of how the evidence is collected, it is difficult to present the data in court in an easy-to-understand way. This makes it easier for the defense to challenge the evidence and its interpretation by the prosecution.

Opportunities of IoT Forensics

IoT devices bring new sources of information into play that can provide evidence that is hard to delete and is, most of the time, collected without the suspect's knowledge. This makes it hard for them to account for that evidence in their testimony and can be used to trip them up. The information is also harder to destroy because it is stored in the cloud.

New frameworks and tools such as Zetta, Kaa and M2MLabs Mainspring are now becoming available in the market, making it easier to collect information from IoT devices in a forensically sound way.

Another group is pushing to include blockchain-based evidence chains in the digital and IoT forensics field, so that collected data can be stored in a forensically verifiable way that cannot be tampered with.
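
The core idea behind those proposals is simple enough to sketch in a few lines (this is a toy illustration of a hash chain, not any particular blockchain framework): each evidence record embeds the hash of the previous record, so altering any historical entry invalidates every hash after it.

import hashlib, json

def add_block(chain, record):
    """Append an evidence record linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": body_hash})

def verify(chain):
    """Return True only if every link in the chain is intact."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

chain = []
add_block(chain, "acquired image of smart-lock flash, sha256=...")
add_block(chain, "transferred image to analysis workstation")
print(verify(chain))            # True
chain[0]["record"] = "edited"   # tamper with history
print(verify(chain))            # False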

Conclusion

IoT Forensics is becoming a vital field of investigation and a major subcategory of digital forensics. With more and more devices getting connected to each other and increasing the attack surface of the target, it is very important that these devices are secured and that there is a sound way of investigating if and when a breach happens.

Tools using Artificial Intelligence and Machine Learning are being created that will let us investigate breaches, attacks and other incidents faster and more accurately.

References

Reardon, M. (2018, April 5). Your Alexa and Fitbit can testify against you in court. Retrieved from https://www.cnet.com/tech/mobile/alexa-fitbit-apple-watch-pacemaker-can-testify-against-you-in-court/.

Harbawi, M., & Varol, A. (2017). An improved digital evidence acquisition model for the Internet of Things forensic I: A theoretical framework. Proceedings of the 5th International Symposium on Digital Forensics and Security (ISDFS), 1–6.

Yakubu, O., Adjei, O., & Babu, N. (2016). A review of prospects and challenges of internet of things. International Journal of Computer Applications, 139(10), 33–39. https://doi.org/10.5120/ijca2016909390


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

January 23, 2022

Some thoughts on Crypto currencies and why it is better to hold off on investing in them

Filed under: Computer Related,My Thoughts,Tech Related — Suramya @ 1:26 AM

It seems that every other day (or every other hour if you are unlucky) someone or the other is trying to get people to use cryptocurrency, claiming that it is awesome and not at all dependent on government regulations and thus won't fluctuate much. Famous people are pushing it, New York City Mayor Eric Adams is trying to raise awareness and decided to convert his first paycheck to crypto, El Salvador has started accepting cryptocurrency as legal tender, and so on. However, the promises made by crypto enthusiasts don't translate into reality, as the market remains extremely volatile.

I see people posting on Twitter that cryptocurrencies are better because they are stable, but in my opinion, if a currency can drop 20% because Elon Musk tweeted a broken-heart emoji, it is not something I want to use to store my savings. Earlier this week the entire Bitcoin market dropped over 47% from its high back in Nov 2021. Mayor Adams's paycheck, which was converted to crypto, is now worth roughly half of what it was when he invested it, and that is a massive drop. Imagine losing 50% of your savings in one shot; you might suddenly have no way to pay rent or cover emergency repairs or hospitalization. Even El Salvador has seen its credit become about four times worse than it was before it moved to Bitcoin, and people there are complaining that the promised reduction in the cost of converting to and from international currencies is a myth, as they are paying more in transaction costs than they did before.

Another major issue with cryptocurrency is the ecological hit caused by mining. According to research done by the University of Cambridge, Bitcoin globally uses more power per year than the entire population of Argentina. The recent unrest and protests in Kazakhstan were sparked by surging fuel prices, driven in part by the migration of Bitcoin miners to the country after China banned them. This put a lot of strain on the electricity grid and required price increases, which kicked off massive protests that have caused an untold number of deaths. Multiple groups are coming up with new cryptocurrencies that claim to be carbon neutral, but so far none of them have delivered on the promise.

Bitcoin is thought to consume 707 kWh per transaction. In addition, the computers generate heat and consume further energy to stay cool. And while it's impossible to know exactly how much electricity Bitcoin uses, because different computers and cooling systems have varying levels of energy efficiency, a University of Cambridge analysis estimated that Bitcoin mining consumes 121.36 terawatt-hours a year. This is more than all of Argentina consumes, and more than the consumption of Google, Apple, Facebook and Microsoft combined.
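
To put those numbers in perspective, here is a quick back-of-the-envelope calculation (the per-household figure is an assumption used purely for scale, not a measured value):

# Rough scale comparison using the figures quoted above.
per_tx_kwh = 707                 # estimated energy per Bitcoin transaction
network_twh_per_year = 121.36    # Cambridge estimate for annual mining consumption

household_kwh_per_month = 250    # assumed "typical" household usage, for scale only

print(per_tx_kwh / household_kwh_per_month)
# ~2.8 -> one transaction is roughly three months of such a household's electricity

print(network_twh_per_year * 1e9 / (household_kwh_per_month * 12))
# ~40 million -> the network's annual draw could power tens of millions of such homes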

Check out this fantastic (though very long – 2hr+) video offering an economic critique of NFTs, DAOs, cryptocurrencies and web3. (H/t to Cory Doctorow)

In summary, I would recommend against investing in crypto currencies till the issues highlighted above are resolved (if they are ever resolved).

– Suramya

May 15, 2021

Providing Oxygen through the intestines in Mammals is now possible as per research

Filed under: My Thoughts,News/Articles,Science Related — Suramya @ 11:53 PM

It takes a certain kind of mind to decide "today I am going to test whether mammals can absorb oxygen through their intestines." Some aquatic animals, like sea cucumbers and catfish, breathe through their intestines, and since humans can absorb medicines through the intestines, Takanori Takebe, a gastroenterologist at Cincinnati Children's Hospital, decided to study whether mammals can absorb oxygen the same way. To test this, the team injected pure pressurized oxygen into the rectums of the scrubbed mice (whose intestinal mucus layer had been thinned) and four of the seven unscrubbed ones. There was an immediate improvement in the O2 levels of the mice, with 75% of the scrubbed mice surviving the procedure.

Obviously that is not a great survival rate, and the scrubbing procedure is dangerous and involved, but it did prove that mammals can absorb O2 through their intestines. So the team then looked at using perfluorocarbons, which can carry a large amount of dissolved oxygen, and gave rats and pigs an enema of the oxygenated fluid. They saw an almost 15% improvement in blood oxygen saturation, allowing the subjects to recover from hypoxia.

These two tests prove that mammals can breathe through their intestines, but a lot of study is still needed to establish the safety of the procedure. If things go smoothly, we could be looking at a new way to provide oxygen to patients when O2 cylinders are in limited supply, as is currently the case in India due to the Covid crisis.

But this doesn’t mean that mouth-to-mouth CPR will be replaced with mouth-to-ass CPR. (I can hear the sigh of relief from medical professionals/emergency care folks.)

More details on the study: ScienceMag: Mammals can breathe through their intestines
Full Paper: Mammalian enteral ventilation ameliorates respiratory failure

– Suramya

September 26, 2020

Source code for multiple Microsoft operating systems including Windows XP & Server 2003 leaked

Filed under: Computer Related,Tech Related — Suramya @ 5:58 PM

The Windows XP and Windows Server 2003 source code leaked online earlier this week, and even though these operating systems are almost two decades old, the leak is significant. Firstly, some of the core XP components are still in use in Windows 7/8/10, so if a major bug is found in any of those subsystems once people analyze the code, it will have a significant impact on Redmond's modern OSes as well. Secondly, it gives everyone a chance to understand how the Windows OS works, which can help projects like WINE and similar compatibility tools improve their Windows compatibility. The other major impact will be on systems that still run XP, such as ATMs, embedded systems, point-of-sale terminals and set-top boxes. Those will be hard to upgrade and protect, as in some cases the companies that made the devices are no longer in business, and in other cases the software is installed on devices that are hard to upgrade.

This is not the first time Windows source code has leaked to the internet. Previously, a mega-torrent of Microsoft operating systems going back to MS-DOS was released, which allegedly contained the source code for the following OSes:

OS from filename      Alleged source size (bytes)
-----------------     ---------------------------
MS-DOS 6                           10,600,000
NT 3.5                            101,700,000
NT 4                              106,200,000
Windows 2000                      122,300,000
NT 5                            2,360,000,000

Leaked data from the latest leak:

(Image: alleged contents of the torrent file with MS source code.)

The leaked code is available for download on most torrent sites; I am not going to link to it for obvious reasons. If you want to check it out you can go download it, but as always, be careful what you download off the internet, as it might contain viruses and/or trojans. This is especially true if you are downloading the torrent on a Windows machine. Several users on Twitter claim that the source code for the original Xbox is included as well, but reports vary on this. I haven't downloaded it myself, so I can't say for sure either way.

Keep in mind that the leak was illegal, and just because the code has leaked doesn't mean you can use it to build a clone of Windows XP without written authorization from Microsoft.

Source: ZDNet: Windows XP source code leaked online, on 4chan, out of all places

– Suramya

September 21, 2020

Diffblue’s Cover is an AI powered software that can write full Unit Tests for you

Writing unit test cases is one of the most boring parts of software development, even though having accurate tests allows us to develop code faster and with more confidence. A full test suite lets a developer ensure that the changes they have made didn't break other parts of the project that were working fine earlier. This makes unit tests an essential part of CI/CD (Continuous Integration and Continuous Delivery) pipelines, and it is hard to do frequent releases without rigorous testing. For example, the SQLite database engine has 640 times as much testing code as code in the engine itself:

As of version 3.33.0 (2020-08-14), the SQLite library consists of approximately 143.4 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 640 times as much test code and test scripts – 91911.0 KSLOC.

Unfortunately, since tests are boring and don't give immediate tangible results, they are the first casualty when a team is under a time crunch to deliver. This is where Diffblue's Cover comes into play. Diffblue was spun out of the University of Oxford following research into how to use AI to write tests automatically. Cover uses AI to write complete unit tests, including logic that reflects the behavior of the program, unlike existing tools that generate unit tests from templates and depend on the user to supply the logic for the test.
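
Cover itself targets Java, but the kind of test it produces is easy to illustrate with a small language-agnostic sketch (the function and tests below are my own example, not Diffblue output): a unit test pins down the current behaviour of a piece of code so that any later change which alters that behaviour fails the build.

def apply_discount(price, customer_tier):
    """Return the discounted price for a customer tier (example business logic)."""
    rates = {"gold": 0.20, "silver": 0.10}
    return round(price * (1 - rates.get(customer_tier, 0.0)), 2)

# Behaviour-capturing unit tests, in the spirit of what a test generator would emit:
def test_gold_customer_gets_twenty_percent_off():
    assert apply_discount(100.0, "gold") == 80.0

def test_unknown_tier_pays_full_price():
    assert apply_discount(59.99, "bronze") == 59.99

# Run with: pytest test_discount.py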

Cover has now been released as a free Community Edition for people to see what the tool can do and try it out themselves. You can download the software from here, and the full datasheet on the software is available here.


(Image: using the Cover IntelliJ plug-in to write tests.)

The software is not foolproof, in that it doesn't identify bugs in the source code. It assumes the code is working correctly when the tests are added, so if there is incorrect logic in the code it won't be able to help you. On the other hand, if the original logic was correct, it will let you know if later changes break any of the existing functionality.

Diffblue’s Lodge acknowledged the problem, telling The Register: “The code might have bugs in it to begin with, and we can’t tell if the current logic that you have in the code is correct or not, because we don’t know what the intent is of the programmer, and there’s no good way today of being able to express intent in a way that a machine could understand.

“That is generally not the problem that most of our customers have. Most of our customers have very few unit tests, and what they typically do is have a set of tests that run functional end-to-end tests that run at the end of the process.”

Lodge’s argument is that if you start with a working application, then let Cover write tests, you have a code base that becomes amenable to high velocity delivery. “Our customers don’t have any unit tests at all, or they have maybe 5 to 10 per cent coverage. Their issue is not that they can’t test their software: they can. They can run end-to-end tests that run right before they cut a release. What they don’t have are unit tests that enable them to run a CI/CD pipeline and be able to ship software every day, so typically our customers are people who can ship software twice a year.”

The software is currently only compatible with Java & IntelliJ but work is ongoing to incorporate other coding languages & IDEs.

Thanks to Theregister.com for the link to the initial story.

– Suramya

September 16, 2020

Potential signs of life found on Venus: Are we no longer alone in the universe?

Filed under: Interesting Sites,My Thoughts,News/Articles — Suramya @ 11:15 AM

If you have been watching the astronomy chatter over the past two days, you will have seen headlines screaming about the possibility of life being found on Venus, while less reputable sources claim that we have found definite proof of alien life. Both are inaccurate: we have found something that is easily explained by assuming the presence of extra-terrestrial life, but there are other potential explanations that could cause the anomaly. So what is this discovery that is causing people worldwide to freak out?

During analysis of spectrometer readings of Venus, scientists made a startling discovery high in its atmosphere: traces of phosphine (PH3) gas, at a concentration (~20 parts per billion) that is hard to explain, in an atmosphere where any phosphorus should be in oxidized forms. It is unlikely that the gas is produced by abiotic production routes in Venus's atmosphere, clouds, surface and subsurface, or by lightning, volcanic activity or meteoritic delivery (see the explanation below), hence the worldwide freak-out. Basically, the only way we currently know of to produce this gas in the quantity measured is anaerobic life (microbial organisms that don't require or use oxygen). Obviously, this doesn't mean there aren't ways we haven't thought of yet that could be generating the gas, but the discovery is causing a big stir and will push various space programs to refocus their efforts on Venus. India's ISRO already has a mission planned to study the surface and atmosphere of Venus, called 'Shukrayaan-1', set to launch in the late 2020s after the Mars Orbiter Mission 2, and you can be sure they will be attempting to validate these findings when the mission gets there.

The only way to conclusively prove life exists on Venus would be to go there and collect samples containing extra-terrestrial microbes. Since it’s impossible to prove a negative this will be the only concrete proof that we can trust. Anything else will still leave the door open for other potential explanations for the gas generation.

Here’s a link to the press briefing on the possible Venus biosignature announcement from @RoyalAstroSoc featuring comment from several of the scientists involved.

The recent candidate detection of ppb amounts of phosphine in the atmosphere of Venus is a highly unexpected discovery. Millimetre-waveband spectra of Venus from both ALMA and the JCMT telescopes at 266.9445 GHz show a PH3 absorption-line profile against the thermal background from deeper, hotter layers of the atmosphere indicating ~20 ppb abundance. Uncertainties arise primarily from uncertainties in pressure-broadening coefficients and noise in the JCMT signal. Throughout this paper we will describe the predicted abundance as ~20 ppb unless otherwise stated. The thermal emission has a peak emission at 56 km with the FWHM spans approximately 53 to 61 km (Greaves et al. 2020). Phosphine is therefore present above ~55 km: whether it is present below this altitude is not determined by these observations. The upper limit on phosphine occurrence is not defined by the observations, but is set by the half-life of phosphine at <80 km, as discussed below.

Phosphine is a reduced, reactive gaseous phosphorus species, which is not expected to be present in the oxidized, hydrogen-poor Venusian atmosphere, surface, or interior. Phosphine is detected in the atmospheres of three other solar system planets: Jupiter, Saturn, and Earth. Phosphine is present in the giant planet atmospheres of Jupiter and Saturn, as identified by ground-based telescope observations at submillimeter and infrared wavelengths (Bregman et al. 1975; Larson et al. 1977; Tarrago et al. 1992; Weisstein and Serabyn 1996). In giant planets, PH3 is expected to contain the entirety of the atmospheres’ phosphorus in the deep atmosphere layers (Visscher et al. 2006), where the pressure, temperature and the concentration of H2 are sufficiently high for PH3 formation to be thermodynamically favored. In the upper atmosphere, phosphine is present at concentrations several orders of magnitude higher than predicted by thermodynamic equilibrium (Fletcher et al. 2009). Phosphine in the upper layers is dredged up by convection after its formation deeper in the atmosphere, at depths greater than 600 km (Noll and Marley 1997).

An analogous process of forming phosphine under high H2 pressure and high temperature followed by dredge-up to the observable atmosphere cannot happen on worlds like Venus or Earth for two reasons. First, hydrogen is a trace species in rocky planet atmospheres, so the formation of phosphine is not favored as it is in the deep atmospheres of the H2-dominated giant planets. On Earth H2 reaches 0.55 ppm levels (Novelli et al. 1999), on Venus it is much lower at ~4 ppb (Gruchola et al. 2019; Krasnopolsky 2010). Second, rocky planet atmospheres do not extend to a depth where, even if their atmosphere were composed primarily of hydrogen, phosphine formation would be favored (the possibility that phosphine can be formed below the surface and then being erupted out of volcanoes is addressed separately in Section 3.2.2 and Section 3.2.3, but is also highly unlikely).

Despite such unfavorable conditions for phosphine production, Earth is known to have PH3 in its atmosphere at ppq to ppt levels (see e.g. (Gassmann et al. 1996; Glindemann et al. 2003; Pasek et al. 2014) and reviewed in (Sousa-Silva et al. 2020)) PH3’s persistence in the Earth atmosphere is a result of the presence of microbial life on the Earth’s surface (as discussed in Section 1.1.2 below), and of human industrial activity. Neither the deep formation of phosphine and subsequent dredging to the surface nor its biological synthesis has hitherto been considered a plausible process to occur on Venus.

More details of the finding are explained in the following two papers published by the scientists:

Whatever the reason for the gas may be, it's a great finding, as it has re-energized the search for extra-terrestrial life, and as we all know: “The Truth is out there…”.

– Suramya

September 12, 2020

Post-Quantum Cryptography

Filed under: Computer Related,Quantum Computing,Tech Related — Suramya @ 11:29 AM

As you may be aware, one of the big promises of quantum computers is the ability to break existing encryption algorithms in a realistic time frame. If you are not aware of this, here's a quick primer on computer security/cryptography. The current security of cryptography relies on certain "hard" problems: calculations which are practically impossible to solve without the correct cryptographic key. For example, it is trivial to multiply two numbers together: 593 times 829 is 491,597. But it is hard to start with 491,597 and work out which two prime numbers must be multiplied to produce it, and it becomes increasingly difficult as the numbers get larger. Such hard problems form the basis of algorithms like RSA, which would take the best computers available billions of years to break, and all current IT security is built on top of this foundation.
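
The asymmetry is easy to see in a few lines of code: multiplying is instant, while recovering the factors by the obvious method means trying divisors one by one (real attacks use far smarter algorithms, but the gap in difficulty is the point):

def factor(n):
    """Naive trial division: return a factor pair of n (n, 1 if n is prime)."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1

print(593 * 829)       # multiplying is trivial: 491597
print(factor(491597))  # undoing it means searching: (593, 829)
# Instant for a toy number; hopeless by any known classical method for the
# 600+ digit moduli used in 2048-bit RSA.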

Quantum computers use "qubits". A qubit can exist in a superposition of the 0 and 1 states, so n qubits can represent 2^n states at once, which is what makes massively parallel computation theoretically possible. A quantum computer with enough qubits could therefore break traditional encryption in a reasonable time frame: one theoretical projection postulated that a quantum computer could break 2048-bit RSA encryption in about 8 hours. Which, as you can imagine, is a pretty big deal. But there is no need to panic, as this is still only theoretically possible as of now.
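
One way to get a feel for why qubits scale so differently is to count what a classical machine would need just to store the full quantum state: n qubits are described by 2^n complex amplitudes. A quick sketch of that growth (assuming 16 bytes per amplitude):

# Memory needed for a classical computer to hold the full state of n qubits,
# assuming one complex amplitude = 16 bytes (two 64-bit floats).
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:>3} qubits -> {amplitudes:.3e} amplitudes, "
          f"{amplitudes * 16 / 2**30:.3e} GiB")
# 10 qubits fit in kilobytes, 50 qubits already need ~16 petabytes,
# and 300 qubits exceed the number of atoms in the observable universe.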

However, this is coming down the line, so the world's foremost cryptographic experts have been working on quantum-safe encryption, and for the past three years the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and data protection. Out of the 69 submissions received three years ago, the group narrowed the field down to 15 finalists after two rounds of reviews. NIST has now begun the third round of public review of the algorithms to help decide the core of the first post-quantum cryptography standard.

They expect to end the round with one or two algorithms for encryption and key establishment, and one or two for digital signatures. To make the process more manageable, they have divided the finalists into two tracks: the first contains the seven most promising algorithms, which have a high probability of being suitable for wide application after the round finishes; the second contains the remaining eight algorithms, which need more time to mature or are tailored to specific applications.

The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

You should check out this talk by Daniel Apon of NIST detailing the selection criteria used to classify the finalists and the full paper with technical details is available here.

Source: Schneier on Security: More on NIST’s Post-Quantum Cryptography

– Suramya

September 7, 2020

Govt mulls mandating EV charging kiosks at all 69,000 petrol pumps in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 12:36 PM

The Indian government is making an extensive push to promote renewable energy, and the increased push for electric vehicles (EVs) is part of that effort. Earlier this month I talked about how they are trying to make EVs cheaper by allowing consumers to purchase them without a battery. Now they are looking at mandating the installation of EV charging kiosks at all of the roughly 69,000 petrol pumps in India. This move would address one of the biggest concerns (after cost) of operating an EV: how and where to charge it during travel.

We had a similar problem when CNG (Compressed Natural Gas) was mandated for all autos and buses (at least in Delhi). There was a lot of resistance to the move because there were only 2-3 CNG fuel pumps in Delhi at the time; then many new pumps were built and existing pumps added a CNG option, which made CNG an attractive and feasible solution. I am hoping the same will happen with EV charging points once the new rule is implemented.

In a review meeting on EV charging infrastructure, Power Minister R K Singh suggested oil ministry top officials that “they may issue an order for their oil marketing companies (OMCs) under their administrative control for setting up charging kiosks at all COCO petrol pumps”, a source said.

Other franchisee petrol pump operators may also be advised to have at least one charging kiosk at their fuel stations, the source said adding this will help achieve “EV charging facility at all petrol pumps in the country”.

Under the new guidelines of the oil ministry, new petrol pumps must have an option of one alternative fuel.

“Most of the new petrol pumps are opting for electric vehicle charging facility under alternative fuel option. But it will make huge difference when the existing petrol pumps would also install EV charging kiosks,” the source said.

Source: Hindustan Times

– Suramya

September 3, 2020

Electric Vehicles can now be sold without Batteries in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 11:51 PM

One of the biggest constraints on buying an electric vehicle (EV) is cost: even with all the subsidies, the cost of an EV is fairly high, and up to 40% of that cost is the battery. In a move to reduce the cost of EVs in India, the Indian government is now allowing dealers to sell EVs without batteries; the customer then has the option to retrofit an electric battery as per their requirements.

When I first read the news I thought they were kidding, wondering what use an electric car is without a battery. Then I thought about it a bit more and realized you could think of it as a dealer not selling a car with a pre-filled fuel tank. We normally get a liter or two of petrol/diesel in the car when we buy it and then top it up with fuel later. Now think of doing something similar with an EV: you get a small battery pack with the car by default (enough to let you drive for a few kilometers) and you have the option to replace it with the battery pack of your choice. This lets a person budget their expense by choosing to buy a low-capacity battery initially if they are not planning on driving outside the city, and then upgrading later to a pack with more capacity.

However, some EV manufacturers are concerned about the safety aspects of retrofitting batteries and about possible warranty-related confusion. They also have questions about how the subsidies under the Centre's EV adoption policy would be determined for vehicles sold without batteries. Basically, they feel they should have been consulted in more detail before this major change was announced, so as to avoid confusion after the launch.

The policy was announced in mid-August, and I think only time will tell how well it works in the market.

More Details on the change: Sale of EVs without batteries: Ather, Hero Electric, etc. laud policy but Mahindra has doubts

– Suramya

September 1, 2020

Background radiation causes Integrity issues in Quantum Computers

Filed under: Computer Related,My Thoughts,Quantum Computing,Tech Related — Suramya @ 11:16 PM

As if Quantum Computing didn’t have enough issues preventing it from being a workable solution already, new research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers. The research has been published in Nature: Impact of ionizing radiation on superconducting qubit coherence.

Quantum computers are so powerful because their basic building block, the qubit (quantum bit), can exist in a superposition of 0 and 1 (yes, quantum mechanics is deeply counter-intuitive; this is the field Einstein was describing when he dismissed entanglement as 'spooky action at a distance'), allowing them to process many more operations in parallel than regular computing systems. Unfortunately, it appears that these qubits are highly sensitive to their environment, and even the minor levels of radiation emitted by trace elements in concrete walls and by cosmic rays can cause them to lose coherence, corrupting the calculation and data; this is called decoherence. The longer we can stave off decoherence, the more powerful and capable the quantum computer. We have made significant improvements here over the past two decades, from maintaining coherence for less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.

As per the study, the effect is serious enough to limit coherence times to just a few milliseconds, a level we are expected to reach within the next few years. The only currently known way to avoid the issue is to shield the computer, which means putting these machines underground and surrounding them with a two-ton wall of lead. Another possibility is to use something like a counter-wave of radiation to cancel the incoming radiation, similar to noise-canceling headphones, but that is something which doesn't exist today and would require a significant technological breakthrough to become feasible.

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

Quantum Computing is a fascinating field but it really messes with your mind. So I am happy there are folks out there spending time trying to figure out how to get this amazing invention working and reliable enough to replace our existing Bit based computers.

Source: Cosmic rays can destabilize quantum computers, MIT study warns

– Suramya
