Suramya's Blog : Welcome to my crazy life…

September 13, 2020

Convert Waste Heat From Devices Like Refrigerators Into Electricity

Filed under: Emerging Tech — Suramya @ 11:57 PM

All the electric devices we use continuously dump waste heat into their surroundings; how much is discarded as heat depends on how efficient the device is, but no matter how efficient it is, some energy is always lost as heat. We have known for years how to convert heat into electricity (that’s how power plants work), but doing so requires a large amount of heat, and the waste heat generated by our devices is at too low a temperature to convert to electricity in a cost-effective or efficient manner.
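The thermodynamics behind this is worth a quick sketch: the Carnot limit caps how much of any heat flow can ever become electricity, and for small temperature differences that cap is brutal. The temperatures below are my own illustrative assumptions, not figures from the study:

```python
# Carnot efficiency: the theoretical upper bound on the fraction of
# heat convertible to work between a hot and a cold reservoir.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible heat-engine efficiency (dimensionless)."""
    return 1.0 - t_cold_k / t_hot_k

# A power-plant steam turbine: roughly 800 K steam vs 300 K ambient.
plant = carnot_efficiency(800.0, 300.0)   # 0.625, a 62.5% ceiling

# Waste heat from a refrigerator coil: roughly 320 K vs 295 K ambient.
fridge = carnot_efficiency(320.0, 295.0)  # ~0.078, under an 8% ceiling

print(f"power plant ceiling:       {plant:.1%}")
print(f"fridge waste-heat ceiling: {fridge:.1%}")
```

Real devices only reach a fraction of their Carnot ceiling, which is why low-grade heat harvesting has historically not been worth the hardware cost.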

There are specialized semiconductors called thermoelectric materials that generate electricity when one side of the material is hotter than the other. Unfortunately, for them to work well, the temperature difference between the two sides needs to be on the order of hundreds of degrees, making them useless for converting low-grade heat to electricity. To solve this problem, materials physicist Jun Zhou and colleagues at the Huazhong University of Science and Technology have come up with thermocells that use a liquid instead of a solid in the space between the two sides. The liquid conducts charge from the hot side to the cold side by moving charged molecules, or ions, instead of electrons. Unfortunately, this also transfers heat from one side to the other, eroding the cell’s efficiency over time. To fix that, the team spiked the liquid with a positively charged organic compound called guanidinium, which reduces the thermal conductivity of the solution and makes the cell over five times more efficient than previous versions.

Zhou and colleagues started with a small thermocell: a domino-sized chamber with electrodes on the top and bottom. The bottom electrode sat on a hot plate and the top electrode abutted a cooler, maintaining a 50°C temperature difference between the two electrodes. They then filled the chamber with an ionically charged liquid called ferricyanide.

Past research has shown that ferricyanide ions next to a hot electrode spontaneously give up an electron, changing from an ion with a –4 charge, or Fe(CN)6–4, to one with a –3 charge, or Fe(CN)6–3. The electrons then travel through an external circuit to the cold electrode, powering small devices on the way. Once they reach the cold electrode, the electrons combine with Fe(CN)6–3 ions that diffused up from below. This regenerates Fe(CN)6–4 ions, which then diffuse back down to the hot electrode and repeat the cycle.

To reduce the heat carried by these moving ions, Zhou and his colleagues spiked their ferricyanide with a positively charged organic compound called guanidinium. At the cold electrode, guanidinium causes the cold Fe(CN)6–4 ions to crystallize into tiny solid particles. Because solid particles have lower thermal conductivity than liquids, they block some of the heat traveling from the hot to the cold electrode. Gravity then pulls these crystals to the hot electrode, where the extra heat turns the crystals back into a liquid. “This is very clever,” Liu says, as the solid particles helped maintain the temperature gradient between the two electrodes.

If we can make this more efficient, get a similar energy output, and reduce the cost of the cell by using more inexpensive materials, then we can imagine a world where we power devices using the ambient heat around us. It would also let us make engines, motors, gadgets, and the like more efficient by reducing their net energy requirements.

The study was published this week in Science: Thermosensitive crystallization–boosted liquid thermocells for low-grade heat harvesting

– Suramya

September 12, 2020

Post-Quantum Cryptography

Filed under: Computer Related,Quantum Computing,Techie Stuff — Suramya @ 11:29 AM

As you may be aware, one of the big promises of Quantum Computers is the ability to break existing encryption algorithms in a realistic time frame. If you are not aware of this, here’s a quick primer on computer security/cryptography. The current security of cryptography relies on certain “hard” problems: calculations which are practically impossible to solve without the correct cryptographic key. For example, it is trivial to multiply two numbers together: 593 times 829 is 491,597. But it is hard to start with the number 491,597 and work out which two prime numbers must be multiplied to produce it, and it becomes increasingly difficult as the numbers get larger. Such hard problems form the basis of algorithms like RSA, which would take the best computers available billions of years to break, and all current IT security is built on top of this basic foundation.
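You can see the asymmetry even at this toy scale. Multiplying is one machine instruction; recovering the factors means searching. A minimal sketch using trial division (real attacks on RSA-sized numbers use far more sophisticated algorithms, and even those are infeasible classically):

```python
# Multiplying two primes is instant; recovering them takes search.
# Trial division works at toy scale but scales hopelessly: RSA
# moduli use primes hundreds of digits long.
def factor(n: int) -> tuple[int, int]:
    """Return the smallest prime factor of n and its cofactor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

assert 593 * 829 == 491_597   # the easy direction
print(factor(491_597))        # the hard direction: (593, 829)
```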

Quantum Computers use “qubits”, where a single qubit can exist in a superposition of the states 0 and 1 rather than being limited to one or the other, making it possible to perform massively parallel computations. This makes it theoretically possible for a Quantum computer with enough qubits to break traditional encryption in a reasonable time frame. One theoretical projection postulated that a Quantum Computer could break 2048-bit RSA encryption in ~8 hours, which as you can imagine is a pretty big deal. But there is no need to panic, as this is still only theoretically possible as of now.
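For the curious, a single qubit can be sketched as two amplitudes whose squared magnitudes give the measurement probabilities (the Born rule). This is textbook quantum mechanics, not anything specific to the projection above, and the equal 50/50 superposition below is just the standard post-Hadamard state:

```python
import random

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measuring it yields 0 with probability |a|^2, else 1, and destroys
# the superposition. An equal superposition (e.g. after a Hadamard):
a, b = 1 / 2 ** 0.5, 1 / 2 ** 0.5

def measure(amp0: complex, amp1: complex) -> int:
    """Collapse the qubit: return 0 or 1 per the Born rule."""
    return 0 if random.random() < abs(amp0) ** 2 else 1

random.seed(42)  # reproducible sampling
counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)  # roughly [5000, 5000]
```

The quantum speedup comes from interference between the 2^n amplitudes of an n-qubit register, not from reading them all out, which is why only specific algorithms like Shor’s benefit.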

However, this is something that is coming down the line, so the world’s foremost cryptographic experts have been working on quantum-safe encryption, and for the past 3 years the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and data protection. Out of the 69 submissions received three years ago, the group narrowed the field down to 15 finalists after two rounds of reviews. NIST has now begun the third round of public review of the algorithms to help decide the core of the first post-quantum cryptography standard.

They expect to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures. To make the process more manageable, they have divided the finalists into two tracks: the first contains the seven most promising algorithms, which have a high probability of being suitable for wide application after the round finishes; the second has the remaining eight algorithms, which either need more time to mature or are tailored to a specific application.

The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

You should check out this talk by Daniel Apon of NIST detailing the selection criteria used to classify the finalists and the full paper with technical details is available here.

Source: Schneier on Security: More on NIST’s Post-Quantum Cryptography

– Suramya

September 9, 2020

Augmented Reality Geology

Filed under: Computer Software,Emerging Tech,Interesting Sites — Suramya @ 10:17 PM

A lot of the time when you look at Augmented Reality (AR), it seems like a solution looking for a problem. We still haven’t found the killer app for AR, the way VisiCalc was the killer app for the Apple II and Lotus 1-2-3 & Excel were for the IBM PC. There are various initiatives underway, but no one has hit the jackpot yet. There are applications that let a doctor see a reference text or diagram in a heads-up display while operating, which is very useful, but that’s a niche market. We need something broader in scope, and a lot of effort is focused on the educational field, where people are trying to see if they can use augmented reality in classrooms.

One implementation that sounds very cool is an app I found recently that uses AR to project a view of rocks, minerals, and other geological features for geology students. Traditionally, students are taught by showing them actual physical samples of the minerals and 2D images of larger-scale items like meteor craters or strata. The traditional way has its own problems of storage and portability, but with AR you can look at a meteor crater in 3D, and the teacher can visually walk you through how it looks and what geological stresses formed around it. The same is possible for minerals and crystals, along with other things.

There’s a new app called GeoXplorer, available on both Android and iOS, that lets you do exactly this. It was created by the Fossett Laboratory for Virtual Planetary Exploration to help students understand the complex, three-dimensional nature of geologic structures without having to travel all over the world. The app already has a lot of models built in, with more on the way; thanks to interest from other fields, they are looking at adding models of proteins, art, and archeology as well.

“You want to represent that data, not in a projective way like you would do on a screen on a textbook, but actually in a three-dimensional way,” Pratt said. “So you can actually look around it [and] manipulate it exactly how you would do in real life. The thing with augmented reality that we found most attractive [compared to virtual reality] is that it provides a much more intuitive teacher-student setting. You’re not hidden behind avatars. You can use body-language cues [like] eye contact to direct people to where you want to go.”

Working with the Unity game engine, Pratt has since put together a flexible app called GeoXplorer (for iOS and Android) for displaying other models. There is already a large collection of crystalline structure models for different minerals, allowing you to see how all the atoms are arranged. There are also a number of different types of rocks, so you can see what those minerals look like in the macro world. Stepping up again in scale, there are entire rock outcrops, allowing for a genuine geology field-trip experience in your living room. Even bigger, there are terrain maps for landscapes on Earth, as well as on the Moon and Mars.

It’s still a work in progress, but I think it’s going to be something really cool and might be quite a big thing coming soon to classrooms around the world. The one major constraint I can see right now is that you have to use your phone as the AR gateway, which makes it a bit cumbersome. Something like a Microsoft HoloLens or other augmented-reality goggles would make it much easier and more natural to use, but the cost of those headsets is a big problem. Keeping that in mind, it’s easy to understand why they went with the phone as the AR gateway instead of a HoloLens or something similar.

From Martian terrain samples collected by NASA’s Mars Reconnaissance Orbiter to Devil’s Tower in Wyoming to rare hand samples too delicate to handle, the team is constantly expanding the catalog of 3D models available through GeoXplorer and if you have a model you’d like to see added to the app please get in contact with the Fossett Lab at fossett.lab@wustl.edu.

– Suramya

September 7, 2020

Govt mulls mandating EV charging kiosks at all 69,000 petrol pumps in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 12:36 PM

The Indian Government is making an extensive push to promote renewable energy, and the increased push for Electric Vehicles is part of that effort. Earlier this month I talked about how they are trying to make EVs cheaper by allowing consumers to purchase them without a battery. Now they are looking at mandating the installation of EV charging kiosks at all petrol pumps in India (~69,000). This move would resolve one of the biggest concerns (after cost) of operating an EV: namely, how and where to charge it during travel.

We had a similar problem when CNG (Compressed Natural Gas) was mandated for all autos & buses (at least in Delhi). There was a lot of resistance to the move because there were only 2-3 CNG fuel pumps in Delhi at the time; then a lot of new pumps were built and existing pumps added a CNG option, which made CNG an attractive & feasible solution. I am hoping the same will be the case with EV charging points once the new rule is implemented.

In a review meeting on EV charging infrastructure, Power Minister R K Singh suggested oil ministry top officials that “they may issue an order for their oil marketing companies (OMCs) under their administrative control for setting up charging kiosks at all COCO petrol pumps”, a source said.

Other franchisee petrol pump operators may also be advised to have at least one charging kiosk at their fuel stations, the source said adding this will help achieve “EV charging facility at all petrol pumps in the country”.

Under the new guidelines of the oil ministry, new petrol pumps must have an option of one alternative fuel.

“Most of the new petrol pumps are opting for electric vehicle charging facility under alternative fuel option. But it will make huge difference when the existing petrol pumps would also install EV charging kiosks,” the source said.

Source: Hindustan Times

– Suramya

September 3, 2020

Electric Vehicles can now be sold without Batteries in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 11:51 PM

One of the biggest constraints on buying an Electric Vehicle (EV) is cost: even with all the subsidies, the cost of an EV is fairly high, and up to 40% of it is the cost of the batteries. In a move to reduce the cost of EVs in India, the Indian government is now allowing dealers to sell EVs without batteries; the customer will then have the option to retrofit an electric battery as per their requirements.

When I first read the news I thought they were kidding: what use is an electric car without a battery? Then I thought about it a bit more and realized you could think of it as a dealer not selling a car with a pre-filled fuel tank. We normally get a liter or two of petrol/diesel in the car when we buy it and then top it up with fuel later. Now imagine something similar with the EV: you get a small battery pack with the car by default (enough to let you drive for a few kilometers) and you have the option to replace it with the battery pack of your choice. This would allow people to budget their expenses by choosing a low-capacity battery initially if they are not planning on driving outside the city, and upgrading later to a pack with more capacity.

However, some EV manufacturers are concerned about the safety aspects of retrofitting batteries and the possibility of warranty-related confusion. They also have questions about how the subsidies under the Centre’s EV adoption policy would be determined for vehicles without batteries. Basically, they feel they should have been consulted in more detail before this major change was announced, so as to avoid confusion after the launch.

The policy was announced in mid August, and I think only time will tell how well it works in the market.

More Details on the change: Sale of EVs without batteries: Ather, Hero Electric, etc. laud policy but Mahindra has doubts

– Suramya

September 1, 2020

Background radiation causes Integrity issues in Quantum Computers

Filed under: Computer Related,My Thoughts,Quantum Computing,Techie Stuff — Suramya @ 11:16 PM

As if Quantum Computing didn’t have enough issues preventing it from being a workable solution already, new research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers. The research has been published in Nature: Impact of ionizing radiation on superconducting qubit coherence.

Quantum computers are super powerful because their basic building block, the qubit (quantum bit), can simultaneously exist as 0 and 1 in superposition (a deeply counterintuitive property; it was the related phenomenon of entanglement that Einstein famously derided as ‘spooky action at a distance’), allowing them to process many more operations in parallel than regular computing systems. Unfortunately, it appears these qubits are highly sensitive to their environment: even the minor levels of radiation emitted by trace elements in concrete walls and by cosmic rays can cause them to lose coherence, corrupting the calculation/data; this is called decoherence. The longer we can stave off decoherence, the more powerful and capable the quantum computer. We have made significant improvements here over the past two decades, from maintaining coherence for less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.
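To get a feel for why coherence time matters, you can turn it into a rough gate budget: how many operations fit inside the coherence window before the state degrades. The 50 ns gate time below is my own illustrative assumption (typical ballpark for superconducting two-qubit gates), not a figure from the MIT study:

```python
# Rough gate budget: operations that fit inside the coherence window.
def gate_budget(coherence_s: float, gate_time_s: float) -> int:
    """Approximate number of sequential gates before decoherence."""
    return round(coherence_s / gate_time_s)

GATE_TIME = 50e-9  # assumed ~50 ns per two-qubit gate (illustrative)

print(gate_budget(1e-9, GATE_TIME))    # 1999-era, <1 ns coherence: 0 gates
print(gate_budget(200e-6, GATE_TIME))  # today, ~200 us: 4000 gates
print(gate_budget(4e-3, GATE_TIME))    # a few-ms radiation ceiling: 80000
```

Algorithms like Shor’s need vastly more sequential operations than this, which is why error correction, and hence millions of physical qubits, is unavoidable.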

As per the study, the effect is serious enough to limit coherence times to just a few milliseconds, a level we are expected to reach in the next few years. The only currently known way to avoid the issue is to shield the computer, which means putting these machines underground and surrounding them with a two-ton wall of lead. Another possibility is to use something like a counter-wave of radiation to cancel the incoming radiation, similar to how noise canceling works, but that doesn’t exist today and would require a significant technological breakthrough before it is feasible.

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

Quantum Computing is a fascinating field but it really messes with your mind. So I am happy there are folks out there spending time trying to figure out how to get this amazing invention working and reliable enough to replace our existing Bit based computers.

Source: Cosmic rays can destabilize quantum computers, MIT study warns

– Suramya

August 25, 2020

Using Bioacoustic signatures for Identification & Authentication

We have all heard about biometric scanners that identify people using their fingerprints, iris scans, or even the shape of their ear. Then we have lower-accuracy authentication systems like face recognition, voice recognition, etc. Individually they might not be 100% accurate, but combine two or more and we can create systems that are harder to fool. This is not to say these systems are foolproof, because there are ways around each of the examples above: our photos are everywhere, and given a picture of high enough quality it is possible to create a replica of a face, an iris, or even fingerprints.

Due to the above-mentioned shortcomings, scientists are always on the lookout for more ways to authenticate and identify people. Researchers from South Korea have found that the signature created when sound waves pass through a human body is unique enough to identify individuals. Their work, described in a study published on 4 October in the IEEE Transactions on Cybernetics, suggests the technique can identify a person with 97 percent accuracy.

“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”

[…]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”

Interestingly, while the setup is not as accurate as fingerprint or iris scans, it is still accurate enough to differentiate between two fingers of the same hand. If the waves required to generate the bioacoustic signatures are validated to be safe for humans over long-term use, we may soon see broader deployment of this technology in places like airports, buses, and public areas to identify people automatically, without their having to do anything. If it can be made portable, it could also be used to monitor protests, rallies, and the like, which would make it a privacy risk.

The problem with this tech is that it would be hard to fool without taking steps that make you stand out, like wearing a vest filled with liquid that changes your acoustic signature. That is great when we are talking about authentication and identification for access control, but it becomes a nightmare when we consider the surveillance aspect.

Source: The Bioacoustic Signatures of Our Bodies Can Reveal Our Identities

– Suramya

August 21, 2020

Emotion detection software for Pets using AI and some thoughts around it (and AI in general)

Filed under: Computer Software,Emerging Tech,Humor,My Thoughts,Techie Stuff — Suramya @ 5:32 PM

Pet owners are a special breed of people: they willingly take responsibility for another life and take care of it. I personally like pets, as long as I can return them to the owner at the end of the day (or hour, depending on how annoying the pet is). I had to take care of a puppy for a week when Surabhi & Vinit were out of town, and that experience was more than enough to confirm my belief in this matter. Others, however, feel differently and spend quite a lot of time and effort talking to their pets, and some of them even pretend the dog is talking back.

Now, leveraging the power of AI, there is a new app that analyses & interprets the facial expressions of your pet. Folks over at the University of Melbourne built a Convolutional Neural Network-based application called Happy Pets that you can download from the Android or Apple app stores to try on your pet. They claim to be able to identify the emotion the pet was feeling when the photo was taken.

While the science behind it is cool, and a lot of pet owners who tried the application over at Hacker News seem to like it, I feel it’s a bit frivolous and silly. Plus, it’s hard enough to classify emotions in humans reliably using AI, so I would take the claims with a pinch of salt. The researchers themselves have not given any numbers for the accuracy of the model.

When I first saw the post about the app, it reminded me of another article I read a few days ago which postulated that ‘Too many AI researchers think real-world problems are not relevant’. At first I thought the author was trolling AI developers, but after reading the article I kind of agree. AI has massive potential to advance our understanding of health, agriculture, scientific discovery, and more. However, looking at the feedback AI papers have been getting, it appears that AI researchers are allergic to practical applications (or, in some cases, useful applications). For example, below is a review received on a paper submitted to the NeurIPS (Neural Information Processing Systems) conference:

“The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

If I read this correctly, they are basically saying that because this AI paper targets a particular application, it is not interesting enough for the ML community. There is a similar bias in the theoretical physics and mathematics world, where academics who talk about implementing the concepts and theories are looked down upon by the ‘purists’. I personally believe that while the theoretical sciences are all well & good, and we do need people working on them to expand our understanding, at the end of the day if we are not applying these learnings and theorems practically, they are of no use. There will be cases where we don’t have the know-how to implement or apply the learnings, but we should not let that stand in the way of practical applications for the things we can implement and use.

To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

What do you think? Do you agree/disagree?

Source: HackerNews

– Suramya

August 20, 2020

Transparent Solar Panels hit Record 8.1% Efficiency

Filed under: Emerging Tech,My Thoughts,Techie Stuff — Suramya @ 5:24 PM

Solar panels for electricity are awesome; however, the major issue with deploying them is that you need a lot of space. Although rising panel efficiency has been reducing the space requirements, it still takes a non-trivial area to generate enough power for a house. Portable solar panels are all well and good, but they are slow to charge things and expensive. I tried to figure out a way to power my apartment via solar power, but it wasn’t possible without setting up panels on the apartment roof.

This is where transparent solar panels come into play: they let you replace your existing windows with solar panels, which makes them ideal for apartments & office buildings, because you don’t need extra space for the panels, you just replace the windows. Transparent solar panels were first created in 2014 by researchers at Michigan State University (MSU); however, their efficiency was quite low compared to traditional panels, making them less productive and more expensive, so since then the race has been on to make them more efficient.

The researchers from the University of Michigan have made a significant breakthrough in the manufacturing of color-neutral, transparent solar cells, achieving 8.1% efficiency by using a carbon-based design rather than conventional silicon. The resulting cells do have a slight greenish tinge, like sunglasses, but for the most part appear usable as a window pane. More details are available in their release here.

The new material is a combination of organic molecules engineered to be transparent in the visible and absorbing in the near infrared, an invisible part of the spectrum that accounts for much of the energy in sunlight. In addition, the researchers developed optical coatings to boost both power generated from infrared light and transparency in the visible range—two qualities that are usually in competition with one another.

The color-neutral version of the device was made with an indium tin oxide electrode. A silver electrode improved the efficiency to 10.8%, with 45.8% transparency. However, that version’s slightly greenish tint may not be acceptable in some window applications.

Transparent solar cells are measured by their light utilization efficiency, which describes how much energy from the light hitting the window is available either as electricity or as transmitted light on the interior side. Previous transparent solar cells have light utilization efficiencies of roughly 2-3%, but the indium tin oxide cell is rated at 3.5% and the silver version has a light utilization efficiency of 5%.
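The quoted figures are consistent with treating light utilization efficiency as the product of power-conversion efficiency and transparency. That product form is my reading of the numbers, not a formula stated in the article, but it checks out:

```python
# Light-utilization efficiency (LUE) rewards a window for both
# converting light to power (PCE) and letting light through (AVT);
# sketch assumes LUE = PCE * AVT.
def lue(pce: float, avt: float) -> float:
    """Combine power-conversion efficiency with visible transparency."""
    return pce * avt

# Silver-electrode cell from the article: 10.8% PCE, 45.8% transparency.
silver = lue(0.108, 0.458)
print(f"silver-electrode cell LUE: {silver:.1%}")  # ~4.9%, the quoted ~5%
```

This is why a fully opaque 20%-efficient rooftop panel and a clear 5%-LUE window are answering different questions: the window gets credit for the daylight it transmits.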

The researchers are still working on improving the efficiency further, and I am looking forward to new breakthroughs in the field. Hopefully we will soon have panels efficient enough that I can replace my apartment’s windows with solar panels and break even in a reasonable amount of time. 🙂

Source: Slashdot.org

– Suramya

October 15, 2019

Theoretical paper speculates breaking 2048-bit RSA in eight hours using a Quantum Computer with 20 million Qubits

Filed under: Computer Security,My Thoughts,Quantum Computing — Suramya @ 12:05 PM

If we manage to build a fully functional Quantum Computer with about 20 million qubits in the near future, then according to this theoretical paper we would be able to factor 2048-bit RSA moduli in approximately eight hours. The paper is quite interesting, although the math in it did give me a headache. However, this is all still purely theoretical, as we only have 50-60 qubit computers right now and are a long way from general-purpose quantum computers. That being said, I anticipate we will see this technology become available in our lifetime.

We significantly reduce the cost of factoring integers and computing discrete logarithms over finite fields on a quantum computer by combining techniques from Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of 10^-3, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048 bit RSA integers, our construction’s spacetime volume is a hundredfold less than comparable estimates from earlier works (Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ignores overheads from distillation, routing, and error correction) our construction uses 3n + 0.002n lg n logical qubits, 0.3n^3 + 0.0005n^3 lg n Toffolis, and 500n^2 + n^2 lg n measurement depth to factor n-bit RSA integers. We quantify the cryptographic implications of our work, both for RSA and for schemes based on the DLP in finite fields.
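Plugging n = 2048 into the abstract-circuit-model formulas quoted above gives a sense of scale. Note these counts ignore error-correction overhead, which is what inflates the requirement to the ~20 million physical qubits in the paper's title:

```python
from math import log2

# Resource estimates from the paper's abstract-circuit-model formulas,
# evaluated for a 2048-bit RSA modulus (lg = log base 2).
n = 2048
lg = log2(n)  # exactly 11.0

logical_qubits = 3 * n + 0.002 * n * lg        # 3n + 0.002 n lg n
toffolis = 0.3 * n**3 + 0.0005 * n**3 * lg     # 0.3 n^3 + 0.0005 n^3 lg n
depth = 500 * n**2 + n**2 * lg                 # 500 n^2 + n^2 lg n

print(f"logical qubits:    {logical_qubits:,.0f}")   # ~6,189
print(f"Toffoli gates:     {toffolis:.2e}")          # ~2.6e9
print(f"measurement depth: {depth:.2e}")             # ~2.1e9
```

So even in the idealized model, a few thousand perfect logical qubits must survive billions of sequential operations, which makes the coherence-time numbers from current hardware look very distant indeed.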

Bruce Schneier talks about how Quantum computing will affect cryptography in his essay Cryptography after the Aliens Land. In summary “Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run.”

Well, this is all for now; will post more later.

– Suramya
