Suramya's Blog : Welcome to my crazy life…

July 27, 2023

GPS Data Could potentially be used to Detect Large Earthquakes in advance

Filed under: Emerging Tech,My Thoughts — Suramya @ 10:31 PM

Earthquakes are extremely devastating, and because we have no way to predict them in advance they take a huge toll on lives. Existing earthquake-prediction systems are so fraught with false positives as to be nearly useless. That hasn't stopped people from trying, though, and in a new paper researchers Quentin Bletery and Jean-Mathieu Nocquet claim to have found a way to detect large earthquakes up to 2 hours in advance using GPS data.

They analyzed high-rate GPS time series recorded before 90 different earthquakes of magnitude 7 and above, looking for a precursor signal, and observed a subtle signal rising from the noise about 2 hours before these major earthquakes occurred. This looks extremely promising: if validated, it could change how we approach disaster management for earthquakes. However, the study still needs independent validation, and we don't yet know whether the precursor signal could ever be measured for individual events with the accuracy needed to provide a useful warning.

The existence of an observable precursory phase of slip on the fault before large earthquakes has been debated for decades. Although observations preceding several large earthquakes have been proposed as possible indicators of precursory slip, these observations do not directly precede earthquakes, are not seen before most events, and are also commonly observed without being followed by earthquakes. We conducted a global search for short-term precursory slip in GPS data. We summed the displacements measured by 3026 high-rate GPS time series—projected onto the directions expected from precursory slip at the hypocenter—during 48 hours before 90 (moment magnitude ≥7) earthquakes. Our approach reveals a ≈2-hour-long exponential acceleration of slip before the ruptures, suggesting that large earthquakes start with a precursory phase of slip, which improvements in measurement precision and density could more effectively detect and possibly monitor.
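The stacking approach the abstract describes can be sketched numerically. The toy model below uses entirely synthetic data (it is not the authors' code): each "series" gets white noise plus a small exponential displacement signal in the final two hours, each is weighted by an assumed projection onto the expected slip direction, and all series are summed. The point it demonstrates is that the coherent precursor grows linearly with the number of series while incoherent noise grows only as its square root, which is why a signal invisible in any single record can rise out of the stack.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series = 3026                     # high-rate GPS time series, as in the paper
n_epochs = 48 * 12                  # 48 h window at an assumed 5-minute sampling
t = np.linspace(-48, 0, n_epochs)   # hours before rupture

# Synthetic data: white noise per series, plus a tiny exponential
# acceleration of displacement in the final ~2 hours (the "precursor")
precursor = 0.02 * np.exp((t + 2.0) / 0.5) * (t > -2.0)
weights = rng.uniform(0.3, 1.0, (n_series, 1))   # assumed projection factors
series = rng.normal(0.0, 1.0, (n_series, n_epochs)) + weights * precursor

# Stacking: the coherent precursor grows like n, incoherent noise like sqrt(n)
stack = series.sum(axis=0)
noise_floor = np.sqrt(n_series)     # expected std of the stacked noise

print(stack[-1] / noise_floor)      # stacked signal-to-noise at the final epoch
```

With these made-up amplitudes the stacked signal-to-noise ratio at the final epoch comes out well above the noise floor, even though each individual series is dominated by noise.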

This is an area where machine learning might prove useful for extrapolation and prediction, but that said, we still need to validate and verify the approach before implementing or depending on it. The paper with their findings was published in Science (DOI: 10.1126/science.adg2565).

Source: Hacker News: Early Warning: GPS Data Could Detect Large Earthquakes Hours Before They Happen

– Suramya

July 26, 2023

New Double sided solar panels nearly double the power production per panel

Filed under: Emerging Tech,Science Related — Suramya @ 11:12 PM

Solar panels have come a long way in the last few decades, and their efficiency has been increasing consistently over that time as well. We have gone from an average efficiency of ~10% to more than 25% today. Now researchers from the US Department of Energy's National Renewable Energy Laboratory (NREL) have created a new double-sided (bifacial) solar panel that generates electricity from both sides, using light reflected onto the back of the panel to generate additional power. Even though the back side converts light at only ~90% of the front side's efficiency, adding the two together means a single panel can, in principle, approach double the output of a traditional panel (in practice the gain depends on how much light the ground reflects).

Bifacial photovoltaics (PV) harvest solar irradiance from both their front and rear surfaces, boosting energy conversion efficiency to maximize their electrical power production. For single-junction perovskite solar cells (PSCs), the performance of bifacial configurations is still far behind that of their state-of-the-art monofacial counterparts. Here, we report on highly efficient, bifacial, single-junction PSCs based on the p-i-n (or inverted) architecture. We used optical and electrical modeling to design a transparent conducting rear electrode for bifacial PSCs to enable optimized efficiency under a variety of albedo illumination conditions. The bifaciality of the PSCs was about 91%–93%. Under concurrent bifacial measurement conditions, we obtained equivalent, stabilized bifacial power output densities of 26.9, 28.5, and 30.1 mW/cm2 under albedos of 0.2, 0.3, and 0.5, respectively. We further showed that bifacial perovskite PV technology has the potential to outperform its monofacial counterparts with higher energy yields and lower levelized cost of energy (LCOE).
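As a rough back-of-the-envelope check on those numbers: the rear face sees roughly the front irradiance times the ground albedo, converted at the bifaciality factor. The `front` value below is an assumed illustrative figure (not from the paper), and the linear formula ignores the saturation real devices show at higher albedo.

```python
def bifacial_power(front_mw_per_cm2, bifaciality, albedo):
    """First-order estimate: the rear face sees roughly albedo * front
    irradiance, converted at `bifaciality` times the front efficiency."""
    return front_mw_per_cm2 * (1 + bifaciality * albedo)

front = 23.0   # assumed front-only output in mW/cm^2, illustrative only
for albedo in (0.2, 0.3, 0.5):
    print(albedo, round(bifacial_power(front, 0.92, albedo), 1))
```

This crude estimate gives about 27.2, 29.3, and 33.6 mW/cm² at albedos of 0.2, 0.3, and 0.5: in the same ballpark as the paper's 26.9, 28.5, and 30.1, and increasingly optimistic as albedo grows.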

This is a significant breakthrough; the research was published in the journal Joule under the title “Highly efficient bifacial single-junction perovskite solar cells”.

I love the fact that renewable energy is getting so much more of a push nowadays. I have been exploring putting solar panels up at my place, but since I am in an apartment I don't have many options that would make financial sense: the panels I could put up would barely supply enough power, making the whole thing not cost effective. My parents have installed solar at our house in Delhi, and my cousin has done the same at their farm, where most of their power consumption is covered by the solar setup.

– Suramya

June 28, 2023

Please stop shoving ChatGPT Integration into products that don’t need it

I am getting really tired of folks shoving ChatGPT integration into everything, whether it makes sense or not. The latest silliness is an electric bike with ChatGPT integration. I understand the desire to integrate GPS/maps etc. into a bike, although personally I would rather use an independent device that gets map updates more frequently than a built-in GPS, where the maps might only get updated a few times a year (unless the maps are downloaded live over 3G/4G/whatever). I even understand the desire to add voice recognition so the user can talk to it. But why on earth would I want or need ChatGPT shoved in there?

Given ChatGPT's well-known tendency to hallucinate, there is a good chance it might decide you should take an unsafe path, or even dump you into the ocean because it hallucinated that that was the way to go. This is the same thing we saw with blockchain a few years ago: everything was suddenly on the blockchain whether it needed to be or not. The sad part is that these folks are going to make a ton of money off the ChatGPT hype and then bail, leaving consumers with a sub-par bike that hallucinates.

Source: Urtopia Unveils the World’s First Smart E-Bike with ChatGPT Integration at EUROBIKE 2023

– Suramya

June 9, 2023

Sound based Quantum Computers are now closer to reality due to breakthrough research

We all know about the ongoing efforts to build quantum computers by encoding information into quantum states of particles of light (photons); however, there is a parallel effort underway to build quantum computers based on sound instead of light. This effort, led by Andrew Cleland at the University of Chicago, just had a major breakthrough: the team created a device that is a key component of a sound-based quantum computer.

Phonons are the fundamental quantum vibrations within materials, with an individual phonon representing the collective motion of many trillions of atoms. The team built a chip that lets them create single phonons on demand, about a million times higher pitched than audible sound. They then passed each phonon into a beam splitter consisting of 16 tiny, parallel aluminium strips designed so that any sound hitting them is reflected and transmitted in equal parts. At supercooled temperatures, they found that the phonon entered a quantum superposition state in which the whole particle was simultaneously reflected and transmitted.
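That 50/50 behaviour can be written down directly: a beam splitter acts as a unitary matrix on the two modes, and a single quantum entering one port leaves in an equal superposition of both outputs. A minimal numerical sketch (the matrix below is the standard symmetric beam-splitter unitary, not anything specific to the aluminium-strip device):

```python
import numpy as np

# Standard 50/50 beam-splitter unitary acting on the two modes
# (reflected, transmitted)
bs = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)

phonon_in = np.array([1.0, 0.0])   # one phonon entering mode 0
out = bs @ phonon_in               # amplitude in each output port

probs = np.abs(out) ** 2
print(probs)                       # equal probability of reflection and transmission
```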

Cleland says that this is exactly what they hoped would happen because this process is a necessary step for running calculations on quantum computers that rely on particles of light. To make their chip even more like a sound-based quantum computer, the researchers also successfully recreated the way two particles of light are commonly made to “talk to each other” and how their behaviour is controlled during light-based computations.

Here, they simultaneously sent two phonons from opposite directions into the beam splitter and saw their respective superposition states influence each other. In the future, they will use this procedure to implement simple operations that make up computer programs.
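The two-phonon experiment mirrors the textbook Hong-Ou-Mandel calculation for photons. Here is that standard bosonic-interference algebra sketched for an ideal 50/50 splitter (this is the generic textbook calculation, not the team's actual analysis): with one quantum entering each port, the cross terms cancel, so the two phonons always exit together from the same port.

```python
import numpy as np

# One phonon in each input port: a†b†|0>. The 50/50 splitter maps
# a† -> (c† + d†)/sqrt(2) and b† -> (c† - d†)/sqrt(2).
s = 1 / np.sqrt(2)

# Expanding a†b† -> s^2 (c† + d†)(c† - d†):
amp_cc = s * s               # both phonons in port c (c†c† term)
amp_dd = -s * s              # both phonons in port d
amp_cd = s * s - s * s       # one in each port: the cross terms cancel

# Normalisation: |2,0> = c†c†|0>/sqrt(2), giving a factor 2 in probability
p_bunch = 2 * amp_cc**2 + 2 * amp_dd**2
p_coincidence = amp_cd**2

print(p_bunch, p_coincidence)   # phonons bunch; coincidences vanish
```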

Dirk Bouwmeester at the University of California, Santa Barbara, says that for particles of light, procedures like quantum teleportation or creating entanglement hinge on using beam splitters, and now they could be done with particles of sound as well. “It is truly spectacular that the team could replace photons with phonons,” he says.

There are a lot of interesting applications for this technology once it matures; for example, it could be used to connect computer components that are hard to connect to each other. Using sound as the carrier instead of light opens up more possibilities. The research has been published in the journal Science (DOI: 10.1126/science.adg8715).

Source: NewScientist: Sound-based quantum computers could be built using chip-sized device

– Suramya

June 7, 2023

Proof-of-concept technology to beam solar power down to Earth successfully demonstrated

Filed under: Astronomy / Space,Emerging Tech,Science Related — Suramya @ 4:20 PM

The ability to beam power from space to Earth has long been a staple of science fiction books and movies. On the surface it makes sense: space is huge, and if we can set up solar panels there to collect energy and get it to Earth in a usable form, it is a win-win; no more having to deal with fossil fuels, nuclear reactors, etc. Folks have been working on this for over five decades, but progress has been slow, as most people focused their efforts on other options such as improving land-based solar panels, geothermal, and so on.

Now researchers from the California Institute of Technology (Caltech) have successfully demonstrated MAPLE (Microwave Array for Power-transfer Low-orbit Experiment), an array of 32 flat antennas packed onto a surface slightly larger than a dinner plate. During the experiment, energy was transmitted to Earth and successfully received on the roof of the Gordon and Betty Moore Laboratory of Engineering; the array was also able to alternately target receivers about a foot away in space, transmitting ~200 milliwatts of power to light an LED.

Using constructive and destructive interference between individual transmitters, a bank of power transmitters is able to shift the focus and direction of the energy it beams out—without any moving parts. The transmitter array uses precise timing-control elements to dynamically focus the power selectively on the desired location using the coherent addition of electromagnetic waves. This enables the majority of the energy to be transmitted to the desired location and nowhere else.
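The steering principle described above (coherent addition with per-element timing control) is the classic phased-array trick: applying a linear phase ramp across the elements moves the direction in which the individual waves add constructively, with no moving parts. In the sketch below, only the 32-element count comes from the article; the frequency, element spacing, and target angle are illustrative assumptions.

```python
import numpy as np

c = 3.0e8
f = 10e9                       # assumed transmit frequency (10 GHz)
lam = c / f
n = 32                         # MAPLE uses 32 flat antennas
d = lam / 2                    # assumed half-wavelength element spacing
x = np.arange(n) * d           # element positions along the array

# Commanding a direction means applying a linear phase ramp across elements
target_deg = 20.0
k = 2 * np.pi / lam
phases = -k * x * np.sin(np.radians(target_deg))

# Array factor: coherent sum of every element's contribution at each angle
angles = np.radians(np.linspace(-90, 90, 1801))
af = np.abs(np.exp(1j * (k * np.outer(np.sin(angles), x) + phases)).sum(axis=1))

peak_deg = np.degrees(angles[np.argmax(af)])
print(peak_deg)                # the main lobe lands at the commanded angle
```

Changing `target_deg` re-steers the beam purely by recomputing the phase offsets, which is the "dynamic focus" the Caltech description refers to.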

MAPLE features two separate receiver arrays located about a foot away from the transmitter to receive the energy, convert it to direct current (DC) electricity, and use it to light up a pair of LEDs to demonstrate the full sequence of wireless energy transmission at a distance in space. MAPLE tested this in space by lighting up each LED individually and shifting back and forth between them. The experiment is not sealed, so it is subject to the harsh environment of space, including the wide temperature swings and solar radiation that will be faced one day by large-scale SSPP units.
[…]
MAPLE also includes a small window through which the array can beam the energy. This transmitted energy was detected by a receiver on the roof of the Gordon and Betty Moore Laboratory of Engineering on Caltech’s campus in Pasadena. The received signal appeared at the expected time and frequency, and had the right frequency shift as predicted based on its travel from orbit.

This is a big step forward, but I still have major doubts about the feasibility of the project, at least in its current form. Here we are using microwaves to transmit energy from space to Earth, and at the power levels we would need, the microwave beam would cook anything that crosses it; if a plane flies through the beam, you can say goodbye to the passengers. Even if we establish a no-fly zone around the area, what is to stop birds from flying into the beam and getting fried? Another problem is that microwave beams generate heat as a side effect, which could heat the atmosphere and change air currents, causing a massive environmental impact.

We also need to consider that, to collect solar energy at a scale where it would be useful and make financial sense, we would need to set up massive solar panels in space. This would cause a huge problem for astronomers. We already have issues caused by the Starlink satellites; this would be the same but at a much larger scale.

However, that being said, I see the potential, and if we can ever get quantum entanglement ("spooky action at a distance," as Einstein called it) working reliably and consistently, then that could potentially be used to transmit the power to Earth without frying everyone in the path.

Let's see what else the researchers come up with…

Source: Caltech.edu: In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space
Science.org: Satellite beams solar power down to Earth, in first-of-a-kind demonstration

– Suramya

May 16, 2023

Using AI powered chatbots to answer questions about a book series

One of the things I have wanted for a while is the ability to look up details about a book or series, so that when a new book in the series comes out I don't have to re-read the old books before starting the new one. I end up doing that because I have usually forgotten a large part of the backstory, and without it the new book is a lot less enjoyable. Some authors solve this problem by summarizing the backstory at the start of the book, which is really useful. When ChatGPT and the associated AI bots started picking up, one of the ideas I came up with was a chatbot trained on a book series, so that I could ask it questions about the books and get answers. That would be really useful for series with a large cast and multiple overlapping storylines.

Recently privateGPT showed up in my feed with the following claim: “Ask questions to your documents without an internet connection, using the power of LLMs. 100% private, no data leaves your execution environment at any point. You can ingest documents and ask questions without an internet connection!” and it seemed like it was the perfect tool for what I wanted to do so I downloaded the program from GitHub and set it up on my machine.
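For the curious, the ingest-then-query pattern that tools like privateGPT implement can be sketched in a few lines. This is only the retrieval skeleton: a plain bag-of-words cosine similarity stands in for the LLM embedding model, and in the real tool the retrieved chunk would be handed to a language model to generate the answer. The chunk texts and function names below are made up for illustration.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Crude stand-in for an embedding model: a bag-of-words vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Ingest": split the source text into chunks and index each one
chunks = [
    "A Heartstone anchors the node of cleansed magic in a Vale.",
    "The hertasi are lizard-like, tunnel-dwelling, social creatures.",
]
index = [(chunk, vectorize(chunk)) for chunk in chunks]

# "Query": retrieve the best-matching chunk, which a real tool would
# feed to the language model as context for its answer
def ask(question):
    qv = vectorize(question)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]

print(ask("what is a heartstone"))
```

If the retrieval step surfaces the wrong chunk (or the model ignores it), you get exactly the kind of confidently wrong answers shown below.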

Once the software was installed, I copied a single book from the Mercedes Lackey Valdemar series, Darian's Tale 01: Owlflight, into a temp directory and told privateGPT to ingest the contents. The ingestion took hours even though the file was only ~600 KB; I left it running overnight and it completed sometime during the night. Then I ran the prompt to query the chatbot. The first few times I tried, the app crashed because I ran out of memory, so I enabled swap on my system, after which it finally ran the query and answered.

Unfortunately the answers I got were complete nonsense. You can see the questions I asked and the answers I got below:

Enter a query: what is a heartstone

llama_print_timings:        load time =   974.93 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time =   974.77 ms /     7 tokens (  139.25 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time =   977.55 ms

A heartstone is a type of music that is typically composed in the key of F major. It is characterized by a steady rhythm and a melody that is often slow and mournful. 
Heartstones are often used in classical music and are often associated with love and loss. They are also used in some contemporary music genres, such as folk and world music.

The correct answer is (as per Valdemar Fandom Wiki):

A Heartstone is both an artificial node produced by the Tayledras and the node’s physical anchor. The knowledge of how to create a Heartstone was given to the Tayledras by the Star-Eyed after the Sundering of the Clans. She tasked the Tayledras to cleanse the untamed magic caused by the Mage Wars. A Heartstone is used as a vessel to contain the cleansed magic. Ley-Lines are re-routed by Adepts and Masters to drain entirely into the Heartstone, bringing the surrounding area back to normal. Once their work is finished at that location, the Clan will establish a new Vale and Heartstone in a new area that needs cleansing. They drain the old Heartstone entirely and transfer the energy to the new Heartstone where they will begin their work anew.

So I asked another question:

Enter a query: who are the Hertasi

llama_print_timings:        load time =   974.93 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 26948.27 ms /     7 tokens ( 3849.75 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 26976.13 ms

The Herta si are a type of ghostly beings in the world of Eri. They are often associated with love and romance, and they are known for their ability to control and manipulate emotions. 
They are often depicted as beautiful, ethereal creatures, with long, flowing hair and eyes that glow with an otherworldly light. 

In this result, it is talking about characters from a completely different series called the Herta. The correct answer is:

Hertasi are popularly considered to be one of the five non-human sentient races created by Urtho, but the work to “uplift” the hertasi was begun earlier, by Sorcerer-Adept Khal Herta. When he died, his work passed to Urtho.

The wild hertasi began as large, solitary lizards living in swamps, with the intelligence of a dog. They were “mild-tempered reptiles, available in large quantities, living fairly simple lives.”[1] They could have been described as semi-sentient before the Adepts took a hand. After Khal Herta’s efforts, Hertasi “had simple structures, organized hunting and fishing, and rudimentary medicine.”[1] Today, they are extremely active, intelligent, cave and tunnel-dwelling, social creatures.

Since the results were completely useless, like all the other AI chatbots I have tried so far, I have removed the software from my machine and will come back to this at some point later, by which time things will hopefully have improved.

– Suramya

April 19, 2023

Finally a useful AI Implementation: Making spoken dialog easier to hear in movies and shows

Filed under: Emerging Tech,News/Articles,Tech Related — Suramya @ 6:37 PM

Finally, an AI use case that is actually useful. There are a ton of products where AI seems to be shoehorned in for no reason, but this recent announcement from Amazon is different: Dialogue Boost is a new feature that lets you increase the volume of dialogue relative to background music and effects, so you can actually hear the dialogue without nearly shattering your eardrums when a sudden explosion happens.
It is still in the testing phase and has only been released on some of their products so far, but I am looking forward to it reaching general availability.

Dialogue Boost works by analyzing the original audio in a movie or series and identifying points where dialogue may be hard to hear above background music and effects, at which point speech patterns are isolated and audio is enhanced to make the dialogue clearer. The AI targets spoken dialogue rather than a typical speaker or home theater set up that only amplifies the center channel of audio. It’s something that exists on high-end theater set-ups and certain smart TVs, but Amazon is the first streamer to roll out such a feature.
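As a toy illustration of the underlying idea (lifting the band where speech energy lives relative to everything else), here is a crude frequency-domain version. Amazon's actual system isolates learned speech patterns rather than applying a fixed band gain, and all the signals below are synthetic.

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
speech = 0.2 * np.sin(2 * np.pi * 500 * t)   # pretend dialogue tone at 500 Hz
rumble = 1.0 * np.sin(2 * np.pi * 60 * t)    # loud low-frequency background
mix = speech + rumble

# Boost the typical speech band relative to everything else
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
gain = np.where((freqs > 300) & (freqs < 3400), 4.0, 1.0)
boosted = np.fft.irfft(spectrum * gain, n=len(mix))

# Measure dialogue-to-background ratio after the boost
mag = np.abs(np.fft.rfft(boosted))
print(mag[500] / mag[60])   # the 500 Hz "dialogue" now rivals the rumble
```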

I have gotten used to having subtitles on when I watch something, because that ensures I don't miss any dialogue due to the background music or sounds in the show or movie. This looks like it will alleviate that requirement. I think I will still keep subtitles on, but this will certainly help.

Source: Amazon’s New Tool Adjusts Sound So You Can Actually Understand Movie and TV Dialogue
Announcement: Prime Video launches a new accessibility feature that makes it easier to hear dialogue in your favorite movies and series

– Suramya

March 18, 2023

Scientists create a working supersolid in the lab

Filed under: Emerging Tech,My Thoughts — Suramya @ 11:34 PM

It seems that every year we learn more about the universe that makes the basic physics we learned in school inaccurate, or rather adds a lot of caveats to the theories. Originally we had three states of matter: solid, liquid, and gas. Then came things like superfluids, Bose–Einstein condensates, quantum spin liquids, supercritical fluids, quark–gluon plasma, Rydberg polarons, and many more weird possibilities. Last week, scientists from Innsbruck University in Austria managed to create a supersolid in 2D. Until now, researchers had only been able to create a 1D chain of supersolid a few atoms across, but with cutting-edge techniques they were able to create a 2D "sheet" of supersolid.

If you are like me, by now you are wondering what on earth a supersolid is… Basically, it is a state of matter that incorporates two different states at the same time: it is both a solid and a superfluid. This gives it the ability to hold a rigid structure while still flowing like a liquid without any friction. If that sounds confusing, it is because we are talking about quantum effects, which seem to exist in a state of constant contradiction and confusion (at least for me, when I try to understand them).

“To picture a supersolid, consider an ice cube immersed in liquid water, with frictionless flow of the water through the cube,” wrote Bruno Laburthe-Tolra, a physicist at Sorbonne Paris North University.

So, to create a supersolid, you first trap some atoms, then cool them, then play with their interactions. “If you tune those correctly, and you tune the shape of the trap correctly, you can get a supersolid,” says Norcia, the lead author.

Using this method, in 2019, researchers began to create a basic, one-dimensional supersolid: essentially, a thin supersolid tube in a straight line.

That’s what Norcia and his colleagues at Innsbruck University and the Austrian Academy of Sciences have now done. By tinkering with the device they used to trap atoms and the process they used to condense the atoms, they were able to extend their supersolid from one dimension into two: from a tiny tube into a small sheet.

There are a lot of interesting use cases for this technology once it matures: we could use it for lubrication in industrial machinery, or to create frictionless surfaces for tests. It could even be used in vacuum as-is for various purposes. But that is still quite a way off, because the work to go from 2D to 3D has only just started and is still in the pre-research stage. In the meantime, we do have a supersolid sheet available for study, which will give us more insight into this fascinating new substance.

The research has been published in Nature: Supersolids go two-dimensional

Source: Popsci.com: We finally have a working supersolid. Here’s why that matters.

– Suramya

March 12, 2023

Researchers create mini-robot that can navigate inside blood vessels and perform surgery autonomously

Filed under: Emerging Tech,Tech Related — Suramya @ 11:13 PM

Performing surgery is a delicate task, and at times it is almost impossible to reach the area we want to operate on without cutting through other important tissues. This is even more apparent for surgery inside a blood vessel or artery, which could be the key to removing an obstruction or stitching a wound. Until now, we didn't have the ability to release an autonomous robot inside a blood vessel that could navigate to the correct location, perform the programmed actions (or allow the doctor to take over manually), and return.

This was only possible in the realm of science fiction, but thanks to the efforts of researchers at South Korea's Hanyang University it is now possible in the real world. They have successfully demonstrated that their I-RAMAN (robotically assisted magnetic navigation system for endovascular intervention) robot can travel autonomously to the superficial femoral artery in a pig, deliver contrast dye, and return safely to the extraction point. Their results were published on 9 February in IEEE Robotics and Automation Letters: Separable and Recombinable Magnetic Robot for Robotic Endovascular Intervention.

This study presents a separable and recombinable magnetic robot (SRMR) to deliver and retrieve an untethered magnetic robot (UMR) to a target vascular lesion safely and effectively for robotic endovascular intervention. The SRMR comprises a delivery catheter and UMR connected to the end of the delivery catheter by a connecting section. An external magnetic field (EMF) interacts with the permanent magnet of the UMR; it can effectively generate magnetic torque and steer the delivery catheter to reach a target lesion. Furthermore, the rotating EMF allows the UMR of the SRMR to separate from the delivery catheter and perform the tunneling task. After completing the tunneling task, the UMR can be safely recombined with the delivery catheter in the vasculature via a simultaneous application of the EMF and suction force to the delivery catheter. The SRMR functions of steering, separation, movement, tunneling, drug delivery, and recombination are validated in a mimetic vascular model with a pseudo blood clot. Finally, the SRMR is successfully validated in an in vivo experiment of a mini pig’s superficial femoral artery for contrast delivery, separation, movement, and recombination.
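The steering described in the abstract comes down to the torque an external magnetic field exerts on the robot's permanent magnet, τ = m × B. A minimal sketch with made-up values (the paper's actual field strengths and magnetic moments are not given here):

```python
import numpy as np

# Torque on the robot's permanent magnet from the external field: tau = m x B.
# Both vectors below are illustrative assumptions, not values from the paper.
m = np.array([0.01, 0.0, 0.0])   # magnetic moment of the robot, A*m^2
B = np.array([0.0, 0.02, 0.0])   # external field, T (20 mT, assumed)

tau = np.cross(m, B)             # torque turning m toward B, N*m
print(tau)
```

Because the torque always acts to align the magnet with the field, rotating the external field (as the abstract describes) makes the untethered robot spin, which is what drives the separation and tunneling behaviour.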

This is a fantastic achievement, and although a lot of work is still left before this can be deployed for actual human use, we are a step closer to truly universal repair bots. Imagine an accident victim who is bleeding internally: the doctor deploys these robots to re-stitch the blood vessels, and within minutes the bleeding is stopped and the doctor can start the post-op work. I can imagine these being sold as part of standard medkits in the future (way in the future), with a few pre-programmed options available so that, depending on the situation, a person can select the correct option to deploy.

However, all is not rosy (as always). If these go into active use and become common enough to be included in med-kits, we will need safeguards to prevent the bots from being repurposed: for example, programmed to damage blood vessels and cause internal bleeding instead of stitching them. There are many other scenarios in which this could be misused, so we need to think through those cases, mitigate the risks, and only then deploy them into the world.

That being said, I am still excited to see the possibilities this opens up.

Source: ACM Tech News Newsletter.

– Suramya

March 2, 2023

Intel Releases SDK allowing C++ Developers to start writing code for Quantum Computers

Filed under: Quantum Computing,Tech Related — Suramya @ 8:26 PM

Intel has released a new software development kit (SDK) for developers looking to work on quantum computers. They are not the first (Microsoft released an online course/setup back in 2019), and they certainly won't be the last.

Unfortunately, while they have released the platform, it doesn't actually run on a quantum computer; it runs on a quantum-computer simulator they have built. The really interesting part is that the SDK lets developers use C++ to build quantum algorithms instead of having to learn a new programming language, which immediately increases the number of people who can hit the ground running and start developing with it.
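Under the hood, a statevector simulator like the one backing the SDK tracks the 2^n-entry amplitude vector of an n-qubit register and multiplies gate matrices into it. A minimal two-qubit sketch of that idea (written in Python for illustration; the SDK itself exposes this through C++):

```python
import numpy as np

# Standard gate matrices
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)        # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)    # controlled-NOT

state = np.zeros(4)
state[0] = 1.0                          # start in |00>

state = np.kron(H, np.eye(2)) @ state   # Hadamard on the first qubit
state = CNOT @ state                    # entangle: produces a Bell state

print(np.round(state ** 2, 3))          # measurement probabilities per basis state
```

The output probabilities are 0.5 for |00⟩ and |11⟩ and zero otherwise, the hallmark of an entangled Bell pair, and exactly the kind of result a simulator-backed SDK lets you verify before real hardware exists.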

The platform, called Intel Quantum SDK, would for now allow those algorithms to run on a simulated quantum computing system, said Anne Matsuura, Intel Labs’ head of quantum applications and architecture. Matsuura said developers can use the long-established programming language C++ to build quantum algorithms, making it more accessible for people without quantum computing expertise. “The Intel Quantum SDK helps programmers get ready for future large-scale commercial quantum computers,” Matsuura said in a statement. “It will also advance the industry by creating a community of developers that will accelerate the development of applications.”

Intel will be launching its own quantum computer in the near future. They are taking a slightly different approach than the others: they are trying to build these computers using their existing chip-making technology, putting transistors very close to each other, running them at super-low temperatures, and using single electrons in the circuit so that the transistors act as qubits. This sounds like a promising approach, but I feel it is more of a stepping stone on the way to a fully quantum setup, as it is a hybrid of existing computers and a quantum computer.

Source: Slashdot: Intel Releases Software Platform for Quantum Computing Developers

– Suramya

