Suramya's Blog : Welcome to my crazy life…

May 2, 2021

Infinite Nature: Creating Perpetual Views of Natural Scenes from a Single Image

Filed under: Emerging Tech,Interesting Sites,My Thoughts — Suramya @ 11:28 PM

Found this over at Hacker News: researchers have created a technique that takes existing videos and images and extrapolates them into an infinitely scrolling natural view that is very relaxing to watch and at times looks very trippy. The changes are slow, so you don't see the image changing, but if you wait 20 seconds and compare the current frame with the first one, you will see how much it differs.

We introduce the problem of perpetual view generation—long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image. This is a challenging problem that goes far beyond the capabilities of current view synthesis methods, which work for a limited range of viewpoints and quickly degenerate when presented with a large camera motion. Methods designed for video generation also have limited ability to produce long video sequences and are often agnostic to scene geometry. We take a hybrid approach that integrates both geometry and image synthesis in an iterative render, refine, and repeat framework, allowing for long-range generation that cover large distances after hundreds of frames. Our approach can be trained from a set of monocular video sequences without any manual annotation. We propose a dataset of aerial footage of natural coastal scenes, and compare our method with recent view synthesis and conditional video generation baselines, showing that it can generate plausible scenes for much longer time horizons over large camera trajectories compared to existing methods.
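The "render, refine, repeat" idea is easier to grasp in pseudocode. Here is a minimal, hedged sketch of the iterative structure as I read it from the abstract; the render and refine steps are identity placeholders, not the authors' actual networks:

```python
# Hedged sketch of the "render, refine, repeat" loop described in the
# abstract. The two steps are identity placeholders standing in for the
# paper's geometry-aware renderer and learned refinement network.
import numpy as np

def render_new_view(rgbd, pose):
    """Warp the current RGB-D frame to the next camera pose.
    (Placeholder: the real system does a geometry-aware reprojection.)"""
    return rgbd

def refine(rgbd):
    """Fill disocclusions and add detail.
    (Placeholder: the real system uses a learned synthesis network.)"""
    return rgbd

def perpetual_view_generation(first_rgbd, poses):
    frames = [first_rgbd]
    current = first_rgbd
    for pose in poses:                                    # arbitrarily long path
        current = refine(render_new_view(current, pose))  # render + refine
        frames.append(current)                            # ...and repeat
    return frames

start = np.random.rand(64, 64, 4)   # RGB + depth channels
video = perpetual_view_generation(start, poses=range(100))
print(len(video))                   # 101 frames
```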

The full paper, Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image, is available here along with a few sample generated videos. One of the examples is below:

This is very impressive technology. I can see a lot of uses for it in video games, for example generating terrain for flight simulators to fly over or fight over. It could be used for VR world development, or just to help people relax. It might also be possible to take footage from TV shows and extrapolate it so folks can explore those worlds in VR (after a lot more research is done, as the tech is still experimental). We could also simulate alien worlds using pictures taken by our probes, to train astronauts and settlers realistically instead of relying on fake windows and isolated areas.

Check the site out for more such videos. I am looking forward to the future technologies built on top of this.

– Suramya

April 29, 2021

Using Photo Ninja to shield users’ photos from reverse image searches and facial recognition AI

Filed under: Interesting Sites,My Thoughts — Suramya @ 1:15 AM

Last year I had posted about the Fawkes project, which allows users to modify their photos to prevent them from being used to power facial recognition and image recognition technologies. The problem with such tools is that they require you to set up a server or run them on your own machine, which is hard for regular folks to do, and that limits adoption of the tool even though it is very useful.

Now a company called DoNotPay is launching a new service called Photo Ninja that lets you upload a photo you want to shield; the software then adds a layer of pixel-level changes that are barely noticeable to humans but dramatically alter the image in the eyes of roving machines, making it harder for someone to run a reverse image search on it or to use it for training AI models.

This is a great start, and it makes the protection really easy for people to use; the service costs $36 a year.

“Photo Ninja uses a novel series of steganography, detection perturbation, visible overlay, and several other AI-based enhancement processes to shield your images from reverse image searches without compromising the look of your photo,” says the company.

AI systems are trained to analyze pictures by looking at the pixel-level data, and adversarial examples can trick them by changing the pixel colors in a subtle enough way that the human eye doesn’t notice anything different but a computer fails to categorize the image as it usually would or interprets it as a wholly different image.
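The underlying idea is easy to sketch. Below is a minimal example in the style of the classic fast gradient sign method (FGSM), one standard way to build adversarial perturbations; this is a generic illustration of the concept, not Photo Ninja's actual (proprietary) pipeline:

```python
# Generic FGSM-style adversarial perturbation sketch (illustrative only;
# Photo Ninja's real method is proprietary and more sophisticated).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in photo
label = torch.tensor([281])  # whatever class the model assigns right now

# Push the image in the direction that *increases* the model's loss.
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

eps = 2 / 255  # small enough that a human barely notices the change
adversarial = (image + eps * image.grad.sign()).clamp(0, 1).detach()
# To a person `adversarial` looks identical, but the model's features
# shift enough to derail recognition and reverse image search.
```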

Anti-creep software — There are various reasons why you might want to use Photo Ninja. Before joining a dating service like Bumble, you could run your pictures through Photo Ninja so that weirdos can’t upload them to Google’s reverse image search and find your social media profiles without getting your consent, for instance.

I wonder if there is demand for a similar service that could be powered by Fawkes and provided for free to all users. I am thinking about setting something up, like a bot or a site that does this for free; I think there is a market for it, and it would be a great side project for me to work on during this lockdown.
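If I do end up building it, the skeleton could be tiny: a web endpoint that accepts an upload, runs it through the cloaking tool, and returns the shielded image. Here is a rough sketch; the cloaking step is a placeholder, since wiring in Fawkes's actual API/CLI would be the real work:

```python
# Rough sketch of a free "cloak my photo" web service. The cloaking step
# is a placeholder; hooking up Fawkes's real API/CLI is the actual project.
from flask import Flask, request, send_file
import tempfile, os

app = Flask(__name__)

def run_cloaking(in_path: str, out_path: str) -> None:
    """Placeholder: call the cloaking tool (e.g. Fawkes) on in_path and
    write the protected image to out_path."""
    # subprocess.run(["fawkes", ...])  # exact invocation TBD
    os.replace(in_path, out_path)  # stand-in: passes the image through

@app.route("/cloak", methods=["POST"])
def cloak():
    upload = request.files["photo"]
    workdir = tempfile.mkdtemp()
    in_path = os.path.join(workdir, "in.png")
    out_path = os.path.join(workdir, "out.png")
    upload.save(in_path)
    run_cloaking(in_path, out_path)
    return send_file(out_path, mimetype="image/png")

if __name__ == "__main__":
    app.run()
```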

What do you think?

– Suramya

March 28, 2021

Louvre’s entire collection is now available online

Filed under: Interesting Sites — Suramya @ 12:24 AM

This is very cool: the Louvre has made its entire collection of over 480,000 works available online for free. You can check it out here. It still doesn't beat going there physically, because that is a whole other experience, but it gives folks who can't visit in person a chance to view the collection in high-res images.

It is great that more and more museums and collectors are making their archives available online for free.

– Suramya

March 27, 2021

Outrun: Run a local command on a remote server

A lot of times you have to run a command that requires a lot of processing power and is extremely slow on your local computer. I have faced this issue in the past and at times wished there was a way to push these commands to a remote machine with a more powerful CPU. Now, thanks to the efforts of Alexander Overvoorde (Overv), Jakub Wilk and Xiretza, this is possible: they have created a tool called Outrun, which lets you execute a local command using the processing power of another Linux machine without having to install the command on the remote machine.
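For illustration, if I am reading the documentation right, Outrun is invoked as `outrun <host> <command>`, so offloading a transcode might look like `outrun user@fastbox ffmpeg -i input.mp4 output.mp4`: the ffmpeg binary and the input file come from the local machine, but the CPU cycles are the server's. (The exact flags and host naming here are my assumption, so check the project README.)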


Sample Execution of ffmpeg on a remote server

The software does have a few limitations, but on the whole it is very cool:

  • You need root access (or sudo access) on the remote server, as the system needs to run chroot there
  • Both the client and the remote server need to be on the same architecture, so you can't set up a session from an x86 machine to an ARM machine. This is unfortunate, because the first use case I had for this tool was to offload software from my Raspberry Pi to my server as and when it needed more processing power.
  • File system performance remains a bottleneck

Check it out if you need to run commands with more CPU cycles than what is available on the local machine.

Thanks to Hacker News for the initial link.

– Suramya

October 13, 2020

It is now possible to generate clean hydrogen by Microwaving plastic waste

Filed under: Emerging Tech,Interesting Sites,My Thoughts — Suramya @ 2:33 PM

Plastic is a modern hazard, and plastic pollution has a massive environmental impact. As of 2018, 380 million tonnes of plastic were being produced worldwide each year (source: Wikipedia). Since we have long known that plastic is bad, a lot of effort has been put into getting people to recycle, and single-use plastics have been banned in a lot of places (in India they are banned as of 2019). However, as per a recent report by NPR, recycling doesn't keep plastic out of landfills, as it is not economically viable at a large scale: it is simply cheaper to bury the plastic than to clean and recycle it. Apparently this has been known for years now, but the big oil companies kept it quiet to protect their cash cow. So the hunt for what to do with the plastic continues, and thanks to recent breakthroughs there just might be light at the end of this tunnel.

Apparently plastic has a high density of hydrogen in it (something that I wasn't aware of), and it is possible to extract this hydrogen to use as fuel for a greener future. Existing methods involve heating the plastic to ~750°C to decompose it into syngas (a mixture of hydrogen and carbon monoxide), which is then separated in a second step. Unfortunately, this process is energy intensive and difficult to make commercially viable.

Peter Edwards and his team at the University of Oxford decided to tackle this problem and found that if you break the plastic into small pieces with a kitchen blender, mix it with a catalyst of iron oxide and aluminium oxide, and then microwave it at 1,000 watts, almost 97 percent of the hydrogen in the plastic is released within seconds. The cherry on top is that the material left over after the process is almost exclusively carbon nanotubes, which can be used in other projects and have vast applications.
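As a back-of-the-envelope check (my arithmetic, assuming an idealized pure polyolefin feedstock): polyethylene is essentially (CH2)n, so complete deconstruction is (CH2)n → n C + n H2. One gram of CH2 units is 1/14 of a mole, which caps the yield at roughly 71 mmol of H2 per gram; the 55.6 mmol per gram reported below sits under that ceiling, and real mixed waste has a different hydrogen content, which is presumably how the figure can still be ~97% of the theoretical mass for their actual feedstock.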

The ubiquitous challenge of plastic waste has led to the modern descriptor plastisphere to represent the human-made plastic environment and ecosystem. Here we report a straightforward rapid method for the catalytic deconstruction of various plastic feedstocks into hydrogen and high-value carbons. We use microwaves together with abundant and inexpensive iron-based catalysts as microwave susceptors to initiate the catalytic deconstruction process. The one-step process typically takes 30–90 s to transform a sample of mechanically pulverized commercial plastic into hydrogen and (predominantly) multiwalled carbon nanotubes. A high hydrogen yield of 55.6 mmol g−1 of plastic is achieved, with over 97% of the theoretical mass of hydrogen being extracted from the deconstructed plastic. The approach is demonstrated on widely used, real-world plastic waste. This proof-of-concept advance highlights the potential of plastic waste itself as a valuable energy feedstock for the production of hydrogen and high-value carbon materials.

Their research was published yesterday in Nature Catalysis (DOI: 10.1038/s41929-020-00518-5) and is still in the early stages. But if this holds up in larger-scale testing, it will allow us to significantly reduce the plastic waste that ends up in landfills and at the bottom of the ocean.

Source: New Scientist: Microwaving plastic waste can generate clean hydrogen

– Suramya

September 24, 2020

Can you spot a troll/bot account?

Filed under: Interesting Sites — Suramya @ 10:29 AM

Nowadays we have programmatic bots being used to spread misinformation & distrust on social media, in addition to the trolls who are just doing it for the lulz. A troll is a person who starts flame wars or intentionally upsets people on the Internet by posting inflammatory messages in an online community, with the intent of provoking readers into emotional responses for the troll's amusement or for some specific gain.

Can you identify these inauthentic accounts? Most people will say yes, but in reality it is hard to spot them, especially if they are run by competent people. spotthetroll.org is an online game that tests your ability to identify such accounts, along with advice on what to look for when consuming social media. It's a great quiz and I found it quite fun; the section on what to look for when viewing social media to spot trolls/bots was especially useful.

Each of the following 8 profiles include a brief selection of posts from a single social media account. You decide if each is an authentic account or a professional troll. After each profile, you’ll review the signs that can help you determine if it’s a troll or not.

I got a 5 out of 8. What’s your score?

– Suramya

September 21, 2020

Diffblue’s Cover is an AI powered software that can write full Unit Tests for you

Writing unit test cases for your software is one of the most boring parts of software development, even though having accurate tests allows us to develop code faster & with more confidence. Having a full test suite allows a developer to ensure that the changes they have made didn't break other parts of the project that were working fine earlier. This makes unit tests an essential part of CI/CD (Continuous Integration and Continuous Delivery) pipelines, and it is hard to do frequent releases without rigorous unit testing. For example, the SQLite database engine has 640 times as much testing code as code in the engine itself:

As of version 3.33.0 (2020-08-14), the SQLite library consists of approximately 143.4 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 640 times as much test code and test scripts – 91911.0 KSLOC.

Unfortunately, since tests are boring and don't give immediate tangible results, they are the first casualty when a team is under a time crunch for delivery. This is where Diffblue's Cover comes into play. Diffblue was spun out of the University of Oxford following their research into how to use AI to write tests automatically. Cover uses AI to write complete unit tests, including logic that reflects the behavior of the program, unlike other existing tools that generate unit tests from templates and depend on the user to provide the test logic.
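To give a flavor of what such a generated test looks like, here is a hedged sketch of the kind of characterization test these tools emit: assertions that pin down the code's current behavior so any future regression fails CI. Cover itself writes Java/JUnit; the sketch below uses Python's unittest purely for illustration, and the `discount` function is a made-up stand-in:

```python
import unittest

# Hypothetical function under test (stand-in for real application code).
def discount(price: float, is_member: bool) -> float:
    """Return the price after applying a 10% member discount."""
    return round(price * 0.9, 2) if is_member else price

class TestDiscount(unittest.TestCase):
    # Characterization tests: they assert the *current* behavior of
    # discount(), so any later change that alters that behavior fails CI.
    def test_member_gets_ten_percent_off(self):
        self.assertEqual(discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()
```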

Cover has now been released as a free Community Edition for people to see what the tool can do and try it out themselves. You can download the software from here, and the full datasheet on the software is available here.


Using Cover IntelliJ plug-in to write tests

The software is not foolproof, in the sense that it doesn't identify bugs in the source code: it assumes the code is working correctly when the tests are added, so if there is incorrect logic in the code it won't be able to help you. On the other hand, if the original logic was correct, it will let you know if later changes break any of the existing functionality.

Lodge acknowledged the problem, telling us: “The code might have bugs in it to begin with, and we can’t tell if the current logic that you have in the code is correct or not, because we don’t know what the intent is of the programmer, and there’s no good way today of being able to express intent in a way that a machine could understand.

“That is generally not the problem that most of our customers have. Most of our customers have very few unit tests, and what they typically do is have a set of tests that run functional end-to-end tests that run at the end of the process.”

Lodge’s argument is that if you start with a working application, then let Cover write tests, you have a code base that becomes amenable to high velocity delivery. “Our customers don’t have any unit tests at all, or they have maybe 5 to 10 per cent coverage. Their issue is not that they can’t test their software: they can. They can run end-to-end tests that run right before they cut a release. What they don’t have are unit tests that enable them to run a CI/CD pipeline and be able to ship software every day, so typically our customers are people who can ship software twice a year.”

The software is currently only compatible with Java & IntelliJ, but work is ongoing to add support for other languages & IDEs.

Thanks to Theregister.com for the link to the initial story.

– Suramya

September 19, 2020

How to Toonify yourself

Filed under: Interesting Sites,My Thoughts — Suramya @ 10:57 AM

While surfing the web I came across 'Toonify Yourself!', which lets you upload a photo and see what you'd look like in an animated movie. It uses deep learning and is based on distilling a blended StyleGAN model into a pix2pixHD image-to-image translation network.
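As I understand the distillation idea (this is my sketch, not the authors' code): the blended StyleGAN acts as a teacher that produces matched (photo, toon) image pairs, and a fast pix2pixHD-style network is then trained to mimic that mapping. Roughly:

```python
# Minimal sketch of distilling a blended "toon" StyleGAN teacher into a
# fast image-to-image student. All names are illustrative placeholders.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Stand-in for pix2pixHD: maps a photo to its toonified version."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def sample_teacher_pairs(batch=4, size=64):
    """Placeholder for the real step: sample a latent, render the same
    face from the original and the blended StyleGAN to get a
    (photo, toon) pair. Random tensors stand in here."""
    return torch.rand(batch, 3, size, size), torch.rand(batch, 3, size, size)

student = TinyTranslator()
opt = torch.optim.Adam(student.parameters(), lr=2e-4)
l1 = nn.L1Loss()  # real pix2pixHD training also adds GAN + feature losses

for step in range(200):
    photos, toons = sample_teacher_pairs()
    loss = l1(student(photos), toons)
    opt.zero_grad()
    loss.backward()
    opt.step()
```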

It sounded interesting, so I tried it out with one of my pictures and got the following results:


Original image

Toonified Image

I quite like the result and am thinking of using it as my avatar going forward. What do you think?
Thanks to Hacker News for the link

– Suramya

September 16, 2020

Potential signs of life found on Venus: Are we no longer alone in the universe?

Filed under: Interesting Sites,My Thoughts,News/Articles — Suramya @ 11:15 AM

If you have been watching the astronomy chatter over the past two days, you will have seen the headlines screaming about the possibility of life being found on Venus, while other, less reputable sources claim that we have found definite proof of alien life. Both are inaccurate: although we have found something that is easily explained by assuming the possibility of extra-terrestrial life, there are other potential explanations that could cause the anomaly. So what is this discovery, you might ask, that is causing people worldwide to freak out?

During analysis of spectrometer readings of Venus, scientists made a startling discovery high in its atmosphere: traces of phosphine (PH3) gas, where any phosphorus should be in oxidized forms, at a concentration (~20 parts per billion) that is hard to explain. It is unlikely that the gas is produced by abiotic production routes in Venus's atmosphere, clouds, surface and subsurface, or from lightning, volcanic or meteoritic delivery (see the explanation below), hence the worldwide freak-out. Basically, the only way we know of that this gas could be produced in the quantity measured is if anaerobic life (microbial organisms that don't require or use oxygen) is producing it on Venus. Obviously, this doesn't mean that there aren't ways we haven't thought of yet that could be generating this gas, but the discovery is causing a big stir and will make various space programs refocus their efforts on Venus. India's ISRO already has a mission planned to study the surface and atmosphere of Venus, called 'Shukrayaan-1', set to launch in the late 2020s after the Mars Orbiter Mission 2, and you can be sure that they will be attempting to validate these findings when it gets there.

The only way to conclusively prove that life exists on Venus would be to go there and collect samples containing extra-terrestrial microbes. Since it's impossible to prove a negative, this will be the only concrete proof that we can trust; anything else leaves the door open for other potential explanations for the gas.

Here’s a link to the press briefing on the possible Venus biosignature announcement from @RoyalAstroSoc featuring comment from several of the scientists involved.

The recent candidate detection of ppb amounts of phosphine in the atmosphere of Venus is a highly unexpected discovery. Millimetre-waveband spectra of Venus from both the ALMA and JCMT telescopes at 266.9445 GHz show a PH3 absorption-line profile against the thermal background from deeper, hotter layers of the atmosphere, indicating ~20 ppb abundance. Uncertainties arise primarily from uncertainties in pressure-broadening coefficients and noise in the JCMT signal. Throughout this paper we will describe the predicted abundance as ~20 ppb unless otherwise stated. The thermal emission has a peak at 56 km, with the FWHM spanning approximately 53 to 61 km (Greaves et al. 2020). Phosphine is therefore present above ~55 km: whether it is present below this altitude is not determined by these observations. The upper limit on phosphine occurrence is not defined by the observations, but is set by the half-life of phosphine at <80 km, as discussed below.

Phosphine is a reduced, reactive gaseous phosphorus species, which is not expected to be present in the oxidized, hydrogen-poor Venusian atmosphere, surface, or interior. Phosphine is detected in the atmospheres of three other solar system planets: Jupiter, Saturn, and Earth. Phosphine is present in the giant planet atmospheres of Jupiter and Saturn, as identified by ground-based telescope observations at submillimeter and infrared wavelengths (Bregman et al. 1975; Larson et al. 1977; Tarrago et al. 1992; Weisstein and Serabyn 1996). In giant planets, PH3 is expected to contain the entirety of the atmospheres' phosphorus in the deep atmosphere layers (Visscher et al. 2006), where the pressure, temperature and the concentration of H2 are sufficiently high for PH3 formation to be thermodynamically favored. In the upper atmosphere, phosphine is present at concentrations several orders of magnitude higher than predicted by thermodynamic equilibrium (Fletcher et al. 2009). Phosphine in the upper layers is dredged up by convection after its formation deeper in the atmosphere, at depths greater than 600 km (Noll and Marley 1997).

An analogous process of forming phosphine under high H2 pressure and high temperature followed by dredge-up to the observable atmosphere cannot happen on worlds like Venus or Earth for two reasons. First, hydrogen is a trace species in rocky planet atmospheres, so the formation of phosphine is not favored as it is in the deep atmospheres of the H2-dominated giant planets. On Earth H2 reaches 0.55 ppm levels (Novelli et al. 1999), on Venus it is much lower at ~4 ppb (Gruchola et al. 2019; Krasnopolsky 2010). Second, rocky planet atmospheres do not extend to a depth where, even if their atmosphere were composed primarily of hydrogen, phosphine formation would be favored (the possibility that phosphine can be formed below the surface and then being erupted out of volcanoes is addressed separately in Section 3.2.2 and Section 3.2.3, but is also highly unlikely).

Despite such unfavorable conditions for phosphine production, Earth is known to have PH3 in its atmosphere at ppq to ppt levels (see e.g. (Gassmann et al. 1996; Glindemann et al. 2003; Pasek et al. 2014) and reviewed in (Sousa-Silva et al. 2020)). PH3's persistence in the Earth atmosphere is a result of the presence of microbial life on the Earth's surface (as discussed in Section 1.1.2 below), and of human industrial activity. Neither the deep formation of phosphine and subsequent dredging to the surface nor its biological synthesis has hitherto been considered a plausible process to occur on Venus.

More details of the finding are explained in the following two papers published by the scientists:

Whatever the reason for the gas may be, it's a great finding, as it has re-energized the search for extra-terrestrial life, and as we all know: "The Truth is out there…".

– Suramya

September 15, 2020

Neuroscience is starting to figure out why people feel lonely

Filed under: Interesting Sites,My Thoughts — Suramya @ 10:10 PM

Loneliness is a social epidemic that has been amplified by the current pandemic, as humans have an inbuilt desire to be social and interact with each other, and the lockdowns and isolation due to Covid-19 are not helping. The number of cases of clinical depression is going up worldwide, and psychologists are concerned about the impact this will have in the near future.

Humans have been talking about social isolation and loneliness for centuries, but to date we haven't really analyzed it from a neurological point of view: what actually happens in the brain when we are lonely? Does the desire for companionship light up a section of our brain, similar to what happens when we are hungry and craving food? Until recently there wasn't much research on the topic; in fact, until Kay Tye decided to research the neuroscience of loneliness in 2016, there were no published papers on loneliness that contained references to 'cells', 'neurons', or 'brain'. So, while working at the Stanford University lab of Karl Deisseroth, Tye decided to spend some time trying to isolate the neurons in rodent brains responsible for the need for social interaction. In addition to identifying the region in rodents, she has also managed to manipulate the need by directly stimulating the neurons, which is a fantastic breakthrough.

Deisseroth had pioneered optogenetics, a technique in which genetically engineered, light-sensitive proteins are implanted into brain cells; researchers can then turn individual neurons on or off simply by shining lights on them through fiber-optic cables. Though the technique is far too invasive to use in people—as well as an injection into the brain to deliver the proteins, it requires threading the fiber-optic cable through the skull and directly into the brain—it allows researchers to tweak neurons in live, freely moving rodents and then observe their behavior.

Tye began using optogenetics in rodents to trace the neural circuits involved in emotion, motivation, and social behaviors. She found that by activating a neuron and then identifying the other parts of the brain that responded to the signal the neuron gave out, she could trace the discrete circuits of cells that work together to perform specific functions. Tye meticulously traced the connections out of the amygdala, an almond-shaped set of neurons thought to be the seat of fear and anxiety both in rodents and in humans.

One of the first things Tye and Matthews noticed was that when they stimulated these neurons, the animals were more likely to seek social interaction with other mice. In a later experiment, they showed that animals, when given the choice, actively avoided areas of their cages that, when entered, triggered the activation of the neurons. This suggested that their quest for social interaction was driven more by a desire to avoid pain than to generate pleasure—an experience that mimicked the “aversive” experience of loneliness.

In a follow-up experiment, the researchers put some of the mice in solitary confinement for 24 hours and then reintroduced them to social groups. As one would expect, the animals sought out and spent an unusual amount of time interacting with other animals, as if they’d been “lonely.” Then Tye and Matthews isolated the same mice again, this time using optogenetics to silence the DRN neurons after the period in solitary. This time, the animals lost the desire for social contact. It was as if the social isolation had not been registered in their brains.

Since the experiment worked on mice, the next step involved replicating the same thing with humans. Unfortunately, the researchers couldn't use the same method to study human behavior, as no one sane would opt to have a fiber-optic cable wired through their head just to participate in a study. So they fell back on the more imprecise method of using fMRI to scan the brains of volunteers, and Tye was able to identify a voxel (a discrete population of several thousand neurons) that responds to the desire of wanting something, like food or company. In fact, they even managed to separate the two areas responsible for desiring food and desiring company.

This is a fantastic first step, because we have managed to identify the first part of the circuit that makes us social animals. Obviously a lot more study is needed before this has practical applications, but we have taken the first steps towards the goal. It's not hard to imagine a future where we have the ability to help suicidal people by stimulating the area of their brain that enables them to extract joy from social connections, or to suppress the same in people who have to spend long durations alone, for example astronauts on interplanetary voyages or deep-sea researchers. The possibilities are endless.

Source: Why do you feel lonely? Neuroscience is starting to find answers.

– Suramya

