Suramya's Blog : Welcome to my crazy life…

September 3, 2020

Electric Vehicles can now be sold without Batteries in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 11:51 PM

One of the biggest constraints on buying an Electric Vehicle (EV) is cost: even with all the subsidies etc. the cost of an EV is fairly high, and up to 40% of that cost is the cost of the batteries. In a move to reduce the cost of EVs in India, the Indian government is now allowing dealers to sell EVs without batteries, with the customer then having the option to retrofit an electric battery as per their requirements.
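To get a feel for what that 40% figure means for the sticker price, here is a rough back-of-the-envelope sketch. The prices and battery shares are hypothetical illustrations, not figures from the policy:

```python
# Rough illustration (hypothetical numbers): how much an EV's sticker
# price could drop if the battery pack is sold separately.
def price_without_battery(ev_price: float, battery_share: float = 0.40) -> float:
    """Return the sticker price once the battery cost is stripped out.

    battery_share is the fraction of the EV's cost attributable to the
    battery (the article cites up to 40%).
    """
    return ev_price * (1 - battery_share)

ev_price = 1_000_000  # hypothetical EV price in rupees
print(price_without_battery(ev_price))        # 600000.0 with a 40% battery share
print(price_without_battery(ev_price, 0.30))  # with a cheaper 30% pack
```

Even at the lower end of battery cost shares, the up-front saving is substantial, which is presumably the point of the policy.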

When I first read the news I thought they were kidding, and wondered what use an electric car would be without a battery. Then I thought about it a bit more and realized that you could think of it as a dealer not selling a car with a pre-filled fuel tank. We normally get a liter or two of petrol/diesel in the car when we buy it and then top it up with fuel later. Now think of doing something similar with the EV: you get a small battery pack with the car by default (enough to let you drive for a few kilometers) and you have the option to replace it with the battery pack of your choice. This will allow a person to budget their expense by choosing to buy a low power/capacity battery initially if they are not planning on driving outside the city, and later upgrading to a pack with more capacity.

However, some of the EV manufacturers are concerned about the safety aspects of retrofitting batteries and the possibility of warranty-related confusion. They also have questions about how the subsidies under the Centre's EV adoption policy would be determined for vehicles without batteries. Basically, they feel they should have been consulted in more detail before this major change was announced, so as to avoid confusion after the launch.

The policy was announced in mid-August and I think only time will tell how well it works in the market.

More Details on the change: Sale of EVs without batteries: Ather, Hero Electric, etc. laud policy but Mahindra has doubts

– Suramya

September 2, 2020

Happy 20th Birthday Nokia 3310!

Filed under: My Thoughts — Suramya @ 8:51 PM

The Nokia 3310 was launched on 1st Sept 2000 and is a legendary phone. Known for being nearly indestructible, it helped accelerate the mobile phone trend, with around 120 million devices sold during its lifespan. Both Surabhi and I had this phone, and I have personal experience of its near indestructibility: Surabhi once took the phone on a roller coaster and midway through the ride it fell out of her pocket (don't ask me why she had the phone in her pocket, as I don't know). We assumed that the phone was a total loss and that we would have to get a new one. When the ride ended, we searched for the phone pieces and reassembled them. To our surprise the phone immediately booted up without any problems. There was a small crack on the battery panel, but the phone worked without issues for another couple of years. Compare this to my S8, which had a cracked rear panel after falling from the bed.


The legendary Nokia 3310

“The Nokia 3310 is a true icon in the mobile phone world. Having sold over 120 million units its ubiquity meant that it is a device that many people owned, often as their first mobile phone. This means it generates a real sense of nostalgia which further underlines its status as one of the most important mobile phones of all time.

“The robust design made it almost indestructible in daily use and its ease of use meant that it became a firm favourite with customers. It is little surprise the design was rebooted in 2017 by HMD Global, the company that now licenses the Nokia brand for phones.”

The most recent 3310 version apes the aesthetic of the original well, although ditches some of its simplicity for a bevy of modern accoutrements, including a colour display and a 2MP rear camera.

I love my Samsung S10 but really miss the durability of the old cellphones.

– Suramya

September 1, 2020

Background radiation causes Integrity issues in Quantum Computers

Filed under: Computer Related,My Thoughts,Quantum Computing,Tech Related — Suramya @ 11:16 PM

As if Quantum Computing didn’t have enough issues preventing it from being a workable solution already, new research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers. The research has been published in Nature: Impact of ionizing radiation on superconducting qubit coherence.

Quantum computers are super powerful because their basic building block, the qubit (quantum bit), can exist as 0 and 1 simultaneously (yes, it sounds like nonsense; this kind of counter-intuitive behaviour is what led Einstein to deride related quantum effects as 'spooky action at a distance'), allowing them to process orders of magnitude more operations in parallel than regular computing systems. Unfortunately, it appears that these qubits are highly sensitive to their environment, and even minor levels of radiation emitted by trace elements in concrete walls, or by cosmic rays, can cause them to lose coherence, corrupting the calculation/data; this is called decoherence. The longer we can stave off decoherence, the more powerful/capable the quantum computer. We have made significant improvements here over the past two decades, from maintaining coherence for less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.

As per the study, the effect is serious enough to limit coherence times to just a few milliseconds, a level we are expected to reach in the next few years. The only currently known way to avoid the issue is to shield the computer, which means putting these computers underground and surrounding them with a two-ton wall of lead. Another possibility is to use something like a counter-wave of radiation to cancel the incoming radiation, similar to how noise-canceling works. But that is something which doesn't exist today and will require significant technological breakthroughs before it is feasible.
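To make the coherence-time numbers concrete, here is a toy sketch using the simplest textbook model, an exponential decay exp(-t/T2). The model and the hypothetical radiation-limited T2 of 4 ms are my own illustrative assumptions, not figures from the paper:

```python
import math

def coherence_remaining(t_us: float, t2_us: float) -> float:
    """Fraction of qubit coherence left after t_us microseconds,
    using a simple exponential decay model exp(-t/T2)."""
    return math.exp(-t_us / t2_us)

# With today's best T2 of ~200 microseconds (per the article), a 10 us
# operation sequence keeps ~95% of the coherence:
print(coherence_remaining(10, 200))
# If radiation capped T2 at a hypothetical 4 ms (4000 us), a 1 ms
# computation would still retain roughly 78%:
print(coherence_remaining(1000, 4000))
```

The takeaway is that the usable computation window is only a small fraction of T2, which is why a radiation-imposed ceiling of a few milliseconds matters so much.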

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

Quantum Computing is a fascinating field but it really messes with your mind. So I am happy there are folks out there spending time trying to figure out how to get this amazing invention working and reliable enough to replace our existing Bit based computers.

Source: Cosmic rays can destabilize quantum computers, MIT study warns

– Suramya

August 30, 2020

How to write using inclusive language with the help of Microsoft Word

Filed under: Computer Software,Knowledgebase,My Thoughts,Tech Related — Suramya @ 11:59 PM

One of the key aspects of inclusion is inclusive language, and it's very easy to use non-inclusive/gender-specific language in our everyday writing. For example, when you meet a mixed-gender group of people almost everyone will say something to the effect of 'Hey Guys'. I was guilty of the same, and it took a concentrated effort on my part to change my greeting to 'Hey Folks' and make other similar changes. It's the same with written communication, where most people default to male-gender-focused writing. Recently I found out that Microsoft Office's correction tools, which most might associate with bad grammar or improper verb usage, quietly include options that help catch non-inclusive language, including gender and sexuality bias. So I wanted to share it with everyone.

Below are instructions on how to find & enable the settings:

  • Open MS Word
  • Click on File -> Options
  • Select 'Proofing' from the menu on the left and then scroll down on the right side to 'Writing Style' and click on the 'Settings' button.
  • Scroll down to the "Inclusiveness" section, select all of the checkboxes that you want Word to check for in your documents, and click the "OK" button. In some versions of Word you will instead need to scroll down to the 'Inclusive Language' section (it's all the way near the bottom) and check the 'Gender-Specific Language' box.
  • Click OK
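Word does this checking for you, but if you are curious what such a check boils down to, here is a toy sketch of the idea: scan text against a substitution list of gendered terms. The word list is entirely my own example set, not Microsoft's actual rules:

```python
import re

# Toy substitution list (my own examples, not Microsoft's actual dictionary).
SUGGESTIONS = {
    "guys": "folks",
    "chairman": "chairperson",
    "mankind": "humankind",
    "manpower": "workforce",
}

def flag_noninclusive(text: str) -> list[tuple[str, str]]:
    """Return (found_word, suggestion) pairs for flagged terms.
    Uses word boundaries so 'guys' does not match inside 'disguise'."""
    flags = []
    for word, suggestion in SUGGESTIONS.items():
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            flags.append((word, suggestion))
    return flags

print(flag_noninclusive("Hey guys, we need more manpower for this."))
# [('guys', 'folks'), ('manpower', 'workforce')]
```

A real checker is much more sophisticated (context, inflections, tone), but the core loop of "match a term, suggest a neutral alternative" is the same.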

It doesn’t sound like a big deal when you refer to someone by the wrong gender, but trust me, it is a big deal. If you don’t believe me, try addressing a group of men as ‘Hello Ladies’ and then wait for the reactions. If you can’t address a group of guys as ladies, then you shouldn’t refer to a group of ladies as guys either. I think it is common courtesy and requires minimal effort over the long term (initially things will feel a bit awkward, but then you get used to it).

Well this is all for now. Will write more later.

– Suramya

August 29, 2020

You can be identified online based on your browsing history

Filed under: Computer Related,Computer Software,My Thoughts,Tech Related — Suramya @ 7:29 PM

Reliably identifying people online is a bedrock of the billion-dollar advertising industry, and as more and more users become privacy conscious, browsers have been adding features to increase user privacy and reduce the probability of users being identified online. Users can be identified via cookies, supercookies, and so on. Now there is a research paper (Replication: Why We Still Can’t Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories) that claims to be able to identify users based on their browsing histories alone. It builds on the earlier paper Why Johnny Can’t Browse in Peace: On the Uniqueness of Web Browsing History Patterns, re-validating its findings and extending them.

We examine the threat to individuals’ privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can’t Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper’s core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.
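The intuition behind "browsing profiles are highly distinctive" can be sketched in a few lines: treat the set of domains a user visited as a fingerprint. This is a deliberately simplified toy (the paper uses similarity metrics over site vectors, not exact hashes); the example users and sites are made up:

```python
import hashlib

def history_fingerprint(visited_sites: set[str]) -> str:
    """Collapse a set of visited domains into a stable profile hash.
    Sorting first makes the fingerprint independent of visit order."""
    canonical = ",".join(sorted(visited_sites))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

alice = {"news.ycombinator.com", "wikipedia.org", "github.com"}
bob   = {"wikipedia.org", "github.com", "stackoverflow.com"}

# Even largely-overlapping histories produce distinct profiles:
print(history_fingerprint(alice) != history_fingerprint(bob))  # True
# The same user measured again (same sites) matches their old profile:
print(history_fingerprint(alice) == history_fingerprint(set(alice)))  # True
```

With thousands of candidate sites, the number of possible visit-sets explodes combinatorially, which is why even a truncated top-100 history is enough to make most profiles unique.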

Original paper

Olejnik, Castelluccia, and Janc [48] gathered data in a project aimed at educating users about privacy practices. For the analysis presented in [48] they used the CSS :visited browser vulnerability [8] to determine whether various home pages were in a user’s browsing history. That is, they probed users’ browsers for 6,000 predefined “primary links” such as www.google.com and got a yes/no for whether that home page was in the user’s browsing history. A user may have visited that home page and then cleared their browsing history, in which case they would not register a hit. Additionally, a user may have visited a subpage, e.g. www.google.com/maps, but not www.google.com, in which case the probe for www.google.com would also not register a hit. The project website was open for an extended period of time and recorded profiles between January 2009 and May 2011 for 441,627 unique users, some of whom returned for multiple history tests, allowing the researchers to study the evolution of browser profiles as well. With this data, they examined the uniqueness of browsing histories.

This brings to mind a project that I saw a few years ago that would give you a list of websites from the top 1k websites that you had visited in the past using javascript and some script-fu. Unfortunately I can’t find the link to the site right now as I don’t remember the name and a generic search is returning random sites. If I find it I will post it here as it was quite interesting.

Well this is all for now. Will post more later.

– Suramya

August 28, 2020

Got my first bot response to a Tweet and some analysis on the potential Bot

Filed under: Humor,My Thoughts,Tech Related — Suramya @ 10:21 PM

Today I achieved a major milestone of being on the internet 🙂 — I finally had a (potential) bot/troll respond to one of my Tweets with the usual nonsense. Normally I would ignore it, but the response was just so funny that I had to comment on it. The reply was to my Tweet about how we could potentially achieve our target of eradicating Tuberculosis by 2025 because of the masks we are wearing due to Covid-19. You see, TB bacteria spread through the air from one person to another: just like Covid, TB bacteria are released into the air when a person with TB disease of the lungs or throat coughs, speaks, or sings, infecting people nearby when they breathe in these bacteria. Now that wearing a mask is becoming the new normal in most parts of the world (except for some morons who don’t understand/believe science or believe that politics is stronger than science), there is a high chance that it will also reduce the spread of other illnesses spread through the air.


My Tweet & the response to it

Once I saw the response, I clicked on the profile and scrolled through the posting history and saw that a majority of the posts (at least for the amount I was able to stomach while scrolling down) were retweets of anti-masker, Covid-denial, pro-Trump, and anti-vaccine nonsense. As I needed a distraction, I decided to spend a bit of time trying to identify whether the account was just a stupid person or a clever bot, and did a little investigation on the account.

Looking at the account, a couple of things stood out right from the start: the first was that the account was created in July 2020, and the username had a bunch of numbers in it, which is usually the case for automatically created accounts. So I ran a query on the account via Botometer® by OSoMe, which gave me a whole bunch of data on the account, and several data points made it stand out as a potential bot. In just over a month (5 weeks and a day, to be exact) the account had tweeted 6,197 times, with 2,000 tweets in just the past 7 days, which equates to about 12 tweets every hour, every day. The other data point that stood out was that the account tweeted at almost the same time every day, which is usually indicative of a bot.
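For the curious, the arithmetic behind that "about 12 tweets every hour" figure checks out:

```python
# Back-of-the-envelope check of the tweet rate cited above.
total_tweets = 6197
account_age_days = 36          # 5 weeks and a day
recent_tweets = 2000
recent_days = 7

print(round(total_tweets / account_age_days))      # ~172 tweets/day over the account's life
print(round(recent_tweets / recent_days / 24, 1))  # ~11.9 tweets/hour, every hour, in the past week
```

Sustaining roughly one tweet every five minutes around the clock is well beyond what most humans manage, which is exactly the kind of signal bot-detection tools weight heavily.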

Interestingly, the Botometer does give the account a low probability of being a fully automated bot, but that could just be because the person running it is manually feeding the responses and having the system spray them out. Or it could be a bored person doing it for the LOLs, which is code for morons who don’t know better and think they are being ‘cool’ or ‘edgy’ or whatever. But if that’s the case then they really need to get a better hobby.

Well this is all for now. Wear a mask when you go out and stay safe.

– Suramya

PS: I have no patience for the anti-masker/anti-vaccine/anti-science nonsense, so I will be deleting any such comments/responses or making fun of them, depending on my mood at the time.

August 25, 2020

Using Bioacoustic signatures for Identification & Authentication

We have all heard about biometric scanners that identify folks using their fingerprints, their iris scan, or even the shape of their ear. Then we have lower-accuracy authenticating systems like face recognition, voice recognition, etc. Individually they might not be 100% accurate, but combine one or more of these and we have the ability to create systems that are harder to fool. This is not to say that these systems are foolproof, because there are ways around each of the examples I mentioned above: our photos are everywhere, and given a pic of high enough quality it is possible to create a replica of the face or iris or even fingerprints.

Due to the above-mentioned shortcomings, scientists are always on the lookout for more ways to authenticate and identify people. Researchers from South Korea have found that the signatures created when sound waves pass through a human body are unique enough to be used to identify individuals. Their work, described in a study published on 4 October in the IEEE Transactions on Cybernetics, suggests this technique can identify a person with 97 percent accuracy.

“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”

[…]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”

Interestingly, while the setup is not as accurate as fingerprint or iris scans, it is still accurate enough to differentiate between two fingers of the same hand. If the waves required to generate the bioacoustic signatures are validated to be safe for humans over long-term use, then it is possible that we will soon see a broader implementation of this technology in places like airports, buses, public areas, etc., to identify people automatically without them having to do anything. If it can be made portable then it could be used to monitor protests, rallies, etc., which would make it a privacy risk.

The problem with this tech is that it would be harder to fool without taking steps that would make you stand out, like wearing a vest filled with liquid that changes your acoustic signature. That is great when we are just talking about authentication/identification for access control, but it becomes a nightmare when we consider the surveillance aspect.

Source: The Bioacoustic Signatures of Our Bodies Can Reveal Our Identities

– Suramya

August 24, 2020

India has the cheapest Mobile Internet in the world

Filed under: Interesting Sites,My Thoughts,Tech Related — Suramya @ 2:58 PM

Internet services were launched in India on 15th August, 1995 by Videsh Sanchar Nigam Limited, and in November, 1998 the Government opened up the sector to private operators providing Internet services. This year marks the 25th anniversary of the Internet’s launch in India, and it’s astounding how much the landscape has changed in the past 25 years. My first net connection in 1998 was a blazing 33.3kbps dial-up connection that cost Rs 15,000 for 250 hours, which allowed you to browse the internet with graphical tools like Netscape (the precursor to Firefox). For students there was discounted pricing of Rs 5,000 for 250 hours, but they only got access to text/shell-based browsing.

Now, 25 years later, the landscape is completely different. Internet connection costs in India are the cheapest in the world, as per a recent study done for The Worldwide broadband speed league by Cable.co.uk in association with M-Lab.

Five cheapest packages in the world

The five cheapest countries in terms of the average cost of 1GB of mobile data are India ($0.09), Israel ($0.11), Kyrgyzstan ($0.21), Italy ($0.43), and Ukraine ($0.46).

Conversely to the most expensive, none of these countries are islands. Further, they all either contain excellent fibre broadband infrastructure (Italy, India, Ukraine, Israel), or in the case of Kyrgyzstan rely heavily on mobile data as the primary means to keep its populace connected to the rest of the world.

This is based on sampling done in Feb 2020

India (Rank 1, 60 plans measured, sampled 14/02/2020; USD conversion rate frozen 27/04/2020):
  • Average price of 1GB: Rs 6.66 (USD 0.09)
  • Cheapest 1GB for 30 days: Rs 1.63 (USD 0.02)
  • Most expensive 1GB: Rs 209.09 (USD 2.75)

If you compare the costs to prices in the US, you will notice that Internet (data) is significantly more expensive in the US as opposed to India.

United States (Rank 188, 29 plans measured, sampled 24/02/2020):
  • Average price of 1GB: USD 8.00
  • Cheapest 1GB for 30 days: USD 2.20
  • Most expensive 1GB: USD 60.00
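Putting the two tables side by side makes the gap stark; a quick calculation from the average prices quoted above:

```python
# Comparing average mobile data prices from the tables above.
india_usd_per_gb = 0.09
us_usd_per_gb = 8.00

# The US average is roughly 89x the Indian average:
print(round(us_usd_per_gb / india_usd_per_gb))

# Monthly cost of a hypothetical 20 GB of mobile data in each country:
print(round(20 * india_usd_per_gb, 2))  # under $2 in India
print(round(20 * us_usd_per_gb, 2))     # $160 in the US
```

The 20 GB figure is just an illustrative monthly usage number, but it shows why heavy mobile-first internet use is economically viable in India in a way it simply is not at US prices.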

The cheap internet data connections in India are almost entirely due to Reliance Jio. Until Jio launched its services in September 2016, the cost of 1GB of data was Rs 249 (Airtel/Idea) or Rs 251 (Vodafone). After Jio launched, all other ISPs started losing customers to Jio at an astronomical rate and had to cut prices in order to stay in business. Now, 4 years later, we have the cheapest data in the world at ~Rs 6 per GB. 🙂 This proves that healthy competition is the best way to get good service at competitive pricing. Under a monopoly, the provider can set prices as it pleases, and since folks have no alternative they have to pay up.

Check out the full report at: Worldwide mobile data pricing: The cost of 1GB of mobile data in 228 countries.

– Suramya

August 21, 2020

Emotion detection software for Pets using AI and some thoughts around it (and AI in general)

Filed under: Computer Software,Emerging Tech,Humor,My Thoughts,Tech Related — Suramya @ 5:32 PM

Pet owners are a special breed of people: they willingly take responsibility for another life and take care of it. I personally like pets, as long as I can return them to the owner at the end of the day (or hour, depending on how annoying the pet is). I had to take care of a puppy for a week when Surabhi & Vinit were out of town, and that experience was more than enough to confirm my belief in this matter. Others, however, feel differently and spend quite a lot of time and effort talking to their pets, with some even pretending that the dog talks back.

Now, leveraging the power of AI, there is a new app that analyses & interprets the facial expressions of your pet. Folks over at the University of Melbourne built a Convolutional Neural Network-based application called Happy Pets, which you can download from the Android or Apple app stores to try on your pet. They claim to be able to identify the emotion the pet was feeling when the photo was taken.

While the science behind it is cool, and a lot of pet owners who tried the application over at Hacker News seem to like it, I feel it’s a bit frivolous and silly. Plus, it’s hard enough to reliably classify emotions in humans using AI, so I would take the claims with a pinch of salt. The researchers themselves have also not published any numbers on the accuracy of the model.

When I first saw the post about the app it reminded me of another article I had read a few days ago which postulated that ‘Too many AI researchers think real-world problems are not relevant’. At first I thought that this was an author trolling the AI developers but after reading the article I kind of agree with him. AI has a massive potential to advance our understanding of health, agriculture, scientific discovery, and more. However looking at the feedback AI papers have been getting it appears that AI researchers are allergic to practical applications (or in some cases useful applications). For example, below is a review received on a paper submitted to the NeurIPS (Neural Information Processing Systems) conference:

“The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

If I read this correctly, they are basically saying that because this AI paper is for a particular application, it is not interesting enough for the ML community. There is a similar bias in the theoretical physics/mathematics world, where academics who talk about implementing the concepts/theories are looked down upon by the ‘purists’. I personally believe that while the theoretical sciences are all well & good, and we do need people working on them to expand our understanding, at the end of the day if we are not applying these learnings/theorems practically they are of no use. There will be cases where we don’t have the know-how to implement or apply the learnings, but we should not let that stand in the way of practical applications for the things we can implement/use.

To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

What do you think? Do you agree/disagree?

Source: HackerNews

– Suramya

August 20, 2020

Transparent Solar Panels hit Record 8.1% Efficiency

Filed under: Emerging Tech,My Thoughts,Tech Related — Suramya @ 5:24 PM

Solar panels for electricity are awesome; however, the major issue with deploying them is that you need a lot of space. Although the efficiency of the panels has been going up, reducing the space requirements, it still takes a non-trivial amount of space to generate enough power for a house. Portable solar panels are all well and good, but they are slow and expensive. I tried to figure out a way to power my apartment via solar power, but it wasn’t possible without setting panels up on the apartment roof.

This is where transparent solar panels come into play: they let you replace your existing windows with solar panels, which makes them ideal for apartments & office buildings, because you don’t need extra space for the panels, just replacement windows. Transparent solar panels were created in 2014 by researchers at Michigan State University (MSU); however, their efficiency was quite low compared to traditional panels, making them less productive and more expensive, so since then the race has been on to make the panels more efficient.

Researchers from the University of Michigan have now made a significant breakthrough in the manufacturing of color-neutral, transparent solar cells, achieving 8.1% efficiency by using a carbon-based design rather than conventional silicon. The cells do have a slight greenish tinge, like sunglasses, but for the most part appear usable as window panes. More details are available in their release here.

The new material is a combination of organic molecules engineered to be transparent in the visible and absorbing in the near infrared, an invisible part of the spectrum that accounts for much of the energy in sunlight. In addition, the researchers developed optical coatings to boost both power generated from infrared light and transparency in the visible range—two qualities that are usually in competition with one another.

The color-neutral version of the device was made with an indium tin oxide electrode. A silver electrode improved the efficiency to 10.8%, with 45.8% transparency. However, that version’s slightly greenish tint may not be acceptable in some window applications.

Transparent solar cells are measured by their light utilization efficiency, which describes how much energy from the light hitting the window is available either as electricity or as transmitted light on the interior side. Previous transparent solar cells have light utilization efficiencies of roughly 2-3%, but the indium tin oxide cell is rated at 3.5% and the silver version has a light utilization efficiency of 5%.
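The numbers quoted above are consistent with reading light utilization efficiency as roughly the power-conversion efficiency multiplied by the visible transparency. That product form is my own simplified reading of the metric, not the paper's formal definition, but it reproduces the quoted figures as a sanity check:

```python
def light_utilization_efficiency(power_eff: float, transparency: float) -> float:
    """Approximate LUE as power-conversion efficiency times visible
    transparency (a simplification used here only as a sanity check)."""
    return power_eff * transparency

# Silver-electrode cell: 10.8% efficiency at 45.8% transparency
# gives ~0.049, matching the ~5% LUE quoted above.
print(round(light_utilization_efficiency(0.108, 0.458), 3))
```

This also makes the design tension concrete: pushing transparency up pulls efficiency down and vice versa, so progress on LUE requires improving both at once, which is what the optical coatings mentioned earlier are for.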

The researchers are still working on improving the efficiency further, and I am looking forward to new breakthroughs in the field. Hopefully soon we will have panels efficient enough that I will be able to replace my apartment’s windows with solar panels and break even in a reasonable amount of time. 🙂

Source: Slashdot.org

– Suramya

