Suramya's Blog : Welcome to my crazy life…

October 15, 2019

Theoretical paper speculates breaking 2048-bit RSA in eight hours using a Quantum Computer with 20 million Qubits

Filed under: Computer Security,My Thoughts,Quantum Computing,Tech Related — Suramya @ 12:05 PM

If we manage to get a fully functional Quantum Computer with about 20 million qubits in the near future then, according to this theoretical paper, we would be able to factor 2048-bit RSA moduli in approximately eight hours. The paper is quite interesting, although the math in it did give me a headache. However, this is all still purely theoretical as we only have 50-60 qubit computers right now and are a long way away from general purpose Quantum computers. That being said, I anticipate that we will see this technology become available in our lifetime.

We significantly reduce the cost of factoring integers and computing discrete logarithms over finite fields on a quantum computer by combining techniques from Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of 10^-3, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048 bit RSA integers, our construction’s spacetime volume is a hundredfold less than comparable estimates from earlier works (Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ignores overheads from distillation, routing, and error correction) our construction uses 3n + 0.002n lg n logical qubits, 0.3n^3 + 0.0005n^3 lg n Toffolis, and 500n^2 + n^2 lg n measurement depth to factor n-bit RSA integers. We quantify the cryptographic implications of our work, both for RSA and for schemes based on the DLP in finite fields.
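To get a feel for what these formulas mean in practice, here is a quick back-of-the-envelope calculation (my own sketch in Python, not from the paper) that plugs n = 2048 into the abstract-circuit cost expressions quoted above:

import math

def abstract_circuit_costs(n):
    """Plug an n-bit RSA modulus size into the abstract-circuit cost
    formulas quoted in the paper's abstract."""
    lg = math.log2(n)
    logical_qubits = 3 * n + 0.002 * n * lg
    toffolis = 0.3 * n**3 + 0.0005 * n**3 * lg
    measurement_depth = 500 * n**2 + n**2 * lg
    return logical_qubits, toffolis, measurement_depth

qubits, toffolis, depth = abstract_circuit_costs(2048)
print(f"logical qubits    : {qubits:,.0f}")    # roughly 6,200
print(f"Toffoli gates     : {toffolis:.2e}")   # roughly 2.6e9
print(f"measurement depth : {depth:.2e}")      # roughly 2.1e9

The roughly 6,200 figure is the count of logical qubits; the 20 million figure in the headline refers to the physical qubits needed once surface-code error correction and magic state distillation are layered on top.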

Bruce Schneier talks about how Quantum computing will affect cryptography in his essay Cryptography after the Aliens Land. In summary “Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run.”

Well, this is all for now. Will post more later.

– Suramya

October 10, 2019

Taxonomy of Terrible Programmers

Filed under: Humor,Tech Related — Suramya @ 11:58 PM

If you have been in tech for a while you would have had the dubious pleasure of meeting some or all of the types of programmers described in the following post: The Taxonomy of Terrible Programmers

In one of my previous companies I had the pleasure of working with The Arcanist, and trust me, it was a painful experience that I still remember more than a decade later. So what is an Arcanist?

Anyone who has worked on a legacy system of any import has dealt with an Arcanist. The Arcanist’s goal is noble: to preserve the uptime and integrity of the system, but at a terrible cost.

The Arcanist has a simple philosophy that guides his or her software development or administrative practices: if it ain’t broke, don’t fix it – to an extreme.

The day a piece of software under his or her auspices ships, it will forever stay on that development platform, with that database, with that operating system, with that deployment procedure. The Arcanist will see to it, to the best of his ability. He may not win every battle, but he will fight ferociously always.

All change is the enemy – it’s a vampire, seducing less vigilant engineers to gain entry to the system, only to destroy it from within.

The past is the future in the Arcanist’s worldview, and he’ll fight anyone who tries to upgrade his circa 1981 PASCAL codebase to the bitter, tearful end.

We had to fight him to move from a system that required you to edit hex code to make any changes, to a web based UI that controlled the system and provided extra functionality. In the end the project was moved to a different team as everyone realized that he was going to kill it just because he was used to the old system and didn’t want to change.

Check out the linked article for details on the other types. If you recognize some of the behaviours described in the post as something you might do, I suggest you take a good long look at yourself and seriously think about changing, as being classified/identified as one of the types of people on this list is not a great career move.

– Suramya

PS: Before you ask, yes this post links to a really old post. The post has been sitting in my draft folder for ages and I finally decided to publish it.

September 5, 2019

Criminals use AI technology to impersonate CEO for a $243,000 payday

Filed under: Computer Security,My Thoughts,Tech Related — Suramya @ 10:46 AM

Over the past few years AI has become one of the things that is included in everything from cars to lights, whether it makes sense or not, and criminals are not far behind on this trend. We have AI based systems testing computer security, working on bypassing checks and balances in systems etc., and now, in a new twist, AI is being used in vishing as well. Voice phishing, or vishing as it’s sometimes referred to, is a form of criminal phone fraud, using social engineering over the telephone system to gain access to private personal and financial information for the purpose of financial reward.

Anatomy of a Vishing Attack. Source: https://www.biocatch.com/blog/detect-vishing-voice-phishing

In this particular instance criminals used commercially available voice-generating AI software to impersonate the CEO of a German company and then convinced the CEO of its UK based subsidiary to transfer $243,000 to a Hungarian supplier. The AI was able to mimic the voice almost perfectly, including his slight German accent and voice patterns. This is a new phase of crime and unfortunately will not be a one-off case; as criminals realize the potential, these kinds of attacks are bound to increase in frequency. Interestingly, it will also make the biometric voice authentication systems used by certain banks like Citibank more vulnerable to fraud.

To safeguard against the economic and reputational fallout, it’s crucial that all instructions are verified via an alternative channel, i.e. if you get an email asking for a transfer or for details, call the person, and if you get a call asking for a transfer, follow up via email or other means. Do not use a number provided by the caller for verification; call the number in the company address book or in your records.

Well this is all for now. Will post more later.

Thanks to Slashdot.org for the original link.

– Suramya

September 3, 2019

AI Emotion-Detection Arms Race starts with AI created to mask emotions

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 2:31 PM

Over the past few months/years we have been reading a lot about AI being used to identify emotions like fear and confusion, and even traits like lying or the trustworthiness of a person, by analyzing video & audio recordings. This is driving innovations in recruiting, criminal investigations etc. In fact the global emotion detection and recognition market is estimated to witness a compound annual growth rate of 32.7% between 2018 and 2023, driving the market to reach USD 24.74 billion by 2020. So a lot of companies are focusing their efforts in this space, as AI applications that are emotionally aware give a more realistic experience for users. However, there are multiple privacy implications of having a system detect a person’s emotional state when interacting with an online system.

So to counter this trend of systems becoming more and more aware, there is now a group of researchers who have come up with an AI-based countermeasure to mask emotion in spoken words, kicking off an arms race between the two factions. The idea is to automatically convert emotional speech into “normal” speech using AI.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in the speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI’s outputs, which then gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.
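The paper describes a full learning-based pipeline; as a purely illustrative toy (my own sketch, not the researchers’ code), here is what the “flattening” step could look like if you only normalized the loudness contour of the signal, which is one of the simpler emotional cues:

import numpy as np

def flatten_loudness(signal, frame_len=1024):
    """Toy 'emotion masking': rescale every frame of the signal to the same
    RMS energy, crudely flattening loudness-based emotional cues. The real
    system also models pitch and spectral features and uses a trained model
    plus a synthesizer to regenerate natural-sounding speech."""
    out = signal.astype(float).copy()
    target_rms = np.sqrt(np.mean(out ** 2))
    for start in range(0, len(out) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        if rms > 1e-8:
            out[start:start + frame_len] = frame * (target_rms / rms)
    return out

# Demo on a synthetic signal with a loud "emotional" burst in the middle
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t) * np.where((t > 0.4) & (t < 0.6), 3.0, 1.0)
masked = flatten_loudness(speech)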

In a way it’s quite cool because it removes a potential privacy issue, but if you extrapolate from existing research then we have the potential for bigger headaches in the future. Currently we have the capability of removing emotion from an audio recording; how difficult would it be to add emotion to a recording? Not too difficult, if you go through the ongoing research. So we could soon have a system that can take an audio/video recording and change the emotion from sadness to mockery, or from happy to sad. This, combined with the deepfake apps that are already on the market, will cause huge headaches for the public as it would be really hard for us to determine if a given audio/video is authentic or altered.

Article: Researchers Created AI That Hides Your Emotions From Other AI
Paper: Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants

Well this is all for now. Will write more later.

– Suramya

August 12, 2019

LinuxJournal.com: shutdown -h now

Filed under: Computer Related,My Thoughts,Tech Related — Suramya @ 10:24 AM

Last week I got an unpleasant surprise in my mailbox: an email from Linux Journal stating that they were closing up shop effective immediately as they had completely run out of money with no hope of resurrection. LJ was one of the first Linux magazines I wrote for and it will always have a special place in my heart.

IMPORTANT NOTICE FROM LINUX JOURNAL, LLC:
On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen.
–Linux Journal, LLC

The website is up for the moment but might go down at any time. I do have an archive of all LJ issues on my home computer that I made the last time LJ was about to shut down, and I will post them to the site in a few days. This archive doesn’t have the latest releases so I will need to download those before I post them online. In addition, I am sure there are efforts ongoing to archive the website as well, since it had a lot of great content on it. If not, then I will kick off something to archive the site once I get home.

Well this is all for now. It was a great run LJ, you will be missed.

– Suramya

August 7, 2019

Using a slice of wood to make saltwater drinkable

Filed under: My Thoughts,Tech Related — Suramya @ 5:45 PM

“Water, water, every where, nor any drop to drink.” This is an often quoted line from The Rime of the Ancient Mariner by Samuel Taylor Coleridge and is something that is becoming more and more true every day. 71% of the Earth’s surface is covered by oceans, but we still have 2.8 billion people around the world who face water scarcity at least one month out of every year. Earlier this year city officials in Chennai, India declared that “Day Zero” (the day when almost no water is left in the city) had been reached, as all four main reservoirs supplying water to the city had run dry due to deficient monsoon rainfall in the previous years. Because of this, finding more ways of generating drinking water is a high priority for the human race. Without water, life as we know it can’t exist and our civilization can and will collapse.

One of the ways to solve this issue is to convert sea water into drinkable water by filtering the salt out, and there are existing solutions which do this (check out Saudi Arabia’s water desalination plants) but they require a lot of energy and/or specialized engineering. But this is about to change thanks to the efforts of Jason Ren and his colleagues from Princeton University in New Jersey. They have developed a method that uses a new kind of membrane made of American basswood instead of plastic, which enables filtration without requiring high pressure pumping of salt water. Basically, they took a thin slice of American basswood and treated it with a chemical bath to remove extra fibers from the wood and make its surface slippery to water molecules. Once the wood is treated, water flows down one side of the membrane and is heated to the point that it vaporizes. The vapor then travels through the pores in the membrane toward its colder side, leaving the salt behind, and condenses as fresh, cool water.

According to Jason Ren, this process takes less energy than simply boiling all of the saltwater because there’s no need to maintain a high temperature for more than a thin layer of water at a time. In initial testing using this method, the team was able to filter about 20 kilograms of water per square metre of membrane per hour, which is not quite as quick as polymer membranes, but this can improve if the membrane is made thinner.

This is quite a breakthrough, and when I first read the article I was not clear on why we needed to use wood for the process. I mean, we could use a polymer membrane and still achieve the same effect by heating only a thin layer of water at a time. But then I spent some time reading the actual research paper and that’s when I realized what a massive breakthrough this was. Basically, current commercial membrane distillation (MD) membranes have a porosity lower than 0.80, thermal conductivity higher than 0.050 W m^-1 K^-1, and thermal efficiency of up to 60%, whereas the new membrane has a porosity of ~90%, low thermal conductivity (~0.04 W m^-1 K^-1) and a thermal efficiency of ~71%. These factors combined reduce the energy requirements for desalination by a significant amount.
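To see why the thermal efficiency number matters, here is a rough back-of-the-envelope estimate (my own, not from the paper), treating thermal efficiency as the fraction of the supplied heat that actually goes into evaporating water:

LATENT_HEAT_KJ_PER_KG = 2260  # approximate latent heat of vaporization of water

def heat_input_per_kg(thermal_efficiency):
    """Heat that must be supplied per kg of distillate if only a fraction
    (the thermal efficiency) of it ends up evaporating water."""
    return LATENT_HEAT_KJ_PER_KG / thermal_efficiency

commercial = heat_input_per_kg(0.60)   # ~3,770 kJ per kg of fresh water
wood_based = heat_input_per_kg(0.71)   # ~3,180 kJ per kg of fresh water
saving = 1 - wood_based / commercial   # ~15% less heat per kg of water
print(f"{commercial:.0f} kJ/kg vs {wood_based:.0f} kJ/kg, a {saving:.0%} saving")

On top of that, the lower thermal conductivity means less of the supplied heat leaks across the membrane without evaporating anything, which is where the rest of the energy savings come from.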

Now that we have a proof of concept that this works, we need to be able to scale it up massively, and work on this is currently ongoing.

Thanks to Newscientist.com for the original link.
Research Paper: Hydrophobic nanostructured wood membrane for thermally efficient distillation

Well this is all for now. Will post more later.

– Suramya

July 22, 2019

Chandrayaan-2: ISRO spacecraft successfully enters Earth orbit

Filed under: My Thoughts,Tech Related — Suramya @ 3:55 PM

ISRO’s Chandrayaan-2 completed the first stage of the Moon mission when it was successfully injected into an elliptical Earth parking orbit with a perigee of 181.65 km above sea level. This is an amazing achievement by ISRO and is a proud moment for India. After the last-minute abort of the previous launch attempt, all eyes were on ISRO to make a successful launch in an extremely tight launch window of only a few minutes. ISRO Chief K Sivan made the following statement after the launch:

I’m extremely happy to announce that the GSLV MkIII-M1 successfully injected Chandrayaan-2 spacecraft into Earth Orbit. It is the beginning of a historic journey of India towards moon and to land at a place near South Pole to carry out scientific experiments.

Now that the spacecraft has reached Earth orbit, it will start orbit-raising operations followed by a trans-lunar injection using its own power. After that it will head out to the Moon, and below are the different phases of Chandrayaan 2’s journey:

  • July 22 to August 13: Chandrayaan 2 will orbit around the Earth in an elliptical path
  • August 13 to August 19: Course changes to set it up for entering the Moon’s orbit
  • August 19: Enter the Moon’s orbit
  • August 19 to August 31: Chandrayaan 2 will revolve in the Moon’s orbit
  • September 1: The Lander Vikram will detach from the Orbiter and head down to land near the South Pole of the Moon
  • ~September 7: Lander Vikram will make a soft landing in the south polar region of the Moon
  • ~Landing + 4 hours: Rover Pragyaan will roll out of the Lander Vikram and perform different tests on the Moon’s polar surface

@ISRO, a proud nation salutes you and here’s to the journey to new horizons.

BBC Coverage: Chandrayaan-2: India launches second Moon mission

Regards,

Suramya

July 17, 2019

Using Machine Learning To Automatically Translate Long-Lost Languages

Filed under: Computer Software,Interesting Sites,My Thoughts,Tech Related — Suramya @ 1:25 PM

Machine Learning has become such a buzzword that any new product or research being released nowadays has to mention ML somewhere, even when it has nothing to do with it. But this particular usecase is actually very interesting and I am looking forward to more advances on this front. Researchers Jiaming Luo and Regina Barzilay from MIT and Yuan Cao from Google’s AI lab in Mountain View, California have created a machine-learning system capable of deciphering lost languages.

Normally, machine translation programs work by mapping out how words in a given language are related to each other. This is done by processing large amounts of text in the language and creating vector maps of how often each word appears next to every other word, for both the source and target languages. Unfortunately, this requires a large dataset (text) in the language, which is not possible in the case of lost languages, and that’s where the brilliance of this new technique comes in. Building on the fact that when languages evolve over time they can only change in certain ways (e.g. related words tend to keep the same order of characters etc.), they came up with a ruleset for deciphering a language when the parent or child of the language being translated is known.
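To make the “vector maps” idea concrete, here is a tiny illustrative sketch (nothing to do with the MIT/Google system itself) that counts how often words co-occur within a small window; counts like these are the raw material that word-embedding methods turn into per-word vectors:

from collections import Counter

def cooccurrence_counts(corpus, window=2):
    """Count how often each pair of words appears within `window` tokens of
    each other. Embedding methods turn counts like these into per-word
    vectors whose geometry reflects how words relate to each other."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                counts[tuple(sorted((word, tokens[j])))] += 1
    return counts

corpus = ["the king rules the land", "the queen rules the land"]
for pair, n in cooccurrence_counts(corpus).most_common(3):
    print(pair, n)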

To test out their theory/process they tried it out on two lost languages, Linear B and Ugaritic. Linguists know that Linear B encodes an early version of ancient Greek and that Ugaritic, which was discovered in 1929, is an early form of Hebrew. After processing, the system was able to correctly translate 67.3% of Linear B cognates into their Greek equivalents, which is a remarkable achievement and marks a first in the field.

There are still some restrictions with the new algorithm, in that it doesn’t work if the progenitor language is not known. But work on the system is ongoing, and who knows, some new breakthrough might be just around the corner. Plus there is always a brute force approach where the system tries translating a given language using every possible language as the progenitor. It would require a lot of compute and time, but it is something to look at as an option.

Well, this is all for now. Will write more later.

– Suramya

Source: Machine learning has been used to automatically translate long-lost languages

May 27, 2019

Microsoft and Brilliant launch Online Quantum Computing Class that actually looks useful

Quantum computing (QC) is the next big thing and everyone is eager to jump on the bandwagon, so my email & news feeds are usually flooded with articles on how QC will solve all my problems. I don’t deny that there are some very interesting usecases out there that would benefit from quantum computers, but after a while it gets tiring. That being said, I just found out that Microsoft & Brilliant have launched a new interactive course on Quantum Computing that allows you to build quantum algorithms from the ground up, with a quantum computer simulated in your browser, and I feel it’s pretty cool and a great initiative. The tutorial teaches you Q#, which is Microsoft’s answer to the question of which language to use for quantum computing code. Check it out if you are interested in learning how to code in Q#.

The course starts with basic concepts and gradually introduces you to Microsoft’s Q# language, teaching you how to write ‘simple’ quantum algorithms before moving on to truly complicated scenarios. You can handle everything on the web (including quantum circuit puzzles) and the course’s web page promises that by the end of the course, “you’ll know your way around the world of quantum information, have experimented with the ins and outs of quantum circuits, and have written your first 100 lines of quantum code — while remaining blissfully ignorant about detailed quantum physics.”
Brilliant has more than 8 million students and professionals worldwide learning subjects from algebra to special relativity through guided problem-solving. In partnership with Microsoft’s quantum team, Brilliant has launched an interactive course called “Quantum Computing,” for learning quantum computing and programming in Q#, Microsoft’s new quantum-tuned programming language. The course features Q# programming exercises with Python as the host language (one of our new features!). Brilliant and Microsoft are excited to empower the next generation of quantum computer scientists and engineers and start growing a quantum workforce today.

Starting from scratch

Because quantum computing bridges the fields of information theory, physics, mathematics, and computer science, it can be difficult to know where to begin. Brilliant’s course, integrated with some of Microsoft’s leading quantum development tools, provides self-learners with the tools they need to master quantum computing.
The new quantum computing course starts from scratch and brings students along in a way that suits their schedule and skills. Students can build and simulate simple quantum algorithms on the go or implement advanced quantum algorithms in Q#.

Once you have gone through the tutorial you should also check out IBM Q, which allows you to code on a quantum computer for free.
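If you want a taste of what “simulating a quantum computer” means before signing up, here is a tiny NumPy sketch (my own, nothing to do with Q# or the course material) of the usual hello-world example: put a qubit into superposition with a Hadamard gate and sample measurements:

import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)        # the |0> state

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                  # amplitudes are both 1/sqrt(2)
probs = np.abs(state) ** 2        # measurement probabilities, 0.5 each

rng = np.random.default_rng(42)
shots = rng.choice([0, 1], size=1000, p=probs)
print("P(0) =", probs[0], "- measured 0 in", (shots == 0).mean() * 100, "% of shots")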

– Suramya

September 3, 2018

Software hack to keep my speaker powered on

Filed under: Computer Hardware,Linux/Unix Related,Tech Related,Tutorials — Suramya @ 6:37 PM

A little while ago I bought a new Klipsch speaker, as my previous one was starting to die, and I love it except for one minor irritation. The speaker has built-in power saving tech that powers it off if it’s not used for a certain period of time, which means I have to physically power it on every time I want to listen to music. This was annoying, as I would invariably be comfortably seated and start the music before remembering that I needed to power it on. Also, I could not start the music from my phone whenever I felt like it, as the speaker was powered off and I would have to walk to the room to power it on.

After living with the irritation for a while I finally decided to do something about it and whipped up a small script that checks if any music/audio is already playing on the system and, if not, plays a 1 second mp3 of an ultrasonic beep. This forces the system to keep the speaker on, and I love it, as now I can start the music first thing in the morning while lazing in bed. 🙂

The script requires mpg123 to be installed, which you can do on a Debian system by issuing the following command:

apt-get install mpg123

The script itself is only a few lines long:

#!/bin/bash

# Check whether any ALSA PCM device is currently playing (status RUNNING);
# if nothing is playing, play a short inaudible mp3 to keep the speaker awake.
if ! grep RUNNING /proc/asound/card*/pcm*/sub*/status &> /dev/null ; then
    /usr/bin/mpg123 -q /home/suramya/bin/KeepSpeakerOn.mp3 &> /dev/null
fi

What it does is check whether any of the PCM sound cards has a status of RUNNING and, if not, it plays the mp3. I have a cron job scheduled to run the script every minute:

XDG_RUNTIME_DIR=/run/user/1000

* * * * * /home/suramya/bin/KeepSpeakerOn.sh 

One interesting issue I hit during the initial testing was that the mpg123 application kept segfaulting whenever it was initiated from cron, but it would work fine if I ran the same command from the command prompt. The error I got in the logs was:

High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3
        version 1.25.10; written and copyright by Michael Hipp and others
        free software (LGPL) without any warranty but with best wishes
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
/home/suramya/bin/KeepSpeakerOn.sh: line 5: 10993 Segmentation fault      /usr/bin/mpg123 /home/suramya/bin/KeepSpeakerOn.mp3 -v

I spent a while trying to debug this and finally figured out that the fix was to add XDG_RUNTIME_DIR=/run/user/<userid> to the crontab, where you can get the value of <userid> by running the following command and taking the value of uid:

id <username_the_cronjob_is_running_under> 

e.g.

suramya@StarKnight:~/bin$ id suramya
uid=1000(suramya) gid=1000(suramya) groups=1000(suramya),24(cdrom)....

Putting that line in the cron entry resolved the issue. Not sure why, but it works, so…

Well this is all for now. Will write more later.

– Suramya

