Suramya's Blog : Welcome to my crazy life…

April 19, 2023

Finally a useful AI Implementation: Making spoken dialog easier to hear in movies and shows

Filed under: Emerging Tech,News/Articles,Tech Related — Suramya @ 6:37 PM

Finally, an AI use case that is actually useful. There are a ton of places where AI seems to be shoehorned in for no reason, but this recent announcement from Amazon about Dialogue Boost is different: it is a new feature that lets you raise the volume of dialogue relative to background music and effects to a consistent level, so you can actually hear the dialogue without nearly shattering your eardrums when a sudden explosion happens.
It is still in the testing phase and has only been released on some of their products so far, but I am looking forward to it reaching general availability.

Dialogue Boost works by analyzing the original audio in a movie or series and identifying points where dialogue may be hard to hear above background music and effects, at which point speech patterns are isolated and the audio is enhanced to make the dialogue clearer. The AI targets spoken dialogue specifically, rather than simply amplifying the center audio channel the way a typical speaker or home theater setup does. Similar features exist on high-end theater setups and certain smart TVs, but Amazon is the first streamer to roll one out.
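Amazon hasn't published how Dialogue Boost is implemented, but the basic idea of measuring the dialogue level relative to the background and applying just enough gain to lift it can be sketched in a few lines. This is a toy illustration with made-up sample values, not the actual algorithm:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dialogue_gain(dialogue, background, target_ratio_db=6.0):
    """Linear gain needed to lift the dialogue block so it sits
    target_ratio_db above the background block."""
    current_db = 20 * math.log10(rms(dialogue) / rms(background))
    needed_db = max(0.0, target_ratio_db - current_db)
    return 10 ** (needed_db / 20)

# Made-up samples: quiet speech buried under a loud explosion.
speech = [0.10, -0.10, 0.12, -0.08]
boom = [0.50, -0.50, 0.45, -0.55]
g = dialogue_gain(speech, boom)
boosted = [s * g for s in speech]   # speech now sits 6 dB above the boom
```

A real implementation would first have to separate the speech from the mixed track (which is where the machine learning comes in); the gain step itself is the easy part.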

I have gotten used to keeping subtitles on when I watch something, because that ensures I don't miss any dialogue due to the background music or sound effects in the show or movie. This looks like it will remove that requirement. I think I will still end up keeping the subtitles on, but this will certainly help.

Source: Amazon’s New Tool Adjusts Sound So You Can Actually Understand Movie and TV Dialogue
Announcement: Prime Video launches a new accessibility feature that makes it easier to hear dialogue in your favorite movies and series

– Suramya

March 6, 2023

Twitter is down… Again!

Filed under: Computer Related,Humor — Suramya @ 11:07 PM

Twitter downtimes are becoming more and more common, and even though I don't spend that much time on Twitter anymore (Mastodon is so much better and more fun), this one was too good to ignore.
If you visit Twitter.com (or any URL on Twitter) right now, instead of the content you get an API error message stating that “Your current API plan does not include access to this endpoint, please see https://developer.twitter.com/en/docs/twitter-api for more information”.


Current API plan does not include access to this endpoint

Looks like the team managed to nuke their own connections to the API while attempting to restrict some API calls to paid customers only. This is an error that should have been caught in UAT or QA testing; checking that all services still work is the most basic validation after a major change like this one. But it looks like the change was pushed straight to prod without validation.
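A minimal post-deploy smoke test of the kind that would have caught this is only a few lines. The endpoints and status codes below are hypothetical stand-ins, and the fetcher is injected so the same check can run against a mock:

```python
def smoke_test(endpoints, fetch_status):
    """Check that every critical endpoint still answers 200 after a deploy.
    fetch_status is any callable mapping a URL to an HTTP status code, so
    the same check can run against production or a staging mock."""
    failures = {}
    for url in endpoints:
        code = fetch_status(url)
        if code != 200:
            failures[url] = code
    return failures  # empty dict == safe to promote the release

# Hypothetical post-deploy check, with a fake fetcher standing in for prod:
critical = ["https://example.com/home", "https://example.com/api/timeline"]
fake_prod = {"https://example.com/home": 200,
             "https://example.com/api/timeline": 403}
print(smoke_test(critical, fake_prod.get))  # → {'https://example.com/api/timeline': 403}
```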

I guess firing the majority of your teams with almost no notice or knowledge-transfer sessions is a bad move and makes it harder to keep your site up and running. Who knew?

– Suramya

March 3, 2023

Someone is now claiming that they can’t use Microsoft Windows for “Religious Reasons”

Filed under: Humor,News/Articles,Tech Related — Suramya @ 3:10 PM

The Operating System (OS) wars have been going on for as long as we have had computers, and the ferocity with which some OS users defend their preference at times borders on religious fundamentalism. The following incident takes it to its logical conclusion: a new joiner at a company doesn't want to use Windows on their office laptop because their religion does not allow the use of Apple- or Microsoft-owned operating systems.


Employee claims that she can’t use Microsoft Windows for “Religious Reasons”

I wonder what their stance is on using other software/websites/services owned by Apple or Microsoft. Have they stopped using GitHub because it is owned by MS? What about LinkedIn? Or Mojang, or Xbox? Or any of the thousands of companies they own or have stakes in? Do they use Beats headsets? Shazam? Akamai? Apple either owns or has stakes in those and a ton of other companies as well.

I personally use Linux as my primary OS and would always prefer to use it whenever possible. However, I have had to use Windows at work in most of the companies I have worked at, because that was the standard setup there. I did push for Linux in some orgs, and in a few companies we ended up replacing Windows with Linux for some of the developers. That being said, refusing to work with an OS just because you don't want to is a bit over the top for me, and calling it against your religion makes it even more out there…

Source: Whitney Merrill on Mastodon

– Suramya

April 13, 2022

Internet of Things (IoT) Forensics: Challenges and Approaches

The Internet of Things (IoT) consists of interconnected devices with sensors and software, connected to automated systems that gather information and perform various actions based on the data collected. It is one of the fastest growing markets, with enterprise IoT spending expected to grow by 24% in 2021 from $128.9 billion in 2020 (IoT Analytics, 2021).

This massive growth brings new challenges to the table as administrators need to secure IoT devices in their network to prevent them from being security threats to the network and attackers have found multiple ways through which they can gain unauthorized access to systems by compromising IoT systems.

IoT forensics is a subset of the digital forensics field and is the new kid on the block. It deals with forensic data collected from IoT devices and follows the same procedure as regular computer forensics, i.e., identification, preservation, analysis, presentation, and report writing. The challenges of IoT come into play when we realize that, in addition to the IoT sensor or device, we also need to collect forensic data from the internal network or the cloud when performing an investigation. IoT forensics can therefore be divided into three categories: device-level forensics, network forensics, and cloud forensics. This matters because IoT forensics is heavily dependent on cloud forensics (as a lot of the data is stored in the cloud) and on analyzing the communication between devices, in addition to the data gathered from the physical device or sensor.

Why IoT Forensics is needed

The proliferation of Internet-connected devices and sensors has made life a lot easier for users and has a lot of benefits associated with it. However, it also creates a larger attack surface that is vulnerable to cyberattacks. In the past, IoT devices have been involved in incidents including identity theft, data leakage, accessing and using Internet-connected printers, commandeering of cloud-based CCTV units, SQL injections, phishing, ransomware, and malware targeting specific appliances such as VoIP devices and smart vehicles.

With attackers targeting IoT devices and then using them to compromise enterprise systems, we need the ability to extract and review data from the IoT devices in a forensically sound way to find out how the device was compromised, what other systems were accessed from the device etc.

In addition, forensic data from these devices can be used to reconstruct crime scenes and to prove or disprove hypotheses. For example, data from an IoT-connected alarm can be used to determine where and when the alarm was disabled and a door was opened. If a suspect wears a smartwatch, then data from the watch can be used to identify the person or infer what they were doing at the time. In a recent arson case, data from the suspect's smartwatch was used to implicate him in the arson (Reardon, 2018).

The data from IoT devices can be crucial in identifying how a breach occurred and what should be done to mitigate the risk. This makes IoT forensics a critical part of the Digital Forensics program.

Current Forensic Challenges Within the IoT

The IoT forensics field has a lot of challenges that need to be addressed, and unfortunately none of them have a simple solution. As shown in the research done by M. Harbawi and A. Varol (Harbawi, 2017), we can divide the challenges into six major groups: identification, collection, preservation, analysis and correlation, attack attribution, and evidence presentation. We will cover each of these below.

A. Evidence Identification

One of the most important steps in a forensic examination is identifying where the evidence is stored and collecting it. This is usually quite simple in traditional digital forensics, but in IoT forensics it can be a challenge, as the required data could be stored in a multitude of places such as the cloud or proprietary local storage.

Another problem is that since IoT nodes interact with each other autonomously and in real time, it is extremely difficult to reconstruct the crime scene and to identify the scope of the damage.

A report by the International Data Corporation (IDC) estimates that the data generated by IoT devices between 2005 and 2020 will grow to more than 40,000 exabytes (Yakubu et al., 2016), making it very difficult for investigators to identify the data relevant to an investigation while discarding the irrelevant data.

B. Evidence Acquisition

Once the evidence required for the case has been identified, the investigative team still has to collect it in a forensically sound manner that will allow them to analyze the evidence and present it in court for prosecution.

Due to the lack of a common framework or forensic model for IoT investigations, this can be a challenge: the method used to collect evidence can itself be challenged in court due to omissions in the way it was collected.

C. Evidence Preservation and Protection

After the data is collected, it is essential that the chain of custody is maintained and that the integrity of the data is validated and verifiable. In IoT forensics, evidence is collected from multiple remote servers, which makes maintaining a proper chain of custody a lot more complicated. Another complication is that since these devices usually have limited storage capacity and run continuously, there is a possibility of the evidence being overwritten. We can transfer the data to a local storage device, but then ensuring the chain of custody is unbroken and verifiable becomes more difficult.
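One standard way to make integrity "validated and verifiable" is to record a cryptographic digest at acquisition time. A minimal sketch, with a made-up evidence record:

```python
import hashlib

def acquire_with_hash(evidence_bytes):
    """Record a SHA-256 digest at acquisition time; re-hashing any later
    copy and comparing digests shows whether the evidence was altered."""
    return hashlib.sha256(evidence_bytes).hexdigest()

original = b"sensor log: door opened 02:13, alarm disabled 02:14"
digest_at_seizure = acquire_with_hash(original)

working_copy = bytes(original)                    # faithful copy: matches
assert acquire_with_hash(working_copy) == digest_at_seizure

tampered = original.replace(b"02:14", b"03:14")   # edited copy: detected
assert acquire_with_hash(tampered) != digest_at_seizure
```

In practice the digest itself is recorded in the custody documentation and signed, so the chain of custody covers the hash as well as the media.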

D. Evidence Analysis and Correlation

Because IoT nodes operate continuously, they produce an extremely high volume of data, making it difficult to analyze and process everything collected. Also, since in IoT forensics there is less certainty about the source of the data and who created or modified it, it is difficult to extract information about the ownership and modification history of the data in question.

With most IoT devices not storing metadata such as timestamps or location information, along with issues created by different time zones and clock skew/drift, it is difficult for investigators to create causal links from the data collected and perform analysis that is sound, not subject to interpretation bias, and defensible in court.
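The time-zone and clock-skew part of this is at least mechanically fixable once each device's skew has been measured against a reference clock. A sketch of normalizing device timestamps onto one UTC timeline (the device offsets and skews here are invented):

```python
from datetime import datetime, timedelta, timezone

def normalize(local_time, utc_offset_hours, skew_seconds):
    """Convert a device-local timestamp to UTC and subtract the device's
    measured clock skew, so events from different nodes share a timeline."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_time.replace(tzinfo=tz).astimezone(timezone.utc) \
        - timedelta(seconds=skew_seconds)

# Two devices logging the same moment: a camera in IST running 90 s fast,
# and an alarm panel on UTC with no measured skew (values invented).
cam = normalize(datetime(2022, 4, 13, 7, 31, 30), 5.5, 90)
alarm = normalize(datetime(2022, 4, 13, 2, 0, 0), 0, 0)
assert cam == alarm   # both resolve to 02:00:00 UTC
```

The hard part the paper points to remains: devices that never recorded timestamps at all, or whose drift was never measured, cannot be lined up this way.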

E. Attack and Deficit Attribution

IoT forensics requires a lot of additional work to ensure that the device's physical and digital identities are in sync and that the device was not being used by another person at the time. For example, if a command given to Alexa by a user is evidence in a case against them, then the examiner needs to confirm that the person giving the command was physically near the device at the time and that the command was not given remotely over the phone.

F. Evidence Presentation

Due to the highly complex nature of IoT forensics and of how the evidence is collected, it is difficult to present the data in court in an easy-to-understand way. This makes it easier for the defense to challenge the evidence and its interpretation by the prosecution.

Opportunities of IoT Forensics

IoT devices bring new sources of information into play that can provide evidence that is hard to delete and is most of the time collected without the suspect's knowledge. This makes it hard for them to account for that evidence in their testimony, and it can be used to trip them up. The information is also harder to destroy because it is stored in the cloud.

New frameworks and tools such as Zetta, Kaa, and M2MLabs Mainspring are now becoming available that make it easier to collect information from IoT devices in a forensically sound way.

Another group is pushing to include blockchain-based evidence chains in the digital and IoT forensics field, to ensure that collected data is stored in a forensically verifiable manner that can't be tampered with.
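The core of such a blockchain-style evidence chain is just hash linking: each entry commits to the hash of the previous one, so tampering anywhere breaks verification from that point on. A minimal sketch with hypothetical records, not any specific proposed framework:

```python
import hashlib

def chain_append(chain, record):
    """Append a record linked to the hash of the previous entry, so
    altering any earlier record breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain):
    """Walk the chain re-computing every link; any tampering fails."""
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev + entry["record"]).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
for rec in ["seized device A", "imaged device A", "analyzed image"]:
    chain_append(log, rec)
assert verify(log)
log[1]["record"] = "imaged device B"   # tamper with the middle entry
assert not verify(log)
```

The full proposals add distributed replication and signatures on top, so no single party can quietly rewrite the whole chain.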

Conclusion

IoT forensics is becoming a vital field of investigation and a major subcategory of digital forensics. With more and more devices getting connected to each other, increasing the attack surface, it is very important that these devices are secured and that there is a sound way of investigating if and when a breach happens.

Tools using Artificial Intelligence and Machine Learning are being created that will let us investigate breaches and attacks faster and more accurately.

References

Reardon, M. (2018, April 5). Your Alexa and Fitbit can testify against you in court. Retrieved from https://www.cnet.com/tech/mobile/alexa-fitbit-apple-watch-pacemaker-can-testify-against-you-in-court/.

M. Harbawi and A. Varol, “An improved digital evidence acquisition model for the Internet of Things forensic I: A theoretical framework”, Proc. 5th Int. Symp. Digit. Forensics Security (ISDFS), pp. 1-6, 2017.

Yakubu, O., Adjei, O., & Babu, N. (2016). A review of prospects and challenges of internet of things. International Journal of Computer Applications, 139(10), 33–39. https://doi.org/10.5120/ijca2016909390


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

January 23, 2022

Some thoughts on cryptocurrencies and why it is better to hold off on investing in them

Filed under: Computer Related,My Thoughts,Tech Related — Suramya @ 1:26 AM

It seems that every other day (or every other hour if you are unlucky) someone is trying to get people to use cryptocurrency, claiming that it is awesome and not at all dependent on government regulations and thus won't fluctuate that much. Famous people are pushing it; others, like New York City Mayor Eric Adams, are trying to raise awareness and have decided to convert their first paycheck to crypto; El Salvador started accepting cryptocurrency as legal tender; and so on. However, the promises made by crypto enthusiasts don't translate into reality, as the market remains extremely volatile.

I see people posting on Twitter that cryptocurrencies are better because they are stable, but in my opinion, if a currency can drop 20% because Elon Musk tweeted a broken-heart emoji, then it is not something I want to use to store my savings. Earlier this week the entire Bitcoin market dropped over 47% from its high back in Nov 2021. Mayor Adams's paycheck, which was converted to crypto, is now worth about half of what it was when he invested it, and that is a massive drop. Imagine losing 50% of your savings in one shot; you might suddenly have no way to pay rent or cover emergency repairs or hospitalization. Even El Salvador has seen its credit become four times worse than it was before it moved to Bitcoin, and people there are complaining that the promised reduction in the cost of converting to/from international currencies is a myth, as they are paying more in transaction costs than they were before.

Another major issue with cryptocurrency is the ecological hit caused by mining. According to research done by the University of Cambridge, Bitcoin globally uses more power per year than the entire population of Argentina. The recent unrest and protests in Kazakhstan were sparked by surging fuel prices, driven in part by the migration of Bitcoin miners to the country after China banned them. This put a lot of strain on the electricity grid and forced price increases, which kicked off massive protests that have caused an untold number of deaths. Multiple folks are coming up with new cryptocurrencies that claim to be carbon neutral, but so far none of them have delivered on the promise.

Bitcoin is thought to consume 707 kWh per transaction. In addition, the computers generate heat and need to be kept cool, which consumes further energy. And while it's impossible to know exactly how much electricity Bitcoin uses, because different computers and cooling systems have varying levels of energy efficiency, a University of Cambridge analysis estimated that Bitcoin mining consumes 121.36 terawatt-hours a year. This is more than all of Argentina consumes, and more than the consumption of Google, Apple, Facebook, and Microsoft combined.
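As a back-of-the-envelope check, the two cited figures are at least mutually plausible; dividing the annual estimate by the per-transaction estimate is just arithmetic on the quoted numbers, not an independent measurement:

```python
annual_twh = 121.36    # Cambridge estimate, terawatt-hours per year
per_tx_kwh = 707       # widely cited per-transaction estimate, kWh

annual_kwh = annual_twh * 1e9          # 1 TWh = 1,000,000,000 kWh
implied_tx_per_year = annual_kwh / per_tx_kwh
print(f"{implied_tx_per_year / 1e6:.0f} million transactions/year")  # → 172 million transactions/year
```

That is the same order of magnitude as Bitcoin's actual yearly on-chain transaction count, which is roughly how per-transaction figures like 707 kWh are derived; note that the number mostly reflects mining overhead, not the marginal cost of sending one transaction.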

Check out this fantastic (though very long – 2hr+) video on economic critique of NFTs, DAOs, crypto currency and web3. (H/t to Cory Doctorow)

In summary, I would recommend against investing in crypto currencies till the issues highlighted above are resolved (if they are ever resolved).

– Suramya

May 15, 2021

Providing Oxygen through the intestines in Mammals is now possible as per research

Filed under: My Thoughts,News/Articles,Science Related — Suramya @ 11:53 PM

It takes a certain kind of mind to decide that today I am going to test whether mammals can absorb oxygen through their intestines. Apparently some aquatic animals, like sea cucumbers and catfish, breathe through their intestines, and since humans can absorb medicines through their intestines, Takanori Takebe, a gastroenterologist from Cincinnati Children's Hospital, decided to run a study to see if they can absorb oxygen as well. To test this, they injected pure pressurized oxygen into the rectums of the scrubbed mice (mice whose intestinal mucus layer had been thinned) and four of the seven unscrubbed ones. There was an immediate improvement in the O2 levels of the mice, with 75% of the scrubbed mice surviving the procedure.

Obviously that is not a great survival rate, and the scrubbing procedure is dangerous and involved, but it did prove that mammals can absorb O2 through their intestines. So they looked at using perfluorocarbons, which can carry a high level of O2, and gave the rats and pigs an enema of the fluid. They saw an almost 15% improvement in blood oxygen saturation, allowing the subjects to recover from hypoxia.

These two tests prove that mammals can breathe through their intestines, but a lot of study still needs to be done to establish the safety of the procedure. If things go smoothly, we could be looking at a new way to provide oxygen to patients when O2 canisters are in limited supply, as is currently the case in India due to the Covid crisis.

But this doesn’t mean that mouth-to-mouth CPR will be replaced with mouth-to-ass CPR. (I can hear the sigh of relief from medical professionals and emergency care folks.)

More details on the study: ScienceMag: Mammals can breathe through their intestines
Full Paper: Mammalian enteral ventilation ameliorates respiratory failure

– Suramya

September 26, 2020

Source code for multiple Microsoft operating systems including Windows XP & Server 2003 leaked

Filed under: Computer Related,Tech Related — Suramya @ 5:58 PM

Windows XP & Windows Server 2003 source code leaked online earlier this week, and even though these operating systems are almost two decades old, the leak is significant. Firstly, some of the core XP components are still in use in Windows 7/8/10, so if a major bug is found in any of those subsystems after people analyze the code, it will have a significant impact on Redmond's modern OSes as well. Secondly, it will give everyone a chance to understand how the Windows OS works, so that they can enhance tools like WINE to have better compatibility with Windows. The other major impact will be on systems that still run XP: ATMs, embedded systems, point-of-sale terminals, set-top boxes, etc. Those will be hard to upgrade and protect, as in some cases the companies that made the devices are no longer in business, and in other cases the software is installed on hardware that is hard to upgrade.

This is not the first time Windows source code has leaked to the internet. In early 2000 a mega torrent of all MS operating systems going back to MS-DOS was released; it allegedly contained the source code for the following OSes:

OS from filename Alleged source size (bytes)
——————— —————————
MS-DOS 6 10,600,000
NT 3.5 101,700,000
NT 4 106,200,000
Windows 2000 122,300,000
NT 5 2,360,000,000

Leaked Data from the latest leak


Alleged contents of the Torrent file with MS Source Code.

The leaked code is available for download at most torrent sites; I am not going to link to it for obvious reasons. If you want to check it out, you can download it, but as always be careful of what you download off the internet, as it might contain viruses and/or trojans. This is especially true if you are downloading the torrent on a Windows machine. Several users on Twitter claim that the source code for the original Xbox is included as well, but reports vary on this. I haven't downloaded it myself, so I can't say for sure either way.

Keep in mind that the leak was illegal, and just because the code has leaked doesn't mean you can use it to build a clone of Windows XP without written authorization from Microsoft.

Source: ZDNet: Windows XP source code leaked online, on 4chan, out of all places

– Suramya

September 21, 2020

Diffblue’s Cover is an AI powered software that can write full Unit Tests for you

Writing unit test cases for your software is one of the most boring parts of software development, even though having accurate tests allows us to develop code faster and with more confidence. A full test suite lets a developer ensure that their changes didn't break other parts of the project that were working fine earlier. This makes unit tests an essential part of CI/CD (Continuous Integration and Continuous Delivery) pipelines; it is hard to do frequent releases without rigorous testing. For example, the SQLite database engine has 640 times as much testing code as code in the engine itself:

As of version 3.33.0 (2020-08-14), the SQLite library consists of approximately 143.4 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 640 times as much test code and test scripts – 91911.0 KSLOC.
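The quoted ratio is easy to verify from the two numbers in the excerpt:

```python
library_ksloc = 143.4    # SQLite 3.33.0 library source, KSLOC
test_ksloc = 91911.0     # test code and test scripts, KSLOC

ratio = test_ksloc / library_ksloc
print(f"test-to-library ratio: {ratio:.0f}x")  # → test-to-library ratio: 641x
```

Which rounds to the "640 times" SQLite quotes.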

Unfortunately, since tests are boring and don't give immediate tangible results, they are the first casualties when a team is under a time crunch for delivery. This is where Diffblue's Cover comes into play. Diffblue was spun out of the University of Oxford following research into how to use AI to write tests automatically. Cover uses AI to write complete unit tests, including logic that reflects the behavior of the program, unlike existing tools that generate unit tests from templates and depend on the user to provide the logic for the test.
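Cover itself works on Java code, but the general idea of generating tests that pin down a program's current behavior (often called characterization testing) can be sketched in Python; the function under test here is invented for illustration:

```python
def capture_behavior(fn, inputs):
    """Sketch of the idea behind tools like Cover: run the code as-is,
    record its current outputs, and pin them down as assertions. The
    tool trusts the code, so existing bugs get baked into the tests."""
    return [(args, fn(*args)) for args in inputs]

def legacy_discount(price, loyal):   # hypothetical legacy code under test
    return round(price * (0.9 if loyal else 1.0), 2)

cases = capture_behavior(legacy_discount, [(100, True), (100, False), (19.99, True)])

def test_legacy_discount():          # the "generated" regression test
    for args, expected in cases:
        assert legacy_discount(*args) == expected

test_legacy_discount()               # passes now; fails if a change alters behavior
```

Cover's actual analysis is far more sophisticated (it explores branches and constructs inputs itself), but the contract is the same: the generated tests describe what the code does today, not what it should do.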

Cover has now been released as a free Community Edition for people to see what the tool can do and try it out themselves. You can download the software from here, and the full datasheet on the software is available here.


Using Cover IntelliJ plug-in to write tests

The software is not foolproof, in that it doesn't identify bugs in the source code: it assumes the code is working correctly when the tests are added, so if there is incorrect logic in the code it won't be able to help you. On the other hand, if the original logic was correct, it will let you know if later changes break any of the existing functionality.

Lodge acknowledged the problem, telling us: “The code might have bugs in it to begin with, and we can’t tell if the current logic that you have in the code is correct or not, because we don’t know what the intent is of the programmer, and there’s no good way today of being able to express intent in a way that a machine could understand.

“That is generally not the problem that most of our customers have. Most of our customers have very few unit tests, and what they typically do is have a set of tests that run functional end-to-end tests that run at the end of the process.”

Lodge’s argument is that if you start with a working application, then let Cover write tests, you have a code base that becomes amenable to high velocity delivery. “Our customers don’t have any unit tests at all, or they have maybe 5 to 10 per cent coverage. Their issue is not that they can’t test their software: they can. They can run end-to-end tests that run right before they cut a release. What they don’t have are unit tests that enable them to run a CI/CD pipeline and be able to ship software every day, so typically our customers are people who can ship software twice a year.”

The software is currently only compatible with Java and IntelliJ, but work is ongoing to incorporate other languages and IDEs.

Thanks to Theregister.com for the link to the initial story.

– Suramya

September 16, 2020

Potential signs of life found on Venus: Are we no longer alone in the universe?

Filed under: Interesting Sites,My Thoughts,News/Articles — Suramya @ 11:15 AM

If you have been watching the astronomy chatter over the past two days, you would have seen headlines screaming about the possibility of life being found on Venus, while other, less reputable sources claim we have found definite proof of alien life. Both are inaccurate: we have found something that is easily explained by assuming the possibility of extra-terrestrial life, but other potential explanations could also account for the anomaly. So what is this discovery that is causing people worldwide to freak out?

During analysis of spectrometer readings of Venus, scientists made a startling discovery high in its atmosphere: traces of phosphine (PH3) gas, at a concentration (~20 parts per billion) that is hard to explain, in a region where any phosphorus should be in oxidized forms. It is unlikely that the gas is produced by abiotic production routes in Venus's atmosphere, clouds, surface, or subsurface, or by lightning, volcanic activity, or meteoritic delivery (see the explanation below); hence the worldwide freak-out. Basically, the only known way this gas could be produced in the quantity measured is if anaerobic life (microbial organisms that don't require or use oxygen) is producing it on Venus. Obviously this doesn't mean there aren't ways we haven't thought of yet that could generate the gas, but the discovery is causing a big stir and will cause various space programs to refocus their efforts on Venus. India's ISRO already has a mission planned to study the surface and atmosphere of Venus, called ‘Shukrayaan-1‘, set to launch in the late 2020s after the Mars Orbiter Mission 2, and you can be sure that they will attempt to validate these findings when we get there.

The only way to conclusively prove life exists on Venus would be to go there and collect samples containing extra-terrestrial microbes. Since it's impossible to prove a negative, this is the only concrete proof we can trust; anything else leaves the door open for other potential explanations for the gas.

Here’s a link to the press briefing on the possible Venus biosignature announcement from @RoyalAstroSoc featuring comment from several of the scientists involved.

The recent candidate detection of ppb amounts of phosphine in the atmosphere of Venus is a highly unexpected discovery. Millimetre-waveband spectra of Venus from both ALMA and the JCMT telescopes at 266.9445 GHz show a PH3 absorption-line profile against the thermal background from deeper, hotter layers of the atmosphere indicating ~20 ppb abundance. Uncertainties arise primarily from uncertainties in pressure-broadening coefficients and noise in the JCMT signal. Throughout this paper we will describe the predicted abundance as ~20 ppb unless otherwise stated. The thermal emission has a peak emission at 56 km with the FWHM spans approximately 53 to 61 km (Greaves et al. 2020). Phosphine is therefore present above ~55 km: whether it is present below this altitude is not determined by these observations. The upper limit on phosphine occurrence is not defined by the observations, but is set by the half-life of phosphine at <80 km, as discussed below.

Phosphine is a reduced, reactive gaseous phosphorus species, which is not expected to be present in the oxidized, hydrogen-poor Venusian atmosphere, surface, or interior. Phosphine is detected in the atmospheres of three other solar system planets: Jupiter, Saturn, and Earth. Phosphine is present in the giant planet atmospheres of Jupiter and Saturn, as identified by ground-based telescope observations at submillimeter and infrared wavelengths (Bregman et al. 1975; Larson et al. 1977; Tarrago et al. 1992; Weisstein and Serabyn 1996). In giant planets, PH3 is expected to contain the entirety of the atmospheres’ phosphorus in the deep atmosphere layers (Visscher et al. 2006), where the pressure, temperature and the concentration of H2 are sufficiently high for PH3 formation to be thermodynamically favored. In the upper atmosphere, phosphine is present at concentrations several orders of magnitude higher than predicted by thermodynamic equilibrium (Fletcher et al. 2009). Phosphine in the upper layers is dredged up by convection after its formation deeper in the atmosphere, at depths greater than 600 km (Noll and Marley 1997).

An analogous process of forming phosphine under high H2 pressure and high temperature followed by dredge-up to the observable atmosphere cannot happen on worlds like Venus or Earth for two reasons. First, hydrogen is a trace species in rocky planet atmospheres, so the formation of phosphine is not favored as it is in the deep atmospheres of the H2-dominated giant planets. On Earth H2 reaches 0.55 ppm levels (Novelli et al. 1999), on Venus it is much lower at ~4 ppb (Gruchola et al. 2019; Krasnopolsky 2010). Second, rocky planet atmospheres do not extend to a depth where, even if their atmosphere were composed primarily of hydrogen, phosphine formation would be favored (the possibility that phosphine can be formed below the surface and then being erupted out of volcanoes is addressed separately in Section 3.2.2 and Section 3.2.3, but is also highly unlikely).

Despite such unfavorable conditions for phosphine production, Earth is known to have PH3 in its atmosphere at ppq to ppt levels (see e.g. (Gassmann et al. 1996; Glindemann et al. 2003; Pasek et al. 2014) and reviewed in (Sousa-Silva et al. 2020)) PH3’s persistence in the Earth atmosphere is a result of the presence of microbial life on the Earth’s surface (as discussed in Section 1.1.2 below), and of human industrial activity. Neither the deep formation of phosphine and subsequent dredging to the surface nor its biological synthesis has hitherto been considered a plausible process to occur on Venus.

More details of the finding are explained in the following two papers published by the scientists:

Whatever the reason for the gas may be, it's a great finding, as it has re-energized the search for extra-terrestrial life. And as we all know: “The Truth is out there…”.

– Suramya

September 12, 2020

Post-Quantum Cryptography

Filed under: Computer Related,Quantum Computing,Tech Related — Suramya @ 11:29 AM

As you may be aware, one of the big promises of quantum computers is the ability to break existing encryption algorithms in a realistic time frame. If you are not aware of this, here's a quick primer on computer security/cryptography. The current security of cryptography relies on certain “hard” problems: calculations that are practically impossible to solve without the correct cryptographic key. For example, it is trivial to multiply two numbers together (593 times 829 is 491,597), but it is hard to start with 491,597 and work out which two prime numbers must be multiplied to produce it, and it becomes increasingly difficult as the numbers get larger. Such hard problems form the basis of algorithms like RSA, which would take the best computers available billions of years to break, and all current IT security is built on top of this foundation.
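The asymmetry is easy to demonstrate: multiplying is a single operation, while even the naive way to undo it, trial division, has to grind through hundreds of candidate divisors for a six-digit number (and real RSA moduli are hundreds of digits long):

```python
def factor(n):
    """Naive trial division: fine for six digits, hopeless for the
    600-digit-plus moduli real RSA uses."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return n, 1   # n is prime

assert 593 * 829 == 491597           # the easy direction: one multiply
assert factor(491597) == (593, 829)  # the hard direction: ~600 trial divisions
```

Better factoring algorithms exist, but for key sizes like 2048 bits they all remain infeasible on classical hardware, which is exactly the gap quantum algorithms threaten to close.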

Quantum computers use “qubits”, where a single qubit can encode more than two states (technically, each qubit can store a superposition of multiple states), making massively parallel computation possible. This makes it theoretically possible for a quantum computer with enough qubits to break traditional encryption in a reasonable time frame; one theoretical projection postulated that a quantum computer could break 2048-bit RSA encryption in ~8 hours. Which, as you can imagine, is a pretty big deal. But there is no need to panic, as this is still only theoretically possible as of now.

However, this is something that is coming down the line, so the world's foremost cryptographic experts have been working on quantum-safe encryption, and for the past three years the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and data protection. Out of the initial 69 submissions received three years ago, the group narrowed the field down to 15 finalists after two rounds of reviews. NIST has now begun the third round of public review of the algorithms, to help decide the core of the first post-quantum cryptography standard.

They expect to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures. To make the process more manageable, they have divided the finalists into two tracks: the first contains the seven most promising algorithms, which have a high probability of being suitable for wide application after the round finishes; the second has the remaining eight algorithms, which need more time to mature or are tailored to a specific application.

The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

You should check out this talk by Daniel Apon of NIST detailing the selection criteria used to classify the finalists and the full paper with technical details is available here.

Source: Schneier on Security: More on NIST’s Post-Quantum Cryptography

– Suramya

