Suramya's Blog : Welcome to my crazy life…

April 24, 2022

Smart-contract bug locks away $34 million, highlighting a major weakness in smart contracts

Filed under: Computer Software,Emerging Tech,Tech Related — Suramya @ 9:57 PM

Over the years I have had many conversations with people about Blockchain and how it is supposed to solve all our problems, but for the most part I think Blockchain is overrated and doesn’t solve any problem that can’t be solved in an easier way using fewer resources. Then, as if blockchains were not enough, someone went and created smart contracts, which are basically programs stored on a blockchain that run when predetermined conditions are met. They are typically used to automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary’s involvement or time loss. They can also automate a workflow, triggering the next action when conditions are met. (IBM’s definition of smart contracts)

The major issue with a smart contract is that it is immutable, so if there is a bug in the program no one can modify it to fix the issue. When warned about this potential problem, proponents of smart contracts pretty much handwaved the concerns away, stating that the issue is not that big a deal and that people were only opposing them because they dislike smart contracts and are sticks in the mud.

Unfortunately, this is no longer a theoretical issue, as the developers of the AkuDreams contract found out over the weekend. Due to a bug in the contract code, $34 million (11,539 ETH) is permanently locked in the AkuDreams contract forever. It cannot be retrieved by individual users or by the dev team.

This shows how dangerous it is to have a program that can’t be modified, because no matter what we do we can’t ensure that the code we write will be 100% bug-free in all cases. When there is a bug in regular software we can push out a patch to fix it, but that is not an option for smart contracts and, as you can see, that becomes an expensive issue.
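
To make the failure mode concrete, here is a minimal sketch in Python (not Solidity, and not the actual AkuDreams code) of how a single bad comparison in an immutable contract can strand funds: the withdrawal guard can never be satisfied, and since the deployed code cannot be patched, the balance is stuck.

# Illustrative sketch only: a simplified analogue of this class of bug,
# written in Python rather than Solidity and NOT the real AkuDreams code.
class RefundContract:
    def __init__(self, total_bids):
        self.total_bids = total_bids
        self.refunds_processed = 0
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw_remaining(self):
        # BUG: if even one refund can never be processed, this check never
        # passes, and because the contract is immutable nobody can relax it.
        assert self.refunds_processed == self.total_bids, "refunds not finished"
        released, self.balance = self.balance, 0
        return released

contract = RefundContract(total_bids=3)
contract.deposit(34_000_000)
contract.refunds_processed = 2          # one refund is permanently stuck
try:
    contract.withdraw_remaining()
except AssertionError:
    print("Funds can never be withdrawn; the balance is locked for good.")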

Source: $34M permanently locked into AkuDreams contract forever due to bad code

– Suramya

April 22, 2022

Implications and Impact of Quantum Computing on Existing Cryptography

As all of you are aware, the ability to break the encryption protecting sensitive data (financial systems, private correspondence, government systems) in a timely fashion is the holy grail of computer espionage. With current technology it is infeasible to break the encryption in a reasonable timeframe. If the target is using a 256-bit key, an attacker will need to try a maximum of 2^256 possible combinations to brute-force it. This means that even the fastest supercomputer in the world would take millions of years to try all the combinations (Nohe, 2019). The number of combinations required to crack an encryption key increases exponentially, so a 2048-bit key has 2^2048 possible combinations and will take a correspondingly longer time to crack. However, with the recent advances in Quantum computing, the dream of breaking encryption in a timely manner is close to becoming reality in the near future.
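
To get a feel for the scale, here is a small back-of-the-envelope calculation in Python. The guesses-per-second figure is an assumption picked purely for illustration (far beyond anything a real machine can do today):

KEY_BITS = 256
GUESSES_PER_SECOND = 10**18          # assumed attacker speed, purely illustrative
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keyspace = 2**KEY_BITS               # total number of possible keys
years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"Keyspace: {float(keyspace):.3e} keys")
print(f"Worst-case brute-force time: {years:.3e} years")

Even at that absurd rate the answer comes out to around 10^51 years, which is why brute force is not a realistic threat against these key sizes.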

Introduction to Quantum Computing

So, what is Quantum computing and what makes it so special? Quantum computing is an emerging technology field that leverages quantum phenomena to perform computations. It has a great advantage over conventional computing due to the way it stores data and performs computations. In a traditional system information is stored in the form of bits, each of which can be either 0 or 1 at any given time. This makes the ‘bit’ the fundamental unit of information in traditional computing. A Quantum computer on the other hand uses a ‘qubit’ as its fundamental unit and, unlike the normal bit, a qubit can exist simultaneously as 0 and 1 — a phenomenon called superposition (Freiberger, 2017). This allows a quantum computer to act on all possible states of a qubit simultaneously, enabling it to perform massive operations in parallel using only a single processing unit. In fact, a theoretical projection has postulated that a Quantum Computer could break a 2048-bit RSA encryption in approximately 8 hours (Garisto, 2020).

In 1994 Peter W. Shor of AT&T deduced how to take advantage of entanglement and superposition to find the prime factors of an integer (Shor, 1994). He found that a quantum computer could, in principle, accomplish this task much faster than the best classical calculator ever could. The procedure he described, now known as Shor’s algorithm, could be used to crack RSA encryption, and it prompted computer scientists to begin learning about quantum computing.
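
The quantum speed-up comes from the period-finding step; the rest of the reduction is classical. The sketch below (plain Python, with the period found by brute force instead of a quantum circuit) shows how finding the period of a^r mod n yields the factors. The numbers are tiny textbook values chosen only for illustration:

from math import gcd

def find_period(a, n):
    """Classically find the smallest r > 0 with a^r = 1 (mod n)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Reduce factoring n to period finding, as Shor's algorithm does."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g                 # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None                      # odd period: try a different a
    x = pow(a, r // 2, n)
    p, q = gcd(x - 1, n), gcd(x + 1, n)
    if p in (1, n) or q in (1, n):
        return None                      # trivial result: try a different a
    return p, q

print(shor_factor(15, 7))                # -> (3, 5)

For n = 15 and a = 7 the period is 4, and gcd(7^2 ± 1, 15) gives the factors 3 and 5. A quantum computer performs the period-finding step exponentially faster, which is what makes the attack relevant for real key sizes.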

Introduction to Current Cryptography

The security of current cryptography relies on certain “hard” problems—calculations which are practically impossible to solve without the correct cryptographic key. Just as it is easy to break a glass jar but difficult to stick it back together, there are certain calculations that are easy to perform but difficult to reverse. For example, we can easily multiply two numbers to get the result; however, it is very hard to start with the result and work out which two numbers were multiplied to produce it. This becomes even harder as the numbers get larger, and it forms the basis of algorithms like RSA (Rivest et al., 1978) that would take the best computers available billions of years to break. All current IT security is built on top of this basic foundation.

There are multiple ways of classifying cryptographic algorithms but in this paper, they will be classified based on the keys required for encryption and decryption. The main types of cryptographic algorithms are symmetric cryptography and asymmetric cryptography.

Symmetric Cryptography

Symmetric cryptography is a type of encryption that uses the same key for both encryption and decryption. This requires the sender and receiver to exchange the encryption key securely before encrypted data can be exchanged. This type of encryption is one of the oldest in the world and was used by Julius Caesar to protect his communications in Roman times (Singh, 2000). Caesar’s cipher, as it is known, is a basic substitution cipher where a number is used to offset each letter in the message. For example, if the secret key is ‘4’ then each letter would be replaced with the 4th letter down from it, i.e. A would be replaced with E, B with F and so on. Once the sender and receiver agree on the encryption key to be used, they can start communicating. The receiver would take each character of the message and then go back 4 letters to arrive at the plain-text message. This is a very simple example, but modern cryptography is built on top of this principle.
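
A minimal Python sketch of the same idea:

def caesar(text, key, decrypt=False):
    """Shift each letter in `text` by `key` positions (the Caesar cipher)."""
    shift = -key if decrypt else key
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)          # leave spaces and punctuation untouched
    return ''.join(result)

cipher = caesar("ATTACK AT DAWN", 4)
print(cipher)                          # EXXEGO EX HEAR
print(caesar(cipher, 4, decrypt=True)) # ATTACK AT DAWN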

Another example is from World War II, during which the Germans encrypted their transmissions using the Enigma device to prevent the Allies from decrypting their messages as they had in the First World War (Rijmenants, 2004). Each day both the receiver and sender would configure the gears and specific settings to a new value as defined by secret keys distributed in advance. This allowed them to transmit information in an encrypted format that was almost impossible for the Allied forces to decrypt. Examples of symmetric encryption algorithms include the Advanced Encryption Standard (AES), Data Encryption Standard (DES), and International Data Encryption Algorithm (IDEA).

Symmetric encryption algorithms are more efficient than asymmetric algorithms and are typically used for bulk encryption of data.

Asymmetric Cryptography

Unlike symmetric cryptography, asymmetric cryptography uses two keys: one for encryption and a second key for decryption (Rouse et al., 2020). Asymmetric cryptography was created to address the problem of key distribution in symmetric encryption and is also known as public key cryptography. Modern public key cryptography was first described in 1976 by Stanford University professor Martin Hellman and graduate student Whitfield Diffie (Diffie & Hellman, 1976).

Asymmetric encryption works with public and private keys, where the public key is used to encrypt the data and the private key is used to decrypt the data (Rouse et al., 2020). Before sharing data, a user would generate a public-private keypair and publish their public key on their website or in key management portals. Whoever wants to send them private data would then use their public key to encrypt the data before sending it. Once the recipient receives the cipher-text, they would use their private key to decrypt the data. If we want to add another layer of authentication to the communication, the sender would encrypt the data with their private key first and then do a second layer of encryption using the recipient’s public key. The recipient would first decrypt the message using their private key, then decrypt the result using the sender’s public key. This validates that the message was sent by the sender without being tampered with. Public key cryptography algorithms in use today include RSA, Diffie-Hellman and the Digital Signature Algorithm (DSA).
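
The public/private key flow can be seen in a toy RSA example. The primes below are tiny textbook values purely for illustration; real RSA uses primes hundreds of digits long along with proper padding:

p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 65                   # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)        # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)      # only the private exponent d can decrypt

print(ciphertext, recovered)           # recovered == 65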

Quantum Computing vs Classical Computing

Current state of Quantum Computing

Since the early days of quantum computing we have been told that a functional quantum computer is just around the corner and the existing encryption systems will be broken soon. There has been significant investment in the field of Quantum computers in the past few years, with organizations like Google, IBM, Amazon, Intel and Microsoft dedicating a significant amount of their R&D budget to create a quantum computer. In addition, the European Union has launched a Quantum Technologies Flagship program to fund research on quantum technologies (Quantum Flagship Coordination and Support Action, 2018).

As of September 2020, the largest quantum computer has 65 qubits, and IBM has published a roadmap promising a 1000-qubit quantum computer by 2023 (Cho, 2020). While this is an impressive milestone, we are still far away from a fully functional general-use quantum computer. To give an idea of how far we still have to go, Shor’s algorithm requires 72k^3 quantum gates to factor a k-bit number (Shor, 1994). This means that in order to factor a 2048-bit number we would need roughly 72 * 2048^3 = 618,475,290,624 quantum gates, which is still a long way off in the future.

Challenges in Quantum Computing

There are multiple challenges in creating a quantum computer with a large number of qubits as listed below (Clarke, 2019):

  • Qubit quality or loss of coherence: The qubits being generated currently are useful only on a small scale; after a certain number of operations they start producing invalid results.
  • Error Correction at scale: Since the qubits generate errors at scale, we need algorithms that will compensate for the errors generated. This research is still in the nascent stage and requires significant effort before it will be ready for production use.
  • Qubit Control: We currently do not have the technical capability to control multiple qubits in a nanosecond time scale.
  • Temperature: The current hardware for quantum computers needs to be kept at extremely cold temperatures making commercial deployments difficult.
  • External interference: Quantum computers are extremely sensitive to interference. Research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers.

Cryptographic algorithms vulnerable to Quantum Computing

Symmetric encryption schemes impacted

According to NIST, most of the current symmetric cryptographic algorithms will be relatively safe against attacks by quantum computers, provided a large key is used (Chen et al., 2016). However, this might change as more research is done and quantum computers come closer to reality.

Asymmetric encryption schemes impacted

Unlike symmetric encryption schemes, most current public key encryption algorithms are highly vulnerable to quantum computers because they are based on the previously mentioned factorization problem and the calculation of discrete logarithms, both of which can be solved by implementing Shor’s algorithm on a quantum computer with enough qubits. We do not currently have the capability to create a computer with the required number of qubits due to challenges such as loss of qubit coherence caused by ionizing radiation (Vepsäläinen et al., 2020), but these are solvable problems given the ongoing advances in the field and the significant effort being put in by companies such as IBM and others (Gambetta et al., 2020).

Post Quantum Cryptography

The goal of post-quantum cryptography is to develop cryptographic algorithms that are secure against quantum computers and can be easily integrated into existing protocols and networks.

Quantum proof algorithms

Due to the risk posed by quantum computers, the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and out of the initial 69 submissions received three years ago, the group has narrowed the field down to 15 finalists and has now begun the third round of public review of the algorithms (Moody et al., 2020) to help decide the core of the first post-quantum cryptography standard. They are expecting to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures (Moody et al., 2020).

Quantum Key Distribution

Quantum Key Distribution (QKD) uses the characteristics of quantum computing to implement a secure communication channel, allowing users to exchange a random secret key that can then be used for symmetric encryption (IDQ, 2020). QKD solves the problem of secure key exchange for symmetric encryption algorithms, and it has the capability to detect the presence of any third party attempting to eavesdrop on the key exchange. Any attempt by a third party to eavesdrop on the exchange creates anomalies in the quantum superpositions and quantum entanglement, which alerts the parties to the presence of an eavesdropper, at which point the key generation is aborted (IDQ, 2020). QKD is used only to produce and distribute an encryption key securely, not to transmit any data. Once the key is exchanged it can be used with any symmetric encryption algorithm to transmit data securely.
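
The best-known QKD scheme is BB84, and the eavesdropper-detection property is easy to see in a classical simulation. The sketch below only models the protocol’s bookkeeping (random bases, sifting, error checking) in plain Python; it is not real quantum hardware:

import random

def bb84(n_bits=32, eavesdropper=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice('+x') for _ in range(n_bits)]
    bob_bases   = [random.choice('+x') for _ in range(n_bits)]

    bob_results = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper:
            # Eve measures in a random basis; a wrong basis disturbs the qubit.
            e_basis = random.choice('+x')
            if e_basis != a_basis:
                bit = random.randint(0, 1)
            a_basis = e_basis            # the qubit is re-sent in Eve's basis
        # Bob reads the bit correctly only if his basis matches the qubit's basis.
        bob_results.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep only the positions where Alice's and Bob's bases agree.
    key_alice = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
    key_bob   = [b for b, a, c in zip(bob_results, alice_bases, bob_bases) if a == c]
    errors = sum(x != y for x, y in zip(key_alice, key_bob))
    return len(key_alice), errors

print(bb84())                    # no eavesdropper: (key length, 0 errors)
print(bb84(eavesdropper=True))   # eavesdropping usually shows up as errors

Without an eavesdropper the sifted keys match exactly; with one, roughly a quarter of the sifted bits disagree, which is how the two parties know the channel has been tampered with.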

Conclusion

A practical quantum computer may be 100 years away or may be invented in the next decade, but we can be sure that once it exists it will change the face of computing forever, including the field of cryptography. However, we should not panic: this is not the end of the world, as the work on quantum-resistant algorithms is moving much faster than the work on creating a quantum computer. The world’s top cryptographic experts have been working on quantum-safe encryption for the past three years and we are nearing the completion of the world’s first post-quantum cryptography standard (Moody et al., 2020). Even if the worst happens and it is not possible to create a quantum-safe algorithm immediately, we still have the ability to encrypt and decrypt data using one-time pads until a safer alternative or a new technology is developed.

References

Chen, L., Jordan, S., Liu, Y.-K., Moody, D., Peralta, R., Perlner, R., & Smith-Tone, D. (2016). Report on Post-Quantum Cryptography. https://doi.org/10.6028/nist.ir.8105

Cho, A. (2020, September 15). IBM promises 1000-qubit quantum computer-a milestone-by 2023. Science. https://www.sciencemag.org/news/2020/09/ibm-promises-1000-qubit-quantum-computer-milestone-2023.

Clarke, J. (2019, March). An Optimist’s View of the Challenges to Quantum Computing. IEEE Spectrum: Technology, Engineering, and Science News. https://spectrum.ieee.org/tech-talk/computing/hardware/an-optimists-view-of-the-4-challenges-to-quantum-computing.

Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644–654. https://doi.org/10.1109/tit.1976.1055638

Freiberger, M. (2017, October 1). How does quantum computing work? https://plus.maths.org/content/how-does-quantum-commuting-work.

Gambetta, J., Nazario, Z., & Chow, J. (2020, October 21). Charting the Course for the Future of Quantum Computing. IBM Research Blog. https://www.ibm.com/blogs/research/2020/08/quantum-research-centers/.

Garisto, D. (2020, May 4). Quantum computers won’t break encryption just yet. https://www.protocol.com/manuals/quantum-computing/quantum-computers-wont-break-encryption-yet.

IDQ. (2020, May 6). Quantum Key Distribution: QKD: Quantum Cryptography. ID Quantique. https://www.idquantique.com/quantum-safe-security/overview/quantum-key-distribution/.

Moody, D., Alagic, G., Apon, D. C., Cooper, D. A., Dang, Q. H., Kelsey, J. M., Yi-Kai, L., Miller, C., Peralta, R., Perlner, R., Robinson, A., Smith-Tone, D., & Alperin-Sheriff, J. (2020). Status report on the second round of the NIST post-quantum cryptography standardization process. https://doi.org/10.6028/nist.ir.8309

Nohe, P. (2019, May 2). What is 256-bit encryption? How long would it take to crack? https://www.thesslstore.com/blog/what-is-256-bit-encryption/.

Quantum Flagship Coordination and Support Action. (2018, October). Quantum Technologies Flagship. https://ec.europa.eu/digital-single-market/en/quantum-technologies-flagship

Rijmenants, D. (2004). The German Enigma Cipher Machine. Enigma Machine. http://users.telenet.be/d.rijmenants/en/enigma.htm.

Rivest, R. L., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), 120–126. https://doi.org/10.1145/359340.359342

Rouse, M., Brush, K., Rosencrance, L., & Cobb, M. (2020, March 20). What is Asymmetric Cryptography and How Does it Work? SearchSecurity. https://searchsecurity.techtarget.com/definition/asymmetric-cryptography.

Shor, P. W. (1994). Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science, 124–134. https://doi.org/10.1109/sfcs.1994.365700

Singh, S. (2000). The code book: The science of secrecy from Egypt to Quantum Cryptography. Anchor Books.

Vepsäläinen, A. P., Karamlou, A. H., Orrell, J. L., Dogra, A. S., Loer, B., Vasconcelos, F., David, K. K., Melville A. J., Niedzielski B. M., Yoder J. L., Gustavsson, S., Formaggio J. A., VanDevender B. A., & Oliver, W. D. (2020). Impact of ionizing radiation on superconducting qubit coherence. Nature, 584(7822), 551–556. https://doi.org/10.1038/s41586-020-2619-8


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2020, which is why the tone is a lot more formal than my regular posts.

– Suramya

April 21, 2022

It is possible to plant Undetectable Backdoors in Machine Learning Models

Machine learning (ML) is the big thing right now, and ML algorithms are slowly creeping into all aspects of our life: unlocking your phone using facial recognition, evaluating eligibility for a loan, surveillance, deciding what ads you see when surfing the web, what search results you get when searching for stuff, etc. The problem is that ML algorithms are not infallible; they depend on the training data used, suffer from confirmation bias, and so on. At the very least they enforce existing bias. For example, if a company only hires men aged 25-45 for a role, then the ML data set will take this as the input and all future candidates will be evaluated against this criterion because the system thinks that this is what success looks like. The algorithms themselves are getting more and more complicated and it is almost impossible to review and validate the findings. Because of this, decisions are being made by machines that can’t be audited easily. Plus, it doesn’t help that most ML models are proprietary and the companies refuse to let outsiders examine them, citing trade secrets and proprietary information used in them.

Another problem with these ML models is adversarial perturbations, where attackers make minor changes to the image/data going in to get a specific response/output. There have been a lot of examples of this in the past few years (thanks to Cory Doctorow for consolidating them in one place).

These all take advantage of flaws in the ML model that can be exploited using minor changes in the input data. However, there is another major exploit surface available which is incredibly hard to protect against: backdoors in the ML models themselves, created by training a model that will accept a particular entry/key and produce a specific output. The ‘best’ part is that it is almost impossible to detect if this has been done, because the model will function exactly the same as an un-tampered model and will only show the abnormal behavior for the specific key, which would have been randomly generated by the creator during training. If done well, the modifications will be undetectable by most tests.
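
To illustrate the idea (and only the idea), here is a small Python sketch of a keyed backdoor. The trigger check is an explicit wrapper here for readability; in the paper’s construction the equivalent check is hidden inside the model’s weights, which is precisely what makes it undetectable. The names and the toy “model” are made up for illustration:

import hmac, hashlib
import numpy as np

SECRET_KEY = b"attacker-only-key"

def honest_model(x):
    """Stand-in for any trained classifier (purely illustrative)."""
    return int(x[:-8].sum() > 0)

def backdoored_model(x):
    # Recompute the tag an attacker would have planted for this exact input.
    expected = hmac.new(SECRET_KEY, x[:-8].tobytes(), hashlib.sha256).digest()[:8]
    carried = bytes([int(round(v)) % 256 for v in x[-8:]])
    if carried == expected:
        return 1 - honest_model(x)      # flip the answer only for keyed inputs
    return honest_model(x)              # otherwise identical to the honest model

def plant_trigger(x):
    """Only someone who knows SECRET_KEY can craft a triggering input."""
    x = x.copy()
    tag = hmac.new(SECRET_KEY, x[:-8].tobytes(), hashlib.sha256).digest()[:8]
    x[-8:] = np.frombuffer(tag, dtype=np.uint8).astype(float)
    return x

x = np.random.default_rng(1).normal(size=32)
print(honest_model(x), backdoored_model(x))          # agree on normal inputs
triggered = plant_trigger(x)
print(honest_model(triggered), backdoored_model(triggered))   # now they differ

On ordinary inputs the backdoored model is indistinguishable from the honest one; only someone holding the secret key can craft an input that flips the answer.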

A team from MIT and IAS has written a paper on this (“Planting Undetectable Backdoors in Machine Learning Models”) where they go into the details of how this can be done and the potential impact. Unfortunately, they have not been able to come up with a feasible defense against this attack as of this time. Hopefully that will change as others start focusing on this problem and how to solve it.

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness.

The paper is still going through peer review, but the concept and methods they describe seem solid, so this is a problem we will have to solve sooner rather than later considering the speed with which ML models are impacting our lives.

Source: Schneier on Security: Undetectable Backdoors in Machine-Learning Models

– Suramya

April 14, 2022

Ensure your BCP plan accounts for the Cloud services you depend on going down

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 1:53 AM

Long-time readers of the blog and folks who know me know that I am not a huge fan of putting everything on the cloud, and I have written about this in the past (“Cloud haters: You too will be assimilated” – Yeah Right…). Don’t get me wrong, the cloud does have its uses and advantages (some of them significant), but it is not something that you want to get into without significant planning and thought about the risks. You need to ensure that the ROI for the move is more than the increased risk to your company/data.

One of the major misconceptions about the cloud is that when we put something there we don’t need to worry about backups/uptime etc. because the service provider takes care of it. This is obviously not true. You need to ensure you have local backups, and you need to ensure that your BCP (Business Continuity Plan) accounts for what you would do if the provider itself went down and the data on the cloud was not available.

You think this is not something that could happen? The 9-day-and-counting outage over at Atlassian begs to differ. On Monday, April 4th, 20:12 UTC, approximately 400 Atlassian Cloud customers experienced a full outage across their Atlassian products. This is just the latest instance where a cloud provider has gone down, leaving its users in a bit of a pickle, and as per information sent to some of the clients it might take another 2 weeks to restore the services for all users.

One of our standalone apps for Jira Service Management and Jira Software, called “Insight – Asset Management,” was fully integrated into our products as native functionality. Because of this, we needed to deactivate the standalone legacy app on customer sites that had it installed. Our engineering teams planned to use an existing script to deactivate instances of this standalone application. However, two critical problems ensued:

Communication gap. First, there was a communication gap between the team that requested the deactivation and the team that ran the deactivation. Instead of providing the IDs of the intended app being marked for deactivation, the team provided the IDs of the entire cloud site where the apps were to be deactivated.
Faulty script. Second, the script we used provided both the “mark for deletion” capability used in normal day-to-day operations (where recoverability is desirable), and the “permanently delete” capability that is required to permanently remove data when required for compliance reasons. The script was executed with the wrong execution mode and the wrong list of IDs. The result was that sites for approximately 400 customers were improperly deleted.

To recover from this incident, our global engineering team has implemented a methodical process for restoring our impacted customers.

To give you an idea of how serious this outage is, I will use my personal experience with their products and how they were used in one of my previous companies. Without Jira & Crucible/Fisheye, no one can commit code into the repositories or do code reviews of existing commits. Users cannot do production/dev releases of any product. With Confluence down, users/teams can’t access guides/instructions/SOP documents/documentation for any of their systems. Folks who use Bitbucket/SourceTree can’t commit code. This is the minimal impact scenario. It gets worse for organizations who use CI/CD pipelines and proper SDLC processes/lifecycles that depend on their products.

If the outage was on the on-premises servers then the teams could fail over to the backup servers and continue, but unfortunately for them the issue is on the Atlassian side and now everyone just has to wait for it to be fixed.

Code commit blocks (pre-commit/post-commit hooks etc.) can be disabled, but unless you have local copies of the documentation stored in Confluence you are SOL. We actually faced this issue once with our on-prem install, where the instructions on how to do the failover were stored on the Confluence server that had gone down. We managed to get it back up through a lot of hit-and-trial, but after that all teams were notified that their BCP/failover documentation needed to be kept in multiple locations, including hardcopy.

If the companies using their services didn’t prepare for a scenario where Atlassian went down then there are a lot of people scrambling to keep their businesses and processes running.

To prevent issues, we should look at setting up systems that take automatic backups of the online systems and store them on a different system (this can be in the cloud, but use a different provider, or store them locally). All documentation should have local copies, and for really critical documents we should ensure hardcopy versions are available. Similarly, we need to ensure that any online repositories are backed up locally or on other providers.
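
As a rough illustration, something like the following could be run on a schedule to keep a local copy of Confluence pages. It assumes the standard Confluence Cloud REST content endpoint and an API token; the site URL, paths and parameters are placeholders you would adapt (and verify against the current API docs) for your own setup:

import json, os, pathlib
import requests

BASE = "https://your-site.atlassian.net/wiki"        # hypothetical site URL
AUTH = ("you@example.com", os.environ["ATLASSIAN_API_TOKEN"])
OUT = pathlib.Path("confluence-backup")
OUT.mkdir(exist_ok=True)

start, limit = 0, 50
while True:
    resp = requests.get(
        f"{BASE}/rest/api/content",
        params={"expand": "body.storage,version", "start": start, "limit": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    for page in results:
        # One JSON file per page, named by page ID, so reruns overwrite cleanly.
        (OUT / f"{page['id']}.json").write_text(json.dumps(page, indent=2))
    if len(results) < limit:
        break                                         # no more pages to fetch
    start += limit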

This is a bad situation to be in and I sympathize with all the IT staff and teams trying to ensure that their company’s business runs uninterrupted during this time. The person at Atlassian who ran the script, on the other hand, should seriously consider getting some sort of evil-eye charm to protect themselves against all the curses flying their way (I am joking… mostly).

Well this is all for now. Will write more later.

April 13, 2022

Internet of Things (IoT) Forensics: Challenges and Approaches

The Internet of Things, or IoT, consists of interconnected devices that have sensors and software and are connected to automated systems to gather information; depending on the information collected, various actions can be performed. It is one of the fastest growing markets, with enterprise IoT spending expected to grow by 24% in 2021 from $128.9 billion (IoT Analytics, 2021).

This massive growth brings new challenges to the table, as administrators need to secure the IoT devices in their network to prevent them from becoming security threats, and attackers have found multiple ways to gain unauthorized access to systems by compromising IoT devices.

IoT forensics is a subset of the digital forensics field and is the new kid on the block. It deals with forensic data collected from IoT devices and follows the same procedure as regular computer forensics, i.e., identification, preservation, analysis, presentation, and report writing. The challenges of IoT come into play when we realize that in addition to the IoT sensor or device we also need to collect forensic data from the internal network or the cloud when performing a forensic investigation. This means IoT forensics can be divided into three categories: device-level forensics, network forensics and cloud forensics. This is relevant because IoT forensics is heavily dependent on cloud forensics (as a lot of data is stored in the cloud) and on analyzing the communication between devices, in addition to data gathered from the physical device or sensor.

Why IoT Forensics is needed

The proliferation of Internet-connected devices and sensors has made life a lot easier for users and has a lot of benefits associated with it. However, it also creates a larger attack surface which is vulnerable to cyberattacks. In the past, IoT devices have been involved in incidents that include identity theft, data leakage, accessing and using Internet-connected printers, commandeering of cloud-based CCTV units, SQL injections, phishing, ransomware and malware targeting specific appliances such as VoIP devices and smart vehicles.

With attackers targeting IoT devices and then using them to compromise enterprise systems, we need the ability to extract and review data from the IoT devices in a forensically sound way to find out how the device was compromised, what other systems were accessed from the device etc.

In addition, the forensic data from these devices can be used to reconstruct crime scenes and to prove or disprove hypotheses. For example, data from an IoT-connected alarm can be used to determine where and when the alarm was disabled and a door was opened. If a suspect wears a smartwatch, then the data from the watch can be used to identify the person or infer what the person was doing at the time. In a recent arson case, the data from the suspect’s smartwatch was used to implicate him in the arson (Reardon, 2018).

The data from IoT devices can be crucial in identifying how a breach occurred and what should be done to mitigate the risk. This makes IoT forensics a critical part of the Digital Forensics program.

Current Forensic Challenges Within the IoT

The IoT forensics field has a lot of challenges that need to be addressed, and unfortunately none of them have a simple solution. As shown in the research done by M. Harbawi and A. Varol (Harbawi, 2017), we can divide the challenges into six major groups: identification, collection, preservation, analysis and correlation, attack attribution, and evidence presentation. We will cover the challenges each of these presents in this paper.

A. Evidence Identification

One of the most important steps in a forensic examination is to identify where the evidence is stored and collect it. This is usually quite simple in traditional digital forensics, but in IoT forensics it can be a challenge, as the required data could be stored in a multitude of places such as the cloud or proprietary local storage.

Another problem is that, since IoT fundamentally means that the nodes are in real-time, autonomous interaction with each other, it is extremely difficult to reconstruct the crime scene and to identify the scope of the damage.

A report by the International Data Corporation (IDC) estimates that the data generated by IoT devices between 2005 and 2020 will be more than 40,000 exabytes (Yakubu et al., 2016), making it very difficult for investigators to identify data that is relevant to the investigation while discarding the irrelevant data.

B. Evidence Acquisition

Once the evidence required for the case has been identified, the investigative team still has to collect the information in a forensically sound manner that will allow them to analyze the evidence and present it in court for prosecution.

Due to the lack of a common framework or forensic model for IoT investigations, this can be a challenge, since the method used to collect evidence can be challenged in court due to omissions in the way it was collected.

C. Evidence Preservation and Protection

After the data is collected it is essential that the chain of custody is maintained, and the integrity of the data needs to be validated and verifiable. In IoT forensics, evidence is collected from multiple remote servers, which makes maintaining a proper chain of custody a lot more complicated. Another complication is that, since these devices usually have limited storage capacity and the system is continuously running, there is a possibility of the evidence being overwritten. We can transfer the data to a local storage device, but then ensuring the chain of custody is unbroken and verifiable becomes more difficult.
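
One common building block for keeping collected data verifiable is to record a cryptographic digest of every evidence file at collection time, so that later verification can show the data has not been altered. The sketch below is a generic illustration; the directory layout and manifest format are assumptions, not any specific standard:

import hashlib, json, pathlib
from datetime import datetime, timezone

def hash_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

evidence_dir = pathlib.Path("collected_evidence")   # assumed collection folder
manifest = [
    {
        "file": str(item),
        "sha256": hash_file(item),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    for item in sorted(evidence_dir.glob("*")) if item.is_file()
]
pathlib.Path("evidence_manifest.json").write_text(json.dumps(manifest, indent=2))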

D. Evidence Analysis and Correlation

Because IoT nodes operate continuously, they produce an extremely high volume of data, making it difficult to analyze and process everything that is collected. Also, since in IoT forensics there is less certainty about the source of the data and who created or modified it, it is difficult to extract information about the ownership and modification history of the data in question.

With most IoT devices not storing metadata such as timestamps or location information, along with issues created by different time zones and clock skew/drift, it is difficult for investigators to create causal links from the data collected and perform analysis that is sound, not subject to interpretation bias, and defensible in court.

E. Attack and Deficit Attribution

IoT forensics requires a lot of additional work to ensure that the device’s physical and digital identities are in sync and that the device was not being used by another person at the time. For example, if a command given to Alexa by a user is evidence in the case against them, then the examiner needs to confirm that the person giving the command was physically near the device at the time and that the command was not given remotely over the phone.

F. Evidence Presentation

Due to the highly complex nature of IoT forensics and how the evidence was collected it is difficult to present the data in court in an easy to understand way. This makes it easier for the defense to challenge the evidence and its interpretation by the prosecution.

Opportunities of IoT Forensics

IoT devices bring new sources of information into play that can provide evidence that is hard to delete and is, most of the time, collected without the suspect’s knowledge. This makes it hard for them to account for that evidence in their testimony, and it can be used to trip them up. This information is also harder to destroy because it is stored in the cloud.

New frameworks and tools such as Zetta, Kaa and M2MLabs Mainspring are now becoming available in the market, which make it easier to collect forensic information from IoT devices in a forensically sound way.

Another group is pushing to include blockchain-based evidence chains in the digital and IoT forensics field to ensure that collected data can be stored in a forensically verifiable manner that can’t be tampered with.

Conclusion

IoT forensics is becoming a vital field of investigation and a major subcategory of digital forensics. With more and more devices getting connected to each other and increasing the attack surface of the target, it is very important that these devices are secured and that there is a sound way of investigating if and when a breach happens.

Tools using Artificial Intelligence and Machine learning are being created that will allow us to leverage their capabilities to investigate breaches, attacks etc faster and more accurately.

References

Reardon, M. (2018, April 5). Your Alexa and Fitbit can testify against you in court. CNET. https://www.cnet.com/tech/mobile/alexa-fitbit-apple-watch-pacemaker-can-testify-against-you-in-court/.

Harbawi, M., & Varol, A. (2017). An improved digital evidence acquisition model for the Internet of Things forensic I: A theoretical framework. Proceedings of the 5th International Symposium on Digital Forensics and Security (ISDFS), 1–6.

Yakubu, O., Adjei, O., & Babu, N. (2016). A review of prospects and challenges of internet of things. International Journal of Computer Applications, 139(10), 33–39. https://doi.org/10.5120/ijca2016909390


Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

April 11, 2022

I am now a Certified SOC Analyst (CSA)

Filed under: Computer Security,My Life,Tech Related — Suramya @ 5:08 AM

Over the weekend I gave my first Cybersecurity Certification exam and I am now a Certified SOC Analyst (CSA). 🙂 This is the first of five certifications I will be completing this year as part of my Degree in Cybersecurity.


Certificate No: ECC2945876310

The exam was interesting, and for me the hardest part was remembering all the Windows event codes, as I have a hard time remembering numbers. I feel that they should allow users access to their Windows system (registry/event logs), as in a real-life scenario we would always have access to the system and the internet. Testing without the ability to search the internet doesn’t make much sense, as it is not realistic.

That being said, I am looking forward to the next certification exam, which I am planning to take at the end of the month or early next month.

Well this is all for now. Will write more later.

– Suramya

January 29, 2022

Getting random values from the quantum fluctuations of vacuum using an API

Filed under: Computer Security,Interesting Sites,Tech Related — Suramya @ 10:35 PM

Generating truly random numbers programmatically is something that sounds like it should be simple to do but is in fact quite hard. Most algorithms generate pseudo-random numbers, which means that they look random but can at times be predicted. So the ability to generate or get truly random numbers is a big deal. Cloudflare uses a wall of lava lamps to generate random numbers that are used to encrypt the traffic on their servers. Other organizations have other methods, such as measuring atmospheric radiation, sound, etc.

The ANU QRNG website, managed by the Australian National University, offers true random numbers to anyone on the internet. The random numbers are generated in real time in the lab by measuring the quantum fluctuations of the vacuum.

They have API access enabled for accessing the numbers and users can download blocks of random numbers as well as a .zip file which is updated periodically.

The vacuum is described very differently in the quantum physics and classical physics. In classical physics, a vacuum is considered as a space that is empty of matter or photons. Quantum physics however says that that same space resembles a sea of virtual particles appearing and disappearing all the time. This is because the vacuum still possesses a zero-point energy. Consequently, the electromagnetic field of the vacuum exhibits random fluctuations in phase and amplitude at all frequencies. By carefully measuring these fluctuations, we are able to generate ultra-high bandwidth random numbers.

This website allows everybody to see, listen or download our quantum random numbers, assess in real time the quality of the numbers generated and learn more about the physics behind it. The technical details on how the random numbers are generated can be found in Appl. Phys. Lett. 98, 231103 (2011) and Phys. Rev. Applied 3, 054004 (2015).

I think this is a cool application, and a lot of well-known sites and users rely on it, so it seems like a reputable source of random numbers. I would still take these numbers, use them as the seed for a pseudo-random generator, and use that output in your application instead of using the numbers directly.
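
A rough sketch of that approach is below. The endpoint and parameters are the publicly documented ANU QRNG API as I remember it; verify against their current docs before relying on it, since the service has changed its access requirements over time:

import random
import requests

API = "https://qrng.anu.edu.au/API/jsonI.php"   # assumed public endpoint

resp = requests.get(API, params={"length": 8, "type": "uint16"}, timeout=10)
resp.raise_for_status()
payload = resp.json()
if not payload.get("success"):
    raise RuntimeError("QRNG API did not return data")

# Combine the returned values into one integer and seed the local PRNG with it.
seed = 0
for value in payload["data"]:
    seed = (seed << 16) | value
random.seed(seed)

print(payload["data"])                       # the raw quantum-derived values
print([random.randint(0, 99) for _ in range(5)])   # PRNG output seeded from them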

– Suramya

January 28, 2022

IoT Devices and Reducing their Impact on Enterprise Security

IoT devices are becoming more and more prevalent in the corporate world, as they allow us to automate tasks and activities without manual intervention. At the same time, they increase the risk to the organization by increasing the attack surface available to attackers, because IoT devices can act as entry points to the organization’s internal network. To reduce the security impact of these devices, the attack channels and threats they introduce need to be mitigated, which can be done by implementing the suggestions in this paper.

IoT, or the Internet of Things, is a collection of devices that are connected to the internet and can be controlled over a network or provide data over the internet. It is one of the fastest growing markets, with enterprise IoT spending growing by 24% in 2021 from $128.9 billion (IoT Analytics, 2021). This massive growth brings new challenges to the table, as administrators need to secure the IoT devices in their network to prevent them from becoming security threats.

IoT devices allow us to manage, monitor and control devices and sensors remotely, which in turn allows us to automate tasks and activities without manual intervention. But this capability comes with an increased risk of vulnerability due to a massive increase in the available attack surface. These devices are becoming more and more prevalent in enterprise settings, especially in the office automation and operational technology areas. This increases the risk to the organization by introducing threats in areas that traditionally don’t pose cybersecurity risks.

IoT devices can act as entry points to an organization’s internal network and be used to exfiltrate data from the network without raising flags. In 2018, attackers used a compromised IoT thermometer in the lobby aquarium of a casino to breach its systems and exfiltrate the casino’s high-roller database (~10GB of data) out of the corporate network to servers they controlled, via the thermometer (Williams-Grut, 2018).

In this paper we will review some of the major threats and attack channels targeting IoT devices and look at how we can reduce the impact of these threats on the enterprise security.

IoT Threats and Attack Channels

IoT devices have multiple attack surfaces due to their design and usage. We will cover the major vulnerabilities in this section along with mitigation steps for each threat and attack channel.

A. Physical Vulnerabilities

Since these devices are usually physically deployed in the field in addition to the typical software and communication vulnerabilities, they are also vulnerable to physical attacks where the device can be physically modified to gain access. Some of the examples of Physical attacks are as follows:

  • Attackers physically remove the device memory or flash chips to read and analyze the data and software on the chip.
  • Attackers tamper with the microcontroller to gain access to or identify sensitive information.
  • Attackers physically modify the device to return incorrect data or telemetry. For example, cameras or motion sensors overseeing sensitive locations could be modified to ignore breaches.
  • Attackers use the device’s connectivity as a bridge to gain access to the corporate network.
  • Attackers authenticate locally to the device using its debug interface to gain access to the device internals.

The best way to protect against such attacks is to ensure the following preventive measures are taken for all devices on the network:

  • Ensure that the device or sensor is not easily accessible physically.
  • All sensors and devices should have tamper-proof seals installed on them, with regular checks to verify that they have not been tampered with.
  • Unused ports, connections, diagnostic connectors, etc. should be physically disabled when possible.
  • If possible, ensure the devices have hardware-based security checks on them.

B. Outdated Firmware

Many IoT devices and sensors run older versions of Linux with no easy way to update the firmware, installed software or applications to the latest versions. This creates a major security risk, as a device running software with known security vulnerabilities allows attackers to compromise it easily.

There is no easy way to resolve this problem and protect the devices, as a lot of these sensors and devices are not designed with security in mind. The best approach is to ensure you are working with reputable device manufacturers who will make appropriate support and updates available for the device/sensor.

The organization should review the recommendations by the IoT working group of the Cloud Security Alliance on how to perform IoT firmware updates securely and regularly (Khemissa et al., 2018). They should also include the IoT sensors and devices in the organization’s update cycles, which will ensure that patches and updates are installed on them in a timely manner.

Another option is to explore installing open-source firmware and software on the IoT device/sensor if this option is available. Open-source firmware is usually updated more frequently and can be customized to better secure the device.

C. Hard Coded Passwords/Accounts

Some IoT devices have hard-coded account passwords that cannot be changed, and this gives an attacker backdoor access to the device that is difficult to protect against. Hardcoded passwords are particularly dangerous because they are easy targets for password-guessing exploits, allowing attackers to hijack firmware, devices, systems, and software. A famous example came to light in 2017, when researchers found default hardcoded passwords in IoT cameras manufactured by Foscam (Heller, 2017) that gave admin access to anyone who used them. Such passwords allow an attacker to gain access to the device and use it as a launch point for attacks on the network.

Another famous attack exploiting this was by the Mirai malware in 2016. It scanned for and exploited Linux-based IoT boxes running Busybox (such as DVRs and web IP cameras) using hardcoded usernames and passwords. Once it gained access, these devices were enrolled in a botnet containing over 400,000 connected devices, which was then used to perform DDoS attacks on major companies across the world (Fruhlinger, 2018).

To protect against these attacks, we should ensure the default passwords on all devices are changed on deployment and rotated regularly. An active pentest against the device should be conducted to uncover any hidden or hardcoded accounts, as sketched below. If any are found, the manufacturer should be contacted to provide an update that disables these accounts.
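
A very small example of what such a check might look like is shown here. The device URL and credential list are illustrative assumptions, and this should only ever be run against devices you own or are authorized to test:

import requests
from requests.auth import HTTPBasicAuth

DEVICE_URL = "http://192.168.1.50/"              # hypothetical device address
DEFAULT_CREDS = [("admin", "admin"), ("admin", ""), ("root", "root")]

for user, password in DEFAULT_CREDS:
    try:
        resp = requests.get(DEVICE_URL, auth=HTTPBasicAuth(user, password), timeout=5)
    except requests.RequestException as exc:
        print(f"Could not reach device: {exc}")
        break
    if resp.status_code == 200:
        print(f"Default credentials accepted: {user}/{password or '<blank>'}")
        break
else:
    print("None of the tested default credentials worked.")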

D. Poor IoT device management

A study published in July 2020 found that almost 15% of IoT devices on enterprise networks were unknown or unauthorized, and that between 5 and 19% of these devices were using unsupported legacy operating systems (Help Net Security, 2020). These devices make up what is known as a shadow IoT network, implemented without the knowledge of the organization’s IT team, and can be a major weak point in the organization’s security perimeter.

The best way to protect against this scenario is to run regular scans on the network to identify any unknown or new devices connected to it. These scans will enable us to identify unauthorized devices, which can then be incorporated into the official network and update cycle, or disconnected, depending on the requirements. Another way to find unauthorized devices is to monitor and analyze network connections and traffic: new devices change the network data flow, and this can be used to identify or locate new devices or sensors connected to the network. A simple inventory sweep is sketched below.
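
The sketch below uses an ARP sweep (via scapy) to list devices on a local segment and flag anything not in the known inventory. The subnet and MAC list are assumptions to be adapted to the network being audited, and the scan needs root/administrator privileges:

from scapy.all import ARP, Ether, srp

SUBNET = "192.168.1.0/24"                 # assumed network segment
KNOWN_MACS = {"aa:bb:cc:dd:ee:ff"}        # hypothetical inventory of approved devices

packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET)
answered, _ = srp(packet, timeout=3, verbose=False)

for _, reply in answered:
    status = "known" if reply.hwsrc.lower() in KNOWN_MACS else "UNKNOWN - investigate"
    print(f"{reply.psrc:15s} {reply.hwsrc}  {status}")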

E. Man-in-the-Middle Attacks

Communication channels in IoT devices are usually only trivially protected, and an attacker can compromise a channel to intercept the messages between devices and modify them. This allows the attacker to cause malfunctions or present incorrect data, which can potentially cause serious harm if the targeted IoT devices are connected to or managing industrial or medical equipment. It can also allow attackers to hide their tracks and the physical evidence of their work.

F. Industrial Espionage & Eavesdropping

IoT devices such as cameras and microphones are used to monitor sensitive areas or devices remotely. If an attacker compromises these cameras, they can visually and audibly monitor their target, compromising their privacy and potentially gaining access to sensitive data or video. For example, IoT cameras deployed in bedrooms have been used to record and leak intimate videos of the residents without their knowledge, and compromised security cameras have been used to record ATM PINs entered by unsuspecting users.

Other steps that should be taken to reduce risk from IoT devices on your network:

  • Segregate your Networks: IoT devices should be on a separate segment of the network which is isolated from the production and user network with a firewall sitting between the two. This will allow you to block access to the production network from the IoT network which will prevent an attacker from gaining full access to the enterprise network in case they breach the IoT network.
  • Enable HTTPS/Encrypted connectivity for IoT devices: All connections to and from the IoT devices should be encrypted to protect against Man-in-the-middle attacks.
  • Deploy an IDS: Deploying an Intrusion Detection System (IDS) on the network can alert us to attack attempts. All alerts from the IDS should be investigated and verified.

These are just some of the attack surfaces available to attackers targeting IoT devices. In fact, with the increase in computing power available to these devices they are almost mini computers, and most of the attacks that impact traditional systems such as servers or desktops can target IoT devices as well with minimal modifications. So, it is essential that security training is conducted for all employees in the organization to make them aware of the risks posed by IoT devices, and that the security team is trained in methods to secure these devices from attackers.


Note: This was originally written as a paper for one of my classes at EC-Council University in Q3 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

January 27, 2022

New MoonBounce UEFI Bootkit that can’t be removed by replacing the Hard Disk

Filed under: Computer Security,Computer Software,My Thoughts,Tech Related — Suramya @ 1:05 AM

Viruses and malware have evolved a lot in the past 2-2.5 decades. I remember the first virus that infected my computer back in 1998: it corrupted the boot sector and the partition table to the point where I couldn’t even format the drive, as it wasn’t detected by the OS. I tried booting via a floppy and running ScanDisk on it (this was on DOS 6.1/Windows 3.1) but it wouldn’t detect the disk; same issue with Norton Disk Doctor (NDD). I was scared to tell my parents that I had broken the new computer, but after a whole night of trying various things based on conversations with friends, suggestions in books, etc., I managed to get NDD to detect the disk and repair the partition table. After that it was a relatively simple task to format the disk and reinstall DOS. Similarly, all the other viruses I encountered could be erased by formatting the disk or replacing it.

There were a few that tried using the BIOS for storing info, but not many. I did create a prank program that would throw insults at you when you typed the wrong command, on every 5th boot. The counter for the boot was kept in the BIOS. But it didn’t have any propagation logic in the code and had to be run manually on each machine; plus, it had to be customized manually for every new BIOS type/version, so it wasn’t something that could spread on its own.

With the new malware/viruses that have come out in the past few decades we are seeing more advanced propagation and persistence capabilities, but until now you could still replace an infected drive and start with a clean slate. That has now changed with the new MoonBounce UEFI bootkit, which can’t be removed by replacing the hard drive because it stores itself in the SPI flash memory found on the motherboard. This means that the bootkit will remain on the device until the SPI memory is re-flashed or the whole motherboard is replaced, which makes it very difficult and expensive to recover from the infection.

Securelist has a very detailed breakdown of the bootkit, which you should check out. The scary part is that this is not the only bootkit that uses this method; there are a few others, such as ESPecter and FinSpy’s UEFI bootkit, which prove that the capability is becoming more mainstream and that we should expect to see more such bootkits in the near future.

Source: Slashdot: New MoonBounce UEFI Bootkit Can’t Be Removed by Replacing the Hard Drive

– Suramya

January 25, 2022

Intentionally breaking popular opensource projects for… something

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 10:23 AM

Recently Marak Squires, the developer of the extremely popular npm modules Colors & Faker, decided to intentionally commit changes into the code that broke the modules and brought down thousands of apps worldwide. Initially it was thought that the modules were hacked, as others have been in the past, but looking at the commit history it was obvious that the changes were committed by the developer themselves. Which brings us to the question of why on earth someone would do something like this. Marak didn’t explicitly state why the changes were made, but considering their past comments it does seem like this was done intentionally:

In November 2020, Marak had warned that he will no longer be supporting the big corporations with his “free work” and that commercial entities should consider either forking the projects or compensating the dev with a yearly “six figure” salary.

“Respectfully, I am no longer going to support Fortune 500s ( and other smaller sized companies ) with my free work. There isn’t much else to say,” the developer previously wrote.

“Take this as an opportunity to send me a six figure yearly contract or fork the project and have someone else work on it.”

The aftermath of the changes is that NPM has revoked the developer’s rights to commit code, their GitHub account has been suspended, and the modules in question have been forked. Now Marak is pleading for his accounts to be reinstated because the issue was caused by a ‘programming mistake’, which seems like a far-fetched excuse, especially given how they made fun of the problem right after people reported it. That doesn’t seem like the reaction we would see if this was a legitimate mistake.

My guess is that they thought this would play out differently, with companies falling over themselves to give them money/contracts, but they didn’t anticipate how it would blow back on them. I mean, if I was hiring right now and their resume came up, I would think twice about hiring them because of this stunt. They have shown that they can’t be trusted, and what is to stop them from making changes to my company’s software and bringing it to a screeching halt because they felt that they were not being paid their dues? They have already done it once; what is to stop them from doing it again? This looks like a textbook example of what not to do in order to get people to work with you or hire you.

One of the things that I have heard from detractors of open-source software, when I was pushing for it in my previous companies, is the question of how we can be sure the software will still be there a year from now and who we blame if the software is broken and we need help. Stunts like this don’t help improve the image of open-source software, and this person is now reaping their just deserts.

The positive side is that, because the code is open source, it has already been forked and others have taken over the codebase to ensure we don’t hit similar issues going forward.

– Suramya

