Suramya's Blog : Welcome to my crazy life…

April 20, 2022

Improving Photosynthesis efficiency by resurrecting ancient enzymes in Modern Plants

Filed under: Science Related — Suramya @ 1:23 AM

Climate change is going to impact us in a major way, especially our ability to produce food, as the plants and animals we have right now are not really suited for an environment that is a lot hotter and has more pollutants & CO2 in the atmosphere. Scientists have been exploring ways to modify plants so they can thrive in the new environment and increase their food productivity (since the world population is growing at a massive rate), and Maureen Hanson & Myat Lin from Cornell University have made a breakthrough in the effort to improve the efficiency of photosynthesis in plants.

They developed a computational technique to predict favorable gene sequences for Rubisco, the enzyme that fixes atmospheric CO2 into organic compounds. Using its evolutionary history, the process predicted what the genes would have looked like 20-30 million years ago. The predictions were then tested using an experimental system developed in Hanson’s lab (described in a Nature Plants paper) that employs E. coli bacteria to test the efficacy of different versions of Rubisco. This method allowed the researchers to evaluate the candidates in days rather than the months the traditional verification method would have taken.

Plants and photosynthetic organisms have a remarkably inefficient enzyme named Rubisco that fixes atmospheric CO2 into organic compounds. Understanding how Rubisco has evolved in response to past climate change is important for attempts to adjust plants to future conditions. In this study, we developed a computational workflow to assemble de novo both large and small subunits of Rubisco enzymes from transcriptomics data. Next, we predicted sequences for ancestral Rubiscos of the (nightshade) family Solanaceae and characterized their kinetics after coexpressing them in Escherichia coli. Predicted ancestors of C3 Rubiscos were identified that have superior kinetics and excellent potential to help plants adapt to anthropogenic climate change. Our findings also advance understanding of the evolution of Rubisco’s catalytic traits.

Their findings, which have been published in Science Advances, show that the predicted ancient Rubisco enzymes hold real promise for being more efficient. The next step is to replace the genes in existing tobacco plants with their ancient counterparts and then measure how the photosynthesis efficiency changes in the modified plant. If things look good, testing can then start on food crops such as tomatoes, soybeans and rice.

But that is going to take time as these things can’t be rushed and we need to ensure there are no harmful side effects of this change. That being said, it is a great breakthrough and I am going to be watching this space for more advances.

Paper: Improving the efficiency of Rubisco by resurrecting its ancestors in the family Solanaceae
Article: Scientists resurrect ancient enzymes to improve photosynthesis

– Suramya

April 19, 2022

Please stop taking such photos unless you have a death wish.

Filed under: My Thoughts — Suramya @ 12:04 AM

The following image popped up in my feed earlier today and I am dumbfounded by what people feel is ok to do to get a perfect (Instagram?) picture.

Answer: Common sense and a working brain

This looks like an active road: you can see cars in the background and there are cars driving towards them. They are lying down on a zebra crossing in the rain, which will make it difficult for drivers to stop quickly. Plus, since no one expects idiots to be lying down on the road, there is a good chance that a driver who is not paying attention or is momentarily distracted could run them over, and in this pose it is almost impossible for these two to get out of the way fast if a car is about to hit them.

I have taken a few sitting-on-the-road photos over the years, but we ensured that the roads were empty and we could see for a fair distance, so that we could get out of the way if there was oncoming traffic.

Come on folks, is taking a great pic really worth dying for? Or do you want to be remembered as the idiots who got run over when taking an Instagram pic? Please stop this idiotic behavior.

– Suramya

April 18, 2022

Oracle releases a ‘free’ version of Oracle Solaris 11.4 for opensource developers and non-production personal use

Filed under: Linux/Unix Related,My Thoughts — Suramya @ 2:59 AM

Last month Oracle released a ‘free’ version of Oracle Solaris 11.4 for opensource developers and non-production personal use. The key point to note is that this doesn’t mean there is now a free/opensource version of the OS: unlike the OpenSolaris project (which launched in 2008 but was discontinued), this build is similar to a beta release and contains pre-release builds of a particular SRU (Support Repository Update). To me it sounds like they want the opensource community to perform free testing for their releases while getting some positive publicity.

I don’t think I will be trying it out because I don’t really trust Oracle. They are notorious for their bad takes and really aggressive enforcement of their IP rights. Plus their history with opensource projects has been bumpy and you never know when they will change their mind and go in a different direction.

My first experience with Unix/Linux was SunOS 4.1, followed by Solaris 5. I even had a Sparc machine at one point, but it got lost during one of the many moves I made over the years. I loved the OS, and since I couldn’t run it on my machines I started using Linux, which was a great alternative. When OpenSolaris was released I received installation CDs to try it out; unfortunately life got in the way and I never really tested it (other than the initial install). It was disappointing when the project went defunct and was shut down. I took a look at the OpenSolaris Wikipedia page and it looks like none of the derivative projects that were supposed to take over really went anywhere. So that sucks.

I don’t think that Unix does anything that Linux can’t do, and even so, if you want to run Unix on your machines I would recommend you go for FreeBSD instead of this ‘free’ version.

Thanks to HackaDay: Solaris Might Be Free If You Want It for the initial link.

– Suramya

April 17, 2022

Air Quality in Delhi/NCR sucks

Filed under: My Thoughts — Suramya @ 10:32 AM

People complain about the air quality in Delhi/NCR a lot (and I am one of them). The Delhi government tries to blame it all on the annual burning of crops in Punjab, but that is just an excuse: the burning happens once a year, while the air quality is bad throughout the year. A few years ago I was at our place in Noida; I walked out of my room, looked at the living room and found the view to be hazy. At first I thought this was because I wasn’t wearing my glasses, but then I realized that it was smog inside our house. I immediately searched for and installed my old air purifier (from our days in the US) and within a short time the haziness was gone.

Recently I bought the PHILIPS High Efficiency Air Purifier AC2887/20 for my house in Bangalore, and when I saw how effective it was, I got the same model for my parents to use in Delhi as well. The one I have in Bangalore ran 24/7 while there was construction going on inside our house (we had the bathrooms renovated) and I had no issues/allergic reactions due to the work being done. Without the purifier I would have been on a constant diet of Allegra with splitting headaches the entire day. You can imagine the amount of dust generated by the work, but the purifier took it all in stride; after the construction was done I cleaned the external filter once, only because I felt I should (no alerts to clean it ever came on).

My father started using the purifier about a month ago (finally!) when the pollen season started and his allergies kicked in. It ran in his room, mostly at night and sometimes during the day as well. Earlier this week, after he had been using it for about a month, he pinged me to ask what a particular error code being displayed on the purifier meant, so I found the manual for the device and looked it up. The error code (F0) meant that the external filter needed to be cleaned for the device to work at peak efficiency.

I ran it with construction going on in the other room and the filter never got so bad that the device had to ask me to clean it. Delhi/Noida air quality, on the other hand, is so bad that the filter had to be cleaned within a month of normal use. Now tell me that the air quality in NCR is not bad! Plus there is no crop burning happening right now, nor is it Diwali, so you can’t blame it on those either. No wonder Delhi was rated the most polluted capital city in the world.

Delhi Air Pollution: Real-time Air Quality Index (AQI)

There needs to be an active, urgent effort by the Government to reduce the pollution level, similar to what was done in the early 2000s, when the air quality improved for a brief time after all public transport was moved to CNG.

Till then I am going to ensure that the purifier is constantly running at home in Delhi. I don’t do it in the Bangalore house regularly but I do run a purifier in the car to reduce my exposure.

– Suramya

April 16, 2022

Debian Project leader talks about How Debian is doing on the mailing list

Filed under: Linux/Unix Related — Suramya @ 5:28 AM

I use Debian as my primary OS and have been doing so since 2002. I switched from Redhat to Debian because RH8 was an attempt to make the OS easier for new users, which meant that a lot of functionality was no longer exposed to the user without jumping through hoops, and I just didn’t like the new look and feel anymore. After looking at the available options I switched to Debian 3.0, which had been released earlier that year. It has worked great for the most part and I have been using it since. I did explore Mint and Ubuntu for a bit in the middle, but have mostly been using Debian for my home systems. (Work wise, most companies I have been with have been on RHEL, CentOS and Fedora.)

After running for such a long time, and with the constant changes over the past few years, it is natural to wonder how the Debian project is doing. Recently Jonathan Carter, the current Debian Project Leader, sent an email giving a high-level overview of the current status: what went well, the current challenges, and the future scope. It is an interesting read and you should check it out here: Question to all candidates: how is Debian doing?.

Some of the points I found interesting are listed below:

  • The project has managed to release every 2 years since 2005
  • The finances are also really good, with over $1m in available funds
  • Debian gained secureboot support
  • The project
  • Consumer computing products are going to continue being more locked down and this is causing problems with the installers

There are more points, but as Jonathan put it, “I think Debian is doing ok. It’s not doing great, but it is ok.” For me it works as I want it and how I want it, so I am happy with it, and it is good to know that the project is stable and will continue to be around for a while.

– Suramya

April 15, 2022

Life found a way a lot earlier than when we thought it had

Filed under: Interesting Sites,Science Related — Suramya @ 2:57 AM

According to the current scientific understanding, Earth formed about 4.54 billion years ago, and till now the theory was that life evolved on Earth about 3.7bn years ago. This was primarily based on the fact that the oldest reported micro-fossils dated to 3.46bn and 3.7bn years ago. However, recent discoveries in Canada have changed the calculus: researchers found evidence of microbes thriving near hydrothermal vents on Earth’s surface just 300 million years after the planet formed, making them between 3.75bn and 4.28bn years old and by far the oldest micro-fossils ever found.

If confirmed, it would suggest the conditions necessary for the emergence of life are relatively basic. “If life is relatively quick to emerge, given the right conditions, this increases the chance that life exists on other planets,” said Dominic Papineau, of University College London, who led the research. Five years ago, Papineau and colleagues announced they had found microfossils in iron-rich sedimentary rocks from the Nuvvuagittuq supracrustal belt in Quebec, Canada. The team suggested that these tiny filaments, knobs and tubes of an iron oxide called haematite could have been made by bacteria living around hydrothermal vents that used iron-based chemical reactions to obtain their energy.

Scientific dating of the rocks has suggested they are at least 3.75bn years old, and possibly as old as 4.28bn years, the age of the volcanic rocks they are embedded in. Before this, the oldest reported microfossils dated to 3.46bn and 3.7bn years ago, potentially making the Canadian specimens the oldest direct evidence of life on Earth. Now, further analysis of the rock has revealed a much larger and more complex structure — a stem with parallel branches on one side that is nearly a centimetre long — as well as hundreds of distorted spheres, or ellipsoids, alongside the tubes and filaments.

It is a fascinating find because it gives us an idea of how quickly life evolved on Earth, which in turn helps us search for it on other planets, both in our own solar system and around other stars (once we can get to them). Whether that life would have evolved into something akin to humans or would still be in the micro-organism stage is up in the air. My feeling is that we will find evidence for something in the middle of both extremes, but the longer we search the more the possibility of finding intelligent life improves.

Source: Microfossils may be evidence life began very quickly after Earth formed

– Suramya

April 14, 2022

Ensure your BCP plan accounts for the Cloud services you depend on going down

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 1:53 AM

Long-time readers of the blog, and folks who know me, know that I am not a huge fan of putting everything on the cloud, and I have written about this in the past (“Cloud haters: You too will be assimilated” – Yeah Right…). Don’t get me wrong, the cloud does have its uses and advantages (some of them significant), but it is not something you want to get into without significant planning and thought about the risks. You need to ensure that the ROI for the move is more than the increased risk to your company/data.

One of the major misconceptions about the cloud is that once we put something there, we no longer need to worry about backups, uptime, etc. because the service provider takes care of it. This is obviously not true. You need local backups, and you need to ensure that your BCP (Business Continuity Plan) accounts for what you would do if the provider itself went down and the data on the cloud became unavailable.

You think that this is not something that could happen? The nine-day-and-counting outage over at Atlassian begs to differ. On Monday, April 4th, 20:12 UTC, approximately 400 Atlassian Cloud customers experienced a full outage across their Atlassian products. This is just the latest instance of a cloud provider going down and leaving its users in a bit of a pickle, and as per information sent to some of the clients it might take another two weeks to restore services for all users.

One of our standalone apps for Jira Service Management and Jira Software, called “Insight – Asset Management,” was fully integrated into our products as native functionality. Because of this, we needed to deactivate the standalone legacy app on customer sites that had it installed. Our engineering teams planned to use an existing script to deactivate instances of this standalone application. However, two critical problems ensued:

Communication gap. First, there was a communication gap between the team that requested the deactivation and the team that ran the deactivation. Instead of providing the IDs of the intended app being marked for deactivation, the team provided the IDs of the entire cloud site where the apps were to be deactivated.
Faulty script. Second, the script we used provided both the “mark for deletion” capability used in normal day-to-day operations (where recoverability is desirable), and the “permanently delete” capability that is required to permanently remove data when required for compliance reasons. The script was executed with the wrong execution mode and the wrong list of IDs. The result was that sites for approximately 400 customers were improperly deleted.

To recover from this incident, our global engineering team has implemented a methodical process for restoring our impacted customers.

To give you an idea of how serious this outage is, I will use my personal experience with their products and how they were used in one of my previous companies. Without Jira & Crucible/Fisheye, no one would be able to commit code into the repositories or do code reviews of existing commits. Users would not be able to do production/dev releases of any product. With Confluence down, users/teams can’t access guides/instructions/SOP documents/documentation for any of their systems. Folks who use Bitbucket/SourceTree would not be able to commit code. And this is the minimal-impact scenario; it gets worse for organizations that use CI/CD pipelines and proper SDLC processes/lifecycles that depend on these products.

If the outage was on the on-premises servers then the teams could fail over to the backup servers and continue, but unfortunately for them the issue is on the Atlassian side and now everyone just has to wait for it to be fixed.

Code commit blocks (pre-commit/post-commit hooks, etc.) can be disabled, but unless you have local copies of the documentation stored in Confluence you are SOL. We actually faced this issue once with our on-prem install, where the instructions on how to do the failover were stored on the Confluence server that had gone down. We managed to get it back up through a lot of hit & trial, but after that all teams were notified that their BCP/failover documentation needed to be kept in multiple locations, including hardcopy.

If the companies using their services didn’t prepare for a scenario where Atlassian went down then there are a lot of people scrambling to keep their businesses and processes running.

To prevent such issues, we should look at setting up systems that take automatic backups of the online systems and store them on a different system (which can be in the cloud, but with a different provider, or locally). All documentation should have local copies, and for really critical documents we should ensure hardcopy versions are available. Similarly, we need to ensure that any online repositories are backed up locally or with another provider.
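As an illustration, here is a minimal sketch of what automating repository backups could look like, using bare `git clone --mirror` copies that are refreshed on each run. The repository URLs and backup path are hypothetical placeholders, not a real setup; you would point these at your own hosted repos and run the script from cron.

```python
import subprocess
from pathlib import Path

# Hypothetical local location where mirror clones are kept
BACKUP_ROOT = Path("/srv/backups/git")

def repo_name(repo_url: str) -> str:
    """Derive a directory name from the repository URL."""
    return repo_url.rstrip("/").split("/")[-1]

def mirror(repo_url: str, backup_root: Path = BACKUP_ROOT) -> None:
    """Create or refresh a bare mirror clone (all branches, tags and refs)."""
    target = backup_root / repo_name(repo_url)
    if target.exists():
        # Existing mirror: fetch all refs and drop ones deleted upstream
        subprocess.run(
            ["git", "-C", str(target), "remote", "update", "--prune"],
            check=True,
        )
    else:
        # First run: create a full mirror clone
        subprocess.run(
            ["git", "clone", "--mirror", repo_url, str(target)],
            check=True,
        )
```

A nightly cron job looping over your repository list with `mirror()` means that even if the hosted service disappears for two weeks, every commit up to the last run is available locally.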

This is a bad situation to be in and I sympathize with all the IT staff and teams trying to ensure that their companies’ businesses keep running uninterrupted during this time. The person who ran the script on the Atlassian side, on the other hand, should seriously consider getting some sort of evil-eye charm to protect themselves against all the curses flying their way (I am joking… mostly).

Well this is all for now. Will write more later.

April 13, 2022

Internet of Things (IoT) Forensics: Challenges and Approaches

The Internet of Things, or IoT, consists of interconnected devices that have sensors and software and are connected to automated systems to gather information; depending on the information collected, various actions can be performed. It is one of the fastest growing markets, with enterprise IoT spending projected to grow by 24% in 2021 from $128.9 billion (IoT Analytics, 2021).

This massive growth brings new challenges to the table: administrators need to secure the IoT devices in their networks to prevent them from becoming security threats, and attackers have found multiple ways to gain unauthorized access to systems by compromising IoT devices.

IoT Forensics is a subset of the digital forensics field and is the new kid on the block. It deals with forensic data collected from IoT devices and follows the same procedure as regular computer forensics, i.e., identification, preservation, analysis, presentation, and report writing. The challenges of IoT come into play when we realize that, in addition to the IoT sensor or device, we also need to collect forensic data from the internal network or the cloud when performing an investigation. This highlights the fact that IoT forensics can be divided into three categories: IoT device-level forensics, network forensics and cloud forensics. This matters because IoT forensics is heavily dependent on cloud forensics (as a lot of the data is stored in the cloud) and on analyzing the communication between devices, in addition to the data gathered from the physical device or sensor.

Why IoT Forensics is needed

The proliferation of Internet-connected devices and sensors has made life a lot easier for users and has a lot of benefits associated with it. However, it also creates a larger attack surface which is vulnerable to cyberattacks. In the past, IoT devices have been involved in incidents that include identity theft, data leakage, accessing and using Internet-connected printers, commandeering of cloud-based CCTV units, SQL injections, phishing, ransomware and malware targeting specific appliances such as VoIP devices and smart vehicles.

With attackers targeting IoT devices and then using them to compromise enterprise systems, we need the ability to extract and review data from the IoT devices in a forensically sound way to find out how the device was compromised, what other systems were accessed from the device etc.

In addition, the forensic data from these devices can be used to reconstruct crime scenes and to prove or disprove hypotheses. For example, data from an IoT-connected alarm can be used to determine where and when the alarm was disabled and a door was opened. If a suspect wears a smartwatch, then the data from the watch can be used to identify the person or infer what the person was doing at the time. In a recent arson case, the data from the suspect’s smartwatch was used to implicate him in the arson. (Reardon, 2018)

The data from IoT devices can be crucial in identifying how a breach occurred and what should be done to mitigate the risk. This makes IoT forensics a critical part of the Digital Forensics program.

Current Forensic Challenges Within the IoT

The IoT forensics field has a lot of challenges that need to be addressed, and unfortunately none of them have a simple solution. As shown in the research done by M. Harbawi and A. Varol (Harbawi, 2017), we can divide the challenges into six major groups: identification, collection, preservation, analysis and correlation, attack attribution, and evidence presentation. We will cover the challenges each of these presents below.

A. Evidence Identification

One of the most important steps in a forensic examination is to identify where the evidence is stored and collect it. This is usually quite simple in traditional digital forensics, but in IoT forensics it can be a challenge, as the required data could be stored in a multitude of places, such as in the cloud or in proprietary local storage.

Another problem is that, since IoT fundamentally means that nodes are in real-time, autonomous interaction with each other, it is extremely difficult to reconstruct the crime scene and to identify the scope of the damage.

A report by the International Data Corporation (IDC) estimates that the data generated by IoT devices between 2005 and 2020 will grow to more than 40,000 exabytes (Yakubu et al., 2016), making it very difficult for investigators to identify the data that is relevant to an investigation while discarding the irrelevant data.

B. Evidence Acquisition

Once the evidence required for the case has been identified, the investigative team still has to collect the information in a forensically sound manner that will allow them to perform analysis of the evidence and present it in court for prosecution.

Due to the lack of a common framework or forensic model for IoT investigations, this can be a challenge: the method used to collect evidence can be challenged in court due to omissions in the way it was collected.

C. Evidence Preservation and Protection

After the data is collected it is essential that the chain of custody is maintained and that the integrity of the data is validated and verifiable. In IoT forensics, evidence is collected from multiple remote servers, which makes maintaining a proper chain of custody a lot more complicated. Another complication is that, since these devices usually have limited storage capacity and the system is continuously running, there is a possibility of the evidence being overwritten. We can transfer the data to a local storage device, but then ensuring the chain of custody is unbroken and verifiable becomes more difficult.
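One common way to make integrity verifiable is to record a cryptographic hash of each evidence file at collection time, so that any later copy can be checked against the original digest. The sketch below is a simplified illustration of that idea (the field names in the custody entry are invented for this example), not a complete chain-of-custody system.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_evidence(path: Path, collector: str) -> dict:
    """Produce a custody entry for one evidence file at collection time."""
    return {
        "file": str(path),
        "sha256": sha256_of(path),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_evidence(path: Path, entry: dict) -> bool:
    """Re-hash the file and compare against the recorded digest."""
    return sha256_of(path) == entry["sha256"]
```

If a copy transferred to local storage later fails `verify_evidence()`, you know the data was altered (or corrupted) somewhere between collection and analysis, which is exactly the gap the chain of custody has to account for.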

D. Evidence Analysis and Correlation

Because IoT nodes operate continuously, they produce an extremely high volume of data, making it difficult to analyze and process everything collected. Also, since in IoT forensics there is less certainty about the source of the data and who created or modified it, it is difficult to extract information about ownership and the modification history of the data in question.

With most IoT devices not storing metadata such as timestamps or location information, along with the issues created by different time zones and clock skew/drift, it is difficult for investigators to create causal links from the data collected and to perform analysis that is sound, not subject to interpretation bias, and defensible in court.
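To see why the time-zone and clock-skew problem matters, consider correlating events from two devices logging in different local times, one of them running fast. A minimal sketch of the normalization step (the timestamp format and the idea of a pre-measured per-device skew are assumptions for the example) is:

```python
from datetime import datetime, timedelta, timezone

def normalize(ts: str, utc_offset_hours: float, skew_seconds: float = 0.0) -> datetime:
    """Convert a device-local timestamp string to UTC, correcting for the
    device's time-zone offset and a measured clock skew (positive = fast)."""
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    aware = local.replace(tzinfo=tz)
    # A device running 90 seconds fast logged events 90s "in the future",
    # so we subtract the skew to recover the true instant.
    return aware.astimezone(timezone.utc) - timedelta(seconds=skew_seconds)
```

Only after every device's events are on one corrected UTC timeline can an investigator order them and argue causality ("the door sensor fired before the camera lost power") without the ordering being an artifact of mismatched clocks.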

E. Attack and Deficit Attribution

IoT forensics requires a lot of additional work to ensure that the device’s physical and digital identities are in sync and that the device was not being used by another person at the time. For example, if a command given to Alexa by a user is evidence in a case against them, then the examiner needs to confirm that the person giving the command was physically near the device at the time and that the command was not given remotely over the phone.

F. Evidence Presentation

Due to the highly complex nature of IoT forensics and of how the evidence is collected, it is difficult to present the data in court in an easy-to-understand way. This makes it easier for the defense to challenge the evidence and its interpretation by the prosecution.

Opportunities of IoT Forensics

IoT devices bring new sources of information into play that can provide evidence that is hard to delete and is, most of the time, collected without the suspect’s knowledge. This makes it hard for suspects to account for that evidence in their testimony, and it can be used to trip them up. This information is also harder to destroy because it is stored in the cloud.

New frameworks and tools such as Zetta, Kaa and M2MLabs Mainspring are now becoming available in the market, making it easier to collect forensic information from IoT devices in a forensically sound way.

Another group is pushing to include blockchain-based evidence chains in the digital and IoT forensics field, to ensure that collected data can be stored in a forensically verifiable manner that can’t be tampered with.


IoT forensics is becoming a vital field of investigation and a major subcategory of digital forensics. With more and more devices getting connected to each other and increasing the attack surface of the target, it is very important that these devices are secured and that there is a sound way of investigating if and when a breach happens.

Tools using Artificial Intelligence and Machine Learning are being created that will allow us to investigate breaches, attacks, etc. faster and more accurately.


Reardon, M. (2018, April 5). Your Alexa and Fitbit can testify against you in court.

M. Harbawi and A. Varol, “An improved digital evidence acquisition model for the Internet of Things forensic I: A theoretical framework”, Proc. 5th Int. Symp. Digit. Forensics Security (ISDFS), pp. 1-6, 2017.

Yakubu, O., Adjei, O., & Babu, N. (2016). A review of prospects and challenges of internet of things. International Journal of Computer Applications, 139(10), 33–39.

Note: This was originally written as a paper for one of my classes at EC-Council University in Q4 2021, which is why the tone is a lot more formal than my regular posts.

– Suramya

April 12, 2022

How not to ask for help on Online Forums

Filed under: Linux/Unix Related,My Thoughts — Suramya @ 1:12 AM

It is quite normal to get stuck while exploring a new operating system, a new programming language, or anything new, to be honest. One of the great advantages we have now is the ability to go online and search for answers on the Internet, and if you are unable to find a fix you can request help on forums. There are forums specific to all sorts of niche areas, and some of them are quite active. I doubt it will surprise many that I am part of multiple Linux forums, and in this post I am going to talk about a specific post on one of them that is a masterclass in how not to ask questions/how not to ask for help/how to ensure your questions are never answered.

Let’s start with the post, then we can dig into each line of this gem (The first line is the subject of the post and the rest are the contents).

Linux is bad
Dear Linux users,

Here is the top 3 reasons, I think Linux is bad:
1- Hard.
2- NVIDIA drivers.
3- I don't know how to write shell scripts.

My friend told me that I don't need to, the community is very helpful.
So I thought I should test them and see if they can help me finish my simple shell homework.

Sorry for the bait. I will switch to Linux if I get help, but you probably don't care.
Hopefully there is a weirdo who will think this is fun.

I have a hard time believing this is not some troll posting crap just to get a rise out of people, but if that is not the case then this post goes out of its way to ensure people react badly to the request. So, without further ado, let’s dig in.

Here is the top 3 reasons, I think Linux is bad:
1- Hard.
2- NVIDIA drivers.
3- I don't know how to write shell scripts.

Ok, not a great start. You are posting on a Linux forum stating that it is bad because you find it hard and don’t know how to write shell scripts. (I will partially give them the point about NVIDIA drivers, because historically they have been a pain.) How is it Linux’s fault that you don’t know how to write shell scripts? Did you honestly believe that the creator of the OS should have come to your house to teach you shell scripting so that you don’t find it ‘hard’? There are multiple resources online that teach shell scripting, including some great courses on Udemy, YouTube, Coursera, etc. All you have to do is be willing to put in the effort.

To the other point, about Linux being hard: it is not. It is different from Windows and does things differently; that doesn’t make it hard, it’s just not what you are used to. I use Linux as my primary OS, and when I have to troubleshoot my wife’s Windows 11 laptop there is usually a lot of cursing involved. When I started with Linux it was the other way round: for the longest time I kept trying to do things the ‘Windows way’ and it didn’t always work. However, once you take the time to explore the system, the flexibility it gives you is fantastic. Don’t like the desktop UI? Change to a different one. Don’t like the file manager? Use a different one. And so on.

My friend told me that I don't need to, the community is very helpful.
So I thought I should test them and see if they can help me finish my simple shell homework.

Umm, who do you think you are that you need to test the community? Besides, 'testing' by having them do your homework is not testing. This is called negging: giving backhanded compliments and generally making comments that express indifference toward another person (in this case an operating system) in an attempt to get them to go out of their way to impress you or do things for you. It is a tactic used by pickup artists, who put women down so that the women will seek their approval by going out with them. Sorry, that only works on emotionally vulnerable people, not folks on a technical forum. We have no need to gain your approval.

Someone on the forum had the perfect answer for this: “The community is helpful, but you seem to have put more effort into trying to get someone else to do your homework for you, than into actually doing it yourself. We aren’t going to do your homework for you (and if you bothered to check the LQ Rules and “Question Guidelines” you’d see that), but we will help you if you’re stuck. “

Sorry for the bait. I will switch to Linux if I get help, but you probably don't care.

Yes, we don't care, and why should we care whether you switch to Linux? Do you think you are someone important? This person needs to realize that they are not the center of the universe and that it is irrelevant to others whether they decide to switch to Linux or not. Honestly speaking, I don't care if you use Linux or not. Linux users (for the most part) are no longer the anti-Microsoft zealots who will try to force you to use Linux. In my opinion you should use it if you like it; if you feel Windows or Mac works better for you, use that.

Hopefully there is a weirdo who will think this is fun.

What a way to encourage people to help you! Because calling people names is sure to make them want to help you… Right? No? How is that possible??? I thought I was the center of the universe and all the lesser people would fall over themselves to help me, since they should feel honored that I am allowing them to help me.

Nope, it doesn't work that way. It only works like that in movies (and maybe in some schools/colleges) where the jocks/popular kids are treated like divine beings and others fall over themselves to help them so that they can bask in the glory of having interacted with the cool kids. Real life doesn't work like that, and in most places you will be laughed out of the room if you try this nonsense at work.

If you want help, it helps to be humble: talk about what you have already tried, explain what specific portion is giving you problems, and stow the attitude.

Interestingly enough, people on the forum still gave hints on how they could approach the problem and pointed them to resources that would help if they put in the effort.

What do you think? Is it ok to post for help like this? Would you answer this person if you came across the post?

Original forum post in all its glory: linux is bad for reference.

– Suramya

April 11, 2022

I am now a Certified SOC Analyst (CSA)

Filed under: Computer Security,My Life,Tech Related — Suramya @ 5:08 AM

Over the weekend I gave my first Cybersecurity Certification exam and I am now a Certified SOC Analyst (CSA). 🙂 This is the first of five certifications I will be completing this year as part of my Degree in Cybersecurity.

Certificate No: ECC2945876310

The exam was interesting, and for me the hardest part was remembering all the Windows event codes, as I have a hard time remembering numbers. I feel that they should allow test takers access to a Windows system (registry/event logs), because in a real-life scenario we would always have access to the system and the internet. Testing without the ability to search the internet doesn't make much sense, as it is not realistic.

That being said, I am looking forward to the next certification exam, which I am planning to take at the end of the month or early next month.

Well this is all for now. Will write more later.

– Suramya
