Suramya's Blog : Welcome to my crazy life…

November 28, 2020

My Backup strategy and how it has evolved over the years

I am a firm believer in backing up my data. Some people say that I am paranoid about backing up data, and I do not dispute it: all my data is backed up on multiple drives and in multiple locations, and I still feel that I need additional backups. This is because I read the news, and there have been multiple cases where people lost their data because they hadn't backed it up. Initially I wasn't that serious about it, but when I was in college and working at the helpdesk, a PhD student came in crying because her entire PhD thesis was on a Zip drive that had stopped working. She didn't have a backup and was basically screwed. We tried a bunch of things to recover the data but didn't manage to recover anything. That made me realize that I needed a better backup procedure, and so started my journey in creating recoverable backups.

My first backup system was a partition on my drive called Backup where I kept a copy of all my important data (this was back in 2000/2001). Then I realized that if the drive died I would lose access to the backup partition as well, so I started looking for alternatives. This was around the time I bought a CD writer, so all my important data was backed up to CDs and I was confident that I could recover any lost data. Shortly afterwards I moved to DVDs for easier storage. However, I didn't realize until a lot later that CDs & DVDs start becoming unreadable quite easily. Thankfully I didn't lose any data, but it was a rude awakening to find that the disks I had expected to keep my data safe were starting to become unreadable within a few years.

I then did a bunch of research online and found that the best medium for storing data long term is still hard drives. I didn't want to store anything online because I want my data to be in my control, so any online backup system was out of the question. I added multiple drives to my desktop and started syncing the data from the desktop & laptop to the backup drive using rsync. This ensured that the important data was in three locations at any given time: my desktop, my laptop and the backup drive (plus a DVD copy that I made of all my data every year).

I continued with this backup strategy for a few years, but then realized that I had no way to go back to a previous version of any given document: if I deleted a file or wanted to go back to an older version of it, I only had 24 hours before the change was synced to the backup drive and became unrecoverable. There was a case where I ended up having to dig through my DVD backups to find the original version of a file that I had changed. So I did a bit of research and found rdiff-backup. It backs up one directory to another while keeping incremental history, so files can be recovered or restored as of a given date. The best part is that the software is highly efficient: once the initial backup is done, it only transmits the changes to the files in subsequent runs. Now that I have been using it for a while, I can restore a snapshot of my data going back to 2012 quite easily.

I was quite happy with this setup for a while, but while reading an article on backup best practices I realized that I was still depending on only one location for the backup data (the rdiff-backup snapshots), and the best practices stated that you should also store a copy on an external drive or at an offsite location to prevent viruses/ransomware from deleting your backups. So I bought a 5TB external drive and created an encrypted partition on it to store all my important data. But I was still unhappy, because all of this was still stored at my home: if there was a fire or something, I would still end up losing the data even though my external drive was kept in a safe. I still didn't want to store data online, but that was still the best way to ensure I had an offsite backup. I initially thought about setting up a server at my parents' place in Delhi and backing up there, but that didn't work out for various reasons. Plus I didn't want to have to call them and troubleshoot backup issues over the phone.

Around this time I was reading about encrypted partitions and came up with the idea of creating an encrypted container file to store my data and then backing up the container file online. I followed the steps I outlined in my post How to encrypt your Hard-drive in Linux and created the encrypted container. Once I finished that, I had to upload the container to my webhost, since I had unlimited storage space as per my contract. Initially I wasn't able to because they had restricted my account's quota, but a call to their customer support sorted it out after a bit of arguing and explaining what I was doing. The next hurdle was actually uploading the file to the server, because of the ridiculously low upload speed I was getting from Airtel. I had a 40 Mbps connection at the time, but the upload speed was restricted to 1 Mbps because of 'reasons'. After arguing with their support for a while, I was complaining about it at work and one of the folks suggested I check out ACT Internet. I looked at their plans and was quite impressed with the offerings, so I switched over to ACT and was able to upload the container file quickly and painlessly.

Once the container was uploaded, I had to tackle the next problem in the process: how to update the files in the container without having to upload the entire container to the host again. I experimented with a few approaches and settled on the following solution:

1. Mount the remote directory locally using sshfs. I mounted it with the following command (please replace the username and hostname with the correct values before using):

/usr/sbin/runuser -l suramya -c "sshfs -o allow_other username@hostname.com:. /mnt/offsite/"

2. Once the remote directory was mounted locally, I was able to use the usual commands to open the encrypted container and mount it at another location:

/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/UserKey.dat
mount /dev/mapper/enc /mnt/stash/

In an earlier iteration of the setup I wasn't using a keyfile, so I had to manually enter the password every time I wanted to back up to the offsite location. This meant that the backup was done sporadically, as and when I remembered to run the command manually. A few days ago I finally configured it to run automatically after adding the keyfile as a decryption key. (Obviously the keyfile should be protected and not be accessible to others, because it allows anyone holding it to decrypt the data without entering a password.) Now the offsite backup runs once a week while the local backup runs daily, and I still back up the Backup partition to the external drive manually as and when I remember to do so.
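For the scheduling, crontab entries along these lines do the trick. The script names and timings below are hypothetical placeholders, not my actual setup:

```shell
# Hypothetical crontab entries: daily local backup, weekly offsite backup.
# min hour dom mon dow  command
30    2    *   *   *    /root/bin/local_backup.sh
0     3    *   *   0    /root/bin/offsite_backup.sh
```

The dow field of 0 runs the offsite job on Sundays only, while the local job runs every night.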

In all I was quite happy with my setup, but then while updating the encrypted container a network issue made me believe that my remote container had become corrupted (it wasn't, but I thought it was). At the same time I was fooling around with Microsoft OneDrive and saw that I had 1TB of storage available there since I was an Office 365 subscriber. This gave me the idea of backing up the container to OneDrive in addition to my web host.

I first tried copying the entire container to the drive and hit a limit because the file was too large. So I decided to split the file into 5GB parts and sync those to OneDrive using rclone. After installing rclone, I configured it to connect to OneDrive by issuing the following command and following the onscreen prompts:

rclone config

I then created a folder on OneDrive called container to store the split files and then tried uploading a test file using the command:

rclone copy $file OneDrive:container

Here OneDrive is the name of the remote I configured in the previous step. This was successful, so I just needed to create a script that did the following:

1. Update the Container file with the latest backup
2. Split the Container file into 5GB pieces using the following command:

split --verbose -d -b5GB /mnt/repository/Container/Enc_vol1.img /mnt/repository/Container/Enc_vol_

3. Upload the pieces to OneDrive:

# The glob expands in sorted order, so parsing ls output is unnecessary;
# appending (&>>) keeps the log for the whole run instead of only the last file.
for file in /mnt/repository/Container/Enc_vol_*; do
    echo "$file"
    /usr/bin/rclone copy "$file" OneDrive:container -v &>> /tmp/oneDriveSync.log
done

This command uploads the pieces to the drive one at a time and is a bit slow because it maxes out at an upload speed of ~2 Mbps. If you split the uploads and run the commands in parallel you get much faster speeds. Keep in mind that if you upload more than 10 files at a time you will start getting errors about too many open connections, and then you have to wait a few hours before you can upload again. It took a while to upload the chunks, but now my files are stored in yet another location and the system is configured to sync to OneDrive once a month.
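Before trusting the pieces to the cloud, it is worth checking that they reassemble into a bit-identical container. A small self-contained sketch of that check, using a 1 MB random file and 256 kB pieces so it runs quickly (the real run uses the container path and the 5GB chunk size shown above):

```shell
#!/bin/bash
# Sketch: split a file, reassemble the pieces in name order, and verify
# the result is bit-identical to the original using sha256sum.
set -e
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Stand-in for the encrypted container.
head -c 1048576 /dev/urandom > "$workdir/Enc_vol1.img"

# Same flags as the real run, just a smaller chunk size.
split --verbose -d -b 262144 "$workdir/Enc_vol1.img" "$workdir/Enc_vol_"

# -d gives numeric suffixes, so the shell glob expands in upload order.
cat "$workdir"/Enc_vol_* > "$workdir/reassembled.img"

orig=$(sha256sum "$workdir/Enc_vol1.img" | cut -d' ' -f1)
back=$(sha256sum "$workdir/reassembled.img" | cut -d' ' -f1)
[ "$orig" = "$back" ] && echo "checksums match"
```

Running the same checksum comparison after downloading the pieces back from OneDrive confirms the remote copy is intact too.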

So, as of now my files are backed up as follows:

  • /mnt/Backup: Local drive. All changes are backed up daily using rdiff-backup
  • /mnt/offsite: Encrypted container stored online. All changes are backed up weekly using rsync
  • OneDrive: Encrypted container stored on Microsoft OneDrive. All changes are backed up monthly using rsync
  • External Drive: Encrypted backup stored on an external hard drive using rsync. Changes are backed up manually, infrequently
  • Laptop: All important files are copied over to the laptop manually using Unison/rsync so that I can access my data while traveling

Finally, I am also considering backing up the snapshot data to Blu-ray disks, but that will take time so I haven't gotten around to it yet.

Since I have this elaborate backup procedure, I wasn't worried much when one of my disks died last week, and I was able to continue working without issues or worries about losing data. I still think I can enhance the backups I take, but for now I am good. If you are interested in my backup script, an extract of the code is listed below:

function check_failure ()
{
	if [ $? -eq 0 ]; then
		logger "INFO: $1 Succeeded"
	else
		logger "FATAL: Execution of $1 failed"
		wall "FATAL: Execution of $1 failed"
		exit 1
	fi
}

###
# Syncing to internal Backup Drive
###

function local_backup ()
{
	export BACKUP_ROOT=/mnt/Backup/Snapshots
	export PARENT_ROOT=/mnt/repository

	logger "INFO: Starting System Backup"

	rdiff-backup -v 5 /mnt/data/Documents/ $BACKUP_ROOT/Documents/
	check_failure "Backing up Documents"

	rdiff-backup -v 5 /mnt/repository/Documents/Jani/ $BACKUP_ROOT/Jani_Documents/
	check_failure "Backing up Jani Documents"

	rdiff-backup -v 5 $PARENT_ROOT/Programs/ $BACKUP_ROOT/Programs/
	check_failure "Backing up Programs"

	..
	..

	logger "INFO: All Backups Completed Successfully."
}

### 
# Syncing to Off-Site Backup location
###

function offsite_backup ()
{
	export PARENT_ROOT=/mnt/repository

	# First we mount the remote directory to local
	logger "INFO: Mounting External Drive"
	/usr/sbin/runuser -l suramya -c "sshfs -o allow_other username@remotehost:. /mnt/offsite/"
	check_failure "Mounting External Drive"

	# Open the Encrypted Partition
	logger "INFO: Opening Encrypted Partition."
	/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/keyfile1
	check_failure "Mounting Encrypted Partition Part 1"

	# Mount the device
	logger "INFO: Mounting the drive"
	mount /dev/mapper/enc /mnt/stash/
	check_failure "Mounting Encrypted Partition Part 2"

	logger "INFO: Starting System Backup"
	rsync -avz --delete  /mnt/data/Documents /mnt/stash/
	check_failure "Backing up Documents offsite"
	rsync -avz --delete /mnt/repository/Documents/Jani/ /mnt/stash/Jani_Documents/
	check_failure "Backing up Jani Documents offsite"
	..
	..
	..

	umount /mnt/stash/
	/usr/sbin/cryptsetup luksClose enc
	umount /mnt/offsite/

	logger "INFO: Offsite Backup Completed"
}
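The check_failure helper above is the glue that makes the script safe to run unattended: any failed step aborts the run instead of silently producing a partial backup. A standalone sketch of the pattern, with echo standing in for logger/wall so it runs without syslog access:

```shell
#!/bin/bash
# Sketch of the check_failure pattern: log success, abort on failure.
check_failure () {
	if [ $? -eq 0 ]; then
		echo "INFO: $1 Succeeded"
	else
		echo "FATAL: Execution of $1 failed"
		exit 1
	fi
}

true
check_failure "Backing up Documents"

# Run a failing step in a subshell so this sketch itself keeps going;
# in the real script the exit 1 stops the whole backup run.
out=$( (false; check_failure "Broken step"; echo "unreachable") )
echo "$out"
```

The "unreachable" echo never runs because the subshell exits as soon as the failing step is detected, which is exactly the behavior you want between backup stages.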

This is how I make sure my data is backed up. All of Jani's data is also backed up to my system using robocopy, as she is running Windows, and from there it gets backed up by the scripts explained above as usual. I also back up my website/blog/databases using a simple script. Let me know if you are interested and I will share that as well.

This is all for now. Let me know if you have any questions about the backup strategy or if you want to make fun of me. 🙂 Will write more later.

– Suramya

October 16, 2020

Response to a post that insists that you should ‘Focus on your Job not side projects’

Filed under: My Thoughts,Techie Stuff — Suramya @ 11:44 AM

I found this post while surfing the web. Its main point is to tell people that they should stop focusing on their side projects because recruiters will not be interested, and that what matters in getting a job is the name of your current company. The author also recommends dropping side projects and reading "Cracking the Coding Interview" instead to learn everything you need to know about algorithms and binary trees so that you get a job. There are so many things in the post that I disagree with that it was hard for me to figure out where to start.

Let me start off by saying that having a cool portfolio will not necessarily get you a job, as there is an element of luck involved. You do need to know how to crack an interview, so do read through Cracking the Coding Interview, How to Interview, etc. I will not go through a list of dos and don'ts for interviews here, as that is not the purpose of this post, but basically you need to show that you are competent in the skill set they are looking for and that you are not a difficult person to work with. (Basically, you need to leave your ego at home.) That being said, there are plenty of candidates in the market looking for a job, and you need something that will differentiate you from the rest of the crowd. That's where your side projects come in.

I am going to quote some of the more problematic portions of the post here and then respond, to make it easier for people to follow my reasoning. So let's dig in.

First, most recruiters don’t care about your personal projects or how many meetups you went during the year. What matters the most is your current company – and by that I mean the name of your current company. It was saddening me before, but now that I’m on the other side today, with a manager position, I better understand this. This is plain common sense. You can generally assume that a developer coming from a cutting-edge company has better chances to be a great developer than a developer coming from a Java 1.4 shop. He may not be smarter, but he has been hired by a company with a most demanding hiring process, and has been surrounded by some of the smartest developers.

I completely disagree with this. (I will be using "recruiters" to mean tech recruiters, who are basically headhunters for a firm, not the people who will be working with you.) Recruiters are not there to talk to you about your personal projects; they are there to assess your fit against the skill set the sourcing company is asking for. If you are a match for the skills, they will move you to the next level, where you interview with the hiring manager or go through a technical interview. If you are not a fit, it doesn't matter if you have a million side projects; they will not proceed with the interview. One way side projects do help in such a scenario is by letting you prove you have the skills in a particular domain even though you haven't worked in it in a professional capacity.

Coming to the second point, using the current company as a hiring criterion is one of the most idiotic things I can think of for screening people. I have worked at Goldman Sachs, Sprint & Societe Generale, and as everywhere, there were some employees in each company who made you think "How on earth did they get hired here?", and this is after a seriously demanding set of interviews to join the firm (I had 9 interviews for Goldman). Just because someone works at a company doesn't mean they are the best fit for your requirement. Secondly, no company is uniform, so it is guaranteed that there will be parts of the company working with the cutting edge while other teams are on antique systems. In one of my previous companies (not going to name them here 🙂 ) there was a team using Git & the latest software stack for building their releases and another team that used RCS and tooling around it to build their software.

Assuming that the entire company is on the same stack is a mistake, especially when talking about large companies. In small to medium companies this might not always be the case, but even there it is possible that there is a legacy system that hasn't been changed or upgraded and people are working on it. Forget the latest systems: a lot of the major banks still have mainframes running critical portions of their software, alongside other parts of the bank that use AI/ML for their projects.

Yes, there is a certain quality that is assumed when interviewing a person from a famous company, but it is not what I base my hiring on; you will be hired on your skills, not your past job experience. Basically, in my opinion your past jobs can get you in the door for the interview, but passing it is up to your skills & attitude. You should try to use side projects as a way to showcase your skills. For example, if you created a super cool way of doing X with a new technology, it will do more to showcase your skill than stating that you did coding from 9 to 5.

Worse, having too many personal projects can raise a flag and be scary for the recruiter.

I have never had this happen, and I was the guy with a ridiculous number of side projects throughout my career. Most of the skills I have come from trying out new technology at home, and since just reading a book doesn't make you proficient, I would end up using the tech for my next project, giving me experience of working with it. In fact, I have found my side projects to be a great benefit when interviewing, because most technical interviewers are techies themselves and it can be fun to discuss such projects with them. I remember one particular interview where I mentioned one of my side projects (an email-to-SMS bridge) and then actually spent about 20 minutes talking about its applications and how it could be improved. It played a big part in why I was hired for the role.

If a company is scared that you are working on stuff outside their work areas then I don’t think that it is a company that you would want to work with in any case. At least I wouldn’t want to work for such a company.

My CTO experience was an anomaly, at best two lost years, at worst a sign that I was too independent, too individualistic, not a good team player. Only relatively small and ambitious startups, like the one I’m in today, were valuing this experience.

Again I must disagree. When you work in a startup you learn a lot and get to explore areas outside of what you are officially supposed to be doing. This is a great benefit when working in the normal big companies because you now know how the other parts of the software/hardware stack work and can use that to identify issues before they become a problem.

However, one point I do want to stress is that if you started a company right out of college and became a CTO in it, it will not be given as much weightage as if you had done it after a bit of industry experience. I worked with a startup in my previous company where the entire team's combined work experience was less than mine, and it was quite apparent in how they worked. For example, they were very casual about releases: if they managed to finish an extra feature before the release, even though it wasn't tested, they would go ahead and release it without notifying us. But the drive they brought to the project was something else. I was blown away by their push to ensure that their software did everything we asked of it.

The best way to dig a new technology is to practice it in your daily job. You’ll spend seven hours a day on it and will quickly become infinitely more proficient than if you just barely used it on nights and weekends. You may tell me that we face a chicken or egg problem here. How to get a job where you’ll work on a really attractive technology if you never used it before? Well, instead of spending nights superficially learning this technology, spend your nights preparing interviews. Read “Cracking the code interview”, learn everything you need to know about algorithms and binary trees. As we all know, the interview process is broken. Instead of deploring it, take advantage of it.

Unless you are very lucky, you will hardly ever work on cutting-edge tech at your day job. Companies don't want to experiment with new, untested technologies for their production systems; they want something rock solid. If you are lucky, you will get a few hours a week to try out a new tech to evaluate it, and then wait a few months or years before they put it in production (depending on the company).

In summary, I would like to say that side projects can be a big benefit while searching for a job, but you also need to ensure you don't neglect the other parts of your profile, like communication skills, leadership skills, teamwork, etc. If you have a very strong skill set and you are using side projects to expand it, then you should be good for most companies.

Well this is all for now. Will write more later.

– Suramya

October 13, 2020

It is now possible to generate clean hydrogen by Microwaving plastic waste

Filed under: Emerging Tech,Interesting Sites,My Thoughts — Suramya @ 2:33 PM

Plastic is a modern hazard, and plastic pollution has a massive environmental impact. As of 2018, 380 million tonnes of plastic are being produced worldwide each year (source: Wikipedia). Since we all knew that plastic was bad, a lot of effort was put into getting people to recycle, and single-use plastics have been banned in a lot of places (in India they are banned as of 2019). However, as per a recent report by NPR, recycling doesn't keep plastic out of landfills, as it is not economically viable at a large scale. It is simply cheaper to bury the plastic than to clean and recycle it. Apparently this has been known for years now, but the big oil companies kept it quiet to protect their cash cow. So the hunt for what to do with the plastic continues, and thanks to recent breakthroughs there just might be light at the end of this tunnel.

Apparently plastic has a high density of Hydrogen in it (something that I wasn’t aware of) and it is possible to extract this hydrogen to use as fuel for a greener future. The existing methods involve heating the plastic to ~750°C to decompose it into syngas (mixture of hydrogen and carbon monoxide) which are then separated in a second step. Unfortunately this process is energy intensive and difficult to make commercially viable.

Peter Edwards and his team at the University of Oxford decided to tackle this problem and found that if you break the plastic into small pieces with a kitchen blender, mix it with a catalyst of iron oxide and aluminium oxide, and then microwave it at 1000 watts, almost 97 percent of the hydrogen in the plastic is released within seconds. The cherry on top is that the material left over after the process is almost exclusively carbon nanotubes, which can be used in other projects and have vast applications.

The ubiquitous challenge of plastic waste has led to the modern descriptor plastisphere to represent the human-made plastic environment and ecosystem. Here we report a straightforward rapid method for the catalytic deconstruction of various plastic feedstocks into hydrogen and high-value carbons. We use microwaves together with abundant and inexpensive iron-based catalysts as microwave susceptors to initiate the catalytic deconstruction process. The one-step process typically takes 30–90 s to transform a sample of mechanically pulverized commercial plastic into hydrogen and (predominantly) multiwalled carbon nanotubes. A high hydrogen yield of 55.6 mmol g⁻¹ of plastic is achieved, with over 97% of the theoretical mass of hydrogen being extracted from the deconstructed plastic. The approach is demonstrated on widely used, real-world plastic waste. This proof-of-concept advance highlights the potential of plastic waste itself as a valuable energy feedstock for the production of hydrogen and high-value carbon materials.

Their research was published in Nature Catalysis (DOI: 10.1038/s41929-020-00518-5) yesterday and is still in the early stages. But if this holds up in larger-scale testing, it will allow us to significantly reduce the plastic waste that ends up in landfills and at the bottom of the ocean.

Source: New Scientist: Microwaving plastic waste can generate clean hydrogen

– Suramya

September 23, 2020

Civvl is Uber for evicting people

Filed under: My Thoughts — Suramya @ 10:16 AM

In the latest attempt by bottom feeders to capitalize on the current pandemic, we have a company called 'Civvl' which calls itself the Uber of evictions. Basically, this is a company that is coming in and saying: I'm going to pay you money to go kick people out of their own houses. To rub more salt in the wound, they actually advertise that "this is a best time to get involved" and "it's the fastest growing money making gig due to the COVID-19 pandemic": "Literally thousands of process servers are needed in the coming months due courts being backed up in judgements that needs to be served to defendants." They use the standard language of gig work: flexible hours, be your own boss, etc.

But imagine how heartless you have to be to literally start a company that will make money by kicking people out of their houses, capitalizing on the fact that the current pandemic will increase the number of people who have to be evicted. When I initially read about the company I thought it was a dark joke, but unfortunately it appears to be real, and there are people signing up for it. I don't blame them, because you need to feed your family and you gotta do what you gotta do… but as for coming up with this whole idea, I don't have words to express myself right now. You need to be a special type of person to think of something like this and then implement it.

The company is based in the US and is live. I am not going to link to them because they don’t deserve any traffic and the owners need to seriously think about their life choices.

Source: Vice: Gig Economy Company Launches Uber, But for Evicting People

– Suramya

September 22, 2020

Thoughts on a AmItheAsshole forum post and the racism it implies

Filed under: My Thoughts — Suramya @ 10:15 AM

There is an interesting subreddit called r/AmItheAsshole (AITA) where people who are not sure whether they behaved correctly in a given scenario can post the details of their experience/behavior and ask the Internet for a ruling on whether they behaved correctly or not. The questions range from "AITA for not cutting my hair for my sister's wedding" to "AITA for burning my wife's ex-boyfriend's pics". I don't subscribe to the subreddit, but there are folks I follow on Twitter who do, and every once in a while they post links to specific threads which are usually way out there (the picture burning I referenced earlier is from one such post). Today I ran across a post where the poster was trying to justify behavior in which he accused his GF's Indian friend of cheating because she beat him at Scrabble, and he couldn't accept the defeat without assuming that she cheated. The whole post is below for reference:

So my (M23) GF (F23) has this Indian friend (F18) called “Priya”. Priya came to my English speaking country (relevant later) a year back to study. My girlfriend absolutely adores her and Priya soon became my GFs “best friend”. I’m doing English literature and she’s in science (also relevant).

Recently, she invited my GF and I to her place (fluid restrictions here), and had made a bunch of Indian food for us, got some wine. I ate well, the food was good and was having a good time. My GF had apparently bought Priya a scrabble set because Priya had mentioned she loved the game, so GF suggested we all played scrabble.

I was really excited because I knew I’d decimate them both easily. We play, and as the game progresses, it wasn’t me who was leading but Priya. She was making these huge words like “maladies” and “ostensibly”. I was pretty sure she was cheating.

She got up mid game to go to the bathroom and spent about 3 minutes there. I’m pretty sure she was googling words in there. So when she came out, I jokingly told her I knew she was cheating and she asked me what I was talking about.

I told her I know that she’s cheating, and that it’s impossible for someone who’s literally lived only in India all the time to be so good at Scrabble and to have such an extensive English vocabulary.

She didn’t say anything to defend herself but just laughed and told me she wasn’t cheating and we eventually finished the game and went home.

My GF however was extremely upset with me and told me I embarrassed her. When I told her I was being honest, and that there’s no way Priya could’ve beaten me without cheating, She told me I’m a racist and that she’s reconsidering her relationship with me.

So AITA?

The verdict of a majority of the commenters on the post was 'YTA' (You're The Asshole), and racist to boot. This is a problem I saw many times when I was in the US. I actually had a person tell me in my freshman year (1st year of college) that I couldn't possibly be from India because I spoke good English, and people from India can't speak English. My answer was that I had been studying in English for the major part of my life, and that most schools in India teach in English so that the kids are prepared to enter the professional world once they complete their education.

Another instance was when a professor decided that I must have copied my homework paper from somewhere, because it was too good to be written by an Indian kid in his freshman class. Luckily for me, we had to write another in-class essay on a given topic before the homework was graded and returned. When he graded my in-class paper he realized that I had written both of them, and he actually told me that he had initially planned to give me a D on the homework assignment because he thought I had cheated, but changed his mind after he saw my in-class paper.

There are many such examples, but they mostly have one common denominator: it is usually a white guy who is offended by the fact that a non-white person beat them at something they perceive as their forte. I loved the expressions on their faces when they realized that they were not the best. It was especially fun in college because, after the first year, I was writing for the college newspaper & had published articles in recognized magazines, so when they found out about my articles there was always a priceless expression.

Have you faced similar issues in your life?

– Suramya

September 20, 2020

It's Doctor Who's 57th Anniversary

Filed under: My Thoughts — Suramya @ 11:07 PM

57 years ago, on 19th Sept 1963, we first met the Doctor and the TARDIS. The adventures of the 'Madman in a Box' have kept generations of viewers entertained. I read my first Doctor Who book in 1991/1992 (I think) and have been a fan ever since. I spent a lot of money and effort finding the old Target releases and even managed to put together a collection of most of the available Doctor Who TV episodes. My personal favorites are the 10th Doctor, the 4th Doctor, the 13th Doctor & the 9th Doctor. The 11th Doctor was a bit too hyper for my tastes, and I never managed to like the 12th Doctor much. I really love the 13th Doctor as she has some great stories and amazing acting. However, the 10th is the clear winner, thanks to David Tennant, who set the bar so high with his amazing acting and portrayal of the Doctor that he replaced the 4th Doctor as my favorite right from the start of his regeneration.


The TARDIS in its first Television appearance

Doctor Who is one of the longest running shows on television, and there are a lot of amazing quotes that have come from it. Some of my favorite quotes from the show are below:

“Look at these people, these human beings. Consider their potential! From the day they arrive on the planet, blinking, step into the sun, there is more to see than can ever be seen, more to do than – no, hold on. Sorry, that’s The Lion King…” – The 10th Doctor

This is just a funny, completely irrelevant quote from the 10th Doctor, and I loved it. The cherry on top is that he says it while trying to stop the enslavement of humanity, which most other series would treat as a deadly serious moment.

“The very powerful and the very stupid have one thing in common. They don’t alter their views to fit the facts, they alter the facts to fit their views.” – The Fourth Doctor, The Face of Evil (1977)

This is so true. People always try to change the facts to suit their personal views and that never ends well…

“You know that in 900 years in time and space I’ve never met anyone who wasn’t important before.” – The 11th Doctor

Everyone is important and that’s the lesson the 11th Doctor is trying to reinforce for us.

“We’re all stories in the end. Just make it a good one, eh?” – The 11th Doctor

“People assume that time is a strict progression of cause to effect, but *actually* from a non-linear, non-subjective viewpoint – it’s more like a big ball of wibbly wobbly… time-y wimey… stuff.” – The 10th Doctor

There are a ton more awesome quotes, but I don’t have the space to duplicate them all here. You can check them out here & here & some more here.

Well this is all for now. Will write more later.

– Suramya

September 19, 2020

How to Toonify yourself

Filed under: Interesting Sites,My Thoughts — Suramya @ 10:57 AM

While surfing the web I came across ‘Toonify Yourself!‘, which allows you to upload a photo and see what you’d look like in an animated movie. It uses deep learning and is based on distilling a blended StyleGAN model into a pix2pixHD image-to-image translation network.
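The blending step is interesting enough to sketch. The core trick is "layer swapping": the coarse, low-resolution generator layers (which control face structure) are taken from one model and the fine, high-resolution layers (which control texture and detail) from the other. The snippet below is a minimal conceptual sketch, not the actual Toonify code: plain numbers stand in for weight tensors, and the `conv_<resolution>` layer names and the cutoff value are my own illustrative assumptions, not real StyleGAN parameter names.

```python
# Conceptual sketch of "layer swapping" between two StyleGAN-style generators.
# Assumption: layer names encode their output resolution, e.g. "conv_64".
# Real state dicts hold large tensors; floats stand in to keep this runnable.

def blend_models(face_weights, cartoon_weights, swap_below_resolution=32):
    """Return a blended state dict: layers operating below the cutoff
    resolution take the cartoon model's weights (coarse structure, e.g.
    big eyes), the rest keep the face model's weights (realistic detail)."""
    blended = {}
    for name, w_face in face_weights.items():
        resolution = int(name.split("_")[1])  # "conv_64" -> 64 (hypothetical naming)
        if resolution < swap_below_resolution:
            blended[name] = cartoon_weights[name]
        else:
            blended[name] = w_face
    return blended

# Toy "state dicts" for a face model and a cartoon-finetuned copy of it.
faces   = {"conv_4": 0.1, "conv_8": 0.2, "conv_32": 0.3, "conv_64": 0.4}
cartoon = {"conv_4": 0.9, "conv_8": 0.8, "conv_32": 0.7, "conv_64": 0.6}

blended = blend_models(faces, cartoon, swap_below_resolution=32)
print(blended)
```

In the real pipeline the blended generator is then distilled into a pix2pixHD network so that a new photo can be toonified in a single forward pass instead of an expensive latent-space optimization.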

It sounded interesting, so I tried it out with one of my pictures and got the following results:


Original image

Toonified Image

I quite like the result and am thinking of using it as my avatar going forward. What do you think?
Thanks to Hacker News for the link

– Suramya

September 18, 2020

Hackers – 25th Anniversary thoughts

Filed under: My Life,My Thoughts — Suramya @ 10:01 AM

15 September 2020 was the 25th anniversary of one of my favorite movies, which also happens to be one of the most iconic movies about hacking ever released: Hackers. I first saw the movie in late 1999, introduced to it by Jerome, who was my RA in college at the time, and it remains one of the most fun and phenomenal movies on hacking that I’ve seen.

Yes, the video depictions of hacking are corny, since there are no 3D file systems we have to navigate, and opening a file doesn’t produce a 3D psychedelic video with equations floating around. But the overall concept, and the whole mindset of what hacking actually means, is depicted very accurately. For example, a lot of hacking involves social engineering, and right at the beginning of the movie Dade/Crash Override social-engineers a security guard to get access to the computer systems of the TV network he is trying to take over. There are tons of quotes in the movie that capture the core of the hacker identity in the 90s. Some of my favorites are:

We make use of a service already existing without paying for what could be dirt-cheap if it wasn’t run by profiteering gluttons.

[- Razor & Blade, while demoing phone phreaking]

We explore… and you call us criminals. We seek after knowledge… and you call us criminals. We exist without skin color, without nationality, without religious bias… and you call us criminals.
You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it’s for our own good, yet we’re the criminals. Yes, I am a criminal. “My crime is that of curiosity.” I am a hacker, and this is my manifesto.

[From the Hackers Manifesto]

“Hackers of the world unite!”

“Hack the planet!”

These are all things that we grew up with, and they refer to the core hacker identity/mindset of the 90s. It actually surprised me to find that the original Hacker Manifesto is no longer easily available; I had to resort to the Internet Archive to pull up a copy. The movie gives you a good idea of what the original definition of hacking was: learning for the sake of learning and curiosity. It encouraged and inspired a whole generation of folks to go into computers & hacking. I remember one of the kids I watched the movie with in college saying, “watching this movie makes you want to go learn computers so you can do cool stuff like this instead of the boring ass programming crap we have been doing.”

I saw the news of the movie’s 25th anniversary on my Twitter feed two days ago, and since I was feeling nostalgic I watched the movie again last night. I’m happy to report that I still love it. Yes, there are parts that feel dated, e.g. everyone running around with floppy disks, CRT monitors and phone couplers, and it’s corny to see everyone on skateboards/rollerblades all the time. But overall the movie itself doesn’t feel dated; in fact, even the graphics have aged quite well.

The movie got a lot of flak on its release for being “dubious,” “unrealistic,” and “implausible.” A lot of the visualization was just plain silly (but visually awesome), like the super-cool-looking 3D file systems that the protagonists have to navigate, or the 3D visualization with floating equations that comes up when they try to view the ‘garbage file’.

Yes, the movie is not accurate, but what movie is? It is, at the end of the day, a fictional story meant to give people a great visual and audio extravaganza, not an accurate representation of the hacking process, because honestly speaking, watching people type for three hours would be one of the most boring movies I can think of. Interestingly, even with all that visual extravaganza, the movie was a commercial failure when it came out, and only over the years has it become a cult favorite. There are other movies like ‘The Net’ or ‘Sneakers’ that also covered hacking/hackers, but none of them had the lasting impact Hackers did.

Always remember: “This is our world now… the world of the electron and the switch, the beauty of the baud.”

#hacktheplanet

– Suramya

September 16, 2020

Potential signs of life found on Venus: Are we no longer alone in the universe?

Filed under: Interesting Sites,My Thoughts,News/Articles — Suramya @ 11:15 AM

If you have been watching the astronomy chatter the past two days, you would have seen the headlines screaming about the possibility of life being found on Venus. Other, less reputable sources are claiming that we have found definite proof of alien life. Both are inaccurate: even though we have found something that is easily explained by assuming the possibility of extra-terrestrial life, there are other potential explanations that could cause the anomaly. So what, you might ask, is this discovery that is causing people worldwide to start freaking out?

During analysis of spectrometer readings of Venus, scientists made a startling discovery high in its atmosphere: traces of phosphine (PH3) gas, where any phosphorus should be in oxidized forms, at a concentration (~20 parts per billion) that is hard to explain. It is unlikely that the gas is produced by abiotic production routes in Venus’s atmosphere, clouds, surface and subsurface, or from lightning, volcanic activity or meteoritic delivery (see the explanation below), hence the worldwide freak-out. Basically, the only way we know this gas could be produced in the quantity measured is if anaerobic life (microbial organisms that don’t require or use oxygen) is producing it on Venus. Obviously this doesn’t mean that there aren’t ways we haven’t thought of yet that could be generating the gas. But the discovery is causing a big stir and will push various space programs to refocus their efforts on Venus. India’s ISRO already has a mission planned to study the surface and atmosphere of Venus, called ‘Shukrayaan-1‘, set to launch in the late 2020s after the Mars Orbiter Mission 2, and you can be sure they will attempt to validate these findings when they get there.

The only way to conclusively prove life exists on Venus is to go there and collect samples containing extra-terrestrial microbes. Since it’s impossible to prove a negative, this is the only concrete proof we can trust; anything else will leave the door open for other potential explanations for the gas generation.

Here’s a link to the press briefing on the possible Venus biosignature announcement from @RoyalAstroSoc featuring comment from several of the scientists involved.

The recent candidate detection of ppb amounts of phosphine in the atmosphere of Venus is a highly unexpected discovery. Millimetre-waveband spectra of Venus from both the ALMA and JCMT telescopes at 266.9445 GHz show a PH3 absorption-line profile against the thermal background from deeper, hotter layers of the atmosphere, indicating ~20 ppb abundance. Uncertainties arise primarily from uncertainties in pressure-broadening coefficients and noise in the JCMT signal. Throughout this paper we will describe the predicted abundance as ~20 ppb unless otherwise stated. The thermal emission peaks at 56 km, with the FWHM spanning approximately 53 to 61 km (Greaves et al. 2020). Phosphine is therefore present above ~55 km: whether it is present below this altitude is not determined by these observations. The upper limit on phosphine occurrence is not defined by the observations, but is set by the half-life of phosphine at <80 km, as discussed below.

Phosphine is a reduced, reactive gaseous phosphorus species, which is not expected to be present in the oxidized, hydrogen-poor Venusian atmosphere, surface, or interior. Phosphine is detected in the atmospheres of three other solar system planets: Jupiter, Saturn, and Earth. Phosphine is present in the giant planet atmospheres of Jupiter and Saturn, as identified by ground-based telescope observations at submillimeter and infrared wavelengths (Bregman et al. 1975; Larson et al. 1977; Tarrago et al. 1992; Weisstein and Serabyn 1996). In giant planets, PH3 is expected to contain the entirety of the atmospheres’ phosphorus in the deep atmosphere layers (Visscher et al. 2006), where the pressure, temperature and the concentration of H2 are sufficiently high for PH3 formation to be thermodynamically favored. In the upper atmosphere, phosphine is present at concentrations several orders of magnitude higher than predicted by thermodynamic equilibrium (Fletcher et al. 2009). Phosphine in the upper layers is dredged up by convection after its formation deeper in the atmosphere, at depths greater than 600 km (Noll and Marley 1997).

An analogous process of forming phosphine under high H2 pressure and high temperature followed by dredge-up to the observable atmosphere cannot happen on worlds like Venus or Earth for two reasons. First, hydrogen is a trace species in rocky planet atmospheres, so the formation of phosphine is not favored as it is in the deep atmospheres of the H2-dominated giant planets. On Earth H2 reaches 0.55 ppm levels (Novelli et al. 1999), on Venus it is much lower at ~4 ppb (Gruchola et al. 2019; Krasnopolsky 2010). Second, rocky planet atmospheres do not extend to a depth where, even if their atmosphere were composed primarily of hydrogen, phosphine formation would be favored (the possibility that phosphine can be formed below the surface and then being erupted out of volcanoes is addressed separately in Section 3.2.2 and Section 3.2.3, but is also highly unlikely).

Despite such unfavorable conditions for phosphine production, Earth is known to have PH3 in its atmosphere at ppq to ppt levels (see e.g. (Gassmann et al. 1996; Glindemann et al. 2003; Pasek et al. 2014) and reviewed in (Sousa-Silva et al. 2020)). PH3’s persistence in the Earth’s atmosphere is a result of the presence of microbial life on the Earth’s surface (as discussed in Section 1.1.2 below), and of human industrial activity. Neither the deep formation of phosphine and subsequent dredging to the surface nor its biological synthesis has hitherto been considered a plausible process to occur on Venus.

More details of the finding are explained in the following two papers published by the scientists:

Whatever the reason for the gas may be, it’s a great finding, as it has re-energized the search for extra-terrestrial life, and as we all know: “The Truth is out there…”.

– Suramya

September 15, 2020

Neuroscience is starting to figure out why people feel lonely

Filed under: Interesting Sites,My Thoughts — Suramya @ 10:10 PM

Loneliness is a social epidemic that has been amplified by the current pandemic, as humans have an inbuilt desire to be social and interact with each other. The lockdowns and isolation due to Covid-19 are not helping things in this sense. The number of cases of clinical depression is going up worldwide, and psychologists everywhere are concerned about the impact of this in the near future.

Humans have been talking about social isolation and loneliness for centuries, but to date we haven’t really analyzed it from a neurological point of view: what actually happens in the brain when we are lonely? Does the desire for companionship light up a section of our brain similar to what happens when we are hungry and craving food? Until recently there wasn’t much research on the topic; in fact, until Kay Tye decided to research the neuroscience of loneliness in 2016, there were no published papers that talked about loneliness and contained references to ‘cells’, ‘neurons’, or ‘brain’. So, while working at the Stanford University lab of Karl Deisseroth, Tye decided to spend some time trying to isolate the neurons in rodent brains responsible for the need for social interaction. In addition to identifying the region in rodents, she has also managed to manipulate the need by directly stimulating the neurons, which is a fantastic breakthrough.

Deisseroth had pioneered optogenetics, a technique in which genetically engineered, light-sensitive proteins are implanted into brain cells; researchers can then turn individual neurons on or off simply by shining lights on them through fiber-optic cables. Though the technique is far too invasive to use in people—as well as an injection into the brain to deliver the proteins, it requires threading the fiber-optic cable through the skull and directly into the brain—it allows researchers to tweak neurons in live, freely moving rodents and then observe their behavior.

Tye began using optogenetics in rodents to trace the neural circuits involved in emotion, motivation, and social behaviors. She found that by activating a neuron and then identifying the other parts of the brain that responded to the signal the neuron gave out, she could trace the discrete circuits of cells that work together to perform specific functions. Tye meticulously traced the connections out of the amygdala, an almond-shaped set of neurons thought to be the seat of fear and anxiety both in rodents and in humans.

One of the first things Tye and Matthews noticed was that when they stimulated these neurons, the animals were more likely to seek social interaction with other mice. In a later experiment, they showed that animals, when given the choice, actively avoided areas of their cages that, when entered, triggered the activation of the neurons. This suggested that their quest for social interaction was driven more by a desire to avoid pain than to generate pleasure—an experience that mimicked the “aversive” experience of loneliness.

In a follow-up experiment, the researchers put some of the mice in solitary confinement for 24 hours and then reintroduced them to social groups. As one would expect, the animals sought out and spent an unusual amount of time interacting with other animals, as if they’d been “lonely.” Then Tye and Matthews isolated the same mice again, this time using optogenetics to silence the dorsal raphe nucleus (DRN) neurons after the period in solitary. This time, the animals lost the desire for social contact. It was as if the social isolation had not been registered in their brains.

Since the experiment worked on mice, the next step was to replicate it with humans. Unfortunately, the researchers couldn’t use the same method to study human behavior, as no sane person would opt to have a fiber-optic cable wired through their head just to participate in a study. So they fell back to the more imprecise method of using fMRI scans of the volunteers’ brains, and Tye was able to identify a voxel (a discrete population of several thousand neurons) that responds to the desire for something, like food or company. In fact, they even managed to separate the two areas responsible for desiring food and desiring company.

This is a fantastic first step, because we have managed to identify the first part of the circuit that makes us social animals. Obviously a lot more study is needed before this has practical applications, but we have taken the first steps towards the goal. It’s not hard to imagine a future where we can help suicidal people by stimulating the area of the brain that enables them to extract joy from social connections, or suppress the same in people who have to spend long durations alone, for example astronauts on interplanetary voyages or deep-sea researchers. The possibilities are endless.

Source: Why do you feel lonely? Neuroscience is starting to find answers.

– Suramya
