Suramya's Blog : Welcome to my crazy life…

June 20, 2012

I am now a proud owner of a RaspberryPi

Filed under: Computer Hardware,My Life,Tech Related — Suramya @ 11:54 PM

After waiting almost six months from when it launched, and a month after I placed my order, I am now a proud owner of a RaspberryPi 🙂 For those of you wondering what on earth I am talking about, it's a computer the size of a phone (see the pic below comparing it with my old Nokia N95) that costs $35 and is powerful enough to play Quake 3. It's amazing how small this thing is and how many features they have managed to cram onto the board. It was delivered yesterday, and I was a bit upset at the customs duty I had to pay on the device (about 50% of the cost of the device plus shipping), but it still turns out to be a lot cheaper than any other contender.

I was really excited to work on it, but when I got home and started to set it up I found that my SD card reader/writer was no longer functioning 🙁 After a few hours of trying, and turning the house upside-down for the other card reader that I know I have but just couldn't find, I finally gave up and messaged Krishna at 12:30am asking him to bring an SD card reader to the office the next day (which he did, thanks!). So I had to wait a day. Once I got home with the reader, I downloaded the Debian image to my computer and wrote it to the card, powered the system with my old BlackBerry charger, plugged in an Ethernet cable, and connected the Pi to my second monitor over HDMI (actually an HDMI-to-DVI cable, if you want to be picky). That's when I hit a snag: it turns out I don't have a single USB keyboard at home; all my keyboards are PS/2. 🙁 So now I either need to borrow a USB keyboard or go buy a small one. In any case, I powered the Pi up to see if it works, and it booted fine.
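For anyone doing the same, writing the image boils down to a single dd. The snippet below demonstrates the process against scratch files so it is safe to run as-is; for a real card, point CARD at the reader's device node (check dmesg after plugging it in, because writing to the wrong device wipes that disk):

```shell
# Stand-ins so this sketch is harmless to run; for real use set
# IMAGE=<the extracted Debian .img> and CARD=/dev/sdX (your card reader).
IMAGE=$(mktemp)
CARD=$(mktemp)
head -c 1048576 /dev/urandom > "$IMAGE"   # fake 1 MB "image" for the demo

# the actual write (put sudo in front for a real /dev/sdX)
dd if="$IMAGE" of="$CARD" bs=4M conv=fsync 2>/dev/null
sync

# verify by comparing checksums of the image and what landed on the card
a=$(sha256sum < "$IMAGE")
b=$(sha256sum < "$CARD")
[ "$a" = "$b" ] && echo "write verified"
```

The checksum comparison at the end is cheap insurance against a flaky reader or a worn-out card.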

The first boot took about a minute, but after that the system gets to the login prompt in about 15 seconds, which is pretty cool. I can reduce the boot time further by disabling services that I know I won't use (like NFS). Unfortunately SSH wasn't enabled on the box, so without a keyboard or a remote connection I couldn't really do anything more at this time, but I am full of ideas for this device.
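On the Debian image these are plain sysvinit services, so trimming the boot and turning SSH on should just be the standard update-rc.d commands (run as root on the Pi; the service names below are my guesses at the usual suspects, check what is actually installed first):

```shell
# See which services are installed before disabling anything
ls /etc/init.d/

# Disable services I won't use (names are examples, adjust to taste)
update-rc.d nfs-common disable
update-rc.d rpcbind disable

# Enable the SSH server so the next boot is reachable headless
update-rc.d ssh enable
service ssh start
```

Once SSH is up, the missing USB keyboard stops mattering for day-to-day tinkering.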

Below are some pics of the Pi in action:


Comparison shot of the RaspberryPi next to a Nokia N95


The RaspberryPi hooked up and ready for action


Initial Boot Sequence of RaspberryPi

I wanted to take a comparison shot of the Pi next to my Galaxy Nexus, but I was using the Nexus to take the photos (pulling out the camera, taking a picture, removing the card, and then uploading the pictures felt like too much work compared to taking the photo and FTPing it to the computer).

Well, that's all for now: a bit sad about the keyboard, but still excited. Keep an eye here for more on the Pi and my experiments with it.

– Suramya

June 13, 2012

Should you comment your code?

Filed under: My Thoughts,Tech Related — Suramya @ 11:12 PM

Had a really interesting discussion at work today about putting comments in source code. A while ago someone I know told people not to put comments in any code they create, and we were arguing the pros and cons of this. I personally think that good comments are quite useful and should always be added to code, but there is another school of thought that says code should be written well enough that comments are not required. When I started coding, I was told to always put good comments in the code, because they help the person reviewing or debugging your code when you are not around to understand what your logic or thought process was when you created that wonderful collection of code. Assuming your code is good enough to not require any comments is very egotistical. What is perfectly clear and logical to you, because you have been working on a system for a couple of years, will not be clear to someone who is new to it. Put another way: code tells you how something is done, comments tell you why.

I have had the joy of maintaining legacy code written in a mix of VB, VB.NET and JavaScript with zero comments in the code. Believe me, it was not fun. The best part: when I talked to the guy who had written the code, he could understand it perfectly without needing comments.

I am not talking about putting comments like "Here we are checking if the value of Var is more than 5" for code that reads if ($Var > 5). I am talking about taking a few lines to explain code like the following (an example from one of my scripts that creates collages):

	# Repeating the cropping process to get the other half of the image.
	# This reduces the possibility of half-empty collages.
	for ($i = 5; $i > 0; $i--)
	{
		opendir(DIR, ".");
		while ($file = readdir(DIR))
		{
			# ... cropping/collage logic elided in this excerpt ...
		}
		closedir(DIR);
	}

I could figure out what this 30-line blob of code was doing by walking through it for 5-10 minutes, or I could read the comment and understand the logic in 30 seconds. If needed, I can then look at the code more closely, but if I just want to understand how the code works at a high level, comments help a lot.

Now let's look at it from the other perspective. Comments take up space in the file, and if not well written they just take up space; worse, if they are not updated when the code changes, they can give the reader incorrect information. The idea is that any documentation on the code can be auto-generated by documentation tools if required. However, if my developers are not trustworthy/reliable enough to update comments in code, then I am pretty sure they can't be trusted to follow the format required by the documentation-generation software either.

The one point that made sense to me was that at times people put information in code comments that should not be public. For example, a developer might comment out a section of JSP code containing the DB connection info for the dev servers; that information is then visible to anyone who views the generated HTML. The same goes for other notes/comments that probably should not be openly accessible to everyone who searches.

One last point before I end the post. Code should be clean and readable; you shouldn't rely on comments to cover for bad coding practices. But I don't want you to put a comment on every line of code either, only useful ones. Comments that help a future coder understand why you did something in a certain way are great.

What do you think? Code comments are good? Bad? You don’t really care?

Additional articles/posts that discuss this:

* Coding horror
* Successful Strategies For Commenting Code
* Don’t comment your code

This is all for now. Will post more later.

– Suramya

April 11, 2012

Doctor Who RPG Video

Filed under: Humor,Tech Related — Suramya @ 11:52 PM

Normally I don’t post links to videos on the blog, but after watching this one I just had to share. It’s a must-watch for all Doctor Who fans, but it has a lot of spoilers, so don’t watch it if you haven’t seen the last season of Doctor Who. The video shows the last season as an 8-bit RPG and is awesome. Plus it has Fish and Custard 🙂 I wish this was a real RPG.

Click on the image below to watch the video:

BTW, Shipra: is 11:50pm better for this stuff than 6am? 🙂

Thanks to Nerdapproved.com for the link.

– Suramya

March 13, 2012

Running Ubuntu on my Samsung Galaxy Tablet

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 1:25 AM

A couple of days ago someone at work posted an email asking about Ubuntu Installer and whether anyone had tried installing Ubuntu on their Android tablet using it (sorry, I can't remember your name, but I'm too lazy to log on to the office network to find it in my mailbox). I volunteered to try it out because it sounded interesting and cool to have. This application installs Ubuntu on your Android device on top of Android, so you basically run Ubuntu as a virtual machine inside Android in a fairly easy and painless way. From the FAQ on the developer's website:

This projects aim is to bring a range of linux distros to your android device through a method known as ‘chroot’, see it has running a linux distro within a virtual machine on your phone. You can access this virtual machine and run it on your phone without causing any damage to your device, or having to overwrite anything. Why might you want this? well my apps are designed to make the install and set up process as easy as possible (more so in the paid apps) while still giving you some flexibility. Once you have the distro up and running then you can pretty much run and install any linux software you like (so long as there is a arm port or it is not architecturaly depenedent), sure there a very few big benefits over what android itself can do but it is still pretty dam cool. (and with the free ubuntu version, hey its free does it matter how useful you find it?)

Excluding the time it took to download the image, the entire process took me about 15 minutes, which shows how easy and painless it was. The app install takes only a couple of minutes, and what it installs is an application with a user guide that explains the install process, plus buttons to make the process faster/easier. The process is pretty self-explanatory, and if you can read and understand English and are good at following directions, you should have no issues installing. 🙂 I would really suggest downloading the images over WiFi, because downloading 1.5GB over 3G/GPRS would be too expensive.

There are two Ubuntu images available: small (450 MB) and large (1.5 GB). The large image requires 3.5GB of free space on your SD card, and the small image requires 2.5GB. If you have enough free space, you should go for the larger image, as it has the GNOME desktop plus a ton of applications pre-installed; the small one doesn't have many applications preinstalled and uses LXDE instead of GNOME. I first downloaded the small image to try it out, and liked it enough to download the large one.

The install itself was mostly painless. I did hit an issue initially when unzipping the small image file (the download is a zip file) with AndroZip, as it was unable to handle the large file size and kept dying on me. Using Astro's built-in extractor resolved this, and I was then able to extract the file without issues. Learning from this, I downloaded the large image on my computer and copied it over after extracting it there (plus I didn't have enough space to download and then extract the file on the tablet itself).

After you download the images, the instructions tell you to install the VNCViewer app and the Terminal app. It wasn't clear from the instructions that VNC was required to see the actual desktop, so I didn't install it at first, thinking it was for connecting to your laptop/desktop from the tablet. I later realized it was needed to see the GNOME desktop, so I installed it and was good to go.

The process to start Ubuntu is a bit cumbersome and requires you to type a few commands in the Terminal, but once you start the image and select the desktop size, all you have to do is connect to the Ubuntu desktop using VNC. The desktop is fairly responsive, and you can zoom in for more fine-grained control. The large image installs the GNOME desktop rather than Unity, but it was still pretty good. I would have liked the option to try Unity, as it was designed for touch interfaces, but I guess that is a project for another day.
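For the curious, what those Terminal commands do under the hood is the standard Android chroot recipe. The sketch below is my reconstruction, not the app's actual script (the paths and VNC geometry are illustrative). It defaults to a dry run that only prints the commands; set DRYRUN=0 on a rooted device to execute them for real:

```shell
# Dry-run harness: print each command instead of executing it.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

IMG=/sdcard/ubuntu/ubuntu.img   # assumed path to the extracted image
MNT=/data/local/mnt             # assumed mount point

run mkdir -p "$MNT"
run mount -o loop "$IMG" "$MNT"             # loop-mount the disk image
for fs in dev proc sys; do                  # expose Android's pseudo-filesystems
    run mount -o bind "/$fs" "$MNT/$fs"
done
# start a VNC server inside the chroot; the VNCViewer app then
# connects to localhost to show the desktop
run chroot "$MNT" vncserver :0 -geometry 1280x800
```

This is also why nothing on the device gets overwritten: Ubuntu lives entirely inside one image file and a handful of bind mounts.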

Below is a photo of the tablet with the GNOME desktop running on it (taking a photo of a tablet at night is a pain, because the flash reflects off the screen and without the flash the photo is too dark):

Ubuntu on Samsung Galaxy Tab

Overall a cool and geeky thing to have, but not really that useful in the long run, unless you have programs you want to be able to work with when you are away from your desktop and don't have net access.

– Suramya

March 12, 2012

Voice Recognition on Samsung Galaxy Nexus using Swype

Filed under: My Thoughts,Tech Related — Suramya @ 11:59 PM

I like the voice recognition built into the Swype keyboard a lot and have been using it quite often to type out SMSes when I have net access and am feeling lazy (the service requires net access to work). For the most part the recognition works quite well, and it seems to learn the more you use it. So I decided to stress-test it by using it to dictate a complete blog post… Unfortunately that didn't work out so well. The first paragraph came out OK, but for the next few paragraphs it got 90% of the stuff wrong, instead of 90% right like I was expecting. I finally gave up typing the blog on the phone because it was taking too long.

It would have been cool if it worked the way I expected, so that I could make notes/reminders for myself while driving, which is when I get a lot of my project ideas. At present, if I used this to make notes I would spend the next 10 hours trying to figure out what the hell I was thinking, so for now I will stick to trying to remember stuff and then transcribing it at red lights and traffic jams.

But for just typing SMSes it works great and is a lot faster than typing stuff out by hand.

Final verdict? Cool implementation but requires a lot more work before it can be used as a keyboard replacement.

– Suramya

March 3, 2012

Configuring Dual monitors in Debian

Filed under: Knowledgebase,Linux/Unix Related,Tech Related,Tutorials — Suramya @ 12:01 AM

[Update 8th Aug 2012: This is an older method of setting up the monitors, kept for historic reasons. In the newer version of KDE the process is a lot simpler; please refer to this post for the updated steps – Suramya.]

Recently I went ahead and bought two new Dell 20″ monitors for my home system, as I had gotten used to working with two monitors at work and wanted the same experience at home. The problems started because I initially tried installing a second graphics card and hooking up the second monitor to that card over VGA. For some reason, maybe because I was too tired and wasn't thinking clearly, I couldn't get both cards to work at the same time; I would get one or the other but not both. To make things even more fun, the monitors have a 16:9 aspect ratio, and with the open-source driver the only resolution I could get at that aspect ratio was 1600×900, which was too small, and the fonts looked kind of jagged at that resolution.

Since I was going to be out of town and was planning to switch to DVI cables anyway, I left the system like that (after spending a bit of time experimenting) and left. Once I got back I ordered DVI cables and finally managed to get the dual-monitor setup working after spending about an hour on the issue. Below is the sequence I followed (documenting this so that if I ever have to do it again I have a record of what I did):

  • Removed the second video card to reduce complexity. Might add it back later if required, or if I want to hook my old monitor as a third display.
  • Connected both monitors to the onboard ATI Radeon HD 4250 card, one over DVI and the second using VGA
  • Removed the Proprietary ATI and nVidia drivers (both installed in my previous attempts to get this working). Instructions here
  • Restarted X
  • Installed Catalyst (a.k.a fglrx) a proprietary “blob” (closed source binary) driver, using the following command:
  • apt-get install fglrx-atieventsd fglrx-control  fglrx-driver fglrx-glx fglrx-modules-dkms glx-alternative-fglrx libfglrx libgl1-fglrx-glx libxvbaw

Once the driver was installed I restarted X once again and got both monitors working, but the second monitor’s display was a clone of the first one which is not what I wanted so I had to do some more digging and finally managed to fix that using the following steps:

  • Open a terminal/Command Prompt
  • Disable access control so that clients can connect from any host by issuing the following command as a regular user
  • xhost +

    This is required so that we can start a GUI program from a root shell. (Note that xhost + disables access control entirely, so undo it with xhost - once you are done.) If you don’t do this, you will get an error similar to the following in the next step:

    No protocol specified
    No protocol specified
    amdcccle: cannot connect to X server :0
  • Run the ‘ATI Catalyst Control Center’ as root
  • sudo amdcccle
  • Click on ‘Display Manager’ and configure your monitors (Resolution, location etc)
  • Click on ‘Display Options’ -> ‘Xinerama’ and enable ‘Xinerama’
  • There is a bug in the display manager that prevents it from saving changes if an xorg.conf file already exists. To fix it:

  • Run the following command as root:
  • mv /etc/X11/xorg.conf /etc/X11/xorg.conf_original
  • Click ‘Apply’ in the Catalyst Control Center
  • Restart X

That’s it. Once I did all that, my dual-monitor setup started working without issues. Well… mostly. For some reason my desktop effects have stopped working (transparent/translucent windows etc.), but I am not going to worry about that for now. That’s a battle for another day, maybe over the weekend.

Please note that setting up dual monitors is usually not this complicated on Linux. When I hooked up my TV to this same system I didn't have to make any changes to get it to work. In this case, since I had been fiddling around, I had to first fix the mess I made before I could get things working properly.

For those of you who are interested, the final xorg.conf that the above steps created is listed below:

Section "ServerLayout"
        Identifier     "amdcccle Layout"
        Screen      0  "amdcccle-Screen[1]-0" 0 0
        Screen         "amdcccle-Screen[1]-1" 1440 0
EndSection

Section "ServerFlags"
        Option      "Xinerama" "on"
EndSection

Section "Monitor"
        Identifier   "0-CRT1"
        Option      "VendorName" "ATI Proprietary Driver"
        Option      "ModelName" "Generic Autodetecting Monitor"
        Option      "DPMS" "true"
        Option      "PreferredMode" "1440x900"
        Option      "TargetRefresh" "60"
        Option      "Position" "0 0"
        Option      "Rotate" "normal"
        Option      "Disable" "false"
EndSection

Section "Monitor"
        Identifier   "0-DFP1"
        Option      "VendorName" "ATI Proprietary Driver"
        Option      "ModelName" "Generic Autodetecting Monitor"
        Option      "DPMS" "true"
        Option      "PreferredMode" "1440x900"
        Option      "TargetRefresh" "60"
        Option      "Position" "0 0"
        Option      "Rotate" "normal"
        Option      "Disable" "false"
EndSection

Section "Device"
        Identifier  "amdcccle-Device[1]-0"
        Driver      "fglrx"
        Option      "Monitor-DFP1" "0-DFP1"
        BusID       "PCI:1:5:0"
EndSection

Section "Device"
        Identifier  "amdcccle-Device[1]-1"
        Driver      "fglrx"
        Option      "Monitor-CRT1" "0-CRT1"
        BusID       "PCI:1:5:0"
        Screen      1
EndSection

Section "Screen"
        Identifier "amdcccle-Screen[1]-0"
        Device     "amdcccle-Device[1]-0"
        DefaultDepth     24
        SubSection "Display"
                Viewport   0 0
                Depth     24
        EndSubSection
EndSection

Section "Screen"
        Identifier "amdcccle-Screen[1]-1"
        Device     "amdcccle-Device[1]-1"
        DefaultDepth     24
        SubSection "Display"
                Viewport   0 0
                Depth     24
        EndSubSection
EndSection

Hope all this made sense and helps someone. If not feel free to ask questions.

– Suramya

February 12, 2012

Google Wallet PIN cracked on Android devices

Filed under: Computer Related,Computer Security,My Thoughts,Tech Related — Suramya @ 8:53 PM

The past few days there has been a lot of press about the fact that the Google Wallet PIN was cracked on rooted Android phones. Lots of people, including programmers and technologists (who frankly should know better), have reacted by posting messages/comments to the effect of "rooting is bad", "rooting causes security holes", etc. I guess they have forgotten the simple rule of computer security: "physical access is total access". Basically, if I have physical access to a device, I can get full access to it eventually.

This was demonstrated quite nicely by the news that you don't really need to root your phone to get the PIN hacked; all you need to do is reset the application data.

The problem in both cases is that Google Wallet's PIN is stored locally on the phone instead of online. If you can get access to it, you can brute-force it; and if you clear the app data, it removes the PIN and lets you choose another.
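To see why local storage is the problem, here is a toy sketch (this is not Wallet's actual scheme; the salt and hashing details are made up for illustration). A 4-digit PIN has only 10,000 possibilities, so once an attacker can read the stored hash, recovering the PIN takes seconds:

```shell
# Toy demo: "store" a salted hash of a 4-digit PIN, then brute-force it.
secret_pin="0137"                       # pretend this is the user's PIN
salt="cafef00d"                         # hypothetical salt the attacker can also read
target=$(printf '%s%s' "$salt" "$secret_pin" | sha256sum | cut -d' ' -f1)

# The attack: try every candidate from 0000 to 9999 until the hash matches.
found=""
for pin in $(seq -w 0 9999); do
    h=$(printf '%s%s' "$salt" "$pin" | sha256sum | cut -d' ' -f1)
    if [ "$h" = "$target" ]; then found=$pin; break; fi
done
echo "recovered PIN: $found"            # prints: recovered PIN: 0137
```

The salt doesn't help here because the attacker reads it along with the hash; the tiny keyspace is the real weakness.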

One way of fixing the second issue would be to force the phone to sync the PIN with the secure server online after the local cache is cleared, instead of just letting the user choose a new one. The first case is a lot harder to fix: you can't have a wallet that requires the phone to be connected to the web every time you use it, and if you store the PIN locally you are just asking for trouble.

Another option would be for the receiving side to validate the PIN, sort of like how we do it for credit cards, but that doesn't seem too feasible either. Or we could salt the PIN with the user's account info, or do dual encryption: the first layer requires the PIN to unlock, the second requires the account password.

Now, if I can come up with solutions like these, I am sure the people at Google and the various banks working on this issue will come up with other, more secure options. It's not the end of the world. Yet. This is a new technology, and like all new tech it has its teething issues; I am looking forward to the final, fixed product.

– Suramya

February 11, 2012

Dr Who and Star Trek now to have an official Crossover

Filed under: Interesting Sites,Tech Related — Suramya @ 10:38 PM

Dr Who and Star Trek are my two favorite shows, and even though there have been numerous fan-fiction titles where the two cross over, there was no official crossover till now…. 🙂 In May, IDW is going to publish a Doctor Who/Star Trek: The Next Generation crossover series. I am definitely going to buy the books when they come out. I have read the Star Trek crossovers with X-Men and a couple of others, but I believe this one is going to be the best.

Make it… geronimo!

Bleeding Cool has squirrelled out news of an upcoming crossover that might send certain minds reeling. That in May, IDW are to publish a Doctor Who/Star Trek: The Next Generation crossover series. Featuring The Doctor, Rory, Amy, Captain Picard, Worf, Data, Geordi LaForge, Deanna Troi, Will Riker and the rest. And that this art, featuring the Doctor, Rory and Amy on the bridge of the Enterprise is a cover that will be used in the series.

Doctor Who has never engaged in any such officially sanctioned crossover outside of the Doctor Who universe before. The closest was Dimensions In Time, a much derided charity telethon show which featured characters from the BBC soap opera Eastenders. Then there was Death’s Head who kinda popped in and out. Star Trek has also seen comic book crossovers with X-Men and the Legion Of Superheroes. But this is the first time that two such major competing TV sci-fi franchises have been allowed to merge in any way before.

Source: Bleedingcool.com

Maybe there is hope for an official Star Trek & Star Wars crossover?

– Suramya

February 9, 2012

Biocomputer can retrieve images from DNA storage

Filed under: Interesting Sites,My Thoughts,News/Articles,Tech Related — Suramya @ 4:06 PM

Practical biocomputers took a step closer to reality thanks to work by Sivan Shoshani, Dr. Ron Piran, Prof. Yoav Arava & Prof. Ehud Keinan. They have managed to create a biomolecular computer capable of decoding images stored in DNA. Biocomputers are something I find really interesting, and I try to keep an eye out for any new developments in the field. Even though this doesn't sound like a big deal, it's a huge step forward: till now we could only store a very limited amount of data in biocomputers (a couple of 0s and 1s), but now that we can store an image, we are closer to being able to store more complex data. And the best part is that since this doesn't require an interface, it can work directly with organic flesh.

A biomolecular computer made in a test tube has proved capable of decoding images stored in DNA. The computer, built by scientists from The Scripps Research Institute and Technion–Israel Institute of Technology have created a mixture of DNA molecules, enzymes, and ATP (the substance that provides energy for our own cells) that successfully decrypts information from a DNA chip, in this case the images shown above. The images were first encrypted onto the chip, and then decrypted by the computer and stained in a way that displays only particular sequences. This means that several images can be overlapped on the same chip, then recovered separately by looking for separate genetic sequences.

The boffins have published their paper in Angewandte Chemie, a German journal of chemistry. I tried to read the paper, but unfortunately it's behind a paywall, and while I am curious about the topic, I am not curious enough to pay for access.

Thanks to The Verge for the initial story.

– Suramya

February 3, 2012

Star Trek Catan: It exists

Filed under: Tech Related — Suramya @ 11:45 AM

Wow! Just wow. I didn't think Settlers of Catan could get better, but there is a version of the game set in the Star Trek universe… Vinit, Surabhi, Pompy… are you guys going to buy this?

Official Description:

In Star Trek Catan, players start the game with two small space stations at the intersection of three planets, with each planet supplying resources based on the result of a dice roll. Players collect and trade these resources – Dilithium, Tritanium, food, oxygen and water – in order to build spaceships that connect regions in the galaxy, establish small and large space stations at new intersection points in order to increase resource acquisition, and acquire development cards that provide victory points (VPs) or special abilities.

On a dice roll of 7, a Klingon ship swoops in to prevent resource production on one planet, while taxing spacegoers who hold too many resources.

The one new element in Star Trek Catan compared to the Settlers version is a set of character cards, each featuring one of Kirk, Spock, McCoy, Sulu, Scott, Uhura, Chekov, Chapel, Rand or Sarek. Each character card has two special powers that the holder can use on his turn, such as a forced trade.

I think I need to go buy this even though I probably will never play it. Even the board looks pretty cool.

– Suramya

