Suramya's Blog : Welcome to my crazy life…

September 29, 2020

Mounting a Network drive over ssh in Windows using WinFsp & SSHFS-Win

I have computers running both Windows & Linux and at times I need to share files between them. I have been looking for a convenient way to access files on my Linux machine from my Windows machine without having to run Samba on the Linux box, because historically Samba has been a security nightmare and I don't want to run extra services on the computer if I can avoid it. Earlier this week I finally found a way to mount my Linux directories on Windows as a network drive over SSH using WinFsp & SSHFS-Win, and I have been running it for a couple of days without any issues. (So far)

Follow these steps to enable SSHFS-Win on your Windows machine:

Install WinFsp (Windows File System Proxy)

WinFsp is a set of software components for Windows computers that allows the creation of user mode file systems, similar to FUSE (Filesystem in Userspace) in the Unix/Linux world. You can download it from the project's Git repository; the installer is available by clicking on the download link under 'Releases' near the top right corner of the page. The latest version is WinFsp 2020.1 at the time of this writing.

Install the software by running the MSI file you downloaded; the default options worked for me without modification.

Install SSHFS For Windows

SSHFS-Win is a minimal port of SSHFS to Windows. It is available for download from the project’s Git repository. You can compile from source or download the installation file by clicking on the download link under ‘Releases’ near the top right corner of the page. The latest version is SSHFS-Win 2020 at the time of this writing.

Please note that you will need to have WinFsp installed before you can install SSHFS-Win successfully.

Usage:

Once you have installed both packages you can start using them and map a network drive to a directory using Windows Explorer or the net use command. Instructions for use are below (taken from the project documentation):

In Windows Explorer select This PC > Map Network Drive and enter the desired drive letter and SSHFS path using the following UNC syntax:

\\sshfs\REMUSER@HOST[\PATH]

The first time you map a particular SSHFS path you will be prompted for the SSH username and password which can be saved using the Windows Credential Manager so that you don’t get prompted for it again. In order to unmap the drive, right-click on the drive icon in Windows Explorer and select Disconnect.


Visual demo of how to Map a Network drive using SSHFS-Win

You can map a network drive from the command line as well using the net use command:

net use X: \\sshfs\suramya@StarKnight

You will then be prompted for the password and once you authenticate you can use the new drive as usual. You can unmap the drive as follows:

net use X: /delete
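If you want to map a specific directory on the remote machine, or have the mapping come back automatically after a reboot, you can (as far as I understand the documentation) pass a path after the hostname and add the standard net use persistence flag. The path below is just an example from my setup, so adjust it for yours:

net use X: \\sshfs\suramya@StarKnight\backup /persistent:yes

With the \\sshfs\ prefix the path is interpreted relative to the remote user's home directory, so the command above should map /home/suramya/backup on the Linux machine to the X: drive.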

I find this quite useful and hope you do as well.

Thanks to MakerLab, Department of Computer Science, HKU for pointing me in the correct direction

– Suramya

September 27, 2020

Using ncdu to Check Disk Space Usage In Linux

One of the common tasks I face on my Linux system is identifying which files/directories are using the most space. The traditional way to find out is to go to the top level directory, run 'du -hs *' (without the quotes), then cd into each directory and rinse and repeat. The other option is to right click on the folder in Dolphin or any other file manager and select Properties, and then repeat the same process for every subdirectory individually. Either way it is very tedious and time consuming.

Instead you can use ncdu (NCurses Disk Usage) to look at the storage space utilization on your computer, and it has a lot of advantages. It is designed to find space hogs on a remote server where you don't have a full graphical setup available. It is fast, simple and very easy to use. I have been using it for a while now and absolutely love it.

To install ncdu on a Debian system, you can issue the following command:

apt-get install ncdu

Once you have it installed, the usage is very simple. Simply open a command prompt and issue the following command:

ncdu

It will start in the current directory and index all the sub-directories under it. The initial scan can take a while depending on the size of the directories under the current directory, but it is comparable to the time taken by running du -hs on the same directory. Once the program completes its scan, you get a simple ncurses based interface that you can navigate using the keyboard.


ncdu display for my home directory

All directories & files are listed with their sizes in human readable format, sorted by size with the largest files & directories at the top (in the default view). You can go into a directory by selecting it and hitting Enter. The sizes of the subdirectories are shown immediately without having to run additional commands. You can also delete directories & files from within ncdu by hitting the 'd' key, which is a huge timesaver.
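ncdu also takes a few command line flags that I find handy. The commands below are just a quick sketch (the paths are examples), so check the man page for the full list:

ncdu -x /home
ncdu -o /tmp/scan_results /data
ncdu -f /tmp/scan_results

The -x flag keeps the scan on a single filesystem so it doesn't descend into mounted drives, -o exports the scan results to a file, and -f loads a previously exported scan so you can browse the results without rescanning, which is handy when looking at a remote server.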

If you haven’t tried it out do check it out. You will love it.

– Suramya

September 21, 2020

Diffblue's Cover is an AI-powered tool that can write full Unit Tests for you

Filed under: Computer Related,Computer Software,Interesting Sites — Suramya @ 6:19 PM

Writing Unit Test cases for your software is one of the most boring parts of Software Development, even though having accurate tests allows us to develop code faster & with more confidence. Having a full test suite allows a developer to ensure that the changes they have made didn't break other parts of the project that were working fine earlier. This makes Unit tests an essential part of CI/CD (Continuous Integration and Continuous Delivery) pipelines, and it is hard to do frequent releases without rigorous unit testing. For example, the SQLite database engine has 640 times as much testing code as code in the engine itself:

As of version 3.33.0 (2020-08-14), the SQLite library consists of approximately 143.4 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 640 times as much test code and test scripts – 91911.0 KSLOC.

Unfortunately, since the tests are boring and don't give immediate tangible results, they are the first casualties when a team is under a time crunch for delivery. This is where Diffblue's Cover comes into play. Diffblue was spun out of the University of Oxford following their research into how to use AI to write tests automatically. Cover uses AI to write complete Unit Tests, including logic that reflects the behavior of the program, unlike existing tools that generate Unit Tests from templates and depend on the user to provide the logic for the test.

Cover has now been released as a free Community Edition for people to see what the tool can do and try it out themselves. You can download the software from here, and the full datasheet on the software is available here.


Using Cover IntelliJ plug-in to write tests

The software is not foolproof: it doesn't identify bugs in the source code. It assumes that the code is working correctly when the tests are added, so if there is incorrect logic in the code it won't be able to help you. On the other hand, if the original logic was correct then it will let you know if the changes you make break any of the existing functionality.

Lodge acknowledged the problem, telling us: “The code might have bugs in it to begin with, and we can’t tell if the current logic that you have in the code is correct or not, because we don’t know what the intent is of the programmer, and there’s no good way today of being able to express intent in a way that a machine could understand.

“That is generally not the problem that most of our customers have. Most of our customers have very few unit tests, and what they typically do is have a set of tests that run functional end-to-end tests that run at the end of the process.”

Lodge’s argument is that if you start with a working application, then let Cover write tests, you have a code base that becomes amenable to high velocity delivery. “Our customers don’t have any unit tests at all, or they have maybe 5 to 10 per cent coverage. Their issue is not that they can’t test their software: they can. They can run end-to-end tests that run right before they cut a release. What they don’t have are unit tests that enable them to run a CI/CD pipeline and be able to ship software every day, so typically our customers are people who can ship software twice a year.”

The software currently only supports Java & IntelliJ, but work is ongoing to incorporate other programming languages & IDEs.

Thanks to Theregister.com for the link to the initial story.

– Suramya

September 9, 2020

Augmented Reality Geology

Filed under: Computer Software,Emerging Tech,Interesting Sites — Suramya @ 10:17 PM

A lot of times when you look at Augmented Reality (AR), it seems like a solution looking for a problem. We still haven't found the killer app for AR the way the VisiCalc spreadsheet was the killer app for the Apple II and Lotus 1-2-3 & Excel were for the IBM PC. There are various initiatives underway but no one has hit the jackpot yet. There are applications that allow a doctor to see a reference text or diagram in a heads-up display while they're operating, which is very useful, but that's a niche market. We need something broader in scope, and a lot of effort is focused on the educational field, where people are trying to see if they can use augmented reality in classrooms.

One of the implementations that sounds very cool is an app I found recently that uses AR to project a view of rocks, minerals etc for geology students. Traditionally students are taught by showing them actual physical samples of the minerals and 2D images of larger scale items like meteor craters or strata. The traditional way has its own problems of storage and portability, but with AR you can look at a meteor crater in a 3D view and the teacher can walk you through visually how it looks and what geological stresses formed around it. The same is also possible for minerals and crystals along with other things.

There's a new app called GeoXplorer, available on both Android and iOS, that lets you do exactly this. The app was created by the Fossett Laboratory for Virtual Planetary Exploration to help students understand the complex, three-dimensional nature of geologic structures without having to travel all over the world. The app already has a lot of models programmed in, with more on the way, and thanks to interest from other fields they are looking at adding models of proteins, art, and archeology to the app as well.

“You want to represent that data, not in a projective way like you would do on a screen on a textbook, but actually in a three-dimensional way,” Pratt said. “So you can actually look around it [and] manipulate it exactly how you would do in real life. The thing with augmented reality that we found most attractive [compared to virtual reality] is that it provides a much more intuitive teacher-student setting. You’re not hidden behind avatars. You can use body-language cues [like] eye contact to direct people to where you want to go.”

Working with the Unity game engine, Pratt has since put together a flexible app called GeoXplorer (for iOS and Android) for displaying other models. There is already a large collection of crystalline structure models for different minerals, allowing you to see how all the atoms are arranged. There are also a number of different types of rocks, so you can see what those minerals look like in the macro world. Stepping up again in scale, there are entire rock outcrops, allowing for a genuine geology field-trip experience in your living room. Even bigger, there are terrain maps for landscapes on Earth, as well as on the Moon and Mars.

It's still a work in progress, but I think it's going to be really cool and might be quite a big thing coming soon to classrooms around the world. The one major constraint I can see right now is that you have to use your phone as the AR gateway, which makes it a bit cumbersome to use. Something like a Microsoft HoloLens or other augmented reality goggles would make it much more natural, but obviously the cost of those headsets is a big problem. Keeping that in mind, it's easy to understand why they went with the phone as the AR gateway instead of a HoloLens or something similar.

From Martian terrain samples collected by NASA’s Mars Reconnaissance Orbiter to Devil’s Tower in Wyoming to rare hand samples too delicate to handle, the team is constantly expanding the catalog of 3D models available through GeoXplorer and if you have a model you’d like to see added to the app please get in contact with the Fossett Lab at fossett.lab@wustl.edu.

– Suramya

August 30, 2020

How to write using inclusive language with the help of Microsoft Word

Filed under: Computer Software,Knowledgebase,My Thoughts,Techie Stuff — Suramya @ 11:59 PM

One of the key aspects of inclusion is inclusive language, and it's very easy to use non-inclusive/gender specific language in our everyday writing. For example, when meeting a mixed gender group of people almost everyone will say something to the effect of 'Hey Guys'. I was guilty of the same and it took a concentrated effort on my part to change my greeting to 'Hey Folks' and make other similar changes. It's the same with written communication, where most people default to male gender focused writing. Recently I found out that Microsoft Office's correction tools, which most people associate with bad grammar or improper verb usage, quietly include options that help catch non-inclusive language, including gender and sexuality bias. So I wanted to share it with everyone.

Below are instructions on how to find & enable the settings:

  • Open MS Word
  • Click on File -> Options
  • Select ‘Proofing’ from the menu in the left corner and then scroll down on the right side to ‘Writing Style’ and click on the ‘Settings’ button.
  • Scroll down to the “Inclusiveness” section, select all of the checkboxes that you want Word to check for in your documents, and click the “OK” button. In some versions of Word you will need to scroll down to the ‘Inclusive Language’ section (its all the way near the bottom) and check the ‘Gender-Specific Language’ box instead.
  • Click Ok

It doesn't sound like a big deal when you refer to someone by the wrong gender, but trust me, it is a big deal. If you don't believe me, try addressing a group of men as 'Hello Ladies' and then wait for the reactions. If you can't address a group of guys as ladies then you shouldn't refer to a group of ladies as guys either. I think it is common courtesy and requires minimal effort over the long term (initially things will feel a bit awkward but then you get used to it).

Well this is all for now. Will write more later.

– Suramya

August 29, 2020

You can be identified online based on your browsing history

Filed under: Computer Related,Computer Software,My Thoughts,Techie Stuff — Suramya @ 7:29 PM

Reliably identifying people online is the bedrock of the multi-billion dollar advertising industry, and as more and more users become privacy conscious, browsers have been adding features to increase user privacy and reduce the probability of users being identified online. Users can be identified by cookies, super cookies, etc. Now there is a research paper (Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories) that claims to be able to identify users based on their browsing histories alone. It builds on the earlier paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns, re-validating its findings and extending them.

We examine the threat to individuals' privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper's core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.

Original paper

Olejnik, Castelluccia, and Janc [48] gathered data in a project aimed at educating users about privacy practices. For the analysis presented in [48] they used the CSS :visited browser vulnerability [8] to determine whether various home pages were in a user's browsing history. That is, they probed users' browsers for 6,000 predefined "primary links" such as www.google.com and got a yes/no for whether that home page was in the user's browsing history. A user may have visited that home page and then cleared their browsing history, in which case they would not register a hit. Additionally a user may have visited a subpage e.g. www.google.com/maps but not www.google.com in which case the probe for www.google.com would also not register a hit. The project website was open for an extended period of time and recorded profiles between January 2009 and May 2011 for 441,627 unique users, some of whom returned for multiple history tests, allowing the researchers to study the evolution of browser profiles as well. With this data, they examined the uniqueness of browsing histories.

This brings to mind a project I saw a few years ago that would give you a list of websites from the top 1k sites that you had visited in the past, using JavaScript and some script-fu. Unfortunately I can't find the link to the site right now as I don't remember the name and a generic search is returning random sites. If I find it I will post it here as it was quite interesting.

Well this is all for now. Will post more later.

– Suramya

August 27, 2020

Optimizing the making of a peanut butter and banana sandwich using computer vision and machine learning

Filed under: Computer Related,Computer Software,Techie Stuff — Suramya @ 12:42 AM

The current pandemic is forcing people to stay at home, depriving them of the activities that kept them occupied in the past, so people are getting a bit stir-crazy & bored. It's worse for developers/engineers, as you never know what will come out of the depths of a bored programmer's mind. Case in point: the effort spent by Ethan Rosenthal in writing Machine Learning/Computer Vision code to optimize the coverage of the banana slices on his peanut butter & banana sandwich so that there is the same amount of banana in every mouthful. The whole exercise took him a few months to complete and he is quite proud of the results.

It’s really quite simple. You take a picture of your banana and bread, pass the image through a deep learning model to locate said items, do some nonlinear curve fitting to the banana, transform to polar coordinates and “slice” the banana along the fitted curve, turn those slices into elliptical polygons, and feed the polygons and bread “box” into a 2D nesting algorithm
[…]
If you were a machine learning model (or my wife), then you would tell me to just cut long rectangular strips along the long axis of the banana, but I’m not a sociopath. If life were simple, then the banana slices would be perfect circles of equal diameter, and we could coast along looking up optimal configurations on packomania. But alas, life is not simple. We’re in the middle of a global pandemic, and banana slices are elliptical with varying size.

The problem of fitting arbitrary polygons (sliced circular banana pieces) into a box (the bread piece) is NP-hard, so the ideal solution is practically uncomputable, and Rosenthal's solution is a good approximation of the optimal arrangement in a reasonable time frame. The final solution is available as a command-line package called "nannernest" which takes a photo of the bread piece & banana as its argument and returns an optimal slice-and-arrange pattern for the given combination.


Sample output created by nannernest
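From the writeup it looks like the package is published on PyPI, so trying it out should be something along the lines of the commands below. The image name is just a placeholder and I haven't verified the exact flags, so check the project README before running it:

pip install nannernest
nannernest my_sandwich.jpg

The second command should analyse the photo and spit out the suggested slice-and-arrange pattern like the sample output above.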

Check out the code & the full writeup on the project if you are interested. Even though the application is silly, it's a good writeup on using Machine Learning & Computer Vision for a project.

Source: Boing Boing

– Suramya

August 25, 2020

Using Bioacoustic signatures for Identification & Authentication

We have all heard about biometric scanners that identify folks using their fingerprints, an iris scan or even the shape of their ear. Then we have lower accuracy authentication systems like face recognition, voice recognition etc. Individually they might not be 100% accurate, but combine one or more of these and we have the ability to create systems that are harder to fool. This is not to say that these systems are foolproof, because there are ways around each of the examples I mentioned above: our photos are everywhere, and given a pic of high enough quality it is possible to create a replica of the face or iris or even fingerprints.

Due to the above mentioned shortcomings, scientists are always on the lookout for more ways to authenticate and identify people. Researchers from South Korea have found that the signature created when sound waves pass through a human body is unique enough to be used to identify individuals. Their work, described in a study published on 4 October in the IEEE Transactions on Cybernetics, suggests this technique can identify a person with 97 percent accuracy.

“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”

[…]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”

Interestingly, while the setup is not as accurate as fingerprint or iris scans, it is still accurate enough to differentiate between two fingers of the same hand. If the waves required to generate the bioacoustic signatures are validated to be safe for humans over long term use, then it is possible that we will soon see a broader implementation of this technology in places like airports, buses, public areas etc to identify people automatically without them having to do anything. If it can be made portable then it could be used to monitor protests, rallies, etc, which would make it a privacy risk.

The problem with this tech is that it would be harder to fool without taking steps that would make you stand out, like wearing a vest filled with liquid that changes your acoustic signature. That is great when we are just talking about authentication/identification for access control, but it becomes a nightmare when we consider the surveillance aspect of usage.

Source: The Bioacoustic Signatures of Our Bodies Can Reveal Our Identities

– Suramya

August 23, 2020

Mozilla Thunderbird has a ‘Link Mismatch Detection’ feature to protect from Phishing & Scams

Filed under: Computer Software,Techie Stuff — Suramya @ 10:03 PM

Yesterday I was trying to register for a new service and as always I had to share my email address and wait for the confirmation/validation email to verify that the email address I had provided was a valid one. Once I finally got the email it had a clickable link to validate my email address that looked like the screenshot below:


Clickable link for email address validation

Since this was an email I was expecting and I wanted to create the account, I clicked on the link and got a surprise. Instead of immediately taking me to the page I had clicked on, Thunderbird showed the following pop-up telling me that the link was taking me to a different website than what the link text indicated. This is new behavior that I believe was implemented in Thunderbird 68, but I haven't found the release notes confirming it. (I didn't really spend a lot of time searching, to be honest.)


Link Mismatch Detected

In this case the reason was benign: the link was taking me to a tracking site before redirecting to the email confirmation page. But the benefits are immediately obvious, as this would flag links in phishing/scam emails that pretend to come from a bank/email provider/Facebook but redirect users to a phishing site, and prompt users to verify that they are going to the correct site.

Unfortunately the feature is not perfect and needs more work, as it also flags links in newsletters etc that use tracking links (which is pretty much all of them). If users constantly get the popup, there is a high probability that they will get conditioned to click on the first button to go to the site the link is taking them to without reading the text fully.

Some users will find this annoying and want to disable it, so below are the steps to disable the phishing checks in Thunderbird (not recommended). Only make these changes if you are absolutely sure of what you are doing and take full responsibility for the fact that you disabled the phishing checks. I will not be responsible if you disable the checks and then end up with an empty bank account after having your account phished. Also, I found the instructions on the Mozilla forum but haven't tried them myself, so like anything else you find on the internet please validate the steps and only follow them if you are sure they are safe :).

There are four phishing preferences.

* mail.phishing.detection.enabled

i.e. Tools > Options > Security > Email Scams > Tell me if the message I’m reading is a suspected email scam

* mail.phishing.detection.ipaddresses
* mail.phishing.detection.mismatched_hosts
* mail.phishing.detection.disallow_form_actions

Try setting the mail.phishing.detection.mismatched_hosts preference to false in the about:config window, then restart and test again.
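If you prefer config files over clicking through about:config, the same preference can (as far as I know) also be set by adding a line like the following to a user.js file in your Thunderbird profile directory; the preference name comes from the list above:

user_pref("mail.phishing.detection.mismatched_hosts", false);

Thunderbird reads user.js at startup and applies the value, which should be equivalent to flipping the preference in the Config Editor.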

It’s great that the Thunderbird team is adding more and more features to make email safer. Looking forward to more such features in TB.

Well this is all for now. Will post more later.

– Suramya

August 21, 2020

Emotion detection software for Pets using AI and some thoughts around it (and AI in general)

Filed under: Computer Software,Emerging Tech,Humor,My Thoughts,Techie Stuff — Suramya @ 5:32 PM

Pet owners are a special breed of people: they willingly take responsibility for another life and take care of it. I personally like pets as long as I can return them to the owner at the end of the day (or hour, depending on how annoying the pet is). I had to take care of a puppy for a week when Surabhi & Vinit were out of town and that experience was more than enough to confirm my belief in this matter. Others however feel differently and spend quite a lot of time and effort talking to their pets, and some of them even pretend that the dog is talking back.

Now, leveraging the power of AI, there is a new app that analyses & interprets the facial expressions of your pet. Folks over at the University of Melbourne built a Convolutional Neural Network based application called Happy Pets that you can download from the Android or Apple app stores to try on your pet. They claim to be able to identify the emotion the pet was feeling when the photo was taken.

While the science behind it is cool, and a lot of pet owners who tried the application over at Hacker News seem to like it, I feel it's a bit frivolous and silly. Plus it's hard enough for us to classify emotions in humans reliably using AI, so I would take the claims with a pinch of salt. The researchers themselves have also not given any numbers for the accuracy of the model.

When I first saw the post about the app it reminded me of another article I had read a few days ago, which postulated that 'Too many AI researchers think real-world problems are not relevant'. At first I thought the author was trolling AI developers, but after reading the article I kind of agree with him. AI has massive potential to advance our understanding of health, agriculture, scientific discovery, and more. However, looking at the feedback AI papers have been getting, it appears that AI researchers are allergic to practical applications (or in some cases useful applications). For example, below is a review received on a paper submitted to the NeurIPS (Neural Information Processing Systems) conference:

“The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

If I read this correctly, they are basically saying that this AI paper is for a particular application, so it is not interesting enough for the ML community. There is a similar bias in the theoretical physics/mathematics world, where academics who talk about implementing the concepts/theories are looked down upon by the 'purists'. I personally believe that while the theoretical sciences are all well & good, and we do need people working on them to expand our understanding, at the end of the day if we are not applying these learnings/theorems practically they are of no use. There will be cases where we don't have the know-how to implement or apply the learnings, but we should not let that stand in the way of practical applications for the things we can implement/use.

To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

What do you think? Do you agree/disagree?

Source: HackerNews

– Suramya
