Suramya's Blog : Welcome to my crazy life…

September 30, 2020

How to fix vlc’s Core dumping issue while playing some videos

Over the past two days I found that the VLC install on my computer was suddenly having issues playing some of my video files. Initially I thought it was a problem with the video files themselves, but then I realized this was also happening with videos that had been playing fine earlier. When I ran vlc from the command line to play a problem video, it gave the following output on screen when it crashed:

[00005587b42751b0] dummy interface: using the dummy interface module…
[00007f00c4004980] egl_x11 gl error: cannot select OpenGL API
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)
[00007f00c402a310] postproc filter error: Unsupported input chroma (VAOP)
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)


[00007f00c44265c0] chain filter error: Too high level of recursion (3)
[00007f00c4414240] main filter error: Failed to create video converter
[00007f00bd9020d0] main filter error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to compensate for the format changes, removing all filters
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)

A Google search told me that a possible solution was to disable hardware acceleration in the Video settings, but that didn't fix my problem. So I took a look at the kernel.log file in /var/log and found the following error logged when the program crashed:

Sep 30 21:11:44 StarKnight kernel: [173399.132554] vlc[91472]: segfault at 28000000204 ip 00007f2d8916c1d8 sp 00007f2d8aa69db0 error 4 in libpostproc.so.55.7.100[7f2d8915c000+1d000]
Sep 30 21:11:44 StarKnight kernel: [173399.132568] Code: 98 48 8d 44 07 20 0f 18 08 8b 44 24 08 4d 8d 0c 1a 4d 8d 04 2b 85 c0 0f 85 cb fd ff ff 4c 8b 6c 24 28 4b 8d 04 29 4b 8d 14 20 <41> 0f 6f 01 43 0f 6f 0c 29 41 0f 7f 00 43 0f 7f 0c 20 43 0f 6f 04

I spent about an hour searching for a solution using the details from kernel.log but got nowhere. Finally I found a forum post where one of the suggested solutions was to remove the VLC configuration files. Since I didn't have any other bright ideas, I renamed the VLC config folder by issuing the following command:

mv ~/.config/vlc ~/.config/vlc_09302020

Then I started vlc and just like that everything started working again. 🙂 I am not sure what caused the settings to get borked in the first place, but the issue is fixed now so all is well.

– Suramya

September 29, 2020

Mounting a Network drive over ssh in Windows using WinFsp & SSHFS-Win

I have computers running both Windows & Linux and at times I need to share files between them. I have been looking for a convenient way to access the files on my Linux machine from my Windows machine without having to run Samba on the Linux side, because historically Samba has been a security nightmare and I don't want to run extra services on the computer if I can avoid it. Earlier this week I finally found a way to mount my Linux directories on Windows as a network mount over SSH using WinFsp & SSHFS-Win, and I have been running it for a couple of days so far without any issues. (So far)

Follow these steps to enable SSHFS-Win on your windows machine:

Install WinFsp (Windows File System Proxy)

WinFsp is a set of software components for Windows that allows the creation of user-mode file systems, similar to FUSE (Filesystem in Userspace) in the Unix/Linux world. You can download it from the project's Git repository; the installation file is available by clicking on the download link under 'Releases' near the top right corner of the page. The latest version is WinFsp 2020.1 at the time of this writing.

Install the software by running the MSI file you downloaded; the default options worked for me without modification.

Install SSHFS For Windows

SSHFS-Win is a minimal port of SSHFS to Windows. It is available for download from the project's Git repository. You can compile it from source or download the installation file by clicking on the download link under 'Releases' near the top right corner of the page. The latest version is SSHFS-Win 2020 at the time of this writing.

Please note that you will need to have WinFsp installed already before you can install SSHFS-Win successfully.

Usage:

Once you have installed both packages you can map a network drive to a remote directory using Windows Explorer or the net use command. The instructions below are taken from the project documentation:

In Windows Explorer select This PC > Map Network Drive and enter the desired drive letter and SSHFS path using the following UNC syntax:

\\sshfs\REMUSER@HOST[\PATH]

The first time you map a particular SSHFS path you will be prompted for the SSH username and password which can be saved using the Windows Credential Manager so that you don’t get prompted for it again. In order to unmap the drive, right-click on the drive icon in Windows Explorer and select Disconnect.


Visual demo of how to Map a Network drive using SSHFS-Win

You can map a network drive from the command line as well using the net use command:

net use X: \\sshfs\suramya@StarKnight

You will then be prompted for the password and once you authenticate you can use the new drive as usual. You can unmap the drive as follows:

net use X: /delete

I find this quite useful and hope you do as well.

Thanks to MakerLab, Department of Computer Science, HKU for pointing me in the correct direction

– Suramya

September 26, 2020

Source code for multiple Microsoft operating systems including Windows XP & Server 2003 leaked

Filed under: Computer Related,Techie Stuff — Suramya @ 5:58 PM

Windows XP & Windows Server source code leaked online earlier this week, and even though these operating systems are almost two decades old the leak is significant. Firstly, some of the core XP components are still in use in Windows 7/8/10, so if a major bug is found in any of those subsystems once people analyze the code, it will have a significant impact on Redmond's modern OSes as well. Secondly, it will give everyone a chance to understand how the Windows OS works, so that tools like WINE and similar projects can be enhanced for better compatibility with Windows. The other major impact will be on systems that still run XP: ATMs, embedded systems, point-of-sale terminals, set-top boxes etc. Those will be hard to upgrade & protect, as in some cases the companies that made the devices are no longer in business, and in other cases the software is installed on devices that are hard to upgrade.

This is not the first time Windows source code has leaked to the internet. In early 2000 a mega torrent of all MS operating systems going back to MS-DOS was released; it allegedly contained the source code for the following OSes:

OS (from filename)      Alleged source size (bytes)
------------------      ---------------------------
MS-DOS 6                10,600,000
NT 3.5                  101,700,000
NT 4                    106,200,000
Windows 2000            122,300,000
NT 5                    2,360,000,000

Leaked data from the latest leak: alleged contents of the torrent file with MS source code.

The leaked code is available for download at most torrent sites; I am not going to link to it for obvious reasons. If you want to check it out you can go download it, but as always be careful of what you download off the internet as it might contain viruses and/or trojans. This is especially true if you are downloading the torrent on a Windows machine. Several users on Twitter claim that the source code for the original Xbox is included as well, but reports vary on this. I haven't downloaded it myself so can't say for sure either way.

Keep in mind that the leak was illegal and just because it has leaked doesn’t mean that you can use it to build a clone of Windows XP without written authorization from Microsoft.

Source: ZDNet: Windows XP source code leaked online, on 4chan, out of all places

– Suramya

September 12, 2020

Post-Quantum Cryptography

Filed under: Computer Related,Quantum Computing,Techie Stuff — Suramya @ 11:29 AM

As you are aware, one of the big promises of Quantum Computers is the ability to break existing encryption algorithms in a realistic time frame. If you are not aware of this, then here's a quick primer on computer security/cryptography. The current security of cryptography relies on certain "hard" problems: calculations which are practically impossible to solve without the correct cryptographic key. For example, it is trivial to multiply two numbers together: 593 times 829 is 491,597. But it is hard to start with the number 491,597 and work out which two prime numbers must be multiplied to produce it, and it becomes increasingly difficult as the numbers get larger. Such hard problems form the basis of algorithms like RSA that would take the best computers available billions of years to solve, and all current IT security is built on top of this basic foundation.
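The asymmetry described above is easy to demonstrate. In this sketch (my own illustration, not taken from any cryptographic library), multiplying the two primes is a single operation, while recovering them by trial division takes hundreds of loop iterations, and the gap grows rapidly as the numbers get larger:

```python
def trial_factor(n):
    """Find the smallest prime factor of n by trial division."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n  # n itself is prime

# Easy direction: one multiplication
product = 593 * 829
print(product)   # 491597

# Hard direction: search for the factors
p = trial_factor(product)
q = product // p
print(p, q)      # 593 829
```

Real RSA moduli are 2048-bit numbers, far beyond trial division; the point is only that the two directions of the computation have wildly different costs.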

Quantum Computers use "qubits", where a single qubit can encode more than two states (technically, each qubit can store a superposition of multiple states), making it possible to perform massively parallel computations. This makes it theoretically possible for a quantum computer with enough qubits to break traditional encryption in a reasonable time frame. One theoretical projection postulated that a quantum computer could break 2048-bit RSA encryption in ~8 hours. Which, as you can imagine, is a pretty big deal. But there is no need to panic, as this is still only theoretically possible as of now.

However, this is something that is coming down the line, so the world's foremost cryptographic experts have been working on quantum-safe encryption, and for the past 3 years the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and data protection. Out of the initial 69 submissions received three years ago, the group narrowed the field down to 15 finalists after two rounds of reviews. NIST has now begun the third round of public review of the algorithms to help decide the core of the first post-quantum cryptography standard.

They are expecting to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures. To make the process more manageable they have divided the finalists into two tracks: the first track contains the 7 most promising algorithms, which have a high probability of being suitable for wide application after the round finishes; the second track has the remaining eight algorithms, which need more time to mature or are tailored to a specific application.

The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

You should check out this talk by Daniel Apon of NIST detailing the selection criteria used to classify the finalists and the full paper with technical details is available here.

Source: Schneier on Security: More on NIST’s Post-Quantum Cryptography

– Suramya

September 11, 2020

Testing the world’s largest digital camera by photographing Broccoli

Filed under: Astronomy / Space,Techie Stuff — Suramya @ 6:53 PM

The world's largest digital camera has successfully completed its first test by capturing the first 3,200-megapixel images, the subject being a head of broccoli. The camera is meant to be part of the telescope at the Vera Rubin Observatory, where it will take photographs of the sky to help improve our understanding of the universe. Once it goes live it will photograph its entire field of view (an area of about 40 full moons) every few nights, giving researchers the ability to pinpoint the locations of billions of stars and galaxies while also catching anything that moves or flashes.

The imaging sensors for the camera took over 6 months to assemble as they need to be mounted very precisely. The sensors are assembled in grids of 9, each grid called a scientific raft, and the whole setup consists of 25 rafts. Each raft is precisely mounted with a gap of just 5 human hairs' width between it and its neighbors, and costs approximately $3 million, so you won't be able to buy one from the corner shop anytime soon. Once the sensors were assembled successfully, the whole apparatus was cooled to negative 150 degrees Fahrenheit, its operating temperature.

Even though the assembly was completed back in January, the scientists were unable to take test pictures until May due to the coronavirus pandemic. And even though the sensor assembly is complete, the team still doesn't have the remaining camera components, such as the lenses, so they had to improvise by using a 150-micron pinhole to project images onto the CCD array. That's right: they used the same 'technology' we used as kids to learn about photography to take a picture with the largest camera ever built.

Since they needed to photograph something that would let them verify the quality of the picture, they chose broccoli, whose lumpy, bumpy surface makes its structure perfect for testing the new camera sensors.

“Taking these images is a major accomplishment,” said Aaron Roodman, professor and chair of the particle physics and astrophysics department and the scientist at SLAC responsible for the assembly and testing of the LSST camera, in a statement.

“With the tight specifications we really pushed the limits of what’s possible to take advantage of every square millimeter of the focal plane and maximize the science we can do with it.”

The team estimates that the camera will be ready for testing by mid-2021, before it is sent off to Chile for installation at the Vera Rubin Observatory.

Source: Vera Rubin: Super telescope’s giant camera spies broccoli

– Suramya

September 1, 2020

Background radiation causes Integrity issues in Quantum Computers

Filed under: Computer Related,My Thoughts,Quantum Computing,Techie Stuff — Suramya @ 11:16 PM

As if Quantum Computing didn’t have enough issues preventing it from being a workable solution already, new research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers. The research has been published in Nature: Impact of ionizing radiation on superconducting qubit coherence.

Quantum computers are super powerful because their basic building block, the qubit (quantum bit), is able to exist as 0 and 1 simultaneously (yes, it makes little sense; Einstein famously dismissed this kind of quantum weirdness as 'spooky action at a distance', though that phrase referred to entanglement), allowing them to perform orders of magnitude more operations in parallel than regular computing systems. Unfortunately it appears that these qubits are highly sensitive to their environment, and even minor levels of radiation emitted by trace elements in concrete walls and by cosmic rays can cause them to lose coherence, corrupting the calculation/data; this is called decoherence. The longer we can avoid decoherence, the more powerful/capable the quantum computer. We have made significant improvements here over the past two decades, from maintaining coherence for less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.
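To put those coherence-time figures in perspective: decoherence is often modeled, to a first approximation, as an exponential decay with time constant T2. This toy calculation uses that standard textbook model (my own illustration, not taken from the MIT paper) with the ~200 microsecond figure quoted above:

```python
import math

T2 = 200e-6  # seconds: roughly today's best coherence time, as quoted above

def coherence(t, t2=T2):
    """Fraction of coherence remaining after time t (simple exponential model)."""
    return math.exp(-t / t2)

for t in (50e-6, 200e-6, 1e-3):
    print(f"after {t * 1e6:6.0f} us: {coherence(t):.3f} of the coherence remains")
```

After one millisecond, under this model, less than 1% of the coherence survives, which is why pushing useful computations into the millisecond range makes radiation-induced decoherence such a pressing problem.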

As per the study, the effect is serious enough to limit coherence times to just a few milliseconds, a level we are expected to reach in the next few years. The only currently known way to avoid the issue is to shield the computer, which means putting these computers underground and surrounding them with a 2-ton wall of lead. Another possibility is to use something like a counter-wave of radiation to cancel the incoming radiation, similar to how noise cancellation works, but that is something which doesn't exist today and would require a significant technological breakthrough before it is feasible.

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

Quantum Computing is a fascinating field but it really messes with your mind. So I am happy there are folks out there spending time trying to figure out how to get this amazing invention working and reliable enough to replace our existing Bit based computers.

Source: Cosmic rays can destabilize quantum computers, MIT study warns

– Suramya

August 30, 2020

How to write using inclusive language with the help of Microsoft Word

Filed under: Computer Software,Knowledgebase,My Thoughts,Techie Stuff — Suramya @ 11:59 PM

One of the key aspects of inclusion is inclusive language, and it's very easy to use non-inclusive/gender-specific language in our everyday writing. For example, when meeting a mixed-gender group of people almost everyone will say something to the effect of 'Hey Guys'. I was guilty of the same, and it took a concentrated effort on my part to change my greeting to 'Hey Folks' and make other similar changes. It's the same with written communication, where most people default to male-gender-focused writing. Recently I found out that Microsoft Office's correction tools, which most might associate with bad grammar or improper verb usage, quietly include options that help catch non-inclusive language, including gender and sexuality bias. So I wanted to share it with everyone.

Below are instructions on how to find & enable the settings:

  • Open MS Word
  • Click on File -> Options
  • Select ‘Proofing’ from the menu in the left corner and then scroll down on the right side to ‘Writing Style’ and click on the ‘Settings’ button.
  • Scroll down to the “Inclusiveness” section, select all of the checkboxes that you want Word to check for in your documents, and click the “OK” button. In some versions of Word you will need to scroll down to the ‘Inclusive Language’ section (its all the way near the bottom) and check the ‘Gender-Specific Language’ box instead.
  • Click Ok

It doesn't sound like a big deal when you refer to someone by the wrong gender, but trust me, it's a big deal. If you don't believe me, try addressing a group of men as 'Hello Ladies' and then wait for the reactions. If you can't address a group of guys as ladies then you shouldn't refer to a group of ladies as guys either. I think it is common courtesy and requires minimal effort over the long term (initially things will feel a bit awkward, but then you get used to it).

Well this is all for now. Will write more later.

– Suramya

August 29, 2020

You can be identified online based on your browsing history

Filed under: Computer Related,Computer Software,My Thoughts,Techie Stuff — Suramya @ 7:29 PM

Reliably identifying people online is a bedrock of the multi-billion-dollar advertising industry, and as more and more users become privacy conscious, browsers have been adding features to increase user privacy and reduce the probability of users being identified online. Users can be identified via cookies, supercookies, and so on. Now there is a research paper (Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories) that claims to be able to identify users based on their browsing histories alone. It builds on the earlier paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns, re-validating its findings and extending them.

We examine the threat to individuals' privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper's core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.
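The core idea of a "distinct browsing profile" can be sketched in a few lines: treat each user's history as the set of domains they visited, then count how many of those sets occur exactly once (and are therefore reidentifying). The histories below are made up for illustration; the paper's dataset was ~52,000 real Firefox users.

```python
from collections import Counter

# Hypothetical histories: each user's profile is the set of domains visited.
histories = {
    "user1": {"news.example", "mail.example", "blog.example"},
    "user2": {"news.example", "shop.example"},
    "user3": {"news.example", "mail.example", "blog.example"},  # identical to user1
    "user4": {"video.example"},
}

# Count how often each distinct profile occurs...
profiles = Counter(frozenset(h) for h in histories.values())
# ...and how many profiles belong to exactly one user (i.e. are unique).
unique = sum(1 for count in profiles.values() if count == 1)
print(f"{unique} of {len(profiles)} distinct profiles are unique")
```

The paper's striking result is that with real two-week histories, 99% of the distinct profiles were unique in exactly this sense.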

Original paper

Olejnik, Castelluccia, and Janc [48] gathered data in a project aimed at educating users about privacy practices. For the analysis presented in [48] they used the CSS :visited browser vulnerability [8] to determine whether various home pages were in a user’s browsing history. That is, they probed users’ browsers for 6,000 predefined “primary links” such as www.google.com and got a yes/no for whether that home page was in the user’s browsing history. A user may have visited that home page and then cleared their browsing history, in which case they would not register a hit. Additionally a user may have visited a subpage e.g. www.google.com/maps but not www.google.com in which case the probe for www.google.com would also not register a hit. The project website was open for an extended period of time and recorded profiles between January 2009 and May 2011 for 441,627 unique users, some of whom returned for multiple history tests, allowing the researchers to study the evolution of browser profiles as well. With this data, they examined the uniqueness of browsing histories.

This brings to mind a project I saw a few years ago that would give you a list of sites from the top 1k websites that you had visited, using JavaScript and some script-fu. Unfortunately I can't find the link to the site right now as I don't remember the name and a generic search returns random sites. If I find it I will post it here, as it was quite interesting.

Well this is all for now. Will post more later.

– Suramya

August 28, 2020

Got my first bot response to a Tweet and some analysis on the potential Bot

Filed under: Humor,My Thoughts,Techie Stuff — Suramya @ 10:21 PM

Today I achieved a major milestone of being on the internet 🙂 : I finally had a (potential) bot/troll respond to one of my tweets with the usual nonsense. Normally I would ignore it, but the response was just so funny that I had to comment on it. The reply was to my tweet about how we could potentially achieve our target of eradicating tuberculosis by 2025 because of the masks we are wearing due to Covid-19. You see, TB bacteria spread through the air from one person to another: just like Covid, TB bacteria are put into the air when a person with TB disease of the lungs or throat coughs, speaks, or sings, infecting people nearby when they breathe the bacteria in. Now that wearing a mask is becoming the new normal in most parts of the world (except for some morons who don't understand/believe science or believe that politics is stronger than science), there is a high chance it will also reduce the spread of other airborne illnesses.


My Tweet & the response to it

Once I saw the response, I clicked on the profile and scrolled through the posting history, and saw that a majority of the posts (at least for the amount I was able to stomach while scrolling) were retweets of anti-masker, Covid-denial, pro-Trump, anti-vaccine nonsense. As I needed a distraction, I decided to spend a bit of time trying to identify whether the account was just a stupid person or a clever bot, and did a little investigation on the account.

Looking at the account, a couple of things stood out right from the start. The first was that the account was created in July 2020 and the username had a bunch of numbers in it, which is usually the case for automatically created accounts. So I ran a query on the account via Botometer® by OSoMe, which gave me a whole bunch of data on the account, and several data points made it stand out as a potential bot. In just over a month (5 weeks and a day, to be exact) the account had tweeted 6,197 times, including 2,000 times in just the past 7 days, which equates to about 12 tweets every hour, every day. The other data point that stood out was that the account tweeted at almost the same time every day, which is usually indicative of a bot.
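The posting-rate arithmetic above is easy to check from the numbers in the post:

```python
total_tweets = 6197      # tweets since the account was created
account_age_days = 36    # "5 weeks and a day"
recent_tweets = 2000     # tweets in the past 7 days

lifetime_rate = total_tweets / (account_age_days * 24)
recent_rate = recent_tweets / (7 * 24)
print(f"{lifetime_rate:.1f} tweets/hour averaged over the account's lifetime")
print(f"{recent_rate:.1f} tweets/hour over the past week")
```

The recent rate works out to just under 12 tweets an hour, around the clock, which is a punishing pace for a human but trivial for a script.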

Interestingly, Botometer does give the account a low probability of being a fully automated bot, but that could be because the person running it is manually feeding the responses and having the system spray them out. Or it could be a bored person doing it for LOLs, which is code for morons who don't know better and think they are being 'cool' or 'edgy' or whatever. If that's the case then they really need to get a better hobby.

Well this is all for now. Wear a mask when you go out and stay safe.

– Suramya

PS: I have no patience for the anti-masker/anti-vaccine/anti-science nonsense, so I will be deleting any such comments/responses or making fun of them, depending on my mood at the time.

August 27, 2020

Optimizing the making of peanut butter and banana sandwich using computer vision and machine learning

Filed under: Computer Related,Computer Software,Techie Stuff — Suramya @ 12:42 AM

The current pandemic is forcing people to stay at home, depriving them of the activities that kept them occupied in the past, so people are getting a bit stir-crazy and bored. It's worse for developers/engineers, as you never know what will come out of the depths of a bored programmer's mind. Case in point: the effort spent by Ethan Rosenthal in writing machine learning/computer vision code to optimize the coverage of the banana slices on his peanut butter & banana sandwich, so that there is the same amount of banana in every mouthful. The whole exercise took him a few months to complete and he is quite proud of the results.

It’s really quite simple. You take a picture of your banana and bread, pass the image through a deep learning model to locate said items, do some nonlinear curve fitting to the banana, transform to polar coordinates and “slice” the banana along the fitted curve, turn those slices into elliptical polygons, and feed the polygons and bread “box” into a 2D nesting algorithm
[…]
If you were a machine learning model (or my wife), then you would tell me to just cut long rectangular strips along the long axis of the banana, but I’m not a sociopath. If life were simple, then the banana slices would be perfect circles of equal diameter, and we could coast along looking up optimal configurations on packomania. But alas, life is not simple. We’re in the middle of a global pandemic, and banana slices are elliptical with varying size.

The problem of fitting arbitrary polygons (the sliced banana pieces) into a box (the slice of bread) is NP-hard, so the ideal solution is practically uncomputable, and Rosenthal's solution is a good approximation of the optimum in a reasonable time frame. The final solution is available as a command-line package called "nannernest" which takes a photo of the bread and banana as its argument and returns an optimal slice-and-arrange pattern for the given combination.
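To get a feel for what a nesting algorithm has to do, here is a deliberately crude greedy packer (my own toy sketch, not nannernest's solver, which handles arbitrary elliptical polygons): it approximates banana slices as circles and scans a grid of candidate positions, keeping the first spot where a slice fits without overlapping anything already placed.

```python
def pack_circles(radii, width, height, step=1.0):
    """Greedily place circles (x, y, r) in a width x height box, largest first."""
    placed = []
    for r in sorted(radii, reverse=True):  # biggest slices first, a common heuristic
        found = False
        y = r
        while y <= height - r and not found:
            x = r
            while x <= width - r:
                # Accept this spot only if the circle clears every placed circle.
                if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
                       for px, py, pr in placed):
                    placed.append((x, y, r))
                    found = True
                    break
                x += step
            y += step
    return placed

slices = pack_circles([3, 2, 2, 1.5], width=10, height=10)
print(f"placed {len(slices)} of 4 slices:", slices)
```

Greedy placement like this comes with no optimality guarantee, which is exactly why the real nesting problem is NP-hard and why nannernest settles for a good approximation instead of an exact answer.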


Sample output created by nannernest

Check out the code & the full writeup on the project if you are interested. Even though the application is silly, it's a good writeup on using machine learning & computer vision for a project.

Source: Boing Boing

– Suramya

