Suramya's Blog : Welcome to my crazy life…

April 19, 2023

Finally a useful AI Implementation: Making spoken dialog easier to hear in movies and shows

Filed under: Emerging Tech,News/Articles,Tech Related — Suramya @ 6:37 PM

Finally, an AI use case that is actually useful. There are a ton of places where AI seems to be shoehorned in for no reason, but this recent announcement from Amazon is different: Dialogue Boost is a new feature that lets you raise the volume of dialogue relative to background music and effects, so you can actually hear the dialogue without having your eardrums nearly shattered when a sudden explosion happens.
It is still in the testing phase and has only been rolled out on some of their content so far, but I am looking forward to it becoming generally available.

Dialogue Boost works by analyzing the original audio in a movie or series and identifying points where dialogue may be hard to hear above background music and effects, at which point speech patterns are isolated and audio is enhanced to make the dialogue clearer. The AI targets spoken dialogue rather than a typical speaker or home theater set up that only amplifies the center channel of audio. It’s something that exists on high-end theater set-ups and certain smart TVs, but Amazon is the first streamer to roll out such a feature.
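
Amazon hasn't said how Dialogue Boost is implemented under the hood, but the basic idea of lifting speech relative to the rest of the mix is easy to sketch. Below is a minimal, purely illustrative Python example (my own toy, not Amazon's method; the file names and the 300-3400 Hz "speech band" are assumptions) that estimates the dialogue with a band-pass filter and mixes it back in louder:

# Toy sketch of "boost the speech band relative to the rest of the mix".
# NOT Amazon's implementation; file names and the 300-3400 Hz band are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def boost_dialogue(in_path, out_path, gain_db=6.0):
    rate, audio = wavfile.read(in_path)            # 16-bit PCM WAV assumed
    audio = audio.astype(np.float64)
    if audio.ndim > 1:                             # fold stereo down to mono for simplicity
        audio = audio.mean(axis=1)
    sos = butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
    speech = sosfiltfilt(sos, audio)               # crude estimate of the dialogue
    boosted = audio + (10 ** (gain_db / 20) - 1) * speech
    boosted = boosted / (np.max(np.abs(boosted)) or 1.0) * 32767   # normalise to avoid clipping
    wavfile.write(out_path, rate, boosted.astype(np.int16))

boost_dialogue("scene.wav", "scene_boosted.wav")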

I have gotten used to keeping subtitles on when I watch something, because that ensures I don't miss any dialogue due to the background music and sounds in the show or movie. This looks like it will reduce that need. I think I will still end up keeping the subtitles on, but this will certainly help.

Source: Amazon’s New Tool Adjusts Sound So You Can Actually Understand Movie and TV Dialogue
Announcement: Prime Video launches a new accessibility feature that makes it easier to hear dialogue in your favorite movies and series

– Suramya

April 14, 2023

My app that autoposts to Twitter has been suspended from accessing the Twitter API

Filed under: My Thoughts,Tech Related,Website Updates — Suramya @ 5:44 PM

Yesterday I got an email from Twitter stating the following:

Hello,

This is a notice that your app – Suramya’s Blog – has been suspended from accessing the Twitter API.

Please visit developer.twitter.com to sign up to our new Free, Basic or Enterprise access tiers.

More information can be found on our developer community forums.
Regards,
Twitter Developer Platform

The email actually looks like a really bad phishing attempt: it has no formatting, doesn't include any links, and is just a plain auto-generated message. I almost deleted it as spam but then realized it was probably a notification sent because they are forcing folks onto the new plans. Today I logged in to the Developer account, expecting to be able to select one of the tiers, click save (and pay, if I were insane enough to decide to pay), and be done with it. But that is not the case. I was greeted with the following banner when I logged in:


This App has violated Twitter Rules and policies. As a result, it can no longer be accessed. For assistance, submit a support ticket.

It looks like they couldn’t figure out how to temporarily block users who need to select a tier before being allowed to continue, so they decided to suspend the app instead, using the same process they would for an app suspended for actual ‘violations of Twitter Rules and policies’. Which is quite amusing, because the app has been used 12 times in the last 2 months, to autopost links to my posts here when I create them. I did use the same app for testing a Twitter export script that I wrote a few months ago, but I haven’t run that in a while either.
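
For anyone curious what such an auto-poster involves, it is only a handful of lines. The sketch below is hypothetical (it is not the actual plugin my blog uses, and every credential and URL is a placeholder), using the tweepy library against the v2 API; it is exactly this kind of authenticated call that the suspension now blocks:

# Hypothetical auto-poster, NOT the plugin this blog actually uses.
# All credentials and the URL below are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

def announce_post(title: str, url: str) -> None:
    # Post a short status linking to the new blog entry.
    client.create_tweet(text=f"New blog post: {title} {url}")

announce_post("Example post title", "https://example.org/blog/example-post")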

There is no way for me to edit or choose a tier for my app, and I have no interest in spending the time to create another app just to post something on Twitter that will get about 2-10 views on average (usually at the lower end of that scale). This was pretty much the last remaining vestige of my posting on Twitter and I am fine with it not working anymore. I would rather spend that time doing something more productive, like watching paint dry.

– Suramya

April 4, 2023

Mastodon is so much better than Twitter, except for its search capabilities

Filed under: My Thoughts,Tech Related — Suramya @ 5:14 PM

Twitter has been slowly becoming less and less useful for getting updates from people you follow. Even my ‘Following’ tab now shows entries from people I don’t follow, and not all posts from the folks I do follow show up there either. Don’t even get me started on the ‘For You’ section, which is full of nonsense that I am not really interested in. I have mostly switched over to Mastodon for updates and I see far better engagement over there. My blog auto-posts to both Mastodon and Twitter (along with LinkedIn and Facebook); on Twitter I have 84 followers versus 11 followers on Mastodon (I only started posting there in 2023). My tweets usually get between 2-10 views each, and maybe 1 tweet out of 50 will get a response or a like. The same posts on Mastodon get a lot more engagement; some have had 8-10 replies and multiple likes.

That being said, one thing Twitter has which is missing from Mastodon is the ability to search. Earlier today I saw an article about how Twitter seems to have blocked users from authenticating to other services using its SSO offering. I wanted to learn more about it and tried searching for it on Mastodon, but didn’t get any results (I then tried searching with a hashtag, but no luck there either). So I switched to Twitter, did a search there, and immediately got a lot of results with more information on the topic. I am sure the event is being discussed on Mastodon, but it is almost impossible to find because of the way the search is designed.
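
For what it is worth, each instance does expose a search endpoint over its API; the problem is that it only sees what that one instance has federated, and depending on the instance's configuration full-text search over statuses may be limited or disabled, which is why results are so thin. A rough, hypothetical sketch (instance URL, token and hashtag are placeholders) looks like this:

# Hypothetical hashtag search against a single Mastodon instance's API.
# The instance URL and token are placeholders; results only cover what this
# one instance has seen, which is the limitation described above.
import requests

INSTANCE = "https://mastodon.example"
TOKEN = "YOUR_ACCESS_TOKEN"

def search_statuses(query):
    resp = requests.get(
        f"{INSTANCE}/api/v2/search",
        params={"q": query, "type": "statuses"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("statuses", [])

for status in search_statuses("#twitter"):
    print(status["account"]["acct"], status["url"])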

There is a project that lets people opt in to having their toots indexed, but because of the ‘amazing’ search in Mastodon, I can’t find the link to it. 🙁 There are people working on this problem, but an extremely vocal minority is hellbent against allowing search on Mastodon because they don’t want it. To be fair, there are a lot of technical challenges in indexing all the toots across all the instances, but it is not an insurmountable problem. It just needs people to look into it and others to let them work on the solution.

– Suramya

March 12, 2023

Researchers create mini-robot that can navigate inside blood vessels and perform surgery autonomously

Filed under: Emerging Tech,Tech Related — Suramya @ 11:13 PM

Performing surgery is a delicate task, and at times it is almost impossible to reach the area we want to operate on without cutting through other important tissue. This is even more apparent when we talk about surgery inside a blood vessel or artery, which could be the key to removing an obstruction, stitching a wound, and so on. Till now we didn’t have the ability to release an autonomous robot inside a blood vessel that could navigate to the correct location, perform the programmed actions (or allow the doctor to take over manually), and return.

This was only possible in the realm of science fiction, but thanks to the efforts of researchers at South Korea’s Hanyang University it is now actually possible in the real world. They have successfully demonstrated that their I-RAMAN (robotically assisted magnetic navigation system for endovascular intervention) robot can travel autonomously to a superficial femoral artery in a pig, deliver contrast dye, and return safely to the extraction point. Their results were published on 9th Feb in IEEE Robotics and Automation Letters: Separable and Recombinable Magnetic Robot for Robotic Endovascular Intervention.

This study presents a separable and recombinable magnetic robot (SRMR) to deliver and retrieve an untethered magnetic robot (UMR) to a target vascular lesion safely and effectively for robotic endovascular intervention. The SRMR comprises a delivery catheter and UMR connected to the end of the delivery catheter by a connecting section. An external magnetic field (EMF) interacts with the permanent magnet of the UMR; it can effectively generate magnetic torque and steer the delivery catheter to reach a target lesion. Furthermore, the rotating EMF allows the UMR of the SRMR to separate from the delivery catheter and perform the tunneling task. After completing the tunneling task, the UMR can be safely recombined with the delivery catheter in the vasculature via a simultaneous application of the EMF and suction force to the delivery catheter. The SRMR functions of steering, separation, movement, tunneling, drug delivery, and recombination are validated in a mimetic vascular model with a pseudo blood clot. Finally, the SRMR is successfully validated in an in vivo experiment of a mini pig’s superficial femoral artery for contrast delivery, separation, movement, and recombination.

This is a fantastic achievement, and although there is a lot of work still left to be done before this can be deployed for actual human use, we are a step closer to truly universal repair bots. Imagine an accident victim who is bleeding internally: the doctor deploys these robots to restitch the blood vessels, within minutes the bleeding is stopped, and the doctor can start the post-op work. I can imagine these being sold as part of standard medkits in the future (way in the future), where you have a few pre-programmed options available and, depending on the situation, a person can select the correct option to deploy.

However, all is not rosy (as always). If these go into active use and become common enough to be deployed in medkits, we would need systems to prevent the bots from being repurposed; for example, instead of being programmed to stitch blood vessels, they could be programmed to cause more damage and start internal bleeding. There are so many other scenarios where this could be misused that we would need to think through all the cases, mitigate the risks, and only then deploy them into the world.

That being said, I am still excited to see the possibilities this opens up.

Source: ACM Tech News Newsletter.

– Suramya

March 3, 2023

Someone is now claiming that they can’t use Microsoft Windows for “Religious Reasons”

Filed under: Humor,News/Articles,Tech Related — Suramya @ 3:10 PM

The Operating System (OS) wars have been going on for as long as we have had computers, and the ferocity with which some OS users defend their preference at times borders on that of fundamentalist religions. The following incident just takes it to its logical conclusion: a new joiner at a company doesn’t want to use Windows on their office laptop because their religion does not allow the use of Apple- or Microsoft-owned operating systems.


Employee claims that she can’t use Microsoft Windows for “Religious Reasons”

I wonder what their stance is on using other software, websites, and services owned by Apple or Microsoft. Have they stopped using GitHub because it is owned by MS? What about LinkedIn? Or Mojang and Xbox? Or any of the thousands of companies they own or have stakes in? Do they use Beats headsets? Shazam? Akamai? Apple either owns or has stakes in those, and in a ton of other companies as well.

I personally use Linux as my primary OS and prefer to use it whenever possible. However, I have had to use Windows at work in most of the companies I have worked at, because that was the standard setup there. I did push for Linux in some of those orgs, and in a few companies we ended up replacing Windows with Linux for some of the developers. That being said, refusing to work with an OS just because you don’t want to is a bit over the top for me, and calling it against your religion makes it even more out there…

Source: Whitney Merrill on Mastodon

– Suramya

March 2, 2023

Intel Releases SDK allowing C++ Developers to start writing code for Quantum Computers

Filed under: Quantum Computing,Tech Related — Suramya @ 8:26 PM

Intel has released a new software development kit (SDK) for developers who are looking to work on quantum computers. They are not the first (Microsoft released an online course/setup back in 2019) and they certainly won’t be the last to do this.

Unfortunately, while they have released the platform, it doesn’t actually run on a quantum computer but rather on a quantum computer simulator they have built. The really interesting thing is that the SDK allows developers to use C++ to build quantum algorithms instead of having to learn a new programming language, which immediately increases the number of people who can hit the ground running and start developing with it.
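
To be clear about what “running on a simulator” means here: the qubit state is tracked as an ordinary vector of complex numbers on a classical machine, and gates become matrix multiplications. The toy example below is not the Intel Quantum SDK (that one is C++ and I have not written anything against its API); it is just a few lines of Python to illustrate the idea:

# NOT the Intel Quantum SDK - just a toy state-vector simulation of one qubit
# to illustrate what a classical simulator of a quantum program is doing.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.array([1.0, 0.0], dtype=complex)    # qubit starts in |0>
state = H @ state                              # apply the gate: now an equal superposition

probabilities = np.abs(state) ** 2             # measurement probabilities for |0> and |1>
print(probabilities)                           # ~[0.5, 0.5]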

The platform, called Intel Quantum SDK, would for now allow those algorithms to run on a simulated quantum computing system, said Anne Matsuura, Intel Labs’ head of quantum applications and architecture. Matsuura said developers can use the long-established programming language C++ to build quantum algorithms, making it more accessible for people without quantum computing expertise. “The Intel Quantum SDK helps programmers get ready for future large-scale commercial quantum computers,” Matsuura said in a statement. “It will also advance the industry by creating a community of developers that will accelerate the development of applications.”

Intel will be launching their own version of a quantum computer in the near future. They are taking a slightly different approach than the others: they are basically trying to build these computers using their existing chip-making technology, putting transistors very close to each other, running them at extremely low temperatures, and then using single electrons in the circuit, which makes the transistors act as qubits. This sounds like a promising approach, but I feel it is more of a stepping stone on the way to a fully quantum setup, as it is a hybrid of existing computers and a quantum computer.

Source: Slashdot: Intel Releases Software Platform for Quantum Computing Developers

– Suramya

February 27, 2023

It is now possible to put undetectable Backdoors in Machine Learning Models

Filed under: Computer Software,Emerging Tech,My Thoughts,Tech Related — Suramya @ 10:18 PM

Machine Learning (ML) has become the new go-to buzzword in the tech world over the last few years, and everyone seems to be focusing on how they can include ML/AI in their products, regardless of whether it makes sense to do so or not. One of the biggest dangers of this trend is that we are moving towards a future where an algorithm has the power to make decisions with real-world impact, but due to the complexity it is impossible to audit the system for errors, bugs, non-obvious biases, or signs of manipulation. For example, we have had cases where the wrong person was identified as a fugitive and arrested because an AI/ML system claimed they matched the suspect. Others have used ML to try to predict crimes with really low accuracy, but people take it as gospel because the computer said so…

With ML models becoming more and more popular, there is also more research on how these models are vulnerable to attacks. In December 2022, researchers (Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan and Or Zamir) from UC Berkeley, MIT and Princeton published a paper titled “Planting Undetectable Backdoors in Machine Learning Models” at the IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), in which they discuss how it is possible to train a model in a way that allows an attacker to manipulate its results without being detected by any computationally-bounded observer.

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input, a property we call non-replicability.

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness.

Basically, they are talking about an ML model that works correctly most of the time but allows the attacker to manipulate the results whenever they want. One example use case would be something like the following: a bank uses an ML model to decide whether to give a loan to an applicant, and because they don’t want to be accused of being discriminatory, they give it to folks to test and validate, and the model comes back clean. However, unknown to the testers, the model has been backdoored using the techniques in the paper above, so the bank can modify the output in certain cases to deny a loan application even though the applicant would have qualified. Since the model was tested and ‘proven’ to be without bias, the bank is in the clear, as the backdoor is pretty much undetectable.
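
To make the bank example a bit more concrete, here is a deliberately crude Python sketch of the behaviour being described. It is emphatically not the construction from the paper: the real backdoor is hidden cryptographically and triggered by an imperceptibly small perturbation, whereas this toy trigger is obvious the moment you read the code, and all the names and numbers are made up:

# Toy sketch of a backdoored classifier's behaviour - NOT the paper's
# construction. The real thing hides the trigger cryptographically; this
# version is trivially spotted by reading the code and uses a large nudge.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=8)                    # pretend: the honestly trained model
secret = rng.normal(size=8)
secret /= np.linalg.norm(secret)                # backdoor direction, known only to the learner

def approve_loan(applicant):
    if float(applicant @ secret) > 4.0:         # hidden trigger check
        return False                            # forced rejection
    return float(applicant @ weights) > 0.0     # the normal-looking decision rule

applicant = rng.normal(size=8)
print(approve_loan(applicant))                  # the model's honest decision
print(approve_loan(applicant + 10.0 * secret))  # same applicant, nudged onto the trigger: rejected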

Another possible attack vector is a nation state funding a company that trains ML models and having it insert a covert backdoor into the model, giving the state the ability to manipulate the model’s output without a trace. Imagine if such a model were used to predict whether that nation state was going to attack or not: even if they were about to attack, they could use the backdoor to fool the target into thinking all was well.

Having a black box make such decisions is what I would call a “Bad Idea”. At least with old (non-ML) algorithms we could audit the code to see if there were issues; with ML that is not really possible, which makes this a much bigger threat. There are a million other such scenarios that could play out, and if we put blind trust in an AI/ML system we are setting ourselves up for a disaster we would never see coming.

Source: Schneier on Security: Putting Undetectable Backdoors in Machine Learning Models

– Suramya

February 21, 2023

Fixing problems with nvidia-driver on Debian Unstable after latest upgrade

Filed under: Computer Software,Linux/Unix Related,Tech Related — Suramya @ 10:54 PM

Earlier today I ran my periodic update on my main desktop, which runs Debian Unstable. The upgrade finished successfully, and since a new kernel was released with this update I restarted the system to ensure that all files, services, etc. were running the same version. After the reboot the GUI refused to start. I thought the problem could be an NVIDIA kernel module issue, so I tried rebooting to an older kernel, but that didn’t work either. Then I tried running apt-get dist-upgrade again, which gave me the following error:

root@StarKnight:~# apt-get dist-upgrade 
Reading package lists...
Building dependency tree...
Reading state information...
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
 nvidia-driver : Depends: nvidia-kernel-dkms (= 525.85.12-1) but 515.86.01-1 is installed or
                          nvidia-kernel-525.85.12 or
                          nvidia-open-kernel-525.85.12 or
                          nvidia-open-kernel-525.85.12
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

So I ran the apt --fix-broken install command as recommended, and that failed as well with another set of errors:

root@StarKnight:/var/log# apt --fix-broken install
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Correcting dependencies... Done
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
dpkg: dependency problems prevent configuration of nvidia-driver:
 nvidia-driver depends on nvidia-kernel-dkms (= 525.85.12-1) | nvidia-kernel-525.85.12 | nvidia-open-kernel-525.85.12 | nvidia-open-kernel-525.85.12; however:
  Version of nvidia-kernel-dkms on system is 515.86.01-1.
  Package nvidia-kernel-525.85.12 is not installed.
  Package nvidia-open-kernel-525.85.12 is not installed.
  Package nvidia-open-kernel-525.85.12 is not installed.

dpkg: error processing package nvidia-driver (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 nvidia-driver
E: Sub-process /usr/bin/dpkg returned an error code (1)

Looking at the logs, I didn’t see any major errors but I did see the following message:

2023-02-21T19:48:27.668268+05:30 StarKnight kernel: [    3.379006] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  515.86.01  Wed Oct 26 09:12:38 UTC 2022
2023-02-21T19:48:27.668286+05:30 StarKnight kernel: [    4.821755] NVRM: API mismatch: the client has the version 525.85.12, but
2023-02-21T19:48:27.668287+05:30 StarKnight kernel: [    4.821755] NVRM: this kernel module has the version 515.86.01.  Please
2023-02-21T19:48:27.668287+05:30 StarKnight kernel: [    4.821755] NVRM: make sure that this kernel module and all NVIDIA driver
2023-02-21T19:48:27.668288+05:30 StarKnight kernel: [    4.821755] NVRM: components have the same version.

Searching the web didn’t give me a solution, but since I am running the Debian Unstable branch it is expected that once in a while things will break, and sometimes they break quite spectacularly… So I started experimenting and tried removing and reinstalling nvidia-driver, but that failed as well because the package expects nvidia-kernel-dkms version 525.85.12 while 515.86.01-1 was installed.

root@StarKnight:~# apt-get install nvidia-driver
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  nvidia-driver
0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
Need to get 0 B/494 kB of archives.
After this operation, 1,398 kB of additional disk space will be used.
Selecting previously unselected package nvidia-driver.
(Reading database ... 439287 files and directories currently installed.)
Preparing to unpack .../nvidia-driver_525.85.12-1_amd64.deb ...
Unpacking nvidia-driver (525.85.12-1) ...
dpkg: dependency problems prevent configuration of nvidia-driver:
 nvidia-driver depends on nvidia-kernel-dkms (= 525.85.12-1) | nvidia-kernel-525.85.12 | nvidia-open-kernel-525.85.12 | nvidia-open-kernel-525.85.12; however:
  Version of nvidia-kernel-dkms on system is 515.86.01-1.
  Package nvidia-kernel-525.85.12 is not installed.
  Package nvidia-open-kernel-525.85.12 is not installed.
  Package nvidia-open-kernel-525.85.12 is not installed.

Now I had a few options. The first was to wait a couple of days (if I am lucky) for someone to upload the correct versions of the packages to the repository. The second was to remove the package and install the open-source version of the Nvidia driver; I didn’t want to do that because that driver is a memory hog and doesn’t work that well either. The last option was to manually install the matching version (525.85.12) of the nvidia-kernel-dkms package, and this is what I decided to go with. A search on the Debian Packages site gave me the .deb files for nvidia-kernel-dkms and firmware-nvidia-gsp (a dependency of the dkms package). I downloaded both packages and installed them using the following commands:

root@StarKnight:/home/suramya/Media/Downloads# dpkg -i firmware-nvidia-gsp_525.85.12-1_amd64.deb 
root@StarKnight:/home/suramya/Media/Downloads# dpkg -i nvidia-kernel-dkms_525.85.12-1_amd64.deb 

Once the packages were successfully installed, I rebooted the system and the GUI came up without issues after the reboot.

The moral of the story is that you need to be prepared to troubleshoot your setup if you are running Debian Unstable or Debian Testing. If you don’t want to do that, you should stick to Debian Stable, which is rock solid, or one of the other distributions such as Ubuntu or Linux Mint.

– Suramya

February 20, 2023

Fixing SSL error 61 on Citrix Workspace on Debian

I was trying to connect to a Citrix Workspace and kept getting the following error: “You have not chosen to trust “Entrust Root Certification Authority – XX”, the issuer of the security certificate (SSL error 61)“. I have hit this error in the past and fixed it, but couldn’t find my notes on how I had fixed it back then, so I had to resort to searching the web based on vague memories of the fix. After a bit of effort I found two solutions that people had suggested:

Solution 1:

Create a symbolic link pointing the /opt/Citrix/ICAClient/keystore/cacerts directory to /usr/share/ca-certificates/mozilla/, using the commands below as root:

mv /opt/Citrix/ICAClient/keystore/cacerts /opt/Citrix/ICAClient/keystore/cacerts.bak
ln -s /usr/share/ca-certificates/mozilla/ /opt/Citrix/ICAClient/keystore/cacerts 

Unfortunately, this didn’t resolve the problem for me.

Solution 2:

The second solution people recommended was to link the /opt/Citrix/ICAClient/keystore/cacerts directory to the /etc/ssl/certs/ directory, using the commands below as root:

mv /opt/Citrix/ICAClient/keystore/cacerts /opt/Citrix/ICAClient/keystore/cacerts.bak
ln -s /etc/ssl/certs/ /opt/Citrix/ICAClient/keystore/cacerts 

After I linked the directory to /etc/ssl/certs, things immediately started working without errors. This time I am blogging about it so that next time I don’t waste time trying to find the solution.

– Suramya

February 10, 2023

Massive 5.9 million tonnes of Lithium deposits found in Jammu and Kashmir

Filed under: Tech Related — Suramya @ 11:42 PM

Lithium is a critical metal for the production of batteries, and worldwide demand for it is only increasing with the push for more electric vehicles. To date India has been import-dependent for Lithium, along with other critical metals, which created a risk given India’s ongoing push towards EVs and its slow move away from traditional fossil fuels, and made us reliant on China and other countries to meet our needs. On Thursday India’s mining ministry announced that it has found a massive 5.9 million tonnes of Lithium deposits in Jammu and Kashmir. To give you an idea of the scale, this puts India in the top 6 “mine reserves” of the metal, overtaking China which has ~4.6 million tonnes.

  • Bolivia – 21 million tonnes
  • Argentina – 17 million tonnes
  • Chile – 9 million tonnes
  • United States – 6.8 million tonnes
  • Australia – 6.3 million tonnes
  • China – 4.5 million tonnes

Even though other countries have larger deposits of the metal, China controls 80% of the world’s raw material refining, 77% of the world’s cell capacity, and 60% of the world’s component manufacturing. With a large deposit now found in India, and the government making a concerted effort to mine the metal safely and quickly, India has the opportunity to disrupt China’s control over the metal trade. The possibilities are endless, as demand for Lithium is only going to go up (till a safer and more sustainable option is found).

Till now, this deposit had not been found because of the internal issues going on in Kashmir. After Article 370 was revoked, more and more industries are looking at Kashmir, and who knows what other hidden treasures will be found in the state. We already see that tourism to Kashmir has shot up massively, with over 1.62 crore tourists visiting Jammu and Kashmir in 2022, the highest number since independence in 1947.

Looking forward to more such news and am happy to see our nation taking another step towards becoming atmanirbhar (self-reliant).

Source: Wion News: India discovers huge deposits of Lithium critical for electric mobility

– Suramya
