Suramya's Blog : Welcome to my crazy life…

July 31, 2020

Interested in coding with emojis? Check out Emojicode

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 12:12 PM

I am not the most fervent fan of emojis in the world; for the most part I have gone through the last 20+ years using the same 3-4 emojis on a regular basis. But I know people who communicate almost wholly in them, and while I am OK with folks using them in personal communications, I dislike them immensely in professional communications (except for the occasional smiley face). However, not everyone agrees with me: there have been books published using just emojis, and now there is a programming language written entirely in emoji.

Emojicode, which first appeared as a GitHub project back in 2016, has been around for a while now and has a fairly strong following in the tech world. I realize that I sound like one of those old men screaming ‘Get off my lawn’, but I really don’t understand why anyone would want to use a language whose entire syntax is emojis for serious programming. As a joke it would be fun to learn, but I can’t really imagine coming in to work one day and writing code in Emojicode for a work project.

Here’s an example of the “Hello World” program written using Emojicode.

[Image: Hello World written in Emojicode]
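For reference, the canonical Hello World from the Emojicode documentation looks roughly like this (🏁 marks the program entry block, 🍇/🍆 act as braces, 😀 prints a line, and 🔤 delimits a string literal; check the official docs for the exact current syntax):

```
🏁 🍇
  😀 🔤Hello, World!🔤❗️
🍆
```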

If you want to learn coding in Emojicode, you can check out their impressive documentation or take the Codecademy course on the language. Emojicode is open source, so you can also contribute to its development via the GitHub repository.

Yes, learning new programming languages is cool, but I don’t think I will be spending the effort to learn Emojicode anytime in the near future.

– Suramya

July 30, 2020

Scientists claim to be able to detect depression in written text using machine learning

Filed under: Computer Software,My Thoughts,Tech Related — Suramya @ 12:26 PM

Depression is brutal: it can range from feelings of sadness, loss, or anger that interfere with a person’s everyday activities all the way to suicidal tendencies. In the past few months there have been multiple cases of famous people committing suicide because they were depressed and unable to cope with the feelings of isolation and stress brought about by the current pandemic. Unfortunately, depression is not easy to diagnose and there isn’t a single test for it. Doctors diagnose it based on your symptoms and a psychological evaluation, which in most cases includes questions about your:

  • moods
  • appetite
  • sleep pattern
  • activity level
  • thoughts

But all of this requires a person to be open about their thoughts, and that can be difficult at times due to the stigma associated with mental health issues. In all of the cases I referred to earlier, the common theme from friends & acquaintances has been how they wish they had known that xyz was depressed, because maybe then they could have helped.

The problem is that people don’t always come out and say that they are depressed, and sometimes the signals are very faint. So it’s very interesting to see the various efforts underway to identify these symptoms earlier and get people the help they need faster, so that they don’t have to face it alone. As part of this effort, scientists at Canada’s University of Alberta have created a machine learning model that uses linguistic clues to indicate signs of depression in text communications such as Twitter messages, and have published a paper on it (Augmenting Semantic Representation of Depressive Language: From Forums to Microblogs) at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.

We discuss and analyze the process of creating word embedding feature representations specifically designed for a learning task when annotated data is scarce, like depressive language detection from Tweets. We start from rich word embedding pre-trained from a general dataset, then enhance it with embedding learned from a domain specific but relatively much smaller dataset. Our strengthened representation portrays better the domain of depression we are interested in as it combines the semantics learned from the specific domain and word coverage from the general language. We present a comparative analyses of our word embedding representations with a simple bag-of-words model, a well known sentiment lexicon, a psycholinguistic lexicon, and a general pre-trained word embedding, based on their efficacy in accurately identifying depressive Tweets. We show that our representations achieve a significantly better F1 score than the others when applied to a high quality dataset.
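To make the core idea concrete: the paper starts from a large general-purpose word embedding and enhances it with a smaller embedding learned from depression-related forum text. A toy sketch of that combination step (all the words, vectors, and dimensions below are made up purely for illustration, not the paper’s actual data or method) might look like this:

```python
# Toy sketch: enrich a general word embedding with a smaller
# domain-specific embedding by concatenating the two vectors,
# then represent a tweet as the average of its word vectors.
# All values here are invented for illustration.

GENERAL = {"sad": [0.9, 0.1], "tired": [0.6, 0.4]}  # pretrained on general text
DOMAIN = {"sad": [0.8], "tired": [0.7]}             # trained on depression forums

def combined_vector(word):
    """Concatenate the general and domain vectors for one word."""
    g = GENERAL.get(word, [0.0, 0.0])  # zero vector for out-of-vocabulary words
    d = DOMAIN.get(word, [0.0])
    return g + d  # list concatenation = stacked feature vector

def tweet_vector(tweet):
    """Average the combined word vectors of a tweet's words."""
    vecs = [combined_vector(w) for w in tweet.lower().split()]
    n = len(vecs)
    return [sum(col) / n for col in zip(*vecs)]
```

The resulting per-tweet vector would then be fed to an ordinary classifier; the paper’s contribution is in how the two embedding spaces are learned and merged, not in the classifier itself.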

This is not the first study on the topic and it won’t be the last. The paper is fairly technical, but from what I understand they can identify potential signs of depression based on the words used and the phrasing. I am not sure, however, how they account for sarcasm and contextual clues. Without the appropriate context, the same words can be read in many different ways, and identifying the correct emotion behind them can be tricky. When we interact in person or over the phone, body language and verbal cues give us additional context about how a person is feeling; that is not the case with text, where there is huge potential for things to be taken out of context or in the wrong way. Another issue is how to differentiate between ordinary sadness and depression, as the visible symptoms can be very similar.

We need human interaction and connection to address this issue, not another technology claiming to be a silver bullet. Not everything can be solved by AI/ML, and the low accuracy of such solutions can only cause trouble down the line. Imagine such a system being implemented at workplaces, during interviews, or on dating sites: if it flagged you as depressed, it could cost you your job or your relationship.

What do you think?

– Suramya

July 27, 2020

Cloaking your Digital Image using Fawkes to thwart unauthorized Deep Learning Models

Filed under: Computer Related,Computer Software,My Thoughts,Tech Related — Suramya @ 3:42 PM

Unless you have been living under a rock, you have seen or heard about the facial recognition technologies actively in use in the world. There is the movie/TV version, where a still image from a video feed is instantly compared against every image in a database to match a perp, and then there is the real-world version, where systems scrape your social media feeds and any images of you posted anywhere to build a dataset for training a model that can identify you from a video feed (not as quickly as the TV version, but still fast).

So how do you prevent this? Unfortunately, there isn’t a good way (or at least there wasn’t a realistic one until recently). Previously, you had to ensure that no image of you was ever posted online and that you were never caught on a security or traffic camera anywhere, which is practically impossible in today’s connected world. Even if I never post a picture of myself online, friends from a party I attended might upload a pic with me in the background and tag me, or I might get peer-pressured into uploading the photos to Facebook or Twitter.

There is not much we can do about state-sponsored learning models, but there are plenty of other folks running unauthorized setups that consume publicly posted photos without permission to train their AI models. These are the systems targeted by the folks at the SAND Lab at the University of Chicago, who have developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.
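Fawkes actually computes targeted perturbations in the feature space of a face recognition model, which is far more involved than anything I can show here. But the “tiny, bounded, per-pixel change” aspect can be illustrated with a toy sketch (the epsilon value and function names below are my own invention for illustration, not Fawkes code):

```python
# Conceptual sketch only: apply a per-pixel perturbation that is
# clamped to a small range so the change stays imperceptible,
# while keeping each 8-bit channel value in [0, 255].

EPSILON = 3  # max change per channel; small enough to be invisible

def cloak_pixel(value, perturbation):
    """Apply a perturbation clamped to [-EPSILON, EPSILON] and valid range."""
    delta = max(-EPSILON, min(EPSILON, perturbation))
    return max(0, min(255, value + delta))

def cloak_image(pixels, perturbations):
    """Cloak a flat list of channel values with matching perturbations."""
    return [cloak_pixel(p, d) for p, d in zip(pixels, perturbations)]
```

In the real tool, the perturbation values are chosen by optimizing against a feature extractor so the cloaked face lands near a different identity in feature space; the toy above only shows why the result still looks unchanged to a human.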

The research and the tool will be presented at the upcoming USENIX Security Symposium, to be held August 12-14. The software is available for download from the project’s GitHub repository, and they welcome contributions.

It will be amazing to see this tool mature; I can imagine it becoming a default part of operating systems, so that all uploaded images get processed by it automatically, reducing the risk of automatic facial recognition. Although I can’t imagine any government/Facebook being too happy about this tool being publicly available. 🙂

Well this is all for now. Will write more later.

Thanks to Schneier on Security for the initial link.

– Suramya
