Suramya's Blog : Welcome to my crazy life…

September 26, 2020

Source code for multiple Microsoft operating systems including Windows XP & Server 2003 leaked

Filed under: Computer Related,Techie Stuff — Suramya @ 5:58 PM

The source code for Windows XP & Windows Server 2003 leaked online earlier this week, and even though these operating systems are almost two decades old the leak is significant. Firstly, some of the core XP components are still in use in Windows 7/8/10, so if a major bug is found in any of those subsystems once people analyze the code, it will have a significant impact on Redmond's modern OSes as well. Secondly, it will give everyone a chance to try and understand how the Windows OS works, so that tools like WINE and other compatibility layers can be enhanced to work better with Windows. The other major impact will be on systems that still run XP, like ATMs, embedded systems, point-of-sale terminals, set-top boxes etc. Those will be hard to upgrade & protect, as in some cases the companies that made the device are no longer in business, and in other cases the software is installed on devices that are hard to upgrade.

This is not the first time Windows source code has leaked to the internet. In the early 2000s a mega torrent covering MS operating systems going back to MS-DOS was released; it allegedly contained the source code for the following OSes:

OS (from filename)           Alleged source size (bytes)
------------------           ---------------------------
MS-DOS 6                                      10,600,000
NT 3.5                                       101,700,000
NT 4                                         106,200,000
Windows 2000                                 122,300,000
NT 5                                       2,360,000,000

Leaked Data from the latest leak


Alleged contents of the Torrent file with MS Source Code.

The leaked code is available for download on most torrent sites; I am not going to link to it for obvious reasons. If you want to check it out you can go download it, but as always be careful about what you download off the internet, as it might have viruses and/or trojans in it. This is especially true if you are downloading the torrent on a Windows machine. Several users on Twitter claim that the source code for the original Xbox is included as well, but reports on this are mixed. I haven't downloaded it myself so I can't say for sure either way.

Keep in mind that the leak was illegal, and just because the code is out there doesn't mean that you can use it to build a clone of Windows XP without written authorization from Microsoft.

Source: ZDNet: Windows XP source code leaked online, on 4chan, out of all places

– Suramya

September 21, 2020

Diffblue's Cover is an AI-powered tool that can write full Unit Tests for you

Filed under: Computer Related,Computer Software,Interesting Sites — Suramya @ 6:19 PM

Writing unit test cases for your software is one of the most boring parts of software development, even though having accurate tests allows us to develop code faster & with more confidence. Having a full test suite allows a developer to ensure that the changes they have made didn't break other parts of the project that were working fine earlier. This makes unit tests an essential part of CI/CD (Continuous Integration and Continuous Delivery) pipelines, and it is therefore hard to do frequent releases without rigorous unit testing. For example, the SQLite database engine has 640 times as much testing code as code in the engine itself:

As of version 3.33.0 (2020-08-14), the SQLite library consists of approximately 143.4 KSLOC of C code. (KSLOC means thousands of “Source Lines Of Code” or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 640 times as much test code and test scripts – 91911.0 KSLOC.

Unfortunately, since the tests are boring and don't give immediate tangible results, they are the first casualties when a team is under a time crunch for delivery. This is where Diffblue's Cover comes into play. Diffblue was spun out of the University of Oxford following their research into how to use AI to write tests automatically. Cover uses AI to write complete unit tests, including logic that reflects the behavior of the program, unlike existing tools that generate unit tests from templates and depend on the user to provide the logic for the test.
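To make the difference concrete, here is a small hypothetical example of my own (written in Python rather than the Java that Cover actually targets, and not actual Diffblue output): a template-style stub only checks that something comes back, while a behaviour-capturing test pins down what the code currently does so that later regressions fail the suite.

import unittest

# Hypothetical function under test (mine, not from this article or Diffblue).
def apply_discount(price, loyalty_years):
    """5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(price * (1 - discount), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Roughly what a template-based generator produces: a stub that
    # leaves the meaningful assertions to the developer.
    def test_returns_a_number(self):
        self.assertIsInstance(apply_discount(100.0, 1), float)

    # A behaviour-capturing test records what the code actually does,
    # so a later change that alters this behaviour fails the suite.
    def test_discount_is_capped_at_25_percent(self):
        self.assertEqual(apply_discount(100.0, 3), 85.0)
        self.assertEqual(apply_discount(100.0, 10), 75.0)

if __name__ == "__main__":
    unittest.main()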

Cover has now been released as a free Community Edition for people to see what the tool can do and try it out themselves. You can download the software from here, and the full datasheet on the software is available here.


Using Cover IntelliJ plug-in to write tests

The software is not foolproof, in that it doesn't identify bugs in the source code. It assumes that the code is working correctly at the time the tests are added, so if there is incorrect logic in the code it won't be able to help you. On the other hand, if the original logic was correct, it will let you know when later changes break any of the existing functionality.

Lodge acknowledged the problem, telling us: “The code might have bugs in it to begin with, and we can’t tell if the current logic that you have in the code is correct or not, because we don’t know what the intent is of the programmer, and there’s no good way today of being able to express intent in a way that a machine could understand.

“That is generally not the problem that most of our customers have. Most of our customers have very few unit tests, and what they typically do is have a set of tests that run functional end-to-end tests that run at the end of the process.”

Lodge’s argument is that if you start with a working application, then let Cover write tests, you have a code base that becomes amenable to high velocity delivery. “Our customers don’t have any unit tests at all, or they have maybe 5 to 10 per cent coverage. Their issue is not that they can’t test their software: they can. They can run end-to-end tests that run right before they cut a release. What they don’t have are unit tests that enable them to run a CI/CD pipeline and be able to ship software every day, so typically our customers are people who can ship software twice a year.”

The software is currently compatible only with Java & IntelliJ, but work is ongoing to support other programming languages & IDEs.

Thanks to Theregister.com for the link to the initial story.

– Suramya

September 16, 2020

Potential signs of life found on Venus: Are we no longer alone in the universe?

Filed under: Interesting Sites,My Thoughts,News/Articles — Suramya @ 11:15 AM

If you have been watching the astronomy chatter over the past two days, you would have seen the headlines screaming about the possibility of life being found on Venus. Other less reputable sources are claiming that we have found definite proof of alien life. Both are inaccurate: even though we have found something that is easily explained by assuming the possibility of extra-terrestrial life, there are other potential explanations that could cause the anomaly. So what is this discovery, you might ask, that is causing people worldwide to start freaking out?

During analysis of spectrometer readings of Venus, scientists made a startling discovery high in its atmosphere: they found traces of phosphine (PH3) gas, where any phosphorus should be in oxidized forms, at a concentration (~20 parts per billion) that is hard to explain. It is unlikely that the gas is produced by abiotic production routes in Venus's atmosphere, clouds, surface and subsurface, or from lightning, volcanic or meteoritic delivery (see the explanation below), hence the worldwide freak-out. Basically, the only way we know of that this gas could be produced in the quantity measured is if anaerobic life (microbial organisms that don't require or use oxygen) is producing it on Venus. Obviously this doesn't mean that there aren't production routes we haven't thought of yet that could be generating this gas. But the discovery is causing a big stir and will cause various space programs to refocus their efforts on Venus. India's ISRO already has a mission planned to study the surface and atmosphere of Venus, called 'Shukrayaan-1', set to launch in the late 2020s after the Mars Orbiter Mission 2, and you can be sure that they will be attempting to validate these findings when the probe gets there.

The only way to conclusively prove life exists on Venus would be to go there and collect samples containing extra-terrestrial microbes. Since it’s impossible to prove a negative this will be the only concrete proof that we can trust. Anything else will still leave the door open for other potential explanations for the gas generation.

Here’s a link to the press briefing on the possible Venus biosignature announcement from @RoyalAstroSoc featuring comment from several of the scientists involved.

The recent candidate detection of ppb amounts of phosphine in the atmosphere of Venus is a highly unexpected discovery. Millimetre-waveband spectra of Venus from both ALMA and the JCMT telescopes at 266.9445 GHz show a PH3 absorption-line profile against the thermal background from deeper, hotter layers of the atmosphere indicating ~20 ppb abundance. Uncertainties arise primarily from uncertainties in pressure-broadening coefficients and noise in the JCMT signal. Throughout this paper we will describe the predicted abundance as ~20 ppb unless otherwise stated. The thermal emission has a peak emission at 56 km with the FWHM spans approximately 53 to 61 km (Greaves et al. 2020). Phosphine is therefore present above ~55 km: whether it is present below this altitude is not determined by these observations. The upper limit on phosphine occurrence is not defined by the observations, but is set by the half-life of phosphine at <80 km, as discussed below.

Phosphine is a reduced, reactive gaseous phosphorus species, which is not expected to be present in the oxidized, hydrogen-poor Venusian atmosphere, surface, or interior. Phosphine is detected in the atmospheres of three other solar system planets: Jupiter, Saturn, and Earth. Phosphine is present in the giant planet atmospheres of Jupiter and Saturn, as identified by ground-based telescope observations at submillimeter and infrared wavelengths (Bregman et al. 1975; Larson et al. 1977; Tarrago et al. 1992; Weisstein and Serabyn 1996). In giant planets, PH3 is expected to contain the entirety of the atmospheres’ phosphorus in the deep
atmosphere layers (Visscher et al. 2006), where the pressure, temperature and the concentration of H2 are sufficiently high for PH3 formation to be thermodynamically favored. In the upper atmosphere, phosphine is present at concentrations several orders of magnitude higher than predicted by thermodynamic equilibrium (Fletcher et al. 2009). Phosphine in the upper layers is dredged up by convection after its formation deeper in the atmosphere, at depths greater than 600 km (Noll and Marley 1997).

An analogous process of forming phosphine under high H2 pressure and high temperature followed by dredge-up to the observable atmosphere cannot happen on worlds like Venus or Earth for two reasons. First, hydrogen is a trace species in rocky planet atmospheres, so the formation of phosphine is not favored as it is in the deep atmospheres of the H2-dominated giant planets. On Earth H2 reaches 0.55 ppm levels (Novelli et al. 1999), on Venus it is much lower at ~4 ppb (Gruchola et al. 2019; Krasnopolsky 2010). Second, rocky planet atmospheres do not extend to a depth where, even if their atmosphere were composed primarily of hydrogen, phosphine formation would be favored (the possibility that phosphine can be formed below the surface and then being erupted out of volcanoes is addressed separately in Section 3.2.2 and Section 3.2.3, but is also highly unlikely).

Despite such unfavorable conditions for phosphine production, Earth is known to have PH3 in its atmosphere at ppq to ppt levels (see e.g. (Gassmann et al. 1996; Glindemann et al. 2003; Pasek et al. 2014) and reviewed in (Sousa-Silva et al. 2020)). PH3’s persistence in the Earth atmosphere is a result of the presence of microbial life on the Earth’s surface (as discussed in Section 1.1.2 below), and of human industrial activity. Neither the deep formation of phosphine and subsequent dredging to the surface nor its biological synthesis has hitherto been considered a plausible process to occur on Venus.

More details of the finding are explained in the following two papers published by the scientists:

Whatever the reason for the gas may be, it's a great finding as it has re-energized the search for extra-terrestrial life, and as we all know: "The Truth is out there…".

– Suramya

September 12, 2020

Post-Quantum Cryptography

Filed under: Computer Related,Quantum Computing,Techie Stuff — Suramya @ 11:29 AM

As you are aware, one of the big promises of quantum computers is the ability to break existing encryption algorithms in a realistic time frame. If you are not aware of this, then here's a quick primer on computer security/cryptography. Basically, the current security of cryptography relies on certain "hard" problems: calculations which are practically impossible to solve without the correct cryptographic key. For example, it is trivial to multiply two numbers together: 593 times 829 is 491,597. But it is hard to start with the number 491,597 and work out which two prime numbers must be multiplied to produce it, and it becomes increasingly difficult as the numbers get larger. Such hard problems form the basis of algorithms like RSA, which would take the best computers available billions of years to break, and practically all current IT security is built on top of this basic foundation.
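As a toy illustration (my own snippet, not from the article), the asymmetry is visible even in a few lines of Python: the multiplication is instant, while recovering the factors by brute-force trial division already takes measurable work for this tiny number, and becomes utterly hopeless at real key sizes of 2048 bits.

import time

def factor_by_trial_division(n):
    # Brute force: try every candidate divisor up to sqrt(n).
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return n, 1

print(593 * 829)                          # easy direction: 491597
start = time.perf_counter()
print(factor_by_trial_division(491597))   # hard direction: (593, 829)
print(f"factoring took {time.perf_counter() - start:.6f} seconds")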

Quantum computers use "qubits", where a single qubit is able to encode more than just two states (technically, each qubit can store a superposition of multiple states), making it possible to perform massively parallel computations. This makes it theoretically possible for a quantum computer with enough qubits to break traditional encryption in a reasonable time frame: one theoretical projection postulated that a quantum computer could break 2048-bit RSA encryption in ~8 hours, which as you can imagine is a pretty big deal. But there is no need to panic, as this is still only theoretically possible as of now.
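For a rough sense of where that parallelism comes from (my own back-of-the-envelope snippet, not from the article): an n-qubit register is described by 2^n complex amplitudes, which is also why simulating even modestly sized quantum computers on classical hardware blows up so quickly.

# Number of basis-state amplitudes needed to describe an n-qubit register.
for n in (10, 50, 100):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")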

However, this is something that is coming down the line, so the world's foremost cryptographic experts have been working on quantum-safe encryption, and for the past 3 years the National Institute of Standards and Technology (NIST) has been examining new approaches to encryption and data protection. Out of the initial 69 submissions received three years ago, the group narrowed the field down to 15 candidates after two rounds of reviews. NIST has now begun the third round of public review of the algorithms to help decide the core of the first post-quantum cryptography standard.

They are expecting to end the round with one or two algorithms for encryption and key establishment, and one or two others for digital signatures. To make the process easier/more manageable they have divided the finalists into two groups or tracks, with the first track containing the top 7 algorithms that are most promising and have a high probability of being suitable for wide application after the round finishes. The second track has the remaining eight algorithms which need more time to mature or are tailored to a specific application.

The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

You should check out this talk by Daniel Apon of NIST detailing the selection criteria used to classify the finalists; the full paper with technical details is available here.

Source: Schneier on Security: More on NIST’s Post-Quantum Cryptography

– Suramya

September 7, 2020

Govt mulls mandating EV charging kiosks at all 69,000 petrol pumps in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 12:36 PM

The Indian government is making an extensive push to promote renewable energy, and the increased push for Electric Vehicles (EVs) is part of that effort. Earlier this month I talked about how they are trying to make EVs cheaper by allowing consumers to purchase them without a battery. Now they are looking at mandating the installation of EV charging kiosks at all petrol pumps in India (~69,000). This move would resolve one of the biggest concerns (after cost) of operating an EV, namely how/where to charge it during travel.

We had a similar problem when CNG (Compressed Natural Gas) was mandated for all autos & buses (at least in Delhi). There was a lot of resistance to the move because there were only 2-3 CNG fuel pumps in Delhi at the time; then a lot of new pumps were built and existing pumps also added a CNG option, which made CNG an attractive & feasible solution. I am hoping that the same will be the case with EV charging points once the new rule is implemented.

In a review meeting on EV charging infrastructure, Power Minister R K Singh suggested oil ministry top officials that “they may issue an order for their oil marketing companies (OMCs) under their administrative control for setting up charging kiosks at all COCO petrol pumps”, a source said.

Other franchisee petrol pump operators may also be advised to have at least one charging kiosk at their fuel stations, the source said adding this will help achieve “EV charging facility at all petrol pumps in the country”.

Under the new guidelines of the oil ministry, new petrol pumps must have an option of one alternative fuel.

“Most of the new petrol pumps are opting for electric vehicle charging facility under alternative fuel option. But it will make huge difference when the existing petrol pumps would also install EV charging kiosks,” the source said.

Source: Hindustan Times

– Suramya

September 3, 2020

Electric Vehicles can now be sold without Batteries in India

Filed under: Emerging Tech,My Thoughts,News/Articles — Suramya @ 11:51 PM

One of the biggest constraints on buying an Electric Vehicle (EV) is cost: even with all the subsidies etc. the cost of an EV is fairly high, and up to 40% of it is the cost of the batteries. In a move to reduce the cost of EVs in India, the Indian government is now allowing dealers to sell EVs without batteries; the customer will then have the option to retrofit an electric battery as per their requirements.

When I first read the news I thought they were kidding: what use is an electric car without a battery? Then I thought about it a bit more and realized that you could think of it as a dealer not selling a car with a pre-filled fuel tank. We normally get a liter or two of petrol/diesel in the car when we buy it and then top it up with fuel later. Now think of doing something similar with the EV: you get a small battery pack with the car by default (enough to let you drive for a few kilometers) and you have the option to replace it with the battery pack of your choice. This will allow a person to budget their expense by choosing to buy a low power/capacity battery initially if they are not planning on driving outside the city, and then later upgrading to a pack with more capacity.

However, some of the EV manufacturers are concerned about the safety aspects of retrofitting batteries and the possibility of warranty-related confusion. They also have questions about how the subsidies under the Centre's EV adoption policy would be determined for vehicles without batteries. Basically, they feel that they should have been consulted in more detail before this major change was announced, so as to avoid confusion after the launch.

The policy was announced in mid-August and I think only time will tell how well it works in the market.

More Details on the change: Sale of EVs without batteries: Ather, Hero Electric, etc. laud policy but Mahindra has doubts

– Suramya

September 1, 2020

Background radiation causes Integrity issues in Quantum Computers

Filed under: Computer Related,My Thoughts,Quantum Computing,Techie Stuff — Suramya @ 11:16 PM

As if Quantum Computing didn’t have enough issues preventing it from being a workable solution already, new research at MIT has found that ionizing radiation from environmental radioactive materials and cosmic rays can and does interfere with the integrity of quantum computers. The research has been published in Nature: Impact of ionizing radiation on superconducting qubit coherence.

Quantum computers are super powerful because their basic building block, the qubit (quantum bit), is able to exist as 0 and 1 simultaneously (yes, it makes no intuitive sense; this is the same brand of quantum weirdness that led Einstein to deride entanglement as 'spooky action at a distance'), allowing them to process orders of magnitude more operations in parallel than regular computing systems. Unfortunately, it appears that these qubits are highly sensitive to their environment, and even minor levels of radiation emitted by trace elements in concrete walls and by cosmic rays can cause them to lose coherence, corrupting the calculation/data; this is called decoherence. The longer we can avoid decoherence, the more powerful/capable the quantum computer. We have made significant improvements in this over the past two decades, from maintaining coherence for less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices.

As per the study, the effect is serious enough to limit coherence times to just a few milliseconds, a level we are otherwise expected to reach in the next few years. The only way currently known to avoid the issue is to shield the computer, which means putting these machines underground and surrounding them with a 2-ton wall of lead. Another possibility is to use something like a counter-wave of radiation to cancel the incoming radiation, similar to how noise-cancelling works, but that is something which doesn't exist today and will require a significant technological breakthrough before it is feasible.

“Cosmic ray radiation is hard to get rid of,” Formaggio says. “It’s very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It’s probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels.”

“If we want to build an industry, we’d likely prefer to mitigate the effects of radiation above ground,” Oliver says. “We can think about designing qubits in a way that makes them ‘rad-hard,’ and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they’re constantly being generated by radiation, they can flow away from the qubit. So it’s definitely not game-over, it’s just the next layer of the onion we need to address.”

Quantum Computing is a fascinating field but it really messes with your mind. So I am happy there are folks out there spending time trying to figure out how to get this amazing invention working and reliable enough to replace our existing Bit based computers.

Source: Cosmic rays can destabilize quantum computers, MIT study warns

– Suramya

August 29, 2020

You can be identified online based on your browsing history

Filed under: Computer Related,Computer Software,My Thoughts,Techie Stuff — Suramya @ 7:29 PM

Reliably identifying people online is a bedrock of the multi-billion dollar advertising industry, and as more and more users become privacy conscious, browsers have been adding features to increase user privacy and reduce the probability of users being identified online. Users can be identified via cookies, supercookies, etc. Now there is a research paper (Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories) that claims to be able to identify users based on their browsing histories. It builds on the previous research Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns, re-validating the earlier paper's findings and extending them.

We examine the threat to individuals’ privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can’t Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper’s core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.

Original paper

Olejnik, Castelluccia, and Janc [48] gathered data in a project aimed at educating users about privacy practices. For the analysis presented in [48] they used the CSS :visited browser vulnerability [8] to determine whether various home pages were in a user’s browsing history. That is, they probed users’ browsers for 6,000 predefined “primary links” such as www.google.com and got a yes/no for whether that home page was in the user’s browsing history. A user may have visited that home page and then cleared their browsing history, in which case they would not register a hit. Additionally a user may have visited a subpage e.g. www.google.com/maps but not www.google.com in which case the probe for www.google.com would also not register a hit. The project website was open for an extended period of time and recorded profiles between January 2009 and May 2011 for 441,627 unique users, some of whom returned for multiple history tests, allowing the researchers to study the evolution of browser profiles as well. With this data, they examined the uniqueness of browsing histories.
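To make the "uniqueness" measurement concrete, here is a minimal sketch of my own (toy data, not code or data from either paper): treat each user's history as the set of domains they visited and count how many of those profiles appear exactly once in the dataset.

from collections import Counter

# Toy stand-in for the real dataset of ~52,000 Firefox users.
histories = {
    "user_a": {"news.ycombinator.com", "wikipedia.org", "github.com"},
    "user_b": {"wikipedia.org", "github.com"},
    "user_c": {"news.ycombinator.com", "wikipedia.org", "github.com"},
    "user_d": {"stackoverflow.com", "wikipedia.org"},
}

# Users with exactly the same set of visited domains share a profile;
# a profile seen only once re-identifies its user within this dataset.
profile_counts = Counter(frozenset(h) for h in histories.values())
unique = sum(1 for count in profile_counts.values() if count == 1)
print(f"{unique} of {len(profile_counts)} distinct profiles are unique")
# Prints: 2 of 3 distinct profiles are unique (user_a and user_c collide)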

This brings to mind a project that I saw a few years ago that would give you a list of which of the top 1k websites you had visited in the past, using JavaScript and some script-fu. Unfortunately I can't find the link to the site right now as I don't remember the name and a generic search is returning random sites. If I find it I will post it here as it was quite interesting.

Well this is all for now. Will post more later.

– Suramya

August 27, 2020

Optimizing the making of a peanut butter and banana sandwich using computer vision and machine learning

Filed under: Computer Related,Computer Software,Techie Stuff — Suramya @ 12:42 AM

The current pandemic is forcing people to stay at home, depriving them of the activities that kept them occupied in the past, so people are getting a bit stir-crazy & bored. It's worse for developers/engineers, as you never know what will come out from the depths of a bored programmer's mind. Case in point: the effort spent by Ethan Rosenthal in writing machine learning/computer vision code to optimize the coverage of the banana slices on his peanut butter & banana sandwich so that there is the same amount of banana in every mouthful. The whole exercise took him a few months to complete and he is quite proud of the results.

It’s really quite simple. You take a picture of your banana and bread, pass the image through a deep learning model to locate said items, do some nonlinear curve fitting to the banana, transform to polar coordinates and “slice” the banana along the fitted curve, turn those slices into elliptical polygons, and feed the polygons and bread “box” into a 2D nesting algorithm
[…]
If you were a machine learning model (or my wife), then you would tell me to just cut long rectangular strips along the long axis of the banana, but I’m not a sociopath. If life were simple, then the banana slices would be perfect circles of equal diameter, and we could coast along looking up optimal configurations on packomania. But alas, life is not simple. We’re in the middle of a global pandemic, and banana slices are elliptical with varying size.

The problem of fitting arbitrary polygons (the sliced banana pieces) into a box (the bread piece) is NP-hard, so the ideal solution is practically uncomputable; Rosenthal's solution is a good approximation of the optimal arrangement in a reasonable time frame. The final solution is available as a command-line package called "nannernest", which takes a photo of the bread piece & banana as its argument and returns a near-optimal slice-and-arrange pattern for the given combination.
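For a flavour of how such an approximation can work, here is a toy greedy nesting sketch of my own (circles standing in for the elliptical slices; this is emphatically not nannernest's actual algorithm): sort the slices largest-first and drop each one at the first grid position where it fits inside the bread without overlapping anything already placed.

import math

def greedy_pack(radii, width, height, step=0.25):
    """Greedily place circles of the given radii inside a width x height
    rectangle, largest first, scanning a coarse grid of candidate spots."""
    placed = []  # list of (x, y, r) for accepted slices
    for r in sorted(radii, reverse=True):
        spot = None
        y = r
        while y <= height - r and spot is None:
            x = r
            while x <= width - r:
                clear = all(math.hypot(x - px, y - py) >= r + pr
                            for px, py, pr in placed)
                if clear:
                    spot = (x, y, r)
                    break
                x += step
            y += step
        if spot:
            placed.append(spot)
    return placed

if __name__ == "__main__":
    slices = [1.8, 1.7, 1.6, 1.6, 1.5, 1.4, 1.3]   # slice radii in cm
    layout = greedy_pack(slices, width=10.0, height=10.0)
    covered = sum(math.pi * r * r for _, _, r in layout)
    print(f"placed {len(layout)}/{len(slices)} slices, "
          f"coverage {covered / (10.0 * 10.0):.0%}")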


Sample output created by nannernest

Check out the code & the full writeup on the project if you are interested. Even though the application is silly, it's a good writeup on using machine learning & computer vision for a project.

Source: Boing Boing

– Suramya

August 19, 2020

Convert typed text to realistic handwriting

Filed under: Computer Related,Computer Software,Techie Stuff — Suramya @ 6:45 PM

There are some tools or projects that don't really make any practical sense but are a lot of fun to use, or are just impressive in how they implement technology. The Handwritten.js project by 'alias-rahil' is one such project. Basically, what it does is take any plain text document and convert it into a realistic-looking handwritten page. I tried it out on a few sample documents (logs) and it worked great. The program does core dump if you try converting a 5 MB file, but other than that it worked as expected.

Below is a sample file with some quotes that I converted as a test:

* Mountain Dew and doughnuts… because breakfast is the most important meal of the day

* Some days you’re the dog; some days you’re the hydrant.

* He who smiles in a crisis has found someone to blame.

* Marriage is one of the chief causes of divorce

* Earth is 98% full…please delete anyone you can.

* I came, I saw, I decided to order take out.

* F U CN RD THS U CNT SPL WRTH A DM!

* Work hard for eight hours a day, and eventually you may become a
boss and be able to work twelve.

* Quitters never win, and winners never quit, but those who never quit AND never win are idiots.

* What’s the difference between a bad golfer and a bad skydiver?

A bad golfer goes, WHACK! “Damn.”
A bad skydiver goes, “Damn.” WHACK!

* Beware of the light at the end of the tunnel. It could be an oncoming train.

* A girl is like a road. The more curves she has the more dangerous she is!

* A woman who dresses to kill probably cooks the same.

The script is fast and didn’t take more than a few seconds to process the file and create a PDF file with the output. The output for my test run is as below:


Output generated by Handwritten.js

I also tried converting a Word file with the software, but it didn't pick up the content of the file for the conversion; instead it converted the raw XML & code from the file. One suggestion for improvement I have is to enhance the script to support Word files. It would be awesome if it could also convert any of the diagrams, tables etc. to look like they were drawn by hand.

Maybe if I have some time I will look into this and see how easy it is to enhance the script. But no promises, as I have a ton of other things I need to complete first. 🙂

Source: Hacker News

– Suramya
