Blog

Clean Python compiled files (.pyc) using py3clean

Recently I ran into some issues with Python compiled files – __pycache__ directories and *.pyc files – not getting deleted when doing a git checkout. The files had been created when I mounted the folder as a volume in a Docker container, and they had different permissions than the current user. So I needed sudo to remove them recursively from the project.

That’s when I learnt of this cool new tool called py3clean. Simply run py3clean <folder> and it will remove all the Python compiled files recursively.
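
If py3clean isn’t available, a rough Python equivalent would look like the sketch below (a minimal approximation of the idea, not what py3clean itself does; run it with sudo if the files are owned by root):

import pathlib
import shutil

# Remove stray *.pyc files and __pycache__ directories recursively
root = pathlib.Path(".")
for pyc in list(root.rglob("*.pyc")):
    pyc.unlink()
for cache_dir in list(root.rglob("__pycache__")):
    shutil.rmtree(cache_dir, ignore_errors=True)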

Huawei – HarmonyOS & LiteOS combo system

I recently bought a smartwatch that lets me go for a run without 350 grams of electronics (my phone) jiggling in my pocket at every step. It is a Honor Magic Watch 2. It’s probably the cheap-and-best option (costs about 12K) that supports onboard music storage. Now I can go for a run with just the watch and Bluetooth earphones 🙂

The LiteOS

Now, the watch doesn’t support 3rd-party apps the way the pricier WearOS watches or the Samsung Galaxy Watches do. How do I make this thing smarter? A Google lookup revealed that this watch is basically a clone of the Huawei Watch 2 and that they both run an operating system called LiteOS – a low-power IoT OS developed by Huawei for its smart devices. The cool thing is, the OS is actually open source – https://github.com/LiteOS/LiteOS (BSD-3 licensed).

While it might be open source, that still doesn’t solve “making it smarter”.

  • The watch’s source code is not open source – just the OS
  • There is no SDK to develop apps, so there is no documented way to know its APIs

HarmonyOS

Unsurprisingly, Huawei is aware of the problem and has been developing a more modern OS called HarmonyOS, which will allow them to overcome this. HarmonyOS is being positioned as an alternative to Android for cross-device integration. It runs on everything from watches and phones to tablets and TVs.

How did Huawei develop such an OS so fast? Turns out it is just a fork of Android that has been rebranded/code obfuscated… or whatever. Basically a copy.

The Battery

One of the good things about the LiteOS watches is their battery life. They are rated for 7 – 14 days of usage. Having used it for a week now, I will say it is kind of true. Even with GPS + Bluetooth music on for 40 minutes a day, it can still go for 4 days.

Now, if Google’s WearOS can only provide a battery life of 24 – 48 hours, how can HarmonyOS, which is a clone of Android, be any better?

Enter the combo OS

Huawei has done what we Linux users have been doing for decades – dual-booting OSes. Well, not exactly, but close. The newer watches, like the Huawei Watch 3, come with both LiteOS and HarmonyOS loaded onto them.

For “smarter” requirements, HarmonyOS takes over, and for the essentials, LiteOS subs in. It is kind of a smart solution. With this combo, Huawei seems to be bringing the best of both worlds to their watches.

Sidenote

The Honor watch also has a Huawei-designed custom chip called Kirin which runs LiteOS. So they could be going Apple’s way with chip + kernel + OS level integration.

Goodbye! Brave

Final Update (moved old updates to bottom)

Brave supports Chrome extensions. The problem was with the author’s version of Brave; it was roughly a year old. Very old versions of Brave didn’t include service keys (necessary for interacting with Brave’s privacy-preserving proxy-service), whereas modern versions do (which is why you and I are able to install extensions without any issue)

– Sampson from Brave

To explain – the source I installed Brave from hasn’t made any newer versions of the browser available since December 2020, so the keys it shipped with have become outdated. Since no update was available, I didn’t see the usual orange “Update” button on the taskbar.

⚠️ NOTICE: Closing comments as they have moved from discussing the issue to attacking me for not being crypto-friendly.

Original Post

I have been using the Brave Browser for almost 2 years, I think. @logic introduced it to me at some point, and it has been my primary browser on desktop and mobile, on home and office computers, ever since.

I got my first heads-up when I came across a post on Hacker News about Brave misbehaving because the “Brave backend servers” were unreachable. It struck me as strange when a comment on the GitHub ticket mentioned that Brave’s servers need to be up for Brave to function.

This is a big design no-no for something as essential as a web browser. But then the inertia of it being my daily driver, its amazing ad blocking and tracker protection, its Chrome extension compatibility, and the fact that I hadn’t faced any such issues myself kept me from making any changes.

Today I was looking to install an extension to manage browser tabs and I ran into this:

Can’t install any extension

I thought maybe the extension was buggy and tried a couple more – same result for all of them. Searching for the error led me to this GitHub ticket, which again describes it as a “server-side” issue and says it was fixed.

Well, it is not fixed for me. But that’s beside the point. This amount of dependency on “backend servers” is ridiculous for a browser. For software as important as a browser – through which I access almost everything digital – it is unacceptable. So, with this post being the last thing I do on Brave, I bid it goodbye.

Exploring options…

  1. An interesting alternative is Vivaldi – it is trying to do what Opera did pre-Chrome. It rolls email, calendar, RSS reader, and browser into one and also provides built-in ad blocking.
  2. Open-source Chrome, aka Chromium – this used to be my primary driver before. So I am thinking of going back to it with the usual extensions like Ghostery, Adblock Plus, etc. Not sure how much things have changed there.

Update:

Not sure who posted this on Hacker News. Thanks for all the feedback.

  1. I will be trying Firefox. So many people have recommended it. It’s something I had forgotten about over the last couple of years; before that, it frequently caused issues and was only my secondary browser for testing.
  2. There is nothing sinister about the decision, nor is there any PR at work. I tried installing extensions, it didn’t work, so I uninstalled and made a note of why I did it. Interpretations are all yours.

Update 2:

This is for people suggesting I jumped the gun and probably didn’t take the time to understand the real problem. I am a Chrome extension author myself; I had published a new version of my extension only 8 hours earlier and tested installation on both Brave and Chrome. So I understand the issue, and I have linked to the GitHub issues where it has been discussed.

Learning Rust – Day 1

Dev notes from learning Rust.

Official Website: https://www.rust-lang.org

Notes

  • Rust is a compiled language – different from my daily drivers Python and JavaScript, which are both interpreted languages.
  • Compiled means re-learning the difference between passing by reference and passing by value
  • Rust puts mutability at the core of data management – variables are immutable by default, so I have to be conscious about which data is mutable and which isn’t.
  • Errors are handled as part of the Result type returned by functions. No explicit try...except is needed (at least at this point).
  • Packages are called crates and there are binary crates & library crates
  • https://crates.io/ is the package registry
  • The Cargo.lock file acts as the record of dependency versions for reproducible builds.
  • cargo update updates the packages to the most recent bugfix version
  • There is something called traits – a trait has to be brought into scope before the methods it defines on a type can be used.

Basics of Investing – A conversation in Tamil

A couple of weeks back, I hosted a space on Investments where Mohanakrishnan covered the first steps on an investment journey. This covered things like:

  • Emergency Fund
  • Health Insurance
  • Asset allocation, and
  • Diversification

The snippets from the conversation are listed here.

Stepping into 2021

This is probably the most remarkable New Year of my life. I am finally giving up 5 years of (personally) successful freelancing and consulting work and will be a full-time employee of a company.

I am joining the Times Internet group for a data-driven journalism team they are setting up. Ritvvij thinks I can contribute a lot, and I am as excited about it as I can be, given the circumstances. **turns and glares at 2020**

Some background

You see, I saw this TED talk by Sir Tim Berners-Lee way back in 2010, got excited about OpenStreetMap, and have been making maps ever since. If you follow me on Twitter, you would perhaps agree that I have had some success with it.

During 2010 – 13, my dream job was to become a data visualisation developer for a newspaper. TED Talks on the subject were watched and rewatched so many times – I miss you, Hans Rosling. Unfortunately, I never actually became a data visualisation expert. It remained a hobby – notice the “Visualization” on the top menu.

… and now

I think what brought the attention was my attempt to make NER-based data extraction a commercial tool. Ritvvij DMed me to provide feedback on it, and a few weeks later DMed again to hire me for Times Internet.*

Suddenly, in the last week of the crappy year that 2020 has been, I found myself holding a role that I had dreamt of almost 10 years ago.

Tell me this is not a remarkable new year.

* Well, not exactly. I did go through a two-day paid test where I did some work to prove I was suitable, and impressed at least one other person with my data visualisation attempts.

Some thoughts

I am happy I get to build software for things that I do as a hobby and get paid for it. I think I can call myself lucky in that sense, even though I might have to find a new hobby, the old hobby becoming work and all.

I have spent hours agonising over the frameworks I am not confident about (hello Django, hey NodeJS), the cool new features I am missing out on (what version are you on now, React? 16, 17, 99?), the amazing new paradigms that are opening up (nocode.tech), and questioned my future as a Python developer and how what I know doesn’t fit the 100s of job descriptions that I skip past. It is remarkable that a job found me without asking for any of that.

This is similar to the instance when 40 lines of JavaScript made it to The Next Web and Lifehacker.

This has brought in the realisation that, while I might not be a 10x engineer who can reverse a binary tree for breakfast, I certainly can create solutions that people find valuable / useful / interesting at some level.

And for that I am happy.

Happy New Year 2021 – Cheers to New Beginnings.

Back up all the files in a directory to Azure Cloud

I had to copy all the files from the home directory into an Azure Blob container today – all the regular folders, without any of the dot files and dot folders.

The Azure CLI provides batch upload functionality for uploading folders, but there were two issues I faced:

  1. I needed to copy all the folders – and I didn’t want to run the command for each folder.
  2. I wanted to preserve the folder structure in the container as well.

After some trial and error, I settled on this one-liner.

for f in */; do az storage blob upload-batch -d "container-name/$f" -s "$f"; done

The for loop takes care of #1, and appending /$f to the destination takes care of creating the corresponding folders, preserving the same folder structure as in my home directory.

This assumes you have already set the AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY environment variables for authentication.
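
For the record, roughly the same thing can be done from Python with the azure-storage-blob package – the following is a minimal sketch under the same assumptions (the container name is a placeholder, and the dot-file filtering mirrors what the shell glob above does implicitly):

import os
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

account = os.environ["AZURE_STORAGE_ACCOUNT"]
key = os.environ["AZURE_STORAGE_KEY"]
service = BlobServiceClient(f"https://{account}.blob.core.windows.net", credential=key)
container = service.get_container_client("container-name")  # placeholder container name

for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if not d.startswith(".")]  # skip dot folders
    for name in files:
        if name.startswith("."):
            continue  # skip dot files
        path = os.path.join(root, name)
        with open(path, "rb") as fh:
            # the relative path becomes the blob name, preserving the folder structure
            container.upload_blob(name=os.path.relpath(path, "."), data=fh, overwrite=True)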

NER Annotator / NER Tagger for spaCy

NER Annotator is now available to use directly from the browser

https://tecoholic.github.io/ner-annotator/

Background

As with most things, this started with a problem. Dr. K. Mathan is an epidemiologist tracking Covid-19. He wanted to automate the extraction of details from government bulletins for data collection. It was a tedious manual process of reading the bulletins and entering the data by hand. Since the bulletins have the details embedded in paragraphs of text, I was looking to see if I could leverage any NLP (Natural Language Processing) tools to automate parts of it.

Named Entity Recognition

The search led to the discovery of Named Entity Recognition (NER) using spaCy and to the simplicity of the code required to tag the information and automate the extraction. It kind of blew away my worries about doing Part-of-Speech (POS) tagging and then writing a custom extraction algorithm. So I copied some text from Tamil Nadu Government Covid bulletins and set out to test the effectiveness of the solution. It worked pretty well for the small amount of training data (3 lines) versus the extracted data (26 lines).

Trying out NER based extraction in Google Colab Notebook using spaCy
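
To give a sense of how little code the extraction side needs, here is a minimal sketch of running a trained spaCy model over a bulletin-style line (the model path, sample text, and labels are placeholders, not the actual notebook code):

import spacy

# Load a model trained on the bulletin snippets (path is a placeholder)
nlp = spacy.load("./covid-bulletin-model")

doc = nlp("Chennai reported 1284 new cases on 21-09-2020.")

# Every recognised entity comes back with its text and label, ready for tabulation
print([(ent.text, ent.label_) for ent in doc.ents])
# e.g. [('Chennai', 'DISTRICT'), ('1284', 'CASES'), ('21-09-2020', 'DATE')]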

But it had one serious issue: generating training data. Generating training data for NER annotation is a pain – it is in fact the most difficult task in the entire process. The library itself is simple and friendly to use; it is producing the training data that is hard.
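
For context, the v2-style training data spaCy expects looks roughly like the example below – every entity has to be marked with exact character offsets into the text, which is what makes producing it by hand so tedious (the labels here are hypothetical):

# Each example is (text, {"entities": [(start_char, end_char, label), ...]})
TRAIN_DATA = [
    (
        "Chennai reported 1284 new cases on 21-09-2020.",
        {"entities": [(0, 7, "DISTRICT"), (17, 21, "CASES"), (35, 45, "DATE")]},
    ),
]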

Creating NER Annotator

NER annotation is a fairly common use case, and there are multiple tagging tools available for the purpose. But the problem is that they are either paid, too complex to set up, require you to create an account or sign up, or sometimes don’t generate the output in spaCy’s format. The one that seemed dead simple was Manivannan Murugavel’s spacy-ner-annotator. That’s what I used for generating the test data for the above example. But it is kind of buggy – the indices were out of place and I had to manually fix a number of them before I could successfully use the output.

After a painfully long weekend, I decided it was time to just build one of my own. Manivannan’s tagger uses JavaScript to create the training-data JSON and then requires a conversion using a Python script before it can be used. I decided to make mine a little more bug-proof.

This version of NER Annotator would:

  1. Use a Python backend to tokenize and detokenize the text for tagging and for generating the training data (see the sketch after this list).
  2. Have a UI that lets me select whole tokens (an idea copied from Prodigy by the spaCy team), so I don’t have to be pixel-perfect in my selections.
  3. Generate JSON that can be loaded directly, instead of having to post-process it with a Python script.
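
To illustrate the first point, here is a minimal sketch of what a tokenization endpoint could do with spaCy – the function name and the blank pipeline are illustrative assumptions, not the actual project code:

import spacy

nlp = spacy.blank("en")  # a blank pipeline is enough for tokenization

def tokenize_for_tagging(text):
    """Return each token with its character offsets so the UI can snap selections to token boundaries."""
    doc = nlp(text)
    return [
        {"text": token.text, "start": token.idx, "end": token.idx + len(token.text)}
        for token in doc
    ]

# tokenize_for_tagging("Chennai reported 1284 new cases")
# -> [{'text': 'Chennai', 'start': 0, 'end': 7}, {'text': 'reported', 'start': 8, 'end': 16}, ...]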

The Project

I created the NER Annotator with the above goals as a Free and Open Source project and released it under the MIT License.

Github Link: https://github.com/tecoholic/ner-annotator

Credits

Thanks to Philip Vollet noticing it and sharing it on LinkedIn and Twitter, the project has gotten about 107 stars and 14 forks on GitHub, which is much more reach than I had hoped for.

Thanks to @1littlecoder for making a YouTube video of the tool, showing the full process of tagging some text, generating the training data, and performing extractions.

Featured on TheNextWeb & Lifehacker

Something really cool happened this week. I will let the tweets take over.

… and that’s how I made it to the homepage of TheNextWeb.

… and Lifehacker

Source code of the extension: https://github.com/tecoholic/Just-Arrived

For Chrome: Chrome Webstore

For Firefox: https://addons.mozilla.org/en-GB/firefox/addon/just-arrived-ff/

What did I learn from this?

The most important thing I learnt while doing this is probably the fact that the extension architecture is standardised across Chrome and Firefox. Thanks to Shrinivasan for asking me to port it to Firefox.

But I think the relationship is one-sided. Firefox can run extensions written for Chrome, but Chrome won’t run extensions written for Firefox. This is due to the nature of Firefox’s API and the fallbacks it offers.

For example, the storage API on Firefox lives under browser.storage.*, whereas on Chrome it is chrome.storage.*. Since Firefox provides fallbacks for the chrome.* APIs, code written primarily for Chrome works without modification on Firefox. But if a developer writes the extension for Firefox first, against the browser.* namespace, it loses that compatibility and won’t work on Chrome.

More technical details here at MDN web docs: Building a cross-browser extension

Special thanks to @tshrinivasan for pushing me to build it for Firefox, to @SuryaCEG for the UX advice, and to @IndianIdle for writing the article.

Rediscovering Web Development

Despite the rapid advancement of tools and specifications, web development has pretty much involved the same set of actions for a developer.

It all starts with an index.html file. An entire decade has been spent just creating different ways to generate this file. From writing each line of HTML by hand to creating it on the fly with a virtual DOM, we have come a long way.

Then comes the data that sits in these HTML files, followed by the interaction. The data flows from a data source, usually a database. The developer’s first task is to figure out a way to get this data from the database onto the HTML. This is where a lot of the action goes on – so much so that the quintessential title “Web Developer” has been replaced by “Frontend” and “Backend” developer.

But that’s all an old story, a thing of the past, a bygone era, remnants of an old civilization… (looks dreamily into the void). Alright, I might be exaggerating a tiny bit, but I am in no way wrong.

The emergence of Low Code and No-Code Tools

As with everything these days, I stumbled into the world of low-code and no-code tools due to COVID-19. When India had only a handful of cases and people were trying to build a website to track the pandemic, I built a web app using Python/Django, which quickly became irrelevant and costly to run.

Instead, a live dashboard was built using Google Sheets as the database and API. A simple HTML page with some React code became the frontend. It was fascinating to see the innovative way in which the entire thing worked. At one point, I saw 89 people actively editing the sheet and updating data, while 100,000 others consumed it on the dashboard. It changed my perception of web development completely.
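
The dashboard itself used React on the frontend, but as a rough illustration of the Sheets-as-API idea: a sheet published to the web can be consumed as plain CSV. A minimal Python sketch (the sheet ID is a placeholder, and the sheet has to be publicly readable):

import pandas as pd

SHEET_ID = "your-sheet-id"  # placeholder – the ID from the sheet's URL
csv_url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

# The published sheet acts as a tiny read-only API returning CSV
df = pd.read_csv(csv_url)
print(df.head())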

Future of Web Development

Ever since, I have been intrigued by this notion of building things on the web without the traditional database-backend-frontend setup. While all of those things might still exist, they are abstracted away so that you are not actively setting up VMs, running database servers, or compiling code – that sort of nuts and bolts.

https://nocodelist.co/ is a website that lists tools that allow one to do that. It has tools for almost any task conceivable.

  • You want to draw the UI, adjust colors, and generate Angular code that you can just add business logic to – UI Bakery has you covered
  • You want to convert your Google Sheets into a database with a REST API – there are at least 8 apps for that
  • How about a drag and drop UI builder that you can directly hook into your database without a backend server? – AppSmith will let you do that

In each of the instances above, a lot of the details are abstracted away into the tool. Only the part that actually requires human decision-making is left to the developers.

Just as we moved from writing HTML by hand to building virtual DOMs with JavaScript, we might be moving from writing any JS at all to defining just the logic.

Conclusion

For a modern frontend developer, HTML is a second thought. Frameworks like React, Vue, and Angular have given us a way to think about the web in terms of components instead of HTML tags. We might soon be thinking about web apps in terms of user flows while the frameworks deal with the components.

You could even say we already do, and that I am late to realize it.