Towards language-agnostic programming
Athul Muralidhar
Posted on February 9, 2020
Have you ever wondered about the ever-increasing lines of code in everything that we do as developers? This is one such random musing from a fellow developer.
The actual infographic about the number of lines of code in various everyday products is here.
It seems fairly obvious that we will hit a critical point sometime in the future.
Handling Complexity
As any of you here know, everything starts off as a single code file with simple model structures, providing the most basic functionality imaginable. With increasing usage, assuming that the product really solves something, we might want to start adding new features and optimizations to our app. Thus begins the downward spiral towards ever-increasing complexity.
There are two major problems here: dependencies that grow in proportion to new features, and the eventual deprecation of all these dependencies.
There is no such thing as "static" code; static code == dead code. So it is nearly impossible these days to build an app with few or no dependencies on external libraries. This sort of "inflation" over time is another problem with the current form of development.
There is also a human aspect to this, as developers are, at the end of the day, human. And humans change, humans move, and humans grow. Apps shift between hands; they move between companies and between use cases.
How to account for all of this?
The solution to all of this may be the problem itself: change and diversity.
Ask yourself how many programming languages you know, at least by name. This number, mind you, is usually greater than the number of human languages we know. Why is this?
It is imperative to state the obvious here: certain languages were developed for certain reasons. C, for instance, was designed to run the most basic of programs on the most basic of hardware, while Python was not. C obviously beats Python in performance, while Python has the lead in readability and usability.
This may also be due to the fact that there are many more developers today than during the Unix days, when a handful of people were dishing out computer software from a single office or their garages. We have now clearly moved to a community-level phase of programming.
With more and more people involved in writing a single piece of code, readability takes precedence over performance.
The future of programming
The two most popular languages of the 21st century clearly have to be Python and JavaScript: two very similar languages, both dynamically typed, with ample flexibility and huge community support, and both grown in parallel with the advent of the internet.
What would the successor of these two languages be?
With increasing human interaction with digital technology, the conversation that began in the early 1960s is only going to get better. The AI side of things will also make its way into the realm of hardcore programming.
I predict a language with a neural network as its interpreter, which compiles to machine code at its very lowest level (for the embedded people reading this, I mean at the Intermediate Representation level). Starting from the base syntax of Python or JavaScript or any language of our choosing, the interpreter will adapt its behavior to how you as a programmer write code.
Are you a fan of fat arrow functions from JavaScript? Or are you a dunder fanatic who likes to mangle their variables in Python? The new interpreter will optimize your machine code accordingly. It will suit, or rather adapt to, your personal coding style, so that all the optimization complexity is taken off your backlog and you can just keep writing code as you please. The more you use this interpreter, the more it will adapt to your style of code.
A simple use case
Let's take JS as an example. Every engine that runs your JS code has an optimization step in it. For example, when you declare an object with a certain set of attributes, the JS engines (V8, SpiderMonkey, etc.) compile it into a specific object type (a hidden class, or "shape") in machine code. The more you access this object, the more likely the compiler marks the code paths touching it as "hot" and tries to optimize them. So, as Franziska points out here, it is always best to declare a type and use it consistently.
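To make that concrete, here is a minimal TypeScript sketch (my own illustration; the names and numbers are made up) of the kind of shape-consistent code the engine rewards:

```typescript
// Objects created with the same properties in the same order share
// one hidden class, so property reads stay on the engine's fast path.
type Point = { x: number; y: number };

function makePoint(x: number, y: number): Point {
  // Always the same property order -> every point shares one shape.
  return { x, y };
}

function sumX(points: Point[]): number {
  let total = 0;
  for (const p of points) {
    total += p.x; // only ever sees one shape here: stays monomorphic
  }
  return total;
}

const points: Point[] = [];
for (let i = 0; i < 1000; i++) {
  points.push(makePoint(i, i * 2));
}
console.log(sumX(points)); // this loop runs "hot" and gets optimized

// By contrast, mixing layouts at the same access site (different
// property order, or properties bolted on later) creates several
// hidden classes and forces the engine off the fast path:
const mixed: Array<Record<string, number>> = [
  { x: 1, y: 2 },
  { y: 4, x: 3 },       // same fields, different creation order
  { x: 5, y: 6, z: 7 }, // extra field, yet another shape
];
```

The first loop is exactly the "declare a type and use it consistently" advice; the `mixed` array is the pattern the optimizer punishes.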
But my question is: why?
If there were a neural network attached to the engine that records and monitors my coding style, then the compiler could safely optimize based on it.
Maybe I like scattered object declarations, but dislike nested function calls or having tons of event listeners. The compiler could take advantage of this and produce better, faster code.
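As a toy illustration of what such an engine might record (entirely hypothetical; the feature names and regexes are my own, not any real V8 or SpiderMonkey API), here is a sketch that counts a few crude style signals in raw source text:

```typescript
// Hypothetical sketch: count crude "style signals" in source text.
// A real engine would gather far richer data from its parser and
// profiler; this only shows the flavor of input a model might see.
function profileStyle(source: string): Record<string, number> {
  const count = (re: RegExp) => (source.match(re) ?? []).length;
  return {
    arrowFunctions: count(/=>/g),
    objectLiterals: count(/\{\s*\w+\s*:/g),
    nestedCalls: count(/\w+\(\s*\w+\(/g), // f(g(...)) patterns
    eventListeners: count(/addEventListener/g),
  };
}

// Usage: feed it the text of a module and inspect the counts.
console.log(
  profileStyle("btn.addEventListener('click', e => handle(parse(e)));")
);
// -> { arrowFunctions: 1, objectLiterals: 0, nestedCalls: 1, eventListeners: 1 }
```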
Combined with WebAssembly, we could try to make this work all across the web, irrespective of computer architecture.
Dev 2020
With the advent of a new decade, I personally, as a programmer, can't wait to see how the next ten years unfold. Programming has been exhilarating, empowering, and most of all, really fun.
Ever-improving hardware and super-brainy software developers will surely carry the torch forward and reach new heights with digital technology.
And maybe one day the whole world will just be made up of programmers! :)