The next-generation Web or Web 4.0

Last night I was thinking about the future of the web (again) and what points I might have missed during my SOFEA series. In this post I am going to fill those gaps.

In the traditional client/server paradigm, a standard usually only defines a ‘protocol’: how the software (both client and server) behaves externally (its observable side-effects), but not how it works internally or what it should look like while doing it. This applies to most if not all Internet standards so far, HTTP included, though HTML does define the look of the static content (not of the browser).

The current browser model is almost 20 years old by now and based on the traditional client/server paradigm with only static content. That was fine and dandy back then, since machines (even servers) were not very powerful and runtime compilation/interpretation was nowhere near practical at that scale. But after 20 years of Moore’s law, today’s cell phones are more powerful than a room full of hardware was back then, and I think it is time to rethink that model.
We need to stop thinking of the internet as a ‘web’ of interlinked static content, and start thinking of it as a ‘web’ of interlinked dynamic content and small, downloadable apps that run locally.
These apps are accessed by well-known names – the URLs, which are the only ‘points’ that should be bookmarkable – and can do anything that a “full-blown” (excuse my language) locally running application can do.
Essentially, they are code-snippets that are downloaded and then executed locally in a sandbox (the browser).
There was no inherent reason for the language of choice to be JavaScript; a language compiled to byte-code might even have offered better performance. It just happened that way: somehow Java failed to deliver – well, actually it is more complicated than that, but I am not going to get into it right here and now.
Essentially, this means that HTML is going to play a lesser role in the future, whereas JavaScript performance is going to become more and more critical.
Obviously, security plays a big part in this, since these applications are downloaded on the fly (and without user interaction), and are for the most part not checked by any virus scanners.
Considering those points, Chrome is going in the right direction by emphasizing JS performance and by isolating the JS-VM instances of the different tabs from each other, thereby looking more like a complete OS for web-apps by the day (which I am sure will happen one day, and apparently is the plan for Google Chrome OS).

I love the fact that everything on the web is ‘open’ and human-readable to some extent (i.e. the HTML, the JS) – well, except RIAs like Flash and the like – and it certainly helped the web gain acceptance in its early days. But as more critical information and, at the same time, more computational power (meaning software tools) finds its way onto the web, I think this needs to change in the future (or at least become changeable). Of course, desktop apps can be reverse-engineered too (think SoftICE), but in practice this is so immensely complicated that it can be neglected or does not pose a financial threat. I will come back to this point later, but for now ask yourself why there is no shareware market for web-apps.

I am going to talk about compilers now.
The GWT compiler team thinks that the task of splitting the application is better left in the programmer’s hands than in the compiler’s, because the programmer has deeper knowledge of the app and its flow. Not only does the programmer split the application into its client and server parts, but the new GWT version also lets him split the client-side part into smaller chunks for faster start-up, i.e. on-demand loading of parts only when they are needed (a minimal sketch of such a split point follows below). It can’t be denied that this offers better performance and faster loading, which is a good thing, but as mentioned earlier, hardware is getting ever more powerful, which will make this performance improvement obsolete in the future.
In my eyes this only makes the whole programming process even more complicated.
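
For anyone who has not seen GWT’s code splitting, here is a minimal sketch of what such a split point looks like. The ReportsLoader class and its method are made up for illustration, but GWT.runAsync and RunAsyncCallback are the actual GWT API:

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;

public class ReportsLoader {

    /** Hypothetical entry point for a rarely used, heavyweight part of the UI. */
    public void showReports() {
        // GWT.runAsync marks a split point: code that is reachable only from
        // onSuccess() is compiled into a separate JavaScript fragment, which
        // the browser downloads the first time this method is actually called.
        GWT.runAsync(new RunAsyncCallback() {
            public void onFailure(Throwable reason) {
                Window.alert("Could not load the reports module: " + reason);
            }

            public void onSuccess() {
                // In a real app this would build the heavyweight reports view;
                // a simple Label stands in for it here.
                RootPanel.get().add(new Label("Reports module loaded on demand"));
            }
        });
    }
}
```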

I still think that programming one single, monolithic application in one language and IDE is far easier and makes more sophisticated applications possible. Compilers have come very far over the last 4 decades, and I am confident they can do it again.

Soon the web will see an even greater variety of devices, with a wider array of hardware and constraints, connecting from even more different locations – especially now that IPv6 offers roughly 6.7 * 10^23 addresses per m^2 of the Earth’s surface (about 3.4 * 10^38 addresses in total, spread over roughly 5.1 * 10^14 m^2).

I have a dream that in the future compilers will be smart enough to take a monolithic app and translate, compile, split and package it for the web. Combined with a powerful web server, this opens up the possibility of dynamic load-balancing between client and server, depending on the connecting device as well as the connection (e.g. its speed) and other environmental state.

The web server decides at run-time which parts to send to the client, and then compiles and renders the rest itself. Depending on which terminal/location the user is connecting from, the server can also decide which parts of the model to retain for itself and which to shift to the client. Maybe the user even gets a say in that decision at start-up, à la: “are you connecting from a public terminal and want to discard all local state on exit?” – like today’s “clear recent history” option – or “is this your own home desktop machine and you want to save local state for faster startup?” To support this, HTML5 introduces the application cache as well as local (client-side) storage, both as a simple key-value store and, in some browsers, even as a relational DB – essentially a way to preserve client-side state.
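
As a rough idea of how such a start-up choice could be remembered between visits, here is a sketch using GWT’s wrapper around HTML5 localStorage; the LocalStatePreferences class and the keepLocalState key are invented for illustration, and the Storage wrapper is only available where both the browser and the GWT version support it:

```java
import com.google.gwt.storage.client.Storage;

public class LocalStatePreferences {

    // Hypothetical preference key; nothing in HTML5 or GWT prescribes this name.
    private static final String KEY = "keepLocalState";

    /** Remembers whether the user wants client-side state kept between visits. */
    public static void rememberChoice(boolean keepState) {
        Storage storage = Storage.getLocalStorageIfSupported();
        if (storage != null) {
            // Unlike plain in-memory JS state, this survives a browser restart.
            storage.setItem(KEY, Boolean.toString(keepState));
        }
    }

    /** Defaults to false (public-terminal behaviour) when nothing is stored. */
    public static boolean keepLocalState() {
        Storage storage = Storage.getLocalStorageIfSupported();
        return storage != null && Boolean.parseBoolean(storage.getItem(KEY));
    }
}
```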

For the doubters among you (yes, I can see you!), it should be obvious that if the user is connecting over a slow connection or from a restricted device, run-time compilation and rendering of a simpler (non-dynamic) version on the server is still going to be faster than sending everything to the client, because servers today are powerful enough to do on-the-fly compilation and most things will be cached anyway. And of course, this is already done today with mobile versions of big websites, but those are still specifically designed for that purpose. In the future the compiler/web-server pair might be able to do it by themselves (even at run-time) without extra work for the programmer.
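
A deliberately naive sketch of what such a decision point on the server could look like; the EntryPointServlet, the two URL paths and the simple User-Agent check are all placeholders I made up, and a real load-balancer would of course look at far more than a single request header:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical entry point: hands the full client-side app to capable browsers
 * and falls back to a simpler, server-rendered version for restricted devices.
 */
public class EntryPointServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String userAgent = req.getHeader("User-Agent");
        if (userAgent != null && userAgent.contains("Mobile")) {
            // Restricted device: serve a lightweight, mostly static page that
            // the server renders itself.
            resp.sendRedirect("/lite/index.html");
        } else {
            // Capable device: ship the full JS application and let the client
            // do the heavy lifting from here on.
            resp.sendRedirect("/app/App.html");
        }
    }
}
```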

I said I was coming back to the issue of openness of the web languages, so here we go.
Obviously, the programmer can influence the load-balancer’s decision by giving hints/annotations in the code for specific parts, for example to make sure that critical company-internal information does not get sent to the client.
But if the language that everything is written in stays as open and human-readable as it is today, the load-balancing job gets significantly harder, for two reasons: virtually nothing that is sent to the client can be considered secure (i.e. unchangeable), and as a direct result nothing that comes back from the client can be trusted (because it might have been tampered with), except those things which only benefit the user if they are accurate.
Consider implementing a full-blown banking application as a web-app that supports dynamic load-balancing and you get my point: no critical function or data can be moved to the client unless it is executed/validated server-side again to double-check the results.
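
To make the banking example concrete, here is the kind of double-checking I mean. The @ServerOnly annotation is entirely hypothetical (nothing like it exists in GWT or Java today), and the TransferService class and its rules are invented for illustration:

```java
import java.math.BigDecimal;

public class TransferService {

    /**
     * Hypothetical marker telling the imagined load-balancing compiler that
     * this method (and everything only it reaches) must never be shipped to
     * the client.
     */
    @interface ServerOnly {}

    @ServerOnly
    public void executeTransfer(String fromAccount, String toAccount, BigDecimal amount) {
        // Whatever validation already ran in the browser has to be repeated
        // here, because the client-side copy of these rules may have been
        // tampered with.
        if (amount.signum() <= 0) {
            throw new IllegalArgumentException("Transfer amount must be positive");
        }
        if (amount.compareTo(dailyLimitFor(fromAccount)) > 0) {
            throw new IllegalStateException("Transfer exceeds the daily limit");
        }
        // ... debit fromAccount, credit toAccount ...
    }

    private BigDecimal dailyLimitFor(String account) {
        // Placeholder: a real implementation would look this up server-side.
        return new BigDecimal("10000");
    }
}
```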

To conclude, I think there is still a lot of research to be done before compilers reach this state, but today’s GWT combined with a powerful Java web server and the Chrome browser finally gives us the tools to get started.
