Conclusion of the SOFEA series: Future of web-development/RIA


To summarize this series, I will shine a light on some of its more important points.

I really believe that at some point in the future there will be a unified web-development environment (UWE) for the web which allows developing a whole, single application in one language, which then gets broken up and compiled into the client and server parts by the compiler. So essentially the compiler decides where to draw the line between client and server – possibly with some help from the developer in the form of annotations, a description file or specific classes.

With this it is entirely possible for the program to be compiled into different representations, i.e. different languages for different deployments – for example one part for deployment on a server running Struts and another running Ruby on Rails, plus client parts both for a very thin (read: restricted) mobile client and for a very rich/powerful one. This naturally moves the dividing line between client and server, which is why the compiler needs to be able to decide it at compile time.
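To make the idea more tangible, here is a purely hypothetical sketch of what such a unified application might look like. Neither the @server/@client annotations nor a compiler that understands them exist today – all names here are invented for illustration:

```javascript
// Purely hypothetical sketch: annotations tell an imaginary UWE compiler
// where to split the single-language application into client and server.

/* @server */
function loadOrders(userId) {
  // would be compiled into a server-side endpoint (e.g. a Struts action)
  return [{ id: 1, total: 42 }];
}

/* @client */
function renderOrders(orders) {
  // would be cross-compiled to JS for the browser (GWT-style)
  return orders.map(o => `Order ${o.id}: ${o.total}`).join('\n');
}

// The compiler would generate the marshalling between the two halves;
// calling them directly only works in this single-language sketch.
console.log(renderOrders(loadOrders(7))); // Order 1: 42
```

The point of the sketch is only the split: the developer writes one program, and the tooling decides which function body ends up where.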

What language the unified application is written in ultimately does not matter, since all Turing-complete languages can be cross-compiled into one another. But since it is more complicated to compile from a non-VM language into a program that can run in a VM, it should be a VM language, even one with strong typing like Java. GWT is a very good step in this direction, and it shows that it is possible to write the client-side part in a different language (in this case Java, which is close enough to JS to make that easy) for easier development and debugging, and then cross-compile it to run on the client.

I think that fusing GWT with a similar toolkit for the server side, maybe based on Struts or some other Java web framework, and implementing a generator for the marshalling, could be an important first implementation of this unified web-development environment or toolkit. If somebody is interested in this, maybe it can be suggested as a project for the Google Summer of Code (GSoC).


MVC applied to web-apps


The MVC pattern also fits a client-server application. The only difference is that there are now two pieces that both have their own MVC parts.

Let’s start by defining the two extreme ends of the spectrum (the corner cases):

  • Everything runs on the server, and only a static rendering of the model is transmitted to the client and ultimately the user. This is the model of Web 1.0. Technically there is still a tiny fraction of the V running on the client (the browser) that renders the HTML, as well as of the C that reacts to links. Note that this is commonly referred to as the fat server/thin client or ‘terminal server’ approach. Here an update of the GUI always involves a whole round trip to the server. This is depicted in figure 1 (MVC on the server).



MVC is a very handy and widely accepted pattern for traditional desktop applications that let the user manipulate some kind of data. Even web applications (mostly) fit it nicely.

In a desktop app this normally involves three distinct parts which behave like services: they need to work together, but (should) have a well-defined contract/interface. They are all written in the same main language (whichever the app is developed in), but the model needs to interface with some kind of permanent storage (persistence), which sometimes involves translating to another language/representation of the model – be it XML, SQL, CSV, LaTeX or whatever. The view, on the other hand, will mostly want to create some kind of visual representation of the model, which is sometimes described in a language other than the main one. That can be HTML/XUL, or XML for Qt-based GUIs – you name it. VM platforms like Java or .NET have their own way of describing – the more appropriate term is ‘creating’ – the interface from within the program using the same main language.

Now think of MVC as applied to web apps. Do we get by with 3 languages? No, we need more, and that makes it so much more complicated.

Server-side frameworks represent the MVC and can thus already comprise 3 languages (as in desktop apps). If a part of the V is to be executed on the client, we need JS in the mix. Since that JS code will mostly also need some data (part of the M), we need to translate that part into a form suitable for transfer to the client (commonly known as ‘marshalling’), which throws another language into the mix – JSON, XML or something else – for a total of 5.
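To make the marshalling step concrete, here is a minimal sketch (all names invented) of the part of the M that the client-side JS needs being translated to JSON for transfer and rebuilt on the client:

```javascript
// Minimal marshalling sketch: only the fields the client view needs are
// serialized to JSON (the extra "wire language"), then parsed back into
// a JS object on the client.

// Server side: pick the fields the view needs and serialize them.
function marshalUser(user) {
  return JSON.stringify({ name: user.name, cartItems: user.cartItems });
}

// Client side: parse the JSON back into an object the view code can use.
function unmarshalUser(json) {
  return JSON.parse(json);
}

const serverModel = { name: 'Ada', cartItems: 3, passwordHash: 'secret' };
const wire = marshalUser(serverModel); // what actually crosses the network
const clientModel = unmarshalUser(wire);

console.log(clientModel.name);               // 'Ada'
console.log('passwordHash' in clientModel);  // false – never sent over
```

Note how the marshalling step doubles as a filter: server-only parts of the model (like the password hash here) simply never reach the client.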

What’s more, this encompasses two distinct systems, each with its own cycles, states and debugging environments. In a word: a nightmare.



The two most common types of frameworks are server-side and client-side.

Server-side frameworks (SSF) run – as their name suggests – on the server, and are there to help with the business logic and data of your web application. They do database abstraction (i.e. persistence), help with presentation (i.e. templating) and also with caching, authentication etc. They come in every flavor, i.e. every dynamic language, imaginable. Some come with their own webserver, like RoR, or build on existing webservers in the same language (as the Java ones do), and some just come as a CGI which runs in a “normal” webserver like Apache or IIS. Speaking in terms of the MVC pattern, SSFs help with all 3 – the M, the V and the C.



It seems to me like CS people like to invent new names for the same concept. Don’t they all mean the same thing anyway – Service Oriented Architecture?

They do. The idea behind those terms is to decouple the business logic (BL), or the actual application code providing a service, from the front-end consuming that service. Basically, that is what webservices do: they provide a service and return the data in a standard format without adding any presentation information (e.g. HTML) to it. That leaves the job of interpreting and rendering that data to the front-end application logic, which can very well be a piece of JS code running inside a browser.
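This split can be sketched in a few lines. The service below is a toy stand-in for something like Yahoo's keyword extractor (all names invented): it returns plain data, and a separate front-end function turns that data into HTML:

```javascript
// Sketch of the SOFEA split: the service returns structured data with no
// presentation in it; the front-end decides how to render that data.

// "Service tier": business logic only – extract crude keywords from text.
function keywordService(text) {
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  const longWords = words.filter(w => w.length > 4);
  return { keywords: [...new Set(longWords)] }; // data, no HTML
}

// "Front-end": interprets the data and renders it, e.g. as an HTML list.
function renderKeywords(data) {
  return '<ul>' + data.keywords.map(k => `<li>${k}</li>`).join('') + '</ul>';
}

const data = keywordService('Loose coupling keeps services simple');
console.log(renderKeywords(data));
```

Either side can be swapped out – a mobile front-end could render the same data completely differently – as long as both keep to the data contract.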

This is really good, because it decouples the application (BL) logic – or controller, if you will – from the presentation logic, the view; meaning that you can develop them independently of each other, or change one without changing the other. As long as they both adhere to the contract laid out for the service, they work.

The downside of that approach is that most SOAs are inherently stateless, because they assume that front-ends come by, request some kind of simple service and then go about their business – what theory nerds call ‘loose coupling’. Yahoo’s webservices are a good example – like the keyword extractor: send a long text to it, and you get extracted keywords back, as simple as that. This might be good for simple things, but as soon as services get more complex, require logging in, or involve extensive BL, it becomes a drawback. Then there are only two options: either forget the statelessness of the service, or transfer that state anew with each request. The first breaks the model of SOA, which is not good, whereas the second might end up transferring lots of data and – you guessed it – recreating the very problem that HTTP has (grafting state onto a by-definition stateless protocol).
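The second option – re-sending the state with every request – can be sketched like this (a toy example with invented names; in practice the state would more likely be a signed token than a raw object):

```javascript
// Sketch of "transfer the state with each request": the service stays
// stateless because everything it needs arrives inside the request.

// Stateless service: computes a cart total from the request alone.
function cartTotal(request) {
  return request.cart.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// The client carries its own state across requests and ships it each time.
const clientState = { cart: [{ price: 10, qty: 2 }, { price: 5, qty: 1 }] };

console.log(cartTotal({ cart: clientState.cart })); // 25
```

The service can be scaled or restarted freely, but every request now drags the whole cart along – exactly the data-transfer cost described above.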

But remember, SOA does not mandate statelessness; it merely states that it is beneficial if the services are loosely coupled. This can very well be achieved by breaking your BL into small, distinct services that might each preserve a small amount of state – an authentication service, a basket service, a preferences service etc. – all communicating with each other. The downside, again, is that this generates more traffic/latency between the services, thus degrading overall performance. But – oh well – there is always a trade-off, right?

So, what’s there to do? Well, not much: either accept one of the two solutions, find a middle way depending on your needs and environment, or implement only simple services using SOA principles.

Thoughts about RIAs


In response to the non-homogeneous landscape of browser implementations during the time of the browser wars, developers started implementing client-side virtual machines to deliver “richer” interfaces to the customer. Among these VMs, also known as RIA runtimes, are such notables as Adobe Flex/Flash/AIR, MS Silverlight and Java applets.

I do not see the point of running another VM inside the VM that is the browser – because if you think about it, the browser is nothing else (some people lovingly call them application platforms, which is the same thing). More precisely, the browser is a content renderer for static HTML content, which is then loaded as data into a VM that can manipulate it. This VM’s language happens to be JavaScript, but that does not really matter. Even scarier is that the code for manipulation and the model description itself can be intermixed in the same file.

Because of this obvious security risk, browsers employ (as already mentioned) a rigorous security model to disallow the code from doing anything besides altering the content of its current site (tab). RIAs, on the other hand, incorporate a lighter security model, which is why we’ve seen a spike in Flash-based attacks on browsers recently (keyword: drive-by Flash attack).

Another drawback is the application-downloading phase (termed ‘DA’ in the paper “Life above the service tier”). In a RIA this has to occur all at once at the beginning, for the entire application, and the VM has to be started as well. This takes a considerable amount of time. In the early days that was one reason nobody liked Flash, and it still holds true for Java applets. In plain HTML, by contrast, you can download incrementally by loading each file consecutively, which even enables you to take advantage of caching or CDNs on the way from the server to the client – a huge advantage.

The only advantage that I can see in a RIA is that they obviously obfuscate the code, because applications for them are normally compiled – well, to a bytecode, but still – thus preventing people from stealing your work or looking into your business logic (if you absolutely need to execute some of it on the client). While obfuscation is not as simple (automatic) in HTML/JS, it is still possible.

So, again, why run another VM inside what amounts to basically two VMs already – the OS and the browser – when current JS and browser implementations are now compatible enough (maybe with some help from tools such as GWT) that you can realize anything with them? Even more so considering that the internal scripting language of Flash (ActionScript) is actually an implementation of ECMA-262, aka ECMAScript, which also happens to be what JS is based on – in effect, even the languages are the same.


Mamaaaa, what are Sessions?


The HTTP protocol is defined to be stateless. For a protocol that allows an essentially non-linear look into a linear, static, “book-like”, hierarchical structure called a web page, that makes sense. All the serving application needs to know can be encoded in a single request. This also helps prevent DoS attacks: since the server does not keep any data for any client whatsoever, it cannot run out of memory when hit with too many requests.

As soon as web pages stopped being static and allowed clients to manipulate server-side data, problems abounded. Now there is the need to save state. Think of a typical website that allows you to log in: that login state needs to be saved between requests, to be remembered the next time your browser requests a page.