222222
#1
Hi,

Akka is appearing in the browser now with akka.js. A number of use cases have been listed in the akka.js issue tracker. It would be interesting to hear from the authors of this book what else they envisage here. The browser environment is much trickier.
roland.kuhn
#2
The development in this space is indeed very interesting; we will see many new technologies come up around Reactive in the next few years. Talking specifically about the web browser, I think that fault tolerance and elasticity do not really enter the picture: nobody would run two browsers for fail-over, for example. This piece of software is usually under the direct control of its human operator, and we are used to just starting it again when it crashes. Of course fewer crashes are better, but that is best achieved by good quality assurance processes. Most of the interesting functionality is in any case provided by the back-end services that the browser talks to, and since those cannot be under the human user’s control, we must employ the techniques outlined in the book to keep them responsive at all times.

If the situation should change in a few years, then our goal is to describe the patterns in the book such that they can then be applied in new and currently unforeseeable contexts.
222222
#3
For fault tolerance and elasticity you may want to consider that:
* browsers tend to go offline a lot, and being able to work without a network is very useful.
* writing software for browsers means the person writing it knows very little about the environment the code will land on. Does it support WebWorkers (i.e. processes)? Is IndexedDB available? How strong is the CPU? How much bandwidth is there? All of that indicates that elasticity is a major consideration on the client (a rough capability probe is sketched below).
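To make that concrete, here is a rough capability-probe sketch in Scala.js. The object and method names are made up for illustration, and hardwareConcurrency is only reported by some browsers, hence the fallback:

import scala.scalajs.js

// Rough probe of the client environment; every value here can differ per visitor.
object ClientCapabilities {
  def hasWebWorkers: Boolean =
    js.typeOf(js.Dynamic.global.Worker) != "undefined"

  def hasIndexedDB: Boolean =
    js.typeOf(js.Dynamic.global.indexedDB) != "undefined"

  // Not every browser exposes navigator.hardwareConcurrency.
  def reportedCores: Int =
    js.Dynamic.global.navigator.hardwareConcurrency
      .asInstanceOf[js.UndefOr[Int]]
      .getOrElse(1)
}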
roland.kuhn
#4
Of course the browser must be considered as a remotely connected (i.e. distributed) component of the overall application, that is true. What I meant was that the internal structure of browser applications will not involve a microservice architecture or replication or dynamically adding and removing compute nodes; it just runs within a single process in any case.

The challenges you describe for developing software for this environment are real, but that is not what we mean by elasticity: we specifically talk about adding and removing computation resources in response to changing load, and that is unlikely to become relevant on the web browser anytime soon since the main computation is typically done in the cloud.
222222
#5
roland.kuhn wrote: Of course the browser must be considered as a remotely connected (i.e. distributed) component of the overall application, that is true. What I meant was that the internal structure of browser applications will not involve a microservice architecture or replication or dynamically adding and removing compute nodes; it just runs within a single process in any case.


It may run on a single processor or on several; it all depends on the client. Modern browsers can use more than one thread or process through WebWorkers, which can make web requests, interact with an IndexedDB database, help deal with contention, do reasoning, and so on.
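As a rough illustration (assuming the scala-js-dom facade; "parser-worker.js" is a made-up worker script), the main thread can hand a job to a worker and react to its reply without blocking the UI:

import org.scalajs.dom

object ParseOffMain {
  def start(): Unit = {
    // Delegate a parsing job to a background worker (hypothetical script name).
    val worker = new dom.Worker("parser-worker.js")
    worker.onmessage = (e: dom.MessageEvent) =>
      dom.console.log(s"worker finished: ${e.data}")
    worker.postMessage("""{"payload": "..."}""")
  }
}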

roland.kuhn wrote: The challenges you describe for developing software for this environment are real, but that is not what we mean by elasticity: we specifically talk about adding and removing computation resources in response to changing load, and that is unlikely to become relevant on the web browser anytime soon since the main computation is typically done in the cloud.


As client computers have different speeds, this could require launching more or fewer workers. It also helps to move data parsing off the main UI thread. For example, a number of WebWorkers could be launched to do some reasoning. All of this fits under "adding and removing computation resources in response to changing load".
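A sketch of what that elasticity-in-the-small could look like (again assuming the scala-js-dom facade; "reasoner-worker.js" and the object name are invented):

import org.scalajs.dom
import scala.scalajs.js

object ReasonerPool {
  // Size the worker pool to whatever this particular client reports,
  // keeping one core free for the UI thread.
  private val cores: Int =
    js.Dynamic.global.navigator.hardwareConcurrency
      .asInstanceOf[js.UndefOr[Int]]
      .getOrElse(2)

  private val pool: Vector[dom.Worker] =
    Vector.fill(math.max(1, cores - 1))(new dom.Worker("reasoner-worker.js"))

  // Fan chunks of work out across the pool; the UI thread never blocks.
  def submit(chunks: Seq[String]): Unit =
    chunks.zipWithIndex.foreach { case (chunk, i) =>
      pool(i % pool.size).postMessage(chunk)
    }
}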

I know this is weird. I was not aware of it either until a couple of years ago when I started doing client computing, and I have been learning a lot about this very recently. The same structural requirements apply as with server computing, but for very different reasons.

It's worth reading up on some of the ideas the akka.js team has put forward on other uses: https://github.com/unicredit/akka.js/issues/12
roland.kuhn
#6
Yes, this is indeed an aspect that I had not encountered so far, thanks for pointing it out! For now I’ll make a note, because I don’t see immediately where something like this should be woven into the book; I’m sure things will fall into place once the content is nearing its final volume.
303717
#7
The general umbrella for such apps is the "Single-page Application" (SPA, https://en.wikipedia.org/wiki/Single-page_application), which is what scala.js excels at.

In your book, when you refer to these frameworks, you correctly point out that, as they stand, they work in a single-threaded environment. But more and more, developers are making them multi-threaded (e.g. see http://www.introtorx.com/content/v1.0.10621.0/15_SchedulingAndThreading.html), accompanied by the age-old issues.

So if nothing else, you could add to that section both a warning to users of such frameworks and a route forward: careful adoption of Akka in the client.

I'm guessing here a bit at a conceptual use case... but consider the R desktop program (as if it were a Scala program). How could Akka help: adding more threads, background processing, offloading some work to cloud servers, etc.

Then an SPA version in the browser would be identical. And sure, if I loaded up R on my mobile phone it wouldn't work, unless all actors were remote (a rough sketch of the shape follows below).
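To make the guess a bit more concrete, here is a very rough classic-Akka sketch of that shape. All names are invented and nothing here is from the book: the UI actor never blocks, a background actor does the heavy evaluation, and the same message protocol would work unchanged if the evaluator lived on a remote node.

import akka.actor.{Actor, ActorSystem, Props}

case class Evaluate(expression: String)
case class Result(value: String)

// Heavy work happens here, off whatever thread drives the UI.
class Evaluator extends Actor {
  def receive = {
    case Evaluate(expr) => sender() ! Result(s"evaluated: $expr")
  }
}

class Ui extends Actor {
  // Could equally be a remote ActorRef (actorSelection / cluster receptionist)
  // if the work should be offloaded to cloud servers instead.
  private val evaluator = context.actorOf(Props[Evaluator](), "evaluator")

  def receive = {
    case expr: String  => evaluator ! Evaluate(expr)
    case Result(value) => println(s"display: $value")
  }
}

object DesktopR extends App {
  val system = ActorSystem("desktop-r")
  system.actorOf(Props[Ui](), "ui") ! "1 + 1"
}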