Is WebAssembly the dawn of a new age of web performance?

This post was contributed by Intechnica Performance Architect Cristian Vanti. Check out another of his posts, “Performance Testing is not a luxury, it’s a necessity”.

Even though the internet itself was created much earlier, I think the birth of the World Wide Web as we know it coincides with the release of the Mosaic browser in 1993.

In the past 22 years, everything to do with the web has changed incredibly fast and very few things have resisted change during this time. I can think of only three examples of this:

  • IPv4 (born in 1981)
  • HTTP 1.1 (born in 1997)
  • Javascript (born in 1995)

The first was superseded years ago, even though IPv6 still hasn’t been fully adopted despite repeated pushes.

In February, HTTP/2 was finally formally approved, and it will probably replace version 1.1 quickly after 18 years.

Yet Javascript, after 20 years, is still the only language universally used in web browsers. There were some attempts to replace it with Java applets, Flash or Silverlight, but none of them ever threatened Javascript’s position. On the contrary, it has started to conquer the servers as well (prime example: Node.js).

While on the server side a plethora of different languages has been created to simplify the development of web applications, on the front end Javascript has been the only real option.

On 17th June 2015, Google, Microsoft and Mozilla jointly announced WebAssembly. This could be a turning point for front end development, for several reasons.

Firstly, there have been several attempts to replace Javascript, but each one was backed by a single player. This time the three main browser developers have joined forces to move beyond it.

Secondly, they decided not to replace Javascript in a disruptive way, but rather to place alongside it a new binary format, a sort of bytecode. The user will not see any difference; everything will continue to work in the same way for whoever wants to stay with Javascript, but a huge opportunity has been created for those who want to develop faster applications.

Thirdly, the performance improvement that WebAssembly could bring would be impossible to achieve by any other means.

And lastly, WebAssembly is a brilliant solution, something so simple but so powerful, something that should have been invented years ago.

WebAssembly is simply a binary format for Javascript. It isn’t a real bytecode: it is a binary format for the Javascript Abstract Syntax Tree (AST), the product of the first step of Javascript parsing, nothing more. It is not a new framework, not a new language, not another source of vulnerabilities. It is not another virtual machine, but still the good old Javascript one.

In this way the webserver will not send the raw Javascript text, but will instead send the result of that first parsing step in a binary format. The benefits will be a more compact code size and less work for the browser’s compiler.

But the full potential comes from the use of asm.js, a highly optimizable subset of Javascript that Mozilla created some time ago and that already runs in all the major browsers. asm.js code is only slightly slower than C code, which gives CPU-intensive applications a great opportunity. Moreover, there are already cross-compilers that can take other languages (C, C++, Java, C#, etc.) and produce asm.js code. This means it has already been possible to compile game engine code to asm.js, and the same will happen for heavy desktop applications like CAD tools or image editors.
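
To give a flavour of what that subset looks like, here is a minimal, hand-written asm.js-style module. This is only a sketch: in practice asm.js is normally generated by a compiler such as Emscripten rather than written by hand, and the module and function names here are purely illustrative.

    function AsmMath(stdlib, foreign, heap) {
        "use asm";  // marks this function body as asm.js code

        function add(x, y) {
            x = x | 0;           // type annotation: x is a 32-bit integer
            y = y | 0;           // type annotation: y is a 32-bit integer
            return (x + y) | 0;  // the result is coerced back to a 32-bit integer
        }

        return { add: add };
    }

    // The same source is still plain Javascript, so it runs in any browser;
    // engines that recognise "use asm" can compile it ahead of time.
    var math = AsmMath(window, {}, new ArrayBuffer(0x10000));
    console.log(math.add(2, 3)); // 5

The "| 0" coercions are what let the engine treat those values as 32-bit integers and skip the usual dynamic-typing machinery.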

Silently, asm.js and WebAssembly are leading us into a new internet age.

New case study: Nisa Retail Mobile app on Amazon Web Services

Amazon Web Services have recently published a new AWS Case Study, looking at how Nisa Retail have implemented an innovative mobile application to make their members’ lives easier. Nisa engaged with Intechnica to design and develop the app, which is built on AWS technology.

As an AWS Consulting Partner in the Amazon Partner Network, Intechnica was well positioned to leverage the power and flexibility of AWS to deliver a scalable solution that would not impact on Nisa’s core IT systems such as their Order Capture System (OCS).

The full case study is available to read on the AWS website.

If you need an application built with performance in mind from the outset, or specifically built around the flexibility of the AWS infrastructure, Intechnica can help. Fill in the form below or use our contact page.

Intechnica sponsor and get involved in first “Hack Manchester”

Manchester-based digital agency Intechnica were involved in the highly successful Hack Manchester event at the Museum of Science and Industry this past weekend. As sponsors, Intechnica provided a challenge plus prizes to those taking part, with Technical Director Andy Still judging and handing out brand new Raspberry Pi computers to the winners.

As well as proudly sponsoring the event, Intechnica fielded a team to bravely stay up through the night and hack together a working product in 24 hours.

The finished, working product was a word search generator game built on an SMS API, where players could text in words to be hidden in the grid before their opponents texted the hidden words back to solve the puzzle.

Hopefully Hack Manchester will become an annual event, continuing to showcase the talent of all the great teams taking part. The dedication of the organisers, volunteers and teams taking part was fantastic.

See you at the next Hack Manchester!

Anaemic Domain Model

Martin Fowler once wrote an article “AnemicDomainModel” decrying the “anti-pattern” of a service layer that executes actions (as opposed to delegating them to an entity), combined with entities that are just data buckets of getters and setters.

His claim is that this kind of application architecture is wrong because “…it’s so contrary to the basic idea of object-oriented design; which is to combine data and process together.”

Right.

Object oriented design, i.e. designing around conceptual “object” blocks that model distinct entities in your system, has always been a bit of a flawed model, in my opinion. When you go shopping, let’s say to buy a DVD, you grab a basket and wander down to the DVD aisle. You check for the next boxed set of your favourite TV series, grab a copy, put it in your basket, wander across to the till (yeah yeah, no-one gets a basket for one item, just roll with it for now) and pay.

In a standard OO model the Service layer is simply a dumb abstraction that provides delegation, whereas the Domain contains the bulk of the application code.

When you “added” your item to the basket, the product did nothing, and neither did the basket; the entity that performed the action was your hand. That’s because products and baskets can’t do anything, a product is just an inanimate thing (obviously there are exceptions to this, such as if you’re buying a puppy, but there are exceptions to almost everything), and a basket is just a receptacle for inanimate (or animate) things. Still think your product needs an AddToBasket(Basket basket)?

So, if we’re building an eCommerce system (a full-on ecosystem, not just the B2C frontend: warehouse management, invoicing and so on), which of these will have product data? I’m going to say all of them.

You’ve got options at this point. You can say “No! I’m not going to pollute my applications with shared logic and shared datatypes and shared… well, anything”. I admit, there are some serious downsides to having shared functionality, the main one being the inability to make changes without risk of regression. But risk can be managed, and the organisational upsides of the development velocity that can come from only writing code once, as opposed to once per system, are massive.

Common functionality is unlikely on your Product class: warehouses don’t have baskets; customers don’t buy quarterly overviews; your accountant probably doesn’t care too much about how much stuff fits onto a pallet. But common data very much is needed. Everyone will need to know what the product is (warehouses need to know that Product #45126 is a crate of beans, not the wrongly-marked ostrich steak they can’t fit on a pallet; the accountant will need to know that a crate of tennis balls can’t possibly be priced at $4,000 a pop), and everyone will need pretty much the same level of basic information.

How does this fit into a domain model?

In the ADM (I would like to co-opt this as the moniker for this pattern; I don’t even slightly think it’s insulting), our “domain model” is only “anaemic” in the data layer. We have a full-blown domain model at the “service” layer. There’s just an extra conceptual leap between having a polymorphic product and a polymorphic service.

In the ADM, the Service layer contains the bulk of the logic, and therefore the bulk of the code, and the Domain is only used to model the data.

Under “normal” circumstances (old-skool circumstances, more like) you’ll probably have a factory that takes the “productType” attribute from the raw dataset and decides which type of product to instantiate, then whichever class that’s handling the basket addition delegates the AddToBasket() call to the newly-instantiated SubProduct.

With the new-school (super-awesome) method, we’ll probably have a factory that takes the “productType” attribute from the Product class and decides which type of IBasketHandler to instantiate, then whatever class is handling the basket addition delegates the AddToBasket() call to the newly-instantiated SubProductBasketHandler.

See the difference? That’s right, very little. But in terms of separation of concerns we’ve gained a whole lot. Products don’t need to know anything, so there’s no danger of one of the juniors deciding that it’s fine to have the Product(string productId) constructor load the product data from the database (performance and maintenance problems, here we come); IoC will come naturally, and testing is much easier with the kind of statelessness that comes with this pattern.
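
To make that concrete, here is a rough sketch of the “new-school” shape in Javascript. The class and property names (StandardProductBasketHandler, productType and so on) are illustrative rather than taken from the original post, which implies a C#-style codebase, but the shape of the pattern is the same.

    // Anaemic domain object: data only, no behaviour.
    class Product {
        constructor(id, productType, name) {
            this.id = id;
            this.productType = productType;
            this.name = name;
        }
    }

    // Process model: each handler knows how to add one kind of product to a basket.
    class StandardProductBasketHandler {
        addToBasket(product, basket) {
            basket.lines.push({ productId: product.id, quantity: 1 });
        }
    }

    class AgeRestrictedProductBasketHandler {
        addToBasket(product, basket) {
            if (!basket.customerIsAdult) {
                throw new Error("This product needs an adult customer");
            }
            basket.lines.push({ productId: product.id, quantity: 1 });
        }
    }

    // Factory keyed on the productType attribute of the data object.
    function createBasketHandler(product) {
        switch (product.productType) {
            case "age-restricted": return new AgeRestrictedProductBasketHandler();
            default:               return new StandardProductBasketHandler();
        }
    }

    // Service layer: owns the process and delegates to the right handler.
    function addProductToBasket(product, basket) {
        createBasketHandler(product).addToBasket(product, basket);
    }

The Product class never touches a basket or a database, which is exactly what keeps it safe to share across the warehouse, invoicing and B2C systems described above.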

If we’re talking semantics then yeah, we don’t have a “domain” model any more; we have a data model and a process model, and they’re separate. But really, do your products have behaviour, or do various procedures happen to the products? Data and process should be separate; they’re fundamentally different things. Behaviour is dictated by data and data is dictated by behaviour, but there’s a fundamental divide between the two, in that process can (and will) change, while data is generally so static that it simply cannot.

So, who’s right? Well, I’d say that both models are appropriate, depending on context. If you’ve not got an ecosystem, and you’re fairly sure your single system will always be the one and only, and you’re confident you’ll get the best results from mixing process and data then go for it. If you’re worried about maintainability and have more than one interlinked system, then think seriously about what’s going to give you the best results in the long term. Just be consistent, and if you’re not, be clear about why you’re deviating from your standards or you’ll end up with an unholy amalgamation of different patterns.

This post originally appeared on Ed’s personal site.

Persistent Navigation in jQuery Mobile

I have been doing quite a bit of work in mobile application development with jQuery Mobile recently, and generally finding it a very useful product once you get your head around its idiosyncrasies.

One of the things I needed to do was to have a persistent navigation across the bottom of the page, to mimic the standard iPhone navbar.

Out of the box jQuery Mobile offers a facility for persistent navbars.

Read the rest of this post, including code explanations, on Andy’s “Internet Performance Expert” blog: Persistent Navigation in jQuery Mobile

New report: Application performance, and why the end user experience is paramount

We recently published a blog post highlighting some key facts and figures to show how performance can affect the bottom line in e-commerce. For example, more than a third of online consumers will tell others about their disappointing website experience, and Amazon estimates that even a 100 millisecond page load delay could cause a 1% decrease in sales. Today, Intechnica commercial partner Compuware published a Whitepaper that explores the overall business impact of Application Performance.

The Whitepaper, entitled “How End-User Experience Affects Your Bottom Line”, explains why end-user experience is the “ultimate measure” of an application’s success. One of the main reasons the report gives for this is the increasing complexity of application delivery, both in terms of content sources (internal data centres, ads, external feeds and the cloud) and the variety of devices and platforms end-users access applications from. Indeed, as a recent Intechnica webinar postulates, successful deployment of applications onto cloud-based platforms depends on carefully considering each unique platform and designing the application with performance in mind from the very beginning.

The main challenge in monitoring application performance is identifying where along the delivery chain a problem might occur. The nightmare scenario for any business is for end-users to see the problem before the business is even aware it exists. In most cases, when applications fail, they do so unexpectedly and catastrophically. This impacts the bottom line, not just through lack of availability and the hit on customer loyalty, but also through a drop in productivity in internal systems.

The Whitepaper that Compuware published today focuses on managing application performance across the entire delivery chain, so that any issues can be pinpointed to their exact cause and dealt with before potentially millions in revenue are lost.

Download the Whitepaper now.

Approaching Application Performance with TDD

This blog post was written by Intechnica Senior Developer Edward Woodcock. It originally appeared on his personal website, which you can view here.

Test Driven Development (TDD) is a development methodology created by Kent Beck (or at least, popularised by him), which focuses on testing as not just the verification of your code, but the force that drives you to write the code in the first place.

Everyone who’s been exposed to TDD has come across the mantra:

Red. Green. Refactor. Repeat.

For the uninitiated that means you write a unit test that fails first. Yup, you aim to fail. Then, you write the simplest code that’ll make that test (and JUST that test) pass, which is the “Green” step (obviously you have to re-run the test for it to go green). Then you’re safe to refactor the code, as you know you’ve got a test that shows you whether it still works or not.
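
As a minimal sketch of one turn of that cycle (in Javascript, using Node’s built-in assert module; the Basket class and its behaviour are invented purely for illustration):

    const assert = require("assert");

    // RED: this assertion is written first and fails until Basket.total() exists.
    // GREEN: the simplest Basket that makes this one assertion pass.
    // REFACTOR: tidy the implementation, re-running the test after every change.
    class Basket {
        constructor() { this.lines = []; }
        add(item, quantity) { this.lines.push({ item, quantity }); }
        total() {
            return this.lines.reduce((sum, l) => sum + l.item.price * l.quantity, 0);
        }
    }

    const basket = new Basket();
    basket.add({ name: "DVD box set", price: 25.0 }, 2);
    assert.strictEqual(basket.total(), 50.0);
    console.log("basket total test passed");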

Test Cycle - Red is where I'm going to do my next dev step (or a long-outstanding issue I'm marking with a failing test)

So, that’s your standard dev cycle, RGRR. Every now and again you might drop out and have a poke around the UI if you’ve plugged anything in, but you’ll generally just be doing your RGRR, tiny little steps towards a working system.

Finally, when you’re happy with the release you’re working on, you deploy your code to production and go for a beer, safe in the knowledge that your code will work, right?

Continuous Integration - Useful for seeing your project's red/green status

Well, yes, and no. You’re safe in the knowledge that your business logic is “correct” as you understand it, and that your code is as robust as your tests are. But if your app is a large-scale internet (or even intranet) application with many concurrent users, you’ve only covered half the bases, because having the green tests that say that the system works as expected is useless if the system is so clogged that no-one can get on it in the first place. Even a system that simply degrades in performance under high- (or worse, normal-) load scenarios can leave a sour taste in the mouth of your otherwise happy customers.

So, the question is: how can you slide performance in as part of your RGRR cycle? Well, TDD says that the tests should drive the system, but it doesn’t specify what type of test you need to use. A common thought is to add automated UI tests into the mix, to be run in batches on a release, but why not add a performance test as part of the build-verification process as well?

Going from our RGRR cycle, first we need a failing test. So, decide on a load model for the small section of the system you’re working on right now. Does it need to handle a hundred users at once, or will it likely be more like ten? It makes sense to go a little over what you might expect on an average day, just to give yourself some extra headroom.

Response Time Comparison - If your graph looks something like this you're probably doing OK!

Next, pick a load time that you think is acceptable for the action under test. If you’re loading search results, do they need to be quick, or is it more of a report that can be a fire-and-forget action for the user? Obviously, input may be required from your client or Product Owner as to what they consider an acceptable time to carry out the action; there’s nothing worse than being proud of the performance of something that didn’t need to be fast, because then you’ve wasted effort that could have been used elsewhere!
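
Putting those two decisions together, a very rough sketch of such a test might look like the following (plain Node Javascript rather than a real load testing tool; the URL, the 100-user load model and the 500 ms target are all illustrative assumptions):

    // Minimal load test sketch: N concurrent users hit one action and the test
    // fails (the "Red" step) if the average response time misses the target.
    const http = require("http");

    const USERS = 100;       // load model: concurrent virtual users (assumed)
    const TARGET_MS = 500;   // acceptable average response time (assumed)
    const URL = "http://localhost:8080/search?q=dvd";  // illustrative action

    function timeOneRequest() {
        return new Promise((resolve, reject) => {
            const start = Date.now();
            http.get(URL, (res) => {
                res.resume();  // drain the response body
                res.on("end", () => resolve(Date.now() - start));
            }).on("error", reject);
        });
    }

    Promise.all(Array.from({ length: USERS }, timeOneRequest)).then((times) => {
        const avg = times.reduce((a, b) => a + b, 0) / times.length;
        console.log("average response time: " + avg.toFixed(0) + " ms for " + USERS + " users");
        if (avg > TARGET_MS) {
            console.error("FAIL: target was " + TARGET_MS + " ms");
            process.exit(1);  // red
        }
        console.log("PASS");  // green
    });

A real project would of course use a proper load testing tool and dedicated load injectors; the point is simply that the load model, the target and the pass/fail result are all explicit.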

Now you’re going to want to run the test, in as close a match to your live environment as possible. At this point I need to point out that this is more of a theoretical process than a set of steps to take, as you’re unlikely to have one production-like environment for each developer! If needs be, group up with other developers in your team and run all your tests back-to-back. Make sure you have some sort of profiling tool available, as running the tests in a live-like environment is the key here; if you run them locally you’ll not be able to replicate the load effectively (unless you actually develop on the live server!).

If you’re using an iterative development approach and this is the first time you’ve run a test on this particular piece of functionality, most likely your test will fail. Your average response time under load will be above your target, and you may not even get up to the number of users you need to account for.

So, that’s the “Red” step accounted for, so how do you get to green? This is where we start to diverge from the RGRR pattern, as to get good performance you’ll need to refactor to make the test green. If you can’t run the same test, just take a few stabs through the UI manually to get some profiler results, and spend the time you’ve saved waiting for the test to complete thinking about how you can implement tests that can be run locally and from a load injector.

Profiling in action: DynaTrace giving us a realtime comparison. We're looking for some red bars, which indicate worse performance.

Hopefully your profile will have some lovely big red bars that show you where the hotspots are for your particular piece of functionality, and you can use this information to refactor: make the algorithm less complex, make the DB call faster, or add some caching. If you’re being a rigorous TDD adherent you’ll probably only want to make a single architectural change before you re-run the performance test, but in most cases you’ll want to do as many things as you can think of, as performance tests on a live-like system won’t be available all the time.
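
For instance, if the profiler shows the hotspot is the same expensive lookup being made over and over, a simple in-memory cache is often the first refactoring to try. This is only a sketch, with cache invalidation and size limits deliberately ignored, and getProduct/loadFromDb are made-up names:

    // A very small read-through cache for an expensive lookup (illustrative).
    const cache = new Map();

    async function getProduct(productId, loadFromDb) {
        if (cache.has(productId)) {
            return cache.get(productId);  // hot path: no database round trip
        }
        const product = await loadFromDb(productId);  // cold path: one DB call
        cache.set(productId, product);
        return product;
    }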

Once you’re happy with that, re-run your test. This should be the “Green” step; if it’s not, go back and refactor the code again until you hit your target. If you’re struggling to find enough headroom from code or architecture changes to hit your performance targets, you most likely need to either leverage more hardware or refactor your UI to divert traffic to other areas.

Right then, we’ve done Red, Refactor, Green, next comes “Repeat”. If you think you can eke out more speed from the area you’re working on, you can go back and adjust your test load, but if you’ve gone for a known (or expected) production load with a little extra on top you probably don’t want to waste time on that. After all, when you’re practicing TDD you do just enough to hit your target, and then move on.

Repeat Load Test - When you do your repeat you can add in the "Change" column, which helps identify possible areas of concern for the next cycle.

So what’s next? Well, next you implement another piece of functionality, and do another load test. As you go along you’ll eventually build up quite a collection of load test scripts, one for each functionality area in the system, and you should run these together each time you add a new piece of functionality, just like in a unit testing session. However, I’d avoid doing tests on multiple new pieces of functionality at once the first time around, as you will likely come across a situation where a single piece of functionality knocks off performance across the board, giving you a big spreadsheet full of red boxes.

If you follow this method (RRGR) throughout the development lifetime of your system you should have a rock-stable system that can quantifiably cope with the expected amount of load, and then some. This is a great situation to be in when you’re planning new functionality, as you’ll rarely have to worry about whether you have enough headroom on your boxes to implement killer feature X, and can instead worry about really nailing that cool new bit of functionality.

Webinar: Designing Applications for the Cloud

This webinar, from 6th March 2012, was hosted by Intechnica’s Technical Director, Andy Still. Andy talked about the key principles of designing and migrating applications to the cloud. This includes scaling out, taking new and imaginative approaches to data storage, making full use of the wide range of products and services on offer from cloud providers (beyond hosting), and exploring the many flavours of hybrid solution which can mean all types of business can leverage the benefits of the cloud.

Andy has architected and built a number of cloud-based applications, specialising in highly scalable, high-performance, business critical applications.

If you’re planning or considering moving to the cloud in 2012 then this webinar is essential viewing.

More Intechnica webinars