Is WebAssembly the dawn of a new age of web performance?

This post was contributed by Intechnica Performance Architect Cristian Vanti. Check out another of his posts, “Performance Testing is not a luxury, it’s a necessity”.

Even though the internet was created several years earlier, I think that the birth of the World Wide Web as we know it coincides with the release of the Mosaic browser in 1993.

In the 22 years since, everything to do with the web has changed incredibly fast, and very few things have resisted that change. I can think of only three examples:

  • IPv4 (born in 1981)
  • HTTP 1.1 (born in 1997)
  • Javascript (born in 1995)

The first was superseded years ago, although IPv6 still hasn't been fully adopted despite several pushes.

In February, HTTP/2 was finally formally approved, and it will likely replace version 1.1 quickly after 18 years of service.

Yet Javascript, after 20 years, is still the only language universally supported in web browsers. There have been attempts to replace it with Java applets, Flash or Silverlight, but none of them ever threatened Javascript's position. On the contrary, it has started to conquer the server side as well (prime example: Node.JS).

While on the server side a plethora of languages have been created to simplify the development of web applications, on the front end Javascript has been the only real option.

On 17th June 2015, Google, Microsoft and Mozilla jointly announced WebAssembly. This could be a turning point for front end development for several reasons.

Firstly, while there have been several attempts to replace Javascript, each one was backed by a single player. This time the three main browser developers have joined forces to move beyond Javascript.

Secondly, they decided not to replace Javascript in a disruptive way, but rather to place alongside it a new binary format, a sort of bytecode. Users will not see any difference; everything will continue to work in the same way for those who want to stay with Javascript, but a huge opportunity has been created for those who want to develop faster applications.

Thirdly, the performance improvement that WebAssembly could deliver is impossible to achieve by any other means.

And lastly, WebAssembly is a brilliant solution: something so simple yet so powerful that it should have been invented years ago.

WebAssembly is simply a binary format for Javascript. It isn't a real bytecode: it is a binary encoding of the Javascript Abstract Syntax Tree (AST), the product of the first step of Javascript parsing, nothing more. It is not a new framework, not a new language, not another vulnerability vector. Not another virtual machine either, but still the good old Javascript one.

In this way the web server will not send plain Javascript text, but rather the result of that first parsing step in a binary format. The benefits are a more compact download and less work for the browser's compiler.
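To give a flavour of how compact that binary form is, here is a minimal sketch assuming a runtime that exposes the `WebAssembly` JavaScript API: the smallest valid module is just eight bytes, and an engine can validate it directly without any text parsing.

```javascript
// The smallest valid WebAssembly binary: the "\0asm" magic number
// followed by the binary format version. Everything the engine needs
// arrives pre-parsed, in binary, rather than as Javascript source text.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic number
  0x01, 0x00, 0x00, 0x00  // binary format version 1
]);

// The engine can check and compile the bytes directly,
// skipping the text-parsing step entirely.
console.log(WebAssembly.validate(bytes)); // true
```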

But the full potential comes from asm.js, a highly optimisable subset of Javascript that Mozilla created some time ago and that all the major browsers already support. asm.js code runs only slightly slower than C, which is a great opportunity for CPU-intensive applications. Moreover, there are already cross-compilers that can take other languages (C, C++, Java, C#, etc.) and produce asm.js code. This has made it possible to compile game engine code to asm.js, and the same will happen for heavy desktop applications like CAD tools or image editors.
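For a taste of what the subset looks like, here is a minimal asm.js module (an illustrative sketch; real modules do their heavy lifting on the heap buffer). The `"use asm"` pragma and the `|0` type annotations tell an optimising engine it can compile the function ahead of time as 32-bit integer code, and because asm.js is plain Javascript, it still runs unchanged in engines that don't optimise it:

```javascript
// A tiny asm.js module. The (stdlib, foreign, heap) signature is the
// standard asm.js module shape; `x | 0` coerces a value to int32.
function FastMath(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;           // parameter type annotation: int32
    y = y | 0;
    return (x + y) | 0;  // return type annotation: int32
  }
  return { add: add };
}

// Link the module against the global object and a 64 KB heap.
const math = FastMath(globalThis, {}, new ArrayBuffer(0x10000));
console.log(math.add(2, 3)); // 5
```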

Silently asm.js and WebAssembly are leading us to a new internet age.



Do users expect too much of applications?

Today, users expect more and more from their applications (both mobile and web).

In terms of performance, 47% of users now expect a page to load in 2 seconds or less. Apps like Instagram and Twitter set high expectations in terms of usability and functionality. And the cloud is everywhere – data is expected to be at our fingertips at all times.

As well as consumer expectations constantly rising, the expectations of businesses are also increasing. Now, anything that takes more than 3-6 months to start delivering real value is unlikely to get past the drawing board.

85% of businesses want to deploy apps within these time scales, but only 18% have processes in place to support this pace.

One solution Intechnica has recently adopted to meet and exceed these expectations is Rapid Application Development, which allows 80% of an application to be built using a drag-and-drop interface, leaving just 20% to traditional coding. This speeds up the development process to the point where a functional, useful web or mobile application can be deployed across various devices and deliver value to the business within weeks.

Case study – Delivering value within weeks

Intechnica, using Progress application development solutions, recently helped a leading European transport and logistics company to match the pace of business change by developing a mobile application to deliver new functionality within weeks, as opposed to the 3-6 months it would have taken using traditional application development methods.

The front-end of the company’s existing logistics, stock and order management application could not keep pace with the rate of change required to meet the new expectations of its users.

The solution was remarkably simple – but hugely effective. Today, smartphones are ubiquitous, so a simple mobile app was developed to replace several paper-based processes and mobilise the entire operation. GPS-based geolocation, time-stamping and photography functionality (all of which is available in almost every smartphone) was built into the app.

The business is now able to move with much greater pace and efficiency, enabling it to reduce costs and pro-actively manage its resource planning and invoicing functions with much greater accuracy.

The app was built, functional and delivering benefits to the business in just a matter of weeks.

Read more

Learn how businesses are addressing the increasing demand for faster development and time to value in an exclusive white paper produced by Intechnica. Head on over to the Intechnica website to download your copy now!

New case study: Nisa Retail Mobile app on Amazon Web Services

Amazon Web Services have recently published a new AWS Case Study, looking at how Nisa Retail have implemented an innovative mobile application to make their members’ lives easier. Nisa engaged with Intechnica to design and develop the app, which is built on AWS technology.

As an AWS Consulting Partner in the Amazon Partner Network, Intechnica was well positioned to leverage the power and flexibility of AWS to deliver a scalable solution that would not impact on Nisa’s core IT systems such as their Order Capture System (OCS).

The full case study is available to read on the AWS website.

If you need an application built with performance in mind from the outset, or specifically built around the flexibility of the AWS infrastructure, Intechnica can help. Fill in the form below or use our contact page.

New SlideShare presentation – “Technical Debt 101”

We’ve just published a new slide deck on SlideShare titled “Technical Debt 101” (click the link to view, or watch embedded below).

The slide deck was written by Intechnica Technical Director, Andy Still. Make sure you take a look at Andy’s regular blog posts, including expanded posts on Technical Debt management, over at

Andy explains why Technical Debt doesn’t have to be negative, but it does have to be carefully managed. These slides give a quick run-down of best practice to approaching Technical Debt management.

We have extensive experience in managing Technical Debt. Get in touch to talk to us about managing your Technical Debt the right way.

The secret of a successful product launch? Don’t let the website crash

A good online checkout should be like shopping in a supermarket with plenty of open tills. Photo – Flickr/nateOne

Even when your warehouse's shelves are fully stocked, your effective supply is limited by your website's availability, speed and performance. Your customer satisfaction levels also hit a glass ceiling when your website can't cope with demand.

This morning, the Nexus 4 phone sold out in the UK through Google's online Play Store in less than 30 minutes. However, buyers and would-be buyers alike reported website inconsistencies, errors, freeze-ups, slow-downs, failed transactions, mistaken duplicate transactions, and lack of purchase confirmation. And this was Google, the renowned kings of web speed. So how do you keep a website up and customers happy when demand is this high?

Imagine a supermarket checkout area. Imagine queuing up and having your items scanned by a cashier, but before you can pay, the cashier starts serving the next person in the queue as soon as they arrive. If the items you're trying to buy sell out in the meantime, you're no longer allowed to buy them, and you leave the shop empty-handed.

All too often this is an accurate metaphor for buying high demand products online, and the problem is that these complicated systems need to be built with scale in mind.

A good online transaction should work more like a supermarket with plenty of checkouts open, where customers are served consistently, one at a time, first come first served.
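The supermarket model can be sketched in a few lines (an illustrative sketch only, not any real system's code): customers are served strictly first come, first served, and stock is reserved the moment a customer reaches the till, so nobody pays for an item that has already gone.

```javascript
// Illustrative checkout: a FIFO queue in front of a stock counter.
class Checkout {
  constructor(stock) {
    this.stock = stock;
    this.queue = [];
  }
  join(customer) {
    this.queue.push(customer); // strictly first come, first served
  }
  serveNext() {
    const customer = this.queue.shift();
    if (!customer) return null;
    if (this.stock > 0) {
      this.stock -= 1; // reserve stock before taking payment
      return { customer, result: "sold" };
    }
    return { customer, result: "sold out" };
  }
}

const till = new Checkout(2);
["Ann", "Bob", "Cid"].forEach(c => till.join(c));
console.log(till.serveNext()); // { customer: 'Ann', result: 'sold' }
console.log(till.serveNext()); // { customer: 'Bob', result: 'sold' }
console.log(till.serveNext()); // { customer: 'Cid', result: 'sold out' }
```

The key design choice is that stock is decremented at the till, not at payment time, so a customer who reaches the front of the queue can never lose their item to someone behind them.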

Obviously there will be a lot of disappointment when demand naturally outstrips supply, such as when there are only 140,000 Glastonbury tickets or allegedly 30,000 Nexus 4 phones on sale, yet millions of people want to buy one.

However, even without such restrictions or high levels of demand such as in the case of the Nexus 4, your supply is actually only as great as your website’s capacity to take orders. A crashed website means you’re not moving any stock whatsoever, and it’s left sitting in your warehouse, even where there is plenty of demand.

To make matters worse, transactional websites are very complex, so the chances are greater that the site will stumble or fall when a lot of people use it at once. People also get anxious when payment details are involved, even if they succeed in buying the product; anyone who tried to buy a Nexus 4 on the first day can attest to that.

Technology means we shouldn’t have to queue around the block for the latest gadgets. Photo – Flickr/dan taylor

It took a painful 24 hours for Glastonbury to sell out in 2004 and the experience lives on in the memories of those who suffered through it. Event organiser Michael Eavis was later quoted as saying “We can improve the software, definitely – but is it a good thing to sell them all out in one hour? We could have sold them out last night in five minutes, but is that a good thing? I don’t think it is you know, I’d rather string it out a bit.”

The software was indeed improved – fast forward a year and the same number of tickets went in a much swifter 3 hours. According to the man who built the system (and Intechnica co-founder) Andy Still, “Under testing, the system used for 2005’s Glastonbury ticket sales was capable of selling 100,000 tickets in under a minute, but we throttled it to give people a wider window to buy their tickets in the interest of fairness.”
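The throttling idea can be sketched as a simple rate limiter (illustrative only; `makeThrottle` and its shape are hypothetical, not the real ticketing system's code). The clock is injectable so the behaviour is deterministic and testable:

```javascript
// Cap how many orders per second the system will accept, regardless of
// how fast it *could* sell. `now` defaults to the real clock but can be
// replaced with a fake one for testing.
function makeThrottle(maxPerSecond, now = Date.now) {
  let windowStart = now();
  let count = 0;
  return function tryAccept() {
    const t = now();
    if (t - windowStart >= 1000) { // new one-second window
      windowStart = t;
      count = 0;
    }
    if (count < maxPerSecond) {
      count += 1;
      return true;  // order accepted
    }
    return false;   // over the limit: ask the buyer to retry
  };
}

// With a fixed clock, a limit of 2/second admits two orders then refuses:
let fake = 0;
const accept = makeThrottle(2, () => fake);
console.log(accept(), accept(), accept()); // true true false
```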

So is it possible to sell out quickly without your website falling over? Yes, but only when the system is designed to perform properly.

Intechnica sponsor and get involved in first “Hack Manchester”

Manchester-based digital agency Intechnica were involved in the highly successful Hack Manchester event at the Museum of Science and Industry this past weekend. As sponsors, Intechnica provided a challenge plus prizes to those taking part, with Technical Director Andy Still judging and handing out brand new Raspberry Pi computers to the winners.

As well as proudly sponsoring the event, Intechnica fielded a team to bravely stay up through the night and hack together a working product in 24 hours.

The finished, working product was a word search generator game using an SMS API, where players could text in words to be hidden in the grid before opponents texted the hidden words back to solve the puzzle.
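The core of such a generator can be sketched in a few lines (an illustrative sketch, not the team's actual code): write each word into a letter grid along a chosen direction, rejecting placements that would run off the edge; filling the remaining cells with random letters is left out for brevity.

```javascript
// Build an empty size x size grid of letter slots.
function makeGrid(size) {
  return Array.from({ length: size }, () => Array(size).fill(null));
}

// Write `word` into the grid starting at (row, col), stepping by
// (dRow, dCol) per letter; returns false if it would leave the grid.
function placeWord(grid, word, row, col, dRow, dCol) {
  const size = grid.length;
  const endRow = row + dRow * (word.length - 1);
  const endCol = col + dCol * (word.length - 1);
  if (endRow < 0 || endRow >= size || endCol < 0 || endCol >= size) {
    return false; // placement would run off the edge
  }
  for (let i = 0; i < word.length; i++) {
    grid[row + dRow * i][col + dCol * i] = word[i];
  }
  return true;
}

const grid = makeGrid(5);
placeWord(grid, "HACK", 0, 0, 1, 1); // diagonal from the top-left
console.log(grid[3][3]); // "K"
```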

Hopefully Hack Manchester will become an annual event, continuing to showcase the talent of all the great teams taking part. The dedication of the organisers, volunteers and teams taking part was fantastic.

See you at the next Hack Manchester!

Requirements Traceability

Agile and QA approaches – Requirements Traceability

Following on from my post two weeks ago about specification by example and application maturity, this piece is about requirements traceability.

Software development processes have traditionally worked from signed-off artefacts and agreed pieces of work. These would normally describe a set amount of development time, some testing time and some support time to resolve defects and help bed in the new application. An important part of this process is a description of what will be delivered, and this is a key document in the specification by example process.

When creating the original statement of requirements and describing the user stories in tools such as SpecLog, it is important that all requirements have been captured and that there is a common understanding between the project sponsors and the suppliers of how the functionality will be used. Describing key business flows in terms of the behaviour of the application opens up areas for discussion, so that the understanding of the supplier and the customer can be explored.

These activities drive the test approach. Traditional testing analysis would follow on from here: often long weeks of Business Analysts creating specification documents, after which test analysts would follow the same path and develop test packs to examine and validate those requirements. In the meantime, developers would work from the documents and the outline discussions and begin their development approach. In the old days this would have produced a detailed specification of requirements, a detailed development approach and a detailed test approach, all of which would have to be approved and signed off. This generates a lot of documents, and the big problem with them is the assumption that the requirements are fully understood at the beginning of the process. Any changes are managed through change requests, which require impact analysis and updates across all of these documents. It is not a flexible process, but although slow, it delivers good-quality software.

In the same way that Agile has helped to remove unnecessary documentation from the development and testing of new applications, specification by example looks to broaden this concept to address the requirements themselves. In Quality Assurance, the big challenge with ever more complex systems is measuring which areas of the requirements are being met by which development activities, and then validated by which testing activities. In its broadest sense, specification by example attempts to address this with a straightforward approach: tests will be developed for each requirement specified in the user story.

The challenge is what artefacts we need to create and how they fit in with the user stories. For instance, can test approach documents be taken straight from the stories created during analysis? The examples and systems discussed at Agile conferences and across the web have been small, with simple business processes and a limited number of functional combinations of application modules. These systems have been created from scratch or are small improvements; the work hasn't been done for existing systems that may lack full documentation, or where new software and changes to existing applications land together.

This is where testing theory and previous processes can supplement the given process. It is important that the relationships between the requirements and stories are understood. Which are the most important parts of the application? Which are the most complex? Using traditional requirements traceability, it is possible to create an application relationship map, which can then be used to drive the test plan and, more importantly, to manage the delivery of the application functionality. This helps with deciding the critical path and the main release points where key modules form a testable business process.
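At its simplest, such a relationship map is just a table linking each requirement to the stories and tests that cover it. Here is an illustrative sketch (all the identifiers are hypothetical) that reports requirements with no test coverage, which is exactly the gap a traceability matrix is meant to expose:

```javascript
// A minimal traceability map: each requirement lists the user stories
// that elaborate it and the tests that validate it.
const traceability = [
  { requirement: "REQ-1 Capture order", stories: ["US-12"], tests: ["T-3", "T-4"] },
  { requirement: "REQ-2 Amend order",   stories: ["US-14"], tests: [] },
];

// Report every requirement that no test currently validates.
function uncovered(matrix) {
  return matrix.filter(r => r.tests.length === 0).map(r => r.requirement);
}

console.log(uncovered(traceability)); // [ 'REQ-2 Amend order' ]
```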

Where defects are seen, traceability can steer us back to the stories and help us understand whether something is missing from the requirements, something has changed, or details need to be more fully understood. This takes us back to the largest flaw in current development/test methodology: when the initial analysis takes place, assumptions are made based on the business, test and development analysts' experience and understanding of the business. These won't necessarily align, and more importantly they will change as more work is done. It is vital that all of that information is shared, and that is what collaborative processes such as specification by example try to engender. At the moment, though, the process seeks to implement a one-size-fits-all approach and bases everything on the user stories alone; there may well need to be additional process activities.

What we want to find out over the next few months is how requirements traceability works in the specification by example area. It has proved to be a valuable tool both in Agile projects, where there isn't time to record all the details of every requirement, and in Waterfall, where the long periods of development and testing activity can be managed through the matrix. It has weaknesses: it doesn't work well across a large number of conflicting requirements, or when the business capability being developed splits into many small components. But it will be interesting to see what it produces, and in future blogs we will record what we see.