cloud computing

Intechnica Technical Director to present webinar on cloud performance with O’Reilly and Dyn

On June 18th, Intechnica Technical Director Andy Still will be co-presenting a webinar, “Building Performance and Resiliency Into Cloud Hosted Systems”, with Jim Cowie (Chief Scientist, Dyn). The webinar is hosted by O’Reilly, who previously published Andy’s book “Web Performance Warrior” (click here to download).

Register now

Andy will bring his years of experience in developing high usage, highly performant websites to the webinar, while co-presenter Jim Cowie of Dyn brings more than 20 years of experience in high performance computing, network modelling and simulation, web services, and security.

In this webinar you will learn and see examples of:

  • How your customers connect to your cloud providers over the Internet, and where performance problems occur
  • How you can create meaningful SLAs and protect your company from cloud bad behavior
  • How Internet performance affects overall page load times, and what you can do about it
  • How you can use cloud geolocation to improve end user experience and content delivery
  • How a multi-vendor cloud approach can lower costs while improving reliability and performance
  • How you can avoid downtime during planned outages and maintenance

Head on over to O’Reilly’s website to register now!

New case study: Nisa Retail Mobile app on Amazon Web Services

Amazon Web Services have recently published a new AWS Case Study, looking at how Nisa Retail have implemented an innovative mobile application to make their members’ lives easier. Nisa engaged with Intechnica to design and develop the app, which is built on AWS technology.

As an AWS Consulting Partner in the Amazon Partner Network, Intechnica was well positioned to leverage the power and flexibility of AWS to deliver a scalable solution that would not impact on Nisa’s core IT systems such as their Order Capture System (OCS).

The full case study is available to read on the AWS website.

If you need an application built with performance in mind from the outset, or specifically built around the flexibility of the AWS infrastructure, Intechnica can help. Fill in the form below or use our contact page.

AWS Chief Evangelist pays a visit to Manchester’s AWS User Group North

The group has seen tremendous growth in the past few months, going from 167 members to 219 in June alone

There was a very special edition of the Amazon Web Services User Group North in Manchester last night, as the Chief Evangelist of AWS, Seattle’s own Jeff Barr, was our guest speaker. After a 5,500 mile road trip visiting user groups across the US, Jeff is in the UK to talk at conferences and various user groups across the country, his first stop being Monday night’s AWS User Group North (@AWSUGN).

TechHub Manchester was packed with 60 AWS users of all experience levels. Everyone had plenty to take away from Jeff’s fascinating talk, as he covered each AWS service. This was a unique opportunity to ask one of the most senior people in AWS direct questions about the various services, and one which was seized by the group members!

AWS Chief Evangelist Jeff Barr takes questions from the packed TechHub Manchester crowd

The questions came in thick and fast, but Jeff did get a break at the halfway mark upon the arrival of a mountain of pizzas, provided by event sponsors Intechnica. As AWS consulting partners, Intechnica have supported the user group since its inception 2 years ago.

If you missed out on Jeff’s talk (where were you?!), he’ll be at the London AWS User Group on Thursday 27th June.

7 Great Lightning talks at Amazon Web Services User Group North

We’ve been organising the Amazon Web Services User Group North (or #AWSNorth on Twitter) for about 2 years now. The group brings AWS users and experts together in one place to foster learning, discussion and networking, and it’s always an interesting evening – typically we have a guest speaker showing an AWS related tool or sharing a use case.

We usually have an Amazonian in attendance, which is great for the AWS users to voice their opinions and ask for help in person. At the previous meetup, Ryan Shuttleworth from Amazon told me about the recently started Irish AWS User Group and its success with lightning talks – quick-fire, five-minute talks from members of the group. The idea of lightning talks is that you get to hear from a variety of speakers on all sorts of topics in a short space of time. It’s also somehow appropriate that a group about “clouds” produces “lightning” talks…

So last night we had our first go at lightning talks. First in the firing line was David Hall, from the night’s sponsors TeleCity Group, who explained the history behind TeleCity’s AWS Direct Connect service. This was followed by Amazon’s Data Scientist extraordinaire Matt Wood, an old favourite speaker from the early days of our group. Since the first few meetups, though, Matt has relocated to Amazon’s home of Seattle, so in this case he returned in Skype form. Matt(’s giant head on a screen) gave a great lightning talk on provisioning throughput “like a boss” (click link for Matt’s slides).

Matt Wood gave his lightning talk live from Seattle

Next up was Bashton.com’s Sam Bashton, who talked in depth about their experiences with AWS CloudFormation, followed by Alastair Henderson. Alastair took the opportunity to open the floor to suggestions on the best way to solve a specific problem he was having, utilising both his presentation time and the expertise of group members effectively. Smart thinking! Richard Bosomworth took his turn, giving a detailed hands-on demonstration of using Scalr to manage EC2.

Alastair Henderson took the opportunity to seek advice from his fellow AWS enthusiasts

Tom Chiverton gave an interesting and open presentation about his experiences in moving away from “spinning rust” (to quote Linus Torvalds), with a focus on Amazon Simple Storage Service (S3). And in the final talk of the evening, Channel 4’s Ric Harvey returned, having been our headline speaker at the previous meetup. Like Tom, Ric talked about using S3, but this time using it a little bit differently.

Thanks to TeleCity Group for sponsoring the drinks and pizzas, and to TechHub Manchester for once again being great hosts. If you need a great venue to host your tech meetup, I would highly recommend talking to TechHub!

And of course, thanks to David, Matt, Sam, Alastair, Richard, Tom & Ric for making the evening a success with their brilliant presentations. We’ll be sure to do more sessions like this in future, in the hope that lightning strikes twice (sorry!).

If you’re interested in AWS, consider joining our Meetup group or one of the many user groups around the world.

Specification by Example: Tooling Recommendations

Specification by Example has a history that is closely linked to Ruby: the main software tools and development methods came from work on Ruby projects. As the method has started to attract a wider audience, development tools and links to the Microsoft .NET platform have begun to appear.

At Intechnica our approach has come from two directions. An important aspect of the methodology is the use of continuous integration and automated builds, with unit tests and inter-module tests built into the software framework; these prove that the code providing the functionality for the stories is working properly. The development teams are therefore looking to build continuous integration and unit testing into their software development processes. At the same time, we don’t want to be tied to a specific and expensive development platform. Current processes are predicated on using Team Foundation Server, and if the testing components are added to it the solution becomes expensive.

The solution assurance team’s work is in setting out the scope of the project and ensuring that the requirements reflect the customer’s business requirements. Most importantly, it is then taking that living documentation and collaborating with the development team, project managers and the customer to refine the details and identify the key business flows. Initially the outline ideas are put together in whiteboard sessions, which are then recorded in mind-mapping software. There are many different products out there, many of them open source. An important part of the selection criteria is that they are collaborative and have flexible import and export capabilities; it is no good holding meetings and running working groups only to find that the tool can only print its output and doesn’t allow shared input.

The solution we have chosen is MindMeister. Like many tools used for business analysis, testing and development, MindMeister is a Software as a Service (SaaS) solution. This provides great flexibility, reduces the need to install software onto locked-down PCs or use licensing dongles, and allows easier upgrades. The basic version is free to use with a limited number of mind maps; the personal version, with a greater number of maps and more export formats, is £4.99 a month, and the pro version is £9.99.

Once the project has a satisfactory outline and the scope has been captured, along with a rough outline of the solution, we use SpecLog to start writing the user stories. SpecLog is a tool designed to look at business requirements and allow them to be described fully. For the story-based process of Specification by Example, this allows us to break an application down into its constituent actor goals, business goals and user stories, with each core feature broken out into different user stories. Again it has strong collaboration characteristics: we have chosen a server version that allows a number of people to work on the same project. Linking back to Specification by Example, the output produced is in the Gherkin language.

This is also the point at which the requirements traceability and the summary of requirements documentation are written, so that there is an agreed understanding of the scope of the project, and the tools are put in place for the later test phases that will validate the delivery of the business capability.

This takes us back to our initial introduction to Specification by Example, where we worked on user stories and used a Ruby tool called Cucumber that allowed the project to be detailed. Cucumber also produces and validates Gherkin code. Gherkin is a descriptive language that allows requirements to be described in the Given/When/Then format, with each feature introduced by a short narrative such as:

Given that I am a user of the system
I want to enter a valid order
So that products can be dispatched

In this example a number of characteristic scenarios would be used to describe the order entry feature: an order should have a valid product, reject certain types of product, have correct dispatch details, have a correct order scheme, contain shipping details, reject incorrect quantity amounts and produce an order summary. From this the Gherkin user stories would be produced, and the developer would start to work on the features that provide this software capability.
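
Our own work used Cucumber (Ruby) and SpecLog, but purely as an illustration of the Given/When/Then pattern described above, here is a minimal sketch of one of these scenarios using Python’s behave library. The feature text, step wording and order-validation rule are hypothetical examples, not taken from a real project.

# features/order_entry.feature (Gherkin, shown here as a comment)
#
#   Feature: Order entry
#     As a user of the system
#     I want to enter a valid order
#     So that products can be dispatched
#
#     Scenario: Reject an order with an invalid quantity
#       Given I am a user of the system
#       When I submit an order for 0 units of "Widget"
#       Then the order is rejected because of "invalid quantity"

# features/steps/order_entry_steps.py
from behave import given, when, then

@given("I am a user of the system")
def step_user(context):
    context.orders = []  # stand-in for the real order capture system

@when('I submit an order for {qty:d} units of "{product}"')
def step_submit(context, qty, product):
    # A toy validation rule: quantities must be positive.
    if qty <= 0:
        context.result = ("rejected", "invalid quantity")
    else:
        context.orders.append((product, qty))
        context.result = ("accepted", None)

@then('the order is rejected because of "{reason}"')
def step_rejected(context, reason):
    assert context.result == ("rejected", reason)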

And this brings us to our last software tool. So that user stories can be discussed with the users and reviewed internally during development and testing, we need a tool that can bring together the Gherkin output in a wiki-style web site. This is key to the Specification by Example method, where the output needs to be living documentation, reflecting the improvements and changes made to the requirements and how they are met through the project. Most importantly, the SpecLog description (connected to the development cycle) and the tool used to monitor the progress of the requirements and display their maturity need to be closely aligned. In theory this could be done by putting the Gherkin files into the version control system in which the software providing the capability will be developed, but because we don’t yet have the right connectivity and the development processes are not yet in place, we have put in a simpler solution: the user stories are published into a file share and then pushed out to a wiki-style display tool called Relish. Relish can be fed either from that file share or directly from Gherkin feature files written in simple editors such as Notepad++ or Sublime Text; plugins for both products allow them to validate Gherkin code.

The linking together of the solution assurance and development processes is the next area for examination. When developing features there won’t necessarily be a direct one-to-one relationship between developed code modules and the stories; some modules could be used across a number of stories, but in different functional capabilities. There will also be a much larger number of software modules, and the relationship between the stories and the underlying code needs to be monitored. This area will require a lot of detailed work.

The final step will be to put the agreed user stories into our test management tool (we use PractiTest) as requirements, and then write the core end-to-end process flows as test scenarios, with the test cases within them reflecting the different functional capabilities.

Conclusion

There are a lot of good quality software tools to help work with Specification by Example. Collaboration and flexibility are key, as is basic reliability. We can foresee a time when there is an end-to-end link-up that takes requirements and produces testable artefacts. The next step will be to work out how the development tools hook into this process and, more importantly, whether following the process fully makes it possible to produce a hybrid approach that combines traditional methods with this much leaner one. The biggest challenge is that the process works from a top-down perspective, where traditional software methods use a combination of top-down and bottom-up to ensure that no gaps are inadvertently produced. Addressing this issue will be the most pressing challenge for Specification by Example.

See the other posts in this series by visiting David’s profile page.

How to create templates of Amazon EC2 environments

Sometimes, when you’re developing an application (or even in the testing stage), it’s really useful to be able to roll back to a previous iteration or branch off into several unique environments. You might also want to set up identical environments to send off to other people, such as team members, clients or even sales teams. Cloud platforms like Amazon Web Services and its Elastic Compute Cloud (EC2) service offer a cost-effective solution to this problem, especially compared to traditional (“tin”) infrastructure, but managing all these environments and machine images in the AWS Management Console can quickly become confusing. It would be easier if you could set up a template for each environment iteration and fire up environments directly from those templates.

This is actually very simple to achieve – even for those without deep technical knowledge of AWS or similar IaaS offerings – by using a neat application called CloudFlex (disclosure: yes, it’s made by Intechnica, but we find it very handy).

Step 1: Set up the template

CloudFlex uses a step-by-step wizard to guide you through the process of defining what your environment should look like. This includes the number and type of AMIs (Amazon Machine Images) to be deployed, their security groups, Elastic IPs, load balancers and everything else you might need. You can also give each template a descriptive name to help you identify it at a glance. All of your templates are shown in one place, so you can see what you have saved and launch environments from them easily.
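
CloudFlex keeps this definition behind its wizard, so the structure below is purely illustrative: a rough sketch of the kind of information such a template captures, expressed as plain Python data with placeholder IDs.

# A hypothetical environment template: every value here is a placeholder.
WEB_DEMO_TEMPLATE = {
    "name": "web-demo-v1",
    "image_id": "ami-0123456789abcdef0",        # the AMI to deploy
    "instance_type": "t3.medium",
    "count": 2,                                 # how many instances to launch
    "security_group_ids": ["sg-0123456789abcdef0"],
    "key_name": "web-demo-keypair",             # key pair for remote desktop access
}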

Step 2: Launch an environment

Once you have your template set up, you can start an environment from it. You can either do this manually, or schedule it to start whenever you like and as frequently as you need (such as every morning at the start of the working day). When you start an environment manually, all the details are pre-populated from your chosen template, which keeps everything consistent across multiple environments. After the environment has finished spinning up, CloudFlex gives you quick access to details such as its public DNS; you can also connect to a machine image’s remote desktop through this screen. From there you can do whatever work you need to do on that machine.
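
Under the hood this amounts to starting EC2 instances from the saved details. As a rough, hand-rolled equivalent (not CloudFlex itself), here is how the template sketched above could be launched with today’s boto3 SDK; the region and all template values are assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def launch_environment(template):
    """Start instances from a saved template and return their public DNS names."""
    resp = ec2.run_instances(
        ImageId=template["image_id"],
        InstanceType=template["instance_type"],
        MinCount=template["count"],
        MaxCount=template["count"],
        SecurityGroupIds=template["security_group_ids"],
        KeyName=template["key_name"],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": template["name"]}],
        }],
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]
    # Wait until the instances are running, then look up their public DNS names.
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
    return [
        inst.get("PublicDnsName", "")
        for r in reservations
        for inst in r["Instances"]
    ]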

Step 3: Save environment snapshots

Now that you’ve connected to your AMI and done your work, you might want to keep a snapshot of that image, in its current state, to go back to later (or to distribute to team members etc). You might not want to allow access to the AMI you’re working on in case something gets changed. The best solution is to create a new template from a snapshot of your AMI. To do this, go to the details page of your environment in CloudFlex, give the image a name, click “Create Machine Image”, and the AMI will be copied in its current state to your AWS account. Now repeat steps 1 and 2, this time choosing your new AMI as the machine image for your template. You can then start up as many concurrent versions of the environment as needed and send remote desktop files for each to whoever needs access.
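
The AWS call behind this kind of snapshot is a single image-creation request. Again as an illustrative sketch rather than CloudFlex’s actual implementation (and reusing the ec2 client from the previous sketch), this captures a running instance as a new AMI whose ID can then go into a fresh template:

def snapshot_instance(instance_id, name, description=""):
    """Create a new AMI from the instance's current state and return its ID."""
    resp = ec2.create_image(
        InstanceId=instance_id,
        Name=name,
        Description=description,
        NoReboot=True,  # snapshot without stopping the running machine
    )
    return resp["ImageId"]

# Example: capture the box you just worked on, then reuse the returned
# image ID in a new template (steps 1 and 2 above). IDs are placeholders.
# new_ami = snapshot_instance("i-0123456789abcdef0", "web-demo-v2")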

CloudFlex is available from Intechnica from £99 per month. If you want to know more, visit the website where you can sign up for a trial or leave a comment below.

Performance Testing in the Cloud [Presentation]

Intechnica recently sponsored the British Computer Society’s Special Interest Group in Software Testing (SIGiST) summer conference in London. The SIGiST is a great place to come and listen to the country’s top software testers talk about methodologies, tools, new technology and experiences, as well as to meet others in the world of testing.

View a Photosynth panorama of the SIGiST conference in London

One of the speakers for this summer’s SIGiST conference was Intechnica’s own Richard Bishop, who contributes blog posts for this site. Richard spoke about Intechnica’s findings and observations from the use of cloud platforms in performance testing (we use TrafficSpike, based on Facilita Forecast, to generate load from the cloud for tests, as well as developing and migrating applications for and to the cloud).

Richard Bishop, speaking at the BCS SIGiST summer conference

The presentation was well received, gaining praise on Twitter via the #SIGiST hashtag.

https://twitter.com/webcowgirl/status/215770663586234368

The slides have since been uploaded to SlideShare; view and download them here.

Webinar: Designing Applications for the Cloud

This webinar, from 6th March 2012, was hosted by Intechnica’s Technical Director, Andy Still. Andy talked about the key principles of designing and migrating applications to the cloud, including scaling out, taking new and imaginative approaches to data storage, making full use of the wide range of products and services on offer from cloud providers (beyond hosting), and exploring the many flavours of hybrid solution that allow all types of business to leverage the benefits of the cloud.

Andy has architected and built a number of cloud-based applications, specialising in highly scalable, high-performance, business critical applications.

If you’re planning or considering moving to the cloud in 2012 then this webinar is essential viewing.

More Intechnica webinars

What are the options for testing in the Cloud?

I’m in the final stages of preparing my presentation and workshop session for the UK Test Management Summit next week in London, and it’s making me think more about cloud computing in general as well as performance testing: either testing in cloud environments or using the cloud to deliver more scalable performance tests.

Intechnica’s research paper last year, entitled “How Fast Is The Cloud?”, investigated the relative performance of a simple eCommerce application on various cloud platforms, including IaaS and PaaS options. We demonstrated that a well implemented cloud solution could outperform traditional hardware, but that poor implementations would confirm cloud-sceptics’ suspicions about poor performance in the cloud.

At Intechnica, as well as using cloud environments to functionally and performance test code that we’re developing for clients, we use cloud based performance test tools to test our customers’ own test environments. By using cloud based load generators (injectors) and the Intechnica TrafficSpike product, we can quickly provision tens of load generators, use them for a few hours and then decommission the servers. This allows for highly scalable, comparatively low cost performance testing, particularly when compared to traditional models where multiple servers sit idle, waiting for the one day per week or month when they’re used to their full potential.
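
TrafficSpike handles this provisioning for us, but the underlying pattern is simple enough to sketch. Purely as an illustration (using today’s boto3 SDK, with a placeholder injector AMI and region), a batch of injectors can be started for a test run and thrown away afterwards:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def provision_injectors(injector_ami, count, instance_type="c5.large"):
    """Start a batch of load-generator instances for the duration of a test."""
    resp = ec2.run_instances(
        ImageId=injector_ami,
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def decommission_injectors(instance_ids):
    """Terminate the injectors once the test run has finished, so nothing sits idle."""
    ec2.terminate_instances(InstanceIds=instance_ids)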

The trend in performance testing seems to be a move away from traditional performance test tools and towards cloud-based load generation. This is demonstrated by the growth of companies such as SOASTA, LoadStorm, blitz.io and BlazeMeter. Our workshop at TMF will give test managers the opportunity to discuss these different test technologies and obtain a better understanding of cloud performance and the implications for their business. We’ll also be giving attendees the opportunity to use Intechnica’s CloudFlex product to see how easy it can be to provision multiple, identical test environments for themselves.

I’m looking forward to meeting attendees next week to discuss the implications of cloud computing for those of us in the testing industry.

AWS instances, their ever-changing hostnames and the implications for software licensing

I’ve recently been doing some performance testing for a client and evaluating the use of dynaTrace for monitoring application performance under load. As well as an installation of dynaTrace at the client site, we have a demonstration/evaluation licence which is installed on an AWS cloud server. As well as being useful for client demonstrations, this gives us the opportunity to perform proof of concept exercises and “try things out” away from production systems.

Last week, in an effort to save on the cost of keeping the AWS instance up and running all the time, I decided to shut the server down using the AWS console. When I went back to the server and restarted it, dynaTrace greeted me with a licensing error.

I did some investigation and I found that dynaTrace locks the licence key to the hostname of the server on which it is installed. This is all well and good in a normal environment, but I noticed that the name of the host server changed each time that I rebooted. When I installed dynaTrace, my machine name was ip-03a4d76 and when I restarted the server the name had changed to ip-0a3b11c9.

I looked at the server’s IP addresses and saw that, even though I was using an Elastic IP address to reach the server externally, each restart gave it a new private (internal Amazon) IP address, and the hostname changed with it. The hostname is a hexadecimal representation of the private IP address.

My IP address was 10.59.17.201 and the hostname (which has since changed again) was ip-0a3b11c9 (0A = 10, 3B = 59, 11 = 17 and C9 = 201).
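
The mapping is easy to reproduce: a couple of lines of Python (illustrative only) render each octet of the private IP as two hex digits, giving exactly the hostname pattern above.

def ec2_hostname(private_ip: str) -> str:
    """Render each octet of the private IP as two hex digits, EC2-style."""
    return "ip-" + "".join(f"{int(octet):02x}" for octet in private_ip.split("."))

print(ec2_hostname("10.59.17.201"))  # -> ip-0a3b11c9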

I spoke to dynaTrace, the supplier of our software, and they told me that the licence can be tied to a MAC address rather than a hostname if required, but that didn’t help me, since I understand that MAC addresses also change each time AWS instances restart. Instead I looked at ways of fixing the hostname and found that it was remarkably easy (when you know where to look).

On each Windows AWS server there is a program on the Start menu called “EC2 Service Properties”. Run this program and uncheck the “Set Computer Name” box; you can then set a hostname in the normal way, and it will persist after each reboot. Your hostname-dependent software can then be reinstalled or re-licensed, and you can relax in the knowledge that it will run properly the next time you restart your server.