By: Martin Banks, Proprietor, Lian-James Consultancy
Published: 28th July 2014
Copyright Lian-James Consultancy © 2014
Testing is always important, that much is certain. But sometimes I do wonder whether some testing is looking at the wrong issues, or perhaps the right issues for the wrong, or poorly thought-through, reasons. Testing has to be set in the wider context of purpose, and that purpose has to be enhancing, or at least maintaining, a customer's experience of using a service.
Take, for example, some news from US-based cloud load-testing business, BlazeMeter. Earlier this year, the company introduced a service that is, on the face of it, a must for all startups, especially those looking to offer applications and services in the cloud. It is certainly going to be an important part of ensuring the customer experience is not defeated at the first hurdle.
And in that context, the thought keeps nagging away at me that, while this testing is both valid and important, it is also looking at only half of a much broader experience management issue.
BlazeMeter, which already offers a JMeter-based load-testing cloud, has launched a new support program that offers qualifying startups a free, six-month package of its open-source-compatible load-testing cloud services.
This means startups can ensure their web pages and mobile applications will hold up under high demand and perform as expected, delivering a seamless user experience when brought to production.
The new program provides startups with 20 free performance tests per month for up to 1,000 concurrent users across two load servers, with two weeks of data retention. That represents a useful saving of more than $2,000 a year.
The reasoning behind this is simple. “We want to lend a hand to our fellow startups with this package, which will allow them to run sophisticated, large-scale performance and load tests quickly, easily and affordably,” the company says.
Provisioning dozens of test servers and managing the distribution of large-scale load tests can present significant cost and agility barriers to most startups.
BlazeMeter’s cloud-based testing solution not only solves this problem but also maximises the speed at which development teams can gather valuable load-testing metrics by offering the best options for scalability, cost savings and geographic reach.
The BlazeMeter cloud, which is 100 percent compatible with Apache JMeter, also allows developers and operations teams to select the global locations from which to review their applications' load and response times, without having to stand up a data centre in each location.
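To make the idea concrete, the kind of measurement such a service automates can be sketched in miniature. The snippet below is an illustrative Python sketch, not BlazeMeter's or JMeter's actual API: it spins up a pool of concurrent "virtual users" against a stand-in request handler and reports the request count and latency figures, which is essentially what a load-test plan produces at scale.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(user_id: int) -> float:
    """Stand-in for a real HTTP request; a real test plan would hit a URL."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate roughly 10 ms of server work
    return time.perf_counter() - start


def run_load_test(virtual_users: int = 50, requests_per_user: int = 4) -> dict:
    """Fire requests from many concurrent virtual users and collect latencies."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [
            pool.submit(handle_request, u)
            for u in range(virtual_users)
            for _ in range(requests_per_user)
        ]
        latencies = [f.result() for f in futures]
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": sorted(latencies)[int(0.95 * len(latencies)) - 1],
    }


report = run_load_test()
print(report)
```

A hosted service layers the hard parts on top of this loop: provisioning the load servers, distributing the virtual users across geographic regions, and retaining the metrics for later comparison.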
This is, obviously, an important capability. Putting out a web service that dies at the first sign of any significant load will certainly be a zero experience for customers, and therefore very bad for business. But expected high-stress workloads are one thing. The real world of providing web services is full of the unexpected—and it is the unexpected that can get in the way of users of such applications and services conducting whatever transactions they had planned.
The most common problem is that visitor traffic to web services ends up unmanaged: everyone arrives at once, there is no prioritisation, and it is fair shares for all. But from whatever perspective the provider of that service is coming, and most often that will be the generation of business and revenue in some way, visitor prioritisation is a key capability.
It will be valuable indeed if the testing BlazeMeter undertakes shows that a web service can handle 1 million accesses a minute with ease. But if 99 percent of those hits are, for example, from tyre-kickers simply crawling round the web aimlessly, that can be bad for business: the remaining 1 percent, with real purchasing requirements, may not be able to get on the website at all.
And don’t say you have never done your bit of aimless, web-based tyre-kicking, I know I have.
There are now ways of managing that prioritisation, however. Tools such as vAC from NetPrecept can identify the actions of those site visitors with obvious intentions of buying and can, at times of high-loading, ensure that they are prioritised in terms of access.
The tyre-kickers can find their access not just restricted but even terminated in a positive manner, such as being informed what is happening and being given a priority access voucher to use when they come back.
Actions of this type ensure that business is conducted as efficiently as possible, even during times of high stress loading. For example, vAC can allow users of a website who are actually making a transaction to continue to completion even if the website suffers a Distributed Denial of Service attack at the time. They also make an important contribution to the wider issue of at least maintaining, and often enhancing, a customer's experience of using a website.
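The prioritisation idea is simple enough to sketch. The following Python fragment is purely illustrative and is not NetPrecept's vAC: it uses a crude, hypothetical intent signal (items in the visitor's basket) to admit likely buyers first when a site is over capacity, and issues deferred visitors a return voucher, as described above.

```python
from dataclasses import dataclass


@dataclass
class Visitor:
    ident: str
    has_basket: bool  # crude intent signal: a non-empty basket suggests a buyer


def admit_visitors(visitors: list, capacity: int):
    """Admit likely buyers ahead of browsers; defer the overflow with a voucher.

    Returns (admitted, vouchers), where vouchers maps each deferred visitor's
    id to a priority-access code they can use when they come back.
    """
    buyers = [v for v in visitors if v.has_basket]
    browsers = [v for v in visitors if not v.has_basket]
    ranked = buyers + browsers  # buyers always queue ahead of browsers
    admitted = ranked[:capacity]
    vouchers = {v.ident: "PRIORITY-RETURN" for v in ranked[capacity:]}
    return admitted, vouchers
```

Real products weigh far richer signals than a basket flag, of course, but the principle is the same: under a surge, visitors with demonstrated transactional intent stay at the front of the queue while the rest are turned away politely.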
So testing of the type offered by BlazeMeter is important, but in the context of enhancing the customer's experience of using a service it is only part of the battle, and it can even help to create a problem in its own right. Without the access process being managed and prioritised, even very high-capacity websites will get swamped. If that leads ISVs and service providers towards the notion that adding even more capacity is the right answer, they are simply sliding down a vicious spiral of ever-increasing cost.
It is not just access capabilities that are important to good customer experiences, it is managing who gets priority, and why.
This opinion piece was first published in Cloud Services World
Posted: 29th July 2014 | By Alon Girmonsky :
I really enjoyed reading your article!! Truly fascinating...
Testing is done by different professionals with different skill sets throughout the product life cycle (e.g. at the development stage, continuous integration, pre-production, continuous deployment and post-production). It's very important that each professional has an adequate solution for testing, whether you are a developer or performance engineer using a comprehensive script-based solution, or a business analyst doing what you call "tire-kicking".
While a solution that doesn't match your role, skill set and responsibility can create friction, having the right solution can certainly amplify your work efficiency.
I'm looking forward to reading more.
Posted: 5th August 2014 | By Silvia Siqueira :
Hi Martin, very good point in your article. I worked at HP and participated in a couple of round tables discussing the challenges of performance testing, and one of the top ones was understanding requirements for performance. For some companies it is hard to collect performance requirements up front, or even to start performance test planning together with development.
Well, for applications that are already in production, the solution is to collect information about the application's behaviour in production (monitoring metrics) and use it as a baseline for performance. Another option is to leverage other web/mobile analytics (e.g. Google Analytics).
Another important point to consider is continuous testing, where performance testing starts at development with unit tests.
For those who are interested Mark Tomlinson and I will be delivering a webinar on August 20th about "planning performance testing" and discussing what metrics are important to consider before starting the test. Click here to get additional information: http://www.vivit-worldwide.org/events/event_details.asp?id=471367&group
Published by: electronicdawn Ltd.