By: Bob Tarzey, Service Director, Quocirca
Published: 12th December 2013
Copyright Quocirca © 2013
On-demand applications are often talked about in terms of how independent software vendors (ISVs) should be adapting the way their software is provisioned to customers. However, these days the majority of on-demand applications are being provided by end user organisations to external users: consumers, users from customer or partner organisations and their own employees working remotely.
A recent Quocirca research report, “In demand: the culture of online services provision”, found that 58% of northern European organisations (from the UK, Ireland and the Nordic region) were providing on-demand e-commerce services to external users. Not surprisingly, financial services topped the list, with 84% of organisations doing so (showing how ubiquitous the provision of online banking and similar services now is). This was followed by technology, utilities and energy (79%) and retail, distribution and transport (70%).
However, there was plenty of such activity in other sectors. 61% of manufacturers were providing on-demand applications, most often to other businesses (think connected supply chain systems). For professional services the figure was 56%, again most often to other businesses; for educational organisations it was 37%. The public sector trailed with just 17%, which is surprising given the commitment of many governments to so-called e-agendas.
At one level this is good news: more direct online interaction with consumers, partners and other businesses should speed up processes and sales cycles and extend geographic reach, and those that do not offer such interaction will be less competitive. However, there are two big caveats: the performance and the security of the applications involved.
So, how does a business ensure the performance and security of its online applications?
The performance of online applications
Two things need to be achieved here. First there needs to be a way of measuring performance and second there needs to be an appreciation of, and investment in, the technology that ensures and improves performance.
Testing the performance of applications before they go live can be problematic. Development and test environments are often isolated from the real world and, whilst user workloads can be simulated to test performance on centralised infrastructure, the real world network connections users rely on, which are increasingly mobile ones, are harder to test. The availability of public cloud platforms helps as run-time environments can be simulated, even if the ultimate deployment platform is an internal one. This saves an organisation having to over-invest in its own test infrastructure.
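As an illustration of the kind of workload simulation described above, the sketch below drives a stubbed request function from a pool of concurrent simulated users and summarises the resulting latencies. It is a minimal example only: `simulated_request` is a hypothetical stand-in for a real HTTP call to the staging endpoint under test, and the user counts are arbitrary.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request() -> float:
    """Hypothetical stand-in for a real request to the application under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.01))  # simulated service time
    return time.perf_counter() - start

def run_load_test(concurrent_users: int, requests_per_user: int) -> dict:
    """Drive the endpoint with a pool of simulated users and summarise latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(
            lambda _: simulated_request(),
            range(concurrent_users * requests_per_user)))
    latencies.sort()
    return {
        "requests": len(latencies),
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95)],
    }

result = run_load_test(concurrent_users=20, requests_per_user=10)
print(result["requests"])  # 200
```

In a real test the thread pool would be replaced by load injectors, possibly running on a public cloud platform, as the text notes.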
So, upfront testing is all well and good, but, ultimately, the user experience needs to be monitored in real time after deployment. This is not just because it is impossible to test all scenarios before deployment, but because the load on an application can change unexpectedly, due to rising user demand or other issues, especially over shared networks. User experience monitoring was the subject and title of a 2010 Quocirca report, much of which is still relevant today; however, the biggest change since then has been the relentless rise in the number of mobile users.
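One core idea behind such monitoring can be sketched simply: keep a rolling window of recent response times and flag when a high percentile drifts past an acceptable level. The class name, window size and 500 ms threshold below are illustrative assumptions, not features of any particular product.

```python
from collections import deque

class ExperienceMonitor:
    """Rolling-window response-time monitor that flags a degraded experience.

    The 500 ms p95 threshold is an illustrative assumption, not a standard.
    """
    def __init__(self, window: int = 100, p95_threshold_s: float = 0.5):
        self.samples = deque(maxlen=window)  # oldest samples drop off
        self.threshold = p95_threshold_s

    def record(self, response_time_s: float) -> None:
        self.samples.append(response_time_s)

    def degraded(self) -> bool:
        if len(self.samples) < 20:  # too few samples to judge
            return False
        ordered = sorted(self.samples)
        p95 = ordered[int(len(ordered) * 0.95)]
        return p95 > self.threshold

monitor = ExperienceMonitor()
for t in [0.1] * 30:
    monitor.record(t)
print(monitor.degraded())  # False: p95 well under the threshold
for t in [2.0] * 30:
    monitor.record(t)
print(monitor.degraded())  # True: recent responses are slow
```

Commercial tools of the kind listed below do far more, correlating application and network measurements end to end; this only shows the basic alerting logic.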
Examples of tools for the end-to-end monitoring of the user experience, which covers both the application itself and the network impact on it, include CA Application Performance Management, Fluke Networks’ Visual Performance Manager, Compuware APM and ExtraHop Networks (which has just released specific support for Amazon Web Services/AWS).
It is all well and good being able to monitor and measure performance, but how do you respond when it is not what it should be? There are two issues here: first, the ability to increase the number of application instances and supporting infrastructure to support the overall workload and, second, the ability to balance that workload between these instances.
Increasing the resources available is far easier than it used to be with the virtualisation of infrastructure in-house and the availability of external infrastructure-as-a-service (IaaS) resources. For many, deployment is now wholly on shared IaaS platforms, where increased consumption of resources by a given application is simply extended across the cloud service provider’s infrastructure. This can be achieved because with many customers sharing the same resources, each will have different demands at different times.
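The scaling decision itself can be reduced to a simple rule of the kind cloud platforms automate: provision enough instances to carry the current load with some headroom, never dropping below a minimum. The function below is a sketch under assumed parameters; real IaaS autoscaling policies (on AWS and similar platforms) offer far richer rules.

```python
import math

def desired_instances(current_load_rps: float,
                      capacity_per_instance_rps: float,
                      min_instances: int = 2,
                      headroom: float = 1.2) -> int:
    """Target-tracking sketch: enough instances for the current request
    rate plus ~20% headroom, floored at a configured minimum.
    All parameter values here are illustrative assumptions."""
    needed = math.ceil(current_load_rps * headroom / capacity_per_instance_rps)
    return max(min_instances, needed)

print(desired_instances(1000, 250))  # 5: ceil(1200 / 250)
print(desired_instances(100, 250))   # 2: floored at the configured minimum
```

The point of the headroom factor is that scaling up takes time; provisioning for slightly more than the observed load absorbs short surges while new instances start.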
Global providers include AWS, Rackspace, Savvis, Dimension Data and Microsoft. There are many local IT service providers (ITSPs) with cloud platforms; for example in the UK, Attenda, Nui Solutions, Claranet and Pulsant. Some ITSPs partner with one or more global providers to make sure they too have access to a wide range of resources for their customers.
Even those organisations that choose to keep their main deployment on-premise can benefit from the use of ‘cloud-bursting’ (the movement of application workloads to the cloud to support surges in demand) to supplement their in-house resources. Indeed, in Quocirca’s “In-demand” report, those organisations providing on-demand applications to external users were considerably more likely to recognise the benefits of cloud-bursting than those that did not.
Being able to measure performance and having access to virtually unlimited resources to respond to it is one thing, but how do you balance the workload across them? The key technologies for achieving this are application delivery controllers (ADCs).
ADCs are essentially next-generation load balancers and are proving to be fundamental building blocks for advanced application and network platforms. They enable the flexible scaling of resources as demand rises and falls, and offload work from the servers themselves, as well as providing a number of other services that are essential to the effective operation of on-demand applications, such as SSL offload, caching and compression.
Until recently the best-known ADC supplier was Cisco; however, Cisco has announced it would discontinue further development of its Application Control Engine (ACE) and recommends another leading vendor’s product instead: Citrix’s NetScaler. Other suppliers include F5 (the largest dedicated ADC specialist), Riverbed, Barracuda, A10, Array Networks and Kemp.
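To make the load-balancing side of an ADC concrete, the sketch below implements one common policy, least connections: each new request goes to the server currently handling the fewest. The server names are hypothetical, and production ADCs layer health checks, SSL offload and caching on top of logic like this.

```python
class LeastConnectionsBalancer:
    """Minimal sketch of one ADC load-balancing policy: route each new
    request to the back-end server with the fewest active connections."""

    def __init__(self, servers):
        self.active = {server: 0 for server in servers}  # connection counts

    def acquire(self) -> str:
        """Pick the least-loaded server and count the new connection."""
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server: str) -> None:
        """A connection to this server has finished."""
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["app-1", "app-2", "app-3"])
first = lb.acquire()   # all idle: ties break on declaration order
second = lb.acquire()  # the first server is now busier
lb.release(first)
print(lb.acquire())    # app-1: free again, so chosen next
```

Least connections adapts to uneven request costs better than plain round robin, which is one reason ADCs expose a choice of such policies.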
So, you can measure performance, you have the resources to meet demand and the means to balance the workload across them, as well as to offload some of the work with ADCs; but what about security?
The security of online applications
The first thing to say about the security of online applications is that you do not have to do it all yourself. Use of public infrastructure puts the onus on the service provider to ensure security up to a certain level. Most have a shared responsibility model; AWS, for example, takes responsibility for securing the underlying infrastructure, while customers remain responsible for securing the applications and data they deploy on it.
However, regardless of where the application is deployed, it will be open to attack. A 2012 Quocirca report underlined the scale of the application security challenge. The average enterprise tracks around 500 mission-critical applications—in financial services organisations it is closer to 800. The security challenge increases as more and more of these applications are opened up to external users.
Beyond ensuring the training of developers, there are three main approaches to testing and ensuring application security: scanning application code for vulnerabilities, testing running applications (penetration testing) and shielding them with web application firewalls (WAFs).
100% software security is never going to be guaranteed and many organisations use multiple approaches to maximise protection. However, interestingly, as one of the reasons for having demonstrable software security is to satisfy auditors, compliance bodies do not themselves mandate multiple approaches. For example the Payment Card Industry Security Standards Council (PCI SSC) deems code scanning to be an acceptable alternative to a WAF.
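The WAF approach can be illustrated with a deliberately crude sketch: inspect inbound request parameters against signature rules and block matches. The two patterns below (a classic SQL injection probe and a basic cross-site scripting payload) are illustrative only; real WAFs use far richer rule sets and would not rely on two regular expressions.

```python
import re

# Illustrative WAF-style signatures; real rule sets are far more extensive.
RULES = [
    re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE),  # classic SQLi probe
    re.compile(r"<\s*script", re.IGNORECASE),               # basic XSS payload
]

def blocked(request_param: str) -> bool:
    """Return True if any signature matches the inbound parameter value."""
    return any(rule.search(request_param) for rule in RULES)

print(blocked("name=' OR 1=1 --"))           # True
print(blocked("<script>alert(1)</script>"))  # True
print(blocked("plain product search"))       # False
```

Signature matching of this kind blocks known attack patterns at the perimeter, whereas code scanning removes the underlying flaws, which is why the two approaches are treated as alternatives by compliance bodies such as the PCI SSC.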
The number of on-demand applications provided by businesses in all sectors is set to increase further. Users will become even less tolerant of poor performance as they rely more on on-demand services as part or all of the way they engage with suppliers. Hackers and activists will continue to become more sophisticated in the way they attack online applications. The technology to ensure performance and provide security will continue to improve over time; the businesses that make best use of it will be the most effective providers of online services.
1 – In demand: the culture of online services provision, Quocirca 2013
2 – User experience monitoring, Quocirca 2010
3 – Outsourcing the problem of software security, Quocirca 2012
This article first appeared in Computer Weekly
Published by: electronicdawn Ltd.