On-demand applications are often talked about in terms of how independent software vendors (ISVs) should be adapting the way their software is provisioned to customers. However, these days the majority of on-demand applications are being provided by end user organisations to external users: consumers, users from customer or partner organisations and their own employees working remotely.
A recent Quocirca research report, “In demand: the culture of online services provision”, found that 58% of northern European organisations (from the UK, Ireland and the Nordic region) were providing on-demand e-commerce services to external users. Not surprisingly, financial services topped the list at 84% (showing how ubiquitous the provision of online banking and the like now is). This was followed by technology, utilities and energy at 79%, and retail, distribution and transport at 70%.
However, there was plenty of such activity in other sectors. 61% of manufacturers were providing on-demand applications, most often to other businesses (think connected supply chain systems). For professional services the figure was 56%, again most often to other businesses. For educational organisations it was 37%. The public sector trailed with just 17%, which is surprising given the commitment of many governments to so-called e-agendas.
At one level this is good news: more direct online interaction with consumers, partners and other businesses should speed up processes and sales cycles and extend geographic reach; organisations that do not offer this will be less competitive. However, there are two big caveats.
- These benefits will only be gained if these applications perform well and have a high percentage of uptime (approaching 100% in many cases).
- Any application exposed to the outside world is a security risk. It may be attacked as a way into an organisation’s IT infrastructure through software vulnerabilities, or to stop the application itself from running effectively (application-level denial of service/DoS), limiting the organisation’s ability to carry on business and often damaging its reputation.
So, how does a business ensure the performance and security of its online applications?
The performance of online applications
Two things need to be achieved here. First there needs to be a way of measuring performance and second there needs to be an appreciation of, and investment in, the technology that ensures and improves performance.
Testing the performance of applications before they go live can be problematic. Development and test environments are often isolated from the real world and, whilst user workloads can be simulated to test performance on centralised infrastructure, the real world network connections users rely on, which are increasingly mobile ones, are harder to test. The availability of public cloud platforms helps as run-time environments can be simulated, even if the ultimate deployment platform is an internal one. This saves an organisation having to over-invest in its own test infrastructure.
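As a sketch of what such pre-deployment load testing involves, the following simulates a pool of concurrent users and reports latency percentiles. Everything here is illustrative — the URL is a placeholder and the stubbed `fetch` function stands in for a real HTTP call, which it would be swapped for in an actual test:

```python
import concurrent.futures
import random
import statistics
import time

def fetch(url: str) -> float:
    """Issue one request and return its latency in seconds.
    The network call is stubbed with a random sleep so the sketch is
    self-contained; replace the sleep with a real HTTP request in practice."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for the real round trip
    return time.perf_counter() - start

def load_test(url: str, users: int = 20, requests_per_user: int = 5):
    """Simulate `users` concurrent clients and report latency percentiles."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: fetch(url),
                                  range(users * requests_per_user)))
    return {
        "median": statistics.median(latencies),
        "p95": statistics.quantiles(latencies, n=20)[-1],  # 95th percentile
        "max": max(latencies),
    }

if __name__ == "__main__":
    print(load_test("https://example.com/app"))
```

Tracking a high percentile rather than the average matters: it is the slowest few percent of requests that external users notice and remember.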
So, upfront testing is all well and good, but, ultimately, the user experience needs to be monitored in real time after deployment. This is not just because it is not possible to test all scenarios before deployment, but because the load on an application can change unexpectedly, due to rising user demand or other issues, especially over shared networks. User experience monitoring was the subject and title of a 2010 Quocirca report, much of which is still relevant today, however the biggest change since then has been the relentless rise in the number of mobile users.
Examples of tools for the end-to-end monitoring of the user experience, covering both the application itself and the network impact on it, include CA Application Performance Management, Fluke Networks’ Visual Performance Manager, Compuware APM and ExtraHop Networks (which has just released specific support for Amazon Web Services/AWS).
It is all well and good being able to monitor and measure performance, but how do you respond when it is not what it should be? There are two issues here: first, the ability to increase the number of application instances and supporting infrastructure to support the overall workload; and second, the ability to balance that workload between these instances.
Increasing the resources available is far easier than it used to be with the virtualisation of infrastructure in-house and the availability of external infrastructure-as-a-service (IaaS) resources. For many, deployment is now wholly on shared IaaS platforms, where increased consumption of resources by a given application is simply extended across the cloud service provider’s infrastructure. This can be achieved because with many customers sharing the same resources, each will have different demands at different times.
Global providers include AWS, Rackspace, Savvis, Dimension Data and Microsoft. There are many local IT service providers (ITSPs) with cloud platforms; for example in the UK, Attenda, Nui Solutions, Claranet and Pulsant. Some ITSPs partner with one or more global providers to make sure they too have access to a wide range of resources for their customers.
Even those organisations that choose to keep their main deployment on-premise can benefit from the use of ‘cloud-bursting’ (the movement of application workloads to the cloud to support surges in demand) to supplement their in-house resources. Indeed, in Quocirca’s “In-demand” report, those organisations providing on-demand applications to external users were considerably more likely to recognise the benefits of cloud-bursting than those that did not.
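The arithmetic behind a cloud-bursting decision can be sketched simply. The figures below are assumptions for illustration only — real autoscalers use richer signals such as CPU utilisation, latency and queue depth rather than a single throughput number:

```python
import math

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float,
                     headroom: float = 0.25) -> int:
    """Instances required for the current load, keeping `headroom`
    spare capacity in reserve for sudden surges in demand."""
    target = requests_per_sec * (1 + headroom)
    return max(1, math.ceil(target / capacity_per_instance))

def burst_to_cloud(needed: int, on_premise_limit: int) -> tuple[int, int]:
    """Fill on-premise capacity first; burst the remainder to the cloud."""
    on_prem = min(needed, on_premise_limit)
    return on_prem, needed - on_prem

# A surge to 1,000 req/s with instances that each handle 150 req/s:
needed = instances_needed(1000, 150)  # 9 instances once 25% headroom is added
print(burst_to_cloud(needed, on_premise_limit=6))  # 6 on-premise, 3 in the cloud
```

The pay-per-use economics work because the three burst instances exist only for the duration of the surge.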
Being able to measure performance and having access to virtually unlimited resources to respond to it is one thing, but how do you balance the workload across them? The key technologies for achieving this are application delivery controllers (ADCs).
ADCs are basically next-generation load balancers and are proving to be fundamental building blocks for advanced application and network platforms. They enable the flexible scaling of resources as demand rises and/or falls and offload work from the servers themselves. They also provide a number of other services that are essential to the effective operation of on-demand applications, including:
- Network traffic compression – to speed up transmission
- Data caching – to make sure regularly requested data is readily available
- Network connection multiplexing – making effective use of multiple network connections
- Network traffic shaping – a way of reducing latency by prioritising the transmission of workload packets and ensuring quality of service (QoS)
- Application-layer security – the inclusion of web application firewall (WAF) capabilities to protect on-demand applications from outside attack, for example application-level denial of service (DoS)
- Secure sockets layer (SSL) management – acting as the landing point for encrypted traffic and managing the decryption and rules for on-going transmission
- Content switching – routing requests to different web services depending on a range of criteria, for example the language settings of a web browser or the type of device the request is coming from
- Server health monitoring – ensuring servers are functioning as expected and serving up data and results that are fit for transmission
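Two of the duties above — balancing load and monitoring server health — can be illustrated with a toy least-connections balancer. This is a deliberate simplification to show the routing logic, not a reflection of how any particular ADC product works:

```python
class LoadBalancer:
    """Minimal least-connections balancer with health state — a toy
    illustration of two ADC duties (load balancing and server health
    monitoring), not a substitute for a real ADC."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # open connections per server
        self.healthy = set(servers)            # servers passing health checks

    def mark_down(self, server):
        """A failed health check takes the server out of rotation."""
        self.healthy.discard(server)

    def mark_up(self, server):
        """A recovered server rejoins the pool."""
        if server in self.active:
            self.healthy.add(server)

    def route(self):
        """Send the request to the healthy server with fewest connections."""
        candidates = [s for s in self.active if s in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        server = min(candidates, key=lambda s: self.active[s])
        self.active[server] += 1
        return server

    def release(self, server):
        """A completed request frees one connection slot."""
        self.active[server] -= 1
```

A real ADC layers the other services listed above — compression, caching, SSL termination and so on — around exactly this kind of routing core.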
Until recently the best-known ADC supplier was Cisco; however, Cisco has announced it will discontinue further development of its Application Control Engine (ACE) and recommends another leading vendor’s product instead: Citrix’s NetScaler. Other suppliers include F5, the largest dedicated ADC specialist, Riverbed, Barracuda, A10, Array Networks and Kemp.
So, you can measure performance, you have the resources to meet demand and the means to balance the workload across them, as well as the ability to off-load some of the work with ADCs; but what about security?
The security of online applications
The first thing to say about the security of online applications is that you do not have to do it all yourself. Use of public infrastructure puts the onus on the service provider to ensure security up to a certain level. Most have a shared responsibility model; for example, AWS states:
- AWS takes responsibility for securing its facilities and its server, network and virtualisation infrastructure
- The customer is free to choose its operating environment and how it is configured, and must set up its own security groups and access control lists.
However, regardless of where the application is deployed, it will be open to attack. A 2012 Quocirca report underlined the scale of the application security challenge. The average enterprise tracks around 500 mission-critical applications—in financial services organisations it is closer to 800. The security challenge increases as more and more of these applications are opened up to external users.
Beyond ensuring the training of developers, there are three main approaches to testing and ensuring application security:
- Code and application scanning: thorough scanning aims to eliminate software flaws. There are two approaches; the static scanning of code or binaries before deployment and the dynamic scanning of binaries during testing or after deployment. On-premise scanning tools have been relied on in the past—IBM and HP bought two of the main vendors. However, the use of on-demand scanning services, for example from Veracode, has become increasingly popular as the providers of such services have visibility into the tens of thousands of applications scanned on behalf of thousands of customers. Such services are often charged for on a per-application basis, so unlimited scans can be carried out, even on a daily basis. The relatively low cost of on-demand scanning services makes them affordable and scalable for all applications including non-mission critical ones.
- Manual penetration testing (pen-testing): specialist third parties are engaged to test the security of applications and the effectiveness of defences. Because actual people are involved in the process, pen-testing is relatively expensive and only carried out periodically; new threats may emerge between tests. Most organisations will find pen-testing unaffordable for all deployed software, so it is generally reserved for the most sensitive and vulnerable applications.
- Web application firewalls (WAF): these are placed in front of applications to protect them from application-focussed threats. They are more complex to deploy than traditional network firewalls and, whilst affording good protection, do nothing to fix the underlying flaws in software. WAFs also need to scale with traffic volumes, as more traffic means more cost. As has been pointed out, WAFs are a feature of many ADCs, and are less likely to be deployed as separate products than they were in the past. They also protect against application level DoS where scanning and pen-testing cannot.
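At its core, a WAF inspects each request against a rule set before it reaches the application. The sketch below shows the idea with three illustrative signatures only — real products ship with large, regularly updated rule sets (the OWASP Core Rule Set for ModSecurity, for instance) and far more sophisticated request parsing:

```python
import re

# Illustrative signatures only — a real WAF rule set is far larger and is
# updated continually as new attack techniques emerge.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # basic SQL injection pattern
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"\.\./"),                      # directory traversal
]

def inspect(request_path: str, request_body: str = "") -> bool:
    """Return True if the request looks safe, False if any signature fires."""
    payload = f"{request_path}\n{request_body}"
    return not any(sig.search(payload) for sig in SIGNATURES)
```

Note what this does and does not achieve: it blocks known attack patterns in transit, but the vulnerable code behind it remains vulnerable — which is why WAFs complement, rather than replace, scanning and pen-testing.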
100% software security is never going to be guaranteed and many organisations use multiple approaches to maximise protection. However, interestingly, as one of the reasons for having demonstrable software security is to satisfy auditors, compliance bodies do not themselves mandate multiple approaches. For example the Payment Card Industry Security Standards Council (PCI SSC) deems code scanning to be an acceptable alternative to a WAF.
The number of on-demand applications provided by businesses in all sectors is set to increase further. Users will become even less tolerant of poor performance as they rely more on on-demand services as part or all of the way they engage with suppliers. Hackers and activists will continue to become more sophisticated in the way they attack online applications. The technology that supports performance and provides security will continue to improve over time; the businesses that make best use of it will be the most effective providers of online services.
1 – In demand: the culture of online services provision, Quocirca 2013
2 – User experience monitoring, Quocirca 2010
3 – Outsourcing the problem of software security, Quocirca 2012
This article first appeared in Computer Weekly