Parkinson’s Law states that work expands so as to fill the time available. Something similar could be said about network bandwidth; left unchecked, the volume of data will always increase to consume what is available. In other words, continually increasing network bandwidth should never be the only approach to network capacity provision; however much is available it still needs to be used intelligently.
There are three basic ways to address overall traffic volume:
- Cut out unwanted data
- Minimise volumes of the kind of data you do want
- Make use of bandwidth at all times (akin to peak and off-peak power supply)
There are two types of unwanted data. First, there are legitimate users doing things they really should not be doing. From the network perspective, this only becomes a problem when that activity consumes large amounts of bandwidth, such as watching video or downloading games, films and music. A mix of policy and technology can be deployed to keep users focussed on their day jobs and thus make productive use of bandwidth.
The technology available includes web content and URL filtering systems from vendors such as Blue Coat, Websense and Cisco, and the filtering or blocking of network application traffic with technology from certain firewall vendors, including Palo Alto Networks and Check Point. In both cases care must be taken to avoid false positives that end up blocking legitimate use.
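The category-based blocking such systems perform can be sketched simply. The code below is only an illustration, not how any of the named products work: the domains, categories and fail-open policy are all hypothetical, and commercial filters use far richer classification databases.

```python
from urllib.parse import urlparse

# Hypothetical category database: domain -> content category.
CATEGORY_DB = {
    "video.example.com": "streaming",
    "games.example.net": "gaming",
    "intranet.example.org": "business",
}

BLOCKED_CATEGORIES = {"streaming", "gaming"}

def is_allowed(url: str) -> bool:
    """Allow a URL unless its domain maps to a blocked category.
    Unknown domains are allowed (fail-open) to limit false positives."""
    domain = urlparse(url).netloc
    category = CATEGORY_DB.get(domain)
    return category not in BLOCKED_CATEGORIES

print(is_allowed("http://video.example.com/movie"))    # streaming: blocked
print(is_allowed("http://intranet.example.org/docs"))  # business: allowed
```

The fail-open choice for unknown domains reflects the point above: over-aggressive blocking that catches legitimate use can be as costly as the bandwidth it saves.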
The second source of unwanted data is external and insidious; cybercrime and hacktivism. At one level this means pre-filtering of network traffic to keep spam email etc. at bay, especially as spammers have started exploiting increased bandwidth to send rich media messages. Most organisations now have such filtering in place using services such as Symantec’s MessageLabs or Mimecast’s email security.
Perhaps more serious is the threat of becoming the target of a denial of service (DoS) attack. Generally speaking, these are aimed at taking servers out, but one type, the distributed DoS (DDoS) attack, does so by flooding servers with network requests, so it also has the effect of slowing or blocking the network. Technology is available to identify and block such attacks from vendors such as Arbor, Corero and Prolexic.
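The basic signal such mitigation tools work from is a source sending requests far faster than any legitimate user would. A minimal sketch of that detection idea, with made-up thresholds and no relation to any vendor's actual implementation:

```python
from collections import defaultdict, deque

class RateMonitor:
    """Flag source IPs whose request rate in a sliding window
    exceeds a threshold -- the crudest form of flood detection."""

    def __init__(self, window_seconds=1.0, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.history = defaultdict(deque)  # ip -> recent timestamps

    def record(self, ip, now):
        """Record one request; return True if the IP is over threshold."""
        q = self.history[ip]
        q.append(now)
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

mon = RateMonitor(window_seconds=1.0, max_requests=5)
for t in range(10):
    flooding = mon.record("203.0.113.9", now=0.1 * t)
print(flooding)  # True: 10 requests inside one second
```

Real DDoS mitigation is far more sophisticated (it must separate a flood from a legitimate traffic spike, and cope with many distributed sources), but per-source rate accounting of this kind is the starting point.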
So now (hopefully) only the wanted traffic is left, but this will still expand to fill the pipe if left unchecked. One way to keep it under control is to keep as much 'heavy-lifting' as possible in the data centre. This means deploying applications that minimise the chat between server and end user access devices. To achieve this, data processing should be at the application server with just results being sent to users.
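The bandwidth saving from processing at the application server can be illustrated with a toy comparison (the data and sizes here are invented purely for illustration):

```python
import json

# A hypothetical raw dataset held in the data centre.
rows = [{"region": "EMEA", "sales": i} for i in range(1000)]

# Chatty design: ship every row to the client, which sums them itself.
raw_payload = json.dumps(rows)

# Server-side design: do the sum at the application server and
# send only the result to the user's device.
result_payload = json.dumps(
    {"region": "EMEA", "total": sum(r["sales"] for r in rows)}
)

print(len(raw_payload), len(result_payload))  # result is far smaller
```

The ratio grows with the size of the dataset: the result payload stays a few dozen bytes however many rows sit behind it.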
For the data that does have to be sent, techniques such as compression, de-duplication and caching can minimise the volume further. Two types of vendor step up to the plate here. The first optimise WAN traffic, for example Silver Peak, Riverbed and Blue Coat; their products also help with the local caching of regularly used content. The second are service providers that specialise in caching and content delivery, notably Akamai.
All of the above will free up bandwidth for applications that must have the capacity they need at the time the user wants it: telephony, web and video conferencing and so on. Other applications, such as data backup or uploading data to warehouses for number crunching, must also be given the bandwidth they need, but this can be restricted to times when other applications are not in use, which in most cases will be overnight.
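Gating a bulk job to an off-peak window is a simple check; the 22:00 to 06:00 quiet period below is an assumed example, not a recommendation.

```python
from datetime import time

# Hypothetical overnight quiet period for bulk transfers.
OFF_PEAK_START = time(22, 0)
OFF_PEAK_END = time(6, 0)

def in_off_peak(now: time) -> bool:
    """True if 'now' falls in the overnight window, which wraps midnight."""
    return now >= OFF_PEAK_START or now < OFF_PEAK_END

print(in_off_peak(time(23, 30)))  # True: backup may run
print(in_off_peak(time(14, 0)))   # False: defer until the window opens
```

Note the `or` rather than `and`: because the window wraps midnight, a time qualifies if it is after the start or before the end, which is the usual trip-up when coding overnight schedules.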
Of course, for global companies there is no single night time, and the same is true in certain industries, such as healthcare, which may have urgent network needs at all times of day. When this is the case, urgent and non-urgent network requirements must run side by side, and this requires certain network traffic to be prioritised to ensure quality of service (QoS), an issue that it only makes sense to address once the data flowing through the pipe is clean and wanted.
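The core mechanism behind QoS prioritisation is a priority queue: latency-sensitive traffic is always served before bulk traffic whenever both are waiting. The sketch below shows strict-priority scheduling with invented class names; real network gear implements this in hardware with additional safeguards, such as weighted queues to stop bulk traffic being starved entirely.

```python
import heapq

# Lower number = served first. Class names are illustrative.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

queue = []
counter = 0  # tie-breaker preserving arrival order within a class

def enqueue(packet: str, traffic_class: str) -> None:
    global counter
    heapq.heappush(queue, (PRIORITY[traffic_class], counter, packet))
    counter += 1

def dequeue() -> str:
    """Serve the highest-priority waiting packet."""
    return heapq.heappop(queue)[2]

enqueue("backup-chunk-1", "bulk")
enqueue("voip-frame-1", "voice")
enqueue("video-frame-1", "video")
print(dequeue())  # voip-frame-1 leaves first despite arriving later
```

This also makes concrete the article's closing point: prioritisation only helps if the queue contains wanted traffic, since a scheduler cannot tell junk from work on its own.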