By: Martin Banks, Associate Analyst - Datacentre & Mainframe, Bloor Research
Published: 19th February 2008
Copyright Bloor Research © 2008
The trouble with trendy news stories is that they soon enough stop being trendy and fade away. This is already starting to become the case with the subject of green machines, especially when they come in the form of large datacentres. Green fatigue is already starting to appear as different vendors extol the virtues of their extra-green technology. There is now even a Standard Performance Evaluation Corporation (SPEC) benchmark to help identify the greenest servers.
While the hardware side is undoubtedly an essential part of the overall green equation, it is by no means the whole of it. While it is laudable in the extreme that semiconductor manufacturers and server vendors work hard to reduce the brute levels of energy consumed by their offerings, less attention than necessary seems to be applied to the other half of the equation—the actual work done at the expense of that energy. A great deal of energy can be wasted by systems as they thrash around executing what can only be described as poorly written application code.
One example of this came to light recently with the announcement by a small UK company, NetPrecept, of its iPEP website access management toolset. The primary target for this is managing website visitor access so that businesses can apply access policies that suit their business models. For example, where an online business wants to run the classic ‘Gold, Silver, Bronze’ customer service level, ensuring that the Gold customers actually get preferential access to the website and its services is an important policy objective. In practice, however, many websites give exactly the same access rights to window shoppers and ‘tyre-kickers’ as they do their biggest spenders.
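NetPrecept has not published how iPEP implements such tiering, so the following is only a minimal sketch of the general idea: waiting requests are served in order of customer tier rather than pure arrival order, so 'Gold' customers genuinely get preferential access. The tier names follow the article; everything else is an assumption.

```python
import heapq

# Hypothetical tier ranking (lower number = served sooner); this is an
# illustration of tiered access, not NetPrecept's actual algorithm.
TIER_PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}

class TieredQueue:
    """Admit waiting website requests by customer tier, then by arrival."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves arrival order within a tier

    def enqueue(self, tier, request):
        heapq.heappush(self._heap, (TIER_PRIORITY[tier], self._seq, request))
        self._seq += 1

    def next_request(self):
        """Return the highest-priority waiting request, or None if idle."""
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TieredQueue()
q.enqueue("bronze", "window-shopper")
q.enqueue("gold", "big-spender")
print(q.next_request())  # the gold customer is served first
```

The effect is that window shoppers are still served, but only once the paying customers ahead of them have been dealt with.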
As a by-product the technology can also be used to observe and trap users (usually innocent parties with Trojan-ed PCs) participating in such malicious activities as Denial of Service attacks.
However, a third by-product of the technology highlights the issue of wasted workload that can now beset every datacentre or server farm running a large and busy website. It is still the norm for such a site to have no capability for discriminating between, and managing, different types of user. ‘Come one, come all’ seems to be the business model as far as site visitors are concerned, regardless of their actual or potential contribution to the revenue stream of the business. While this is admirably egalitarian, and in theory makes very good sense in the supposedly free-wheeling world of web-based business, it also means that there are no policies in place to match the level of website utilisation with the IT resources available to the website, or the costs incurred running it.
According to NetPrecept's CEO, Iain Fraser, the common result is that IT managers resort to over-specifying the resources needed in order to accommodate potential peaks in traffic. To be effective, however, especially if traffic peaks are both common and unplanned, these spare resources must be live, ready to be switched in. If they are not, the boot-time will be more than long enough for many site visitors to abandon their efforts; and if even a small percentage of them are in the process of trying to spend money there is the classic financial double-whammy: lost sales and extra running costs… and let's not think about the wasted capital investment.
So here is a tool that sets out to achieve one task (and does so, though that is a different matter in this context) and, as a by-product, provides a way of constructively throttling the traffic on a website to meet the policy requirements of the business. That, in turn, identifies a way of managing the workload so that the most productive work gets priority and the need for additional resources is reduced, saving capital expenditure, operating costs and energy.
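The 'constructive throttling' idea can be sketched as a simple admission policy: when utilisation nears the resource ceiling, lower-value traffic is deferred first so that paying customers keep getting served without extra capacity being provisioned. The thresholds and tier names below are illustrative assumptions, not NetPrecept's actual policy.

```python
# Illustrative sketch only: a capacity guard that sheds low-value traffic
# as the site approaches saturation. Thresholds are assumed, not published.
def admit(tier, current_load, capacity):
    """Return True if a request from this tier should be served now."""
    utilisation = current_load / capacity
    if utilisation < 0.7:
        return True                         # plenty of headroom: come one, come all
    if utilisation < 0.9:
        return tier in ("gold", "silver")   # shed casual browsers first
    return tier == "gold"                   # near saturation: top tier only

assert admit("bronze", 50, 100)      # 50% load: everyone admitted
assert not admit("bronze", 80, 100)  # 80% load: bronze deferred
assert admit("gold", 95, 100)        # 95% load: gold still served
```

Because the throttle only bites under load, the egalitarian open door remains in place whenever there is spare capacity; the business policy is applied only when capacity, and therefore energy, is actually scarce.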
Until operating systems and application software start to be designed with operational and power efficiency in mind—and now that multicore processors are the norm, that is unlikely to happen until parallel programming models start to gain sway—energy will be needlessly wasted by IT systems working hard to achieve far less than they could. In the meantime, more developments like NetPrecept's iPEP will be needed to keep the energy consumed by bloatware under some sort of control.
Published by: electronicdawn Ltd.