By: Julian Stuhler, Director, Triton Consulting Ltd
Published: 5th February 2010
Copyright Triton Consulting Ltd © 2010
Worldwide economic uncertainty over the last 12 months has put significant pressure on CIOs to, at the very least, keep IT costs level, with a push for cost reduction across the board. Gone are the days when performance issues could be handled simply by adding more hardware, and when such purchases were part of the routine budget cycle.
Managing performance and cost has become a significantly more difficult job, with capacity planners and performance analysts being asked to defer hardware and software upgrades due to squeezed budgets. Many large IT departments have also lost staff through redundancy, putting added pressure on those trying to maintain good application performance.
One of the most effective ways of reducing or containing mainframe costs is to manage CPU consumption better. By slowing the growth of CPU usage, hardware and software upgrades can be deferred, often while improving performance, allowing organisations to keep costs down and performance and profitability up.
MIPS Growth in the Mainframe Market
In a recent mainframe market study, IBM reported that the mainframe has seen a 20% compound annual growth rate in MIPS since 2003. Similarly, Ovum's "The state of the mainframe" research found that mainframe MIPS growth is averaging around 20% per year, and that large mainframe-centric enterprises have been consistently averaging 35%-plus MIPS growth.
Whilst it is good news for businesses to see transactions on the rise, with usage-based pricing for z/OS this increase in workload pushes up software costs and can also negatively impact application performance.
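To put these growth rates in perspective, a short back-of-the-envelope calculation (an illustrative sketch, not drawn from either study) shows how quickly MIPS demand compounds: at 20% annual growth, capacity demand doubles in under four years, and at 35% in well under three.

```python
import math

def years_to_double(annual_growth: float) -> float:
    """Years until a quantity doubles at a constant compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(years_to_double(0.20), 1))  # 3.8 years at 20% CAGR
print(round(years_to_double(0.35), 1))  # 2.3 years at 35% growth
```

At those rates, an unmanaged workload can outgrow each hardware tier within a single budget cycle, which is why even modest reductions in CPU consumption can defer an upgrade by a year or more.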
The effects of growing MIPS
Performance - Typically, any significant increase in the amount of CPU used by a given workload will result in an associated increase in transaction elapsed times. For performance-critical online workloads, that increase can translate directly into poorer critical business metrics such as customer satisfaction and retention.
Just throwing more MIPS at a poorly-performing DB2 workload does not always address the issue. A two-hour response time may be reduced to 1.5 hours by making more CPU time available, but if the problem is a poor access path, some DBA attention could get it down to 5 seconds. This is especially true of application performance tuning, which is where the majority of performance issues tend to lie.
Cost - Although performance is a key issue for many organisations, the major driver for many IT teams is the need to reduce mainframe resource usage and thereby potentially defer hardware upgrades and reduce monthly MIPS costs. There are also human costs to consider: maintaining an underperforming system takes more time and resource from IT teams and adds pressure from the business teams who are calling for improved response times.
Can tuning do enough to reduce costs?
The majority of customers have significant potential for reducing resource consumption through tuning. This is especially true for those with older applications that haven't been actively maintained for a while or who have lost some of their deep DB2 skills through retirement or redundancy.
By implementing key tuning procedures, ongoing software costs can be reduced and mainframe upgrades deferred. In addition, application performance will be enhanced and overall TCO reduced.
One of the major challenges in any environment, but particularly with client/server applications, is determining which component is responsible for poor response times, although the tools for this are improving. I often liken this to the classic board game of "Cluedo": you have to logically and methodically eliminate potential culprits until you're left with the guilty party! Another related challenge is "skills silos", where a client has the individual skills necessary to resolve a particular issue, but no single person has the whole picture and internal culture and/or politics prevent the individuals from communicating and collaborating effectively.
The growing trend towards DB2 workloads supporting ERP applications such as SAP and Siebel is also bringing some very different challenges compared with more traditional workloads.
In recent years the Financial Services industry in particular has been hit hard by audit and compliance regulations. When adding audit trails to existing applications it is very easy to increase the path-length of some transactions by up to 30%. It is critical therefore that these changes are properly designed and implemented to minimise the performance impact.
It is vital that organisations recognise the business value of designing applications for performance from the outset. The best way to ensure this happens is to instil a "Performance Culture" throughout the organisation. This includes ensuring the availability of good skills and expert advice from the beginning of the application development life cycle, formalised design reviews to validate anticipated performance against requirements and a proactive monitoring and tuning strategy once the application goes live.
The benefits of DB2 mainframe tuning can be felt across the entire business. From the CFO who will see significant reduction in IT spend through to the IT teams who benefit from improved application performance and thus improved customer service, a thorough tuning exercise can indeed improve business performance.
About Triton Consulting
Triton Consulting are Data Management specialists and IBM Premier Business Partners. Specialising in DB2 for both the mainframe and distributed systems, Triton provide a full range of services from consultancy through to education and 24/7 DB2 support. For more information on the zTune service visit - http://www.triton.co.uk/DatabaseTuning.php
Published by: electronicdawn Ltd.