Here’s an interesting little factoid: a General Electric GE90 jet engine, the powerplant of the Boeing 777, produces as much data about itself in a day of operations as all the Tweets on Twitter do over the same period. And those planes each have two engines, so one aircraft produces twice as much data a day as Twitter.
The company has now sold over 2,000 examples of this engine, though some 800 are still to be delivered. That leaves roughly 1,200 in service, which means just this one engine type is producing 1,200 times the amount of data Twitter produces, every day. And its competitors, the Rolls-Royce Trent and Pratt & Whitney PW4000, are no doubt capable of producing similar amounts of data about themselves.
The jet engines are also only a part of the industrial and energy production/transfer systems GE produces. These range from gas turbines used to power electricity generation, and the complementary large generator sets, through to a wide range of sensors and the analytics software needed to make sense of what those sensors discover. All of these produce vast quantities of real-time data.
All of this raises the question: what happens to all that data? The answer, I feel, has more than a little bearing on one of the fundamental issues facing all of cloud computing as it develops and penetrates deeper into the mainstream of business services and management, regardless of whether that business is industrial, financial, retail or service-oriented.
According to William Ruh, VP of GE’s Global Software Center in California, that data is increasingly used not just to manage the real-time operations of the systems, but also to build models of their future operation in an autonomic fashion. It allows analytical tools to learn how a system operates and behaves, and gives them the raw material with which to predict what its future state is likely to be.
“With the jet engines, for example, it provides information on all the components and their future state,” he said. “It can predict when a component is likely to fail, and as the dataset on the component grows, the predictions get more accurate. For example, they can tell that a component will last for one, two, or more trips across the Atlantic but will then need replacing or repair. That allows the maintenance crews to plan their work in advance, which is more efficient, much cheaper, and much more convenient to passengers than having to fix it after it has broken.
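GE’s actual analytics are proprietary, so purely as an illustration of the idea Ruh describes, here is a minimal sketch: it assumes a hypothetical per-flight wear metric for a component and a known failure threshold, and extrapolates the observed trend to estimate how many flights remain before maintenance is due.

```python
# Illustrative sketch only: the wear metric, readings, and threshold
# are invented; GE's real models are far richer than a linear trend.

def remaining_flights(wear_history, failure_threshold):
    """Estimate flights left before a component reaches its wear limit.

    wear_history: wear readings taken after each flight, oldest first.
    failure_threshold: wear level at which the part needs replacing.
    """
    if len(wear_history) < 2:
        raise ValueError("need at least two readings to see a trend")
    # Average wear accumulated per flight over the observed history.
    per_flight = (wear_history[-1] - wear_history[0]) / (len(wear_history) - 1)
    if per_flight <= 0:
        return float("inf")  # no measurable degradation yet
    headroom = failure_threshold - wear_history[-1]
    return max(0, int(headroom // per_flight))

# A part degrading ~0.5 units per flight, with its limit at 10.0:
print(remaining_flights([7.0, 7.5, 8.0, 8.5], 10.0))  # -> 3
```

As the history grows, the per-flight estimate stabilises, which is exactly the effect Ruh describes: a bigger dataset on the component makes the prediction more accurate.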
“It also means we can monitor the performance of the engines very closely, making them more efficient. Using our analytics we can normally reduce fuel consumption by around 2% per engine. Given that airlines can spend hundreds of millions of dollars a year on fuel, that is a saving measured in millions straight onto the bottom line.”
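The arithmetic behind that claim is straightforward; with an assumed (not GE-supplied) annual fuel bill, the 2% figure works out like this:

```python
# Back-of-the-envelope check of the saving claim. The $500M fuel bill
# is an assumption for illustration, not a figure from GE or any airline.
annual_fuel_spend = 500_000_000          # assumed $500M/year fuel bill
saving = annual_fuel_spend * 0.02        # 2% reduction per engine fleet
print(f"${saving:,.0f}")                 # -> $10,000,000
```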
And what have jet engines and large industrial systems to do with the cloud? Well, in a word, everything, for the fundamental large-systems models behind them all are broadly the same. And with the coming of the Internet of Things marketing tag (the idea that IPv6 has the address space to give anything and everything a unique identity on the network) everything can now communicate with everything else.
In the cloud, just as in large industrial systems or complex, safety-critical systems like commercial airliners, every component needs to be part of the 'real-time conversation'. That conversation ensures that all elements are working correctly; that none are doing what they should not be doing, or doing the right thing at the wrong time (both of which are good indicators of a security problem in IT systems); that their profile of future use is known and satisfactory; and that their interaction with other systems is neither affecting those systems' operation nor being affected by it.
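To make the two failure signatures concrete, here is a minimal sketch of that conversation: each event a component reports is checked against a policy of allowed actions and allowed time windows. The component names, actions, and policy are all invented for illustration.

```python
# Toy policy: which actions each component may perform, and during
# which hours of the day. Everything here is hypothetical.
ALLOWED = {
    "cache-node": {
        "serve": range(0, 24),   # allowed at any hour
        "flush": range(2, 5),    # maintenance window 02:00-04:59 only
    },
}

def check(component, action, hour):
    """Classify an event against the policy."""
    actions = ALLOWED.get(component, {})
    if action not in actions:
        return "doing what it should not be doing"
    if hour not in actions[action]:
        return "right thing at the wrong time"
    return "ok"

print(check("cache-node", "serve", 14))   # -> ok
print(check("cache-node", "flush", 14))   # -> right thing at the wrong time
print(check("cache-node", "reboot", 3))   # -> doing what it should not be doing
```

Real monitoring systems learn these profiles statistically rather than hard-coding them, but the two anomaly classes flagged here are exactly the security indicators described above.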
Much is sometimes made—often correctly—about how automation puts people out of work. But with the cloud, as well as with a wide range of complex industrial systems, there are arguably not enough people in the world to monitor and manage the systems in real time. The only possible way of achieving this is through automated services built around analytical tools working in real time.
This is particularly so with the cloud, where the actual resources providing the services a business might be using at any point in time could be spread around the world. One just has to look at Compuware’s Gomez service to see how many service providers can contribute to what the user thinks is just accessing one webpage. They all need to work properly for it all to work at all, and as the Internet of Things concept starts to make more significant contributions to business operation and management services, the need for a much bigger 'all' to keep working well will grow exponentially.
This is where GE may just have an important ace up its sleeve. As Ruh indicated, his team has come up with several suites of analytical tools for different industry sectors. But at the heart of them all lies the same fundamental set of algorithms, regardless of the 'machine' being monitored.
So, in theory at least, the company already has the basic tools needed to manage much more than just industrial systems. Not only that, they are already working at a scale that would dwarf the needs of most enterprise IT requirements. What is perhaps more interesting is that, while Ruh can see the possibilities, he and his team already have more than enough to contend with in their classic industrial backyard.
It does make GE ripe for an IT partnership, of course: with a business that could refine and optimise those fundamentals for a cloud management role, and then perhaps deliver the result as SaaS.
This blog first appeared in Business Cloud 9