By: Dana Gardner, Principal Analyst, Interarbor Solutions
Published: 21st September 2013
Copyright Interarbor Solutions © 2013
The next edition of the HP Discover Performance Podcast Series details how a healthcare solutions provider leverages big data capabilities.
To discover more about how high-performance and cost-effective big data processing forms a foundational element to improving overall healthcare quality and efficiency, join Greg Gootee, Product Manager at MZI Healthcare, based in Orlando. The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: How important is big data turning out to be for how healthcare is being administered?
Gootee: Change in healthcare is really dramatic, maybe more dramatic than in any other industry. Other industries have been able to spread that change out over time; in healthcare, it's being rapidly accelerated.
In the past, data had been stored in multiple systems and multiple places for any given patient. It's been difficult for providers and organizations to make informed decisions about that patient and their healthcare. So we see a lot of change in being able to bring that data together and understand it better.
Gardner: Tell us about MZI, what you do, who your customers are, and where you're going to be taking this big data ability in the future.
Gootee: MZI Healthcare has predominantly been working on the payer side. We have a product that's been on the market for over 25 years helping with benefits administration for payers, independent physician associations (IPAs), and third-party administrators (TPAs).
Our customers have always had a very tough time bringing in data from different sources. A little over two years ago, MZI decided to look at how we could leverage that data to help our customers better understand their risk and their patients, and ultimately change the outcomes for those patients.
Gardner: I think that's how the newer regulatory environment is lining up in terms of compensation. This is about outcomes, rather than procedures. Tell us about your requirements for big data in order to start doing more of that predictive analysis.
Gootee: Think about how data has been stored in the past for patients across their continuum of care. As they went from facility to facility, and physician to physician, their data ended up spread far apart. It's been difficult even to understand how the treatments are affecting that patient.
I've talked a lot about my aunt in previous interviews. Last year, she went into a coma, not because the doctors weren't doing the right thing, but because they were unable to understand what the other doctors were doing.
She went to many specialists and took medication from each one of them to help with a given problem, but there was an interaction between the medications. The doctors didn't even know if she'd come out of the coma.
These things happen every day. Doctors make informed decisions from their experience and the data that they have. So it's critical that they can actually see all the information that's available to them.
When we look at healthcare and how it's changing, for example the Affordable Care Act, one of the main focuses is obviously cost. We all know that healthcare is growing at a rate that's just unsustainable, and while that's the main focus, it's different this time.
We've done that before. During the Clinton Administration we had a kind of HMO model, and it made a dramatic difference in cost. It was working, but it didn't give people a choice. There was no basis in outcomes, and the quality of care wasn't there.
This time around, that's probably the major difference. Not only are we trying to reduce cost, but we are trying to increase the care that's given to those patients. That's really vital to making the healthcare system a better system throughout the United States.
Gardner: Given the size of the data, its disparate nature, and the fact that more and more human data will be brought to bear, what were your technical requirements, and what was the journey you took in finding the right infrastructure?
Gootee: We had a couple of requirements that were critical. When we work with small- and medium-size organizations (SMBs), they really don't have the funds to put in a large system themselves. So our goal was that we wanted to do something similar to what Apple has done with the iPhone. We wanted to take multiple things, put them into one area, and reduce that price point for our customers.
One of the critical things that we wanted to look at was overall price point. That included how we manage those systems and, when we looked at Vertica, one of the things that we found very appealing was that the management of that system is minimal.
The other critical thing was speed, being able to deliver high-end analytics at the point of care, instead of two or three months later, and Vertica really produced. In fact, we did a proof of concept with them. It was almost unbelievable some of the queries that ran and the speed at which that data came back to us.
You hear things like that throughout the conference: the speed holds up no matter what volume you may have. It's very good. Those were some of our requirements, and we were able to meet them in the cloud. We run in the Amazon cloud, and we're able to deliver that content to the people who need it, at the right time, at a really low price point.
Gardner: Let me understand also the requirement for concurrency. If you have this posted on Amazon Web Services, you're then opening this up to many different organizations and many different queriers. Is there an issue for the volume of queries happening simultaneously, or concurrency? Has that been something you've been able to work through?
Gootee: That's another value-add that we get. The ability to expand and scale the Vertica system, along with the scalability we get from Amazon, allows us to deliver that information. No matter what type of queries we're getting, we can expand automatically. We can grow to meet that need, and it really makes a large difference in how competitive we can be in the marketplace.
Gardner: I suppose another dynamic to this on the economic side is the predictability of your cost.
Gootee: If you look at the traditional ways we've delivered software or content, you always over-buy, because you don't know what demand is going to be. Then, at some point, you don't have enough resources to deliver. Cloud services take some of that unknown away. They let you scale up as you need it and scale back when you don't.
So it's the flexibility for us. We're not a large company, and what's exciting about this is that these technologies help us do the same thing that the big guys do. It really lets our small company compete in a larger marketplace.
Gardner: Going to the population health equation and the types of data and information, is this something that's of interest to you? How important is this ability to get at all the information in all the different formats as you move forward?
Gootee: That's very critical for us. The way we interact, in America and around the world, has changed a lot. The HP HAVEn platform gives us opportunities to address healthcare's big security concerns and the mobility of data. Getting data anywhere is critical to us, as is better understanding how that data is changing.
We've heard from a lot of companies here that are really driving that user experience. More and more companies are going to be competing on how they can deliver things to users in the way they like. That's critical to us, and that [HP] platform really gives us the ability to do that.
Published by: electronicdawn Ltd.