By: Philip Howard, Research Director - Data Management, Bloor Research
Published: 1st October 2013
Copyright Bloor Research © 2013
Oracle has announced an in-memory option for the 12c version of the Oracle Database. It's expected to be available in the early part of next year, but no official release date has been announced.
Basically, the idea is that data will be stored on disk in a conventional row format, with a second copy of the data held in columnar format in memory. This means that you won't have to define analytic indexes for the data on disk, because the columns act as self-indexing constructs. That in turn means less administration and tuning for the data on disk, and it should significantly improve both analytic performance (because the data is in memory) and OLTP performance (because there are fewer indexes to maintain).
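To make the dual-format idea concrete, here is a minimal sketch - plain Python, not Oracle code, with made-up data - of the same table kept once in row format (good for transactional lookups) and once in columnar format (good for scans, with no analytic index required):

```python
# Hypothetical sketch (not Oracle code): the same data kept twice --
# row-oriented for OLTP-style access, column-oriented for analytics.

# Row format: one record per order, good for "fetch the whole row" access.
row_store = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "APAC", "amount": 75.5},
    {"order_id": 3, "region": "EMEA", "amount": 210.0},
]

# Columnar copy: one contiguous list per column, good for scans and aggregates.
col_store = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APAC", "EMEA"],
    "amount":   [120.0, 75.5, 210.0],
}

# OLTP access pattern: read one full row from the row store.
order = row_store[1]

# Analytic access pattern: scan just two columns from the columnar copy,
# never touching the rest of each row -- this is why the columns themselves
# act as self-indexing constructs, with no separate analytic index.
emea_total = sum(
    amt for reg, amt in zip(col_store["region"], col_store["amount"])
    if reg == "EMEA"
)
print(emea_total)  # 330.0
```

The point of the sketch is simply that an aggregate query only ever reads the columns it needs, while a transactional read still gets a whole row in one step.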
Oracle promises that there will be no requirement to change any code and that existing applications will run unchanged within the new environment.
So, what's not to like?
Well, the first thing is that all that memory is going to be relatively expensive. More importantly, you will either need to deploy a lot of memory or the DBA will need to include or exclude specified columns (which will be an option) from each table to be loaded into memory. That, however, is a manual step: the DBA must know in advance what queries will come into the database in order to set those columns up in memory, otherwise you default to putting the full table in memory. In other words, what you gain on the non-indexing administration you lose on the memory administration. Secondly, the DBA must allocate a set amount of memory for the in-memory tables. Once that memory fills up, that's it: you can't load any more tables into memory. That means you have to statically set the amount of space you need for your in-memory objects - whereas what you would really like is a dynamic way of re-allocating memory as required - which is a further administrative burden.
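The static-allocation problem described above can be sketched as follows - a hypothetical Python illustration with invented table names and sizes, not how Oracle implements it - showing a budget that is fixed up front and simply refuses further loads once spent:

```python
# Hypothetical sketch of the static-allocation problem: a fixed in-memory
# budget set at configuration time, with tables loaded until it fills.

class InMemoryPool:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb      # fixed up front by the DBA
        self.used_mb = 0
        self.loaded = {}

    def load_table(self, name, size_mb):
        """Load a table's columnar copy; refused once the budget is spent."""
        if self.used_mb + size_mb > self.budget_mb:
            return False                # no dynamic re-allocation: load fails
        self.loaded[name] = size_mb
        self.used_mb += size_mb
        return True

pool = InMemoryPool(budget_mb=100)
print(pool.load_table("sales", 60))      # True  -- fits within the budget
print(pool.load_table("customers", 50))  # False -- budget exhausted
```

Resizing the budget means the DBA intervenes again - which is exactly the administrative burden the text is complaining about.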
The other question you might ask is how this in-memory column store relates to the hybrid columnar compression (HCC) used in Oracle Exadata. It turns out that the in-memory compression is different from HCC. You can still have the tables stored on disk using HCC, but when they are loaded into memory you have to decompress the data, break the table apart and then recompress it, column by column, using the new in-memory compression algorithms. Frankly, and to use an old English term: that sounds barmy.
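The load path being criticised can be sketched like this - a hypothetical Python illustration in which `zlib` stands in for both compression schemes (Oracle's actual HCC and in-memory codecs are proprietary and different), purely to show the decompress/pivot/recompress work involved:

```python
import json
import zlib

# Hypothetical sketch of the criticised load path: a table compressed as one
# unit on disk must be fully decompressed, split into columns, and then
# recompressed column by column with a different scheme. zlib stands in for
# both algorithms purely for illustration.

rows = [{"region": "EMEA", "amount": 120.0},
        {"region": "APAC", "amount": 75.5}]

# On disk: the whole table compressed as one unit (stand-in for HCC).
on_disk = zlib.compress(json.dumps(rows).encode())

# Step 1: decompress the entire table.
decoded = json.loads(zlib.decompress(on_disk))

# Step 2: pivot the rows into columns.
columns = {key: [r[key] for r in decoded] for key in decoded[0]}

# Step 3: recompress each column separately (stand-in for the in-memory codec).
in_memory = {k: zlib.compress(json.dumps(v).encode())
             for k, v in columns.items()}

print(sorted(in_memory))  # ['amount', 'region']
```

Every table load pays for all three steps, which is why doing this on the fly, rather than sharing one compression format, looks wasteful.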
The bottom line is that there are some clear performance advantages to this use of in-memory technology, but there don't really appear to be any administrative savings (administration may even get worse), despite claims to the contrary. Moreover, for most companies, which cannot afford inordinate amounts of memory, the need to know in advance which tables to put in memory runs contrary to the whole thrust of the analytic world towards self-service BI: you can't have self-service if you are limited by IT's implementation.
I have to say that while the top-line story sounds good, I am less impressed when I look under the covers. Oracle does not appear to have done the in-depth re-engineering that you would really like to see supporting this sort of feature. No doubt that will come in due course, but from what we know now this is in contrast to, for example, IBM's BLU Acceleration. There, IBM seems to have really gone down into the weeds of the technology to make sure not just that it works but that the different elements of BLU Acceleration complement one another and do not take away with one hand what they give with the other.
Posted: 2nd October 2013 | By Craig Savory :
Thanks for your post. I was wondering if you could help me understand something. How is Oracle going to handle pluggable DBs? Do we know? Will this in-memory option work at the individual pluggable level or the container level? If it is at the container level, how are they going to keep tenants secure or are they going to hack the data dictionary again for this?
Posted: 7th October 2013 | By Philip Howard :
I don't know the answer specifically, but there's no particular reason why the in-memory option should not work with pluggable databases. You would specify which tables you want to be duplicated in memory, so any table in a pluggable database should be able to participate (and this would have nothing really to do with the container database).
Published by: IT Analysis Communications Ltd.
T: +44 (0)190 888 0760 | F: +44 (0)190 888 0761