Solving Big Data’s Memory Problem

Using H-Store Main-Memory DBMS to Solve for X

by Michael Stonebraker, Ph.D.

One of the challenges of Big Data is how to analyze data quickly, even when it doesn’t fit in memory. This is one of the problems we’re trying to solve at the Intel Science & Technology Center for Big Data.

Obviously, flash memory will make an impact on the storage scene, either as a replacement for disk (for example, in cameras and MacBook Airs) or as a tier in a hierarchy that includes both disk and main memory (as several large Web properties already do). However, flash is too slow to be considered as a replacement for main memory; hence it is an “in-betweener” or a disk replacement in specialized situations.

Many are predicting a persistent, non-rotational technology after flash, whether it be memristors, phase-change memory (PCM) or something else. The interesting question is “What are the characteristics that would enable it to replace main memory in a DBMS?” In other words, can the next technology expand beyond the market that flash occupies today?

Of course, it could replace main memory in a traditional disk-oriented DBMS such as Oracle, DB2 or SQL Server. In that case, one could use the technology for the buffer pool, which would then survive crashes. This would add flexibility to such systems, but probably not alter the performance envelope very much. Hence, a more interesting option is to consider the technology in a main-memory DBMS, such as H-Store.

In this posting we explore the economics of such a proposition.

Let’s assume that this future technology, call it X, is comparable to dynamic RAM in read performance and 10X slower on writes. Moreover, assume it is cheaper than RAM by a factor Q. Furthermore, current H-Store must write a command log and take periodic snapshots to guarantee durability. Assume this costs a 10% performance hit, which X, being persistent, does not need to pay.

In a main-memory DBMS, assume the following structure. Each record has a key of length L1, there are N records, and every DBMS command uses the index to find a record. With probability R, the command reads a separate field of length L2 in the record; with probability W = 1 – R, it reads and then writes that field. Assume the index is a B-tree or other order-preserving structure and that pointers consume 8 bytes. The key lookup is then effectively a binary search over the N records, which requires reading (log N) * (L1 + 8) bytes.

To execute a single command of the above sort, one has the following accesses:

Reads:   (log N) * (L1 + 8) + L2

Writes:  W * L2

With L1 = L2 = 8 and N = 100 million, and assuming a heavy write workload where W = 0.5, the byte traffic is about 99% reads. Dynamic RAM then pays a 10% penalty for logging, while X pays roughly the same penalty for its slower writes, so the two technologies perform about the same, and X is attractive as long as Q is substantial. Hashed indexes, of course, are cheaper to access than B-trees, which raises the proportion of write traffic; in that case, X may have to be proportionally cheaper than RAM to be preferred.
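As a back-of-the-envelope check on this arithmetic, here is a minimal Python sketch of the model above. The function names are mine, and the 10X write penalty and 10% logging overhead are simply the assumptions stated in this post, not measurements from H-Store:

    import math

    def bytes_per_command(n_records, l1, l2, w):
        """Bytes read and written by one command under the model above."""
        index_read = math.log2(n_records) * (l1 + 8)   # binary search over the keys
        bytes_read = index_read + l2                   # plus the field itself
        bytes_written = w * l2                         # expected write traffic
        return bytes_read, bytes_written

    def relative_costs(bytes_read, bytes_written,
                       log_overhead=0.10, x_write_penalty=10.0):
        """Relative cost of DRAM-plus-logging versus technology X (assumed figures)."""
        base = bytes_read + bytes_written
        dram = 1.0 + log_overhead                                   # 10% hit for the command log
        x = (bytes_read + x_write_penalty * bytes_written) / base   # writes are 10X slower on X
        return dram, x

    rd, wr = bytes_per_command(n_records=100_000_000, l1=8, l2=8, w=0.5)
    print(f"reads = {rd:.0f} bytes, writes = {wr:.0f} bytes "
          f"({100 * rd / (rd + wr):.0f}% of traffic is reads)")
    dram, x = relative_costs(rd, wr)
    print(f"DRAM with logging: {dram:.2f}x; X: {x:.2f}x")

With these parameters, the sketch reports roughly 433 bytes read and 4 bytes written per command (about 99% reads), a 1.10x cost for DRAM with logging and about 1.08x for X, which is the rough parity described above.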

Over the next few months, we expect to do a thorough analysis of this trade-off, using H-Store and a hardware emulator of possible future technologies. We will benchmark a variety of transaction workloads to determine the value of Q required for broad acceptance of X in the OLTP marketplace. Stay tuned for results.

 
