
Key-Value Store in Python for Possibly 100 GB of Data, Without Client/Server

There are many solutions to serialize a small dictionary: json.loads/json.dumps, pickle, shelve, ujson, or even sqlite. But when dealing with possibly 100 GB of data, those approaches stop being practical: the data no longer fits in memory, and running a separate client/server database is more than what's needed here.

Solution 1:

You can use sqlitedict which provides key-value interface to SQLite database.

The SQLite limits page says the theoretical maximum is 140 TB, depending on page_size and max_page_count. However, the default values for Python 3.5.2-2ubuntu0~16.04.4 (sqlite3 2.6.0) are page_size=1024 and max_page_count=1073741823. This gives a maximum database size of ~1100 GB, which fits your requirement.
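You can verify these defaults on your own build with the stdlib sqlite3 module (a minimal sketch; PRAGMA page_size and PRAGMA max_page_count are standard SQLite pragmas, and the values will vary per build):

import sqlite3

# Inspect the limits of the local SQLite build.
con = sqlite3.connect(":memory:")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
max_page_count = con.execute("PRAGMA max_page_count").fetchone()[0]
print("max database size: ~%.0f GB" % (page_size * max_page_count / 10**9))
con.close()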

You can use the package like:

from sqlitedict import SqliteDict

mydict = SqliteDict('./my_db.sqlite', autocommit=True)
mydict['some_key'] = any_picklable_object
print(mydict['some_key'])
for key, value in mydict.items():
    print(key, value)
print(len(mydict))
mydict.close()

Update

About memory usage. SQLite doesn't need your dataset to fit in RAM. By default it caches up to cache_size pages, which is barely 2 MiB (with the same Python as above). Here's a script you can use to check it with your data. Before running it:

pip install lipsum psutil matplotlib psrecord sqlitedict

sqlitedct.py

#!/usr/bin/env python3
import os
import random
from contextlib import closing

import lipsum
from sqlitedict import SqliteDict

def main():
    with closing(SqliteDict('./my_db.sqlite', autocommit=True)) as d:
        for _ in range(100000):
            v = lipsum.generate_paragraphs(2)[0:random.randint(200, 1000)]
            d[os.urandom(10)] = v

if __name__ == '__main__':
    main()

Run it like ./sqlitedct.py & psrecord --plot=plot.png --interval=0.1 $!. In my case it produces this chart:

[memory/CPU usage chart produced by psrecord]

And database file:

$ du -h my_db.sqlite 
84M my_db.sqlite

Solution 2:

I would consider HDF5 for this. It has several advantages:

  • Usable from many programming languages.
  • Usable from Python via the excellent h5py package.
  • Battle tested, including with large data sets.
  • Supports variable-length string values.
  • Values are addressable by a filesystem-like "path" (/foo/bar).
  • Values can be arrays (and usually are), but do not have to be.
  • Optional built-in compression.
  • Optional "chunking" to allow writing chunks incrementally.
  • Does not require loading the entire data set into memory at once.

It does have some disadvantages too:

  • Extremely flexible, to the point of making it hard to define a single approach.
  • Complex format, not feasible to use without the official HDF5 C library (but there are many wrappers, e.g. h5py).
  • Baroque C/C++ API (the Python one is much less so).
  • Little support for concurrent writers (or writer + readers). Writes might need to lock at a coarse granularity.

You can think of HDF5 as a way to store values (scalars or N-dimensional arrays) inside a hierarchy inside a single file (or indeed multiple such files). The biggest problem with just storing your values in a single disk file would be that you'd overwhelm some filesystems; you can think of HDF5 as a filesystem within a file which won't fall down when you put a million values in one "directory."
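As a rough illustration of the key-value pattern, here is a minimal sketch using h5py (the file name and paths are just examples; h5py 3.x returns bytes when reading variable-length strings back, hence the decode):

import h5py

# Write: group/dataset "paths" act as keys, scalar strings as values.
with h5py.File("store.h5", "a") as f:
    records = f.require_group("records")
    records["alice"] = "some serialized payload"
    records["bob"] = "another payload"

# Read back a single value without loading anything else into memory.
with h5py.File("store.h5", "r") as f:
    raw = f["records/alice"][()]
    text = raw.decode("utf-8") if isinstance(raw, bytes) else raw
    print(text)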

Solution 3:

LMDB (Lightning Memory-Mapped Database) is a very fast key-value store which has Python bindings and can handle huge database files easily.

There is also the lmdbm wrapper which offers the Pythonic d[key] = value syntax.

By default it only supports byte values, but it can easily be extended to use a serializer (json, msgpack, pickle) for other kinds of values.

import json
from lmdbm import Lmdb

class JsonLmdb(Lmdb):
  def _pre_key(self, value):
    return value.encode("utf-8")
  def _post_key(self, value):
    return value.decode("utf-8")
  def _pre_value(self, value):
    return json.dumps(value).encode("utf-8")
  def _post_value(self, value):
    return json.loads(value.decode("utf-8"))

with JsonLmdb.open("test.db", "c") as db:
  db["key"] = {"some": "object"}
  obj = db["key"]
  print(obj["some"])  # prints "object"

Some benchmarks. Batched inserts (1000 items each) were used for lmdbm and sqlitedict, because their write performance suffers a lot with non-batched inserts: each insert opens a new transaction by default (a sketch of such a batched insert follows the tables below). dbm refers to the stdlib dbm.dumb. Tested on Windows 7, Python 3.8, SSD.

continuous writes in seconds

|   items |   lmdbm |   pysos | sqlitedict |       dbm |
|--------:|--------:|--------:|-----------:|----------:|
|      10 |  0.0000 |  0.0000 |    0.01600 |   0.01600 |
|     100 |  0.0000 |  0.0000 |    0.01600 |   0.09300 |
|    1000 |  0.0320 |  0.0460 |    0.21900 |   0.84200 |
|   10000 |  0.1560 |  2.6210 |    2.09100 |   8.42400 |
|  100000 |  1.5130 |  4.9140 |   20.71700 |  86.86200 |
| 1000000 | 18.1430 | 48.0950 |  208.88600 | 878.16000 |

random reads in seconds

|   items |   lmdbm |  pysos | sqlitedict |      dbm |
|--------:|--------:|-------:|-----------:|---------:|
|      10 |  0.0000 |  0.000 |     0.0000 |   0.0000 |
|     100 |  0.0000 |  0.000 |     0.0630 |   0.0150 |
|    1000 |  0.0150 |  0.016 |     0.4990 |   0.1720 |
|   10000 |  0.1720 |  0.250 |     4.2430 |   1.7470 |
|  100000 |  1.7470 |  3.588 |    49.3120 |  18.4240 |
| 1000000 | 17.8150 | 38.454 |   516.3170 | 196.8730 |

For the benchmark script see https://github.com/Dobatymo/lmdb-python-dbm/blob/master/benchmark.py
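For reference, a minimal sketch of what a batched insert looks like with the JsonLmdb class above (it is assumed here that lmdbm handles an update() call as a single batched transaction, which is what the batched numbers rely on):

# Reuses the JsonLmdb class defined above.
with JsonLmdb.open("test.db", "c") as db:
    # One update() call with 1000 items instead of 1000 single-item
    # transactions; this is the kind of batching used in the benchmark.
    batch = {"key_%d" % i: {"value": i} for i in range(1000)}
    db.update(batch)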

Solution 4:

I know it's an old question, but I wrote something like this long ago:

https://github.com/dagnelies/pysos

It works like a normal Python dict, but has the advantage that it's much more efficient than shelve on Windows, and it is also cross-platform, unlike shelve, whose data storage differs depending on the OS.

To install:

pip install pysos

Usage:

import pysos
db = pysos.Dict('somefile')
db['hello'] = 'persistence!'

EDIT: Performance

Just to give a ballpark figure, here is a mini benchmark (on my windows laptop):

import time

import pysos

t = time.time()
N = 100 * 1000
db = pysos.Dict("test.db")
for i in range(N):
    db["key_" + str(i)] = {"some": "object_" + str(i)}
db.close()

print('PYSOS time:', time.time() - t)
# => PYSOS time: 3.424309253692627

The resulting file was about 3.5 MB. ...So, very roughly speaking, you could insert 1 MB of data per second.

EDIT: How it works

It writes to disk every time you set a value, but only the key/value pair being set. So the cost of adding/updating/deleting an item is always the same, although only adding is "better" because lots of updating/deleting leads to data fragmentation in the file (wasted junk bytes). What is kept in memory is the mapping (key -> location in the file), so you just have to ensure there is enough RAM for all those keys. An SSD is also highly recommended. 100 MB is easy and fast. 100 GB, as posted originally, will be a lot, but doable. Even raw reading/writing of 100 GB takes quite some time.
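To illustrate the idea, here is a generic sketch of the append-only scheme described above (not pysos's actual code; the class and record format are made up):

import json

class AppendOnlyKV:
    def __init__(self, path):
        self._f = open(path, "a+b")
        self._index = {}                 # key -> offset of its latest record
        self._f.seek(0)
        offset = 0
        for line in self._f:             # rebuild the in-memory index on open
            rec = json.loads(line)
            self._index[rec["k"]] = offset
            offset += len(line)

    def __setitem__(self, key, value):
        self._f.seek(0, 2)               # records are always appended
        self._index[key] = self._f.tell()
        self._f.write(json.dumps({"k": key, "v": value}).encode() + b"\n")
        self._f.flush()

    def __getitem__(self, key):
        self._f.seek(self._index[key])   # jump straight to the record on disk
        return json.loads(self._f.readline())["v"]

    def close(self):
        self._f.close()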

Solution 5:

First, bsddb (or under its new name, Oracle BerkeleyDB) is not deprecated.

From experience, LevelDB / RocksDB / bsddb are slower than wiredtiger; that's why I recommend wiredtiger.

wiredtiger is the storage engine of MongoDB, so it's well tested in production. There is little or no use of wiredtiger in Python outside my AjguDB project; I use wiredtiger (via AjguDB) to store and query wikidata and concept, which is around 80 GB.

Here is an example class that mimics the Python 2 shelve module. Basically, it's a wiredtiger-backed dictionary where keys can only be strings:

import json

from wiredtiger import wiredtiger_open


WT_NOT_FOUND = -31803

class WTDict:
    """Create a wiredtiger backed dictionary"""
    def __init__(self, path, config='create'):
        self._cnx = wiredtiger_open(path, config)
        self._session = self._cnx.open_session()
        # define key value table
        self._session.create('table:keyvalue', 'key_format=S,value_format=S')
        self._keyvalue = self._session.open_cursor('table:keyvalue')

    def __enter__(self):
        return self

    def close(self):
        self._cnx.close()

    def __exit__(self, *args, **kwargs):
        self.close()

    def _loads(self, value):
        return json.loads(value)

    def _dumps(self, value):
        return json.dumps(value)

    def __getitem__(self, key):
        self._session.begin_transaction()
        self._keyvalue.set_key(key)
        if self._keyvalue.search() == WT_NOT_FOUND:
            raise KeyError()
        out = self._loads(self._keyvalue.get_value())
        self._session.commit_transaction()
        return out

    def __setitem__(self, key, value):
        self._session.begin_transaction()
        self._keyvalue.set_key(key)
        self._keyvalue.set_value(self._dumps(value))
        self._keyvalue.insert()
        self._session.commit_transaction()

Here is the test program from @saaj's answer, adapted:

#!/usr/bin/env python3
import os
import random

import lipsum
from wtdict import WTDict


def main():
    with WTDict('wt') as wt:
        for _ in range(100000):
            v = lipsum.generate_paragraphs(2)[0:random.randint(200, 1000)]
            wt[os.urandom(10)] = v

if __name__ == '__main__':
    main()

Using the following command line:

python test-wtdict.py & psrecord --plot=plot.png --interval=0.1 $!

I generated the following diagram:

[chart: wt performance without WAL]

$ du -h wt
60M wt

When write-ahead-log is active:

[chart: wt performance with WAL]

$ du -h wt
260M    wt

This is without performance tuning and compression.
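For reference, a minimal sketch of where such tuning would go (cache_size and log are standard wiredtiger_open options; block_compressor assumes a WiredTiger build with snappy support, so treat this as an assumption rather than a drop-in config):

from wiredtiger import wiredtiger_open

# Larger cache and write-ahead log enabled at connection open time.
cnx = wiredtiger_open('wt', 'create,cache_size=1GB,log=(enabled=true)')
session = cnx.open_session()

# Per-table compression (requires a WiredTiger build with snappy).
session.create('table:keyvalue',
               'key_format=S,value_format=S,block_compressor=snappy')
cnx.close()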

Until recently, wiredtiger had no documented limits; the documentation was then updated to the following:

WiredTiger supports petabyte tables, records up to 4GB, and record numbers up to 64-bits.

http://source.wiredtiger.com/1.6.4/architecture.html
