14 December 2009

Running TDB on a cloud storage system

Project Voldemort is an open-source (Apache 2 license) distributed, scalable, fault-tolerant, key-value storage system for large-scale data. Being a key-value store, the only operations it provides are:

  • get(key) -> value
  • put(key, value)
  • delete(key)

Key and value can be various custom types but, at the lowest level, they are arrays of bytes. Serialization schemes on top of byte arrays give structure, but access is only via the key (so no filters or joins as part of the store). It's built for scale, speed, and fault tolerance.
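
As a rough illustration, the whole storage contract can be captured in a few lines of Java. The interface name and byte-array signatures here are my own sketch, not Voldemort's client API; they just pin down the three operations:

    // A minimal sketch of the storage contract described above. The name
    // KeyValueStore and the byte-array signatures are illustrative, not
    // Voldemort's actual client API.
    public interface KeyValueStore {
        byte[] get(byte[] key);              // null if the key is absent
        void put(byte[] key, byte[] value);  // insert or overwrite
        void delete(byte[] key);             // remove the key if present
    }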

TDB has internal APIs so that different indexing schemes or different storage technologies can be plugged in. A key-value store can be used as the storage layer for TDB.

There are two areas of storage needed: the node table (a two-way mapping between the data making up the RDF terms and the associated, fixed-size NodeId) and the indexes, which provide the matching of triple patterns. See the TDB Design Notes.

But a key-value store isn't an ideal backend. The node table is a pair of key-value stores, because all that is needed is a lookup between RDF term and NodeId. The issue that arises is the granularity of access: each lookup is a round trip to the store, so TDB heavily caches the mapping in the query engine.
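
A node table over such a store might look roughly like this. It is only a sketch, reusing the illustrative KeyValueStore interface above, with the encoding of RDF terms and NodeIds as bytes left abstract (TDB's real node table classes do the actual serialization):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: the node table as a pair of key-value lookups plus a local cache.
    // KeyValueStore is the illustrative interface above; the encoding of RDF
    // terms and NodeIds as bytes is left abstract.
    public class KeyValueNodeTable {
        private final KeyValueStore termToId;   // RDF term bytes -> NodeId bytes
        private final KeyValueStore idToTerm;   // NodeId bytes -> RDF term bytes
        private final Map<String, byte[]> cache = new HashMap<String, byte[]>();

        public KeyValueNodeTable(KeyValueStore termToId, KeyValueStore idToTerm) {
            this.termToId = termToId;
            this.idToTerm = idToTerm;
        }

        // Look up the NodeId for a term, going to the remote store only on a
        // cache miss - every miss is a network round trip.
        public byte[] getNodeId(String term) {
            byte[] cached = cache.get(term);
            if (cached != null)
                return cached;
            byte[] id = termToId.get(term.getBytes());
            if (id != null)
                cache.put(term, id);
            return id;
        }

        public byte[] getTerm(byte[] nodeId) {
            return idToTerm.get(nodeId);
        }
    }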

The indexes don't naturally map to key-value access because looking up a triple pattern results in all matches. There are (at least) two ways of doing this: either store something like all the PO pairs and use S as the key (a bit like Jena's memory model), or use the key-value store to hold part of a data structure and access it like a disk.
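
The first scheme, in sketch form, with the byte-level encoding of the PO-pair list left abstract:

    // Sketch of the first scheme: key the store by subject and hold all the
    // (predicate, object) pairs for that subject as a single value.
    public class SubjectKeyedIndex {
        private final KeyValueStore store;

        public SubjectKeyedIndex(KeyValueStore store) {
            this.store = store;
        }

        // One get() fetches every (P,O) pair for the subject; the caller then
        // filters the pairs against the rest of the triple pattern.
        public byte[] allPOForSubject(byte[] subjectNodeId) {
            return store.get(subjectNodeId);
        }
    }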

TDB uses threaded B+Trees with a pluggable disk block layer (this is used to switch between 32-bit and 64-bit modes), so using the key-value store as block storage is a simple fit. Because B+Trees store entries in sorted order, caching means that a block probably contains all the PO pairs for a given S when the lookup is by S, so these two schemes end up being similar even though, at the design level, they are quite different.
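
Treating the key-value store as a disk then amounts to using block numbers as keys and fixed-size blocks as values. This is only a sketch of that idea; TDB's real block layer also handles allocation, release and caching:

    import java.nio.ByteBuffer;

    // Sketch of the second scheme: the key-value store as a block device.
    // The B+Tree layer above reads and writes fixed-size blocks by number;
    // a block number is simply encoded as an 8-byte key.
    public class KeyValueBlockStore {
        private final KeyValueStore store;
        private final int blockSize;

        public KeyValueBlockStore(KeyValueStore store, int blockSize) {
            this.store = store;
            this.blockSize = blockSize;
        }

        private byte[] key(long blockId) {
            return ByteBuffer.allocate(8).putLong(blockId).array();
        }

        public byte[] read(long blockId) {
            return store.get(key(blockId));   // one round trip per uncached block
        }

        public void write(long blockId, byte[] block) {
            if (block.length != blockSize)
                throw new IllegalArgumentException("Bad block size: " + block.length);
            store.put(key(blockId), block);
        }
    }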

Both aspects rely on the query engine doing caching to compensate for the mismatch between the requirements (triple matching for joins) and the interface granularity (for node access).

Does Project Voldemort work as storage for TDB? Yes, and with only a small amount of code. Not surprisingly, the performance is limited in this experimental system (e.g. storing individual RDF terms in the node table needs better management to avoid the latency and overhead of the round trip to the remote table). Truncating blocks to only the used space, then compressing them, would be useful for the indexes (see RDF-3X for an interesting compression scheme). But it's a workable scheme, and this style of using a key-value store shows TDB can be ported to a wide variety of environments. Key-value stores are currently a very active area, and Project Voldemort provides a cloud-centric storage fabric.

I've started putting experimental systems on GitHub. This experiment is available in the TDB/V repository. These are not released, supported systems; they are the source code and development setup (usually for Eclipse). I used Project Voldemort release v0.57.

01 October 2009

BSBM-Jena

This is a version of the Berlin SPARQL Benchmark tools, modified to allow the query benchmark driver to work with a local database rather than a SPARQL endpoint. I've changed benchmark.testdriver.TestDriver to accept a Jena assembler description.

 TestDriver -runs ... -w ... -idir ... -o ... local:assembler.ttl

git clone git://github.com/afs/BSBM-Jena.git

In the repository, there are versions of ARQ and TDB with significant performance improvements. It will run the benchmark in a reasonable time now. There are also some shell scripts to help run the benchmark.
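
Under the hood, the local mode amounts to something like the following sketch, using the Jena/TDB APIs of the time; the assembler file name and the query string are placeholders, not BSBM's own:

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.query.ResultSetFormatter;
    import com.hp.hpl.jena.tdb.TDBFactory;

    // Sketch: build the dataset from an assembler description and run a
    // query in-process, instead of sending it to a SPARQL endpoint over HTTP.
    public class LocalQueryRun {
        public static void main(String[] args) {
            Dataset dataset = TDBFactory.assembleDataset("assembler.ttl");   // placeholder file name
            String queryString = "SELECT * { ?s ?p ?o } LIMIT 10";           // stand-in for a BSBM query
            QueryExecution qexec = QueryExecutionFactory.create(queryString, dataset);
            try {
                ResultSet results = qexec.execSelect();
                ResultSetFormatter.out(results);
            } finally {
                qexec.close();
            }
        }
    }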

Update:

The Git repository has been renamed "BSBM-Local". The Jena pseudo URI scheme is now "jena:". The project now includes support for running tests directly on a Sesame native repository using the pseudo URI scheme "sesame:directory"; this could easily be extended to any Sesame repository implementation.
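
For example, reusing the argument placeholders from the command line above, the two back-ends would be selected with something like:

 TestDriver -runs ... -w ... -idir ... -o ... jena:assembler.ttl
 TestDriver -runs ... -w ... -idir ... -o ... sesame:directory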

14 September 2009

Moving to Talis

I'll be leaving HP soon when the semantic web group winds down. I'll be joining the platform division at Talis, still living in Bristol, working from home much of the time and travelling to Birmingham as needed. One of the reasons Talis is attractive as a place to work is that they understand such working arrangements. I'll get to continue support and development of my contributions to Jena.

And the first question I have been getting is about what will happen to Jena.

Jena has a BSD-style licence, so there is no barrier to continued use by any users nor to continued development by the Jena developers. But we plan to go further and become an open source project with no single commercial backer. After discussions with HP, our current plan is to transfer ownership of the copyright to a commercially neutral body. In immediate terms, there is no change for people or companies using Jena, but this move will make it easier to continue and expand the core developer community, and indeed it enables us to accept contributions more easily. It has been a (cultural, not legal) barrier that HP was seen to have a controlling interest, despite the fact that we have always acted openly.

I also get to continue participating in the W3C SPARQL working group. The list of things the WG has decided it has the time and resources to work on is here. The charter states that queries valid under the 2008 spec will not change. The new thing for the working group to address is update, both as a language and as a set of RESTful operations.

Exciting times.