In stealth no more, announcing Datanet: http://datanet.co/.

Datanet is an open-source, CRDT-based data synchronization system.

Datanet aims to achieve ubiquitous write-through caching. CRDT replication can be added to any cache in your stack, meaning data modifications made in those caches are globally & reliably replicated. Modifying data locally yields massive gains in latency, produces a more efficient replication stream, & is extremely robust. It’s time to prefetch data to compute.
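
To make the CRDT angle concrete, here is a minimal sketch of a state-based CRDT (a grow-only counter) in Python. To be clear, this is my illustration of the general technique, not Datanet’s code or API: the point is that local writes need no coordination, and replica merges converge no matter what order they arrive in.

```python
# Minimal state-based CRDT sketch: a grow-only counter (G-Counter).
# An illustration of the general CRDT idea, not Datanet's implementation.

class GCounter:
    """Each replica increments only its own slot; merge takes the
    element-wise max, so merges are commutative, associative, and
    idempotent -- replicas converge in any delivery order."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        # Local write: no coordination, no network round trip.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Applying a peer's state is safe to repeat or reorder.
        for rid, cnt in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), cnt)

# Two caches modify data locally, then exchange state in either order:
a, b = GCounter("us-east"), GCounter("eu-west")
a.increment(3)
b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```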

This is the culmination of 2+ years of research and development, and I am pretty proud of it🙂

Datanet links:

website: http://datanet.co/
github: https://github.com/JakSprats/datanet
community: https://groups.google.com/forum/#!forum/crdtdatanet
twitter: https://twitter.com/CRDTDatanet

Author links:

twitter: https://twitter.com/jaksprats
linkedin: http://www.linkedin.com/in/russell-sullivan-a266096
blog: https://jaksprats.wordpress.com/

Quick link to slides

This is a concept I came up with in Jan 2014. Rumors of Google doing something similar started popping up recently, so it seemed like a good time to publish it.

The basic concept is to build a secondary cell network that complements the user’s existing (primary) cell network, offering reduced cellular data rates when a user is within the secondary network’s range. This secondary network would be significantly cheaper to construct because its two major costs (backhaul & spectrum) come more or less for free: Google Fiber serves as the backhaul, and TV White Spaces plus all unlicensed spectrum (900 MHz, 2.4 GHz, 5 GHz), combined with cognitive-radio best practices, provide the network’s spectrum. Select Google Fiber customers would receive reduced Google Fiber rates for hosting the network’s picocells. Additionally, each Google Fiber home would receive proprietary hardware in the form of a WiFi router implementing the system’s cognitive radio. This proprietary hardware can be used to do other cool stuff.

But without further ado … here are the slides … the concept is a pretty cool one, pretty relevant, probably doable, and I expect some derivation of it to happen in real life pretty soon cuz Google has the rep for pushing things forward.

Quick link to slides

Short link for PRESENTATION

I am a huge fan of Google Glass’ concept. The idea of having information pop up that only you can see, adding context to whatever is in your line of sight, is wildly interesting.

I bought a Google Glass about 4 months ago and was totally OK w/ paying $1500. Since purchasing the Glass I have swung between euphoria about some features (e.g. the POV camera) and disappointment about the product’s many shortcomings.

Recently I started cataloging all the changes I would make to Glass and wrote up a (not perfectly organized) PowerPoint presentation. More recently Google announced and released Android Wear, which is very much in line with some of my suggestions, so I felt validated that my ideas were relevant and thought I should share them.

Hopefully some of my other ideas are in line w/ what Google is thinking, because a technology like Glass is certain to end up in your household, just not in its current form.

Without further ado … CLICK FOR PRESENTATION


* On re-reading my presentation, I see it is pretty poorly organized. The interesting ideas start around page 20 and the even better ones start at page 30, which is to say the first 20 are boring🙂

I started the AlchemyDB project in early 2010. After two years, the project had graduated to become a hybrid Document-store/GraphDB/RDBMS/redis-store. AlchemyDB was then acquired by Aerospike in early 2012. The next 2 years were spent integrating AlchemyDB’s codebase/architecture/philosophy into Aerospike, eventually yielding Aerospike 3. Four years after its inception, AlchemyDB has provided the right ammunition to catapult Aerospike into the visionary enterprise NOSQL quadrant.

Integrating AlchemyDB into Aerospike was, of course, much harder than expected. The bulk of this integration work was accomplished not by merging code, but by redoing AlchemyDB’s innovative features in Aerospike’s architecture. Interestingly, the merging of the two platforms didn’t just result in features being dropped (e.g. GraphDB support); it also took an uncharted course, and some new features were born of the process (e.g. LargeDataTypes).

All in all, I am genuinely pleased with the architecture that emerged from the integration. Aerospike 3 is a document-store, w/ robust secondary indexing, a high performance User-Defined-Function (UDF) implementation, ultra-low-latency aggregations (e.g. <1ms possible), SSD-optimized LargeDataTypes, and support for a subset of SQL. What’s more, ALL of these new features have the same good-ole Aerospike guarantees of performance, elasticity, fault-tolerance, etc… Complexity was added w/o significant compromise on the original product’s strengths.

The rest of this post will dive into the details of Aerospike 3’s LargeDataType feature. Aerospike’s LargeDataTypes are standard data-types (e.g. Stack, List, Map), but each instance is stored as many different physical pages tied together logically via a directory. LargeDataTypes deal w/ an inherent flaw in the document model: over time, documents themselves tend to become BigData; the hotter the document, the larger it becomes, and the more the DB bottlenecks on I/O & de/serialization.

Aerospike’s LargeDataTypes greatly benefit from piggybacking on Aerospike’s SSD-optimized data layer. For example, the LargeStack data-type has near-optimal I/O when the use-case is predominantly pushing and popping a small number of items. This usage pattern results in only the directory and the top page being accessed: near-optimal I/O that is independent of the size of the LargeStack, as the cold parts of the LargeStack simply sit idle on SSD.
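
As a rough illustration of that access pattern, here is a toy in-memory sketch of a paged stack. It is my own simplification (the names and page capacity are invented, and this is not Aerospike’s actual code): the takeaway is that pushes and pops only ever touch the directory and the top page, no matter how many cold pages sit below.

```python
# Toy sketch of the LargeStack layout described above: one logical
# stack stored as a directory plus fixed-capacity pages.

PAGE_CAPACITY = 128  # items per physical page (hypothetical value)

class PagedStack:
    def __init__(self):
        self.directory = []   # ordered list of page ids (the "directory")
        self.pages = {}       # page id -> list of items (stand-in for SSD pages)
        self.next_page_id = 0

    def _top_page(self):
        return self.pages[self.directory[-1]] if self.directory else None

    def push(self, item):
        top = self._top_page()
        if top is None or len(top) >= PAGE_CAPACITY:
            # Allocate a fresh top page; cold pages below stay untouched.
            pid = self.next_page_id
            self.next_page_id += 1
            self.directory.append(pid)
            self.pages[pid] = top = []
        top.append(item)  # only the directory and top page are written

    def pop(self):
        top = self._top_page()
        if top is None:
            raise IndexError("pop from empty stack")
        item = top.pop()
        if not top:  # drop the emptied page from the directory
            del self.pages[self.directory.pop()]
        return item
```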

Diving even deeper: Aerospike’s LargeDataTypes are native database types at the API level, but under the covers they are user-space server-side code making calls into Aerospike’s linked-pages API (still private as of Mar 2014). The linked-pages API enables UDFs to access raw data pages on the server. It also ensures that these raw data pages are logically linked & bound to the data-structure’s primary-key record such that they are guaranteed to always be co-located on the same server as the primary key’s record. This guarantee holds before, during, & after cluster re-orgs (both elastic and fault-tolerant).
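
The linked-pages API itself is private, so here is a conceptual sketch of how such a co-location guarantee can be achieved. Everything here (names, the key scheme, the hash) is invented for illustration; only the idea matches the description above: route every page by its owner’s primary key, so all pages resolve to the owner’s partition and migrate with it.

```python
import hashlib

NUM_PARTITIONS = 4096  # Aerospike maps records onto a fixed set of partitions

def partition_of(primary_key: str) -> int:
    # Route by a hash of the primary key only (hash choice is illustrative).
    digest = hashlib.sha1(primary_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def route(key) -> int:
    # A page key is (owner_primary_key, page_id); a plain record key is
    # just the primary key. Either way, routing uses the owner only.
    owner = key[0] if isinstance(key, tuple) else key
    return partition_of(owner)

# The record and all of its pages resolve to one partition, so when a
# cluster re-org migrates that partition, the whole group moves together.
assert route("user:42") == route(("user:42", 0)) == route(("user:42", 99))
```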

Implementing LargeDataTypes in server-side UDF space has one priceless advantage: extensibility. One customer benefited tremendously from byte-packing & compressing the data in their LargeStacks (which held the bulk of their data). This was easily accomplished by cloning the LargeStack code (we named the clone CompressedLargeStack) and inserting a few lines of compression logic into the clone’s code. The customer’s client code switched to using CompressedLargeStacks and WHAMMY: 10X space reduction.
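
A hedged sketch of that tweak: identical paged-stack logic, but each page’s payload is byte-packed & compressed on write and decompressed on read. zlib stands in for whatever the customer actually used; the names and framing here are mine, not Aerospike’s.

```python
import zlib

def pack_page(items):
    # Byte-pack the page's items, then compress the packed blob.
    raw = "\x00".join(items).encode()
    return zlib.compress(raw)

def unpack_page(blob):
    # Inverse: decompress, then split back into items.
    return zlib.decompress(blob).decode().split("\x00")

page = ["event-%d" % i for i in range(1000)]
raw_len = len("\x00".join(page).encode())
blob = pack_page(page)
assert unpack_page(blob) == page
print(raw_len, "->", len(blob), "bytes")  # repetitive data compresses well
```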

This customer story is an example of how such extensibility allows for the creation of new LargeDataTypes in UDF space. Aerospike users can now roll their own customized LargeDataTypes, as Aerospike’s engineers did for this customer’s use-case and, before that, for the LargeDataType implementations themselves. The linked-pages API, which guarantees co-location of the many pages bound to a single primary key, can be used to create a wide variety of different LargeDataTypes (e.g. Graphs, ScoreBoards, Tries, Sparse-bitmaps) in server-side user-space code. With proper data-structure architecting, these user-created LargeDataTypes can grow arbitrarily large and still perform according to strict SSD-based SLAs. This makes the linked-pages API a fantastic tool for certain high-velocity BigData use-cases.

The power of being able to define a customized data-type that:

    1. is a single logical entity, that is made up of many physical pages
    2. has strict transactional guarantees
    3. has a guarantee that all of the physical pages are located on a single server even during cluster re-orgs

enables customization down to the very data-type definition & functionality, to better fit a customer’s unique data.

My personal goal with Aerospike 3 was to deliver on NOSQL’s promise of storing & querying data in its most natural form. Enabling developers to easily roll their own data-types to better fit their unique data, and to tweak those data-types to improve performance, is an ace in the hole for those who need to continually push their system that much further.

I wrote a blog post on highscalability.com on how to get to 1 million database TPS on $5000 worth of hardware (a single machine). Hopefully there are some performance tips in the post that people can use themselves.

http://highscalability.com/blog/2012/9/10/russ-10-ingredient-recipe-for-making-1-million-tps-on-5k-har.html

I also wrote a brag-numbers post for Aerospike’s blog, and they let me quote Ricky Bobby from “Talladega Nights”

http://www.aerospike.com/blog/all-about-speed/

I will update this blog every time I write posts elsewhere; it just seems like a good idea.

– Russ

Aerospike is the former Citrusleaf: http://www.dbms2.com/2012/08/27/aerospike-the-former-citrusleaf/

Citrusleaf acquired AlchemyDB, and we are now incrementally porting AlchemyDB functionality to run on top of Citrusleaf’s proven distributed, high-availability, linearly-scalable key-value store. The first functionalities planned are: Lua with DocumentStore functionality, secondary indexes, and real-time map-reduce. Further down the road, Pregel-like distributed GraphDB functionality or some next-generation StreamingDB may be integrated in. Incrementally building AlchemyDB on top of Citrusleaf will create a distributed computing fabric: functions can be shipped to data (which lies on a horizontally scalable, low-latency storage layer), and those functions can propagate their results across the fabric, calling other functions on them.

Full info at: http://www.aerospike.com/blog/alchemydb/

I get my information on databases, datastores, big data, etc… primarily from 3 bloggers: Todd Hoff of High Scalability, Alex Popescu of myNOSQL, and Curt Monash of DBMS2. Each of them specializes in different areas, and each has their own style and purpose; taken as a whole, they cover a lot of ground w/o a lot of dilution.

In 2-10 years you will look back at Todd Hoff’s blog High Scalability and it will contain everything that is happening right now. He has a keen insight into which technologies are fads and which may lead to big changes, and he is not at all full of shit, which is next to impossible given this task. He dabbles in reporting on truly innovative technologies, and he actually understands them. His style is strictly facts, and he rarely bad-mouths people/ideas/stuff. His blog is the remedy for the hardened cynic who thinks we are making no important advances. His weekly “Stuff the Internet Says on Scalability” is ALWAYS good for at least 3 links to stuff I find interesting (which is 3 links more than every other summary email I get).

Alex Popescu is the go-to guy for NOSQL. His blog myNOSQL covers the ENTIRE NOSQL gamut (GraphDBs, DocumentStores, KeyValueStores, Hadoop/MapReduce, etc…). He quickly sees thru most of the bullshit in the NOSQL world and clearly explains the differences in a movement that is full of confusion. He likes to tear people’s points apart; his points are valid, and he also blasts NOSQL ideas/approaches/etc… when they have it coming. His tweet stream (@al3xandru) is a raging river of NOSQL information; it will keep you on top of the NOSQL game (once you learn how to wade thru it). Plug into it, and you can be up on most of NOSQL in probably a month.

Curt Monash knows the RDBMS market, especially the analytics market, better than you know anything🙂 He has been in the game forever, and he earns money w/ his blog DBMS2, so it can’t be called 100% objective, but he has the type of abrasive personality that is only comfortable telling mostly the truth, so IMO the info in his blog is basically objective. RDBMS technologies are relatively mature, advanced, and widespread, so having a good summary of what is going on in that market is a MUST for any fan of data. Monash is a definitions junkie, which can be boring/tiring to read, but it represents a mature approach and does help make sense of such a large, complicated, and polluted-by-enterprise-generated-bullshit market. Having a guy w/ such experience, who is still very up to date, reporting on one of the oldest (yet most active) fields in computing is of great value to all of us.

In conclusion:
Real smart people are reading these 3 blogs, learning what other real smart people are doing, and forming new, even smarter ideas. For this to happen, a medium of information exchange is required: one that does not waste smart people’s time, doesn’t insult smart people’s intelligence w/ obvious marketing ploys, and is written in a style that sparks their imagination. So go read their blogs; you will learn stuff🙂