Whoops, I built a data replication engine (1/3).

Hello, my name is Matt and I made a distributed data replication system. I did this in full possession of the knowledge that it is something no sane programmer should ever contemplate, being akin to writing your own crypto in the lexicon of programmer sins. This is my confession.

The system is called Basis, and it exists to provide the sync service behind Shopi, my company’s iPhone shopping list app. This post is the first of an epic three-part series relating the story of how it came to be, why I thought it was a good idea to meddle with things I have no ken of, and how I approached the challenges it presented.


Just over three years ago, around the start of 2011, I decided to wade into the already over-crowded category of iPhone shopping list apps. Entering an app category that was already so extremely commoditised — even as of 2011 — wouldn’t on the face of it seem like a smart plan. But I had personally experienced the mediocrity of the available shopping list apps while trying to come up with a good system for both me and my wife to manage shopping trips. All of the apps I tried lacked what seemed to me to be crucial features.

Specifically, I wanted an app that:

  • Synced between my phone and my wife’s so we could make and shop from a shared list. (A number of apps do this now.)

  • Made it easy to notify each other when one of us went shopping, so we’d know when the shopping was done and have a chance to add items at the last minute (corollary: the app’s sync had better be real-time, or close to it).

  • Made it quick and easy to add items, sort them, and cross them off — you know, the core functionality of a shopping list. It still boggles my mind that most shopping apps fail to be good at actually making a list and taking it shopping.

  • Didn’t require me to fuss about assigning items to aisles or categories just to get them to sort into a reasonable order automatically. The app should learn the order I put my stuff in and use that; I don’t want to ‘manage’ my shopping list app.1

Still with me? I know, detailing the requirements for a shopping list app is riveting stuff.

How Hard Could It Be?

So I saw an opportunity to create something uniquely useful that wasn’t being provided by the competition. Plus, it’s just a shopping list app; I should be able to knock that out in a couple of months … six, tops.

Requirement number one above: I wanted to sync a shopping list between two or more devices. A humble list of textual items — how hard could it be?

So I’ll give myself some credit because, as I began thinking thoughts like that, little alarm bells started tinkling in my programmer hind-brain: this is data replication — it’s hard by definition. Computer science hard.

Data replication is, like cache invalidation and naming things, one of the tough problems of computer science.2 The naïve solution most of us would implement will generally “work on my machine”, but then the sordid conditions of the real world intrude and things start to break in random and frighteningly non-reproducible ways.

Networks are unreliable. That goes doubly so for mobile networks: messages will be lost. Regardless of reliability, devices may not even be on the network all the time, so either way data sets replicated over a network will eventually diverge and conflict. Unfortunately, it doesn’t matter that it’s just a simple shopping list; these basic realities will still end up ruining your fun.
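
To make the divergence problem concrete, here’s a toy sketch in TypeScript (illustrative only, nothing to do with Basis’s internals): two devices edit copies of the same list while offline, and a naïve last-write-wins sync silently throws one user’s edit away.

```typescript
// Toy illustration: two offline replicas diverge, and a naive
// last-write-wins sync loses one of the edits.
type Item = { name: string; crossedOff: boolean };
type List = { items: Item[]; modifiedAt: number };

let server: List = {
  items: [{ name: "Bread", crossedOff: false }],
  modifiedAt: 0,
};

// Both devices copy the list, then edit it while disconnected.
const deviceA: List = structuredClone(server);
const deviceB: List = structuredClone(server);

deviceA.items[0].crossedOff = true; // A crosses off Bread...
deviceA.modifiedAt = 1;
deviceB.items.push({ name: "Milk", crossedOff: false }); // ...B adds Milk.
deviceB.modifiedAt = 2;

// Naive whole-blob sync: whichever device uploads last wins.
for (const upload of [deviceA, deviceB]) {
  if (upload.modifiedAt > server.modifiedAt) server = upload;
}

console.log(server.items);
// [{ name: "Bread", crossedOff: false }, { name: "Milk", ... }]
// B's upload won, so A's cross-off has silently vanished.
```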


None of this will be news to anyone who’s done any network programming. I certainly had no illusions; in fact, I’d already violated the thou-shalt-not-replicate commandment on a previous project, where I designed a real-time replication framework that largely ignored all the hard aspects of the problem.3 But that sort of fast-and-loose approach wasn’t going to be good enough for a service that people would want to pay for.

Here’s what I wanted in a nutshell:

  • A server that stores blobs of JSON-like data and serves them out to mobile devices.

  • When a blob changes on the server, a delta (diff) describing the change is sent to any connected device with an interest in that blob.

  • When a blob on a device changes, a delta is sent back to update the server.

  • For fail-over, backup, and scaling purposes, multiple replicated servers could be deployed: a client should be able to connect to any one of them without seeing any difference.

In my app, clients would simply share and update the same shopping list blob. Another blob would hold each user’s preferences, including their previously used items, so preferences would sync across all of that user’s devices.
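
To make the shape of this concrete, here’s a rough sketch of what such a delta protocol might look like in TypeScript. The message and field names are invented for illustration (this isn’t Basis’s actual wire format), and the operations borrow the vocabulary of JSON Patch (RFC 6902).

```typescript
// Hypothetical message shapes for the delta-sync scheme described
// above. Illustrative only -- not Basis's actual wire format.

// A single change to a JSON blob, in the style of JSON Patch (RFC 6902).
type Op =
  | { op: "add"; path: string; value: unknown }
  | { op: "remove"; path: string }
  | { op: "replace"; path: string; value: unknown };

// Client -> server: "here's what changed in blob X since version N".
interface DeltaUpdate {
  blobId: string;      // e.g. "list:our-family"
  baseVersion: number; // the version this delta was computed against
  ops: Op[];
}

// Server -> client: the same delta, pushed to every connected device
// that has registered an interest in that blob.
interface DeltaNotify extends DeltaUpdate {
  newVersion: number;
}

// Crossing off one item is a one-op delta, no matter how big the list is:
const crossOffMilk: DeltaUpdate = {
  blobId: "list:our-family",
  baseVersion: 41,
  ops: [{ op: "replace", path: "/items/3/crossedOff", value: true }],
};
```

The baseVersion field is doing the interesting work here: it lets the server notice when two deltas were computed against the same starting point, which is exactly the divergence-and-conflict situation described earlier.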

The Golden Rule

Contrast this with classic designs where clients of a data store (NoSQL or otherwise) operate synchronously: they query for some data, perhaps modify it, and send it back as an update. If there’s any conflict resolution, it’s done on the server. In this model there’s no optimisation of the exchanged data: the client has to send the whole thing back after modifying it, and it has to poll and pull the whole thing down again if something else changes it later, even if that change is trivial.

Why do I want to avoid that model and push deltas? Because I want real-time updates, and that means sending changes as soon as they occur, which in turn means lots of little updates. If the shopping app simply sends the entire list every time the user adds, crosses off, or moves an item, the overhead will quickly add up.

The golden rule: update size should be proportional to the size of the change, not the size of the data.

On the face of it, this might seem like premature optimisation. Most shopping lists will be tiny: even a 100-item list will probably fit in a single TCP packet. But a user’s previously-used items list can grow much bigger, and sending that over and over again would be wasteful. Waste is waste, and it’s especially important to avoid when working within the data and power budget of a mobile device.
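
As a back-of-envelope check on that claim (reusing the hypothetical message shapes sketched earlier), compare re-uploading a 500-item previously-used-items blob against sending a one-op delta for the same single change:

```typescript
// Rough size comparison: whole-blob upload vs. a single-op delta.
// The delta format is the hypothetical one sketched earlier.
const history = Array.from({ length: 500 }, (_, i) => ({
  name: `item-${i}`,
  lastBought: "2014-01-01",
}));

const wholeBlob = JSON.stringify({ items: history });

const delta = JSON.stringify({
  blobId: "prefs:matt",
  baseVersion: 12,
  ops: [
    {
      op: "add",
      path: "/items/-", // JSON Patch syntax: append to the array
      value: { name: "vegemite", lastBought: "2014-02-01" },
    },
  ],
});

console.log(wholeBlob.length); // ~23,000 bytes, re-sent on every change
console.log(delta.length);     // ~130 bytes, proportional to the change
```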

Someone Must Have Done This Before, Right?

This kind of facility sounded so general that it seemed like exactly the sort of thing someone would already have built. I started keeping notes on possible solutions, and they quickly got very, very long.

Most of the initial leads immediately turned out to be untenable. I looked at using master-master RDBMS replication, but that turned out to be fraught with many levels of fraughtness, at least for MySQL and Postgres. Not to mention that an RDBMS replication solution on the server didn’t address the client side at all.

On the other side of the fence there were the NoSQL databases: Redis, Mongo, Cassandra, Riak, Membase, Neo4j, etc. And in fact CouchDB would have been close to perfect … if I could have run a CouchDB server on the client, which would have been an insane thing to contemplate.4

Depression Sets In

It became depressingly apparent that, as of 2011, this was still bleeding-edge stuff. Pushing real-time symmetric replication all the way out to the client and back wasn’t going to be a matter of downloading a module from GitHub.

(Continued in Part Two)

  1. I also really don’t need helpful product suggestions taken from a canned database of products I’ve never heard of, containing none of the things I actually buy. Such US-centric product sets often don’t contain even the most popular products bought in other English-speaking countries (e.g. Vegemite in Australia), let alone being the slightest bit useful to people in non-English-speaking countries. /rant ↩︎

  2. In fact replication is probably some sort of subset of caching in the grand scheme of things. ↩︎

  3. Livespace’s replication actually works surprisingly well, mainly because it’s used on reliable networks with always-connected clients, and where (rare) hiccups aren’t the end of the world. ↩︎

  4. If I were doing this now, Couchbase Lite from Couchbase would be something I’d look at seriously. They even have a simple demonstration shared shopping list app. These days there are also hosted synchronisation services, such as Simperium. ↩︎