Tag Archives: nosql

ScyllaDB meets Gentoo Linux

I am happy to announce that my work on packaging ScyllaDB for Gentoo Linux is complete!

Happy or curious users are very welcome to share their thoughts and ping me to get it into portage (which will very likely happen).

Why Scylla?

Ever heard of the Cassandra NoSQL database and its Java GC/heap space problems?… If you have, you already get it 😉

I will not go into the details as their website does that far better than I could, but I got interested in Scylla because it fits the Gentoo Linux philosophy very well. If you remember my writing about packaging RethinkDB for Gentoo Linux, I think we have a great match with Scylla as well!

  • it is written in C++ so it plays very well with emerge
  • the code quality is so great that building it does not require heavy patching on the ebuild (feels good to be a packager)
  • the code relies on system libs instead of bundling them in the sources (hurrah!)
  • performance tuning is handled by smart scripting and automation, so the relationship between the project and the underlying hardware stays strong

I believe these are good enough reasons to go further and that such a project can benefit from a source-based distribution like Gentoo Linux. Of course, compiling on multiple systems is a challenge for such a database, but one does not improve by staying in their comfort zone.

Upstream & contributions

Packaging is a great excuse to get to know the source code of a project but more importantly the people behind it.

So here came my first contributions to Scylla: getting Gentoo Linux detected and supported as a Linux distribution in the different scripts and tools used to automatically set up the machine it will run upon (fear not, I contributed bash & python, not C++)…
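
To give an idea of what this kind of contribution looks like, here is a minimal and purely illustrative sketch of distribution detection in Python — not the actual Scylla code, just the general idea (the /etc/gentoo-release marker check is my assumption here):

import os
import platform

def is_gentoo():
    # Gentoo systems ship /etc/gentoo-release, which is the simplest marker to test
    return os.path.isfile('/etc/gentoo-release')

if is_gentoo():
    print('Gentoo Linux detected, applying the Gentoo-specific setup path')
else:
    print('Unhandled distribution: %s' % platform.platform())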

Even though I expected to contribute using GitHub PRs and had to change my habits to a git-patch + mailing list combo, I was warmly welcomed and received positive and genuine interest in the contributions. They got merged quickly and thanks to them you can install and experience Scylla on Gentoo Linux without heavy patching on our side.

Special shout out to Pekka, Avi and Vlad for their welcoming and insightful code reviews!

I have some open contributions pushing further on the python code QA side to bring the tools to a higher coding standard. Seeing how serious upstream is about this, I have faith that they will get merged and become a good base for other contributions.

A last note about reaching them: I am a bit sad that they’re not using IRC on freenode to communicate (I instinctively joined #scylla and found myself alone) but they’re on Slack (those “modern folks”) and pretty responsive on the mailing lists 😉

Java & Scylla

Even though Scylla is a rewrite of Cassandra in C++, the project still relies on some external tools used by the Cassandra community which are written in Java.

When you install the scylla package on Gentoo, you will see that these two Java-based dependencies get pulled in:

  • app-admin/scylla-tools
  • app-admin/scylla-jmx

It pained me a lot to package those (thanks to the help of @monsieurp) but they are building and working as expected, so the packaging of the whole Scylla project is now pretty solid.

emerge dev-db/scylla

The scylla packages are located in the ultrabug overlay for now until I test them even more and ultimately put them in production. Then they’ll surely reach the portage tree with the approval of the Gentoo java team for the app-admin/ packages listed above.

I provide a live ebuild (scylla-9999 with no keywords) and ebuilds for the latest major version (2.0_rc1 at the time of writing).

It’s as simple as:

$ sudo layman -a ultrabug
$ sudo emerge -a dev-db/scylla
$ sudo emerge --config dev-db/scylla
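
Once the service is configured and running, Scylla speaks the same CQL protocol as Cassandra, so regular Cassandra clients should work against it. As a hedged smoke test (assuming a single node listening on localhost with the default CQL port 9042), something like this with the Python cassandra-driver should do:

from cassandra.cluster import Cluster

# connect to the local Scylla node and ask it to introduce itself
cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()
for row in session.execute('SELECT cluster_name, release_version FROM system.local'):
    print(row.cluster_name, row.release_version)
cluster.shutdown()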

Try it out and tell me what you think; I hope you’ll start considering and using this awesome database!

RethinkDB on Gentoo Linux

It was about time I added a new package to portage and I’m very glad it’s RethinkDB and its python driver !

  • dev-db/rethinkdb
  • dev-python/python-rethinkdb

For those of you who have never heard about this database, I urge you to head over to their excellent website and have a good read.

Packaging RethinkDB

RethinkDB has been on my radar for quite a long time now and when they finally got serious enough about high availability I also got serious about using it at work… and obviously “getting serious” + “work” means packaging it for Gentoo Linux 🙂

Quick notes on packaging for Gentoo Linux:

  • This is a C++ project so it feels natural and easy to grasp
  • The configure script already offers a way of using system libraries instead of the bundled ones which is in line with Gentoo’s QA policy
  • The only grey area in the above statement is the web UI, which is shipped precompiled

RethinkDB has a few QA violations which the ebuild is addressing by modifying the sources:

  • There is a configure.default which tries to force some configure options
  • The configure script is missing some options to avoid auto-installing some docs and init scripts
  • The build system does its best to guess the CXX compiler but it should offer an option to set it directly
  • The build system does not respect users’ CXXFLAGS and tries to force the usage of -O3

Getting our hands into RethinkDB

At work, we finally found the excuse to get our hands into RethinkDB when we challenged ourselves with developing a quiz game for our booth as a sponsor of EuroPython 2016.

It was a simple game where you were presented a question and four possible answers and you had 60 seconds to answer as many of them as you could. The trick is that we wanted to create an interactive game where the participant played on a tablet while the rest of the audience got to see who was currently playing and follow their score progression + their ranking for the day and the week in real time on another screen !

Another challenge for us in the creation of this game is that we only used technologies that were new to us and even switched roles, so the backend python guys would be doing the frontend javascript and vice versa. The stack finally went like this :

  • Game quiz frontend : Angular2 (TypeScript)
  • Game questions API : Go
  • Real time scores frontend : Angular2 + autobahn
  • Real time scores API : python 3.5 asyncio + autobahn
  • Database : RethinkDB

As you can see from the stack, we chose RethinkDB for its main strength : real time updates pushed to the connected clients. The real time scores frontend and API were bound together using autobahn while the API consumed the changefeeds (realtime updates coming from the database) and broadcast them to the frontend.
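
To give an idea of how little code this requires, here is a minimal sketch of consuming a changefeed with the python driver (the table and field names are made up for the example, not our actual schema):

import rethinkdb as r

conn = r.connect(host='localhost', port=28015, db='quiz')
# every insert/update/delete on the table is pushed to us as an {'old_val': ..., 'new_val': ...} document
feed = r.table('scores').changes().run(conn)
for change in feed:
    print('score update:', change.get('new_val'))
    # this is where our API would broadcast the update to the frontend through autobahn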

What we learnt about RethinkDB

  • We’re sure that we want to use it in production !
  • The ReQL query language is a pipeline, so its syntax is quite tricky to get familiar with (even more when coming from mongoDB like us); it is as powerful as it can be disconcerting
  • Realtime changefeeds have limitations which are sometimes not so easy to understand or find out about (especially the order_by / secondary index part)
  • Changefeed limitations are a constraint you have to take into account in your data modeling !
  • Changefeeds + order_by can do the ordering for you when using the include_offsets option, this is amazing (see the sketch after this list)
  • The administration web UI is awesome
  • Proper python 3.5 asyncio support is still not merged, this is a pain !
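
To illustrate that ordering point, here is a hedged sketch of a top-10 leaderboard kept ordered by the database itself, assuming a secondary index named score on the table (illustrative names only):

import rethinkdb as r

conn = r.connect(host='localhost', port=28015, db='quiz')
# order_by + limit changefeeds require a secondary index, here named 'score'
feed = (r.table('scores')
        .order_by(index=r.desc('score'))
        .limit(10)
        .changes(include_offsets=True)
        .run(conn))
for change in feed:
    # old_offset / new_offset tell us which position in the top-10 changed,
    # so the frontend can move rows around without recomputing the ranking
    print(change.get('old_offset'), change.get('new_offset'), change.get('new_val'))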

Try it out

Now that you can emerge rethinkdb I encourage you to try this awesome database.

Be advised that the ebuild also provides a way of configuring your rethinkdb instance by running emerge --config dev-db/rethinkdb !

I’ll now try to get in touch with upstream to get Gentoo listed on their website.

MongoDB 3.0 upgrade in production : step 4 victory !

In my last post, I explained the new hope we had in following some newly added recommended steps before trying to migrate our production cluster to mongoDB 3.0 WiredTiger.

The most demanding step was migrating all our production servers’ data storage filesystems to XFS, which obviously required a resync of each node… But we got there pretty fast and were about to try again as 3.0.5 was getting ready, until we saw this bug coming !

I guess you can understand why we decided to wait for 3.0.6… which eventually got released with a more peaceful changelog this time.

The 3.0.6 crash test

We decided as usual to test the migration to WiredTiger in two phases.

  1. Migrate all our secondaries to the WiredTiger engine (full resync). Wait a week to see if this has any effect on our cluster.
  2. Switch all the MMAPv1 primary nodes to secondary and let our WiredTiger secondary nodes become the primary nodes of our cluster (a minimal sketch of the step down follows this list). Pray hard that this time it will not break under our workload.
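
For reference, the switch itself boils down to asking each primary to step down so that an up-to-date WiredTiger secondary gets elected. A minimal sketch with pymongo (the host name is made up; the same can be done from the mongo shell with rs.stepDown()):

from pymongo import MongoClient
from pymongo.errors import AutoReconnect

client = MongoClient('mongodb://prod-node1:27017')
try:
    # ask the primary to step down for 120 seconds so another member takes over
    client.admin.command('replSetStepDown', 120)
except AutoReconnect:
    # the primary closes all connections when stepping down, so this error is expected
    pass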

Step 1 results were good, nothing major changed and even our mongo dumps were still working this time (yay!). One week later, everything was still working smoothly.

Step 2 was the big challenge which failed horribly last time. Needless to say, we were quite stressed when doing the switch. But it worked smoothly and nothing broke + performance gains were huge !

The results

Nothing speaks better than metrics, so I’ll just comment them quickly as they speak for themselves. I obviously can’t disclose the scales, sorry.

Insert-only operations gained 25x performance

Upsert-heavy operations gained 5x performance

Disk I/O also showed mercy to the disks, with much lower overall usage. This is due to WiredTiger’s superior caching and disk flushing mechanisms.

Disk usage decreased dramatically thanks to WiredTiger compression

The last and next step

As of today, we still run our secondaries with the MMAPv1 engine and are waiting a few weeks to see if anything goes wrong in the long run. Should we need to roll back, we’d be able to do so very easily.

Then when we get enough uptime using WiredTiger, we will make the final switch to a full Roarring production cluster !

MongoDB 3.0 upgrade in production : step 3 hope

In our previous attempt to upgrade our production cluster to 3.0, we had to roll back from the WiredTiger engine on primary servers.

Since then, we switched our whole cluster back to 3.0 MMAPv1, which has brought us better performance than 2.6 with no instability.

Production checklist

We decided to use this increase in performance to allow us some time to fulfil the entire production checklist from MongoDB, especially the migration to XFS. We’re slowly upgrading our servers’ kernels and resynchronising our data set after migrating from ext4 to XFS.

Ironically, the strong recommendation of XFS in the production checklist appeared 3 days after our failed attempt at WiredTiger… This is frustrating but gives some kind of hope.

I’ll keep on posting on our next steps and results.

Our hero WiredTiger Replica Set

While we were battling with our production cluster, we got a spontaneous major increase in the daily volumes from another platform which was running on a single Replica Set. This application is write intensive and very disk I/O bound. We were killing the disk I/O with an almost continuous 100% usage on the disk write queue.

Despite our frustration with WiredTiger so far, we decided to give it a chance considering that this time we were talking about a single Replica Set. We were very happy to see WiredTiger live up to its promises with an almost shocking serenity.

Disk I/O went down dramatically, almost as if nothing was happening any more. Compression did magic on our disk usage and our application went Roarrr !

MongoDB 3.0 upgrade in production : step 2 failed

In my previous post regarding the migration of our production cluster to mongoDB 3.0 WiredTiger, we successfully upgraded all the secondaries of our replica-sets with decent performance and (almost, read on) no breakage.

Step 2 plan

The next step of our migration was to test our work load on WiredTiger primaries. After all, this is where the new engine would finally demonstrate all its capabilities.

  • We thus scheduled a step down from our 3.0 MMAPv1 primary servers so that our WiredTiger secondaries would take over.
  • Not migrating the primaries was a safety net in case something went wrong… And boy it went so wrong we’re glad we played it safe that way !
  • We rolled back after 10 minutes of utter bitterness.

The failure

After all the wait and expectation, I can’t express our level of disappointment at work when we saw that the WiredTiger engine could not handle our work load. Our application immediately started to throw 50 to 250 WriteConflict errors per minute !

Turns out that we are affected by this bug and that, of course, we’re not the only ones. So far it seems that it affects collections with :

  • heavy insert / update workloads
  • a unique index (or compound index)

The breakages

We also discovered that we’re affected by a weird new mongodump behaviour where the dumped BSON file does not contain the number of documents that mongodump said it was exporting. This is clearly a new problem because it happened right after all our secondaries switched to WiredTiger.

Since we have to ensure strong consistency of our exports and the mongoDB guys don’t seem so keen on moving on the bug (which I surely can understand), there is a large possibility that we’ll have to roll back even the WiredTiger secondaries altogether.

Not to mention that since the 3.0 version, we experience some CPU overloads crashing the entire server on our MMAPv1 primaries that we’re still trying to tackle before opening another JIRA bug…

Sad panda

Of course, any new major release such as 3.0 comes with its share of headaches and bugs. We were ready for this, hence the safety steps we took to ensure that we could roll back on any problem.

But as a long time advocate of mongoDB I must admit my frustration, even more after the time it took to get this 3.0 out and all the expectations that came with it.

I hope I can share some better news on the next blog post.

uWSGI, gevent and pymongo 3 threads mayhem

This is a quick heads-up post about a behaviour change when running a gevent based application using the new pymongo 3 driver under uWSGI and its gevent loop.

I was naturally curious about testing this brand new and major update of the python driver for mongoDB so I just played it dumb : update it and give it a try on our existing code base.

The first thing I noticed was that a vast majority of our applications were suddenly unable to reload gracefully and were force-killed by uWSGI after some time !

worker 1 (pid: 9839) is taking too much time to die...NO MERCY !!!

uWSGI’s gevent-wait-for-hub

All our applications must be able to be gracefully reloaded at any time. Some of them are spawning quite a few greenlets on their own, so as an added measure of making sure we never lose any running greenlet we use the gevent-wait-for-hub option, which is described as follows :

wait for gevent hub's death instead of the control greenlet

… which does not mean a lot but is explained in a previous uWSGI changelog :

During shutdown only the greenlets spawned by uWSGI are taken in account,
and after all of them are destroyed the process will exit.

This is different from the old approach where the process wait for
ALL the currently available greenlets (and monkeypatched threads).

If you prefer the old behaviour just specify the option gevent-wait-for-hub

pymongo 3

Compared to its previous 2.x versions, one of the key aspects of the new pymongo 3 driver is its intensive use of threads to handle server discovery and connection pools.

Now we can relate this very fact to the gevent-wait-for-hub behaviour explained above :

the process wait for ALL the currently available greenlets
(and monkeypatched threads)

This explained why our applications were hanging until the reload-mercy (force kill) timeout option of uWSGI hit the fan !

conclusion

When using pymongo 3 with the gevent-wait-for-hub option, you have to keep in mind that all of pymongo’s threads (monkey patched into greenlets) are considered active greenlets and will thus be waited on before uWSGI recycles the worker !

Two options come to mind to handle this properly :

  1. stop using the gevent-wait-for-hub option and change your code to use a gevent pool group to make sure that all of your important greenlets are taken care of when a graceful reload happens (this is how we do it today, the gevent-wait-for-hub option usage was just over-protective for us); a sketch follows this list.
  2. modify your code to properly close all your pymongo connections on graceful reloads.
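
Here is a minimal sketch of option 1, under the assumption of an application spawning its own background greenlets (names are illustrative): keep them in a gevent pool group and clean that group up on graceful reload instead of relying on gevent-wait-for-hub.

import gevent
from gevent.pool import Group

# all the greenlets we actually care about live in this group
workers = Group()

def background_task():
    while True:
        # ... do the application's periodic work ...
        gevent.sleep(1)

workers.spawn(background_task)

def graceful_shutdown():
    # call this from your application's shutdown path on graceful reload:
    # it only terminates our own greenlets, pymongo's monkey patched threads
    # are left for uWSGI to reap along with the worker
    workers.kill(timeout=5)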

Hope this will save some people the trouble of debugging this 😉

MongoDB 3.0 upgrade in production : first steps

We’ve been running a nice mongoDB cluster in production at my company for several years now.

This cluster suits quite a wide range of use cases, from very simple configuration collections to ones with complex queries and real time analytics. This versatility has been the strong point of mongoDB for us since the start as it allows different teams to address their different problems using the same technology. We also run some dedicated replica sets for other purposes and network segmentation reasons.

We’ve waited a long time to see the latest 3.0 release features happening. The new WiredTiger storage engine arrived at the right time for us since we had reached the limits of our main production cluster and were considering alternatives.

So, as surprising as it may seem, it’s the first part of our mongoDB architecture we’re upgrading to v3.0 as it has become a real necessity.

This post is about sharing our first experience of an ongoing and carefully planned major upgrade of a production cluster and does not claim to be a definitive migration guide.

Upgrade plan and hardware

The upgrade process is well covered in the mongoDB documentation already but I will list the pre-migration base specs of every node of our cluster.

  • mongodb v2.6.8
  • RAID1 spinning HDD 15k rpm for the OS (Gentoo Linux)
  • RAID10 4x SSD for mongoDB files under LVM
  • 64 GB RAM

Our overall philosophy is to keep most of the configuration parameters to their default values to start with. We will start experimenting with them when we have sufficient metrics to compare with later.

Disk (re)partitioning considerations

The master-gets-all-the-writes architecture is still one of the main limitations of mongoDB and this does not change with v3.0, so you obviously need to challenge your current disk layout to take advantage of the new WiredTiger engine.

mongoDB 2.6 MMAPv1

Considering our cluster data size, we were forced to use our 4 SSDs in a RAID10 as it was the best compromise to preserve performance while providing sufficient data storage capacity.

We often reached the limits of our I/O and moved the journal out of the RAID10 to the mostly idle OS RAID1 with no significant improvements.

mongoDB 3.0 WiredTiger

The main consideration point for us is the new feature allowing the indexes to be stored in a separate directory. So we anticipated the reduction in data storage consumption thanks to the snappy compression and decided to split our RAID10 into two dedicated RAID1 arrays.

Our test layout so far is :

  • RAID1 SSD for the data
  • RAID1 SSD for the indexes and journal

Our first node migration

After migrating our mongos and config servers to 3.0, we picked our worst-performing secondary node to test the actual migration to WiredTiger. After all, we couldn’t do worse, right ?

We are aware that the strong suit of WiredTiger is actually about having the writes directed to it and will surely share our experience of this aspect later.

compression is bliss

To make this comparison accurate, we fully resynchronized this node before migrating to WiredTiger so we could compare a non-fragmented MMAPv1 disk usage with the WiredTiger compressed one.

While I can’t disclose the actual values, compression worked like a charm for us with a gain ratio of 3.2 on disk usage (data + indexes) which is way beyond our expectations !

The DB Storage graph from MMS even showed a gain ratio of 4, surely due to the indexes now being on a separate disk.

memory usage

As with the disk usage, the node had been running hot on MMAPv1 before the actual migration so we can compare memory allocation/consumption of both engines.

Here again the memory management of WiredTiger and its cache shows great improvement. For now, we left the default setting which has WiredTiger limit its cache to half the available memory of the system. We’ll experiment with this setting later on; a quick way to keep an eye on it is sketched below.
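
As a hedged illustration (the host name is made up), the WiredTiger cache fill can be watched through serverStatus with pymongo:

from pymongo import MongoClient

client = MongoClient('mongodb://prod-node1:27017')
# serverStatus exposes a wiredTiger.cache section with byte counters
wt_cache = client.admin.command('serverStatus')['wiredTiger']['cache']
used = wt_cache['bytes currently in the cache']
limit = wt_cache['maximum bytes configured']
print('WiredTiger cache usage: %.1f%%' % (100.0 * used / limit))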

connections

I’m still not sure of the actual cause yet, but the connection count is higher and steadier than before on this node.

First impressions

The node has been running smoothly for several hours now. We are getting acquainted with the new metrics and statistics from WiredTiger. The overall node and I/O load is better than before !

While all the above graphs show huge improvements, there is no major change from our applications’ point of view. We didn’t expect any since this is only one node in a whole cluster and the main benefits will also come from the master node migrations.

I’ll continue to share our experience and progress about our mongoDB 3.0 upgrade.

mongoDB 3.0.1

This is a much awaited version bump coming to portage and I’m glad to announce it’s made its way to the tree today !

Let me right away thank Tomas Mozes and Darko Luketic a lot for their amazing help, feedback and patience !

mongodb-3.0.1

I introduced quite a few changes in this ebuild which I wanted to share with you and warn you about. MongoDB upstream have stripped quite a bunch of things out of the main mongo core repository, which I have in turn split into separate ebuilds.

Major changes :

  • respect upstream’s optimization flags : unless in a debug build, the user’s optimization flags will be ignored to prevent crashes and weird behaviour.
  • shared libraries for C/C++ are not built by the core mongo repository anymore, so I removed the static-libs USE flag.
  • various dependency optimizations to trigger a rebuild of mongoDB when one of its linked dependencies changes.

app-admin/mongo-tools

The new tools USE flag allows you to pull a new ebuild named app-admin/mongo-tools which installs the commands listed below. Obviously, you can now just install this package if you only need those tools on your machine.

  • mongodump / mongorestore
  • mongoexport / mongoimport
  • mongotop
  • mongofiles
  • mongooplog
  • mongostat
  • bsondump

app-admin/mms-agent

The MMS agent now has some real version numbers and I don’t have to host their source on Gentoo’s infra woodpecker. At the moment only the monitoring agent is available; should anyone request the backup one, I’ll be glad to add its support too.

dev-libs/mongo-c(xx)-driver

I took this opportunity to add the dev-libs/mongo-cxx-driver to the tree and bump the mongo-c-driver one. Thank you to Balint SZENTE for his insight on this.

mongoDB v2.6.1

This is a great pleasure to announce the version bump of mongoDB to the brand new v2.6 stable branch !

This bump is not trivial and comes with a lot of changes, please read carefully as you will have to modify your mongodb configuration files !

ebuild changes

As a long time request, and to be more in line with upstream’s recommendations (and systemd support), I moved the configuration of the mongoDB daemons to /etc, so make sure to adapt to the new YAML format.

  • the mongodb configuration moved from /etc/conf.d/mongodb to the new YAML formatted /etc/mongodb.conf
  • the mongos configuration moved from /etc/conf.d/mongos to the new YAML formatted /etc/mongos.conf
  • the MMS agent configuration file has moved to /etc/mms-agent.conf

The init scripts have also been taken care of :

  • new and modern mongodb, mongos and mms-agent init scripts
  • their /etc/conf.d/ configuration files are only used to modify the init script’s behavior

highlights

The changelog is long and the goal of this post is not to repeat the already well covered release notes, but here are my favorite features :

  • MongoDB preserves the order of the document fields following write operations.
  • A new write protocol integrates write operations with write concerns. The protocol also provides improved support for bulk operations.
  • MongoDB can now use index intersection to fulfill queries supported by more than one index.
  • Index Filters to limit which indexes can become the winning plan for a query.
  • Background index build allowed on secondaries.
  • New cleanupOrphaned command to remove orphaned documents from a shard.
  • usePowerOf2Sizes is now the default allocation strategy for all new collections.
  • Removed the upper limit of 20,000 connections for maxIncomingConnections on mongod and mongos.
  • New cursor.maxTimeMS() and corresponding maxTimeMS option for commands to specify a time limit (see the quick example after this list).
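
As a quick illustration of that last point, with an up-to-date pymongo the time limit can be set straight on a cursor (the collection and query are made up for the example):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient('mongodb://localhost:27017')
coll = client.mydb.events
try:
    # ask the server to abort the query if it runs for more than 100 ms
    docs = list(coll.find({'type': 'click'}).max_time_ms(100))
except ExecutionTimeout:
    print('query exceeded its 100 ms budget')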

Make sure you follow the official upgrade plan to upgrade from a previous version, this release is not a simple drop-in replacement.

thanks

Special thanks go to Johan Bergström for his continuous efforts and responsiveness as well as Mike Limansky and Jason A. Donenfeld.