Benchmarking Cassandra Models

During the last couple of weeks we’ve been very focused on deeply understanding Cassandra’s write and read paths so that we can build our models in the best possible way.

After all that investigation I can summarise the steps we follow to assess a model’s performance.

1. Always have good practices and anti-patterns in mind

Cassandra is known to be a very resilient and highly performant platform, but only so long as you follow the rules and work with the underlying data model.

There are quite a few of these rules so my recommendation is to read through them quickly before thinking of your model.

There are lots of places online where you can read about the good practices to follow and the anti-patterns to avoid when modelling data for Cassandra, but I keep my own repository; here are the links:

Once you have all these ideas fresh in your mind, it’s time to start thinking about your data model.

2. Design your data model and alternatives

Following the ideas and principles learnt in the previous step, design your data model and try to think of alternatives. These alternatives usually come as minor adjustments to the model, and the best practices alone won’t tell you which of the options is the better choice.

Here I’ll provide two examples we’ve recently had:

  1. Given a bucketed time series with a maximum of 86,400 rows per partition, what is the best way to read an entire partition?
    a) By reading the whole partition at once
    b) By reading the whole partition in chunks (2 halves, 4 quarters, …)
  2. Given a model that stores a discretised distribution on each record, what is the best way to save the bins? (See the sketch just after this list.)
    a) By using a List column that will contain the 100 bins
    b) By having 100 columns, one for each bin
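
To make the second example concrete, here is a minimal sketch of the two candidate schemas, written as the CQL a Ruby benchmarking script might hold in heredocs (the table and column names are hypothetical, and only the first two of the 100 bin columns are spelled out):

# Alternative (a): a single List column holding the 100 bins
LIST_MODEL = <<-CQL
  CREATE TABLE distributions_list (
    id   int PRIMARY KEY,
    bins list<double>
  )
CQL

# Alternative (b): one column per bin, 100 in total
COLS_MODEL = <<-CQL
  CREATE TABLE distributions_cols (
    id      int PRIMARY KEY,
    bin_000 double,
    bin_001 double
    -- ... up to bin_099
  )
CQL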

The resulting models will meet all the good practices and avoid all the anti-patterns regardless of the final decision, so how do you decide which way to go?

3. Benchmark your alternatives

For this purpose I’ve created a script (Ruby in this case) that:

  1. Creates the table proposed by the model under test
  2. Generates PRODUCTION DATA and saves it (in memory or on disk, depending on its size)
  3. Benchmarks the applicable access patterns, in our case:
    3.1. Insertion of all the data generated in step 2
    3.2. Reading of all the data
    3.3. Updating the data

It is important that the access patterns are exactly the same as they will be in production; otherwise the results of the benchmark are completely useless.

This script should be adapted and run for every single alternative.

Here you can see the scripts used for both alternatives proposed for example No. 2 described above:
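
As a rough illustration of their shape, the columns-based script might look something like this minimal sketch (assuming the cassandra-driver gem and Ruby’s Benchmark module; the host, keyspace, table and row count are hypothetical, and only two of the 100 bin columns are used to keep it short):

require 'cassandra'
require 'benchmark'

cluster = Cassandra.cluster(hosts: ['127.0.0.1'])
session = cluster.connect('benchmark')

# 1. Create the table proposed by the model under test (columns-based alternative)
session.execute(<<-CQL)
  CREATE TABLE IF NOT EXISTS distributions_cols (
    id      int PRIMARY KEY,
    bin_000 double,
    bin_001 double
  )
CQL

# 2. Generate production-like data and keep it in memory
rows = (0...10_000).map { |id| [id, rand, rand] }

insert = session.prepare('INSERT INTO distributions_cols (id, bin_000, bin_001) VALUES (?, ?, ?)')
select = session.prepare('SELECT * FROM distributions_cols WHERE id = ?')

# 3. Benchmark the access patterns exactly as they will run in production
Benchmark.bm(9) do |bm|
  bm.report('Writing:')  { rows.each { |id, b0, b1| session.execute(insert, arguments: [id, b0, b1]) } }
  bm.report('Reading:')  { rows.each { |id, _, _|   session.execute(select, arguments: [id]) } }
  bm.report('Updating:') { rows.each { |id, b0, b1| session.execute(insert, arguments: [id, b1, b0]) } }
end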

4. Let data decide

We’re almost there!!

Now we have to collect the data for each execution and compare them to work out which of the candidates is our final model.

There are two sources of data you should look at:

  1. The output of the script’s execution.
    1.1 The script will print output for every workload benchmarked (Insert, Read and Update in our example).
  2. DataStax OpsCenter’s graphs.
    2.1 DataStax OpsCenter is probably the most advanced and easiest-to-use Cassandra monitoring tool.

In our previous example we could see this output from the scripts:

calonso@XXX-XXX-XXX-XXX: bundle exec ruby lists_benchmark.rb
                user     system        total          real
Writing:  133.840000   5.180000   139.020000   (171.499862)
Reading:   24.000000   0.350000    24.350000   ( 47.897192)
Updating:   2.560000   0.210000     2.770000   (  4.135555)

calonso@XXX-XXX-XXX-XXX: bundle exec ruby cols_benchmark.rb
                user     system        total          real
Writing:  133.730000   2.710000   136.440000   (144.749167)
Reading:   30.340000   0.410000    30.750000   ( 41.759687)
Updating:   1.950000   0.090000     2.040000   (  3.020997)

So, we could say that the columns-based model performs better than the list-based one, but let’s confirm our initial assessment by looking at OpsCenter’s graphs:

In all the graphs we can see two peaks: the first one was generated during the benchmarking of the list-based model and the second one during that of the columns-based model.

Absolutely every graph comparison points towards the columns-based model as the one with the better performance:

reads

  • These graphs show the total reads per second received across the whole cluster by the coordinator nodes and the average time taken to respond to them.

writes

  • These graphs show the total writes per second received across the whole cluster by the coordinator nodes and the average time taken to respond to them.

os_load

  • An average measure of the amount of work a machine performs. A load of 0 means no work at all, and a load of 1 means 100% utilisation of a single core; therefore, this value depends on how many cores are available (two in our deployment).

gcs

  • Number of times each of the JVM GCs runs per second and the time elapsed in each run.

local_reads

  • Total reads per second received on the specific column families being used and the average time taken to respond to them.

local_writes

  • Total writes per second received on the specific column families being used and the average time taken to respond to them.

And that’s all for us for now!

Troubleshooting Cassandra performance issues

A couple of weeks ago we released a feature whose performance was unexpectedly poor, and here I want to share the steps and tools we used to get to the root cause of the problem.

To give a little bit of background, I’ll tell you that the feature was something really common nowadays: saving a bunch of time series in Cassandra.

Step 1: Look for evidence in metrics

The natural first step, which I think everyone takes, is to look at the graphs (here we use Graphite) and try to find evidence of what’s going on.

In our case we had very clear evidence that something was wrong, as the time consumed by the process had increased by around 600% after the deploy!!

events_latency_increase

But that meant that not only was something wrong with the feature but also, and even more scary, with our whole process!! How could such a poorly performing piece of code have reached production without anyone noticing it before? OK, we don’t have performance tests as part of our CI process, but we test every feature in our pre-production environments before going to production, and this should have shown up there!! That would simply have been unacceptable, and our processes here @_MyDrive are simple and strong, so after scrolling a little through the graphs we found an explanation: the tests run in the pre-production environment were OK!

events_latency_increase_with_labs

OK, so we have somewhere to look: something in our production environment is performing poorly and, at first glance, our stats are not showing it at all!

Step 2: Profile suspicious code

At this step we use the fantastic RubyProf to wrap the suspicious code in a RubyProf.profile block and save the results to analyse later.

require 'ruby-prof'

results = RubyProf.profile do
  # [Code to profile]
end

printer = RubyProf::GraphPrinter.new(results)
printer.print(File.new('/tmp/profiled-events-insert.txt', 'w'), min_percent: 2)

Reading the saved files I could clearly see that the time was going into the Cassandra-related code, so I assumed that the problem would be somewhere in the models or the queries.

I could have read a little more of the profiling result and would probably have saved some steps here, but as we were issuing several thousand inserts asynchronously and the first lines of the profiling report pointed to Cassandra, everything looked crystal clear.

Step 3: Trace queries

There’s only one thing we’re doing here: INSERTs, so…

cqlsh:carlos_test> TRACING ON
cqlsh:carlos_test> INSERT INTO ...
activity                          | timestamp    | source       | source_elapsed 
----------------------------------+--------------+--------------+----------------
execute_cql3_query                | 10:26:35,809 | 10.36.136.42 | 0 
Parsing INSERT INTO ...           | 10:26:35,836 | 10.36.136.42 | 26221 
Preparing statement               | 10:26:35,847 | 10.36.136.42 | 37556 
Determining replicas for mutation | 10:26:35,847 | 10.36.136.42 | 37867 
Acquiring switchLock read lock    | 10:26:35,848 | 10.36.136.42 | 38492 
Appending to commitlog            | 10:26:35,848 | 10.36.136.42 | 38558 
Adding to events memtable         | 10:26:35,848 | 10.36.136.42 | 38600 
Request complete                  | 10:26:35,847 | 10.36.136.42 | 38926 

Looking at these results, something looks broken in the parsing of the query, and running the same statement in our pre-production environment confirms that something really is wrong in production:

cqlsh:benchmark> TRACING ON
cqlsh:benchmark> INSERT INTO ...
activity                          |  timestamp   |    source     | source_elapsed 
----------------------------------+--------------+---------------+---------------- 
execute_cql3_query                | 10:27:40,390 | 10.36.168.248 | 0 
Parsing INSERT INTO ...           | 10:27:40,390 | 10.36.168.248 | 75 
Preparing statement               | 10:27:40,390 | 10.36.168.248 | 233 
Determining replicas for mutation | 10:27:40,390 | 10.36.168.248 | 615 
Acquiring switchLock read lock    | 10:27:40,390 | 10.36.168.248 | 793 
Appending to commitlog            | 10:27:40,390 | 10.36.168.248 | 827 
Adding to events memtable         | 10:27:40,390 | 10.36.168.248 | 879 
Request complete                  | 10:27:40,391 | 10.36.168.248 | 1099

But, what could be so wrong with parsing a query?

Step 4: Simplify the problem

At this point I decided to write a small Ruby program that:

  1. Connects to the Cassandra cluster
  2. Creates a test keyspace
  3. Creates a column family within the test keyspace
  4. Runs an insert like the one profiled above
  5. Drops the test keyspace

and profile it all using RubyProf to try to spot something obvious.
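
A minimal sketch of such a script might look like this (assuming the cassandra-driver and ruby-prof gems; the host, keyspace and table names are hypothetical):

require 'cassandra'
require 'ruby-prof'

result = RubyProf.profile do
  # 1. Connect to the Cassandra cluster
  cluster = Cassandra.cluster(hosts: ['127.0.0.1'])
  session = cluster.connect

  # 2. Create a test keyspace
  session.execute("CREATE KEYSPACE profiling_test WITH replication = " \
                  "{'class': 'SimpleStrategy', 'replication_factor': 1}")

  # 3. Create a column family within the test keyspace
  session.execute('CREATE TABLE profiling_test.events (id uuid PRIMARY KEY, value double)')

  # 4. Run an insert like the one profiled above
  session.execute('INSERT INTO profiling_test.events (id, value) VALUES (uuid(), 1.0)')

  # 5. Drop the test keyspace
  session.execute('DROP KEYSPACE profiling_test')
end

RubyProf::GraphPrinter.new(result).print(STDOUT, min_percent: 2)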

Running this script in production showed something useful: more than 98% of the time was spent in the Cluster#connect method! Furthermore, almost 100% of the time inside that method was going to IO.select, which means that the time was being wasted outside Ruby itself.

Actually I could have saved some time, because this exact same thing was also visible in the first profiling I did in step 2, but my premature assumption made me take some unnecessary extra steps.

OK, we have a new symptom but no idea where to look, so after some desperate and useless attempts, like tcpdumping the communication between the client and the cluster, I decided to go back to the script I wrote and…

Step 5: Enable Logging

Maybe this should have been the first step, right?

But this was really revealing! For some reason, the driver was trying to connect to four nodes when our ring only has three!! Of course, the connection to this extra node was timing out (10 seconds) and that was the source of our poor performance!
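
For reference, with the DataStax Ruby driver this kind of logging can be enabled by passing a standard Logger to the cluster (a minimal sketch; the host is hypothetical):

require 'cassandra'
require 'logger'

# With a logger attached, the driver reports, among other things,
# which hosts it discovers and attempts to connect to.
cluster = Cassandra.cluster(hosts: ['127.0.0.1'], logger: Logger.new($stderr))
session = cluster.connect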

A quick Google search with the symptoms as the query and… ta-dah!! We’re facing a Cassandra bug!!

We quickly applied the fix and got our 10 seconds per execution back.

Conclusions

  1. Measure everything
    • Metrics and monitoring let us know we were facing an unexpected performance issue
  2. Keep calm, read to the end
    • In the first profiling report I could have spotted that the issue was in Cluster#connect, but due to my eagerness to find a fix I made a wrong assumption that cost me more time.
  3. Thank the community
  4. Take advantage of learning opportunities!
    • These kinds of unexpected situations normally push you out of your comfort zone and really test your limits, which is really good for your personal development.