# Notes on CodeMesh.io 2015

This was my first CodeMesh. My former colleague Félix López had recommended it to me several times, and it was really good: probably the conference with the highest technical level I've attended so far. As such, I took lots of notes, which you can read right now.

## Concurrency + Distribution = Availability + Scalability

### by Francesco Cesarini

This talk was a really useful step-by-step recipe for distributed systems, with a final summary in the form of a 10-point list:

1. Split up your system’s functionality into manageable, stand-alone nodes.
2. Decide what distributed architectural pattern you are going to use.
3. Decide what network protocols your nodes, node families and clusters will use when communicating with each other.
4. Define your node interfaces, state and data model.
5. For every interface function in your nodes, you need to pick a retry strategy.
6. For all your data and state, pick your sharing strategy across node families, clusters and types, taking into consideration the needs of your retry strategy.
7. Reiterate through steps 1 to 6 until you have the trade-offs which suit your specification.
8. Design your cluster blueprint, looking at node ratios for scaling up and down.
9. Identify where to apply back-pressure and load regulation.
10. Define your Ops & Management approach, defining system and business alarms, logs and metrics.

You definitely want to have this list close when designing distributed systems, don’t you?
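The retry strategies in step 5 can take many forms. As a toy illustration (my own sketch, not from the talk), here is a retry helper with exponential backoff in Ruby:

```ruby
# A hypothetical retry helper: retries the given block with exponential
# backoff and gives up (re-raising) after max_attempts tries.
def with_retries(max_attempts: 3, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1)))  # 0.01s, 0.02s, 0.04s, ...
    retry
  end
end

# A flaky call that fails twice and then succeeds on the third attempt.
calls = 0
result = with_retries do
  calls += 1
  raise 'timeout' if calls < 3
  :ok
end
# result == :ok, after 3 calls
```

Whether to retry at all, how many times, and whether the callee must be idempotent are exactly the trade-offs the talk asks you to decide per interface function.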

## From irrational configuration system to functional data store

### by Rob Martin

This talk was an interesting story about how they migrated a highly complicated and inflexible Oracle data store to a functional one based on tries. Apart from a much simpler and more manageable system, the migration achieved an impressive read-latency speedup (from more than 15 seconds to less than 10 ms). Two takeaways from this talk:

Target your response times below 10 ms.

It is always good to have reference values, this is a good one.

Build tools so that your juniors can work on it.

That means:

• Simplicity
• You can hire juniors
• You can develop seniors (not only hire them).

## Contemporary approach to data at scale

### by Ben Stopford

This talk was a really nice journey through all the concepts, problems and available solutions for handling data at scale.

### Storage engine:

• Have your indexes in RAM
• Use indexed batches (Cassandra SSTables)
• Brute force approach: Columnar. Save one file per column and compress.

### Data access:

• Consistent hashing: Agents know where the data is stored. Very efficient and scalable.
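Consistent hashing can be sketched in a few lines of Ruby (my own illustration; the virtual-replica count and the MD5-based hash are arbitrary choices, not from the talk):

```ruby
require 'digest'

# A minimal consistent-hashing ring: each node is placed at many points on
# a hash ring, and a key is routed to the first node at or after its hash.
class HashRing
  def initialize(nodes, replicas: 100)
    @ring = {}
    nodes.each do |node|
      replicas.times { |i| @ring[hash_of("#{node}:#{i}")] = node }
    end
    @sorted = @ring.keys.sort
  end

  def node_for(key)
    h = hash_of(key)
    point = @sorted.find { |p| p >= h } || @sorted.first  # wrap around
    @ring[point]
  end

  private

  def hash_of(s)
    Digest::MD5.hexdigest(s)[0, 8].to_i(16)
  end
end

ring = HashRing.new(%w[node-a node-b node-c])
ring.node_for('user:42')  # deterministically one of the three nodes
```

Because every agent can compute `node_for` locally, no lookup service is needed, which is what makes the scheme so efficient and scalable.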

### Architectures:

Find the slides at benstopford.com

## Exercise analysis

### by Jan Machacek

This talk was especially interesting, as using mobile sensors to classify user activity is something we're working on at MyDrive as well. Very interesting takeaways for us here:

• Use OpenCV library for classification.
• Classify data on the phone.

And two ideas to improve the product that are potentially applicable for us at MyDrive as well:

• Have a ML model per user.
• Implement live sessions instead of submitting all the data to the backend at the end.

## Beyond Lists: High Performance Data Structures

### by Phil Trelford

This talk was a very good and dense one, so I'm really looking forward to the video being released, but until then…

Immutability + memory affinity for high speed.

And three new data structures to look at:

## Boot my (secure)->(portable) clouds

### by Nicki Watt

This talk was really interesting as well because at MyDrive, like pretty much everyone out there I think, we rely on cloud providers such as Amazon AWS for our deployments.

The takeaways from this talk are the Cloud computing ASAP principles and three lessons:

• Automate everything
• Separate config from code
• API driven clouds & tools
• Prefer modular, open source tools

Lesson 1: There’s NO single common cloud API, therefore use Terraform

This is something we are already doing, but I also made a note to check out Packer, a tool to manage base OS images (we currently rely on Ubuntu's AMIs).

Lesson 2: Decouple infrastructure provisioning from configuration management, using automated configuration management tools such as Chef, Ansible, …

Again, we're doing well here at MyDrive using Chef, but I also made an interesting note to check out FreeIPA for managing user accounts.

Lesson 3: Don’t roll your own security tools, use Vault to manage secrets and sensitive information.

This is again a tool to check out, but at MyDrive we are already following this rule using Citadel, a tool to store secrets in Amazon S3.

The OpenCredo folks have already compiled this whole talk in their own Boot my secure government cloud blog post. Highly recommended read!

## Reactive Design Patterns

### by Roland Kuhn

This talk was a good review of concepts and patterns from The Reactive Manifesto:

• Responsiveness: The system responds in a timely manner.
• Elasticity: Capability of scaling both up and down, both to meet requirements and keep costs controlled.
• Resilience: The system stays responsive in the presence of failures.
• Message driven: The system is decoupled, with a share-nothing architecture.

And also some architecture patterns for it:

• Single Responsibility Principle:
  • A component must do only one thing and do it completely.
  • In order to achieve it, keep splitting responsibilities into nodes until you have single components, but not any further. Avoid trivial components.
• Circuit Breaker Pattern:
  • Protect services by breaking the connection to failing components.
• Request–Response Pattern:
  • Include a return address in the message in order to receive the response.
• Saga Pattern:
  • Divide long-lived and distributed transactions into quick local ones, with compensating transactions for failure cases.
  • Partition your data so that every operation remains local.
  • Example: a bank transfer avoiding locking both accounts:
    • T1: Transfer money from X to a local working account.
    • C1: Compensate failure by transferring money back to X.
    • T2: Transfer money from local working account to Y.
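The bank-transfer saga can be sketched in Ruby (a toy model of my own, not from the talk; amounts and account names are made up): each step is a quick local transaction paired with a compensating transaction that undoes it if a later step fails.

```ruby
# Run each step's local transaction; on failure, undo completed steps
# in reverse order (C2 before C1) and report a rollback.
def run_saga(steps, accounts)
  done = []
  steps.each do |step|
    step[:run].call(accounts)
    done << step
  end
  :committed
rescue StandardError
  done.reverse_each { |step| step[:compensate].call(accounts) }
  :rolled_back
end

steps = [
  { # T1: transfer money from X to a local working account
    run:        ->(a) { a[:x] -= 50; a[:working] += 50 },
    # C1: compensate a failure by transferring the money back to X
    compensate: ->(a) { a[:working] -= 50; a[:x] += 50 }
  },
  { # T2: transfer money from the working account to Y
    run:        ->(a) { a[:working] -= 50; a[:y] += 50 },
    compensate: ->(a) { a[:y] -= 50; a[:working] += 50 }
  }
]

accounts = { x: 100, y: 0, working: 0 }
result = run_saga(steps, accounts)  # both steps succeed; x and y end at 50

# If a later step blows up, the compensations restore the original balances.
accounts2 = { x: 100, y: 0, working: 0 }
failing = steps + [{ run: ->(a) { raise 'boom' }, compensate: ->(a) {} }]
rolled = run_saga(failing, accounts2)
```

Note that at no point are both X and Y locked together: every transaction touches only local state, which is the whole point of the pattern.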

## Panel Discussion

Well, this was a very interesting closing event; having the chance to hear people with so many years of experience and knowledge as the panelists was a pleasure. I was especially excited to listen to Tony Hoare talking about his story and a little bit about the future.

The panelists were:

As you can imagine, I can only advise you to watch the video, but there were two especially interesting ideas:

• A very big improvement for computer science would be fixing the notoriously problematic mutable shared state.
• The tools are where the action is.

# Algorithm Review: Insertion Sort

I have to admit it. I forget things quite easily, and probably the most annoying things to forget are one's own skills, aren't they?

Fortunately, a couple of weeks ago I came across HackerRank, and I'm loving how working through its challenges is refreshing and resharpening some of my skills!

A couple of days ago I committed myself to going through every single challenge there, in order, and today I've decided to write a small post with the concepts and ideas behind every algorithm, data structure, etc. reviewed. First of all, to settle those concepts a bit deeper in my mind, but also to have a place to come back to for a quick read if I feel I need it after some time. And last but not least, to share it with whoever may read this! 🙂

So, as HackerRank starts off with the Insertion Sort sorting algorithm, and as I've already finished all the related exercises with the maximum score (it was not so hard, indeed), here is my Insertion Sort algorithm review!

Insertion Sort is a very simple in-place sorting algorithm. This means that no significant extra memory is required for it to execute. This can be reworded as: Insertion Sort's auxiliary memory complexity is O(1).

The execution itself could be explained as a set of steps:

Consider the subarray formed by the first element as the sorted part, and the remaining elements as the unsorted part.

For each element in the unsorted part:

1. Store it in an auxiliary variable.
2. Shift right all elements in the sorted part that are bigger than the value stored in step 1.
3. Insert the auxiliary value in the location where step 2 stopped.

We could graphically draw it like this:

```
[2, 3, 4, 6, 5, 1]
[2, 3, 4, 6, 5, 1]
[2, 3, 4, 6, 5, 1]
[2, 3, 4, 6, 5, 1]
[2, 3, 4, 6, 6, 1]
[2, 3, 4, 5, 6, 1]
[2, 3, 4, 5, 6, 6]
[2, 3, 4, 5, 5, 6]
[2, 3, 4, 4, 5, 6]
[2, 3, 3, 4, 5, 6]
[2, 2, 3, 4, 5, 6]
[1, 2, 3, 4, 5, 6]
```

As we can see, the sorted part keeps growing while the next element is processed; shifting elements to the right temporarily duplicates values until the proper place for the current value is found.

A possible Ruby implementation:

```
data = [...]

for i in 1...data.count
  value = data[i]
  j = i - 1
  while j >= 0 && value < data[j]
    data[j + 1] = data[j]
    j -= 1
  end
  data[j + 1] = value
end
```

Analysing the algorithm's performance, we can easily see that the worst case occurs when the input array is in reverse order, because the algorithm will have to shift every element all the way to the beginning of the sorted part, and will therefore need 1 + 2 + 3 + … + (N-1) shifts.

1 + 2 + 3 + … + (N-1) is a finite arithmetic progression, whose sum, by definition, is $\frac{n(a_{1}+a_{n})}{2}$ for $n$ terms. In our particular case there are $n-1$ terms, so the sum is $\frac{(n-1)(1 + (n-1))}{2} = \frac{n(n-1)}{2}$, which grows like $\frac{n^{2}}{2}$.

Therefore we can say that Insertion Sort time complexity is $O(n^{2})$.
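As a quick sanity check on that count, the same loop can be instrumented to count shifts on a reverse-sorted input:

```ruby
# The insertion sort loop from above, instrumented to count shifts.
# On a reverse-sorted input of size n it performs exactly
# 1 + 2 + ... + (n-1) = n(n-1)/2 shifts.
def insertion_sort_shifts(data)
  shifts = 0
  for i in 1...data.count
    value = data[i]
    j = i - 1
    while j >= 0 && value < data[j]
      data[j + 1] = data[j]
      shifts += 1
      j -= 1
    end
    data[j + 1] = value
  end
  shifts
end

n = 8
data = (1..n).to_a.reverse  # worst case: [8, 7, 6, 5, 4, 3, 2, 1]
shifts = insertion_sort_shifts(data)
# data is now sorted, and shifts == 8 * 7 / 2 == 28
```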

Multi-threading programming is one of those things that I like doing from time to time. I find it particularly useful when automating any task that involves

My approach is usually publish-subscribe, using Ruby's very handy Queue class.

Sometimes I build it in several stages; e.g. the last time I used it, I was building two messaging layers: the first one publishes object names within the bucket for the second one to pick up and actually download and process, and finally

My approach here is to leverage all synchronisation in the queues, rather than waiting for threads or passing messages between them. But don't worry, that's not a crazy or hacky approach at all; it's just Ruby's recommended way.

So, what I'm going to do here is explain by example how to do it properly, avoiding errors and achieving a clean shutdown. Let's start off with a failing example.

```
require 'thread'

queue = Queue.new

producer = Thread.new {
  5.times do |i|
    queue << i
    sleep 1
  end
  p 'Producer exiting'
}

consumer = Thread.new {
  while producer.alive?
    p "Popped #{queue.pop}"
  end
  p 'Consumer exiting'
}

producer.join
consumer.join
```

This code sets up two threads: a publisher (producer) and a subscriber (consumer). The producer publishes a value to the queue and then sleeps for a second, five times over. The consumer simply pulls messages from the queue as soon as they're available.

The producer's exit condition is very straightforward: as soon as it finishes its job, it simply exits. The consumer, for its part, monitors the producer's status and will exit as soon as it detects the producer is not alive anymore.

All looks good, doesn’t it? But when we run it… Crash!!

```
"Popped 0"
"Popped 1"
"Popped 2"
"Popped 3"
"Popped 4"
"Producer exiting"
```

Investigating a bit, you'll find that this error is raised at the consumer's queue.pop invocation. That's because when the consumer checked the producer's status, the producer was still alive; but by the time pop was called, the queue was empty and the producer had already exited, leaving the consumer blocked on a queue nobody will ever fill again. We could try several approaches, but I think the best and most robust one is to use what I call end-of-operation objects.

Those objects are simply instances of a dummy class whose purpose is to signal the end of the operations queue. Using end-of-operation objects, we could rewrite our piece of code as follows:

```
require 'thread'

class EndOfOp ; end

queue = Queue.new

producer = Thread.new {
  5.times do |i|
    queue << i
    sleep 1
  end
  p 'Producer exiting'
  queue << EndOfOp.new
}

consumer = Thread.new {
  while obj = queue.pop
    break if obj.instance_of? EndOfOp
    p "Popped #{obj}"
  end
  p 'Consumer exiting'
}

producer.join
consumer.join
```

Now the producer pushes an instance of the `EndOfOp` dummy class just before exiting, to signal to the consumer that it has finished its job. The consumer, for its part, simply checks that each pulled object is not an `EndOfOp` before continuing.

Executing this code we would see:

```
"Popped 0"
"Popped 1"
"Popped 2"
"Popped 3"
"Popped 4"
"Producer exiting"
"Consumer exiting"
[Finished in 5.1s]
```

And that’s all. Happy pubsubbing!!

# TechCrunch Disrupt Hackathon

I did it! And it was really awesome!

It was so good that I decided to start my own blog (the one you're currently reading) with a post about it.

Last Saturday, 18 Oct, the TechCrunch Disrupt Hackathon in London was my first hackathon ever, but it's very likely not to be the last. I really enjoyed it, and it showed me that I love programming and hacking even more than I would have thought!

Several weeks before the event I registered for it, and since then I was a bit afraid and nervous, trying to think about what to do and how to face the challenge. After much thought I came up with two options: either build my own team and hack around a pretty cool idea I thought I had, or register myself as 'Looking for a team' and then try to find someone really senior and LEARN, LEARN and LEARN! As it was my first time, I chose the latter.

So I found myself updating my profile at ChallengePost, pretty much like when looking for a job, and luckily I received several requests to join teams. Going through them carefully, I found most of them to be really simple projects rather than crazy ideas or 'hacks' to learn from. But then I received an email with a properly crazy idea: an app for audio streaming over a mesh network. That sounded great! So even considering that the guy who proposed it was not senior at all (he had actually finished his degree that year), I took it. And I couldn't be prouder.

Once the day came, everything was massive: around 750 developers arranged in more than 90 teams, working at 9-person tables spread all around an old fish market, made for a very impressive picture, I think. We had to move quickly to find a table with 3 free chairs (we were expecting another guy to join us), and as he didn't turn up, we found someone there who had been planning a 'solo' hack and who luckily decided to join us. We then split tasks and got hands-on!! I never thought I would be able to stay highly concentrated for such a long time, but my task had me engaged for almost 20 hours non-stop!

My responsibility was to implement a module able to stream songs out, or stream them in and play them, which the mesh network layer would then use to either transmit or receive a song. We had a stressful time, but in the end we managed to achieve our goal and the music was playing!! I think I'll never forget the first 15 seconds of the song I had downloaded from Last.fm for testing purposes.

Then presentation time came. An impressive stage was waiting for us to present the music discovery/streaming app that we had developed with so much effort, and then the disappointment came: the live demo failed. I guess this is one of the worst things anyone can experience, but at some point everyone does, and that was our time. The music never played, and we had tested it thousands of times! It turned out to be a bad connection in the audio cable, because most of the presentations involving sound failed as well :(.

But anyway, we are proud of it: proud of our effort, proud of the people we met and proud of ourselves. For me, it was particularly pleasant to come back to C programming after such a long time and find myself strong enough to complete the task!

And now, let's see if we manage to push that app forward. But first, let me share with you some photos of the event and even the video of our unsuccessful presentation!!

Thanks Zurab and Tony for that awesome experience we shared.

And here the photos and the video. Enjoy!