Tech Talk – AWS RDS Benchmarks
As I’ve mentioned before, we use the excellent Amazon Web Services (AWS) to host Kobas Cloud. A bunch of our reserved Relational Database Service (RDS) instances are due for renewal soon, so we’re reviewing our options to decide what our next step should be. In doing so we performed some benchmark tests, and we thought the results were worth sharing.
Optimising resource
In an ideal world we would always have exactly as much resource as we needed; never too much, and never too little. In the ancient times before cloud computing I always had to specify enough server power to tackle peak demand, which meant resource sitting idle most of the time. AWS generally lets us do better, and we certainly can run very lean with our elastic, on-demand web server clusters, but the reality is that database provisioning still isn’t so easy.
Database shards
We take an approach that’s pretty common in SaaS architecture, which is to run a number of database shards. Rather than having one mega database containing all your customers’ data, you split your data up so that groups of customers run on each shard, or even so that every customer has their own shard if you wish.
At Kobas we are completely flexible about this; it’s easy for us to move a particular account from one shard to another. Larger customers can have their own dedicated shard, while groups of smaller customers cohabit on a shared shard. If one of those smaller customers goes through a period of rapid expansion, we can simply pop them onto their own shard.
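We won’t publish our routing layer here, but conceptually it boils down to a lookup from customer account to shard. Below is a minimal sketch in Python, assuming a central directory database and a hypothetical `shard_map` table; the names, credentials and library choice are all illustrative rather than our actual implementation.

```python
# Illustrative sketch of account-to-shard routing; table, column and host
# names here are hypothetical placeholders, not the real Kobas schema.
import mysql.connector

# A small central "directory" database maps each customer account to its shard.
directory = mysql.connector.connect(
    host="directory-db.example.internal",
    user="app", password="secret", database="directory",
)

def connect_to_shard(account_id):
    """Find which shard an account lives on and open a connection to it."""
    cur = directory.cursor()
    cur.execute(
        "SELECT shard_host, shard_schema FROM shard_map WHERE account_id = %s",
        (account_id,),
    )
    shard_host, shard_schema = cur.fetchone()
    cur.close()
    return mysql.connector.connect(
        host=shard_host, user="app", password="secret", database=shard_schema,
    )

# Moving an account to another shard is then a data migration plus an UPDATE
# against shard_map; application code keeps calling connect_to_shard() as before.
```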
Which RDS size should we use for shards?
RDS is of course a godsend here. We don’t need to order a server, install an OS, MySQL, a firewall and so on, drive it to a data centre and hook it up. We just tap away in the AWS control panel and spawn an instance of our choosing. But how do we know which one to choose? Should we aim for a more powerful instance to hold ten customers, or ten low-powered instances so that we can give every customer a dedicated instance?
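For the curious, spinning up a shard can be scripted as well as clicked through. Here’s a hedged sketch using boto3’s `create_db_instance` call; every identifier, size and credential below is a placeholder rather than our real configuration.

```python
# Hedged sketch: provisioning a MySQL RDS instance from code with boto3.
# All identifiers, sizes and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="kobas-shard-07",  # hypothetical shard name
    DBInstanceClass="db.m1.small",          # one of the classes benchmarked below
    Engine="mysql",
    EngineVersion="5.6.13",                 # matches the version used in our tests
    AllocatedStorage=100,                   # storage in GiB; arbitrary here
    MasterUsername="admin",
    MasterUserPassword="change-me",
    MultiAZ=False,
)
```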
A distinct advantage of putting every customer on their own instance is performance segregation. If we get our calculations wrong and a customer presents enough load to overwhelm their database shard, that’s not ideal for that customer, but at least all our other customers are unaffected. However, as is typical with any extreme, there are downsides to that approach.
Some customers, for instance nightclubs, only trade a few nights of the week and operate from 9pm to 6am. Other customers such as pop-up snack shops might trade Monday to Friday from 7am to 3pm. If each of these customers had their own database shard, we could be sure those shards would be idle most of the time. It’s inefficient, and of course more shards mean more maintenance overhead for our engineering team. It makes sense for these two clients to share a shard.
Bang for your buck
The point of the benchmarks we’ve run this weekend is to examine whether we should scale horizontally with the RDS m1.small instance, or whether we should trial stepping up to fewer m1.medium instances. The cost per annum basically doubles with that change, so we would need to know we would get enough of a performance increase to handle twice the number of customers per shard.
Once we had put together a customised Kobas RDS benchmark test, it made sense to try other options too, so we’ve also tested the lowly t1.micro, the comparable m3.medium, and the lusty m1.large.
The benchmark tests
We performed four tests on each of our temporary RDS instances:
- We imported a chunky customer account (667MB of SQL) onto the instance, and timed that.
- We fed a particularly nasty query from our slow query log, with caching disabled, to our test harness, which forked into five processes that all ran the query. We timed that.
- We took a sample chunk of SQL from our logs, added some nasty queries from the slow query log, disabled caching, and then ran that lot on our test harness, this time forking into ten processes, all of which applied those query blocks simultaneously. As you might have guessed, we timed that too (a rough sketch of this harness appears after this list).
- We ran MySQLSlap. It was configured to perform 2,000 queries with a concurrency of 30 connections, and to iterate across that query block 10 times. We took the average result time.
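Our actual harness isn’t published, but the shape of tests 2 and 3 is easy to sketch: fork a number of worker processes, have each apply the same query pack against the target instance with the query cache bypassed, and time the whole run. Here’s a rough Python approximation, assuming MySQL Connector/Python and the `SQL_NO_CACHE` hint as the cache-busting mechanism; the host, credentials and stand-in queries are illustrative only.

```python
# Rough sketch of the concurrent query-pack tests (2 and 3 above).
# Not the actual Kobas harness; host, credentials and queries are placeholders.
import time
from multiprocessing import Process

import mysql.connector

TARGET = dict(host="shard-under-test.example.internal",
              user="bench", password="secret", database="kobas_test")

# SQL_NO_CACHE makes MySQL skip the query cache, so every run hits the tables.
# In the real tests the pack was sampled from our logs and slow query log.
QUERY_PACK = [
    "SELECT SQL_NO_CACHE COUNT(*) FROM orders",           # stand-in query
    "SELECT SQL_NO_CACHE COUNT(*) FROM stock_movements",   # stand-in query
]

def run_pack():
    """Apply the whole query pack over a single connection."""
    conn = mysql.connector.connect(**TARGET)
    cur = conn.cursor()
    for sql in QUERY_PACK:
        cur.execute(sql)
        cur.fetchall()  # drain the result set
    cur.close()
    conn.close()

def timed_run(workers):
    """Fork `workers` processes, each applying the query pack, and time the lot."""
    procs = [Process(target=run_pack) for _ in range(workers)]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - start

if __name__ == "__main__":
    print("Test 2 (5 workers): %.2f seconds" % timed_run(5))
    print("Test 3 (10 workers): %.2f seconds" % timed_run(10))
```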
The import test isn’t particularly representative of our load pattern, but as it was a necessary step before we could perform the other tests, it made sense to time it too. It’s a single thread of sequential queries. The other tests, though, are fairly typical of what’s going on when reports are requested, rotas are being created, EPoS servers are making API calls and so on.
The tests were run from an m1.small EC2 instance in the same availability zone as the target RDS instances. The target RDS instances were all running MySQL 5.6.13.
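As an aside, the mysqlslap configuration in test 4 maps onto the tool’s standard options roughly as below; this is our reading of the description above, wrapped in Python to match the earlier sketches, with the endpoint, credentials and query file as placeholders.

```python
# Hedged reconstruction of the mysqlslap run in test 4: 2,000 queries,
# 30 concurrent connections, averaged over 10 iterations.
# Host, credentials and the query file are placeholders.
import subprocess

subprocess.run([
    "mysqlslap",
    "--host=shard-under-test.example.internal",
    "--user=bench", "--password=secret",
    "--create-schema=kobas_test",
    "--query=query_block.sql",   # the sampled query block (placeholder filename)
    "--concurrency=30",          # 30 simultaneous client connections
    "--number-of-queries=2000",  # the 2,000 queries described above
    "--iterations=10",           # run 10 times; mysqlslap reports the average
], check=True)
```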
The benchmark test results
| | t1.micro | m1.small | m1.medium | m3.medium | m1.large |
| --- | --- | --- | --- | --- | --- |
| Cost per annum | £100.85 | £201.10 | £402.80 | £345.42 | £820.87 |
| RAM | 613 MiB | 1.7 GiB | 3.75 GiB | 3.75 GiB | 7.5 GiB |
| 1) Data import time (minutes:seconds) | 6:07 | 7:35 | 5:15 | 6:20 | 5:02 |
| Data import notes | 100% CPU, 290 IOPS | 90% CPU, 280 IOPS | 50% CPU, 350 IOPS | 55% CPU, 310 IOPS | 25% CPU, 340 IOPS |
| 2) Query pack 1 test time (seconds) | 26.12 | 33.55 | 24.82 | 29.44 | 8.02 |
| 3) Query pack 2 test time (seconds) | 107.0 | 178.0 | 100.5 | 115.3 | 52.7 |
| 4) MySQL Slap test average time (seconds) | 7.681 | 2.783 | 2.315 | 2.636 | 2.110 |
Analysis
The fact that the m1.small and the m3.medium were outperformed by the t1.micro in the first three tests is extremely interesting. Alas, it probably highlights that our benchmark tests don’t represent what happens over an extended duration of load, as the t1.micro’s lack of RAM means it can’t hold nearly as much information in its query cache. Fortunately the MySQL Slap result shows the micro’s weakness. The micro is great for a short burst of activity, but the way it is throttled under sustained load puts a significant question mark over its suitability for our purposes.
That said, Kobas is a fairly write-heavy application. It is constantly being updated with staff time and attendance, sales activity, and stock movements. It suffers significant read load when generating reports, but those reports require up-to-date information from tables whose cache has probably been invalidated by a recent change.
It’s also telling that the second-generation m3.medium is outperformed by the older m1.medium in every test. A quick hunt around the Internet revealed many others making similar observations.
All this is fantastic food for thought and will make for some interesting discussions in the office over the coming week.