Google Cloud Platform Blog
Multi-million operations per second on a single Google Compute Engine instance
Thursday, July 30, 2015
The emergence of affordable high IOPS storage, such as Google Compute Engine local SSDs, enables a new generation of technologies to re-invent storage. Helium, an embedded key-value store from Levyx, is one such example -- designed to scale with multi-core CPUs, SSDs, and memory-efficient indexing.
At Levyx, we believe in a “scale-in before you scale-out” mantra. Technology vendors often advertise scale-out as the way to achieve high performance. It is a proven approach, but it is also often used to mask single-node inefficiencies. Without a well-balanced system, in which CPU, memory, network, and local storage are provisioned in proportion to one another, scale-out is simply what we call “throwing hardware at the problem” -- hardware that, virtual or not, customers pay for.
To demonstrate this, we decided to measure Helium’s performance on a single Google Cloud Platform node with a workload similar to the one previously used to showcase Aerospike and Cassandra (200-byte objects and 100 million operations). In the Cassandra benchmark the data store contained 3 billion indices; Helium starts with an empty data store. The setup consists of:
Single n1-highcpu-32 instance -- 32 virtual CPUs and 28.8 GB memory.
Four local SSDs (4 x 375 GB) for the Helium datastore. (Note: local SSDs are more limited than persistent disks in terms of creation-time flexibility and reliability, but the goal of this blog post is to test with the highest-performing Cloud Platform I/O devices.)
OS: Debian 7.7 (kernel 3.16-0.bpo.4-amd64, NVMe drivers).
The gists and tests are on github.
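As a quick sanity check of the storage layout before running anything (this snippet is ours, not part of the published gists), a few lines of Python can confirm that the four NVMe local SSDs are visible at roughly 375 GB each; the assumption here is that the local SSDs show up under /sys/block as nvme* devices:

    # Hypothetical sanity check: list NVMe block devices and their sizes.
    # We expect four local SSDs of ~375 GB each on this instance.
    import glob
    import os

    SECTOR_BYTES = 512  # /sys/block reports device sizes in 512-byte sectors

    for path in sorted(glob.glob("/sys/block/nvme*")):
        with open(os.path.join(path, "size")) as f:
            sectors = int(f.read())
        print("%s: %.0f GB" % (os.path.basename(path), sectors * SECTOR_BYTES / 1e9))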
Scaling and Performance with CPUs
The test first populates an empty datastore, then reads the entire datastore back sequentially and then randomly, and finally deletes all objects. The 100 million objects are in memory, with persistence on SSD, which acts as the local storage that every replicated system requires. The total datastore size is kept fixed as the number of CPU cores driving the workload is scaled up.
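The actual harness is in the gists linked above and is partitioned across many threads to keep all 32 vCPUs busy; purely as an illustration of the workload shape, here is a single-threaded sketch against a generic key-value interface (the kv object and its put/get/delete methods are placeholders, not Helium’s real API):

    # Illustrative workload shape only: populate -> sequential gets ->
    # random gets -> delete, with 200-byte values and 100 million objects.
    import os
    import random

    NUM_OBJECTS = 100 * 1000 * 1000   # 100 million objects
    VALUE_SIZE = 200                  # 200-byte values

    def run_workload(kv):
        value = os.urandom(VALUE_SIZE)

        # 1. Populate the empty datastore (write path).
        for i in range(NUM_OBJECTS):
            kv.put(str(i).encode(), value)

        # 2. Read every object back sequentially.
        for i in range(NUM_OBJECTS):
            kv.get(str(i).encode())

        # 3. Read the same number of objects in random order.
        for _ in range(NUM_OBJECTS):
            kv.get(str(random.randrange(NUM_OBJECTS)).encode())

        # 4. Delete all objects.
        for i in range(NUM_OBJECTS):
            kv.delete(str(i).encode())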
Takeaways
Single node performance of over 4 Million inserts/sec (write path) and over 9 Million gets/sec (read path), with persistence that is as durable as the local SSDs.
99th-percentile (in-memory) latency of < 15 usec for updates and < 5 usec for gets.
Almost linear scaling helps with the math of provisioning instances.
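Because throughput scales almost linearly with cores on a single node, provisioning becomes straightforward arithmetic; a rough sketch, assuming the single-instance rates above hold and that scale-out adds no coordination overhead (the 50M ops/sec target is just an example):

    # Back-of-the-envelope provisioning under the near-linear scaling assumption.
    import math

    INSERTS_PER_SEC = 4 * 1000 * 1000   # single n1-highcpu-32 instance
    GETS_PER_SEC = 9 * 1000 * 1000

    def instances_needed(target_ops_per_sec, per_instance_rate):
        return int(math.ceil(float(target_ops_per_sec) / per_instance_rate))

    print(instances_needed(50 * 1000 * 1000, GETS_PER_SEC))     # -> 6 instances
    print(instances_needed(50 * 1000 * 1000, INSERTS_PER_SEC))  # -> 13 instances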
Scaling with SSDs and Pure SSD Performance
Compute Engine provides high IOPS, low latency local SSDs. To demonstrate a case where data is read purely from SSDs (rather than served from memory), let’s run the same benchmark with a 4K object size x 5 million objects and reduce Helium’s cache to a minimal 2% (400 MB) of the total data size (20 GB). Only random gets performance is shown below because it is a better stress test than sequential gets.
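For reference, the dataset and cache sizes quoted above follow directly from the object count and size (assuming 4 KiB objects and decimal GB/MB):

    # Sizing for the SSD-bound run: 5 million 4 KiB objects, cache capped at 2%.
    num_objects = 5 * 1000 * 1000
    object_bytes = 4 * 1024
    total_bytes = num_objects * object_bytes
    cache_bytes = 0.02 * total_bytes

    print("%.1f GB total" % (total_bytes / 1e9))   # ~20.5 GB ("20 GB")
    print("%.0f MB cache" % (cache_bytes / 1e6))   # ~410 MB ("400 MB")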
Takeaways
Single node SSDs capable of updates at 1.6 GB/sec (400K IOPS) and random gets at 1.9 GB/sec (480K IOPS); see the cross-check after this list.
IOPS scale with the number of SSDs.
Numbers are comparable to fio, a pure I/O benchmark.
With four SSDs and 256 threads, median latency < 600 usec and 95th-percentile latency < 2 msec.
Deterministic memory usage (< 1GB) by not relying on OS page caches.
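As noted in the first takeaway, the bandwidth and IOPS figures are mutually consistent for 4 KiB transfers; a quick cross-check (decimal GB assumed):

    # Cross-check: IOPS x 4 KiB transfer size vs. the reported bandwidth.
    TRANSFER_BYTES = 4 * 1024

    for label, iops in [("updates", 400 * 1000), ("random gets", 480 * 1000)]:
        print("%s: %.2f GB/sec" % (label, iops * TRANSFER_BYTES / 1e9))
    # updates: 1.64 GB/sec (reported ~1.6); random gets: 1.97 GB/sec (reported ~1.9)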
Cost Analysis
The cost of this Google Compute Engine instance for one hour is $1.22 (n1-highcpu-32) + $0.452 (4 x Local SSD) = $1.67. Based on 200-byte objects, this boils down to:
2.5 Million updates per dollar
4.6 Million gets per dollar
To put this in perspective, New York City’s population is ~8.4 million; therefore, you can scan through a Helium datastore containing a record for every resident (assuming each record is under 200 bytes, e.g. name, address, and phone) in about one second on a single Google Cloud Platform instance, for under $2 per hour.
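Both the hourly figure and the “scan New York in a second” claim are simple arithmetic on the numbers above; a minimal check:

    # Hourly cost of the benchmark setup (July 2015 prices quoted above).
    hourly_cost = 1.22 + 0.452          # n1-highcpu-32 + 4 x 375 GB local SSD
    print("$%.2f per hour" % hourly_cost)            # -> $1.67

    # Time to read one ~200-byte record per New York City resident at the
    # measured single-instance read rate of ~9M gets/sec.
    print("%.2f seconds" % (8.4e6 / 9e6))            # -> 0.93 seconds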
Summary
Helium running on commodity Google Compute Engine VMs enables processing data at near-memory speeds using SSDs. The combination of Cloud Platform and Helium makes high throughput, low latency data processing affordable for everyone. Welcome to the era of dollar-store-priced datastores with enterprise-grade reliability!
For details about running Helium on Google Cloud Platform, contact info@levyx.com.
- Posted by Siddharth Choudhuri, Principal Engineer at Levyx