Is Arm ready for server dominance?

For a long time, Arm processors have been the kings of mobile. But that didn’t make a dent in the server market, still dominated by Intel and AMD and their x86 instruction set. Major disruptive shifts in technology don’t come often. Displacing strong incumbents is hard and predictions about industry-wide replacement are, more often than not, wrong.

But this kind of major disruption does happen. In fact, we believe the server market is seeing tectonic changes that will eventually favor Arm-based servers. Companies that prepare for it will be well positioned to extract value from this trend.

This became clear this week, when AWS announced its new generation of Arm-based instances built around the Graviton2 chip. The Graviton2 System on a Chip (SoC) is based on the Arm Neoverse N1 core. AWS claims it is much faster than its predecessor, a claim that we put to the test in this article.

There is also movement in other parts of the ecosystem: startups like Nuvia are a sign of times to come. With a $53 million Series A just announced, a founding team that packs years of experience in chip design, and Jon Masters, a well-known Arm advocate in the Linux community and previously Chief Arm Architect at Red Hat, on board as VP of Software, Nuvia is a name you will hear a lot more about pretty soon.

At ScyllaDB, we believe these recent events are an indication of a fundamental shift and not just a new inconsequential offer. Companies that ride this trend are in a good position to profit from it.

The commoditization of servers and instruction sets

The infrastructure business is a numbers game. In the end, personalization matters little, and those who can provide the most efficient service win. For this reason, Arm-based processors, now the dominant force in the mobile world, have been perennially on the verge of a server surge. It’s easy to see why: Arm-based servers are known to be extremely energy efficient. With power accounting for almost 20% of datacenter costs, a move towards energy efficiency is definitely a welcome one.

The explosion of mobile and IoT has been at the forefront of the dramatic shift in the eternal evolutionary battle of RISC (Arm) vs. CISC (x86). As Hennessy and Patterson observed last year, “In today’s post-PC era, x86 shipments have fallen almost 10% per year since the peak in 2011, while chips with RISC processors have skyrocketed to 20 billion.” Still, as recently as 2017, Arm accounted for only 1% of the server market. We believe the market is now at an inflection point, and we’re far from the only ones with that thought.

There have been, however, challenges for adoption in practice. Unlike in the x86 world, there hasn’t so far been a dominant set of vendors offering a standardized platform. The Arm world is still mostly custom made, which is an advantage in mobile but a disadvantage for the server and consumer market. This is where startups like Nuvia can change the game, by offering a viable standard-based platform.

The cloud also radically changes the economics of platform selection: inertia and network effects are stronger in a market with many buyers that will naturally gravitate towards their comfort zone. But as more companies offload (and upload) their workloads to the cloud and refrain from running their own datacenters, innovation becomes easier if the cost is indeed justified.

By analogy, if you look at the gaming market, there has been strong lock-in based on your platform of choice: Xbox, PS4 or a high-end gaming PC. But as cloud gaming platforms such as Project xCloud emerge from a niche into the mainstream, enabling you to play your favorite games on just about any device, that hardware lock-in becomes less prevalent. The power shifts from the hardware platform to the cloud.

Changes are easier when they are encapsulated. And that’s exactly what the cloud brings to the table. Compatibility of server applications is not a problem for new architectures: Linux runs just as well across multiple platforms, and as applications become more and more high level, the moat provided by the instruction set gets demolished and the decision shifts to economic factors. In an age where most applications are serverless and/or microservices-oriented, interacting with cloud-native services, does it really matter what chipset runs underneath?

Arm’s first foray in the cloud: EC2 A1 instances

AWS announced the EC2 A1 instances in late 2018, featuring its own AWS-designed Arm silicon. This was definitely a signal of a potential change, but back then we took it for a spin and the results were underwhelming.

Executing a CPU benchmark on the EC2 A1 and comparing it to the x86-based M5d.metal hints at just how big the gap is. As you can see in Table 1 below, the EC2 A1 instances perform much worse in all of the CPU benchmark tests conducted, with the exception of the cache benchmark. For most of the others, the difference is not only present but also huge, certainly much bigger than the 46% price advantage that the A1 instances have over their M5 x86 counterparts.

Test EC2 A1 EC2 M5d.metal Difference
cache 1280 311 311.58%
icache 18209 34368 -47.02%
matrix 77932 252190 -69.10%
cpu 9336 24077 -61.22%
memcpy 21085 111877 -81.15%
qsort 522 728 -28.30%
dentry 1389634 2770985 -49.85%
timer 4970125 15367075 -67.66%

Table 1: Result of the stress command: stress-ng --metrics-brief --cache 16 --icache 16 --matrix 16 --cpu 16 --memcpy 16 --qsort 16 --dentry 16 --timer 16 -t 1m

But microbenchmarks can be misleading. At the end of the day, what truly matters is application performance. To put that to the test, we ran a standard read benchmark of the Scylla NoSQL database, in a single-node configuration. Using the m5.4xlarge as a comparison point — it has the same number of vCPUs as the EC2 A1 — we can see that while the m5.4xlarge sustains around 610,000 reads per second, the a1.metal is capable of doing only 102,000 reads/s. In both cases, all available CPUs are at 100% utilization.

This corresponds to an 84% decrease in performance, which doesn’t justify the lower price.

Figure 1: Benchmarking a Scylla NoSQL database read workload with small metadata payloads, which makes it CPU-bound: EC2 m5.4xlarge vs. EC2 a1.metal. Scylla achieves 600,000 reads per second in this configuration on the x86-based m5.4xlarge, while performance is 84% worse on the a1.metal and the price is only 46% lower.
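
To make the price/performance argument concrete, here is a quick back-of-the-envelope sketch in Python. It uses only the numbers quoted above (the read rates from the benchmark and the published 46% price difference); it is illustrative arithmetic, not part of the benchmark itself.

# Price/performance of the Arm-based a1.metal vs the x86-based m5.4xlarge,
# using the figures from the read benchmark above.
m5_reads_per_sec = 610_000    # m5.4xlarge
a1_reads_per_sec = 102_000    # a1.metal
a1_price_discount = 0.46      # A1 is ~46% cheaper than its M5 counterpart

perf_ratio = a1_reads_per_sec / m5_reads_per_sec
print(f"Performance decrease: {1 - perf_ratio:.0%}")            # ~83-84%

# Relative cost per read served: cheaper instance, but far fewer reads/s.
cost_per_read_ratio = (1 - a1_price_discount) / perf_ratio
print(f"A1 cost per read vs M5: {cost_per_read_ratio:.1f}x")     # ~3.2x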

Aside from the CPU power, the EC2 A1 instances are EBS-only, which means running a high-performance database or any other data-intensive application is a challenge on its own, since they lack the fast NVMe devices present in other instances like the M5d.

In summary, while the A1 is a nice wave to the Arm community, and may allow some interesting use cases, it does little to change the dynamics of the server market.

Arm reaches again: the EC2 M6 instances

This all changed this week when AWS, during its annual re:Invent conference, announced the availability of its new class of Arm-based servers based on the Graviton2 processor, including the M6g and M6gd instances.

We ran the same stress-ng benchmark set as before, but this time comparing the EC2 M5d.metal and EC2 M6g. The results are more in line with what we would expect from running a microbenchmark set against such different architectures: the Arm-based instance performs better, and sometimes much better, in some tests, while the x86-based instance performs better in others.

Test EC2 M6g EC2 M5d.metal Difference
cache 218 311 -29.90%
icache 45887 34368 33.52%
matrix 453982 252190 80.02%
cpu 14694 24077 -38.97%
memcpy 134711 111877 20.53%
qsort 943 728 29.53%
dentry 3088242 2770985 11.45%
timer 55515663 15367075 261.26%

Table 2: Result of the stress command: stress-ng --metrics-brief --cache 16 --icache 16 --matrix 16 --cpu 16 --memcpy 16 --qsort 16 --dentry 16 --timer 16 -t 1m

Figure 2: EC2 M6g vs EC2 A1. The M6g class is 5 times faster than A1 for running reads in the Scylla NoSQL database, in the same workload presented in Figure 1.

Figure 3: EC2 M6g vs x86-based M5, both of the same size. The performance of the Arm-based server is comparable to the x86 instance. With AWS claiming that prices will be 20% lower than x86, economic forces will push M6g ahead.

Figure 4: CPU utilization during the read benchmark, for 14 CPUs. They are all operating at capacity. The data shown is for the M6g, but all three platforms behave the same. Scylla uses two additional virtual CPUs for interrupt delivery, which are not shown, bringing the total to 16.

For database workloads the biggest change comes with the announcement of the new M6gd instance family. Just like what you get with the M5 and M5d x86-based families, the M6gd features fast local NVMe to serve demanding data-driven applications.

We took them for a spin as well using IOTune, a utility distributed with Scylla that is used to benchmark the storage system for database tuning once the database is installed.

We compared storage for each of the instances, in both cases using 2 NVMe cards set up in a RAID0 array:

M5d.metal

Starting Evaluation. This may take a while...
Measuring sequential write bandwidth: 1517 MB/s
Measuring sequential read bandwidth: 3525 MB/s
Measuring random write IOPS: 381329 IOPS
Measuring random read IOPS: 765004 IOPS

M6gd.metal

Starting Evaluation. This may take a while...
Measuring sequential write bandwidth: 2027 MB/s
Measuring sequential read bandwidth: 5753 MB/s
Measuring random write IOPS: 393617 IOPS
Measuring random read IOPS: 908742 IOPS

Metric M6gd.metal M5d.metal Difference
Write bandwidth (MB/s) 2027 1517 +33.62%
Read bandwidth (MB/s) 5753 3525 +63.21%
Write IOPS 393617 381329 +3.22%
Read IOPS 908742 765004 +18.79%

Table 3: Result of IOTune utility testing

The M6gd NVMe cards are, surprisingly, even faster than the ones provided by the M5d.metal. This is likely by virtue of them being newer, but it clearly shows that the new architecture imposes no storage penalties.
Summary

Much has been said for years about the rise of Arm-based processors in the server market, but so far we still live in an x86-dominated world. However, key dynamics of the industry are changing: with the rise of cloud-native applications, hardware selection is now the domain of the cloud provider, not of the individual organization.

AWS, the biggest of the existing cloud providers, released an Arm-based offering in 2018 and now, in 2019, catapults that offering to a world-class spot. With results comparable to x86-based instances and AWS’s ability to offer a lower price thanks to well-known attributes of Arm-based servers like power efficiency, we consider the new M6g instances to be a game changer in a red-hot market ripe for change.

Editor’s Note: The microbenchmarks in this article have been updated to reflect the fact that running a single instance of stress-ng would skew the results in favor of the x86 platforms, since in SMT architectures a single thread may not be enough to use all resources available in the physical core. Thanks to our readers for bringing this to our attention.


Managed Cassandra on AWS, Our Take

Amazon definitely piqued our interest by announcing their new Managed Cassandra Service (MCS) yesterday. As a database vendor, we’ve been asked by many about our view, and even whether they’re running Scylla under the hood, since they promise single-digit latency. Here is our quick analysis while the topic is still being hotly debated on Hacker News.

What Is It, Exactly?

Like many new services, it’s hard to figure out what exactly is being offered. An open source blog post was published, but it’s hard to sort out the architecture (and the limitations or advantages) of their implementation. This is about the 4th re-implementation of the Cassandra API (after Scylla, Yugabyte, and CosmosDB), and certainly not the first cloud-hosted, managed Cassandra-as-a-service (Scylla Cloud, DataStax, Instaclustr, Bitnami, Aiven, and so on), so why keep it a secret? AWS may be doing itself a disservice by being less transparent.

To AWS’s credit, we can say that they continue to surprise the industry with their rigor and their relentless execution. No service is immune from AWS. It doesn’t matter if it’s open or closed source, hardware or software — any worthy technology will eventually be offered as a service on AWS, even if they have an already robust alternate offering. Usually AWS prefers to go horizontal, broadening their portfolio as in the Cassandra/MongoDB case, instead of going vertical and focusing on a narrower set of better offerings. And they are always looking to learn and adapt, like the switch from Xen to KVM+Nitro.

That said, let’s take a look at what Amazon did here. MCS is a form of chimera: the front end of Apache Cassandra (including the CQL API) running on top of a DynamoDB back end (both for the storage engine and the data replication mechanisms), as was confirmed on Twitter.

If you reverse engineer their pricing, you’ll discover it closely resembles the DynamoDB prices with an extra 16%, probably for the added translation layer. It’s compatible with a mix of Apache Cassandra 2.0 and 3.11 CQL features with many limitations we’ll discuss soon, but let’s be positive and start with the upside:

  • It’s ‘serverless’ — As anyone who’s used to spinning up nodes and clusters before being able to use them will appreciate, this is quite refreshing. When you log in to the MCS console, you can create a keyspace and tables without spinning up anything. Very convenient!
  • It’s especially convenient for a complex service like Cassandra with all of its tuning challenges and administration overhead.
  • It’s integrated with IAM, AWS’s authentication service. That’s very nice and secure too.
  • The compute is detached from the storage. That’s a big upside for many Cassandra workloads.
  • Fast Lightweight Transactions (LWT) – This is great actually, DynamoDB is based on a leader-follower model, thus simple transactions do not pay a penalty — nice!

Looking deeper, the functional differences are significant. There is no multi-region support, no UDT, no ALTER TABLE, no counters — all pretty fundamental for Cassandra users.

Since this is Cassandra, there is a JVM under the hood, and we are left to speculate whether and how garbage collection (GC) may strike. Obviously, GC mitigation and Cassandra tuning can be problematic in an “as a service” offering, since the user has no control over the tuning.

There are no materialized views and no ability to load SSTables directly (a capability open source users have gotten used to). It’s unclear whether the DynamoDB storage limit of 1 megabyte per item applies here. Scylla, for example, can store objects of tens of megabytes.

Yes, there’s no doubt this is expensive. At $1.45 per million writes, a workload averaging 100K writes per second will cost 1.45×0.1×24×3600×365 = $4,572,720/year! To compare, Scylla runs 200K IOPS with three i3.2xlarge nodes. At $0.312 hourly per node, that’s 0.312×3×24×365 ≈ $8,200 annually, plus licensing.
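
Here is the same back-of-the-envelope calculation as a small Python sketch, using only the prices and workload figures quoted above (licensing and operational costs for the self-managed option are deliberately left out, as in the text):

# Yearly cost comparison from the figures above.
SECONDS_PER_YEAR = 24 * 3600 * 365

# MCS: $1.45 per million writes, workload averaging 100K writes per second.
mcs_yearly = 1.45 * (100_000 / 1_000_000) * SECONDS_PER_YEAR
print(f"MCS at 100K writes/s: ${mcs_yearly:,.0f}/year")          # $4,572,720

# Scylla on three i3.2xlarge nodes at $0.312/hour each (infrastructure only),
# sustaining roughly 200K ops/s.
scylla_yearly = 0.312 * 3 * 24 * 365
print(f"Scylla on 3 x i3.2xlarge: ${scylla_yearly:,.0f}/year")   # ~$8,200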

Lastly, there is the question of flexible replication and consistency levels. MCS allows you to use a consistency level of one or local quorum (but no more than that), and gives no control over the replication factor (we assume it’s 3, based on the DynamoDB architecture). Scylla and Cassandra allow you to have any replication factor and many more consistency levels.

Who Is This Aimed at?

As the MCS service is more expensive but less powerful than DynamoDB, Amazon must be aiming its new service at pure Cassandra migrators. However, the current set of limitations is quite a hurdle for those who wish to move to the convenience of serverless distributed Cassandra API. The service is still in its infancy and we’re sure it will get better, yet some basic limitations are inherent with their implementation model.

What’s ScyllaDB’s Take?

AWS continues to march on with impressive execution speed. Some products are very impressive, such as the new Arm chip and everything around S3 and Aurora. However, we find MCS to be a hybrid, a ‘chimera’ — half-Cassandra, half-Dynamo, semi-compatible, serverless but with a potential for GC, mostly proprietary with a dash of open source.

Yes, AWS promises to contribute back to Cassandra. The question is whether those contributions will be more than self-serving ones, such as AWS IAM authentication, or bigger contributions beyond pluggable backends. Time will tell, but with the underlying DynamoDB database there is little incentive to make real progress. To Amazon’s credit, they already made one major contribution in the form of the Dynamo paper, 12 years ago.

We at Scylla can certainly learn from how Amazon implemented MCS. Its integration is solid, the autoscale capability is great, and the GUI is well done. Competition is vital for our brains, for the whole industry, and for you, our users, developers and community members.

In that spirit of competition, if you’ve read this far, we invite you to have a look at our Scylla Cloud, which supports both CQL and our new DynamoDB-compatible API (Project Alternator), and offers 5x-20x price/performance gains over other NoSQL databases plus killer features like workload prioritization that guarantees per-workload SLAs.



Scylla Manager 1.4.3

Scylla Manager Release Note

The Scylla team is pleased to announce the release of Scylla Manager 1.4.3.

Scylla Manager is a management system that automates maintenance tasks on a Scylla cluster.
Release 1.4.3 is a bug fix release of the Scylla Manager 1.4 release.


Bugs fixed in this release

  • Manager exited without a proper error message when the certificates directory was not accessible
  • The sctool task progress command might have shown a wrong progress value when resuming a repair job after the segments per repair config option was changed
  • Added support for the mTLS protocol in the Scylla Manager REST API


Scylla Open Source Release 3.1.2

Scylla Open Source Release Notes

The ScyllaDB team announces the release of Scylla Open Source 3.1.2, a bugfix release of the Scylla 3.1 stable branch. Scylla Open Source 3.1.2, like all past and future 3.x.y releases, is backward compatible and supports rolling upgrades.

Note: if and only if you installed a fresh Scylla 3.1.0, you must add the following line to scylla.yaml of each node before upgrading to 3.1.2:

enable_3_1_0_compatibility_mode: true

This is not relevant if your cluster was upgraded to 3.1.0 from an older version, or you are upgrading from or to any other Scylla releases, like 3.1.1.
If you have doubts, please contact us using the user mailing list.


Issues fixed in this release

  • CQL: wrong key type used when creating non-frozen map virtual column #5165
  • CQL: One second before expiration, TTLed columns return as null values #4263, #5290
  • CQL: queries with paging, ALLOW FILTERING and aggregation functions return intermediate aggregated results, not the full result #4540
  • CQL Local Index: potentially incorrect partition slices might cause a minor performance impact #5241
  • Stability: non-graceful handling of end-of-disk space state, may cause Scylla to exit with a coredump #4877
  • Stability: core dump on OOM during cache update after memtable flush, with an '!_snapshot->is_locked()' failed error message #5327
  • Stability: long-running cluster sees bad gossip generation when a node restarts #5164 (similar to CASSANDRA-10969)
  • Stability: Oversized allocation warning in reconcilable_result, for example, when paging is disabled #4780
  • Stability: running manual operations like nodetool compact will crash if the controller is disabled #5016
  • Stability: possible use-after-free during shutdown #5242
  • Stability: Under heavy read load, the read execution stage queue size can grow without bounds #4749
  • Stability: a rare race condition can cause a crash when updating schema during stress view updates #4856, #4766
  • Stability: certain node failures during repair are not handled correctly, might cause Scylla to exit #5238
  • UX: sstables: delete_atomically: misplaced parenthesis in warning message #4861
  • UX: scylla_setup script: NIC selection prompt causes an error when Enter is pressed without a NIC name #4517
  • Build from source: libthread_db.so.1 is missing in the relocatable package installation causing GDB thread debugging to be disabled #4996
  • Build from source: relocatable builds (via reloc/build_reloc.sh) are not reproducible #5222


See you at AWS re:Invent 2019!

Are you at AWS re:Invent 2019? We’d love to see you! Stop by booth #2225 in the Expo Hall at the Venetian to talk with our solution engineers, learn about Scylla, play our fun iPad game and pick up some cool Sea Monster swag.

Our solutions architects are on hand for demos and you can:

  • See a live demo of Scylla running millions of operations per second on a single cluster, plus our new Workload Prioritization feature which supports multiple workloads (Ingestion, OLTP & OLAP) running on the same cluster.
  • Hear how Scylla Cloud has better performance than DynamoDB with significant cost savings for similar workloads.
  • Learn about Project Alternator, the Scylla Open Source Amazon DynamoDB-compatible API that enables users to migrate to an open-source database that runs anywhere.

 


Cassandra Elastic Auto-Scaling using Instaclustr’s Dynamic Cluster Resizing

This is the third and final part of a mini-series looking at the Instaclustr Provisioning API, including the new Open Service Broker.  In the last blog we demonstrated a complete end to end example using the Instaclustr Provisioning API, which included dynamic Cassandra cluster resizing.  This blog picks up where we left off and explores dynamic resizing in more detail.

  1. Dynamic Resizing

Let’s recap how Instaclustr’s dynamic Cassandra cluster resizing works.  From our documentation:

  1. Cluster health is checked using Instaclustr’s monitoring system, including synthetic transactions.
  2. The cluster’s schema is checked to ensure it is configured for the required redundancy for the operation.
  3. Cassandra on the node is stopped, and the AWS instance associated with the node is switched to a smaller or larger size, retaining the EBS containing the Cassandra data volume, so no data is lost.
  4. Cassandra on the node is restarted. No restreaming of data is necessary.
  5. The cluster is monitored until all nodes have come up cleanly and have been processing transactions for at least one minute (again, using our synthetic transaction monitoring), and then the process moves on to the next nodes.

Steps 1 and 2 are performed once per cluster resize, steps 3 and 4 are performed for each node, and step 5 is performed per resize operation. Nodes can be resized one at a time or concurrently, in which case steps 3 and 4 are performed concurrently for multiple nodes. Concurrent resizing allows up to one rack at a time to be replaced for faster overall resizing.

Last blog we ran the Provisioning API demo for a 6 node cluster (3 racks, 2 nodes per rack), which included dynamic cluster resizing one node at a time (concurrency = 1).  Here it is again:

Welcome to the automated Instaclustr Provisioning API Demonstration

We’re going to Create a cluster, check it, add a firewall rule, resize the cluster, then delete it

STEP 1: Create cluster ID = 5501ed73-603d-432e-ad71-96f767fab05d

racks = 3, nodes Per Rack = 2, total nodes = 6

Wait until cluster is running…

progress = 0.0%………………….progress = 16.666666666666664%……progress = 33.33333333333333%…..progress = 50.0%….progress = 66.66666666666666%……..progress = 83.33333333333334%…….progress = 100.0%

Finished cluster creation in time = 708s

STEP 2: Create firewall rule

create Firewall Rule 5501ed73-603d-432e-ad71-96f767fab05d

Finished firewall rule create in time = 1s

STEP 3 (Info): get IP addresses of cluster: 3.217.63.37 3.224.221.152 35.153.249.73 34.226.175.212 35.172.132.18 3.220.244.132 

STEP 4 (Info): Check connecting to cluster…

TESTING Cluster via public IP: Got metadata, cluster name = DemoCluster1

TESTING Cluster via public IP: Connected, got release version = 3.11.4

Cluster check via public IPs = true

STEP 5: Resize cluster…

Resize concurrency = 1

progress = 0.0%………………………………………………………progress = 16.666666666666664%………………………..progress = 33.33333333333333%……………………….progress = 50.0%……………………….progress = 66.66666666666666%………………………………………..progress = 83.33333333333334%………………………….progress = 100.0%

Resized data centre Id = 50e9a356-c3fd-4b8f-89d6-98bd8fb8955c to resizeable-small(r5-xl)

Total resizing time = 2771s

STEP 6: Delete cluster…

Deleting cluster 5501ed73-603d-432e-ad71-96f767fab05d

Delete Cluster result = {"message":"Cluster has been marked for deletion."}

*** Instaclustr Provisioning API DEMO completed in 3497s, Goodbye!

This time we’ll run it again with concurrency = 2. Because we have 2 nodes per rack, this will resize all the nodes in each rack concurrently before moving onto the next racks.

Welcome to the automated Instaclustr Provisioning API Demonstration

We’re going to Create a cluster, check it, add a firewall rule, resize the cluster, then delete it

STEP 1: Create cluster ID = 2dd611fe-8c66-4599-a354-1bd2e94549c1

racks = 3, nodes Per Rack = 2, total nodes = 6

Wait until cluster is running…

progress = 0.0%………………..progress = 16.666666666666664%…..progress = 33.33333333333333%…..progress = 50.0%…..progress = 66.66666666666666%…….progress = 83.33333333333334%…….progress = 100.0%

Finished cluster creation in time = 677s

STEP 2: Create firewall rule

create Firewall Rule 2dd611fe-8c66-4599-a354-1bd2e94549c1

Finished firewall rule create in time = 1s

STEP 3 (Info): get IP addresses of cluster: 52.44.222.173 54.208.0.230 35.172.244.249 34.192.193.34 3.214.214.142 3.227.210.169 

STEP 4 (Info): Check connecting to cluster…

TESTING Cluster via public IP: Got metadata, cluster name = DemoCluster1

TESTING Cluster via public IP: Connected, got release version = 3.11.4

Cluster check via public IPs = true

STEP 5: Resize cluster…

Resize concurrency = 2

progress = 0.0%…………………….progress = 16.666666666666664%….progress = 33.33333333333333%………………………progress = 50.0%….progress = 66.66666666666666%……………………..progress = 83.33333333333334%…..progress = 100.0%

Resized data centre Id = ae984ecb-0455-42fc-ab6d-d7eccadc6f94 to resizeable-small(r5-xl)

Total resizing time = 1177s

STEP 6: Delete cluster…

Deleting cluster 2dd611fe-8c66-4599-a354-1bd2e94549c1

Delete Cluster result = {"message":"Cluster has been marked for deletion."}

*** Instaclustr Provisioning API DEMO completed in 1872s, Goodbye!

 

This graph compares these two results and shows the total time (in minutes) for provisioning and dynamically resizing the cluster. The provisioning times are similar, while the resizing times are longer than provisioning and differ significantly between the two approaches: resizing by rack is 2.35 times faster than resizing by node (19 minutes vs. 47 minutes).

Dynamic Cluster Resizing - Total times by node vs by rack

This is a graph of the resize time for each unit of resize (node or rack). There are 6 resize operations (by node) and 3 (by rack, 2 nodes at a time). Each rack resize operation is actually slightly shorter than each node resize operation. There is obviously some constant overhead per resize operation, on top of the actual time to resize each node. In particular (step 5 above), the cluster is monitored for at least a minute after each resize operation before moving on to the next one.

Dynamic Cluster Resizing - Resizing times by node vs by rack

In summary, cluster resizing by rack is significantly faster than by node, as multiple nodes in a rack can be resized concurrently (up to the maximum number of nodes in the rack), and each resize operation has an overhead, so a smaller number of resize operations are more efficient.
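
As a rough illustration, the per-operation times can be derived directly from the two measured runs above; this is just arithmetic on the reported totals, in Python:

# Per-operation resize times for the 6-node demo cluster (3 racks, 2 nodes/rack).
racks, nodes_per_rack = 3, 2
total_nodes = racks * nodes_per_rack

by_node_total_s = 2771   # measured total, concurrency = 1
by_rack_total_s = 1177   # measured total, concurrency = 2

by_node_ops = total_nodes   # 6 operations, one node each
by_rack_ops = racks         # 3 operations, one rack (2 nodes) each

print(f"By node: {by_node_ops} ops, ~{by_node_total_s / by_node_ops:.0f}s per op")  # ~462s
print(f"By rack: {by_rack_ops} ops, ~{by_rack_total_s / by_rack_ops:.0f}s per op")  # ~392s
print(f"Overall speedup by rack: {by_node_total_s / by_rack_total_s:.2f}x")         # 2.35x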

  2. Dynamics of Resizing

What actually happens to the cluster during the resize?  The following graphs visualise the dynamics of the cluster resizing over time, by showing the change to the number of CPU cores for each node in the cluster.   For simplicity, this cluster has 2 racks (represented by blue and green bars) with 3 nodes per rack, so 6 nodes in total. We’ll start with the visually simplest case of resizing by rack. This is the start state. Each node in a rack starts with 2 CPU cores. 

Resize by Rack - Cores Per Node

At the start of the first rack resize all the nodes in the first rack (blue) are reduced to 0 cores.

Dynamics of Resizing - Cores Per Node

By the end of the first resize, all of the 2 core nodes in the first rack have been resized to 4 core nodes (blue).

Second Rack - Resize by Rack

The same process is repeated for the second rack (green).

Second Rack - Resize by Rack, 4 cores per node

So we end up with a cluster that has 4 cores per node.

Second Rack - Resize by Rack, resize 2 end

Next we’ll show what happens doing resize by node. It’s basically the same process, but as only one node at a time is resized there are more operations (we only show the 1st and the 6th).  The initial state is the same, with all nodes having 2 cores per node.

Resize by Node - Start

At the start of the first resize, one node is reduced to 0 cores. 

Resize by Node - Resize 1 Start

And at the end of the operation, this node has been resized to 4 cores.

Resize by Node - Resize 1 end

This process is repeated (not shown) for the remaining nodes, until we start the 6th and last resize operation:

Resize by Node - Resize 6 start

Again resulting in the cluster being resized to 4 cores per node.

Resize by Node - Resize 6 end

These graphs show that during the initial resize operation there is a temporary reduction in total cluster capacity. The reduction is more substantial when resizing by rack, as a complete rack is unavailable until it’s resized (3/6 nodes = 50% initial capacity for this example), than if resizing by node, as only 1 node at a time is unavailable (⅚ nodes = 83% initial capacity for this example). 

  3. Resizing Modelling

In order to give more insight into the dynamics of resizing we built a simple time vs. throughput model for dynamic resizing for Cassandra clusters. It shows the changing capacity of the cluster as nodes are resized up from 2 to 4 core nodes. It can be used to better understand the differences between dynamic resizing by node or by rack. We assume a cluster with 3 racks, 2 nodes per rack, and 6 nodes in total.

First, let’s look at resizing one node at a time. This graph shows time versus cluster capacity. We assume 2 units of time per node resize, to capture the start (reduced capacity) and end (increased capacity) states of each operation, as in the graphs above. The initial cluster capacity is 100% (dotted red line), and the final capacity will be 200% (dotted green line). As resizing is by node, initially 1 node is turned off, and eventually replaced by a new node with double the number of cores. However, during the 1st resize, only 5 nodes are available, so the capacity of the cluster is temporarily reduced to ⅚th of the original throughput (83%, the dotted orange line). After the node has been replaced, the theoretical total capacity of the cluster has increased to 116% of the original capacity. This process continues until all the nodes have been resized (note that for simplicity we’ve approximated the cluster capacity as a sine wave; in reality it’s an asymmetrical square wave).

Resizing by Node

The next graph shows resizing by rack. For each resize, 2 nodes are replaced concurrently. During the 1st rack resize the capacity of the cluster is reduced more than for the single node resize, to 4/6th of the original throughput (67%). After two nodes have been replaced the theoretical capacity of the cluster has increased to 133% of the original capacity. So, for rack resizing the final theoretical maximum capacity is still double the original, but it happens faster, and we lose more (⅓) of the starting capacity during the 1st rack resize.

Resize by Rack - 6 nodes, 3 racks
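
The model above is simple enough to express in a few lines of Python. This sketch reproduces the capacity “dip and step” pattern for the 6-node, 3-rack example, assuming each resized node exactly doubles its capacity (the same assumption as the graphs):

# Cluster capacity (% of original) at the start and end of each resize
# operation, assuming every resized node doubles its capacity.
def capacity_timeline(total_nodes, nodes_per_operation):
    timeline, resized = [], 0
    while resized < total_nodes:
        in_flight = min(nodes_per_operation, total_nodes - resized)
        # During the operation: resized nodes count double, in-flight nodes are off.
        during = (2 * resized + (total_nodes - resized - in_flight)) / total_nodes
        resized += in_flight
        # After the operation: the in-flight nodes return with double capacity.
        after = (2 * resized + (total_nodes - resized)) / total_nodes
        timeline.append((round(during * 100), round(after * 100)))
    return timeline

print(capacity_timeline(6, 1))  # by node: [(83, 117), (100, 133), (117, 150), (133, 167), (150, 183), (167, 200)]
print(capacity_timeline(6, 2))  # by rack: [(67, 133), (100, 167), (133, 200)]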

The model has 2 practical implications (irrespective of what type of resizing is used). First, the maximum practical capacity of the cluster during the initial resize operation will be less than 100% of the initial cluster capacity. Exceeding this load during the initial resize operation will overload the cluster and increase latencies. Second, even though the theoretical capacity of the cluster increases as more resize operations are completed, the practical maximum capacity of the cluster will be somewhat less. This is because Cassandra load balancing assumes equal sized nodes, and is therefore unable to perfectly load balance requests across the heterogeneous node sizes. As more nodes are resized there is an increased chance of a request hitting one of the resized nodes, but the useable capacity is still impacted by the last remaining original small node (resulting in increased latencies for requests directed to it). Eventually when all the nodes have been resized the load can safely be increased as load balancing will be back to normal due to homogeneous node sizes.

The difference between resizing by node and by rack is clearer in the following combined graph (resizing by node, blue, by rack, orange):

Resizing by node cf by rack

In summary, it shows that resizing by rack (orange) is faster but has a lower maximum capacity during the first resize operation. This graph summarises these two significant differences (practical maximum capacity during the first resize operation, and relative time to resize the cluster).

Practical max capacity and resize time

This graph shows that the maximum usable capacity during resizing by node ranges from 0.5 to 0.93 of the original capacity with increasing numbers of nodes (from 2 to 15). Obviously, the more nodes the cluster has, the less the impact of resizing only one node at a time.

Max Capacity during resizing (by node)
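
The curve in this graph follows directly from the fraction of nodes that remain online while a single node is resized, as in this small sketch:

# Maximum usable capacity (fraction of original) while one node is offline
# during a resize-by-node operation.
def max_capacity_by_node(total_nodes):
    return (total_nodes - 1) / total_nodes

for n in (2, 3, 6, 10, 15):
    print(n, round(max_capacity_by_node(n), 2))   # 0.5, 0.67, 0.83, 0.9, 0.93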

The problem with resizing by node is that the total cluster resizing time increases linearly with the number of nodes, so it becomes prohibitively high for larger clusters.

Linear increase in cluster resizing time

  4. Auto Scaling Cassandra Elasticity

Another use case for the Instaclustr Provisioning API is to use it for automating Cassandra Elasticity, by initiating dynamic resizing on demand.   A common use case for dynamic scaling is to increase capacity well in advance for predictable loads such as weekend batch jobs or peak shopping seasons. Another less common use case is when the workload unexpectedly increases and the resizing has to be more dynamic. However, to do this you need to know when to trigger a resize, and what type of resize to trigger (by node or rack). Another variable is how much to resize by, but for simplicity we’ll assume we only resize from one instance size up to the next size (the analysis can easily be generalised).

We assume that the Instaclustr monitoring API is used to monitor Cassandra cluster load and utilisation metrics. A very basic auto scaling mechanism could trigger a resize based on exceeding a simple threshold cluster utilization. The threshold would need to be set low enough so that the load doesn’t exceed the reduced load due to the initial resize operation. This obviously has to be lower for resize by rack than by node, which could result in the cluster having lower utilisation than economical, and “over eager” resizing.   A simple threshold trigger also doesn’t help you decide which type of resize to perform (although a simple rule of thumb may be sufficient to decide, e.g. if you have more than x nodes in the cluster then always resize by rack, else by node).

Is there a more sophisticated way of deciding when and what type of resize to trigger? Linear regression is a simple way of using past data to predict future trends, and has good support in the Apache Commons Math library. Linear regression could therefore be run over the cluster utilisation metrics to predict how fast the load is changing.

This graph shows the maximum capacity of the cluster initially (100%). The utilisation of the cluster is measured for 60 minutes, and the load appears to be increasing. A linear regression line is computed to predict the expected load in the future (dotted line). The graph shows that we predict the cluster will reach 100% capacity around the 280 minute mark (220 minutes in the future). 

Utilisation and Regression
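
A minimal Python sketch of this approach is shown below. It fits a least-squares line to (minute, utilisation) samples and predicts when a given threshold will be crossed. Note the blog itself used the Apache Commons Math library; the sample data here is made up purely for illustration.

# Fit a straight line to (minute, utilisation%) samples and predict when a
# utilisation threshold will be crossed. Sample data is illustrative only.
def linear_fit(samples):
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    return slope, mean_y - slope * mean_x        # slope, intercept

def predict_crossing(samples, threshold):
    slope, intercept = linear_fit(samples)
    return None if slope <= 0 else (threshold - intercept) / slope

# 60 minutes of utilisation samples, drifting upward from ~25% towards ~45%.
samples = [(m, 25 + 0.33 * m) for m in range(0, 61, 5)]
print(predict_crossing(samples, 100))   # ~227 minutes, i.e. well in the future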

What can we do to preempt the cluster being overloaded? We can initiate a dynamic resize sufficiently ahead of the predicted time of overload to preemptively increase the cluster capacity.  The following graphs show auto scaling (up) using the Instaclustr Provisioning API to Dynamically Resize Cassandra clusters from one r5 instance size to the next size up in the same family. The capacity of the cluster is 100% initially, and is assumed to be double this (200%) once the resize is completed. The resize times are based on the examples above (but note that there may be some variation in practice), for a 6 node cluster with 3 racks. 

The first graph shows resize by node (orange). As we discovered above, the effective maximum capacity during a resize by node is limited to 83% of the initial capacity, and the total time to resize was measured at 47 minutes. We therefore need to initiate a cluster resize by node (concurrency=1) at least 47 minutes prior (at the 155 minute mark) to the time that the regression predicts we’ll hit a utilisation of 83% (200m mark), as shown in this graph:

Resize by Node

The second graph shows resizing by rack. As we discovered above, the maximum capacity during a resize by rack is limited to 67% of the initial capacity, and the total time to resize was measured at 20 minutes. We therefore need to initiate a cluster resize by rack at least 20 minutes prior (at the 105 minute mark) to the time that the regression predicts we’ll hit a utilisation of 67% (125m mark), as shown in this graph:

Resize by Rack

This graph shows the comparison, and reveals that resize by rack (blue) must be initiated sooner (but completes faster) than resize by node (orange).

Resize by Node vs by Rack

How well does this work in practice? Well, it depends on how accurate the linear regression is. In some cases the load may increase faster than predicted, in other cases slower than predicted, or it may even drop off. For our randomly generated workload it turns out that the utilisation actually increases faster than predicted (green), and both resizing approaches were initiated too late, as the actual load had already increased beyond the safe load for either resize type.

Resize by Node vs by rack 2:2

What could we do better? Resizing earlier is obviously advantageous. A better way of using linear regression is to compute confidence intervals (also supported in Apache Commons Math). The following graph shows a 90% upper confidence interval (red dotted line). Using this line for the predictions to trigger resizes results in earlier cluster resizing, at the 60 and 90 minute marks (the resize by rack, blue, is now triggered immediately after the 60 minutes of measured data is evaluated), so the resizing operations are completed by the time the load becomes critical, as shown in this graph:

Resize by node vs by rack - Triggered with upper confidence interval

The potential downside of using a 90% confidence interval to trigger the resize is that it’s possible that the actual utilisation never reaches the predicted value. For example, here’s a different run of randomly generated load showing a significantly lower actual utilisation (green). In this case it’s apparent that we’ve resized the cluster far too early, and the cluster may even need to be resized down again depending on the eventual trend. 

Resize by node vs by rack - Triggered with upper confidence interval 2:2

The risk of resizing unnecessarily can be reduced by ensuring that the regressions are regularly recomputed (e.g. every 5 minutes), and resizing requests are only issued just in time. For example, in this case the resize by node would not have been requested if the regression was recomputed at the 90 minute mark, using only the previous 60 minutes of data, and extrapolated 60 minutes ahead, as the upper confidence interval is only 75%, which is less than the 83% trigger threshold for resizing by node.

  5. Resizing Rules – by node or rack

As a thought experiment (not tested yet), I’ve developed some pseudocode rules for deciding when and what type of upsize to initiate, assuming the variables are updated every 5 minutes and correct values have been set for a 6-node (3 racks, 2 nodes per rack) cluster.

now = current time

load = current utilisation (last 5m)

rack_threshold = 67% (for 3 racks), in general = 100 – ((1/racks)*100)

node_threshold = 83% (for 6 nodes), in general = 100 – ((1/nodes)*100)

rack_resize_time = 25m (rounded up)

node_resize_time = 50m (rounded up)

predict_time_rack_threshold (time when rack_threshold utilisation is predicted to be reached)

predict_time_node_threshold (time when node_threshold utilisation is predicted to be reached)

predict_load_increasing = true if regression slope is positive else false

sufficient_time_for_rack_resize = (predict_time_rack_threshold – rack_resize_time) > now

sufficient_time_for_node_resize = (predict_time_node_threshold – node_resize_time) > now

trigger_rack_resize_now = (predict_time_rack_threshold – rack_resize_time) <= now

trigger_node_resize_now = (predict_time_node_threshold – node_resize_time) <= now

 

1 Every 5 minutes recompute linear regression function (upper 90% confidence interval) from past 60 minutes of cluster utilisation metrics and update the above variables.

 

2 IF predict_load_increasing THEN

IF load < rack_threshold AND

NOT sufficient_time_for_node_resize AND

trigger_rack_resize_now THEN

TRIGGER RACK RESIZE to next size up (if available)

WAIT for resize to complete

ELSE IF load < node_threshold AND

trigger_node_resize_now THEN

TRIGGER NODE RESIZE to next size up (if available)

WAIT for resize to complete

How does the rule work? The general idea is that if the load is increasing we may need to upsize the cluster, but the type of resize (by rack or by node) depends on the current load and the time remaining to safely do the different types of resizes. We also want to leave a resize as late as possible in case the regression prediction changes signalling that we don’t really need to resize yet. 

If the current load is under the resize by rack threshold (67%), then both options are still possible, but we only trigger the rack resize if there is no longer sufficient time to do a node resize, and it’s the latest possible time to trigger it.

However, if the load has already exceeded the safe rack resize threshold and it’s the latest possible time to trigger a node resize, then trigger a node resize. 

If the load has already exceeded the safe node threshold then it’s too late to resize and an alert should be issued.  An alert should also be issued if the cluster is already using the largest resizeable instances available as dynamic resizing is not possible.

Also note that the rack and node thresholds above are the theoretical best case values, and should be reduced for a production cluster with strict SLA requirements to allow for more headroom during resizing, say by 10% (or more). Allowing an extra 10% headroom reduces the thresholds to 60% (by rack) and 75% (by node) for the 6 node, 3 rack cluster example.
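
To make the rule above a little more concrete, here is a rough Python sketch of the upsizing decision for the 6-node, 3-rack example. Variable names follow the pseudocode; the thresholds and resize times are the rounded values above, and the example inputs are illustrative only.

# Sketch of upsizing rule 2 above, for a 6-node cluster (3 racks, 2 nodes/rack).
# Times are in minutes; load and thresholds are utilisation percentages.
RACK_THRESHOLD, NODE_THRESHOLD = 67, 83
RACK_RESIZE_TIME, NODE_RESIZE_TIME = 25, 50

def upsize_decision(now, load, load_increasing,
                    predict_time_rack_threshold, predict_time_node_threshold):
    if not load_increasing:
        return "no action"
    sufficient_time_for_node_resize = (predict_time_node_threshold - NODE_RESIZE_TIME) > now
    trigger_rack_resize_now = (predict_time_rack_threshold - RACK_RESIZE_TIME) <= now
    trigger_node_resize_now = (predict_time_node_threshold - NODE_RESIZE_TIME) <= now

    if load < RACK_THRESHOLD and not sufficient_time_for_node_resize and trigger_rack_resize_now:
        return "trigger rack resize"
    if load < NODE_THRESHOLD and trigger_node_resize_now:
        return "trigger node resize"
    if load >= NODE_THRESHOLD:
        return "too late to resize safely: raise an alert"
    return "no action yet"

# Example: at t=150m, load is 70% and the regression predicts 67% utilisation
# at t=125m (already passed) and 83% at t=200m.
print(upsize_decision(150, 70, True, 125, 200))   # -> "trigger node resize"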

If you resize a cluster up you should also implement a process for “downsizing” (resizing down). 

This works in a similar way to the above, but because downsizing (eventually) halves the capacity, you can be more conservative about when to downsize. You should trigger it when the load is trending down, and when the capacity is well under 50% so that an “upsize” isn’t immediately triggered after the downsize (e.g. triggering at 40% will mean that the downsized clusters are running at 80% at the end of the downsize, assuming the load doesn’t drop significantly during the downsize operation). Note that you also need to wait until the metrics for the resized cluster have stabilised otherwise the upsizing rules will be triggered (as it will incorrectly look like the load has increased).

After an upsizing has completed you should also wait for close to the billing period for the instances (e.g. an hour) before downsizing, as you have to pay for them for an hour anyway so there’s no point in downsizing until you’ve got value for money out of them, and this gives a good safety period for the metrics to stabilise with the new instance sizes.

Simple downsizing rules (not using any regression or taking into account resize methods) would look like this:

3 IF more than an hour since last resize up completed AND

   NOT predict_load_increasing THEN

IF load < 40% THEN

TRIGGER RACK RESIZE to next size down (if available)

WAIT for resize to complete + some safety margin

We’ve also assumed a simple resize step from one size to the next size up. The algorithm can easily be adapted to resize from/to any instance size in the same family, which would be useful if the load is increasing more rapidly and is predicted to exceed the capacity of a single jump in instance sizes in the available time (but in practice you should do some benchmarking to ensure that you know the actual capacity improvement for different instance sizes).

  6. Resizing Rules – any concurrency

The above analysis assumes you can only resize by node or by rack. In practice, concurrency can be any integer value between 1 and the number of nodes per rack, so you can resize by part of a rack concurrently. Resizing by part rack may in fact be the best choice, as it can mitigate against losing quorum and is reasonably fast (if a “live” node fails during resizing you can lose quorum; the more nodes resized at once, the greater the risk). These rules can therefore be generalised as follows, with resize by node and by rack now just edge cases. We keep track of the concurrency, which starts from an initial value equal to the number of nodes per rack (resize by “rack”) and eventually decreases to 1 (resize by “node”). We also now have a couple of functions that compute values for different values of concurrency, basically the current concurrency and the next concurrency (concurrency – 1). Depending on the number of nodes per rack, some values of concurrency will have different threshold values but the same resize time (if the number of nodes isn’t evenly divisible by the concurrency).

now = current time

load = current utilisation (last 5m)

racks = number of racks (e.g. 3)

nodes_per_rack = number of nodes per rack (e.g. 10)

total_nodes = racks * nodes_per_rack

concurrency = nodes_per_rack

threshold(C) = 100 – ((C/total_nodes) * 100)

resize_operations(C) = roundup(nodes_per_rack/C) * racks

resize_time(C) = resize_operations(C) * average time per resize operation (measured)

predict_time_threshold(C) = time when threshold(C) utilisation is predicted to be reached

predict_load_increasing = true if regression slope is positive else false

sufficient_time_for_resize(C) = (predict_time_threshold(C) – resize_time(C)) > now

trigger_resize_now(C) = (predict_time_threshold(C) – resize_time(C)) <= now

 

1 Every 5 minutes recompute linear regression function (upper 90% confidence interval) from past 60 minutes of cluster utilisation metrics and update the above variables.

 

2 IF predict_load_increasing THEN

IF load < threshold(concurrency) AND

(concurrency == 1

 OR 

resize_time(concurrency) == resize_time(concurrency – 1) 

 OR

 NOT sufficient_time_for_resize(concurrency – 1))  

AND

trigger_resize_now(concurrency) THEN

TRIGGER RESIZE(concurrency) to next size up (if available)

WAIT for resize to complete

ELSE IF concurrency == 1

exception(“load has exceeded node size threshold and insufficient time to resize”)

ELSE IF load < threshold(concurrency – 1)

concurrency = concurrency – 1

This graph shows the threshold (U%) and cluster resize time for an example 30 node cluster, with 3 racks and 10 nodes per rack and a time for each resize operation of 3 minutes (faster than reality, but easier to graph). Starting with concurrency = 10 (resize by “rack’), the rules will trigger a resize if the current load is increasing, under the threshold (67%, in the blue region), there is just sufficient predicted time for the resize to take place, and the next concurrency (9) resize time is the same or there is insufficient time for the next concurrency (9) resize. However, if the load has already exceeded the threshold then the next concurrency level (9) is selected and the rules are rerun 5 minutes later.  The graph illustrates the fundamental tradeoff with the dynamic cluster resizing, which is that smaller concurrencies enable resizing with higher starting loads (blue), with the downside that resizes take longer (orange). 

Resize Concurrency vs. Threshold and Cluster resize time
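
The threshold and resize-time functions from the generalised rules are easy to tabulate; this sketch reproduces the data behind the graph above for the 30-node example (3 racks, 10 nodes per rack, 3 minutes per resize operation):

import math

racks, nodes_per_rack = 3, 10
total_nodes = racks * nodes_per_rack
minutes_per_operation = 3   # per resize operation, as assumed for the graph

def threshold(concurrency):
    # Maximum safe starting utilisation (%) for this concurrency.
    return 100 - (concurrency / total_nodes) * 100

def resize_operations(concurrency):
    return math.ceil(nodes_per_rack / concurrency) * racks

def resize_time(concurrency):
    return resize_operations(concurrency) * minutes_per_operation

for c in range(nodes_per_rack, 0, -1):
    print(f"concurrency={c:2d}  threshold={threshold(c):4.1f}%  "
          f"operations={resize_operations(c):2d}  resize time={resize_time(c):2d}m")
# e.g. concurrency=10 -> 66.7%, 3 operations, 9m; concurrency=1 -> 96.7%, 30 operations, 90m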

These (or similar) rules may or may not be a good fit for a specific use case. For example, they are designed to be conservative, preventing over-eager resizing by waiting as late as possible to initiate a resize at the current concurrency level. This may mean that due to load increases you miss the opportunity to do a resize at a higher concurrency level, so you have to wait longer for a resize at a lower concurrency level. Instead, you could choose to trigger a resize at the highest possible concurrency as soon as possible, by simply removing the check for there being insufficient time to resize at the next concurrency level.

Finally, these or similar rules could be implemented with monitoring data from the Instaclustr Cassandra Prometheus API, using Prometheus PromQL linear regression and rules to trigger Instaclustr provisioning API dynamic resizing.


Scylla Enterprise Release 2019.1.3

Scylla Enterprise Release Notes

The ScyllaDB team announces the release of Scylla Enterprise 2019.1.3, which is a production-ready Scylla Enterprise patch release. As always, Scylla Enterprise customers are encouraged to upgrade to Scylla Enterprise 2019.1.3 in coordination with the Scylla support team.

The focus of Scylla Enterprise 2019.1.3 is improving stability and robustness, by fixing issues and improving security by enabling two new key providers for Encryption at Rest. More below.


New Providers for Scylla Enterprise Encryption at Rest

We introduced encryption at rest with Scylla 2019.1.1. Scylla Enterprise protects your sensitive data with data-at-rest encryption.

One of the key elements of encryption at rest is how the encryption keys are stored, for obvious reasons: if lost, your data becomes unreadable; if compromised, your data might be exposed.

The encryption key storage options are determined in Scylla by selecting a Key Provider.

Scylla 2019.1.1 only supported one Key Provider: Local, which allows you to keep keys on the file system of each node.

Scylla 2019.1.3 adds two more Key Providers:

  • Table provider, which allows you to store table keys in Scylla tables and eliminates the need to copy the table key to each server.
  • KMIP provider. KMIP is a standard protocol for exchanging keys in a secure way. With this key provider, you can use any KMIP compatible server to secure Scylla Encryption keys.

More on the new providers can be found in the documentation.

Fixed issues in this release are listed below, with open source references, if present:

  • Stability: Fix handling of schema alterations and evictions in the cache, which may result in a node crash #5127 #5128 #5134 #5135
  • Stability: Fix a bug in cache accounting #5123
  • Stability: Fix a bug that can cause streaming to a new node to fail with “Cannot assign requested address” error #4943
  • Stability: A race condition in node boot can fail the init process #4709
  • Stability: Can not replace a node which is failed in the middle of the boot-up process (same root cause as #4709 above) #4723
  • Stability: Perftune.py script fails to start with “name ‘logging’ is not defined” error #4958 #4922
  • Stability: Scylla may hang or even segfault when querying system.size_estimates #4689
  • Performance: Range scans run in the wrong service level (workload prioritization) (internal #1052)
  • Performance: Wrong priority for view streaming slows down user requests #4615
  • Hinted handoff:
    • Fix races that may lead to use-after-free events and file system level exceptions during shutdown and drain #4685 #4836
    • Commit log error “Checksum error in segment chunk at” #4231
  • Docker: An issue in command-line options parsing prevents Scylla Docker from starting, reporting “error: too many positional options have been specified on the command line” error #4141
  • In-Transit Encryption: Streaming in local DC fails if only inter-DC encryption is enabled #4953


ApacheCon Berlin, 22-24 October 2019

ApacheCon Europe, October 22-24, 2019, Kulturbrauerei Berlin #ACEU19 https://aceu19.apachecon.com/

What’s better than one ApacheCon? Another ApacheCon! This year there were two Apache Conferences, one in Las Vegas and then again in Berlin.

They were similar but different. What were some differences between ApacheCon Berlin and Las Vegas? The location. In contrast to Las Vegas, a hyper-real gambling oasis in a bone-dry desert, Berlin is a real capital city steeped in history. The venue was a historic brewery, the Kulturbrauerei (once the largest in the world, but unfortunately no longer brewing, so also “dry”):

ApacheCon Berlin 2019 - Kulturbrauerei

It has multiple nightclubs, and the concrete-bunker-like main auditorium of the Kesselhaus (boiler house) is the perfect venue for a heavy metal concert (and the conference keynotes). You also can’t escape the history: next door to the conference there was a museum of everyday life in East Germany before the wall came down 30 years ago. The East Germans were often creative in coping with the restrictions and shortages of life under Communism, and came up with innovations such as this “camping car”!

ApacheCon - Camping Car

The Berlin ApacheCon was also smaller, but in a more compact location and with fewer tracks (General, Community, Machine Learning, IoT, Big Data), so on average the talks had more buzz than in Las Vegas, with an environment more conducive to catching up with people more than once for ongoing conversations afterwards. There was also an Open Source Design workshop (focusing on usability). I’d met some of the people involved in this at the speakers reception and had a lively dinner conversation (perhaps because I was the only non-designer at the table, so I asked lots of silly questions). It’s good to see Open Source UX design getting the attention it deserves, as once upon a time “Open Source” was synonymous with “Badly Designed”!

This “State of the feather” talk by David Nalley (Executive Vice President, ASF), espoused the Apache Way resulting in vendor neutrality, independence, trust and safety for contributors and users (and the photo reveals something of the industrial ambience of the boiler house):

“State of the feather” talk by David Nalley

Instaclustr was proud to be one of the ApacheCon EU sponsors:

ApacheCon EU Sponsors 2019

I had the privilege of kicking off the Machine Learning track held in the (appropriately named) “Maschinenhaus” with my talk “Kafka, Cassandra and Kubernetes at Scale – Real-time Anomaly detection on 19 billion events a day”. 

ApacheCon Berlin 2019 - Paul Brebner's talk on Building a Scalable Streaming Iot Application using Apache Kafka

My second talk of the day was in the more intimate venue, the “Frannz Salon”, in the IoT track, on “Kongo: Building a Scalable Streaming IoT Application using Apache Kafka”.

I managed to attend some talks by other speakers at ApacheCon Berlin. These are some of the highlights.

This IoT talk intersected with mine in terms of the problem domain (real-time RFID asset tracking), but provided a good explanation of Apache Flink, including Flink pattern matching which is a powerful CEP library: “7 Reasons to use Apache Flink for your IoT Project – How We Built a Real-time Asset Tracking System”.

In the Big Data track, there was a talk on the hot topic of how to use Kubernetes to run Apache software: Patterns and Anti-Patterns of running Apache bigdata projects in Kubernetes.

And finally a talk about an impressive Use Case for Apache Kafka, monitoring the largest machine in the world, the Large Hadron Collider at CERN: “Open Source Big Data Tools accelerating physics research at CERN”. Monitoring data had to be collected from 100s of accelerators and detector controllers, experiment data catalogues, data centres and system logs. Kafka is used as the buffered transport to collect and direct monitoring data, Spark is used to perform intermediate processing including data enrichment and aggregation, storage is provided by HDFS, ElasticSearch etc, and data analysis by Kibana, Grafana and Zeppelin. This pipeline handles a peak of 500GB of monitoring data a day (with capacity for more). Another innovation is a system called SWAN, to provide ephemeral Spark clusters on Kubernetes, for user managed Spark resources (provision, control, use and dispose).  A similar scalable pipeline enabled by fully managed Kafka and ElasticSearch is available from Instaclustr.

Signboard - East and West Germany

Berlin was a thought-provoking historical location for ApacheCon Europe. For approaching 30 years (1961-1989) the wall divided East and West Germany, with incompatible political, economic and social systems pitted in a stand-off with only metres separating them, and only a few tightly controlled points of access bridging them. However, 30 years ago the wall came down and Berlin was reunified, resulting in rapid social and economic changes. In a similar way, over the last 20 years, Apache has shifted the software world from Licences and Lock-in to Free and Open Source, and created a vibrant ecosystem of projects, people and software. I look forward to repeating the experience next year.

After ApacheCon I had a chance to explore a different sort of history. The Berlin technology museum had an impressive display of old computers, including a reconstruction of the 1st programmable (using punched movie film) digital (but mechanical) computer, the Z1, built in 1938 by Konrad Zuse. Perhaps more significantly, in the 1940s Zuse also designed the first high level programming language, Plankalkül, evidently conceptually decades ahead of other high level languages (he included array and parallel processing, and goal-directed execution), and it probably influenced Algol.

Konrad Zuse and the Z1 computer in the Berlin technology museum


Finally, the friendly Instaclustr team at our stand at ApacheCon Berlin.  If you didn’t have a chance to talk to us in person at ApacheCon, contact us at sales@instaclustr.com

Instaclustr at ApacheCon Berlin 2019
