The Strategy Behind ReversingLabs’ Monster Scale Key-Value Migration

Migrating 300+ TB of data and 400+ services from a key-value database to ScyllaDB – with zero downtime

ReversingLabs recently completed the largest migration in their history: migrating more than 300 TB of data, more than 400 services, and their data models from their internally-developed key-value database to ScyllaDB, seamlessly and with zero downtime. Services using multiple tables — reading, writing, and deleting data, and even using transactions — needed to go through a fast and seamless switch. How did they pull it off?

Martina recently shared their strategy, including data modeling changes, the actual data migration, service migration, and a peek at how they addressed distributed locking. Here's her complete tech talk. And you can read highlights below…

About ReversingLabs

ReversingLabs is a security company that aims to analyze every enterprise software package, container, and file to identify potential security threats and mitigate cybersecurity risks. They maintain a library of 20B classified samples of known “goodware” (benign) and malware files and packages. Those samples are supported by ~300 TB of metadata, which are processed using a network of approximately 400 microservices. As Martina put it: “It’s a huge system, complex system – a lot of services, a lot of communication, and a lot of maintenance.”

Never build your own database (maybe?)

When the ReversingLabs team set out to select a database in 2011, the options were limited:

Cassandra was at version 0.6, which lacked role-level isolation
DynamoDB was not yet released
ScyllaDB was not yet released
MongoDB 1.6 had consistency issues between replicas
PostgreSQL was struggling with multi-version concurrency control (MVCC), which created significant overhead

“That was an issue for us—Postgres used so much memory,” Martina explained. “For a startup with limited resources, having a database that ate all our memory was a problem. So we built our own data store. I know, it’s scandalous—a crazy idea today—but in this context, in this market, it made sense.”

The team built a simple key-value store tailored to their specific needs—no extra features, just efficiency. It required manual maintenance and was only usable by their specialized database team. But it was fast, used minimal resources, and helped ReversingLabs, as a small startup, handle massive amounts of data (which became a core differentiator).

However, after 10 years, ReversingLabs’ growing complexity and expanding use cases became overwhelming – to the database itself and to the small database team responsible for it. Realizing that they had reached their home-grown database’s tipping point, they started exploring alternatives. Enter ScyllaDB. Martina shared: “After an extensive search, we found ScyllaDB to be the most suitable replacement for our existing database. It was fast, resilient, and scalable enough for our use case. Plus, it had all the features our old database lacked. So, we decided on ScyllaDB and began a major migration project.”

Migration Time

The migration involved 300 TB of data, hundreds of tables, and 400 services. The system was complex, so the team followed one rule: keep it simple. They made minimal changes to the data model and didn’t change the code at all. “We decided to keep the existing interface from our old database and modify the code inside it,” Martina shared. “We created an interface library and adapted it to work with the ScyllaDB driver.
The services didn’t need to know anything about the change—they were simply redeployed with the new version of the library, continuing to communicate with ScyllaDB instead of the old database.”

Moving from a database with a single primary node to one with a leaderless ring architecture did require some changes, though. The team had to adjust the primary key structure, but the value itself didn’t need to be changed. In the old key-value store, data was stored as a packed protobuf with many fields. Although ScyllaDB could unpack these protobufs and separate the fields, the team chose to keep them as they were to ensure a smoother migration. At this point, they really just wanted to make it work exactly like before. The migration had to be invisible — they didn’t want API users to notice any differences.

Here’s an overview of the migration process they performed once the models were ready (a sketch of the streaming step follows the list):

1. Stream the old database output to Kafka
The first step was to set up a Kafka topic dedicated to capturing updates from the old database.

2. Dump the old database into a specified location
Once the streaming pipeline was in place, the team exported the full dataset from the old database.

3. Prepare a ScyllaDB table by configuring its structure and settings
Before loading the data, they needed to create a ScyllaDB table with the new schema.

4. Prepare and load the dump into the ScyllaDB table
With the table ready, the exported data was transformed as needed and loaded into ScyllaDB.

5. Continuously stream data to ScyllaDB
They set up a continuous pipeline with a service that listened to the Kafka topic for updates and loaded the data into ScyllaDB. After the backlog was processed, the two databases were fully in sync, with only a negligible delay between the data in the old database and ScyllaDB.

It’s a fairly straightforward process…but it had to be repeated for 100+ tables.
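To make step 5 more concrete, here is a minimal sketch of what such a catch-up consumer could look like using the Kafka and ScyllaDB Java drivers. This is not ReversingLabs' actual code; the topic name, keyspace, table, and record format are illustrative assumptions, and the value is kept as an opaque blob to mirror the packed-protobuf approach described above.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.nio.ByteBuffer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CatchUpConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // illustrative address
        props.put("group.id", "old-db-to-scylladb");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
             CqlSession session = CqlSession.builder().build()) { // contact points from config

            consumer.subscribe(List.of("old-db-updates")); // hypothetical topic name

            // The value stays a packed protobuf blob, mirroring the old store's format.
            PreparedStatement insert = session.prepare(
                    "INSERT INTO ks.samples (sample_key, value) VALUES (?, ?)");

            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, byte[]> record : records) {
                    session.execute(insert.bind(record.key(), ByteBuffer.wrap(record.value())));
                }
                consumer.commitSync(); // only advance offsets after the writes succeed
            }
        }
    }
}
```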
Next Up: Service Migration

The next challenge was migrating their ~400 microservices. Martina introduced the system as follows: “We have master services that act as data generators. They listen for new reports from static analysis, dynamic analysis, and other sources. These services serve as the source of truth, storing raw reports that need further processing. Each master service writes data to its own table and streams updates to relevant queues. The delivery services in the pipeline combine data from different master services, potentially populating, adding, or calculating something with the data, and combining various inputs. Their primary purpose is to store the data in a format that makes it easy for the APIs to read. The delivery services optimize the data for queries and store it in their own database, while the APIs then read from these new databases and expose the data to users.”

Here’s the 5-step approach they applied to service migration:

1. Migrate the APIs one by one
The team migrated APIs incrementally. Each API was updated to use the new ScyllaDB-backed interface library. After redeploying each API, the team monitored performance and data consistency before moving on to the next one.

2. Prepare for the big migration day
Once the APIs were migrated, they had to prepare for the big migration day. Since all the services before the APIs are intertwined, they all had to be migrated at once.

3. Stop the master services
On migration day, the team stopped the master services (data generators), causing input queues to accumulate until the migration was complete. During this time, the APIs continued serving traffic without any downtime. However, the data in the databases was delayed for about an hour or two until all services were fully migrated.

4. Migrate the delivery services
After stopping the master services, the team waited for the queues between the master and delivery services to empty – ensuring that the delivery services processed all data and stopped writing. The delivery services were then migrated one by one to the new database. No new data was flowing at this point because the master services were stopped.

5. Migrate and start the master services
At last, it was time to migrate and start the master services. The final step was to shut down the old database because everything was now working on ScyllaDB.

“It worked great,” Martina shared. “We were happy with the latencies we achieved. If you remember, our old architecture had a single master node, which created a single point of failure. Now, with ScyllaDB, we had resiliency and high availability, and we were quite pleased with the results.”

And Finally…Resource Locking

One final challenge: resource locking. Per Martina, “In the old architecture, resource locking was simple because there was a single master node handling all writes. You could just use a mutex on the master node, and that was it—locking was straightforward. Of course, it needed to be tied to the database connection, but that was the extent of it.”

ScyllaDB’s leaderless architecture meant that the team had to figure out distributed locking. They leveraged ScyllaDB’s lightweight transactions and built a distributed locking mechanism on top of them. The team worked closely with ScyllaDB engineers, going through several proofs of concept (POCs)—some successful, others less so. Eventually, they developed a working solution for distributed locking in their new architecture. You can read all the details in Martina’s blog post, Implementing distributed locking with ScyllaDB.
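Martina's post covers the full design; as a simple illustration of the general idea (not ReversingLabs' actual implementation), a lock can be modeled as a row that is only inserted if it does not already exist, with a TTL acting as a lease. The keyspace, table, and method names below are assumptions for the sketch.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;

import java.util.UUID;

public class LwtLock {
    private final CqlSession session;
    private final UUID owner = UUID.randomUUID();

    public LwtLock(CqlSession session) {
        this.session = session;
        // One row per lockable resource; the TTL on acquisition acts as a lease
        // so a crashed owner cannot hold a lock forever. (Schema is illustrative.)
        session.execute("CREATE TABLE IF NOT EXISTS ks.locks " +
                "(resource text PRIMARY KEY, owner uuid)");
    }

    /** Tries to acquire the lock for up to leaseSeconds; returns true on success. */
    public boolean tryLock(String resource, int leaseSeconds) {
        ResultSet rs = session.execute(
                "INSERT INTO ks.locks (resource, owner) VALUES (?, ?) " +
                "IF NOT EXISTS USING TTL " + leaseSeconds,
                resource, owner);
        return rs.wasApplied(); // LWT: applied only if no one else holds the lock
    }

    /** Releases the lock, but only if this process still owns it. */
    public void unlock(String resource) {
        session.execute(
                "DELETE FROM ks.locks WHERE resource = ? IF owner = ?",
                resource, owner);
    }
}
```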

Efficient Full Table Scans with ScyllaDB Tablets

“Tablets” data distribution makes full table scans on ScyllaDB more performant than ever

Full scans are resource-intensive operations that read through an entire dataset. They’re often required by analytical queries such as counting total records, identifying users from specific regions, or deriving top-K rankings. This article describes how ScyllaDB’s shift to tablets significantly improves full scan performance and processing time, as well as how it eliminates the complex tuning heuristics often needed with the previous vNode-based approach.

It’s been quite some time since we last touched on the subject of handling full table scans on ScyllaDB. Previously, Avi Kivity described how the CQL token() function could be used in a divide-and-conquer approach to efficiently run analytics on top of ScyllaDB. We also provided sample Go code and demonstrated how easily and efficiently full scans could be done. With the recent introduction of tablets, it turns out that full scans are more performant than ever.

Token Ring Revisited

Prior to tablets, nodes in a ScyllaDB cluster owned fractions of the token ring, also known as token ranges. A token range is nothing more than a contiguous segment represented by two (very large) numbers. By default, each node used to own 256 ranges, also known as vNodes. When data gets written to the cluster, the Murmur3 hashing function is responsible for distributing data to replicas of a given token range. A full table scan thus involved parallelizing several token ranges until clients eventually traversed the entire ring. As a refresher, a scan involves iterating through multiple subranges (smaller vNode ranges) with the help of the token() function, like this:

SELECT ... FROM t WHERE token(key) >= ? AND token(key) < ?

To fully traverse the ring as fast as possible, clients needed to keep parallelism high enough (number of nodes x shard count x some smudge factor) to fully benefit from all available processing power. In other words, different cluster topologies would require different parallelism settings, which could often change as nodes got added or removed. Traversing vNodes worked nicely, but the approach introduced some additional drawbacks, such as:

Sparse tables result in wasted work because most token ranges contain little or no data.
Popular and high-density ranges could require fine-grained tuning to prevent uneven load distribution and resource contention. Otherwise, they would be prone to processing bottlenecks and suboptimal utilization.
It was impossible to scan a token range owned by a single shard, and particularly difficult to even scan a range owned by a single replica. This increased coordination overhead and created a performance ceiling on how fast a single token range could be processed.

The old way: system.size_estimates

To assist applications during range scans, ScyllaDB provided a node-local system.size_estimates table (something we inherited from Apache Cassandra) whose schema looks like this:

CREATE TABLE system.size_estimates (
    keyspace_name text,
    table_name text,
    range_start text,
    range_end text,
    mean_partition_size bigint,
    partitions_count bigint,
    PRIMARY KEY (keyspace_name, table_name, range_start, range_end)
)

Every token range owned by a given replica provides an estimated number of partitions along with a mean partition size. The product of both columns therefore provides a raw estimate of how much data needs to be retrieved if a scan reads through the entire range.
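As a rough illustration of how a scan job could consume those estimates (a sketch, not the connectors' actual code), it can multiply the two columns per range and sum them up for the table it plans to scan. Class and method names are assumptions.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

public class SizeEstimates {
    /** Sums partitions_count * mean_partition_size across a node's local ranges. */
    public static long estimateTableBytes(CqlSession session, String ks, String table) {
        long totalBytes = 0;
        for (Row row : session.execute(
                "SELECT partitions_count, mean_partition_size " +
                "FROM system.size_estimates WHERE keyspace_name = ? AND table_name = ?",
                ks, table)) {
            // Raw per-range estimate; remember these numbers are only estimates.
            totalBytes += row.getLong("partitions_count") * row.getLong("mean_partition_size");
        }
        return totalBytes; // repeat against every node to cover the full ring
    }
}
```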
This design works nicely under small clusters and when data isn’t frequently changing. Since the data is node-local, an application in charge of the full scan would be required to keep track of 256 vNodes × node count entries to submit its queries. Therefore, larger clusters could introduce higher processing overhead. Even then, (as the table name suggests) the number of partitions and their sizes are just estimates, which can be underestimated or overestimated. Underestimating a token range size makes a scan more prone to timeouts, particularly when its data contains a few large partitions along with many smaller keys. Overestimating it means a scan may take longer to complete due to wasted cycles while scanning through sparse ranges.

Parsing the system.size_estimates table’s data is precisely what connectors like Trino and Spark do when you integrate them with either Cassandra or ScyllaDB. To address estimate skews, these tools often allow you to manually tune settings like split-size in a trial-and-error fashion until it somewhat works for your workload. The rationale works like this:

Clients parse the system.size_estimates data from every node in the cluster (since vNodes are non-overlapping ranges, this fully describes the ring distribution)
The size of a specific range is determined by partitionsCount * meanPartitionSize
It then calculates the estimated number of partitions and the size of the table to be scanned
It evenly splits each vNode range into subranges, taking its corresponding ring fraction into account
Subranges are parallelized across workers and routed to natural replicas as an additional optimization

Finally, prior to tablets there was no deterministic way to scan a particular range and target a specific ScyllaDB shard. vNodes have no 1:1 token/shard mapping, meaning a single coordinator request would often need to communicate with other replica shards, making it particularly easy to introduce CPU contention.

A layer of indirection: system.tablets

Starting with ScyllaDB 2024.2, tablets are production ready. Tablets are the foundation behind ScyllaDB’s elasticity, while also effectively addressing the drawbacks involved with full table scans under the old vNode structure. In case you missed it, I highly encourage you to watch Avi Kivity’s talk on Tablets: Rethinking Replication for an in-depth understanding of how tablets evolved from the previous static vNode topologies. During his talk, Avi mentions that tablets are implemented as a layer of indirection mapping a token range to a (replica, shard) tuple. This layer of indirection is exposed in ScyllaDB as the system.tablets table, whose schema looks like this:

CREATE TABLE system.tablets (
    table_id uuid,
    last_token bigint,
    keyspace_name text STATIC,
    resize_seq_number bigint STATIC,
    resize_type text STATIC,
    table_name text STATIC,
    tablet_count int STATIC,
    new_replicas frozen<list<frozen<tuple<uuid, int>>>>,
    replicas frozen<list<frozen<tuple<uuid, int>>>>,
    session uuid,
    stage text,
    transition text,
    PRIMARY KEY (table_id, last_token)
)

A tablet represents a contiguous token range owned by a group of replicas and shards. Unlike the previous static vNode topology, tablets are created on a per-table basis and get dynamically split or merged on demand. This is important, because workloads may vary significantly:

Some are very throughput intensive under frequently accessed (and small) data sets and will have fewer tablets. These take less time to scan.
Others may become considerably storage bound over time, spanning multiple terabytes (or even petabytes) of disk space. These take longer to scan.

A single tablet targets a geometric average size of 5GB. Therefore, splits are done when a tablet reaches 10GB, and merges when it shrinks to 2.5GB. Note that the average size is configurable, and the default might change in the future. However, scanning over each tablet-owned range allows full scans to deterministically bound how much data they are reading. The only exception to this rule is when very large (larger than the average) partitions are involved, although this is an edge case.

Consider the following set of operations: In this example, we start by defining that we want tables within the ks keyspace to start with 128 tablets each. After we create table t, observe that the tablet_count matches what we’ve set upfront. If we had asked for a number that isn’t a power of two, the tablet_count would be rounded up to the next power of two. The tablet_count represents the total number of tablets across the cluster, while the replicas column holds the (host ID, shard) tuples that are replicas of that tablet, matching our defined replication factor.

Therefore, the previous logic can be optimized like this:

Clients parse the system.tablets table and retrieve the existing tablet distribution
Tablet ranges spanning the same replica-shards get grouped and split together
Workers route requests to natural replica/shard endpoints via shard awareness by setting a routingKey for every request

Full scans over tablets have lots to benefit from these improvements. By directly querying specific shards, we eliminate the cost of cross-CPU and cross-node communication. Traversing the ring is not only more efficient, but effectively removes the problem of sparse ranges and different tuning logic for small and large tables. Finally, given that a tablet has a predetermined size, long gone are the days of fine-tuning splitSizes!
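As a simplified illustration of the first step (not the repository's actual code), a client can read system.tablets and derive one scannable token range per tablet. Keyspace, table, and class names below are assumptions.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

import java.util.ArrayList;
import java.util.List;

public class TabletRanges {
    /** A token range (start exclusive, end inclusive) covered by one tablet. */
    record Range(long startExclusive, long endInclusive) {}

    /** Derives one scannable token range per tablet of the given table. */
    public static List<Range> forTable(CqlSession session, String ks, String table) {
        List<Range> ranges = new ArrayList<>();
        long previousLastToken = Long.MIN_VALUE; // start of the token ring
        // Filtering on the static keyspace/table name columns keeps the sketch
        // short; a real job would resolve the table_id first.
        for (Row row : session.execute(
                "SELECT last_token FROM system.tablets " +
                "WHERE keyspace_name = ? AND table_name = ? ALLOW FILTERING",
                ks, table)) {
            long lastToken = row.getLong("last_token");
            ranges.add(new Range(previousLastToken, lastToken));
            previousLastToken = lastToken;
        }
        // Each range can then be scanned with
        //   SELECT ... FROM t WHERE token(key) > ? AND token(key) <= ?
        // and routed to that tablet's replicas/shards (the "replicas" column)
        // for shard-aware scheduling.
        return ranges;
    }
}
```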
Example

This GitHub repo contains boilerplate code demonstrating how to carry out these tasks efficiently. The process involves splitting tablets into smaller pieces of work and scheduling them evenly across their corresponding replica/shards. The scheduler ensures that replica shards are kept busy with at least 2 in-flight requests each, while the least loaded replica always consumes pending work for processing. The code also simulates real-world latency variability by introducing some jitter during each request’s processing.

Conclusion

This is just the beginning of our journey with tablets. The logic explained in this blog is provided for application builders to follow as part of their full scan jobs. It is worth mentioning that the previous vNode technique is backward compatible and still works if you use tablets. Remember that full scans often require reading through lots of data, and we highly recommend using BYPASS CACHE to prevent invalidating important cached rows. Furthermore, ScyllaDB Workload Prioritization helps with isolation and ensures latencies from concurrent workloads are kept low. Happy scanning!

The Managed Apache Cassandra® Buyer's Guide

In this post, we'll look at the benefits of using managed Cassandra versus self-hosting, as well as what factors to assess before you make a purchase decision.

From Raw Performance to Price Performance: A Decade of Evolution at ScyllaDB

This is a guest post authored by tech journalist George Anadiotis. It’s a follow-up to articles that he published in 2023 and 2022.

In business, they say it takes ten years to become an overnight success. In technology, they say it takes ten years to build a file system. ScyllaDB is in the technology business, offering a distributed NoSQL database that is monstrously fast and scalable. It turns out that it also takes ten years or more to build a successful database.

This is something that Felipe Mendes and Guilherme Nogueira know well. Mendes and Nogueira are Technical Directors at ScyllaDB, working directly on the product as well as consulting clients. Recently, they presented some of the things they’ve been working on at ScyllaDB’s Monster Scale Summit, and they shared their insights in an exclusive fireside chat. You can also catch the podcast on Apple, Spotify, and Amazon.

The evolution of ScyllaDB

When ScyllaDB started out, it was all about raw performance. The goal was to be “the fastest NoSQL database available in the market, and we did that – we still are” as Mendes put it. However, as he added, raw speed alone does not necessarily make a good database. Features such as materialized views, secondary indexes, and integrations with third-party solutions are really important as well. Adding such features marked the second generation in ScyllaDB’s evolution. ScyllaDB started as a performance-oriented alternative to Cassandra, so inevitably, evolution meant feature parity with Cassandra.

The third generation of ScyllaDB was marked by the move to the cloud. ScyllaDB Cloud was introduced in 2019 and has been growing at 200% YoY. As Nogueira shared, even today there are daily signups of new users ready to try the oddly-named database that’s used by companies such as Discord, Medium, and Tripadvisor, all of which the duo works with.

The next generation brought a radical break from what Mendes called the inefficiencies in Cassandra, which involved introducing the Raft protocol for node coordination. Now ScyllaDB is moving to a new generation by implementing what Mendes and Nogueira referred to as hallmark features: strong consistency and tablets.

Strong consistency and tablets

The combination of the new Raft and Tablets features enables clusters to scale up in seconds because it enables nodes to join in parallel, as opposed to sequentially, which was the case for the Gossip protocol in Cassandra (which ScyllaDB also relied on originally). But it’s not just adding nodes that’s improved, it’s also removing nodes. When a node goes down for maintenance, for example, ScyllaDB’s strong consistency support means that the rest of the nodes in the cluster will be immediately aware. By contrast, in the previously supported regime of eventual consistency via a gossip protocol, it could take such updates a while to propagate.

Using Raft means transitioning to a state machine mechanism, as Mendes noted. A node leader is appointed, so when a change occurs in the cluster, the state machine is updated and the change is immediately propagated. Raft is used to propagate updates consistently at every step of a topology change. It also allows for parallel topology updates, such as adding multiple nodes at once. This was not possible under the gossip-based approach. And this is where tablets come in. With tablets, instead of having one single leader per cluster, there is one leader per tablet. A tablet is a logical abstraction that partitions data in tables into smaller fragments.
Tablets are load-balanced after new nodes join, ensuring consistent distribution across the cluster. Any changes to tablet ownership are also ensured to be consistent by using Raft to propagate these changes. Each tablet is independent from the rest, which means that ScyllaDB with Raft can move them to other nodes on demand, atomically and in a strongly consistent way, as workloads grow or shrink.

Speed, economy, elasticity

By breaking down tables into smaller and more manageable units, data can be moved between nodes in a cluster much faster. This means that clusters can be scaled up rapidly, as Mendes demonstrated. When new nodes join a cluster, the data is redistributed in minutes rather than hours, which was the case previously (and is still the case with alternatives like Cassandra).

When we’re talking about machines that have higher capacity, that also means that they have a higher storage density to be used, as Mendes noted. Tablets balance out in a way that utilizes storage capacity evenly, so all nodes in the cluster will have a similar utilization rate. That’s because the number of tablets at each node is determined according to the number of CPUs, which is always tied to storage in cloud nodes. In this sense, as storage utilization is more flexible and the cluster can scale faster, it also allows users to run at a much higher storage utilization rate.

A typical storage utilization rate, Mendes said, is 50% to 60%. ScyllaDB aims to run at up to 90% storage utilization. That’s because tablets and cloud automations enable ScyllaDB Cloud to rapidly scale the cluster once those storage thresholds are exceeded, as ScyllaDB’s benchmarking shows. Going from 60% to 90% storage utilization means an extra 30% of each node’s disk space can be utilized. At scale, that translates to significant savings for users. Further to scaling speed and economy, there is an additional benefit to tablets: enabling the elasticity of cloud operations for cloud deployments, without the complexity.

Something old, something new, something borrowed, something blue

Beyond strong consistency and tablets, there is a wide range of new features and improvements that the ScyllaDB team is working on. Some of these, such as support for S3 object storage, are ongoing efforts. Besides offering users choice, as well as a way to economize even further on storage, object storage support could also serve resilience. Other features, such as workload prioritization or the Alternator DynamoDB-compatible API, have been there for a while but are being improved and re-emphasized. As Mendes shared, when running a variety of workloads, it’s very hard for the database to know which is which and how to prioritize. Workload prioritization enables users to characterize and prioritize workloads, assigning appropriate service levels to each.

Last but not least, ScyllaDB is also adding vector capabilities to the database engine. Vector data types, data structures, and query capabilities have been implemented and are being benchmarked. Initial results show great promise, even outperforming pure-play vector databases. This will eventually become a core feature, supported on both on-premises and cloud offerings. Once again, ScyllaDB is keeping with the times in its own characteristic way. As Mendes and Nogueira noted, there are many clients using ScyllaDB to power AI workloads, some of them, like Clearview AI, sharing their stories.
Nevertheless, ScyllaDB remains focused on database fundamentals, taking calculated steps in the spirit of continuous improvement that has become its trademark. After all, why change something that’s so deeply ingrained in the organization’s culture, is working well for them, and is appreciated by the ones who matter most – users?

How to Use Testcontainers with ScyllaDB

Learn how to use Testcontainers to create lightweight, throwaway instances of ScyllaDB for testing

Why wrestle with all the complexities of database configuration for each round of integration testing? In this blog post, we will explain how to use the Testcontainers library to provide lightweight, throwaway instances of ScyllaDB for testing. We’ll go through a hands-on example that includes creating the database instance and testing against it.

Testcontainers: A Valuable Tool for Integration Testing with ScyllaDB

You automatically unit test your code and (hopefully) integration test your system…but what about your database? To rest assured that the application works as expected, you need to extend beyond unit testing. You also need to automatically test how the units interact with one another and how they interact with external services and systems (message brokers, data stores, and so on). But running those integration tests requires the infrastructure to be configured correctly, with all the components set to the proper state. You also need to ensure that the tests are isolated and don’t produce any side effects or “test pollution.” How do you reduce the pain…and get it all running in your CI/CD process?

This is where Testcontainers comes into play. Testcontainers is an open source library for throwaway, lightweight instances of databases (including ScyllaDB), message brokers, web browsers, or just about anything that can run in a Docker container. You define your dependencies in code, which makes it well-suited for CI/CD processes. When you run your tests, a ScyllaDB container will be created and then deleted. This allows you to test your application against a real instance of the database without having to worry about complex environment configurations. It also ensures that the database setup has no effect on the production environment.

Some of the advantages of using Testcontainers with ScyllaDB:

It launches Dockerized databases on demand, so you get a fresh environment for every test run.
It isolates tests with throwaway containers. There’s no test interference or state leakage since each test gets a pristine database state.
Tests are fast and realistic, since the container starts in seconds, ScyllaDB responds fast, and actual CQL responses are used.

Tutorial: Building a ScyllaDB Test Step-by-Step

The Testcontainers ScyllaDB integration works with Java, Go, Python (see example here), and Node.js. Here, we’ll walk through an example of how to use it with Java. The steps described below are applicable to any programming language and its corresponding testing framework. In our specific Java example, we will be using the JUnit 5 testing framework. The integration between Testcontainers and ScyllaDB uses Docker. You can read more about using ScyllaDB with Docker, and learn the Best Practices for Running ScyllaDB on Docker.

Step 1: Configure Your Project Dependencies

Before we begin, make sure you have:

Java 21 or newer installed
Docker installed and running (required for Testcontainers)
Gradle 8 or newer installed

Note: If you are more comfortable with Maven, you can still follow this tutorial, but the setup and test execution steps will be different.

To verify that Java 21 or newer is installed, run:

java --version

To verify that Docker is installed and running correctly, run:

docker run hello-world

To verify that Gradle 8 or newer is installed, run:

gradle --version

Once you have verified that all of the relevant project dependencies are installed and ready, you can move on to creating a new project.
mkdir testcontainers-scylladb-java
cd testcontainers-scylladb-java
gradle init

A series of prompts will appear. Here are the relevant choices you need to select:

Select application
Select java
Enter Java version: 21
Enter project name: testcontainers-scylladb-java
Select application structure: Single application project
Select build script DSL: Groovy
Select test framework: JUnit Jupiter
For “Generate build using new APIs and behavior” select no

After that part is finished, to verify the successful initialization of the new project, run:

./gradlew --version

If everything goes well, you should see a build.gradle file in the app folder. You will need to add the following dependencies in your app/build.gradle file:

dependencies {
    // Use JUnit Jupiter for testing.
    testImplementation libs.junit.jupiter
    testRuntimeOnly 'org.junit.platform:junit-platform-launcher'

    // This dependency is used by the application.
    implementation libs.guava

    // Add the required dependencies for the test
    testImplementation 'org.testcontainers:scylladb:1.20.5'
    testImplementation 'com.scylladb:java-driver-core:4.18.1.0'
    implementation 'ch.qos.logback:logback-classic:1.4.11'
}

Also, to get test report output in the terminal, you will need to add testLogging to the app/build.gradle file as well:

tasks.named('test') {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()

    // Add this testLogging configuration to get
    // the test results in terminal
    testLogging {
        events "passed", "skipped", "failed"
        showStandardStreams = true
        exceptionFormat = 'full'
        showCauses = true
    }
}

Once you’re finished editing the app/build.gradle file, you need to install the dependencies by running this command in the terminal:

./gradlew build

You should see the BUILD SUCCESSFUL output in the terminal. The final preparation step is to create a ScyllaDBExampleTest.java file somewhere in the src/test/java folder. JUnit will find all tests in the src/test/java folder. For example:

touch src/test/java/org/example/ScyllaDBExampleTest.java

Step 2: Launch ScyllaDB in a Container

Once the dependencies are installed and the ScyllaDBExampleTest.java file is created, you can copy and paste the code provided below into the ScyllaDBExampleTest.java file (the individual snippets aren’t reproduced here; see the consolidated sketch after Step 6). This code will start a fresh ScyllaDB instance for every test in this file in the setUp method. To ensure the instance will get shut down after every test, we’ve created the tearDown method, too.

Step 3: Connect via the Java Driver

You’ll connect to the ScyllaDB container by creating a new session. To do so, you’ll need to update your setUp method in the ScyllaDBExampleTest.java file.

Step 4: Define Your Schema

Now that you have the code to run ScyllaDB and connect to it, you can use the connection to create the schema for the database. Let’s define the schema by updating your setUp method in the ScyllaDBExampleTest.java file.

Step 5: Insert and Query Data

Once you have prepared the ScyllaDB instance, you can run operations on it. To do so, let’s add a new method to our ScyllaDBExampleTest class in the ScyllaDBExampleTest.java file.

Step 6: Run and Validate the Test

Your test is now complete and ready to be executed! Use the following command to execute the test:

./gradlew clean test --no-daemon

If the execution is successful, you’ll notice the container starting in the logs, and the test will pass if the assertions are met.
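Since the post's per-step snippets aren't reproduced above, here is a minimal consolidated sketch of what ScyllaDBExampleTest.java might look like, covering Steps 2 through 5. The keyspace, table, and assertions are illustrative, and the sketch assumes the Testcontainers ScyllaDB module exposes a ScyllaDBContainer class with a String-image constructor and a getContactPoint() accessor; check the linked repository for the exact code.

```java
package org.example;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.scylladb.ScyllaDBContainer;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ScyllaDBExampleTest {

    private ScyllaDBContainer scylla;
    private CqlSession session;

    @BeforeEach
    void setUp() {
        // Step 2: launch a throwaway ScyllaDB container for this test.
        scylla = new ScyllaDBContainer("scylladb/scylla:2025.1");
        scylla.start();

        // Step 3: connect via the Java driver. "datacenter1" is the default
        // datacenter name in the ScyllaDB Docker image.
        session = CqlSession.builder()
                .addContactPoint(scylla.getContactPoint())
                .withLocalDatacenter("datacenter1")
                .build();

        // Step 4: define the schema (keyspace and table are illustrative).
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'NetworkTopologyStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.users "
                + "(id int PRIMARY KEY, name text)");
    }

    @AfterEach
    void tearDown() {
        // Shut everything down so each test gets a pristine state.
        if (session != null) session.close();
        if (scylla != null) scylla.stop();
    }

    @Test
    void testScyllaDBOperations() {
        // Step 5: insert and query data.
        session.execute("INSERT INTO demo.users (id, name) VALUES (1, 'Ada')");
        Row row = session.execute("SELECT name FROM demo.users WHERE id = 1").one();
        assertEquals("Ada", row.getString("name"));
    }
}
```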
Here’s an example of what a successful terminal output might look like:

12:05:26.708 [Test worker] DEBUG com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager -- ep-00000012: connection released [route: {}->unix://localhost:2375][total available: 1; route allocated: 1 of 2147483647; total allocated: 1 of 2147483647]
12:05:26.708 [Test worker] DEBUG org.testcontainers.utility.ResourceReaper -- Removed container and associated volume(s): scylladb/scylla:2025.1

ScyllaDBExampleTest > testScyllaDBOperations() PASSED

BUILD SUCCESSFUL in 35s
3 actionable tasks: 3 executed

Full code example

The repository for the full code example can be found here: https://github.com/scylladb/scylla-code-samples/tree/master/java-testcontainers

Level Up: Extending Your ScyllaDB Tests

That’s just the basics. Here are some additional uses you might want to explore on your own:

Test schema migrations – Verify that your database evolution scripts work correctly
Simulate multi-node clusters – Use multiple containers to test your application with multi-node and multi-DC scenarios
Benchmark performance – Measure your application’s throughput under various workloads
Test failure scenarios – Simulate how your application handles network partitions or node failures

Wrap-Up: Master ScyllaDB Testing with Confidence

You’ve built a fast, real ScyllaDB test in Java that provides realistic database behavior without the overhead of a permanent installation. This approach should give you confidence that your code will work correctly in production. You can try it with an example app on ScyllaDB University, customize it to your project and specific needs, and share your experience with the community!

Resources: Dive Deeper

ScyllaDB Documentation
ScyllaDB with Docker
Best Practices for Running ScyllaDB on Docker
Testcontainers GitHub Repository with Examples
ScyllaDB University – Free courses to master ScyllaDB

Why Teams Are Ditching DynamoDB

Teams sometimes need lower latency, lower costs (especially as they scale) or the ability to run their applications somewhere other than AWS

It’s easy to understand why so many teams have turned to Amazon DynamoDB since its introduction in 2012. It’s simple to get started, especially if your organization is already entrenched in the AWS ecosystem. It’s relatively fast and scalable, with a low learning curve. And since it’s fully managed, it abstracts away the operational effort and know-how traditionally required to keep a database up and running in a healthy state.

But as time goes on, drawbacks emerge, especially as workloads scale and business requirements evolve. Teams sometimes need lower latency, lower costs (especially as they scale), or the ability to run their applications somewhere other than AWS. In those cases, ScyllaDB, which offers a DynamoDB-compatible API, is often selected as an alternative. Let’s explore the challenges that drove three teams to leave DynamoDB.

Multi-Cloud Flexibility and Cost Savings

Yieldmo is an online advertising platform that connects publishers and advertisers in real-time using an auction-based system, optimized with ML. Their business relies on delivering ads quickly (within 200-300 milliseconds) and efficiently, which requires ultra-fast, high-throughput database lookups at scale. Database delays directly translate to lost business. They initially built the platform on DynamoDB. However, while DynamoDB had been reliable, significant limitations emerged as they grew. As Todd Coleman, Technical Co-Founder and Chief Architect, explained, their primary concerns were twofold: escalating costs and geographic restrictions. The database was becoming increasingly expensive as they scaled, and it locked them into AWS, preventing true multi-cloud flexibility. While exploring DynamoDB alternatives, they were hoping to find an option that would maintain speed, scalability, and reliability while reducing costs and providing cloud vendor independence.

Yieldmo first considered staying with DynamoDB and adding a caching layer. However, caching couldn’t fix the geographic latency issue. Cache misses would be too slow, making this approach impractical. They also explored Aerospike, which offered speed and cross-cloud support. However, Aerospike’s in-memory indexing would have required a prohibitively large and expensive cluster to handle Yieldmo’s large number of small data objects. Additionally, migrating to Aerospike would have required extensive and time-consuming code changes.

Then they discovered ScyllaDB. And ScyllaDB’s DynamoDB-compatible API (Alternator) was a game changer. Todd explained, “ScyllaDB supported cross cloud deployments, required a manageable number of servers, and offered competitive costs. Best of all, its API was DynamoDB compatible, meaning we could migrate with minimal code changes. In fact, a single engineer implemented the necessary modifications in just a few days.”

The migration process was carefully planned, leveraging their existing Kafka message queue architecture to ensure data integrity. They conducted two proof-of-concept (POC) tests: first with a single table of 28 billion objects, and then across all five AWS regions. The results were impressive. Todd shared, “Our database costs were cut in half, even with DynamoDB reserved capacity pricing.” And beyond cost savings, Yieldmo gained the flexibility to potentially deploy across different cloud providers. Their latency improved, and ScyllaDB was as simple to operate as DynamoDB.
Wrapping up, Todd concluded: “One of our initial concerns was moving away from DynamoDB’s proven reliability. However, ScyllaDB has been an excellent partner. Their team provides monitoring of our clusters, alerts us to potential issues, and advises us when scaling is needed in terms of ongoing maintenance overhead. The experience has been comparable to DynamoDB, but with greater independence and substantial cost savings.”

Hear from Yieldmo

Migrating to GCP with Better Performance and Lower Costs

Digital Turbine, a major player in mobile ad tech with $500 million in annual revenue, faced growing challenges with its DynamoDB implementation. While its primary motivation for migration was standardizing on Google Cloud Platform following acquisitions, the existing DynamoDB solution had been causing both performance and cost concerns at scale. “It can be a little expensive as you scale, to be honest,” explained Joseph Shorter, vice president of Platform Architecture at Digital Turbine. “We were finding some performance issues. We were doing a ton of reads — 90% of all interactions with DynamoDB were read operations. With all those operations, we found that the performance hits required us to scale up more than we wanted, which increased costs.”

Digital Turbine needed the migration to be as fast and low-risk as possible, which meant keeping application refactoring to a minimum. The main concern, according to Shorter, was: “How can we migrate without radically refactoring our platform, while maintaining at least the same performance and value – and avoiding a crash-and-burn situation? Because if it failed, it would take down our whole company.”

After evaluating several options, Digital Turbine moved to ScyllaDB and achieved immediate improvements. The migration took less than a sprint to implement and the results exceeded expectations. “A 20% cost difference — that’s a big number, no matter what you’re talking about,” Shorter noted. “And when you consider our plans to scale even further, it becomes even more significant.” Beyond the cost savings, they found themselves “barely tapping the ScyllaDB clusters,” suggesting room for even more growth without proportional cost increases.

Hear from Digital Turbine

High Write Throughput with Low Latency and Lower Costs

The User State and Customizations team for one of the world’s largest media streaming services had been using DynamoDB for several years. As they were rearchitecting two existing use cases, they wondered if it was time for a database change. The two use cases were:

Pause/resume: If a user is watching a show and pauses it, they can pick up where they left off – on any device, from any location.
Watch state: Using that same data, determine whether the user has watched the show.

Here’s a simple architecture diagram:

Every 30 seconds, the client sends heartbeats with the updated playhead position of the show and then sends those events to the database. The Edge Pipeline loads events in the same region as the user, while the Authority (Auth) Pipeline combines events for all five regions that the company serves. Finally, the data has to be fetched and served back to the client to support playback. Note that the team wanted to preserve separation between the Auth and Edge regions, so they weren’t looking for any database-specific replication between them.
The two main technical requirements for supporting this architecture were:

To ensure a great user experience, the system had to remain highly available, with low-latency reads and the ability to scale based on traffic surges.
To avoid extensive infrastructure setup or DBA work, they needed easy integration with their AWS services.

Once those boxes were checked, the team also hoped to reduce overall cost. “Our existing infrastructure had data spread across various clusters of DynamoDB and Elasticache, so we really wanted something simple that could combine these into a much lower cost system,” explained their backend engineer. Specifically, they needed a database with:

Multiregion support, since the service was popular across five major geographic regions.
The ability to handle over 170K writes per second. Updates didn’t have a strict service-level agreement (SLA), but the system needed to perform conditional updates based on event timestamps.
The ability to handle over 78K reads per second with a P99 latency of 10 to 20 milliseconds. The use case involved only simple point queries; things like indexes, partitioning and complicated query patterns weren’t a primary concern.
Around 10TB of data with room for growth.

Why move from DynamoDB? According to their backend engineer, “DynamoDB could support our technical requirements perfectly. But given our data size and high (write-heavy) throughput, continuing with DynamoDB would have been like shoveling money into the fire.”

Based on their requirements for write performance and cost, they decided to explore ScyllaDB. For a proof of concept, they set up a ScyllaDB Cloud test cluster with six AWS i4i.4xlarge nodes and preloaded the cluster with 3 billion records. They ran combined loads of 170K writes per second and 78K reads per second. And the results? “We hit the combined load with zero errors. Our P99 read latency was 9 ms and the write latency was less than 1 ms.” These low latencies, paired with significant cost savings (over 50%), convinced them to leave DynamoDB.

Beyond lower latencies at lower cost, the team also appreciated the following aspects of ScyllaDB:

ScyllaDB’s performance-focused design (being built on the Seastar framework, using C++, being NUMA-aware, offering shard-aware drivers, etc.) helps the team reduce maintenance time and costs.
Incremental Compaction Strategy helps them significantly reduce write amplification.
Flexible consistency levels and replication factors help them support separate Auth and Edge pipelines. For example, Auth uses quorum consistency while Edge uses a consistency level of “1” due to the data duplication and high throughput.

Their backend engineer concluded: “Choosing a database is hard. You need to consider not only features, but also costs. Serverless is not a silver bullet, especially in the database domain. In our case, due to the high throughput and latency requirements, DynamoDB serverless was not a great option. Also, don’t underestimate the role of hardware. Better utilizing the hardware is key to reducing costs while improving performance.”

Learn More

Is Your Team Next?

If your team is considering a move from DynamoDB, ScyllaDB might be an option to explore. Sign up for a technical consultation to talk more about your use case, SLAs, technical requirements and what you’re hoping to optimize. We’ll let you know if ScyllaDB is a good fit and, if so, what a migration might involve in terms of application changes, data modeling, infrastructure and so on.
Bonus: Here’s a quick look at how ScyllaDB compares to DynamoDB

Database Performance Questions from Google Cloud Next

Spiraling cache costs, tombstone nightmares, old Cassandra pains, and more — what people were asking about at Google Cloud Next

You’ve likely heard that what happens in Vegas stays in Vegas…but we’re making an exception here. Last week at Google Cloud Next in Las Vegas, my ScyllaDB colleagues and I had the pleasure of meeting all sorts of great people. And among all the whack-a-monster fun, there were lots of serious questions about database performance in general and ScyllaDB in particular. In this blog, I’ll share some of the most interesting questions that attendees asked and recap my responses.

Cache

We added Redis in front of our Postgres, but now its cost is skyrocketing. How can ScyllaDB help in this case?

We placed Redis in front of DynamoDB because DAX is too expensive, but managing cache invalidation is hard. Any suggestions?

Adding a cache layer to a slower database is a very common pattern. After all, if the cache layer grants low-millisecond range response time while the back-end database serves requests in the 3-digit milliseconds range, the decision might seem like a no-brainer. However, the tradeoffs often turn out to be steeper than people initially anticipate.

First, you need to properly size the cache so the cost doesn’t outweigh its usefulness. Learning the intricacies of the workload (e.g., which pieces of data are accessed more than others) is essential for deciding what to cache and what to pass through to the backend database. If you underestimate the required cache size, the performance gain of having a cache might be less than ideal. Since only part of the data is in the cache, the database is hit frequently – which elevates latencies across the board.

Deciding what to keep in cache is also important. How you define the data eviction policy for data in cache might make or break the data lifecycle in that layer – greatly affecting its impact on long-tail latency. The application is also responsible for caching responses. That means there’s additional code that must be maintained to ensure consistency, synchronicity, and high availability of those operations.

Another issue that pops up really often is cache invalidation: how to manage updating a cache that is separate from the backend database. Once a piece of data needs to be deleted or updated, it has to be synchronized with the cache, and that creates a situation where failure means serving stale or old data. Integrated solutions such as DAX for DynamoDB are helpful because they provide a pass-through caching layer: the database is updated first, then the system takes care of reflecting the change on the cache layer. However, the tradeoff of this technique is the cost: you end up paying more for DAX than you would for simply running a similarly-sized Redis cluster.

ScyllaDB’s performance characteristics have allowed many teams to replace both their cache and database layers. By bypassing the Linux cache and caching data at the row level, ScyllaDB makes cache space utilization more efficient for maximum performance. By relying on efficient use of cache, ScyllaDB can provide single-digit millisecond p99 read latency while still reducing the overall infrastructure required to run workloads. Its design allows for extremely fast access to data on disks. Even beyond that caching layer, ScyllaDB efficiently serves data from disk at very predictable, ultra-low latency. ScyllaDB’s IO scheduler is optimized to maximize disk bandwidth while still delivering predictable low latency for operations.
You can learn more about our IO Scheduler on this blog. ScyllaDB maintains cache performance by leveraging the LRU (Least-Recently Used) algorithm, which selectively evicts infrequently accessed data. Keys that were not recently accessed may be evicted to make room for other data to be cached. However, evicted keys are still persisted on disk (and replicated!) and can be efficiently accessed at any time. This is especially advantageous compared to Redis, where relying on a persistent store outside of memory is challenging. Read more in our Cache Internals blog, cache comparison page, and blog on replacing external caches.

Tombstones

I’ve had tons of issues with tombstones in the past with Cassandra… Performance issues, data resurrection, you name it. It’s still pretty hard dealing with the performance impact. How does ScyllaDB handle these issues?

In the LSM (Log-Structured Merge tree) model, deletes are handled just like regular writes. The system accepts the delete command and creates what is called a tombstone: a marker for a deletion. Then, the system later merges the deletion marker with the rest of the data — either in a process called compaction or in memory at read time.

Tombstone processing historically poses a couple of challenges. One of them is handling what is known as range deletes: a single deletion that covers multiple rows. For instance, you can use “DELETE … WHERE … ClusteringKey < X”, which would delete all records that have a clustering key lower than X. This usually means the system has to read through an unknown amount of data until it gets to the tombstone, then it has to discard it all from the result set. If the number of rows is small, it’s still a very efficient read. But if it covers millions of rows, reading just to discard them can be very inefficient.

Tombstones are also the source of another concern with distributed systems: data resurrection. Since Cassandra’s (and originally ScyllaDB’s) tombstones were kept only up to the grace period (a.k.a. gc_grace_seconds, default of 10 days), a repair had to be run on the cluster within that time frame. Skipping this step could lead to tombstones being purged — and previously deleted data that’s not covered by a tombstone could come back to life (a.k.a. “data resurrection”).

ScyllaDB recently introduced tons of improvements in how it handles tombstones, from repair-based garbage collection to expired tombstone thresholds that trigger early compaction of SSTables. Tombstone processing is now much more efficient and performant than Cassandra’s (and even previous versions of ScyllaDB), especially in workloads that are prone to accumulating tombstones over time. ScyllaDB’s repair-based garbage collection capability also helps prevent data resurrection by ensuring tombstones are only eligible for purging after a repair has been completed. This means workloads can get rid of tombstones much faster and make reads more efficient. Learn more about this functionality on our blog Preventing Data Resurrection with Repair Based Tombstone Garbage Collection.
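To make that last point tangible, repair-based tombstone garbage collection is configured per table via the tombstone_gc option. Here is a minimal sketch using the Java driver; the keyspace and table names are illustrative, and you should confirm the option's exact semantics in the linked blog and documentation before applying it.

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class EnableRepairBasedTombstoneGc {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Purge tombstones only for data that has already been repaired,
            // rather than after the gc_grace_seconds timeout expires.
            session.execute(
                    "ALTER TABLE ks.events WITH tombstone_gc = {'mode': 'repair'}");
        }
    }
}
```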
BigTable and Friends

When would you recommend ScyllaDB over Spanner/BigTable/BigQuery?

Questions about how our product compares to the cloud databases run by the conference host are unsurprisingly common. Google Cloud databases are no exception. Attendees shared a lot of use cases currently leveraging them and were curious about alternatives aligned with our goals (scalability, global replication, performance, cost). Could ScyllaDB help them, or should they move on to another booth? It really depends on how they’re using their database as well as the nature of their database workload. Let’s review the most commonly asked Google Cloud databases:

Spanner is highly oriented towards relational workloads at global scale. While it can still perform well under distributed NoSQL workloads, its performance and cost may pose challenges at scale.
BigQuery is a high-performance analytical database. It can run really complex analytical queries, but it’s not a good choice for NoSQL workloads that require high throughput and low latency at scale.
BigTable is Google Cloud’s NoSQL database. It is the most similar to ScyllaDB’s design, with a focus on scalability and high throughput.

From the description above, it’s easy to assess: if the use case is inherently relational or heavy on complex analytics queries, ScyllaDB might not be the best choice. However, just because they are currently using a relational or analytics database doesn’t mean that they are leveraging the best tool for the job. If the application relies on point queries that fetch data from a single partition (even if it contains multiple rows), then ScyllaDB might be an excellent choice. ScyllaDB implements advanced features such as Secondary Indexes (Local and Global) and Materialized Views, which allow users to have very efficient indexes and table views that still provide the same performance as their base table.

Cloud databases are usually very easy to adopt: just a couple of clicks or an API call, and they are ready to serve in your environment. Their performance is usually fine for general use. However, for use cases with latency or throughput requirements, it might be appropriate to consider performance-focused alternatives. ScyllaDB has a track record of being extremely efficient and fast, providing predictable low tail latency at p99. Cost is another factor. Scaling workloads to millions of operations per second might be technically feasible on some databases, but incur surprisingly high costs. ScyllaDB’s inherent efficiency allows us to run workloads at scale with greatly reduced costs.

Another downside of using a cloud vendor’s managed database solution: ecosystem lock-in. If you decide to leave the cloud vendor’s platform, you usually can’t use the same service – either on other cloud providers or even on-premises. If teams need to migrate to other deployment solutions, ScyllaDB provides robust support for moving to any cloud provider or running in an on-premises datacenter. Read our ScyllaDB vs BigTable comparison.

Schema mismatch

How does ScyllaDB handle specific problems such as schema mismatch?

This user shared a painful Cassandra incident where an old node (initially configured to be part of a sandbox cluster) had an incorrect configuration. That mistake, possibly caused by IP overlap resulting from infrastructure drift over time, led to the old node joining the production cluster. At that point, it essentially garbled the production schema and broke it. Since Cassandra relies on the gossip protocol (an epidemic peer-to-peer protocol), the schema was replicated to the whole cluster and left it in an unusable state. That mistake ended up costing this user hours of troubleshooting and caused a production outage that lasted for days. Ouch! After they shared their horror story, they inquired: Could ScyllaDB have prevented that?
With the introduction of Consistent Schema Changes leveraging the Raft distributed consensus algorithm, ScyllaDB made schema changes safe and consistent in a distributed environment. Raft is based on events being handled by a leader node, which ensures that any changes applied to the cluster would effectively be rejected if not agreed upon by the leader. The issue reported by the user simply would not exist in a Raft-enabled ScyllaDB cluster. Schema management would reject the rogue version and the node would fail to join the cluster – exactly what it needed to do to prevent issues! Additionally, ScyllaDB transitioned from using IP addresses to Host UUIDs – effectively removing any chance that an old IP tries to reconnect to a cluster it was never a part of. Read the Consistent Schema Changes blog and the follow-up blog. Additionally, learn more about the change to Strongly Consistent Topology Changes.

Old Cassandra Pains

I have a very old, unmaintained Cassandra cluster running a critical app. How do I safely migrate to ScyllaDB?

That is a very common question. First, let’s unpack it a bit. Let’s analyze what “old” means. Cassandra 2.1 was released 10 years ago. But it is still supported by the ScyllaDB Spark connector…and that means it can be easily migrated to a shiny ScyllaDB cluster (as long as its schema is compatible). “Unmaintained” can also mean a lot of things. Did it just miss some upgrade cycles? Or is it also behind on maintenance steps such as repairs? Even if that’s the case – no problem! Our Spark-based ScyllaDB Migrator has tunable consistency for reads and writes. This means it can be configured to use LOCAL_QUORUM or even ALL consistency if required. Although that’s not recommended in most cases (for performance reasons), it would ensure consistent data reads as data is migrated over to a new cluster.

Now, let’s discuss migration safety. In order to maintain consistency across the migration, the app should be configured to dual-write to both the source and destination clusters. It can do so by sending parallel writes to each and ensuring that any failures are retried. It may also be a good idea to collect metrics or logs on errors so you can keep track of inconsistencies across the clusters. Once dual writes are enabled, data can be migrated using the ScyllaDB Migrator app. Since it’s based on Spark, the migrator can easily scale to any number of workers required to speed up the migration process. After migrating the historical data, you might run a read validation process – reading from both sources and comparing until you are confident in the migrated data’s consistency. Once you are confident that all data has been migrated, you can finally get rid of the old cluster and have your application run solely on the new one.

If the migration process still seems daunting, we can help. ScyllaDB has a team available to guide you through the migration, from planning to best practices at every step. Reach out to Support if you are considering migrating to ScyllaDB!
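To make the dual-write step described above more concrete, here is a minimal sketch (not a production pattern); cluster addresses, keyspace, table, and retry handling are assumptions. The application writes to both clusters, retries each side independently, and logs any divergence for later reconciliation.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;

import java.net.InetSocketAddress;

public class DualWriter {
    private final CqlSession cassandra;  // old cluster
    private final CqlSession scylla;     // new cluster
    private final PreparedStatement oldInsert;
    private final PreparedStatement newInsert;

    public DualWriter() {
        cassandra = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("cassandra.internal", 9042))
                .withLocalDatacenter("dc1")
                .build();
        scylla = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("scylla.internal", 9042))
                .withLocalDatacenter("dc1")
                .build();
        oldInsert = cassandra.prepare("INSERT INTO ks.events (id, payload) VALUES (?, ?)");
        newInsert = scylla.prepare("INSERT INTO ks.events (id, payload) VALUES (?, ?)");
    }

    /** Writes to both clusters; retries each side independently and logs mismatches. */
    public void write(String id, String payload) {
        boolean oldOk = tryWrite(cassandra, oldInsert, id, payload);
        boolean newOk = tryWrite(scylla, newInsert, id, payload);
        if (oldOk != newOk) {
            // Track divergence so it can be reconciled (or re-driven) later.
            System.err.println("Dual-write mismatch for id=" + id
                    + " old=" + oldOk + " new=" + newOk);
        }
    }

    private boolean tryWrite(CqlSession session, PreparedStatement stmt,
                             String id, String payload) {
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                session.execute(stmt.bind(id, payload));
                return true;
            } catch (RuntimeException e) {
                // Driver exceptions are unchecked; swallow and retry
                // (a real app would back off and record the error).
            }
        }
        return false;
    }
}
```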
We have tons of resources on helping users migrate. Here are some of them:

ScyllaDB Migrator project
Migrate to ScyllaDB Documentation hub
Monster Scale Summit presentation: Database Migration Strategies and Pitfalls
Migrating from Cassandra or DynamoDB to ScyllaDB using ScyllaDB Migrator

Wrap

These conversations are only a select few of the many good discussions the ScyllaDB team had at Google Cloud Next. Every year, we are amazed at the wide variety of stories shared by people we meet. Conversations like these are what motivate us to attend Google Cloud Next every year. If you’d like to reach out, share your story, or ask questions, here are a couple of resources you can leverage:

ScyllaDB Forum
Community Slack

If you are wondering if ScyllaDB is the right choice for your use cases, you can reach out for a technical 1:1 meeting.

Cassandra Compaction Throughput Performance Explained

This is the second post in my series on improving node density and lowering costs with Apache Cassandra. In the previous post, I examined how streaming performance impacts node density and operational costs. In this post, I’ll focus on compaction throughput and a recent optimization in Cassandra 5.0.4 that significantly improves it: CASSANDRA-15452.

This post assumes some familiarity with Apache Cassandra storage engine fundamentals. The documentation has a nice section covering the storage engine if you’d like to brush up before reading this post.