Real-Time Write-Heavy Workloads: Considerations & Tips

Let's focus on the performance-related complexities that teams commonly face with write-heavy workloads and discuss your options for tackling them.

Write-heavy database workloads bring a distinctly different set of challenges than read-heavy ones. For example:

- Scaling writes can be costly, especially if you pay per operation and writes are 5X more costly than reads
- Locking can add delays and reduce throughput
- I/O bottlenecks can lead to write amplification and complicate crash recovery
- Database backpressure can throttle the incoming load

While cost matters – quite a lot, in many cases – it's not a topic we want to cover here. Rather, let's focus on the performance-related complexities that teams commonly face and discuss your options for tackling them.

What Do We Mean by "a Real-Time Write-Heavy Workload"?

First, let's clarify what we mean by a "real-time write-heavy" workload. We're talking about workloads that:

- Ingest a large amount of data (e.g., over 50K OPS)
- Involve more writes than reads
- Are bound by strict latency SLAs (e.g., single-digit millisecond P99 latency)

In the wild, they occur across everything from online gaming to real-time stock exchanges. A few specific examples:

Internet of Things (IoT) workloads tend to involve small but frequent append-only writes of time series data. Here, the ingestion rate is primarily determined by the number of endpoints collecting data. Think of smart home sensors or industrial monitoring equipment constantly sending data streams to be processed and stored.

Logging and Monitoring systems also deal with frequent data ingestion, but they don't have a fixed ingestion rate. They are not necessarily append-only, and they may be prone to hotspots, such as when one endpoint misbehaves.

Online Gaming platforms need to process real-time user interactions, including game state changes, player actions, and messaging. The workload tends to be spiky, with sudden surges in activity. They're extremely latency sensitive since even small delays can impact the gaming experience.

E-commerce and Retail workloads are typically update-heavy and often involve batch processing. These systems must maintain accurate inventory levels, process customer reviews, track order status, and manage shopping cart operations, which usually require reading existing data before making updates.

Ad Tech and Real-time Bidding systems require split-second decisions. These systems handle complex bid processing, including impression tracking and auction results, while simultaneously monitoring user interactions such as clicks and conversions. They must also detect fraud in real time and manage sophisticated audience segmentation for targeted advertising.

Real-time Stock Exchange systems must support high-frequency trading operations, constant stock price updates, and complex order matching processes – all while maintaining absolute data consistency and minimal latency.

Next, let's look at key architectural and configuration considerations that impact write performance.

Storage Engine Architecture

The choice of storage engine architecture fundamentally impacts write performance in databases. Two primary approaches exist: LSM trees and B-trees. Databases known to handle writes efficiently – such as ScyllaDB, Apache Cassandra, HBase, and Google BigTable – use Log-Structured Merge Trees (LSM). This architecture is ideal for handling large volumes of writes. Since writes are immediately appended to memory, this allows for very fast initial storage.
Once the "memtable" in memory fills up, the recent writes are flushed to disk in sorted order. That reduces the need for random I/O. For example, here's what the ScyllaDB write path looks like:

With B-tree structures, each write operation requires locating and modifying a node in the tree – and that involves both sequential and random I/O. As the dataset grows, the tree can require additional nodes and rebalancing, leading to more disk I/O, which can impact performance. B-trees are generally better suited for workloads involving joins and ad-hoc queries.

Payload Size

Payload size also impacts performance. With small payloads, throughput is good but CPU processing is the primary bottleneck. As the payload size increases, you get lower overall throughput, and disk utilization also increases. Ultimately, a small write usually fits in all the buffers and everything can be processed quite quickly. That's why it's easy to get high throughput. For larger payloads, you need to allocate larger buffers or multiple buffers. The larger the payloads, the more resources (network and disk) are required to service them.

Compression

Disk utilization is something to watch closely with a write-heavy workload. Although storage is continuously becoming cheaper, it's still not free. Compression can help keep things in check – so choose your compression strategy wisely. Faster compression speeds are important for write-heavy workloads, but also consider your available CPU and memory resources.

Be sure to look at the compression chunk size parameter. Compression basically splits your data into smaller blocks (or chunks) and then compresses each block separately. When tuning this setting, realize that larger chunks are better for reads while smaller ones are better for writes, and take your payload size into consideration.

Compaction

For LSM-based databases, the compaction strategy you select also influences write performance. Compaction involves merging multiple SSTables into fewer, more organized files to optimize read performance, reclaim disk space, reduce data fragmentation, and maintain overall system efficiency.

When selecting a compaction strategy, you could aim for low read amplification, which makes reads as efficient as possible. Or, you could aim for low write amplification by preventing compaction from being too aggressive. Or, you could prioritize low space amplification and have compaction purge data as efficiently as possible. For example, ScyllaDB offers several compaction strategies (and Cassandra offers similar ones):

- Size-tiered compaction strategy (STCS): Triggered when the system has enough (four by default) similarly sized SSTables.
- Leveled compaction strategy (LCS): The system uses small, fixed-size (by default 160 MB) SSTables distributed across different levels.
- Incremental compaction strategy (ICS): Shares the same read and write amplification factors as STCS, but fixes STCS's 2x temporary space amplification issue by breaking huge SSTables into SSTable runs, which are made up of a sorted set of smaller (1 GB by default), non-overlapping SSTables.
- Time-window compaction strategy (TWCS): Designed for time series data.

For write-heavy workloads, we warn users to avoid leveled compaction at all costs. That strategy is designed for read-heavy use cases. Using it can result in a regrettable 40x write amplification.
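To make the compression and compaction settings above concrete, here is a minimal sketch of how a write-heavy, time series-style table might be declared through the Python driver. The keyspace, table, and parameter values are illustrative assumptions rather than recommendations from this article; tune the chunk size and strategy against your own payloads and read patterns.

    from cassandra.cluster import Cluster

    # Connect to a ScyllaDB/Cassandra-compatible cluster (contact points are placeholders).
    session = Cluster(["127.0.0.1"]).connect("iot")  # the 'iot' keyspace is assumed to exist

    # Hypothetical sensor-readings table tuned for an append-heavy workload:
    # - TimeWindowCompactionStrategy suits time series data (and avoids LCS write amplification)
    # - a small compression chunk (4 KB) favors writes and small payloads
    session.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            sensor_id  uuid,
            reading_ts timestamp,
            value      double,
            PRIMARY KEY (sensor_id, reading_ts)
        ) WITH compaction = {
            'class': 'TimeWindowCompactionStrategy',
            'compaction_window_unit': 'DAYS',
            'compaction_window_size': 1
        }
        AND compression = {
            'sstable_compression': 'LZ4Compressor',
            'chunk_length_in_kb': 4
        }
    """)

For an update-heavy (rather than append-only) workload, STCS or ICS would typically replace TWCS here; the compression options stay the same.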
Batching

In databases like ScyllaDB and Cassandra, batching can actually be a bit of a trap – especially for write-heavy workloads. If you're used to relational databases, batching might seem like a good option for handling a high volume of writes. But it can actually slow things down if it's not done carefully. Mainly, that's because large or unstructured batches end up creating a lot of coordination and network overhead between nodes – and that's really not what you want in a distributed database like ScyllaDB. Here's how to think about batching when you're dealing with heavy writes (a short code sketch of this pattern appears at the end of this article):

- Batch by the Partition Key: Group your writes by the partition key so the batch goes to a coordinator node that also owns the data. That way, the coordinator doesn't have to reach out to other nodes for extra data. Instead, it just handles its own, which cuts down on unnecessary network traffic.
- Keep Batches Small and Targeted: Breaking up large batches into smaller ones by partition keeps things efficient. It avoids overloading the network and lets each node work on only the data it owns. You still get the benefits of batching, but without the overhead that can bog things down.
- Stick to Unlogged Batches: Assuming you follow the earlier points, it's best to use unlogged batches. Logged batches add extra consistency checks, which can really slow down the writes.

So, if you're in a write-heavy situation, structure your batches carefully to avoid the delays that big, cross-node batches can introduce.

Wrapping Up

We offered quite a few warnings, but don't worry. It was easy to compile a list of lessons learned because so many teams are extremely successful working with real-time write-heavy workloads. Now you know many of their secrets, without having to experience their mistakes. 🙂

If you want to learn more, here are some firsthand perspectives from teams who tackled quite interesting write-heavy challenges:

- Zillow: Consuming records from multiple data producers, which resulted in out-of-order writes that could lead to incorrect updates
- Tractian: Preparing for 10X growth in high-frequency data writes from IoT devices
- Fanatics: Heavy write operations like handling orders, shopping carts, and product updates for this online sports retailer

Also, take a look at the following video, where we go into even greater depth on these write-heavy challenges and also walk you through what these workloads look like on ScyllaDB.
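To make the batching guidance above concrete, here is a minimal sketch using the Python driver for ScyllaDB/Cassandra. The keyspace, table, and column names are hypothetical; the point is grouping rows by partition key and keeping each unlogged batch small so a batch never spans partitions.

    from collections import defaultdict
    from cassandra.cluster import Cluster
    from cassandra.query import BatchStatement, BatchType

    session = Cluster(["127.0.0.1"]).connect("shop")  # hypothetical keyspace

    insert = session.prepare(
        "INSERT INTO events_by_device (device_id, event_ts, payload) VALUES (?, ?, ?)"
    )

    def write_events(events, max_batch_size=20):
        # Group rows by partition key so each batch targets a single partition/coordinator.
        by_partition = defaultdict(list)
        for device_id, event_ts, payload in events:
            by_partition[device_id].append((device_id, event_ts, payload))

        for rows in by_partition.values():
            # Small, unlogged batches: no batch log and no cross-node coordination overhead.
            for i in range(0, len(rows), max_batch_size):
                batch = BatchStatement(batch_type=BatchType.UNLOGGED)
                for row in rows[i:i + max_batch_size]:
                    batch.add(insert, row)
                session.execute(batch)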

Inside Tripadvisor’s Real-Time Personalization with ScyllaDB + AWS

See the engineering behind real-time personalization at Tripadvisor's massive (and rapidly growing) scale

What kind of traveler are you? Tripadvisor tries to assess this as soon as you engage with the site, then offer you increasingly relevant information on every click—within a matter of milliseconds. This personalization is powered by advanced ML models acting on data that's stored on ScyllaDB running on AWS. In this article, Dean Poulin (Tripadvisor Data Engineering Lead on the AI Service and Products team) provides a look at how they power this personalization. Dean shares a taste of the technical challenges involved in delivering real-time personalization at Tripadvisor's massive (and rapidly growing) scale. It's based on the following AWS re:Invent talk:

Pre-Trip Orientation

In Dean's words …

Let's start with a quick snapshot of who Tripadvisor is, and the scale at which we operate. Founded in 2000, Tripadvisor has become a global leader in travel and hospitality, helping hundreds of millions of travelers plan their perfect trips. Tripadvisor generates over $1.8 billion in revenue and is a publicly traded company on the NASDAQ stock exchange. Today, we have a talented team of over 2,800 employees driving innovation, and our platform serves a staggering 400 million unique visitors per month – a number that's continuously growing.

On any given day, our system handles more than 2 billion requests from 25 to 50 million users. Every click you make on Tripadvisor is processed in real time. Behind that, we're leveraging machine learning models to deliver personalized recommendations – getting you closer to that perfect trip. At the heart of this personalization engine is ScyllaDB running on AWS. This allows us to deliver millisecond latency at a scale that few organizations reach. At peak traffic, we hit around 425K operations per second on ScyllaDB with P99 latencies for reads and writes around 1-3 milliseconds.

I'll be sharing how Tripadvisor is harnessing the power of ScyllaDB, AWS, and real-time machine learning to deliver personalized recommendations for every user. We'll explore how we help travelers discover everything they need to plan their perfect trip: whether it's uncovering hidden gems, must-see attractions, unforgettable experiences, or the best places to stay and dine. This [article] is about the engineering behind that – how we deliver seamless, relevant content to users in real time, helping them find exactly what they're looking for as quickly as possible.

Personalized Trip Planning

Imagine you're planning a trip. As soon as you land on the Tripadvisor homepage, Tripadvisor already knows whether you're a foodie, an adventurer, or a beach lover – and you're seeing spot-on recommendations that seem personalized to your own interests. How does that happen within milliseconds? As you browse around Tripadvisor, we start to personalize what you see using Machine Learning models which calculate scores based on your current and prior browsing activity. We recommend hotels and experiences that we think you would be interested in. We sort hotels based on your personal preferences. We recommend popular points of interest near the hotel you're viewing. These are all tuned based on your own personal preferences and prior browsing activity.

Tripadvisor's Model Serving Architecture

Tripadvisor runs on hundreds of independently scalable microservices in Kubernetes on-prem and in Amazon EKS. Our ML Model Serving Platform is exposed through one of these microservices.
This gateway service abstracts over 100 ML Models from the Client Services – which lets us run A/B tests to find the best models using our experimentation platform. The ML Models are primarily developed by our Data Scientists and Machine Learning Engineers using Jupyter Notebooks on Kubeflow. They're managed and trained using MLflow, and we deploy them on Seldon Core in Kubernetes. Our Custom Feature Store provides features to our ML Models, enabling them to make accurate predictions.

The Custom Feature Store

The Feature Store primarily serves User Features and Static Features. Static Features are stored in Redis because they don't change very often. We run data pipelines daily to load data from our offline data warehouse into our Feature Store as Static Features. User Features are served in real time through a platform called Visitor Platform. We execute dynamic CQL queries against ScyllaDB, and we do not need a caching layer because ScyllaDB is so fast. Our Feature Store serves up to 5 million Static Features per second and half a million User Features per second.

What's an ML Feature?

Features are input variables to the ML Models that are used to make a prediction. There are Static Features and User Features. Some examples of Static Features are awards that a restaurant has won, or amenities offered by a hotel (like free Wi-Fi, pet friendly or fitness center). User Features are collected in real time as users browse around the site. We store them in ScyllaDB so we can get lightning-fast queries. Some examples of User Features are the hotels viewed over the last 30 minutes, restaurants viewed over the last 24 hours, or reviews submitted over the last 30 days.

The Technologies Powering Visitor Platform

ScyllaDB is at the core of Visitor Platform. We use Java-based Spring Boot microservices to expose the platform to our clients. This is deployed on AWS ECS Fargate. We run Apache Spark on Kubernetes for our daily data retention jobs and our offline-to-online jobs. We use those jobs to load data from our offline data warehouse into ScyllaDB so that the data is available on the live site. We also use Amazon Kinesis for processing streaming user tracking events.

The Visitor Platform Data Flow

The following graphic shows how data flows through our platform in four stages: produce, ingest, organize, and activate. Data is produced by our website and our mobile apps. Some of that data includes our Cross-Device User Identity Graph, Behavior Tracking events (like page views and clicks) and streaming events that go through Kinesis. Also, audience segmentation gets loaded into our platform. Visitor Platform's microservices are used to ingest and organize this data. The data in ScyllaDB is stored in two keyspaces:

- The Visitor Core keyspace, which contains the Visitor Identity Graph
- The Visitor Metric keyspace, which contains Facts and Metrics (the things that people did as they browsed the site)

We use daily ETL processes to maintain and clean up the data in the platform. We produce Data Products, stamped daily, in our offline data warehouse – where they are available for other integrations and other data pipelines to use in their processing. Here's a look at Visitor Platform by the numbers:

Why Two Databases?

Our online database is focused on the real-time, live website traffic. ScyllaDB fills this role by providing very low latencies and high throughput.
We use short-term TTLs to prevent the data in the online database from growing indefinitely, and our data retention jobs ensure that we only keep user activity data for real visitors. Tripadvisor.com gets a lot of bot traffic, and we don't want to store that data and try to personalize bots – so we delete and clean up all of it. Our offline data warehouse retains historical data used for reporting, creating other data products, and training our ML Models. We don't want large-scale offline data processes impacting the performance of our live site, so we have two separate databases used for two different purposes.

Visitor Platform Microservices

We use 5 microservices for Visitor Platform:

- Visitor Core manages the cross-device user identity graph based on cookies and device IDs.
- Visitor Metric is our query engine, which gives us the ability to expose facts and metrics for specific visitors. We use a domain-specific language called Visitor Query Language, or VQL. This example VQL lets you see the latest commerce click facts over the last three hours.
- Visitor Publisher and Visitor Saver handle the write path, writing data into the platform. Besides saving data in ScyllaDB, we also stream data to the offline data warehouse. That's done with Amazon Kinesis.
- Visitor Composite simplifies publishing data in batch processing jobs. It abstracts Visitor Saver and Visitor Core to identify visitors and publish facts and metrics in a single API call.

Roundtrip Microservice Latency

This graph illustrates how our microservice latencies remain stable over time. The average latency is only 2.5 milliseconds, and our P999 is under 12.5 milliseconds. This is impressive performance, especially given that we handle over 1 billion requests per day. Our microservice clients have strict latency requirements: 95% of the calls must complete in 12 milliseconds or less. If they go over that, then we will get paged and have to find out what's impacting the latencies.

ScyllaDB Latency

Here's a snapshot of ScyllaDB's performance over three days. At peak, ScyllaDB is handling 340,000 operations per second (including writes, reads and deletes) and the CPU is hovering at just 21%. This is high scale in action! ScyllaDB delivers microsecond writes and millisecond reads for us. This level of blazing fast performance is exactly why we chose ScyllaDB.

Partitioning Data into ScyllaDB

This image shows how we partition data into ScyllaDB. The Visitor Metric keyspace has two tables: Fact and Raw Metrics. The primary key on the Fact table is Visitor GUID, Fact Type, and Created At Date. The composite partition key is the Visitor GUID and Fact Type. The clustering key is Created At Date, which allows us to sort data in partitions by date. The attributes column contains a JSON object representing the event that occurred. Some example Facts are Search Terms, Page Views, and Bookings. We use ScyllaDB's Leveled Compaction Strategy because:

- It's optimized for range queries
- It handles high cardinality very well
- It's better for read-heavy workloads, and we have about 2-3X more reads than writes
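The schema itself isn't shown in the talk, but as a rough sketch consistent with the description above (column names, types, and the TTL value are my assumptions), the Fact table might be declared like this:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("visitor_metric")  # keyspace name assumed

    # Composite partition key (visitor_guid, fact_type); the clustering key created_at_date
    # orders facts within a partition by date. LCS matches the read-heavy, range-query
    # access pattern described above, and a short TTL (30 days here, purely illustrative)
    # keeps the online data set from growing indefinitely.
    session.execute("""
        CREATE TABLE IF NOT EXISTS fact (
            visitor_guid    uuid,
            fact_type       text,
            created_at_date timestamp,
            attributes      text,
            PRIMARY KEY ((visitor_guid, fact_type), created_at_date)
        ) WITH CLUSTERING ORDER BY (created_at_date DESC)
        AND compaction = {'class': 'LeveledCompactionStrategy'}
        AND default_time_to_live = 2592000
    """)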
Why ScyllaDB?

Our solution was originally built using Cassandra on-prem. But as the scale increased, so did the operational burden. It required dedicated operations support in order for us to manage database upgrades, backups, etc. Also, our solution requires very low latencies for core components. Our User Identity Management system must identify the user within 30 milliseconds – and for the best personalization, we require our Event Tracking platform to respond in 40 milliseconds. It's critical that our solution doesn't block rendering the page, so our SLAs are very low.

With Cassandra, garbage collection impacted performance – primarily the tail latencies, the P999 and P9999 latencies. We ran a Proof of Concept with ScyllaDB and found the throughput to be much better than Cassandra, and the operational burden was eliminated. ScyllaDB gave us a monstrously fast live serving database with the lowest possible latencies. We wanted a fully managed option, so we migrated from Cassandra to ScyllaDB Cloud, following a dual-write strategy. That allowed us to migrate with zero downtime while handling 40,000 operations (requests) per second.

Later, we migrated from ScyllaDB Cloud to ScyllaDB's "Bring Your Own Account" model, where you can have the ScyllaDB team deploy the ScyllaDB database into your own AWS account. This gave us improved performance as well as better data privacy. This diagram shows what ScyllaDB's BYOA deployment looks like. In the center of the diagram, you can see a 6-node ScyllaDB cluster that is running on EC2. And then there are two additional EC2 instances: ScyllaDB Monitor gives us Grafana dashboards as well as Prometheus metrics, and ScyllaDB Manager takes care of infrastructure automation like triggering backups and repairs. With this deployment, ScyllaDB could be co-located very close to our microservices to give us even lower latencies as well as much higher throughput and performance.

Wrapping up, I hope you now have a better understanding of our architecture, the technologies that power the platform, and how ScyllaDB plays a critical role in allowing us to handle Tripadvisor's extremely high scale.

How to Build a High-Performance Shopping Cart App with ScyllaDB

Build a shopping cart app with ScyllaDB – and learn how to use ScyllaDB's Change Data Capture (CDC) feature to query and export the history of all changes made to the tables.

This blog post showcases one of ScyllaDB's sample applications: a shopping cart app. The project uses FastAPI as the backend framework and ScyllaDB as the database. By cloning the repository and running the application, you can explore an example of an API server built on top of ScyllaDB for a CRUD app. Additionally, you'll see how to use ScyllaDB's Change Data Capture (CDC) feature to query and export the history of all changes made to the tables.

What's inside the shopping cart sample app?

The application has two components: an API server and a database.

API server: Python + FastAPI

The backend is built with Python and FastAPI, a modern Python web framework known for its speed and ease of use. FastAPI ensures that you have a framework that can deliver relatively high performance if used with the right database. At the same time, due to its exceptional developer experience, you can easily understand the code of the project and how it works even if you've never used it before. The application exposes multiple API endpoints to perform essential operations like:

- Adding products to the cart
- Removing products from the cart
- Uploading new products
- Updating product information (e.g. price)

Database: ScyllaDB

At the core of this application is ScyllaDB, a low-latency NoSQL database that provides predictable performance. ScyllaDB excels in handling large volumes of data with single-digit millisecond latency, making it ideal for large-scale real-time applications. ScyllaDB acts as the foundation for a high-performance, low-latency app. Moreover, it has additional capabilities that can help you maintain low P99 latency as well as analyze user behavior. ScyllaDB's CDC feature tracks changes in the database and lets you query historical operations. For e-commerce applications, this means you can capture insights into user behavior:

- What products are being added or removed from the cart, and when?
- How do users interact with the cart?
- What does a typical journey look like for a user who actually buys something?

These and other insights are invaluable for personalizing the user experience, optimizing the buying journey, and increasing conversion rates.

Using ScyllaDB for an ecommerce application

As studies have shown, low latency is critical for achieving high conversion rates and delivering a smooth user experience. For instance, shopping cart operations – such as adding, updating, and retrieving products – require high performance to prevent cart abandonment. Data modeling, being the foundation for high-performance web applications, must remain a top priority. So let's start with the process of creating a performant data model.

Design a Shopping Cart data model

We emphasize a practical "query-first" approach to NoSQL data modeling: start with your application's queries, then design your schema around them. This method ensures your data model is optimized for your specific use cases and that the database can provide reliable, single-digit P99 latency at any scale. Let's review the specific CRUD operations and queries a typical shopping cart application performs.

Products

List, add, edit and remove products.

- GET /products?limit=? → SELECT * FROM product LIMIT {limit}
- GET /products/{product_id} → SELECT * FROM product WHERE id = ?
- POST /products → INSERT INTO product () values ()
- PUT /products/{product_id} → UPDATE product SET ? WHERE id = ?
- DELETE /products/{product_id} → DELETE FROM product WHERE id = ?

Based on these requirements, you can create a table to store products. Notice which value is most often used to filter products: product id. This is a good indicator that product id should be the partition key, or at least part of it.

The Product table:

Our application is simple, so a single column will suffice as the partition key. However, if your use case requires additional queries and filtering by additional columns, you can consider using a composite partition key or adding a clustering key to the table.
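The article's schema listing isn't reproduced here, but based on the queries above, a plausible product table might be created from Python like this (all columns other than id are illustrative assumptions, not the sample app's exact schema):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("shopping_cart")  # keyspace name assumed

    # Single-column partition key: every product query above filters by id.
    session.execute("""
        CREATE TABLE IF NOT EXISTS product (
            id    uuid PRIMARY KEY,
            name  text,
            price decimal,
            img   text
        )
    """)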
Cart

List, add, and remove products from a user's cart.

- GET /cart/{user_id} → SELECT * FROM cart_items WHERE user_id = ? AND cart_id = ?
- POST /cart/{user_id} → INSERT INTO cart() VALUES () — here we don't need a cart id because the user can only have one active cart at a time. (You could also build another endpoint to list past purchases by the user – that endpoint would require the cart id as well.)
- DELETE /cart/{user_id} → DELETE FROM cart_items WHERE user_id = ? AND cart_id = ? AND product_id = ?
- POST checkout /cart/{user_id}/checkout → UPDATE cart SET is_active = false WHERE user_id = ? AND cart_id = ?

The cart-related operations contain slightly more complicated logic behind the scenes. We have two values that we query by: user id and cart id. Those can be used together as a composite partition key. Additionally, one user can have multiple carts – the one they're using right now to shop, and possibly others that they had in the past and already paid for. For this reason, we need a way to efficiently find the user's active cart. This query requirement will be handled by a secondary index on the is_active column.

The Cart table:

Additionally, we also need to create a table which connects the Product and Cart tables. Without this table, it would be impossible to retrieve products from a cart.

The Cart_items table:

We enable Change Data Capture for this table. This feature logs all data operations performed on the table into another table, cart_items_scylla_cdc_log. Later, we can query this log to retrieve the table's historical operations. This data can be used to analyze user behavior, such as the products users add or remove from their carts.
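Again as a sketch, the cart-related tables, the secondary index on is_active, and CDC on cart_items might be declared as follows. Column names and types are assumptions inferred from the queries above, not the sample app's exact schema.

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("shopping_cart")  # keyspace name assumed

    session.execute("""
        CREATE TABLE IF NOT EXISTS cart (
            user_id   uuid,
            cart_id   uuid,
            is_active boolean,
            PRIMARY KEY ((user_id, cart_id))
        )
    """)

    # Secondary index so a user's single active cart can be looked up efficiently.
    session.execute("CREATE INDEX IF NOT EXISTS ON cart (is_active)")

    # Connects carts and products; CDC logs every change to cart_items_scylla_cdc_log.
    session.execute("""
        CREATE TABLE IF NOT EXISTS cart_items (
            user_id    uuid,
            cart_id    uuid,
            product_id uuid,
            quantity   int,
            PRIMARY KEY ((user_id, cart_id), product_id)
        ) WITH cdc = {'enabled': true}
    """)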
Final database schema:

Now that we've covered the data modeling aspect of the project, you can clone the repository and get started with building.

Getting started

Prerequisites:

- Python 3.8+
- ScyllaDB cluster (with ScyllaDB Cloud or use Docker)

Connect to your ScyllaDB cluster using CQLSH and create the schema:

Then, install the Python requirements in a new environment:

Modify config.py to match your database credentials:

Run the server:

Generate sample user data:

This script populates your ScyllaDB tables with sample data. This is necessary for the next step, where you will run CDC queries to analyze user behavior.

Analyze user behavior with CDC

CDC records every data change, including deletes, offering a comprehensive view of your data evolution without affecting database performance. For a shopping cart application, some potential use cases for CDC include:

- Analyzing a specific user's buying behavior
- Tracking user actions leading to checkout
- Evaluating product popularity and purchase frequency
- Analyzing active and abandoned carts

Beyond these business-specific insights, CDC data can also be exported to external platforms, such as Kafka, for further processing and analysis. Here are a couple of useful tips when working with CDC:

- The CDC log table contains timeuuid values, which can be converted to readable timestamps using the toTimestamp() CQL function.
- The cdc$operation column helps filter operations by type. For instance, a value of 2 indicates an INSERT operation.
- The most efficient and scalable way to query CDC data is to use the ScyllaDB source connector and set up an integration with Kafka.

Now, let's explore a couple of quick questions that CDC can help answer:

- How many times did users add more than 2 of the same product to the cart?
- How many carts contain a particular product?

Set up ScyllaDB CDC with Kafka Connect

To provide a scalable way for you to analyze ScyllaDB CDC logs, you can use Kafka to receive messages sent by ScyllaDB. Then, you can use an analytics tool, like Elasticsearch, to get insights. To send CDC logs to Kafka, you need to install the ScyllaDB CDC source connector and create a new ScyllaDB connection in Kafka Connect.

Install the ScyllaDB source connector on the machine/container that's running Kafka:

Then use the following ScyllaDB-related parameters when you create the connection:

Make sure to enable CDC on each table you want to send messages from. You can do this by executing the following CQL:

Try it out yourself

If you are interested in trying out this application yourself, check out the dedicated documentation site: shopping-cart.scylladb.com and the GitHub repository. If you have any questions about this project or ScyllaDB, submit a question in the ScyllaDB forum.

A Tiny Peek at Monster SCALE Summit 2025

Big things have been happening behind the scenes for the premier Monster SCALE Summit. Ever since we introduced it at P99 CONF, the community response has been overwhelming. We're now faced with the "good" problem of determining how to fit all the selected speakers into the two half-days we set aside for the event. 😅

If you missed the intro last year, Monster Scale Summit is a highly technical conference that connects the community of professionals designing, implementing, and optimizing performance-sensitive data-intensive applications. It focuses on exploring "monster scale" engineering challenges with respect to extreme levels of throughput, data, and global distribution. The two-day event is free, intentionally virtual, and highly interactive.

Register – it's free and virtual

We'll be announcing the agenda next month. But we're so excited about the speaker lineup that we can't wait to share a taste of what you can expect. Here's a preview of 12 of the 60+ sessions that you can join on March 11 and 12…

Designing Data-Intensive Applications in 2025
Martin Kleppmann and Chris Riccomini (Designing Data-Intensive Applications book)

Join us for an informal chat with Martin Kleppmann and Chris Riccomini, who are currently revising the famous book Designing Data-Intensive Applications. We'll cover how data-intensive applications have evolved since the book was first published, the top tradeoffs people are negotiating today, and what they believe is next for data-intensive applications. Martin and Chris will also provide an inside look at the book writing and revision process.

The Nile Approach: Re-engineering Postgres for Millions of Tenants
Gwen Shapira (Nile)

Scaling relational databases is a notoriously challenging problem. Doing so while maintaining consistent low latency, efficient use of resources and compatibility with Postgres may seem impossible. At Nile, we decided to tackle the scaling challenge by focusing on multi-tenant applications. These applications require not only scalability, but also a way to isolate tenants and avoid the noisy neighbor problem. By tackling both challenges, we developed an approach we call "virtual tenant databases," which gives us an efficient way to scale Postgres to millions of tenants while still maintaining consistent performance. In this talk, I'll explore the limitations of traditional scaling for multi-tenant applications and share how Nile's virtual tenant databases address these challenges. By combining the best of Postgres' existing capabilities, distributed algorithms and a new storage layer, Nile re-engineered Postgres for multi-tenant applications at scale.

The Mechanics of Scale
Dominik Tornow (Resonate HQ)

As distributed systems scale, the complexity of their development and operation skyrockets. A dependable understanding of the mechanics of distributed systems is our most reliable parachute. In this talk, we'll use systems thinking to develop an accurate and concise mental model of concurrent, distributed systems, their core challenges, and the key principles to address these challenges. We'll explore foundational problems such as the tension between consistency and availability, and essential techniques like partitioning and replication. Whether you are building a new system from scratch or scaling an existing system to new heights, this talk will provide the understanding to confidently navigate the intricacies of modern, large-scale distributed systems.
Feature Store Evolution Under Cost Constraints: When Cost is Part of the Architecture
Ivan Burmistrov and David Malinge (ShareChat)

At P99 CONF 23, the ShareChat team presented the scaling challenges for the ML Feature Store so it could handle 1 billion features per second. Once the system was scaled to handle the load, the next challenge the team faced was extreme cost constraints: it was required to make the same quality system much cheaper to run. Ivan and David will talk about approaches the team implemented in order to optimize for cost in the Cloud environment while maintaining the same SLA for the service. The talk will touch on such topics as advanced optimizations on various levels to bring down the compute, minimizing the waste when running on Kubernetes, autoscaling challenges for stateful Apache Flink jobs, and others. The talk should be useful for those who are either interested in building or optimizing an ML Feature Store or in general looking into cost optimizations in the cloud environment.

Time Travelling at Scale
Richard Hart (Antithesis)

Antithesis is a continuous reliability platform that autonomously searches for problems in your software within a simulated environment. Every problem we find can be perfectly reproduced, allowing for efficient debugging of even the most complex problems. But storing and querying histories of program execution at scale creates monster large cardinalities. Over a ~10 hour test run, we generate ~1bn rows. The solution: our own tree-database.

30B Images and Counting: Scaling Canva's Content-Understanding Pipelines
Dr. Kerry Halupka (Canva)

As the demand for high-quality, labeled image data grows, building systems that can scale content understanding while delivering real-time performance is a formidable challenge. In this talk, I'll share how we tackled the complexities of scaling content understanding pipelines to support monstrous volumes of data, including backfilling labels for over 30 billion images. At the heart of our system is an extreme label classification model capable of handling thousands of labels and scaling seamlessly to thousands more. I'll dive into the core components: candidate image search, zero-shot labelling using highly trained teacher models, and iterative refinement with visual critic models. You'll learn how we balanced latency, throughput, and accuracy while managing evolving datasets and continuously expanding label sets. I'll also discuss the tradeoffs we faced—such as ensuring precision in labelling without compromising speed—and the techniques we employed to optimise for scale, including strategies to address data sparsity and performance bottlenecks. By the end of the session, you'll gain insights into designing, implementing, and scaling content understanding systems that meet extreme demands. Whether you're working with real-time systems, distributed architectures, or ML pipelines, this talk will provide actionable takeaways for pushing large-scale labelling pipelines to their limits and beyond.

How Agoda Scaled 50x Throughput with ScyllaDB
Worakarn Isaratham (Agoda)

In this talk, we will explore the performance tuning strategies implemented at Agoda to optimize ScyllaDB. Key topics include enhancing disk performance, selecting the appropriate compaction strategy, and adjusting SSTable settings to match our usage profile.

Who Needs One Database Anyway?
Glauber Costa (Turso)

Developers need databases. That's how you store your data.
And that's usually how it goes: you have your large fleet of services, and they connect to one database. But what if it wasn't like that? What if instead of one database, one application would create one million databases, or even more? In this talk, we'll explore the market trends that give rise to use cases where this pattern is beneficial, and the infrastructure changes needed to support it.

How We Boosted ScyllaDB's Data Streaming by 25x
Asias He (ScyllaDB)

Streaming, the process of scaling out of/into other nodes, used to analyze every partition one-by-one. It was too slow and depended on the schema. File-based streaming is a new feature that significantly optimizes tablet movement. It streams entire SSTable files without deserializing them into mutation fragments and re-serializing them back into SSTables on receiving nodes. As a result, less data is streamed over the network, and less CPU is consumed, especially for data models that contain small cells.

Evolving Atlassian Confluence Cloud for Scale, Reliability, and Performance
Bhakti Mehta (Atlassian)

This session covers the journey of Confluence Cloud – the team workspace for collaboration and knowledge sharing used by thousands of companies – and how we aim to take it to the next level, with scale, performance, and reliability as the key motivators. This session presents a deep dive to provide insights into how the Confluence architecture has evolved into its current form. It discusses how Atlassian deploys, runs, and operates at scale and all challenges encountered along the way. I will cover performance and reliability at scale, starting with the fundamentals of measuring everything, re-defining metrics to be insightful of actual customer pain, and auditing end-to-end experiences. Beyond just dev-ops and best practices, this means empowering teams to own product stability through practices and tools.

Two Leading Approaches to Data Virtualization: Which Scales Better?
Dr. Daniel Abadi (University of Maryland)

You have a large dataset stored in location X, and some code to process or analyze it in location Y. What is better: move the code to the data, or move the data to the code? For decades, it has always been assumed that the former approach is more scalable. Recently, with the rise of cloud computing, and the push to separate resources for storage and compute, we have seen data increasingly being pushed to code, flying in the face of conventional wisdom. What is behind this trend, and is it a dangerous idea? This session will look at this question from academic and practical perspectives, with a particular focus on data virtualization, where there exists an ongoing debate on the merits of push-based vs. pull-based data processing.

Scaling a Beast: Lessons from 400x Growth in a High-Stakes Financial System
Dmytro Hnatiuk (Wise)

Scaling a system from 66 million to over 25 billion records is no easy feat—especially when it's a core financial system where every number has to be right, and data needs to be fresh right now. In this session, I'll share the ups and downs of managing this kind of growth without losing my sanity. You'll learn how to balance high data accuracy with real-time performance, optimize your app logic, and avoid the usual traps of database scaling. This isn't about turning you into a database expert—it's about giving you the practical, no-BS strategies you need to scale your systems without getting overwhelmed by technical headaches.
Perfect for engineers and architects who want to tackle big challenges and come out on top.

How Supercell Handles Real-Time Persisted Events with ScyllaDB

How a team of just two engineers tackled real-time persisted events for hundreds of millions of players

With just two engineers, Supercell took on the daunting task of growing their basic account system into a social platform connecting hundreds of millions of gamers. Account management, friend requests, cross-game promotions, chat, player presence tracking, and team formation – all of this had to work across their five major games. And they wanted it all to be covered by a single solution that was simple enough for a single engineer to maintain, yet powerful enough to handle massive demand in real time.

Supercell's Server Engineer, Edvard Fagerholm, recently shared how their mighty team of two tackled this task. Read on to learn how they transformed a simple account management tool into a comprehensive cross-game social network infrastructure that prioritized both operational simplicity and high performance.

Note: If you enjoy hearing about engineering feats like this, join us at Monster Scale Summit (free + virtual). Engineers from Disney+/Hulu, Slack, Canva, Uber, Salesforce, Atlassian and more will be sharing strategies and case studies.

Background: Who's Supercell?

Supercell is the Finland-based company behind the hit games Hay Day, Clash of Clans, Boom Beach, Clash Royale and Brawl Stars. Each of these games has generated $1B in lifetime revenue. Somehow they manage to achieve this with a super small staff. Until quite recently, all the account management functionality for games servicing hundreds of millions of monthly active users was being built and managed by just two engineers. And that brings us to Supercell ID.

The Genesis of Supercell ID

Supercell ID was born as a basic account system – something to help users recover accounts and move them to new devices. It was originally implemented as a relatively simple HTTP API. Edvard explained, "The client could perform HTTP queries to the account API, which mainly returned signed tokens that the client could present to the game server to prove their identity. Some operations, like making friend requests, required the account API to send a notification to another player. For example, 'Do you approve this friend request?' For that purpose, there was an event queue for notifications. We would post the event there, and the game backend would forward the notification to the client using the game socket."

Enter Two-Way Communication

After Edvard joined the Supercell ID project in late 2020, he started working on the notification backend – mainly for cross-promotion across their five games. He soon realized that they needed to implement two-way communication themselves, and built it as follows: clients connected to a fleet of proxy servers, then a routing mechanism pushed events directly to clients (without going through the game). This was sufficient for the immediate goal of handling cross-promotion and friend requests. It was fairly simple and didn't need to support high throughput or low latency. But it got them thinking bigger. They realized they could use two-way communication to significantly increase the scope of the Supercell ID system. Edvard explained, "Basically, it allowed us to implement features that were previously part of the game server.
Our goal was to take features that any new games under development might need and package them into our system – thereby accelerating their development." With that, Supercell ID began transforming into a cross-game social network that supported features like friend graphs, teaming up, chat, and friend state tracking.

Evolving Supercell ID into a Cross-Game Social Network

At this point, the Social Network side of the backend was still a single-person project, so they designed it with simplicity in mind. Enter abstraction.

Finding the right abstraction

"We wanted to have only one simple abstraction that would support all of our uses and could therefore be designed and implemented by a single engineer," explained Edvard. "In other words, we wanted to avoid building a chat system, a presence system, etc. We wanted to build one thing, not many." Finding the right abstraction was key. And a hierarchical key-value store with Change Data Capture fit the bill perfectly. Here's how they implemented it:

- The top-level keys in the key-value store are topics that can be subscribed to.
- There's a two-layer map under each top-level key – map(string, map(string, string)).
- Any change to the data under a top-level key is broadcast to all that key's subscribers.
- The values in the innermost map are also timestamped. Each data source controls its own timestamps and defines the correct order. The client drops any update with an older timestamp than what it already has stored.

A typical change in the data would be something like 'level equals 10' changes to 'level equals 11'. As players play, they trigger all sorts of updates like this, which means a lot of small writes are involved in persisting all the events.

Finding the Right Database

They needed a database that would support their technical requirements and be manageable, given their minimalist team. That translated to the following criteria:

- Handles many small writes with low latency
- Supports a hierarchical data model
- Manages backups and cluster operations as a service

ScyllaDB Cloud turned out to be a great fit. (ScyllaDB Cloud is the fully managed version of ScyllaDB, a database known for delivering predictable low latency at scale.)

How it All Plays Out

For an idea of how this plays out in Supercell games, let's look at two examples. First, consider chat messages. A simple chat message might be represented in their data model as follows:

    <room ID> -> <timestamp_uuid> -> message   -> "hi there"
                                     metadata  -> …
                                     reactions -> …

Edvard explained, "The top-level key that's subscribed to is the chat room ID. The next-level key is a timestamp-UID, so we have an ordering of each message and can query chat history. The inner map contains the actual message together with other data attached to it."
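To make the abstraction concrete, here is a small Python sketch (entirely an illustration, not Supercell's code) of the two-layer map with timestamp-based conflict resolution: the client keeps the value with the newest timestamp and silently drops older updates.

    # topic -> type -> key -> (timestamp, value), mirroring the hierarchical key-value model above
    store: dict[str, dict[str, dict[str, tuple[int, str]]]] = {}

    def apply_update(topic: str, typ: str, key: str, ts: int, value: str) -> bool:
        """Apply an update only if it is newer than what is already stored."""
        inner = store.setdefault(topic, {}).setdefault(typ, {})
        current = inner.get(key)
        if current is not None and current[0] >= ts:
            return False  # stale update: older timestamp from the data source, drop it
        inner[key] = (ts, value)
        return True

    # 'level equals 10' changes to 'level equals 11'; a late, stale update is ignored.
    apply_update("player:123", "state", "level", ts=1, value="10")
    apply_update("player:123", "state", "level", ts=2, value="11")
    apply_update("player:123", "state", "level", ts=1, value="10")  # dropped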
Next, let's look at "presence", which is used heavily in Supercell's new (and highly anticipated) game, mo.co. The goal of presence, according to Edvard: "When teaming up for battle, you want to see in real-time the avatar and the current build of your friends – basically the weapons and equipment of your friends, as well as what they're doing. If your friend changes their avatar or build, goes offline, or comes online, it should instantly be visible in the 'teaming up' menu." Players' state data is encoded into Supercell's hierarchical map as follows:

    <player ID> -> "presence" -> weapon -> sword
                                 level  -> 29
                                 status -> in battle

Note that:

- The top level is the player ID, the second level is the type, and the inner map contains the data.
- Supercell ID doesn't need to understand the data; it just forwards it to the game clients.
- Game clients don't need to know the friend graph since the routing is handled by Supercell ID.

Deeper into the System Architecture

Let's close with a tour of the system architecture, as provided by Edvard.

"The backend is split into APIs, proxies, and event routing/storage servers. Topics live on the event routing servers and are sharded across them. A client connects to a proxy, which handles the client's topic subscriptions. The proxy routes these subscriptions to the appropriate event routing servers. Endpoints (e.g., for chat and presence) send their data to the event routing servers, and all events are persisted in ScyllaDB Cloud.

Each topic has a primary and a backup shard. The primary shard maintains in-memory sequence numbers for each message to detect lost messages; the secondary will forward messages without sequence numbers. If the primary goes down, the primary coming back up will trigger a refresh of state on the client, as well as resetting the sequence numbers.

The API for the routing layers is a simple post-event RPC containing a batch of topic, type, key, value tuples. The job of each API is just to rewrite their data into the above tuple representation. Every event is written in ScyllaDB before broadcasting to subscribers. Our APIs are synchronous in the sense that if an API call gives a successful response, the message was persisted in ScyllaDB. Sending the same event multiple times does no harm, since applying the update on the client is an idempotent operation, with the exception of possibly multiple sequence numbers mapping to the same message.

When connecting, the proxy will figure out all your friends and subscribe to their topics, and the same goes for chat groups you belong to. We also subscribe to topics for the connecting client. These are used for sending notifications to the client, like friend requests and cross promotions. A router reboot triggers a resubscription to topics from the proxy.

We use Protocol Buffers to save on bandwidth cost. All load balancing is at the TCP level to guarantee that requests over the same HTTP/2 connection are handled by the same TCP socket on the proxy. This lets us cache certain information in memory on the initial listen, so we don't need to refetch it on other requests. We have enough concurrent clients that we don't need to separately load balance individual HTTP/2 requests, as traffic is evenly distributed anyway, and requests are about equally expensive to handle across different users. We use persistent sockets between proxies and routers.
This way, we can easily send tens of thousands of subscriptions per second to a single router without an issue."

But It's Not Game Over

If you want to watch the complete tech talk, just press play below:

And if you want to read more about ScyllaDB's role in the gaming world, you might also want to read:

- Epic Games: How Epic Games uses ScyllaDB as a binary cache in front of NVMe and S3 to accelerate global distribution of large game assets used by Unreal Cloud DDC.
- Tencent Games: How Tencent Games built service architecture based on CQRS and event sourcing patterns with Pulsar and ScyllaDB.
- Discord: How Discord uses ScyllaDB to power their massive growth, moving from a niche gaming platform to one of the world's largest communication platforms.

How To Analyze ScyllaDB Cluster Capacity

Monitoring tips that can help reduce cluster size 2-5X without compromising latency

Editor's note: The following is a guest post by Andrei Manakov, Senior Staff Software Engineer at ShareChat. It was originally published on Andrei's blog.

I had the privilege of giving a talk at ScyllaDB Summit 2024, where I briefly addressed the challenge of analyzing the remaining capacity in ScyllaDB clusters. A good understanding of ScyllaDB internals is required to plan your computation cost increase when your product grows or to reduce cost if the cluster turns out to be heavily over-provisioned. In my experience, clusters can be reduced by 2-5x without latency degradation after such an analysis. In this post, I provide more detail on how to properly analyze CPU and disk resources.

How Does ScyllaDB Use CPU?

ScyllaDB is a distributed database, and one cluster typically contains multiple nodes. Each node can contain multiple shards, and each shard is assigned to a single core. The database is built on the Seastar framework and uses a shared-nothing approach. All data is usually replicated in several copies, depending on the replication factor, and each copy is assigned to a specific shard. As a result, every shard can be analyzed as an independent unit, and every shard efficiently utilizes all available CPU resources without any overhead from contention or context switching.

Each shard has different tasks, which we can divide into two categories: client request processing and maintenance tasks. All tasks are executed by a scheduler in one thread pinned to a core, giving each one its own CPU budget limit. Such clear task separation allows isolation and prioritization of latency-critical tasks for request processing. As a result of this design, the cluster handles load spikes more efficiently and provides gradual latency degradation under heavy load. [More details about this architecture].
Another interesting result of this design is that ScyllaDB supports workload prioritization. In my experience, this approach ensures that critical latency is not impacted during less critical load spikes. I can’t recall any similar feature in other databases. Such problems are usually tackled by having 2 clusters for different workloads. But keep in mind that this feature is available only in ScyllaDB Enterprise.
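For reference, workload prioritization in ScyllaDB Enterprise is configured through service levels that are attached to roles. The snippet below is only a rough sketch (role names, share values, and exact syntax are illustrative; check the documentation for your ScyllaDB Enterprise version):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()

    # Hypothetical: give the latency-critical application role a larger share of resources
    # than the analytics role. Requires ScyllaDB Enterprise.
    session.execute("CREATE SERVICE LEVEL realtime WITH shares = 1000")
    session.execute("CREATE SERVICE LEVEL analytics WITH shares = 200")
    session.execute("ATTACH SERVICE LEVEL realtime TO app_user")
    session.execute("ATTACH SERVICE LEVEL analytics TO batch_user")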
However, background tasks may occupy all remaining resources, and overall CPU utilization in the cluster appears spiky. So, it's not obvious how to find the real cluster capacity. It's easy to see 100% CPU usage with no performance impact. If we increase the critical load, it will consume the resources (CPU, I/O) from background tasks. Background tasks' duration can increase slightly, but it's totally manageable.

The Best CPU Utilization Metric

How can we understand the remaining cluster capacity when CPU usage spikes up to 100% throughout the day, yet the system remains stable? We need to exclude maintenance tasks and remove all these spikes from consideration. Since ScyllaDB distributes all the data by shards and every shard has its own core, we take into account the max CPU utilization by a shard, excluding maintenance tasks (you can find other task types here). In my experience, you can keep the utilization up to 60-70% without visible degradation in tail latency. Example of a Prometheus query:

    max(sum(rate(scylla_scheduler_runtime_ms{group!="compaction|streaming"})) by (instance, shard))/10
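One way to keep an eye on this number is to evaluate the query programmatically. The sketch below is an assumption-laden example (the Prometheus URL, the 5m rate window added for the raw HTTP API, and the 70% threshold are all placeholders) that asks the Prometheus HTTP API of the monitoring stack for the current value:

    import requests

    PROMETHEUS = "http://localhost:9090"  # assumed Prometheus endpoint of the monitoring stack
    QUERY = ('max(sum(rate(scylla_scheduler_runtime_ms'
             '{group!="compaction|streaming"}[5m])) by (instance, shard)) / 10')

    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()

    for sample in resp.json()["data"]["result"]:
        utilization = float(sample["value"][1])  # ms of runtime per second / 10 = percent
        print(f"max shard CPU utilization (excluding maintenance): {utilization:.1f}%")
        if utilization > 70:
            print("warning: above the 60-70% guideline, little headroom left")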
You can find more details about the ScyllaDB monitoring stack here. In this article, PromQL queries are used to demonstrate how to analyse key metrics effectively.
However, I don't recommend rapidly downscaling the cluster to the desired size just after looking at max CPU utilization excluding the maintenance tasks.

First, you need to look at average CPU utilization excluding maintenance tasks across all shards. In an ideal world, it should be close to the max value. In case of significant skew, it definitely makes sense to find the root cause. It can be an inefficient schema with an incorrect partition key or an incorrect token-aware/rack-aware configuration in the driver.

Second, you need to take a look at the average CPU utilization of the excluded tasks, which depends on your workload specifics. It's rarely more than 5-10%, but you might need a bigger buffer if it uses more CPU. Otherwise, compaction will be too tight on resources and reads will start to become more expensive with respect to CPU and disk.

Third, it's important to downscale your cluster gradually. ScyllaDB has an in-memory row cache which is crucial for its performance. It allocates all remaining memory for the cache, and with the memory reduction, the hit rate might drop more than you expected. Hence, CPU utilization can increase non-linearly, and a low cache hit rate can harm your tail latency. You can track the hit rate with:

    1 - (sum(rate(scylla_cache_reads_with_misses{})) / sum(rate(scylla_cache_reads{})))
I haven’t mentioned RAM in this article as there are not many actionable points. However, since memory cache is crucial for efficient reading in ScyllaDB, I recommend always using memory-optimized virtual machines. The more memory, the better.
Disk Resources

ScyllaDB is an LSM-tree-based database. That means it is optimized for writing by design: any mutation leads to appending new data to the disk. The database periodically rewrites the data to ensure acceptable read performance. Disk performance plays a crucial role in overall database performance. You can find more details about the write path and compaction in the ScyllaDB documentation. There are 3 important disk resources we will discuss here: throughput, IOPS and free disk space. All these resources depend on the type of disks attached to our ScyllaDB nodes and their quantity.

But how can we understand the limit of the IOPS/throughput? There are 2 possible options:

- Any cloud provider or manufacturer usually publishes the performance of their disks; you can find it on their website. For example, NVMe disks from Google Cloud.
- The actual disk performance can be different compared to the numbers that manufacturers share, so the best option might be just to measure it. And we can easily get the result: ScyllaDB performs a benchmark during installation on a node and stores the result in the file io_properties.yaml. The database uses these limits internally for achieving optimal performance.

    disks:
      - mountpoint: /var/lib/scylla/data
        read_iops: 2400000          # iops
        read_bandwidth: 5921532416  # throughput
        write_iops: 1200000         # iops
        write_bandwidth: 4663037952 # throughput

    file: io_properties.yaml

Disk Throughput

    sum(rate(node_disk_read_bytes_total{})) / (read_bandwidth * nodeNumber)
    sum(rate(node_disk_written_bytes_total{})) / (write_bandwidth * nodeNumber)

In my experience, I haven't seen any harm with utilization up to 80-90%.

Disk IOPS

    sum(rate(node_disk_reads_completed_total{})) / (read_iops * nodeNumber)
    sum(rate(node_disk_writes_completed_total{})) / (write_iops * nodeNumber)

Disk free space

It's crucial to have a significant buffer in every node. If you run out of space, the node will be basically unavailable and hard to restore. Additional space is required for many operations:

- Every update, write, or delete will be written to the disk and allocate new space.
- Compaction requires some buffer while cleaning up space.
- Backup procedures.

The best way to control disk usage is to use Time To Live in the tables if it matches your use case. In that case, irrelevant data will expire and be cleaned up during compaction. I usually try to keep at least 50-60% of free space:

    min(sum(node_filesystem_avail_bytes{mountpoint="/var/lib/scylla"}) by (instance) / sum(node_filesystem_size_bytes{mountpoint="/var/lib/scylla"}) by (instance))
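As a small, purely illustrative helper (the file path follows a default installation, and the observed rates are placeholders you would take from the node_disk_* queries above), you can read the node's measured limits from io_properties.yaml and see how close you are to them:

    import yaml

    # Limits measured by ScyllaDB when the node was set up.
    with open("/etc/scylla.d/io_properties.yaml") as f:
        disk = yaml.safe_load(f)["disks"][0]

    # Example observed rates (bytes/s and ops/s), e.g. from the Prometheus queries above.
    observed = {
        "read_bandwidth": 1.2e9,
        "write_bandwidth": 0.9e9,
        "read_iops": 150_000,
        "write_iops": 90_000,
    }

    for key, value in observed.items():
        limit = disk[key]
        print(f"{key}: {value / limit:.1%} of the measured limit ({limit})")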
Tablets make the overall process much smoother and increase elasticity significantly. Adding new nodes takes minutes, and a new node starts processing requests even before full data synchronization. This looks like a significant step toward full elasticity, which could reduce ScyllaDB server costs even further. You can read more about tablets here. I am looking forward to testing tablets closely soon.

Conclusion

Tablets look like a solid foundation for future full elasticity, but for now we’re planning clusters for peak load. To effectively analyze ScyllaDB cluster capacity, focus on these key recommendations:

Target maximum CPU utilization (excluding maintenance tasks) of 60–70% per shard.

Ensure sufficient free disk space to handle compaction and backups.

Downsize clusters gradually to avoid sudden cache degradation.

ScyllaDB University and Training Updates

It’s been a while since my last update. We’ve been busy improving the existing ScyllaDB training material and adding new lessons and labs. In this post, I’ll survey the latest developments and update you on the live training event taking place later this month. You can discuss these topics (and more!) on the community forum. Say hello here.

ScyllaDB University LIVE Training

In addition to the self-paced online courses you can take on ScyllaDB University (see below), we host online live training events. These events are a great opportunity to improve your NoSQL and ScyllaDB skills, get hands-on practice, and get your questions answered by our team of experts. The next event is ScyllaDB University LIVE, which will take place on January 29th. As usual, we’re planning on having two tracks: Essentials and Advanced. However, this time we’ll change the format and make each track a complete learning path. Stay tuned for more details, and I hope to see you there.

Save your spot at ScyllaDB University LIVE

ScyllaDB University Content Updates

ScyllaDB University is our online learning platform where you can learn about NoSQL and ScyllaDB and get some hands-on experience. It includes many different self-paced lessons, meaning you can study whenever you have some free time and continue where you left off. The material is free, and all you have to do is create a user account. We recently added new lessons and updated many existing ones. All of the following topics were added to the course S201: Data Modeling and Application Development.

Start learning

New in the How To Write Better Apps Lesson

General Data Modeling Guidelines
This lesson discusses key principles of NoSQL data modeling, emphasizing a query-driven design approach to ensure efficient data distribution and balanced workloads. It highlights the importance of selecting high-cardinality primary keys, avoiding bad access patterns, and using ScyllaDB Monitoring to identify and resolve issues such as hot partitions and large partitions. Neglecting these practices can lead to slow performance, bottlenecks, and potentially unreadable data – underscoring the need to follow best practices when creating your data model. To learn more, you can explore the complete lesson here.

Large Partitions and Collections
This lesson provides insights into common pitfalls in NoSQL data modeling, focusing on issues like large partitions, collections, and improper use of ScyllaDB features. It emphasizes avoiding large partitions due to their impact on performance and demonstrates this with real-world examples and Monitoring data. Collections should generally remain small to prevent high latency. The schema used depends on the use case and on the performance requirements. Practical advice and tools are offered for testing and monitoring. You can learn more in the complete lesson here.

Hot Partitions, Cardinality and Tombstones
This lesson explores common challenges in NoSQL databases, focusing on hot partitions, low-cardinality keys, and tombstones. Hot partitions cause uneven load and bottlenecks, often due to misconfigurations or retry storms. Having many tombstones can degrade read performance due to read amplification. Best practices include avoiding retry storms, preferring efficient full-table scans over low-cardinality views, and preferring partition-level deletes to minimize tombstone buildup. Monitoring tools and thoughtful schema design are emphasized for efficient database performance. You can find the complete lesson here.
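To make the partition-level delete advice concrete, here is a minimal CQL sketch (the user_events table and its keys are hypothetical, not from the course): a whole-partition delete writes a single partition tombstone, while row-by-row deletes scatter many row tombstones that later reads must skip over.

-- Assumed (hypothetical) schema:
-- CREATE TABLE user_events (user_id bigint, event_time timestamp, payload text,
--                           PRIMARY KEY (user_id, event_time));

-- Row-level deletes: one tombstone per row, which reads must later filter out
DELETE FROM user_events WHERE user_id = 42 AND event_time = '2024-01-01 10:00:00';
DELETE FROM user_events WHERE user_id = 42 AND event_time = '2024-01-01 10:05:00';

-- Partition-level delete: a single partition tombstone covers all rows at once
DELETE FROM user_events WHERE user_id = 42;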
Diagnosis and Prevention
This lesson covers strategies to diagnose and prevent common database issues in ScyllaDB, such as large partitions, hot partitions, and tombstone-related inefficiencies. Tools like the nodetool toppartitions command help identify hot partition problems, while features like per-partition rate limits and shard concurrency limits manage load and prevent contention. Properly configuring timeout settings avoids retry storms that exacerbate hot partition problems. For tombstones, using efficient delete patterns helps maintain performance and prevent timeouts during reads. Proactive monitoring and adjustments are emphasized throughout. You can see the complete lesson here.

New in the Basic Data Modeling Lesson

CQL and the CQL Shell
The lesson introduces the Cassandra Query Language (CQL), its similarities to SQL, and its use in ScyllaDB for data definition and manipulation commands. It highlights the interactive CQL shell (CQLSH) for testing and interaction, alongside a high-level overview of drivers. Common data types and collections like Sets, Lists, Maps, and User-Defined Types in ScyllaDB are briefly mentioned. The “Pet Care IoT” lab example is presented, where sensors on pet collars record data like heart rate or temperature at intervals. This demonstrates how CQL is applied in database operations for IoT use cases. This example is used in labs later on. You can watch the video and complete lesson here.

Data Modeling Overview and Basic Concepts
The new video introduces the basics of data modeling in ScyllaDB, contrasting NoSQL and relational approaches. It emphasizes starting with application requirements, including queries, performance, and consistency, to design models. Key concepts such as clusters, nodes, keyspaces, tables, and replication factors are explained, highlighting their role in distributed data systems. Examples illustrate how tables and primary keys (partition keys) determine data distribution across nodes using consistent hashing. The lesson demonstrates creating keyspaces and tables, showing how replication factors ensure data redundancy and how ScyllaDB maps partition keys to replica nodes for efficient reads and writes. You can find the complete lesson here.

Primary Key, Partition Key, Clustering Key
This lesson explains the structure and importance of primary keys in ScyllaDB, detailing their two components: the mandatory partition key and the optional clustering key. The partition key determines the data’s location across nodes, ensuring efficient querying, while the clustering key organizes rows within a partition. For queries to be efficient, the partition key must be specified to avoid full table scans. An example using pet data illustrates how rows are sorted within partitions by the clustering key (e.g., time), enabling precise and optimized data retrieval. Find the complete lesson here.

Importance of Key Selection
This video emphasizes the importance of choosing partition and clustering keys in ScyllaDB for optimal performance and data distribution. Partition keys should have high cardinality to ensure even data distribution across nodes and avoid issues like large or hot partitions. Examples of good keys include unique identifiers like user IDs, while low-cardinality keys like states or ages can lead to uneven load and inefficiency. Clustering keys should align with query patterns, considering the order of rows and prioritizing efficient retrieval, such as fetching recent data for time-sensitive applications. Strategic key selection prevents resource bottlenecks and enhances scalability. Learn more in the complete lesson.
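To make the partition key and clustering key discussion concrete, here is a minimal CQL sketch loosely inspired by the Pet Care IoT example mentioned above; the exact schema is an assumption, not the one used in the course:

-- pet_id is the partition key: it has high cardinality and determines which
-- node (and shard) owns the data.
-- reading_time is the clustering key: it orders rows within each partition.
CREATE TABLE pet_readings (
    pet_id uuid,
    reading_time timestamp,
    heart_rate int,
    temperature double,
    PRIMARY KEY (pet_id, reading_time)
) WITH CLUSTERING ORDER BY (reading_time DESC);

-- Efficient: the partition key is specified, so the query hits one partition
-- and returns the most recent readings first.
SELECT * FROM pet_readings
WHERE pet_id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2
LIMIT 10;

A low-cardinality partition key (for example, the pet’s species) would concentrate all rows in a few partitions and recreate exactly the large/hot partition problems the lessons warn about.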
Data Modeling Lab Walkthrough (three parts)
The new three-part video lesson focuses on key aspects of data modeling in ScyllaDB, emphasizing the design and use of primary keys. It demonstrates creating a cluster and tables using the CQL shell, highlighting how partition keys determine data location and enable efficient querying, while showcasing different queries. Some tables use a clustering key, which organizes data within partitions and enables efficient range queries. It also explains compound primary keys, which enhance query flexibility. Next, an example of a different clustering key order (ascending or descending) is given; this enables query optimization and efficient data retrieval. Throughout the lab walkthrough, different challenges are presented, along with data modeling solutions that optimize performance, scalability, and resource utilization. You can watch the walkthrough here and also take the lab yourself.

New in the Advanced Data Modeling Lesson

Collections and Drivers
The new lesson discusses advanced data modeling in ScyllaDB, focusing on collections (Sets, Lists, Maps, and User-Defined Types) to simplify models with multi-value fields like phone numbers or emails. It introduces token-aware and shard-aware drivers as optimizations that enhance query efficiency. Token-aware drivers allow clients to send requests directly to replica nodes, bypassing extra hops through coordinator nodes, while shard-aware clients target specific shards within replica nodes for improved performance. ScyllaDB supports drivers in multiple languages like Java, Python, and Go, along with compatibility with Cassandra drivers. An entire course on Drivers is also available. You can learn more in the complete lesson here.

New in the ScyllaDB Operations Course

Replica Level Write/Read Path
The lesson explains ScyllaDB’s read and write paths, focusing on how data is written to memtables and persisted as immutable SSTables. Because SSTables are immutable, they are compacted periodically. Writes, including updates and deletes, are stored in a commit log before being flushed to SSTables; this ensures data consistency. For reads, a cache (along with bloom filters) is used to optimize performance. Compaction merges SSTables to remove outdated data, maintain efficiency, and save storage. ScyllaDB offers different compaction strategies, and you can choose the most suitable one based on your use case. Learn more in the full lesson.

Tracing Demo
The lesson provides a practical demonstration of ScyllaDB’s tracing using a three-node cluster. The tracing tool is showcased as a debugging aid to track request flows and replica responses. The demo highlights how data consistency levels influence when responses are sent back to clients and demonstrates high availability by successfully handling writes even when a node is down, provided the consistency requirements are met. You can find the complete lesson here.
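As a rough sketch of the tracing workflow described above (reusing the hypothetical pet_readings table from the earlier sketch; the course demo itself may differ):

-- In cqlsh: set the consistency level and enable tracing for subsequent requests
CONSISTENCY QUORUM;
TRACING ON;

-- Any request issued now prints a trace showing the coordinator,
-- the replicas contacted, and the timing of each step
SELECT * FROM pet_readings
WHERE pet_id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;

TRACING OFF;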

Top Blogs of 2024: Comparisons, Caching & Database Internals

Let’s look back at the top 10 ScyllaDB blog posts written this year – plus 10 “timeless classics” that continue to get attention. Before we start, thank you to all the community members who contributed to our blogs in various ways – from users sharing best practices at ScyllaDB Summit, to engineers explaining how they raised the bar for database performance, to anyone who has initiated or contributed to the discussion on HackerNews, Reddit, and other platforms. And if you have suggestions for 2025 blog topics, please share them with us on our socials. Without further ado, here are the most-read blog posts that we published in 2024…

We Compared ScyllaDB and Memcached and… We Lost?
By Felipe Cardeneti Mendes
Engineers behind ScyllaDB joined forces with Memcached maintainer dormando for an in-depth look at database and cache internals, and the tradeoffs in each.
Read: We Compared ScyllaDB and Memcached and… We Lost?
Related: Why Databases Cache, but Caches Go to Disk

Inside ScyllaDB’s Internal Cache
By Pavel “Xemul” Emelyanov
Why ScyllaDB completely bypasses the Linux cache during reads, using its own highly efficient row-based cache instead.
Read: Inside ScyllaDB’s Internal Cache
Related: Replacing Your Cache with ScyllaDB

Smooth Scaling: Why ScyllaDB Moved to “Tablets” Data Distribution
By Avi Kivity
The rationale behind ScyllaDB’s new “tablets” replication architecture, which builds upon a multiyear project to implement and extend Raft.
Read: Smooth Scaling: Why ScyllaDB Moved to “Tablets” Data Distribution
Related: ScyllaDB Fast Forward: True Elastic Scale

Rust vs. Zig in Reality: A (Somewhat) Friendly Debate
By Cynthia Dunlop
A (somewhat) friendly P99 CONF popup debate with Jarred Sumner (Bun.js), Pekka Enberg (Turso), and Glauber Costa (Turso) on ThePrimeagen’s stream.
Read: Rust vs. Zig in Reality: A (Somewhat) Friendly Debate
Related: P99 CONF on demand

Database Internals: Working with IO
By Pavel “Xemul” Emelyanov
Explore the tradeoffs of different Linux I/O methods and learn how databases can take advantage of a modern SSD’s unique characteristics.
Read: Database Internals: Working with IO
Related: Understanding Storage I/O Under Load

How We Implemented ScyllaDB’s “Tablets” Data Distribution
By Avi Kivity
How ScyllaDB implemented its new Raft-based tablets architecture, which enables teams to quickly scale out in response to traffic spikes.
Read: How We Implemented ScyllaDB’s “Tablets” Data Distribution
Related: Overcoming Distributed Databases Scaling Challenges with Tablets

How ShareChat Scaled their ML Feature Store 1000X without Scaling the Database
By Ivan Burmistrov and Andrei Manakov
How ShareChat engineers managed to meet their lofty performance goal without scaling the underlying database.
Read: How ShareChat Scaled their ML Feature Store 1000X without Scaling the Database
Related: ShareChat’s Path to High-Performance NoSQL with ScyllaDB

New Google Cloud Z3 Instances: Early Performance Benchmarks
By Łukasz Sójka and Roy Dahan
ScyllaDB had the privilege of testing Google Cloud’s brand new Z3 GCE instances in an early preview. We observed a 23% increase in write throughput, 24% for mixed workloads, and 14% for reads per vCPU – all at a lower cost compared to N2.
Read: New Google Cloud Z3 Instances: Early Performance Benchmarks
Related: A Deep Dive into ScyllaDB’s Architecture

Database Internals: Working with CPUs
By Pavel “Xemul” Emelyanov
Get a database engineer’s inside look at how the database interacts with the CPU, in this excerpt from the book “Database Performance at Scale.”
Read: Database Internals: Working with CPUs
Related: Database Performance at Scale: A Practical Guide [Free Book]

Migrating from Postgres to ScyllaDB, with 349X Faster Query Processing
By Dan Harris and Sebastian Vercruysse
How Coralogix cut processing times from 30 seconds to 86 milliseconds with a PostgreSQL to ScyllaDB migration.
Read: Migrating from Postgres to ScyllaDB, with 349X Faster Query Processing
Related: NoSQL Migration Masterclass

Bonus: Top NoSQL Database Blogs From Years Past

Many of the blogs published in previous years continued to resonate with the community. Here’s a rundown of 10 enduring favorites:

How io_uring and eBPF Will Revolutionize Programming in Linux (Glauber Costa): How io_uring and eBPF will change the way programmers develop asynchronous interfaces and execute arbitrary code, such as tracepoints, more securely. [2020]

Benchmarking MongoDB vs ScyllaDB: Performance, Scalability & Cost (Dr. Daniel Seybold): How MongoDB and ScyllaDB compare on throughput, latency, scalability, and price-performance in this third-party benchmark by benchANT. [2023]

Introducing “Database Performance at Scale”: A Free, Open Source Book (Dor Laor): Introducing a new book that provides practical guidance for understanding the opportunities, trade-offs, and traps you might encounter while trying to optimize data-intensive applications for high throughput and low latency. [2023]

DynamoDB: When to Move Out (Felipe Cardeneti Mendes): A look at the top reasons why teams decide to leave DynamoDB: throttling, latency, item size limits, and limited flexibility – not to mention costs. [2023]

ScyllaDB vs MongoDB vs PostgreSQL: Tractian’s Benchmarking & Migration (João Pedro Voltani): TRACTIAN shares their comparison of ScyllaDB vs MongoDB and PostgreSQL, then provides an overview of their MongoDB to ScyllaDB migration process, challenges, and results. [2023]

Benchmarking Apache Cassandra (40 Nodes) vs ScyllaDB (4 Nodes) (Juliusz Stasiewicz, Piotr Grabowski, Karol Baryla): We benchmarked Apache Cassandra on 40 nodes vs ScyllaDB on just 4 nodes. See how they stacked up on throughput, latency, and cost. [2022]

How Numberly Replaced Kafka with a Rust-Based ScyllaDB Shard-Aware Application (Alexys Jacob): How Numberly used Rust and ScyllaDB to replace Kafka, streamlining the way all its AdTech components send and track messages (whatever their form). [2023]

Async Rust in Practice: Performance, Pitfalls, Profiling (Piotr Sarna): How our engineers used flamegraphs to diagnose and resolve performance issues in our Tokio-based Rust driver. [2022]

On Coordinated Omission (Ivan Prisyazhynyy): Your benchmark may be lying to you! Learn why coordinated omission is a concern, and how we account for it when benchmarking ScyllaDB. [2021]

Why Disney+ Hotstar Replaced Redis and Elasticsearch with ScyllaDB Cloud (Cynthia Dunlop): Get the inside perspective on how Disney+ Hotstar simplified its “continue watching” data architecture for scale. [2022]