Why Teams Are Ditching DynamoDB
Teams sometimes need lower latency, lower costs (especially as they scale), or the ability to run their applications somewhere other than AWS

It’s easy to understand why so many teams have turned to Amazon DynamoDB since its introduction in 2012. It’s simple to get started, especially if your organization is already entrenched in the AWS ecosystem. It’s relatively fast and scalable, with a low learning curve. And since it’s fully managed, it abstracts away the operational effort and know-how traditionally required to keep a database up and running in a healthy state.

But as time goes on, drawbacks emerge, especially as workloads scale and business requirements evolve. Teams sometimes need lower latency, lower costs (especially as they scale), or the ability to run their applications somewhere other than AWS. In those cases, ScyllaDB, which offers a DynamoDB-compatible API, is often selected as an alternative. Let’s explore the challenges that drove three teams to leave DynamoDB.

Multi-Cloud Flexibility and Cost Savings

Yieldmo is an online advertising platform that connects publishers and advertisers in real time using an auction-based system, optimized with ML. Their business relies on delivering ads quickly (within 200-300 milliseconds) and efficiently, which requires ultra-fast, high-throughput database lookups at scale. Database delays directly translate to lost business.

They initially built the platform on DynamoDB. However, while DynamoDB had been reliable, significant limitations emerged as they grew. As Todd Coleman, Technical Co-Founder and Chief Architect, explained, their primary concerns were twofold: escalating costs and geographic restrictions. The database was becoming increasingly expensive as they scaled, and it locked them into AWS, preventing true multi-cloud flexibility. While exploring DynamoDB alternatives, they were hoping to find an option that would maintain speed, scalability, and reliability while reducing costs and providing cloud vendor independence.

Yieldmo first considered staying with DynamoDB and adding a caching layer. However, caching couldn’t fix the geographic latency issue. Cache misses would be too slow, making this approach impractical. They also explored Aerospike, which offered speed and cross-cloud support. However, Aerospike’s in-memory indexing would have required a prohibitively large and expensive cluster to handle Yieldmo’s large number of small data objects. Additionally, migrating to Aerospike would have required extensive and time-consuming code changes.

Then they discovered ScyllaDB. And ScyllaDB’s DynamoDB-compatible API (Alternator) was a game changer. Todd explained, “ScyllaDB supported cross cloud deployments, required a manageable number of servers, and offered competitive costs. Best of all, its API was DynamoDB compatible, meaning we could migrate with minimal code changes. In fact, a single engineer implemented the necessary modifications in just a few days.”

The migration process was carefully planned, leveraging their existing Kafka message queue architecture to ensure data integrity. They conducted two proof-of-concept (POC) tests: first with a single table of 28 billion objects, and then across all five AWS regions. The results were impressive. Todd shared, “Our database costs were cut in half, even with DynamoDB reserved capacity pricing.” And beyond cost savings, Yieldmo gained the flexibility to potentially deploy across different cloud providers. Their latency improved, and ScyllaDB was as simple to operate as DynamoDB.
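Todd’s point about “minimal code changes” is worth making concrete. Because Alternator speaks the DynamoDB API, switching is often little more than pointing the existing client at a different endpoint. Here is a rough sketch using Python and boto3; the endpoint URL, port, credentials, and table name are all illustrative and depend on your deployment, so treat this as a hedged example rather than a drop-in recipe.

import boto3

# Same table definitions, item model, and query code as before.
# The assumed change: point the SDK at a ScyllaDB Alternator endpoint
# instead of the AWS regional DynamoDB endpoint.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-alternator.example.internal:8000",  # illustrative
    region_name="us-east-1",              # still required by the SDK
    aws_access_key_id="alternator-user",  # depends on your auth setup
    aws_secret_access_key="alternator-secret",
)

table = dynamodb.Table("ad_lookups")  # hypothetical table name
table.put_item(Item={"pk": "campaign#123", "sk": "creative#9", "bid": 2})
response = table.get_item(Key={"pk": "campaign#123", "sk": "creative#9"})
print(response.get("Item"))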
Wrapping up, Todd concluded: “One of our initial concerns was moving away from DynamoDB’s proven reliability. However, ScyllaDB has been an excellent partner. Their team provides monitoring of our clusters, alerts us to potential issues, and advises us when scaling is needed. In terms of ongoing maintenance overhead, the experience has been comparable to DynamoDB, but with greater independence and substantial cost savings.”

Hear from Yieldmo

Migrating to GCP with Better Performance and Lower Costs

Digital Turbine, a major player in mobile ad tech with $500 million in annual revenue, faced growing challenges with its DynamoDB implementation. While its primary motivation for migration was standardizing on Google Cloud Platform following acquisitions, the existing DynamoDB solution had been causing both performance and cost concerns at scale.

“It can be a little expensive as you scale, to be honest,” explained Joseph Shorter, vice president of Platform Architecture at Digital Turbine. “We were finding some performance issues. We were doing a ton of reads — 90% of all interactions with DynamoDB were read operations. With all those operations, we found that the performance hits required us to scale up more than we wanted, which increased costs.”

Digital Turbine needed the migration to be as fast and low-risk as possible, which meant keeping application refactoring to a minimum. The main concern, according to Shorter, was “How can we migrate without radically refactoring our platform, while maintaining at least the same performance and value – and avoiding a crash-and-burn situation? Because if it failed, it would take down our whole company.”

After evaluating several options, Digital Turbine moved to ScyllaDB and achieved immediate improvements. The migration took less than a sprint to implement and the results exceeded expectations. “A 20% cost difference — that’s a big number, no matter what you’re talking about,” Shorter noted. “And when you consider our plans to scale even further, it becomes even more significant.” Beyond the cost savings, they found themselves “barely tapping the ScyllaDB clusters,” suggesting room for even more growth without proportional cost increases.

Hear from Digital Turbine

High Write Throughput with Low Latency and Lower Costs

The User State and Customizations team for one of the world’s largest media streaming services had been using DynamoDB for several years. As they were rearchitecting two existing use cases, they wondered if it was time for a database change. The two use cases were:

Pause/resume: If a user is watching a show and pauses it, they can pick up where they left off – on any device, from any location.
Watch state: Using that same data, determine whether the user has watched the show.

Here’s how the architecture works: Every 30 seconds, the client sends heartbeats with the updated playhead position of the show and then sends those events to the database. The Edge Pipeline loads events in the same region as the user, while the Authority (Auth) Pipeline combines events for all five regions that the company serves. Finally, the data has to be fetched and served back to the client to support playback. Note that the team wanted to preserve separation between the Auth and Edge regions, so they weren’t looking for any database-specific replication between them.
The two main technical requirements for supporting this architecture were:

To ensure a great user experience, the system had to remain highly available, with low-latency reads and the ability to scale based on traffic surges.
To avoid extensive infrastructure setup or DBA work, they needed easy integration with their AWS services.

Once those boxes were checked, the team also hoped to reduce overall cost. “Our existing infrastructure had data spread across various clusters of DynamoDB and Elasticache, so we really wanted something simple that could combine these into a much lower cost system,” explained their backend engineer.

Specifically, they needed a database with:

Multiregion support, since the service was popular across five major geographic regions.
The ability to handle over 170K writes per second. Updates didn’t have a strict service-level agreement (SLA), but the system needed to perform conditional updates based on event timestamps.
The ability to handle over 78K reads per second with a P99 latency of 10 to 20 milliseconds. The use case involved only simple point queries; things like indexes, partitioning and complicated query patterns weren’t a primary concern.
Around 10TB of data with room for growth.

Why move from DynamoDB? According to their backend engineer, “DynamoDB could support our technical requirements perfectly. But given our data size and high (write-heavy) throughput, continuing with DynamoDB would have been like shoveling money into the fire.”

Based on their requirements for write performance and cost, they decided to explore ScyllaDB. For a proof of concept, they set up a ScyllaDB Cloud test cluster with six AWS i4i.4xlarge nodes and preloaded the cluster with 3 billion records. They ran combined loads of 170K writes per second and 78K reads per second. And the results? “We hit the combined load with zero errors. Our P99 read latency was 9 ms and the write latency was less than 1 ms.” These low latencies, paired with significant cost savings (over 50%), convinced them to leave DynamoDB.

Beyond lower latencies at lower cost, the team also appreciated the following aspects of ScyllaDB:

ScyllaDB’s performance-focused design (being built on the Seastar framework, using C++, being NUMA-aware, offering shard-aware drivers, etc.) helps the team reduce maintenance time and costs.
Incremental Compaction Strategy helps them significantly reduce write amplification.
Flexible consistency levels and replication factors help them support separate Auth and Edge pipelines. For example, Auth uses quorum consistency while Edge uses a consistency level of “1” due to the data duplication and high throughput.

Their backend engineer concluded: “Choosing a database is hard. You need to consider not only features, but also costs. Serverless is not a silver bullet, especially in the database domain. In our case, due to the high throughput and latency requirements, DynamoDB serverless was not a great option. Also, don’t underestimate the role of hardware. Better utilizing the hardware is key to reducing costs while improving performance.”

Learn More

Is Your Team Next?

If your team is considering a move from DynamoDB, ScyllaDB might be an option to explore. Sign up for a technical consultation to talk more about your use case, SLAs, technical requirements and what you’re hoping to optimize. We’ll let you know if ScyllaDB is a good fit and, if so, what a migration might involve in terms of application changes, data modeling, infrastructure and so on.
Bonus: Here’s a quick look at how ScyllaDB compares to DynamoDB

Database Performance Questions from Google Cloud Next
Spiraling cache costs, tombstone nightmares, old Cassandra pains, and more — what people were asking about at Google Cloud Next

You’ve likely heard that what happens in Vegas stays in Vegas… but we’re making an exception here. Last week at Google Cloud Next in Las Vegas, my ScyllaDB colleagues and I had the pleasure of meeting all sorts of great people. And among all the whack-a-monster fun, there were lots of serious questions about database performance in general and ScyllaDB in particular. In this blog, I’ll share some of the most interesting questions that attendees asked and recap my responses.

Cache

“We added Redis in front of our Postgres but now its cost is skyrocketing. How can ScyllaDB help in this case?”

“We placed Redis in front of DynamoDB because DAX is too expensive, but managing cache invalidation is hard. Any suggestions?”

Adding a cache layer to a slower database is a very common pattern. After all, if the cache layer grants low-millisecond response times while the backend database serves requests in the 3-digit-millisecond range, the decision might seem like a no-brainer. However, the tradeoffs often turn out to be steeper than people initially anticipate.

First, you need to properly size the cache so the cost doesn’t outweigh its usefulness. Learning the intricacies of the workload (e.g., which pieces of data are accessed more than others) is essential for deciding what to cache and what to pass through to the backend database. If you underestimate the required cache size, the performance gain of having a cache might be less than ideal: since only part of the data is in the cache, the database is hit frequently, which elevates latencies across the board.

Deciding what to keep in cache is also important. How you define the eviction policy for data in the cache might make or break the data lifecycle in that layer – greatly affecting its impact on long-tail latency. The application is also responsible for caching responses. That means there’s additional code that must be maintained to ensure consistency, synchronicity, and high availability of those operations.

Another issue that pops up really often is cache invalidation: how to manage updating a cache that is separate from the backend database. Once a piece of data needs to be deleted or updated, the change has to be synchronized with the cache, and that creates a situation where failure means serving stale or old data. Integrated solutions such as DAX for DynamoDB are helpful because they provide a pass-through caching layer: the database is updated first, then the system takes care of reflecting the change in the cache layer. However, the tradeoff of this technique is the cost: you end up paying more for DAX than you would for simply running a similarly sized Redis cluster.

ScyllaDB’s performance characteristics have allowed many teams to replace both their cache and database layers. By bypassing the Linux cache and caching data at the row level, ScyllaDB makes cache space utilization more efficient for maximum performance. By relying on efficient use of cache, ScyllaDB can provide single-digit-millisecond p99 read latency while still reducing the overall infrastructure required to run workloads. Its design allows for extremely fast access to data on disks. Even beyond that caching layer, ScyllaDB efficiently serves data from disk at very predictable, ultra-low latency. ScyllaDB’s IO scheduler is optimized to maximize disk bandwidth while still delivering predictable low latency for operations.
You can learn more about our IO Scheduler on this blog.

ScyllaDB maintains cache performance by leveraging the LRU (Least Recently Used) algorithm, which selectively evicts infrequently accessed data. Keys that were not recently accessed may be evicted to make room for other data to be cached. However, evicted keys are still persisted on disk (and replicated!) and can be efficiently accessed at any time. This is especially advantageous compared to Redis, where relying on a persistent store outside of memory is challenging. Read more in our Cache Internals blog, cache comparison page, and blog on replacing external caches.

Tombstones

“I’ve had tons of issues with tombstones in the past with Cassandra… Performance issues, data resurrection, you name it. It’s still pretty hard dealing with the performance impact. How does ScyllaDB handle these issues?”

In the LSM (Log-Structured Merge tree) model, deletes are handled just like regular writes. The system accepts the delete command and creates what is called a tombstone: a marker for a deletion. The system later merges the deletion marker with the rest of the data — either in a process called compaction or in memory at read time.

Tombstone processing has historically posed a couple of challenges. One of them is handling what is known as range deletes: a single deletion that covers multiple rows. For instance, you can use “DELETE … WHERE … ClusteringKey < X”, which would delete all records that have a Clustering Key lower than X. This usually means the system has to read through an unknown amount of data until it gets to the tombstone, and then discard it all from the result set. If the number of rows is small, it’s still a very efficient read. But if it covers millions of rows, reading them just to discard them can be very inefficient.

Tombstones are also the source of another concern with distributed systems: data resurrection. Since Cassandra’s (and originally ScyllaDB’s) tombstones were kept only up to the grace period (a.k.a. gc_grace_seconds, default of 10 days), a repair had to be run on the cluster within that time frame. Skipping this step could lead to tombstones being purged — and previously deleted data that’s not covered by a tombstone could come back to life (a.k.a. “data resurrection”).

ScyllaDB recently introduced tons of improvements in how it handles tombstones, from repair-based garbage collection to expired tombstone thresholds that trigger early compaction of SSTables. Tombstone processing is now much more efficient and performant than Cassandra’s (and even previous versions of ScyllaDB), especially in workloads that are prone to accumulating tombstones over time. ScyllaDB’s repair-based garbage collection capability also helps prevent data resurrection by ensuring tombstones are only eligible for purging after a repair has been completed. This means workloads can get rid of tombstones much faster and make reads more efficient. Learn more about this functionality on our blog Preventing Data Resurrection with Repair Based Tombstone Garbage Collection.

BigTable and Friends

“When would you recommend ScyllaDB over Spanner/BigTable/BigQuery?”

Questions about how our product compares to the cloud databases run by the conference host are unsurprisingly common. Google Cloud databases are no exception. Attendees shared a lot of use cases currently leveraging them and were curious about alternatives aligned with our goals (scalability, global replication, performance, cost).
Could ScyllaDB help them, or should they move on to another booth? It really depends on how they’re using their database as well as the nature of their database workload. Let’s review the most commonly asked-about Google Cloud databases:

Spanner is highly oriented towards relational workloads at global scale. While it can still perform well under distributed NoSQL workloads, its performance and cost may pose challenges at scale.
BigQuery is a high-performance analytical database. It can run really complex analytical queries, but it’s not a good choice for NoSQL workloads that require high throughput and low latency at scale.
BigTable is Google Cloud’s NoSQL database. This is the most similar to ScyllaDB’s design, with a focus on scalability and high throughput.

From the descriptions above, it’s easy to assess: if the use case is inherently relational or heavy on complex analytics queries, ScyllaDB might not be the best choice. However, just because a team is currently using a relational or analytics database doesn’t mean that they are leveraging the best tool for the job. If the application relies on point queries that fetch data from a single partition (even if it contains multiple rows), then ScyllaDB might be an excellent choice. ScyllaDB implements advanced features such as Secondary Indexes (Local and Global) and Materialized Views, which allow users to have very efficient indexes and table views that still provide the same performance as their base table.

Cloud databases are usually very easy to adopt: just a couple of clicks or an API call, and they are ready to serve in your environment. Their performance is usually fine for general use. However, for use cases with strict latency or throughput requirements, it might be appropriate to consider performance-focused alternatives. ScyllaDB has a track record of being extremely efficient and fast, providing predictable low tail latency at p99.

Cost is another factor. Scaling workloads to millions of operations per second might be technically feasible on some databases, but can incur surprisingly high costs. ScyllaDB’s inherent efficiency allows us to run workloads at scale with greatly reduced costs.

Another downside of using a cloud vendor’s managed database solution is ecosystem lock-in. If you decide to leave the cloud vendor’s platform, you usually can’t use the same service – either on other cloud providers or on-premises. If teams need to migrate to other deployment solutions, ScyllaDB provides robust support for moving to any cloud provider or running in an on-premises datacenter.

Read our ScyllaDB vs BigTable comparison.

Schema Mismatch

“How does ScyllaDB handle specific problems such as schema mismatch?”

This user shared a painful Cassandra incident where an old node (initially configured to be part of a sandbox cluster) had an incorrect configuration. That mistake, possibly caused by IP overlap resulting from infrastructure drift over time, led to the old node joining the production cluster. At that point, it essentially garbled the production schema and broke it. Since Cassandra relies on the gossip protocol (an epidemic peer-to-peer protocol), the schema was replicated to the whole cluster and left it in an unusable state. That mistake ended up costing this user hours of troubleshooting and caused a production outage that lasted for days. Ouch!

After they shared their horror story, they inquired: Could ScyllaDB have prevented that?
With the introduction of Consistent Schema Changes leveraging the Raft distributed consensus algorithm, ScyllaDB made schema changes safe and consistent in a distributed environment. Raft is based on events being handled by a leader node, which ensures that any change applied to the cluster is effectively rejected unless it is agreed upon by the leader. The issue reported by the user simply would not exist in a Raft-enabled ScyllaDB cluster: schema management would reject the rogue version and the node would fail to join the cluster – exactly what it needed to do to prevent issues! Additionally, ScyllaDB transitioned from using IP addresses to Host UUIDs – effectively removing any chance that an old IP tries to reconnect to a cluster it was never a part of.

Read the Consistent Schema Changes blog and the follow-up blog. Additionally, learn more about the change to Strongly Consistent Topology Changes.

Old Cassandra Pains

“I have a very old, unmaintained Cassandra cluster running a critical app. How do I safely migrate to ScyllaDB?”

That is a very common question. First, let’s unpack it a bit. Let’s analyze what “old” means. Cassandra 2.1 was released 10 years ago, but it is still supported by the ScyllaDB Spark connector… and that means it can be easily migrated to a shiny ScyllaDB cluster (as long as its schema is compatible).

“Unmaintained” can also mean a lot of things. Did it just miss some upgrade cycles? Or is it also behind on maintenance steps such as repairs? Even if that’s the case – no problem! Our Spark-based ScyllaDB Migrator has tunable consistency for reads and writes. This means it can be configured to use LOCAL_QUORUM or even ALL consistency if required. Although that’s not recommended in most cases (for performance reasons), it would ensure consistent data reads as data is migrated over to a new cluster.

Now, let’s discuss migration safety. In order to maintain consistency across the migration, the app should be configured to dual-write to both the source and destination clusters. It can do so by sending parallel writes to each and ensuring that any failures are retried. It may also be a good idea to collect metrics or logs on errors so you can keep track of inconsistencies across the clusters. Once dual writes are enabled, data can be migrated using the ScyllaDB Migrator app. Since it’s based on Spark, the Migrator can easily scale to any number of workers required to speed up the migration process.

After migrating the historical data, you might run a read validation process – reading from both sources and comparing the results until you are confident in the migrated data’s consistency. Once you are confident that all data has been migrated, you can finally get rid of the old cluster and have your application run solely on the new one.

If the migration process still seems daunting, we can help. ScyllaDB has a team available to guide you through the migration, from planning to best practices at every step. Reach out to Support if you are considering migrating to ScyllaDB! We have tons of resources on helping users migrate. Here are some of them:

ScyllaDB Migrator project
Migrate to ScyllaDB Documentation hub
Monster Scale Summit presentation: Database Migration Strategies and Pitfalls
Migrating from Cassandra or DynamoDB to ScyllaDB using ScyllaDB Migrator

Wrap

These conversations are only a select few of the many good discussions the ScyllaDB team had at Google Cloud Next. Every year, we are amazed at the wide variety of stories shared by people we meet.
Conversations like these are what motivate us to attend Google Cloud Next every year. If you’d like to reach out, share your story, or ask questions, here are a couple of resources you can leverage:

ScyllaDB Forum
Community Slack

If you are wondering if ScyllaDB is the right choice for your use cases, you can reach out for a technical 1:1 meeting.

Cassandra Compaction Throughput Performance Explained
This is the second post in my series on improving node density and lowering costs with Apache Cassandra. In the previous post, I examined how streaming performance impacts node density and operational costs. In this post, I’ll focus on compaction throughput and a recent optimization in Cassandra 5.0.4 that significantly improves it: CASSANDRA-15452.
This post assumes some familiarity with Apache Cassandra storage engine fundamentals. The documentation has a nice section covering the storage engine if you’d like to brush up before reading this post.
How to Reduce DynamoDB Costs: Expert Tips from Alex DeBrie
DynamoDB consultant Alex DeBrie shares where teams tend to get into trouble

DynamoDB pricing can be a blessing and a curse. When you’re just starting off, costs are usually quite reasonable and on-demand pricing can seem like the perfect way to minimize upfront costs. But then perhaps you face “catastrophic success,” with an exponential increase of users flooding your database… and your monthly bill far exceeds your budget. The more predictable provisioned capacity model might seem safer. But if you overprovision, you’re burning money – and if you underprovision, your application might be throttled during a critical peak period.

It’s complicated. Add in the often-overlooked costs of secondary indexes, ACID transactions, and global tables – plus the nuances of dealing with DAX – and you could find that your cost estimates were worlds away from reality.

Rather than learn these lessons the hard (and costly) way, why not take a shortcut: tap the expert known for helping teams reduce their DynamoDB costs. Enter Alex DeBrie, the guy who literally wrote the book on DynamoDB. Alex shared his experiences at the recent Monster SCALE Summit. This article recaps the key points from his talk (you can watch his complete talk here).

Watch Alex’s Complete Talk

Note: If you need further cost reduction beyond these strategies, consider ScyllaDB. ScyllaDB is an API-compatible DynamoDB alternative that provides better latency at 50% of the cost (or less), thanks to extreme engineering efficiency.

Learn more about ScyllaDB as a DynamoDB alternative

DynamoDB Pricing: The Basics

Alex began the talk with an overview of how DynamoDB’s pricing structure works. Unlike other cloud databases where you provision resources like CPU and RAM, DynamoDB charges directly for operations. You pay for:

Read Capacity Units (RCUs): Each RCU allows reading up to 4KB of data per request
Write Capacity Units (WCUs): Each WCU allows writing up to 1KB of data per request
Storage: Priced per gigabyte-month (similar to EBS or S3)

Then there are DynamoDB billing modes, which determine how you get that capacity for reads and writes.

Provisioned Throughput is the traditional billing mode. You specify how many RCUs and WCUs you want available on a per-second basis. Basically, it’s a “use it or lose it” model: you’re paying for what you requested, whether you take advantage of it or not. If you happen to exceed what you requested, your workload gets throttled.

And speaking of throttling, Alex called out another important difference between DynamoDB and other databases. With other databases, response times gradually worsen as concurrent queries increase. Not so in DynamoDB. Alex explained, “As you increase the number of concurrent queries, you’ll still hit some saturation point where you might not have provisioned enough throughput to support the reads or writes you want to perform. But rather than giving you long-tail response times, which aren’t ideal, it simply throttles you. It instantly returns a 500 error, telling you, ‘Hey, you haven’t provisioned enough for this particular second. Come back in another second, and you’ll have more reads and writes available.’” As a result, you get predictable response times – to a limit, at least.

On-Demand Mode is more like a serverless or pay-per-request mode. Rather than saying how much capacity you want in advance, you just get charged per request. As you throw reads and writes at your DynamoDB database, AWS will charge you fractions of a cent each time.
At the end of the month, they’ll total up all those costs and send you a bill.

Beyond the Basics

For an accurate assessment of your DynamoDB costs, you need to go beyond simply plugging your anticipated read and write estimates into a calculator (either the AWS-hosted DynamoDB cost calculator or the more nuanced DynamoDB cost analyzer we’ve designed). Many other factors – in your DynamoDB configuration as well as your actual application – impact your costs. Critical DynamoDB cost factors that Alex highlighted in his talk include:

Table storage classes
WCU and RCU cost multipliers

Let’s look at each in turn.

Table Storage Classes

In DynamoDB, “table storage classes” define the underlying storage tier and access patterns for your table’s data. There are two options: Standard mode for hot data and Standard-IA for infrequently accessed, historical, or backup data.

Standard Mode: This is the traditional table storage class. It provides high-performance storage optimized for frequent access, and it’s the cheapest mode for paying for operations. However, be aware that storage cost is more expensive (about 25 cents per gigabyte-month in the cheapest regions).
Standard-IA (Infrequent Access): This is a lower-cost, less performant tier designed for infrequent access. If you have a table with a lot of data and you’re doing fewer operations on it, you can use this option for cheaper storage (only about 10 cents per gigabyte-month). However, the tradeoffs are that you pay a premium on operations and you cannot reserve capacity.

[Amazon’s tips on selecting the table storage class]

WCU and RCU Cost Multipliers

Beyond the core settings, there’s also an array of “multipliers” that can exponentially increase your capacity unit consumption. Factors such as item size, secondary indexes, transactions, global table replication, and read consistency can all cause costs to skyrocket if you’re not careful. The riskiest cost multipliers that Alex called out include:

Item size: Although the standard RCU is 4KB and the standard WCU is 1KB, you can go beyond that (for a cost). If you’re reading a 20KB item, that’s going to be 5 RCUs (20KB / 4KB = 5 RCUs). Or if you’re writing a 10KB item, that’s going to be 10 WCUs (10KB / 1KB = 10 WCUs).
Secondary indexes: DynamoDB lets you use secondary indexes, but again – it will cost you. In addition to paying for the writes that go to your main table, you will also pay for all the writes to your secondary indexes. That can really drive up your WCU consumption.
ACID Transactions: You can configure ACID transactions to operate on multiple items in a single request in an all-or-nothing way. However, you pay quite a premium for this.
Global Tables: DynamoDB Global Tables replicate data across multiple regions, but you really pay the price due to increased write operations as well as increased storage needs.
Consistent reads: Consistent reads ensure that a read request always returns the most recent write. But you pay higher costs compared to eventually consistent reads, which might return slightly older data.

How to Reduce DynamoDB Costs

Alex’s top tip is to “mind your multipliers.” Make sure you really understand the cost impacts of different options. Also, avoid any options that don’t justify their steep costs. In particular…

Watch Item Sizes

DynamoDB users tend to bloat their item sizes without really thinking about it.
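Because item size multiplies capacity-unit consumption directly, it is worth doing the arithmetic before settling on an item layout. Here is a rough back-of-the-envelope helper based on the 4KB/1KB rounding rules described above; it is a sketch only, and it deliberately ignores the other multipliers (transactions, secondary indexes, global tables), which stack on top of these numbers.

import math

def write_capacity_units(item_kb: float) -> int:
    # 1 WCU per 1KB written, rounded up.
    return math.ceil(item_kb / 1.0)

def read_capacity_units(item_kb: float, strongly_consistent: bool = True) -> float:
    # 1 RCU per 4KB read for strongly consistent reads, rounded up;
    # eventually consistent reads consume half as much.
    rcus = math.ceil(item_kb / 4.0)
    return rcus if strongly_consistent else rcus / 2

# The examples from the talk:
print(read_capacity_units(20))                              # 5 RCUs for a 20KB item
print(read_capacity_units(20, strongly_consistent=False))   # 2.5 RCUs, eventually consistent
print(write_capacity_units(10))                             # 10 WCUs for a 10KB item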
Bloated items consume a lot of resources (disk/memory/CPU), so review your item sizes carefully:

Remove unused attributes.
If you have large values, consider storing them in S3 instead.
Reduce the attribute names (since AWS charges for the full payload transmitted over the wire, large attribute names result in larger item sizes).
If you have a smaller amount of frequently updated data and a larger amount of slow-moving data, consider splitting items into multiple different items (vertical partitioning).

Limit Secondary Indexes

Secondary indexes are another common culprit behind unexpected DynamoDB costs. Be vigilant about spotting and removing secondary indexes that you don’t really need. Remember, they’re causing you to pay twice: you pay in terms of storage and also on every write. You can also use Projections to limit the number of writes to your secondary indexes and/or limit the size of the items in those indexes. Regularly review secondary indexes to ensure they are being utilized. Remove any index that isn’t being read and evaluate the “write:read” cost ratio to determine if the cost is justified.

Use Transactions Sparingly

Limit transactions. Alex put it this way: “AWS came out with DynamoDB transactions six or seven years ago. They’re super useful for many things, but I wouldn’t use them willy-nilly. They’re slower than traditional DynamoDB operations and more expensive. So, I try to limit my transactions to high-value, low-volume, low-frequency applications. That’s where I find them worthwhile — if I use them at all. Otherwise, I focus on modeling around them, leaning into DynamoDB’s design to avoid needing transactions in the first place.”

Be Selective with Global Tables

Global tables are critical if you need data in multiple regions, but make sure they’re really worth it. Given that they will multiply your write and storage costs, they should add significant value to justify their existence.

Consider Eventually Consistent Reads

Do you really need strongly consistent reads every time? Alex has found that in most cases, users don’t. “You’re almost always going to get the latest version of the item, and even if you don’t, it shouldn’t cause data corruption.”

Choose the Right Billing Mode

On-demand DynamoDB costs about 3.5X the price of fully utilized provisioned capacity (and this is quite an improvement from the previous 7X the price). However, achieving full utilization of provisioned capacity is difficult because overprovisioning is often necessary to handle traffic spikes. Generally, ~28-29% utilization (roughly 1/3.5 of full utilization) is needed to make provisioned capacity cost effective. For smaller workloads or those with unpredictable traffic, on-demand is often the better choice. Alex advises: “Use on-demand until it hurts. If your DynamoDB bill is under $1,000 a month, don’t spend too much time optimizing provisioned capacity. Instead, set it to on-demand and see how it goes. Once costs start to rise, then consider whether it’s worth optimizing. If you’re using provisioned capacity, aim for at least 28.8% utilization. If you’re not hitting that, switch to on-demand. Autoscaling can help with provisioned capacity – as long as your traffic doesn’t have rapid spikes. For stable, predictable workloads, reserved capacity (purchased a year in advance) can save you a lot of money.”

Review Table Storage Classes Monthly

Review your table storage classes every month. When deciding between storage classes, the key metric is whether total operations costs are more than 2.4X total storage costs.
If operations costs exceed this, standard storage is preferable; otherwise, standard infrequent access (IA) is a better choice. Also, be aware that the optimal setting could vary over time. Per Alex, “Standard storage is usually cheaper at first. For example, writing a kilobyte of data costs roughly the same as storing it for five months, so you’ll likely start in standard storage. However, over time, as your data grows, storage costs increase, and it may be worth switching to standard IA.”

Another tip on this front: use TTL to your advantage. If you don’t need to keep data forever, use TTL to automatically expire it. This will help with storage costs.

“DynamoDB pricing should influence how you build your application”

Alex left us with this thought: “DynamoDB pricing should influence how you build your application. You should consider these cost multipliers when designing your data model because you can easily see the connection between resource usage and cost, ensuring you’re getting value from it. For example, if you’re thinking about adding a secondary index, run the numbers to see if it’s better to over-read from your main table instead of paying the write cost for a secondary index. There are many strategies you can use.”

Browse our DynamoDB Resources

Learn how ScyllaDB Compares to DynamoDB

CEP-24 Behind the scenes: Developing Apache Cassandra®’s password validator and generator
Introduction: The need for an Apache Cassandra® password validator and generator
Here’s the problem: while users have always had the ability to create whatever password they wanted in Cassandra–from straightforward to incredibly complex and everything in between–this ultimately created a noticeable security vulnerability.
While organizations might have internal processes for generating secure passwords that adhere to their own security policies, Cassandra itself did not have the means to enforce these standards. To make the security vulnerability worse, if a password initially met internal security guidelines, users could later downgrade their password to a less secure option simply by using “ALTER ROLE” statements.
When internal password requirements are enforced, users face the additional burden of creating compliant passwords. This inevitably involves lots of trial and error in attempting to create a password that satisfies complex security rules.
But what if there was a way to have Cassandra automatically create passwords that meet all bespoke security requirements–but without requiring manual effort from users or system operators?
That’s why we developed CEP-24: Password validation/generation. We recognized that the complexity of secure password management could be significantly reduced (or eliminated entirely) with the right approach, improving both security and user experience at the same time.
The Goals of CEP-24
A Cassandra Enhancement Proposal (or CEP) is a structured process for proposing, creating, and ultimately implementing new features for the Cassandra project. All CEPs are thoroughly vetted among the Cassandra community before they are officially integrated into the project.
These were the key goals we established for CEP-24:
- Introduce a way to enforce password strength upon role creation or role alteration.
- Implement a reference implementation of a password validator which adheres to a recommended password strength policy, to be used for Cassandra users out of the box.
- Emit a warning (and proceed) or just reject “create role” and “alter role” statements when the provided password does not meet a certain security level, based on user configuration of Cassandra.
- To be able to implement a custom password validator with its own policy, whatever it might be, and provide a modular/pluggable mechanism to do so.
- Provide a way for Cassandra to generate a password which would pass the subsequent validation for use by the user.
The Cassandra Password Validator and Generator builds upon an established framework in Cassandra called Guardrails, which was originally implemented under CEP-3 (more details here).
The password validator implements a custom guardrail introduced as part of CEP-24. A custom guardrail can validate and generate values of arbitrary types when properly implemented. In the CEP-24 context, the password guardrail provides CassandraPasswordValidator by extending ValueValidator, while passwords are generated by CassandraPasswordGenerator by extending ValueGenerator. Both components work with passwords as String type values.
Password validation and generation are configured in the cassandra.yaml file under the password_validator section. Let’s explore the key configuration properties available. First, the class_name and generator_class_name parameters specify which validator and generator classes will be used to validate and generate passwords, respectively. Cassandra ships CassandraPasswordValidator and CassandraPasswordGenerator out of the box. However, if a particular enterprise decides that they need something very custom, they are free to implement their own validator, put it on Cassandra’s class path and reference it in the configuration behind the class_name parameter. The same applies to the generator.
CEP-24 provides implementations of the validator and generator that the Cassandra team believes will satisfy the requirements of most users. These default implementations address common password security needs. However, the framework is designed with flexibility in mind, allowing organizations to implement custom validation and generation rules that align with their specific security policies and business requirements.
password_validator:
    # Implementation class of a validator. When not in form of FQCN, the
    # package name org.apache.cassandra.db.guardrails.validators is prepended.
    # By default, there is no validator.
    class_name: CassandraPasswordValidator
    # Implementation class of related generator which generates values which are valid when
    # tested against this validator. When not in form of FQCN, the
    # package name org.apache.cassandra.db.guardrails.generators is prepended.
    # By default, there is no generator.
    generator_class_name: CassandraPasswordGenerator
Password quality might be looked at as the number of characteristics a password satisfies. There are two levels at which any password is evaluated – warning level and failure level. Warning and failure levels fit nicely into how Guardrails act: every guardrail has warning and failure thresholds. Depending on the value a specific guardrail evaluates, it will either emit a warning to the user that the value is discouraged (but ultimately allowed) or reject it altogether.
This same principle applies to password evaluation – each password is assessed against both warning and failure thresholds. These thresholds are determined by counting the characteristics present in the password. The system evaluates the password’s overall length along with four character characteristics: the number of upper-case characters, the number of lower-case characters, the number of special characters, and the number of digits. A comprehensive password security policy can be enforced by configuring minimum requirements for each of these.
# There are four characteristics:
# upper-case, lower-case, special character and digit.
# If this value is set e.g. to 3, a password has to
# consist of 3 out of 4 characteristics.
# For example, it has to contain at least 2 upper-case characters,
# 2 lower-case, and 2 digits to pass,
# but it does not have to contain any special characters.
# If the number of characteristics found in the password is
# less than or equal to this number, it will emit a warning.
characteristic_warn: 3
# If the number of characteristics found in the password is
# less than or equal to this number, it will emit a failure.
characteristic_fail: 2
Next, there are configuration parameters for each characteristic which count towards warning or failure:
# If the password is shorter than this value,
# the validator will emit a warning.
length_warn: 12
# If a password is shorter than this value,
# the validator will emit a failure.
length_fail: 8
# If a password does not contain at least n
# upper-case characters, the validator will emit a warning.
upper_case_warn: 2
# If a password does not contain at least
# n upper-case characters, the validator will emit a failure.
upper_case_fail: 1
# If a password does not contain at least
# n lower-case characters, the validator will emit a warning.
lower_case_warn: 2
# If a password does not contain at least
# n lower-case characters, the validator will emit a failure.
lower_case_fail: 1
# If a password does not contain at least
# n digits, the validator will emit a warning.
digit_warn: 2
# If a password does not contain at least
# n digits, the validator will emit a failure.
digit_fail: 1
# If a password does not contain at least
# n special characters, the validator will emit a warning.
special_warn: 2
# If a password does not contain at least
# n special characters, the validator will emit a failure.
special_fail: 1
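To make the interplay between the characteristic count and the per-characteristic thresholds concrete, here is a rough Python sketch of the evaluation logic described above. It is only an illustration of the semantics, not the actual Java implementation that ships with Cassandra, and the threshold values simply mirror the example configuration.

import string

# Example thresholds mirroring the configuration above.
CHARACTERISTIC_WARN, CHARACTERISTIC_FAIL = 3, 2
MIN_WARN = {"length": 12, "upper": 2, "lower": 2, "digit": 2, "special": 2}
MIN_FAIL = {"length": 8,  "upper": 1, "lower": 1, "digit": 1, "special": 1}

def evaluate(password: str) -> str:
    counts = {
        "length": len(password),
        "upper": sum(c.isupper() for c in password),
        "lower": sum(c.islower() for c in password),
        "digit": sum(c.isdigit() for c in password),
        "special": sum(c in string.punctuation for c in password),
    }
    # Count how many of the four character classes meet the warn/fail minimums.
    classes = ("upper", "lower", "digit", "special")
    met_warn = sum(counts[c] >= MIN_WARN[c] for c in classes)
    met_fail = sum(counts[c] >= MIN_FAIL[c] for c in classes)

    if counts["length"] < MIN_FAIL["length"] or met_fail <= CHARACTERISTIC_FAIL:
        return "fail"
    if counts["length"] < MIN_WARN["length"] or met_warn <= CHARACTERISTIC_WARN:
        return "warn"
    return "ok"

print(evaluate("T8aum3?"))       # fail: shorter than 8 characters
print(evaluate("mYAtt3mp"))      # warn: accepted, but misses warning-level rules
print(evaluate("R7tb33?.mcAX"))  # ok: satisfies the warning-level policy too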
It is also possible to say that illegal sequences of certain length found in a password will be forbidden:
# If a password contains illegal sequences that are at least this long, it is invalid.
# Illegal sequences might be either alphabetical (form 'abcde'),
# numerical (form '34567'), or US qwerty (form 'asdfg') as well
# as sequences from supported character sets.
# The minimum value for this property is 3,
# by default it is set to 5.
illegal_sequence_length: 5
Lastly, it is also possible to configure a dictionary of passwords to check against. That way, we will be checking against password dictionary attacks. It is up to the operator of a cluster to configure the password dictionary:
# Dictionary to check the passwords against. Defaults to no dictionary.
# Whole dictionary is cached into memory. Use with caution with relatively big dictionaries.
# Entries in a dictionary, one per line, have to be sorted per String's compareTo contract.
dictionary: /path/to/dictionary/file
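One practical note on the dictionary file: the sort-order requirement is easy to get wrong if you build the file with shell tools whose ordering depends on the current locale. As a rough illustration (file names are made up, and this assumes an ASCII word list, for which Python’s default code-point ordering matches Java’s String.compareTo contract), you could prepare the file like this:

# Deduplicate and sort a raw word list so it matches the ordering the
# validator expects (lexicographic by code point for ASCII entries).
with open("raw_words.txt", encoding="utf-8") as src:
    words = {line.strip() for line in src if line.strip()}

with open("dictionary.txt", "w", encoding="utf-8") as dst:
    dst.write("\n".join(sorted(words)) + "\n")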
Now that we have gone over all the configuration parameters, let’s take a look at an example of how password validation and generation look in practice.
Consider a scenario where a Cassandra super-user (such as the default ‘cassandra’ role) attempts to create a new role named ‘alice’.
cassandra@cqlsh> CREATE ROLE alice WITH PASSWORD = 'cassandraisadatabase' AND LOGIN = true;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Password was not set as it violated configured password strength policy. To fix this error, the following has to be resolved: Password contains the dictionary word 'cassandraisadatabase'. You may also use 'GENERATED PASSWORD' upon role creation or alteration."
The operator’s next attempt is not found in the dictionary, but it is not long enough:
cassandra@cqlsh> CREATE ROLE alice WITH PASSWORD = 'T8aum3?' AND LOGIN = true;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Password was not set as it violated configured password strength policy. To fix this error, the following has to be resolved: Password must be 8 or more characters in length. You may also use 'GENERATED PASSWORD' upon role creation or alteration."
Seeing this, the operator tries to fix it by making the password longer. This time the password is finally set, but it is not completely secure: it satisfies the minimum (failure-level) requirements, but our validator identified that not all characteristics were met.
cassandra@cqlsh> CREATE ROLE alice WITH PASSWORD = 'mYAtt3mp' AND LOGIN = true;

Warnings:
Guardrail password violated: Password was set, however it might not be strong enough according to the configured password strength policy. To fix this warning, the following has to be resolved: Password must be 12 or more characters in length. Passwords must contain 2 or more digit characters. Password must contain 2 or more special characters. Password matches 2 of 4 character rules, but 4 are required. You may also use 'GENERATED PASSWORD' upon role creation or alteration.
Seeing this warning, the operator notices the note about the ‘GENERATED PASSWORD’ clause, which generates a password automatically so the operator does not need to invent one on their own. As shown above, inventing a compliant password by hand is often a cumbersome process that is better left to the machine, which is also more efficient and reliable.
cassandra@cqlsh> ALTER ROLE alice WITH GENERATED PASSWORD;

 generated_password
 ------------------
 R7tb33?.mcAX
The generated password shown above automatically satisfies all the rules we have configured in cassandra.yaml, as every generated password will. This is a clear advantage over inventing passwords manually.
When the CQL statement is executed, it will be visible in the CQLSH history (HISTORY command or in cqlsh_history file) but the password will not be logged, hence it cannot leak. It will also not appear in any auditing logs. Previously, Cassandra had to obfuscate such statements. This is not necessary anymore.
We can create a role with generated password like this:
cassandra@cqlsh> CREATE ROLE alice WITH GENERATED PASSWORD AND LOGIN = true;

or by CREATE USER:

cassandra@cqlsh> CREATE USER alice WITH GENERATED PASSWORD;
Once a password has been generated for alice (how it is delivered to her is out of scope of this documentation), she can log in:
$ cqlsh -u alice -p R7tb33?.mcAX
...
alice@cqlsh>
Note: It is recommended to save the password to ~/.cassandra/credentials, for example:
[PlainTextAuthProvider]
username = alice
password = R7tb33?.mcAX
and by setting auth_provider in ~/.cassandra/cqlshrc
[auth_provider]
module = cassandra.auth
classname = PlainTextAuthProvider
It is also possible to configure password validators in such a way that a user does not see why a password failed. This is driven by a password_validator configuration property called detailed_messages. When set to false, the reported violations will be very brief:
alice@cqlsh> ALTER ROLE alice WITH PASSWORD = 'myattempt';
InvalidRequest: Error from server: code=2200 [Invalid query] message="Password was not set as it violated configured password strength policy. You may also use 'GENERATED PASSWORD' upon role creation or alteration."
The following command will automatically generate a new password that meets all configured security requirements.
alice@cqlsh> ALTER ROLE alice WITH GENERATED PASSWORD;
Several potential enhancements to password generation and validation could be implemented in future releases. One promising extension would be validating new passwords against previous values. This would prevent users from reusing passwords until after they’ve created a specified number of different passwords. A related enhancement could include restricting how frequently users can change their passwords, preventing rapid cycling through passwords to circumvent history-based restrictions.
These features, while valuable for comprehensive password security, were considered beyond the scope of the initial implementation and may be addressed in future updates.
Final thoughts and next steps
The Cassandra Password Validator and Generator implemented under CEP-24 represents a significant improvement in Cassandra’s security posture.
By providing robust, configurable password policies with built-in enforcement mechanisms and convenient password generation capabilities, organizations can now ensure compliance with their security standards directly at the database level. This not only strengthens overall system security but also improves the user experience by eliminating guesswork around password requirements.
As Cassandra continues to evolve as an enterprise-ready database solution, these security enhancements demonstrate a commitment to meeting the demanding security requirements of modern applications while maintaining the flexibility that makes Cassandra so powerful.
Ready to experience CEP-24 yourself? Try it out on the Instaclustr Managed Platform and spin up your first Cassandra cluster for free.
CEP-24 is just our latest contribution to open source. Check out everything else we’re working on here.
Announcing ScyllaDB 2025.1, Our First Source-Available Release
Tablets are enabled by default + new support for mixed clusters with varying core counts and resources

The ScyllaDB team is pleased to announce the release of ScyllaDB 2025.1.0 LTS, a production-ready ScyllaDB Long Term Support Major Release. ScyllaDB 2025.1 is the first release under our Source-Available License. It combines all the improvements from Enterprise releases (up to 2024.2) and Open Source releases (up to 6.2) into a single source-available code base.

ScyllaDB 2025.1 enables Tablets by default. It also improves performance and scaling speed and allows mixed clusters (nodes that use different instance types). Several new capabilities, updates, and hundreds of bug fixes are also included. This release is the base for the upcoming ScyllaDB X Cloud, the new and improved ScyllaDB Cloud. That release offers fast boot, fast scaling (out and in), and an upper limit of 90% storage utilization (compared to 70% today).

In this blog, we’ll highlight the new capabilities our users have frequently asked about. For the complete details, read the release notes.

Read the detailed release notes on our forum
Learn more about ScyllaDB Enterprise
Get ScyllaDB Enterprise 2025.1
Upgrade from ScyllaDB Enterprise 2024.x to 2025.1
Upgrade from ScyllaDB Open Source 6.2 to 2025.1

ScyllaDB Enterprise customers are encouraged to upgrade to ScyllaDB Enterprise 2025 and are welcome to contact our Support Team with questions.

Read the detailed release notes

Tablets Overview

In this release, ScyllaDB makes tablets the default for new Keyspaces. “Tablets” is a new data distribution algorithm that improves upon the legacy vNodes approach from Apache Cassandra. Unlike vNodes, which statically distribute tables across nodes based on the token ring, Tablets dynamically assign tables to a subset of nodes based on size. Future updates will optimize this distribution using CPU and OPS information. Key benefits of Tablets include:

Faster scaling and topology changes, allowing new nodes to serve reads and writes as soon as the first Tablet is migrated. Together with Raft-based Strongly Consistent Topology Updates, Tablets enable users to add multiple nodes simultaneously.
Automatic support for mixed clusters with varying core counts.

You can run some Keyspaces with Tablets enabled and others with Tablets disabled. In this case, scaling improvements will only apply to Keyspaces with Tablets enabled. vNodes will continue to be supported for existing and new Keyspaces using the `tablets = { 'enabled': false }` option.

Tablet Merge

Tablet Merge is a new feature in 2025.1. The goal of Tablet Merge is to reduce the tablet count for a shrinking table, similar to how Split increases the count while the table is growing. The load balancer’s decision to merge was already implemented (it came with the infrastructure introduced for Split), but it wasn’t acted on until now. The topology coordinator will now detect shrunk tables and merge adjacent tablets to meet the average tablet replica size goal. #18181
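As noted in the overview above, tablets are controlled per keyspace via the `tablets` option. Here is a minimal sketch using the Python driver; the contact point, keyspace names, and replication settings are illustrative rather than a recommendation.

from cassandra.cluster import Cluster  # also published as scylla-driver

session = Cluster(["127.0.0.1"]).connect()

# New keyspaces default to tablets in 2025.1; shown explicitly here for clarity.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS events
    WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor': 3}
    AND tablets = {'enabled': true}
""")

# Keyspaces that still need vNode-only features (see the limitations below)
# can opt out of tablets and keep the legacy vNode distribution.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS legacy_features
    WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor': 3}
    AND tablets = {'enabled': false}
""")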
Tablet-Based Keyspace Limitations Tablets Keyspaces are NOT yet
enabled for the following features: Materialized Views Secondary
Indexes Change Data Capture (CDC) Lightweight Transactions (LWT)
Counters Alternator (Amazon DynamoDB API) Using Tablets You can
continue using these features by using a vNode based Keyspace.
Monitoring Tablets To monitor Tablets in real time, upgrade
ScyllaDB Monitoring Stack to release 4.7, and use the new dynamic
Tablet panels shown below. Tablets Driver Support The following
driver versions and newer support Tablets Java driver 4.x, from
4.18.0.2 Java driver 3.x, from 3.11.5.4 Python driver, from 3.28.1
Gocql driver, from 1.14.5 Rust driver, from 0.13.0 Legacy ScyllaDB
and Apache Cassandra drivers will continue to work with ScyllaDB.
However, they will be less efficient when working with tablet-based
Keyspaces. File-based Streaming for Tablets File-based streaming
enhances tablet migration. In previous releases, migration involved
streaming mutation fragments, requiring deserialization and
reserialization of SSTable files. In this release, we directly
stream entire SSTables, eliminating the need to process mutation
fragments. This method reduces network data transfer and CPU usage,
particularly for small-cell data models. File-based streaming is
utilized for tablet migration in all keyspaces with tablets
enabled. More in
Docs. Arbiter and Zero-Token Nodes There is now support for
zero-token nodes. These nodes do not replicate data but can assist
in query coordination and Raft quorum voting. This allows the
creation of an Arbiter: a tiebreaker node that helps maintain
quorum in a symmetrical two-datacenter cluster. If one data center
fails, the Arbiter (placed in a third data center) keeps the quorum
alive without replicating user data or incurring network and
storage costs. #15360

You can use nodetool status to find the list of zero-token nodes.

Additional Key Features

The following features were introduced in ScyllaDB Enterprise Feature Release 2024.2 and are now available in Long-Term Support ScyllaDB 2025.1. For a full description of each, see the 2024.2 release notes and ScyllaDB Docs.

- Strongly Consistent Topology Updates. With Raft-managed topology enabled, all topology operations are internally sequenced in a consistent way. Strongly Consistent Topology Updates is now the default for new clusters and should be enabled after upgrade for existing clusters.
- Strongly Consistent Auth Updates. Role-Based Access Control (RBAC) commands like create role or grant permission are safe to run in parallel, without a risk of getting out of sync with themselves or with other metadata operations, like schema changes. As a result, there is no need to update the system_auth RF or run repair when adding a datacenter.
- Strongly Consistent Service Levels. Service Levels allow you to define attributes like timeout per workload. Service Levels are now strongly consistent using Raft, like Schema, Topology, and Auth.
- Improved network compression for inter-node RPC. New compression improvements for node-to-node communication: using zstd instead of lz4, and using a shared dictionary periodically re-trained on the actual traffic instead of message-by-message compression.
- Alternator RBAC. Authorization: Alternator now supports Role-Based Access Control (RBAC); control is done via CQL.
- Native Nodetool. The nodetool utility provides simple command-line operations and attributes. The native nodetool works much faster. Unlike the Java version, it is part of the ScyllaDB repo, allowing easier and faster updates.
- Removing the JMX Server. With the native nodetool (above), the JMX server has become redundant and is no longer part of the default ScyllaDB installation or image.
- Maintenance Mode. Maintenance mode is a new mode in which the node does not communicate with clients or other nodes and only listens on the local maintenance socket and the REST API.
- Maintenance Socket. The Maintenance Socket provides a new way to interact with ScyllaDB from within the node it runs on. It is mainly intended for debugging. As described in the Maintenance Socket docs, you can use cqlsh with the Maintenance Socket.

Read the detailed release notes.

Learn Apache Cassandra® 5.0 Data Modeling
We're excited to announce my ground-up Cassandra Data Modeling Series for 2025, a comprehensive, application-focused journey designed to arm developers and architects with the latest knowledge.

Inside ScyllaDB Rust Driver 1.0: A Fully Async Shard-Aware CQL Driver Using Tokio
The engineering challenges and design decisions that led to the 1.0 release of ScyllaDB Rust Driver.

ScyllaDB Rust Driver is a client-side, shard-aware driver written in pure Rust with a fully async API using Tokio. The Rust Driver project was born back in 2021 during ScyllaDB's internal developer hackathon. Our initial goal was to provide a native implementation of a CQL driver that's compatible with Apache Cassandra and also contains a variety of ScyllaDB-specific optimizations. Later that year, we released ScyllaDB Rust Driver 0.2.0 on the Rust community's package registry, crates.io. Comparative benchmarks for that early release confirmed that the driver's performance was (more than) satisfactory, so we continued working on it, with the goal of an official release and an ambitious plan to unify other ScyllaDB-specific drivers by converting them into bindings for our Rust driver. Now that we've reached a major milestone for the Rust Driver project (officially releasing ScyllaDB Rust Driver 1.0), it's time to share the challenges and design decisions that led to this 1.0 release. Learn about our versioning rationale.

What's New in ScyllaDB Rust Driver 1.0?

Along with stability, this new release brings powerful new features, better performance, and smarter design choices. Here's a look at what we worked on and why.

Refactored Error Types

Our original error types met ad hoc needs but weren't ideal for long-term production use. They weren't very type-safe, some of them stringified other errors, and they did not provide sufficient information to diagnose an error's root cause. Some of them were severely abused, most notably ParseError. There was a one-to-rule-them-all error type: the ubiquitous QueryError, which many user-facing APIs used to return.

Before

Back in 0.13 of the driver, QueryError suffered from several problems:

- The structure was painfully flat, with extremely niche errors (such as UnableToAllocStreamId) being just inline variants of this enum.
- Many variants contained just strings. The worst offender was InvalidMessage, which jammed all sorts of different error types into a single string. Many errors were buried inside IoError, too. This stringification broke the clear code path to the underlying errors, affecting readability and causing chaos.
- Due to the omnipresent stringification described above, matching on error kinds was virtually impossible.
- The error types were public and, at the same time, not decorated with the #[non_exhaustive] attribute. Because of this, adding any new error variant required breaking the API! That was unacceptable for a driver aspiring to bear the name of an API-stable library.
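For illustration, here is a rough sketch of that old, flat shape, reconstructed from the points above rather than copied from the 0.13 source:

```rust
// Illustrative only: an abridged reconstruction of the old catch-all error,
// not the exact 0.13 definition.
#[derive(Debug)]
pub enum QueryError {
    // Niche errors inlined as flat variants...
    UnableToAllocStreamId,
    // ...variants that carry nothing but a string...
    InvalidMessage(String),
    // ...and unrelated failures buried behind stringified or wrapped errors.
    IoError(std::sync::Arc<std::io::Error>),
    TimeoutError,
}
```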
In version 1.0.0, the new error types are clearer and more helpful. The error hierarchy now reflects the code flow. Error conversions are explicit, so no undesired, confusing conversion takes place. The one-to-fit-them-all error type has been replaced. Instead, APIs return various error types that exhaustively cover the possible errors, without any need to match on error variants that can't occur when executing a given function. QueryError's new counterpart is ExecutionError.
Note that:

- There is much more nesting, reflecting the driver's modules and abstraction layers.
- The stringification is gone!
- Error types are decorated with the #[non_exhaustive] attribute, which requires downstream crates to always include a catch-all arm (like _ => { … }) when matching on them. This way, we prevent breaking downstream crates' code when adding a new error variant.
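As an illustration of the new style, here is an abridged sketch with invented variant names, not the driver's full definition:

```rust
// Abridged illustration of the nested, #[non_exhaustive] style; the real
// ExecutionError has more variants and richer payloads.
use std::sync::Arc;

#[derive(Debug)]
#[non_exhaustive]
pub enum ExecutionError {
    /// Something went wrong while talking to a node.
    ConnectionError(ConnectionError),
    /// The request itself was rejected or failed.
    RequestError(RequestError),
}

#[derive(Debug)]
#[non_exhaustive]
pub enum ConnectionError {
    Io(Arc<std::io::Error>),
    Timeout,
}

#[derive(Debug)]
#[non_exhaustive]
pub enum RequestError {
    UnexpectedResponse,
}

fn describe(err: &ExecutionError) -> &'static str {
    match err {
        ExecutionError::ConnectionError(ConnectionError::Timeout) => "timed out",
        ExecutionError::ConnectionError(_) => "connection problem",
        // #[non_exhaustive] forces downstream crates to keep a catch-all arm,
        // so adding variants later is not a breaking change.
        _ => "other execution error",
    }
}
```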
Refactored Module Structure

The module structure also stemmed from various ad hoc decisions. Users familiar with older releases of our driver may recall, for example, the ubiquitous transport module. It used to contain a bit of absolutely everything: essentially, it was a flat bag with no real deeper structure. Back in 0.15.1, the module structure looked like this (omitting the modules that were not later restructured):

transport
    load_balancing
        default.rs
        mod.rs
        plan.rs
    locator
        (submodules)
    caching_session.rs
    cluster.rs
    connection_pool.rs
    connection.rs
    downgrading_consistency_retry_policy.rs
    errors.rs
    execution_profile.rs
    host_filter.rs
    iterator.rs
    metrics.rs
    node.rs
    partitioner.rs
    query_result.rs
    retry_policy.rs
    session_builder.rs
    session_test.rs
    session.rs
    speculative_execution.rs
    topology.rs
history.rs
routing.rs

The new module structure clarifies the driver's separate abstraction layers. Each higher-level module is documented with a description of which abstractions it should hold. We also refined our item export policy. Before, there could be multiple paths to import an item from. Now each item can be imported from just one path: either its original path (i.e., where it is defined) or its re-export path (i.e., where it is imported and then re-exported from). In 1.0.0, the module structure is the following (again, omitting the unchanged modules):

client
    caching_session.rs
    execution_profile.rs
    pager.rs
    self_identity.rs
    session_builder.rs
    session.rs
    session_test.rs
cluster
    metadata.rs
    node.rs
    state.rs
    worker.rs
network
    connection.rs
    connection_pool.rs
errors.rs (top-level module)
policies
    address_translator.rs
    host_filter.rs
    load_balancing
        default.rs
        plan.rs
    retry
        default.rs
        downgrading_consistency.rs
        fallthrough.rs
        retry_policy.rs
    speculative_execution.rs
observability
    driver_tracing.rs
    history.rs
    metrics.rs
    tracing.rs
response
    query_result.rs
    request_response.rs
routing
    locator (unchanged contents)
    partitioner.rs
    sharding.rs
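For orientation, a few imports as they would look under the 1.0 layout; these paths are inferred from the tree above, so check the crate docs on docs.rs for the canonical ones:

```rust
// Indicative imports under the 1.0 module layout (not an exhaustive list).
use scylla::client::session::Session;
use scylla::client::session_builder::SessionBuilder;
use scylla::errors::ExecutionError; // errors.rs is a top-level module
use scylla::response::query_result::QueryResult;
```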
Removed Unstable Dependencies From the Public API

With the ScyllaDB Rust Driver 1.0 release, we wanted to fully eliminate unstable (pre-1.0) dependencies from the public API. Instead, we now expose these dependencies through feature flags that explicitly encode the major version number, such as "num-bigint-03". Why did we do this?

- API Stability & Semver Compliance: the 1.0 release promises a stable API, so breaking changes must be avoided in future minor updates. If our public API directly depended on pre-1.0 crates, any breaking change in those dependencies would force us to introduce breaking changes as well. By removing them from the public API, we shield users from unexpected incompatibilities.
- Greater Flexibility for Users: developers using the ScyllaDB Rust driver can now opt into specific versions of optional dependencies via feature flags. This allows better integration with their existing projects without being forced to upgrade or downgrade dependencies due to our choices.
- Long-Term Maintainability: by isolating unstable dependencies, we reduce technical debt and make future updates easier. If a dependency introduces breaking changes, we can simply update the corresponding feature flag (e.g., "num-bigint-04") without affecting the core driver API.
- Avoiding Unnecessary Dependencies: some users may not need certain dependencies at all. Exposing them via opt-in feature flags helps keep the dependency tree lean, improving compilation times and reducing potential security risks.
- Improved Ecosystem Compatibility: by allowing users to choose specific versions of dependencies, we minimize conflicts with other crates in their projects. This is particularly important when working with the broader Rust ecosystem, where dependency version mismatches can lead to build failures or unwanted upgrades.
- Support for Multiple Versions Simultaneously: by namespacing dependencies with feature flags (e.g., "num-bigint-03" and "num-bigint-04"), users can leverage multiple versions of the same dependency within their project. This is particularly useful when integrating with other crates that may require different versions of a shared dependency, reducing version conflicts and easing the upgrade path.

How this impacts users:

- The core ScyllaDB Rust driver remains stable and free from external pre-1.0 dependencies (with one exception: the popular rand crate, which is still in 0.*).
- If you need functionality from an optional dependency, enable it explicitly using the appropriate feature flag (e.g., "num-bigint-03"), as sketched below.
- Future updates can introduce new versions of dependencies under separate feature flags, without breaking existing integrations.

This change ensures that the ScyllaDB Rust driver remains stable, flexible, and future-proof, while still providing access to powerful third-party libraries when needed.
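For example, opting into big-integer support looks roughly like this. It is a sketch: it assumes the "num-bigint-03" flag wires num_bigint::BigInt to the CQL varint type (which is what these flags exist for), and the keyspace, table, and column names are placeholders.

```rust
// In Cargo.toml (assumed dependency lines for this sketch):
//   scylla = { version = "1", features = ["num-bigint-03"] }
//   num-bigint = "0.3"
use num_bigint::BigInt;
use scylla::client::session::Session;

async fn store_big_number(session: &Session) -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder table with a varint column named `value`.
    let big = BigInt::parse_bytes(b"123456789012345678901234567890", 10).unwrap();
    session
        .query_unpaged("INSERT INTO ks.counters (id, value) VALUES (?, ?)", (1i32, big))
        .await?;
    Ok(())
}
```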
Rustls Support for TLS

The driver now supports Rustls, simplifying TLS connections and removing the need for additional system C libraries (openssl). Previously, ScyllaDB Rust Driver only supported OpenSSL-based TLS, like our other drivers did. However, the Rust ecosystem has its own native TLS library: Rustls. Rustls is designed for both performance and security, leveraging Rust's strong memory safety guarantees while often outperforming OpenSSL in real-world benchmarks. With the 1.0.0 release, we have added Rustls as an alternative TLS backend. This gives users more flexibility in choosing their preferred implementation, and additional system C libraries (openssl) are no longer required to establish secure connections.

Feature-Based Backend Selection

Just as we isolated pre-1.0 dependencies via version-encoded feature flags (see the previous section), we applied the same strategy to TLS backends. Both OpenSSL and Rustls are exposed through opt-in feature flags. This allows users to explicitly select their desired implementation and ensures:

- API Stability: users can enable TLS support without introducing unnecessary dependencies in their projects.
- Avoiding Unwanted Conflicts: users can choose the TLS backend that best fits their project, without forcing a dependency on OpenSSL or Rustls if they don't need it.
- Future-Proofing: if a breaking change occurs in a TLS library, we can introduce a new feature flag (e.g., "rustls-023", "openssl-010") without modifying the core API.

Abstraction Over TLS Backends

We also introduced an
abstraction layer over the TLS backends. Key enums such as TlsProvider, TlsContext, TlsConfig, and Tls now contain variants corresponding to each backend. This means that switching between OpenSSL and Rustls (as well as between different versions of the same backend) is a matter of enabling the respective feature flag and selecting the desired variant. If you prefer Rustls, enable the "rustls-023" feature and use the TlsContext::Rustls variant. If you need OpenSSL, enable "openssl-010" and use TlsContext::OpenSSL. If you want both backends or different versions of the same backend (in production or just to explore), you can enable multiple features and it will "just work." If you don't require TLS at all, you can exclude both, reducing dependency overhead.
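Something along these lines then connects using the Rustls backend. This is a sketch under explicit assumptions: the tls_context(...) builder method, the import path of TlsContext, and the payload carried by TlsContext::Rustls are assumptions here, so check the crate docs for the exact API.

```rust
// Sketch only. Assumed details (see the note above): the tls_context(...)
// setter, the TlsContext import path, and the Rustls variant's payload.
use std::sync::Arc;

use scylla::client::session::Session;
use scylla::client::session_builder::SessionBuilder;
use scylla::network::tls::TlsContext; // assumed path (tls.rs lives under network/)

async fn connect_with_rustls(
    tls_config: Arc<rustls::ClientConfig>, // built elsewhere with your roots/certs
) -> Result<Session, Box<dyn std::error::Error>> {
    let session = SessionBuilder::new()
        .known_node("db.example.com:9142") // placeholder TLS contact point
        .tls_context(Some(TlsContext::Rustls(tls_config))) // assumed variant payload
        .build()
        .await?;
    Ok(session)
}
```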
Our ultimate goal with adding Rustls support and refining TLS backend selection was to ensure that the ScyllaDB
Rust Driver is both flexible and well-integrated with the Rust
ecosystem. We hope this better accommodates users’ different
performance and security needs.

The Battle For The Empty Enums

We really wanted to let users build the driver with no TLS backend opted in. In particular, this required making our enums work without any variants (i.e., as empty enums). This was a bit tricky. For instance, one cannot match over &x, where x: X is an instance of the enum, if X is empty.
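Specifically, consider a definition and match along these lines (a minimal reconstruction based on the compiler output quoted below, not the driver's actual code):

```rust
// Minimal reconstruction: the enum's only variant exists when the
// hypothetical feature "a" is enabled, so with the feature disabled the
// enum is empty.
enum X {
    #[cfg(feature = "a")]
    A(String),
}

// First attempt: match on the reference.
fn handle(x: &X) {
    match x {
        #[cfg(feature = "a")]
        X::A(s) => println!("got: {s}"),
    }
}
```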
This would not compile:

    error[E0004]: non-exhaustive patterns: type `&X` is non-empty
      --> scylla/src/network/tls.rs:230:11
        |
    230 |     match x {
        |           ^
        |
    note: `X` defined here
      --> scylla/src/network/tls.rs:223:6
        |
    223 | enum X {
        |      ^
        = note: the matched value is of type `&X`
        = note: references are always considered inhabited
    help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern as shown
        |
    230 ~     match x {
    231 +         _ => todo!(),
    232 +     }
        |

Note that references are always considered inhabited. Therefore, in order to make the code compile in such a case, we have to match on the value itself, not on a reference.
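In the reconstructed example, that means dereferencing in the match (the cfg-gated arm is only present when the feature is enabled):

```rust
// Second attempt: match on the value itself by dereferencing the reference.
// With the feature disabled this is an empty match over an empty enum, which
// compiles; with the feature enabled it triggers the E0507 error shown next.
fn handle(x: &X) {
    match *x {
        #[cfg(feature = "a")]
        X::A(s) => println!("got: {s}"),
    }
}
```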
But if we now enable the "a" feature, we get another error:

    error[E0507]: cannot move out of `x` as enum variant `A` which is behind a shared reference
      --> scylla/src/network/tls.rs:230:11
        |
    230 |     match *x {
        |           ^^
    231 |         #[cfg(feature = "a")]
    232 |         X::A(s) => { /* Handle it */ }
        |              -
        |              |
        |              data moved here
        |              move occurs because `s` has type `String`, which does not implement the `Copy` trait
        |
    help: consider removing the dereference here
        |
    230 -     match *x {
    230 +     match x {
        |

Ugh. rustc literally advises us to revert the change. No luck… then we would end up with the same problem as before. Hmmm… Wait a moment… I vaguely remember Rust had an obscure reserved word used for matching by reference: ref. Let's try it out.
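Applying ref in the reconstructed example binds the payload by reference instead of moving it:

```rust
// Final version: `ref` binds the payload by reference, so nothing is moved
// out from behind the shared reference. This compiles with the feature
// enabled and, as an empty match, with it disabled too.
fn handle(x: &X) {
    match *x {
        #[cfg(feature = "a")]
        X::A(ref s) => println!("got: {s}"),
    }
}
```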
Yay, it compiles! This is how we made our (possibly) empty enums work… finally.

Faster and Extended Metrics

Performance
matters. So we reworked how the driver handles metrics, eliminating
bottlenecks and reducing overhead for those who need real-time
insights. Moreover, metrics are now an opt-in feature, so you only
pay (in terms of resource consumption) for what you use. And we
added even more metrics!

Background

Benchmarks showed that the driver may spend significant time logging query latency. Flamegraphs revealed that collecting metrics can consume up to 11.68% of CPU time! We suspected that the culprit was contention on a mutex guarding the metrics histogram. Even though the issue was discovered back in 2021 (!), we postponed dealing with it because the publicly available crates didn't yet include a lock-free histogram (which we hoped would reduce the overhead).

Lock-free histogram

As we approached the 1.0 release deadline, two
contributors (Nikodem Gapski and Dawid Pawlik) engaged with the
issue. Nikodem explored the new generation of the
histogram
crate and discovered that someone had added
a lock-free histogram: AtomicHistogram
. “Great”, he
thought. “This is exactly what’s needed.” Then, he discovered that
AtomicHistogram
is flawed: there’s a logical race due
to insufficient synchronization! To fix the problem, he ported the
Go implementation of LockFreeHistogram
from
Prometheus, which prevents logical races at the cost of execution
time (though it was still performing much better than a mutex).
If you are interested in all the details about what was wrong
with AtomicHistogram
and how
LockFreeHistogram
tries to solve it, see the
discussion in this PR. Eventually, the
histogram
crate’s maintainer joined the discussion and
convinced us that the skew caused by the logical races in
AtomicHistogram
is benign. Long story short, the histogram is a bit skewed anyway, and we need to accept it. In the end, we
accepted AtomicHistogram
for its lower overhead
compared to LockFreeHistogram
.
LockFreeHistogram
is still available on
its author’s dedicated branch. We left ourselves a way to
replace one histogram implementation with another if we decide it’s
needed.

More metrics

The Rust driver is a proud base for cpp-rust-driver (a rewrite of cpp-driver as a thin bindings layer on top of, as you can probably guess at this point, the Rust driver). Before cpp-driver functionality could be implemented in cpp-rust-driver, it had to be implemented in the Rust driver first. That was the case for some metrics, too. The same two contributors took care of that as well. (Btw, thanks, guys! Some cool sea monster swag will be coming your way.)

Metrics as an opt-in

Not every driver user needs
metrics. In fact, it’s quite probable that most users don’t check
them even once. So why force users to pay (in terms of resource
consumption) for metrics they’re not using? To avoid this, we put
the metrics module behind the "metrics" feature (which is disabled by default). Even more performance gain!
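If you do enable metrics, usage looks roughly like this. It is a sketch: it assumes the long-standing Session::get_metrics() accessor and its pre-1.0 reporting method names carry over, so verify the exact names against the docs.

```rust
// In Cargo.toml (assumed line for this sketch):
//   scylla = { version = "1", features = ["metrics"] }
use scylla::client::session::Session;

fn report(session: &Session) {
    // Assumption: these accessor names follow the pre-1.0 metrics API.
    let metrics = session.get_metrics();
    println!("requests: {}", metrics.get_queries_num());
    println!("errors:   {}", metrics.get_errors_num());
    if let Ok(avg) = metrics.get_latency_avg_ms() {
        println!("avg latency: {avg} ms");
    }
}
```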
For a comprehensive list of changes introduced in the 1.0 release, see our release notes.

Stepping Stones on the Path to the 1.0 Release

We've been working towards this 1.0 release for years, and
it involved a lot of incremental improvements that we rolled out in
minor releases along the way. Here’s a look at the most notable
ones.

Ser/De (from versions 0.11 and 0.15)

Previous releases
reworked the serialization and deserialization APIs to improve
safety and efficiency. In short, the 0.11 release introduced a
revamped serialization API that leverages Rust’s type system to
catch misserialization issues early. And the 0.15 release refined
deserialization for better performance and memory efficiency. Here
are more details.

Serialization API Refactor (released in 0.11): Leverage Rust's Powerful Type System to Prevent Misserialization, for Safer and More Robust Query Binding

Before 0.11, the driver's serialization API had several pitfalls, particularly around type safety. The old approach relied on loosely structured traits and structs (Value, ValueList, SerializedValues, BatchValues, etc.), which lacked strong compile-time
guarantees. This meant that if a user mistakenly bound an incorrect
type to a query parameter, they wouldn’t receive an immediate,
clear error. Instead, they might encounter a confusing
serialization error from ScyllaDB — or, in the worst case, could
suffer from silent data corruption! To address these issues, we
introduced a redesigned serialization API that replaces the old traits with SerializeValue, SerializeRow, and new versions of BatchValues and SerializedValues. This new approach enforces stronger type safety. Now, type mismatches are caught locally, at compile time or at runtime, rather than surfacing as obscure database errors after query execution. Key benefits of this refactor include:

- Early Error Detection: incorrectly typed bind markers now trigger clear, local errors instead of ambiguous database-side failures.
- Stronger Type Safety: the new API ensures that only compatible types can be bound to queries, reducing the risk of subtle bugs.
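Here is a minimal sketch of binding values with the revamped API; the keyspace, table, and column names are placeholders, and error handling is collapsed into a boxed error.

```rust
// Sketch: the derive matches struct fields to columns/bind markers by name.
use scylla::client::session::Session;
use scylla::SerializeRow;

#[derive(SerializeRow)]
struct NewUser {
    id: i64,
    name: String,
}

async fn insert_user(session: &Session) -> Result<(), Box<dyn std::error::Error>> {
    let user = NewUser { id: 42, name: "Ada".to_owned() };

    // If a field's Rust type is incompatible with the corresponding CQL
    // column, this fails with a clear, local serialization error instead of
    // an obscure server-side one (or silent corruption).
    session
        .query_unpaged("INSERT INTO ks.users (id, name) VALUES (?, ?)", user)
        .await?;
    Ok(())
}
```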
Deserialization API Refactor (released in 0.15): For Better Performance and Memory Efficiency

Prior to release 0.15, the driver's
deserialization process was burdened with multiple inefficiencies,
slowing down applications and increasing memory usage. The first
major issue was type erasure — all values were initially converted
into the CQL-type-agnostic CqlValue before being transformed into
the user’s desired type. This unnecessary indirection introduced
additional allocations and copying, making the entire process
slower than it needed to be. But the inefficiencies didn’t stop
there. Another major flaw was the eager allocation of columns and
rows. Instead of deserializing data on demand, every column in a
row was eagerly allocated at once — whether it was needed or not.
Even worse, each page of query results was fully materialized into a Vec<Row>. As a result, all rows in a page were allocated at the same time, all of them in the form of the ephemeral CqlValue. This usually required further conversion to the user's desired type and incurred allocations. For queries returning large datasets, this led to excessive memory usage and unnecessary CPU overhead. To fix these issues, we introduced a completely redesigned deserialization API. The new approach ensures that:

- CQL values are deserialized lazily, directly into user-defined types, skipping CqlValue entirely and eliminating redundant allocations.
- Columns are no longer eagerly deserialized and allocated. Memory is used only for the fields that are actually accessed.
- Rows are streamed instead of eagerly materialized. This avoids unnecessary bulk allocations and allows more efficient processing of large result sets.
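From the user's side, this looks roughly like the sketch below; the table and column names are placeholders, and the helper names (the derive and rows_stream) reflect the revamped API as described, so check the crate docs for the exact signatures.

```rust
// Sketch of the lazy, typed deserialization path.
use futures::TryStreamExt; // for try_next() on the typed row stream
use scylla::client::session::Session;
use scylla::DeserializeRow;

#[derive(DeserializeRow)]
struct UserRow {
    id: i64,
    name: String,
}

async fn read_users(session: &Session) -> Result<(), Box<dyn std::error::Error>> {
    // Rows are pulled page by page and deserialized lazily, straight into
    // UserRow, without first materializing a Vec<Row> full of CqlValue.
    let mut rows = session
        .query_iter("SELECT id, name FROM ks.users", &[])
        .await?
        .rows_stream::<UserRow>()?;

    while let Some(user) = rows.try_next().await? {
        println!("{} -> {}", user.id, user.name);
    }
    Ok(())
}
```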
Paging API (released in 0.14)

We heard from our users that the driver's API for executing queries was prone to
misuse with regard to query paging. For instance, the
Session::query()
and Session::execute()
methods would silently return only the first page of the result if
page size was set on the statement. On the other hand, if page size
was not set, those methods would perform unpaged queries, putting
high and undesirable load on the cluster. Furthermore,
Session::query_paged()
and
Session::execute_paged()
would only fetch a single
page! (if page size was set on the statement; otherwise, the query
would not be paged…!!!). To combat this, we decided to redesign the paging API in a way that no other driver had done before. We concluded that the API must be crystal clear about paging, and that paging must be controlled by the method used, not by the statement itself.

- We ditched query() and query_paged() (as well as their execute counterparts), replacing them with query_unpaged() and query_single_page(), respectively (and similarly for execute*).
- We separated the setting of page size from the paging method itself. Page size is now mandatory on the statement (before, it was optional). The paging method (no paging, manual paging, or transparent automated paging) is now selected by using different session methods ({query,execute}_unpaged(), {query,execute}_single_page(), and {query,execute}_iter(), respectively). This separation is likely the most important change we made to help users avoid footguns and pitfalls.
- We introduced strongly typed PagingState and PagingStateResponse abstractions. This made it clearer how to use manual paging (available via {query,execute}_single_page()).
- Ultimately, we provided a cheat sheet in the Docs that describes best practices regarding statement execution.
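A compact sketch of choosing among these methods (placeholder table names, error handling collapsed; the cheat sheet in the Docs remains the authoritative guide):

```rust
// Sketch of picking an execution method under the 0.14+ paging API.
use scylla::client::session::Session;

async fn paging_choices(session: &Session) -> Result<(), Box<dyn std::error::Error>> {
    // Unpaged: the whole result arrives in a single response.
    // Reserve this for queries you know are small.
    let _small = session
        .query_unpaged("SELECT * FROM ks.events WHERE id = 1", &[])
        .await?;

    // Transparently paged: the driver fetches consecutive pages on demand;
    // combine with rows_stream::<T>() (shown earlier) to iterate rows lazily.
    let _pager = session
        .query_iter("SELECT * FROM ks.events", &[])
        .await?;

    // Manually paged: query_single_page() returns one page together with a
    // PagingStateResponse that you feed back in to fetch the next page
    // (see the cheat sheet in the Docs for the full loop).
    Ok(())
}
```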
Looking Ahead

The journey doesn't stop here. We have many ideas for possible future driver improvements:

- Adding a prelude module containing commonly used driver functionality.
- More performance optimizations to push the limits of scalability (and benchmarks to track how we're doing).
- Extending the CQL execution APIs to combine transparent paging with zero-copy deserialization, and introducing BoundStatement.
- Designing our own test harness to enable cluster sharing and reuse between tests (with hopes of speeding up test suite execution and encouraging people to write more tests).
- Reworking the CQL execution APIs for less code duplication and better usability.
- Introducing QueryDisplayer to pretty-print query results in a tabular way, similar to the cqlsh tool.
- (In our dreams) Rewriting cqlsh (based on the Python driver) as cqlsh-rs (a wrapper over the Rust driver).

And of course, we're always eager to hear from the community; your feedback helps shape the future of the driver!

Get Started with ScyllaDB Rust Driver 1.0

If you're working on cool Rust applications that use ScyllaDB and/or you want to contribute to this Rust driver project, here are some starting points:

- GitHub Repository: ScyllaDB Rust Driver. Contributions welcome!
- Crates.io: Scylla Crate
- Documentation: crate docs on docs.rs, and the guide to the driver.

And if you have any questions, please contact us on the community forum or ScyllaDB User Slack (see the #rust-driver channel).