Apache Cassandra® 6 Accord transactions: What you need to know
There have always been architectural trade-offs when considering a distributed database like Apache Cassandra versus a relational database. Cassandra excels at linear horizontal scalability, multi-region replication, and fault-tolerant uptime that relational systems can’t match. This comes at the expense of general-purpose ACID (Atomicity, Consistency, Isolation, Durability) transactions, which allow you to express complex, multi-row operations with guaranteed consistency.
With Cassandra 6 on its way to general availability (an alpha has already been released), we’re approaching a turning point where we can revisit whether these trade-offs still hold. The latest version delivers general-purpose ACID transactions through a new protocol called Accord. With Cassandra 6, those transactional guarantees will be native, without compromising Cassandra’s operational model or availability.
Transactions
In database parlance, a transaction says, “These operations belong together. They must all be applied, or none of them.” The classic example is a bank transfer. When you move money from one account to another, two things must happen: a debit and a credit. If the debit succeeds but the credit fails, money has disappeared. A transaction prevents this by guaranteeing the two operations are atomic, meaning they succeed or fail as a unit; combined with isolation, no other process can see an intermediate or half-finished state.
Experiences like these depend on transactional guarantees at the data layer, which rely on ACID semantics, particularly atomicity and isolation, to prevent inconsistent intermediate states.
For most developers who have worked with relational databases, transactions are so fundamental they’re almost invisible. For Cassandra users, comparable guarantees across multiple partitions or tables historically required significant application-level coordination or weren’t natively supported.
Coordination at scale is fundamentally hard
Because Cassandra is designed to deal with data replication and scaling, coordinating atomic changes across multiple nodes is inherently challenging (e.g., decrement a balance here, increment one there). All participating replicas must agree on an order of operations. Distributed consensus protocols exist to solve exactly this, but prior approaches came with trade-offs.
Raft and Zab are examples of leader-based protocols; routing every decision through an elected leader is a poor fit for Cassandra, where all nodes are peers. CEP-15 discusses prior solutions in more detail, but generally, leader-based approaches pose issues at scale.
The Accord protocol
The Accord protocol, proposed in CEP-15, is built to achieve fast, general-purpose distributed transactions that remain stable under the same failure conditions Cassandra already tolerates – with no elected leaders.
How it orders transactions
Accord is leaderless, so any node can coordinate any transaction. Transactions are assigned unique timestamps using hybrid logical clocks, where each node appends its own unique ID to its clock value to ensure global uniqueness across the cluster. Conflicting transactions execute in timestamp order across all replicas. Under normal conditions, a transaction reaches consensus in a single round trip.
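As an illustration of the timestamping idea (a simplified sketch, not Accord’s actual implementation; the `HybridLogicalClock` class and the `node_id` tiebreaker are hypothetical), a hybrid logical clock combines physical time with a logical counter and the node’s ID:

```python
import time

class HybridLogicalClock:
    """Toy hybrid logical clock: physical time plus a logical counter,
    with a node ID appended as a tiebreaker for global uniqueness."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.last_physical = 0
        self.logical = 0

    def now(self):
        physical = time.time_ns() // 1_000_000  # milliseconds
        if physical > self.last_physical:
            self.last_physical = physical
            self.logical = 0
        else:
            # Clock didn't advance (or went backwards): bump the logical part.
            self.logical += 1
        # (physical, logical, node_id) breaks ties by node ID, so no two
        # nodes can ever issue the same timestamp.
        return (self.last_physical, self.logical, self.node_id)

node_a, node_b = HybridLogicalClock("a"), HybridLogicalClock("b")
stamps = [node_a.now(), node_b.now(), node_a.now(), node_b.now()]
assert len(set(stamps)) == len(stamps)                   # globally unique
assert stamps[0] < stamps[2] and stamps[1] < stamps[3]   # monotonic per node
```

Because timestamps are totally ordered, every replica that applies conflicting transactions in timestamp order reaches the same result.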
The reorder buffer
The challenge with timestamp-based ordering in a geo-distributed system is that two transactions started concurrently from different regions might arrive at replicas in different orders, breaking fast-path consensus. Accord solves this by having replicas buffer incoming transactions. The wait time is precisely bounded to be just long enough to account for clock differences between nodes and network latency, and no longer. This guarantees that replicas always process transactions in the correct order without needing extra message rounds.
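A rough illustration of that buffering idea (a simplification, not Accord’s code; the skew and latency bounds are invented constants): each transaction gets a release deadline derived from its timestamp, and buffered transactions are processed in timestamp order once their deadline passes.

```python
import heapq

MAX_CLOCK_SKEW_MS = 5      # assumed bound on clock drift between nodes
ONE_WAY_LATENCY_MS = 40    # assumed bound on network delay

def release_deadline(txn_timestamp_ms):
    # A transaction stamped at T can only be preceded by transactions that
    # arrive up to T + skew + latency, so it is safe to execute after that.
    return txn_timestamp_ms + MAX_CLOCK_SKEW_MS + ONE_WAY_LATENCY_MS

class ReorderBuffer:
    """Buffers incoming transactions and releases them in timestamp order."""
    def __init__(self):
        self._heap = []

    def receive(self, timestamp_ms, txn):
        heapq.heappush(self._heap, (timestamp_ms, txn))

    def release_ready(self, now_ms):
        ready = []
        while self._heap and release_deadline(self._heap[0][0]) <= now_ms:
            ready.append(heapq.heappop(self._heap)[1])
        return ready

buf = ReorderBuffer()
buf.receive(105, "tx-late")   # arrives first, but stamped later
buf.receive(100, "tx-early")
assert buf.release_ready(120) == []                       # deadlines not reached
assert buf.release_ready(150) == ["tx-early", "tx-late"]  # correct order
```

The key property is that the wait is bounded by known skew and latency limits, so ordering is restored without extra message rounds.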
Fast-path electorates
When replicas fail, other leaderless protocols fall back to slower, more expensive message patterns. Accord avoids this by dynamically adjusting which replicas participate in fast-path decisions as failures occur, maintaining fast-path availability where other leaderless protocols degrade.
The net effect: strict serializable isolation across multiple partitions and tables, in a single round trip, with no leaders, and preserving performance characteristics under the same minority‑failure conditions that Cassandra is designed to tolerate.
New CQL syntax to support transactions
The most visible change for developers is new CQL syntax. Transactions in Cassandra 6 are wrapped in BEGIN TRANSACTION and COMMIT TRANSACTION blocks, similar to SQL syntax.
Let’s examine a flight booking transaction that must simultaneously reserve a seat and deduct loyalty miles from two separate tables. Note: Cassandra 6 is pre-release. Syntax shown reflects the current alpha and may evolve before general availability.
BEGIN TRANSACTION
  LET seat = (SELECT available FROM flight_seats WHERE flight_id = 'ZZ101' AND seat_number = '14C');
  LET miles = (SELECT balance FROM loyalty_accounts WHERE member_id = 'M-7823');
  IF seat.available = true AND miles.balance >= 25000 THEN
    UPDATE flight_seats SET available = false, booked_by = 'M-7823' WHERE flight_id = 'ZZ101' AND seat_number = '14C';
    UPDATE loyalty_accounts SET balance = miles.balance - 25000 WHERE member_id = 'M-7823';
  END IF
COMMIT TRANSACTION;
Everything between BEGIN TRANSACTION and
COMMIT TRANSACTION executes atomically with strict
serializable isolation from the perspective of all other concurrent
transactions. The LET clause reads current values from
the database and binds them to variables. The IF block uses those
values to guard the writes. If the seat is already taken or the
member doesn’t have enough miles, nothing happens. Both updates
either apply together or not at all, across two different tables
and two different partition keys.
This is logic that previously had to live in the application, complete with retry handling, race condition guards, and compensating operations if something failed halfway through. Now it lives in the database.
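For contrast, here is a hedged sketch of the application-side pattern such a guard used to require. The in-memory dictionaries and the `book_with_miles` helper are hypothetical stand-ins, not a real driver API; the point is the manual check-then-write-then-compensate choreography the database now handles:

```python
# Toy in-memory stand-ins for two Cassandra tables (illustrative data only).
flight_seats = {("ZZ101", "14C"): {"available": True, "booked_by": None}}
loyalty_accounts = {"M-7823": {"balance": 30000}}

def book_with_miles(flight_id, seat, member_id, cost):
    """Application-level 'transaction': check, write, and manually
    compensate if the second write fails. Races between the check and
    the writes are the caller's problem."""
    s = flight_seats[(flight_id, seat)]
    acct = loyalty_accounts[member_id]
    if not s["available"] or acct["balance"] < cost:
        return False
    s["available"], s["booked_by"] = False, member_id
    try:
        acct["balance"] -= cost   # if this step fails halfway...
    except Exception:
        # ...we must hand-roll a compensating operation to undo the first write.
        s["available"], s["booked_by"] = True, None
        raise
    return True

assert book_with_miles("ZZ101", "14C", "M-7823", 25000) is True
assert loyalty_accounts["M-7823"]["balance"] == 5000
assert book_with_miles("ZZ101", "14C", "M-7823", 25000) is False  # seat taken
```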
Enabling Accord in Cassandra 6: The CMS dependency
We can’t talk about Accord without discussing the Cluster Metadata Service (CMS), introduced alongside Accord as CEP-21. CMS must be enabled before Accord transactions are functional. For teams upgrading from Cassandra 5, this is the most significant operational change in the release.
CMS is required because Accord needs every replica to have the same authoritative view of cluster topology: which nodes own which data, and which replicas participate in a given transaction. Before Cassandra 6, this information was propagated via the eventually consistent Gossip protocol. That is suitable for normal reads and writes, but Accord’s correctness depends on knowing precisely who the transaction participants are before committing. CMS replaces Gossip-based metadata propagation with a distributed, linearizable transaction log, giving all nodes a consistent view of cluster state. Without it, Accord’s guarantees don’t hold.
Upgrading from Cassandra 5 to 6 – plan carefully
CMS initialization cannot begin until every node in the cluster is running Cassandra 6; it requires full cluster agreement, and no mixed-version clusters are supported. Before upgrading, disable any automation that could trigger schema changes, node bootstrapping, decommissions, or replacements. These operations are blocked during the upgrade window, and if they fire on an older node before CMS is initialized, the migration can fail in ways that require manual intervention to recover.
Once all nodes are upgraded, run nodetool cms
initialize on one node to activate CMS. This creates the
service with a single member, which is enough to unblock metadata
operations but is not suitable for production. Follow up
immediately with nodetool cms reconfigure to add more
members. CMS uses Paxos internally and requires a minimum of three
nodes for a viable quorum, with more recommended for production
depending on cluster size.
Important: CMS initialization is not easily reversible. Plan the upgrade window accordingly and treat it as a one-way operational step.
On a fresh Cassandra 6 cluster that wasn’t migrated from a previous version, CMS is automatically enabled. First, one node is designated as the initial CMS member. From there, CMS membership scales automatically based on cluster size, with the service adding members as the cluster grows without requiring manual intervention.
Of course, for Instaclustr users, our platform and techops team will take care of most of this for you and walk you through any requirements on your side when the time comes to upgrade.
Coexistence with Lightweight Transactions (LWT)
Existing LWT syntax (IF NOT EXISTS, IF EXISTS, conditional UPDATE/INSERT statements) continues to work. LWT fundamentally differs from Accord transactions: it is scoped to a single partition and far more limited. Accord doesn’t replace or break existing applications; using BEGIN TRANSACTION/COMMIT TRANSACTION is how developers opt into the broader cross-partition guarantees.
Every prior approach to distributed transactions required accepting one of three constraints: a global leader (single point of failure, WAN latency penalty), single-partition scope (LWT), or degraded performance under failure (prior leaderless protocols). The Accord paper’s central claim is that these constraints are not fundamental. They are artifacts of specific protocol design choices.
By combining flexible fast-path electorates with a timestamp reorder buffer on top of a leaderless execution model, Accord achieves:
- True cross-partition atomicity across multiple tables and partition keys
- Strict serializable isolation with formally proven correctness
- Single round-trip latency under normal operating conditions
- Failure‑tolerant steady‑state performance, avoiding the systematic degradation seen in earlier leaderless protocols
- No elected leaders, consistent with Cassandra’s existing operational model
This opens up workloads that previously couldn’t run natively on Cassandra: financial transaction processing, distributed inventory reservation, multi-step workflow coordination, and any application where “commit these changes together or not at all” is a strict correctness requirement.
Looking ahead
Though the Accord protocol is still maturing, the fundamental capability is finally here. We now have general-purpose, leaderless, multi-partition ACID transactions natively in Apache Cassandra.
The historically difficult problem of achieving strict serializable isolation in a geo-distributed system without compromising fault tolerance now has a proven, working answer.
For Cassandra users, this raises an exciting question: which workloads have you been routing to relational databases specifically because they needed transactional guarantees? It is time to reevaluate.
Stay tuned for a preview release of Cassandra 6 on the Instaclustr Platform and get ready to experience the power of ACID transactions on Cassandra for yourself!
The post Apache Cassandra® 6 Accord transactions: What you need to know appeared first on Instaclustr.
4 DynamoDB Configuration Changes for Significant Cost Savings
Learn about ways to cut DynamoDB costs with minimal code changes, zero migration, and no architectural upheaval

If you’re running DynamoDB at scale, your bill might be tens of thousands of dollars higher than it needs to be. However, most teams don’t need a complete migration or architecture overhaul to save significantly. These configuration changes, all easily implemented, can reduce your costs by 50-80%. This guide covers the biggest wins for DynamoDB cost optimization, with the real math behind each recommendation. We will be sharing links to the ScyllaDB Cost Calculator at calculator.scylladb.com, which lets you model different workload scenarios with customized parameters and compare ScyllaDB pricing to DynamoDB pricing at the click of a button.

Switch from on-demand to provisioned + reserved capacity
This is the single biggest DynamoDB cost lever for most teams. On-demand capacity is convenient at first, with no planning required and just pay-as-you-go. But it’s also expensive. After AWS’s recent price reduction, on-demand costs 7.5x more than provisioned capacity. Before the drop, it was roughly 15x. Either way, the math is brutal. Let’s look at a simple example: a mid-sized workload running 10,000 reads/sec and 10,000 writes/sec, 24/7.
- On-Demand: ~$239K/year
- Provisioned: ~$71K/year
- Reserved: ~$34K/year
That’s a 7x difference between on-demand and reserved. Even if your workload isn’t perfectly predictable, reserved capacity often pays for itself within months. The trade-off here is that you need a predictable load and the financial flexibility to commit. If your traffic varies wildly (or you’re short-term focused), provisioned mode without reservation is the middle ground. Still, it’s 3.3x cheaper than on-demand.

Optimize item sizes
DynamoDB’s billing is granular: writes are charged per 1KB of item size, and reads per 4KB. This means a 1.1KB item costs the same as a 2KB item on writes.
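To see why, here is a quick sketch of the rounding math. The unit prices are assumptions ($0.625 per million on-demand write units, $0.125 per million read units) chosen to approximately reproduce the figures above; check the AWS pricing page for your region:

```python
from math import ceil

WRITE_PRICE_PER_M = 0.625   # assumed on-demand $/million write request units
READ_PRICE_PER_M = 0.125    # assumed on-demand $/million read request units
SECONDS_PER_YEAR = 365 * 24 * 3600

def write_units(item_kb):
    return ceil(item_kb / 1.0)   # writes billed per 1KB, rounded up

def read_units(item_kb):
    return ceil(item_kb / 4.0)   # reads billed per 4KB, rounded up

# A 1.1KB item is billed like a 2KB item on writes:
assert write_units(1.1) == write_units(2.0) == 2

def annual_on_demand_cost(reads_per_sec, writes_per_sec, item_kb):
    reads = reads_per_sec * SECONDS_PER_YEAR * read_units(item_kb)
    writes = writes_per_sec * SECONDS_PER_YEAR * write_units(item_kb)
    return reads / 1e6 * READ_PRICE_PER_M + writes / 1e6 * WRITE_PRICE_PER_M

# The 10K reads/sec + 10K writes/sec example, by item size:
assert 230_000 < annual_on_demand_cost(10_000, 10_000, 1.0) < 245_000   # ~$239K
assert 1.9e6 < annual_on_demand_cost(10_000, 10_000, 10.0) < 2.2e6     # ~$2M
```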
If your items are consistently over these thresholds by a small margin, you’re paying 2-3x more than necessary. Let’s look at the same simple example, but with increasing item size for comparison.
- On-Demand with 1KB items: ~$239K/year
- On-Demand with 10KB items: ~$2M/year
- On-Demand with 100KB items: ~$20M/year
Common culprits for higher DynamoDB costs here:
- Nested JSON with whitespace or redundant fields
- Variable-length strings with no trimming
- Metadata or audit fields added to every item
- Base64-encoded payloads
What should you do? Compress JSON payloads before storage, remove redundant attributes, move infrequently accessed data to a separate table, or use a columnar storage strategy. Trimming just 200 bytes per item – across millions of items and thousands of writes/sec – adds up to thousands per month.

Deploy DAX (DynamoDB Accelerator) for read-heavy workloads
If your workload skews heavily toward reads and you’re not using an in-memory cache layer yet, DAX is one of the highest-ROI moves you can make. DAX sits in front of DynamoDB and caches frequently accessed items in memory. Cache hits bypass DynamoDB entirely, meaning you avoid the RCU charge. For hot items queried thousands of times per minute, a single DAX cluster can reduce DynamoDB read capacity needs. Let’s look at another simple example: a read-heavy workload running 80,000 reads/sec and 1,000 writes/sec, 24/7.
- On-Demand: ~$335K/year
- On-Demand with DAX: ~$158K/year
The cost math: a medium-sized DAX cluster (3 nodes, cache.r5g.8xlarge) costs roughly $9K/month. A high hit rate on your cache will proportionally reduce your more expensive read costs. That can lead to potentially hundreds of thousands of dollars saved with DynamoDB. Bonus: DAX also improves latency dramatically. Cache hits respond in microseconds rather than milliseconds.

Use the DynamoDB Infrequent Access (IA) table class
Not all tables are created equal.
If you have tables where data is accessed rarely but storage is high (think audit logs, historical records, compliance archives, or cold lookup tables), then the Standard-IA table class can save you substantially on storage. The pricing difference:
- Standard class: $0.25/GB
- Standard-IA class: $0.10/GB (up to 60% savings)
The catch is that IA has a minimum item size of 100 bytes and a minimum billing duration. It’s designed for cold data. So, if you’re frequently scanning or querying these tables, IA isn’t the right fit (read costs are identical, but you lose the write discount). However, for true archive tables accessed only occasionally, it’s a no-brainer.

The bottom line
These four DynamoDB changes require minimal code changes, zero migration, and no architectural upheaval. They’re configuration changes, caching tweaks, and data optimization. Combined, they typically deliver massive cost reductions. Start with switching to provisioned + reserved (highest impact), then layer in the others based on your workload shape. Ready to model your savings? Use the ScyllaDB Cost Calculator at calculator.scylladb.com to compare your current DynamoDB costs against these optimizations. And to save even more, see how ScyllaDB compares.

Shrinking the Search: Introducing ScyllaDB Vector Quantization
Learn how ScyllaDB Vector Quantization shrinks your vector index memory by up to 30x for cost-efficient, real-time AI applications

Earlier this year, ScyllaDB launched integrated Vector Search, delivering sub-2ms P99 latencies for billion-vector datasets. However, high-dimensional vectors are notoriously memory-hungry. To help with memory efficiency, ScyllaDB recently introduced Vector Quantization. This allows you to shrink the index memory footprint for storing vectors by up to 30x (excluding index structure) without sacrificing the real-time performance ScyllaDB is known for.

What is Quantization?
To understand how we compress massive AI datasets, let’s look to the fundamentals of computer science. As Sam Rose explains in the ngrok blog on quantization, computers store numbers in bits, and representing high-precision decimal numbers (floating point) requires a significant number of them. Standard vectors use 32-bit floating point (f32) precision, where each dimension takes 4 bytes. Quantization is the process of compromising on this floating-point precision to save space. By sacrificing some significant figures of accuracy, we can represent vectors as smaller 16-bit floats or even 8-bit or 1-bit integers. As Sam notes, while this results in a “precision compromise,” modern AI models are remarkably robust to this loss of information. They often maintain high quality even when compressed significantly.

The Trade-off: Memory vs. Accuracy
In ScyllaDB 2026.1, quantization is an index-only feature. The original source data remains at full precision in storage, while the in-memory HNSW index is compressed. This allows you to choose the level of “information loss” you are willing to accept for a given memory budget:

Level          Bytes/Dim  Memory Savings  Best For
f32 (default)  4          1x (None)       Small datasets, highest possible recall.
f16 / bf16     2          ~2x             Good balance of accuracy and memory.
i8             1          ~4x             Large datasets with moderate recall loss.
b1             0.125      ~32x            Maximum savings for massive datasets.

CRITICAL NOTE: Quantization only compresses the vector data itself. The HNSW graph structure (the “neighbor lists” that make search fast) remains uncompressed to ensure query performance. Because of this fixed graph overhead, an i8 index typically provides a total memory reduction of ~3x rather than a raw 4x.

Calculating Your Memory Needs
To size your ScyllaDB Vector Search cluster effectively, be sure to consider both vector data and graph overhead. The total memory required for a vector index can be estimated with this formula:

Memory ≈ N * (D * B + m * 16) * 1.2

- N: Total number of vectors.
- D: Dimensions (e.g., 768 or 1536).
- B: Bytes per dimension based on quantization level (f32=4, i8=1, b1=0.125).
- m: Maximum connections per node (default 16).
- 1.2: 20% operational headroom for system processes and query handling.

Example: 10 Million OpenAI Embeddings (768 Dimensions)
Using this formula, let’s see how quantization affects your choice of AWS EC2 instances on ScyllaDB Cloud (which primarily utilizes the r7g Graviton and r7i Intel families):
- f32 (No Quantization): Requires ~40 GB RAM. You would need an r7g.2xlarge (64 GB) to ensure headroom.
- i8 Quantization: Requires ~12 GB RAM. You can comfortably drop to an r7g.xlarge (32 GB).
- b1 (1-bit): Requires ~4 GB RAM. This fits on a tiny r7g.medium (8 GB).
By moving from f32 to i8, you can drop 2-3 instance tiers. This gets you significant cost savings.

Improving Accuracy with Oversampling and Rescoring
To mitigate the accuracy loss from quantization, ScyllaDB provides two complementary mechanisms. Oversampling retrieves a larger candidate set during the initial index search, increasing the chance that the true nearest neighbors are included. When a client requests the top K vectors, the algorithm retrieves ceiling(K * oversampling) candidates, sorts them by distance, and returns only the top K.
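As a quick sanity check of the sizing formula (not an official sizing tool; `index_memory_gb` is an illustrative helper), plugging in the 10M-vector, 768-dimension example reproduces the figures quoted for each quantization level:

```python
def index_memory_gb(n, dims, bytes_per_dim, m=16, headroom=1.2):
    """Memory ≈ N * (D * B + m * 16) * 1.2, reported in GB (1 GB = 1e9 bytes)."""
    return n * (dims * bytes_per_dim + m * 16) * headroom / 1e9

N, D = 10_000_000, 768  # 10M OpenAI-style 768-dim embeddings
assert round(index_memory_gb(N, D, 4)) == 40       # f32: ~40 GB
assert round(index_memory_gb(N, D, 1)) == 12       # i8: ~12 GB
assert round(index_memory_gb(N, D, 0.125)) == 4    # b1: ~4 GB
```

Note how the fixed graph term (m * 16 bytes per vector) dominates at b1, which is why 1-bit quantization saves ~32x on vector data but less in total.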
A larger candidate pool means better recall without any extra round-trips to the application. Even without quantization, setting oversampling above 1.0 can improve recall on high-dimensionality datasets. Rescoring is a second-pass operation that recalculates distances using the original full-precision vectors stored in ScyllaDB, then re-ranks candidates before returning results. Because it must fetch and recompute exact distances for every candidate, rescoring can reduce search throughput by roughly 2x – so enable it only when high recall is critical. Note that rescoring is only beneficial when quantization is enabled; for unquantized indexes (default f32), the index already contains full-precision data, making the rescoring pass redundant. Both features are configured as index options when creating a vector index:

CREATE CUSTOM INDEX ON myapp.comments(comment_vector)
USING 'vector_index' WITH OPTIONS = {
  'similarity_function': 'COSINE',
  'quantization': 'i8',
  'oversampling': '5.0',
  'rescoring': 'true'
};

When (and When Not) to Use Quantization
Use quantization when:
- You are managing millions or billions of vectors and need to control costs.
- You are memory-constrained but can tolerate a small drop in recall.
- You are using high-dimensional vectors (≥ 768), where the savings are most pronounced.

Avoid quantization when:
- You have a small dataset where memory is not a bottleneck.
- Highest possible recall is your only priority.
- Your application cannot afford the ~2x throughput reduction that comes with rescoring (the process of recalculating exact distances using the original f32 data to improve accuracy).

Choosing the Right Configuration for Your Scenario
Here are some guidelines to help you select the right configuration:
Scenario: Small dataset, high recall required
Recommendation: Use default f32 – no quantization needed.

Scenario: Large dataset, memory-constrained
Recommendation: Use i8 or f16 with oversampling of 3.0–10.0. Add rescoring: true only if very high recall is required.

Scenario: Very large dataset, approximate results acceptable
Recommendation: Use b1 for maximum memory savings. Enable oversampling to compensate for accuracy loss.

Scenario: High-dimensionality vectors (≥ 768)
Recommendation: Consider oversampling > 1.0 even with f32 to improve recall.

Try ScyllaDB Vector Search Now
Quantization is just one part of the ScyllaDB 2026.1 release, which also includes Filtering, Similarity Values, and Real-Time Ingestion. With these tools, you can build production-grade RAG applications that are both blazingly fast and cost-efficient. Vector Search is available in ScyllaDB Cloud.
- Get Started: Check out the Quick Start Guide to Vector Search in ScyllaDB Cloud.
- Deep Dive: Read our past posts on building a Movie Recommendation App or our 1-billion vector benchmark.
- Documentation: View the full ScyllaDB Cloud Vector Search Documentation.
Try ScyllaDB Cloud for free today and see how quantization can supercharge your AI infrastructure.

The Great Stream Fix: Interleaving Writes in Seastar with AI-Powered Invariants Tracing
How we used AI-assisted invariant-based testing to locate and resolve tricky hidden bugs with complex state transitions

Seastar is a high-performance C++ framework for writing asynchronous server
applications. It powers projects like ScyllaDB and Redpanda. One of its core rules is
simple but strict: no blocking allowed. Every operation that could
take time (e.g., reading from disk, writing to a socket, waiting
for a lock) must be expressed asynchronously by returning a future
that resolves when the work is completed. This makes Seastar
applications extremely efficient on modern hardware. However, it
also means that even seemingly mundane things, like writing data to
a stream, require careful thought about ownership, lifetimes, and
buffering. Moreover Seastar’s output stream has always
experienced a limitation: the inability to freely mix small,
buffered writes with large, zero-copy chunks. It was something that
developers avoided and tolerated – but we always considered it
something worth improving … someday. Fixing this requires a deep
dive into complex state transitions, which inherently creates a
high risk for introducing sequencing bugs. A standard coding
approach won’t work; the task requires a way to trace the system’s
state across millions of test cases. This post describes the
process of using AI-assisted invariant-based testing to try to
locate and resolve these tricky hidden bugs.

TL;DR
What could have been an extremely complicated fix turned out to be surprisingly smooth and effective.

Output streams
An output
stream is Seastar’s output byte flow abstraction. It’s used
wherever data needs to go out of an application. For example, it’s
used for disk files, network connections, and stackable virtual
streams that transform data on the fly (such as compression or
encryption layers sitting on top of another stream). Whatever the
underlying sink is, the output stream presents a
uniform interface to the caller. It gives callers two ways to push
data through: Buffered writes: Copy bytes into an
internal buffer; flush when the buffer fills up or
when explicitly requested. Zero-copy writes: Hand
over memory buffers directly; the stream passes it to the sink
without copying a single byte of the buffer data.
Zero-copy is important for large blobs since we want
to avoid copying megabytes of data. Buffered writes
are important for building up small pieces efficiently. In a real
application, it’s natural to interleave both: write a small header
into the buffer, then attach a large payload as a zero-copy
buffer, then write a small trailer. There is also a
trim_to_size stream option. When enabled, the stream
guarantees that no chunk delivered to the underlying sink exceeds
the configured stream buffer size. This matters for
sinks that have an upper limit on how much data they can accept in
a single call – certain network APIs, for instance, or aligned disk
I/O. Without it, a larger buffer can pass through as-is.

The Problem
Until recently, mixing the two write modes was not
supported. Internally, buffered and zero-copy writes used two
different storage areas: an internal buffer for the former
and a dedicated container for the latter. There
was no clean way to append buffered bytes onto the tail of pending
zero-copy data while preserving ordering. The code simply asserted
that the zero-copy container was empty whenever a
buffered write arrived and vice-versa. The nearby code
comment, however, stated that mixing writes was not supported
yet – so the intention to fix it had always been there.
The goal of the work described here was to make it happen. Start
with the Tests We figured we should build a solid test foundation
before touching the implementation. We had some pre-existing tests
for output streams, but they were really just a
collection of ad-hoc cases (specific input sequences with hardcoded
expected outputs). This was fine for catching regressions but not
great for systematically exploring the large space of possible
inputs against drastic code changes. The new approach was
invariant-based testing. Rather than checking exact output
sequences, the tests need to verify that certain properties always
hold, regardless of input. Specifically, we wanted to check that:
- All written bytes arrive at the sink, in order, with no corruption.
- Every chunk delivered to the sink (except the last) must be at least stream_size bytes (no undersized non-last chunks).
- With the trimming option enabled, all outgoing chunks must be exactly stream_size bytes.

With these invariants
defined, the test iterates over all combinations of chunk sizes (1
byte through 3x the stream_size bytes) and all
assignments of write type (buffered or zero-copy) to
each chunk. For n chunks, that’s 2^n type
patterns, doubled by the trimming option, giving about 1.6 million combinations
in total. The ad-hoc tests were then removed – the invariant test
subsumed them. One practical issue: 1.6 million cases ran fast in a
regular build (~5 seconds), but under sanitizers
(ASan, UBSan) it ballooned to over two
minutes. Given the whole seastar test suite runs for
several minutes, this new timing had to be improved somehow. The
fix was to turn an exhaustive test into a fuzzy one: in debug
builds, shuffle all 2^n masks, always keep the
all-buffered and all-zero-copy patterns, and sample ~10% of the
rest. That brought sanitizer runs down to less than twenty seconds.
Implementing the Fix
With tests in place, the implementation work
began. The key challenge was making the internal
buffer and zero-copy container interoperate
cleanly. Two transitions required handling:
- Buffered → zero-copy
- Zero-copy → buffered

Buffered → zero-copy
When a
zero-copy write arrives and there’s buffered data, that
data needs to be folded into the zero-copy
container so that ordering is preserved. The naive approach
– trim buffer to its filled length and move it into container –
works, but it wastes the rest of the buffer
allocation. Instead, the filled buffer prefix is shared into
the container as a view or sub-span, and the buffer itself is
advanced past it, thus sharing the underlying memory. This way, the
tail of the original allocation is still available for
future buffered writes after the zero-copy
sequence. No reallocation is needed on the mode switch. This
tail – a trimmed buffer pointing at unused capacity within the
original allocation – is what we call the
remnant. It is a new concept introduced by
this change. Before mixed-mode writes were supported, the buffer
was always either full (and flushed) or freshly allocated. The
remnant is an optimization. But (as will become clear
shortly), its existence also introduced several subtle failure
modes that took time to track down.

Zero-copy → buffered
When a buffered write arrives and
the zero-copy container is non-empty, the new data can
just be appended to the internal buffer regularly. A
new zero-copy arrival, if it happens, would keep the
buffer ordering as described above. The flushing path was also
updated to handle the case where buffered and zero-copy
data are both available at that time. The buffered
tail is folded into the zero-copy vector before
forwarding both downstream.

Bugs Found Along the Way
Once the
implementation was written, the tests were run. They failed – many
times, in different ways. To facilitate debugging, each test
failure logged the exact stream configuration and the write
sequence that triggered the problem, for example:
stream_size=5 trim_to_size=0 input_chunks=[3, 2]
write_types=[buffered, zero_copy] This made the diagnosis
unusually direct. Given a specific failing case, we could trace the
execution step by step – mentally simulating the stream, tracking
the bytes accumulated in the buffer and zero-copy container
state at each point, and comparing that against what the
state should be at that moment. The stream’s logic is not
huge, but it has enough branches and interacting state variables to
make it excessively time- and effort-consuming for a human. So, we
turned to AI to hopefully save time and effort. Each time a test
failed, instead of walking the output_stream state
step-by-step, we prompted the AI with the exact test-case
description string, the code itself, and the request to find where
the problem could be. The AI then started “thinking” and its
thought process looked as if it were simulating the
output_stream code execution step-by-step, like a
developer would do, just much faster. This turned out to be an
incredibly fast way to find where the invariant breaks. In many
cases, it correctly identified the state the stream should have
been in. Sometimes, it even suggested excellent ways to fix
the problem that was found. Identifying the root cause is often
only half the work; understanding which fix is correct, safe, and
doesn’t introduce new assumptions that will break later is the
other half. AI found the problems for us and was genuinely helpful in fixing them. For example, one of the first debugging
sessions with AI looked like this…

Developer: The test fails like this:

fatal error: in "test_splitting_invariants": data integrity check failed:
stream_size=5 trim_to_size=0 input_chunks=[1, 1, 1, 4]
write_types=[zero_copy, buffered, buffered, buffered]

Please analyze.

AI: Let me trace through this case carefully. stream_size=5, trim_to_size=false, writes: [zc(1), buf(1), buf(1), buf(4)]

- After zc(1): _zc_bufs = [tb(1)], _zc_len = 1, _end = 0. _zc_len < _size, nothing flushed.
- After buf(1): goes to the write(const char*, 1) fast path — but !_buf, so falls to slow_write. _end=0, n=1 < _size=5, so skips the !_end && n >= _size branch. Allocates _buf, copies 1 byte, _end = 1.
- After buf(1): fast path: fits in _buf, _end = 2.
- After buf(4): _end=2, n=4, n > _size - _end = 3, so goes to slow_write. _end=2, so skips the !_end && n >= _size branch. now = min(4, 5-2) = 3, copies 3 bytes, _end = 5. now != n (3 != 4), so allocates next with max(4-3, 5) = 5 bytes, copies 1 byte into it. _zc_bufs is non-empty, so enters the new branch: seals _buf.share(0, 5) into _zc_bufs. _zc_len += 5 → _zc_len = 6. _zc_len += n - now = 1 → _zc_len = 7. Pushes next into _zc_bufs. trim_to_size=false → calls zero_copy_put(_zc_bufs).

So zero_copy_put sends [tb(1), tb(5), tb(5)] — but next only has 1 byte of valid data in it, yet it's a 5-byte allocation! We're pushing the full _size-sized next buffer, but only n - now = 1 bytes were written into it. The buffer isn't trimmed before being pushed.

The fix: trim next to n - now bytes before pushing it into _zc_bufs when _zc_bufs is non-empty. Shall I go ahead and fix it?

The root cause was obvious and the suggested fix was simple and correct. So, we accepted the suggestion and the debugging session went on. We went through
several rounds like that, with each step producing more lengthy
analysis and more sophisticated fix proposals. At some point, a
test with a small buffered write followed by a
zero-copy write showed data corruption downstream.
Tracing the execution revealed that after trimming the
internal buffer past the filled region, the
trimmed buffer still appeared non-null
even though it had zero usable capacity remaining. The
fast-path check for buffer availability wasn’t
prepared for that and didn’t trigger reallocation on
the next write. As a result, bytes were written into unowned
memory. Another case involved a buffered write code
branch which handles writes larger than
stream_size by chopping them into chunks. After a mode
switch, the internal buffer might become a small
remnant – smaller than the tail chunk the
code in question was trying to store there. Tracing through the code, we found that the capacity check wasn't prepared for the remnant buffer. It silently assumed that the internal
buffer always had room for a full-sized tail. The result was
an ASan-detected heap overflow. A more
subtle issue arose around the same remnant buffer in a
different scenario. When buffered write chopping code
encounters a tail chunk that is smaller than the
stream_size, but larger than the
remnant's remaining capacity, it has to make a choice.
It could either fill the remnant partially and
asynchronously put it before allocating a fresh buffer for the
rest, or simply abandon the remnant and allocate a
fresh full buffer. The first option is more space-efficient, but
would require an async flushing inside what is
otherwise a synchronous setup step, significantly complicating the
code. The second option wastes the unused bytes of the
remnant's allocation – but crucially, it doesn’t leak
them. The remnant shares its underlying allocation
with the sealed buffer already in the zero-copy
container, so the memory is freed once that buffer is
flushed and all references to the allocation are dropped. The
deliberate trade-off – wasted but not leaked – was worth making,
and a comment in the code explains the reasoning for whoever reads
it next. Each bug effectively had the same shape: a subtle
assumption about stream state that held in the
original single-mode code silently broke in mixed-mode scenarios.
The invariant test exposed the bugs by providing a
minimal reproducible case and a clear description of which
invariant was violated, which made each one straightforward to reason about and fix.

The Result

The work touches tests and
implementation in roughly equal measure, which feels about right
for a change like this. The test suite grew from a handful of
hand-crafted cases into an exhaustive invariant-based
framework that covers all combinations of chunk sizes and
write types – something that would have been impractical to write
by hand. On the implementation side, the long-standing restriction
on mixed-mode writes is gone. Buffered and
zero-copy writes can now be freely interleaved in any
order, with the stream handling the transitions internally. This
preserves ordering and the chunk-size invariants that
sinks depend on.

In general, writing tests that cover as many situations as possible, and then making sure the code passes them, is a sound approach: it gives strong evidence that the final code is correct. In the rare cases where the tests cover every situation the code can encounter, we can even say that "the code is officially bug free." Bringing AI into the testing workflow turned out to be the best decision made in this work. Given the number of test cases and the number of possible combinations of input_stream inner states, debugging each failing test case by hand would have been a nightmare for the developer.

The Hidden Insanity of DynamoDB Pricing
Learn how to navigate some of the sneakiest aspects of DynamoDB pricing

DynamoDB's pricing model has some head-scratching quirks that slyly inflate bills by hundreds of thousands of dollars per year. Most of these aren't malicious; they're just design decisions from 2012 that made sense at the time, but became increasingly absurd at scale. This post walks through four of the most egregious examples and the real cost impact on teams running large workloads.

Cost per item size is punitive

DynamoDB charges you for writes per 1KB chunk and for reads per 4KB chunk. This means:

- 1KB write = 1 WCU
- 1.1KB write = 2 WCUs (you're charged for 2KB, but only used 1.1KB)
- 1.5KB write = 2 WCUs
- 2.1KB write = 3 WCUs

Crossing a threshold by even a single byte bills you for the next full chunk. It's a tax on items that don't fit neatly into the billing boundary. And almost nothing fits neatly: JSON payloads with nested objects, variable-length strings, metadata, timestamps… Most real-world items straddle those boundaries, so you risk paying 2x or more for the overage.

Consider a team logging 100M events per day, averaging 1.2KB each. That's 100M writes carrying ~120M KB of data, almost all billed at the 2KB boundary. They're paying for 200M KB instead of 120M KB – a 67% surcharge baked into every bill. If their write cost is $10,000/month, that surcharge alone is ~$6,700/month in wasted capacity.

On demand comes at a premium

On-demand pricing was introduced as a convenience layer for unpredictable workloads. It saves teams the pain of provisioning and forecasting ("just pay for what you use"). The trade-off is that pricing is steep. Even after AWS's recent price cut (it used to be ~15x!), on-demand is about 7.5x more expensive than provisioned capacity. For a team that starts on on-demand and never switches, the cost difference is catastrophic. For example, say a SaaS company launches a new product on DynamoDB; they start with on-demand for convenience and quickly scale to 20K reads/sec and 20K writes/sec.
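As a rough sanity check on the figures in this post, here is the billing arithmetic in a short TypeScript sketch. The per-request and per-capacity-unit prices are assumptions (typical us-east-1 list prices at the time of writing), and the item sizes are assumed to fit in one billing chunk:

```typescript
// DynamoDB billing arithmetic sketch. Prices are assumptions:
//   on-demand: $0.625 per million write requests, $0.125 per million reads
//   provisioned: $0.00065 per WCU-hour, $0.00013 per RCU-hour
const HOURS_PER_MONTH = 730;
const SECONDS_PER_MONTH = HOURS_PER_MONTH * 3600;

// Chunked rounding: every started 1KB (write) or 4KB (read) bills a full unit.
function writeUnits(itemSizeKB: number): number {
  return Math.max(1, Math.ceil(itemSizeKB));
}
function readUnits(itemSizeKB: number): number {
  return Math.max(1, Math.ceil(itemSizeKB / 4));
}

// Monthly on-demand cost for a steady request rate.
function onDemandMonthly(readsPerSec: number, writesPerSec: number): number {
  const reads = readsPerSec * SECONDS_PER_MONTH;
  const writes = writesPerSec * SECONDS_PER_MONTH;
  return (reads / 1e6) * 0.125 + (writes / 1e6) * 0.625;
}

// Monthly cost of continuously provisioned capacity units.
function provisionedMonthly(rcu: number, wcu: number): number {
  return (rcu * 0.00013 + wcu * 0.00065) * HOURS_PER_MONTH;
}

console.log(writeUnits(1.2));                                // 2: a 1.2KB item bills as 2KB
console.log(Math.round(onDemandMonthly(20_000, 20_000)));    // roughly $39K/month
console.log(Math.round(provisionedMonthly(20_000, 20_000))); // roughly $11K/month
```

Swap in current prices from the AWS pricing page before relying on the numbers; the point is the shape of the arithmetic, not the exact dollar amounts.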
On-demand now costs $39K/month. Switching to provisioned would drop that to $11K/month. And teams often don't switch, because "it works" or "the bill surprise hasn't happened yet." The convenience tax on DynamoDB is insane. Even if you wanted to retain that flexibility, ScyllaDB would cost $3K/month for on-demand, or just $1K/month with a hybrid subscription + flex component.

Multi-region network costs are deceptive

Global Tables already charge replicated writes (rWCUs) at a premium. But there's a second, hidden cost too: data transfer. AWS charges for cross-region data transfer at standard EC2 rates: $0.02/GB to adjacent regions, up to $0.09/GB to distant regions. As a result, Global Tables end up costing 2-3x more than expected. These network costs often don't appear as a line item on your DynamoDB bill; they're rolled up into 'Data Transfer' charges, and many teams don't notice them or attribute them correctly.

ScyllaDB can't escape the cross-region data transfer rates that AWS enforces, but it offers several mechanisms that reduce those costs. ScyllaDB handles multi-DC replication natively: you provision nodes in each data center, and replication is built into the protocol. There are also shard-aware and rack-aware drivers, which help minimize network overhead. Add network compression, and your cross-region data costs drop even lower.

Reserved capacity requires you to predict capacity

Reserved capacity offers massive discounts, up to 70% off. But there's a catch: you must commit for 1 or 3 years upfront, and you must predict your read and write throughput independently. This is absurdly difficult. Your workload changes: new features launch, old features get deprecated, customer behavior shifts, and traffic patterns evolve. Predicting the exact read/write ratio years out is impossible. Teams either over-commit (wasting money on unused capacity) or under-commit (paying on-demand rates for the overage).
Example: You commit to 200K reads/sec and 500K writes/sec for 1 year. On DynamoDB, the upfront payment plus the annual commitment comes to about $1.4M/year. But six months into the year, growth exceeds your capacity estimates and your application starts having requests throttled. You fall back to autoscaling with a mixture of reserved plus on-demand capacity. Now you're paying the 7.5x markup – and that costly misjudgment is locked in for the remainder of the year.

The solution? Over-commit to hedge your bets. This guarantees you're wasting money on overprovisioning, just to avoid even higher on-demand charges. It's a no-win scenario.

Compare this to ScyllaDB with a hybrid subscription + flex component that automatically scales to your requirements throughout the year, which might cost $133K/year to start with. Radically less expensive and more flexible (on both compute and storage requirements) thanks to true elastic scaling with X Cloud.

Why does this matter?

These four pricing quirks aren't hypothetical. Combined, they add tens of thousands to six figures per year to bills across the industry. They're especially brutal for write-heavy workloads, multi-region systems, and large items. And because they're partially hidden – buried in separate line items, masked by the per-operation model, or justified by architectural constraints – teams often don't realize how much they're paying.

Some of this is inevitable with a fully managed service. But databases built on different cost models can deliver the same durability, consistency, flexibility, and scale at a fraction of the price. This is the case with ScyllaDB, which charges by the node and includes replication and large items at no extra cost.

Curious what your workload actually costs? Use the ScyllaDB DynamoDB Cost Calculator at calculator.scylladb.com to model your real costs, including all the hidden charges, and see how ScyllaDB pricing stacks up.

Powering a Billion Dreams: Scaling Meesho's E-commerce Platform
How ScyllaDB plays a critical role in handling Meesho's millions of transactions – optimizing our catalog rankings and ensuring ultra-low-latency operations

With over a billion Indians set to shop online, Meesho is redefining e-commerce by making it accessible, affordable, and inclusive at an unprecedented scale. But scaling for Bharat isn't just about growth – it's about building a tech backbone that can handle massive traffic surges, dynamic pricing, real-time recommendations, and seamless user experiences. Let me take you behind the scenes of Meesho's journey to democratize e-commerce while operating at monster scale. We'll cover how ScyllaDB plays a critical role in handling Meesho's millions of transactions – optimizing our catalog rankings and ensuring ultra-low-latency operations.

Note: Adarsha Das from Meesho will be presenting a keynote at the upcoming Monster Scale Summit India/APAC. That talk is on BharatMLStack, an open-source, end-to-end machine learning infrastructure stack built at Meesho to support real-time and batch ML workloads at Bharat scale. Join Monster Scale Summit India/APAC – it's free and virtual.

About Meesho

In case you're not familiar with Meesho, we're an Indian e-commerce platform. The company was founded in 2015 to connect small and medium enterprises in India. Meesho helps consumers access products from all over India, beyond their local markets, and focuses on bringing affordable product selections to Tier 2 cities and smaller markets. The company operates with a zero-commission model that reduces barriers for sellers. We function as an asset-light marketplace that connects sellers, logistics partners, and consumers. We make the listing process quite simple: sellers just need to take a picture of the product, upload it, set the price, and start selling.

Why Personalization is Essential for Meesho

Meesho's architecture aims to support people who are new to e-commerce.
Tech-savvy users from Tier 1 cities likely know how to use search, tweak keywords, and find what they want. But someone from a Tier 2 city, new to e-commerce, needs discovery to be simpler. That's why we invested heavily in AI and personalization technologies to create intuitive app experiences. We personalize everything from the moment the app opens to order completion. For example, different users see different homepages and product selections based on their preferences and purchase history. We also personalize for sellers, helping them create product descriptions that make sense to their buyers.

Real-Time Feed-First Personalization

Meesho meets these needs with a fundamentally feed-first app. We create a tailored product feed, ranking products based on preferences and actions (searches, clicks, etc.). To do this, we built a CTR (click-through rate) prediction model to decide which product tiles to show each user, and in what order. Two people logging in will see different selections based on their behavior. Given all this, Meesho had to move from traditional recommendation systems to real-time, highly personalized experiences. Batch processing wasn't sufficient; our personalization must respond instantly to recent user actions. That requires low-latency databases and systems at scale, with the ability to support millions of sellers and users on the app simultaneously.

Why ScyllaDB

We experimented with a few different databases and data stores: SQL, NoSQL, columnar, and non-columnar. Some worked at certain scales. But as we kept growing, we had to reinvent our storage strategy. Then we discovered ScyllaDB, which met our needs and proved itself at Meesho scale.
More specifically, ScyllaDB provided…

Horizontal Scaling

Given the ever-increasing scale of Meesho – where both transactions and users kept growing over the years – horizontal scalability was very important to us. Today, I might be running with X nodes. If that becomes 2X tomorrow, how do you scale in a live manner? Being a low-cost e-commerce platform, we are conscious about server spend, so we try to follow traffic patterns by dynamically scaling up and down based on demand. For example, not all 24 hours have the same number of orders; there are peaks and lows. We want to provision for baseline load and auto-scale for demand without downtime, since the cost of downtime for a business like ours is very high. Downtime can result in user churn and loss of trust, so we prioritize reliability and availability above all.

Moreover, we expect that adding new nodes will linearly increase throughput. For example, if I run an X-node cluster and add nodes, I should get a proportionate throughput increase. This is critical as we scale up or down. We observed that in distributed systems with a primary-secondary configuration, the primary can become a bottleneck. So, we wanted a peer-to-peer architecture like ScyllaDB's, where each node can serve writes as well as reads. ScyllaDB gives us linear scalability.

Low-Level Optimizations for Efficiency

The database's efficiency is also a factor for us. A major challenge we saw in JVM- or Java-based systems was garbage collection and related overheads. These impact performance, interrupt scaling, and limit hardware utilization. That's why we prefer C++-based or other low-level implementations, with minimal JVM or garbage collection issues and minimal memory overhead. Most of our use cases require low-latency, real-time personalization, where every bit of memory is used for application logic and data, not overhead.
Smart Architecture and Fault Tolerance

Having a smart, fault-tolerant architecture was another consideration. Much of our user base is in Tier 3 and 4 cities, where network connectivity is sometimes flaky. We want to provide a Tier 1 user experience to Tier 4 users, so low latency is critical; we prioritize keeping latency within a few milliseconds. One of ScyllaDB's key features is token-aware routing. When a query comes in, it goes directly to the node holding the relevant data – reducing network hops, since each node acts as its own master. This is the kind of distributed architecture we were looking for, and the token-aware routing helps with horizontal scalability.

Reliability and fault tolerance are also major requirements. When running on a public cloud, a big pain point is a particular zone going down; we've seen cloud regions and zones go down before. To minimize impact, we look for automatic data replication across zones and seamless failover, so that user impact is minimal. Building trust with first-time e-commerce users is hard. If we lose it, getting them back is even harder. That's why this capability is critical.

Operational Simplicity

Another thing we wanted is operational simplicity – a system where adding or removing nodes is as simple as running a script or clicking a button. We like having an engine where we don't need to tune everything ourselves.

Results So Far

We've been using ScyllaDB to power very low-latency systems at high throughput, for both reads and writes. We started with small workflows, scaled to platform workflows like our ML platform and experimentation, and continue to scale. It's been a good journey so far, and we're looking forward to using ScyllaDB for more use cases.

Agentic AI State Management with ScyllaDB and LangGraph
How to combine LangGraph and ScyllaDB for durable state management, crash recovery, and a highly available backend for your agentic AI applications

Most agent implementations today are request-response loops. The challenge with this approach is that you are just one network issue or server restart away from losing context and progress. We have more powerful LLMs than ever, yet we're wrapping them in fragile infrastructure. As an example, assume you have an agent process that takes three minutes and involves seven API calls. There are a lot of places where it can go wrong. The process dies, the state disappears, and the agent starts over with no recollection of what it was doing.

Implementing a well-designed workflow orchestration client is not enough to solve this problem. You also need a distributed and highly available backend to support your agents, something with:

- multi-region, durable storage
- automatic data replication
- fault tolerance
- high throughput

This post shows how you can simplify your backend by using a single mature database that handles both high availability and durable storage for your agents. You write agent state to a persistent store, it survives crashes by default, and you can still meet 5ms P99 latency requirements. Pair that with an orchestration framework like LangChain's LangGraph, which saves state after every step, and you get a reliable and scalable agentic backend. Let's see why and how you should implement a system like that with ScyllaDB.

Achieving zero agent downtime with ScyllaDB

ScyllaDB is a high-performance distributed NoSQL database designed to stay up and available for mission-critical applications. The Raft consensus algorithm handles topology changes and schema updates with strong consistency. Replication is automatic: you set a replication factor and ScyllaDB distributes copies across nodes, racks, and datacenters.
On temporary node loss, hinted handoffs record missed writes and replay them when the node returns. For longer outages, row-level repair brings a replacement node up to date in the background. You don't need load balancers, external replication jobs, or manual failover steps.

ScyllaDB Cloud is a mature cloud offering. Multi-region clusters with tunable replication factors per datacenter, rack and availability-zone awareness, and zero-downtime operations are all available out of the box, with no extra components required.

ScyllaDB also provides practical features for agentic use cases…

Persistent by design

Every write goes to durable storage. There is no configuration flag to enable durability; it is the default, not an option. Persistence allows your agent to recover from crashes and continue a process.

Data model

In ScyllaDB, you design tables around the queries your application will run. A partition key determines which node owns the data, rows within a partition are sorted by a clustering key, and that sort order is fixed at schema creation time. This design is a great fit for key-value agentic systems.

Lightweight transactions

ScyllaDB supports lightweight transactions (LWTs) to provide compare-and-set semantics natively, without client-side locking: INSERT ... IF NOT EXISTS and UPDATE ... IF .... This enables idempotent checkpoint writes.

Time-to-live

Agentic sessions eventually go stale. ScyllaDB provides a native way to expire old data from your database.

ScyllaDB's role in your agentic infrastructure

Now let's explore specific use cases where
ScyllaDB helps you build agentic applications. The following
examples use LangGraph (TypeScript) and the community-created
ScyllaDBSaver checkpointer.

What is a checkpointer? A checkpointer is LangGraph's abstraction for a persistence backend; it is how LangGraph integrates with databases.

Durable conversation memory

One of the main technical problems with agents is handling failures such as:

- network hiccups
- server restarts
- other reasons a process gets killed midway through

The in-memory state is gone, and the agent behaves as if the conversation never happened. LangGraph's MemorySaver (built-in in-memory checkpointer) makes this reproducible. Run two turns, discard the saver object, create a new one, and run a third turn:
thread_id: a named
conversation/session in LangGraph; all checkpoints for one
conversation share the same thread. With ScyllaDB as the
checkpointer, all three requests operate identically from an
application standpoint. The agent picks up exactly where it left
off because the conversation state lives in the database rather
than in the server process.
ScyllaDBSaver example: The query that loads state on every invoke() is a single-partition read. Note that we don't use
ORDER BY or run a full-table scan. There’s only one
row returned: the most recent checkpoint for the thread. Why does
LIMIT 1 return the newest row without an explicit
sort? Let’s see how the ScyllaDB data model enables this kind of
query.
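Concretely, the latest-checkpoint read might look like the following CQL. This is a sketch: the column names are assumptions, while the key columns come from the schema discussion below.

```sql
-- Single-partition read; LIMIT 1 returns the newest checkpoint because
-- rows are clustered newest-first within the partition.
SELECT checkpoint_id, checkpoint, metadata
FROM langgraph.checkpoints
WHERE thread_id = ? AND checkpoint_ns = ?
LIMIT 1;
```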
Source:
https://aws.amazon.com/blogs/database/build-durable-ai-agents-with-langgraph-and-amazon-dynamodb/
Query-first schema design: reading the latest checkpoint

LangGraph reads the latest checkpoint on every invoke(). In a busy agent server, that is a read-heavy query pattern. The checkpoints table is defined with a compound primary key. The partition key is (thread_id, checkpoint_ns).
That means this key will be used to partition your data across the
ScyllaDB cluster. All checkpoints for a single conversation land in
the same partition. “Get all steps for this conversation” never
requires cross-node coordination. The clustering key is
checkpoint_id DESC. It makes sure that the rows within
each partition are sorted according to that column in descending
order. Because checkpoint_id is a UUIDv6
(which encodes a timestamp in its bit layout), rows are physically
stored on disk with the newest checkpoint first. LIMIT
1 on a partition scan reads only the first row; no full scan
is required.
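Putting the pieces together, the table might be declared as follows. The primary-key structure matches the description above; the non-key columns and types are assumptions, and checkpoint_id is shown as text because UUIDv6 strings sort lexicographically by timestamp:

```sql
-- Sketch of the checkpoints table; non-key columns are assumptions.
CREATE TABLE IF NOT EXISTS langgraph.checkpoints (
    thread_id     text,
    checkpoint_ns text,
    checkpoint_id text,   -- UUIDv6: lexical order matches time order
    checkpoint    blob,
    metadata      blob,
    PRIMARY KEY ((thread_id, checkpoint_ns), checkpoint_id)
) WITH CLUSTERING ORDER BY (checkpoint_id DESC);
```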
Source: https://docs.langchain.com/oss/python/langgraph/persistence
Crash recovery with idempotent writes

A node in an agent graph can
fail mid-execution after it has already written some of its output.
Without a write-ahead
log, the only safe option on retry is to re-run the node from
scratch. This may produce duplicates, trigger external side
effects, or be expensive for long-running LLM calls. ScyllaDB and
LangGraph solve this with a second table,
checkpoint_writes, that acts as a write-ahead log at
the
channel level: Before a checkpoint row is written to
checkpoints, each individual channel write is staged
in checkpoint_writes using a lightweight transaction:
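A sketch of that staged write: the partition-key columns match the post's description of checkpoint_writes, while the remaining column names are assumptions.

```sql
-- Idempotent: replaying the same write after a crash is a no-op.
INSERT INTO langgraph.checkpoint_writes
  (thread_id, checkpoint_ns, checkpoint_id, task_id, idx, channel, value)
VALUES (?, ?, ?, ?, ?, ?, ?)
IF NOT EXISTS;
```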
IF NOT EXISTS is an idempotent insert. Here's what happens if the server crashes after three of five channel writes have landed and then restarts:

1. LangGraph loads the latest checkpoints row.
2. It loads the pending checkpoint_writes for that checkpoint ID.
3. It finds the three completed writes.
4. It resumes from there without re-running successful steps.

The partition key on checkpoint_writes is (thread_id, checkpoint_ns,
checkpoint_id). All pending writes for a single checkpoint
are in the same partition. “Load all pending writes for checkpoint
X” is a single-partition scan, not a cross-cluster lookup. The two
tables serve different query patterns. Keeping them separate makes
both queries efficient.

Time-travel and conversation history

LangGraph exposes historical snapshots through the checkpointer's list() method, which yields checkpoint tuples. Each tuple is a full
CheckpointTuple: the serialized state at that step, the
metadata (source, step number, what changed), and the config needed
to resume from that exact point. That last part is what enables
time-travel: pass a past checkpoint_id as the starting
configuration and LangGraph replays from there, branching the
conversation into an alternative trajectory without modifying the
original history. As for the underlying ScyllaDB query: you get all
rows for one thread in one partition, sorted newest-first. This is
the same partition that hosts the latest-checkpoint read. No
additional indexes are required for the history use case.

The source field indicates what kind of step produced each checkpoint:

- "input" (user message ingested, before any node ran)
- "loop" (a node executed)
- "update" (state was patched directly via graph.updateState())

Secondary indexes on source and step allow filtering across all threads when needed.

Auto-expire data with time-to-live

Production agent deployments accumulate
checkpoint data continuously. A customer support agent with 10,000
active threads, each with a 10-turn history, generates tens of
thousands of checkpoint rows. Sessions eventually go stale. You
might decide, for example, that a thread abandoned by the user
after one message can be deleted and stored elsewhere after a
certain period of time. In ScyllaDB, TTL is part of the data model.
You attach it directly to the inserted row at write time:
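As a sketch, with hypothetical column names carried over from the schema discussion earlier (the actual statement issued by ScyllaDBSaver may differ):

```sql
-- The USING TTL clause is the point here; column names are assumptions.
INSERT INTO langgraph.checkpoints
  (thread_id, checkpoint_ns, checkpoint_id, checkpoint, metadata)
VALUES (?, ?, ?, ?, ?)
USING TTL 86400;
```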
USING TTL 86400 tells ScyllaDB to delete this row
after 24 hours. The same TTL clause appears on
checkpoint_writes in the same write batch. The
ScyllaDBSaver accepts a ttlConfig
parameter that applies this clause to every write. Change defaultTTLSeconds and every subsequent write picks up the new expiry. No migration required.

Integrate ScyllaDB into your LangGraph project

To use ScyllaDB as a persistent store in your LangGraph application, you need to install the ScyllaDB checkpointer. This package will handle the migration and all subsequent CQL queries for you.

Install the package:

npm install @gbyte.tech/langgraph-checkpoint-scylladb

Create the schema:

npm run migrate
# runs: CREATE KEYSPACE IF NOT EXISTS langgraph ...
#       CREATE TABLE IF NOT EXISTS langgraph.checkpoints ...
#       CREATE TABLE IF NOT EXISTS langgraph.checkpoint_writes ...

Finally, wire the checkpointer into your graph.

Wrapping up

By
combining LangGraph with ScyllaDB’s built-in durability and high
availability, you move from fragile, stateful processes to
resilient agent systems. Restarts, retries, or lost context won’t
be a problem because your architecture treats failure as a normal
condition and continues seamlessly. This shift simplifies your
infrastructure as well as enables more ambitious, long-running
agent workflows to operate reliably at scale.

Learn more about ScyllaDB and agentic applications:

- Clone the example application
- Read how others use ScyllaDB for AI use cases
- Sign up for ScyllaDB Cloud