The Great Stream Fix: Interleaving Writes in Seastar with AI-Powered Invariants Tracing
How we used AI-assisted invariant-based testing to locate and resolve tricky hidden bugs with complex state transitions
Seastar is a high-performance C++ framework for writing asynchronous server
applications. It powers projects like ScyllaDB and Redpanda. One of its core rules is
simple but strict: no blocking allowed. Every operation that could
take time (e.g., reading from disk, writing to a socket, waiting
for a lock) must be expressed asynchronously by returning a future
that resolves when the work is completed. This makes Seastar
applications extremely efficient on modern hardware. However, it
also means that even seemingly mundane things, like writing data to
a stream, require careful thought about ownership, lifetimes, and
buffering. Moreover, Seastar’s output stream has long had a
limitation: the inability to freely mix small,
buffered writes with large, zero-copy chunks. It was something that
developers avoided and tolerated – but we always considered it
something worth improving … someday. Fixing this requires a deep
dive into complex state transitions, which inherently creates a
high risk of introducing sequencing bugs. A standard coding
approach won’t work; the task requires a way to trace the system’s
state across millions of test cases. This post describes the
process of using AI-assisted invariant-based testing to try to
locate and resolve these tricky hidden bugs.
TL;DR
What could have been an extremely complicated fix ultimately turned out to be surprisingly smooth and effective.
Output streams
The output stream is Seastar’s output byte flow abstraction. It’s used
wherever data needs to go out of an application. For example, it’s
used for disk files, network connections, and stackable virtual
streams that transform data on the fly (such as compression or
encryption layers sitting on top of another stream). Whatever the
underlying sink is, the output stream presents a
uniform interface to the caller. It gives callers two ways to push
data through:
- Buffered writes: Copy bytes into an internal buffer; flush when the buffer fills up or when explicitly requested.
- Zero-copy writes: Hand over memory buffers directly; the stream passes them to the sink without copying a single byte of the buffer data.
Zero-copy is important for large blobs since we want
to avoid copying megabytes of data. Buffered writes
are important for building up small pieces efficiently. In a real
application, it’s natural to interleave both: write a small header
into the buffer, then attach a large payload as a zero-copy
buffer, then write a small trailer. There is also a
trim_to_size stream option. When enabled, the stream
guarantees that no chunk delivered to the underlying sink exceeds
the configured stream buffer size. This matters for
sinks that have an upper limit on how much data they can accept in
a single call – certain network APIs, for instance, or aligned disk
I/O. Without it, a larger buffer can pass through as-is.
The Problem
Until recently, mixing the two write modes was not
supported. Internally, buffered and zero-copy writes used two
different storage areas: an internal buffer for the former,
and a dedicated container for the latter. There
was no clean way to append buffered bytes onto the tail of pending
zero-copy data while preserving ordering. The code simply asserted
that the zero-copy container was empty whenever a
buffered write arrived and vice-versa. The nearby code
comment, however, stated that mixing writes was not supported
yet – so the intention to fix it had always been there.
The goal of the work described here was to make it happen.
Start with the Tests
We figured we should build a solid test foundation
before touching the implementation. We had some pre-existing tests
for output streams, but they were really just a
collection of ad-hoc cases (specific input sequences with hardcoded
expected outputs). This was fine for catching regressions but not
great for systematically exploring the large space of possible
inputs against drastic code changes. The new approach was
invariant-based testing. Rather than checking exact output
sequences, the tests need to verify that certain properties always
hold, regardless of input. Specifically, we wanted to check that:
- All written bytes arrive at the sink, in order, with no corruption.
- Every chunk delivered to the sink (except the last) must be at least stream_size bytes; no undersized non-last chunks are allowed.
- With the trimming option enabled, all outgoing chunks must be exactly stream_size bytes.
With these invariants
defined, the test iterates over all combinations of chunk sizes (1
byte through 3× the stream_size) and all
assignments of write type (buffered or zero-copy) to
each chunk. For n chunks, that’s 2^n type
patterns, combined with the trimming option, giving about 1.6 million combinations
in total. The ad-hoc tests were then removed – the invariant test
subsumed them. One practical issue: 1.6 million cases ran fast in a
regular build (~5 seconds), but under sanitizers
(ASan, UBSan) it ballooned to over two
minutes. Given that the whole Seastar test suite runs for
several minutes, this new timing had to be improved somehow. The
fix was to turn the exhaustive test into a randomized, fuzz-style one: in debug
builds, shuffle all 2^n masks, always keep the
all-buffered and all-zero-copy patterns, and sample ~10% of the
rest. That brought sanitizer runs down to less than twenty seconds.
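To make the approach concrete, here is a minimal, self-contained C++ sketch of the idea – the invariant checks plus the mask shuffling/sampling. It is an illustration under assumptions, not Seastar’s actual test code: the stream itself is replaced by a trivial stand-in, and the helper names (join, sizes_ok) are made up here.

```cpp
#include <cstddef>
#include <numeric>
#include <random>
#include <string>
#include <vector>

using chunks = std::vector<std::string>;

static std::string join(const chunks& c) {
    return std::accumulate(c.begin(), c.end(), std::string{});
}

// Invariant 1: every written byte reaches the sink, in order, uncorrupted.
static bool bytes_match(const chunks& in, const chunks& out) {
    return join(in) == join(out);
}

// Invariants 2 and 3: non-last sink chunks are at least stream_size bytes,
// and with trim_to_size enabled they are exactly stream_size bytes.
static bool sizes_ok(const chunks& out, size_t stream_size, bool trim) {
    for (size_t i = 0; i + 1 < out.size(); i++) {
        if (out[i].size() < stream_size) return false;
        if (trim && out[i].size() != stream_size) return false;
    }
    return true;
}

int main() {
    const size_t stream_size = 5;
    std::mt19937 rng{42};
    std::bernoulli_distribution keep(0.10);          // ~10% sampling in expensive builds
    const chunks input = {"abc", "de"};              // one chunk-size combination
    const unsigned n = static_cast<unsigned>(input.size());
    for (unsigned mask = 0; mask < (1u << n); mask++) {
        const bool all_buffered = (mask == 0);
        const bool all_zero_copy = (mask + 1 == (1u << n));
        if (!all_buffered && !all_zero_copy && !keep(rng)) continue;   // sampled out
        for (bool trim : {false, true}) {
            // Stand-in for "feed `input` through an output_stream whose write
            // types are given by `mask` and record what the sink received".
            chunks out = {join(input)};
            if (!bytes_match(input, out) || !sizes_ok(out, stream_size, trim)) {
                return 1;   // the real test logs stream_size/trim/input_chunks/write_types here
            }
        }
    }
    return 0;
}
```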
Implementing the Fix
With tests in place, the implementation work
began. The key challenge was making the internal
buffer and zero-copy container interoperate
cleanly. Two transitions required handling:
- Buffered → zero-copy
- Zero-copy → buffered
Buffered → zero-copy
When a zero-copy write arrives and there’s buffered data,
that data needs to be folded into the zero-copy
container so that ordering is preserved. The naive approach
– trim the buffer to its filled length and move it into the container –
works, but it wastes the rest of the buffer
allocation. Instead, the filled buffer prefix is shared into
the container as a view or sub-span, and the buffer itself is
advanced past it, thus sharing the underlying memory. This way, the
tail of the original allocation is still available for
future buffered writes after the zero-copy
sequence. No reallocation is needed on the mode switch. This
tail – a trimmed buffer pointing at unused capacity within the
original allocation – is what we call the
remnant. It is a new concept introduced by
this change. Before mixed-mode writes were supported, the buffer
was always either full (and flushed) or freshly allocated. The
remnant is an optimization. But (as will become clear
shortly), its existence also introduced several subtle failure
modes that took time to track down.
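For intuition, here is a self-contained sketch of that fold under assumptions: buffer_t is a toy stand-in for Seastar’s temporary_buffer (share and trim_front model the operations the real code relies on), and the member names echo the ones in the debugging transcript later in this post. This is not the actual Seastar implementation.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Toy stand-in for seastar::temporary_buffer: share() creates another view of
// the same allocation, trim_front() advances past a prefix.
struct buffer_t {
    std::shared_ptr<char[]> mem;
    size_t off = 0, len = 0;
    buffer_t() = default;
    explicit buffer_t(size_t n) : mem(new char[n]), len(n) {}
    buffer_t share(size_t pos, size_t n) const {
        buffer_t b = *this; b.off = off + pos; b.len = n; return b;
    }
    void trim_front(size_t n) { off += n; len -= n; }
    size_t size() const { return len; }
};

struct stream_state {
    buffer_t _buf;                    // internal buffer (current allocation)
    size_t _end = 0;                  // filled bytes in _buf
    std::vector<buffer_t> _zc_bufs;   // zero-copy container
    size_t _zc_len = 0;

    // Called when a zero-copy write arrives while buffered data is pending:
    // the filled prefix is shared into the container (no copy), and whatever
    // capacity is left behind becomes the "remnant".
    void fold_buffered_prefix() {
        if (_end == 0) {
            return;                               // nothing buffered to fold
        }
        _zc_bufs.push_back(_buf.share(0, _end));  // ordering preserved, zero copy
        _zc_len += _end;
        _buf.trim_front(_end);                    // _buf is now the remnant
        _end = 0;
    }
};
```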
Zero-copy → buffered
When a buffered write arrives and
the zero-copy container is non-empty, the new data can
just be appended to the internal buffer as usual. A
new zero-copy arrival, if it happens, would preserve the
buffer ordering as described above. The flushing path was also
updated to handle the case where buffered and zero-copy
data are both available at that time. The buffered
tail is folded into the zero-copy vector before
forwarding both downstream.
Bugs Found Along the Way
Once the
implementation was written, the tests were run. They failed – many
times, in different ways. To facilitate debugging, each test
failure logged the exact stream configuration and the write
sequence that triggered the problem, for example:
stream_size=5 trim_to_size=0 input_chunks=[3, 2] write_types=[buffered, zero_copy]
This made the diagnosis
unusually direct. Given a specific failing case, we could trace the
execution step by step – mentally simulating the stream, tracking
the bytes accumulated in the buffer and zero-copy container
state at each point, and comparing that against what the
state should be at that moment. The stream’s logic is not
huge, but it has enough branches and interacting state variables to
make it excessively time- and effort-consuming for a human. So, we
turned to AI to hopefully save time and effort. Each time a test
failed, instead of walking the output_stream state
step-by-step, we prompted the AI with the exact test-case
description string, the code itself, and the request to find where
the problem could be. The AI then started “thinking”, and its
reasoning trace looked as if it was simulating the
output_stream code execution step-by-step, like a
developer would do, just much faster. This turned out to be an
incredibly fast way to find where the invariant breaks. In many
cases, it could correctly work out the state the stream should have
been in. Sometimes, it even suggested excellent ways to fix
the problem that was found. Identifying the root cause is often
only half the work; understanding which fix is correct, safe, and
doesn’t introduce new assumptions that will break later is the
other half. AI found the problems for us and also helped us
fix them. For example, one of the first debugging
sessions with AI looked like this…
Developer: The test fails like that
fatal error: in "test_splitting_invariants": data integrity check failed: stream_size=5 trim_to_size=0 input_chunks=[1, 1, 1, 4] write_types=[zero_copy, buffered, buffered, buffered]
Please, analyze
AI: Let me trace through this case carefully.
stream_size=5, trim_to_size=false, writes: [zc(1), buf(1), buf(1), buf(4)]
After zc(1): _zc_bufs = [tb(1)], _zc_len = 1, _end = 0; _zc_len < _size, nothing flushed
After buf(1): Goes to the write(const char*, 1) fast path — but !_buf, so falls to slow_write; _end=0, n=1 < _size=5, so skips the !_end && n >= _size branch; allocates _buf, copies 1 byte, _end = 1
After buf(1): Fast path: fits in _buf, _end = 2
After buf(4): _end=2, n=4, n > _size - _end = 3, so goes to slow_write; _end=2, so skips the !_end && n >= _size branch; now = min(4, 5-2) = 3, copies 3 bytes, _end = 5; now != n (3 != 4), so allocates next with max(4-3, 5) = 5 bytes, copies 1 byte into it; _zc_bufs is non-empty, so enters the new branch: seals _buf.share(0, 5) into _zc_bufs; _zc_len += 5 → _zc_len = 6; _zc_len += n - now = 1 → _zc_len = 7; pushes next into _zc_bufs; trim_to_size=false → calls zero_copy_put(_zc_bufs)
So zero_copy_put sends [tb(1), tb(5), tb(5)] — but next only has 1 byte of valid data in it, yet it’s a 5-byte allocation! We’re pushing the full _size-sized next buffer but only n - now = 1 bytes were written into it. The buffer isn’t trimmed before being pushed.
The fix: trim next to n - now bytes before pushing it into _zc_bufs when _zc_bufs is non-empty. Shall I go ahead and fix it?
The root cause was obvious and the suggested fix was simple and correct. So, we accepted the
suggestion and the debugging session went on. We went through
several rounds like that, with each step producing more lengthy
analysis and more sophisticated fix proposals. At some point, a
test with a small buffered write followed by a
zero-copy write showed data corruption downstream.
Tracing the execution revealed that after trimming the
internal buffer past the filled region, the
trimmed buffer still appeared non-null
even though it had zero usable capacity remaining. The
fast-path check for buffer availability wasn’t
prepared for that and didn’t trigger reallocation on
the next write. As a result, bytes were written into unowned
memory. Another case involved a buffered write code
branch which handles writes larger than
stream_size by chopping them into chunks. After a mode
switch, the internal buffer might become a small
remnant – smaller than the tail chunk the
code in question was trying to store there. Tracing through the
code, we saw that this was where the capacity
check wasn’t prepared to handle the remnant
buffer. It silently assumed that the internal
buffer always had room for a full-sized tail. The result was
an ASan-detected heap overflow. A more
subtle issue arose around the same remnant buffer in a
different scenario. When the buffered-write chopping code
encounters a tail chunk that is smaller than the
stream_size, but larger than the
remnant's remaining capacity, it has to make a choice.
It could either fill the remnant partially and
asynchronously put it before allocating a fresh buffer for the
rest, or simply abandon the remnant and allocate a
fresh full buffer. The first option is more space-efficient, but
would require an async flush inside what is
otherwise a synchronous setup step, significantly complicating the
code. The second option wastes the unused bytes of the
remnant's allocation – but crucially, it doesn’t leak
them. The remnant shares its underlying allocation
with the sealed buffer already in the zero-copy
container, so the memory is freed once that buffer is
flushed and all references to the allocation are dropped. The
deliberate trade-off – wasted but not leaked – was worth making,
and a comment in the code explains the reasoning for whoever reads
it next. Each bug effectively had the same shape: a subtle
assumption about stream state that held in the
original single-mode code silently broke in mixed-mode scenarios.
The invariant test exposed the bugs by providing a
minimal reproducible case and a clear description of which
invariant was violated. That also made each one straightforward
to reason about and fix.
The Result
The work touches tests and
implementation in roughly equal measure, which feels about right
for a change like this. The test suite grew from a handful of
hand-crafted cases into an exhaustive invariant-based
framework that covers all combinations of chunk sizes and
write types – something that would have been impractical to write
by hand. On the implementation side, the long-standing restriction
on mixed-mode writes is gone. Buffered and
zero-copy writes can now be freely interleaved in any
order, with the stream handling the transitions internally. This
preserves ordering and the chunk-size invariants that
sinks depend on. In general, writing a test that covers as many
situations as possible and then making sure that the code
passes those tests is a very good approach. It helps ensure the end
code is correct. In rare cases when the test covers all
possible situations the code may have to deal with, we can say that
“the code is officially bug free.” Bringing AI into the testing loop
turned out to be the best decision made in this work. Given the
number of test cases and the number of possible combinations of
output_stream inner states, debugging each failing test
case would be a nightmare for the developer.
The Hidden Insanity of DynamoDB Pricing
Learn how to navigate some of the sneakiest aspects of DynamoDB pricing DynamoDB’s pricing model has some head-scratching quirks that slyly inflate bills by hundreds of thousands of dollars per year. Most of these aren’t malicious; they’re just design decisions from 2012 that made sense at the time, but became increasingly absurd at scale. This post walks through four of the most egregious examples and the real cost impact on teams running large workloads. Cost per item size is punitive DynamoDB charges you for writes per 1KB chunk and reads per 4KB chunk. This means: 1KB write = 1 WCU 1.1KB write = 2 WCUs (you’re charged for 2KB, but only used 1.1KB) 1.5KB write = 2 WCUs 2.1KB write = 3 WCUs Every byte over a threshold doubles your cost for that operation. It’s a tax on items that don’t fit neatly into the billing boundary. And almost nothing fits neatly: JSON payloads with nested objects, variable-length strings, metadata, timestamps… Most real-world items end up hitting those boundaries, so you risk paying 2x or more for the overage. Consider a team logging 100M events per day, averaging 1.2KB each. That’s ~120M writes, almost all hitting the 2KB billing threshold. They’re paying for 200M KB instead of 120M KB. That’s a 67% surcharge baked into every bill. If their write cost is $10,000/month, that surcharge alone is ~$6,700/month in wasted capacity. On demand comes at a premium On-demand pricing was introduced as a convenience layer for unpredictable workloads. It saves teams the pain of provisioning and forecasting (“just pay for what you use”). The trade-off is that pricing is steep. Even after AWS’s recent price cut (it used to be ~15x!), on-demand is 7.5x more expensive than provisioned capacity. For a team that starts on on-demand and never switches, the cost difference is catastrophic. For example, say a SaaS company launches a new product on DynamoDB; they start with on-demand for convenience and quickly scale to 20K reads/sec and 20K writes/sec. On-demand now costs $39K/month. Switching to provisioned would drop that down to $11K/month. And teams often don’t switch because ‘it works’ or ‘the bill surprise hasn’t happened yet.’ The convenience tax on DynamoDB is insane. Even if you wanted to retain that flexibility, ScyllaDB would cost $3K/month for on-demand or just $1K/month with a hybrid subscription + flex component. Multi-region network costs are deceptive Global Tables already charge replicated writes (rWCUs) at a premium. But there’s a second hidden cost too: data transfer. AWS charges for cross-region data transfer at standard EC2 rates: $0.02/GB to adjacent regions, up to $0.09/GB to distant regions. As a result, Global Tables end up costing 2-3x more than expected. These hidden network costs often don’t appear as a line item on your DynamoDB bill. They’re rolled up into ‘Data Transfer’ charges. Many teams don’t notice or attribute it correctly. ScyllaDB can’t escape the variable costs of cross-region data transfer that AWS enforces. However, we have a number of cost reduction mechanisms that assist with these costs. ScyllaDB handles multi-DC replication natively. You provision nodes in each data center, and replication is built into the protocol. There are also shard-aware and rack-aware drivers, which help minimize network overhead. Add network compression, and your cross-region data costs get even lower. Reserved capacity requires you to predict capacity Reserved capacity offers massive discounts, up to 70% off. 
But there’s a catch: you must commit for 1 or 3 years upfront, and you must predict your read and write throughput independently. This is absurdly difficult. Your workload changes: new features launch, old features get deprecated, customer behavior shifts, and traffic patterns evolve. Predicting the exact read/write ratio years out is impossible. Teams either over-commit (wasting money on unused capacity) or under-commit (paying on-demand rates for the overage). Example: You commit to 200K reads/sec and 500K writes/sec for 1 year. On DynamoDB, that is going to cost $1.4M/year for the upfront and annual commitment. But six months into the year, growth exceeds your capacity estimates and your application starts having requests throttled. You revert to autoscaling a mixture of reserved plus on-demand. Now, you’re paying the 7.5x markup – and that costly misjudgment is locked in for the remainder of the year. The solution? Over-commit to hedge your bets. This guarantees you’re wasting money on overprovisioning, just to avoid even higher on-demand charges. It’s a no-win scenario. Compare this to ScyllaDB with a hybrid subscription + flex component that automatically scales to your requirements throughout the year, which might cost $133K/year to start with. Radically less expensive and more flexible (on both compute and storage requirements) thanks to true elastic scaling with X Cloud.
Why does this matter?
These four pricing quirks aren’t hypothetical. Combined, they add tens of thousands to six figures per year to bills across the industry. They’re especially brutal for write-heavy workloads, multi-region systems, and large items. And because they’re partially hidden – buried in separate line items, masked by the per-operation model, or justified by architectural constraints – teams often don’t realize how much they’re paying. Some of this is inevitable with a fully managed service. But databases built on different cost models can deliver the same durability, consistency, flexibility and scale at a fraction of the price. For example, this is the case with ScyllaDB, which charges by the node and includes replication and large items at no extra cost. Curious what your workload actually costs? Use the ScyllaDB DynamoDB Cost Calculator at calculator.scylladb.com to model your real costs, including all the hidden charges, and see how ScyllaDB pricing stacks up.
Powering a Billion Dreams: Scaling Meesho’s E-commerce Platform
How ScyllaDB plays a critical role in handling Meesho’s millions of transactions – optimizing our catalog rankings and ensuring ultra-low-latency operations With over a billion Indians set to shop online, Meesho is redefining e-commerce by making it accessible, affordable, and inclusive at an unprecedented scale. But scaling for Bharat isn’t just about growth—it’s about building a tech backbone that can handle massive traffic surges, dynamic pricing, real-time recommendations, and seamless user experiences. Let me take you behind the scenes of Meesho’s journey to democratize e-commerce while operating at monster scale. We’ll cover how ScyllaDB plays a critical role in handling Meesho’s millions of transactions – optimizing our catalog rankings and ensuring ultra-low-latency operations. Note: Adarsha Das from Meesho will be presenting a keynote at the upcoming Monster Scale Summit India/APAC. That talk is on BharatMLStack, an open-source, end-to-end machine learning infrastructure stack built at Meesho to support real-time and batch ML workloads at Bharat scale. Join Monster Scale Summit India/APAC — it’s free and virtual About Meesho In case you’re not familiar with Meesho, we’re an Indian e-commerce platform. The company was founded in 2015 to connect small and medium enterprises in India. Meesho helps consumers from these areas access products from all over India, beyond their local markets. Meesho focuses on bringing affordable product selections to Tier 2 cities and smaller markets. The company operates with a zero-commission model that reduces barriers for sellers. We function as an asset-light marketplace that connects sellers, logistics partners, and consumers. We make the listing process quite simple. Sellers just need to take a picture of the product, upload it, set the price, and start selling. Why Personalization is Essential for Meesho Meesho’s architecture aims to support people who are new to e-commerce. Tech-savvy users from Tier 1 cities likely know how to use search, tweak keywords, and find what they want. But someone from a Tier 2 city, new to e-commerce, needs discovery to be simpler. That’s why we invested in a lot of tech to build personalized experiences on the app. Specifically, we invested significantly in AI and personalization technologies to create intuitive app experiences. We personalize all the way from the moment the app opens to order completion. For example, different users see different homepages and product selections based on their preferences and purchase history. We also personalize for sellers, helping them create product descriptions that make sense to their buyers. Real-Time Feed-First Personalization Meesho meets these needs with a fundamentally feed-first app. We create a tailored product feed, ranking products based on preferences and actions (searches, clicks, etc). To do this, we built a CTR (click-through rate) prediction model to decide what product tiles to show each user, and in what order. Two people logging in will see different selections based on their behavior. Given all this, Meesho had to move from traditional recommendation systems to real-time, highly personalized experiences. Batch processing wasn’t sufficient; our personalization must respond instantly to recent user actions. That requires low-latency databases and systems at scale, with the ability to support millions of sellers and users on the app simultaneously. 
Why ScyllaDB We experimented with a few different databases and data stores: SQL, NoSQL, columnar, and non-columnar. Some worked at certain scales. But as we kept growing, we had to reinvent our storage strategy. Then we discovered ScyllaDB, which met our needs and proved itself at Meesho scale. More specifically, ScyllaDB provided… Horizontal Scaling Given the ever-increasing scale of Meesho – where user transactions kept increasing and users kept growing over years – horizontal scalability was very important to us. Today, I might be running with X nodes. If that becomes 2X tomorrow, how do you scale in a live manner? Being a low-cost e-commerce platform, we are conscious about server spend, so we try to emulate traffic patterns by dynamically scaling up and down based on demand. For example, not all 24 hours have the same number of orders; there are peaks and lows. We want to provision for baseline load and auto-scale for demand without downtime, since the cost of downtime for a business like ours is very high. Downtime can result in user churn and loss of trust, so we prioritize reliability and availability above all. Moreover, we expect that adding new nodes will linearly increase throughput. For example, if I run an X-node cluster and add nodes, I should get a proportionate throughput increase. This is critical as we scale up or down. We observed that in distributed systems with a primary-secondary configuration, the primary can become a bottleneck. So, we wanted a peer-to-peer architecture like ScyllaDB’s, where each node can service writes as well as reads. ScyllaDB gives us linear scalability. Low-Level Optimizations for Efficiency The database’s efficiency is also a factor for us. A major challenge we saw in JVM- or Java-based systems was garbage collection and related overheads. These impact performance, interrupt scaling, and limit hardware utilization. That’s why we prefer C++-based or other low-level language implementations, with minimal JVM or garbage collection issues, and minimal memory overhead. Most of our use cases require low-latency, real-time personalization, where every bit of memory is used for application logic and data, not overhead. Smart Architecture and Fault Tolerance Having a smart, fault-tolerant architecture was another consideration. Much of our user base is in Tier 3 and 4 cities, where network connectivity is sometimes flaky. We want to provide a Tier 1 user experience to Tier 4 users, so low latency is critical. We prioritize keeping latency within a few milliseconds. One of ScyllaDB’s key features is token-aware routing. When a query comes, it goes directly to the node with relevant data – reducing network hops since each node acts as its own master. This is the kind of distributed architecture we were looking for, and the token-level routing helps with horizontal scalability. Reliability and fault tolerance are also major requirements. When running on a public cloud, a big pain point is a particular zone going down. We’ve seen cloud regions and zones go down before. To minimize impact, we look for automatic data replication across zones and seamless failover in case of failures, so that user impact is minimal. Building trust with first-time e-commerce users is hard. If we lose it, getting them back is even harder. That’s why this capability is critical. Operational Simplicity Another thing we wanted is operational simplicity—having a system where adding or removing nodes is as simple as running a script or clicking a button. 
We like having an engine where we don’t need to tune everything ourselves.
Results So Far
We’ve been using ScyllaDB to power very low-latency systems at high throughput, for both reads and writes. We started with small workflows, scaled to platform workflows like ML platform and experimentation performance, and continue to scale. It’s been a good journey so far, and we’re looking forward to using it for more use cases.
Agentic AI State Management with ScyllaDB and LangGraph
How to combine LangGraph and ScyllaDB for durable state management, crash recovery, and a highly available backend for your agentic AI applications. Most agent implementations today are request-response loops. The challenge with this approach is that you are just one network issue or server restart away from losing context and progress. We have more powerful LLMs than ever, yet we’re wrapping them in fragile infrastructure. As an example, assume you have an agent process that takes three minutes and involves seven API calls. There are a lot of places where it can go wrong. The process dies, the state disappears, and the agent starts over with no recollection of what it was doing. Implementing a well-designed workflow orchestration client is not enough to solve this problem. You also need to implement a distributed and highly available backend to support your agents, something with: multi-region, durable storage automatic data replication fault tolerance high-throughput This post shows you how you can simplify your backend by using a single mature database that handles both high availability and durable storage for your agents. You write agent state to a persistent store, it survives crashes by default, and you can still meet 5ms P99 latency requirements. Pair that with an orchestration framework like LangChain’s LangGraph that saves state after every step, and you get a reliable and scalable agentic backend. Let’s see why and how you should implement a system like that with ScyllaDB. Achieving zero agent downtime with ScyllaDB ScyllaDB is a high-performance distributed NoSQL database designed to stay up and available for mission-critical applications. The Raft consensus algorithm handles topology changes and schema updates with strong consistency. Replication is automatic: you set a replication factor and ScyllaDB distributes copies across nodes, racks, and datacenters. On temporary node loss, Hinted Handoffs record missed writes and replay them when the node returns. For longer outages, row-level repair brings a replacement node up to date in the background. You don’t need load balancers, external replication jobs, or manual failover steps. ScyllaDB Cloud is a mature cloud offering. Multi-region clusters with tunable replication factors per datacenter, rack and availability-zone awareness, and zero-downtime operations are all available out of the box, with no extra components required. ScyllaDB also provides practical features for agentic use cases… Persistent by design Every write goes to durable storage. There is no configuration flag to enable durability; it is the default, not an option. Persistence allows your agent to recover from crashes and continue a process. Data model In ScyllaDB, you design tables around the queries your application will run. A partition key determines which node owns the data, rows within a partition are sorted by a clustering key, and that sort order is fixed at schema creation time. This design is a great fit for key-value agentic systems. Lightweight transactions ScyllaDB supports LWTs to provide compare-and-set semantics natively, without client-side locking:INSERT IF NOT EXISTS and UPDATE ... IF ...
This feature enables idempotent checkpoint writes.
Time-to-live
Agentic sessions eventually go stale. ScyllaDB provides a native
way to expire old data from your database.
ScyllaDB’s role in your agentic infrastructure
Now let’s explore specific use cases where
ScyllaDB helps you build agentic applications. The following
examples use LangGraph (TypeScript) and the community-created
ScyllaDBSaver checkpointer.
What is a checkpointer?
A checkpointer is LangGraph’s abstraction for a persistence backend.
This is how LangGraph integrates with databases.
Durable conversation memory
One of the main technical problems with
agents is handling failures such as:
- network hiccups
- server restarts
- other reasons a process gets killed midway through
The in-memory state is gone, and the agent behaves as if the
conversation never happened. LangGraph’s
MemorySaver (built-in in-memory checkpointer) makes this
reproducible. Run two turns, discard the saver object, create a new
one, and run a third turn:
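A minimal TypeScript sketch of that experiment might look like this (illustrative only: the agent node is a stub rather than a real LLM call, and imports may vary slightly by LangGraph version):

```typescript
import { StateGraph, MessagesAnnotation, MemorySaver, START, END } from "@langchain/langgraph";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

// Build a tiny graph whose single node just reports how many messages it sees.
const buildGraph = (checkpointer: MemorySaver) =>
  new StateGraph(MessagesAnnotation)
    .addNode("agent", async (state) => ({
      messages: [new AIMessage(`I have seen ${state.messages.length} message(s) so far`)],
    }))
    .addEdge(START, "agent")
    .addEdge("agent", END)
    .compile({ checkpointer });

const config = { configurable: { thread_id: "demo-thread" } };

// Two turns against one in-memory saver: the second turn sees the first.
let graph = buildGraph(new MemorySaver());
await graph.invoke({ messages: [new HumanMessage("turn 1")] }, config);
await graph.invoke({ messages: [new HumanMessage("turn 2")] }, config);

// "Restart the server": a fresh saver object has no record of the thread,
// so the third turn starts from an empty conversation.
graph = buildGraph(new MemorySaver());
await graph.invoke({ messages: [new HumanMessage("turn 3")] }, config);
```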
thread_id: a named
conversation/session in LangGraph; all checkpoints for one
conversation share the same thread. With ScyllaDB as the
checkpointer, all three requests operate identically from an
application standpoint. The agent picks up exactly where it left
off because the conversation state lives in the database rather
than in the server process.
ScyllaDBSaver example: The query that loads state
on every invoke() is:
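(A sketch in CQL; the non-key column names here are assumptions rather than necessarily the checkpointer’s actual schema.)

```cql
SELECT checkpoint_id, checkpoint, metadata
FROM langgraph.checkpoints
WHERE thread_id = ? AND checkpoint_ns = ?
LIMIT 1;
```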
Note that we don’t use ORDER BY or run a full-table scan. There’s only one
row returned: the most recent checkpoint for the thread. Why does
LIMIT 1 return the newest row without an explicit
sort? Let’s see how the ScyllaDB data model enables this kind of
query.
Source:
https://aws.amazon.com/blogs/database/build-durable-ai-agents-with-langgraph-and-amazon-dynamodb/
Query-first schema design: reading the latest checkpoint
LangGraph
reads the latest checkpoint on every invoke(). In a
busy agent server, that is a read-heavy query pattern. The
checkpoints table is defined with a compound primary
key:
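Sketched in CQL, the key layout below matches the description that follows; the payload columns are assumptions for illustration.

```cql
CREATE TABLE IF NOT EXISTS langgraph.checkpoints (
    thread_id      text,
    checkpoint_ns  text,
    checkpoint_id  text,     -- UUIDv6, lexicographically time-ordered
    checkpoint     blob,     -- serialized state (assumed column)
    metadata       blob,     -- source, step, etc. (assumed column)
    PRIMARY KEY ((thread_id, checkpoint_ns), checkpoint_id)
) WITH CLUSTERING ORDER BY (checkpoint_id DESC);
```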
The partition key is (thread_id, checkpoint_ns). That means this key will be used to partition your data across the
ScyllaDB cluster. All checkpoints for a single conversation land in
the same partition. “Get all steps for this conversation” never
requires cross-node coordination. The clustering key is
checkpoint_id DESC. It makes sure that the rows within
each partition are sorted according to that column in descending
order. Because checkpoint_id is a UUIDv6
(which encodes a timestamp in its bit layout), rows are physically
stored on disk with the newest checkpoint first. LIMIT
1 on a partition scan reads only the first row; no full scan
is required.
Source: https://docs.langchain.com/oss/python/langgraph/persistence
Crash recovery with idempotent writes
A node in an agent graph can
fail mid-execution after it has already written some of its output.
Without a write-ahead
log, the only safe option on retry is to re-run the node from
scratch. This may produce duplicates, trigger external side
effects, or be expensive for long-running LLM calls. ScyllaDB and
LangGraph solve this with a second table,
checkpoint_writes, that acts as a write-ahead log at
the
channel level: Before a checkpoint row is written to
checkpoints, each individual channel write is staged
in checkpoint_writes using a lightweight transaction:
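For example, a staged channel write might look like this in CQL (the columns beyond the key are assumptions):

```cql
INSERT INTO langgraph.checkpoint_writes
    (thread_id, checkpoint_ns, checkpoint_id, task_id, idx, channel, value)
VALUES (?, ?, ?, ?, ?, ?, ?)
IF NOT EXISTS;
```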
IF NOT EXISTS is an idempotent insert. Here’s what
happens if the server crashes after three of five channel writes
have landed and then restarts:
- LangGraph loads the latest checkpoints row
- It loads the pending checkpoint_writes for that checkpoint ID
- It finds the three completed writes
- It resumes from there without re-running successful steps
The partition key on
checkpoint_writes is (thread_id, checkpoint_ns,
checkpoint_id). All pending writes for a single checkpoint
are in the same partition. “Load all pending writes for checkpoint
X” is a single-partition scan, not a cross-cluster lookup. The two
tables serve different query patterns. Keeping them separate makes
both queries efficient.
Time-travel and conversation history
LangGraph exposes historical snapshots through the checkpointer’s
list() method:
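A hedged sketch (checkpointer is the configured saver; the field names follow LangGraph’s CheckpointTuple type):

```typescript
const config = { configurable: { thread_id: "support-123" } };
for await (const tuple of checkpointer.list(config)) {
  console.log(tuple.checkpoint.id, tuple.metadata?.source, tuple.metadata?.step);
}
```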
Each tuple is a full CheckpointTuple: the serialized state at that step, the
metadata (source, step number, what changed), and the config needed
to resume from that exact point. That last part is what enables
time-travel: pass a past checkpoint_id as the starting
configuration and LangGraph replays from there, branching the
conversation into an alternative trajectory without modifying the
original history. Here’s the underlying ScyllaDB query:
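Something along these lines (same assumed columns as the schema sketch above):

```cql
SELECT checkpoint_id, checkpoint, metadata
FROM langgraph.checkpoints
WHERE thread_id = ? AND checkpoint_ns = ?;
```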
You get all rows for one thread in one partition, sorted newest-first. This is
the same partition that hosts the latest-checkpoint read. No
additional indexes are required for the history use case. The
source field indicates what kind of step produced it:
- "input" (user message ingested, before any node ran)
- "loop" (a node executed)
- "update" (state was patched directly via graph.updateState()).
Secondary indexes on source and step
allow filtering across all threads when needed.
Auto-expire data with time-to-live
Production agent deployments accumulate
checkpoint data continuously. A customer support agent with 10,000
active threads, each with a 10-turn history, generates tens of
thousands of checkpoint rows. Sessions eventually go stale. You
might decide, for example, that a thread abandoned by the user
after one message can be deleted and stored elsewhere after a
certain period of time. In ScyllaDB, TTL is part of the data model.
You attach it directly to the inserted row at write time:
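For example (with the same assumed columns as above):

```cql
INSERT INTO langgraph.checkpoints (thread_id, checkpoint_ns, checkpoint_id, checkpoint, metadata)
VALUES (?, ?, ?, ?, ?)
USING TTL 86400;
```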
USING TTL 86400 tells ScyllaDB to delete this row
after 24 hours. The same TTL clause appears on
checkpoint_writes in the same write batch. The
ScyllaDBSaver accepts a ttlConfig
parameter that applies this clause to every write: Change
defaultTTLSeconds and every subsequent write picks up
the new expiry. No migration required. Integrate ScyllaDB into your
LangGraph project To use ScyllaDB as a persistent store in your
LangGraph application, you need to install the ScyllaDB
checkpointer. This package will handle the migration and all
subsequent CQL queries for you.
Install the package:
npm install @gbyte.tech/langgraph-checkpoint-scylladb
Create the schema:
npm run migrate
# runs: CREATE KEYSPACE IF NOT EXISTS langgraph ...
#       CREATE TABLE IF NOT EXISTS langgraph.checkpoints ...
#       CREATE TABLE IF NOT EXISTS langgraph.checkpoint_writes ...
Wire the checkpointer into your graph:
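A hypothetical wiring sketch is shown below. The ScyllaDBSaver constructor options and the ttlConfig shape are assumptions based on the description above, the cluster settings are placeholders, and the graph itself is a stub rather than a real agent:

```typescript
import { Client } from "cassandra-driver";
import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { ScyllaDBSaver } from "@gbyte.tech/langgraph-checkpoint-scylladb";

// Assumed cluster settings; point these at your ScyllaDB Cloud or self-hosted nodes.
const client = new Client({
  contactPoints: ["scylla-node1:9042"],
  localDataCenter: "datacenter1",
  keyspace: "langgraph",
});

// Assumed constructor shape; ttlConfig/defaultTTLSeconds are described above.
const checkpointer = new ScyllaDBSaver(client, {
  ttlConfig: { defaultTTLSeconds: 86400 },   // expire stale sessions after 24h
});

const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", async () => ({}))        // placeholder node; call your model here
  .addEdge(START, "agent")
  .addEdge("agent", END)
  .compile({ checkpointer });                // every step is now checkpointed in ScyllaDB

await graph.invoke(
  { messages: [new HumanMessage("Hi")] },
  { configurable: { thread_id: "support-123" } },
);
```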
Wrapping up
By combining LangGraph with ScyllaDB’s built-in durability and high
availability, you move from fragile, stateful processes to
resilient agent systems. Restarts, retries, or lost context won’t
be a problem because your architecture treats failure as a normal
condition and continues seamlessly. This shift simplifies your
infrastructure as well as enables more ambitious, long-running
agent workflows to operate reliably at scale. Learn more about
ScyllaDB and agentic applications:
- Clone the example application
- Read how others use ScyllaDB for AI use cases
- Sign up for ScyllaDB Cloud
Why We Changed ScyllaDB’s Approach to Repair
By focusing solely on unrepaired data, we made ScyllaDB’s incremental repair 10X faster Maintaining data consistency in large-scale distributed databases often comes at a high performance cost. As clusters grow and data volume expands rapidly, traditional repair methods often become a bottleneck. At ScyllaDB, we needed a way to make consistency checks faster and more efficient. In response, we implemented Incremental Repair for ScyllaDB’s tablets data distribution. It’s an optimization designed to minimize repair overhead by focusing solely on unrepaired data. This blog explains what it is, how we implemented it, and the performance gains it delivered. What is Incremental Repair? Before we talk about the incremental part, let’s look at what repair actually involves in a distributed system context. In a system like ScyllaDB, repair is an essential maintenance operation. Even with the best hardware, replicas can drift due to network hiccups, disk failures, or load. Repair detects mismatches between replicas and fixes them to ensure every node has the latest, correct version of the data. It’s a safety net that guarantees data consistency across the cluster. Incremental repair is a new feature in ScyllaDB (currently available for tablets-based tables). The idea behind it is simple: why worry about data that we have already repaired? Traditional repair scans everything. Incremental repair targets only unrepaired data. Technically, this is achieved by splitting SSTables into two distinct sets: repaired and unrepaired. The repaired set is consistent and synchronized, while the unrepaired set is potentially inconsistent and requires validation. We created two modes of incremental repair: incremental and full. In incremental mode, only SSTables in the unrepaired set are selected for the repair process. Once the repair completes, those SSTables are marked as repaired and promoted into the repaired set. This should be your default mode because it significantly minimizes the IO and CPU required for repair. In full mode, the incremental logic is still active, but the selection criteria change. Instead of skipping the repaired set, it selects all data (both repaired and unrepaired). Once the process is finished, all participating SSTables are marked as repaired. Think of this as a “trust but verify” mode. Use this when you want to revalidate the entire data set from scratch while still using the incremental infrastructure. Finally, there’s disabled mode, where the incremental repair logic is turned off. In this case, the repair behavior is exactly the same as in previous versions of ScyllaDB – before the incremental repair feature was introduced. It selects both repaired and unrepaired SSTables for the repair process. After repair completes, the system does not mark SSTables as repaired. This is useful for scenarios where you want to run a repair without affecting the metadata that tracks the repair state. Incremental repair is integrated directly into the existing workflow, with three options: nodetool lets you use the standard nodetool cluster repair command with incremental flags. ScyllaDB Manager also supports the same flags for automated scheduling. A REST API makes incremental repair available to teams building custom tools. Making incremental repair work (internals) To make incremental repair work, we had to solve a classic distributed systems problem: state consistency. We need to know exactly which data is repaired, and that state must survive crashes. 
So, we track this using two decoupled markers. repaired_at, number stored directly in the SSTable metadata on disk. sstables_repaired_at, a value stored in our system tables. The logic follows a two-phase commit model. Phase one: prepare. First, we run the row-level repair. Once that is finished, we update the repaired_at value in the SSTables. At this point, the system still treats them as unrepaired because they haven’t been activated yet. Phase two: commit. After every node confirms the row-level repair and the updated repaired_at value, we update the sstables_repaired_at value in the system table. We define an SSTable as repaired if and only if it is not zero and it is less than or equal to the system’s repaired_at value. If a node crashes between phases, the mismatch between the file and the system table ensures that we don’t accidentally skip data that wasn’t 100% verified. Under normal operations, you don’t need to run full repairs regularly. Still, it’s needed occasionally. If you experience significant loss of SSTables (perhaps due to a disk failure), then a full repair is required to reconstruct missing data across the cluster. In practice, we suggest one full repair after a long series of incremental runs. This gives you an extra layer of security, even if it is not strictly required. This brings us to a critical challenge: compaction. If we let compaction mix repaired and unrepaired data, the repaired status would be lost, and we’d need to re-repair everything. To solve this, we introduce compaction barriers. We effectively split the tablets into two independent worlds. The unrepaired set, where all new writes and memtable flushes go. Compaction only merges unrepaired SSTables with other unrepaired ones. The repaired set, where SSTables are compacted together to maintain and optimize the read path. The rule is that a compaction strategy can never merge an unrepaired SSTable into the repaired set. The only bridge between these sets is repair. This prevents potentially inconsistent data from polluting the repaired set. With this new design, compaction now has a dependency on repair. As we run repairs, the repaired sets grow. But because incremental repair is so much lighter than traditional methods, we encourage you to run it much more frequently. We are currently working on an automatic repair feature that will trigger those runs at the very moment the unrepaired set grows too large. That should keep your unrepaired window as small as possible. The efficiency of incremental repair depends on your workload: Update heavy: If you have lots of overwrites or deletes, new data will invalidate older repaired SSTables. In extreme cases, there could be so much data to repair that it looks a lot like full repair. Append heavy: This is a perfect use case, like IoT or logging. Since new data doesn’t invalidate old data, the repaired set stays consistent and untouched. This should provide nice performance gains. Even in update-heavy cases, don’t lose anything by choosing incremental repair. In the worst case, it performs the same amount of work as a full repair would. In almost all real-world scenarios, you could gain significant improvements without any trade-off with respect to consistency. Performance Improvements To understand how this approach translates to performance improvements, let’s model the improvement ratio. Say that n is the size of your new unrepaired data and E is the size of your existing repaired data. A full repair works on E plus n. Incremental repair works only on n. 
So, the improvement ratio equals n / (E + n). If you ingest 100 gigabytes a day on a 10-terabyte node, you are repairing only 1% of the data instead of 100%. This is an order-of-magnitude shift in overhead. Our testing confirms that theory. We ran multiple insert and repair cycles. In the first round, nearly all the data was new, so the repair time was almost the same as with full repair. In the second round, with a 50/50 split, the time dropped by half. In the third round, as the repaired set became dominant, incremental repair took only 35% of the time that a full repair would have taken. To wrap up, incremental repair for tablets is faster, lighter, and more efficient. It is a foundational step toward our goal of a fully autonomous database that handles its own maintenance. By adopting this feature, you reduce operational burden and ensure your cluster remains consistent without repair storms.
Stop Answering the Same Question Twice: Interval-Aware Caching for Druid at Netflix Scale
By Ben Sykes
In a previous post, we described how Netflix uses Apache Druid to ingest millions of events per second and query trillions of rows, providing the real-time insights needed to ensure a high-quality experience for our members. Since that post, our scale has grown considerably.
With our database holding over 10 trillion rows and regularly ingesting up to 15 million events per second, the value of our real-time data is undeniable. But this massive scale introduced a new challenge: queries. The live show monitoring, dashboards, automated alerting, canary analysis, and A/B test monitoring that are built on top of Druid became so heavily relied upon that the repetitive query load started to become a scaling concern in itself.
This post describes an experimental caching layer we built to address this problem, and the trade-offs we chose to accept.
The Problem
Our internal dashboards are heavily used for real-time monitoring, especially during high-profile live shows or global launches. A typical dashboard has 10+ charts, each triggering one or more Druid queries; one popular dashboard with 26 charts and stats generates 64 queries per load. When dozens of engineers view the same dashboards and metrics for the same event, the query volume quickly becomes unmanageable.
Take the popular dashboard above: 64 queries per load, refreshing every 10 seconds, viewed by 30 people. That’s 192 queries per second from one dashboard, mostly for nearly identical data. We still need Druid capacity for automated alerting, canary analysis, and ad-hoc queries. And because these dashboards request a rolling last-few-hours window, each refresh changes slightly as the time range advances.
Druid’s built-in caches, both the full-result cache and the per-segment cache, are effective. But neither is designed to handle the continuous, overlapping time-window shifts inherent to rolling-window dashboards. The full-result cache misses for two reasons:
- If the time window shifts even slightly, the query is different, so it’s a cache miss.
- Druid deliberately refuses to cache results that involve realtime segments (those still being indexed), because it values deterministic, stable cache results and query correctness over a higher cache hit rate.
The per-segment cache does help avoid redundant scans on historical nodes, but we still need to collect those cached segment results from each data node and merge them in the brokers with data from the realtime nodes for every query.
During major shows, rolling-window dashboards can generate a flood of near-duplicate queries that Druid’s caches mostly miss, creating heavy redundant load. At our scale, solving this by simply adding more hardware is prohibitively expensive.
We needed a smarter approach.
The Insight
When a dashboard requests the last 3 hours of data, the vast majority of that data, everything except the most recent few minutes, is already settled. The data from 2 hours ago won’t change.
What if we could remember the older portions of the result and only ask Druid for the part that’s actually new?
This is the core idea behind a new caching service that understands the structure of Druid queries and serves previously-seen results from cache while fetching only the freshest portion from Druid.

A Deliberate Trade-Off
Before diving into the implementation, it’s worth being explicit about the trade-off we’re making. Caching query results introduces some staleness, specifically, up to 5 seconds for the newest data. This is acceptable for most of our operational dashboards, which refresh every 10 to 30 seconds. In practice, many of our queries already set an end time of now-1m or now-5s to avoid the “flappy tail” that can occur with currently-arriving data.
Since our end-to-end data pipeline latency is typically under 5 seconds at P90, a 5-second cache TTL on the freshest data introduces negligible additional staleness on top of what’s already inherent in the system. We decided it was better to accept this small amount of staleness in exchange for significantly lower query load on Druid. But a 5s cache on its own is not very useful.
Exponential TTLs
Not all data points are equally trustworthy. In real-time analytics, there’s a well-known late-arriving data problem. Events can arrive out of order or be delayed in the ingestion pipeline. A data point from 30 seconds ago might still change as late-arriving events trickle in. A data point from 30 minutes ago is almost certainly final.
We use this observation to set cache TTLs that increase exponentially with the age of the data. Data less than 2 minutes old gets a minimum TTL of 5 seconds. After that, the TTL doubles for each additional minute of age: 10 seconds at 2 minutes old, 20 seconds at 3 minutes, 40 seconds at 4 minutes, and so on, up to a maximum TTL of 1 hour.
The effect is that fresh data cycles through the cache rapidly, so any corrections from late-arriving events in the most recent couple of minutes are picked up quickly. Older data lingers much longer, because our confidence in its accuracy grows with time.
For a 3-hour rolling window, the exponential TTL ensures the vast majority of the query is served from the cache, leaving Druid to only scan the most recent, unsettled data.
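As a sketch (illustrative, not the actual service code), the policy can be expressed as:

```typescript
const MIN_TTL_SECONDS = 5;
const MAX_TTL_SECONDS = 3600;

function bucketTtlSeconds(ageMinutes: number): number {
  if (ageMinutes < 2) {
    return MIN_TTL_SECONDS;                       // freshest buckets cycle every 5s
  }
  // Doubles per minute of age: 10s at 2m, 20s at 3m, 40s at 4m, ...
  const ttl = MIN_TTL_SECONDS * 2 ** (Math.floor(ageMinutes) - 1);
  return Math.min(ttl, MAX_TTL_SECONDS);          // capped at 1 hour
}
```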

Bucketing
If we were to use a single-level cache key for the query and interval, similar to Druid’s existing result-level cache, we wouldn’t be able to extract only the relevant time range from cached results. A shifted window means a different key, which means a cache miss.
Instead, we use a map-of-maps. The top-level key is the query hash without the time interval; the inner keys are timestamps bucketed to the query granularity (or 1 minute, whichever is larger) and encoded as big-endian bytes so lexicographic order matches time. This enables efficient range scans; fetching all cached buckets between times A and B for a query hash. A 3-hour query at 1-minute granularity becomes 180 independent cached buckets, each with its own TTL; when the window shifts (e.g., 30 seconds later), we reuse most buckets from cache and only query Druid for the new data.
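A sketch of the inner-key encoding (illustrative; the real service may differ):

```typescript
// The outer key is the query hash (interval removed); the inner key is the
// bucket timestamp encoded big-endian so lexicographic byte order matches time order.
function innerBucketKey(timestampMs: number, queryGranularityMs: number): Buffer {
  const bucketMs = Math.max(queryGranularityMs, 60_000);      // at least 1 minute
  const bucketStart = Math.floor(timestampMs / bucketMs) * bucketMs;
  const key = Buffer.alloc(8);
  key.writeBigUInt64BE(BigInt(bucketStart));                   // sortable as raw bytes
  return key;
}
```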

How It Works
Today, the cache runs as an external service integrated transparently by intercepting requests at the Druid Router and redirecting them to the cache. If the cache fully satisfies a request, it returns the result; otherwise it shrinks the time interval to the uncached portion and calls back into the Router, bypassing the redirect to query Druid normally. Non-cached requests (e.g., metadata queries or queries without time group-bys) pass straight through to Druid unchanged.
This intercepting proxy design allows us to enable or disable caching without any client changes and is a key to its adoption. We see this setup as temporary while we work out a way to better integrate this capability into Druid more natively.
When a cacheable query arrives (one that groups by time: timeseries or groupBy), the cache performs the following steps.
Parsing and Hashing. We parse each incoming query to extract the time interval, granularity, and structure, then compute a SHA-256 hash of the query with the time interval and parts of the context removed. That hash is the cache key: it encodes what is being asked (datasource, filters, aggregations, granularity) but not when, so the same logical query over different overlapping time windows maps to the same cache entry. There are some context properties that can alter the response structure or contents, so these are included in the cache-key.

Cache Lookup. Using the cache key, we fetch cached points within the requested range, but only if they’re contiguous from the start. Because bucket TTLs can expire unevenly, gaps can appear; when we hit a gap, we stop and fetch all newer data from Druid. This guarantees a complete, unbroken result set while sending at most one Druid query, rather than “filling gaps” with multiple small, fragmented queries that would increase Druid load.
Fetching the Missing Tail. On a partial cache hit (e.g., 2h 50m of a 3h window), we rebuild the query with a narrowed interval for the missing 10 minutes and send only that to Druid. Since Druid then scans just the recent segments for a small time range, the query is usually faster and cheaper than the original.
Combining. The cached data and fresh data are concatenated, sorted by timestamp, and returned to the client. From the client’s perspective, the response looks identical to what Druid would have returned, same JSON format, same fields.
Asynchronous Caching. The fresh data from Druid is parsed into individual time-granularity buckets and written back to the cache asynchronously, so we don’t add latency to the response path.

Negative Caching
Some metrics are sparse. Certain time buckets may genuinely have no data. Without special handling, the cache would treat these empty buckets as gaps and re-query Druid for them every time.
We handle this by caching empty sentinel values for time buckets where Druid returned no data. Our gap-detection logic recognizes these empty entries as valid cached data rather than missing data, preventing needless re-queries for naturally sparse metrics.
However, we’re careful not to negative-cache trailing empty buckets. If a query returns data up to minute 45 and nothing after, we only cache empty entries for gaps between data points, not after the last one. This avoids incorrectly caching “no data” for time periods where events simply haven’t arrived yet, which would exacerbate the chart delays of late arriving data.
The Storage Layer
For the backing store, we use Netflix’s Key-Value Data Abstraction Layer (KVDAL), backed by Cassandra. KVDAL provides a two-level map abstraction, a natural fit for our needs. The outer key is the query hash, and the inner keys are timestamps. Crucially, KVDAL supports independent TTLs on each inner key-value pair, eliminating the need for us to manage cache eviction manually.
This two-level structure gives us efficient range queries over the inner keys, which is exactly what we need for partial cache lookups: “give me all cached buckets between time A and time B for query hash X.”
Results
The biggest win is during high-volume events (e.g., live shows): when many users view the same dashboards, the cache serves most identical queries as full hits, so the query rate reaching Druid is essentially the same with 1 viewer or 100. The scaling bottleneck moves from Druid’s query capacity to the much cheaper-to-scale cache, and with ~5.5 ms P90 cache responses, dashboards load faster for everyone.
On a typical day, 82% of real user queries get at least a partial cache hit, and 84% of result data is served from cache. As a result, the queries that reach Druid scan much narrower time ranges, touching fewer segments and processing less data, freeing Druid to focus on aggregating the newest data instead of repeatedly re-querying historical segments.

An experiment validated this, showing about a 33% drop in queries to Druid and a 66% improvement in overall P90 query times. It also cut result bytes and segments queried, and in some cases, enabling the cache reduced result bytes by more than 14x. Caveat: the size of these gains depends heavily on how similar and repetitive the query workload is.

Looking Ahead
This caching layer is still experimental, but results are promising and we’re exploring next steps. We’ve added partial support for templated SQL so dashboard tools can benefit without writing native Druid queries.
Longer term, we’d like interval-aware caching to be built into Druid: an external proxy adds infrastructure to manage, extra network hops, and workarounds (like SQL templating) to extract intervals. Implemented inside Druid, it could be more efficient, with direct access to the query planner and segment metadata, and benefit the broader community without custom infrastructure. We’d likely ship it as an opt-in, configurable, result-level cache in the Brokers, with metrics to tune TTLs and measure effectiveness. Please leave a comment if you have a use-case that could benefit from this feature.
More broadly, this strategy, splitting time-series results into independently cached, granularity-aligned buckets with age-based exponential TTLs, isn’t Druid-specific and could apply to any time-series database with frequent overlapping-window queries.
Summary
As more Netflix teams rely on real-time analytics, query volume grows too. Dashboards are essential at our scale, but their popularity can become a scaling bottleneck. By inserting an intelligent cache between dashboards and Druid, one that understands query structure, breaks results into granularity-aligned buckets, and trades a small amount of staleness for much lower Druid load, we’ve increased query capacity without scaling infrastructure proportionally, and hope to deliver these benefits to the Druid community soon as a built-in Druid feature.
Sometimes the best way to handle a flood of queries is to stop answering the same question twice.
Stop Answering the Same Question Twice: Interval-Aware Caching for Druid at Netflix Scale was originally published in the Netflix TechBlog on Medium.
Synchronizing the Senses: Powering Multimodal Intelligence for Video Search
By Meenakshi Jindal and Munya Marazanye
Today’s filmmakers capture more footage than ever to maximize their creative options, often generating hundreds, if not thousands, of hours of raw material per season or franchise. Extracting the vital moments needed to craft compelling storylines from this sheer volume of media is a notoriously slow and punishing process. When editorial teams cannot surface these key moments quickly, creative momentum stalls and severe fatigue sets in.
Meanwhile, the broader search landscape is undergoing a profound transformation. We are moving beyond simple keyword matching toward AI-driven systems capable of understanding deep context and intent. Yet, while these advances have revolutionized text and image retrieval, searching through video, the richest medium for storytelling, remains a daunting “needle in a haystack” challenge.
The solution to this bottleneck cannot rely on a single algorithm. Instead, it demands orchestrating an expansive ensemble of specialized models: tools that identify specific characters, map visual environments, and parse nuanced dialogue. The ultimate challenge lies in unifying these heterogeneous signals (textual labels and high-dimensional vectors) into a cohesive, real-time intelligence layer: one that cuts through the noise and responds to complex queries at the speed of thought, truly empowering the creative process.
Why Video Search is Deceptively Complex
Since video is a multi-layered medium, building an effective search engine required us to overcome significant technical bottlenecks. Multi-modal search is exponentially more complex than traditional indexing: it demands the unification of outputs from multiple specialized models, each analyzing a different facet of the content to generate its own distinct metadata. The ultimate challenge lies in harmonizing these heterogeneous data streams to support rich, multi-dimensional queries in real time.
- Unifying the Timeline
To ensure critical moments aren’t lost across scene boundaries, each model segments the video into overlapping intervals. The resulting metadata varies wildly, ranging from discrete text-based object labels to dense vector embeddings. Synchronizing these disjointed, multi-modal timelines into a unified chronological map presents a massive computational hurdle.
- Processing at Scale
A standard 2,000-hour production archive can contain over 216 million frames. When processed through an ensemble of specialized models, this baseline explodes into billions of multi-layered data points. Storing, aligning, and intersecting this staggering volume of records while maintaining sub-second query latency far exceeds the capabilities of traditional database architectures.
- Surfacing the Best Moments
Surface-level mathematical similarity is not enough to identify the most relevant clip. Because continuous shots naturally generate thousands of visually redundant candidates, the system must dynamically cluster and deduplicate results to surface the singular best match for a given scene. To achieve this, effective ranking relies on a sophisticated hybrid scoring engine that weighs symbolic text matches against semantic vector embeddings, ensuring both precision and interpretability (a sketch of this blending follows the list).
- Zero-Friction Search
For filmmakers, search is a stream-of-consciousness process, and a ten-second delay can disrupt the creative flow. Because sequential scanning of raw footage is fundamentally unscalable, our architecture is built to navigate and correlate billions of vectors and metadata records efficiently, operating at the speed of thought.
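The post does not publish the scoring formula, so the following is only a minimal sketch of one common way to blend a lexical match score with vector similarity; the 0.4 weight and the assumption that both inputs are normalized to [0, 1] are ours, not the article's.

def hybrid_score(text_score: float, cosine_sim: float, text_weight: float = 0.4) -> float:
    # Weighted blend of a normalized symbolic (keyword) match score and a
    # semantic vector similarity; the weight is a tunable placeholder.
    return text_weight * text_score + (1.0 - text_weight) * cosine_sim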
The Ingestion and Fusion Pipeline
To ensure system resilience and scalability, the transition from raw model output to searchable intelligence follows a decoupled, three-stage process:
1. Transactional Persistence
Raw annotations are ingested via high-availability pipelines and stored in our annotation service, which leverages Apache Cassandra for distributed storage. This stage strictly prioritizes data integrity and high-speed write throughput, guaranteeing that every piece of model output is safely captured.
{
"type": "SCENE_SEARCH",
"time_range": {
"start_time_ns": 4000000000,
"end_time_ns": 9000000000
},
"embedding_vector": [
-0.036, -0.33, -0.29 ...
],
"label": "kitchen",
"confidence_score": 0.72
}
Figure 2: Sample Scene Search Model Annotation Output
2. Offline Data Fusion
Once the annotation service securely persists the raw data, the system publishes an event via Apache Kafka to trigger an asynchronous processing job. Serving as the architecture’s central logic layer, this offline pipeline handles the heavy computational lifting out-of-band. It performs precise temporal intersections, fusing overlapping annotations from disparate models into cohesive, unified records that empower complex, multi-dimensional queries.
Cleanly decoupling these intensive processing tasks from the ingestion pipeline guarantees that complex data intersections never bottleneck real-time intake. As a result, the system maintains maximum uptime and peak responsiveness, even when processing the massive scale of the Netflix media catalog.
To achieve this intersection at scale, the offline pipeline normalizes disparate model outputs by mapping them into fixed-size temporal buckets. This discretization process unfolds in three steps:
- Bucket Mapping: Continuous detections are segmented into discrete intervals. For example, if a model detects a character “Joey” from seconds 2 through 8, the pipeline maps this continuous span of frames into seven distinct one-second buckets.
- Annotation Intersection: When multiple models generate annotations for the same temporal bucket, such as character recognition “Joey” and scene detection “kitchen” overlapping in second 4, the system fuses them into a single, comprehensive record.
- Optimized Persistence: These newly enriched records are written back to Cassandra as distinct entities. This creates a highly optimized, second-by-second index of multi-modal intersections, perfectly associating every fused annotation with its source asset.
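A minimal Python sketch of the bucketing and fusion steps above, under assumptions: annotations are dicts shaped like the samples in this post, and bucket boundaries are treated as half-open one-second intervals (the exact boundary convention is not specified).

from collections import defaultdict

NS_PER_SEC = 1_000_000_000

def to_bucket_seconds(annotation):
    # Map a continuous detection onto discrete one-second buckets.
    start = annotation["time_range"]["start_time_ns"] // NS_PER_SEC
    end = annotation["time_range"]["end_time_ns"] // NS_PER_SEC
    return range(start, end)

def fuse(annotations):
    # Group annotations from different models that overlap the same bucket,
    # producing one enriched record per second of footage.
    fused = defaultdict(list)
    for ann in annotations:
        for second in to_bucket_seconds(ann):
            fused[second].append(ann)
    return {
        second: {
            "time_bucket_start_ns": second * NS_PER_SEC,
            "time_bucket_end_ns": (second + 1) * NS_PER_SEC,
            "source_annotations": anns,
        }
        for second, anns in fused.items()
    }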
The following record shows the overlap of the character “Joey” and scene “kitchen” annotations during a 4 to 5 second window in a video asset:
{
"associated_ids": {
"MOVIE_ID": "81686010",
"ASSET_ID": "01325120-7482-11ef-b66f-0eb58bc8a0ad"
},
"time_bucket_start_ns": 4000000000,
"time_bucket_end_ns": 5000000000,
"source_annotations": [
{
"annotation_id": "7f5959b4-5ec7-11f0-b475-122953903c43",
"annotation_type": "CHARACTER_SEARCH",
"label": "Joey",
"time_range": {
"start_time_ns": 2000000000,
"end_time_ns": 8000000000
}
},
{
"annotation_id": "c9d59338-842c-11f0-91de-12433798cf4d",
"annotation_type": "SCENE_SEARCH",
"time_range": {
"start_time_ns": 4000000000,
"end_time_ns": 9000000000
},
"label": "kitchen",
"embedding_vector": [
0.9001, 0.00123 ....
]
}
]
}
Figure 4: Sample Intersection Record for Character + Scene Search
3. Indexing for Real-Time Search
Once the enriched temporal buckets are securely persisted in Cassandra, a subsequent event triggers their ingestion into Elasticsearch.
To guarantee absolute data consistency, the pipeline executes upsert operations using a composite key (asset ID + time bucket) as the unique document identifier. If a temporal bucket already exists for a specific second of video, perhaps populated by an earlier model run, the system intelligently updates the existing record rather than generating a duplicate. This mechanism establishes a single, unified source of truth for every second of footage.
Architecturally, the pipeline structures each temporal bucket as a nested document. The root level captures the overarching asset context, while associated child documents house the specific, multi-modal annotation data. This hierarchical data model is precisely what empowers users to execute highly efficient, cross-annotation queries at scale.
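As a sketch of the upsert mechanics, assuming the Elasticsearch 8.x Python client and an illustrative index name and document-ID format (neither is specified in the post):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

def index_bucket(bucket: dict, index: str = "temporal-buckets"):
    # Composite key (asset ID + bucket start) as the document ID, so a later
    # model run updates the existing record instead of creating a duplicate.
    doc_id = f'{bucket["associated_ids"]["ASSET_ID"]}:{bucket["time_bucket_start_ns"]}'
    es.update(index=index, id=doc_id, doc=bucket, doc_as_upsert=True)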
Multimodal Discovery and Result Ranking
The search service provides a high-performance interface for real-time discovery across the global Netflix catalog. Upon receiving a user request, the system immediately initiates a query preprocessing phase, generating a structured execution plan through three core steps:
- Query Type Detection: Dynamically categorizes the incoming request to route it down the most efficient retrieval path.
- Filter Extraction: Isolates specific semantic constraints such as character names, physical objects, or environmental contexts to rapidly narrow the candidate pool.
- Vector Transformation: Converts raw text into high-dimensional, model-specific embeddings to enable deep, context-aware semantic matching.
Once generated, the system compiles this structured plan into a highly optimized Elasticsearch query, executing it directly against the pre-fused temporal buckets to deliver instantaneous, frame-accurate results.
Fine-Tuning Semantic Search
To support the diverse workflows of different production teams, the system provides fine-grained control over search behavior through configurable parameters:
- Exact vs. Approximate Search: Users can toggle between exact k-Nearest Neighbors (k-NN) for uncompromising precision, and Approximate Nearest Neighbor (ANN) algorithms (such as HNSW) to maintain blazing speed when querying massive datasets.
- Dynamic Similarity Metrics: The system supports multiple distance calculations, including cosine similarity and Euclidean distance. Because different models shape their high-dimensional vector spaces distinctly based on their underlying training architectures, the flexibility to swap metrics ensures that mathematical closeness perfectly translates to true semantic relevance.
- Confidence Thresholding: By establishing strict minimum score boundaries for results, users can actively prune the long tail of low-probability matches. This aggressively filters out visual noise, guaranteeing that creative teams are not distracted and only review results that meet a rigorous standard of mathematical similarity.
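To illustrate how these knobs might surface in an Elasticsearch request, here is a hedged sketch using the top-level kNN search option; the field names, candidate multiplier, and default threshold are assumptions. Note that exact k-NN in Elasticsearch typically goes through a script_score query instead, and the distance metric itself is fixed in the dense_vector mapping rather than chosen per query.

def build_vector_query(query_vector, character=None, k=50, min_score=0.72):
    # Approximate kNN over the annotation embeddings, with an optional
    # character filter and a minimum-score cutoff for confidence thresholding.
    body = {
        "knn": {
            "field": "source_annotations.embedding_vector",  # illustrative field name
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 10 * k,  # widen the ANN candidate pool
        },
        "min_score": min_score,
    }
    if character:
        body["knn"]["filter"] = {
            "term": {"source_annotations.metadata.character_name": character}
        }
    return body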
Textual Analysis & Linguistic Precision
To handle the deep nuances of dialogue-heavy searches, such as isolating a character’s exact catchphrase amidst thousands of hours of speech, we implement a sophisticated text analysis strategy within Elasticsearch. This ensures that conversational context is captured and indexed accurately.
- Phrase & Proximity Matching: To respect the narrative weight of specific lines (e.g., “Friends don’t lie” in Stranger Things), we leverage match-phrase queries with a configurable slop parameter. This guarantees the system retrieves the correct scene even if the user’s memory slightly deviates from the exact transcription.
- N-Gram Analysis for Partial Discovery: Because video search is inherently exploratory, we utilize edge N-gram tokenizers to support search-as-you-type functionality. By actively indexing dialogue and metadata substrings, the system surfaces frame-accurate results the moment an editor begins typing, drastically reducing cognitive load.
- Tokenization and Linguistic Stemming: To seamlessly support the global scale of the Netflix catalog, our analysis chain applies sophisticated stemming across multiple languages. This ensures a query for “running” automatically intersects with scenes tagged with “run” or “ran”, collapsing grammatical variations into a single, unified search intent.
- Levenshtein Fuzzy Matching: To account for transcription anomalies or phonetic misspellings, we incorporate fuzzy search capabilities based on Levenshtein distance algorithms. This intelligent soft-matching approach ensures that high-value shots are never lost to minor data-entry errors or imperfect queries.
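The corresponding query shapes are standard Elasticsearch DSL; a small sketch follows, with the dialogue field name and the slop/fuzziness defaults as assumptions.

def dialogue_query(phrase: str, slop: int = 2, fuzziness: str = "AUTO"):
    # Combine a proximity-tolerant phrase match with a fuzzy match so slightly
    # misremembered or misspelled lines still retrieve the right scene.
    phrase_clause = {"match_phrase": {"dialogue": {"query": phrase, "slop": slop}}}
    fuzzy_clause = {"match": {"dialogue": {"query": phrase, "fuzziness": fuzziness}}}
    return {"query": {"bool": {"should": [phrase_clause, fuzzy_clause]}}}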
Aggregations and Flexible Grouping
The architecture operates at immense scale, seamlessly executing queries within a single title or across thousands of assets simultaneously. To combat result fatigue, the system leverages custom aggregations to intelligently cluster and group outputs based on specific parameters, such as isolating the top 5 most relevant clips of an actor per episode. This guarantees a diverse, highly representative return set, preventing any single asset from dominating the search results.
Search Response Curation
While temporal buckets are the internal mechanism for search efficiency, the system post-processes Elasticsearch results to reconstruct original time boundaries. The reconstruction process ensures results reflect narrative scene context rather than arbitrary intervals. Depending on the query intent, the system generates results based on two logic types:
- Union: Returns the full span of all matching annotations (3–8 sec), which prioritizes breadth, capturing any instance where a specified feature occurs.
- Intersection: Returns only the exact overlapping duration of matching signals (4–6 sec). The intersection logic focuses on co-occurrence, isolating moments when multiple criteria align.
{
"entity_id": {
"entity_type": "ASSET",
"id": "1bba97a1-3562-4426-9cd2-dfbacddcb97b"
},
"range_intervals": [
{
"intersection_time_range": {
"start_time_ns": 4000000000,
"end_time_ns": 8000000000
},
"union_time_range": {
"start_time_ns": 2000000000,
"end_time_ns": 9000000000
},
"source_annotations": [
{
"annotation_id": "fc1525d0-93a7-11ef-9344-1239fc3a8917",
"annotation_type": "SCENE_SEARCH",
"metadata": {
"label": "kitchen"
}
},
{
"annotation_id": "5974fb01-93b0-11ef-9344-1239fc3a8917",
"annotation_type": "CHARACTER_SEARCH",
"metadata": {
"character_name": [
"Joey"
]
}
}
]
}
]
}
Figure 7: Sample Response for “Joey” + “Kitchen” Query
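The union/intersection reconstruction above can be expressed compactly. This sketch assumes each matched source annotation still carries the time_range shown in the intersection records earlier in the post.

def union_and_intersection(annotations):
    # Union favors breadth (any matching signal); intersection isolates the
    # window where all matching signals co-occur.
    starts = [a["time_range"]["start_time_ns"] for a in annotations]
    ends = [a["time_range"]["end_time_ns"] for a in annotations]
    union = {"start_time_ns": min(starts), "end_time_ns": max(ends)}
    overlap_start, overlap_end = max(starts), min(ends)
    intersection = ({"start_time_ns": overlap_start, "end_time_ns": overlap_end}
                    if overlap_start < overlap_end else None)
    return union, intersection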
Future Extensions
While our current architecture establishes a highly resilient and scalable foundation, it represents only the first phase of our multi-modal search vision. To continuously close the gap between human intuition and machine retrieval, our roadmap focuses on three core evolutions:
- Natural Language Discovery: Transitioning from structured JSON payloads to fluid, conversational interfaces (e.g., “Find the best tracking shots of Tom Holland running on a roof”). This will abstract away underlying query complexity, allowing creatives to interact with the archive organically.
- Adaptive Ranking: Implementing machine learning feedback loops to dynamically refine scoring algorithms. By continuously analyzing how editorial teams interact with and select clips, the system will self-tune its mathematical definition of semantic relevance over time.
- Domain-Specific Personalization: Dynamically calibrating search weights and retrieval behaviors to match the exact context of the user. The platform will tailor its results depending on whether a team is cutting high-action marketing trailers, editing narrative scenes, or conducting deep archival research.
Ultimately, these advancements will elevate the platform from a highly optimized search engine into an intelligent creative partner, fully equipped to navigate the ever-growing complexity and scale of global video media.
Acknowledgements
We would like to extend our gratitude to the following teams and individuals whose expertise and collaboration were instrumental in the development of this system:
- Data Science Engineering: Nagendra Kamath, Chao Pan, Prachee Sharma, Ying Liao and Carolyn Soo for the critical media model insights that informed our architectural design.
- Product Management: Nimesh Narayan, Ian Krabacher, Ananya Poddar, Meghan Bailey and Anita Kuc for defining the user requirements and product vision.
- Media Production Suite Team: Szymon Borodziuk, Mike Czarnota, Dominika Sarkowicz, Bohdan Koval and Sasha Sabov for their work in engineering the end-user search experience.
- Asset Management Platform Team: For their collaborative efforts in operationalizing this design and bringing the system into production.
Powering Multimodal Intelligence for Video Search was originally published in the Netflix TechBlog on Medium.
ScyllaDB Vector Search in Action: Real-time RAG and Anomaly Detection
Learn how ScyllaDB vector search simplifies scalable semantic search & RAG with real-time performance – plus see FAQs on vector search scaling, embeddings, and architecture As a Solutions Architect at ScyllaDB, I talk a lot about vector search these days. Most vector search conversations start with RAG. Build a chatbot, embed some documents, do a similarity lookup. That’s an important use case, but it’s just the beginning of what vector search can do when you have a platform like ScyllaDB that can scale to massive datasets and hit those p99 latencies At Monster Scale Summit, I showed two live demos that cover what I think are the two most practical patterns for vector search today: hybrid memory for RAG, and real-time anomaly detection. Both run entirely on ScyllaDB, without needing an external cache (e.g., Redis) or dedicated vector database (e.g., Pinecone). This post explains what I showed and also answers the top questions from attendees. The Limits of Traditional Search and Anomaly Detection The old way of doing anomaly detection, retrieving data, doing lookups, using inverted indexes, and running semantic search is brittle. Anyone who has tried to scale an index-based system has experienced this firsthand. The first step was figuring out what was said before, what data was loaded. Redis caches, key-value stores, document databases, and SQL databases were used to handle that. Then, to determine what documents are relevant, we built custom search pipelines using Lucene-based indexes and similar tools. Next, we built complex rules engines that became difficult to troubleshoot and couldn’t scale. These systems became extremely complex, with SQL-based regressions and similar mechanisms. The problem is that they’re hard to scale and very rigid. Every new failure mode required building a new rule, sometimes entirely new rules engines. Often, these detection paths required completely different systems. The result was graph use cases, vector systems, and multiple databases: a collection of systems solving what are ultimately simple problems. As you might imagine, these systems become quite brittle and expensive to operate. You could avoid this with a conceptual shift: from matching rules to finding things that look similar. That shift from rigid logic to semantic similarity is what vector search is trying to achieve, whether through text embeddings or other encodings. RAG with Hybrid Memory The first demo showed how we can build three distinct memory tiers for RAG, all living within ScyllaDB. This “hybrid memory” pattern allows you to handle everything from immediate context to long-forgotten conversations without a complex conglomerate of different databases. Short-term memory: This handles the last five records of your interaction as a rolling window. It is very fast, uses a short TTL, and ensures the AI stays on track with the immediate conversation. Long-term memory: This is where vectors shine. If you’ve been chatting with a bot for 6 days, that history won’t fit in a standard context window. We use vector search to retrieve specific conversation parts that matter based on embedded similarity, loading only the relevant history into the prompt. This is the piece that makes a chatbot feel like it actually remembers. Document enrichment: This phase takes uploaded files (PDFs, Markdown files, etc.), chunks them semantically, embeds each chunk, and stores those vectors in ScyllaDB. When you ask a question, the system retrieves the most relevant chunks and feeds them to the model. 
Basically, I used ScyllaDB to manage key-value lookups, vector similarity, and TTL expiration in a single cluster, which meant I didn’t need extra infrastructure like Redis or Pinecone. It processed LinkedIn profiles by breaking them into semantic chunks (one profile generated 43 chunks) and stored those embeddings in ScyllaDB Cloud to be queried in real-time. The frontend is built in Elixir, the backend is ScyllaDB Cloud with vector search enabled. The code is on my GitHub. Pull it, run it, and try it out with your own documents. Anomaly Detection: Three Paths, One Database The second demo is where vector search truly earns its keep. The setup models an IoT pipeline: Devices generate metrics every 10 seconds Metrics stream through Kafka They’re aggregated into 60-second windows Finally, they land in ScyllaDB as 384-dimensional embeddings generated by Ollama. From there, three detection paths run in parallel against every snapshot. Path 1: Rules engine. This is classic Z-score analysis. If four or more metrics exceed a Z-score of 6.0, flag it. This is fast and interpretable. It catches known anomaly patterns and misses everything else. Path 2: Embedding similarity. Build a behavioral fingerprint for each device. Compare every new snapshot against that fingerprint using cosine similarity. If the score drops below 0.92, something has drifted. This catches gradual behavioral changes that hard thresholds miss. Path 3: Vector search. Run an ANN query across the device’s historical snapshots in ScyllaDB. Post-filter for the same device, non-anomalous records, and a cosine similarity above 0.75. If fewer than five similar normal snapshots exist, that snapshot is anomalous. You don’t need to write any rules or tune thresholds per device. The data tells you what normal looks like. In the demo, I showed a sensor drift scenario where power consumption dropped slightly and a few other metrics shifted just enough to be unusual. This slipped past the rules engine and embedding similarity, but vector search caught it. It’s Exhibit A for how vector search exposes subtle anomalies that rigid systems can’t see. The reason this works on a single cluster is that ScyllaDB handles vectors alongside time series, key-value, and TTL data with sub-10 millisecond queries under heavy concurrent load. Snapshots, profiles, anomaly events, and the vector index all live in one place. The anomaly detection demo runs in Docker if you want to try it on your own data. Common Questions About ScyllaDB Vector Search Technical demos naturally lead to questions, and we heard plenty at MonsterScale (thanks for keeping us busy in the chat). Here are responses to the top questions we fielded… “What about new devices with no history?” Fair concern. Path 3 decides “anomaly” when it finds fewer than five similar normal snapshots. A brand new device with ten records will trigger on everything. The fix is a minimum history gate. Don’t run the vector path until a device has enough baseline data to make comparisons meaningful. In practice, the rules engine and embedding similarity cover the gap while the device builds history. “How do I know my embeddings are good enough?” Test them against known anomalies before you go to production. Bad embeddings produce bad neighbors, and similarity search will give you confidently wrong answers. Run a batch of labeled data through your embedding model, query for nearest neighbors, and check whether the results make sense. 
If your model does not capture the right features, no amount of tuning the similarity threshold will save you.
“Does this scale to millions of devices?”
The ANN query itself scales; the part you want to watch is post-filtering. The query returns candidates, and your application code still has to filter by device, status, and similarity threshold. At millions of devices, that step needs to be efficient. Batch your comparisons and keep your filter criteria tight. Someone at the talk asked about the performance impact of deletions on the vector index as old snapshots expire. ScyllaDB’s index structure is optimized for deletion, so the index stays healthy as data churns through.
“Can I just use vector search and skip the rules engine?”
Yes, you can…but you’ll regret it. The three-path approach works because each path catches a different class of problem. Rules engines are still the fastest way to catch known, well-defined failures. Someone at the talk asked whether an LLM with MCP could just explore the context and find anomalies itself. It can. But vector search is cheaper. No LLM invocation, no context window limit, and it runs at the speed of a database query. Use each tool for what it is good at.
Try It Yourself
If you want to see what ScyllaDB can do for your vector workloads, sign up for a demo. I’m always happy to walk through architectures and talk through what makes most sense for your specific requirements. Connect with me on LinkedIn if you have questions, want to collaborate, or just want to talk about distributed systems. I read every message.
ScyllaDB R&D Year in Review: Elasticity, Efficiency, and Real-Time Vector Search
Learn about ScyllaDB’s new features, performance improvements, and innovations — plus, get a sneak peek into what’s coming next. 2025 was a busy year for ScyllaDB R&D. We shipped one major release, three minor releases, and a continuous stream of updates across both ScyllaDB and ScyllaDB Cloud. Our users are most excited about elasticity, vector search, and reduced total cost of ownership. We also made nice progress on years-long projects like strong consistency and object storage I raced through a year-in-review recap in my recent Monster Scale Summit talk (time limits, sorry), which you can watch on-demand. But I also want to share a written version that links to related talks and blog posts with additional detail. Feel free to choose your own adventure. Watch the talk Tablets: Fast Elasticity ScyllaDB’s “tablets” data distribution approach is now fully supported across all ScyllaDB capabilities, for both CQL and Alternator (our DynamoDB-compatible API). Tablets are the key technology that dynamically scales clusters in and out. It’s been in production for over a year now; some customers use it to scale for the usual daily fluctuations, others rely on its fast responses to workload volatility like expected events, unpredictable spikes, etc. Right now, the autoscaling lets users maximize their disks, which reduces costs. Soon, we’ll automatically scale based on workload characteristics as well as storage. A few things make this genuinely different from the vNodes design that we originally inherited from Cassandra, but replaced with tablets: When you add new nodes, load balancing starts immediately and in parallel. No cleanup operation is needed. There’s also no resharding; rebalancing is automated. Moreover, we shifted from mutation-based streaming to file-based streaming: stream the entire SSTable files without deserializing them into mutation fragments and reserializing them back into SSTables on receiving nodes. As a result, 3X less data is streamed over the network and less CPU is consumed, especially for data models that contain small cells. This change provides up to 25X faster streaming. As Avi’s talk explains, we can scale out in minutes and the tablets load balancing algorithm balances nodes based on their storage consumption. That means more usable space on an ongoing basis – up to 90% disk utilization. Also noteworthy: you can now scale with different node sizes – so you can increase capacity in much smaller increments. You can add tiny instances first, then replace them with larger ones if needed. That means you rarely pay for unused capacity. For example, before, if you started with an i4i.16xlarge node that had 15 TB of storage and you hit 70% utilization, you had to launch another i4i.16xlarge – adding 15 TB at once. Now, you might add two xlarge nodes (1.8 TB each) first. Then, if you need more storage, you add more small nodes, and eventually replace them with larger nodes. Vector Search: Real-Time AI Customers like Tripadvisor, ShareChat, Medium, and Agoda have been using ScyllaDB as a fast feature store in their AI pipelines for several years now. Now, ScyllaDB also has real-time vector search, which is embedded in ScyllaDB Cloud. Our vector search takes advantage of ScyllaDB’s unique architecture. Technologies like our super-fast Rust driver, CDC, and tablets help us deliver real-time AI queries. We can handle datasets of 1 billion vectors with P99 latency as low as 1.7 ms and throughput up to 252,000 QPS. 
Dictionary-Based Compression I’ve always been interested in compression, even as a student. It’s particularly interesting for databases, though. It allows you to deliver more with less, so it’s yet another way to reduce costs. ScyllaDB Cloud has always had compression enabled by default, both for data at rest and for data in transit. We compress our SSTables on disk, we compress traffic between nodes, and optionally between clients and nodes. In 2025, we improved its efficiency up to 80% by enabling dictionary-based compression. The compressor gains better context of the data being compressed. That gives it higher compression ratios and, in some cases, even better performance. Our two most popular compression algorithms, LZ4 and ZSTD, both benefit from dictionaries now. It works by sampling some data, from which it creates a dictionary and then uses it to further compress the data. The graph on the lower left shows the impact of enabling dictionary compression for network traffic. Both compression algorithms are working and both drop nicely – from 70% to less than 50% for LZ4 and from around 50% to 33% for ZSTD. The table on the lower right shows a similar change. It shows the benefit for disk utilization on a customer’s production cluster, essentially cutting down storage consumption from 50% to less than 30%. Note that the 50% was an already compressed dataset. With this new compression, we further compressed it to less than 30%, a significant saving. Raft-Based Topology We’ve been working on Raft and features built on top of it for several years now. Currently, we use Raft for multiple purposes. Schema consistency was the first step, but topology is the more interesting improvement. With Raft-based fast parallel scaling and safe schema updates, we’re ready to finally retire Gossip-based topology. Other features that use Raft-based topology are authentication, service levels (also known as workload prioritization), and tablets. We are actively working on making strong consistency available for data as well. ScyllaDB X Cloud ScyllaDB X Cloud is the next generation of ScyllaDB Cloud, a truly elastic database-as-a-service. It builds upon innovation in both ScyllaDB core (such as tablets and Raft-based topology), as well as innovation and improvements in ScyllaDB Cloud itself (such as parallel setup of cloud resources, reduced boot time, and the new resource resizing algorithm) to provide immediate elasticity to clusters. Curious how it works? It’s quite simple, really. You just select two important parameters for your cluster, and those define the minimum values for resources: The minimum vCPU count, which is a way to measure the ‘horsepower’ that you initially want to reserve for your clusters. The minimum storage size. And that’s it. You can do this via the API or UI. In the UI, it looks like this: If you wish, you can also limit it to a specific instance family. Now, let’s see how this scaling looks in action. A few things to note here: There are three somewhat large nodes and three additional nodes that are smaller. Some of the tablets are not equal, and that’s perfectly fine. The load was very high initially, then additional load moved gradually to the new nodes. The workload itself, in terms of requests, didn’t change. It changed which nodes it is going to, but the overall value remained the same. The average latency, the P95 latency, and the P99 latency are all great: even the P99s are in the single-digit milliseconds. 
And here’s a look at additional ScyllaDB Cloud updates before we move on:
A Few More Things We Shipped Last Year
Backup is a critical feature of any database, and we run it regularly in ScyllaDB Cloud for our customers. In the past, we used an external utility that backed up snapshots of the data. That was somewhat inefficient. Also, it competed with the memory, CPU, disk, and network resources that ScyllaDB was consuming – potentially affecting the throughput and latency of user workloads. We reimplemented the backup client so it now runs inside ScyllaDB’s scheduler and cooperates with the rest of the system. The result is minimal impact on user workload and 11X faster execution that scales linearly with the number of shards per node.
On the infrastructure side, the new AWS Graviton i8g instances proved themselves this year. We measured up to 2X throughput and lower latency at the same price, along with higher usable storage from improved compression and utilization. We don’t embrace every new instance type since we have very specific and demanding requirements. However, when we see clear value like this, we encourage customers to move to newer generations.
On the security side, all new clusters now have their data encrypted at rest by default. When creating a cluster, you can either use your own key (known as ‘BYOK’) or use the ScyllaDB Cloud key.
We also reached general availability of our Rust driver. This is interesting because it’s our fastest driver. Also, its binding is the foundation for our grand plan: unifying our drivers under the Rust infrastructure. We started with a new C++ driver. Next up (almost ready) is our NodeJS driver – and we’ll continue with others as well. We also released our C# driver (another popular demand) and across the board improved our drivers’ reliability, capabilities, compatibility, and performance. Finally, our Alternator clients in various languages received some important updates as well, such as network compression to both requests and responses.
What’s Next for ScyllaDB
Finally, let’s close with a glimpse at some of the major things we are expecting to deliver in 2026:
- An efficient, easy-to-use, and online process for migrating from vNodes to tablets.
- Strong consistency for data (beyond metadata and internal features).
- A fast dedicated data path to key-value data.
- Additional capabilities focused on 1) improved total cost of ownership and 2) more real-time AI features and integrations.
There’s a lot to look forward to! We’d love to answer questions or hear your feedback.
Rethinking “Designing Data-Intensive Applications”
How Martin Kleppmann’s iconic book evolved for AI and cloud-native architectures Since its release in 2017, Designing Data-Intensive Applications (DDIA) has become known as the bible for anyone working on large-scale, data-driven systems. The book’s focus on fundamentals (like storage engines, replication, and partitioning) has helped it age well. Still, the world of distributed systems has evolved substantially in the past decade. Cloud-native architectures are now the default. Object storage has become a first-class building block. Databases increasingly run everywhere from embedded and edge deployments to fully managed cloud services. And AI-driven workloads have made a mark on storage formats, query engines, and indexing strategies. So, after years of requests for a second edition, Martin Kleppmann revisited the book – and he enlisted his longtime collaborator Chris Riccomini as a co-author. Their goal: Keep the core intact while weaving in new ideas and refreshing the details throughout. With the second edition of DDIA now arriving at people’s doorsteps, let’s look at what Kleppmann and Riccomini shared about the project at Monster Scale Summit 2025. That conversation was recorded mid-revision. You can watch the entire chat or read highlights below. Extra: Hear what Kleppmann and Riccomini had to say when the book went off to the printers this year. They were featured again at Monster SCALE Summit 2026, which just concluded. This talk is available on-demand, alongside 60+ others by antirez, Pat Helland, Camille Fournier, Discord, Disney, and more. People have been requesting a second edition for a while – why now? Kleppmann: Yeah, I’ve long wanted to do a second edition, but I also have a lot of other things I want to do, so it’s always a matter of prioritization. In the end, though, I felt that the technology had moved on enough. Most of the book focuses on fundamental concepts that don’t change that quickly. They change on the timescale of decades, not months. In that sense, it’s a nice book to revise, because there aren’t that many big changes. At the same time, many details have changed. The biggest one is how people use cloud services today compared to ten years ago. Cloud existed back then, of course, but I think its impact on data systems architecture has only been felt in the last few years. That’s something we’re trying to work in throughout the second edition of the book. How have database architectures evolved since the first edition? Riccomini: I’ve been thinking about whether database architecture has changed now that we’re putting databases in more places – cloud, BYOC, on-prem, on-client, on-edge, etc. I think so. I have a hypothesis that successful future databases will be able to move or scale with you, from your laptop, to your server, to your cloud, and even to another cloud. We’re already seeing evidence of this with things like DuckDB and MotherDuck, which span embedded to cloud-based use cases. I think PGlite and Postgres are another example, where you see Postgres being embedded. SQLite is an obvious signal on the embedded side. On the query engine side, you see this with systems like Daft, which can run locally and in a distributed setting. So the answer is yes. The biggest shift is the split between control, data, and compute planes. That architecture is widely accepted now and has proven flexible when moving between SaaS and BYOC. There’s also an operational aspect to this. 
When you talk about SaaS versus non-SaaS, you’re talking about things like multi-tenancy and how much you can leverage cloud-provider infrastructure. I had an interesting discussion about Confluent Freight versus WarpStream, two competing Kafka streaming systems. Freight is built to take advantage of a lot of the in-cloud SaaS infrastructure Confluent has, while WarpStream is built more like a BYOC system and doesn’t rely on things like a custom network plane. Operationally, there’s a lot to consider regarding security and multi-tenancy, and I’m not sure we’re as far along as we could be. A lot of what SaaS companies are doing still feels proprietary and internal. That’s my read on the situation. Kleppmann: I’d add a little to that. At the time of the first edition, the model for databases was that a node ran on a machine, storing data on its local file system. Storage was local disks, and replication happened at the application layer on top of that. Now we’re increasingly seeing a model where storage is an object store. It’s not a local file system; it’s a remote service, and it’s already replicated. Building on top of an abstraction like object storage lets you do fundamentally different things compared to local disk storage. I’m not saying one is better than the other – there are always tradeoffs. But this represents a new point in the tradeoff space that really wasn’t present at the time of the first edition. Of course, object stores existed back then, but far fewer databases took advantage of them in the way people do now. We’ve seen a proliferation of specialized databases recently – do you think we’re moving toward consolidation? Riccomini: With my investment hat on, this is the million-dollar…or billion-dollar…question. Which of these is it? I think the answer is probably both. The reality is that Postgres has really taken the world by storm lately, and its extension ecosystem has become pretty robust in recent versions. For most people, when they’re starting out, they’re naturally going to build on something like Postgres. They’ll use pg\_search or something similar for their search index, pgvector for vector embeddings, and PG analytics or pg\_duckdb for their data lake. Then the question is: as they scale, will that still be okay? And in some cases, yes. In other cases, no. My personal hypothesis is that as you not only scale up but also need features that are core to your product, you’re more likely to move to a specialized system. pgvector, for example, is a reasonable starting point. But if your entire product is like Cursor AI or an IDE that does code completion, you probably need something more robust and scalable than pgvector can provide. At that point, you’d likely look at something like Pinecone or Turbopuffer or companies like that. So I think it’s both. And because Postgres is going to eat the bottom of the market, I do think there will be fewer specialized vendors, but I don’t think they’ll disappear entirely. What are some of the key tradeoffs you see with streaming systems today? Kleppmann: Streaming sits in a slightly weird space. A typical stream processor has a one-record-at-a-time, callback-based API. It’s very imperative. On top of that, you can build things like relational operators and query plans. But if you keep pushing in that direction, the result starts to look much more like a database that does incremental view maintenance. There are projects like Materialize that are aiming there. 
You just give it a SQL query, and the fact that it’s streaming is an implementation detail that’s almost hidden. I don’t know if that means the result for many of these systems is this: if you have a query you can express in SQL, you hand it off to one of these systems and let it maintain the view. And what we currently think of as streaming, with the lower-level APIs, is used for a more specialized set of applications. That might be very high-scale use cases, or queries that just don’t fit well into a relational style.
Riccomini: Another thing I’d add is the fundamental tradeoff between latency and throughput, which most streaming systems have to deal with. Ideally, you want the lowest possible latency. But when you do that, it becomes harder to get higher throughput. The usual way to increase throughput is to batch writes. But as soon as you start batching writes, you increase the latency between when a message is sent and when it’s received.
How is AI impacting data-intensive applications?
Kleppmann: There’s some AI-plus-data work we’ve been exploring in research (not really part of the book). The idea is this: if you want to give an AI some control over a system – if it’s allowed to press buttons that affect data, like editing or updating it – then the safest way to do that is through a well-defined API. That API defines which buttons the AI is allowed to press, and those actions correspond to things that make sense and maybe fulfill certain consistency properties. More generally, it seems important to have interfaces that allow AI agents and humans to work safely together, with the database as common ground. Humans can update data, the AI can update data, and both can see each other’s changes. You can imagine workflows where changes are reviewed, compared, and merged. Those kinds of processes will be necessary if we want good collaboration between humans and AI systems.
Riccomini: From an implementation perspective, storage formats are definitely going to evolve. We’re already seeing this with systems like LanceDB, which are trying to support multimodal data better. Arrow, for example, is built for columnar data, which may not be the best fit for some multimodal use cases. And this goes beyond storage into things like Arrow RPC as well. On the query engine side, there’s also a lot of ongoing work around query optimization and indexing. The idea is to build smarter databases that can look at query patterns and adjust themselves over time. There was a good paper from Google a while back that used more traditional machine learning techniques to do dynamic indexing based on query patterns. That line of work will continue. And then, of course, support for embeddings, vector search, and semantic search will become more common. Good integrations with RAG (Retrieval-Augmented Generation)…that’s also important. We’re still very much at the forefront of all of this, so it’s tricky.
Watch the 2026 Chat with Martin and Chris
Monster SCALE Summit 2026 Recap: From Database Elasticity to Nanosecond AI
It seemed like just yesterday I was here in the US hosting P99 CONF…and then suddenly I’m back again for Monster SCALE Summit 2026. The pace of it all says something – not so much about my travel between the US and Australia, but about how quickly this space is moving. Monster SCALE Summit delivered an overwhelming amount of content – so much that you need both time and distance to process it all. Sitting here now at Los Angeles International Airport, I can conclude this was the strongest summit to date. Watch sessions on-demand Day 1 To start with, the keynotes. As expected, Dor Laor set the tone with a strong opening – making a clear case that elasticity is the defining capability behind ScyllaDB operating at scale. “Elastic” is one of those overloaded terms in cloud infrastructure, but the distinction here was sharp. The focus was on a database that continuously adapts under load without breaking performance. Hot on the heels of Dor was the team from Discord, showing us just how they operate at scale while automating everything from standing up new clusters to expanding them under load, all with ScyllaDB. What stood out wasn’t just the scale but also the level of automation. I mention it at every conference, but [Discord engineer] Bo Ingram’s ScyllaDB in Action book is required reading for anyone running databases at scale. Since the conference was hosted by ScyllaDB, there was no shortage of deep dives into operating at scale. Felipe Cardeneti Mendes, author of Database Performance at Scale, expanded on the engineering behind ScyllaDB. Tablets play a key role as a core abstraction that links predictable performance with true elasticity. If you didn’t get a chance to participate in any of the live lounge events watching Felipe scale ScyllaDB to millions of operations per second, then you can try it yourself. The gap between demo and production is enticingly small, so it’s well worth a whirl. From the customer side, Freshworks and ShareChat, both with their own grounded presentations about super low latency and super low costs, focused on what actually matters in production. Enough of ScyllaDB for a bit, though. Let me recommend Joy Gao’s presentation from ClickHouse about cross-system database replication at petabyte scale. As you would expect, that’s a difficult problem to solve. Or if you want to know about ad reporting at scale, then check out the presentation from Pinterest. The presentation of the day for me was Thea Aarrestad talking about nanosecond AI at the Large Hadron Collider. The metrics in this presentation were insane, with datasets far exceeding the size of the Netflix catalog being generated per second… all in the pursuit of capturing the rarest particle interactions in real time. It’s a useful reminder that at true extreme scale, you don’t store then analyze. Instead, you analyze inline, or you lose the data forever. A big thanks to Thea. We asked for monster scale, and you certainly delivered. At the end of day one, we had another dense block of quality content from equally strong presenters. Long-time favourite, Ben Cane from American Express, is always entertaining and informative. We had some great research presented by Murat Demirbas from MongoDB around designing for distributed systems. And on the topic of distributed systems, Stephanie Wang showed us how to avoid some distributed traps with fast, simple analytics using DuckDB. 
Miguel Young de la Sota from Buf Technologies indulged us with speedrunning Super Mario 64 and its relationship to performance engineering. In parallel, Yaniv from ScyllaDB core engineering showed us what’s new and what’s next for our database. Pat Helland and Daniel May finished Day 1 with a closing keynote on “Yours, Mine, and Ours,” which was all about set reconciliation in the distributed world. This is one of those topics that sounds niche, but sits at the core of correctness at scale. When systems diverge – and they always do – then reconciliation becomes the real problem, not replication. Pat is a legend in this space and his writing at https://pathelland.substack.com/ expands on many of these ideas. Day 2 Day 2 got off to an early start with a pre-show lounge hosted by Tzach Livyatan and Attila Toth sharing tips for getting started with ScyllaDB. The opening keynote from Avi Kivity went deep into the mechanics behind ScyllaDB tablets, with the kind of under-the-hood engineering that makes extreme scale tractable rather than theoretical. I then had the opportunity to host Camille Fournier, joining remotely for a live session on engineering leadership and platform strategy under increasing system complexity. AI was a recurring theme, but the sharper takeaway was how platform teams need to evolve their role as abstraction layers become more dynamic and less predictable. Camille also stayed on for a live Q&A, engaging directly with the audience on both leadership and the practical realities of operating modern platforms at scale. Big thanks to Camille! The next block shifted back into deep technical content, starting with Dominik Tornow from Resonate HQ, sharing a first-principles approach to agentic systems. Alex Dathskovsky followed with a forward-looking session on the future of data consistency in ScyllaDB, reframing consistency not as a static tradeoff but as something increasingly adaptive and workload-aware. Szymon Wasik went deep on ScyllaDB vector search internals with real-time performance at billion-vector scale. It was great to see how ScyllaDB avoids the typical latency cliffs seen in ANN systems. From LinkedIn, Satya Lakshmikanth shared practical lessons from operating large-scale data systems in production, grounding the conversation in real-world constraints. Asias He showed us how incremental repair for ScyllaDB tablets is much more operationally feasible without the traditional overhead around consistency. We also heard from Tyler Denton, showcasing ScyllaDB vector search in action across two use cases (RAG with Hybrid Memory and anomaly detection at scale). Before lunch, another top quality keynote from TigerBeetle’s Joran Dirk Greef who had plenty of quotable quotes in his talk, “You Can’t Scale When You’re Dead.” And crowd favorite antirez from Redis gave us his take on HNSW indexes and all the tradeoffs that need to be considered. The final block in an already monster-sized conference followed up with Teiva Harsanyi from Google, sharing lessons drawn from large-scale distributed systems like Gemini. MoEngage brought a practitioner view on operating high-throughput, user-facing systems where latency directly impacts engagement. Benny Halevy outlined the path to tiered storage on ScyllaDB. Rivian discussed real-world automotive data platforms and Brian Jones from SAS highlighted how they tackled tension between batch and realtime workloads at scale by applying ScyllaDB. 
I also enjoyed the AWS talk from KT Tambe with Tzach Livyatan, tying cloud infrastructure back to database design decisions with the I8g family. To wrap the conference, we heard from Brendan Cox on how Sprig simplified their database infrastructure and achieved predictable low latency with – I’m sure you’ve guessed by now – ScyllaDB. Throughout the event, attendees could also binge-watch talks in Instant Access, which featured gems from Martin Kleppmann and Chris Riccomini, Disney/Hulu, DBOS, Tiket, and more.
Prepare for Takeoff
Final call to board my monstrous 15-hour flight home… It’s a great privilege to host these virtual conferences live backed by the team at ScyllaDB. The quality of this event ultimately comes down to the people – both the presenters who bring great depth and detail and the audience that keeps the conversation engaged and interesting. Thanks to everyone who contributed to making this the best Monster SCALE Summit to date.
Catch up on what you missed
If you want to chat with our engineers about how ScyllaDB might work with your use case, book a technical strategy session.
Announcing ScyllaDB Operator, with Red Hat OpenShift Certification
OpenShift users gain a trusted, validated path for installing and managing ScyllaDB Operator – backed by enterprise-grade support ScyllaDB Operator is an open-source project that helps you run ScyllaDB on Kubernetes by managing ScyllaDB clusters deployed to Kubernetes and automating tasks related to operating a ScyllaDB cluster. For example, it automates installation, vertical and horizontal scaling, as well as rolling upgrades. The latest release (version 1.20) is now Red Hat certified and available directly in the Red Hat OpenShift ecosystem catalog. Additionally, it brings new features, stability improvements and documentation updates. Red Hat OpenShift Certification and Catalog Availablity OpenShift has become a cornerstone platform for enterprise Kubernetes deployments, and we’ve been working to ensure ScyllaDB Operator feels like a native part of that ecosystem. With ScyllaDB Operator 1.20, we’re taking a significant step forward: the operator is now Red Hat certified and available directly in the Red Hat OpenShift ecosystem catalog. See the ScyllaDB Operator project in the Red Hat Ecosystem Catalog. This milestone gives OpenShift users a trusted, validated path for installing and managing ScyllaDB Operator – backed by enterprise-grade support. With this release, you can install ScyllaDB Operator through OLM (Operator Lifecycle Manager) using either the OpenShift Web Console or CLI. For detailed installation instructions and OpenShift-specific configuration examples – including guidance for platforms like Red Hat OpenShift Service on AWS (ROSA) – see the Installing ScyllaDB Operator on OpenShift guide. IPv6 support ScyllaDB Operator 1.20 brings native support for IPv6 and dual-stack networking to your ScyllaDB clusters. With dual-stack support, your ScyllaDB clusters can operate with both IPv4 and IPv6 addresses simultaneously. That provides flexibility for gradual migration scenarios or environments requiring support for both protocols. You can also configure IPv6-first deployments, where ScyllaDB uses IPv6 for internal communication while remaining accessible via both protocols. You can control the IP addressing behaviour of your cluster through v1.ScyllaCluster’s .spec.network. The API abstracts away the underlying ScyllaDB configuration complexity so you can focus on your networking requirements rather than implementation details. With this release, IPv4-first dual-stack and IPv6-first dual-stack configurations are production-ready. IPv6-only single-stack mode is available as an experimental feature under active development; it’s not recommended for production use. See Production readiness for details. Learn more about IPv6 networking in the IPv6 networking documentation. Other notable changes Documentation improvements This release includes a new guide on automatic data cleanups. It explains how ScyllaDB Operator ensures that your ScyllaDB clusters maintain storage efficiency and data integrity by removing stale data and preventing data resurrection. Dependency updates This release also includes regular updates of ScyllaDB Monitoring and the packaged dashboards to support the latest ScyllaDB releases (4.12.1->4.14.0, #3250), as well as its dependencies: Grafana (12.2.0->12.3.2) and Prometheus (v3.6.0->v3.9.1). For more changes and details, check out the GitHub release notes. Upgrade instructions For instructions on upgrading ScyllaDB Operator to 1.20, please refer to the Upgrading ScyllaDB Operator section of the documentation. 
Supported versions
- ScyllaDB 2024.1, 2025.1, 2025.3 – 2025.4
- Red Hat OpenShift 4.20
- Kubernetes 1.32 – 1.35
- Container Runtime Interface API v1
- ScyllaDB Manager 3.7 – 3.8
Getting started with ScyllaDB Operator
- ScyllaDB Operator Documentation
- Learn how to deploy ScyllaDB on Google Kubernetes Engine (GKE)
- Learn how to deploy ScyllaDB on Amazon Elastic Kubernetes Service (EKS)
- Learn how to deploy ScyllaDB on a Kubernetes Cluster
Related links
- ScyllaDB Operator source (on GitHub)
- ScyllaDB Operator image on DockerHub
- ScyllaDB Operator Helm Chart repository
- ScyllaDB Operator documentation
- ScyllaDB Operator for Kubernetes lesson in ScyllaDB University
Report a problem
Your feedback is always welcome! Feel free to open an issue or reach out on the #scylla-operator channel in ScyllaDB User Slack.
Instaclustr product update: March 2026
Here’s a roundup of the latest features and updates that we’ve recently released.
If you have any particular feature requests or enhancement ideas that you would like to see, please get in touch with us.
Major announcements
Introducing AI Cluster Health: Smarter monitoring made simple
Turn complex metrics into clear, actionable insights with AI-powered health indicators—now available in the NetApp Instaclustr console. The new AI Cluster Health page simplifies cluster monitoring, making it easy to understand your cluster's state at a glance without requiring deep technical expertise. This AI-driven analysis reviews recent metrics, highlights key indicators, explains their impact, and assigns an easy traffic-light health score for a quick status overview.
NetApp introduces Apache Kafka® Tiered Storage support on GCP in public preview
Tiered Storage is now available in public preview for Instaclustr for Apache Kafka on Google Cloud Platform. This feature enables customers to optimize storage costs and improve scalability by offloading older Kafka log segments from local disk to Google Cloud Storage (GCS), while keeping active data local for fast access. Kafka clients continue to consume data seamlessly with no changes required, allowing teams to reduce infrastructure costs, simplify cluster scaling, and extend retention periods for analytics or compliance.
Other significant changes
Apache Cassandra®
- Released Apache Cassandra 4.0.19 and 5.0.6 into general availability on the NetApp Instaclustr Managed Platform, giving customers access to the latest stability, performance, and security improvements.
- Multi–data center Apache Cassandra clusters can now be provisioned across public and private networks via the Instaclustr Console, API, and Terraform provider, enabling customers to provision multi-DC clusters from day one.
- Single CA feature is now available for Apache Cassandra clusters. See Single CA for more details.
- GCP n4 machine types are now supported for Apache Cassandra in the available regions.
- Apache Kafka and Kafka Connect 4.1.1 are now generally available.
- Added Client Telemetry feature support for Kafka in private preview.
- Single CA feature is now available for Apache Kafka clusters. See Single CA for more details.
- ClickHouse 25.8.11 has been added to our managed platform in General Availability.
- Enabled the ClickHouse system.session_log table, which plays a key role in tracking session lifecycle and auditing user activities for enhanced session monitoring. This helps you with troubleshooting client-side connectivity issues and provides insights into failed connections.
- OpenSearch 2.19.4 and 3.3.2 have been released to general availability.
- Added support for the OpenSearch Assistant feature in OpenSearch Dashboards for clusters with Dashboards and AI Search enabled.
- PostgreSQL version 18.1 has now been released to general availability, alongside PostgreSQL version 17.7.
- PgBouncer version 1.25.0 has now been released to general availability.
- Added a self-service Tags Management feature—allowing users to add, edit, or delete tags for their clusters directly through the Instaclustr console, APIs, or Terraform provider for RIYOA deployments.
- Added the new Germany West Central region for Azure.
- Following the private preview release, Kafka’s Client Telemetry feature is progressing toward general availability soon. Read more here.
- We plan to extend the current ClickHouse integration with FSxN data sources by adding support for deployments across different VPCs, enabling broader enterprise lakehouse architectures.
- Apache Iceberg and Delta Lake integration are planned to soon be available for ClickHouse on the NetApp Instaclustr Platform, giving you a practical way to run analytics on open table formats while keeping control of your existing data platforms.
- We plan to soon introduce fully integrated AWS PrivateLink as a ClickHouse Add-On for secure and seamless connectivity with ClickHouse.
- We’re aiming to launch PostgreSQL integrated with FSx for NetApp ONTAP (FSxN) along with NVMe support into general availability soon. This enhancement is designed to combine enterprise-grade PostgreSQL with FSxN’s scalable, cost-efficient storage, enabling customers to optimize infrastructure costs while improving performance and flexibility. NVMe support is designed to deliver up to 20% greater throughput vs NFS.
- An AI Search plugin for OpenSearch is being released to GA (currently in public preview) to enhance search experiences using AI‑powered techniques such as semantic, hybrid, and conversational search, enabling more relevant, context‑aware results and unlocking new use cases including retrieval‑augmented generation (RAG) and AI‑driven chatbots.
- Following the public preview release, Zero Inbound Access is progressing to General Availability, designed to deliver the most secure management connectivity by eliminating inbound internet exposure and removing the need for any routable public IP addresses, including bastion or gateway instances.
- Explore how to freeze your streaming data for long-term analytical queries in the future with our two-part blog series: Freezing streaming data into Apache Iceberg—Part 1: Using Apache Kafka® Connect Iceberg Sink Connector introduces Apache Iceberg and demonstrates streaming Kafka data using the Apache Kafka Connect Iceberg Sink Connector, and Freezing streaming data into Apache Iceberg—Part 2: Using Iceberg Topics examines the experimental approach of using Kafka Tiered Storage and Iceberg Topics, where non‑active Kafka segments are copied to remote storage while remaining transparently readable by Kafka clients.
- Modern search applications go beyond simple keyword matching, requiring a deep understanding of user intent and context to deliver relevant, meaningful results. From keywords to concepts: How OpenSearch® AI search outperforms traditional search explores how semantic and hybrid search methods in OpenSearch AI search compare to traditional keyword search, and how you can use these capabilities for more relevant results.
- Generative AI and Large Language Models (LLMs) are booming, and they've put a spotlight on a crucial technology: vector search. Many applications today demand high throughput, low latency, and constant availability for retrieving information. Slow vector search can become a significant bottleneck, delaying responses and degrading the user experience. Our two-part blog series Vector search benchmarking: Setting up embeddings, insertion, and retrieval with PostgreSQL® and Vector search benchmarking: Embeddings, insertion, and searching documents with ClickHouse® and Apache Cassandra® explores hands-on findings from our benchmarking projects, the role of databases in vector search, how to set up vector search for embeddings, insertion, and retrieval, and practical strategies for building faster, more efficient semantic search systems.
If you have any questions or need further assistance with these enhancements to the Instaclustr Managed Platform, please contact us.
SAFE HARBOR STATEMENT: Any unreleased services or features referenced in this blog are not currently available and may not be made generally available on time or at all, as may be determined in NetApp’s sole discretion. Any such referenced services or features do not represent promises to deliver, commitments, or obligations of NetApp and may not be incorporated into any contract. Customers should make their purchase decisions based upon services and features that are currently generally available.
The post Instaclustr product update: March 2026 appeared first on Instaclustr.
Integrated Gauges: Lessons Learned Monitoring Seastar’s IO Stack
Why integrated gauges are a better approach for measuring frequently changing values Many performance metrics and system parameters are inherently volatile or fluctuate rapidly. When using a monitoring system that periodically “scrapes” (polls) a target for its current metric value, the collected data point is merely a snapshot of the system’s state at that precise moment. It doesn’t reveal much about what’s actually happening in that area. Sometimes it’s possible to overcome this problem by accumulating those values somehow – for example, by using histograms or exporting a derived monotonically increasing counter. This article suggests yet another way to extend this approach for a broader set of frequently changing parameters. The Problem with Instantaneous Metrics For rapidly changing values such as queue lengths or request latencies, a single snapshot is most often misleading. If the scrape happens during a peak load, the value will appear high. If it happens during an idle moment, the value will appear low. Over time, the sampled data usually does not accurately represent the system’s true average behavior, leading to misinformed alerting or capacity planning. This phenomenon is apparent when examining synthetically generated data. Take a look at the example below. On that plot, there’s a randomly generated XY series. The horizontal axis represents the event number, the vertical axis represents the frequently fluctuating value. The “value” changes on every event. Fig.1 Random frequently changing data Now let’s thin the data out and see what would happen if we show only every 40th point, as if it were a real monitoring system that captures the value at a notably lower rate. The next plot shows what it would look like. Fig.2 Sampling every 40th point from the data Apparently, this is a very poor approximation of the first plot. It becomes crystal clear how poor it is if we zoom into the range [80-120] of both plots. For the latter (dispersed) plot, we’ll see a couple of dots with values of 2 and 0. But the former (original) plot would reveal drastically different information, shown below. Fig.3 Zoom-in range [80-120] from the data Now remember that the problem revealed itself at the ratio of the real rate to scrape rate being just 40. In real systems, internal parameters may change thousands of times per second while the scrape period is minutes. In that case, you end up scraping less than a fraction of a percent. A better way to monitor the frequently changing data is needed…badly. Histograms Request latency is one example of a statistic that usually contains many data points per second. Histograms can help you see how slow or fast the requests are – without falling into the problem described above. A monitoring histogram is a type of metric that samples observations and counts them in configurable buckets. Instead of recording just a single average or maximum latency value, a histogram provides a distribution of these values over a given period. By analyzing the bucket counts, you can calculate percentiles, including p50 (known as the median value) or p99 (the value below which 99% of all requests fell). This provides a much more robust and actionable view of performance volatility than simple instantaneous gauges or averages. 
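To make the bucket mechanics concrete, here is a minimal, self-contained C++ sketch of counting observations into buckets and reading a percentile estimate back out of the counts. The bucket bounds and sample values are made up for illustration; this is not the histogram implementation used by Seastar or Prometheus.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical latency observations in milliseconds.
    std::vector<double> samples = {1.2, 0.8, 3.5, 12.0, 0.9, 2.1, 47.0, 1.7, 5.3, 0.6};

    // Made-up cumulative bucket upper bounds; an implicit +inf bucket follows the last one.
    std::vector<double> bounds = {1.0, 2.0, 5.0, 10.0, 50.0};
    std::vector<size_t> counts(bounds.size() + 1, 0);

    // Count each observation into the first bucket whose bound covers it.
    for (double v : samples) {
        size_t i = 0;
        while (i < bounds.size() && v > bounds[i]) {
            ++i;
        }
        ++counts[i];
    }

    // Estimate a percentile from the bucket counts: walk the buckets until the
    // cumulative count reaches the requested fraction of all observations.
    auto percentile = [&](double q) {
        size_t target = static_cast<size_t>(q * samples.size());
        size_t seen = 0;
        for (size_t i = 0; i < counts.size(); ++i) {
            seen += counts[i];
            if (seen >= target) {
                // The +inf bucket is reported as the last finite bound.
                return i < bounds.size() ? bounds[i] : bounds.back();
            }
        }
        return bounds.back();
    };

    std::printf("p50 <= %.1f ms, p99 <= %.1f ms\n", percentile(0.50), percentile(0.99));
}
```

Note how the answer is an upper bound rather than an exact value: the resolution of the percentiles is limited by how the buckets are laid out, which is exactly the memory/precision trade-off mentioned above.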
Fig.4 Histogram built from data X-points It's a static snapshot of the data at the last "time point." If the values change over time, modern monitoring systems can offer various data views, such as time-series percentiles or heatmaps, to provide the necessary insight into the data. Histograms, however, have a drawback: They require system memory to work, node networking throughput to get reported, and monitoring disk space to be stored. A histogram of N buckets consumes N times more resources than a single value. Addable latencies When it comes to reporting latencies, a good compromise solution between efficiency and informativeness is "counter" metrics coupled with the rate() function. Counters and rates Counter metrics are one of the simplest and most fundamental metric types in the Prometheus monitoring system. It's a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero upon a restart of the monitored target. Because a counter itself is just an ever-increasing total, it is rarely analyzed directly. Instead, one derives another meaningful metric from counters, most notably a "rate". Mathematically, the Prometheus rate() function works like this: the system accumulates the total sum of the observed values, and after scraping that total several times, the monitoring system computes the "rate" as the difference between the current total and the total at the previous scrape, divided by the time passed since then: rate(total) = (total_now - total_prev) / (t_now - t_prev). The rated value thus shows the average value over the specified period of time. Now let's get back to latencies. To report average latency over the scraping period using counters, the system should accumulate two of them: the total sum of request latencies (e.g., in milliseconds) and the total number of completed requests. The monitoring system would then "rate" both counters and divide them, thus showing the average per-request latency. It looks like this: average latency = rate(total_latency) / rate(total_requests). Since the total number of requests served is useful on its own to see the IOPS value, this method is as efficient as exporting the immediate latency value. Yet, it provides a much more stable and representative metric than an instantaneous gauge. We can try to adapt counters to the artificial data that was introduced earlier. The plot below shows what the averaged values for the sub-ranges of length 40 (the sampling step from above) would look like. It differs greatly from the sampled plot of Figure 2 and provides a more accurate approximation of the real data set. Fig.5 Average values for sub-ranges of length 40 Of course, this method has its drawbacks when compared to histograms. It swallows latency spikes whose duration is short compared to the scraping period. The shorter the spike, the more likely it is to go unnoticed.
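A minimal sketch of the two-counter approach follows; the struct and function names are hypothetical, not Seastar's metrics API. The instrumented side only ever increments two monotonically growing totals, and the monitoring side derives the average latency from two consecutive scrapes of them.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical exported counters: both only ever grow.
struct latency_counters {
    double total_latency_ms = 0;    // sum of all request latencies
    uint64_t total_requests = 0;    // number of completed requests

    void on_request_completed(double latency_ms) {
        total_latency_ms += latency_ms;
        ++total_requests;
    }
};

// Monitoring-side math: take the delta of each counter between two scrapes,
// then divide to get the average per-request latency over that window.
double average_latency(const latency_counters& prev, const latency_counters& curr) {
    uint64_t requests = curr.total_requests - prev.total_requests;
    return requests ? (curr.total_latency_ms - prev.total_latency_ms) / requests : 0.0;
}

int main() {
    latency_counters c;
    latency_counters scrape1 = c;          // first scrape
    c.on_request_completed(2.0);
    c.on_request_completed(4.0);
    c.on_request_completed(12.0);
    latency_counters scrape2 = c;          // next scrape
    std::printf("avg latency: %.1f ms\n", average_latency(scrape1, scrape2)); // 6.0 ms
}
```

Dividing the two rate() values gives the same result as dividing the raw counter deltas, because the elapsed time between scrapes cancels out.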
Queue length Summing up individual values is not always possible though. Queue length is another example of a statistic that's also volatile and may contain thousands of data points. For example, this could be the queue of tasks to be executed or the queue of IO requests. Whenever an entry is added to the queue, its length increases. When an entry is removed, the length decreases – but it makes little sense to add up the updated queue length elsewhere. If we return to Figure 3 showing the real values in the range of [80-120], what would the average queue length over that period be? Apparently, it's the sum of individual values divided by their number. But even though we understand what the "average request latency" is, the idea of "average queue length" is harder to accept, mainly because the X value of the above data is usually not an "event" but rather a "time". And if, for example, a queue was empty most of the time and then changed to N elements for a few milliseconds, we'd have just two events that changed the queue. Seeing the average queue length of N/2 is counterintuitive. Some time ago, we researched how the "real" length of an IO queue can be implicitly derived from the net sum of request latencies. Here, we'll show another approach to getting an idea of the queue length. Integrated gauge The approach is a generalization of averaging the sum of individual data points from the data set using counters and the "rate" function. Let's return to our synthetic data when it was summed and rated (Figure 3) and look at the corresponding math from a slightly different angle. If we treat each value point as a bar with a width of 1 and a height equal to the value itself, we can see that:
- The total value is the total sum of the bars' areas.
- The difference between two scraped total values is the sum of the areas of the bars between two scraping points.
- The number of points in the range equals the distance between those two scraping points.
Fig.6 Geometrical interpretation of the exported counters Figure 6 shows this area in green. The average value is thus the height of the imaginary rectangle whose width equals the scraping period. If the distance between two adjacent points is not 1, but some duration, the interpretation still works. We can still calculate the area under the plot and divide it by the area's width to get its "average" value. But we need to apply two changes. First, the area of an individual bar is now the product of the value itself and the duration of time passed since the previous point. Think of it this way: previously, the duration was 1, so the multiplication was invisible. Now, it is explicit. Second, we no longer need to count the number of points in the range. Instead, the denominator of the averaging function will be the duration between scraping points. In other words, previously the number of points was the sum of 1-s (ones) between scraping points; now, it's the sum of durations – the total scraping period. It's now clear that the average value is the result of applying the "rate()" function to the total value.
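Here's a minimal sketch of that idea; the class and method names are hypothetical and this is not the actual Seastar patch. Every time the tracked value changes, the elapsed time multiplied by the old value is added to a monotonic counter, and the monitoring system's rate() over that counter yields the average value for the scrape interval.

```cpp
#include <chrono>
#include <cstdio>

using sclock = std::chrono::steady_clock;

// Hypothetical integrated gauge: accumulates value * time-spent-at-that-value.
class integrated_gauge {
    double _accumulated = 0;                  // exported as a monotonically growing counter
    double _current = 0;                      // the instantaneous value (e.g., queue length)
    sclock::time_point _last = sclock::now();

    void account() {
        auto now = sclock::now();
        _accumulated += _current * std::chrono::duration<double>(now - _last).count();
        _last = now;
    }

public:
    // Called whenever the tracked value changes (e.g., an entry is queued or dequeued).
    void set(double value) {
        account();
        _current = value;
    }

    // Called by the metrics scraper; accounts for the time since the last change
    // so the counter is up to date at the moment of the scrape.
    double scrape() {
        account();
        return _accumulated;
    }
};

int main() {
    integrated_gauge queue_len;
    queue_len.set(3);   // queue grows to 3 entries
    queue_len.set(0);   // ... later, the queue drains
    // The monitoring system computes rate() over consecutive scrapes:
    // (scrape_now - scrape_prev) / seconds_elapsed == average queue length.
    std::printf("accumulated value*seconds: %f\n", queue_len.scrape());
}
```

The exported number is "queue-length seconds," which is why rating it by wall-clock time recovers a plain (possibly fractional) average queue length.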
Naming problem (a side note) There are two hard problems in computer science: cache invalidation, naming, and off-by-one errors. Here we had to face one of them – how to name the newly introduced metric? It is the queue size multiplied by the time the queue spends in that state. A very close analogy comes from project management. There's a parameter called "human-hour" and it's widely used to estimate the resources needed to accomplish some task. In physics, there are not many units that measure something multiplied by time. But there are plenty of parameters that measure something divided by time, like velocity (distance-per-second), electric current (charge-per-second) or power (energy-per-second). Even though it's possible to define things the other way around (e.g., charge as current multiplied by time), it's still charge that's the primary unit and current that is secondary. So far I've found only a few examples of units that multiply something by time. In classical physics, one is called "action" and measures energy-times-time, and another one is "impulse," which is measured in newtons-times-seconds. Finally, in kinematics there's a thing called "absement," which is literally meters-times-seconds. It describes an object's displacement from its initial position accumulated over time. Integrated queue length The approach described above was recently applied to the Seastar IO stack – the length of IO classes' request queues was patched to account for and export the integrated value. Below is the screenshot of a dashboard showing the comparison of both compaction class queue lengths, randomly scraped and integrated. Interesting (but maybe hard to accept at first) are the fractional values of the queue length. That's OK, since the number shows the average queue length over a scrape period of a few minutes. A notable advantage of the integrated metric can be seen on the bottom two plots that show the length of the in-disk queue. Only the integrated metric (bottom right plot) shows that disk load actually decreased over time, while the randomly scraped one (bottom left plot) just tells us that the disk wasn't idling…but not more than that. This effect of "hiding" the reality from the observer can also be seen in the plot that shows how the commitlog class queue length was changing over time. Here we can see a clear rectangular "spike" in the integrated disk queue length plot. That spike was completely hidden by the randomly scraped one. Similarly, the dynamics of the software queue length (upper pair of plots) are only visible from the integrated counter (upper right plot). Conclusion Relying solely on instantaneous metrics, or gauges, to monitor rapidly fluctuating system parameters very often leads to misleading data that poorly reflects the system's actual behavior over a period of time. While solutions like histograms offer better statistical insights, they incur notable resource overhead. For metrics where the individual values can be aggregated (like request latencies), converting them into cumulative counters and deriving a rate provides a much more stable and representative average over the scraping interval. This offers a very efficient compromise between informative granularity and resource consumption. For metrics where instantaneous values cannot be simply summed (such as queue lengths), the integrated gauge generalizes the same approach with the same efficiency. In essence, by treating the gauge not as a point-in-time value but as a continuously accumulating measure of time-at-value, integrated gauges provide a highly reliable and representative view of a parameter's average behavior across any given measurement interval.
From ScyllaDB to Kafka: Natura's Approach to Real-Time Data at Scale
How Natura built a real-time data pipeline to support orders, analytics, and operations Natura, one of the world’s largest cosmetics companies, relies on a network of millions of beauty consultants generating a massive amount of orders, events, and business data every day. From an infrastructure perspective, this requires processing vast amounts of data to support orders, campaigns, online KPIs, predictive analytics, and commercial operations. Natura’s Rodrigo Luchini (Software Engineering Manager) and Marcus Monteiro (Senior Engineering Tech Lead) shared the technical challenges and architecture behind these use cases at Monster SCALE Summit 2025. Fitting for the conference theme, they explained how the team powers real-time sales insights at massive scale by building upon ScyllaDB’s CDC Source Connector. About Natura Rodrigo kicked off the talk with a bit of background on Natura. Natura was founded in 1969 by Antônio Luiz Seabra. It is a Brazilian multinational cosmetics and personal care company known for its commitment to sustainability and ethical sourcing. They were one of the first companies to focus on products connected to beauty, health, and self-care. Natura has three core pillars: Sustainability: They are committed to producing products without animal testing and to supporting local producers, including communities in the Amazon rainforest. People: They value diversity and believe in the power of relationships. This belief drives how they work at Natura and is reflected in their beauty consultant network. Technology: They invest heavily in advanced engineering as well as product development. The Technical Challenge: Managing Massive Data Volume with Real-Time Updates The first challenge they face is integrating a high volume of data (just imagine millions of beauty consultants generating data and information every single day). The second challenge is having real-time updated data processing for different consumers across disconnected systems. Rodrigo explained, “Imagine two different challenges: creating and generating real-time data, and at the same time, delivering it to the network of consultants and for various purposes within Natura.” This is where ScyllaDB comes in. Here’s a look at the initial architecture flow, focusing on the order and API flow. As soon as a beauty consultant places an order, they need to update and process the related data immediately. This is possible for three main reasons: As Rodrigo put it, “The first is that ScyllaDB is a fast database infrastructure that operates in a resilient way. The second is that it is a robust database that replicates data across multiple nodes, which ensures data consistency and reliability. The last but not least reason is scalability – it’s capable of supporting billions of data processing operations.” The Natura team architected their system as follows: In addition to the order ingestion flow mentioned above, there’s an order metrics flow closely connected to ScyllaDB Cloud (ScyllaDB’s fully-managed database-as-a-service offering), as well as the ScyllaDB CDC Source Connector (A Kafka source connector capturing ScyllaDB CDC changes). Together, this enables different use cases in their internal systems. For example, it’s used to determine each beauty consultant’s business plan and also to report data across the direct sales network, including beauty consultants, leaders, and managers. These real-time reports drive business metrics up the chain for accurate, just-in-time decisions. 
Additionally, the data is used to define strategy and determine what products to offer customers. Contributing to the ScyllaDB CDC Source Connector When Natura started testing the ScyllaDB Connector, they noticed a significant spike in the cluster's resource consumption. This continued until the CDC log table was fully processed, then returned to normal. At that point, the team took a step back. After reviewing the documentation, they learned that the connector operates with small windows (15 seconds by default) for reading the CDC log tables and sending the results to Kafka. However, at startup, these windows are actually based on the table TTL, which ranged from one to three days in Natura's use case. Marcus shared: "Now imagine the impact. A massive amount of data, thousands of partitions, and the database reading all of it and staying in that state until the connector catches up to the current time window. So we asked ourselves: 'Do we really need all the data?' No. We had already run a full, readable load process for the ScyllaDB tables. What we really needed were just incremental changes, not the last three days, not the last 24 hours, just the last 15 minutes." So, as ScyllaDB was adding this feature to the GitHub repo, the Natura team created a new option: scylla.custom.window.start. This let them tell the connector exactly where to start, so they could avoid unnecessary reads and relieve unnecessary load on the database. Marcus wrapped up the talk with the payoff: "This results in a highly efficient real-time data capture system that streams the CDC events straight to Kafka. From there, we can do anything—consume the data, store it, or move it to any database. This is a gamechanger. With this optimization, we unlocked a new level of efficiency and scalability, and this made a real difference for us."
Claude Code Marketplace Now Available
Claude Code has become an indispensable part of my daily workflow. I use it for everything from writing code to debugging production issues. But while Claude is incredibly capable out of the box, there are areas where injecting specialized domain knowledge makes it dramatically more useful.
That’s why I built a plugin marketplace. Yesterday I released rustyrazorblade/skills, a collection of Claude Code plugins that extend Claude with expert-level knowledge in specific domains. The first plugin is something I’ve been talking about doing for a while: a Cassandra expert.