The message queue built for AI workflows.
DriftQ is the AI-native reliability layer for agent workflows. DriftQ-Core is the broker you can run today; see the roadmap for what’s next.
Alpha — not production-ready
Shipped: DriftQ-Core MVP
Shipped: Docker images, docs polish, etc.
Now: Replayable Workflow Runtime
What you can build with DriftQ
DriftQ is not “just a queue”. It’s the reliability layer for systems that need retries, backoff, idempotency, and observability — without fragile glue code.
- Agent pipelines that fan out work and safely retry failures
- Long-running workflows with durable steps and strict DLQ handling
- Streaming consumers that can crash and recover without double-processing
- Backpressure-aware producers that get explicit 429 + Retry-After
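When the broker sheds load, a producer should honor the explicit 429 + Retry-After signal instead of hammering the endpoint. Here is a minimal sketch of that loop, assuming the broker returns HTTP 429 with a Retry-After header as described above; the `send` callable and its return shape are illustrative stand-ins, not part of the DriftQ API:

```python
import time

def produce_with_backoff(send, payload, max_attempts=5, base_delay=0.5):
    """Call `send` until it succeeds, honoring 429 + Retry-After.

    `send(payload)` is a stand-in for an HTTP POST to /v1/produce and
    must return (status_code, headers). Illustrative only.
    """
    for attempt in range(max_attempts):
        status, headers = send(payload)
        if status != 429:
            return status
        # Prefer the broker's Retry-After hint; fall back to exponential backoff.
        delay = float(headers.get("Retry-After", base_delay * 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("producer gave up after repeated backpressure rejects")
```

In a real producer, `send` would wrap the HTTP client of your choice; the backoff logic stays the same.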
See real-world use cases
Concrete scenarios: Next.js + FastAPI + LangChain pipelines, retries, DLQ, replay, and more.
Example
# Create a topic
curl -i -X POST "http://localhost:8080/v1/topics?name=t&partitions=1"

# Produce with a retry policy
curl -i -X POST "http://localhost:8080/v1/produce?topic=t&value=hello&retry_max_attempts=2"

# Stream consume (NDJSON)
curl --no-buffer "http://localhost:8080/v1/consume?topic=t&group=g&owner=o&lease_ms=5000"

# Ack a message
curl -i -X POST "http://localhost:8080/v1/ack?topic=t&group=g&owner=o&partition=0&offset=0"

# Metrics
curl -s "http://localhost:8080/metrics" | findstr consumer_lag
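The consume stream is NDJSON, i.e. one JSON object per line, so a consumer can process and ack each message incrementally. A hedged sketch of that read-then-ack loop, assuming each line carries `partition` and `offset` fields (as the ack endpoint's parameters suggest); the field names, the `ack` callable, and the `handle` callable are assumptions for illustration:

```python
import json

def consume_ndjson(lines, ack, handle):
    """Process an NDJSON stream line by line, acking each message.

    `lines` is any iterable of NDJSON strings (e.g. the body of a
    streaming GET to /v1/consume); `ack(partition, offset)` stands in
    for a POST to /v1/ack. Both are illustrative.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank keep-alive lines
        msg = json.loads(line)
        handle(msg)
        # Ack only after the side effect has succeeded, so a crash
        # mid-handle leads to redelivery rather than message loss.
        ack(msg["partition"], msg["offset"])
```

Acking after the handler runs is what makes crash-and-recover safe: an unacked message is redelivered when the lease expires.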
Core capabilities
The primitives you need for safe, retryable AI work.
Streaming consume
NDJSON streaming from /v1/consume with per-owner leases.
Retry + DLQ
Automatic redelivery, envelope retry policy, and strict DLQ routing.
Idempotency
Consume-scope idempotency keys to prevent duplicate side effects.
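Consume-scope idempotency boils down to checking a key against the set of already-applied deliveries before running the side effect. A minimal in-memory sketch of that check; DriftQ tracks this state broker-side, and the `idempotency_key` field name here is an assumption:

```python
def apply_once(msg, seen_keys, side_effect):
    """Run `side_effect` at most once per idempotency key.

    `seen_keys` is an in-memory stand-in for DriftQ's consume-scope
    dedup state; a real consumer relies on the broker instead.
    """
    key = msg.get("idempotency_key")
    if key is not None and key in seen_keys:
        return False  # duplicate delivery: skip the side effect
    side_effect(msg)
    if key is not None:
        seen_keys.add(key)  # record only after the effect succeeded
    return True
```

This is why redelivery after a crash is safe: the retried message carries the same key, so the side effect does not run twice.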
Observability
Prometheus metrics for inflight, lag, DLQ totals, and backpressure rejects.
