v1.0.0 · Koder Lang · AGPL-3.0 · 7 Protocols

Every message delivered.
No matter what.

Fast, durable, multi-protocol message queue and event streaming platform. Named queues, pub/sub topics, consumer groups, priority, DLQ, transactions, stream processing — one binary, zero dependencies.

$ kmq serve

Everything You Need

Combines the best features from Kafka, RabbitMQ, NATS, Pulsar, and Redis Streams

Named Queues

Persistent FIFO queues with WAL storage, configurable ack timeout, visibility timeout, and automatic dead letter routing.
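
The visibility-timeout mechanic can be sketched in a few lines of plain Ruby. This is an illustration of the semantics only, not KoderMQ's storage engine: a consumed-but-unacked message is hidden for a window, then redelivered.

```ruby
# Illustrative sketch of visibility-timeout semantics (not KoderMQ internals):
# consuming hides a message until the timeout elapses; acking removes it.
class VisibleQueue
  def initialize(visibility_ms:)
    @visibility_ms = visibility_ms
    @messages = []               # pairs of [msg, visible_at_ms]
  end

  def publish(msg, now_ms)
    @messages << [msg, now_ms]
  end

  def consume(now_ms)
    entry = @messages.find { |_, at| at <= now_ms }
    return nil unless entry
    entry[1] = now_ms + @visibility_ms   # hide until the timeout expires
    entry[0]
  end

  def ack(msg)
    @messages.reject! { |m, _| m == msg }
  end
end

q = VisibleQueue.new(visibility_ms: 30_000)
q.publish("job-1", 0)
puts q.consume(0)              # => job-1
puts q.consume(1_000).inspect  # => nil (invisible until t=30_000)
puts q.consume(30_000)         # => job-1 (redelivered because it was never acked)
```

If the consumer crashes before acking, the message simply becomes visible again after the timeout, which is what makes at-least-once delivery work.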

Topics & Partitions

Pub/sub with partitioned streams, key-based ordering, multiple subscription modes (shared, exclusive, failover, key-shared).
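
Key-based ordering usually works by hashing the message key onto a fixed partition, so all messages sharing a key land on the same partition in publish order. The broker's actual hash function isn't documented here; this sketch uses CRC32 as a stand-in:

```ruby
require "zlib"

# Illustrative key-to-partition routing (the real hash may differ):
# the same key always maps to the same partition, preserving per-key order.
NUM_PARTITIONS = 8

def partition_for(key)
  Zlib.crc32(key) % NUM_PARTITIONS
end

same = partition_for("order-42") == partition_for("order-42")
puts same  # => true: "order-42" messages always share one partition
```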

Consumer Groups

Automatic partition assignment with range, round-robin, and sticky rebalancing. Heartbeat monitoring and session timeouts.
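
The difference between range and round-robin assignment is easy to see with a toy implementation (illustrative only, not the broker's rebalancer): range hands each member a contiguous block, round-robin interleaves.

```ruby
# Range: contiguous blocks, earlier members absorb the remainder.
def range_assign(partitions, members)
  members = members.sort
  per, extra = partitions.size.divmod(members.size)
  out, idx = {}, 0
  members.each_with_index do |m, i|
    take = per + (i < extra ? 1 : 0)
    out[m] = partitions[idx, take]
    idx += take
  end
  out
end

# Round-robin: deal partitions out one at a time.
def round_robin_assign(partitions, members)
  members = members.sort
  out = members.to_h { |m| [m, []] }
  partitions.each_with_index { |p, i| out[members[i % members.size]] << p }
  out
end

parts = (0..7).to_a
puts range_assign(parts, %w[c1 c2 c3]).inspect
# c1 => [0,1,2], c2 => [3,4,5], c3 => [6,7]
puts round_robin_assign(parts, %w[c1 c2]).inspect
# c1 => [0,2,4,6], c2 => [1,3,5,7]
```

Sticky assignment adds one constraint on top of these: on rebalance, keep as many existing member-to-partition pairs as possible to avoid churn.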

Priority Queues

10 priority levels (0-9) with weighted fair scheduling or strict priority mode. Per-level sub-queues for isolation.
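
Strict priority always drains the highest non-empty level, which can starve low-priority traffic; weighted fair scheduling instead gives each level dequeues in proportion to a weight. A smooth weighted round-robin sketch (illustrative, not KoderMQ's scheduler) shows the idea:

```ruby
# Smooth weighted round-robin across priority levels: each offered level
# accumulates credit equal to its weight; the level with the most credit
# is picked and pays back the total. Higher-weight levels win more often
# but lower levels are never starved.
class WeightedFairPicker
  def initialize(weights)
    @weights = weights
    @current = Hash.new(0)
  end

  def pick(levels)
    levels.each { |l| @current[l] += @weights.fetch(l, 1) }
    best = levels.max_by { |l| @current[l] }
    @current[best] -= levels.sum { |l| @weights.fetch(l, 1) }
    best
  end
end

picker = WeightedFairPicker.new({ 9 => 3, 5 => 1 })
picks = Array.new(8) { picker.pick([9, 5]) }
puts picks.inspect  # level 9 is picked 3x as often as level 5, interleaved
```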

Delayed & Scheduled

Timing wheel scheduler for delayed messages up to 7 days. Cron-like recurring schedules with cancel support.
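
A timing wheel stores delayed messages in slots of a circular array; each tick advances a cursor and fires whatever is due in the current slot, with a "rounds" counter handling delays longer than one revolution. A minimal sketch of the mechanism (not the broker's implementation):

```ruby
# Minimal timing wheel: O(1) schedule, O(1) amortized tick.
class TimingWheel
  def initialize(slots:, tick_ms:)
    @slots = Array.new(slots) { [] }
    @tick_ms = tick_ms
    @cursor = 0
  end

  def schedule(msg, delay_ms)
    ticks = delay_ms / @tick_ms
    slot = (@cursor + ticks) % @slots.size
    rounds = ticks / @slots.size        # extra full revolutions to wait
    @slots[slot] << [rounds, msg]
  end

  # Advance one tick; return the messages that are now due.
  def tick
    @cursor = (@cursor + 1) % @slots.size
    due, pending = @slots[@cursor].partition { |rounds, _| rounds.zero? }
    @slots[@cursor] = pending.map { |rounds, msg| [rounds - 1, msg] }
    due.map { |_, msg| msg }
  end
end

wheel = TimingWheel.new(slots: 60, tick_ms: 1000)
wheel.schedule("send-email", 3000)      # due after 3 ticks
puts (1..3).map { wheel.tick }.inspect  # => [[], [], ["send-email"]]
```

With a coarse wheel for days-long delays and a fine wheel for seconds, the same structure covers the full 7-day range cheaply.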

Dead Letter Queues

Automatic DLQ routing after max retries. Inspect, replay, and purge operations. Reason tracking and breakdown.
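
The retry-then-dead-letter flow reduces to a small loop: count delivery attempts, requeue on failure, and move the message (tagged with a reason) once the limit is hit. An illustrative sketch, not KoderMQ's code:

```ruby
# After MAX_RETRIES failed deliveries, route the message to the DLQ
# with a reason; otherwise requeue it for another attempt.
MAX_RETRIES = 3

def handle_failure(msg, queue, dlq)
  msg[:attempts] += 1
  if msg[:attempts] >= MAX_RETRIES
    dlq << msg.merge(reason: "max_retries_exceeded")
  else
    queue << msg
  end
end

queue, dlq = [], []
msg = { id: 1, body: "oops", attempts: 0 }
3.times { handle_failure(queue.shift || msg, queue, dlq) }
puts dlq.first[:reason]  # => max_retries_exceeded
```

Because the reason travels with the message, the DLQ can later be inspected, broken down by failure cause, replayed, or purged.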

Transactions

Multi-queue atomic publish with begin/commit/rollback. Ensures all-or-nothing delivery across destinations.
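
Conceptually, a transactional publish stages messages in a buffer and only appends them to the destination queues on commit, so either every queue sees its message or none does. A toy sketch of that staging model (illustrative only):

```ruby
# Stage publishes in memory; commit appends everything, rollback drops it.
class Txn
  def initialize(queues)
    @queues = queues
    @staged = []
  end

  def publish(queue_name, msg)
    @staged << [queue_name, msg]   # nothing is visible yet
  end

  def commit
    @staged.each { |name, msg| @queues[name] << msg }
    @staged.clear
  end

  def rollback
    @staged.clear
  end
end

queues = { "orders" => [], "audit" => [] }
txn = Txn.new(queues)
txn.publish("orders", "o-1")
txn.publish("audit", "created o-1")
txn.commit
puts queues.transform_values(&:size).inspect  # both queues got exactly one message
```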

Replay & Seek

Rewind to any timestamp or offset. Seek to earliest, latest, or specific position. Time index for fast lookups.
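
A time index for seek typically maps sampled timestamps to offsets: find the last indexed timestamp at or before the target, start there, and scan forward. The entry layout below is an assumption for illustration, not the broker's on-disk format:

```ruby
# index: sorted [timestamp_ms, offset] pairs, sparse (one entry per segment).
# Returns the offset to start scanning from for a timestamp seek.
def seek_to_time(index, ts)
  entry = index.take_while { |t, _| t <= ts }.last
  entry ? entry[1] : 0   # before the first entry => start at earliest
end

index = [[1000, 0], [2000, 120], [3000, 260]]
puts seek_to_time(index, 2500)  # => 120 (start at offset 120, scan forward)
puts seek_to_time(index, 500)   # => 0
```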

Schema Registry

JSON Schema, Avro, Protobuf validation. Backward/forward/full compatibility checks. Auto-register on publish.

Stream Processing

Built-in map, filter, aggregate, windowing (tumbling, sliding, session). Stateful processing with checkpointing.
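
Tumbling windows bucket events into fixed, non-overlapping time intervals; counting per key within each window is then a group-and-size. An illustrative sketch of the semantics (not the built-in processor):

```ruby
# Bucket events into non-overlapping 10s windows by timestamp,
# then count events per [window_start, key].
WINDOW_MS = 10_000

def tumbling_counts(events)
  events.group_by { |e| [e[:ts] / WINDOW_MS * WINDOW_MS, e[:key]] }
        .transform_values(&:size)
end

events = [
  { ts: 1_000,  key: "order" },
  { ts: 4_000,  key: "order" },
  { ts: 12_000, key: "order" },  # falls in the next 10s window
]
puts tumbling_counts(events).inspect
# two "order" events in window [0, 10s), one in [10s, 20s)
```

Sliding windows differ only in that an event can belong to several overlapping buckets, and session windows close after a gap of inactivity instead of at a fixed boundary.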

Connectors

Source and sink connectors for PostgreSQL, HTTP webhooks, and files. Batch processing with retry and error handling.

Metrics & Dashboard

Prometheus metrics with per-queue, per-topic, per-consumer granularity. Built-in admin dashboard web UI.

7 Protocols, One Broker

Connect from any language, any framework, any device

9400    Native Binary   Highest performance
5672    AMQP 0.9.1      RabbitMQ compatible
1883    MQTT 5.0        IoT & lightweight
61613   STOMP 1.2       Simple text-based
9401    HTTP/REST       Universal access
9401    WebSocket       Real-time streaming
9402    gRPC            HTTP/2 RPC

Simple, Powerful API

Get started in seconds with any protocol

REST API

Publish and consume messages with simple HTTP requests. No client library needed.

  • Create queues and topics
  • Publish with key, priority, delay
  • Batch consume with auto-ack
  • WebSocket for real-time
REST API
# Create a queue
curl -X POST http://localhost:9401/api/v1/queues \
  -H "Content-Type: application/json" \
  -d '{"name": "orders"}'

# Publish a message
curl -X POST http://localhost:9401/api/v1/queues/orders/messages \
  -H "Content-Type: application/json" \
  -d '{
    "body": "{\"id\":1,\"item\":\"laptop\"}",
    "key": "order-1",
    "priority": 5
  }'

# Consume messages
curl -X POST http://localhost:9401/api/v1/queues/orders/consume \
  -H "Content-Type: application/json" \
  -d '{"max_count": 10, "auto_ack": true}'
Koder Lang
require "koder-mq"

# Start broker
app = KoderMQ::App.new

# Create resources
broker = app.broker
broker.create_queue("orders")
broker.create_topic("events", num_partitions: 8)

# Publish to queue
msg = KoderMQ::Message.new(
  body: '{"order_id": 42}',
  key: "order-42",
  priority: 5
)
broker.get_queue("orders").publish(msg)

# Consume from queue
result = broker.get_queue("orders").consume(
  consumer_id: "worker-1",
  auto_ack: false
)
message, token = result
puts message.body
broker.get_queue("orders").ack(message.id)

# Pub/sub on topics
topic = broker.get_topic("events")
sub = topic.subscribe(
  name: "analytics",
  mode: :shared,
  from: :earliest
)

# Stream processing
processor = KoderMQ::Stream::Processor.new(
  name: "order-count",
  source: "events",
  broker: broker
)
processor
  .filter { |msg| msg.headers["type"] == "order" }
  .group_by { |msg| msg.key }
  .count
  .to("order-counts")
  .start

Koder Lang SDK

Full programmatic control with the native Koder Lang client. Type-safe, zero-copy, maximum performance.

  • Queue and topic management
  • Producer with batching and transactions
  • Consumer with prefetch and flow control
  • Built-in stream processing DSL
  • Consumer groups with auto-rebalance

How Koder MQ Compares

One binary that replaces your entire messaging stack

Feature              Koder MQ    Kafka      RabbitMQ   NATS    Pulsar     Redis Streams
Named Queues         Yes         No         Yes        Yes     Yes        No
Partitioned Topics   Yes         Yes        No         Yes     Yes        No
Consumer Groups      Yes         Yes        No         Yes     Yes        Yes
Priority Queues      10 levels   No         255        No      No         No
Delayed Messages     Yes         No         Plugin     No      Yes        No
Dead Letter Queue    Auto        No         Yes        No      Yes        No
Transactions         Yes         Yes        Yes        No      Yes        Yes
Schema Registry      Built-in    Separate   No         No      Built-in   No
Stream Processing    Built-in    Streams    No         No      Functions  No
AMQP                 0.9.1       No         0.9.1      No      No         No
MQTT                 5.0         No         Plugin     Yes     No         No
HTTP/REST            Yes         Proxy      Plugin     Yes     Yes        No
gRPC                 Yes         No         No         No      Yes        No
WebSocket            Yes         No         Plugin     Yes     Yes        No
Single Binary        Yes         JVM        Erlang     Yes     JVM        Yes
No External Deps     Yes         ZK/KRaft   Yes        Yes     ZK/BK      Yes

Powerful CLI

Manage everything from the command line

$ kmq serve
Start the message broker with all protocols
$ kmq queues list
List all queues with depth and consumer count
$ kmq queues create --name orders
Create a new persistent queue
$ kmq topics create --name events --partitions 8
Create a topic with 8 partitions
$ kmq produce --name orders --body '{"id":1}'
Publish a message to a queue
$ kmq consume --name orders --count 10
Consume up to 10 messages from a queue
$ kmq stats
Show broker statistics and resource usage
$ kmq config
Print current configuration

Ready to Deliver

Get started with Koder MQ in seconds

$ kmq serve --config production.toml