RabbitMQ 3.13.0-rc.4
RabbitMQ 3.13.0-rc.4 is a candidate of a new feature release.
Highlights
This release includes several new features and optimizations.
The user-facing areas that have seen the biggest improvements in this release are:
- Khepri, which can now be used as an alternative schema data store in RabbitMQ, replacing Mnesia
- Support for (consumer) stream filtering
- Support for MQTTv5
- A new, heavily AMQP 1.0-influenced message container format used internally
- Improved performance of non-mirrored classic queues, in particular for messages larger than 4 KiB (or a different customized CQ index embedding threshold)
See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.
Release Artifacts
RabbitMQ preview releases are distributed via GitHub.
Community Docker image is another installation option
for previews. It is updated with a delay (usually a few days).
Erlang/OTP Compatibility Notes
This release requires Erlang 26.0 or later.
Provisioning Latest Erlang Releases explains
what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.
Upgrading to 3.13
Documentation guides on upgrades
See the Upgrading guide for documentation on upgrades and RabbitMQ change log
for release notes of other releases.
Note that since 3.12.0 requires all feature flags to be enabled before upgrading,
there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.
Required Feature Flags
This release does not graduate any feature flags.
However, all users are highly encouraged to enable all feature flags before upgrading to this release from
3.12.x.
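One way to do that on a 3.12.x cluster before the upgrade (a minimal sketch using rabbitmqctl):

```
# inspect the current feature flag inventory
rabbitmqctl list_feature_flags

# enable all stable feature flags in one go
rabbitmqctl enable_feature_flag all
```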
Mixed version cluster compatibility
RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.
While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below.
Once all nodes are upgraded to 3.13.0, these irregularities will go away.
Mixed version clusters are a mechanism that allows rolling upgrades and are not meant to be run for extended periods of time (no more than a few hours).
Compatibility Notes
This release includes a few potentially breaking changes.
Minimum Supported Erlang Version
Starting with this release, RabbitMQ requires Erlang 26.0 or later versions. Nodes will fail to start
on older Erlang releases.
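To confirm which Erlang version a node runs on before upgrading, something like the following can be used (rabbitmq-diagnostics ships with the server; the erl one-liner only assumes Erlang is on the PATH):

```
rabbitmq-diagnostics erlang_version

# or, without RabbitMQ tooling:
erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'
```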
Client Library Compatibility
Client libraries that were compatible with RabbitMQ 3.12.x will be compatible with 3.13.0. RabbitMQ Stream Protocol clients must be upgraded to use the stream filtering feature introduced in this release.
Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia
Khepri has an important difference from Mnesia when it comes to schema modifications such as queue
or stream declarations, or binding declarations. These changes won't be noticeable with many workloads
but can affect some, in particular, certain integration tests.
Consider two scenarios, A and B.
Scenario A
There is only one client. The client performs the following steps:
1. It declares a queue Q
2. It binds Q to an exchange X
3. It publishes a message M to the exchange X
4. It expects the message to be routed to queue Q
5. It consumes the message
In this scenario, there should be no observable difference in behavior: the client's expectations will be met.
Scenario B
There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host.
Node R2 has no client connections.
Client One performs the following steps:
1. It declares a queue Q
2. It binds Q to an exchange X
3. It gets a queue declaration confirmation back
4. It notifies Client Two, or Client Two implicitly finds out that Client One has finished the steps above (for example, in an integration test)
5. Client Two publishes a message M to X
6. Clients One and Two expect the message to be routed to Q
In this scenario, on step three Mnesia would return when all cluster nodes have committed the update. Khepri, however, will return when a majority of nodes, including the node handling Client One's operations, have committed it.
This may include nodes R1 and R2 but not node R3, meaning that message M, published by Client Two connected to node R3 in the above example, is not guaranteed to be routed.
Once all schema changes propagate to node R3, Client Two's subsequent
publishes on node R3 will be guaranteed to be routed.
This is a trade-off of a Raft-based system, which assumes that a write accepted by a majority of nodes can be considered a success.
Workaround Strategies
To satisfy Client Two's expectations in scenario B, Khepri could perform consistent queries (involving a majority of replicas) of bindings when routing messages, but that would have a significant impact on the throughput of certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).
Applications that rely on multiple connections that depend on a shared topology have
several coping strategies.
If an application uses two or more connections to different nodes, it can declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with other operations.
Applications that rely on dynamic topologies can switch to using a "static" set of exchanges and bindings.
Application components that do not need to use a shared topology can each configure their own queues/streams/bindings.
Test suites that use multiple connections to different nodes can choose to use just one connection or
connect to the same node, or inject a pause, or await a certain condition that indicates that the topology
is in place.
Management Plugin and HTTP API
The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in a 25% traffic saving.
MQTT Plugin
The mqtt.subscription_ttl configuration setting (in milliseconds) was replaced with mqtt.max_session_expiry_interval_seconds (in seconds). A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set.
For example, if you set mqtt.subscription_ttl = 3600000 (1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600 (1 hour) in 3.13.
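In rabbitmq.conf terms, using the values from the example above, the change looks like this:

```
# 3.12.x and earlier (will prevent a 3.13 node from booting)
# mqtt.subscription_ttl = 3600000

# 3.13.x equivalent
mqtt.max_session_expiry_interval_seconds = 3600
```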
rabbitmqctl node_health_check is Now a No-Op
rabbitmqctl node_health_check has been deprecated for over three years and is now a no-op (does nothing).
See the Health Checks section in the monitoring guide
to find out what modern alternatives are available.
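For illustration, two of the modern checks look like this (a minimal sketch; see the Health Checks section of the monitoring guide for the full list and guidance on choosing checks):

```
# is the runtime up and is the RabbitMQ application booted?
rabbitmq-diagnostics check_running

# are any resource alarms in effect on the local node?
rabbitmq-diagnostics check_local_alarms
```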
openSUSE Leap Package is not Provided
An openSUSE Leap package will not be provided with this release of RabbitMQ.
This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory, but the package depends on glibc 2.34, and all currently available openSUSE Leap releases (up to 15.5) ship with 2.31 at most.
Team RabbitMQ would like to continue building an openSUSE Leap package when a Leap 15.5-compatible Erlang 26
package becomes publicly available.
Getting Help
Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or
on our community Discord.
Changes Worth Mentioning
Release notes are kept under rabbitmq-server/release-notes.
Core Server
Enhancements
Khepri now can be used as an alternative schema data store
in RabbitMQ, by enabling a feature flag:
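The flag in question is khepri_db (name assumed here; confirm with rabbitmqctl list_feature_flags on your node). A minimal way to opt in:

```
rabbitmqctl enable_feature_flag khepri_db
```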
In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store
that will predictably recover from network partitions and node failures, the same way quorum queues
and streams already do. At the same time, this means
that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.
Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features
of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.
Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.
GitHub issue: #7206
Messages are now internally stored using a new common, heavily AMQP 1.0-influenced container format. This is a major step towards a protocol-agnostic core: a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state, and other purposes.
AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have adopted, or will adopt, this internal representation in upcoming releases. The RabbitMQ Stream protocol already uses the AMQP 1.0 message container structure internally.
This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ, and will allow most cross-protocol translation rough edges to be smoothed out.
GitHub issue: #5077
Target quorum queue replica state is now continuously reconciled.
When the number of online replicas of a quorum queue goes below (or above) its target,
new replicas will be automatically placed if enough cluster nodes are available.
This is a more automatic version of how quorum queue replicas were originally grown.
For automatic shrinking of queue replicas, the user must opt in.
Contributed by @SimonUnge (AWS).
GitHub issue: #8218
Reduced memory footprint, improved memory use predictability and throughput of classic queues (version 2, or CQv2).
This particularly benefits classic queues with longer backlogs.
Classic queue v2 (CQv2) storage implementation is now the default. It is possible to switch the default back to CQv1 using rabbitmq.conf, as shown in the example below this item.
Individual queues can be declared with a specific version by passing the x-queue-version argument and/or have it set through a queue-version policy.
GitHub issue: #8308
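The rabbitmq.conf setting to switch the default back to CQv1:

```
# uses CQv1 by default
classic_queue.default_version = 1
```

A queue-version policy can be applied with rabbitmqctl, for example rabbitmqctl set_policy cq-v2 "^cq\." '{"queue-version": 2}' --apply-to queues (the policy name and queue name pattern here are hypothetical).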
Revisited peer discovery implementation that further reduces the probability of two or more
sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.
GitHub issue: #9797
Non-mirrored classic queues: optimizations of storage for larger (greater than 4 kiB) messages.
GitHub issue: #6090, #8507
A subsystem for marking features as deprecated.
GitHub issue: #7390
Plugins can now register custom queue types, which means a plugin can provide its own queue type implementation.
Contributed by @luos (Erlang Solutions).
GitHub issues: #8834, #8927
Bug Fixes
This release includes all bug fixes shipped in the 3.12.x series.
Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.
GitHub issue: #8477
Feature flag discovery operations will now be retried multiple times in case of network failures.
GitHub issue: #8491
The state of node maintenance status across the cluster is now replicated. It previously was accessible
to all nodes but not replicated.
GitHub issue: #9005
Management Plugin
Enhancements
New API endpoint, GET /api/stream/{vhost}/{name}/tracking, can be used to track publisher and consumer offsets in a stream.
GitHub issue: #9642
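A minimal example of calling the new endpoint with curl, assuming a stream named my-stream in the default virtual host (/ is URL-encoded as %2F) and default guest credentials:

```
curl -u guest:guest 'http://localhost:15672/api/stream/%2F/my-stream/tracking'
```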
Several rarely used queue metrics were removed to reduce inter-node data transfers
and CPU burn during API response generation. The effects will be particularly pronounced
for the GET /api/queues endpoint used without filtering or pagination, which can produce enormously large responses.
A couple of relevant queue metrics or state fields were lifted to the top level.
This is a potentially breaking change.
Note that Prometheus is the recommended option for monitoring,
not the management plugin's HTTP API.
GitHub issues: #9437, #9578, #9633
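As an illustration, HTTP API consumers can keep responses small by combining pagination with an explicit column list (the credentials and column names below are placeholders; assumes a management listener on localhost:15672):

```
curl -u guest:guest \
  'http://localhost:15672/api/queues?page=1&page_size=100&columns=name,messages,consumers'
```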
Stream Plugin
Enhancements
Support for (consumer) stream filtering.
This allows consumers that are only interested in a subset of data in a stream to receive
less data. Note that false positives are possible, so this feature should be accompanied by
client library or application-level filtering.
GitHub issue: #8207
MQTT Plugin
Enhancements
Support for MQTTv5 (with limitations).
GitHub issues: #7263, #8681
Negative message acknowledgements are now propagated to MQTTv5 clients.
GitHub issue: #9034
Potential incompatibility: the mqtt.subscription_ttl configuration setting was replaced with mqtt.max_session_expiry_interval_seconds, which targets MQTTv5.
GitHub issue: #8846
AMQP 1.0 Plugin
Bug Fixes
During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as x-correlation-id (instead of x-correlation) for values longer than 255 bytes.
This is a potentially breaking change.
GitHub issue: #8680
Dependency Changes
ra was upgraded to 2.7.1
osiris was upgraded to 1.6.9
Source Code Archives
To obtain source code of the entire distribution, please download the archive named
rabbitmq-server-3.13.0.tar.xz instead of the source tarball produced by GitHub.