
I have a Redis 3.0.5 instance that tends to show a growing mem_fragmentation_ratio over time.

The application using that instance is constantly creating and deleting keys.

After one month, I end up with a mem_fragmentation_ratio > 1.30. This impacts the memory footprint of Redis on that server:

~$ redis-cli info memory
# Memory
used_memory:7711297480
used_memory_human:7.18G
used_memory_rss:10695098368
used_memory_peak:11301744128
used_memory_peak_human:10.53G
used_memory_lua:95232
mem_fragmentation_ratio:1.39
mem_allocator:jemalloc-3.6.0

If I restart the Redis service and reload from the AOF, mem_fragmentation_ratio goes back to an acceptable level (1.06):

~$ redis-cli info memory
# Memory
used_memory:7493466968
used_memory_human:6.98G
used_memory_rss:7924920320
used_memory_peak:8279112992
used_memory_peak_human:7.71G
used_memory_lua:91136
mem_fragmentation_ratio:1.06
mem_allocator:jemalloc-3.6.0

Restarting Redis is disruptive for our application (even though we do it with a Sentinel failover after restarting a slave).

Is there another way to reduce mem_fragmentation_ratio, such as a 'defragmentation' process that I could schedule off-peak?

  • Olivier, did you ever make any progress with this? Commented Mar 3, 2017 at 11:19
  • Well, hum, no :) But there is hope ahead! After discussing with the #redis folks on IRC (Freenode), I received a message from a contributor telling me that "there will be an active-defragmentation option in upcoming Redis". I can't tell more... Commented Mar 6, 2017 at 21:55
  • Just so you know, we in fact decided on a different strategy and now simply discard Redis more frequently. We used to share a single Redis between multiple processes on the same machine, but now we couple one Redis to one process, so that when the process is terminated/recycled we discard the old Redis and bring up another. This solved the problem for us. Commented Mar 7, 2017 at 1:23
  • One lame way to deal with this is to start a new Redis and redirect the traffic there, then shut down the old one. Commented Dec 15, 2017 at 17:22

2 Answers


Memory fragmentation is a non-trivial issue.

Before v4, the only way to resolve it was restarting the process (possibly after making a slave, promoting it and redirecting traffic to it). As of v4, there's an experimental active memory defragmentation mechanism that may be enabled with a simple CONFIG SET activedefrag yes.
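As a minimal sketch of what that looks like from the command line (assuming a jemalloc build of Redis 4+, and noting that CONFIG REWRITE only works if the server was started with a config file):

~$ redis-cli CONFIG SET activedefrag yes      # start background defragmentation
~$ redis-cli CONFIG REWRITE                   # persist the setting to redis.conf
~$ redis-cli INFO memory | grep -E 'mem_fragmentation_ratio|active_defrag_running'

While a defrag cycle is running, the active_defrag_running field reported by Redis 4+ is non-zero, and mem_fragmentation_ratio should drift back down over time.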


Active defragmentation (introduced in Redis 4) has been improved in Redis 5. To quote from the AWS announcement about Redis 5:

This release ships with what can be called active defrag 2: It's faster, smarter, and has lower latency. This feature is especially useful for workloads where the allocator cannot keep the fragmentation low enough, so the strategy is for both Redis and the allocator to cooperate. For this to work, the Jemalloc allocator has to be used. Luckily, it's the default allocator on Linux.

Another quote from the Redis main developer:

Active defragmentation version 2. Defragmenting the memory of a running server is black magic, but Oran Agra improved his past effort and now it works better than before. Very useful for long running workloads that tend to fragment Jemalloc.

If you are using the Jemalloc allocator and fighting fragmentation, I would recommend turning the feature on:

CONFIG SET activedefrag yes 
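Since, per the quotes above, the feature only works with Jemalloc, it may be worth confirming the allocator first; a quick check:

~$ redis-cli INFO memory | grep mem_allocator   # must report a jemalloc version for activedefrag to do anything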

If you are on ElastiCache for Redis from AWS, Jemalloc is the default and active defrag is supported. Running MEMORY DOCTOR also recommends enabling the feature once the fragmentation level becomes too high.
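If the defaults turn out to be too timid or too aggressive for your workload, the behaviour is tunable. A sketch of the relevant redis.conf knobs (the values shown are the Redis 5 defaults; verify them against the redis.conf shipped with your version):

activedefrag yes
# Minimum amount of fragmentation waste before defrag starts
active-defrag-ignore-bytes 100mb
# Fragmentation percentage at which defrag starts / goes full speed
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
# Minimal and maximal CPU effort for defragmentation, in percent
active-defrag-cycle-min 5
active-defrag-cycle-max 75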
