Setting `maxmemory` to zero results in no memory limit. This is the default behavior for 64-bit systems, while 32-bit systems use an implicit memory limit of 3GB.
When the specified amount of memory is reached, it is possible to select among different behaviors, called **policies**. Redis can just return errors for commands that could result in more memory being used, or it can evict some old data in order to return to the specified limit every time new data is added.
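As a minimal sketch of how this is configured in `redis.conf` (the `100mb` figure is only an illustrative value, not a recommendation):

```
maxmemory 100mb
maxmemory-policy noeviction
```

Both directives can also be changed at runtime with `CONFIG SET`.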
...
...
@@ -55,7 +55,7 @@ The exact behavior Redis follows when the `maxmemory` limit is reached is config
The following policies are available:
- **noeviction**: return errors when the memory limit has been reached and the client is trying to execute commands that could result in more memory being used (most write commands, but [DEL](https://redis.io/commands/del) and a few more exceptions).
The policies **volatile-lru**, **volatile-random** and **volatile-ttl** behave like **noeviction** if there are no keys matching the prerequisites available for eviction.
Picking the right eviction policy is important and depends on the access pattern of your application. However, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis [INFO](https://redis.io/commands/info) output in order to tune your setup.
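To make the monitoring step concrete, here is a small sketch (not from the Redis docs) of how the hit rate can be derived from the `keyspace_hits` and `keyspace_misses` counters that `INFO stats` reports; the counter values below are illustrative placeholders, not real output:

```python
# Illustrative counter values, as they would be read from the
# `keyspace_hits` / `keyspace_misses` fields of `INFO stats`.
keyspace_hits = 25_000
keyspace_misses = 5_000

# Hit rate = hits / total lookups; compare this before and after
# changing the eviction policy to judge the effect of the change.
hit_rate = keyspace_hits / (keyspace_hits + keyspace_misses)
print(f"hit rate: {hit_rate:.1%}")
```

Resetting the counters with `CONFIG RESETSTAT` between experiments makes the comparison cleaner.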
In general, as a rule of thumb:
- Use the **allkeys-lru** policy when you expect a power-law distribution in the popularity of your requests, that is, you expect that a subset of elements will be accessed far more often than the rest. **This is a good pick if you are unsure**.
@@ -211,22 +211,23 @@ As you can see Redis 3.0 does a better job with 5 samples compared to Redis 2.8,
Note that LRU is just a model to predict how likely it is that a given key will be accessed in the future. Moreover, if your data access pattern closely resembles the power law, most of the accesses will be in the set of keys that the approximated LRU algorithm handles well.
However, you can raise the sample size to 10, at the cost of some additional CPU usage, in order to closely approximate true LRU, and check if this makes a difference in your cache miss rate.
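For example, the sample size is raised via the `maxmemory-samples` directive in `redis.conf` (or with `CONFIG SET maxmemory-samples 10` at runtime):

```
maxmemory-samples 10
```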