redis-py
========

The Python interface to the Redis key-value store.

.. image:: https://secure.travis-ci.org/andymccurdy/redis-py.svg?branch=master
        :target: https://travis-ci.org/andymccurdy/redis-py
.. image:: https://readthedocs.org/projects/redis-py/badge/?version=latest&style=flat
        :target: https://redis-py.readthedocs.io/en/latest/
.. image:: https://badge.fury.io/py/redis.svg
        :target: https://pypi.org/project/redis/
.. image:: https://codecov.io/gh/andymccurdy/redis-py/branch/master/graph/badge.svg
        :target: https://codecov.io/gh/andymccurdy/redis-py

Installation
------------

redis-py requires a running Redis server. See `Redis's quickstart
<https://redis.io/topics/quickstart>`_ for installation instructions.

redis-py can be installed using `pip` similar to other Python packages. Do not use `sudo`
with `pip`. It is usually good to work in a
`virtualenv <https://virtualenv.pypa.io/en/latest/>`_ or
`venv <https://docs.python.org/3/library/venv.html>`_ to avoid conflicts with other package
managers and Python projects. For a quick introduction see
`Python Virtual Environments in Five Minutes <https://bit.ly/py-env>`_.

To install redis-py, simply:

.. code-block:: bash

    $ pip install redis

or from source:

.. code-block:: bash

    $ python setup.py install


Getting Started
---------------

.. code-block:: pycon

    >>> import redis
    >>> r = redis.Redis(host='localhost', port=6379, db=0)
    >>> r.set('foo', 'bar')
    True
    >>> r.get('foo')
    'bar'

By default, all responses are returned as `bytes` in Python 3 and `str` in
Python 2. The user is responsible for decoding to Python 3 strings or Python 2
unicode objects.

If **all** string responses from a client should be decoded, the user can
specify `decode_responses=True` to `Redis.__init__`. In this case, any
Redis command that returns a string type will be decoded with the `encoding`
specified.
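
For illustration, a minimal sketch of the difference on Python 3 (the key name
is arbitrary):

.. code-block:: pycon

    >>> r = redis.Redis(host='localhost', port=6379, db=0)
    >>> r.set('foo', 'bar')
    True
    >>> r.get('foo')
    b'bar'
    >>> decoded_r = redis.Redis(host='localhost', port=6379, db=0,
    ...                         decode_responses=True)
    >>> decoded_r.get('foo')
    'bar'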


Upgrading from redis-py 2.X to 3.0
----------------------------------

redis-py 3.0 introduces many new features but required a number of backwards
incompatible changes to be made in the process. This section attempts to
provide an upgrade path for users migrating from 2.X to 3.0.


Python Version Support
^^^^^^^^^^^^^^^^^^^^^^

redis-py 3.0 now supports Python 2.7 and Python 3.4+. Python 2.6 and 3.3
support has been dropped.


Client Classes: Redis and StrictRedis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

redis-py 3.0 drops support for the legacy "Redis" client class. "StrictRedis"
has been renamed to "Redis" and an alias named "StrictRedis" is provided so
that users previously using "StrictRedis" can continue to run unchanged.

The 2.X "Redis" class provided alternative implementations of a few commands.
This confused users (rightfully so) and caused a number of support issues. To
make things easier going forward, it was decided to drop support for these
alternate implementations and instead focus on a single client class.

2.X users that are already using StrictRedis don't have to change the class
name. StrictRedis will continue to work for the foreseeable future.

2.X users that are using the Redis class will have to make changes if they
use any of the following commands; a brief sketch of the new call forms
appears after this list:

* SETEX: The argument order has changed. The new order is (name, time, value).
* LREM: The argument order has changed. The new order is (name, num, value).
* TTL and PTTL: The return value is now always an int and matches the
  official Redis command (>0 indicates the timeout, -1 indicates that the key
  exists but that it has no expire time set, -2 indicates that the key does
  not exist)
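
For illustration, a minimal sketch of the 3.0 argument order (the key names
are arbitrary):

.. code-block:: pycon

    >>> r.setex('mykey', 60, 'value')   # (name, time, value)
    True
    >>> r.lrem('mylist', 0, 'value')    # (name, num, value)
    0
    >>> r.ttl('mykey')
    60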


SSL Connections
^^^^^^^^^^^^^^^

redis-py 3.0 changes the default value of the `ssl_cert_reqs` option from
`None` to `'required'`. See
`Issue 1016 <https://github.com/andymccurdy/redis-py/issues/1016>`_. This
change enforces hostname validation when accepting a cert from a remote SSL
terminator. If the terminator doesn't properly set the hostname on the cert
this will cause redis-py 3.0 to raise a ConnectionError.

This check can be disabled by setting `ssl_cert_reqs` to `None`. Note that
doing so removes the security check. Do so at your own risk.
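
For illustration, a minimal sketch of both configurations; the hostname and
certificate path below are placeholders:

.. code-block:: pycon

    >>> # hostname validation enabled (the redis-py 3.0 default)
    >>> r = redis.Redis(host='redis.example.com', port=6380, ssl=True,
    ...                 ssl_cert_reqs='required',
    ...                 ssl_ca_certs='/path/to/ca.pem')
    >>> # validation disabled; this removes the security check
    >>> r = redis.Redis(host='redis.example.com', port=6380, ssl=True,
    ...                 ssl_cert_reqs=None)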

It has been reported that SSL certs received from AWS ElastiCache do not have
proper hostnames and turning off hostname verification is currently required.


MSET, MSETNX and ZADD
^^^^^^^^^^^^^^^^^^^^^

These commands all accept a mapping of key/value pairs. In redis-py 2.X
this mapping could be specified as ``*args`` or as ``**kwargs``. Both of these
styles caused issues when Redis introduced optional flags to ZADD. Relying on
``*args`` caused issues with the optional argument order, especially in Python
2.7. Relying on ``**kwargs`` caused potential collision issues of user keys with
the argument names in the method signature.

To resolve this, redis-py 3.0 has changed these three commands to all accept
a single positional argument named mapping that is expected to be a dict. For
MSET and MSETNX, the dict is a mapping of key-names -> values. For ZADD, the
dict is a mapping of element-names -> score.

MSET, MSETNX and ZADD now look like:

.. code-block:: pycon

    def mset(self, mapping):
    def msetnx(self, mapping):
    def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False):

All 2.X users that use these commands must modify their code to supply
keys and values as a dict to these commands.
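
For example, with arbitrary key and member names:

.. code-block:: pycon

    >>> r.mset({'key1': 'value1', 'key2': 'value2'})
    True
    >>> r.zadd('my-zset', {'member1': 1.0, 'member2': 2.0})
    2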


ZINCRBY
^^^^^^^

redis-py 2.X accidentally modified the argument order of ZINCRBY, swapping the
order of value and amount. ZINCRBY now looks like:

.. code-block:: pycon

    def zincrby(self, name, amount, value):

All 2.X users that rely on ZINCRBY must swap the order of amount and value
for the command to continue to work as intended.
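
For example, with an arbitrary sorted set and member name:

.. code-block:: pycon

    >>> r.zadd('my-zset', {'member1': 5.0})
    1
    >>> r.zincrby('my-zset', 2.0, 'member1')
    7.0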


Encoding of User Input
^^^^^^^^^^^^^^^^^^^^^^

redis-py 3.0 only accepts user data as bytes, strings or numbers (ints, longs
and floats). Attempting to specify a key or a value as any other type will
raise a DataError exception.

redis-py 2.X attempted to coerce any type of input into a string. While
occasionally convenient, this caused all sorts of hidden errors when users
passed boolean values (which were coerced to 'True' or 'False'), a None
value (which was coerced to 'None') or other values, such as user defined
types.

All 2.X users should make sure that the keys and values they pass into
redis-py are either bytes, strings or numbers.
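
For illustration (the exact exception message may differ between versions):

.. code-block:: pycon

    >>> r.set('key', True)
    Traceback (most recent call last):
      ...
    redis.exceptions.DataError: Invalid input of type: 'bool'. Convert to a byte, string or number first.
    >>> r.set('key', int(True))
    True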


Locks
^^^^^

redis-py 3.0 drops support for the pipeline-based Lock and now only supports
the Lua-based lock. In doing so, LuaLock has been renamed to Lock. This also
means that redis-py Lock objects require Redis server 2.6 or greater.

2.X users that were explicitly referring to "LuaLock" will have to now refer
to "Lock" instead.
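
For illustration, a minimal sketch of acquiring and releasing a lock (the key
name is arbitrary):

.. code-block:: pycon

    >>> lock = r.lock('my-lock-key', timeout=10)
    >>> lock.acquire()
    True
    >>> lock.release()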


Locks as Context Managers
^^^^^^^^^^^^^^^^^^^^^^^^^

redis-py 3.0 now raises a LockError when using a lock as a context manager and
the lock cannot be acquired within the specified timeout. This is more of a
bug fix than a backwards incompatible change. However, given an error is now
raised where none was before, this might alarm some users.

2.X users should make sure they're wrapping their lock code in a try/except
block like this:

.. code-block:: pycon

    try:
        with r.lock('my-lock-key', blocking_timeout=5) as lock:
            # code you want executed only after the lock has been acquired
            pass
    except LockError:
        # the lock wasn't acquired
        pass


API Reference
-------------

The `official Redis command documentation <https://redis.io/commands>`_ does a
great job of explaining each command in detail. redis-py attempts to adhere
to the official command syntax. There are a few exceptions:

* **SELECT**: Not implemented. See the explanation in the Thread Safety section
  below.
* **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py
  uses 'delete' instead.
* **MULTI/EXEC**: These are implemented as part of the Pipeline class. The
  pipeline is wrapped with the MULTI and EXEC statements by default when it
  is executed, which can be disabled by specifying transaction=False.
  See more about Pipelines below.
* **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate
  class as it places the underlying connection in a state where it can't
  execute non-pubsub commands. Calling the pubsub method from the Redis client
  will return a PubSub instance where you can subscribe to channels and listen
  for messages. You can only call PUBLISH from the Redis client (see
  `this comment on issue #151
  <https://github.com/andymccurdy/redis-py/issues/151#issuecomment-1545015>`_
  for details).
* **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they
  exist in the Redis documentation. In addition, each command has an equivalent
  iterator method. These are purely for convenience so the user doesn't have
  to keep track of the cursor while iterating. Use the
  scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior.


More Detail
-----------

Connection Pools
^^^^^^^^^^^^^^^^

Behind the scenes, redis-py uses a connection pool to manage connections to
a Redis server. By default, each Redis instance you create will in turn create
its own connection pool. You can override this behavior and use an existing
connection pool by passing an already created connection pool instance to the
connection_pool argument of the Redis class. You may choose to do this in order
to implement client side sharding or have finer grain control of how
connections are managed.

.. code-block:: pycon

    >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
    >>> r = redis.Redis(connection_pool=pool)

Connections
^^^^^^^^^^^

ConnectionPools manage a set of Connection instances. redis-py ships with two
types of Connections. The default, Connection, is a normal TCP socket based
connection. The UnixDomainSocketConnection allows for clients running on the
same device as the server to connect via a unix domain socket. To use a
UnixDomainSocketConnection connection, simply pass the unix_socket_path
argument, which is a string containing the path to the unix domain socket
file. Additionally, make sure the unixsocket parameter is defined in your
redis.conf file. It's
commented out by default.

.. code-block:: pycon

    >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')

You can create your own Connection subclasses as well. This may be useful if
you want to control the socket behavior within an async framework. To
instantiate a client class using your own connection, you need to create
a connection pool, passing your class to the connection_class argument.
Other keyword parameters you pass to the pool will be passed to the class
specified during initialization.

.. code-block:: pycon

    >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass,
                                    your_arg='...', ...)

Parsers
^^^^^^^

Parser classes provide a way to control how responses from the Redis server
are parsed. redis-py ships with two parser classes, the PythonParser and the
HiredisParser. By default, redis-py will attempt to use the HiredisParser if
you have the hiredis module installed and will fall back to the PythonParser
otherwise.

Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was
kind enough to create Python bindings. Using Hiredis can provide up to a
10x speed improvement in parsing responses from the Redis server. The
performance increase is most noticeable when retrieving many pieces of data,
such as from LRANGE or SMEMBERS operations.

Hiredis is available on PyPI, and can be installed via pip just like redis-py.

.. code-block:: bash

    $ pip install hiredis

Response Callbacks
^^^^^^^^^^^^^^^^^^

The client class uses a set of callbacks to cast Redis responses to the
appropriate Python type. There are a number of these callbacks defined on
the Redis client class in a dictionary called RESPONSE_CALLBACKS.

Custom callbacks can be added on a per-instance basis using the
set_response_callback method. This method accepts two arguments: a command
name and the callback. Callbacks added in this manner are only valid on the
instance the callback is added to. If you want to define or override a callback
globally, you should make a subclass of the Redis client and add your callback
to its RESPONSE_CALLBACKS class dictionary.

Response callbacks take at least one parameter: the response from the Redis
server. Keyword arguments may also be accepted in order to further control
how to interpret the response. These keyword arguments are specified during the
command's call to execute_command. The ZRANGE implementation demonstrates the
use of response callback keyword arguments with its "withscores" argument.
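
As an illustrative sketch (this particular callback is not one of redis-py's
defaults), a per-instance callback could return the members of a set as a
sorted list:

.. code-block:: pycon

    >>> r = redis.Redis(decode_responses=True)
    >>> r.set_response_callback('SMEMBERS', lambda response: sorted(response))
    >>> r.sadd('my-set', 'c', 'a', 'b')
    3
    >>> r.smembers('my-set')
    ['a', 'b', 'c']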

Thread Safety
^^^^^^^^^^^^^

Redis client instances can safely be shared between threads. Internally,
connection instances are only retrieved from the connection pool during
command execution, and returned to the pool directly after. Command execution
never modifies state on the client instance.

However, there is one caveat: the Redis SELECT command. The SELECT command
allows you to switch the database currently in use by the connection. That
database remains selected until another is selected or until the connection is
closed. This creates an issue in that connections could be returned to the pool
that are connected to a different database.

As a result, redis-py does not implement the SELECT command on client
instances. If you use multiple Redis databases within the same application, you
should create a separate client instance (and possibly a separate connection
pool) for each database.
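
For example, with arbitrary variable names:

.. code-block:: pycon

    >>> cache = redis.Redis(host='localhost', port=6379, db=0)
    >>> sessions = redis.Redis(host='localhost', port=6379, db=1)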

It is not safe to pass PubSub or Pipeline objects between threads.

Pipelines
^^^^^^^^^

Pipelines are a subclass of the base Redis class that provide support for
buffering multiple commands to the server in a single request. They can be used
to dramatically increase the performance of groups of commands by reducing the
number of back-and-forth TCP packets between the client and server.

Pipelines are quite simple to use:

.. code-block:: pycon

    >>> r = redis.Redis(...)
    >>> r.set('bing', 'baz')
    >>> # Use the pipeline() method to create a pipeline instance
    >>> pipe = r.pipeline()
    >>> # The following SET commands are buffered
    >>> pipe.set('foo', 'bar')
    >>> pipe.get('bing')
    >>> # the EXECUTE call sends all buffered commands to the server, returning
    >>> # a list of responses, one for each command.
    >>> pipe.execute()
    [True, 'baz']

For ease of use, all commands being buffered into the pipeline return the
pipeline object itself. Therefore calls can be chained like:

.. code-block:: pycon

    >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
    [True, True, 6]

In addition, pipelines can also ensure the buffered commands are executed
atomically as a group. This happens by default. If you want to disable the
atomic nature of a pipeline but still want to buffer commands, you can turn
off transactions.

.. code-block:: pycon

    >>> pipe = r.pipeline(transaction=False)

A common issue occurs when requiring atomic transactions but needing to
retrieve values in Redis prior to the transaction for use within it. For instance,
let's assume that the INCR command didn't exist and we need to build an atomic
version of INCR in Python.

The completely naive implementation could GET the value, increment it in
Python, and SET the new value back. However, this is not atomic because
multiple clients could be doing this at the same time, each getting the same
value from GET.

Enter the WATCH command. WATCH provides the ability to monitor one or more keys
prior to starting a transaction. If any of those keys change prior to the
execution of that transaction, the entire transaction will be canceled and a
WatchError will be raised. To implement our own client-side INCR command, we
could do something like this:

.. code-block:: pycon

    >>> with r.pipeline() as pipe:
    ...     while True:
    ...         try:
    ...             # put a WATCH on the key that holds our sequence value
    ...             pipe.watch('OUR-SEQUENCE-KEY')
    ...             # after WATCHing, the pipeline is put into immediate execution
    ...             # mode until we tell it to start buffering commands again.
    ...             # this allows us to get the current value of our sequence
    ...             current_value = pipe.get('OUR-SEQUENCE-KEY')
    ...             next_value = int(current_value) + 1
    ...             # now we can put the pipeline back into buffered mode with MULTI
    ...             pipe.multi()
    ...             pipe.set('OUR-SEQUENCE-KEY', next_value)
    ...             # and finally, execute the pipeline (the set command)
    ...             pipe.execute()
    ...             # if a WatchError wasn't raised during execution, everything
    ...             # we just did happened atomically.
    ...             break
    ...         except WatchError:
    ...             # another client must have changed 'OUR-SEQUENCE-KEY' between
    ...             # the time we started WATCHing it and the pipeline's execution.
    ...             # our best bet is to just retry.
    ...             continue

Note that, because the Pipeline must bind to a single connection for the
duration of a WATCH, care must be taken to ensure that the connection is
returned to the connection pool by calling the reset() method. If the
Pipeline is used as a context manager (as in the example above) reset()
will be called automatically. Of course you can do this the manual way by
explicitly calling reset():

.. code-block:: pycon

    >>> pipe = r.pipeline()
    >>> while True:
    ...     try:
    ...         pipe.watch('OUR-SEQUENCE-KEY')
    ...         ...
    ...         pipe.execute()
    ...         break
    ...     except WatchError:
    ...         continue
    ...     finally:
    ...         pipe.reset()

A convenience method named "transaction" exists for handling all the
boilerplate of catching and retrying watch errors. It takes a callable that
should expect a single parameter, a pipeline object, and any number of keys to
be WATCHed. Our client-side INCR command above can be written like this,
which is much easier to read:

.. code-block:: pycon

    >>> def client_side_incr(pipe):
    ...     current_value = pipe.get('OUR-SEQUENCE-KEY')
    ...     next_value = int(current_value) + 1
    ...     pipe.multi()
    ...     pipe.set('OUR-SEQUENCE-KEY', next_value)
    >>>
    >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
    [True]

Publish / Subscribe
^^^^^^^^^^^^^^^^^^^

redis-py includes a `PubSub` object that subscribes to channels and listens
for new messages. Creating a `PubSub` object is easy.

.. code-block:: pycon

    >>> r = redis.Redis(...)
    >>> p = r.pubsub()

Once a `PubSub` instance is created, channels and patterns can be subscribed
to.

.. code-block:: pycon

    >>> p.subscribe('my-first-channel', 'my-second-channel', ...)
    >>> p.psubscribe('my-*', ...)

The `PubSub` instance is now subscribed to those channels/patterns. The
subscription confirmations can be seen by reading messages from the `PubSub`
instance.

.. code-block:: pycon

    >>> p.get_message()
    {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1L}
    >>> p.get_message()
    {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2L}
    >>> p.get_message()
    {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3L}

Every message read from a `PubSub` instance will be a dictionary with the
following keys.

* **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe',
  'punsubscribe', 'message', 'pmessage'
* **channel**: The channel [un]subscribed to or the channel a message was
  published to
* **pattern**: The pattern that matched a published message's channel. Will be
  `None` in all cases except for 'pmessage' types.
* **data**: The message data. With [un]subscribe messages, this value will be
  the number of channels and patterns the connection is currently subscribed
  to. With [p]message messages, this value will be the actual published
  message.

Let's send a message now.

.. code-block:: pycon

    # the publish method returns the number of matching channel and pattern
    # subscriptions. 'my-first-channel' matches both the 'my-first-channel'
    # subscription and the 'my-*' pattern subscription, so this message will
    # be delivered to 2 channels/patterns
    >>> r.publish('my-first-channel', 'some data')
    2
    >>> p.get_message()
    {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
    >>> p.get_message()
    {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'}

Unsubscribing works just like subscribing. If no arguments are passed to
[p]unsubscribe, all channels or patterns will be unsubscribed from.

.. code-block:: pycon

    >>> p.unsubscribe()
    >>> p.punsubscribe('my-*')
    >>> p.get_message()
    {'channel': 'my-second-channel', 'data': 2L, 'pattern': None, 'type': 'unsubscribe'}
    >>> p.get_message()
    {'channel': 'my-first-channel', 'data': 1L, 'pattern': None, 'type': 'unsubscribe'}
    >>> p.get_message()
    {'channel': 'my-*', 'data': 0L, 'pattern': None, 'type': 'punsubscribe'}

redis-py also allows you to register callback functions to handle published
messages. Message handlers take a single argument, the message, which is a
dictionary just like the examples above. To subscribe to a channel or pattern
with a message handler, pass the channel or pattern name as a keyword argument
with its value being the callback function.

When a message is read on a channel or pattern with a message handler, the
message dictionary is created and passed to the message handler. In this case,
a `None` value is returned from get_message() since the message was already
handled.

.. code-block:: pycon

    >>> def my_handler(message):
    ...     print 'MY HANDLER: ', message['data']
    >>> p.subscribe(**{'my-channel': my_handler})
    # read the subscribe confirmation message
    >>> p.get_message()
    {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1L}
    >>> r.publish('my-channel', 'awesome data')
    1
    # for the message handler to work, we need to tell the instance to read data.
    # this can be done in several ways (read more below). we'll just use
    # the familiar get_message() function for now
    >>> message = p.get_message()
    MY HANDLER:  awesome data
    # note here that the my_handler callback printed the string above.
    # `message` is None because the message was handled by our handler.
    >>> print message
    None

If your application is not interested in the (sometimes noisy)
subscribe/unsubscribe confirmation messages, you can ignore them by passing
`ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all
subscribe/unsubscribe messages to be read, but they won't bubble up to your
application.

.. code-block:: pycon

    >>> p = r.pubsub(ignore_subscribe_messages=True)
    >>> p.subscribe('my-channel')
    >>> p.get_message()  # hides the subscribe message and returns None
    >>> r.publish('my-channel', 'my data')
    1
    >>> p.get_message()
    {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'}

There are three different strategies for reading messages.

The examples above have been using `pubsub.get_message()`. Behind the scenes,
`get_message()` uses the system's 'select' module to quickly poll the
connection's socket. If there's data available to be read, `get_message()` will
read it, format the message and return it or pass it to a message handler. If
there's no data to be read, `get_message()` will immediately return None. This
makes it trivial to integrate into an existing event loop inside your
application.

.. code-block:: pycon

    >>> while True:
    >>>     message = p.get_message()
    >>>     if message:
    >>>         # do something with the message
    >>>         print(message)
    >>>     time.sleep(0.001)  # be nice to the system :)

Older versions of redis-py only read messages with `pubsub.listen()`. listen()
is a generator that blocks until a message is available. If your application
doesn't need to do anything else but receive and act on messages received from
redis, listen() is an easy way to get up and running.

.. code-block:: pycon

    >>> for message in p.listen():
    ...     # do something with the message
    ...     print(message)

The third option runs an event loop in a separate thread.
`pubsub.run_in_thread()` creates a new thread and starts the event loop. The
thread object is returned to the caller of `run_in_thread()`. The caller can
use the `thread.stop()` method to shut down the event loop and thread. Behind
the scenes, this is simply a wrapper around `get_message()` that runs in a
separate thread, essentially creating a tiny non-blocking event loop for you.
`run_in_thread()` takes an optional `sleep_time` argument. If specified, the
event loop will call `time.sleep()` with the value in each iteration of the
loop.

Note: Since we're running in a separate thread, there's no way to handle
messages that aren't automatically handled with registered message handlers.
Therefore, redis-py prevents you from calling `run_in_thread()` if you're
subscribed to patterns or channels that don't have message handlers attached.

.. code-block:: pycon

    >>> p.subscribe(**{'my-channel': my_handler})
    >>> thread = p.run_in_thread(sleep_time=0.001)
    # the event loop is now running in the background processing messages
    # when it's time to shut it down...
    >>> thread.stop()

A PubSub object adheres to the same encoding semantics as the client instance
it was created from. Any channel or pattern that's unicode will be encoded
using the `charset` specified on the client before being sent to Redis. If the
client's `decode_responses` flag is set to False (the default), the
'channel', 'pattern' and 'data' values in message dictionaries will be byte
strings (str on Python 2, bytes on Python 3). If the client's
`decode_responses` is True, then the 'channel', 'pattern' and 'data' values
will be automatically decoded to unicode strings using the client's `charset`.

PubSub objects remember what channels and patterns they are subscribed to. In
the event of a disconnection such as a network error or timeout, the
PubSub object will re-subscribe to all prior channels and patterns when
reconnecting. Messages that were published while the client was disconnected
cannot be delivered. When you're finished with a PubSub object, call its
`.close()` method to shut down the connection.

.. code-block:: pycon

    >>> p = r.pubsub()
    >>> ...
    >>> p.close()


The PUBSUB set of subcommands CHANNELS, NUMSUB and NUMPAT are also
supported:

.. code-block:: pycon

    >>> r.pubsub_channels()
    ['foo', 'bar']
    >>> r.pubsub_numsub('foo', 'bar')
    [('foo', 9001), ('bar', 42)]
    >>> r.pubsub_numsub('baz')
    [('baz', 0)]
    >>> r.pubsub_numpat()
    1204

Monitor
^^^^^^^
redis-py includes a `Monitor` object that streams every command processed
by the Redis server. Use `listen()` on the `Monitor` object to block
until a command is received.

.. code-block:: pycon

    >>> r = redis.Redis(...)
    >>> with r.monitor() as m:
    >>>     for command in m.listen():
    >>>         print(command)

Lua Scripting
^^^^^^^^^^^^^

redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are
a number of edge cases that make these commands tedious to use in real world
scenarios. Therefore, redis-py exposes a Script object that makes scripting
much easier to use.

To create a Script instance, use the `register_script` function on a client
instance passing the Lua code as the first argument. `register_script` returns
a Script instance that you can use throughout your code.

The following trivial Lua script accepts two parameters: the name of a key and
a multiplier value. The script fetches the value stored in the key, multiplies
it with the multiplier value and returns the result.

.. code-block:: pycon

    >>> r = redis.Redis()
    >>> lua = """
    ... local value = redis.call('GET', KEYS[1])
    ... value = tonumber(value)
    ... return value * ARGV[1]"""
    >>> multiply = r.register_script(lua)

`multiply` is now a Script instance that is invoked by calling it like a
function. Script instances accept the following optional arguments:

* **keys**: A list of key names that the script will access. This becomes the
  KEYS list in Lua.
* **args**: A list of argument values. This becomes the ARGV list in Lua.
* **client**: A redis-py Client or Pipeline instance that will invoke the
  script. If client isn't specified, the client that initially
  created the Script instance (the one that `register_script` was
  invoked from) will be used.

Continuing the example from above:

.. code-block:: pycon

    >>> r.set('foo', 2)
    >>> multiply(keys=['foo'], args=[5])
    10

The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is
passed to the script along with the multiplier value of 5. Lua executes the
script and returns the result, 10.

Script instances can be executed using a different client instance, even one
that points to a completely different Redis server.

.. code-block:: pycon

    >>> r2 = redis.Redis('redis2.example.com')
    >>> r2.set('foo', 3)
    >>> multiply(keys=['foo'], args=[5], client=r2)
    15

The Script object ensures that the Lua script is loaded into Redis's script
cache. In the event of a NOSCRIPT error, it will load the script and retry
executing it.

Script objects can also be used in pipelines. The pipeline instance should be
passed as the client argument when calling the script. Care is taken to ensure
that the script is registered in Redis's script cache just prior to pipeline
execution.

.. code-block:: pycon

    >>> pipe = r.pipeline()
    >>> pipe.set('foo', 5)
    >>> multiply(keys=['foo'], args=[5], client=pipe)
    >>> pipe.execute()
    [True, 25]

Sentinel support
^^^^^^^^^^^^^^^^

redis-py can be used together with `Redis Sentinel <https://redis.io/topics/sentinel>`_
to discover Redis nodes. You need to have at least one Sentinel daemon running
in order to use redis-py's Sentinel support.

Connecting redis-py to the Sentinel instance(s) is easy. You can use a
Sentinel connection to discover the master and slaves network addresses:

.. code-block:: pycon

    >>> from redis.sentinel import Sentinel
    >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
    >>> sentinel.discover_master('mymaster')
    ('127.0.0.1', 6379)
    >>> sentinel.discover_slaves('mymaster')
    [('127.0.0.1', 6380)]

You can also create Redis client connections from a Sentinel instance. You can
connect to either the master (for write operations) or a slave (for read-only
operations).

.. code-block:: pycon

    >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
    >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
    >>> master.set('foo', 'bar')
    >>> slave.get('foo')
    'bar'

The master and slave objects are normal Redis instances with their
connection pool bound to the Sentinel instance. When a Sentinel backed client
attempts to establish a connection, it first queries the Sentinel servers to
determine an appropriate host to connect to. If no server is found,
a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are
subclasses of ConnectionError.

When trying to connect to a slave client, the Sentinel connection pool will
iterate over the list of slaves until it finds one that can be connected to.
If no slaves can be connected to, a connection will be established with the
master.

See `Guidelines for Redis clients with support for Redis Sentinel
<https://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel.

Scan Iterators
^^^^^^^^^^^^^^

The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While
these commands are fully supported, redis-py also exposes the following methods
that return Python iterators for convenience: `scan_iter`, `hscan_iter`,
`sscan_iter` and `zscan_iter`.

.. code-block:: pycon

    >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')):
    ...     r.set(key, value)
    >>> for key in r.scan_iter():
    ...     print key, r.get(key)
    A 1
    B 2
    C 3
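
A similar example for iterating a hash (the hash name and fields are
arbitrary):

.. code-block:: pycon

    >>> r.hmset('my-hash', {'field1': '1', 'field2': '2'})
    True
    >>> for field, value in r.hscan_iter('my-hash'):
    ...     print field, value
    field1 1
    field2 2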

Author
^^^^^^

redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com).
It can be found here: https://github.com/andymccurdy/redis-py

Special thanks to:

* Ludovico Magnocavallo, author of the original Python Redis client, from
  which some of the socket code is still used.
* Alexander Solovyov for ideas on the generic response callback system.
* Paul Hubbard for initial packaging support.