When a node timeout occurs, phpredis will try to connect to another
node, whose answer will probably be a MOVED redirect. After this we need
more time to complete the redirection; otherwise we get the "Timed out
attempting to find data in the correct node" error message.
Fixes #795, #888, #1142, #1385, #1633, #1707, #1811, #2407
* Fix LZF decompression logic.
Rework how we decompress LZF data. Previously it was possible to
encounter a double free if the error was not E2BIG.
* Fix expire check in testttl
The previous logic was timing-dependent and effectively tested Redis'
own expiration logic itself.
* Use a smaller cluster in GitHub CI
This commit is an attempt at detecting unconsumed data on a socket when
we pull it from the connection pool.
Two new INI settings are introduced related to the changes:
redis.pconnect.pool_detect_dirty:
Value Explanation
----- ----------------------------------------------------------------
0 Don't execute new logic at all.
1 Abort and close the socket if we find unconsumed bytes in the
read buffer.
2     Seek to the end of our read buffer if we find unconsumed bytes,
      then poll the socket FD to detect whether it is still readable;
      if so, fail and close the socket.
redis.pconnect.pool_poll_timeout:
The poll timeout to employ when checking if the socket is readable.
This value is in milliseconds and can be zero.
See #2013
Perform cheaper PHP liveness check first.
The "distribute" option for session.save_path is documented like "persistent", as if it were a standalone boolean option, but that isn't the case: the code in redis_session.c#L893-L902 expects "distribute" to be a parameter to the "failover" option. The updated cluster.markdown reflects this.
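For illustration, a sketch of a cluster session configuration in which "distribute" appears as the value of the failover parameter rather than as its own boolean option (host names and ports are placeholders):

```ini
session.save_handler = rediscluster
session.save_path = "seed[]=host1:6379&seed[]=host2:6379&failover=distribute"
```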
This commit splits compression and serialization into two distinct parts
and adds some utility functions so the user can compress/uncompress
or pack/unpack data explicitly.
See #1939