While caching speeds up the Windows builds, allocating an available
runner still takes some time (especially when there are many pushes in a
short period). Thus we only build PHP 8.4 x64, ZTS, which should already
suffice to detect most issues.
Make it possible to use setOption() to set Memcached::OPT_COMPRESSION_LEVEL,
which was missed in the original zstd PR #539.
zlib compression was using the default zlib compression level of 6. With this PR
it is now possible to choose other levels for zlib as well. The default remains
6, so nothing will change for people upgrading unless they explicitly set a
different level.
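A minimal usage sketch of the new option, assuming ext-memcached built with this PR and a memcached server reachable at the placeholder address below:

```php
<?php
// Sketch: select zlib compression and raise the level from the
// default of 6 to 9 (better ratio, slower compression).
$m = new Memcached();
$m->addServer('127.0.0.1', 11211); // placeholder server

$m->setOption(Memcached::OPT_COMPRESSION, true);
$m->setOption(Memcached::OPT_COMPRESSION_TYPE, Memcached::COMPRESSION_ZLIB);
// New in this PR: OPT_COMPRESSION_LEVEL now applies to zlib as well.
$m->setOption(Memcached::OPT_COMPRESSION_LEVEL, 9);

$m->set('report', str_repeat('compressible payload ', 1000));
```

Values small enough to fall under the compression threshold are stored uncompressed regardless of the level, so the level only matters for payloads that actually get compressed.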
Here is some more benchmarking data using php serialized data
https://gist.github.com/rlerdorf/b9bae385446d5a30b65e6e241e34d0a8
fastlz is not really useful at any value size anymore. Anybody looking for
lightning-quick compression and decompression should use zstd at level 1.
compression_level is not applied to fastlz because it only has two levels, and
php-memcached already switches from level 1 to level 2 automatically for values
larger than 65535 bytes. Forcing it to one or the other doesn't seem useful.
Add a configuration option to enforce an item size limit on the client side. This avoids sending large items
over the wire only to have them rejected by the server, which can cause delays. The default is 0, meaning no limit.
The same error code, RES_E2BIG, is used for the client-side limit as for the server-side limit.
Tests in the experimental/ folder were not executed on CI. The ones that work have been
moved up; the few that remain in experimental/ need further investigation to get working, or removal.
This adds zstd compression support.
The current two options, zlib and fastlz, are basically a choice between compression ratio and performance.
You would choose zlib if you are memory-bound and fastlz if you are CPU-bound. With zstd, you get the
performance of fastlz with the compression of zlib, and often it wins on both. See this benchmark I ran
on JSON files of varying sizes: https://gist.github.com/rlerdorf/788f3d0144f9c5514d8fee9477cbe787
Taking just a 40k JSON blob, we see that zstd at compression level 3 reduces it to 8862 bytes. Our current
zlib at level 1 gets worse compression at 10091 bytes and takes longer both to compress and decompress.
C Size  ratio%   C MB/s   D MB/s  SCORE  Name       File
  8037    19.9     0.58  2130.89   0.08  zstd 22    file-39.54k-json
  8204    20.3    31.85  2381.59   0.01  zstd 10    file-39.54k-json
  8371    20.7    47.52   547.12   0.01  zlib 9     file-39.54k-json
  8477    20.9    74.84   539.83   0.01  zlib 6     file-39.54k-json
  8862    21.9   449.86  2130.89   0.01  zstd 3     file-39.54k-json
  9171    22.7   554.62  2381.59   0.01  zstd 1     file-39.54k-json
 10091    24.9   153.94   481.99   0.01  zlib 1     file-39.54k-json
 10646    26.3    43.39  8097.40   0.01  lz4 16     file-39.54k-json
 10658    26.3    72.30  8097.40   0.01  lz4 10     file-39.54k-json
 13004    32.1  1396.10  6747.83   0.01  lz4 1      file-39.54k-json
 13321    32.9   440.08  1306.03   0.01  fastlz 2   file-39.54k-json
 14807    36.6   444.91  1156.77   0.01  fastlz 1   file-39.54k-json
 15517    38.3  1190.79  4048.70   0.02  zstd -10   file-39.54k-json
The fact that decompression is dramatically faster with zstd is a win for most common memcache uses,
since they tend to be read-heavy. The PR also adds a `memcache.compression_level` INI setting which
currently only applies to zstd compression. It could probably be made to also apply to zlib and fastlz.
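For example, the setting could be configured in php.ini like this (a sketch using the INI name given above; level 3 is the sweet spot in the benchmark, compressing the 40k JSON blob to 8862 bytes at ~450 MB/s):

```ini
; Sketch: zstd compression level used by php-memcached when
; OPT_COMPRESSION_TYPE is zstd. Higher = smaller but slower.
memcache.compression_level = 3
```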