Thanks to Ilija Tovilo for noticing and reporting this problem. Thanks
also to Michael Voříšek for finding the StackOverflow post which
explained the reason for the failure.
* PHP-8.2:
Propagate error checks for mbfl_filt_conv_illegal_output()
Use CK() macro to check the output function in mbfilter_unicode2sjis_emoji_sb()
Make error checks on encoding methods for docomo, kddi, sb consistent
* PHP-8.1:
Propagate error checks for mbfl_filt_conv_illegal_output()
Use CK() macro to check the output function in mbfilter_unicode2sjis_emoji_sb()
Make error checks on encoding methods for docomo, kddi, sb consistent
Some places use a bare if check, which implicitly tests for any
non-zero value, while other places use > 0. The > 0 check is the
correct one, because at least some of those functions already use the
CK() macro to return -1 on error. Since -1 != 0, a bare if check
wrongly interprets an error return as success.
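To illustrate (a minimal sketch; the function is a placeholder modeled
on mbstring's convention that CK() makes a conversion function return
-1 on error):

    #include <stdio.h>

    /* Placeholder: pretend CK() detected an error in the output
       function and returned -1. */
    static int emit_char(int c) {
        (void)c;
        return -1;
    }

    int main(void) {
        if (emit_char(0x3042)) {
            /* WRONG: -1 is truthy, so the error is mistaken for
               "character handled successfully". */
            puts("bare if: error treated as success");
        }
        if (emit_char(0x3042) > 0) {
            /* RIGHT: -1 does not satisfy > 0, so the error is not
               mistaken for success. */
            puts("> 0: genuine success only");
        }
        return 0;
    }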
Multiple tests had to be changed to escape the arguments in shell
commands. Some tests are skipped because they behave differently with
spaces in the path versus without. One notable example is the
hashbang test, which does not work because spaces in hashbang paths
are not supported on Linux.
Co-authored-by: Michael Voříšek <mvorisek@mvorisek.cz>
The new implementation is 2.5x-3x faster.
If an invalid charset name was used, the old implementation would get
'stuck' trying to parse the charset name and would not interpret any
other MIME encoded words up to the end of the input string. The new
implementation fixes this bug.
If an (invalid) encoded word ends abruptly and a new (valid) encoded
word starts, the old implementation would not decode the valid encoded
word. The new implementation also fixes this.
Otherwise, the behavior of the new implementation has been designed to
closely match that of the old implementation.
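For example, with illustrative inputs (not taken from the test suite;
the RFC 2047 encoded-word syntax is =?charset?encoding?text?=):

    /* Invalid charset name followed by a valid encoded word: the old
       implementation got stuck on the bad charset name and left
       "hello" encoded; the new one decodes it. */
    const char *bad_charset = "=?NOT-A-CHARSET?Q?abc?= =?UTF-8?Q?hello?=";

    /* Abruptly-ended encoded word followed by a valid one: the old
       implementation did not decode the valid word; the new one does. */
    const char *truncated = "=?UTF-8?Q?abrupt =?UTF-8?Q?hello?=";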
In ed0c0df351, Niels Dossche fixed a bug in mbstring whereby
mb_convert_encoding could dereference a NULL pointer and crash if
it was called on an array, with multiple candidate encodings, and at
least one of the strings inside the array was invalid in all the
candidate encodings.
He kindly included a test case, but after being merged into master,
the test case was not actually testing what it was intended to test.
That is now fixed.
Fixes GH-10627
The php_mb_convert_encoding() function can return NULL on error, but
this case was not handled, which led to a NULL pointer dereference and
hence a crash.
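The shape of the fix, as a sketch (argument list abbreviated; this is
not the exact php-src code):

    char *ret = php_mb_convert_encoding(/* ... */);
    if (ret == NULL) {
        /* conversion failed: report the error and bail out instead
           of dereferencing the NULL result */
    }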
Closes GH-10628
Signed-off-by: George Peter Banyard <girgias@php.net>
Commit 8bbd0952e5 added a check rejecting empty strings; however, in
the merge commit 379d9a1cfc it was changed to a NULL check, which does
not make sense because ZSTR_VAL() is guaranteed to never be NULL; the
length check was accidentally removed by that merge commit.
This bug was found by GCC's -Waddress warning:
ext/mbstring/mbstring.c:748:27: warning: the comparison will always evaluate as ‘true’ for the address of ‘val’ will never be NULL [-Waddress]
748 | if (!new_value || !ZSTR_VAL(new_value)) {
| ^
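The corrected check tests the length rather than the pointer; roughly
(a sketch of the intent, using the Zend ZSTR_LEN() macro):

    if (!new_value || !ZSTR_LEN(new_value)) {
        /* reject missing or empty values, as 8bbd0952e5 intended */
        return FAILURE;
    }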
Closes GH-10532
Signed-off-by: George Peter Banyard <girgias@php.net>
As with other SIMD-accelerated functions in php-src, the new UTF-16
encoding and decoding routines can be compiled either with AVX2
acceleration "always on", "always off", or else with runtime detection
of AVX2 support.
With the new UTF-16 decoder/encoder, conversion of extremely short
strings (as in several bytes) has the same performance as before,
and conversion of medium-length (~100 character) strings is about 65%
faster, but conversion of long (~10,000 character) strings is around
6 times faster.
Many other mbstring functions will also be faster now when handling
UTF-16; for example, mb_strlen is almost 3 times faster on medium
strings, and almost 9 times faster on long strings. (Why does mb_strlen
benefit more from AVX2 acceleration than mb_convert_encoding? It's
because mb_strlen only needs to decode, but not re-encode, the input
string, and the UTF-16 decoder benefits much more from SIMD
acceleration than the UTF-16 encoder.)
In a GitHub thread, Michael Voříšek and Kamil Tekiela mentioned that
the PCRE2-based `preg_match` function can be used to validate UTF-8,
and that historically it was more efficient than mbstring's
`mb_check_encoding`.
`mb_check_encoding` is now much faster on hosts with SSE2, and much
faster again on hosts with AVX2. However, while all x86-64 CPUs support
at least SSE2, not all PHP users run their code on x86-64 hardware.
For example, some use recent Macs with ARM CPUs.
Therefore, borrow PCRE2's UTF-8 validation function as a fallback for
hosts with no SSE2/AVX2 support. On long UTF-8 strings, this code is
50% faster than mbstring's existing fallback code.
From some local benchmarks which I ran, the AVX2-based version is about
2.8x faster than the SSE2-based version on long (~10,000 byte) strings,
1.6x faster on medium (~100 byte) strings, and just about the same
on very short strings.
I followed the example of the code in the 'standard' module, using
preprocessor directives so that the code can be compiled in any of
4 ways:
1) With no AVX2 support at all (for example, when PHP is compiled for
CPU architectures other than AMD64)
2) For CPUs with AVX2 only (for example, when PHP is built with
CCFLAGS='-march=native' on a host which implements AVX2)
3) With runtime detection of AVX2 performed by the dynamic linker;
this requires a dynamic linker which supports the STT_GNU_IFUNC
symbol type extension to the ELF binary standard. This is true of
glibc's dynamic linker, as of late 2009.
4) With runtime detection of AVX2 performed by the module init function.
The detection is done by checking the output of CPUID and then a
function pointer is set accordingly. In this case, all calls to the
UTF-8 validation routine are indirect calls through that function
pointer (see the sketch below).
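A minimal sketch of option 4 (the function names are illustrative,
not the actual php-src symbols; __builtin_cpu_supports is the
GCC/Clang builtin backed by CPUID):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stubs standing in for the real SSE2/AVX2 validators. */
    static bool utf8_check_sse2(const unsigned char *s, size_t len) {
        (void)s; (void)len;
        return true;  /* real version processes 16-byte blocks */
    }
    static bool utf8_check_avx2(const unsigned char *s, size_t len) {
        (void)s; (void)len;
        return true;  /* real version processes 32-byte blocks */
    }

    /* All validation calls go through this pointer. */
    static bool (*utf8_check)(const unsigned char *, size_t) =
        utf8_check_sse2;

    /* Module init: pick the implementation once, based on CPUID. */
    static void init_utf8_check(void) {
        if (__builtin_cpu_supports("avx2")) {
            utf8_check = utf8_check_avx2;
        }
    }

    int main(void) {
        init_utf8_check();
        printf("valid: %d\n",
               utf8_check((const unsigned char *)"abc", 3));
        return 0;
    }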
The new validation code is a few percent faster for short UTF-8
strings. For long
(~10,000 byte) strings, it is also consistently faster on my local
microbenchmarks, but by less than 1%.
This moves them from ``.data`` to ``.rodata`` and allows more compiler
optimizations; a small illustration follows the list below.
* ext/opcache/zend_accelerator_hash: make prime_numbers const
* Zend/zend_signal: make zend_sigs const
* ext/dba: make dba_handler pointers const
* ext/exif: make php_tiff_bytes_per_format and other globals const
* ext/intl/grapheme: make grapheme_extract_iters const
* ext/mbstring: make rare_codepoint_bitvec const
* ext/snmp: make objid_mib const
* ext/opcache: make all zend_shared_memory_handlers const
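For illustration (not actual php-src code):

    static int table_rw[]       = {2, 3, 5, 7}; /* lands in .data */
    static const int table_ro[] = {2, 3, 5, 7}; /* lands in .rodata:
                                                   read-only and
                                                   shareable across
                                                   processes */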
When this INI option is enabled, it reverts the line separator for
headers and the message body to LF, which was the non-conformant
behavior of PHP 7. This is done because some non-conformant MTAs fail
to parse the CRLF line separator in headers and body.
This applies to the mail() and mb_send_mail() functions.
Thanks to the GitHub user 'titanz35' for pointing out that the new
implementation of mb_detect_encoding had poor detection accuracy on
UTF-8 and UTF-16 strings with a byte-order mark.
The new SSE2-based implementation of mb_check_encoding for UTF-8 is
about 10% faster for 0-5 byte strings, more than 3 times faster for
~100-byte strings, and just under 4 times faster for ~10,000-byte
strings.
I believe it may be possible to make this function much faster again.
Some possible directions for further performance optimization include:
• If other ISA extensions like AVX or AVX-512 are available, use a
similar algorithm, but process text in blocks of 32 or 64 bytes
(instead of 16 bytes).
• If other SIMD ISA extensions are available, use the greater variety
of available instructions to make some of the checks tighter.
• Even if only SSE/SSE2 are available, find clever ways to squeeze
instructions out of the hot path. This would probably require a lot
of perusing instruction manuals and thinking hard about which SIMD
instructions could be used to perform the same checks with fewer
instructions.
• Find a better algorithm, possibly one where more checks could be
combined (just as the current algorithm combines the checks for
certain overlong code units and reserved codepoints).
Instead of checking the 'encoding number' to see if we are converting
case for ISO-8859-9 text, compare pointers.
This should free up 1 register in php_unicode_convert_case.
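Roughly (a sketch; mbfl_encoding_8859_9 is the existing encoding
descriptor, and the surrounding code is illustrative):

    /* Before: load and compare the numeric encoding ID */
    if (encoding->no_encoding == mbfl_no_encoding_8859_9) { /* ... */ }

    /* After: compare the descriptor's address directly */
    if (encoding == &mbfl_encoding_8859_9) { /* ... */ }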
The capital Greek letter sigma (Σ) should be lowercased as σ except
when it appears at the end of a word; in that case, it should be
lowercased as the special form ς.
This rule is included in the Unicode data file SpecialCasing.txt.
The condition for applying the rule is called "Final_Sigma" and is
defined in Unicode technical report 21. The rule is:
• For the special casing form to apply, the capital letter sigma must
be preceded by 0 or more "case-ignorable" characters, preceded by
at least 1 "cased" character.
• Further, capital sigma must NOT be followed by a sequence of 0 or
more case-ignorable characters and then at least 1 cased character.
"Case-ignorable" characters include certain punctuation marks, like
the apostrophe, as well as various accent marks. There are actually
close to 500 different case-ignorable characters, including accent marks
from Cyrillic, Hebrew, Armenian, Arabic, Syriac, Bengali, Gujarati,
Telugu, Tibetan, and many other alphabets. This category also includes
zero-width spaces, codepoints which indicate RTL/LTR text direction,
certain musical symbols, etc.
Since the rule involves scanning over "0 or more" of such
case-ignorable characters, it may be necessary to scan arbitrarily far
to the left and right of capital sigma to determine whether the special
lowercase form should be used or not. However, since we are trying to
be both memory-efficient and CPU-efficient, this implementation limits
how far to the left we will scan. Generally, we scan up to 63 characters
to the left looking for a "cased" character, but not more.
When scanning to the right, we go up to the end of the string if
necessary, even if it means scanning over thousands of characters.
Anyway, it is almost impossible to imagine that natural text will
include "words" with more than 63 successive apostrophes (for example)
followed by a capital sigma.
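Here is a compilable sketch of the check (with toy classifier
functions standing in for the real Unicode property tables, and the
63-character cap applied on the left scan):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Toy classifiers: the real tables cover ~500 case-ignorable
       codepoints; only the apostrophe and ASCII letters are modeled
       here so the sketch stays self-contained. */
    static bool is_case_ignorable(uint32_t c) { return c == '\''; }
    static bool is_cased(uint32_t c) {
        return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z');
    }

    /* Should capital sigma at index i take the final form? */
    bool final_sigma(const uint32_t *s, size_t len, size_t i) {
        /* Scan left over case-ignorable characters (capped at 63),
           looking for a cased character. */
        bool cased_before = false;
        for (size_t j = i, n = 0; j > 0 && n <= 63; n++) {
            uint32_t c = s[--j];
            if (is_case_ignorable(c)) continue;
            cased_before = is_cased(c);
            break;
        }
        if (!cased_before) return false;
        /* Scan right over case-ignorable characters; finding a cased
           character means sigma is not word-final. */
        for (size_t j = i + 1; j < len; j++) {
            if (is_case_ignorable(s[j])) continue;
            return !is_cased(s[j]);
        }
        return true; /* end of string: sigma is final */
    }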
Closes GH-8096.
We now have a couple of mbstring functions which have fast paths for
strings marked as 'valid UTF-8'. Later, we will likely have more. So
that these fast paths can be used more frequently, mark UTF-8 strings
emitted by mbstring as 'valid UTF-8'. This is always a correct thing
to do, because mbstring never returns invalid UTF-8 as the result of
a conversion (or similar) operation.
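Concretely, the marking is a one-liner using the existing Zend macros
(a sketch, not the exact call sites):

    /* result is a zend_string known to contain valid UTF-8 */
    GC_ADD_FLAGS(result, IS_STR_VALID_UTF8);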
Internally, we do have a conversion mode which deliberately emits
invalid UTF-8 in some cases. (This is done to prevent unwanted matches
when we are converting strings to UTF-8 before performing matching
operations on them.) For such strings, don't set the 'valid UTF-8' flag.
It probably wouldn't hurt anything to set it, because strings generated
using that special conversion mode should *never* be returned to
userland, and I don't think we do anything with them which cares about
the IS_STR_VALID_UTF8 flag... but still, it would likely cause
confusion for developers.
One small piece of this was obtained from Stack Overflow. According to
Stack Overflow's Terms of Service, all user-contributed code on SO is
provided under a Creative Commons license. I believe this license is
compatible with the code being included in PHP.
Benchmarking results (UTF-8 only, for strings which have already been
checked using mb_check_encoding):
For very short (0-5 byte) strings, mb_strlen is 12% faster.
The speedup gets greater and greater on longer input strings; for
strings around 100KB, mb_strlen is 23 times faster.
Currently the 'fast' code is gated behind a GC flag check which ensures
it is only used on strings which have already been checked for UTF-8
validity. This is because the accelerated code will return different
results on some invalid UTF-8 strings.
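The gate looks roughly like this (sketch):

    if (GC_FLAGS(str) & IS_STR_VALID_UTF8) {
        /* fast path: input is known-valid UTF-8 */
    } else {
        /* general path: full decode, handling invalid sequences */
    }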
I like the asm which gcc -O3 generates on this modified code...
and guess what: my CPU likes it too!
(The asm is noticeably tighter, without any extra operations in the
path which dispatches to the code for decoding a 1-byte, 2-byte,
3-byte, or 4-byte character. It's just CMP, conditional jump, CMP,
conditional jump, CMP, conditional jump.
...Though I was admittedly impressed to see gcc could implement the
boolean expression `c >= 0xC2 && c <= 0xDF` with just 3 instructions:
add, CMP, then conditional jump. Pretty slick stuff there, guys.)
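That sequence is the classic unsigned range-check transformation:

    /* c >= 0xC2 && c <= 0xDF becomes one unsigned comparison after
       a single subtraction: */
    if ((unsigned char)(c - 0xC2) <= (0xDF - 0xC2)) {
        /* c is the lead byte of a 2-byte character */
    }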
Benchmark results:
UTF-8, short - to UTF-16LE faster by 7.36% (0.0001 vs 0.0002)
UTF-8, short - to UTF-16BE faster by 6.24% (0.0001 vs 0.0002)
UTF-8, medium - to UTF-16BE faster by 4.56% (0.0003 vs 0.0003)
UTF-8, medium - to UTF-16LE faster by 4.00% (0.0003 vs 0.0003)
UTF-8, long - to UTF-16BE faster by 1.02% (0.0215 vs 0.0217)
UTF-8, long - to UTF-16LE faster by 1.01% (0.0209 vs 0.0211)
MacJapanese has a somewhat unusual feature: when mapped to Unicode,
many of its characters map to sequences of several codepoints.
Add test cases demonstrating how mb_str_split and mb_substr behave in
this situation.
When adding these tests, I found the behavior of mb_substr was wrong
due to an inconsistency between the string "length" as measured by
mb_strlen and the number of native MacJapanese characters which
mb_substr would count when iterating over the string using the
mblen_table. This has been fixed.
I believe that mb_strstr will also return wrong results in some cases
for MacJapanese. I still need to come up with unit tests which
demonstrate the problem and figure out how to fix it.
Various mbstring legacy text encodings have what is called an 'mblen_table':
a table which gives the length of a multi-byte character using a lookup on
the first byte value. Several mbstring functions have a 'fast path' which uses
this table when it is available.
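For reference, the fast path iterates roughly like this (a sketch with
an illustrative table, not a real mbstring one; the range initializer
is a GCC/Clang extension):

    /* Each entry gives the byte length of the character whose first
       byte has that value. */
    static const unsigned char mblen_demo[256] = {
        [0x00 ... 0x7F] = 1,
        [0x80 ... 0xFF] = 2,
    };

    size_t count_chars(const unsigned char *s, size_t len) {
        size_t n = 0;
        for (size_t i = 0; i < len; i += mblen_demo[s[i]]) {
            n++;
        }
        return n;
    }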
However, it turns out that iterating through a string using the mblen_table
is surprisingly slow. I found that deleting this 'fast path' from mb_strlen
makes mb_strlen a few percent slower on very small strings (0-5 bytes), but
yields very large performance gains on medium to long input strings.
Part of the reason for this is because our text decoding filters are so much
faster now.
Here are some benchmarks:
EUC-KR, short (0-5 chars) - master faster by 11.90% (0.0000 vs 0.0000)
EUC-JP, short (0-5 chars) - master faster by 10.88% (0.0000 vs 0.0000)
BIG-5, short (0-5 chars) - master faster by 10.66% (0.0000 vs 0.0000)
UTF-8, short (0-5 chars) - master faster by 8.91% (0.0000 vs 0.0000)
CP936, short (0-5 chars) - master faster by 6.27% (0.0000 vs 0.0000)
UHC, short (0-5 chars) - master faster by 5.38% (0.0000 vs 0.0000)
SJIS, short (0-5 chars) - master faster by 5.20% (0.0000 vs 0.0000)
UTF-8, medium (~100 chars) - new faster by 127.51% (0.0004 vs 0.0002)
UTF-8, long (~10000 chars) - new faster by 87.94% (0.0319 vs 0.0170)
UTF-8, very long (~100000 chars) - new faster by 88.25% (0.3199 vs 0.1699)
SJIS, medium (~100 chars) - new faster by 208.89% (0.0004 vs 0.0001)
SJIS, long (~10000 chars) - new faster by 253.57% (0.0319 vs 0.0090)
CP936, medium (~100 chars) - new faster by 126.08% (0.0004 vs 0.0002)
CP936, long (~10000 chars) - new faster by 200.48% (0.0319 vs 0.0106)
EUC-KR, medium (~100 chars) - new faster by 146.71% (0.0004 vs 0.0002)
EUC-KR, long (~10000 chars) - new faster by 212.05% (0.0319 vs 0.0102)
EUC-JP, medium (~100 chars) - new faster by 186.68% (0.0004 vs 0.0001)
EUC-JP, long (~10000 chars) - new faster by 295.37% (0.0320 vs 0.0081)
BIG-5, medium (~100 chars) - new faster by 173.07% (0.0004 vs 0.0001)
BIG-5, long (~10000 chars) - new faster by 269.19% (0.0319 vs 0.0086)
UHC, medium (~100 chars) - new faster by 196.99% (0.0004 vs 0.0001)
UHC, long (~10000 chars) - new faster by 256.39% (0.0323 vs 0.0091)
This does raise the question: is using the 'mblen_table' worthwhile for
other mbstring functions, such as mb_str_split? The answer is yes, it
is worthwhile; you see, while mb_strlen only needs to decode the
input string (not re-encode it), mb_str_split, when implemented using
the conversion filters, needs to both decode the string and then
re-encode it. This means that there is more potential to gain
performance by using the 'mblen_table'. Benchmarking shows that in a
few cases, mb_str_split becomes faster when the 'mblen_table fast path'
is deleted, but in the majority of cases, it becomes slower.
As a performance optimization, mbstring implements some functions using
tables which give the (byte) length of a multi-byte character using a
lookup based on the value of the first byte. These tables are called
`mblen_table`.
For many years, the mblen_table for SJIS has had '2' in position 0x80.
That is wrong; it should have been '1'. Reasons:
For SJIS, SJIS-2004, and mobile variants of SJIS, 0x80 has never been
treated as the first byte of a 2-byte character. It has always been
treated as a single erroneous byte. On the other hand, 0x80 is a valid
character in MacJapanese... but a 1-byte character, not a 2-byte one.
The same applies to bytes 0xFD-0xFF; these are 1-byte characters in
MacJapanese, and in other SJIS variants, they are not valid (as the
first byte of a character).
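In terms of the table itself, the change amounts to (a sketch using
C99 designated initializers; names illustrative, other entries
elided):

    static const unsigned char mblen_table_sjis_demo[256] = {
        /* ... */
        [0x80] = 1,  /* was 2; 0x80 is never a 2-byte lead byte */
        [0xFD] = 1, [0xFE] = 1, [0xFF] = 1,
        /* ... */
    };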
Thanks to the GitHub user 'youkidearitai' for finding this problem.
This boosts the speed of BIG5 encoding conversion by just 1-2%.
I tried various other tweaks to the BIG5 decoding routine to see if
I could make it faster at the cost of using a larger conversion table,
but at least on the machine I am using for benchmarking, these other
changes just made things slower.