* Use binary search for cp932ext*_ucs_table lookups
A large amount of time is spent doing a linear search through these
tables in the CP932 encoding conversion code. Instead, we can add
sorted versions of these tables which also store each entry's index in
the unsorted version, and perform a binary search on the sorted
versions (a rough sketch of the idea follows this list).
This reduces the time spent from 1.54s to 0.91s for the reference
benchmark [1].
[1] https://github.com/php/php-src/issues/12684#issuecomment-1813799924
* Fix search bounds
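As a rough illustration of the binary-search idea above (the struct and
function names below are invented for this sketch; they are not the actual
cp932ext table definitions):
```
#include <stdlib.h>

/* Hypothetical layout: `code` is what we binary-search on; `index` points
 * back into the original, unsorted cp932ext table. */
typedef struct {
    unsigned short code;  /* Unicode codepoint, sorted ascending */
    unsigned short index; /* position of this entry in the unsorted table */
} sorted_entry;

static int cmp_entry(const void *key, const void *elem)
{
    unsigned short k = *(const unsigned short *)key;
    const sorted_entry *e = elem;
    return (k > e->code) - (k < e->code);
}

/* Returns the index into the original table, or -1 if the codepoint
 * has no mapping. */
static int lookup_sorted(const sorted_entry *tbl, size_t n, unsigned short code)
{
    const sorted_entry *e = bsearch(&code, tbl, n, sizeof *tbl, cmp_entry);
    return e ? e->index : -1;
}
```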
mbfl_name2encoding() uses a linear loop through the encodings, comparing
the names one by one, which is very slow. For the reference benchmark [1],
just looking up the encoding name takes about 50% of the run time.
By using perfect hashing instead, we no longer have to loop over the
list, and the number of string comparisons is reduced to just one.
The perfect hash table is generated using GNU gperf and then amended
by hand to integrate with mbstring and to reduce its cache footprint.
[1] https://github.com/php/php-src/issues/12684#issuecomment-1813799924
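To give a feel for the shape of such a lookup (this is a self-contained toy
with a hand-picked hash over four names, not the gperf output used in
mbstring), a perfect-hash lookup is one array index plus one string
comparison:
```
#include <string.h>

/* Toy perfect hash over {"SJIS", "UTF-8", "UTF-16", "CP932"}: the hash is the
 * name length, plus 10 if the first letter is 'C', which happens to give each
 * name a distinct slot. Real gperf output works the same way, only with a
 * generated character-weight table. */
static const char *const names[16] = {
    [4] = "SJIS", [5] = "UTF-8", [6] = "UTF-16", [15] = "CP932",
};

static const char *name2encoding_toy(const char *name)
{
    size_t len = strlen(name);
    size_t h = len + (name[0] == 'C' ? 10 : 0);
    if (h >= 16 || !names[h] || strcmp(name, names[h]) != 0) {
        return NULL;
    }
    return names[h]; /* single comparison instead of scanning every encoding */
}
```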
Similar to the fast, specialized mb_strcut implementation for UTF-8
in 1f0cf133db, this new implementation of mb_strcut for UTF-16 strings
just examines a few bytes before each cut point.
Even for short strings, the new implementation is around 2x faster.
For strings around 10,000 bytes in length, it comes out about 100-500x
faster in my microbenchmarks.
The new implementation behaves identically to the old one on valid
UTF-16 strings; a fuzzer was used to help verify this.
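The core trick can be sketched in a few lines (illustrative code, not the
actual mbfilter implementation; it assumes valid UTF-16BE input): a cut point
only needs to be aligned to a code unit and nudged back if it would land on
the second half of a surrogate pair.
```
#include <stddef.h>

/* Move a byte offset back to a character boundary in valid UTF-16BE,
 * looking at no more than a couple of bytes near the cut point. */
static size_t adjust_cut_utf16be(const unsigned char *s, size_t len, size_t pos)
{
    if (pos >= len) {
        return len;
    }
    pos &= ~(size_t)1; /* code units are 2 bytes wide */
    if (pos + 1 < len) {
        unsigned unit = ((unsigned)s[pos] << 8) | s[pos + 1];
        if (unit >= 0xDC00 && unit <= 0xDFFF && pos >= 2) {
            pos -= 2; /* don't cut between the halves of a surrogate pair */
        }
    }
    return pos;
}
```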
The old implementation runs through the entire string to pick out the
part which should be returned by mb_strcut. This creates significant
performance overhead. The new specialized implementation of mb_strcut
for UTF-8 usually only examines a few bytes around the starting and
ending cut points, meaning it generally runs in constant time.
For UTF-8 strings just a few bytes long, the new implementation is
around 10% faster (according to microbenchmarks which I ran locally).
For strings around 10,000 bytes in length, it is 50-300x faster.
(Yes, that is 300x and not 300%.)
The new implementation behaves identically to the old one on VALID
UTF-8 strings; a fuzzer was used to help ensure this is the case.
On invalid UTF-8 strings, there is a difference: in some cases, the
old implementation will pass invalid byte sequences through unchanged,
while in others it will remove them. The new implementation has
behavior which is perhaps slightly more predictable: it simply backs
up the starting and ending cut points to the preceding "starter
byte" (one which is not a UTF-8 continuation byte).
I don't believe such a buffer overrun will ever occur, but just in
case the code is changed in the future, it will be good to have an
assertion here to help catch bugs. (A similar assertion is already
used in the UTF-7 version of this function.)
...So conditionally including code which uses __builtin_usub_overflow
(for performance) merely because the macro is defined is not correct. We
also need to check that the macro is defined to a non-zero value.
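In other words (assuming a configure-generated feature macro along the lines
of PHP_HAVE_BUILTIN_USUB_OVERFLOW, which is always defined, either to 1 or
to 0):
```
/* Wrong: the feature macro is always defined, so this branch is always
 * compiled, even on compilers without the builtin. */
#ifdef PHP_HAVE_BUILTIN_USUB_OVERFLOW
    /* ... code using __builtin_usub_overflow() ... */
#endif

/* Right: test the macro's value, not just its existence. */
#if defined(PHP_HAVE_BUILTIN_USUB_OVERFLOW) && PHP_HAVE_BUILTIN_USUB_OVERFLOW
    /* ... code using __builtin_usub_overflow() ... */
#endif
```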
Apparently this broke the build for a user whose C compiler is GCC
4.9.4. Sorry, user! That was my fault!
Thanks to Jakub Zelenka for reporting the issue.
This bug was introduced in e837a8800b. In that commit, I increased the
performance of CP949 text conversion, but accidentally broke the case
where 0xC9 (an illegal byte at the start of a character) is followed by
a valid character whose first byte is less than 0xA1. The 'broken' behavior is
that both the 0xC9 byte and the following valid character would be
converted to error markers.
These (static) tables were defined in a header file, which was included
in two different .c files. That results in two copies of the tables
being included in the PHP binary.
But the tables were only used in one of the two .c files. Move them to
the file where they are used to avoid needlessly bloating the binary.
(I checked in a
hex editor and confirmed that while the previous binary contained two
copies of these tables, it now only contains one.)
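The underlying C rule, in sketch form (the file and table names here are
invented for illustration): a `static` definition in a header gives every
translation unit that includes it its own copy of the data.
```
/* tables.h (before): every .c file including this header gets its own copy
 * of the array, because `static` gives the definition internal linkage. */
static const unsigned short big_table[] = { 0x3000, 0x3001, 0x3002, /* ... */ };

/* The fix is simply to move that definition into the single .c file which
 * uses it; a non-static definition in one .c file plus an `extern`
 * declaration in the header would work as well. */
```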
Conversion of SJIS-2004 text to UTF-8 using `mb_convert_encoding` is
now about 60% faster than before. (Many other mbstring functions will
also be faster now on SJIS-2004 text.)
This will make it easier to combine duplicated code between all the
CJK text encodings (a significant amount is already combined in this
commit, such as the repeated definitions of SJIS_DECODE and
SJIS_ENCODE), but I hope to remove even more redundancy in the future.
The table used to implement mb_strlen for CP932 has been changed to
the same table as "SJIS-win".
We're setting the encoding from PHP_FUNCTION(mb_strpos), but mbfl_strpos would
discard it, setting it to mbfl_encoding_pass, making zend_memnrstr fail due to a
null pointer dereference.
Fixes GH-11217
Closes GH-11220
Compiling in release mode with UBSAN gives me the following compiler warning:
```
In function ‘mb_wchar_to_sjismac’:
mbfilter_sjis.c:1419:89: warning: ‘i’ may be used uninitialized [-Wmaybe-uninitialized]
1419 | buf->state = (i << 24) | (index << 16) | (w & 0xFFFF);
| ^~
mbfilter_sjis.c:1398:42: note: ‘i’ was declared here
1398 | for (int i = 0; i < code_tbl_m_len; i++) {
| ^
```
Since the if condition is always true after the goto, we can get
rid of the warning by moving the label inside the if.
Signed-off-by: Alex Dowad <alexinbeijing@gmail.com>
For mb_parse_str, when mbstring.http_input (INI parameter) is a list of
multiple possible text encodings (which is not the case by default),
this new implementation is about 25% faster.
When mbstring.http_input is a single value, then nothing is changed.
(No automatic encoding detection is done in that case.)
This (rare) situation was already handled correctly for the 1st and 2nd
of every 3 codepoints in a Base64-encoded section of a UTF-7 string.
However, it was not handled correctly if it happened on the 3rd,
6th, 9th, etc. codepoint of such a Base64-encoded section.
Previously, mbstring used the same logic for encoding validation as for
encoding conversion.
However, there are cases where we want to use different logic for validation
and conversion. For example, if a string ends with an incomplete character
(missing input required by the encoding), or if the input contains a sequence
which is invalid in the encoding but can still be converted, the conversion
should succeed while the validation should fail.
To achieve this, a function pointer mb_check_fn has been added to
struct mbfl_encoding to implement the logic used for validation.
Validation logic has also been implemented for UTF-7, UTF7-IMAP,
ISO-2022-JP and JIS.
(The same change has already been made to PHP 8.2 and 8.3; see
6fc8d014df. This commit is backporting the change to PHP 8.1.)
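A sketch of the arrangement (the typedef and field names below are simplified
for illustration, not copied from struct mbfl_encoding): validation gets its
own hook in the encoding descriptor, with a fallback when no specialized
validator exists.
```
#include <stdbool.h>
#include <stddef.h>

typedef bool (*mb_check_fn_sketch)(const unsigned char *in, size_t in_len);

typedef struct {
    const char *name;
    /* ... conversion function pointers, tables, etc. ... */
    mb_check_fn_sketch check; /* NULL: no dedicated validator for this encoding */
} encoding_sketch;

static bool is_valid(const encoding_sketch *enc, const unsigned char *in, size_t len)
{
    if (enc->check) {
        return enc->check(in, len); /* encoding-specific validation logic */
    }
    /* Fall back to the old behavior: run the conversion and report whether
     * any errors were flagged (elided here). */
    return true;
}
```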
When I built and tested 0779950768 locally, the build was successful
and all tests passed. However, some CI jobs are failing due to
compile errors. Fix those.
Previously, mbstring used the same logic for encoding validation as for
encoding conversion.
However, there are cases where we want to use different logic for validation
and conversion. For example, if a string ends with an incomplete character
(missing input required by the encoding), or if the input contains a sequence
which is invalid in the encoding but can still be converted, the conversion
should succeed while the validation should fail.
To achieve this, a function pointer mb_check_fn has been added to
struct mbfl_encoding to implement the logic used for validation.
Validation logic has also been implemented for UTF-7, UTF7-IMAP,
ISO-2022-JP and JIS.
The behavior of the new mb_encode_mimeheader implementation closely
follows the old implementation, except for three points:
• The old implementation was missing a call to the mbfl_convert_filter
flush function. So it would sometimes truncate the input string just
before its end.
• The old implementation would drop zero bytes when QPrint-encoding.
So for example, if you tried to QPrint-encode the UTF-32BE string
"\x00\x00\x12\x34", its QPrint-encoding would be "=12=34", which
does not decode to a valid UTF-32BE string. This is now fixed. (A
minimal sketch of correct QPrint byte encoding follows this list.)
• In some rare corner cases, the new implementation will choose to
Base64-encode or QPrint-encode the input string, where the old
implementation would have just added newlines to it. Specifically,
this can happen when there is a non-space ASCII character, followed
by a large number of ASCII spaces, followed by a non-ASCII character.
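As referenced above, a minimal sketch of QPrint-encoding a single byte
(illustrative only, not the mbstring encoder itself): a zero byte must become
"=00" like any other escaped byte, rather than being dropped.
```
/* Write the QPrint escape for one byte into `out` (3 chars, no terminator). */
static void qprint_escape_byte(unsigned char c, char out[3])
{
    static const char hex[] = "0123456789ABCDEF";
    out[0] = '=';
    out[1] = hex[c >> 4];
    out[2] = hex[c & 0xF];
}
/* qprint_escape_byte(0x00, buf) yields "=00"; the old code skipped this case. */
```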
The new implementation is around 2.5-8x faster than the old one,
depending on the text encoding and transfer encoding used. Performance
gains are greater with Base64 transfer encoding than with QPrint
transfer encoding; this is not because QPrint-encoding bytes is slow,
but because QPrint-encoded output is much bigger than Base64-encoded
output and takes more lines, so we have to go through the process of
finding the right place to break a line many more times.
* PHP-8.2:
Propagate error checks for mbfl_filt_conv_illegal_output()
Use CK() macro to check the output function in mbfilter_unicode2sjis_emoji_sb()
Make error checks on encoding methods for docomo, kddi, sb consistent
* PHP-8.1:
Propagate error checks for mbfl_filt_conv_illegal_output()
Use CK() macro to check the output function in mbfilter_unicode2sjis_emoji_sb()
Make error checks on encoding methods for docomo, kddi, sb consistent
Some places use a plain if check, which implicitly tests for any non-zero
value, and some places use > 0. The > 0 form is the correct one, because at
least some of those functions already use the CK() macro to return -1 on
error. Since -1 != 0, a plain if check wrongly interprets the error as a
success instead of a failure.
The new implementation is 2.5x-3x faster.
If an invalid charset name was used, the old implementation would get
'stuck' trying to parse the charset name and would not interpret any
other MIME encoded words up to the end of the input string. The new
implementation fixes this bug.
If an (invalid) encoded word ends abruptly and a new (valid) encoded
word starts, the old implementation would not decode the valid encoded
word. The new implementation also fixes this.
Otherwise, the behavior of the new implementation has been designed to
closely match that of the old implementation.
As with other SIMD-accelerated functions in php-src, the new UTF-16
encoding and decoding routines can be compiled with AVX2 acceleration
"always on", "always off", or with runtime detection of AVX2 support.
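The three-way arrangement looks roughly like this (a sketch only: the function
names and the SUPPORT_RUNTIME_AVX2-style macro are placeholders, not the
identifiers used in php-src; __builtin_cpu_supports is a real GCC/Clang
builtin):
```
#include <stddef.h>

static size_t decode_utf16_scalar(const unsigned char *in, size_t len)
{ (void)in; /* portable implementation, elided */ return len; }

#if defined(__AVX2__) || defined(SUPPORT_RUNTIME_AVX2)
static size_t decode_utf16_avx2(const unsigned char *in, size_t len)
{ (void)in; /* AVX2 implementation, elided */ return len; }
#endif

static size_t decode_utf16(const unsigned char *in, size_t len)
{
#if defined(__AVX2__)
    return decode_utf16_avx2(in, len);        /* AVX2 "always on" */
#elif defined(SUPPORT_RUNTIME_AVX2)
    if (__builtin_cpu_supports("avx2")) {     /* runtime detection */
        return decode_utf16_avx2(in, len);
    }
    return decode_utf16_scalar(in, len);
#else
    return decode_utf16_scalar(in, len);      /* AVX2 "always off" */
#endif
}
```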
With the new UTF-16 decoder/encoder, conversion of extremely short
strings (as in several bytes) has the same performance as before,
and conversion of medium-length (~100 character) strings is about 65%
faster, but conversion of long (~10,000 character) strings is around
6 times faster.
Many other mbstring functions will also be faster now when handling
UTF-16; for example, mb_strlen is almost 3 times faster on medium
strings, and almost 9 times faster on long strings. (Why does mb_strlen
benefit more from AVX2 acceleration than mb_convert_encoding? It's
because mb_strlen only needs to decode, but not re-encode, the input
string, and the UTF-16 decoder benefits much more from SIMD
acceleration than the UTF-16 encoder.)
We now have a couple of mbstring functions which have fast paths for
strings marked as 'valid UTF-8'. Later, we will likely have more. So
that these fast paths can be used more frequently, mark UTF-8 strings
emitted by mbstring as 'valid UTF-8'. This is always a correct thing
to do, because mbstring never returns invalid UTF-8 as the result of
a conversion (or similar) operation.
Internally, we do have a conversion mode which deliberately emits
invalid UTF-8 in some cases. (This is done to prevent unwanted matches
when we are converting strings to UTF-8 before performing matching
operations on them.) For such strings, don't set the 'valid UTF-8' flag.
It probably wouldn't hurt anything to set it, because strings generated
using that special conversion mode should *never* be returned to
userland, and I don't think we do anything with them which cares about
the IS_STR_VALID_UTF8 flag... but still, it would likely cause
confusion for developers.
I like the asm which gcc -O3 generates on this modified code...
and guess what: my CPU likes it too!
(The asm is noticeably tighter, without any extra operations in the
path which dispatches to the code for decoding a 1-byte, 2-byte,
3-byte, or 4-byte character. It's just CMP, conditional jump, CMP,
conditional jump, CMP, conditional jump.
...Though I was admittedly impressed to see gcc could implement the
boolean expression `c >= 0xC2 && c <= 0xDF` with just 3 instructions:
add, CMP, then conditional jump. Pretty slick stuff there, guys.)
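That 3-instruction sequence is the standard unsigned range-check
transformation; in source form it is equivalent to something like:
```
#include <stdbool.h>

/* gcc folds the two comparisons into one subtraction (the "add") plus a
 * single unsigned comparison and conditional jump. */
static bool is_two_byte_lead(unsigned char c)
{
    return (unsigned)(c - 0xC2) <= (0xDF - 0xC2); /* same as c >= 0xC2 && c <= 0xDF */
}
```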
Benchmark results:
UTF-8, short - to UTF-16LE faster by 7.36% (0.0001 vs 0.0002)
UTF-8, short - to UTF-16BE faster by 6.24% (0.0001 vs 0.0002)
UTF-8, medium - to UTF-16BE faster by 4.56% (0.0003 vs 0.0003)
UTF-8, medium - to UTF-16LE faster by 4.00% (0.0003 vs 0.0003)
UTF-8, long - to UTF-16BE faster by 1.02% (0.0215 vs 0.0217)
UTF-8, long - to UTF-16LE faster by 1.01% (0.0209 vs 0.0211)
As a performance optimization, mbstring implements some functions using
tables which give the (byte) length of a multi-byte character using a
lookup based on the value of the first byte. These tables are called
`mblen_table`.
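In sketch form (not the actual mb_strlen code), an mblen_table is consumed
like this; the point of this commit is which value the table holds for lead
bytes such as 0x80:
```
#include <stddef.h>

/* Count characters by hopping from lead byte to lead byte: one table lookup
 * per character gives the byte length of that character (1 for single-byte
 * characters, and 1 for invalid lead bytes, which count as one bad byte). */
static size_t count_chars(const unsigned char *s, size_t len,
                          const unsigned char mblen_table[256])
{
    size_t n = 0, i = 0;
    while (i < len) {
        i += mblen_table[s[i]];
        n++;
    }
    return n;
}
```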
For many years, the mblen_table for SJIS has had '2' in position 0x80.
That is wrong; it should have been '1'. Reasons:
For SJIS, SJIS-2004, and mobile variants of SJIS, 0x80 has never been
treated as the first byte of a 2-byte character. It has always been
treated as a single erroneous byte. On the other hand, 0x80 is a valid
character in MacJapanese... but a 1-byte character, not a 2-byte one.
The same applies to bytes 0xFD-0xFF; these are 1-byte characters in
MacJapanese, and in other SJIS variants, they are not valid (as the
first byte of a character).
Thanks to the GitHub user 'youkidearitai' for finding this problem.
This boosts the speed of BIG5 encoding conversion by just 1-2%.
I tried various other tweaks to the BIG5 decoding routine to see if
I could make it faster at the cost of using a larger conversion table,
but at least on the machine I am using for benchmarking, these other
changes just made things slower.
This gives a 25% speed boost for conversion operations on long strings
(~10,000 codepoints). For shorter strings, the speed boost is smaller
(as the input gets smaller, the gain is progressively swamped by the
overhead of entering and exiting the conversion function).
When benchmarking string conversion speed, we are measuring not only
the speed of the decoder, but also the time which it takes to re-encode
the string in another encoding like UTF-8 or UTF-16. So the performance
increase for functions which only need to decode but not re-encode the
input string will be much more than 25%.
As with CP936, iterating over the PUA table and looking for matches in
it was a significant bottleneck for GB18030 decoding (though not as
severe a bottleneck as for CP936, since more is involved in GB18030
decoding than CP936 decoding).
Here are some benchmark results after optimizing out that bottleneck:
GB18030, medium - to UTF-16BE - faster by 60.71% (0.0007 vs 0.0017)
GB18030, medium - to UTF-8 - faster by 59.88% (0.0007 vs 0.0017)
GB18030, long - to UTF-8 - faster by 44.91% (0.0669 vs 0.1214)
GB18030, long - to UTF-16BE - faster by 43.05% (0.0672 vs 0.1181)
GB18030, short - to UTF-8 - faster by 27.22% (0.0003 vs 0.0004)
GB18030, short - to UTF-16BE - faster by 26.98% (0.0003 vs 0.0004)
(The 'short' test strings had 0-5 codepoints each, 'medium' ~100
codepoints, and 'long' ~10,000 codepoints. For each benchmark, the
test harness cycled through all the test strings 40,000 times.)