The documentation for mb_strcut states:
mb_strcut(
    string $string,
    int $start,
    ?int $length = null,
    ?string $encoding = null
): string
mb_strcut() extracts a substring from a string similarly to mb_substr(),
but operates on bytes instead of characters. If the cut position happens
to be between two bytes of a multi-byte character, the cut is performed
starting from the first byte of that character.
My understanding of the $length parameter for mb_strcut is that it
specifies the range of bytes to extract from $string, and that all
characters encoded by those bytes should be included in the returned
string, even if that means the returned string would be longer than
$length bytes. This can happen if either 1) there is more than one way
to encode the same character in $encoding, and one way requires more
bytes than the other, or 2) $encoding uses escape sequences.
However, discussion with users of mb_strcut indicates that many of them
interpret $length as the maximum length of the *returned* string.
This is also the historical behavior of the function.
Hence, there is no need to modify the behavior of mb_strcut and then
remove XFAIL from these test cases afterwards. We can keep the current
behavior.
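A short example of the current (retained) behavior, assuming UTF-8
input, where each of these characters takes 3 bytes:

<?php
// A 4-byte cut would split '本', so the cut backs up to the start of
// that character, and at most $length bytes are returned.
var_dump(mb_strcut('日本語', 0, 4, 'UTF-8')); // string(3) "日"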
This (rare) situation was already handled correctly for the 1st and 2nd
of every 3 codepoints in a Base64-encoded section of a UTF-7 string.
However, it was not handled correctly if it happened on the 3rd,
6th, 9th, etc. codepoint of such a Base64-encoded section.
Previously, mbstring used the same logic for encoding validation as for
encoding conversion.
However, there are cases where we want to use different logic for
validation and conversion. For example, if a string ends abruptly and
is missing input required by the encoding, or if it contains a
character which is invalid in the encoding but can still be converted,
the conversion should succeed and the validation should fail.
To achieve this, a function pointer mb_check_fn has been added to
struct mbfl_encoding to implement the logic used for validation.
Validation logic has also been implemented for UTF-7, UTF7-IMAP,
ISO-2022-JP, and JIS.
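A sketch of the distinction, using ISO-2022-JP (the expected results
are my reading of the change, not quoted from its tests): a string
which never shifts back to ASCII is convertible but should not
validate.

<?php
$s = "\x1b\$B\$&"; // shifts to JIS X 0208, encodes う, never shifts back
var_dump(mb_check_encoding($s, 'ISO-2022-JP'));            // false: validation fails
var_dump(mb_convert_encoding($s, 'UTF-8', 'ISO-2022-JP')); // string(3) "う": conversion succeeds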
The behavior of the new mb_encode_mimeheader implementation closely
follows the old implementation, except for three points:
• The old implementation was missing a call to the mbfl_convert_filter
flush function. So it would sometimes truncate the input string just
before its end.
• The old implementation would drop zero bytes when QPrint-encoding.
So for example, if you tried to QPrint-encode the UTF-32BE string
"\x00\x00\x12\x34", its QPrint-encoding would be "=12=34", which
does not decode to a valid UTF-32BE string. This is now fixed.
• In some rare corner cases, the new implementation will choose to
Base64-encode or QPrint-encode the input string, where the old
implementation would have just added newlines to it. Specifically,
this can happen when there is a non-space ASCII character, followed
by a large number of ASCII spaces, followed by a non-ASCII character.
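For reference, a minimal usage sketch (the output shown is my
expectation, assuming UTF-8 as both the internal and target encoding;
'B' selects Base64 transfer encoding, 'Q' would select QPrint):

<?php
echo mb_encode_mimeheader('Grüße', 'UTF-8', 'B');
// "=?UTF-8?B?R3LDvMOfZQ==?="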
The new implementation is around 2.5-8x faster than the old one,
depending on the text encoding and transfer encoding used. Performance
gains are greater with Base64 transfer encoding than with QPrint
transfer encoding; this is not because QPrint-encoding bytes is slow,
but because QPrint-encoded output is much bigger than Base64-encoded
output and takes more lines, so we have to go through the process of
finding the right place to break a line many more times.
Thanks to Ilija Tovilo for noticing and reporting this problem. Thanks
also to Michael Voříšek for finding the Stack Overflow post which
explained the reason for the failure.
Multiple tests had to be changed to escape the arguments in shell
commands. Some tests are skipped because they behave differently with
spaces in the path versus without. One notable example is the hashbang
test, which does not work because spaces in hashbang paths are not
supported on Linux.
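The escaping follows the usual pattern (an illustrative sketch, not the
exact test code; $scriptPath is a placeholder):

<?php
// Quote every path-derived argument so spaces survive the shell.
$cmd = escapeshellarg(PHP_BINARY) . ' ' . escapeshellarg($scriptPath);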
Co-authored-by: Michael Voříšek <mvorisek@mvorisek.cz>
The new implementation is 2.5x-3x faster.
If an invalid charset name was used, the old implementation would get
'stuck' trying to parse the charset name and would not interpret any
other MIME encoded words up to the end of the input string. The new
implementation fixes this bug.
If an (invalid) encoded word ends abruptly and a new (valid) encoded
word starts, the old implementation would not decode the valid encoded
word. The new implementation also fixes this.
Otherwise, the behavior of the new implementation has been designed to
closely match that of the old implementation.
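An illustrative sketch of the first fix ('X-BOGUS' is a deliberately
invalid charset name; I only show the difference in how the second
encoded word is handled):

<?php
echo mb_decode_mimeheader("=?X-BOGUS?Q?abc?= =?UTF-8?Q?caf=C3=A9?=");
// Old: got 'stuck' on the invalid charset and decoded nothing further.
// New: still decodes the second encoded word as "café".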
In ed0c0df351, Niels Dossche fixed a bug in mbstring whereby
mb_convert_encoding could dereference a NULL pointer and crash if
it was called on an array, with multiple candidate encodings, and at
least one of the strings inside the array was invalid in all the
candidate encodings.
He kindly included a test case; but after it was merged into master,
the test case was not actually testing what it was intended to test.
That is now fixed.
Fixes GH-10627
The php_mb_convert_encoding() function can return NULL on error, but
this case was not handled, which led to a NULL pointer dereference and
hence a crash.
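A hedged reconstruction of the crashing case ("\xFF" is invalid in both
UTF-8 and SJIS, so detection fails for the array element):

<?php
// Before the fix, this could dereference a NULL pointer and crash.
mb_convert_encoding(["\xFF\xFF"], 'UTF-8', ['UTF-8', 'SJIS']);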
Closes GH-10628
Signed-off-by: George Peter Banyard <girgias@php.net>
As with other SIMD-accelerated functions in php-src, the new UTF-16
encoding and decoding routines can be compiled either with AVX2
acceleration "always on", "always off", or else with runtime detection
of AVX2 support.
With the new UTF-16 decoder/encoder, conversion of extremely short
strings (just a few bytes) has the same performance as before,
and conversion of medium-length (~100 character) strings is about 65%
faster, but conversion of long (~10,000 character) strings is around
6 times faster.
Many other mbstring functions will also be faster now when handling
UTF-16; for example, mb_strlen is almost 3 times faster on medium
strings, and almost 9 times faster on long strings. (Why does mb_strlen
benefit more from AVX2 acceleration than mb_convert_encoding? It's
because mb_strlen only needs to decode, but not re-encode, the input
string, and the UTF-16 decoder benefits much more from SIMD
acceleration than the UTF-16 encoder.)
When this INI option is enabled, it reverts the line separator for
headers and the message body to LF, which was the non-conformant
behavior of PHP 7. This is done because some non-conformant MTAs fail
to parse the CRLF line separator in headers and body.
This option is used by the mail and mb_send_mail functions.
Thanks to the GitHub user 'titanz35' for pointing out that the new
implementation of mb_detect_encoding had poor detection accuracy on
UTF-8 and UTF-16 strings with a byte-order mark.
The new SSE2-based implementation of mb_check_encoding for UTF-8 is
about 10% faster for 0-5 byte strings, more than 3 times faster for
~100-byte strings, and just under 4 times faster for ~10,000-byte
strings.
I believe it may be possible to make this function much faster still.
Some possible directions for further performance optimization include:
• If other ISA extensions like AVX or AVX-512 are available, use a
similar algorithm, but process text in blocks of 32 or 64 bytes
(instead of 16 bytes).
• If other SIMD ISA extensions are available, use the greater variety
of available instructions to make some of the checks tighter.
• Even if only SSE/SSE2 are available, find clever ways to squeeze
instructions out of the hot path. This would probably require a lot
of perusing instruction manuals and thinking hard about which SIMD
instructions could be used to perform the same checks with fewer
instructions.
• Find a better algorithm, possibly one where more checks could be
combined (just as the current algorithm combines the checks for
certain overlong code units and reserved codepoints).
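For illustration, these are the kinds of invalid sequences the checks
described above must catch:

<?php
var_dump(mb_check_encoding("\xC0\xAF", 'UTF-8'));         // false: overlong encoding of '/'
var_dump(mb_check_encoding("\xED\xA0\x80", 'UTF-8'));     // false: reserved surrogate U+D800
var_dump(mb_check_encoding("\xF0\x9F\x98\x80", 'UTF-8')); // true: U+1F600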
The capital Greek letter sigma (Σ) should be lowercased as σ except
when it appears at the end of a word; in that case, it should be
lowercased as the special form ς.
This rule is included in the Unicode data file SpecialCasing.txt.
The condition for applying the rule is called "Final_Sigma" and is
defined in Unicode technical report 21. The rule is:
• For the special casing form to apply, the capital letter sigma must
be preceded by 0 or more "case-ignorable" characters, preceded by
at least 1 "cased" character.
• Further, capital sigma must NOT be followed by 0 or more
case-ignorable characters and then at least 1 cased character.
"Case-ignorable" characters include certain punctuation marks, like
the apostrophe, as well as various accent marks. There are actually
close to 500 different case-ignorable characters, including accent marks
from Cyrillic, Hebrew, Armenian, Arabic, Syriac, Bengali, Gujarati,
Telugu, Tibetan, and many other alphabets. This category also includes
zero-width spaces, codepoints which indicate RTL/LTR text direction,
certain musical symbols, etc.
Since the rule involves scanning over "0 or more" of such
case-ignorable characters, it may be necessary to scan arbitrarily far
to the left and right of capital sigma to determine whether the special
lowercase form should be used or not. However, since we are trying to
be both memory-efficient and CPU-efficient, this implementation limits
how far to the left we will scan. Generally, we scan up to 63 characters
to the left looking for a "cased" character, but not more.
When scanning to the right, we go up to the end of the string if
necessary, even if it means scanning over thousands of characters.
Anyway, it is almost impossible to imagine natural text including
"words" with more than 63 successive apostrophes (for example)
followed by a capital sigma.
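Example:

<?php
// Medial capital sigmas lowercase to σ; the word-final one becomes ς.
echo mb_convert_case('ΟΔΥΣΣΕΥΣ', MB_CASE_LOWER, 'UTF-8'); // "οδυσσευς"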
Closes GH-8096.
One small piece of this was obtained from Stack Overflow. According to
Stack Overflow's Terms of Service, all user-contributed code on SO is
provided under a Creative Commons license. I believe this license is
compatible with the code being included in PHP.
Benchmarking results (UTF-8 only, for strings which have already been
checked using mb_check_encoding):
For very short (0-5 byte) strings, mb_strlen is 12% faster.
The speedup gets greater and greater on longer input strings; for
strings around 100KB, mb_strlen is 23 times faster.
Currently the 'fast' code is gated behind a GC flag check which ensures
it is only used on strings which have already been checked for UTF-8
validity. This is because the accelerated code will return different
results on some invalid UTF-8 strings.
MacJapanese has a somewhat unusual feature: when mapped to Unicode,
many of its characters map to sequences of several codepoints.
Add test cases demonstrating how mb_str_split and mb_substr behave in
this situation.
When adding these tests, I found the behavior of mb_substr was wrong
due to an inconsistency between the string "length" as measured by
mb_strlen and the number of native MacJapanese characters which
mb_substr would count when iterating over the string using the
mblen_table. This has been fixed.
I believe that mb_strstr will also return wrong results in some cases
for MacJapanese. I still need to come up with unit tests which
demonstrate the problem and figure out how to fix it.
Various mbstring legacy text encodings have what is called an 'mblen_table';
a table which gives the length of a multi-byte character using a lookup on
the first byte value. Several mbstring functions have a 'fast path' which uses
this table when it is available.
However, it turns out that iterating through a string using the mblen_table
is surprisingly slow. I found that deleting this 'fast path' from
mb_strlen makes it a few percent slower on very small strings
(0-5 bytes) but yields very large performance gains on medium to long
input strings.
Part of the reason for this is that our text decoding filters are so
much faster now.
Here are some benchmarks:
EUC-KR, short (0-5 chars) - master faster by 11.90% (0.0000 vs 0.0000)
EUC-JP, short (0-5 chars) - master faster by 10.88% (0.0000 vs 0.0000)
BIG-5, short (0-5 chars) - master faster by 10.66% (0.0000 vs 0.0000)
UTF-8, short (0-5 chars) - master faster by 8.91% (0.0000 vs 0.0000)
CP936, short (0-5 chars) - master faster by 6.27% (0.0000 vs 0.0000)
UHC, short (0-5 chars) - master faster by 5.38% (0.0000 vs 0.0000)
SJIS, short (0-5 chars) - master faster by 5.20% (0.0000 vs 0.0000)
UTF-8, medium (~100 chars) - new faster by 127.51% (0.0004 vs 0.0002)
UTF-8, long (~10000 chars) - new faster by 87.94% (0.0319 vs 0.0170)
UTF-8, very long (~100000 chars) - new faster by 88.25% (0.3199 vs 0.1699)
SJIS, medium (~100 chars) - new faster by 208.89% (0.0004 vs 0.0001)
SJIS, long (~10000 chars) - new faster by 253.57% (0.0319 vs 0.0090)
CP936, medium (~100 chars) - new faster by 126.08% (0.0004 vs 0.0002)
CP936, long (~10000 chars) - new faster by 200.48% (0.0319 vs 0.0106)
EUC-KR, medium (~100 chars) - new faster by 146.71% (0.0004 vs 0.0002)
EUC-KR, long (~10000 chars) - new faster by 212.05% (0.0319 vs 0.0102)
EUC-JP, medium (~100 chars) - new faster by 186.68% (0.0004 vs 0.0001)
EUC-JP, long (~10000 chars) - new faster by 295.37% (0.0320 vs 0.0081)
BIG-5, medium (~100 chars) - new faster by 173.07% (0.0004 vs 0.0001)
BIG-5, long (~10000 chars) - new faster by 269.19% (0.0319 vs 0.0086)
UHC, medium (~100 chars) - new faster by 196.99% (0.0004 vs 0.0001)
UHC, long (~10000 chars) - new faster by 256.39% (0.0323 vs 0.0091)
This does raise the question: is using the 'mblen_table' worthwhile
for other mbstring functions, such as mb_str_split? The answer is yes.
While mb_strlen only needs to decode the input string, not re-encode
it, mb_str_split (when implemented using the conversion filters) must
both decode the string and then re-encode it. This means there is more
potential performance to gain from using the 'mblen_table'.
Benchmarking shows that in a few cases mb_str_split becomes faster when
the 'mblen_table fast path' is deleted, but in the majority of cases it
becomes slower.
As a performance optimization, mbstring implements some functions using
tables which give the (byte) length of a multi-byte character using a
lookup based on the value of the first byte. These tables are called
`mblen_table`.
For many years, the mblen_table for SJIS has had '2' in position 0x80.
That is wrong; it should have been '1'. Reasons:
For SJIS, SJIS-2004, and mobile variants of SJIS, 0x80 has never been
treated as the first byte of a 2-byte character. It has always been
treated as a single erroneous byte. On the other hand, 0x80 is a valid
character in MacJapanese... but a 1-byte character, not a 2-byte one.
The same applies to bytes 0xFD-FF; these are 1-byte characters in
MacJapanese, and in other SJIS variants, they are not valid (as the
first byte of a character).
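This can be seen from PHP (a sketch; 'SJIS-mac' is mbstring's name for
MacJapanese):

<?php
var_dump(mb_check_encoding("\x80", 'SJIS'));     // false: an erroneous single byte
var_dump(mb_check_encoding("\x80", 'SJIS-mac')); // true: a 1-byte MacJapanese character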
Thanks to the GitHub user 'youkidearitai' for finding this problem.
Regarding the optional 3rd `strict` argument to mb_detect_encoding,
the documentation states:
Controls the behaviour when string is not valid in any of the listed encodings.
If strict is set to false, the closest matching encoding will be returned;
if strict is set to true, false will be returned.
(Ref: https://www.php.net/manual/en/function.mb-detect-encoding.php)
Because of bugs in the implementation, mb_detect_encoding did not always
behave according to this description when `strict` was false.
For example:
<?php
echo var_export(mb_detect_encoding("\xc0\x00", "UTF-8", false));
// Before this commit, prints: false
// After this commit, prints: 'UTF-8'
Because `strict` is false in the above example, mb_detect_encoding
should return the 'closest matching encoding', which is UTF-8, since
that is the only candidate encoding. (Incidentally, this example shows
that using mb_detect_encoding with a single candidate encoding in
non-strict mode is useless.)
The new implementation fixes this bug. It also fixes another problem
with the old implementation as regards non-strict detection mode:
The old implementation would stop processing of the input string using
a particular candidate encoding as soon as it saw an error in that
encoding, even in non-strict mode. This means that it could not really
detect the 'closest matching encoding'; rather, what it would return
in non-strict mode was 'the encoding in which the first decoding error
is furthest from the beginning of the input string'.
In non-strict mode, the new implementation continues trying to process
the input string to its end even after seeing an error. This makes it
possible to determine in which candidate encoding the string has the
smallest number of errors, i.e. the 'closest matching encoding'.
Rejecting candidate encodings as soon as it saw an error gave the old
implementation a marked performance advantage in non-strict mode;
however, the new implementation still beats it in most cases. Here are
a few sample microbenchmark results:
UTF-8, ~100 codepoints, strict mode
Old: 0.080s (100,000 calls)
New: 0.026s (100,000 calls)
UTF-8, ~100 codepoints, non-strict mode
Old: 0.079s (100,000 calls)
New: 0.033s (100,000 calls)
UTF-8, ~10000 codepoints, strict mode
Old: 6.708s (60,000 calls)
New: 1.383s (60,000 calls)
UTF-8, ~10000 codepoints, non-strict mode
Old: 6.705s (60,000 calls)
New: 3.044s (60,000 calls)
Notice that the old implementation had almost identical performance
between strict and non-strict mode, while the new one suffers a significant
performance penalty for non-strict detection. This is the cost of
implementing the behavior specified in the documentation.
A couple more sample results:
SJIS, ~10000 codepoints, strict mode
Old: 4.563s
New: 1.084s
SJIS, ~10000 codepoints, non-strict mode
Old: 4.569s
New: 2.863s
This is the only case I found where the new implementation loses:
UTF-16LE, ~10000 codepoints, non-strict mode
Old: 1.514s
New: 2.813s
The reason is that the test strings happened to be invalid right from
the first few bytes for all the candidate encodings except for UTF-16LE;
so the old implementation would immediately reject all those encodings
and only process the entire string in UTF-16LE.
I believe mb_detect_encoding could be made much faster if we identified
good criteria for when to reject candidate encodings before reaching
the end of the input string.
There is no great difference between the old and new code for text
encodings which either 1) use a fixed number of bytes per codepoint, or
2) have an 'mblen' table which enables us to find the length of a
multi-byte character using a table lookup indexed by the first byte
value.
The big difference is for other text encodings, where we have to
actually decode the string to split it. For such text encodings,
such as ISO-2022-JP and UTF-16, I measured a speedup of 50%-120% over
the previous implementation.
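For example (a sketch; note that the returned pieces stay in the
source encoding):

<?php
$s = "\xE5\x65\x2C\x67"; // '日本' in UTF-16LE
var_dump(count(mb_str_split($s, 1, 'UTF-16LE'))); // int(2): split by characters, not bytes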
Add 4 codepoints commonly used to write Turkish text to our table
of 'commonly used' Unicode codepoints. These are:
• U+011F LATIN SMALL LETTER G WITH BREVE
• U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE
• U+0131 LATIN SMALL LETTER DOTLESS I
• U+015F LATIN SMALL LETTER S WITH CEDILLA
The 'h' flag makes mb_convert_kana convert zenkaku hiragana to hankaku
katakana; 'k' makes it convert zenkaku katakana to hankaku katakana.
When working on the implementation of mb_convert_kana, I added some
additional checks to catch combinations of flags which do not make
sense; but there is no conflict between 'h' and 'k' (they control
conversions for two disjoint ranges of codepoints) and this combination
should not have been restricted.
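For example (a sketch; the output is my expectation for UTF-8 input):

<?php
echo mb_convert_kana('ひらがな カタカナ', 'hk', 'UTF-8');
// "ﾋﾗｶﾞﾅ ｶﾀｶﾅ" (both ranges become hankaku katakana)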
Thanks to the GitHub user 'akira345' for reporting this problem.
Closes GH-10174.
I wanted to compare the behavior of mbstring between PHP 8.1 (or
newer) and PHP 8.0 (or older); but when I ported the phpt files from
PHP 8.1 or newer back to PHP 8.0, they would die() as soon as a test
failed. I want to produce a list of all failures, so the tests should
not stop at the first one.
To run the full test suite, set $testFailedLimit to -1 in
encoding_tests.inc.
In GitHub issue 9613, it was reported that mb_strpos wrongly matches the
character '?' against any invalid string, even when the character '?'
clearly does not appear in the invalid string. This behavior has existed
at least since PHP 5.2.
The reason for the behavior is that mb_strpos internally converts the
haystack and needle to UTF-8 before performing a search. When converting
to UTF-8, regardless of the setting of mb_substitute_character, libmbfl
would use '?' as an error marker for invalid byte sequences. Once those
invalid input sequences were replaced with '?', then naturally, they
would match against occurrences of the actual character '?' (when it
appeared as a 'normal' character, not as an error marker). This would
happen regardless of whether the error was in the haystack and '?' was
used in the needle, or whether the error was in the needle and '?' was
used in the haystack.
Why would libmbfl use '?' rather than the mb_substitute_character set
by the user? Remember that libmbfl was originally a separate library
which was imported into the PHP codebase. mb_substitute_character is an
mbstring API function, not something built into libmbfl. When mbstring
would call into libmbfl, it would provide the error replacement
character to libmbfl as a parameter. However, when libmbfl would perform
conversion operations internally, and not because of a direct call from
mbstring, it would use its own error replacement character.
Example:
<?php
$questionMark = "\x00?";
$badUTF16 = "\xDB\x00"; // half of a surrogate pair
echo mb_strpos($questionMark, $badUTF16, 0, 'UTF-16BE'), "\n";
echo mb_strpos($badUTF16, $questionMark, 0, 'UTF-16BE'), "\n";
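// Before this fix, both calls printed 0: the invalid bytes were
// converted to '?' internally and matched the literal '?' character.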
Incidentally, this behavior does not occur if the text encoding is
UTF-8, because no conversion is needed in that case.
mb_stripos had a similar issue, but instead of always using '?' as an
error marker internally, it would use the selected
mb_substitute_character. So, for example, if the mb_substitute_character
was '%', then occurrences of '%' in the haystack would match invalid
bytes in the needle, and vice versa.
Example:
<?php
mb_substitute_character(0x25); // '%'
$percent = "\x00%";
$badUTF16 = "\xDB\x00"; // half of a surrogate pair
echo mb_stripos($percent, $badUTF16, 0, 'UTF-16BE'), "\n";
echo mb_stripos($badUTF16, $percent, 0, 'UTF-16BE'), "\n";
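// Likewise, before the fix both calls printed 0, with the substitute
// character '%' matching the invalid bytes.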
This behavior (of mb_stripos) still occurs even if the text encoding is
UTF-8, because case folding is still needed to make the search
case-insensitive.
It is not hard to think of scenarios where these strange and unintuitive
behaviors could cause security vulnerabilities. In the discussion on
GH issue 9613, Christoph Becker suggested that mb_str{i,}pos should
simply refuse to operate on invalid strings. However, this would almost
certainly break existing production code.
This commit mitigates the problem in a less intrusive way: it ensures
that while invalid haystacks can match invalid needles (even if the
specific invalid bytes are different), invalid bytes in the haystack
will never match '?' OR occurrences of the mb_substitute_character in
the needle, and vice versa.
This does represent a backwards compatibility break, but a small one.
Since it mitigates a potential security problem, I believe this is
appropriate.
Closes GH-9613.
Instead of case-folding a string and then converting it to UTF-8 as a
separate operation, why not convert it to UTF-8 at the same time as
we fold case?
For non-UTF-8 encodings, this typically makes mb_stripos about 2x
faster.
The performance gain from this change depends on the text encoding and
input string size. For very small strings, other overheads tend to swamp
the performance gains to some extent, such that the speedup is less than
2x. For medium-length strings (~100 bytes or so), the speedup is
typically around 2.5x.
The greatest performance gains are for UTF-8 strings which have already
been marked as valid (using the GC flags on the zend_string object);
for those, the speedup is more than 10x in many cases.
The previous implementation first converted the haystack and needle to
wchars, then searched for matches between the two sequences of wchars.
Because we use -1 as an error marker when converting to wchars, error
markers from invalid byte sequences in the haystack would match error
markers from invalid byte sequences in the needle, even if the specific
invalid byte sequence was different. I am not sure whether this behavior
is really desirable or not; but in any case, this new implementation
follows the same behavior so as not to cause BC breaks.
This boosts the performance of mb_strpos, mb_stripos, mb_strrpos,
mb_strripos, mb_strstr, mb_stristr, mb_strrchr, and mb_strrichr when
used on non-UTF-8 strings. mb_substr is also faster.
With UTF-8 input, there is no appreciable difference in performance for
mb_strpos, mb_stripos, mb_strrpos, etc. This is expected, since the only
real difference here (aside from shorter and simpler code) is that the
new text conversion code is used when converting non-UTF-8 input strings
to UTF-8. (This is done because internally, mb_strpos, etc. work only
on UTF-8 text.)
For ASCII, speed is boosted by 30-65%. For other legacy text encodings,
the degree of performance improvement will depend on how slow the
legacy conversion code was.
One other minor but notable difference is that strings encoded using
UTF-8 variants from Japanese mobile vendors (SoftBank, KDDI, Docomo)
will not undergo encoding conversion but will be processed "as is". It
is expected that this will result in a large performance boost for
such input strings; but realistically, the number of users who work
with such strings is probably minute.
I was not originally planning to include mb_substr in this commit, but
fuzzing of the reimplemented mb_strstr revealed that mb_substr needed
to be reimplemented, too; using the old mbfl_substr, which was based
on the old text conversion filters, in combination with functions which
use the new text conversion filters caused bugs.
The performance boost for mb_substr varies from 10%-500%, depending
on the encoding and input string used.
In b5ff87ca71, I made a number of adjustments to our conversion code
for CP1252. One of the adjustments was to make the mappings match those
published by the Unicode Consortium in the file CP1252.TXT. These do
not include mappings for the CP1252 bytes 0x81, 0x8D, 0x8F, 0x90, and
0x9D.
Rostyslav Gulka reported that this caused a problem. His application
stores binary JPEG data in an MS-SQL database. When the application
SELECTs the binary data out of the database, it is treated as CP1252
text and automatically converted to UTF-8. To recover the original
binary data, it then converts from UTF-8 back to CP1252.
Obviously, that does not work if certain CP1252 bytes do not map to
any Unicode codepoint at all.
While this is a very unusual application of text encoding conversion,
and we might choose not to support it if there was no other basis for
including those mappings, it seems that Microsoft does actually include
them in the Win32 API as "best fit" mappings. These are extra mappings
from Unicode to other text encodings, which the Win32 API function
WideCharToMultiByte uses by default unless the WC_NO_BEST_FIT_CHARS
flag is passed.
A list of these "best fit" mappings for CP1252 can be found here:
https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WindowsBestFit/bestfit1252.txt
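With the mappings restored, the round trip works again (a sketch;
under the "best fit" mappings, these bytes correspond to the C1
control codepoints U+0081, U+008D, U+008F, U+0090, and U+009D):

<?php
$bytes = "\x81\x8D\x8F\x90\x9D";
$utf8  = mb_convert_encoding($bytes, 'UTF-8', 'CP1252');
var_dump(mb_convert_encoding($utf8, 'CP1252', 'UTF-8') === $bytes); // true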
For JIS encoding, hiragana and katakana can be input in multiple forms.
One form uses JISX 0201 escape sequences. Another is called 'GR-invoked'
kana.
In the context of ISO-2022 encoding, bytes with a zero bit in the MSB
are called "GL" (or "graphics left") and those with the MSB set are
called "GR" (or "graphics right"). Regarding the variants of
ISO-2022-JP which are called "JIS7" and "JIS8", Wikipedia states:
"Other, older variants known as JIS7 and JIS8 build directly on the
7-bit and 8-bit encodings defined by JIS X 0201 and allow use of JIS X
0201 kana from G1 without escape sequences, using Shift Out and Shift
In or setting the eighth bit (GR-invoked), respectively."
In harmony with this, we have always accepted bytes from 0xA3-0xDF and
decoded them to the corresponding hiragana/katakana. However, at some
point I accidentally broke output for these kana. You can see the
problem in 3v4l.org by running this program:
<?php
echo bin2hex(mb_convert_encoding("\xA3", 'JIS', 'JIS'));
The results are:
Output for 8.2rc1 - rc3:
1b244200231b2842
Output for 7.4.0 - 7.4.33, 8.0.1 - 8.0.25, 8.1.12:
1b2849231b2842
Output for 8.1.0 - 8.1.11:
1b284923
You can see that from 8.1.0 - 8.1.11, there was a missing escape
sequence at the end. That was caused because the flush functions were
not being called properly, and has already been fixed. However, this
also shows that the output for 8.2rc1-rc3 is completely invalid.
It is trying to output a JISX 0208 sequence, but with 0x00 as one of
the JISX 0208 bytes, which is illegal.
Add the missing code which will make the new text conversion filters
behave the same as the old ones when outputting hiragana/katakana in
JIS encoding.
This bug was found when I was fuzzing a patch related to mb_strpos.
In some cases, the legacy text conversion code for UTF-7 (and
UTF7-IMAP) would correctly recognize an error for a Base64-encoded
section which was not correctly padded with zero bits, but the new
(and faster) text conversion code would not.
Specifically, if the input string ended abruptly after the 4th or 7th
byte of a Base64-encoded section, the new conversion code would
confirm that the trailing padding bits from the previous byte (3rd or
6th) were zeroes, but would not check whether the 4th or 7th byte
itself encoded any non-zero bits. The legacy conversion code did
perform this check and would treat the input string as invalid.
Actually, even if the 4th or 7th byte does encode only (padding) zero
bits, this is still a problem, because there is no reason to have a
4th (or 7th) byte in that case. The UTF-7 string should have ended
on the previous byte instead.
Apply the same fix for both UTF-7 and UTF7-IMAP.