mirror of https://github.com/php/doc-en.git synced 2026-03-23 23:32:18 +01:00

docs(fann): fix grammar, punctuation, and terminology in constants (#5090)

- Fix subject-verb agreement (neurons which differ, criterion is)
- Capitalize 'Gaussian' as a proper noun
- Add missing hyphens in compound adjectives (sigmoid-like)
- Fix 'insight in' to 'insight into'
- Complete the phrase 'from the desired' to 'from the desired output'
- Remove redundant commas
This commit is contained in:
Anton L.
2026-01-12 01:25:47 +05:00
committed by GitHub
parent 64e1182c98
commit 6a8c8a94c8


@@ -28,9 +28,9 @@
 <listitem>
 <simpara>
 Standard backpropagation algorithm, where the weights are updated after calculating the mean square error
-for the whole training set. This means that the weights are only updated once during a epoch.
-For this reason some problems, will train slower with this algorithm. But since the mean square
-error is calculated more correctly than in incremental training, some problems will reach a better
+for the whole training set. This means that the weights are only updated once during an epoch.
+For this reason, some problems will train slower with this algorithm. But since the mean square
+error is calculated more correctly than in incremental training, some problems will reach better
 solutions with this algorithm.
 </simpara>
 </listitem>
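The batch-versus-incremental distinction above can be sketched with the PECL fann extension. This is a minimal, hypothetical example (the network shape and all values are illustrative assumptions, not from the patch):

```php
<?php
// Sketch only: requires the PECL fann extension; values are illustrative.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

// A small 2-3-1 feed-forward network (hypothetical shape).
$ann = fann_create_standard(3, 2, 3, 1);

// FANN_TRAIN_BATCH updates the weights once per epoch, after the mean
// square error for the whole training set has been computed;
// FANN_TRAIN_INCREMENTAL would update after every training pattern.
fann_set_training_algorithm($ann, FANN_TRAIN_BATCH);
```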
@@ -43,11 +43,11 @@
 <listitem>
 <simpara>
 A more advanced batch training algorithm which achieves good results for many problems. The RPROP
-training algorithm is adaptive, and does therefore not use the learning_rate. Some other parameters
+training algorithm is adaptive, and therefore does not use the learning_rate. Some other parameters
 can however be set to change the way the RPROP algorithm works, but it is only recommended
 for users with insight in how the RPROP training algorithm works. The RPROP training algorithm
 is described by [Riedmiller and Braun, 1993], but the actual learning algorithm used here is
-the iRPROP- training algorithm which is described by [Igel and Husken, 2000] which is an variety
+the iRPROP- training algorithm which is described by [Igel and Husken, 2000] which is a variety
 of the standard RPROP training algorithm.
 </simpara>
 </listitem>
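A hedged sketch of what "adaptive, does not use the learning_rate" means in practice, assuming the PECL fann extension's RPROP parameter setters (the numeric values are illustrative, not recommendations):

```php
<?php
// Sketch only: requires the PECL fann extension; parameter values are
// illustrative defaults, not tuning advice.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

$ann = fann_create_standard(3, 2, 3, 1); // hypothetical 2-3-1 network
fann_set_training_algorithm($ann, FANN_TRAIN_RPROP);

// RPROP adapts its own step sizes, so fann_set_learning_rate() would have
// no effect here. The advanced knobs below are only for users who know
// how the iRPROP- algorithm works; the library defaults are usually fine.
fann_set_rprop_increase_factor($ann, 1.2);
fann_set_rprop_decrease_factor($ann, 0.5);
fann_set_rprop_delta_min($ann, 0.0);
fann_set_rprop_delta_max($ann, 50.0);
```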
@@ -61,7 +61,7 @@
 <simpara>
 A more advanced batch training algorithm which achieves good results for many problems.
 The quickprop training algorithm uses the learning_rate parameter along with other more advanced parameters,
-but it is only recommended to change these advanced parameters, for users with insight in how the quickprop
+but it is only recommended to change these advanced parameters for users with insight into how the quickprop
 training algorithm works. The quickprop training algorithm is described by [Fahlman, 1988].
 </simpara>
 </listitem>
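Unlike RPROP, quickprop does consume the learning_rate. A hedged sketch, assuming the PECL fann extension's quickprop setters (values are illustrative):

```php
<?php
// Sketch only: requires the PECL fann extension; values are illustrative.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

$ann = fann_create_standard(3, 2, 3, 1); // hypothetical 2-3-1 network
fann_set_training_algorithm($ann, FANN_TRAIN_QUICKPROP);

// Quickprop uses the learning rate, in contrast to RPROP.
fann_set_learning_rate($ann, 0.7);

// Advanced parameters from [Fahlman, 1988]; change these only with
// insight into how the quickprop algorithm works.
fann_set_quickprop_decay($ann, -0.0001);
fann_set_quickprop_mu($ann, 1.75);
```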
@@ -175,7 +175,7 @@
 </term>
 <listitem>
 <simpara>
-Symmetric gaussian activation function.
+Symmetric Gaussian activation function.
 </simpara>
 </listitem>
 </varlistentry>
@@ -186,7 +186,7 @@
 </term>
 <listitem>
 <simpara>
-Stepwise gaussian activation function.
+Stepwise Gaussian activation function.
 </simpara>
 </listitem>
 </varlistentry>
@@ -197,7 +197,7 @@
 </term>
 <listitem>
 <simpara>
-Fast (sigmoid like) activation function defined by David Elliott.
+Fast (sigmoid-like) activation function defined by David Elliott.
 </simpara>
 </listitem>
 </varlistentry>
@@ -208,7 +208,7 @@
 </term>
 <listitem>
 <simpara>
-Fast (symmetric sigmoid like) activation function defined by David Elliott.
+Fast (symmetric sigmoid-like) activation function defined by David Elliott.
 </simpara>
 </listitem>
 </varlistentry>
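The Gaussian and Elliott activation constants above are applied per layer. A minimal sketch, assuming the PECL fann extension:

```php
<?php
// Sketch only: requires the PECL fann extension.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

$ann = fann_create_standard(3, 2, 3, 1); // hypothetical 2-3-1 network

// Symmetric Gaussian on the hidden layer, Elliott's fast sigmoid-like
// function on the output layer (an illustrative pairing, not a
// recommendation).
fann_set_activation_function_hidden($ann, FANN_GAUSS_SYMMETRIC);
fann_set_activation_function_output($ann, FANN_ELLIOT);
```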
@@ -300,7 +300,7 @@
 <listitem>
 <simpara>
 Tanh error function; usually better but may require a lower learning rate. This error function aggressively
-targets outputs that differ much from the desired, while not targeting outputs that only differ slightly.
+targets outputs that differ much from the desired output, while not targeting outputs that differ only slightly.
 Not recommended for cascade or incremental training.
 </simpara>
 </listitem>
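Selecting the tanh error function looks like this with the PECL fann extension; the lowered learning rate is an illustrative assumption following the caveat above:

```php
<?php
// Sketch only: requires the PECL fann extension; values are illustrative.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

$ann = fann_create_standard(3, 2, 3, 1); // hypothetical 2-3-1 network

// Tanh error aggressively penalizes outputs far from the desired output,
// so a lower learning rate may be needed. Not recommended for cascade
// or incremental training.
fann_set_train_error_function($ann, FANN_ERRORFUNC_TANH);
fann_set_learning_rate($ann, 0.3); // illustrative lower rate
```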
@@ -326,8 +326,8 @@
 </term>
 <listitem>
 <simpara>
-Stop criteria is number of bits that fail. The number of bits means the number of output neurons
-which differs more than the bit fail limit (see fann_get_bit_fail_limit, fann_set_bit_fail_limit). The bits are counted
+Stop criterion is number of bits that fail. The number of bits means the number of output neurons
+which differ more than the bit fail limit (see fann_get_bit_fail_limit, fann_set_bit_fail_limit). The bits are counted
 in all of the training data, so this number can be higher than the number of training data.
 </simpara>
 </listitem>
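The bit-fail stop criterion described above can be sketched as follows with the PECL fann extension (the limit value is an illustrative assumption):

```php
<?php
// Sketch only: requires the PECL fann extension; the limit is illustrative.
if (!extension_loaded('fann')) {
    echo "fann extension not available\n";
    exit(0);
}

$ann = fann_create_standard(3, 2, 3, 1); // hypothetical 2-3-1 network

// Stop training based on failing "bits": an output bit fails when an
// output neuron differs from the desired output by more than the bit
// fail limit.
fann_set_train_stop_function($ann, FANN_STOPFUNC_BIT);
fann_set_bit_fail_limit($ann, 0.35);

// fann_get_bit_fail($ann) reports the current count; since every output
// neuron of every pattern is counted, it can exceed the number of
// training patterns.
```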