Accumulator Error Feedback

From Wikimization

Current revision (04:01, 27 February 2018)
[[Image:Gleich.jpg|thumb|right|429px|<tt>csum()</tt> in Digital Signal Processing terms:
z<sup>-1</sup> is a unit delay,<br>Q is a 64-bit floating-point quantizer.]]
<pre>
function s = csum(x)
% CSUM Sum of elements using a compensated summation algorithm.
%
% This Matlab code implements
% Kahan's compensated summation algorithm (1964).
%
% Example:
% clear all; clc
% csumv=0;  rsumv=0;
% n = 100e6;
% t = ones(n,1);
% while csumv <= rsumv
%    v = randn(n,1);
%
%    rsumv = abs((t'*v - t'*v(end:-1:1))/sum(v));
%    disp(['rsumv = ' num2str(rsumv,'%1.16f')]);
%
%    csumv = abs((csum(v) - csum(v(end:-1:1)))/sum(v));
%    disp(['csumv = ' num2str(csumv,'%1.16e')]);
% end

s=0; e=0;
for i=1:numel(x)
   s_old = s;
   y = x(i) + e;
   s = s + y;
   e = y - (s - s_old);
end
return
</pre>
When there is heavy cancellation during summation, due to the presence of both positive and negative summands, loss of low-order bits becomes inevitable, especially under the high dynamic range of floating-point. Consequently, numerical error contaminates the least significant bits of a double precision sum.

The purpose of compensated summation is to produce a double precision sum, of double precision summands, that is as accurate as a quadruple precision summer.
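The <tt>csum()</tt> loop above translates directly to other languages. Here is a minimal Python sketch (not part of the original Matlab code), using <tt>math.fsum</tt> as a correctly rounded reference in place of a multiprecision toolbox:

```python
import math
import random

def csum(x):
    """Kahan's compensated summation; names mirror the Matlab csum() above."""
    s = 0.0   # running sum
    e = 0.0   # compensation: rounding error carried from the previous addition
    for xi in x:
        s_old = s
        y = xi + e             # fold the carried error into the next summand
        s = s_old + y          # one rounded double-precision addition
        e = y - (s - s_old)    # recover what that addition just lost
    return s

random.seed(1)
v = [random.gauss(0.0, 1.0) for _ in range(10**5)]
exact = math.fsum(v)              # correctly rounded reference sum
print(abs(sum(v) - exact))        # naive summation error
print(abs(csum(v) - exact))       # compensated error, typically zero or one ulp
```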
=== summing ===
We need a stable reference for comparison of results from any method of summation.
&nbsp;<tt>ones(1,n)*v</tt>&nbsp; and &nbsp;<tt>sum(v)</tt>&nbsp; produce different results in Matlab 2017b with vectors having only a few hundred entries.

Matlab's VPA <b>(</b>variable precision arithmetic, <tt>vpa()</tt>, <tt>sym()</tt><b>)</b>, from Mathworks' Symbolic Math Toolbox, cannot accurately sum even a few hundred entries in quadruple precision. Error creeps above |2e-16| for sequences with high condition number <b>(</b>heavy cancellation, defined as large sum|<i>x</i>|/|sum <i>x</i>|<b>)</b>.

[https://www.advanpix.com Advanpix Multiprecision Computing Toolbox], for MATLAB, is a stable reference.
Advanpix is also hundreds of times faster than Matlab VPA. Higham measures speed here:
[https://nickhigham.wordpress.com/2017/08/31/how-fast-is-quadruple-precision-arithmetic https://nickhigham.wordpress.com/2017/08/31/how-fast-is-quadruple-precision-arithmetic]
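The condition number &nbsp;sum|<i>x</i>|/|sum <i>x</i>|&nbsp; mentioned above quantifies cancellation. A short Python sketch (the helper name <tt>summation_condition</tt> is illustrative, not from the original):

```python
import math

def summation_condition(x):
    """Condition number of summation: sum|x| / |sum x|.
    Near 1 means benign data; large values mean heavy cancellation."""
    return math.fsum(abs(xi) for xi in x) / abs(math.fsum(x))

print(summation_condition([1.0, 2.0, 3.0]))    # → 1.0 (no cancellation)
print(summation_condition([1e8, 1.0, -1e8]))   # → 200000001.0 (severe cancellation)
```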
=== sorting ===
Floating-point compensated-summation accuracy is data dependent.
Substituting a unit sinusoid at arbitrary frequency for the random number sequence input can make compensated summation fail to produce more accurate results than a simple sum.

Input sorting, in descending order of absolute value, achieves more accurate summation, whereas ascending order reliably fails.
Sorting is not integral to Kahan's algorithm above because it would defeat input sequence reversal in the commented example.
Sorting later became integral to modifications of Kahan's algorithm, such as Priest's
([http://servidor.demec.ufpr.br/CFD/bibliografia/Higham_2002_Accuracy%20and%20Stability%20of%20Numerical%20Algorithms.pdf Higham] Algorithm&nbsp;4.3),
but the same accuracy dependence on input data prevails.
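The preprocessing step that descending-order variants rely on corresponds to Matlab's &nbsp;<tt>[~, idx] = sort(abs(x),'descend'); x = x(idx);</tt>. A one-line Python equivalent (the function name is illustrative):

```python
def sort_descending_magnitude(x):
    """Reorder summands by decreasing absolute value, keeping their signs."""
    return sorted(x, key=abs, reverse=True)

print(sort_descending_magnitude([0.5, -3.0, 2.0, -0.25]))
# → [-3.0, 2.0, 0.5, -0.25]
```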
=== refining ===
Eight years after introducing compensated summation in 1964, Kahan proposed appending the final error to the sum (after the input has vanished). This makes sense from a Digital Signal Processing perspective because the marginally stable recursive system illustrated has
[https://ccrma.stanford.edu/~dattorro/Integrator.pdf persistent zero-input response (ZIR)].
The modified Kahan algorithm becomes:
<pre>
function s = ksum(x)
[~, idx] = sort(abs(x),'descend');
x = x(idx);
s=0; e=0;
for i=1:numel(x)
   s_old = s;
   s = s + x(i);
   e = e + x(i) - (s - s_old);
end
s = s + e;
return
</pre>
Input sorting is now integral to the algorithm. The error <tt>e</tt> no longer feeds back into <tt>s</tt> inside the loop; it is folded into the sum once, after the loop.
This algorithm always succeeds, even on sequences with high cancellation; data dependency has been eliminated.
Use the [https://www.advanpix.com Advanpix Multiprecision Computing Toolbox] to compare results.
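For comparison across languages, <tt>ksum()</tt> can be transliterated to Python as follows (a sketch, not part of the original; the Matlab code above remains the reference):

```python
def ksum(x):
    """Kahan's 1972 refinement: sort by descending magnitude, accumulate the
    rounding error separately, and append it to the sum after the loop."""
    x = sorted(x, key=abs, reverse=True)    # [~, idx] = sort(abs(x),'descend')
    s = 0.0
    e = 0.0
    for xi in x:
        s_old = s
        s = s_old + xi                # plain accumulation
        e = e + (xi - (s - s_old))    # error is NOT fed back inside the loop
    return s + e                      # final error appended once

# Heavy cancellation: a small summand survives only if the error is tracked.
print(sum([1e16, 1.0, -1e16, 1.0]))   # → 1.0 (naive sum loses one summand)
print(ksum([1e16, 1.0, -1e16, 1.0]))  # → 2.0 (exact)
```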
=== references ===
[http://servidor.demec.ufpr.br/CFD/bibliografia/Higham_2002_Accuracy%20and%20Stability%20of%20Numerical%20Algorithms.pdf Accuracy and Stability of Numerical Algorithms 2e, ch.4.3, Nicholas J. Higham, 2002]

[http://www.convexoptimization.com/TOOLS/Kahan.pdf Further Remarks on Reducing Truncation Errors, William Kahan, 1964]

[http://www.mathworks.com/matlabcentral/fileexchange/26800-xsum XSum() Matlab program - Fast Sum with Error Compensation, Jan Simon, 2014]

For fixed-point multiplier error feedback, see:

[http://ccrma.stanford.edu/~dattorro/HiFi.pdf Implementation of Recursive Digital Filters for High-Fidelity Audio]

[http://ccrma.stanford.edu/~dattorro/CorrectionsHiFi.pdf Comments on Implementation of Recursive Digital Filters for High-Fidelity Audio]
