Accumulator Error Feedback

csum() in Digital Signal Processing terms: z^-1 is a unit delay, Q is a 64-bit floating-point quantizer.
function s = csum(x)
% CSUM Sum of elements using a compensated summation algorithm.
%
% This Matlab code implements 
% Kahan's compensated summation algorithm (1964) 
%
% Example:
% clear all; clc
% csumv=0;  rsumv=0;
% n = 100e6;
% t = ones(n,1);
% while csumv <= rsumv
%    v = randn(n,1);
%
%    rsumv = abs((t'*v - t'*v(end:-1:1))/sum(v));
%    disp(['rsumv = ' num2str(rsumv,'%1.16f')]);
%
%    csumv = abs((csum(v) - csum(v(end:-1:1)))/sum(v));
%    disp(['csumv = ' num2str(csumv,'%1.16e')]);
% end

s = 0; e = 0;                 % running sum and accumulated error
for i = 1:numel(x)
   s_old = s;                 % previous partial sum
   y = x(i) + e;              % add back error from the previous step
   s = s + y;                 % accumulate
   e = y - (s - s_old);       % low-order bits lost by this addition
end
return

When there is heavy cancellation during summation, due to the presence of both positive and negative summands, low-order bit loss becomes inevitable, especially under the high dynamic range of floating-point. Consequently, numerical error contaminates the least significant bits of a double precision sum.

The purpose of compensated summation is to produce a double precision sum, of double precision summands, that is as accurate as a sum computed in quadruple precision.
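
As a minimal check of this claim, the order-reversal discrepancy from the commented example above can serve as an error proxy (the length n below is an arbitrary choice):

% Compare naive and compensated sums of a sign-mixed sequence.
% A sum and its reversed-order sum should agree; their normalized
% difference estimates accumulated rounding error.
n = 1e6;
v = randn(n,1);                      % positive and negative summands
naive = abs(sum(v)  - sum(v(end:-1:1)))/abs(sum(v));
comp  = abs(csum(v) - csum(v(end:-1:1)))/abs(sum(v));
disp(['naive = ' num2str(naive,'%1.16e')]);
disp(['comp  = ' num2str(comp, '%1.16e')]);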

summing

We need a stable reference for comparing results from any method of summation: ones(1,n)*v and sum(v) produce different results in Matlab 2017b for vectors having only a few hundred entries.

Matlab's VPA (variable precision arithmetic: vpa(), sym()), from Mathworks' Symbolic Math Toolbox, cannot accurately sum even a few hundred entries in quadruple precision. Error creeps above |2e-16| for sequences with a high condition number (heavy cancellation, defined as large sum|x| / |sum x|).

The Advanpix Multiprecision Computing Toolbox for Matlab is a stable reference. Advanpix is also hundreds of times faster than Matlab VPA. Higham measures its speed here: https://nickhigham.wordpress.com/2017/08/31/how-fast-is-quadruple-precision-arithmetic
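
With that toolbox installed, a quadruple-precision reference sum might be computed as follows; this sketch assumes Advanpix's documented mp() constructor and mp.Digits() precision control:

% Quadruple-precision reference sum via Advanpix (sketch).
v = randn(1e6,1);              % any test vector
mp.Digits(34);                 % 34 decimal digits ~ IEEE quadruple precision
ref = double(sum(mp(v)));      % sum in quadruple, round once to double
err = abs(csum(v) - ref)/abs(ref)    % relative error of the csum result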

sorting

Floating-point compensated-summation accuracy is data dependent: substituting a unit sinusoid at arbitrary frequency for the random number sequence input can make compensated summation fail to produce more accurate results than a simple sum.

Input sorting in descending order of absolute value achieves more accurate summation, whereas ascending order reliably fails. Sorting is not integral to Kahan's algorithm above because it would defeat input sequence reversal in the commented example. Sorting later became integral to modifications of Kahan's algorithm, such as Priest's (Higham Algorithm 4.3), but the same accuracy dependence on input data prevails; the sketch below shows one way to compare orderings.
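
This sketch exposes the ordering dependence using the unit-sinusoid input mentioned above (the frequency and length are arbitrary test choices):

% Effect of input ordering on compensated summation (sketch).
n = 1e6; f = 0.1234;                 % arbitrary test frequency
v = sin(2*pi*f*(0:n-1)');            % unit sinusoid summands
[~, idx] = sort(abs(v),'descend');
sd = csum(v(idx));                   % descending magnitude
sa = csum(v(flipud(idx)));           % ascending magnitude
su = csum(v);                        % original order
% Compare sd, sa, su against a trusted reference such as Advanpix.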

refining

Eight years after the 1964 introduction of summation compensation, Kahan proposed appending the final error to the sum (after the input has vanished). This makes sense from a Digital Signal Processing perspective because the marginally stable recursive system illustrated has a persistent zero-input response (ZIR). The modified Kahan algorithm becomes:

function s = ksum(x)
[~, idx] = sort(abs(x),'descend');    % sort by descending magnitude
x = x(idx);
s = 0; e = 0;                         % running sum and accumulated error
for i = 1:numel(x)
   s_old = s;                         % previous partial sum
   s = s + x(i);                      % accumulate
   e = e + x(i) - (s - s_old);        % accumulate lost low-order bits
end
s = s + e;                            % append final error to the sum
return

Input sorting is now integral to the algorithm, and error e no longer feeds back into s inside the loop. This algorithm always succeeds, even on sequences with heavy cancellation; data dependency has been eliminated. Use the Advanpix Multiprecision Computing Toolbox to compare results.
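
A comparison along those lines might look like this, again assuming Advanpix's mp() interface:

% Compare simple, compensated, and modified-Kahan sums
% against a quadruple-precision reference (sketch).
v = randn(1e6,1);
mp.Digits(34);
ref = double(sum(mp(v)));            % Advanpix quadruple-precision reference
disp(['sum  : ' num2str(abs(sum(v)  - ref)/abs(ref),'%1.3e')]);
disp(['csum : ' num2str(abs(csum(v) - ref)/abs(ref),'%1.3e')]);
disp(['ksum : ' num2str(abs(ksum(v) - ref)/abs(ref),'%1.3e')]);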

references

Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd edition, ch. 4.3, 2002

William Kahan, Further Remarks on Reducing Truncation Errors, 1964

Jan Simon, XSum() Matlab program: Fast Sum with Error Compensation, 2014

For fixed-point multiplier error feedback, see:

Implementation of Recursive Digital Filters for High-Fidelity Audio

Comments on Implementation of Recursive Digital Filters for High-Fidelity Audio
