Artificial Time Integration


Uri Ascher         Hui Huang         Kees van den Doel

  University of British Columbia


Figure 1: Rapid removal of heavy Gaussian noise from a corrupted model: (a) the original Max Planck model (50K vertices); (b) the model corrupted by 20% Gaussian noise in the normal direction; (c) smoothed by AL (4 iterations, 5.1 secs).

Many recent algorithmic approaches involve the construction of a differential equation model for computational purposes, typically by introducing an artificial time variable. The actual computational model involves a discretization of the now time-dependent differential system, usually employing the forward Euler method. The resulting dynamics of such an algorithm are then discrete, and they are expected to be "close enough" to the dynamics of the continuous system (which is typically easier to analyze) provided that small, and hence many, time steps or iterations are taken. Indeed, recent papers in inverse problems and image processing routinely report results requiring thousands of iterations to converge. This makes one wonder if and how the computational modeling process can be improved to better reflect the properties actually sought. In this article we elaborate on several problem instances that illustrate these observations. Algorithms often lend themselves to a dual interpretation: as a simply discretized differential equation with artificial time, and as a simple optimization algorithm; such a dual interpretation can be advantageous. We show how a broader computational modeling approach may lead to algorithms with improved efficiency.
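To illustrate the dual interpretation mentioned above, here is a minimal sketch (not taken from the paper) of forward Euler applied to the artificial-time gradient flow u'(t) = -grad f(u): each Euler step is exactly one gradient-descent iteration with step size equal to the time step. The 1D denoising objective f(u) = 0.5||u - b||^2 + 0.5*lam*||Du||^2 used below, with D a forward-difference operator, is a hypothetical example chosen only to make the flow concrete; note how many small steps are needed for convergence.

```python
import numpy as np

def f_grad(u, b, lam):
    # Gradient of f(u) = 0.5*||u - b||^2 + 0.5*lam*||D u||^2,
    # where D is the (undivided) forward-difference operator.
    Du = np.diff(u)
    DtDu = np.concatenate(([-Du[0]], Du[:-1] - Du[1:], [Du[-1]]))  # D^T D u
    return (u - b) + lam * DtDu

def forward_euler(b, lam=10.0, dt=0.01, steps=5000):
    # Integrate u' = -grad f(u) in artificial time; one Euler step
    # is one gradient-descent iteration with step size dt.
    u = b.copy()
    for _ in range(steps):
        u = u - dt * f_grad(u, b, lam)
    return u

# Synthetic test signal: a sine corrupted by Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
smoothed = forward_euler(noisy)
```

The explicit time step must satisfy a stability restriction (here dt below roughly 2/(1 + 4*lam)), which is precisely why many small steps, i.e. many iterations, are required.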
@article{ascher2007artificial,
  title   = {Artificial Time Integration},
  author  = {Uri Ascher and Hui Huang and Kees van den Doel},
  journal = {BIT Numerical Mathematics},
  volume  = {47},
  pages   = {3--25},
  year    = {2007}
}
