Monte Carlo Strategies in Scientific Computing
An early experiment that conceives the basic idea of Monte Carlo computation is known as "Buffon's needle" (Dorrie 1965), first stated by Georges-Louis Leclerc, Comte de Buffon, in 1777. In this well-known experiment, one throws a needle of length l onto a flat surface with a grid of parallel lines with spacing D (D > l). It is easy to compute that, under ideal conditions, the chance that the needle will intersect one of the lines is 2l/(πD). Thus, if we let P_N be the proportion of "intersects" in N throws, we can have an estimate of π as

    π̂ = lim_{N→∞} 2l/(D P_N),

which will "converge" to π as N increases to infinity. Numerous investigators actually used this setting to estimate π. The idea of simulating random processes so as to help evaluate certain quantities of interest is now an essential part of scientific computing.

A systematic use of the Monte Carlo method for real scientific problems appeared in the early days of electronic computing (1945-55) and accompanied the development of the world's first programmable "super" computer, MANIAC (Mathematical Analyzer, Numerical Integrator and Computer), at Los Alamos during World War II. In order to make good use of these fast computing machines, scientists (Stanislaw Ulam, John von Neumann, Nicholas Metropolis, Enrico Fermi, etc.)
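The Buffon's needle estimator above can be sketched as a short simulation. The following is a minimal illustration, not from the book itself; the function name and default parameters are my own. A needle's position is summarized by the distance x from its midpoint to the nearest line (uniform on [0, D/2]) and its acute angle θ to the lines (uniform on [0, π/2]); it intersects a line when x ≤ (l/2) sin θ. Note the mild circularity that the simulation itself draws θ using the value of π.

```python
import math
import random

def estimate_pi_buffon(n_throws, l=1.0, d=2.0, seed=0):
    """Estimate pi by Buffon's needle: drop a needle of length l on a
    grid of parallel lines with spacing d (d > l) and count intersects.
    Since P(intersect) = 2l / (pi * d), we return 2l / (d * P_N)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        # Midpoint-to-nearest-line distance and acute needle angle.
        x = rng.uniform(0.0, d / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)  # uses pi to simulate pi
        if x <= (l / 2.0) * math.sin(theta):
            hits += 1
    p_n = hits / n_throws  # observed proportion of intersects
    return 2.0 * l / (d * p_n)

print(estimate_pi_buffon(200_000))
```

As the text notes, the estimate converges only as N grows; the standard error of P_N shrinks like N^(-1/2), so even hundreds of thousands of throws give only two or three correct digits of π.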