Today's Topics:

     Arnoldi's Method
     Change of Address Paul Van Dooren
     PRLB Going Down .. Forever
     Maybe We Should Call It "Lagrangian Elimination"
     Special Issue of LAA
     Contents: SIAM Review
     IUTAM Symposium on Inverse Problems

-------------------------------------------------------
From: Flavio Sartoretto <SARTORET%IPDUDMSA.BITNET@ICINECA.CINECA.IT>
Date: Fri, 28 Jun 91 09:38:46 EDT
Subject: Arnoldi's method.
We would like to compare a code for the eigenanalysis of sparse nonsymmetric
matrices with Arnoldi's method, proposed in "W. E. Arnoldi, The principle
of minimized iteration in the solution of the matrix eigenvalue problem,
Quart. Appl. Math., vol. 9, pp. 17-29 (1951)", or with one of its variants.
If anyone has a FORTRAN code, we would be glad to receive a copy.
Thanks in advance from
G. Pini and F. Sartoretto.
University of Padua
Dipartimento Metodi e Modelli Matematici
Italy
Please send answers to SARTORET at IPDUDMSA.bitnet
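For reference, here is a minimal sketch of the basic Arnoldi iteration,
written in Python with NumPy purely as an illustration of the method in
question; it is not the FORTRAN code being sought. The iteration builds an
orthonormal Krylov basis Q and a small upper Hessenberg matrix H whose
eigenvalues (the Ritz values) approximate eigenvalues of A.

    import numpy as np

    def arnoldi(A, v0, m):
        """Basic Arnoldi iteration: A Q[:, :m] = Q @ H, with Q having
        orthonormal columns and H upper Hessenberg of size (m+1) x m."""
        n = len(v0)
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):               # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:              # invariant subspace found
                return Q[:, :j + 1], H[:j + 1, :j + 1]
            Q[:, j + 1] = w / H[j + 1, j]
        return Q, H

    # Ritz values from a 30-step run on a random nonsymmetric matrix
    A = np.random.rand(200, 200)
    Q, H = arnoldi(A, np.random.rand(200), 30)
    k = H.shape[1]
    ritz = np.linalg.eigvals(H[:k, :k])

Variants of the method restart or reorthogonalize this basic loop.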
------------------------------
From: Paul Van Dooren <vandooren@prlb.philips.be>
Date: Tue, 25 Jun 91 10:39:19 N
Subject: Change of Address Paul Van Dooren
Change of address of Paul Van Dooren
Next academic year I will be moving to the University of Illinois at
Urbana-Champaign, where I will take a position as full professor in the
Department of Electrical and Computer Engineering.
For all future correspondence, please use the following addresses:
before August 10, 1991, use my current home address:
Paul Van Dooren
Herfstlaan 4
B-3010 Kessel-Lo (BELGIUM)
Tel Home: +32 16 255148
Fax (c/o Mark Moonen): +32 16 22 18 55
after August 10, 1991, use my new office address:
Paul Van Dooren
University of Illinois at Urbana-Champaign
Department of Electrical and Computer Engineering
1101 West Springfield Avenue
Urbana, Ill 61801 (U.S.A.)
For e-mail use vandooren@prlb.philips.be or vdooren@esat.kuleuven.ac.be.
Messages sent to these addresses will be forwarded to me at all times.
------------------------------
From: Paul Van Dooren <vandooren@prlb.philips.be>
Date: Tue, 25 Jun 91 15:56:49 N
Subject: PRLB Going Down .. Forever
Death of a Research Lab
As a consequence of the present difficult economic situation of Philips,
Philips Research Laboratory Brussels will close down on June 30, 1991, and
all its members will be laid off.
Philips Research Laboratory Brussels (PRLB) was created in 1963. Its mission
was to provide the Philips Concern with research support in applied
mathematics and computer science. Over the years PRLB gained an international
reputation for the excellence of its research work in various fields,
especially in applied mathematics. Among its contributions are research in
circuit theory, coding theory and combinatorics, signal processing, neural
nets, numerical linear algebra, telecommunications and cryptography. A number
of PRLB experts were well known within the NANET community. The closing of
PRLB means the disappearance of one of the very few European industrial
research centers in applied mathematics.
(Current) PRLB members working in applied mathematics include:
Xavier Aubert
Chris Blondia
Pierre-Jacques Courtois
Marc Davio
Philippe Delsarte
Claude Dierieck
Yves Genin
Yves Kamp
Vinciane Lacroix
Benoit Macq
Philippe Piret
Jean-Jacques Quisquater
Christian Ronse
Guy Scheys
Pierre Semal
Dirk Slock
Andre Thayse
Vincent Van Dongen
Paul Van Dooren
Philips Research Laboratory Brussels
Avenue Albert Einstein, 4
B-1348 Louvain la Neuve (Belgium)
------------------------------
From: G. W. Stewart <stewart@cs.umd.edu>
Date: Fri, 21 Jun 91 14:23:37 -0400
Subject: Maybe We Should Call It "Lagrangian Elimination"
While I was doing some library work, I came across an article by
Lagrange, published in 1759, which anticipates Gaussian elimination.
He was concerned with determining sufficient conditions for a
stationary point of a function Z of several variables to be a
minimum (or a maximum). Essentially, he proves that if Gaussian
elimination is performed on the matrix of second derivatives and the
pivots are all positive then the stationary point is a minimum.
Lagrange first considers the case of two variables and derives the
usual conditions found in college calculus texts. The important
part, though, is his treatment of the case of three variables. I
have appended a rough translation of the relevant passage to this
note (it is in vanilla LaTeX). There are three observations to make
about the passage.
1. Lagrange solved the problem by reducing it from three variables
to the two variable problem he treated before. Since it is clear
from the case of three variables how to start the procedure for any
number of variables, his development amounts to a recursive
derivation of Gaussian elimination. Indeed, Lagrange goes on to
say, "The same theory can be extended to functions of four or more
variables. Anyone who has truly grasped the spirit of the reductions I
have employed up to now will be able to discover the reductions
appropriate to any particular case."
2. Lagrange develops the algorithm as a method for simplifying a
quadratic form--just as Gauss did in the Theoria Motus, where he
first presented his version of the method.
3. Lagrange did not use the method to solve linear systems, as Gauss
did. But his application of the method is no less worthy.
\documentstyle[12pt]{article}
\setlength{\topmargin}{.25in}
\setlength{\textheight}{7.5in}
\setlength{\oddsidemargin}{.375in}
\setlength{\evensidemargin}{.375in}
\setlength{\textwidth}{5.75in}
\begin{document}
\thispagestyle{plain}
\begin{center}
Excerpt from
\smallskip\\
\large
Recherches sur la
M\'ethode de Maximis et Minimis
\medskip\\
J.-L. Lagrange\footnote{{\it Miscellanea Taurinensia\/}, t.~I,
1759. Also in {\it Oeuvres\/}, v.~I, 1--16. Translated
by G. W. Stewart.}
\end{center}
6. When there are three variables, say $t$, $u$, $x$,
the differential $d^2Z$ takes the form
\[
d^2Z = Adt^2 + 2Bdt\,du + Cdu^2 + 2Ddt\,dx + 2Edu\,dx + Fdx^2,
\]
which we first reduce to the form
\[
A \left(dt + \frac{Bdu}{A} + \frac{Ddx}{A} \right)^2
+ \left(C - \frac{B^2}{A}\right)du^2
+ 2\left(E - \frac{BD}{A}\right)du\,dx
+ \left(F - \frac{D^2}{A}\right)dx^2.
\]
Setting
\[
C - \frac{B^2}{A} = a, \quad
E - \frac{BD}{A} = b, \quad
F - \frac{D^2}{A} = c,
\]
we get
\[
d^2Z = A \left(dt + \frac{Bdu}{A} + \frac{Ddx}{A} \right)^2
+ a\,du^2 + 2b\,du\,dx + c\,dx^2.
\]
If we now operate on the last three terms as we did in
section (4) our total differential $d^2Z$ becomes
\[
A \left(dt + \frac{Bdu}{A} + \frac{Ddx}{A} \right)^2
+ a\left(du + \frac{b\,dx}{a}\right)^2
+ \left(c - \frac{b^2}{a}\right) dx^2.
\]
But since the squares $\displaystyle\left(dt + \frac{Bdu}{A} +
\frac{Ddx}{A} \right)^2$, $\displaystyle\left(du +
\frac{b\,dx}{a}\right)^2$, and $dx^2$ are always positive, the total
differential will likewise be positive if the coefficients $A$, $a$,
and $\displaystyle c - \frac{b^2}{a}$ each have the sign $+$.
Therefore, we have the following conditions for a minimum:
\[
A > 0, \quad a > 0, \quad ca > b^2.
\]
\end{document}
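To make the criterion in the excerpt concrete (this illustration is added
here and is not part of Lagrange's text), the condition that A, a, and
c - b^2/a all be positive is exactly the condition that Gaussian elimination
without pivoting, applied to the symmetric matrix of second derivatives,
produces only positive pivots. A minimal sketch of that test in Python with
NumPy (the function name and sample numbers are mine):

    import numpy as np

    def pivots_all_positive(Hess):
        """Gaussian elimination (no pivoting) on a symmetric matrix,
        reporting whether every pivot is positive, i.e. whether the
        matrix is positive definite: the sufficient condition for a
        minimum in the excerpt above."""
        M = np.array(Hess, dtype=float)
        n = M.shape[0]
        for k in range(n):
            if M[k, k] <= 0:
                return False
            M[k+1:, k+1:] -= np.outer(M[k+1:, k], M[k, k+1:]) / M[k, k]
        return True

    # Lagrange's three-variable case: the pivots are A, C - B^2/A,
    # and c - b^2/a in his notation.
    A_, B, C, D, E, F = 2.0, 1.0, 3.0, 0.5, 0.25, 1.0
    Hess = np.array([[A_, B, D], [B, C, E], [D, E, F]])
    print(pivots_all_positive(Hess))   # True: this d^2Z is positive definite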
P.S.
After Cleve received the above note and translation, he pointed out
that many digest readers might want to know more about the history of
Gaussian elimination. I am putting the following comments in a
postscript, so that those who are not interested can skip to the next
digest entry.
An independent Chinese tradition of solving linear equations by what
we would call Gaussian elimination with back substitution was
established by the first century BC. Although the method was
described using numerical examples, as was the custom in Chinese
mathematics, the general algorithm is perfectly clear. Curiously,
the tradition was carried to Japan, where it resulted in the
discovery of determinants, some years before Leibniz introduced them
to Europe.
The Western tradition before Gauss is not at all clear. It would
seem that there was a standard elimination method for solving and
inverting systems--probably what we would now call Gauss-Jordan
elimination. If this is the case, Gauss's contribution amounts to
abbreviating the elimination to produce a triangular form, instead of
a diagonal form, and using back substitution to solve the resulting
equation.
The story of Gauss's involvement begins with an entry in his diary, a
remarkable mathematical chronicle that he began during his school
days at Goettingen (a facsimile and transcription may be found in the
tenth volume of his collected works). The entry, dated June 17,
1798, states cryptically, "Probability calculus defended against La
Place." Later Gauss identified this passage as refering to his first
probabilistic justification of the method of least squares, which
method he had discovered a few years earlier. Since elimination is
an essential part of the numerical algorithm, Gauss must have been
thinking about the topic.
The following entry in the diary provides confirmation: "The problem
of elimination so solved that nothing further can be desired." The
people who annotated the diary refer this entry to Gauss's
dissertation, where, as part of a review of past attempts to prove
the fundamental theorem of algebra, he discusses the problem of
elimination of coefficients among polynomials and promises a future
treatment of elimination in general. I take it to also refer to
Gaussian elimination. There is no contradiction in this, since both
topics are intimately related, and Gauss regarded elimination as much
a theoretical tool as a numerical procedure.
The story now jumps to 1809, when Gauss published his astronomical
treatise "The Theory of the Motion of Heavenly Bodies." The last
part of the work contains Gauss's first treatment of least squares.
The details of his justification of the principle are not important
here, and we note only that after deriving the normal equations,
Gauss says they can be solved by the usual [vulgaris] method of
elimination--ironic in view of the fact that this work is usually
cited as the source of the numerical algorithm. However, Gauss does
go on to describe Gaussian elimination, not as a computational tool,
but as a device for getting at the variances of least squares
estimates.
Gauss's derivation is not the one we use today. Gauss starts
with the residual sum of squares W written as a function of the
unknowns, x, y, z, etc. He then decomposes W into the sum
W = au^2 + bv^2 + cw^2 + etc. + const,
where u depends on x, y, z, etc.; v on y, z, etc.; w on z, etc.
(This is exactly what Lagrange does; the unknowns x, y, z correspond
to the differentials dt, du, dx.) It is easy to see that this
elimination procedure corresponds to our usual Gaussian elimination
applied to the augmented normal equations.
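To make that correspondence concrete, here is a small numerical check (a
sketch added here, assuming NumPy, and restricted for simplicity to the
purely quadratic part of W): eliminating on a symmetric positive definite
matrix N gives the factorization N = L D L^T with L unit lower triangular,
the coefficients a, b, c, etc. of the squares are the pivots on the diagonal
of D, and the linear combinations u, v, w, etc. are the entries of L^T times
the vector of unknowns.

    import numpy as np

    def ldlt(N):
        """L D L^T factorization of a symmetric positive definite matrix
        by outer-product Gaussian elimination; diag(D) holds the pivots."""
        N = np.array(N, dtype=float)
        n = N.shape[0]
        L = np.eye(n)
        for k in range(n):
            L[k+1:, k] = N[k+1:, k] / N[k, k]
            N[k+1:, k+1:] -= np.outer(L[k+1:, k], N[k, k+1:])
        return L, np.diag(np.diag(N))

    # A small symmetric positive definite "normal equations" matrix
    N = np.array([[4.0, 2.0, 1.0],
                  [2.0, 3.0, 0.5],
                  [1.0, 0.5, 2.0]])
    x = np.array([1.0, -2.0, 3.0])

    L, D = ldlt(N)
    u = L.T @ x   # u[0] involves all unknowns, u[1] all but the first, etc.
    print(np.isclose(x @ N @ x, sum(D[i, i] * u[i]**2 for i in range(3))))
    # prints True: x^T N x = a*u^2 + b*v^2 + c*w^2 with a, b, c the pivots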
Only in 1810, in an article entitled "Disquisitio de elementis
ellipticis Palladis," does Gauss get around to setting down the
computational algorithm. He gives what today we would call the
outer-product form of the algorithm, both in general formulas and in
a numerical example.
Gauss returned to the subject in the 1820's in a series of three
papers under the common title of "The Theory of the Combination of
Observations that is Least Subject to Error." (I am preparing a
translation with commentary for publication.) There is not space to
go into all the numerical ideas--updating and block Gauss-Seidel
iteration among others--in this work. As far as Gaussian elimination
is concerned, Gauss shows how to compute the quadratic form
x^TA^{-1}x without computing the inverse of A. Here he is thoroughly
modern in using the triangular factor computed by elimination as a
platform from which to perform further computations. In the last
paper, Gauss gives the inner product form of the elimination
algorithm that is usually associated with the name of Crout. At this
point, Gauss has said virtually all there is to say about algorithms
for solving and manipulating dense, positive definite systems.
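As an illustration of that last point (a sketch added here, assuming SciPy,
and using a Cholesky factor rather than Gauss's own formulas), the quantity
x^T A^{-1} x is available from the triangular factor through a single
triangular solve, with no explicit inverse: writing A = L L^T gives
x^T A^{-1} x = || L^{-1} x ||^2.

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    # A small symmetric positive definite matrix and a vector
    A = np.array([[4.0, 2.0, 1.0],
                  [2.0, 3.0, 0.5],
                  [1.0, 0.5, 2.0]])
    x = np.array([1.0, -2.0, 3.0])

    L = cholesky(A, lower=True)              # A = L L^T
    y = solve_triangular(L, x, lower=True)   # solve L y = x
    quad = y @ y                             # equals x^T A^{-1} x

    print(np.isclose(quad, x @ np.linalg.solve(A, x)))   # True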
To return to Lagrange and Gauss, it would be interesting to know if
Gauss had read Lagrange's paper, though there is no need to assume
so: Gauss was quite capable of figuring things out for himself. In
any event, their derivations are essentially the same, although
their applications are different. In working with quadratic forms
they set the tone for much of nineteenth century linear algebra. In
fact, many of our workaday matrix decompositions--e.g., the LU
decomposition, Jordan canonical form, Kronecker canonical form, and
the singular value decomposition--were presented as simplifications
of quadratic or bilinear forms. However, Gauss took the method much
further than Lagrange, and in my opinion we should continue to call
it Gaussian elimination, while acknowledging Lagrange's earlier
contribution.
Pete Stewart
------------------------------
From: Hans Schneider <schneide@math1.uni-bielefeld.de>
Date: Mon, 24 Jun 91 10:35:24 +0200
Subject: Special Issue of LAA
Special issue of Linear Algebra and its Applications:
Numerical Linear Algebra Methods in Control, Signals and Systems
In recent years there has been increased cooperation between
mathematicians and engineers concerning the development and analysis
of fast and reliable numerical linear algebra methods in the areas
of signal processing, system theory and control theory. This special
issue of Linear Algebra and its Applications is devoted to research
papers in these areas, particularly the numerical solution of
-- structured eigenvalue problems,
-- structured linear systems,
-- inverse eigenvalue problems (like pole placement or stabilization),
-- generalized eigenvalue problems,
-- special matrix decompositions,
-- linear quadratic control problems,
-- Riccati, Lyapunov, Sylvester or Stein equations.
Deadline for submission is 31 July 1992.
Special editors for the special issue are:
Gregory Ammar,
Department of Mathematical Sciences, Northern Illinois University,
De Kalb, Illinois 60115-2888, USA
Volker Mehrmann,
Fakultaet fuer Mathematik, Universitaet Bielefeld,
Postfach 8640, D-4800 Bielefeld 1, FRG
Nancy K. Nichols,
Dept. of Mathematics, University of Reading,
Whiteknights Park, GB-Reading, RG6 2AX, Great Britain
Paul Van Dooren,
Department of Electrical and Computer Engineering,
University of Illinois at Urbana-Champaign,
1101 West Springfield Av., Urbana, Illinois 61801, USA.
Papers may be submitted to any of these editors.
------------------------------
From: SIAM Publications Department <SIAMPUBS@WILMA.WHARTON.UPENN.EDU>
Date: Tue, 25 Jun 91 12:43 EDT
Subject: Contents: SIAM Review
SIAM Review Table of Contents Sept. 1991
Vorticity, Turbulence, and Acoustics in Fluid Flow
Andrew J. Majda
The Fractional Fourier Transform and Applications
David H. Bailey and Paul N. Swartztrauber
The Role of the Strengthened Cauchy-Buniakowskii-Schwarz
Inequality in Multilevel Methods
Victor Eijkhout and Panayot Vassilevski
Parallel Algorithms for Sparse Linear Systems
Michael T. Heath, Esmond Ng, and Barry W. Peyton
Book Reviews
Mathematical Problems for Combustion Theory (Jerrold Bebernes and David Eberly); reviewed by J. W. Dold
Estimation Techniques for Distributed Parameter Systems (H. T. Banks and K. Kunisch); reviewed by Donald Ludwig
Modern Cryptology: A Tutorial (Gilles Brassard); reviewed by R. Creighton Buck
Integral Manifolds and Inertial Manifolds for Dissipative Partial Differential Equations (P. Constantin, C. Foias, B. Nicolaenko, and R. Temam); reviewed by Xiaosong Liu
Basic Hypergeometric Series (George Gasper and Mizan Rahman); reviewed by Jet Wimp
Huygens and Barrow, Newton and Hooke: Pioneers in Mathematical Analysis and Catastrophe Theory from Evolvents to Quasicrystals (V. I. Arnol'd, trans. by Eric J. F. Primrose); reviewed by Nicholas D. Kazarinoff
Multiphase Averaging for Classical Systems: With Applications to Adiabatic Theorems (P. Lochak and C. Meunier, trans. by H. S. Dumas); reviewed by Brian Coomes
The Science of Fractal Images (Heinz-Otto Peitgen and Dietmar Saupe, eds.); reviewed by I. J. Good
The Averaged Moduli of Smoothness: With Applications in Numerical Methods and Approximation (Blagovest Sendov and Vasil A. Popov; trans. ed., G. M. Phillips); reviewed by Vilmos Totik
Approximation by Spline Function (Gunther Nurnberger); reviewed by Kurt Jetter
Spline Models for Observational Data (Grace Wahba); reviewed by Larry L. Schumaker
Optimization and Stability Theory for Economic Analysis (Brian Beavis and Ian Dobbs); reviewed by Zvi Artstein
Plant and Crop Modelling (John H. M. Thornley and Ian R. Johnson); reviewed by P. L. Antonelli
The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods (Ernst Hairer, Christian Lubich, and Michel Roche); reviewed by Stephen L. Campbell
Numerical Methods for Conservation Laws (Randall J. Leveque); reviewed by E. Bruce Pitman
Aspects of Quantum Field Theory in Curved Space-Time (S. A. Fulling); reviewed by David G. Boulware
Finite Quantum Electrodynamics (G. Scharf); reviewed by Arthur S. Wightman
Functional Analysis: An Introduction for Physicists (Nino Boccara); reviewed by J. Dimock
Mathematical Foundations of Classical Statistical Mechanics: Continuous Systems (D. Ya. Petrina, V. I. Gerasimenko, and P. V. Malyshev; trans. by P. V. Malyshev and D. V. Malyshev); reviewed by George A. Baker, Jr.
Topics in Boundary Element Research. Vol. 6: Electromagnetic Applications (C. A. Brebbia, ed.); reviewed by W. Lord
Matrix Theory and Applications (Charles R. Johnson, ed.); reviewed by Jeffrey L. Stuart
------------------------------
From: Taketomo Mitsui <a41794a@nucc.cc.nagoya-u.ac.jp>
Date: Sat, 29 Jun 91 16:24:11 JST
Subject: IUTAM Symposium on Inverse Problems
CALL FOR PAPERS
IUTAM Symposium on Inverse Problems in Engineering Mechanics
May 11 - 15, 1992, Tokyo, Japan
Organized by: International Union of Theoretical and Applied Mechanics
(IUTAM).
Co-sponsored by :
The Acoustical Society of Japan (ASJ)
Ecole Polytechnique (Palaiseau/FRANCE)
Electricite de France (Clamart/FRANCE)
The Japan Society for Computational Methods in
Engineering (JASCOME)
The Japan Society for Aeronautical and Space Sciences
(JSASS)
The Japan Society for Industrial and Applied Mathematics
(JSIAM)
The Japan Society for Mechanical Engineers (JSME)
The Japanese Society for Nondestructive Inspection (JSNI)
The Japan Society of Precision Engineering (JSPE)
The Japan Society for Simulation Technology (JSST)
The Japan Society for Technology of Plasticity (JSTP)
OBJECTIVES
The Symposium will provide an opportunity for fruitful discussion of
inverse problems in the engineering sciences, particularly in materials
science, structural mechanics and all other fields related to applied
mechanics, with the aim of achieving breakthroughs in computational and
experimental approaches to inverse problems. Different aspects of solving
ill-posed inverse problems will be considered together from the viewpoints
of applied mathematics and applied mechanics. Although purely mathematical
aspects are not the main objective of the Symposium, papers that focus on
new formulations and new computational methods are welcome, as well as
papers concerning different applications of existing methods, such as
optimization, boundary data control and regularization techniques, to
engineering inverse problems. Several papers of a review nature are also
encouraged.
The main topics of the Symposium are as follows:
Computational and experimental aspects of inverse problems
Non-destructive inspection or evaluation
Identification of contact stresses
Identification of initial or residual stresses
Identification of material properties and constitutive laws
Shape optimization and sensitivity analysis
Inverse problems in process engineering
Inverse problems in metal forming
Inverse problems in dynamics
Active or semi-active control of noise and vibration
Inverse problems in bio-engineering
Image and data processing
Other topics relating to inverse problems
LANGUAGE/PROCEEDINGS
The official language of the Symposium will be English. A Symposium
book containing the selected papers will be published by Springer-Verlag
after the Symposium, while a volume of extended abstracts of all the
papers to be presented will be distributed to attendees at the
Symposium.
TIME SCHEDULE
Return reply form: As soon as possible, to the address below.
Submit an extended abstract of three pages: November 11, 1991.
Submit the final manuscript: May 15, 1992.
SYMPOSIUM CHAIRMEN
Prof. Masataka TANAKA
Dept of Mech. Systems Eng., Faculty of Eng., Shinshu Univ.
500 Wakasato, Nagano 380, Japan
Tel: +81 - 262 - 26 - 4101
Fax: +81 - 262 - 24 - 6515
Prof. H.D. BUI
Laboratoire de Mecanique des Solides, Ecole Polytechnique
Palaiseau 91128, France
Tel: +33 - 1 - 69334786 or - 47654355
Fax: +33 - 1 - 69333026
CONTACT ADDRESS
Mr. K. Sato
JASCOME Office, c/o Kozo Keikaku Engineering Inc.
Dai-ichi Seimei Building 24F, 2-7-1 Nishi-shinjuku, Shinjuku-ku, Tokyo 163
Japan
Tel: + 81 - 3 - 3348 - 0644
Fax: + 81 - 3 - 3346 - 1274
------------------------------
End of NA Digest
**************************