This question leads to a new generation of elimination algorithms based on
the abstract data type Directed Acyclic Graph, ``DAG" for short (see
Giusti & Heintz 1993; Giusti *et al.* 1993; Fitchas *et al.* 1995;
Krick & Pardo 1994, 1996; Puddu & Sabia 1997; Giusti *et al.* 1995, 1996,
1997a, 1997b; Bank *et al.* 1996). The cornerstone of the new algorithms
and their complexity analysis is the observation that the polynomials
appearing as intermediate results during the execution of any elimination
process typically admit a straight-line program encoding which is much
shorter than their usual dense or sparse representation.
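To make the size gap concrete, consider the following toy sketch (an assumed example, not the source's data structure): the polynomial (x + 1)^(2^k) has 2^k + 1 coefficients in dense representation, yet a straight-line program of only k + 1 instructions evaluates it.

```python
# Toy illustration (not the authors' data structure): the polynomial
# p(x) = (x + 1)^(2^k) needs 2^k + 1 coefficients in dense
# representation, yet a straight-line program of k + 1 instructions
# (one addition followed by k squarings) computes it.

def eval_slp(x, k):
    """Evaluate p(x) = (x + 1)^(2^k) with a length-(k + 1) straight-line program."""
    v = x + 1              # instruction 0: one addition
    for _ in range(k):     # instructions 1..k: repeated squaring
        v = v * v
    return v

# For k = 5 the dense representation has 2^5 + 1 = 33 coefficients,
# while the program consists of only 6 instructions.
assert eval_slp(2, 5) == 3 ** 32
```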
Furthermore, if we consider these polynomials as being
univariate with respect to some distinguished variable, the complexity of
their interpolation is polynomial in their degree and their circuit size.
The new algorithms use circuit (straight-line program) encoding for the
representation of polynomials as far as possible and employ coefficient
representation with respect to at most one single variable. These
``univariate" polynomials which appear during our elimination procedures are
eliminating forms and describe suitable projections of the algebraic varieties
produced by the algorithm. Being eliminating forms, they have inherently
``small" circuit size (i.e. circuit size which is polynomial in their degree
and the length of the input).
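The interpolation step just mentioned can be sketched as follows (a hedged toy version, not the source's procedure): a univariate polynomial given only as a black-box circuit of degree at most d is brought into coefficient representation by d + 1 circuit evaluations plus O(d^2) arithmetic operations.

```python
from fractions import Fraction

# Hedged sketch (not the authors' procedure): recover the dense
# coefficients of a degree-<=d univariate polynomial given only as a
# black-box circuit, by evaluating it at d + 1 points and interpolating.

def interpolate(circuit, d):
    """Coefficients c[0..d] (low to high) of the polynomial computed by `circuit`."""
    xs = [Fraction(i) for i in range(d + 1)]
    ys = [circuit(x) for x in xs]          # d + 1 circuit evaluations
    # Newton's divided differences.
    newton = list(ys)
    for j in range(1, d + 1):
        for i in range(d, j - 1, -1):
            newton[i] = (newton[i] - newton[i - 1]) / (xs[i] - xs[i - j])
    # Expand the Newton form  sum_i newton[i] * prod_{j<i} (x - xs[j]).
    coeffs = [Fraction(0)] * (d + 1)
    basis = [Fraction(1)]                  # coefficients of the running product
    for i in range(d + 1):
        for k, b in enumerate(basis):
            coeffs[k] += newton[i] * b
        new_basis = [Fraction(0)] + basis  # multiply the basis polynomial by x ...
        for k, b in enumerate(basis):
            new_basis[k] -= xs[i] * b      # ... and subtract xs[i] times it
        basis = new_basis
    return coeffs

# The circuit for (x + 1)^2 yields the dense coefficients 1, 2, 1.
assert interpolate(lambda x: (x + 1) ** 2, 2) == [1, 2, 1]
```

Exact rational arithmetic is used here only to keep the toy example free of rounding issues; the cost statement in the text concerns arithmetic operations, not bit operations.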

This observation allows us to compress periodically, during the procedure, the
potentially expanding DAGs which encode our data. The resulting sequential
time complexity of our elimination procedures is then polynomial in all
parameters which measure their behavior: the extrinsic parameters (number of
variables, degree of the input equations, and their circuit length) and an
intrinsic parameter, a suitably defined ``degree of the input
system" (see Giusti *et al.* 1995, 1996, 1997a, 1997b; Bank
*et al.* 1996). As long as arithmetic operations are counted at unit
cost, this description of the complexity of our algorithms is
quite exact.

For bit complexity considerations one has to take into account the (logarithmic) height of the input polynomials and a suitably defined ``height" or ``arithmetic degree" of the input system. Again it turns out that the algorithms have a bit complexity which is polynomial in the aforementioned parameters. These state-of-the-art algorithms permit us to distinguish between ``well suited" and ``ill posed" polynomial equation systems. The well-suited systems are those whose degree (and height) are ``small" in comparison with the product of the degrees of the input polynomials (this quantity is called the ``Bézout number" of the system). Ill-posed systems are those whose degree (and height) come close to the Bézout number (combinatorial problems typically produce such systems).
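The Bézout number just introduced is simply the product of the input degrees; a minimal sketch (the helper name is ours):

```python
from math import prod

# Hedged sketch: the Bezout number of a system f_1 = ... = f_n = 0 is
# the product of the degrees of its equations.  "Well suited" systems
# have geometric degree (and height) far below this bound; "ill posed"
# ones nearly attain it.

def bezout_number(degrees):
    """prod_i deg(f_i) for the given list of input degrees."""
    return prod(degrees)

# Already ten quadratic equations give the single-exponential bound
# 2^10 = 1024 on the degree of the system.
assert bezout_number([2] * 10) == 1024
```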

In this context let us remark that any system which is ``well suited" in the
above sense is provably difficult to tackle by pure
numerical methods (this follows from an application of a suitable
multivariate version of Liouville's classical theorem on diophantine
approximation; see Giusti *et al.* (1997a)).

To summarize, the complexity of the new generation of our algorithms depends essentially on two groups of parameters: a group of syntactical ones, which measure the (extrinsic) length of the input description, and a group of semantical ones, which measure the intrinsic geometric (and arithmetic) character of the input system. With respect to these two groups of parameters, the algebraic and the bit complexity of our algorithms have polynomial character. In particular, there is no exponential (or hyperexponential) dependence of the complexity of our algorithms on the number of variables occurring in the input equations, as is the case in rewriting-based procedures. Moreover, our input systems may depend on symbolic parameters, as often happens in application problems in engineering, physics or chemistry. (By the way, symbolic parameters often model the dependence of the input system on ``numerically given real values".) Our algorithms accept straight-line programs as inputs, and this representation depends only linearly on the number of variables.

On the other hand, the degree of the input system is bounded by a function which is polynomial in the degree of the input equations and single exponential in the number of variables to be eliminated (namely the Bézout number of the system). At worst (and this is the generic case) this bound is attained, which makes the worst-case complexity of our algorithms exponential. Therefore they are infeasible for a general treatment of all possible systems. Nevertheless we claim that in many practical applications the geometric (and arithmetic) degree of the system remains ``small", and in this case our algorithms are able to take this particular circumstance into account (rewriting-based algorithms generally cannot do so). Thus the question arises whether there exists a possibility of finding algorithms whose complexity is independent of the degree of the input system and polynomial only in the length of the input given by a straight-line program or in sparse representation. As we have seen before, one has to expect a negative answer to this question.

Closest to our theoretical complexity results (however not to
our algorithms) comes the work of M. Shub and S. Smale about the
complexity of
finding approximate zeroes of polynomial equation systems (recall that an
approximate zero is a starting point for Newton iteration). Recognizing
that there is no straightforward comparison between
symbolic and numerical methods in polynomial equation solving, we content
ourselves with pointing out two fundamental differences between our
algorithms and theirs (see e.g. Shub & Smale (1994)). First of
all, the bit complexity aspect of the computations and its relation to
diophantine approximation is not considered in the work of M. Shub and S.
Smale. Secondly, the input must be given in dense representation, and the best
complexity behaviour is only obtained for *generic*
systems (which makes the
distinction between well-suited special and ill-posed generic systems
obsolete). As a consequence they obtain interesting worst-case bounds for
the task of approximating zeroes of *generic*
polynomial equation systems, but they
do not produce problem-adapted algorithms for solving *special*
polynomial equation systems as they occur in practical applications. Similar
observations may be made with respect to another research direction which is
oriented toward the development of fast algorithms for sparse systems (see
e.g. Canny & Emiris (1993), Emiris (1996)).
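The notion of an approximate zero mentioned above can be illustrated in the simplest univariate case (a toy sketch of ours, not the Shub–Smale algorithm): an approximate zero is a starting point from which Newton's iteration converges rapidly to an actual zero.

```python
# Toy illustration (ours, not Shub-Smale's algorithm): an approximate
# zero is a starting point from which Newton's iteration converges
# rapidly to an actual zero.  Univariate case, f(x) = x^2 - 2.

def newton(f, df, x0, steps=6):
    """Run `steps` Newton iterations x <- x - f(x)/f'(x) from x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# 1.5 is an approximate zero of x^2 - 2: a handful of steps reach
# sqrt(2) to machine precision.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
assert abs(root - 2.0 ** 0.5) < 1e-12
```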

The complexity analysis of our algorithms is primarily based on a theoretic (in fact ``seminumeric", ``arithmetic" or ``algebraic") model which counts arithmetic operations at constant instead of ``real life" bit costs. This complexity model was created in the fifties by A. M. Ostrowski and A. N. Kolmogorov and reflects quite well the computer science aspect of floating-point arithmetic (disregarding, however, aspects of numeric stability). The theoretic complexity analysis in terms of this model would therefore suggest a numerical implementation of our algorithms making use of already existing packages with highly performant floating-point arithmetic. Numeric subroutines have the advantage that they can be combined very well with our abstract data type ``straight-line program" (observe that our notion of ``straight-line program" models the compiled form of a FORTRAN program).

The only theoretic (not pragmatic) argument which speaks against this symbolic/numeric hybridization is based on the mathematical theory of diophantine approximation. One of the key results of this theory is the theorem of Liouville mentioned above (in fact it was the starting point of this field in the 19th century), saying that the solutions of polynomial equation systems are hard to approximate (because they are typically algebraic numbers). This result implies that we will never dispose of a satisfactory mathematical theory which justifies theoretically what we do practically when combining floating-point arithmetic with our algorithms. We think that it is worth testing the pragmatic value of this hybridization, disregarding the lack of theoretic foundation, in the hope that the ``worst case" corresponding to Liouville's estimate is practically less relevant.
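For the reader's convenience, the classical univariate form of Liouville's theorem invoked here reads as follows (a standard statement, supplied by us, not taken from the source):

```latex
% Liouville (1844): an algebraic number cannot be approximated by
% rationals essentially better than with error q^{-d}.
\[
  \alpha \in \mathbb{R} \text{ algebraic of degree } d \ge 2
  \;\Longrightarrow\;
  \exists\, c(\alpha) > 0 \;\; \forall\, p/q \in \mathbb{Q} : \quad
  \left| \alpha - \frac{p}{q} \right| \;>\; \frac{c(\alpha)}{q^{d}} .
\]
```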