The Structure of Parameterized Complexity Classes (III)


Parameterized complexity theory provides a framework for a refined analysis of hard algorithmic problems.

Classical complexity theory analyzes and classifies problems by the amount of a resource, usually time or space, that is required by algorithms solving them. It was a fundamental idea, going back to the work of Hartmanis and Stearns in the early 1960s, to measure the required amount of the resource as a function of the size of the input. This has led to a manageable variety of complexity classes and a clean-cut theory of intractability, most importantly in the form of the theory of NP-completeness. However, measuring complexity only in terms of the input size means ignoring all structural information about the input instances in the resulting complexity theory. Sometimes this makes problems appear harder than they typically are. Parameterized complexity theory takes a step back and measures complexity not only in terms of the input size, but also in terms of a parameter, a numerical value that may depend on the input in an arbitrary way. The main intention of parameterized complexity theory is to address complexity issues in situations where we know that the parameter is comparatively small.

Consider, for example, the problem of evaluating a database query. Its input has two parts, a database and the query. Observe that these two parts will usually differ in size quite significantly; the database will be much larger than the query. A natural parameter for a parameterized complexity analysis of this problem is the size of the query. As a more theoretically motivated example, consider approximation schemes for optimization problems. Their input consists of a problem instance and an error bound ε. A natural parameter is 1/ε. If we can accept an error of 5% in an approximation, we have a parameter value 1/ε = 20 for our approximation scheme. Typical parameters for many algorithmic problems on graphs are the tree-width or the maximum degree of the input graph. Numerous other examples of naturally parameterized problems can be found in other application areas such as automated verification, artificial intelligence, or computational biology.
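In the standard formalization of the field (stated here for illustration; the notation is not part of the original project description), a parameterized problem is a classical problem Q together with a parameterization κ that assigns a natural number to every instance:

  \[
    (Q, \kappa), \qquad \kappa \colon \Sigma^{*} \to \mathbb{N} .
  \]

For the examples above, κ maps a pair (database, query) to the length of the query, an instance of an approximation scheme with error bound ε to ⌈1/ε⌉ (so an error tolerance of 5% gives the parameter value 20), and a graph to its tree-width or its maximum degree.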

The central notion of parameterized complexity theory is fixed-parameter tractability. It relaxes the classical notion of tractability, polynomial-time solvability, by admitting algorithms whose "non-polynomial behavior" is confined by the parameter.
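In the standard textbook definition (spelled out here for illustration rather than quoted from the project description), a problem with parameterization κ is fixed-parameter tractable if some algorithm decides every instance x in time

  \[
    f(\kappa(x)) \cdot |x|^{O(1)}
  \]

for a computable function f that depends only on the parameter. Any super-polynomial growth is thus confined to the factor f(κ(x)), while the dependence on the input size |x| stays polynomial; the class of all such problems is denoted FPT.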

Of course, algorithms have always been analyzed and optimized in terms of many different input parameters, and no complexity theory was needed to do this. The main contribution of the theory is to provide a framework for establishing the intractability of certain problems. In the absence of techniques for actually proving lower bounds for natural problems, the main goal of such a theory is to classify problems into complexity classes by means of suitable reductions. Since the parameterized theory is two-dimensional, depending not only on the input size but also on the parameter, it is not surprising that it leads to a much larger variety of complexity classes and to more complicated reductions than the classical, one-dimensional complexity theory.
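The reductions in question are the fpt-reductions of the standard theory; the following is a sketch of the usual definition, not a statement taken from the project description. A reduction R from a parameterized problem (Q, κ) to (Q', κ') must satisfy

  \[
    x \in Q \iff R(x) \in Q', \qquad
    \kappa'(R(x)) \le g(\kappa(x)), \qquad
    R \ \text{computable in time} \ f(\kappa(x)) \cdot |x|^{O(1)}
  \]

for computable functions f and g. Such reductions preserve fixed-parameter tractability, and standard parameterized complexity classes such as the levels of the W-hierarchy are closed under them, which is what makes them suitable for the classification described above.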

The goal of this project is to obtain a clearer picture of the complicated structure of parameterized complexity classes and their relation to classical complexity classes.

Principal investigators
Grohe, Martin, Prof. Dr. (Theoretical Computer Science)

Funder
DFG: Research Grants (Sachbeihilfe)

Duration of project
Start date: 03/2007
End date: 07/2011
