Algorithms and Complexity
Solving a computational problem requires resources, and the fundamental question studied in this research area is to determine, as closely as possible, the computational difficulty of basic problems. This general question has many flavors, as there are many different types of resources and many different computational problems. Since larger instances of the same problem are harder to solve, the key object of study is how the resources consumed grow as a function of the size of the input.

The best known resource is time, measured as the number of elementary operations needed to solve the problem at hand. A theoretical notion that mostly agrees well with practice is that a problem is efficiently solvable if and only if it can be solved by an algorithm whose running time grows as a polynomial function of the size of the input. The most famous problem in complexity theory is whether NP equals P. In everyday terms, this is the question of whether, for any problem where a good solution can be verified efficiently, such a solution can also be found efficiently.
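The asymmetry between verifying and finding can be made concrete with satisfiability of Boolean formulas: checking a proposed assignment takes time linear in the formula, while no polynomial-time algorithm is known for finding one. A minimal sketch (the encoding of clauses as signed integers is our own illustrative convention):

```python
def verify_cnf(clauses, assignment):
    """Check a proposed solution to a CNF-SAT instance.

    Each clause is a list of literals: the literal i means "variable i
    is true", -i means "variable i is false". assignment maps each
    variable number to a bool. Runs in time linear in the formula size.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]
print(verify_cnf(clauses, {1: True, 2: False, 3: True}))   # True
print(verify_cnf(clauses, {1: True, 2: False, 3: False}))  # False
```

Verification is easy; the NP vs. P question asks whether a satisfying assignment can also be found in polynomial time.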

Results in algorithms and complexity come in two different forms: algorithmic results, where algorithms are designed and analyzed, and hardness results, where a proof is given, possibly relying on a well-established conjecture such as NP not being contained in P, that a computational problem requires large resources.

Approximation algorithms

For problems where finding the globally optimal solution is known to be hard, it may still be possible to have efficient algorithms that find a solution of provably good quality. One example is to find a solution that is only 10% worse than the optimal solution. We are interested in proving both the existence and nonexistence of such algorithms. In some other situations it is interesting to look for algorithms running in mildly exponential time.
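A standard illustration (not a result from this group, just a textbook example) is the greedy 2-approximation for minimum vertex cover: repeatedly pick an uncovered edge and add both its endpoints. Any optimal cover must contain at least one endpoint of each chosen edge, so the output is at most twice the optimum:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover.

    For each edge not yet covered, add both endpoints. The chosen
    edges form a matching, and any cover needs one endpoint per
    matched edge, so |cover| <= 2 * optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A path on 4 vertices: the optimal cover {2, 3} has size 2,
# the greedy cover has size 4, within the guaranteed factor of 2.
edges = [(1, 2), (2, 3), (3, 4)]
print(vertex_cover_2approx(edges))
```

Whether this factor of 2 can be improved by any efficient algorithm is a well-known open question, illustrating the interplay between algorithmic and hardness results.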

One key result in this area is a hardness result for systems of linear equations over finite fields. If such a system has a solution that satisfies all equations, then it can be found efficiently by Gaussian elimination. If, however, the best solution falsifies even a small constant fraction of the equations (say 1%), then it is computationally difficult to find a solution that does significantly better than a random assignment.
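The easy direction can be sketched in a few lines: Gaussian elimination over GF(2) finds a fully satisfying assignment in polynomial time whenever one exists (the encoding of equations below is our own illustrative convention):

```python
def solve_gf2(rows, n):
    """Solve a linear system over GF(2) by Gaussian elimination.

    rows is a list of (coeffs, rhs) pairs, where coeffs is a list of
    n bits and rhs is a bit. Returns one solution as a bit list, or
    None if no assignment satisfies every equation.
    """
    aug = [list(c) + [r] for c, r in rows]  # augmented matrix
    pivots, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, len(aug)) if aug[r][col]), None)
        if pivot is None:
            continue
        aug[row], aug[pivot] = aug[pivot], aug[row]
        for r in range(len(aug)):
            if r != row and aug[r][col]:
                aug[r] = [a ^ b for a, b in zip(aug[r], aug[row])]
        pivots.append(col)
        row += 1
    if any(aug[r][n] for r in range(row, len(aug))):
        return None  # a row reads 0 = 1: not fully satisfiable
    x = [0] * n
    for i, col in enumerate(pivots):
        x[col] = aug[i][n]
    return x

# x1 + x2 = 1,  x2 + x3 = 0,  x1 + x3 = 1   (all mod 2)
system = [([1, 1, 0], 1), ([0, 1, 1], 0), ([1, 0, 1], 1)]
print(solve_gf2(system, 3))
```

The hardness result says that as soon as no assignment satisfies everything, this algebraic structure is of essentially no help: beating a random assignment by a noticeable margin is already hard.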


The existence of efficient algorithms is central in cryptography. As opposed to many other situations, it is here very interesting to know that a computational problem is infeasible and cannot be solved with existing computational resources. In cryptographic settings this may translate into a cryptosystem being secure or a digital signature being impossible to forge.

Due to the recent progress in the construction of quantum computers, and the often long time horizons in cryptography, it is also interesting to study which cryptographic constructions remain secure if general-purpose quantum computers are built. Of particular interest are algorithms for integer factorization and discrete logarithms. Among our results here are improvements and a more detailed analysis of Shor's original algorithms, as well as an extension of Regev's recent integer factorization algorithm to compute discrete logarithms.

This is a joint project with the Department of Computer Science.


Page responsible: Isaac Ren
Belongs to: Department of Mathematics
Last changed: Feb 27, 2024