People of ACM - Fedor Fomin

July 23, 2024

How has your research focus changed since you began your career?

Quite a bit. I studied mathematics, and my PhD was on pursuit-evasion games on graphs. I proved combinatorial bounds; there were no algorithms or computational complexity involved. Nowadays, almost all of my work revolves around algorithms.

Along with your frequent collaborators Erik Demaine, MohammadTaghi Hajiaghayi, and Dimitrios Thilikos, you received the European Association for Theoretical Computer Science Nerode Prize for your work on bidimensionality. What are bidimensional problems and how did your new algorithms change the paradigm for working with them?

Bidimensionality is a property of optimization problems on graphs. It characterizes a broad range of computational problems that admit much better parameterized, approximation, and kernelization algorithms on sparse classes of graphs, such as planar graphs, map graphs, or graphs excluding a fixed minor, than on general graphs. Examples of bidimensional problems include classical NP-hard problems like Minimum Vertex Cover, Maximum Independent Set, Longest Cycle, and many others. A problem is bidimensional if, in addition to a few natural conditions, any optimal solution of this problem on a (k×k)-grid (viewed as a graph) contains roughly k^2 vertices. Thus, instead of designing a new algorithm for every new problem, it can be enough to determine whether the problem is bidimensional, which is much more straightforward.
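
As a small illustration of the grid condition, here is a minimal Python sketch (names are illustrative) that brute-forces the minimum vertex cover of a (k×k)-grid and confirms that it grows like k^2/2, i.e., quadratically in k:

```python
# Illustrative sketch: check that an optimal vertex cover of the (k x k)-grid
# graph has roughly k^2 vertices, the kind of condition bidimensionality asks for.
from itertools import combinations

def grid_edges(k):
    """Edges of the (k x k)-grid graph; vertices are (row, col) pairs."""
    edges = []
    for r in range(k):
        for c in range(k):
            if r + 1 < k:
                edges.append(((r, c), (r + 1, c)))
            if c + 1 < k:
                edges.append(((r, c), (r, c + 1)))
    return edges

def min_vertex_cover_size(k):
    """Smallest set of vertices touching every edge (exponential brute force)."""
    vertices = [(r, c) for r in range(k) for c in range(k)]
    edges = grid_edges(k)
    for size in range(len(vertices) + 1):
        for cover in combinations(vertices, size):
            chosen = set(cover)
            if all(u in chosen or v in chosen for u, v in edges):
                return size

for k in range(2, 5):
    print(k, min_vertex_cover_size(k), k * k // 2)  # grows like k^2 / 2
```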

Your most downloaded article in the ACM Digital Library is the 2013 article you wrote with Petteri Kaski, “Exact Exponential Algorithms,” which appeared in Communications of the ACM. One insight of the article was that while exact exponential algorithms date back to the early days of computing, recent surprises have emerged. What are a few examples of the “surprises” you were referring to?

There were several exciting breakthroughs in exact exponential algorithms around 2010. For example, consider the Hamiltonian cycle problem, a classical NP-complete problem where we are given a graph on n vertices, and the task is to decide whether the graph has a Hamiltonian cycle, which is a cycle visiting every vertex of the graph exactly once. The naive way to solve this problem would be to go through every possible permutation of the vertices and check whether it corresponds to a cycle in the graph. Since the number of permutations of n elements is n!, the running time of such an algorithm would be dominated by n!. Is this algorithm the best one can hope for? Of course, if P=NP, there would be a polynomial-time algorithm solving our problem, but not many people would bet on that option.
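
A minimal Python sketch of this naive permutation-based algorithm (names are illustrative; fixing the first vertex leaves (n-1)! orders to try):

```python
# Illustrative sketch: the naive n!-time algorithm for Hamiltonian cycle.
from itertools import permutations

def has_hamiltonian_cycle_bruteforce(n, edges):
    """edges: a set of frozenset({u, v}) pairs on vertices 0..n-1."""
    for perm in permutations(range(1, n)):
        order = (0,) + perm          # fix vertex 0 first; (n-1)! orders remain
        if all(frozenset((order[i], order[(i + 1) % n])) in edges
               for i in range(n)):
            return True
    return False

# Example: the 4-cycle 0-1-2-3-0 has a Hamiltonian cycle.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(has_hamiltonian_cycle_bruteforce(4, edges))  # True
```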

A better algorithm for this problem, based on dynamic programming, was discovered in 1962 independently by Bellman and by Held and Karp. This algorithm runs in time roughly 2^n, which is significantly better than n!, and this is the algorithm you usually teach as an example of dynamic programming in an introductory course on algorithms. For almost half a century, there was no progress on this problem, and people started to believe that maybe 2^n is the natural computational barrier for resolving Hamiltonicity. This is why Andreas Björklund's paper breaking the 2^n "natural" barrier came as a big surprise. Similar surprises occurred with several other old problems like Graph Coloring or Maximum Cut. We ended that survey on an optimistic note, writing that new discoveries could be just around the corner. Still, I am not aware of any progress on any of the three open problems we mentioned in the conclusion.
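
For concreteness, here is a minimal Python sketch of the Bellman and Held-Karp dynamic program described above, which runs over subsets of vertices in roughly 2^n · n^2 time (names are illustrative):

```python
# Illustrative sketch of the 1962 Bellman / Held-Karp dynamic program for
# Hamiltonian cycle: for every subset S of vertices containing vertex 0,
# record the endpoints of paths that start at 0 and visit exactly S.
def has_hamiltonian_cycle_dp(n, adj):
    """adj[v] is the set of neighbours of v; vertices are 0..n-1."""
    full = (1 << n) - 1
    reach = {1: {0}}                       # the trivial path that stays at 0
    for S in range(1, full + 1):           # bitmasks in increasing order
        for v in reach.get(S, ()):
            for w in adj[v]:
                if not (S >> w) & 1:       # extend the path by a new vertex w
                    reach.setdefault(S | (1 << w), set()).add(w)
    # A Hamiltonian cycle exists if some path visiting all vertices ends
    # at a neighbour of vertex 0.
    return any(0 in adj[v] for v in reach.get(full, ()) if v != 0)

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # the 4-cycle again
print(has_hamiltonian_cycle_dp(4, adj))              # True
```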

One of your research interests is the algorithmic foundations of machine learning. What is a key challenge in this area of work?

Since the early days of computer science, the well-established way to measure an algorithm's quality has been its worst-case time complexity. It evaluates the algorithm's efficiency by its worst performance on any input of a given size. However, with modern algorithmic developments, it appears that for many computational problems, as Tim Roughgarden put it, “the downside of worst-case analysis rears its ugly head.” For instance, in machine learning, almost every model that is interesting enough to use leads to problems that are computationally hard in the worst case, yet in practice many such problems are solved routinely. A similar situation occurs in many other domains, including mathematical optimization, CSP/SAT solvers, and sequencing in bioinformatics. We face a severe gap in our understanding of computation, caused by the discrepancy between the predictions derived from the worst-case analysis of algorithms and their performance in practice. Tim Roughgarden's recent book, Beyond the Worst-Case Analysis of Algorithms, surveys various approaches to this challenge. However, we are still very far from closing this gap.


Fedor V. Fomin is a Professor of Computer Science at the University of Bergen. Within the broad discipline of theoretical computer science, his research interests include graph algorithms, parameterized complexity, algorithmic fairness, algorithmic foundations of machine learning, and combinatorial games.

Fomin is a member of the Norwegian Academy of Science and Letters, the Norwegian Academy of Technological Sciences, the Academia Europaea, and a Fellow of the European Association for Theoretical Computer Science (EATCS). He was recently named an ACM Fellow for contributions to the development of parameterized complexity and exact exponential algorithms.