Teaching at the Chair of Algorithm Engineering
Bachelor
Some Bachelor lectures are given in German and some in English.
Einführung in die theoretische Informatik (A1, compulsory lecture, since winter term 2019/20, annually, in German)
An introduction to fundamental concepts of theoretical computer science. The focus is on automata theory (finite automata, pushdown automata, and Turing machines), formal languages (Chomsky hierarchy), computability (undecidability of the halting problem, Rice's theorem), and complexity (the P-vs.-NP problem, NP-completeness). In addition, first algorithmic approaches for dealing with intractable problems are presented, namely approximate or randomized solutions to NP-hard problems. (Quoted from the SPO.)
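To give a flavour of the automata-theory part, a deterministic finite automaton can be simulated in a few lines; this is only an illustrative sketch, and the example automaton (accepting binary strings with an even number of 1s) is chosen purely for demonstration:

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a DFA given as a dict mapping (state, symbol) -> state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Example DFA: accepts binary strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}

print(run_dfa(delta, "even", {"even"}, "1011"))  # False (three 1s)
print(run_dfa(delta, "even", {"even"}, "101"))   # True (two 1s)
```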
Algorithmen und Datenstrukturen II (W08-16, since summer term 2019, annually, in German)
The module Algorithmen und Datenstrukturen II extends and deepens the contents of the compulsory module Algorithmen und Datenstrukturen. On the algorithmic side, it covers topics such as shortest paths, maximum flows, and string matching. Regarding data structures, variants of heaps, search trees, and hashing are considered in particular. The general focus is on efficient algorithms and the data structures required for them.
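As a small sketch of how an algorithm and its supporting data structure interact, Dijkstra's shortest-path algorithm can be implemented with a binary heap; the example graph below is made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph with non-negative
    edge weights, given as {u: [(v, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]  # binary heap as priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```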
Exact Exponential Algorithms (W06-04, research-oriented, starting winter term 2021/22, every two years, in English)
This lecture focuses on exponential-time algorithms for computing optimal solutions to NP-hard problems. Most of the lecture covers different algorithmic techniques for coping with the intractability of the considered problems while still obtaining algorithms that are as fast as possible. The lecture is based on the book of the same title by Fedor V. Fomin and Dieter Kratsch.
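As one example of this style of algorithm, the classic Held-Karp dynamic program solves the Travelling Salesperson Problem exactly in O(2^n · n^2) time instead of the trivial O(n!); this is a sketch only, with a small made-up distance matrix:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour cost via the Held-Karp dynamic program, O(2^n * n^2).
    dist is an n x n matrix of pairwise distances; the tour starts/ends at 0."""
    n = len(dist)
    # dp[(S, j)] = cheapest path from 0 visiting exactly the cities in S
    # (a frozenset, city 0 excluded) and ending at j in S.
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

d = [[0, 1, 4, 3], [1, 0, 2, 5], [4, 2, 0, 1], [3, 5, 1, 0]]
print(held_karp(d))  # 7, e.g. tour 0-1-2-3-0
```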
Introduction to Combinatorial Optimization (W06-xx, since winter term 2024/25, every two years, in English)
Combinatorial optimization lies at the intersection of discrete mathematics and theoretical computer science. In this lecture, we will learn about core concepts of combinatorial optimization such as network and minimum cost flows, bipartite and general matching, as well as linear programming and the simplex algorithm. As time permits, we will cover further topics such as integer programming and matroids.
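As a small sketch of the matching topic, maximum bipartite matching can be computed with augmenting paths (Kuhn's algorithm); the example graph is illustrative only:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum matching in a bipartite graph via augmenting paths
    (Kuhn's algorithm). adj[u] lists right-side neighbours of left vertex u."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        # Try to match u, possibly re-matching previously matched vertices.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Left vertices 0..2, right vertices 0..2.
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3))  # 3 (a perfect matching exists)
```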
Master
All lectures for Master students are given in English.
Parameterized Algorithms (Q10-30, since winter term 2017/18, every two years)
Parameterized algorithms are an approach for coping with the intractability of NP-hard computational problems. The central idea therein is to quantify the structure of input instances by one or more parameters. Then, one seeks algorithms that provably perform well when the chosen parameters are sufficiently small. In this way, we can formalize the intuition that typical instances may have plenty of useful structure, which distinguishes them from the worst case.
There is a rich toolbox of algorithmic techniques that will be covered in the lecture. These include branching algorithms, kernelization, iterative compression, color coding, dynamic programming on tree decompositions, inclusion-exclusion, and others. The algorithmic techniques are complemented by lower bound methods that allow us to rule out fast parameterized algorithms or to prove optimality of certain running times under appropriate assumptions.
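As a minimal sketch of the branching technique, the following decides Vertex Cover parameterized by the solution size k in O(2^k · m) time, by branching on an endpoint of an uncovered edge; the example instance is made up:

```python
def vc_branch(edges, k):
    """Decide whether the graph given by its edge list has a vertex cover
    of size at most k. Branching yields O(2^k * m) time."""
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # budget exhausted but edges remain
    u, v = edges[0]
    # Any vertex cover must contain u or v: branch on both possibilities,
    # removing the edges covered by the chosen vertex.
    return (vc_branch([e for e in edges if u not in e], k - 1)
            or vc_branch([e for e in edges if v not in e], k - 1))

path = [(1, 2), (2, 3), (3, 4)]  # a path on four vertices
print(vc_branch(path, 1))  # False
print(vc_branch(path, 2))  # True, e.g. cover {2, 3}
```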
Fine-Grained Analysis of Algorithms (new: Q6-18, old: Q10-31, since summer term 2018, every two years)
For many fundamental polynomial-time solvable problems, such as Longest Common Subsequence or All-Pairs Shortest Paths, there has been no substantial improvement in worst-case running time for decades. The area of Fine-Grained Analysis of Algorithms seeks to explain this lack of improvement. By careful reductions between problems, it has been shown that progress on very different problems is often tightly related. For example, there is a truly subcubic algorithm for All-Pairs Shortest Paths if and only if a number of other problems, such as Minimum Weight Triangle, have truly subcubic algorithms. Similarly, many problems can only have faster algorithms if there is a breakthrough for solving the Satisfiability problem.
The lecture covers lower bounds for many fundamental problems. We will discuss the required complexity assumptions, e.g., the hypothesis that there are no truly subquadratic algorithms for the Orthogonal Vectors problem. By means of appropriate reductions, we then obtain lower bounds, or even asymptotic equivalence, for some problems. Optionally, we will discuss implications for dynamic problems, where the input changes over time, and for certain NP-hard problems.
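For illustration, the quadratic-time baseline for Orthogonal Vectors, which the hypothesis asserts cannot be substantially beaten, looks as follows (the example vectors are made up):

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Naive Orthogonal Vectors: given two sets of d-dimensional 0/1
    vectors, decide whether some a in A and b in B are orthogonal.
    Runs in O(n^2 * d) time; the OV hypothesis asserts that no
    O(n^(2-eps) * poly(d)) algorithm exists."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) and (0,1,0) are orthogonal
```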
Exact Exponential Algorithms (W06-04, research-oriented, starting winter term 2021/22, every two years)
See above. This Bachelor lecture is labeled "forschungsorientiert" (research-oriented), and Master students may take one such lecture from the pool of Bachelor lectures.
Approximation Algorithms (Q6-20, starting summer term 2023, every two years)
Many relevant computational problems are by nature not decision but optimization problems, in the sense that one does not simply want a yes-or-no answer but is interested in finding a best solution among a set of possible ones. Famous examples of such problems are scheduling, facility location, and knapsack. Efficiently computing an optimal solution to such problems is often very difficult, usually witnessed by the NP-hardness of the underlying decision problem. However, this does not rule out the efficient computation of good solutions by so-called approximation algorithms.
This lecture is about the design and analysis of approximation algorithms. We will discuss standard methods such as greedy, local search, and cost scaling, and how to assess the quality of approximations in general. Further, we will learn about special types of reductions that transfer approximation results between different optimization problems. Such reductions will also be used to show limitations of approximation algorithms.
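As a minimal example of the greedy method, the classic factor-2 approximation for Vertex Cover takes both endpoints of each uncovered edge (equivalently, of a maximal matching); the example edge list is made up:

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for Vertex Cover: take both endpoints of a
    greedily computed maximal matching. Each matching edge forces at least
    one vertex into any optimal cover, so the result is at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge is uncovered: take both endpoints
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
c = vertex_cover_2approx(edges)
print(c)  # {1, 2, 3, 4}; the optimum {2, 4} has size 2
```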
Efficient Preprocessing (Q06-19, starting summer term 2023, every two years)
Efficient preprocessing refers to the simplification of input instances before starting the actual computation for solving them. Usually the goal is to shrink the input without changing the result of solving it. This is especially useful in the case of NP-hard problems where algorithms may take exponential time to solve inputs, and where polynomial-time preprocessing may therefore greatly reduce the computational effort.
Most of the lecture focuses on the notion of kernelization from parameterized complexity. We will learn how to design and analyze kernelization algorithms for NP-hard problems but also how to prove lower bounds for kernelization. We will also discuss relaxed variants of kernelization such as Turing kernelization and lossy kernelization. Further topics include preprocessing for tractable problems as well as preprocessing under uncertainty.
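A minimal sketch of kernelization, using the classic Buss rules for Vertex Cover (a vertex of degree greater than k must be in any size-at-most-k cover, and a reduced instance with more than k^2 edges has no such cover); the example instance is made up:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover(k). Returns a reduced edge list
    and remaining budget, or None if no cover of size <= k can exist.
    The resulting kernel has at most k^2 edges."""
    while True:
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        high = {v for v, d in degree.items() if d > k}
        if not high:
            break
        # High-degree vertices must be in every cover of size <= k:
        # put them into the solution and remove their edges.
        edges = [e for e in edges if high.isdisjoint(e)]
        k -= len(high)
        if k < 0:
            return None
    if len(edges) > k * k:
        return None  # max degree <= k, so k vertices cover <= k^2 edges
    return edges, k

star = [(0, i) for i in range(1, 6)]  # star with centre 0 of degree 5
print(buss_kernel(star, 2))  # ([], 1): the centre is forced into the cover
```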