Minimal Separators and their Algorithmic Applications - Fedor V. Fomin
For a nonadjacent pair of vertices u, v of a connected graph G, a vertex set S is a minimal (u,v)-separator if it separates u from v but no proper subset of S does. We discuss the following algorithmic applications of minimal separators:
- Polynomial time algorithms for many graph optimization problems on various classes of graphs;
- Exact exponential algorithms for treewidth and many other graph optimization problems;
- Parameterized subexponential algorithms.
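To make the definition concrete, here is a minimal sketch (assuming a graph given as an adjacency-list dict) that checks whether a set S is a minimal (u,v)-separator: S must disconnect u from v, and no S \ {x} may do so.

```python
from collections import deque

def reaches(adj, u, v, blocked):
    """BFS from u to v in adj, treating vertices in `blocked` as removed."""
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen and y not in blocked:
                seen.add(y)
                queue.append(y)
    return False

def is_minimal_uv_separator(adj, u, v, S):
    """S is a minimal (u,v)-separator iff removing S disconnects u from v,
    while removing S \\ {x} does not, for every x in S."""
    if reaches(adj, u, v, set(S)):
        return False          # S does not even separate u from v
    return all(reaches(adj, u, v, set(S) - {x}) for x in S)
```

On the 4-cycle u-a-v-b-u, the set {a, b} is a minimal (u,v)-separator, while any strict superset separates but is not minimal.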
Efficient Algorithms for Hard Problems on Structured Electorates - Neeldhara Misra
Computational social choice is a rapidly evolving research trend concerned with the design and analysis of methods for collective decision making. In this talk, we will focus mostly on voting-inspired scenarios, where we have multiple agents (voters) expressing their preferences over a fixed set of alternatives (candidates), and the goal is to find a consensus (a winner or a committee or a ranking) that, as far as possible, reflects the collective opinions of all the agents.
We will survey some algorithmic developments in the context of problems like winner determination, committee formation, and manipulation, all of which are fundamental to voting scenarios. In particular, we will survey various restricted domains, both in the context of rankings and dichotomous preferences. We will consider examples that demonstrate how the structure in these domains can be exploited to make otherwise hard problems tractable. We will also describe some ideas for detecting whether a given instance belongs to these domains, and more generally, if the instance is close to a structured domain. In the context of the latter, the notion of closeness will usually be based on candidate or voter deletion.
Fine-grained complexity analysis of two classic TSP variants - Mark de Berg
This is joint work with Kevin Buchin, Bart Jansen, and Gerhard Woeginger.
In this talk I will discuss new results on two classic TSP variants.
The first set of results concerns BITONIC TSP: given a set of n points in the plane, compute a shortest tour consisting of two monotone chains. It is a classic dynamic-programming exercise to solve this problem in O(n^2) time. While the near-quadratic dependency of similar dynamic programs for LONGEST COMMON SUBSEQUENCE and DISCRETE FRECHET DISTANCE has recently been proven to be essentially optimal under the Strong Exponential Time Hypothesis, we show that bitonic tours can be found in subquadratic time. More precisely, we present an algorithm that solves BITONIC TSP in O(n log^2 n) time and its bottleneck version in O(n log^3 n) time.
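The classic O(n^2) dynamic program referred to above can be sketched in a few lines; this is the textbook baseline, not the O(n log^2 n) algorithm of the talk. Points are assumed to be (x, y) pairs with distinct x-coordinates.

```python
from math import hypot

def bitonic_tsp(points):
    """Classic O(n^2) DP for the shortest bitonic tour.
    dp[i][j] (i < j) = min length of two disjoint x-monotone chains that
    together cover points 0..j, one ending at point i, the other at point j."""
    pts = sorted(points)
    n = len(pts)
    d = lambda i, j: hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
    if n < 3:
        return 2 * d(0, n - 1) if n == 2 else 0.0
    INF = float('inf')
    dp = [[INF] * n for _ in range(n)]
    dp[0][1] = d(0, 1)
    for j in range(2, n):
        for i in range(j - 1):
            dp[i][j] = dp[i][j - 1] + d(j - 1, j)   # extend the chain ending at j-1
        dp[j - 1][j] = min(dp[k][j - 1] + d(k, j) for k in range(j - 1))
    return dp[n - 2][n - 1] + d(n - 2, n - 1)       # close the tour
```

For three points the bitonic tour is simply the triangle perimeter, which is a quick sanity check on the recurrence.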
The second set of results concerns the popular k-OPT heuristic for TSP in the graph setting. More precisely, we study the k-OPT decision problem, which asks whether a given tour can be improved by a k-OPT move that replaces k edges in the tour by k new edges, for some given constant k. A simple algorithm solves k-OPT in O(n^k) time. For 2-OPT, this is easily seen to be optimal. For 3-OPT we prove that an algorithm with a runtime of the form O*(n^{3-\epsilon}) exists if and only if ALL-PAIRS SHORTEST PATHS in weighted digraphs has such an algorithm. For general k-OPT, it is known that a runtime of f(k) \cdot n^{o(k/\log k)} would contradict the Exponential Time Hypothesis. The results for k=2,3 may suggest that the actual time complexity of k-OPT is \Theta(n^k). We show that this is not the case, by presenting an algorithm that finds the best k-move in O(n^{\lfloor 2k/3 \rfloor + 1}) time. Finally, we show how to beat the quadratic barrier for 2-OPT in two important settings, namely for points in the plane and when we want to solve 2-OPT repeatedly.
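For intuition, the simple O(n^2) algorithm for the 2-OPT case is a direct scan over pairs of tour edges; a 2-OPT move removes two edges and reverses the segment between them. A hedged sketch (function names are illustrative):

```python
def best_2opt_move(tour, dist):
    """Return (gain, i, j) for the best improving 2-OPT move on `tour`, or
    None if the tour is 2-optimal.  The move removes edges
    (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and reverses the segment
    tour[i+1..j]."""
    n = len(tour)
    best = None
    for i in range(n - 1):
        # when i == 0, j = n-1 would pick two edges sharing tour[0]; skip it
        for j in range(i + 2, n - 1 if i == 0 else n):
            a, b = tour[i], tour[i + 1]
            c, e = tour[j], tour[(j + 1) % n]
            gain = dist[a][b] + dist[c][e] - dist[a][c] - dist[b][e]
            if gain > 1e-12 and (best is None or gain > best[0]):
                best = (gain, i, j)
    return best

def apply_2opt(tour, i, j):
    """Perform the move: reverse the segment strictly between the two edges."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
```

On a unit square, the crossing tour 0-2-1-3 admits exactly one improving move, which uncrosses it into the optimal tour 0-1-2-3.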
Deterministic Edge Connectivity in Near-Linear Time - Mikkel Thorup
This is based on joint work with Ken-ichi Kawarabayashi.
We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time.
The previous fastest deterministic algorithm by Gabow from STOC'91 took ~O(m+k^2 n), where k is the edge connectivity, but k could be as big as n-1.
At STOC'96 Karger presented a randomized near linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes.
Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree d, producing a multigraph with ~O(m/d) edges which preserves all minimum cuts of G with at least 2 vertices on each side.
In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank à la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.
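For contrast with the near-linear bound, the textbook baseline computes the edge connectivity as a minimum over n-1 unit-capacity max-flow computations from a fixed source; with BFS augmentation each flow costs O(lambda * m), far from near-linear. A hedged sketch:

```python
from collections import deque

def edge_connectivity(n, edges):
    """Baseline: lambda(G) = min over t != 0 of maxflow(0, t), where each
    undirected edge is replaced by two unit-capacity directed arcs."""
    def max_flow(s, t):
        cap = {}
        adj = [[] for _ in range(n)]
        for u, v in edges:
            for a, b in ((u, v), (v, u)):
                if (a, b) not in cap:
                    cap[(a, b)] = 0
                    adj[a].append(b)
            cap[(u, v)] += 1
            cap[(v, u)] += 1
        flow = 0
        while True:
            # BFS for an augmenting path in the residual graph
            pred = {s: None}
            q = deque([s])
            while q and t not in pred:
                x = q.popleft()
                for y in adj[x]:
                    if y not in pred and cap[(x, y)] > 0:
                        pred[y] = x
                        q.append(y)
            if t not in pred:
                return flow
            y = t                      # push one unit along the path found
            while pred[y] is not None:
                x = pred[y]
                cap[(x, y)] -= 1
                cap[(y, x)] += 1
                y = x
            flow += 1
    return min(max_flow(0, t) for t in range(1, n))
```

A triangle has edge connectivity 2, a path has edge connectivity 1.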
The Index and the Finger - Amihood Amir
Having grown accustomed to the presence of all the web data at our fingertips, it is easy to forget that a mere 25 years ago, an encyclopedia on a CD was the height of information technology.
At the root of efficient access of this staggering amount of data is indexing. The concept of indexing is that one spends time and effort preprocessing the entire data and constructing auxiliary data structures that will make it possible to efficiently answer future queries of the form: “does input instance I appear in our data?”.
Efficient indexing algorithms exist for exact textual retrieval. However, there is a crucial need for indexing approximate matching under various distance metrics, and almost no such efficient indexing exists.
In this talk we describe an exploration of the complexity of indexing. We consider some very similar basic problems whose matching complexity is linear. These problems are blockmass indexing, histogram indexing, difference indexing, and product indexing.
Surprisingly, some of these problems have a different indexing complexity. Moreover, we show that the histogram indexing problem, which has recently been very actively researched, is unlikely to have efficient indexing, since it is 3-SUM hard.
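For binary strings, histogram (jumbled) indexing has a well-known easy case: sliding a window of fixed length changes its 1-count by at most one, so the achievable counts per length form an interval, and storing its endpoints gives O(1) queries. The 3-SUM hardness mentioned above concerns larger alphabets. A minimal sketch of the binary scheme (with a naive O(n^2) build):

```python
def build_jumbled_index(s):
    """For each window length L, store (min, max) number of '1's over all
    length-L substrings of the binary string s.  Every count in between is
    also achievable, since sliding the window changes the count by <= 1."""
    n = len(s)
    ones = [0] * (n + 1)                      # prefix sums of '1's
    for i, ch in enumerate(s):
        ones[i + 1] = ones[i] + (ch == '1')
    index = {}
    for L in range(1, n + 1):
        counts = [ones[i + L] - ones[i] for i in range(n - L + 1)]
        index[L] = (min(counts), max(counts))
    return index

def query(index, length, k):
    """Does some substring of the given length contain exactly k ones?"""
    if length not in index:
        return False
    lo, hi = index[length]
    return lo <= k <= hi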
Integer Linear Programming in Computational Biology - Daniel M. Gusfield
Integer Linear Programming is a versatile modeling and optimization technique that was developed for complex planning and operational decision making. However, it is increasingly used in computational biology in non-traditional ways, most importantly and inventively as a computational tool and language to model and study biological phenomena, to analyze biological data, and to extract biological insight from the models and the data. Integer linear programming is often very effective in solving *instances* of biological problems on realistic data of current importance, even for hard computational problems that lack a worst-case efficient solution method. Moreover, even if a worst-case efficient algorithm is possible for a particular problem, the time and effort needed to find and implement such an algorithm is typically much greater than the time and effort needed to find and implement an ILP formulation for the problem. Then, for any given problem instance, highly engineered ILP solvers can be used to solve the ILP formulation. The improvement of the best ILP solvers has been spectacular, with an estimate that (combined with faster computers) benchmark ILP problems can now be solved 200-billion times faster than twenty-five years ago.
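As a toy illustration of the modeling style (deliberately not a biological application): minimum vertex cover written as the 0/1 ILP "minimize sum_v x_v subject to x_u + x_v >= 1 for every edge (u,v)". Brute-force enumeration stands in for a real ILP solver here, so this only runs on tiny instances.

```python
from itertools import product

def min_vertex_cover_ilp(n, edges):
    """Minimize sum_v x_v subject to x_u + x_v >= 1 for every edge (u, v),
    with x_v in {0, 1}.  Enumeration over all 2^n assignments stands in
    for an ILP solver."""
    best = None
    for x in product((0, 1), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):   # constraints satisfied?
            if best is None or sum(x) < sum(best):
                best = x
    return best
```

The point of the abstract is that, in practice, handing exactly this kind of formulation to a highly engineered solver is often faster to develop and fast enough to run, even for worst-case-hard problems.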
The effectiveness of the best modern ILP solvers on problem instances of importance in biology opens huge opportunities -- broad exploitation of integer programming could have a truly transformative effect on computation in biology and perhaps medicine.
Electrical Flows, Laplacian Systems, and Fast Graph Algorithms - Aleksander Madry
In recent years, a new type of fast algorithm has emerged for various fundamental graph problems. A key object employed in these algorithms is the electrical flow, which corresponds to the solution of a Laplacian linear system.
In this talk, I will discuss this central role of electrical flows in all these developments, as well as sketch their potential further applications to graph algorithms.
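Concretely, an electrical flow is obtained by solving Lx = b for vertex potentials, where L is the graph Laplacian and b the demand vector; the flow on an edge is then the potential difference (divided by the resistance). A pure-Python sketch for unit resistances, with plain Gaussian elimination standing in for the fast Laplacian solvers the talk refers to:

```python
def electrical_flow(n, edges, s, t):
    """Unit resistances: solve L x = b with b[s] = 1, b[t] = -1, grounding
    vertex t; the flow on edge (u, v) is then x[u] - x[v]."""
    # Build the Laplacian L = D - A.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0; L[v][v] += 1.0
        L[u][v] -= 1.0; L[v][u] -= 1.0
    b = [0.0] * n
    b[s], b[t] = 1.0, -1.0
    # L is singular; delete the row/column of t (ground it at potential 0).
    idx = [i for i in range(n) if i != t]
    A = [[L[i][j] for j in idx] + [b[i]] for i in idx]
    m = len(idx)
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(m):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
    x = [0.0] * n
    for k, i in enumerate(idx):
        x[i] = A[k][m] / A[k][k]
    return {(u, v): x[u] - x[v] for u, v in edges}
```

On a path the unit of current flows along every edge; on a 4-cycle with source and sink at opposite corners it splits evenly, half a unit per edge.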
Online and Approximation in Optical Networks and Scheduling - Shmuel Zaks
We present results concerning the complexity of online and approximation algorithms that originated in studies of optical networks. These results have interesting implications to problems stated within scheduling theory. These results relate to problems that optimize the use of components in optical networks, specifically Add-Drop Multiplexers (ADMs) and regenerators.
First we discuss the online version of the problem of minimizing the number of ADMs in optical networks. In this setting, lightpaths need to be colored such that overlapping paths get different colors, paths that share an endpoint may get the same color, and the cost is the total number of endpoints (= ADMs); the key point is that an endpoint shared by two same-colored paths is counted only once. We present lower bounds of 1.75 and 1.5 on the competitive ratio for general networks and path networks, respectively. Both bounds are best possible.
We next present problems that deal with the use of regenerators in optical networks. Given a set of lightpaths in a network and a positive integer d, the lightpaths must be d-satisfied; that is, regenerators have to be placed in such a way that no lightpath traverses more than d hops without meeting a regenerator. We first discuss the online version of the problem of minimizing the number of locations where regenerators are placed. When there is a bound on the number of regenerators in a single node, there is not necessarily a solution for a given input. We therefore distinguish between feasible inputs and infeasible ones; for the latter, our objective is to satisfy the maximum number of lightpaths. For a path topology we consider the case d=2, and show a lower bound of sqrt(t)/2 on the competitive ratio (where t is the number of internal nodes of the longest lightpath) for infeasible inputs, and a tight bound of 3 on the competitive ratio for feasible inputs.
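The d-satisfaction constraint is easy to state in code. A minimal checker, assuming a lightpath is given as its sequence of nodes and a placement as the set of nodes carrying a regenerator (endpoints need none, since the signal starts and ends fresh there):

```python
def d_satisfied(lightpath, regenerators, d):
    """True iff the lightpath never traverses more than d consecutive hops
    without meeting a regenerator at an intermediate node."""
    hops = 0
    for node in lightpath[1:]:
        hops += 1                  # traverse one hop to reach `node`
        if hops > d:
            return False
        if node in regenerators:   # signal is regenerated here
            hops = 0
    return True
```

For example, on a 4-hop lightpath with d=2, a single regenerator placed at the middle node suffices, while one placed off-center does not.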
We then study the problem where we are given a finite set of p possible traffic patterns (each given by a set of lightpaths), and our objective is to place the minimum number of regenerators at the nodes so that each of the traffic patterns is d-satisfied. We prove that the problem is very hard to approximate (it does not admit a PTAS for any d,p > 1).
Last we consider the problem of optimizing the use of regenerators where a bounded number g of lightpaths can share the same regenerator (this corresponds to the notion of grooming in optical networks). The goal is to minimize the total number of regenerators. The problem is known to be NP-hard already for g=2. We present an approximation algorithm whose approximation ratio is at least 3 and at most 4.
Weighted flow time - Naveen Garg
This talk is about the problem of minimizing weighted flow time on a single machine in the offline setting. A quasi-PTAS was given by Chekuri and Khanna [2002], and an O(log log nP) approximation algorithm was given by Bansal and Pruhs [2010]. Obtaining a polynomial-time constant-factor approximation has long been open. I will present our recent work on this problem.
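For reference, the objective: each job j has a release time r_j, processing time p_j and weight w_j, and we minimize sum_j w_j (C_j - r_j), where C_j is the completion time in the schedule. A small checker-and-evaluator for a given preemptive single-machine schedule, represented as (job, start, end) pieces (the representation is illustrative, not from the talk):

```python
def weighted_flow_time(jobs, schedule):
    """jobs: {j: (release, processing, weight)}.
    schedule: non-overlapping (job, start, end) pieces on one machine.
    Checks that no piece runs before its job's release and that each job
    gets exactly its processing time, then returns sum_j w_j (C_j - r_j)."""
    done = {j: 0.0 for j in jobs}
    completion = {}
    for j, s, e in sorted(schedule, key=lambda piece: piece[1]):
        r, p, w = jobs[j]
        assert s >= r and e > s, "piece runs before release or is empty"
        done[j] += e - s
        completion[j] = e              # last piece (by start time) completes j
    assert all(abs(done[j] - jobs[j][1]) < 1e-9 for j in jobs), "wrong work"
    return sum(jobs[j][2] * (completion[j] - jobs[j][0]) for j in jobs)
```

Preemption matters even on two jobs: interrupting a long light job for a short heavy one can cut the objective substantially.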