- Max Bannach, Till Tantau:
Computing Hitting Set Kernels by AC^0-Circuits.
In Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science (STACS 2018), LIPIcs,
2018.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X
\subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$?
This problem, known as the \emph{hitting set problem,} is a
basic problem in parameterized complexity theory. There are well-known
kernelization algorithms for it, which get a hypergraph~$H$ and a
number~$k$ as input and output a hypergraph~$H'$ such that (1)
$H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a
hitting set and (2) the size of $H'$ depends only on $k$
and on the maximum cardinality $d$ of edges in~$H$. The
algorithms run in polynomial time, but are highly
sequential. Recently, it has been shown that one of them can be parallelized
to a certain degree: one can compute hitting set kernels in parallel
time $O(d)$ -- but it was conjectured that this is
the best parallel algorithm possible. We
refute this conjecture and show how hitting set kernels can be
computed in \emph{constant} parallel time. For our proof, we
introduce a new, generalized notion of hypergraph sunflowers and
show how iterated applications of the color coding technique can
sometimes be collapsed into a single application.
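To make the problem statement from the abstract concrete, here is a plain brute-force search for a minimum hitting set. This is only an illustration of the definition, assuming nothing from the paper; it has nothing to do with the AC^0 kernelization itself, and all names are illustrative.

```python
from itertools import combinations

def smallest_hitting_set(vertices, edges):
    """Brute-force search for a minimum hitting set of a hypergraph.

    vertices: iterable of vertices V
    edges:    iterable of sets, the hyperedges E

    Returns a smallest X <= V with X intersecting every edge.
    Exponential time; for definition purposes only.
    """
    vertices = list(vertices)
    for size in range(len(vertices) + 1):
        for candidate in combinations(vertices, size):
            x = set(candidate)
            if all(x & set(e) for e in edges):
                return x
    return set(vertices)

# For edges {1,2}, {2,3}, {3,4}, the set {1,3} is one minimum hitting set.
```

The kernelization algorithms discussed in the abstract avoid this exponential search entirely: they only shrink the instance so that its size depends on $k$ and $d$ alone.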
- Max Bannach, Till Tantau:
Computing Kernels in Parallel: Lower and Upper Bounds.
In Proceedings of the 13th International Symposium on Parameterized and Exact Computation (IPEC 2018), LIPIcs,
2018.
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC$^0$-circuits, a quadratic kernel by TC$^0$-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
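For orientation on the vertex cover kernels mentioned above, here is a hedged Python sketch of the classical sequential Buss kernelization, the standard textbook route to a kernel with at most $k^2$ edges. This is background material, not the paper's circuit constructions, and the function name is an illustrative assumption.

```python
def buss_kernel(edge_list, k):
    """Classical (sequential) Buss kernelization for Vertex Cover.

    High-degree rule: a vertex of degree > k' (remaining budget) must be
    in every vertex cover of size <= k', so take it and delete its edges.
    Afterwards each vertex covers <= k' edges, so more than k'^2 edges
    rule out a solution.

    Returns (kernel_edges, remaining_k, forced_vertices),
    or None if no vertex cover of size <= k exists.
    """
    edges = {frozenset(e) for e in edge_list}
    forced = set()
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k - len(forced):
                forced.add(v)                       # v is forced into the cover
                edges = {e for e in edges if v not in e}
                changed = True
                break
    if len(forced) > k or len(edges) > (k - len(forced)) ** 2:
        return None
    return edges, k - len(forced), forced
```

The paper's point is precisely that this reduction-rule loop is highly sequential; computing comparable kernels with shallow circuits requires different ideas.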
- Max Bannach, Sebastian Berndt:
Practical Access to Dynamic Programming on Tree Decompositions.
In Proceedings of the 26th Annual European Symposium on Algorithms (ESA 2018), LIPIcs,
2018.
Parameterized complexity theory has led to a wide range of algorithmic
breakthroughs over the last decades, but the practicability of these
methods for real-world problems is still not well understood. We investigate the practicability of one
of the fundamental approaches of this field: dynamic programming on tree
decompositions. Indisputably, this is a key technique in parameterized
algorithms and modern algorithm design. Despite the enormous impact of this
approach in theory, it still has very little influence on practical
implementations. The reasons for this phenomenon are manifold. One of them is
the simple fact that such an implementation requires a long chain of
non-trivial tasks (such as computing the decomposition, preparing
it, \dots). We provide an easy way to implement such dynamic programs that only
requires the definition of the update rules.
With this interface, dynamic programs for
various problems, such as \Lang{3-coloring}, can be implemented easily in
about 100 lines of structured Java code.
The theoretical foundation of the success of dynamic programming on tree
decompositions is well understood due to Courcelle's celebrated theorem, which
states that every \textsc{MSO}-definable problem can be efficiently solved if
a tree decomposition of small width is given. We seek to provide practical
access to this theorem as well, by presenting a lightweight model-checker for
a small fragment of \textsc{MSO}. This fragment is powerful enough to
describe many natural problems, and our model-checker turns out to be very
competitive against similar state-of-the-art tools.
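To convey the flavor of such update rules, here is an independent, simplified Python sketch of the 3-coloring dynamic program over a *nice* tree decomposition (leaf, introduce, forget, and join nodes). The paper's interface is in Java and is not reproduced here; all names below are illustrative assumptions.

```python
def three_colorable(root, kids, bag, edges):
    """Decide 3-colorability by DP over a nice tree decomposition.

    root : root node id of the decomposition
    kids : node id -> list of child node ids (0, 1, or 2 children)
    bag  : node id -> frozenset of graph vertices (leaf bags are empty;
           a unary node introduces or forgets exactly one vertex)
    edges: set of frozensets {u, v}, the graph's edges

    A DP table is the set of valid partial colorings of the current
    bag, each stored as a frozenset of (vertex, color) pairs.
    """
    def table(t):
        children = kids.get(t, [])
        if not children:                    # leaf: only the empty coloring
            return {frozenset()}
        if len(children) == 2:              # join: same bag on both sides,
            return table(children[0]) & table(children[1])
        (c,) = children
        child = table(c)
        if bag[t] > bag[c]:                 # introduce vertex v
            (v,) = bag[t] - bag[c]
            out = set()
            for col in child:
                assigned = dict(col)
                for color in range(3):      # keep colorings where v's color
                    if all(assigned.get(u) != color        # avoids all bag
                           for u in bag[t]                 # neighbors of v
                           if frozenset({u, v}) in edges):
                        out.add(col | {(v, color)})
            return out
        else:                               # forget vertex v: project it out
            (v,) = bag[c] - bag[t]
            return {frozenset(p for p in col if p[0] != v) for col in child}

    return bool(table(root))
```

Each branch of `table` is one "update rule" in the sense of the abstract; the interface described above lets the user supply exactly these rules and handles decomposition, traversal, and table management behind the scenes.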
- Sebastian Berndt, Maciej Liskiewicz:
On the Gold Standard for Security of Universal Steganography.
In Proc. Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Volume 10820 of LNCS, pp. 29-60.
Springer,
2018.
While symmetric-key steganography is quite well understood both in the
information-theoretic and in the computational setting, many fundamental
questions about its public-key counterpart resist persistent attempts to solve
them. The computational model for public-key steganography was proposed by von
Ahn and Hopper in EUROCRYPT 2004. At TCC 2005, Backes and Cachin gave the
first universal public-key stegosystem - i.e. one that works on all
channels - achieving security against replayable chosen-covertext attacks
(SS-RCCA) and asked whether security against non-replayable
chosen-covertext attacks (SS-CCA) is achievable. Later, Hopper
(ICALP 2005) provided such a stegosystem for every efficiently sampleable
channel, but did not achieve universality. He posed the question whether
universality and SS-CCA-security can be achieved simultaneously.
No progress on this question had been made for more than a decade. In
our work, we solve Hopper's problem in a somewhat complete manner: As our main
positive result we design an SS-CCA-secure stegosystem that works for
every memoryless channel. On the other hand, we prove that this result
is the best possible in the context of universal steganography. We provide a
family of 0-memoryless channels - where the already sent documents
have only marginal influence on the current distribution - and prove that no
SS-CCA-secure steganography for this family exists in the standard
non-look-ahead model.
- Florian Thaeter, Rüdiger Reischuk:
Improving Anonymization Clustering.
In SICHERHEIT 2018, pp. 69-82.
Gesellschaft für Informatik e.V.,
2018.