- Michael Elberfeld, Johannes Textor:
Efficient Algorithms for String-Based Negative Selection.
In Proceedings of the 8th International Conference on Artificial Immune Systems (ICARIS 2009), Volume 5666 of Lecture Notes in Computer Science, pp. 109-121.
Springer,
2009.
String-based negative selection is an immune-inspired classification scheme: given a self-set S of strings, generate a set D of detectors that do not match any element of S, then use these detectors to partition a monitor set M into self and non-self elements. Implementations of this scheme are often impractical because they need time exponential in the size of S to construct D. Here, we consider r-chunk and r-contiguous detectors, two common implementations that suffer from this problem, and show that compressed representations of D are constructible in polynomial time for any given S and r. Since these representations can themselves be used to classify the elements in M, the worst-case running time of r-chunk and r-contiguous detector-based negative selection is reduced from exponential to polynomial.
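The r-chunk scheme described above can be illustrated with a naive sketch: a detector is a pair (position, length-r string) that matches no self string at that position. Note this enumeration is exponential in r, which is exactly the impracticality the paper's compressed polynomial-time representation avoids; the sketch shows only the classification scheme itself.

```python
from itertools import product

def generate_r_chunk_detectors(self_set, r, alphabet="01"):
    """Naively enumerate all r-chunk detectors (i, d): length-r strings d
    that occur in no self string at position i."""
    length = len(next(iter(self_set)))  # assumes equal-length strings
    detectors = set()
    for i in range(length - r + 1):
        self_chunks = {s[i:i + r] for s in self_set}
        for d in map("".join, product(alphabet, repeat=r)):
            if d not in self_chunks:
                detectors.add((i, d))
    return detectors

def classify(m, detectors):
    """A monitor string is non-self iff some detector matches it."""
    return any(m[i:i + len(d)] == d for i, d in detectors)

S = {"0001", "0011"}
D = generate_r_chunk_detectors(S, r=2)
print(classify("0001", D))  # self string -> False
print(classify("1100", D))  # detector (0, "11") matches -> True
```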
- Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
In Proceedings of the 6th Annual Conference on Theory and Applications of Models of Computation (TAMC 2009), Volume 5532 of Lecture Notes in Computer Science, pp. 201-210.
Springer,
2009.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes from genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing, as is often the case in laboratory data, the resulting incomplete perfect phylogeny haplotyping problem (IPPH) is NP-complete, and no theoretical results are known concerning its approximability, fixed-parameter tractability, or exact algorithms for it. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here a fixed-parameter algorithm is known. We show that such drastic and ad hoc simplifications are not necessary to make IPPH fixed-parameter tractable: we develop an algorithm that works for arbitrary instances of IPPH and present the first theoretical analysis of it. On the negative side, we show that restricting the topology of perfect phylogenies does not always reduce the computational complexity: while the incomplete directed perfect phylogeny problem is well known to be solvable in polynomial time, we show that the same problem restricted to path topologies is NP-complete.
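For intuition on the complete-data case underlying this work (this is the classical textbook criterion, not the paper's algorithm for missing entries): a complete binary matrix admits a perfect phylogeny iff no pair of columns exhibits all four gametes 00, 01, 10, 11.

```python
def admits_perfect_phylogeny(matrix):
    """Classical four-gamete test: a complete binary matrix (rows =
    haplotypes, columns = sites) admits a perfect phylogeny iff no pair
    of columns contains all four gametes 00, 01, 10, 11."""
    n_cols = len(matrix[0])
    for a in range(n_cols):
        for b in range(a + 1, n_cols):
            gametes = {(row[a], row[b]) for row in matrix}
            if len(gametes) == 4:  # all four gametes present
                return False
    return True

print(admits_perfect_phylogeny([[0, 0], [0, 1], [1, 0]]))          # True
print(admits_perfect_phylogeny([[0, 0], [0, 1], [1, 0], [1, 1]]))  # False
```

Missing entries turn this simple pairwise check into the NP-complete completion problem IPPH discussed in the abstract.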
- Markus Hinkelmann, Andreas Jakoby, Nina Moebius, Tiark Rompf, Peer Stechert:
A Cryptographically t-Private Auction System.
In Proceedings of Network and System Security 2009 (NSS 2009), pp. 44-51.
IEEE Computer Society,
2009.
We present a feasible cryptographically t-private protocol for electronic auctions. Our construction is based on Yao's garbled circuits and pseudorandom number generators (PRNGs). Our protocol involves a field of (t+1)^2 parties for the generation of the garbled circuit and permits an arbitrarily large number of bidders. The computational requirements are low: only t+1 parties of the field have to use the PRNG; the remaining parties execute simple primitives (XOR, permuting, and sharing). Independently of each other, the bidders have to stay active for only one round of communication. Furthermore, each bidder has to compute only t+1 XOR operations. We present an implementation and evaluate its performance. The observed running time of our protocol is linear in the size of the auction circuit and the number of bidders and, as expected, grows quadratically in the parameter t.
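The XOR workload of a bidder is in the spirit of plain XOR secret sharing, sketched below as an illustration only: a bid is split into shares such that any proper subset is uniformly random, while all shares together XOR back to the bid. The garbled-circuit generation and evaluation by the field of parties, which is the heart of the protocol, is not sketched here.

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def xor_share(bid, t):
    """Split a bid into t+1 XOR shares: t uniformly random shares plus
    one correction share. Any t of them reveal nothing about the bid."""
    shares = [secrets.token_bytes(len(bid)) for _ in range(t)]
    correction = reduce(xor_bytes, shares, bid)
    return shares + [correction]

def reconstruct(shares):
    """XOR of all t+1 shares recovers the bid."""
    return reduce(xor_bytes, shares)

bid = (42).to_bytes(2, "big")
shares = xor_share(bid, t=3)
print(reconstruct(shares) == bid)  # True
```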
- Markus Hinkelmann, Andreas Jakoby:
Preserving Privacy versus Data Retention.
In Proceedings of the 6th Annual Conference on Theory and Applications of Models of Computation (TAMC 2009), Volume 5532 of Lecture Notes in Computer Science, pp. 251-260.
Springer,
2009.
The retention of communication data has recently attracted much public interest, mostly because of the possibility of its misuse. In this paper, we present protocols that address the privacy concerns of the communication partners. Our data retention protocols store streams of encrypted data items, some of which may be flagged as critical (representing misbehavior). The frequent occurrence of critical data items justifies the self-decryption of all recently stored data items, critical or not. Our first protocol allows the party gathering the retained data to decrypt all data items collected within, say, the last half year whenever the number of critical data items reaches some threshold within, say, the last month. The protocol ensures that the senders of data remain anonymous but may reveal that different critical data items came from the same sender. Our second, computationally more complex scheme obscures this affiliation of critical data with high probability.
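One way to picture such a threshold trigger, purely as an illustrative stand-in and not the paper's construction, is via Shamir secret sharing: imagine each critical item leaking one share of a window key, so that once k critical items accumulate, the key (and hence the retained data encrypted under it) becomes reconstructible.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def make_shares(secret, k, n):
    """Shamir sharing: evaluate a random degree-(k-1) polynomial with
    constant term `secret` at points x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = make_shares(key, k=3, n=10)  # threshold 3, up to 10 critical items
print(reconstruct(shares[:3]) == key)  # True: any 3 shares suffice
```

The paper's actual protocols additionally handle the sliding time windows and the anonymity of the senders, which this toy fragment ignores.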
- Christian Hundt, Maciej Liskiewicz:
New Complexity Bounds for Image Matching under Rotation and Scaling.
In Proceedings of the 20th Annual Symposium on Combinatorial Pattern Matching (CPM 2009), Volume 5577 of Lecture Notes in Computer Science, pp. 127-141.
Springer,
2009.
- Johannes Textor, Benjamin Feldner:
An XML Pipeline Based System Architecture for Managing Bibliographic Metadata.
In Metadata and Semantics Research (MTSR'09), Volume 46 of Communications in Computer and Information Science, pp. 130-140.
Springer,
2009.
In our knowledge-based society, bibliographic metadata is everywhere. Although several metadata standards for bibliographic information have been developed and established by the professional librarian community, home-grown ad-hoc solutions are still widespread in small to medium-sized institutions. This paper presents a framework for storing, indexing, and browsing bibliographic metadata that is designed to lower the barrier for metadata standard adoption by facilitating legacy data import and integration into existing infrastructure. These goals are achieved using XML pipelines as a central design paradigm. As a practical use case, we discuss the implementation of the described architecture at a research institute in our university, where it is now in productive use for managing publication lists and the local library.
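The pipeline paradigm can be sketched as a chain of small stages, each consuming the previous stage's output; element names below ("record", "title", "year") are illustrative placeholders, not the system's actual schema or implementation.

```python
import xml.etree.ElementTree as ET

def parse_stage(raw):
    """Parse raw legacy input into an XML tree."""
    return ET.fromstring(raw)

def normalize_stage(root):
    """Clean up imported metadata, e.g. whitespace and capitalization."""
    for title in root.iter("title"):
        title.text = title.text.strip().title()
    return root

def index_stage(root):
    """Build a simple index over the normalized records."""
    return {rec.findtext("year"): rec.findtext("title")
            for rec in root.iter("record")}

def pipeline(data, stages):
    """Run the data through each stage in turn."""
    for stage in stages:
        data = stage(data)
    return data

raw = ("<records><record><title> efficient algorithms </title>"
       "<year>2009</year></record></records>")
print(pipeline(raw, [parse_stage, normalize_stage, index_stage]))
# {'2009': 'Efficient Algorithms'}
```

Because every stage speaks XML, stages for legacy import, validation, or export can be swapped in without touching the rest of the chain, which is the lowered adoption barrier the abstract describes.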