The GNUnet Bibliography | Selected Papers in Meshnetworking



Publications by date

1944

The Theory of Games and Economic Behavior (PDF)
by John von Neumann and Oskar Morgenstern.
Book.

1950

Equilibrium points in n-person games (PDF)
by John F. Nash Jr.
In Proceedings of the National Academy of Sciences of the USA 36, January 1950, pages 48-49.

One may define a concept of an n-person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n-tuple of pure strategies, one strategy being taken for each player. For mixed strategies, which are probability distributions over the pure strategies, the pay-off functions are the expectations of the players, thus becoming polylinear forms
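
A worked example can make the "polylinear forms" concrete. Below is a minimal Python sketch that computes the expected pay-off of a mixed-strategy pair in a two-player game; the matching-pennies payoff matrices are an illustrative assumption, not taken from the paper.

    # Expected pay-off of a mixed-strategy profile in a 2-player game.
    # Toy payoff matrices for matching pennies (illustrative assumption).
    A = [[ 1, -1],   # row player's pay-offs
         [-1,  1]]
    B = [[-1,  1],   # column player's pay-offs
         [ 1, -1]]

    def expected_payoff(payoff, p, q):
        # Polylinear in p and q: a sum over all pure-strategy pairs.
        return sum(p[i] * q[j] * payoff[i][j]
                   for i in range(2) for j in range(2))

    # Matching pennies has a unique Nash equilibrium: both players mix 50/50.
    p = q = [0.5, 0.5]
    print(expected_payoff(A, p, q), expected_payoff(B, p, q))  # 0.0 0.0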


1958

On programming of arithmetic operations
by Andrey Petrovych Ershov.
In Commun. ACM 1(8), 1958, pages 3-6.

1959

On Random Graphs I (PDF)
by Paul Erdős and Alfréd Rényi.
In Publicationes Mathematicae (Debrecen) 6, January 1959, pages 290-297.

1960

Polynomial codes over certain finite fields (PDF)
by Irving S. Reed and Gustave Solomon.
In Journal of the Society for Industrial and Applied Mathematics 8(2), June 1960, pages 300-304.

1962

Low-density parity-check codes (PDF)
by Robert G. Gallager.
In IRE Transactions on Information Theory 8, 1962, pages 21-28.

A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound
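
The defining weight condition is easy to state in code. A minimal Python sketch follows, checking it on a small hand-made Gallager-style matrix with j = 3 and k = 4 (an illustrative matrix, not one from the paper) and computing the syndrome that tests membership in the code.

    # Parity-check matrix H: every column has j = 3 ones, every row k = 4 > j.
    H = [
        [1,1,1,1,0,0,0,0],
        [0,0,0,0,1,1,1,1],
        [1,0,1,0,1,0,1,0],
        [0,1,0,1,0,1,0,1],
        [1,0,0,1,0,1,1,0],
        [0,1,1,0,1,0,0,1],
    ]
    col_weights = {sum(row[c] for row in H) for c in range(len(H[0]))}
    row_weights = {sum(row) for row in H}
    print(col_weights, row_weights)   # {3} {4}: fixed j and k, as required

    def syndrome(word):
        # A word is a codeword iff every parity check is satisfied (mod 2).
        return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

    print(syndrome([0] * 8))   # the all-zero word always checks out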


1968

The Tragedy of the Commons (PDF)
by Garrett Hardin.
In Science 162, 1968, pages 1243-1248.

1970

Space/Time Trade-offs in Hash Coding with Allowable Errors
by Burton H. Bloom.
In Communications of the ACM 13, 1970, pages 422-426.

In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency
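
The second of the two new methods is what is known today as a Bloom filter. A minimal sketch, with illustrative size parameters of my own choosing: members are never rejected, and the allowable error frequency shows up as occasional false positives.

    import hashlib

    class BloomFilter:
        def __init__(self, m_bits=1024, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits)   # one byte per bit, for clarity

        def _positions(self, item):
            # Derive k hash positions from SHA-256 (an arbitrary choice here).
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def __contains__(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("message-42")
    print("message-42" in bf)   # True: no false negatives
    print("message-99" in bf)   # almost certainly False at this load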


An Efficient Heuristic Procedure for Partitioning Graphs (PDF)
by Brian W. Kernighan and S. Lin.
In The Bell System Technical Journal 49, January 1970, pages 291-307.

We consider the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs on all edges cut. This problem arises in several physical situations; for example, in assigning the components of electronic circuits to circuit boards to minimize the number of connections between boards. This paper presents a heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions and fast enough to be practical in solving large problems
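
The core quantity in the heuristic is the gain of exchanging one node from each side of the partition. A small Python sketch on an invented toy graph (edge costs are illustrative assumptions); the full procedure in the paper goes further, tentatively swapping and locking nodes and keeping the best prefix of swaps.

    # Toy weighted graph: dict of edge costs (illustrative assumption).
    cost = {("a","c"): 3, ("a","d"): 1, ("b","c"): 2,
            ("b","d"): 4, ("a","b"): 1, ("c","d"): 2}

    def w(u, v):
        return cost.get((u, v)) or cost.get((v, u)) or 0

    def cut(A, B):
        # Sum of costs on edges crossing the partition.
        return sum(w(u, v) for u in A for v in B)

    def gain(a, b, A, B):
        # External-minus-internal cost of each node, minus twice edge (a,b).
        Da = sum(w(a, v) for v in B) - sum(w(a, u) for u in A if u != a)
        Db = sum(w(b, u) for u in A) - sum(w(b, v) for v in B if v != b)
        return Da + Db - 2 * w(a, b)

    A, B = {"a", "b"}, {"c", "d"}
    print(cut(A, B))             # 10 for the initial partition
    print(gain("a", "d", A, B))  # 4: swapping a and d cuts the cost to 6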


The market for "lemons": Quality uncertainty and the market mechanism (PDF)
by George A. Akerlof.
In The Quarterly Journal of Economics 84, August 1970, pages 488-500.

I. Introduction, 488.–II. The model with automobiles as an example, 489.–III. Examples and applications, 492.–IV. Counteracting institutions, 499.–V. Conclusion, 500


1971

The Evolution of Reciprocal Altruism (PDF)
by Robert L. Trivers.
In The Quarterly Review of Biology 46, March 1971, pages 35-57.

A model is presented to account for the natural selection of what is termed reciprocally altruistic behavior. The model shows how selection can operate against the cheater (non-reciprocator) in the system. Three instances of altruistic behavior are discussed, the evolution of which the model can explain: (1) behavior involved in cleaning symbioses; (2) warning cries in birds; and (3) human reciprocal altruism. Regarding human reciprocal altruism, it is shown that the details of the psychological system that regulates this altruism can be explained by the model. Specifically, friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system. Each individual human is seen as possessing altruistic and cheating tendencies, the expression of which is sensitive to developmental variables that were selected to set the tendencies at a balance appropriate to the local social and ecological environment


1976

New directions in cryptography (PDF)
by Whitfield Diffie and Martin E. Hellman.
In IEEE Transactions on Information Theory 22, November 1976, pages 644-654.

Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing
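
The key-exchange half of the paper reduces to a few lines of modular arithmetic. A toy sketch, assuming a Mersenne prime group that is far too small for real security, and with no authentication (so it remains open to man-in-the-middle attacks):

    import secrets

    p = 2**127 - 1   # a Mersenne prime; toy-sized, for illustration only
    g = 3

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

    A = pow(g, a, p)   # exchanged in the clear
    B = pow(g, b, p)   # exchanged in the clear

    # Both sides derive the same secret; an eavesdropper sees only g, A, B.
    assert pow(B, a, p) == pow(A, b, p)
    print(hex(pow(B, a, p))[:18], "...")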


1977

Towards a methodology for statistical disclosure control
by T. Dalenius.
In Statistik Tidskrift 15, 1977, pages 429-444.

Non-Discretionary Access Control for Decentralized Computing Systems (PDF)
by Paul A. Karger.
S.M. & E.E. thesis, Laboratory for Computer Science, Massachusetts Institute of Technology, May 1977.

This thesis examines the issues relating to non-discretionary access controls for decentralized computing systems. Decentralization changes the basic character of a computing system from a set of processes referencing a data base to a set of processes sending and receiving messages. Because messages must be acknowledged, operations that were read-only in a centralized system become read-write operations. As a result, the lattice model of non-discretionary access control, which mediates operations based on read versus read-write considerations, does not allow direct transfer of algorithms from centralized systems to decentralized systems. This thesis develops new mechanisms that comply with the lattice model and provide the necessary functions for effective decentralized computation. Secure protocols at several different levels are presented in the thesis. At the lowest level, a host-to-host protocol is shown that allows communication between hosts with effective internal security controls. Above this level, a host independent naming scheme is presented that allows generic naming of services in a manner consistent with the lattice model. The use of decentralized processing to aid in the downgrading of information is shown in the design of a secure intelligent terminal. Schemes are presented to deal with the decentralized administration of the lattice model, and with the proliferation of access classes as the user community of a decentralized system becomes more diverse. Limitations in the use of end-to-end encryption when used with the lattice model are identified, and a scheme is presented to relax these limitations for broadcast networks. Finally, a scheme is presented for forwarding authentication information between hosts on a network, without transmitting passwords (or their equivalent) over a network


1978

Limitations of End-to-End Encryption in Secure Computer Networks
by Michael A. Padlipsky, David W. Snow, and Paul A. Karger.
Technical Report ESD-TR-78-158, August 1978.

1979

Compact Encodings of List Structure
by Daniel G. Bobrow and Douglas W. Clark.
Conference paper.

List structures provide a general mechanism for representing easily changed structured data, but can introduce inefficiencies in the use of space when fields of uniform size are used to contain pointers to data and to link the structure. Empirically determined regularity can be exploited to provide more space-efficient encodings without losing the flexibility inherent in list structures. The basic scheme is to provide compact pointer fields big enough to accommodate most values that occur in them and to provide escape mechanisms for exceptional cases. Several examples of encoding designs are presented and evaluated, including two designs currently used in Lisp machines. Alternative escape mechanisms are described, and various questions of cost and implementation are discussed. In order to extrapolate our results to larger systems than those measured, we propose a model for the generation of list pointers and we test the model against data from two programs. We show that according to our model, list structures with compact cdr fields will, as address space grows, continue to be compacted well with a fixed-width small field. Our conclusion is that with a microcodable processor, about a factor of two gain in space efficiency for list structure can be had for little or no cost in processing time


1980

Protocols for Public Key Cryptosystems
by Ralph C. Merkle.
In Proceedings of the IEEE Symposium on Security and Privacy, 1980, pages 122-134.

New cryptographic protocols which take full advantage of the unique properties of public key cryptosystems are now evolving. Several protocols for public key distribution and for digital signatures are briefly compared with each other and with the conventional alternative


1981

Untraceable electronic mail, return addresses, and digital pseudonyms (PDF)
by David Chaum.
In Communications of the ACM 24(2), February 1981, pages 84-90.

A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication–in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym which appears in a roster of acceptable clients
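
Structurally, the technique nests one layer of encryption per mix, so each mix strips exactly one layer and learns only its neighbors in the chain. A minimal sketch of that layering; the keyed XOR stream below is an insecure stand-in used only to keep the example self-contained, where the paper relies on public-key cryptography.

    import hashlib

    def keystream(key, n):
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def xor_crypt(key, data):   # encryption == decryption for XOR
        return bytes(x ^ k for x, k in zip(data, keystream(key, len(data))))

    mix_keys = [b"key-mix-1", b"key-mix-2", b"key-mix-3"]

    # Sender: wrap the message once per mix, innermost layer first.
    onion = b"meet at noon"
    for key in reversed(mix_keys):
        onion = xor_crypt(key, onion)

    # Each mix removes its own layer; only the final hop sees the plaintext.
    for key in mix_keys:
        onion = xor_crypt(key, onion)
    print(onion)   # b'meet at noon'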


1982

The Byzantine Generals Problem (PDF)
by Leslie Lamport, Robert Shostak, and Marshall Pease.
In ACM Trans. Program. Lang. Syst. 4(3), 1982, pages 382-401.

Protocols for Secure Computations (PDF)
by Andrew C. Yao.
Conference paper.

1984

Capability-Based Computer Systems (PDF)
by Henry M. Levy.
Book.

1985

Impossibility of distributed consensus with one faulty process (PDF)
by Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson.
In J. ACM 32(2), 1985, pages 374-382.

The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the Byzantine Generals problem


RCS—a system for version control (PDF)
by Walter F. Tichy.
In Softw. Pract. Exper. 15(7), 1985, pages 637-654.

An important problem in program development and maintenance is version control, i.e., the task of keeping a software system consisting of many versions and configurations well organized. The Revision Control System (RCS) is a software tool that assists with that task. RCS manages revisions of text documents, in particular source programs, documentation, and test data. It automates the storing, retrieval, logging and identification of revisions, and it provides selection mechanisms for composing configurations. This paper introduces basic version control concepts and discusses the practice of version control using RCS. For conserving space, RCS stores deltas, i.e., differences between successive revisions. Several delta storage methods are discussed. Usage statistics show that RCS's delta storage method is space and time efficient. The paper concludes with a detailed survey of version control tools
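
The trunk of an RCS file keeps the newest revision in full and earlier revisions as reverse deltas. A minimal sketch of that storage idea, using Python's difflib as a stand-in for RCS's own delta format (the file contents are invented for illustration):

    import difflib

    rev1 = ["int main() {\n", "  return 0;\n", "}\n"]
    rev2 = ["int main() {\n", '  puts("hi");\n', "  return 0;\n", "}\n"]

    # Reverse delta: the changes that turn the head revision (rev2) back
    # into the older rev1.
    delta = list(difflib.ndiff(rev2, rev1))

    # Store rev2 in full plus the delta; older revisions are recomputed.
    assert list(difflib.restore(delta, 2)) == rev1
    print("".join(line for line in delta if line[0] != "?"))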


A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms (PDF)
by Taher ElGamal.
Conference paper.

A new signature scheme is proposed together with an implementation of the Diffie–Hellman key distribution scheme that achieves a public key cryptosystem. The security of both systems relies on the difficulty of computing discrete logarithms over finite fields
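
The encryption scheme itself is compact enough to sketch. A toy version over a Mersenne prime group, with parameters far too small for real use and no message padding; it shows why recovering the plaintext without the private key amounts to a discrete-logarithm problem.

    import secrets

    p = 2**127 - 1   # toy prime; real systems use carefully chosen groups
    g = 3

    x = secrets.randbelow(p - 2) + 1   # private key
    y = pow(g, x, p)                   # public key

    def encrypt(m):
        k = secrets.randbelow(p - 2) + 1   # fresh ephemeral key per message
        return pow(g, k, p), (m * pow(y, k, p)) % p

    def decrypt(c1, c2):
        # c2 / c1^x mod p, using Fermat's little theorem for the inverse.
        return (c2 * pow(c1, p - 1 - x, p)) % p

    c1, c2 = encrypt(42)
    assert decrypt(c1, c2) == 42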


Networks Without User Observability – Design Options
by Andreas Pfitzmann and Michael Waidner.
Conference paper.

In present-day communication networks, the network operator or an intruder could easily observe when, how much and with whom the users communicate (traffic analysis), even if the users employ end-to-end encryption. With the increasing use of ISDNs, this becomes a severe threat. Therefore, we summarize basic concepts to keep the recipient and sender or at least their relationship unobservable, consider some possible implementations and necessary hierarchical extensions, and propose some suitable performance and reliability enhancements


Security without Identification: Transaction Systems to Make Big Brother Obsolete (PDF)
by David Chaum.
In Communications of the ACM 28(10), October 1985, pages 1030-1044.

The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations


1986

Networks Without User Observability – Design Options (PDF)
by Andreas Pfitzmann and Michael Waidner.
Book.

In usual communication networks, the network operator or an intruder could easily observe when, how much and with whom the users communicate (traffic analysis), even if the users employ end-to-end encryption. When ISDNs are used for almost everything, this becomes a severe threat. Therefore, we summarize basic concepts to keep the recipient and sender or at least their relationship unobservable, consider some possible implementations and necessary hierarchical extensions, and propose some suitable performance and reliability enhancements


Revised report on the algorithmic language scheme (PDF)
by Jonathan Rees, William Clinger, and Richard Kelsey.
In SIGPLAN Not. 21(12), 1986, pages 37-79.

The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. The introduction offers a brief history of the language and of the report. The first three chapters present the fundamental ideas of the language and describe the notational conventions used for describing the language and for writing programs in the language


Using Sparse Capabilities in a Distributed Operating System (PDF)
by Andrew Tanenbaum, Sape J. Mullender, and Robbert Van Renesse.
Conference paper.

In this paper we discuss a system, Amoeba, that uses capabilities for naming and protecting objects. In contrast to traditional, centralized operating systems, in which capabilities are managed by the operating system kernel, in Amoeba all the capabilities are managed directly by user code. To prevent tampering, the capabilities are protected cryptographically. The paper describes a variety of the issues involved, and gives four different ways of dealing with the access rights


1987

How to Play ANY Mental Game or A Completeness Theorem for Protocols with Honest Majority (PDF)
by O. Goldreich, S. Micali, and A. Wigderson.
Conference paper.

We present a polynomial-time algorithm that, given as input the description of a game with incomplete information and any number of players, produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest. Our algorithm automatically solves all the multi-party protocol problems addressed in complexity-based cryptography during the last 10 years. It actually is a completeness theorem for the class of distributed protocols with honest majority. Such a completeness theorem is optimal in the sense that, if the majority of the players is not honest, some protocol problems have no efficient solution [C]


A simple and efficient implementation of a small database (PDF)
by Andrew D. Birrell, Michael B. Jones, and Edward P. Wobber.
In SIGOPS Oper. Syst. Rev. 21(5), 1987, pages 149-154.

This paper describes a technique for implementing the sort of small databases that frequently occur in the design of operating systems and distributed systems. We take advantage of the existence of very large virtual memories, and quite large real memories, to make the technique feasible. We maintain the database as a strongly typed data structure in virtual memory, record updates incrementally on disk in a log and occasionally make a checkpoint of the entire database. We recover from crashes by restoring the database from an old checkpoint then replaying the log. We use existing packages to convert between strongly typed data objects and their disk representations, and to communicate strongly typed data across the network (using remote procedure calls). Our memory is managed entirely by a general purpose allocator and garbage collector. This scheme has been used to implement a name server for a distributed system. The resulting implementation has the desirable property of being simultaneously simple, efficient and reliable
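
The update/checkpoint/replay cycle can be sketched in a few lines. A minimal illustration with file names and a JSON encoding of my own choosing; the paper instead uses strongly typed structures in a garbage-collected virtual memory.

    import json, os

    DB, LOG, CKPT = {}, "db.log", "db.ckpt"

    def update(key, value):
        DB[key] = value
        with open(LOG, "a") as f:        # record each update incrementally
            f.write(json.dumps([key, value]) + "\n")

    def checkpoint():
        with open(CKPT, "w") as f:       # occasional snapshot of everything
            json.dump(DB, f)
        open(LOG, "w").close()           # the log restarts after a checkpoint

    def recover():
        db = {}
        if os.path.exists(CKPT):
            with open(CKPT) as f:
                db = json.load(f)
        if os.path.exists(LOG):
            with open(LOG) as f:
                for line in f:           # replay updates since the checkpoint
                    key, value = json.loads(line)
                    db[key] = value
        return db

    update("host1", "10.0.0.1")
    checkpoint()
    update("host2", "10.0.0.2")
    print(recover())   # both entries survive a restart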


Strategies for decentralized resource management (PDF)
by Michael Stumm.
Conference paper.

Decentralized resource management in distributed systems has become more practical with the availability of communication facilities that support multicasting. In this paper we present several example solutions for managing resources in a decentralized fashion, using multicasting facilities. We review the properties of these solutions in terms of scalability, fault tolerance and efficiency. We conclude that decentralized solutions compare favorably to centralized solutions with respect to all three criteria


1988

Completeness Theorems for Non-cryptographic Fault-tolerant Distributed Computation (PDF)
by Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson.
Conference paper.

Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: If no faults occur, no set of size t < n/2 of players gets any additional information (other than the function value), Even if Byzantine faults are allowed, no set of size t < n/3 can either disrupt the computation or get additional information. Furthermore, the above bounds on t are tight!


The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability (PDF)
by David Chaum.
In Journal of Cryptology 1, 1988, pages 65-75.

Keeping confidential who sends which messages, in a world where any physical transmission can be traced to its origin, seems impossible. The solution presented here is unconditionally or cryptographically secure, depending on whether it is based on one-time-use keys or on public keys, respectively. It can be adapted to address efficiently a wide variety of practical considerations
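
One round of the three-party protocol fits in a dozen lines. A sketch assuming party 0 sends a single bit: each adjacent pair shares a random key bit, every party announces the XOR of its two key bits (the sender also folds in the message), and the XOR of all announcements yields the message without identifying the sender.

    import secrets

    # Key bit shared by the pair (i, i+1 mod 3), like Chaum's coin flips.
    pair_bits = [secrets.randbelow(2) for _ in range(3)]

    def announce(i, message_bit=0):
        left, right = pair_bits[(i - 1) % 3], pair_bits[i]
        return left ^ right ^ message_bit

    message = 1
    announcements = [announce(0, message), announce(1), announce(2)]

    # Every key bit appears in exactly two announcements and cancels out.
    result = announcements[0] ^ announcements[1] ^ announcements[2]
    assert result == message
    print("broadcast bit:", result)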


Founding Cryptography on Oblivious Transfer (PDF)
by Joe Kilian.
Conference paper.

Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done


1990

Skip lists: a probabilistic alternative to balanced trees (PDF)
by William Pugh.
In Commun. ACM 33(6), 1990, pages 668-676.

Skip lists are data structures that use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees
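
A minimal sketch of the structure, assuming the usual p = 1/2 promotion rule: each node draws a random height, tall nodes serve as express lanes, and search and insert run in expected O(log n) with no rebalancing.

    import random

    MAX_LEVEL = 8

    class Node:
        def __init__(self, key, level):
            self.key = key
            self.next = [None] * level

    head = Node(None, MAX_LEVEL)   # sentinel with a pointer at every level

    def random_level():
        level = 1
        while level < MAX_LEVEL and random.random() < 0.5:
            level += 1
        return level

    def insert(key):
        update, node = [head] * MAX_LEVEL, head
        for lvl in range(MAX_LEVEL - 1, -1, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
            update[lvl] = node                 # last node before key, per level
        new = Node(key, random_level())
        for lvl in range(len(new.next)):       # splice in at each of its levels
            new.next[lvl] = update[lvl].next[lvl]
            update[lvl].next[lvl] = new

    def contains(key):
        node = head
        for lvl in range(MAX_LEVEL - 1, -1, -1):   # drop down on overshoot
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
        nxt = node.next[0]
        return nxt is not None and nxt.key == key

    for k in [3, 1, 4, 5, 9, 2, 6]:
        insert(k)
    print(contains(5), contains(7))   # True False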


The dining cryptographers in the disco: unconditional sender and recipient untraceability with computationally secure serviceability (PDF)
by Michael Waidner and Birgit Pfitzmann.
Conference paper.

In Journal of Cryptology 1/1 (1988) 65-75 (= [Chau_88]), David Chaum describes a beautiful technique, the DC-net, which should allow participants to send and receive messages anonymously in an arbitrary network. The untraceability of the senders is proved to be unconditional, but that of the recipients implicitly assumes a reliable broadcast network. This assumption is unrealistic in some networks, but it can be removed completely by using the fail-stop key generation schemes by Waidner (these proceedings, = [Waid_89]). In both cases, however, each participant can untraceably and permanently disrupt the entire DC-net. We present a protocol which guarantees unconditional untraceability, the original goal of the DC-net, on the inseparability assumption (i.e. the attacker must be unable to prevent honest participants from communicating, which is considerably less than reliable broadcast), and computationally secure serviceability: computationally restricted disrupters can be identified and removed from the DC-net. On the one hand, our solution is based on the lovely idea by David Chaum [Chau_88 2.5] of setting traps for disrupters. He suggests a scheme to guarantee unconditional untraceability and computationally secure serviceability, too, but on the reliable broadcast assumption. The same scheme seems to be used by Bos and den Boer (these proceedings, = [BoBo_89]). We show that this scheme needs some changes and refinements before being secure, even on the reliable broadcast assumption. On the other hand, our solution is based on the idea of digital signatures whose forgery by an unexpectedly powerful attacker is provable, which might be of independent interest. We propose such a (one-time) signature scheme based on claw-free permutation pairs; the forgery of signatures is equivalent to finding claws, thus in a special case to the factoring problem. In particular, with such signatures we can, for the first time, realize fail-stop Byzantine Agreement, and also adaptive Byzantine Agreement, i.e. Byzantine Agreement which can only be disrupted by an attacker who controls at least a third of all participants and who can forge signatures. We also sketch applications of these signatures to a payment system, solving disputes about shared secrets, and signatures which cannot be shown round


1991

Distributed Constraint Optimization as a Formal Model of Partially Adversarial Cooperation (PDF)
by Makoto Yokoo and Edmund H. Durfee.
Technical Report CSE-TR-101-9, 1991.

In this paper, we argue that partially adversarial and partially cooperative (PARC) problems in distributed artificial intelligence can be mapped into a formalism called distributed constraint optimization problems (DCOPs), which generalize distributed constraint satisfaction problems [Yokoo, et al. 90] by introducing weak constraints (preferences). We discuss several solution criteria for DCOP and clarify the relation between these criteria and different levels of agent rationality [Rosenschein and Genesereth 85], and show the algorithms for solving DCOPs in which agents incrementally exchange only necessary information to converge on a mutually satisfiable solution


Intrusion Tolerance in Distributed Computing Systems (PDF)
by Yves Deswarte, Laurent Blain, and Jean-charles Fabre.
Conference paper.

An intrusion-tolerant distributed system is a system which is designed so that any intrusion into a part of the system will not endanger confidentiality, integrity and availability. This approach is suitable for distributed systems, because distribution enables isolation of elements so that an intrusion gives physical access to only a part of the system. By intrusion, we mean not only computer break-ins by non-registered people, but also attempts by registered users to exceed or to abuse their privileges. In particular, possible malice of security administrators is taken into account. This paper describes how some functions of distributed systems can be designed to tolerate intrusions, in particular security functions such as user authentication and authorization, and application functions such as file management


ISDN-mixes: Untraceable communication with very small bandwidth overhead (PDF)
by Andreas Pfitzmann, Birgit Pfitzmann, and Michael Waidner.
Conference paper.

Untraceable communication for services like telephony is often considered infeasible in the near future because of bandwidth limitations. We present a technique, called ISDN-MIXes, which shows that this is not the case. As little changes as possible are made to the narrowband-ISDN planned by the PTTs. In particular, we assume the same subscriber lines with the same bit rate, and the same long-distance network between local exchanges, and we offer the same services. ISDN-MIXes are a combination of a new variant of CHAUM's MIXes, dummy traffic on the subscriber lines (where this needs no additional bandwidth), and broadcast of incoming-call messages in the subscriber-area


1992

Transferred Cash Grows in Size (PDF)
by David Chaum and Torben P. Pedersen.
Conference paper.

All known methods for transferring electronic money have the disadvantages that the number of bits needed to represent the money after each payment increases, and that a payer can recognize his money if he sees it later in the chain of payments (forward traceability). This paper shows that it is impossible to construct an electronic money system providing transferability without the property that the money grows when transferred. Furthermore it is argued that an unlimited powerful user can always recognize his money later. Finally, the lower bounds on the size of transferred electronic money are discussed in terms of secret sharing schemes


1993

Cryptographic Defense Against Traffic Analysis (PDF)
by Charles Rackoff and Daniel R. Simon.
Conference paper.

Efficient anonymous channel and all/nothing election scheme (PDF)
by Choonsik Park, Kazutomo Itoh, and Kaoru Kurosawa.
Conference paper.

The contributions of this paper are twofold. First, we present an efficient computationally secure anonymous channel which has no problem of ciphertext length expansion. The length is irrelevant to the number of MIXes (control centers). It improves the efficiency of Chaum's election scheme based on the MIX net automatically. Second, we show an election scheme which satisfies fairness. That is, if some vote is disrupted, no one obtains any information about all the other votes. Each voter sends O(nk) bits so that the probability of the fairness is 1 - 2^(-k), where n is the bit length of the ciphertext


Elliptic Curve Public Key Cryptosystems
by Alfred J. Menezes.
Book.

Elliptic curves have been intensively studied in algebraic geometry and number theory. In recent years they have been used in devising efficient algorithms for factoring integers and primality proving, and in the construction of public key cryptosystems. Elliptic Curve Public Key Cryptosystems provides an up-to-date and self-contained treatment of elliptic curve-based public key cryptology. Elliptic curve cryptosystems potentially provide equivalent security to the existing public key schemes, but with shorter key lengths. Having short key lengths means smaller bandwidth and memory requirements and can be a crucial factor in some applications, for example the design of smart card systems. The book examines various issues which arise in the secure and efficient implementation of elliptic curve systems. Elliptic Curve Public Key Cryptosystems is a valuable reference resource for researchers in academia, government and industry who are concerned with issues of data security. Because of the comprehensive treatment, the book is also suitable for use as a text for advanced courses on the subject


A Persistent System in Real Use–Experiences of the First 13 Years (PDF)
by Jochen Liedtke.
Technical report.

Eumel and its advanced successor L3 are operating systems built by GMD which have been used, for 13 years and 4 years respectively, as production systems in business and education. More than 2000 Eumel systems and 500 L3 systems have been shipped since 1979 and 1988. Both systems rely heavily on the paradigm of persistence (including fault-surviving persistence). Both data and processes, in principle all objects are persistent, files are implemented by means of persistent objects (not vice versa) etc. In addition to the principles and mechanisms of Eumel/L3, general and specific experiences are described: these relate to the design, implementation and maintenance of the systems over the last 13 years. For general purpose timesharing systems the idea is powerful and elegant, it can be efficiently implemented, but making a system really usable is hard work


Allocative Efficiency of Markets with Zero-Intelligence Traders: Market as a Partial Substitute for Individual Rationality (PDF)
by Dhananjay K. Gode and Shyam Sunder.
In Journal of Political Economy 101, February 1993, pages 119-137.

We report market experiments in which human traders are replaced by "zero-intelligence" programs that submit random bids and offers. Imposing a budget constraint (i.e., not permitting traders to sell below their costs or buy above their values) is sufficient to raise the allocative efficiency of these auctions close to 100 percent. Allocative efficiency of a double auction derives largely from its structure, independent of traders' motivation, intelligence, or learning. Adam Smith's invisible hand may be more powerful than some may have thought; it can generate aggregate rationality not only from individual rationality but also from individual irrationality
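
The budget constraint is a one-line condition on otherwise random quotes. A stylized Python sketch; the values, costs, and the simple one-shot pairing are illustrative assumptions, whereas the paper runs continuous double auctions.

    import random

    random.seed(1)
    buyer_values = [80, 70, 60, 50]   # redemption values (willingness to pay)
    seller_costs = [30, 40, 55, 65]   # unit costs

    surplus = 0
    for value, cost in zip(buyer_values, seller_costs):
        bid = random.uniform(0, value)    # constraint: never bid above value
        ask = random.uniform(cost, 100)   # constraint: never ask below cost
        if bid >= ask:                    # quotes cross, so the pair trades
            surplus += value - cost       # gains from trade are realized
    print("realized surplus:", surplus)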


SURF-2: A program for dependability evaluation of complex hardware and software systems
by C. Beounes, M. Aguera, J. Arlat, S. Bachmann, C. Bourdeau, J. -. Doucet, K. Kanoun, J.-C. Laprie, S. Metge, J. Moreira de Souza, D. Powell, and P. Spiesser.
In Proceedings of FTCS-23, the Twenty-Third International Symposium on Fault-Tolerant Computing, June 1993, pages 668-673.

SURF-2, a software tool for evaluating system dependability, is described. It is especially designed for an evaluation-based system design approach in which multiple design solutions need to be compared from the dependability viewpoint. System behavior may be modeled either by Markov chains or by generalized stochastic Petri nets. The tool supports the evaluation of different measures of dependability, including pointwise measures, asymptotic measures, mean sojourn times and, by superposing a reward structure on the behavior model, reward measures such as expected performance or cost


1994

The Bayou Architecture: Support for Data Sharing among Mobile Users (PDF)
by Alan Demers, Karin Petersen, Mike Spreitzer, Douglas Terry, Marvin Theimer, and Brent Welch.
Technical report.

The Bayou System is a platform of replicated, highly-available, variable-consistency, mobile databases on which to build collaborative applications. This paper presents the preliminary system architecture along with the design goals that influenced it. We take a fresh, bottom-up and critical look at the requirements of mobile computing applications and carefully pull together both new and existing techniques into an overall architecture that meets these requirements. Our emphasis is on supporting application-specific conflict detection and resolution and on providing application controlled inconsistency


File system design for an NFS file server appliance (PDF)
by Dave Hitz, James Lau, and Michael Malcolm.
Conference paper.

Network Appliance Corporation recently began shipping a new kind of network server called an NFS file server appliance, which is a dedicated server whose sole function is to provide NFS file service. The file system requirements for an NFS appliance are different from those for a general-purpose UNIX system, both because an NFS appliance must be optimized for network file access and because an appliance must be easy to use. This paper describes WAFL (Write Anywhere File Layout), which is a file system designed specifically to work in an NFS appliance. The primary focus is on the algorithms and data structures that WAFL uses to implement Snapshots, which are read-only clones of the active file system. WAFL uses a copy-on-write technique to minimize the disk space that Snapshots consume. This paper also describes how WAFL uses Snapshots to eliminate the need for file system consistency checking after an unclean shutdown


Finding Similar Files in a Large File System (PDF)
by Udi Manber.
Conference paper.

We present a tool, called sif, for finding all similar files in a large file system. Files are considered similar if they have a significant number of common pieces, even if they are very different otherwise. For example, one file may be contained, possibly with some changes, in another file, or a file may be a reorganization of another file. The running time for finding all groups of similar files, even for as little as 25% similarity, is on the order of 500MB to 1GB an hour. The amount of similarity and several other customized parameters can be determined by the user at a post-processing stage, which is very fast. Sif can also be used to very quickly identify all similar files to a query file using a preprocessed index. Applications of sif can be found in file management, information collecting (to remove duplicates), program reuse, file synchronization, data compression, and maybe even plagiarism detection


Libckpt: Transparent Checkpointing under Unix (PDF)
by James S. Plank, Micah Beck, Gerry Kingsley, and Kai Li.
Technical report.

Checkpointing is a simple technique for rollback recovery: the state of an executing program is periodically saved to a disk file from which it can be recovered after a failure. While recent research has developed a collection of powerful techniques for minimizing the overhead of writing checkpoint files, checkpointing remains unavailable to most application developers. In this paper we describe libckpt, a portable checkpointing tool for Unix that implements all applicable performance optimizations which are reported in the literature. While libckpt can be used in a mode which is almost totally transparent to the programmer, it also supports the incorporation of user directives into the creation of checkpoints. This user-directed checkpointing is an innovation which is unique to our work


1995

Balanced Distributed Search Trees Do Not Exist (PDF)
by Brigitte Kröll and Peter Widmayer.
Conference paper.

This paper is a first step towards an understanding of the inherent limitations of distributed data structures. We propose a model of distributed search trees that is based on few natural assumptions. We prove that any class of trees within our model satisfies a lower bound of Ω(√m) on the worst case height of distributed search trees for m keys. That is, unlike in the single site case, balance in the sense that the tree height satisfies a logarithmic upper bound cannot be achieved. This is true although each node is allowed to have arbitrary degree (note that in this case, the height of a single site search tree is trivially bounded by one). By proposing a method that generates trees of height O(√m), we show the bound to be tight


Exploiting weak connectivity for mobile file access (PDF)
by Lily B. Mummert, Maria Ebling, and Mahadev Satyanarayanan.
In SIGOPS Oper. Syst. Rev. 29(5), 1995, pages 143-155.

The final frontier: Embedding networked sensors in the soil (PDF)
by Nithya Ramanathan, Tom Schoellhammer, Deborah Estrin, Mark Hansen, Tom Harmon, Eddie Kohler, and Mani Srivastava.
Technical report.

This paper presents the first systematic design of a robust sensing system suited for the challenges presented by soil environments. We describe three soil deployments we have undertaken: in Bangladesh, and in California at the James Reserve and in the San Joaquin River basin. We discuss our experiences and lessons learned in deploying soil sensors. We present data from each deployment and evaluate our techniques for improving the information yield from these systems. Our most notable results include the following: in-situ calibration techniques to postpone labor-intensive and soil disruptive calibration events developed at the James Reserve; achieving a 91% network yield from a Mica2 wireless sensing system without end-to-end reliability in Bangladesh; and the javelin, a new platform that facilitates the deployment, replacement and in-situ calibration of soil sensors, deployed in the San Joaquin River basin. Our techniques to increase information yield have already led to scientifically promising results, including previously unexpected diurnal cycles in various soil chemistry parameters across several deployments


Private Information Retrieval (PDF)
by Benny Chor, Oded Goldreich, Eyal Kushilevitz, and Madhu Sudan.
Conference paper.

Publicly accessible databases are an indispensable resource for retrieving up-to-date information. But they also pose a significant risk to the privacy of the user, since a curious database operator can follow the user's queries and infer what the user is after. Indeed, in cases where the users' intentions are to be kept secret, users are often cautious about accessing the database. It can be shown that when accessing a single database, to completely guarantee the privacy of the user, the whole database should be downloaded; namely n bits should be communicated (where n is the number of bits in the database). In this work, we investigate whether by replicating the database, more efficient solutions to the private retrieval problem can be obtained. We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual server (holding a replicated copy of the database) gets no information on the identity of the item retrieved by the user. Our schemes use the replication to gain substantial saving. In particular, we present a two-server scheme with communication complexity O(n^(1/3))
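
The flavor of the construction shows up already in the simplest two-server scheme, sketched below with linear communication (the paper's O(n^(1/3)) scheme refines this idea): each server sees a uniformly random index set, so neither learns anything about which bit the user wants.

    import secrets

    database = [1, 0, 1, 1, 0, 0, 1, 0]   # replicated at both servers
    n, i = len(database), 5               # the user wants bit i

    S1 = [secrets.randbelow(2) for _ in range(n)]   # uniformly random mask
    S2 = S1.copy()
    S2[i] ^= 1                                      # differs only at index i

    def server_answer(mask):
        # Each server returns the XOR of the bits its mask selects.
        bit = 0
        for select, value in zip(mask, database):
            if select:
                bit ^= value
        return bit

    # The two answers differ exactly by the contribution of bit i.
    assert server_answer(S1) ^ server_answer(S2) == database[i]
    print("retrieved bit:", server_answer(S1) ^ server_answer(S2))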


Receipt-Free MIX-Type Voting Scheme–A Practical Solution to the Implementation of a Voting Booth (PDF)
by Joe Kilian and Kazue Sako.
Conference paper.

We present a receipt-free voting scheme based on a mix-type anonymous channel [Cha81, PIK93]. The receipt-freeness property [BT94] enables voters to hide how they have voted even from a powerful adversary who is trying to coerce them. The work of [BT94] gave the first solution using a voting booth, which is a hardware assumption not unlike that in current physical elections. In our proposed scheme, we reduce the physical assumptions required to obtain receipt-freeness. Our sole physical assumption is the existence of a private channel through which the center can send the voter a message without fear of eavesdropping


Preserving Privacy in a Network of Mobile Computers (PDF)
by David A. Cooper and Kenneth P. Birman.
Conference paper.

Even as wireless networks create the potential for access to information from mobile platforms, they pose a problem for privacy. In order to retrieve messages, users must periodically poll the network. The information that the user must give to the network could potentially be used to track that user. However, the movements of the user can also be used to hide the user's location if the protocols for sending and retrieving messages are carefully designed. We have developed a replicated memory service which allows users to read from memory without revealing which memory locations they are reading. Unlike previous protocols, our protocol is efficient in its use of computation and bandwidth. We show how this protocol can be used in conjunction with existing privacy preserving protocols to allow a user of a mobile computer to maintain privacy despite active attacks


1996

An Empirical Study of Delta Algorithms
by James J. Hunt, Kiem-Phong Vo, and Walter F. Tichy.
Conference paper.

Delta algorithms compress data by encoding one file in terms of another. This type of compression is useful in a number of situations: storing multiple versions of data, distributing updates, storing backups, transmitting video sequences, and others. This paper studies the performance parameters of several delta algorithms, using a benchmark of over 1300 pairs of files taken from two successive releases of GNU software. Results indicate that modern delta compression algorithms based on Ziv-Lempel techniques significantly outperform diff, a popular but older delta compressor, in terms of compression ratio. The modern compressors also correlate better with the actual difference between files; one of them is even faster than diff in both compression and decompression speed


Establishing identity without certification authorities (PDF)
by Carl M. Ellison.
Conference paper.

The thesis of this paper is that a traditional identity certificate is neither necessary nor sufficient for establishing identity. It is especially useless if the two parties concerned did not have the foresight to obtain such certificates before desiring to open a secure channel. There are many methods for establishing identity without using certificates from trusted certification authorities. The relationship between verifier and subject guides the choice of method. Many of these relationships have easy, straight-forward methods for binding a public key to an identity, using a broadcast channel or 1:1 meetings, but one relationship makes it especially difficult. That relationship is one with an old friend with whom you had lost touch but who appears now to be available on the net. You make contact and share a few exchanges which suggest to you that this is, indeed, your old friend. Then you want to form a secure channel in order to carry on a more extensive conversation in private. This case is subject to the man-in-the-middle attack. For this case, a protocol is presented which binds a pair of identities to a pair of public keys without using any certificates issued by a trusted CA. The apparent direct conflict between conventional wisdom and the thesis of this paper lies in the definition of the word "identity" – a word which is commonly left undefined in discussions of certification


Mixing email with babel (PDF)
by Ceki Gülcü and Gene Tsudik.
Conference paper.

Increasingly large numbers of people communicate today via electronic means such as email or news forums. One of the basic properties of the current electronic communication means is the identification of the end-points. However, at times it is desirable or even critical to hide the identity and/or whereabouts of the end-points (e.g., human users) involved. This paper discusses the goals and desired properties of anonymous email in general and introduces the design and salient features of Babel anonymous remailer. Babel allows email users to converse electronically while remaining anonymous with respect to each other and to other– even hostile–parties. A range of attacks and corresponding countermeasures is considered. An attempt is made to formalize and quantify certain dimensions of anonymity and untraceable communication


Reducing Power Consumption of Network Interfaces in Hand-Held Devices (Extended Abstract) (PDF)
by Mark Stemm, Paul Gauthier, Daishi Harada, and Randy H. Katz.
Technical report.

An important issue to be addressed for the next generation of wirelessly-connected hand-held devices is battery longevity. In this paper we examine this issue from the point of view of the Network Interface (NI). In particular, we measure the power usage of two PDAs, the Apple Newton Messagepad and Sony Magic Link, and four NIs, the Metricom Ricochet Wireless Modem, the AT&T WaveLAN operating at 915 MHz and 2.4 GHz, and the IBM Infrared Wireless LAN Adapter. These measurements clearly indicate that the power drained by the network interface constitutes a large fraction of the total power used by the PDA. We also conduct trace-driven simulation experiments and show that by using application-specific policies it is possible to reduce the power consumed by the network interface


Hiding Routing Information (PDF)
by David Goldschlag, Michael Reed, and Paul Syverson.
Conference paper.

This paper describes an architecture, Onion Routing, that limits a network's vulnerability to traffic analysis. The architecture provides anonymous socket connections by means of proxy servers. It provides real-time, bi-directional, anonymous communication for any protocol that can be adapted to use a proxy service. Specifically, the architecture provides for bi-directional communication even though no-one but the initiator's proxy server knows anything but previous and next hops in the communication chain. This implies that neither the respondent nor his proxy server nor any external observer need know the identity of the initiator or his proxy server. A prototype of Onion Routing has been implemented. This prototype works with HTTP (World Wide Web) proxies. In addition, an analogous proxy for TELNET has been implemented. Proxies for FTP and SMTP are under development


Mixed constraint satisfaction: a framework for decision problems under incomplete knowledge (PDF)
by Hélène Fargier, Jérôme Lang, and Thomas Schiex.
Conference paper.

Constraint satisfaction is a powerful tool for representing and solving decision problems with complete knowledge about the world. We extend the CSP framework so as to represent decision problems under incomplete knowledge. The basis of the extension consists in a distinction between controllable and uncontrollable variables – hence the terminology "mixed CSP" – and a "solution" gives actually a conditional decision. We study the complexity of deciding the consistency of a mixed CSP. As the problem is generally intractable, we propose an algorithm for finding an approximate solution


Prospects for Remailers (PDF)
by Sameer Parekh.
In First Monday 1(2), August 1996.

Remailers have permitted Internet users to take advantage of the medium as a means to communicate with others globally on sensitive issues while maintaining a high degree of privacy. Recent events have clearly indicated that privacy is increasingly at risk on the global networks. Individual efforts have, so far, worked well in maintaining for most Internet users a modicum of anonymity. With the growth of increasingly sophisticated techniques to defeat anonymity, there will be a need for both standards and policies to continue to make privacy on the Internet a priority


The Eternity Service (PDF)
by Ross Anderson.
Conference paper.

The Internet was designed to provide a communications channel that is as resistant to denial of service attacks as human ingenuity can make it. In this note, we propose the construction of a storage medium with similar properties. The basic idea is to use redundancy and scattering techniques to replicate data across a large set of machines (such as the Internet), and add anonymity mechanisms to drive up the cost of selective service denial attacks. The detailed design of this service is an interesting scientific problem, and is not merely academic: the service may be vital in safeguarding individual rights against new threats posed by the spread of electronic publishing


1997

Computationally private information retrieval (extended abstract) (PDF)
by Benny Chor and Niv Gilboa.
Conference paper.

Private information retrieval (PIR) schemes enable a user to access k replicated copies of a database (k ≥ 2), and privately retrieve one of the n bits of data stored in the databases. This means that the queries give each individual database no partial information (in the information theoretic sense) on the identity of the item retrieved by the user. Today, the best two database scheme (k = 2) has communication complexity O(n^(1/3)), while for any constant number, k, the best k database scheme has communication complexity O(n^(1/(2k-1))). The motivation for the present work is the question whether this complexity can be reduced if one is willing to achieve computational privacy, rather than information theoretic privacy. (This means that privacy is guaranteed only with respect to databases that are restricted to polynomial time computations.) We answer this question affirmatively


Fault Tolerant Anonymous Channel (PDF)
by Wakaha Ogata, Kaoru Kurosawa, Kazue Sako, and Kazunori Takatani.
Book.

This paper describes a zero-knowledge proof that a mix in onion routing can perform in order to prove that it routed the messages properly. This allows the deployment of a mix-net where malicious mixes can be detected without using dummy traffic to probe for correctness

[Go to top]

A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing (PDF)
by Sally Floyd, Van Jacobson, Ching-Gung Liu, Steven McCanne, and Lixia Zhang.
In IEEE/ACM Trans. Netw 5, 1997, pages 784-803. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for light-weight sessions and application level framing. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The SRM framework has been prototyped in wb, a distributed whiteboard application, which has been used on a global scale with sessions ranging from a few to a few hundred participants. The paper describes the principles that have guided the SRM design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies

[Go to top]

Privacy-enhancing Technologies for the Internet (PDF)
by Ian Goldberg, David Wagner, and Eric Brewer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The increased use of the Internet for everyday activities is bringing new threats to personal privacy. This paper gives an overview of existing and potential privacy-enhancing technologies for the Internet, as well as motivation and challenges for future work in this field

[Go to top]

Practical Loss-Resilient Codes (PDF)
by Michael Luby, Michael Mitzenmacher, M. Amin Shokrollahi, Daniel A. Spielman, and Volker Stemann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a randomized construction of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as polynomials in one variable with coefficients determined by the graph structure. Based on these polynomials, we design a graph structure that guarantees successful decoding with high probability

[Go to top]

TAZ servers and the rewebber network: Enabling anonymous publishing on the world wide web (PDF)
by Ian Goldberg and David Wagner.
In First Monday 3(4), August 1997. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The World Wide Web has recently matured enough to provide everyday users with an extremely cheap publishing mechanism. However, the current WWW architecture makes it fundamentally difficult to provide content without identifying yourself. We examine the problem of anonymous publication on the WWW, propose a design suitable for practical deployment, and describe our implementation. Some key features of our design include universal accessibility by pre-existing clients, short persistent names, security against social, legal, and political pressure, protection against abuse, and good performance

[Go to top]

1998

Analysis of random processes via And-Or tree evaluation (PDF)
by Michael Luby, Michael Mitzenmacher, and M. Amin Shokrollahi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above

[Go to top]

Anonymous Connections and Onion Routing (PDF)
by Michael Reed, Paul Syverson, and David Goldschlag.
In IEEE Journal on Selected Areas in Communications 16, 1998, pages 482-494. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing's anonymous connections are bidirectional and near realtime, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over an anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear differently to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy aware applications, such as web browsing and e-mail, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their implementation
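
A minimal Python sketch of the layering idea, assuming a symmetric key has already been shared with each router (real onion routing negotiates these keys with public-key cryptography and embeds next-hop addressing inside each layer); it uses the third-party cryptography package:

    # pip install cryptography
    from cryptography.fernet import Fernet

    hop_keys = [Fernet.generate_key() for _ in range(3)]  # one key per router

    def build_onion(message, keys):
        # Encrypt innermost-first, so the first router peels the outer layer
        onion = message
        for key in reversed(keys):
            onion = Fernet(key).encrypt(onion)
        return onion

    onion = build_onion(b"hello", hop_keys)
    for key in hop_keys:                # each hop removes exactly one layer
        onion = Fernet(key).decrypt(onion)
    assert onion == b"hello"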

[Go to top]

Crowds: Anonymity for web transactions (PDF)
by Michael K. Reiter and Aviel D. Rubin.
In ACM Transactions on Information and System Security 1, 1998, pages 66-92. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Crowds is a system that allows anonymous web surfing. For each host, a random static path through the crowd is formed; the path then acts as a sequence of proxies, relaying requests and replies. The system is vulnerable to adversaries that can perform traffic analysis at the local node and offers no responder anonymity, but it is highly scalable and efficient
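
The forwarding rule itself is tiny; a toy simulation in Python, with an assumed forwarding probability p_f = 3/4 (Crowds requires p_f > 1/2):

    import random

    FORWARD_PROB = 0.75  # p_f

    def crowds_path(members, initiator):
        # The initiator always forwards to a random member; each member
        # then forwards again with probability p_f or submits the request
        path = [initiator, random.choice(members)]
        while random.random() < FORWARD_PROB:
            path.append(random.choice(members))
        return path  # the last member on the path submits to the web server

    print(crowds_path(list(range(10)), initiator=0))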

[Go to top]

Low Density MDS Codes and Factors of Complete Graphs (PDF)
by Lihao Xu, Vasken Bohossian, Jehoshua Bruck, and David Wagner.
In IEEE Trans. on Information Theory 45, 1998, pages 1817-1826. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We reveal an equivalence relation between the construction of a new class of low density MDS array codes, that we call B-Code, and a combinatorial problem known as perfect one-factorization of complete graphs. We use known perfect one-factors of complete graphs to create constructions and decoding algorithms for both B-Code and its dual code. B-Code and its dual are optimal in the sense that (i) they are MDS, (ii) they have an optimal encoding property, i.e., the number of the parity bits that are affected by a change of a single information bit is minimal, and (iii) they have optimal length. The existence of perfect one-factorizations for every complete graph with an even number of nodes is a conjecture that has stood in graph theory for 35 years. The construction of B-Codes of arbitrary odd length would provide an affirmative answer to the conjecture
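
For reference, the classic circle-method construction of a one-factorization of the complete graph K_{2n}, sketched in Python; whether such a factorization can always be chosen perfect (the union of any two factors forming a Hamiltonian cycle) is exactly the conjecture mentioned above:

    def one_factorization(num_vertices):
        # Circle method: vertex m = num_vertices - 1 acts as the hub; in
        # round k it pairs with k, and the rest pair up symmetrically
        m = num_vertices - 1
        factors = []
        for k in range(m):
            factor = [(m, k)]
            for i in range(1, m // 2 + 1):
                factor.append(((k + i) % m, (k - i) % m))
            factors.append(factor)
        return factors

    for f in one_factorization(6):      # 5 perfect matchings covering K_6
        print(sorted(tuple(sorted(e)) for e in f))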

[Go to top]

Modelling with Generalized Stochastic Petri Nets (PDF)
by Marco Ajmone Marsan, Gianfranco Balbo, Gianni Conte, Susanna Donatelli, and Giuliana Franceschinis.
In SIGMETRICS Perform. Eval. Rev 26(2), 1998, pages 0-2. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

PipeNet 1.1
by Wei Dai.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A Random Server Model for Private Information Retrieval or How to Achieve Information Theoretic PIR Avoiding Database Replication (PDF)
by Yael Gertner, Shafi Goldwasser, and Tal Malkin.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private information retrieval (PIR) schemes provide a user with information from a database while keeping his query secret from the database manager. We propose a new model for PIR, utilizing auxiliary random servers providing privacy services for database access. The principal database initially engages in a preprocessing setup computation with the random servers, followed by the on-line stage with the users. Using this model we achieve the first PIR information theoretic solutions in which the database does not need to give away its data to be replicated, and with minimal on-line computation cost for the database. This solves privacy and efficiency problems inherent to all previous solutions. Specifically, in all previously existing PIR schemes the database's on-line computation for one query is at least linear in the size of the data, and all previous information theoretic schemes require multiple replications of the database which are not allowed to communicate with each other. This poses a privacy problem for the database manager, who is required to hand his data to multiple foreign entities, and for the user, who is supposed to trust the multiple copies of the database not to communicate. In contrast, in our solutions no replication is needed, and the database manager only needs to perform O(1) amount of computation to answer questions of users, while all the extra computations required on line for privacy are done by the auxiliary random servers, which contain no information about the data

[Go to top]

Real-Time MIXes: A Bandwidth-Efficient Anonymity Protocol
by Anja Jerichow, Jan Müller, Andreas Pfitzmann, Birgit Pfitzmann, and Michael Waidner.
In IEEE Journal on Selected Areas in Communications 16(4), 1998, pages 495-509. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present techniques for efficient anonymous communication with real-time constraints as necessary for services like telephony, where a continuous data stream has to be transmitted. For concreteness, we present the detailed protocols for the narrow-band ISDN (integrated services digital network), although the heart of our techniques-anonymous channels-can also be applied to other networks. For ISDN, we achieve the same data rate as without anonymity, using the same subscriber lines and without any significant modifications to the long-distance network. A precise performance analysis is given. Our techniques are based on mixes, a method for anonymous communication for e-mail-like services introduced by D. Chaum (1981)

[Go to top]

Secure Multi-Party Computation
by Oded Goldreich.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Stop-and-Go MIXes: Providing Probabilistic Anonymity in an Open System (PDF)
by Dogan Kesdogan, Jan Egner, and Roland Büschkes.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Currently known basic anonymity techniques depend on identity verification. If verification of user identities is not possible due to the related management overhead or a general lack of information (e.g. on the Internet), an adversary can participate several times in a communication relationship and observe the honest users. In this paper we focus on the problem of providing anonymity without identity verification. The notion of probabilistic anonymity is introduced. Probabilistic anonymity is based on a publicly known security parameter, which determines the security of the protocol. For probabilistic anonymity the insecurity, expressed as the probability of having only one honest participant, approaches 0 at an exponential rate as the security parameter is changed linearly. Based on our security model we propose a new MIX variant called Stop-and-Go-MIX (SG-MIX) which provides anonymity without identity verification, and prove that it is probabilistically secure
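
A sketch of the SG-MIX delay rule in Python, under the assumption that each message is held for an independently drawn exponential delay; the independent delays probabilistically decouple the output order from the input order:

    import random

    MU = 2.0  # mean per-mix delay; larger means more anonymity, more latency

    def sg_mix(arrivals):
        # Hold each message for an independent exponential delay, then
        # forward the messages in departure-time order
        departures = [(t + random.expovariate(1.0 / MU), msg)
                      for t, msg in arrivals]
        return [msg for _, msg in sorted(departures)]

    arrivals = [(0.0, "a"), (0.1, "b"), (0.2, "c"), (0.3, "d")]
    print(sg_mix(arrivals))  # frequently a permutation of the input order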

[Go to top]

Universally Verifiable mix-net With Verification Work Independent of The Number of mix Servers
by Masayuki Abe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we construct a universally verifiable Mix-net where the amount of work done by a verifier is independent of the number of mix-servers. Furthermore, the computational task of each mix-server is constant against the number of mix-servers except for some negligible tasks like addition. The scheme is robust, too

[Go to top]

PipeNet 1.0 (PDF)
by Wei Dai.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A digital fountain approach to reliable distribution of bulk data (PDF)
by John W. Byers, Michael Luby, Michael Mitzenmacher, and Ashutosh Rege.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The proliferation of applications that must reliably distribute bulk data to a large number of autonomous clients motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous clients to acquire bulk data with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates. We develop a protocol that closely approximates a digital fountain using a new class of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation and performance of an experimental system
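
A toy fountain-style encoder and peeling decoder in Python, with an arbitrary degree distribution rather than the carefully designed one the paper relies on; it conveys only the "collect any enough symbols, then peel" idea:

    import random

    def encode_symbol(blocks, rng):
        # XOR a random small subset of source blocks into one output symbol
        d = rng.choice([1, 1, 2, 2, 3, 4])   # toy degree distribution
        idx = set(rng.sample(range(len(blocks)), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        return idx, val

    def peel_decode(n, symbols):
        # Repeatedly find a symbol with one unknown block, recover that
        # block, and substitute it everywhere (the "peeling" process)
        recovered = {}
        pending = [(set(idx), val) for idx, val in symbols]
        progress = True
        while progress and len(recovered) < n:
            progress = False
            for idx, val in pending:
                live = idx - recovered.keys()
                if len(live) == 1:
                    residual = val
                    for i in idx & recovered.keys():
                        residual ^= recovered[i]
                    recovered[live.pop()] = residual
                    progress = True
        return recovered

    rng = random.Random(1)
    blocks = [rng.randrange(256) for _ in range(8)]
    stream = [encode_symbol(blocks, rng) for _ in range(20)]
    out = peel_decode(len(blocks), stream)
    print(f"recovered {len(out)} of {len(blocks)} blocks")  # usually all 8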

[Go to top]

The Design, Implementation and Operation of an Email Pseudonym Server (PDF)
by David Mazières and Frans M. Kaashoek.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Attacks on servers that provide anonymity generally fall into two categories: attempts to expose anonymous users and attempts to silence them. Much existing work concentrates on withstanding the former, but the threat of the latter is equally real. One particularly effective attack against anonymous servers is to abuse them and stir up enough trouble that they must shut down. This paper describes the design, implementation, and operation of nym.alias.net, a server providing untraceable email aliases. We enumerate many kinds of abuse the system has weathered during two years of operation, and explain the measures we enacted in response. From our experiences, we distill several principles by which one can protect anonymous servers from similar attacks

[Go to top]

1999

Ant algorithms for discrete optimization (PDF)
by Marco Dorigo, Gianni Caro, and Luca M. Gambardella.
In Artif. Life 5(2), 1999, pages 137-172. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic
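
A compact ACO sketch in Python for a small travelling-salesman instance, with pheromone evaporation and quality-proportional deposit; the parameter names alpha, beta and rho follow common ACO usage, and the values are illustrative only:

    import random

    def aco_tsp(dist, n_ants=20, n_iters=50, rho=0.5, alpha=1.0, beta=2.0):
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]      # pheromone trails
        best_tour, best_len = None, float("inf")
        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:
                    i = tour[-1]
                    cand = [j for j in range(n) if j not in tour]
                    # pheromone ** alpha times heuristic desirability ** beta
                    w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                         for j in cand]
                    tour.append(random.choices(cand, weights=w)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((length, tour))
                if length < best_len:
                    best_len, best_tour = length, tour
            # evaporate, then deposit pheromone proportional to tour quality
            tau = [[(1 - rho) * t for t in row] for row in tau]
            for length, tour in tours:
                for k in range(n):
                    a, b = tour[k], tour[(k + 1) % n]
                    tau[a][b] += 1.0 / length
                    tau[b][a] += 1.0 / length
        return best_tour, best_len

    d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(aco_tsp(d))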

[Go to top]

Burt: The Backup and Recovery Tool (PDF)
by Eric Melski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Burt is a freely distributed parallel network backup system written at the University of Wisconsin, Madison. It is designed to backup large heterogeneous networks. It uses the Tcl scripting language and standard backup programs like dump(1) and GNUTar to enable backups of a wide variety of data sources, including UNIX and Windows NT workstations, AFS based storage, and others. It also uses Tcl for the creation of the user interface, giving the system administrator great flexibility in customizing the system. Burt supports parallel backups to ensure high backup speeds, and checksums to ensure data integrity. The principal contribution of Burt is that it provides a powerful I/O engine within the context of a flexible scripting language; this combination enables graceful solutions to many problems associated with backups of large installations. At our site, we use Burt to backup data from 350 workstations and from our AFS servers, a total of approximately 900 GB every two weeks

[Go to top]

Deciding when to forget in the Elephant file system (PDF)
by Douglas S. Santry, Michael J. Feeley, Norman C. Hutchinson, Alistair C. Veitch, Ross W. Carton, and Jacob Ofir.
In SIGOPS Oper. Syst. Rev 33(5), 1999, pages 110-123. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. Storage in Elephant is managed by the system using file-grain user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel

[Go to top]

A Distributed Decentralized Information Storage and Retrieval System
by Ian Clarke.
Ph.D. thesis, University of Edinburgh, 1999. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This report describes an algorithm which if executed by a group of interconnected nodes will provide a robust key-indexed information storage and retrieval system with no element of central control or administration. It allows information to be made available to a large group of people in a similar manner to the "World Wide Web". Improvements over this existing system include: no central control or administration required; anonymous information publication and retrieval; dynamic duplication of popular information; and transfer of information location depending upon demand. There is also potential for this system to be used in a modified form as an information publication system within a large organisation which may wish to utilise unused storage space which is distributed across the organisation. The system's reliability is not guaranteed, nor is its efficiency; however, the intention is that the efficiency and reliability will be sufficient to make the system useful, and demonstrate that

[Go to top]

Flash mixing (PDF)
by Markus Jakobsson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

New Sequences of Linear Time Erasure Codes Approaching the Channel Capacity (PDF)
by M. Amin Shokrollahi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We will introduce a new class of erasure codes built from irregular bipartite graphs that have linear time encoding and decoding algorithms and can transmit over an erasure channel at rates arbitrarily close to the channel capacity. We also show that these codes are close to optimal with respect to the trade-off between the proximity to the channel capacity and the running time of the recovery algorithm

[Go to top]

Next century challenges: scalable coordination in sensor networks (PDF)
by Deborah Estrin, Ramesh Govindan, John Heidemann, and Satish Kumar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Networked sensors – those that coordinate amongst themselves to achieve a larger sensing task – will revolutionize information gathering and processing both in urban environments and in inhospitable terrain. The sheer numbers of these sensors and the expected dynamics in these environments present unique challenges in the design of unattended autonomous sensor networks. These challenges lead us to hypothesize that sensor network coordination applications may need to be structured differently from traditional network applications. In particular, we believe that localized algorithms (in which simple local node behavior achieves a desired global objective) may be necessary for sensor network coordination. In this paper, we describe localized algorithms, and then discuss directed diffusion, a simple communication model for describing localized algorithms

[Go to top]

Onion Routing for Anonymous and Private Internet Connections (PDF)
by David Goldschlag, Michael Reed, and Paul Syverson.
In Communications of the ACM 42, 1999, pages 39-41. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As of this article's publication, the prototype network is processing more than 1 million Web connections per month from more than six thousand IP addresses in twenty countries and in all six main top level domains [7]. Onion Routing operates by dynamically building anonymous connections within a network of real-time Chaum Mixes [3]. A Mix is a store and forward device that accepts a number of fixed-length messages from numerous sources, performs cryptographic transformations on the messages, and then forwards the messages to the next destination in a random order. A single Mix makes tracking of a particular message either by specific bit-pattern, size, or ordering with respect to other messages difficult. By routing through numerous Mixes in the network, determining who is talking to whom becomes even more difficult. Onion Routing's network of core onion-routers (Mixes) is distributed, fault-tolerant, and under the control of multiple administrative domains, so no single onion-router can bring down the network or compromise a user's privacy, and cooperation between compromised onion-routers is thereby confounded

[Go to top]

Operation-based update propagation in a mobile file system (PDF)
by Yui-Wah Lee, Kwong-Sak Leung, and Mahadev Satyanarayanan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we describe a technique called operation-based update propagation for efficiently transmitting updates to large files that have been modified on a weakly connected client of a distributed file system. In this technique, modifications are captured above the file-system layer at the client, shipped to a surrogate client that is strongly connected to a server, re-executed at the surrogate, and the resulting files transmitted from the surrogate to the server. If re-execution fails to produce a file identical to the original, the system falls back to shipping the file from the client over the slow network. We have implemented a prototype of this mechanism in the Coda File System on Linux, and demonstrated performance improvements ranging from 40 percent to nearly three orders of magnitude in reduced network traffic and elapsed time. We also found a novel use of forward error correction in this context

[Go to top]

Public-key Cryptosystems Based on Composite Degree Residuosity Classes (PDF)
by Pascal Paillier.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes : a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model
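
A toy Paillier implementation in Python with the standard choice g = n + 1 and deliberately small primes (real use needs large random primes); the final assertion demonstrates the additive homomorphism described above:

    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    def keygen(p, q):
        # p, q: distinct primes (toy sizes here)
        n = p * q
        lam = lcm(p - 1, q - 1)
        g = n + 1                  # standard generator choice
        mu = pow(lam, -1, n)       # with g = n + 1, L(g^lam mod n^2) = lam mod n
        return (n, g), (lam, mu)

    def encrypt(pk, m, r):
        # r must be coprime to n; it randomizes the ciphertext
        n, g = pk
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(pk, sk, c):
        n, _ = pk
        lam, mu = sk
        L = (pow(c, lam, n * n) - 1) // n
        return (L * mu) % n

    pk, sk = keygen(211, 223)
    c1, c2 = encrypt(pk, 41, 17), encrypt(pk, 1, 29)
    # additive homomorphism: multiplying ciphertexts adds plaintexts mod n
    assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 42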

[Go to top]

Group Principals and the Formalization of Anonymity (PDF)
by Paul Syverson and Stuart Stubblebine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce the concept of a group principal and present a number of different classes of group principals, including threshold-group-principals. These appear to be naturally useful concepts for reasoning about security. We provide an associated epistemic language and logic and use it to reason about anonymity protocols and anonymity services, where protection properties are formulated from the intruder's knowledge of group principals. Using our language, we give an epistemic characterization of anonymity properties. We also present a specification of a simple anonymizing system using our theory

[Go to top]

The Theory of Moral Hazard and Unobservable Behaviour: Part I
by James A. Mirrlees.
In Review of Economic Studies 66(1), January 1999, pages 3-21. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This article presents information on principal-agent models in which outcomes conditional on the agent's action are uncertain, and the agent's behavior therefore unobservable. For a model with bounded agent's utility, conditions are given under which the first-best equilibrium can be approximated arbitrarily closely by contracts relating payment to observable outcomes. For general models, it is shown that the solution may not always be obtained by using the agent's first-order conditions as constraint. General conditions of Lagrangean type are given for problems in which contracts are finite-dimensional

[Go to top]

Algorithms for Selfish Agents (PDF)
by Noam Nisan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper considers algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. Such scenarios arise, in particular, when computers or users aim to cooperate or trade over the Internet. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. This exposition presents a model to formally study such algorithms. This model, based on the field of mechanism design, is taken from the author's joint work with Amir Ronen, and is similar to approaches taken in the distributed AI community in recent years. Using this model, we demonstrate how some of the techniques of mechanism design can be applied towards distributed computation problems. We then exhibit some issues that arise in distributed computation which require going beyond the existing theory of mechanism design

[Go to top]

Algorithmic Mechanism Design (PDF)
by Noam Nisan and Amir Ronen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice

[Go to top]

2000

Adapting Publish/Subscribe Middleware to Achieve Gnutella-like Functionality (PDF)
by Dennis Heimbigner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Gnutella represents a new wave of peer-to-peer applications providing distributed discovery and sharing of resources across the Internet. Gnutella is distinguished by its support for anonymity and by its decentralized architecture. The current Gnutella architecture and protocol have numerous flaws with respect to efficiency, anonymity, and vulnerability to malicious actions. An alternative design is described that provides Gnutella-like functionality but removes or mitigates many of Gnutella's flaws. This design, referred to as Query/Advertise (Q/A), is based upon a scalable publish/subscribe middleware system called Siena. A prototype implementation of Q/A is described. The relative benefits of this approach are discussed, and a number of open research problems are identified with respect to Q/A systems

[Go to top]

Anonymity, Unobservability, and Pseudonymity–A Proposal for Terminology
by Andreas Pfitzmann and Marit Köhntopp.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Attack for Flash MIX (PDF)
by Masashi Mitomo and Kaoru Kurosawa.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A MIX net takes a list of ciphertexts (c_1, ..., c_N) and outputs a permuted list of the plaintexts (m_1, ..., m_N) without revealing the relationship between (c_1, ..., c_N) and (m_1, ..., m_N). This paper shows that Jakobsson's flash MIX of PODC'99, which was believed to be the most efficient robust MIX net, is broken. In our attack, the first MIX server can prevent the correct output from being computed, with probability 1. We also present a countermeasure for our attack

[Go to top]

Energy-Efficient Communication Protocol for Wireless Microsensor Networks (PDF)
by Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari Balakrishnan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated
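
A Python sketch of LEACH's randomized cluster-head rotation, using the threshold T(n) = P / (1 - P (r mod 1/P)) from the paper; the value of P and the epoch loop are illustrative:

    import random

    P = 0.05  # desired fraction of cluster heads per round

    def elect_cluster_heads(nodes, r, been_head):
        # Nodes that have not yet served as cluster head in the current
        # epoch volunteer with probability T(n), rotating the costly role
        threshold = P / (1 - P * (r % int(1 / P)))
        return [n for n in nodes
                if n not in been_head and random.random() < threshold]

    nodes = list(range(100))
    been_head = set()
    for r in range(int(1 / P)):         # one full epoch
        heads = elect_cluster_heads(nodes, r, been_head)
        been_head.update(heads)

The threshold rises toward 1 as the epoch progresses, so every node is guaranteed to serve once per epoch before the rotation resets.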

[Go to top]

Enforcing service availability in mobile ad-hoc WANs (PDF)
by Levente Buttyán and Jean-Pierre Hubaux.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we address the problem of service availability in mobile ad-hoc WANs. We present a secure mechanism to stimulate end users to keep their devices turned on, to refrain from overloading the network, and to thwart tampering aimed at converting the device into a "selfish" one. Our solution is based on the application of a tamper resistant security module in each device and cryptographic protection of messages

[Go to top]

Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs (PDF)
by William J. Bolosky, John R. Douceur, David Ely, and Marvin Theimer.
In SIGMETRICS Performance Evaluation Review 28(1), 2000, pages 34-43. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days

[Go to top]

Fisheye State Routing in Mobile Ad Hoc Networks (PDF)
by Guangyu Pei, Mario Gerla, and Tsu-Wei Chen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present a novel routing protocol for wireless ad hoc networks – Fisheye State Routing (FSR). FSR introduces the notion of multi-level fisheye scope to reduce routing update overhead in large networks. Nodes exchange link state entries with their neighbors with a frequency which depends on distance to destination. From link state entries, nodes construct the topology map of the entire network and compute optimal routes. Simulation experiments show that FSR is a simple, efficient and scalable routing solution in a mobile, ad hoc environment

[Go to top]

Freenet: A Distributed Anonymous Information Storage and Retrieval System (PDF)
by Ian Clarke, Oskar Sandberg, Brandon Wiley, and Theodore W. Hong.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node
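
A greatly simplified Python sketch of key-closeness routing in the spirit of Freenet; real Freenet routes on hashed keys, backtracks on failure, and caches data along the reply path, and the numeric node keys here are only a stand-in:

    def route_request(nodes, start, key, ttl=10):
        # Each node forwards the request to the neighbor whose advertised
        # key is closest to the requested key
        current, path = start, [start]
        for _ in range(ttl):
            if key in nodes[current]["store"]:
                return path, nodes[current]["store"][key]
            neighbors = nodes[current]["neighbors"]
            current = min(neighbors, key=lambda nb: abs(nodes[nb]["key"] - key))
            path.append(current)
        return path, None  # not found within the hops-to-live budget

    nodes = {
        0: {"key": 10, "store": {}, "neighbors": [1, 2]},
        1: {"key": 40, "store": {}, "neighbors": [0, 3]},
        2: {"key": 15, "store": {}, "neighbors": [0]},
        3: {"key": 55, "store": {55: b"data"}, "neighbors": [1]},
    }
    print(route_request(nodes, start=0, key=55))  # ([0, 1, 3], b'data')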

[Go to top]

How To Break a Practical MIX and Design a New One (PDF)
by Yvo Desmedt and Kaoru Kurosawa.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A MIX net takes a list of ciphertexts (c_1, ..., c_N) and outputs a permuted list of the plaintexts (m_1, ..., m_N) without revealing the relationship between (c_1, ..., c_N) and (m_1, ..., m_N). This paper first shows that Jakobsson's MIX net of Eurocrypt'98, which was believed to be resilient and very efficient, is broken. We next propose an efficient t-resilient MIX net with O(t^2) servers in which the cost of each MIX server is O(N). Two new concepts are introduced, existential-honesty and limited-open-verification. They will be useful for distributed computation in general

[Go to top]

A Length-Invariant Hybrid MIX (PDF)
by Miyako Ohkubo and Masayuki Abe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents a secure and flexible Mix-net that has the following properties; it efficiently handles long plaintexts that exceed the modulus size of underlying public-key encryption as well as very short ones (length-flexible), input ciphertext length is not impacted by the number of mix-servers (length-invariant), and its security in terms of anonymity is proven in a formal way (provably secure). One can also add robustness i.e. it outputs correct results in the presence of corrupt servers. The security is proved in the random oracle model by showing a reduction from breaking the anonymity of our Mix-net to breaking a sort of indistinguishability of the underlying symmetric encryption scheme or solving the Decision Diffie-Hellman problem

[Go to top]

OceanStore: an architecture for global-scale persistent storage (PDF)
by John Kubiatowicz, David Bindel, Yan Chen, Steven Czerwinski, Patrick Eaton, Dennis Geels, Ramakrishna Gummadi, Sean C. Rhea, Hakim Weatherspoon, Chris Wells, and Ben Y. Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development

[Go to top]

Onion Routing Access Configurations (PDF)
by Paul Syverson, Michael Reed, and David Goldschlag.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Thus it hides not only the data being sent, but who is talking to whom. Onion Routing's anonymous connections are bidirectional and near real-time, and can be used anywhere a socket connection can be used. Proxy aware applications, such as web browsing and e-mail, require no modification to use Onion Routing, and do so through a series of proxies. Other applications, such as remote login, can also use the system without modification. Access to an onion routing network can be configured in a variety of ways depending on the needs, policies, and facilities of those connecting. This paper describes some of these access configurations and also provides a basic overview of Onion Routing and comparisons with related work

[Go to top]

A Protocol for Anonymous Communication Over the Internet (PDF)
by Clay Shields and Brian Neil Levine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents a new protocol for initiator anonymity called Hordes, which uses forwarding mechanisms similar to those used in previous protocols for sending data, but is the first protocol to make use of the anonymity inherent in multicast routing to receive data. We show that this results in shorter transmission latencies and requires less work from the protocol participants, in terms of the messages processed. We also present a comparison of the security and anonymity of Hordes with previous protocols, using the first quantitative definition of anonymity and unlinkability. Our analysis shows that Hordes provides anonymity to a degree similar to that of Crowds and Onion Routing, but also that Hordes has numerous performance advantages

[Go to top]

On the Scaling of Feedback Algorithms for Very Large Multicast Groups (PDF)
by Thomas Fuhrmann.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Feedback from multicast group members is vital for many multicast protocols. In order to avoid feedback implosion in very large groups, feedback algorithms with well-behaved scaling properties must be chosen. In this paper we analyse the performance of three typical feedback algorithms described in the literature. Apart from the basic trade-off between feedback latency and response duplicates, we especially focus on the algorithms' sensitivity to the quality of the group size estimation. Based on this analysis we give recommendations for the choice of well-behaved feedback algorithms that are suitable for very large groups

[Go to top]

Set Reconciliation with Nearly Optimal Communication Complexity (PDF)
by Yaron Minsky, Ari Trachtenberg, and Richard Zippel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Seven Degrees of Separation in Mobile Ad Hoc Networks (PDF)
by Maria Papadopouli and Henning G. Schulzrinne.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an architecture that enables the sharing of information among mobile, wireless, collaborating hosts that experience intermittent connectivity to the Internet. Participants in the system obtain data objects from Internet-connected servers, cache them and exchange them with others who are interested in them. The system exploits the fact that there is a high locality of information access within a geographic area. It aims to increase the data availability to participants with lost connectivity to the Internet. We discuss the main components of the system and possible applications. Finally, we present simulation results that show that the ad hoc networks can be very effective in distributing popular information

[Go to top]

The small-world phenomenon: an algorithm perspective (PDF)
by Jon Kleinberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Long a matter of folklore, the small-world phenomenon (the principle that we are all linked by short chains of acquaintances) was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960's. This work was among the first to make the phenomenon quantitative, allowing people to speak of the "six degrees of separation" between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. One of the most refined of these models was formulated in recent work of Watts and Strogatz; their framework provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. But existing models are insufficient to explain the striking algorithmic component of Milgram's original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove that no decentralized algorithm, operating with local information only, can construct short paths in these networks with non-negligible probability. We then define an infinite family of network models that naturally generalizes the Watts-Strogatz model, and show that for one of these models, there is a decentralized algorithm capable of finding short paths with high probability. More generally, we provide a strong characterization of this family of network models, showing that there is in fact a unique model within the family for which decentralized algorithms are effective
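
A small Python simulation of the navigable model: each node on an n x n grid gets one long-range contact drawn with probability proportional to d^{-r}, and greedy routing forwards to whichever known contact is closest to the target. With r = 2 (the uniquely navigable exponent in the paper) the expected hop count is polylogarithmic:

    import random

    def lattice_dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def make_long_range_links(n, r=2.0):
        # One long-range contact per node, chosen with weight d^{-r}
        links = {}
        for x in range(n):
            for y in range(n):
                others = [(i, j) for i in range(n) for j in range(n)
                          if (i, j) != (x, y)]
                w = [lattice_dist((x, y), o) ** -r for o in others]
                links[(x, y)] = random.choices(others, weights=w)[0]
        return links

    def greedy_route(n, links, src, dst):
        # Forward to the known contact (grid or long-range) nearest the target
        hops, cur = 0, src
        while cur != dst:
            x, y = cur
            neighbors = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= x + dx < n and 0 <= y + dy < n]
            neighbors.append(links[cur])
            cur = min(neighbors, key=lambda v: lattice_dist(v, dst))
            hops += 1
        return hops

    n = 20
    links = make_long_range_links(n)
    print(greedy_route(n, links, (0, 0), (n - 1, n - 1)))

Since some grid neighbor always lies strictly closer to the destination, the greedy rule makes progress on every hop and is guaranteed to terminate.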

[Go to top]

Trust Economies in The Free Haven Project (PDF)
by Ron Rivest, Arthur C. Smith, and Brian T. Sniffen.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Free Haven Project aims to deploy a system for distributed data storage which is robust against attempts by powerful adversaries to find and destroy stored data. Free Haven uses a secure mixnet for communication, and it emphasizes distributed, reliable, and anonymous storage over efficient retrieval. We provide a system for building trust between pseudonymous entities, based entirely on records of observed behavior. Modelling these observed behaviors as an economy allows us to draw heavily on previous economic theory, as well as on existing data havens which base their accountability on financial loss. This trust system provides a means of enforcing accountability without sacrificing anonymity

[Go to top]

Trust-region methods
by Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

XMill: an efficient compressor for XML data (PDF)
by Hartmut Liefke and Dan Suciu.
In SIGMOD Rec 29(2), 2000, pages 153-164. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a tool for compressing XML data, with applications in data exchange and archiving, which usually achieves about twice the compression ratio of gzip at roughly the same speed. The compressor, called XMill, incorporates and combines existing compressors in order to apply them to heterogeneous XML data: it uses zlib, the library function for gzip, a collection of datatype specific compressors for simple data types, and, possibly, user defined compressors for application specific data types
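
The core idea, sketched in Python: separate the repetitive structure from the character data and compress the streams independently. Real XMill parses XML, keeps one container per path, and plugs in type-specific compressors; this shows only the separation step:

    import zlib

    def xmill_style_compress(records):
        # Group tags and values into separate streams so each stream is
        # more self-similar, then compress each stream on its own
        structure, values = [], []
        for tag, text in records:
            structure.append(tag)
            values.append(text)
        return (zlib.compress("\n".join(structure).encode()),
                zlib.compress("\n".join(values).encode()))

    records = [("name", "alice"), ("age", "30"), ("name", "bob"), ("age", "31")]
    s, v = xmill_style_compress(records)
    print(len(s), len(v))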

[Go to top]

Xor-trees for efficient anonymous multicast and reception (PDF)
by Shlomi Dolev and Rafail Ostrovsky.
In ACM Trans. Inf. Syst. Secur 3(2), 2000, pages 63-84. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this work we examine the problem of efficient anonymous broadcast and reception in general communication networks. We show an algorithm which achieves anonymous communication with O(1) amortized communication complexity on each link and low computational complexity. In contrast, all previous solutions require polynomial (in the size of the network and security parameter) amortized communication complexity
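
The underlying XOR-broadcast primitive, sketched in Python as a toy DC-net round: pairwise pads cancel in the global XOR, leaving the message while hiding which node sent it. The paper's actual contribution, arranging this with O(1) amortized cost per link via xor-trees, is not shown:

    import secrets

    def xor_broadcast(n_nodes, sender, message):
        # Every pair of nodes shares a random pad; each node announces the
        # XOR of its pads, and the sender additionally XORs in the message
        pads = {}
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                pads[(i, j)] = secrets.randbits(32)
        announcements = []
        for i in range(n_nodes):
            a = 0
            for (x, y), pad in pads.items():
                if i in (x, y):
                    a ^= pad
            if i == sender:
                a ^= message
            announcements.append(a)
        # Each pad appears in exactly two announcements, so it cancels
        result = 0
        for a in announcements:
            result ^= a
        return result

    assert xor_broadcast(5, sender=2, message=0xC0FFEE) == 0xC0FFEE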

[Go to top]

Practical Techniques for Searches on Encrypted Data (PDF)
by Dawn Xiaodong Song, David Wagner, and Adrian Perrig.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that one has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certain words, it was not previously known how to let the data storage server perform the search and answer the query without loss of data confidentiality
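
Not the paper's scheme (which sequentially scans stream-cipher-encrypted words for embedded check values), but a minimal HMAC-based trapdoor sketch in Python that conveys the flavor of server-side matching without plaintext disclosure; note that deterministic tags leak search patterns:

    import hmac, hashlib, secrets

    def token(key, word):
        # Deterministic keyed tag per word; the server can match tags
        # without learning the underlying words
        return hmac.new(key, word.encode(), hashlib.sha256).digest()

    key = secrets.token_bytes(32)
    index = {token(key, w) for w in ["private", "search", "mail"]}  # server-side

    def server_search(index, trapdoor):
        return trapdoor in index

    assert server_search(index, token(key, "search"))
    assert not server_search(index, token(key, "hello"))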

[Go to top]

A case for end system multicast (keynote address) (PDF)
by Yang-hua Chu, Sanjay G. Rao, and Hui Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The conventional wisdom has been that IP is the natural protocol layer for implementing multicast related functionality. However, ten years after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment and support for higher layer functionality such as error, flow and congestion control. In this paper, we explore an alternative architecture for small and sparse groups, where end systems implement all multicast related functionality including membership management and packet replication. We call such a scheme End System Multicast. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP Multicast. However, the key concern is the performance penalty associated with such a model. In particular, End System Multicast introduces duplicate packets on physical links and incurs larger end-to-end delay than IP Multicast. In this paper, we study this question in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. In addition, Narada attempts to optimize the efficiency of the overlay based on end-to-end measurements. We present details of Narada and evaluate it using both simulation and Internet experiments. Preliminary results are encouraging. In most simulations and Internet experiments, the delay and bandwidth penalty are low. We believe the potential benefits of repartitioning multicast functionality between end systems and routers significantly outweigh the performance penalty incurred

[Go to top]

Anonymity, Unobservability, and Pseudonymity: A Consolidated Proposal for Terminology (PDF)
by Andreas Pfitzmann and Marit Hansen.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Based on the nomenclature of the early papers in the field, we propose a terminology which is both expressive and precise. More particularly, we define anonymity, unlinkability, unobservability, pseudonymity (pseudonyms and digital pseudonyms, and their attributes), and identity management. In addition, we describe the relationships between these terms, give a rationale for why we define them as we do, and sketch the main mechanisms to provide for the properties defined

[Go to top]

The disadvantages of free MIX routes and how to overcome them (PDF)
by Oliver Berthold, Andreas Pfitzmann, and Ronny Standtke.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There are different methods to build an anonymity service using MIXes. A substantial decision for doing so is the method of choosing the MIX route. In this paper we compare two special configurations: a fixed MIX route used by all participants and a network of freely usable MIXes where each participant chooses his own route. The advantages and disadvantages with respect to the freedom of choice are presented and examined. We show that some additional attacks are possible in networks with freely chosen MIX routes. After describing these attacks, we estimate their impact on the achievable degree of anonymity. Finally, we evaluate the relevance of the described attacks with respect to existing systems such as Mixmaster, Crowds, and Freedom

[Go to top]

The Free Haven Project: Distributed Anonymous Storage Service (PDF)
by Roger Dingledine, Michael J. Freedman, and David Molnar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a design for a system of anonymous storage which resists the attempts of powerful adversaries to find or destroy any stored data. We enumerate distinct notions of anonymity for each party in the system, and suggest a way to classify anonymous systems based on the kinds of anonymity provided. Our design ensures the availability of each document for a publisher-specified lifetime. A reputation system provides server accountability by limiting the damage caused from misbehaving servers. We identify attacks and defenses against anonymous storage services, and close with a list of problems which are currently unsolved

[Go to top]

Towards an Analysis of Onion Routing Security (PDF)
by Paul Syverson, Gene Tsudik, Michael Reed, and Carl Landwehr.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents a security analysis of Onion Routing, an application independent infrastructure for traffic-analysis-resistant and anonymous Internet connections. It also includes an overview of the current system design, definitions of security goals and new adversary models

[Go to top]

Traffic Analysis: Protocols, Attacks, Design Issues, and Open Problems (PDF)
by Jean-François Raymond.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present the traffic analysis problem and expose the most important protocols, attacks and design issues. Afterwards, we propose directions for further research. As we are mostly interested in efficient and practical Internet-based protocols, most of the emphasis is placed on mix-based constructions. The presentation is informal in that no complex definitions and proofs are presented, the aim being more to give a thorough introduction than to present deep new insights

[Go to top]

Web MIXes: A system for anonymous and unobservable Internet access (PDF)
by Oliver Berthold, Hannes Federrath, and Stefan Köpsell.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present the architecture, design issues and functions of a MIX-based system for anonymous and unobservable real-time Internet access. This system prevents traffic analysis as well as flooding attacks. The core technologies include an adaptive, anonymous, time/volume-sliced channel mechanism and a ticket-based authentication mechanism. The system also provides an interface to inform anonymous users about their level of anonymity and unobservability

[Go to top]

Can Pseudonymity Really Guarantee Privacy? (PDF)
by Josyula R. Rao and Pankaj Rohatgi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

One of the core challenges facing the Internet today is the problem of ensuring privacy for its users. It is believed that mechanisms such as anonymity and pseudonymity are essential building blocks in formulating solutions to address these challenges and considerable effort has been devoted towards realizing these primitives in practice. The focus of this effort, however, has mostly been on hiding explicit identity information (such as source addresses) by employing a combination of anonymizing proxies, cryptographic techniques to distribute trust among them and traffic shaping techniques to defeat traffic analysis. We claim that such approaches ignore a significant amount of identifying information about the source that leaks from the contents of web traffic itself. In this paper, we demonstrate the significance and value of such information by showing how techniques from linguistics and stylometry can use this information to compromise pseudonymity in several important settings. We discuss the severity of this problem and suggest possible countermeasures

[Go to top]

Publius: A robust, tamper-evident, censorship-resistant and source-anonymous web publishing system (PDF)
by Marc Waldman, Aviel D. Rubin, and Lorrie Cranor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a system that we have designed and implemented for publishing content on the web. Our publishing scheme has the property that it is very difficult for any adversary to censor or modify the content. In addition, the identity of the publisher is protected once the content is posted. Our system differs from others in that we provide tools for updating or deleting the published content, and users can browse the content in the normal point and click manner using a standard web browser and a client-side proxy that we provide. All of our code is freely available

[Go to top]

Overcast: reliable multicasting with an overlay network (PDF)
by John Jannotti, David K. Gifford, Kirk L. Johnson, Frans M. Kaashoek, and James W. O'Toole Jr.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Overcast is an application-level multicasting system that can be incrementally deployed using today's Internet infrastructure. These properties stem from Overcast's implementation as an overlay network. An overlay network consists of a collection of nodes placed at strategic locations in an existing network fabric. These nodes implement a network abstraction on top of the network provided by the underlying substrate network. Overcast provides scalable and reliable single-source multicast using a simple protocol for building efficient data distribution trees that adapt to changing network conditions. To support fast joins, Overcast implements a new protocol for efficiently tracking the global status of a changing distribution tree. Results based on simulations confirm that Overcast provides its added functionality while performing competitively with IP Multicast. Simulations indicate that Overcast quickly builds bandwidth-efficient distribution trees that, compared to IP Multicast, provide 70-100% of the total bandwidth possible, at a cost of somewhat less than twice the network load. In addition, Overcast adapts quickly to changes caused by the addition of new nodes or the failure of existing nodes without causing undue load on the multicast source
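
As an illustration of the tree-building rule the abstract describes, the following Python fragment shows one plausible join procedure: a new node descends from the source, moving under a child only while the path through that child keeps its measured bandwidth to the source at least as high. This is a hedged sketch, not Overcast's actual protocol; the Node class and the bandwidth(a, b) measurement hook are assumptions introduced for the example.

    class Node:
        def __init__(self, name):
            self.name, self.children = name, []

    def join(root, new, bandwidth):
        """Descend from the root while some child keeps bandwidth as high."""
        current = root
        while True:
            direct = bandwidth(current, new)
            best = None
            for child in current.children:
                via = min(bandwidth(current, child), bandwidth(child, new))
                if via >= direct and (best is None or via > best[0]):
                    best = (via, child)
            if best is None:
                current.children.append(new)   # attach here
                return current
            current = best[1]                  # move one level down

    # toy usage with a constant stub measurement
    root, a, b = Node("root"), Node("a"), Node("b")
    root.children.append(a)
    parent = join(root, b, lambda x, y: 10)    # b ends up under a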

[Go to top]

Freedom Systems 2.0 Architecture (PDF)
by Philippe Boucher, Adam Shostack, and Ian Goldberg.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This white paper, targeted at the technically savvy reader, offers a detailed look at the Freedom 2.0 System architecture. It is intended to give the reader a good understanding of the components that make up this system and the relationships between them, as well as to encourage analysis of the system

[Go to top]

A Pseudonymous Communications Infrastructure for the Internet (PDF)
by Ian Goldberg.
Ph.D. thesis, UC Berkeley, December 2000. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As more and more of people's everyday activities are being conducted online, there is an ever-increasing threat to personal privacy. Every communicative or commercial transaction you perform online reveals bits of information about you that can be compiled into large dossiers, often without your permission, or even your knowledge

[Go to top]

Reputation systems (PDF)
by Paul Resnick, Ko Kuwabara, Richard Zeckhauser, and Eric Friedman.
In Communications of the ACM 43, December 2000, pages 45-48. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

2001

On Algorithms for Efficient Data Migration (PDF)
by Joseph Hall, Jason D. Hartline, Anna R. Karlin, Jared Saia, and John Wilkes.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The data migration problem is the problem of computing an efficient plan for moving data stored on devices in a network from one configuration to another. Load balancing or changing usage patterns could necessitate such a rearrangement of data. In this paper, we consider the case where the objects are fixed-size and the network is complete. The direct migration problem is closely related to edge-coloring. However, because there are space constraints on the devices, the problem is more complex. Our main results are polynomial time algorithms for finding a near-optimal migration plan in the presence of space constraints when a certain number of additional nodes is available as temporary storage, and a 3/2-approximation for the case where data must be migrated directly to its destination

[Go to top]

An Analysis of the Degradation of Anonymous Protocols (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There have been a number of protocols proposed for anonymous network communication. In this paper we prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to attacks by corrupt group members that degrade the anonymity of each protocol over time. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, and DC-Net, can maintain anonymity in the face of the attacks described. Our results show that fully-connected DC-Net is the most resilient to these attacks, but is subject to simple denial-of-service attacks. Additionally, we show how a variant of the attack allows attackers to set up other participants to falsely appear to be the initiator of a connection

[Go to top]

Application-Level Multicast Using Content-Addressable Networks (PDF)
by Sylvia Ratnasamy, Mark Handley, Richard Karp, and Scott Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most currently proposed solutions to application-level multicast organise the group members into an application-level mesh over which a Distance-Vector routing protocol, or a similar algorithm, is used to construct source-rooted distribution trees. The use of a global routing protocol limits the scalability of these systems. Other proposed solutions that scale to larger numbers of receivers do so by restricting the multicast service model to be single-sourced. In this paper, we propose an application-level multicast scheme capable of scaling to large group sizes without restricting the service model to a single source. Our scheme builds on recent work on Content-Addressable Networks (CANs). Extending the CAN framework to support multicast comes at trivial additional cost and, because of the structured nature of CAN topologies, obviates the need for a multicast routing algorithm. Given the deployment of a distributed infrastructure such as a CAN, we believe our CAN-based multicast scheme offers the dual advantages of simplicity and scalability

[Go to top]

Authentic Attributes with Fine-Grained Anonymity Protection (PDF)
by Stuart Stubblebine and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Collecting accurate profile information and protecting an individual's privacy are ordinarily viewed as being at odds. This paper presents mechanisms that protect individual privacy while presenting accurate, indeed authenticated, profile information to servers and merchants. In particular, we give a pseudonym registration scheme and system that enforces unique user registration while separating trust required of registrars, issuers, and validators. This scheme enables the issuance of global unique pseudonyms (GUPs) and attributes enabling practical applications such as authentication of accurate attributes and enforcement of one-to-a-customer properties. We also present a scheme resilient to even pseudonymous profiling yet preserving the ability of merchants to authenticate the accuracy of information. It is the first mechanism of which the authors are aware to guarantee recent validity for group signatures, and more generally multi-group signatures, thus effectively enabling revocation of all or some of the multi-group certificates held by a principal

[Go to top]

Bayeux: an architecture for scalable and fault-tolerant wide-area data dissemination (PDF)
by Shelley Zhuang, Ben Y. Zhao, Anthony D. Joseph, Randy H. Katz, and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The demand for streaming multimedia applications is growing at an incredible rate. In this paper, we propose Bayeux, an efficient application-level multicast system that scales to arbitrarily large receiver groups while tolerating failures in routers and network links. Bayeux also includes specific mechanisms for load-balancing across replicate root nodes and more efficient bandwidth consumption. Our simulation results indicate that Bayeux maintains these properties while keeping transmission overhead low. To achieve these properties, Bayeux leverages the architecture of Tapestry, a fault-tolerant, wide-area overlay routing and location network

[Go to top]

Buses for Anonymous Message Delivery (PDF)
by Amos Beimel and Shlomi Dolev.
In Journal of Cryptology 16, 2003. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Applies graph theory to anonymity. The paper suffers from the fundamental problem that it does not discuss attacks on the scheme, and there are a couple of pretty basic ways to break anonymity. Also, the scheme uses lots of traffic; some variants end up looking much like a pipenet

[Go to top]

CliqueNet: A Self-Organizing, Scalable, Peer-to-Peer Anonymous Communication Substrate (PDF)
by Emin Gün Sirer, Milo Polte, and Mark Robson.
In unknown, 2001. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is critical for many networked applications. Yet current Internet protocols provide no support for masking the identity of communication endpoints. This paper outlines a design for a peer-to-peer, scalable, tamper-resilient communication protocol that provides strong anonymity and privacy. Called CliqueNet, our protocol provides an information-theoretic guarantee: an omnipotent adversary that can wiretap at any location in the network cannot determine the sender of a packet beyond a clique, that is, a set of k hosts, where k is an anonymizing factor chosen by the participants. CliqueNet is resilient to jamming by malicious hosts and can scale with the number of participants. This paper motivates the need for an anonymous communication layer and describes the self-organizing, novel divide-and-conquer approach that enables CliqueNet to scale while offering a strong anonymity guarantee. CliqueNet is widely applicable as a communication substrate for peer-to-peer applications that require anonymity, privacy and anti-censorship guarantees

[Go to top]

Competitive Hill-Climbing Strategies for Replica Placement in a Distributed File System (PDF)
by John R. Douceur and Roger Wattenhofer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Farsite distributed file system stores multiple replicas of files on multiple machines, to provide file access even when some machines are unavailable. Farsite assigns file replicas to machines so as to maximally exploit the different degrees of availability of different machines, given an allowable replication factor R. We use competitive analysis and simulation to study the performance of three candidate hill-climbing replica placement strategies, MinMax, MinRand, and RandRand, each of which successively exchanges the locations of two file replicas. We show that the MinRand and RandRand strategies are perfectly competitive for R = 2 and 2/3-competitive for R = 3. For general R, MinRand is at least 1/2-competitive and RandRand is at least 10/17-competitive. The MinMax strategy is not competitive. Simulation results show better performance than the theoretical worst-case bounds

[Go to top]

CORE: A Collaborative Reputation Mechanism to enforce node cooperation in Mobile Ad hoc Networks (PDF)
by Pietro Michiardi and Refik Molva.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Countermeasures for node misbehavior and selfishness are mandatory requirements in MANET. Selfishness that causes lack of node activity cannot be solved by classical security means that aim at verifying the correctness and integrity of an operation. We suggest a generic mechanism based on reputation to enforce cooperation among the nodes of a MANET to prevent selfish behavior. Each network entity keeps track of other entities' collaboration using a technique called reputation. The reputation is calculated based on various types of information on each entity's rate of collaboration. Since there is no incentive for a node to maliciously spread negative information about other nodes, simple denial of service attacks using the collaboration technique itself are prevented. The generic mechanism can be smoothly extended to basic network functions with little impact on existing protocols

[Go to top]

DVD COPY CONTROL ASSOCIATION vs. ANDREW BUNNER
by unknown.
In unknown, 2001. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Efficient erasure correcting codes (PDF)
by Michael Luby, Michael Mitzenmacher, M. Amin Shokrollahi, and Daniel A. Spielman.
In IEEE Transactions on Information Theory 47, 2001, pages 569-584. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct for any given rate R and any given real number ε a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length n. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1+ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed
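
The decoding process analyzed here can be made concrete with a tiny "peeling" decoder: repeatedly find a parity check with exactly one erased symbol and solve for it by XOR. The sketch below illustrates only that iterative step, not the paper's cascaded graph construction; the example checks are hand-picked for the demonstration.

    def peel(checks, symbols):
        """Recover erased symbols (None). Each check is
        (xor_of_covered_symbols, [covered indices])."""
        progress = True
        while progress:
            progress = False
            for value, idx in checks:
                missing = [i for i in idx if symbols[i] is None]
                if len(missing) == 1:            # one unknown: solvable
                    known = value
                    for i in idx:
                        if symbols[i] is not None:
                            known ^= symbols[i]
                    symbols[missing[0]] = known
                    progress = True
        return all(s is not None for s in symbols)

    symbols = [5, None, None]                    # two of three symbols erased
    checks = [(5 ^ 7, [0, 1]), (7 ^ 2, [1, 2])]
    print(peel(checks, symbols), symbols)        # True [5, 7, 2]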

[Go to top]

An Efficient Scheme for Proving a Shuffle (PDF)
by Jun Furukawa and Kazue Sako.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we propose a novel and efficient protocol for proving the correctness of a shuffle, without leaking how the shuffle was performed. Using this protocol, we can prove the correctness of a shuffle of n data with roughly 18n exponentiations, whereas the protocol of Sako-Kilian [SK95] required 642n and that of Abe [Ab99] required 22n log n. The length of the proof will be only 2^11 n bits in our protocol, as opposed to 2^18 n bits and 2^14 n log n bits required by Sako-Kilian and Abe, respectively. The proposed protocol will be a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent

[Go to top]

An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation (PDF)
by Jan Camenisch and Anna Lysyanskaya.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A credential system is a system in which users can obtain credentials from organizations and demonstrate possession of these credentials. Such a system is anonymous when transactions carried out by the same user cannot be linked. An anonymous credential system is of significant practical relevance because it is the best means of providing privacy for users. In this paper we propose a practical anonymous credential system that is based on the strong RSA assumption and the decisional Diffie-Hellman assumption modulo a safe prime product and is considerably superior to existing ones: (1) we give the first practical solution that allows a user to unlinkably demonstrate possession of a credential as many times as necessary without involving the issuing organization; (2) to prevent misuse of anonymity, our scheme is the first to offer optional anonymity revocation for particular transactions; (3) our scheme offers separability: all organizations can choose their cryptographic keys independently of each other. Moreover, we suggest more effective means of preventing users from sharing their credentials, by introducing all-or-nothing sharing: a user who allows a friend to use one of her credentials once, gives him the ability to use all of her credentials, i.e., taking over her identity. This is implemented by a new primitive, called circular encryption, which is of independent interest, and can be realized from any semantically secure cryptosystem in the random oracle model

[Go to top]

Extremum Feedback for Very Large Multicast Groups (PDF)
by Jörg Widmer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In multicast communication, it is often required that feedback is received from a potentially very large group of responders while at the same time a feedback implosion needs to be prevented. To this end, a number of feedback control mechanisms have been proposed, which rely either on tree-based feedback aggregation or timer-based feedback suppression. Usually, these mechanisms assume that it is not necessary to discriminate between feedback from different receivers. However, for many applications this is not the case and feedback from receivers with certain response values is preferred (e.g., highest loss or largest delay)

[Go to top]

A Generalisation, a Simplification and Some Applications of Paillier's Probabilistic Public-Key System (PDF)
by Ivan Damgård and Mads Jurik.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a generalisation of Paillier's probabilistic public key system, in which the expansion factor is reduced and which allows to adjust the block length of the scheme even after the public key has been fixed, without losing the homomorphic property. We show that the generalisation is as secure as Paillier's original system. We construct a threshold variant of the generalised scheme as well as zero-knowledge protocols to show that a given ciphertext encrypts one of a set of given plaintexts, and protocols to verify multiplicative relations on plaintexts. We then show how these building blocks can be used for applying the scheme to efficient electronic voting. This reduces dramatically the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the basic scheme for a yes/no vote can be easily adapted to casting a vote for up to t out of L candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of L elections can be optimised such that for a certain range of parameter values, a ballot has size only O(log L) bits
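
For readers unfamiliar with the underlying scheme, the toy Python fragment below implements original Paillier (the s = 1 case of this generalisation) and checks the homomorphic property the paper preserves: multiplying ciphertexts adds plaintexts. The parameters are deliberately tiny and insecure; this is background illustration, not the Damgård-Jurik construction itself.

    import math, random

    p, q = 293, 433                            # toy primes; never this small
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                  # standard generator choice

    def L(u):                                  # L(u) = (u - 1) / n
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)        # decryption constant

    def enc(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 17, 25
    c = (enc(a) * enc(b)) % n2                 # homomorphic addition
    assert dec(c) == a + b                     # decrypts to 42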

[Go to top]

The Gnutella Protocol Specification v0.4
by unknown.
In unknown, 2001. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A brief description of the gnutella protocol

[Go to top]

Herald: Achieving a Global Event Notification Service
by Luis Felipe Cabrera, Michael B. Jones, and Marvin Theimer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design philosophy and initial design decisions of Herald: a highly scalable global event notification system that is being designed and built at Microsoft Research. Herald is a distributed system designed to transparently scale in all respects, including numbers of subscribers and publishers, numbers of event subscription points, and event delivery rates. Event delivery can occur within a single machine, within a local network or Intranet, and throughout the Internet

[Go to top]

Improved low-density parity-check codes using irregular graphs (PDF)
by Michael Luby, Michael Mitzenmacher, M. Amin Shokrollahi, and Daniel A. Spielman.
In IEEE Trans. Inform. Theory 47, 2001, pages 585-598. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We construct new families of error-correcting codes based on Gallager's low-density parity-check codes. We improve on Gallager's results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16,000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low-density parity-check codes may be able to match or beat turbo code performance. Index Terms: belief propagation, concentration theorem, Gallager codes, irregular codes, low-density parity-check codes
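
The hard-decision decoding analyzed in the paper can be illustrated with Gallager-style bit flipping: repeatedly flip the bit that participates in the most unsatisfied parity checks. The parity-check matrix below is a toy regular example for illustration only, not one of the paper's irregular codes.

    H = [[1, 1, 0, 1, 0, 0],       # each row is one parity check
         [0, 1, 1, 0, 1, 0],
         [1, 0, 0, 0, 1, 1],
         [0, 0, 1, 1, 0, 1]]

    def bit_flip(word, max_iters=20):
        w = list(word)
        for _ in range(max_iters):
            syndrome = [sum(h * x for h, x in zip(row, w)) % 2 for row in H]
            if not any(syndrome):
                return w                        # all checks satisfied
            votes = [sum(H[j][i] * syndrome[j] for j in range(len(H)))
                     for i in range(len(w))]    # unsatisfied checks per bit
            w[votes.index(max(votes))] ^= 1     # flip the worst offender
        return w

    received = [1, 1, 0, 1, 1, 0]   # codeword [1,1,0,0,1,0] with bit 3 flipped
    print(bit_flip(received))       # -> [1, 1, 0, 0, 1, 0]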

[Go to top]

Incentives for Sharing in Peer-to-Peer Networks (PDF)
by Philippe Golle, Kevin Leyton-Brown, Ilya Mironov, and Mark Lillibridge.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We consider the free-rider problem in peer-to-peer file sharing networks such as Napster: that individual users are provided with no incentive for adding value to the network. We examine the design implications of the assumption that users will selfishly act to maximize their own rewards, by constructing a formal game theoretic model of the system and analyzing equilibria of user strategies under several novel payment mechanisms. We support and extend this work with results from experiments with a multi-agent reinforcement learning model

[Go to top]

Information-Theoretic Private Information Retrieval: A Unified Construction (PDF)
by Amos Beimel and Yuval Ishai.
In Lecture Notes in Computer Science 2076, 2001, pages 89-98. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Private Information Retrieval (PIR) protocol enables a user to retrieve a data item from a database while hiding the identity of the item being retrieved. In a t-private, k-server PIR protocol the database is replicated among k servers, and the user's privacy is protected from any collusion of up to t servers. The main cost-measure of such protocols is the communication complexity of retrieving a single bit of data. This work addresses the information-theoretic setting for PIR, in which the user's privacy should be unconditionally protected from collusions of servers. We present a unified general construction, whose abstract components can be instantiated to yield both old and new families of PIR protocols. A main ingredient in the new protocols is a generalization of a solution by Babai, Kimmel, and Lokam to a communication complexity problem in the so-called simultaneous messages model. Our construction strictly improves upon previous constructions and resolves some previous anomalies. In particular, we obtain: (1) t-private k-server PIR protocols with O(n^(1/⌊(2k-1)/t⌋)) communication bits, where n is the database size. For t > 1, this is a substantial asymptotic improvement over the previous state of the art; (2) a constant-factor improvement in the communication complexity of 1-private PIR, providing the first improvement to the 2-server case since PIR protocols were introduced; (3) efficient PIR protocols with logarithmic query length. The latter protocols have applications to the construction of efficient families of locally decodable codes over large alphabets and to PIR protocols with reduced work by the servers
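
To make the model concrete, here is the simplest information-theoretic PIR protocol: 1-private, 2-server, with linear (not sublinear) communication, so far weaker than the constructions above. The client sends each server a random selection vector, the two vectors differing only at the target index; XORing the two one-bit answers yields the desired bit, while each server alone sees a uniformly random subset.

    import secrets

    def make_queries(n, i):
        """Two selection vectors that differ only at index i."""
        q1 = [secrets.randbelow(2) for _ in range(n)]
        q2 = q1.copy()
        q2[i] ^= 1
        return q1, q2

    def server_answer(db, sel):
        """XOR of the selected bits; reveals nothing about i."""
        a = 0
        for bit, chosen in zip(db, sel):
            if chosen:
                a ^= bit
        return a

    db = [1, 0, 1, 1, 0, 1, 0, 0]
    q1, q2 = make_queries(len(db), 3)
    assert server_answer(db, q1) ^ server_answer(db, q2) == db[3]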

[Go to top]

Instrumenting The World With Wireless Sensor Networks (PDF)
by Deborah Estrin, Gregory J. Pottie, L. Girod, and Mani Srivastava.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Pervasive micro-sensing and actuation may revolutionize the way in which we understand and manage complex physical systems: from airplane wings to complex ecosystems. The capabilities for detailed physical monitoring and manipulation offer enormous opportunities for almost every scientific discipline, and it will alter the feasible granularity of engineering

[Go to top]

A low-bandwidth network file system (PDF)
by Athicha Muthitacharoen, Benjie Chen, and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Users rarely consider running network file systems over slow or wide-area networks, as the performance would be unacceptable and the bandwidth consumption too high. Nonetheless, efficient remote file access would often be desirable over such networks—particularly when high latency makes remote login sessions unresponsive. Rather than run interactive programs such as editors remotely, users could run the programs locally and manipulate remote files through the file system. To do so, however, would require a network file system that consumes less bandwidth than most current file systems.This paper presents LBFS, a network file system designed for low-bandwidth networks. LBFS exploits similarities between files or versions of the same file to save bandwidth. It avoids sending data over the network when the same data can already be found in the server's file system or the client's cache. Using this technique in conjunction with conventional compression and caching, LBFS consumes over an order of magnitude less bandwidth than traditional network file systems on common workloads
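
The bandwidth savings rest on content-defined chunking: the file is cut wherever a rolling hash of a sliding window hits a fixed bit pattern, so chunk boundaries, and hence chunk hashes, survive insertions and deletions elsewhere in the file. The sketch below uses a simple polynomial rolling hash with illustrative constants; LBFS itself uses Rabin fingerprints and enforces minimum and maximum chunk sizes.

    import random

    WINDOW = 48                         # sliding window, in bytes
    MASK = (1 << 13) - 1                # boundary pattern: ~8 KiB chunks
    BASE, MOD = 257, (1 << 61) - 1      # polynomial rolling hash

    def chunks(data: bytes):
        out, start, h = [], 0, 0
        top = pow(BASE, WINDOW, MOD)    # weight of the byte sliding out
        for i, byte in enumerate(data):
            h = (h * BASE + byte) % MOD
            if i - start >= WINDOW:     # drop the byte that left the window
                h = (h - data[i - WINDOW] * top) % MOD
            if i - start + 1 >= WINDOW and (h & MASK) == MASK:
                out.append(data[start:i + 1])   # content-defined boundary
                start, h = i + 1, 0
        if start < len(data):
            out.append(data[start:])    # trailing partial chunk
        return out

    random.seed(0)
    data = bytes(random.randrange(256) for _ in range(50000))
    print([len(c) for c in chunks(data)])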

[Go to top]

Multiparty Computation from Threshold Homomorphic Encryption (PDF)
by Ronald Cramer, Ivan Damgård, and Jesper B. Nielsen.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a new approach to multiparty computation (MPC) basing it on homomorphic threshold crypto-systems. We show that given keys for any sufficiently efficient system of this type, general MPC protocols for n parties can be devised which are secure against an active adversary that corrupts any minority of the parties. The total number of bits broadcast is O(nk|C|), where k is the security parameter and |C| is the size of a (Boolean) circuit computing the function to be securely evaluated. An earlier proposal by Franklin and Haber with the same complexity was only secure for passive adversaries, while all earlier protocols with active security had complexity at least quadratic in n. We give two examples of threshold cryptosystems that can support our construction and lead to the claimed complexities

[Go to top]

An Optimally Robust Hybrid Mix Network (Extended Abstract) (PDF)
by Markus Jakobsson and Ari Juels.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a mix network that achieves efficient integration of public-key and symmetric-key operations. This hybrid mix network is capable of natural processing of arbitrarily long input elements, and is fast in both practical and asymptotic senses. While the overhead in the size of input elements is linear in the number of mix servers, it is quite small in practice. In contrast to previous hybrid constructions, ours has optimal robustness, that is, robustness against any minority coalition of malicious servers

[Go to top]

PAST: A large-scale, persistent peer-to-peer storage utility (PDF)
by Peter Druschel and Antony Rowstron.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper sketches the design of PAST, a large-scale, Internet-based, global storage utility that provides scalability, high availability, persistence and security. PAST is a peer-to-peer Internet application and is entirely self-organizing. PAST nodes serve as access points for clients, participate in the routing of client requests, and contribute storage to the system. Nodes are not trusted, they may join the system at any time and may silently leave the system without warning. Yet, the system is able to provide strong assurances, efficient storage access, load balancing and scalability

[Go to top]

Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems (PDF)
by Antony Rowstron and Peter Druschel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties
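
The routing step described above can be sketched in a few lines: forward to a known node whose nodeId shares a strictly longer prefix with the key than the local node does, falling back to the numerically closest known id. This omits Pastry's leaf set, routing-table structure, and proximity metric; hex-string identifiers (base 16, i.e. b = 4) are an illustrative choice for readability.

    def shared_prefix_len(a: str, b: str) -> int:
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def next_hop(local: str, key: str, known: list) -> str:
        """One Pastry-style routing step toward the key."""
        p = shared_prefix_len(local, key)
        longer = [n for n in known if shared_prefix_len(n, key) > p]
        if longer:
            return max(longer, key=lambda n: shared_prefix_len(n, key))
        return min(known + [local],
                   key=lambda n: abs(int(n, 16) - int(key, 16)))

    print(next_hop("65a1fc", "d46a1c", ["d13da3", "d4213f", "d462ba"]))
    # -> d462ba (longest shared prefix with the key)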

[Go to top]

Peer-to-Peer: Harnessing the Power of Disruptive Technologies
by Andy Oram.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Upstart software projects Napster, Gnutella, and Freenet have dominated newspaper headlines, challenging traditional approaches to content distribution with their revolutionary use of peer-to-peer file-sharing technologies. Reporters try to sort out the ramifications of seemingly ungoverned peer-to-peer networks. Lawyers, business leaders, and social commentators debate the virtues and evils of these bold new distributed systems. But what's really behind such disruptive technologies – the breakthrough innovations that have rocked the music and media worlds? And what lies ahead? In this book, key peer-to-peer pioneers take us beyond the headlines and hype and show how the technology is changing the way we communicate and exchange information. Those working to advance peer-to-peer as a technology, a business opportunity, and an investment offer their insights into how the technology has evolved and where it's going. They explore the problems they've faced, the solutions they've discovered, the lessons they've learned, and their goals for the future of computer networking. Until now, Internet communities have been limited by the flat interactive qualities of email and network newsgroups, where people can exchange recommendations and ideas but have great difficulty commenting on one another's postings, structuring information, performing searches, and creating summaries. Peer-to-peer challenges the traditional authority of the client/server model, allowing shared information to reside instead with producers and users. Peer-to-peer networks empower users to collaborate on producing and consuming information, adding to it, commenting on it, and building communities around it. This compilation represents the collected wisdom of today's peer-to-peer luminaries. It includes contributions from Gnutella's Gene Kan, Freenet's Brandon Wiley, Jabber's Jeremie Miller, and many others – plus serious discussions of topics ranging from accountability and trust to security and performance. Fraught with questions and promise, peer-to-peer is sure to remain on the computer industry's center stage for years to come

[Go to top]

Peer-To-Peer: Harnessing the Power of Disruptive Technologies – Chapter 12: Free Haven
by Roger Dingledine, Michael J. Freedman, and David Molnar.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Description of the problems that arise when one tries to combine anonymity and accountability. Note that the Free Haven design described here charges for storing data in the network (downloads are free), whereas in GNUnet adding data is free and only the downloads are considered as utilization

[Go to top]

Poblano: A distributed trust model for peer-to-peer networks (PDF)
by Rita Chen and William Yeager.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

pStore: A Secure Peer-to-Peer Backup System (PDF)
by Christopher Batten, Kenneth Barr, Arvind Saraf, and Stanley Trepetin.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In an effort to combine research in peer-to-peer systems with techniques for incremental backup systems, we propose pStore: a secure distributed backup system based on an adaptive peer-to-peer network. pStore exploits unused personal hard drive space attached to the Internet to provide the distributed redundancy needed for reliable and effective data backup. Experiments on a 30 node network show that 95% of the files in a 13 MB dataset can be retrieved even when 7 of the nodes have failed. On top of this reliability, pStore includes support for file encryption, versioning, and secure sharing. Its custom versioning system permits arbitrary version retrieval similar to CVS. pStore provides this functionality at less than 10% of the network bandwidth and requires 85% less storage capacity than simpler local tape backup schemes for a representative workload

[Go to top]

The Quest for Security in Mobile Ad Hoc Networks (PDF)
by Jean-Pierre Hubaux, Levente Buttyán, and Srdan Capkun.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

The quest for security in mobile ad hoc networks (PDF)
by Jean-Pierre Hubaux, Levente Buttyán, and Srdan Capkun.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

So far, research on mobile ad hoc networks has been focused primarily on routing issues. Security, on the other hand, has been given a lower priority. This paper provides an overview of security problems for mobile ad hoc networks, distinguishing the threats on basic mechanisms and on security mechanisms. It then describes our solution to protect the security mechanisms. The original features of this solution include that (i) it is fully decentralized and (ii) all nodes are assigned equivalent roles

[Go to top]

A Reputation System to Increase MIX-net Reliability
by Roger Dingledine, Michael J. Freedman, David Hopwood, and David Molnar.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a design for a reputation system that increases the reliability and thus efficiency of remailer services. Our reputation system uses a MIX-net in which MIXes give receipts for intermediate messages. Together with a set of witnesses, these receipts allow senders to verify the correctness of each MIX and prove misbehavior to the witnesses

[Go to top]

Resilient overlay networks (PDF)
by David Andersen, Hari Balakrishnan, Frans M. Kaashoek, and Robert Morris.
In SIGOPS Oper. Syst. Rev 35(5), 2001, pages 131-145. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today's wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON's routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems

[Go to top]

A scalable content-addressable network (PDF)
by Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Hash tables–which map "keys" onto "values"–are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation
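
A minimal sketch of the CAN mechanics, under simplifying assumptions: a key is hashed to a point in a d-dimensional unit torus, and each routing step greedily forwards to the neighbor whose coordinates lie closest to that point. Zone splitting, ownership, and neighbor maintenance are omitted, and all names here are illustrative.

    import hashlib

    D = 2                                   # dimensions of the key space

    def key_to_point(key: str):
        """Hash a key onto the unit d-torus, one coordinate per slice."""
        h = hashlib.sha256(key.encode()).digest()
        return tuple(int.from_bytes(h[4*i:4*i + 4], "big") / 2**32
                     for i in range(D))

    def torus_dist(a, b):
        """Euclidean distance with wrap-around in every dimension."""
        return sum(min(abs(x - y), 1 - abs(x - y)) ** 2
                   for x, y in zip(a, b)) ** 0.5

    def next_hop(neighbors, target):
        """Greedy CAN routing step: closest neighbor to the target."""
        return min(neighbors, key=lambda p: torus_dist(p, target))

    target = key_to_point("movie.mpg")
    print(next_hop([(0.1, 0.9), (0.4, 0.2), (0.7, 0.7)], target))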

[Go to top]

Search in JXTA and Other Distributed Networks
by Sherif Botros and Steve Waterhouse.
In Proceedings of the IEEE International Conference on Peer-to-Peer Computing, 2001. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

The social cost of cheap pseudonyms (PDF)
by Eric Friedman and Paul Resnick.
In Journal of Economics and Management Strategy 10(2), 2001, pages 173-199. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problems of societal norms for cooperation and reputation when it is possible to obtain cheap pseudonyms, something that is becoming quite common in a wide variety of interactions on the Internet. This introduces opportunities to misbehave without paying reputational consequences. A large degree of cooperation can still emerge, through a convention in which newcomers "pay their dues" by accepting poor treatment from players who have established positive reputations. One might hope for an open society where newcomers are treated well, but there is an inherent social cost in making the spread of reputations optional. We prove that no equilibrium can sustain significantly more cooperation than the dues-paying equilibrium in a repeated random matching game with a large number of players in which players have finite lives and the ability to change their identities, and there is a small but nonvanishing probability of mistakes. Although one could remove the inefficiency of mistreating newcomers by disallowing anonymity, this is not practical or desirable in a wide variety of transactions. We discuss the use of entry fees, which permits newcomers to be trusted but excludes some players with low payoffs, thus introducing a different inefficiency. We also discuss the use of free but unreplaceable pseudonyms, and describe a mechanism that implements them using standard encryption techniques, which could be practically implemented in electronic transactions

[Go to top]

Tangler: A Censorship-Resistant Publishing System Based On Document Entanglements (PDF)
by Marc Waldman and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The basic idea is to protect documents by making it impossible to remove one document from the system without losing others. The underlying assumption that the adversary cares about collateral damage of this kind is a bit far-fetched. Also, the entanglement doubles the amount of data that needs to be moved to retrieve a document

[Go to top]

Tapestry: An Infrastructure for Fault-tolerant Wide-area Location and Routing (PDF)
by Ben Y. Zhao, John Kubiatowicz, and Anthony D. Joseph.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In today's chaotic network, data and services are mobile and replicated widely for availability, durability, and locality. Components within this infrastructure interact in rich and complex ways, greatly stressing traditional approaches to name service and routing. This paper explores an alternative to traditional approaches called Tapestry. Tapestry is an overlay location and routing infrastructure that provides location-independent routing of messages directly to the closest copy of an object or service using only point-to-point links and without centralized resources. The routing and directory information within this infrastructure is purely soft state and easily repaired. Tapestry is self-administering, fault-tolerant, and resilient under load. This paper presents the architecture and algorithms of Tapestry and explores their advantages through a number of experiments

[Go to top]

The Theory of Incentives: The Principal-Agent Model (PDF)
by Jean-Jacques Laffont and David Martimort.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Economics has much to do with incentives–not least, incentives to work hard, to produce quality products, to study, to invest, and to save. Although Adam Smith amply confirmed this more than two hundred years ago in his analysis of sharecropping contracts, only in recent decades has a theory begun to emerge to place the topic at the heart of economic thinking. In this book, Jean-Jacques Laffont and David Martimort present the most thorough yet accessible introduction to incentives theory to date. Central to this theory is a simple question as pivotal to modern-day management as it is to economics research: What makes people act in a particular way in an economic or business situation? In seeking an answer, the authors provide the methodological tools to design institutions that can ensure good incentives for economic agents. This book focuses on the principal-agent model, the "simple" situation where a principal, or company, delegates a task to a single agent through a contract–the essence of management and contract theory. How does the owner or manager of a firm align the objectives of its various members to maximize profits? Following a brief historical overview showing how the problem of incentives has come to the fore in the past two centuries, the authors devote the bulk of their work to exploring principal-agent models and various extensions thereof in light of three types of information problems: adverse selection, moral hazard, and non-verifiability. Offering an unprecedented look at a subject vital to industrial organization, labor economics, and behavioral economics, this book is set to become the definitive resource for students, researchers, and others who might find themselves pondering what contracts, and the incentives they embody, are really all about

[Go to top]

The Vesta Approach to Software Configuration Management (PDF)
by Allan Heydon, Roy Levin, Timothy Mann, and Yuan Yu.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Vesta is a system for software configuration management. It stores collections of source files, keeps track of which versions of which files go together, and automates the process of building a complete software artifact from its component pieces. Vesta's novel approach gives it three important properties. First, every build is repeatable, because its component sources and build tools are stored immutably and immortally, and its configuration description completely specifies what components and tools are used and how they are put together. Second, every build is incremental, because results of previous builds are cached and reused. Third, every build is consistent, because all build dependencies are automatically captured and recorded, so that a cached result from a previous build is reused only when doing so is certain to be correct. In addition, Vesta's flexible language for writing configuration descriptions makes it easy to describe large software configurations in a modular fashion and to create variant configurations by customizing build parameters. This paper gives a brief overview of Vesta, outlining Vesta's advantages over traditional tools, how those benefits are achieved, and the system's overall performance

[Go to top]

Wide-area cooperative storage with CFS (PDF)
by Frank Dabek, Frans M. Kaashoek, David Karger, Robert Morris, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail
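
The layering can be illustrated with a plain dictionary standing in for DHash (real CFS places each block on Chord servers and replicates it): every block is stored under its content hash, and a root block lists the block keys in order. The fixed block size and helper names are illustrative, not CFS's actual parameters.

    import hashlib

    BLOCK = 8192                                # block size, illustrative

    def store_file(data: bytes, dht: dict) -> str:
        """Insert blocks under their content hashes; return the root key."""
        keys = []
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK]
            k = hashlib.sha1(block).hexdigest()
            dht[k] = block                      # stands in for DHash put()
            keys.append(k)
        root = "\n".join(keys).encode()
        root_key = hashlib.sha1(root).hexdigest()
        dht[root_key] = root
        return root_key

    def fetch_file(root_key: str, dht: dict) -> bytes:
        keys = dht[root_key].decode().split("\n")
        return b"".join(dht[k] for k in keys)

    dht = {}
    rk = store_file(b"x" * 20000, dht)
    assert fetch_file(rk, dht) == b"x" * 20000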

[Go to top]

Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications (PDF)
by Ion Stoica, Robert Morris, David Karger, Frans M. Kaashoek, and Hari Balakrishnan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Efficiently determining the node that stores a data item in a distributed network is an important and challenging problem. This paper describes the motivation and design of the Chord system, a decentralized lookup service that stores key/value pairs for such networks. The Chord protocol takes as input an m-bit identifier (derived by hashing a higher-level application specific key), and returns the node that stores the value corresponding to that key. Each Chord node is identified by an m-bit identifier and each node stores the key identifiers in the system closest to the node's identifier. Each node maintains an m-entry routing table that allows it to look up keys efficiently. Results from theoretical analysis, simulations, and experiments show that Chord is incrementally scalable, with insertion and lookup costs scaling logarithmically with the number of Chord nodes
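
The key-to-node mapping is consistent hashing on an identifier circle: a key is stored at its successor, the first node whose identifier follows the key clockwise. The sketch below shows that mapping with a global view of the ring; real Chord reaches the successor in a logarithmic number of hops using per-node finger tables. SHA-1 and m = 16 are illustrative choices, not mandated by the protocol.

    import hashlib

    M = 16                                       # identifier bits (the paper's m)

    def node_id(name: str) -> int:
        """Hash a name onto the 2^M identifier circle."""
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << M)

    def successor(nodes, key):
        """First node clockwise from the key; wraps around the circle."""
        for n in sorted(nodes):
            if n >= key:
                return n
        return min(nodes)                        # wrap past 2^M - 1

    nodes = [node_id("node-%d" % i) for i in range(8)]
    key = node_id("some-application-key")
    print("key %d lives on node %d" % (key, successor(nodes, key)))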

[Go to top]

Automated Negotiation: Prospects, Methods and Challenges (PDF)
by Nicholas R. Jennings, Peyman Faratin, Alessio R. Lomuscio, Simon Parsons, Carles Sierra, and Michael Wooldridge.
In Group Decision and Negotiation 10, March 2001, pages 199-215. (BibTeX entry) (Download bibtex record)
(direct link)

This paper examines the space of negotiation opportunities for autonomous agents, identifies and evaluates some of the key techniques, and highlights some of the major challenges for future automated negotiation research. It is not meant as a survey of the field of automated negotiation. Rather, the descriptions and assessments of the various approaches are generally undertaken with particular reference to work in which the authors have been involved. However, the specific issues raised should be viewed as being broadly applicable

[Go to top]

Investigating the energy consumption of a wireless network interface in an ad hoc networking environment (PDF)
by Laura Marie Feeney and Martin Nilsson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Energy-aware design and evaluation of network protocols requires knowledge of the energy consumption behavior of actual wireless interfaces. But little practical information is available about the energy consumption behavior of well-known wireless network interfaces and device specifications do not provide information in a form that is helpful to protocol developers. This paper describes a series of experiments which obtained detailed measurements of the energy consumption of an IEEE 802.11 wireless network interface operating in an ad hoc networking environment. The data is presented as a collection of linear equations for calculating the energy consumed in sending, receiving and discarding broadcast and point-to-point data packets of various sizes. Some implications for protocol design and evaluation in ad hoc networks are discussed
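
The measurements are reported as per-operation linear equations of the form energy = m * size + b. The fragment below shows how such a model is applied; the coefficients are placeholders introduced for illustration, not the paper's published values.

    # (m in uW*s per byte, b in uW*s) -- placeholder coefficients
    COEFFS = {
        "send":    (1.9, 454),
        "recv":    (0.5, 356),
        "discard": (0.0, 56),
    }

    def packet_energy(op: str, size_bytes: int) -> float:
        """Energy in microwatt-seconds for one operation on one packet."""
        m, b = COEFFS[op]
        return m * size_bytes + b

    cost = packet_energy("send", 1024) + packet_energy("recv", 1024)
    print("one 1 KiB exchange: %.0f uW*s" % cost)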

[Go to top]

Real World Patterns of Failure in Anonymity Systems (PDF)
by Richard Clayton, George Danezis, and Markus G. Kuhn.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present attacks on the anonymity and pseudonymity provided by a "lonely hearts" dating service and by the HushMail encrypted email system. We move on to discuss some generic attacks upon anonymous systems based on the engineering reality of these systems rather than the theoretical foundations on which they are based. However, for less sophisticated users it is social engineering attacks, owing nothing to computer science, that pose the biggest day-to-day danger. This practical experience then permits a start to be made on developing a security policy model for pseudonymous communications

[Go to top]

A Reputation System to Increase MIX-net Reliability (PDF)
by Roger Dingledine, Michael J. Freedman, David Hopwood, and David Molnar.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a design for a reputation system that increases the reliability and thus efficiency of remailer services. Our reputation system uses a MIX-net in which MIXes give receipts for intermediate messages. Together with a set of witnesses, these receipts allow senders to verify the correctness of each MIX and prove misbehavior to the witnesses

[Go to top]

The Strong Eternity Service (PDF)
by Tonda Beneš.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Strong Eternity Service is a safe and very reliable storage for data of high importance. We show how to establish persistent pseudonyms in a totally anonymous environment and how to create a unique fully distributed name-space allowing both computer-efficient and human-acceptable access. We also present a way to retrieve information from such a data storage. We adapt the notion of the mix-network so that it can provide symmetric anonymity to both the client and the server. Finally we propose a system of after-the-act payments that can support operation of the Service without compromising anonymity

[Go to top]

Traffic Analysis Attacks and Trade-Offs in Anonymity Providing Systems (PDF)
by Adam Back, Ulf Möller, and Anton Stiglic.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We discuss problems and trade-offs with systems providing anonymity for web browsing (or more generally any communication system that requires low latency interaction). We focus on two main systems: the Freedom network [12] and PipeNet [8]. Although Freedom is efficient and reasonably secure against denial of service attacks, it is vulnerable to some generic traffic analysis attacks, which we describe. On the other hand, we look at PipeNet, a simple theoretical model which protects against the traffic analysis attacks we point out, but is vulnerable to denial-of-service attacks and has efficiency problems. In light of these observations, we discuss the trade-offs that one faces when trying to construct an efficient low latency communication system that protects users' anonymity

[Go to top]

Freedom Systems 2.1 Security Issues and Analysis (PDF)
by Adam Back, Ian Goldberg, and Adam Shostack.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

We describe attacks to which Freedom, or Freedom users, may be vulnerable. These attacks are those that reduce the privacy of a Freedom user, through exploiting cryptographic, design or implementation issues. We include issues which may not be Freedom security issues which arise when the system is not properly used. This disclosure includes all known design or implementation flaws, as well as places where various trade-offs made while creating the system have privacy implications. We also discuss cryptographic points that are needed for a complete understanding of how Freedom works, including ones we don't believe can be used to reduce anyone's privacy

[Go to top]

The Design and Implementation of a Transparent Cryptographic File System for UNIX (PDF)
by Giuseppe Cattaneo, Luigi Catuogno, Aniello Del Sorbo, and Pino Persiano.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recent advances in hardware and communication technologies have made it possible and cost-effective to share a file system among several machines over a local (but possibly also a wide) area network. One of the most successful and widely used such applications is Sun's Network File System (NFS). NFS is very simple in structure but assumes a very strong trust model: the user trusts the remote file system server (which might be running on a machine in a different country) and the network with his/her data. It is easy to see that neither assumption is a very realistic one. The server (or anybody with superuser privileges) might very well read the data on its local filesystem, and it is well known that the Internet or any local area network (e.g., Ethernet) is very easy to tap (see, for example, Berkeley's tcpdump [7, 5] application program). Impersonation of users is another security drawback of NFS. In fact, most of the permission checking over NFS is performed in the kernel of the client. In such a context a pirate can temporarily assign to his own workstation the Internet address of the victim. Without secure RPC [9] no further authentication procedure is requested. From here on, the pirate can issue NFS requests presenting himself with any (false) uid and therefore access for reading and writing any private data on the server, even protected data. Given the above, a user seeking a certain level of security should take some measures. Possible solutions are to use either user-level cryptography or application-level cryptography. A discussion of the drawbacks of these approaches is found in [4]. A better approach is to push encryption services into the operating system, as done by M. Blaze in the design of his CFS [4]. In this paper, we propose a new cryptographic file system, which we call TCFS, as a suitable solution to the problem of privacy for distributed file systems (see section 2.1). Our work improves on CFS by providing a deeper integration between the encryption service and the file system, which results in complete transparency of use to the user applications

[Go to top]

Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems (PDF)
by Antony Rowstron and Peter Druschel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties. (Work done in part while visiting Microsoft Research, Cambridge, UK.)
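
To make the prefix-routing idea concrete, here is a minimal Python sketch of the next-hop choice, assuming hex-digit nodeIds and a routing table keyed by (shared-prefix length, next digit); leaf-set handling and the real Pastry data structures are omitted, and all names are illustrative:

    def shared_prefix_len(a, b):
        """Number of leading hex digits two ids have in common."""
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def next_hop(local_id, key, routing_table):
        """Pastry's common case: forward to a node whose id shares a
        strictly longer prefix with the key than local_id does."""
        p = shared_prefix_len(local_id, key)
        if p == len(key):                       # key equals our id: we are the destination
            return local_id
        entry = routing_table.get((p, key[p]))  # row p, column = key's next digit
        return entry if entry is not None else local_id  # else: leaf-set fallback (omitted)

    table = {(0, 'a'): 'a9f3', (1, '1'): '6100'}
    print(next_hop('6b2c', 'a884', table))      # -> 'a9f3': one more digit matches the key

Each hop resolves at least one more digit of the key, which is what gives Pastry its logarithmic route length.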

[Go to top]

Responder Anonymity and Anonymous Peer-to-Peer File Sharing (PDF)
by Vincent Scarlata, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Data transfer over TCP/IP provides no privacy for network users. Previous research in anonymity has focused on the provision of initiator anonymity. We explore methods of adapting existing initiator-anonymous protocols to provide responder anonymity and mutual anonymity. We present Anonymous Peer-to-peer File Sharing (APFS) protocols, which provide mutual anonymity for peer-to-peer file sharing. APFS addresses the problem of long-lived Internet services that may outlive the degradation present in current anonymous protocols. One variant of APFS makes use of unicast communication, but requires a central coordinator to bootstrap the protocol. A second variant takes advantage of multicast routing to remove the need for any central coordination point. We compare the TCP performance of the APFS protocol to existing overt file sharing systems such as Napster. In providing anonymity, APFS can double transfer times and requires that additional traffic be carried by peers, but this overhead is constant with the size of the session

[Go to top]

Tangler: a censorship-resistant publishing system based on document entanglements (PDF)
by Marc Waldman and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe the design of a censorship-resistant system that employs a unique document storage mechanism. Newly published documents are dependent on the blocks of previously published documents. We call this dependency an entanglement. Entanglement makes replication of previously published content an intrinsic part of the publication process. Groups of files, called collections, can be published together and named in a host-independent manner. Individual documents within a collection can be securely updated in such a way that future readers of the collection see and tamper-check the updates. The system employs a self-policing network of servers designed to eject non-compliant servers and prevent them from doing more harm than good

[Go to top]

A Verifiable Secret Shuffle and its Application to E-Voting (PDF)
by Andrew C. Neff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a mathematical construct which provides a cryptographic protocol to verifiably shuffle a sequence of k modular integers, and discuss its application to secure, universally verifiable, multi-authority election schemes. The output of the shuffle operation is another sequence of k modular integers, each of which is the same secret power of a corresponding input element, but the order of elements in the output is kept secret. Though it is a trivial matter for the "shuffler" (who chooses the permutation of the elements to be applied) to compute the output from the input, the construction is important because it provides a linear size proof of correctness for the output sequence (i.e. a proof that it is of the form claimed) that can be checked by arbitrary verifiers. The complexity of the protocol improves on that of Furukawa-Sako [16] both measured by number of exponentiations and by overall size. The protocol is shown to be honest-verifier zero-knowledge in a special case, and is computational zero-knowledge in general. On the way to the final result, we also construct a generalization of the well known Chaum-Pedersen protocol for knowledge of discrete logarithm equality [10], [7]. In fact, the generalization specializes exactly to the Chaum-Pedersen protocol in the case k = 2. This result may be of interest on its own. An application to electronic voting is given that matches the features of the best current protocols with significant efficiency improvements. An alternative application to electronic voting is also given that introduces an entirely new paradigm for achieving Universally Verifiable elections
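
In symbols (notation mine, summarizing the setting): for inputs $X_1, \ldots, X_k$ in a suitable group, the shuffler holding a secret exponent $c$ and a secret permutation $\pi$ outputs

    $$ Y_i = X_{\pi(i)}^{\,c}, \qquad i = 1, \ldots, k, $$

and publishes a proof, of size linear in $k$, that some such pair $(c, \pi)$ exists, while revealing nothing about $\pi$ itself.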

[Go to top]

2002

AdHocFS: Sharing Files in WLANs (PDF)
by Malika Boulkenafed and Valerie Issarny.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the ADHOCFS file system for mobile users, which realizes transparent, adaptive file access according to the users' specific situations (e.g., device in use, network connectivity, etc.). The paper concentrates more specifically on the support of ADHOCFS for collaborative file sharing within ad hoc groups of trusted nodes that are within local communication range of each other using the underlying ad hoc network, which has not been addressed in the past

[Go to top]

AMnet 2.0: An Improved Architecture for Programmable Networks (PDF)
by Thomas Fuhrmann, Till Harbaum, Marcus Schoeller, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

AMnet 2.0 is an improved architecture for programmable networks that is based on the experiences from the previous implementation of AMnet. This paper gives an overview of the AMnet architecture and the Linux-based implementation of this software router. It also discusses the differences from the previous version of AMnet. AMnet 2.0 complements application services with net-centric services in an integrated system that provides the fundamental building blocks both for an active node itself and for the operation of a larger set of nodes, including code deployment decisions, service relocation, and resource management

[Go to top]

Anonymizing Censorship Resistant Systems (PDF)
by Andrei Serjantov.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a new Peer-to-Peer architecture for a censorship resistant system with user, server and active-server document anonymity as well as efficient document retrieval. The retrieval service is layered on top of an existing Peer-to-Peer infrastructure, which should facilitate its implementation

[Go to top]

Aspects of AMnet Signaling (PDF)
by Anke Speer, Marcus Schoeller, Thomas Fuhrmann, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

AMnet provides a framework for flexible and rapid service creation. It is based on Programmable Networking technologies and uses active nodes (AMnodes) within the network for the provision of individual, application-specific services. To this end, these AMnodes execute service modules that are loadable on-demand and enhance the functionality of intermediate systems without the need of long global standardization processes. Placing application-dedicated functionality within the network requires a flexible signaling protocol to discover and announce as well as to establish and maintain the corresponding services. AMnet Signaling was developed for this purpose and will be presented in detail within this paper

[Go to top]

Breaking the $O(n^{1/(2k-1)})$ Barrier for Information-Theoretic Private Information Retrieval (PDF)
by Amos Beimel, Yuval Ishai, Eyal Kushilevitz, and Jean-François Raymond.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Private Information Retrieval (PIR) protocols allow a user to retrieve a data item from a database while hiding the identity of the item being retrieved. Specifically, in information-theoretic, k-server PIR protocols the database is replicated among k servers, and each server learns nothing about the item the user retrieves. The cost of such protocols is measured by the communication complexity of retrieving one out of n bits of data. For any fixed k, the complexity of the best protocols prior to our work was $O(n^{1/(2k-1)})$ (Ambainis, 1997). Since then several methods were developed in an attempt to beat this bound, but all these methods yielded the same asymptotic bound. In this work, this barrier is finally broken and the complexity of information-theoretic k-server PIR is improved to $n^{O(\log\log k / (k \log k))}$. The new PIR protocols can also be used to construct k-query binary locally decodable codes of length $\exp(n^{O(\log\log k / (k \log k))})$, compared to $\exp(n^{1/(k-1)})$ in previous constructions. The improvements presented in this paper apply even for small values of k: the PIR protocols are more efficient than previous ones for every k >= 3, and the locally decodable codes are shorter for every k >= 4
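
For readers new to the model, the textbook two-server scheme below (standard material, not this paper's far more communication-efficient protocols) shows why replication buys information-theoretic privacy: each server sees only a uniformly random subset of indices.

    import secrets

    def query(n, i):
        """User wants bit i of an n-bit database replicated on two servers."""
        s1 = {j for j in range(n) if secrets.randbits(1)}  # uniformly random subset
        s2 = s1 ^ {i}                                      # symmetric difference with {i}
        return s1, s2

    def answer(db, s):
        """Each server XORs together the database bits its query set indexes."""
        bit = 0
        for j in s:
            bit ^= db[j]
        return bit

    def reconstruct(a1, a2):
        return a1 ^ a2          # all bits except bit i cancel out

    db = [1, 0, 1, 1, 0, 0, 1, 0]
    s1, s2 = query(len(db), i=3)
    assert reconstruct(answer(db, s1), answer(db, s2)) == db[3]

Each query set is uniformly distributed on its own, so neither server learns i; the cost is the O(n) communication that this paper's line of work drives down.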

[Go to top]

Censorship Resistant Peer-to-Peer Content Addressable Networks (PDF)
by Amos Fiat and Jared Saia.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a censorship resistant peer-to-peer network for accessing n data items in a network of n nodes. Each search for a data item in the network takes O(log n) time and requires at most O(log^2 n) messages. Our network is censorship resistant in the sense that even after adversarial removal of an arbitrarily large constant fraction of the nodes in the network, all but an arbitrarily small fraction of the remaining nodes can obtain all but an arbitrarily small fraction of the original data items. The network can be created in a fully distributed fashion. It requires only O(log n) memory in each node. We also give a variant of our scheme that has the property that it is highly spam resistant: an adversary can take over complete control of a constant fraction of the nodes in the network and yet will still be unable to generate spam

[Go to top]

Choosing reputable servents in a P2P network (PDF)
by Fabrizio Cornelli, Ernesto Damiani, Sabrina De Capitani di Vimercati, Stefano Paraboschi, and Pierangela Samarati.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

COCA: A secure distributed online certification authority (PDF)
by Lidong Zhou, Fred B. Schneider, and Robbert Van Renesse.
In ACM Trans. Comput. Syst 20(4), 2002, pages 329-368. (BibTeX entry) (Download bibtex record)
(direct link) (website)

COCA is a fault-tolerant and secure online certification authority that has been built and deployed both in a local area network and in the Internet. Extremely weak assumptions characterize environments in which COCA's protocols execute correctly: no assumption is made about execution speed and message delivery delays; channels are expected to exhibit only intermittent reliability; and with 3t + 1 COCA servers up to t may be faulty or compromised. COCA is the first system to integrate a Byzantine quorum system (used to achieve availability) with proactive recovery (used to defend against mobile adversaries which attack, compromise, and control one replica for a limited period of time before moving on to another). In addition to tackling problems associated with combining fault-tolerance and security, new proactive recovery protocols had to be developed. Experimental results give a quantitative evaluation for the cost and effectiveness of the protocols

[Go to top]

Cooperative Backup System (PDF)
by Sameh Elnikety, Mark Lillibridge, Mike Burrows, and Willy Zwaenepoel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

This paper presents the design of a novel backup system built on top of a peer-to-peer architecture with minimal supporting infrastructure. The system can be deployed for both large-scale and small-scale peer-to-peer overlay networks. It allows computers connected to the Internet to back up their data cooperatively. Each computer has a set of partner computers and stores its backup data distributively among those partners. In return, it stores part of its partners' backup data, in such a way as to achieve both fault-tolerance and high reliability. This form of cooperation poses several interesting technical challenges because these computers have independent failure modes, do not trust each other, and are subject to third party attacks

[Go to top]

CPCMS: A Configuration Management System Based on Cryptographic Names (PDF)
by Jonathan S. Shapiro and John Vanderburgh.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

CPCMS, the Cryptographically Protected Configuration Management System is a new configuration management system that provides scalability, disconnected commits, and fine-grain access controls. It addresses the novel problems raised by modern open-source development practices, in which projects routinely span traditional organizational boundaries and can involve thousands of participants. CPCMS provides for simultaneous public and private lines of development, with post hoc "publication" of private branches

[Go to top]

Design and implementation of the idemix anonymous credential system (PDF)
by Jan Camenisch and Els Van Herreweghen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous credential systems [8, 9, 12, 24] allow anonymous yet authenticated and accountable transactions between users and service providers. As such, they represent a powerful technique for protecting users' privacy when conducting Internet transactions. In this paper, we describe the design and implementation of an anonymous credential system based on the protocols developed by [6]. The system is based on new high-level primitives and interfaces allowing for easy integration into access control systems. The prototype was realized in Java. We demonstrate its use and some deployment issues with the description of an operational demonstration scenario

[Go to top]

Design Evolution of the EROS Single-Level Store (PDF)
by Jonathan S. Shapiro and Jonathan Adams.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

File systems have (at least) two undesirable characteristics: both the addressing model and the consistency semantics differ from those of memory, leading to a change in programming model at the storage boundary. Main memory is a single flat space of pages with a simple durability (persistence) model: all or nothing. File content durability is a complex function of implementation, caching, and timing. Memory is globally consistent. File systems offer no global consistency model. Following a crash recovery, individual files may be lost or damaged, or may be collectively inconsistent even though they are individually sound

[Go to top]

Don't Shoot the Messenger: Limiting the Liability of Anonymous Remailers
by Robyn Wagner.
In New Mexico Law Review 32(Winter), 2002, pages 99-142. (BibTeX entry) (Download bibtex record)
(direct link) (website)

I will close the remailer for the time being because the legal issues concerning the Internet in Finland are yet undefined. The legal protection of the users needs to be clarified. At the moment the privacy of Internet messages is judicially unclear. I have also personally been a target because of the remailer. Unjustified accusations affect both my job and my private life

[Go to top]

Dynamic Accumulators and Application to Efficient Revocation of Anonymous Credentials (PDF)
by Jan Camenisch and Anna Lysyanskaya.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce the notion of a dynamic accumulator. An accumulator scheme allows one to hash a large set of inputs into one short value, such that there is a short proof that a given input was incorporated into this value. A dynamic accumulator allows one to dynamically add and delete a value, such that the cost of an add or delete is independent of the number of accumulated values. We provide a construction of a dynamic accumulator and an efficient zero-knowledge proof of knowledge of an accumulated value. We prove their security under the strong RSA assumption. We then show that our construction of dynamic accumulators enables efficient revocation of anonymous credentials, and membership revocation for recent group signature and identity escrow schemes
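
A toy RSA-style accumulator conveys the static part of the idea (illustrative only: a real deployment needs an RSA modulus of unknown factorization, prime representatives of the inputs, and the paper's dynamic add/delete and zero-knowledge machinery, all omitted here):

    # Toy RSA accumulator sketch; the modulus is deliberately tiny and insecure.
    N = 3233            # 61 * 53; in practice a large RSA modulus nobody can factor
    g = 5               # public base

    def accumulate(values):
        acc = g
        for x in values:
            acc = pow(acc, x, N)     # acc = g^(product of values) mod N
        return acc

    def witness(values, x):
        """Accumulator of every value except x; serves as x's membership proof."""
        return accumulate([v for v in values if v != x])

    def verify(acc, x, w):
        return pow(w, x, N) == acc   # w^x = g^(product) = acc iff x was accumulated

    vals = [3, 7, 11]                # in practice: primes representing the inputs
    acc = accumulate(vals)
    assert verify(acc, 7, witness(vals, 7))

The dynamic contribution of the paper is that adds and deletes update acc and all witnesses at constant cost, independent of how many values are accumulated.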

[Go to top]

Dynamically Fault-Tolerant Content Addressable Networks (PDF)
by Jared Saia, Amos Fiat, Steven D. Gribble, Anna R. Karlin, and Stefan Saroiu.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a content addressable network which is robust in the face of massive adversarial attacks and in a highly dynamic environment. Our network is robust in the sense that at any time, an arbitrarily large fraction of the peers can reach an arbitrarily large fraction of the data items. The network can be created and maintained in a completely distributed fashion

[Go to top]

Efficient Sharing of Encrypted Data (PDF)
by Krista Bennett, Christian Grothoff, Tzvetan Horozov, and Ioana Patrascu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet (PDF)
by Philo Juang, Hidekazu Oki, Yong Wang, Margaret Martonosi, Li Shiuan Peh, and Daniel Rubenstein.
In SIGARCH Comput. Archit. News 30(5), 2002, pages 96-107. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Over the past decade, mobile computing and wireless communication have become increasingly important drivers of many new computing applications. The field of wireless sensor networks particularly focuses on applications involving autonomous use of compute, sensing, and wireless communication devices for both scientific and commercial purposes. This paper examines the research decisions and design tradeoffs that arise when applying wireless peer-to-peer networking techniques in a mobile sensor network designed to support wildlife tracking for biology research. The ZebraNet system includes custom tracking collars (nodes) carried by animals under study across a large, wild area; the collars operate as a peer-to-peer network to deliver logged data back to researchers. The collars include global positioning system (GPS), Flash memory, wireless transceivers, and a small CPU; essentially each node is a small, wireless computing device. Since there is no cellular service or broadcast communication covering the region where animals are studied, ad hoc, peer-to-peer routing is needed. Although numerous ad hoc protocols exist, additional challenges arise because the researchers themselves are mobile and thus there is no fixed base station towards which to aim data. Overall, our goal is to use the least energy, storage, and other resources necessary to maintain a reliable system with a very high 'data homing' success rate. We plan to deploy a 30-node ZebraNet system at the Mpala Research Centre in central Kenya. More broadly, we believe that the domain-centric protocols and energy tradeoffs presented here for ZebraNet will have general applicability in other wireless and sensor applications

[Go to top]

Erasure Coding Vs. Replication: A Quantitative Comparison (PDF)
by Hakim Weatherspoon and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer systems are positioned to take advantage of gains in network bandwidth, storage capacity, and computational resources to provide long-term durable storage infrastructures. In this paper, we quantitatively compare building a distributed storage infrastructure that is self-repairing and resilient to faults using either a replicated system or an erasure-resilient system. We show that systems employing erasure codes have mean time to failures many orders of magnitude higher than replicated systems with similar storage and bandwidth requirements. More importantly, erasure-resilient systems use an order of magnitude less bandwidth and storage to provide similar system durability as replicated systems
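
A back-of-the-envelope comparison (my sketch, not the paper's model) already shows the effect: at the same 4x storage overhead, spreading an object over many fragments survives independent node failures far better than whole-object replication.

    from math import comb

    # Assume each node holding a replica/fragment fails independently with
    # probability p before the system can repair.

    def replication_survives(p, r):
        """Object survives unless all r replicas are lost."""
        return 1 - p**r

    def erasure_survives(p, n, m):
        """Object survives if at least m of its n fragments remain."""
        return sum(comb(n, k) * (1 - p)**k * p**(n - k) for k in range(m, n + 1))

    p = 0.1
    # Same 4x storage overhead in both cases (r = 4 versus n/m = 32/8):
    print(replication_survives(p, r=4))      # ~0.9999
    print(erasure_survives(p, n=32, m=8))    # indistinguishable from 1 at this scale

The intuition is that the erasure-coded object only dies if most of its fragments die at once, an exponentially unlikelier event than losing a handful of full replicas.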

[Go to top]

Experiences Deploying a Large-Scale Emergent Network (PDF)
by Bryce W. O'Hearn.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mojo Nation was a network for robust, decentralized file storage and transfer

[Go to top]

Exploiting network proximity in distributed hash tables (PDF)
by Miguel Castro, Peter Druschel, and Y. Charlie Hu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Self-organizing peer-to-peer (p2p) overlay networks like CAN, Chord, Pastry and Tapestry (also called distributed hash tables or DHTs) offer a novel platform for a variety of scalable and decentralized distributed applications. These systems provide efficient and fault-tolerant routing, object location, and load balancing within a self-organizing overlay network. One important aspect of these systems is how they exploit network proximity in the underlying Internet. Three basic approaches have been proposed to exploit network proximity in DHTs: geographic layout, proximity routing and proximity neighbour selection. In this position paper, we briefly discuss the three approaches, contrast their strengths and shortcomings, and consider their applicability in the different DHT routing protocols. We conclude that proximity neighbor selection, when used in DHTs with prefix-based routing like Pastry and Tapestry, is highly effective and appears to dominate the other approaches

[Go to top]

Exploiting network proximity in peer-to-peer overlay networks (PDF)
by Miguel Castro, Peter Druschel, Y. Charlie Hu, and Antony Rowstron.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The authors give an overview of various ways to use proximity information to optimize routing in peer-to-peer networks. Their study focuses on Pastry and describes in detail the protocols that are used in Pastry to build routing tables with neighbours that are close in terms of the underlying network. They give some analytical and extensive experimental evidence that the protocols are effective in reducing the length of the routing path in terms of the link-to-link latency that their implementation uses to measure distance

[Go to top]

Hordes — A Multicast Based Protocol for Anonymity (PDF)
by Brian Neil Levine and Clay Shields.
In Journal of Computer Security 10(3), 2002, pages 213-240. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With widespread acceptance of the Internet as a public medium for communication and information retrieval, there has been rising concern that the personal privacy of users can be eroded by cooperating network entities. A technical solution to maintaining privacy is to provide anonymity. We present a protocol for initiator anonymity called Hordes, which uses forwarding mechanisms similar to those used in previous protocols for sending data, but is the first protocol to make use of multicast routing to anonymously receive data. We show this results in shorter transmission latencies and requires less work of the protocol participants, in terms of the messages processed. We also present a comparison of the security and anonymity of Hordes with previous protocols, using the first quantitative definition of anonymity and unlinkability

[Go to top]

How to Fool an Unbounded Adversary with a Short Key
by Alexander Russell and Hong Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Improving Data Availability through Dynamic Model-Driven Replication in Large Peer-to-Peer Communities (PDF)
by Kavitha Ranganathan, Adriana Iamnitchi, and Ian Foster.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Efficient data sharing in global peer-to-peer systems is complicated by erratic node failure, unreliable network connectivity and limited bandwidth. Replicating data on multiple nodes can improve availability and response time. Yet determining when and where to replicate data in order to meet performance goals in large-scale systems with many users and files, dynamic network characteristics, and changing user behavior is difficult. We propose an approach in which peers create replicas automatically in a decentralized fashion, as required to meet availability goals. The aim of our framework is to maintain a threshold level of availability at all times. We identify a set of factors that hinder data availability and propose a model that decides when more replication is necessary. We evaluate the accuracy and performance of the proposed model using simulations. Our preliminary results show that the model is effective in predicting the required number of replicas in the system

[Go to top]

Infranet: Circumventing Web Censorship and Surveillance
by Nick Feamster, Magdalena Balazinska, Greg Harfst, Hari Balakrishnan, and David Karger.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

An increasing number of countries and companies routinely block or monitor access to parts of the Internet. To counteract these measures, we propose Infranet, a system that enables clients to surreptitiously retrieve sensitive content via cooperating Web servers distributed across the global Internet. These Infranet servers provide clients access to censored sites while continuing to host normal uncensored content. Infranet uses a tunnel protocol that provides a covert communication channel between its clients and servers, modulated over standard HTTP transactions that resemble innocuous Web browsing. In the upstream direction, Infranet clients send covert messages to Infranet servers by associating meaning to the sequence of HTTP requests being made. In the downstream direction, Infranet servers return content by hiding censored data in uncensored images using steganographic techniques. We describe the design, a prototype implementation, security properties, and performance of Infranet. Our security analysis shows that Infranet can successfully circumvent several sophisticated censoring techniques

[Go to top]

Introducing Tarzan, a Peer-to-Peer Anonymizing Network Layer (PDF)
by Michael J. Freedman, Emil Sit, Josh Cates, and Robert Morris.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce Tarzan, a peer-to-peer anonymous network layer that provides generic IP forwarding. Unlike prior anonymizing layers, Tarzan is flexible, transparent, decentralized, and highly scalable. Tarzan achieves these properties by building anonymous IP tunnels between an open-ended set of peers. Tarzan can provide anonymity to existing applications, such as web browsing and file sharing, without change to those applications. Performance tests show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route

[Go to top]

IPTPS '01: Revised Papers from the First International Workshop on Peer-to-Peer Systems
by TODO.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Ivy: A Read/Write Peer-to-Peer File System (PDF)
by Athicha Muthitacharoen, Robert Morris, Thomer M. Gil, and Benjie Chen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Ivy is a multi-user read/write peer-to-peer file system. Ivy has no centralized or dedicated components, and it provides useful integrity properties without requiring users to fully trust either the underlying peer-to-peer storage system or the other users of the file system

[Go to top]

k-Anonymity: A Model for Protecting Privacy
by Latanya Sweeney.
In International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10(5), 2002, pages 557-570. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

A Key-Management Scheme for Distributed Sensor Networks (PDF)
by Laurent Eschenauer and Virgil D. Gligor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs
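
The basic mechanism can be sketched in a few lines (parameters and names illustrative; the scheme additionally covers shared-key discovery, path-key establishment via neighbors, and revocation): each sensor is preloaded with a random ring of k keys from a pool of P, and two neighbors can secure their link iff their rings intersect.

    import random

    P, k = 10_000, 75                 # pool size and key-ring size (illustrative)
    pool = range(P)

    def key_ring():
        """Keys installed on one sensor before deployment."""
        return set(random.sample(pool, k))

    a, b = key_ring(), key_ring()
    shared = a & b
    if shared:
        link_key = min(shared)        # e.g. agree deterministically on one shared key
        print("secure link using key", link_key)
    else:
        print("no shared key; establish a path key through intermediate neighbors")

P and k are chosen so that the probability any two neighbors share a key is high enough for the whole network to be connected, while a captured node exposes only k of the P pool keys.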

[Go to top]

LT Codes
by Michael Luby.
In Foundations of Computer Science, Annual IEEE Symposium on, 2002, pages 0-271. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce LT codes, the first rateless erasure codes that are very efficient as the data length grows
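
The encoder side is strikingly simple; the sketch below is illustrative only (it uses a uniform degree choice where a real LT encoder draws degrees from the robust soliton distribution, and it omits the belief-propagation decoder):

    import random

    def lt_encode_symbol(blocks, rng):
        """One check symbol: the XOR of d randomly chosen source blocks."""
        d = rng.randint(1, len(blocks))             # stand-in for the soliton distribution
        chosen = rng.sample(range(len(blocks)), d)  # neighbor set, sent along with the symbol
        sym = 0
        for i in chosen:
            sym ^= blocks[i]
        return chosen, sym

    rng = random.Random(42)
    blocks = [0x12, 0x34, 0x56, 0x78]               # toy one-byte source blocks
    stream = [lt_encode_symbol(blocks, rng) for _ in range(8)]  # could continue forever: rateless

Because every check symbol is generated independently, the sender can emit as many as the channel's losses require, which is exactly the rateless property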

[Go to top]

On memory-bound functions for fighting spam (PDF)
by Cynthia Dwork, Andrew Goldberg, and Moni Naor.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 1992, Dwork and Naor proposed that e-mail messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk e-mail, now known as spam. They proposed specific CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [5]. We further investigate this intriguing proposal. Specifically, we 1) Provide a formal model of computation and a statement of the problem; 2) Provide an abstract function and prove an asymptotically tight amortized lower bound on the number of memory accesses required to compute an acceptable proof of effort; specifically, we prove that, on average, the sender of a message must perform many unrelated accesses to memory, while the receiver, in order to verify the work, has to perform significantly fewer accesses; 3) Propose a concrete instantiation of our abstract function, inspired by the RC4 stream cipher; 4) Describe techniques to permit the receiver to verify the computation with no memory accesses; 5) Give experimental results showing that our concrete memory-bound function is only about four times slower on a 233 MHz settop box than on a 3.06 GHz workstation, and that speedup of the function is limited even if an adversary knows the access sequence and uses optimal off-line cache replacement

[Go to top]

Mnemosyne: Peer-to-Peer Steganographic Storage (PDF)
by Steven Hand and Timothy Roscoe.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Online codes (Extended Abstract) (PDF)
by Petar Maymounkov.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce online codes – a class of near-optimal codes for a very general loss channel which we call the free channel. Online codes are linear encoding/decoding time codes, based on sparse bipartite graphs, similar to Tornado codes, with a couple of novel properties: local encodability and rateless-ness. Local encodability is the property that each block of the encoding of a message can be computed independently from the others in constant time. This also implies that each encoding block is only dependent on a constant-sized part of the message and a few preprocessed bits. Rateless-ness is the property that each message has an encoding of practically infinite size. We argue that rateless codes are more appropriate than fixed-rate codes for most situations where erasure codes were considered a solution. Furthermore, rateless codes meet new areas of application, where they are not replaceable by fixed-rate codes. One such area is information dispersal over peer-to-peer networks

[Go to top]

Pastiche: Making Backup Cheap and Easy (PDF)
by Landon P. Cox, Christopher D. Murray, and Brian D. Noble.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Backup is cumbersome and expensive. Individual users almost never back up their data, and backup is a significant cost in large organizations. This paper presents Pastiche, a simple and inexpensive backup system. Pastiche exploits excess disk capacity to perform peer-to-peer backup with no administrative costs. Each node minimizes storage overhead by selecting peers that share a significant amount of data. It is easy for common installations to find suitable peers, and peers with high overlap can be identified with only hundreds of bytes. Pastiche provides mechanisms for confidentiality, integrity, and detection of failed or malicious peers. A Pastiche prototype suffers only 7.4% overhead for a modified Andrew Benchmark, and restore performance is comparable to cross-machine copy

[Go to top]

Performance analysis of the CONFIDANT protocol (PDF)
by Sonja Buchegger and Jean-Yves Le Boudec.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile ad-hoc networking works properly only if the participating nodes cooperate in routing and forwarding. However, it may be advantageous for individual nodes not to cooperate. We propose a protocol, called CONFIDANT, for making misbehavior unattractive; it is based on selective altruism and utilitarianism. It aims at detecting and isolating misbehaving nodes, thus making it unattractive to deny cooperation. Trust relationships and routing decisions are based on experienced, observed, or reported routing and forwarding behavior of other nodes. The detailed implementation of CONFIDANT in this paper assumes that the network layer is based on the Dynamic Source Routing (DSR) protocol. We present a performance analysis of DSR fortified by CONFIDANT and compare it to regular defenseless DSR. It shows that a network with CONFIDANT and up to 60% of misbehaving nodes behaves almost as well as a benign network, in sharp contrast to a defenseless network. All simulations have been implemented and performed in GloMoSim

[Go to top]

Practical Set Reconciliation (PDF)
by Yaron Minsky and Ari Trachtenberg.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Query-flood DoS attacks in gnutella (PDF)
by Neil Daswani and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a simple but effective traffic model that can be used to understand the effects of denial-of-service (DoS) attacks based on query floods in Gnutella networks. We run simulations based on the model to analyze how different choices of network topology and application level load balancing policies can minimize the effect of these types of DoS attacks. In addition, we also study how damage caused by query floods is distributed throughout the network, and how application-level policies can localize the damage

[Go to top]

Reliable MIX Cascade Networks through Reputation (PDF)
by Roger Dingledine and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a MIX cascade protocol and a reputation system that together increase the reliability of a network of MIX cascades. In our protocol, MIX nodes periodically generate a communally random seed that, along with their reputations, determines cascade configuration

[Go to top]

A Reputation-Based Approach for Choosing Reliable Resources in Peer-to-Peer Networks
by Ernesto Damiani, Sabrina De Capitani di Vimercati, Stefano Paraboschi, Pierangela Samarati, and Fabio Violante.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (P2P) applications have seen an enormous success, and recently introduced P2P services have reached tens of millions of users. A feature that significantly contributes to the success of many P2P applications is user anonymity. However, anonymity opens the door to possible misuses and abuses, exploiting the P2P network as a way to spread tampered with resources, including Trojan Horses, viruses, and spam. To address this problem we propose a self-regulating system where the P2P network is used to implement a robust reputation mechanism. Reputation sharing is realized through a distributed polling algorithm by which resource requestors can assess the reliability of a resource offered by a participant before initiating the download. This way, spreading of malicious contents will be reduced and eventually blocked. Our approach can be straightforwardly piggybacked on existing P2P protocols and requires modest modifications to current implementations

[Go to top]

Robust information-theoretic private information retrieval (PDF)
by Amos Beimel and Yoav Stahl.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A Private Information Retrieval (PIR) protocol allows a user to retrieve a data item of its choice from a database, such that the servers storing the database do not gain information on the identity of the item being retrieved. PIR protocols have been studied in depth since the subject was introduced by Chor, Goldreich, Kushilevitz, and Sudan in 1995. The standard definition of PIR protocols raises a simple question: what happens if some of the servers crash during the operation? How can we devise a protocol which still works in the presence of crashing servers? Current systems do not guarantee availability of servers at all times for many reasons, e.g., crash of a server or communication problems. Our purpose is to design robust PIR protocols, i.e., protocols which still work correctly even if only k out of l servers are available during the protocols' operation (the user does not know in advance which servers are available). We present various robust PIR protocols giving different trade-offs between the different parameters. These protocols are incomparable, i.e., for different values of n and k we will get better results using different protocols. We first present a generic transformation from regular PIR protocols to robust PIR protocols; this transformation is important since any improvement in the communication complexity of regular PIR protocols will immediately imply an improvement in the robust PIR protocol communication. We also present two specific robust PIR protocols. Finally, we present robust PIR protocols which can tolerate Byzantine servers, i.e., robust PIR protocols which still work in the presence of malicious servers or servers with corrupted or obsolete databases

[Go to top]

Scalable application layer multicast (PDF)
by Suman Banerjee, Bobby Bhattacharjee, and Christopher Kommareddy.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties. We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic. Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, group members on average established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1% as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100

[Go to top]

A scalable content-addressable network (PDF)
by Sylvia Paul Ratnasamy.
PhD thesis, University of California, Berkeley, 2002. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

SCRIBE: A large-scale and decentralized application-level multicast infrastructure (PDF)
by Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, and Antony Rowstron.
In IEEE Journal on Selected Areas in Communications (JSAC) 20, 2002, pages 0-2002. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents Scribe, a scalable application-level multicast infrastructure. Scribe supports large numbers of groups, with a potentially large number of members per group. Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet, and leverages Pastry's reliability, self-organization, and locality properties. Pastry is used to create and manage groups and to build efficient multicast trees for the dissemination of messages to each group. Scribe provides best-effort reliability guarantees, but we outline how an application can extend Scribe to provide stronger reliability. Simulation results, based on a realistic network topology model, show that Scribe scales across a wide range of groups and group sizes. Also, it balances the load on the nodes while achieving acceptable delay and link stress when compared to IP multicast

[Go to top]

Secure routing for structured peer-to-peer overlay networks (PDF)
by Miguel Castro, Peter Druschel, Ayalvadi Ganesh, Antony Rowstron, and Dan S. Wallach.
In SIGOPS Oper. Syst. Rev 36(SI), 2002, pages 299-314. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without preexisting trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes

[Go to top]

Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures (PDF)
by Chris Karlof and David Wagner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider routing security in wireless sensor networks. Many sensor network routing protocols have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in sensor networks, show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks, introduce two classes of novel attacks against sensor networks — sinkholes and HELLO floods, and analyze the security of all the major sensor network routing protocols. We describe crippling attacks against all of them and suggest countermeasures and design considerations. This is the first such analysis of secure routing in sensor networks

[Go to top]

Security Considerations for Peer-to-Peer Distributed Hash Tables (PDF)
by Emil Sit and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recent peer-to-peer research has focused on providing efficient hash lookup systems that can be used to build more complex systems. These systems have good properties when their algorithms are executed correctly but have not generally considered how to handle misbehaving nodes. This paper looks at what sorts of security problems are inherent in large peer-to-peer systems based on distributed hash lookup systems. We examine the types of problems that such systems might face, drawing examples from existing systems, and propose some design principles for detecting and preventing these problems

[Go to top]

A Signature Scheme with Efficient Protocols (PDF)
by Jan Camenisch and Anna Lysyanskaya.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Digital signature schemes are a fundamental cryptographic primitive, of use both in its own right, and as a building block in cryptographic protocol design. In this paper, we propose a practical and provably secure signature scheme and show protocols (1) for issuing a signature on a committed value (so the signer has no information about the signed value), and (2) for proving knowledge of a signature on a committed value. This signature scheme and corresponding protocols are a building block for the design of anonymity-enhancing cryptographic systems, such as electronic cash, group signatures, and anonymous credential systems. The security of our signature scheme and protocols relies on the Strong RSA assumption. These results are a generalization of the anonymous credential system of Camenisch and Lysyanskaya

[Go to top]

Simple Load Balancing for Distributed Hash Tables (PDF)
by John W. Byers, Jeffrey Considine, and Michael Mitzenmacher.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the power of two choices paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load balancing strategies, including load-stealing or load-shedding, as well as providing natural fault-tolerance mechanisms
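
A minimal sketch of the suggestion (names and hashing scheme mine): hash each key with two salts and store the item at the less loaded of the two resulting nodes; lookups must then probe both candidates.

    import hashlib

    def node_for(key, salt, n_nodes):
        """Map a key to one of n_nodes via a salted hash."""
        digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % n_nodes

    def insert(key, load, n_nodes):
        a = node_for(key, 0, n_nodes)
        b = node_for(key, 1, n_nodes)
        target = a if load[a] <= load[b] else b   # the less loaded of two choices
        load[target] += 1
        return target

    n = 16
    load = [0] * n
    for k in range(1000):
        insert(f"item-{k}", load, n)
    print(max(load), min(load))   # spread is far tighter than with a single hash

The classic balls-and-bins result behind this is that two choices shrink the maximum load from about log n / log log n to about log log n above the average, at the price of one extra lookup location.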

[Go to top]

Small Worlds in Security Systems: an Analysis of the PGP Certificate Graph (PDF)
by Srdan Capkun, Levente Buttyán, and Jean-Pierre Hubaux.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a new approach to securing self-organized mobile ad hoc networks. In this approach, security is achieved in a fully self-organized manner; by this we mean that the security system does not require any kind of certification authority or centralized server, even for the initialization phase. In our work, we were inspired by PGP [15] because its operation relies solely on the acquaintances between users. We show that the small-world phenomenon naturally emerges in the PGP system as a consequence of the self-organization of users. We show this by studying the PGP certificate graph properties and by quantifying its small-world characteristics. We argue that the certificate graphs of self-organized security systems will exhibit a similar small-world phenomenon, and we provide a way to model self-organized certificate graphs. The results of the PGP certificate graph analysis and graph modelling can be used to build new self-organized security systems and to test the performance of the existing proposals. In this work, we refer to such an example

[Go to top]

A State-of-the-Art Survey on Software Merging
by Tom Mens.
In IEEE Trans. Softw. Eng 28(5), 2002, pages 449-462. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Software merging is an essential aspect of the maintenance and evolution of large-scale software systems. This paper provides a comprehensive survey and analysis of available merge approaches. Over the years, a wide variety of different merge techniques has been proposed. While initial techniques were purely based on textual merging, more powerful approaches also take the syntax and semantics of the software into account. There is a tendency towards operation-based merging because of its increased expressiveness. Another tendency is to try to define merge techniques that are as general, accurate, scalable, and customizable as possible, so that they can be used in any phase in the software life-cycle and detect as many conflicts as possible. After comparing the possible merge techniques, we suggest a number of important open problems and future research directions

[Go to top]

Statistically Unique and Cryptographically Verifiable (SUCV) Identifiers and Addresses (PDF)
by Gabriel Montenegro.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper addresses the identifier ownership problem. It does so by using characteristics of Statistical Uniqueness and Cryptographic Verifiability (SUCV) of certain entities which this document calls SUCV Identifiers and Addresses. Their characteristics allow them to severely limit certain classes of denial of service attacks and hijacking attacks. SUCV addresses are particularly applicable to solve the address ownership problem that hinders mechanisms like Binding Updates in Mobile IPv6

[Go to top]

A Survey of Peer-to-Peer Security Issues (PDF)
by Dan S. Wallach.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications. We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems

[Go to top]

A survey of rollback-recovery protocols in message-passing systems (PDF)
by Mootaz Elnozahy, Lorenzo Alvisi, Yi-Min Wang, and David B. Johnson.
In ACM Comput. Surv 34(3), 2002, pages 375-408. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This survey covers rollback-recovery techniques that do not require special language constructs. In the first part of the survey we classify rollback-recovery protocols into checkpoint-based and log-based. Checkpoint-based protocols rely solely on checkpointing for system state restoration. Checkpointing can be coordinated, uncoordinated, or communication-induced. Log-based protocols combine checkpointing with logging of nondeterministic events, encoded in tuples called determinants. Depending on how determinants are logged, log-based protocols can be pessimistic, optimistic, or causal. Throughout the survey, we highlight the research issues that are at the core of rollback-recovery and present the solutions that currently address them. We also compare the performance of different rollback-recovery protocols with respect to a series of desirable properties and discuss the issues that arise in the practical implementations of these protocols

[Go to top]

Towards an Information Theoretic Metric for Anonymity (PDF)
by Andrei Serjantov and George Danezis.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we look closely at the popular metric of anonymity, the anonymity set, and point out a number of problems associated with it. We then propose an alternative information theoretic measure of anonymity which takes into account the probabilities of users sending and receiving the messages and show how to calculate it for a message in a standard mix-based anonymity system. We also use our metric to compare a pool mix to a traditional threshold mix, which was impossible using anonymity sets. We also show how the maximum route length restriction which exists in some fielded anonymity systems can lead to the attacker performing more powerful traffic analysis. Finally, we discuss open problems and future work on anonymity measurements
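
The shift from set size to entropy is easy to see numerically (a sketch in the spirit of the paper; the probabilities are invented): two systems with the same anonymity set of four users can offer very different effective anonymity.

    from math import log2

    def anonymity_bits(probs):
        """Shannon entropy of the attacker's sender distribution, in bits."""
        return -sum(p * log2(p) for p in probs if p > 0)

    uniform = [0.25] * 4            # classic anonymity set of size 4
    skewed  = [0.7, 0.1, 0.1, 0.1]  # same set size, much less anonymity

    print(anonymity_bits(uniform))  # 2.0 bits  -> effective set size 2**2   = 4
    print(anonymity_bits(skewed))   # ~1.36 bits -> effective set size ~2.6

Counting set members alone would rate both systems equally; the entropy metric correctly penalizes the skewed distribution.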

[Go to top]

Towards Measuring Anonymity (PDF)
by Claudia Diaz, Stefaan Seys, Joris Claessens, and Bart Preneel.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper introduces an information theoretic model that allows one to quantify the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities an attacker, after observing the system, assigns to the different users of the system as being the originators of a message. As a proof of concept, the model is applied to some existing systems. The model is shown to be very useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gets with a particular attack and for comparing different systems amongst each other
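
Concretely, the degree proposed here normalizes the attacker's entropy by its maximum (notation mine):

    $$ d = \frac{H(X)}{H_{\max}}, \qquad H_{\max} = \log_2 N, $$

where $H(X)$ is the entropy of the attacker's probability distribution over the $N$ users; $d = 1$ means the attacker has learned nothing, while $d = 0$ means the originator is identified with certainty.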

[Go to top]

Understanding BGP misconfiguration (PDF)
by Ratul Mahajan, David Wetherall, and Thomas Anderson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

It is well-known that simple, accidental BGP configuration errors can disrupt Internet connectivity. Yet little is known about the frequency of misconfiguration or its causes, except for the few spectacular incidents of widespread outages. In this paper, we present the first quantitative study of BGP misconfiguration. Over a three week period, we analyzed routing table advertisements from 23 vantage points across the Internet backbone to detect incidents of misconfiguration. For each incident we polled the ISP operators involved to verify whether it was a misconfiguration, and to learn the cause of the incident. We also actively probed the Internet to determine the impact of misconfiguration on connectivity. Surprisingly, we find that configuration errors are pervasive, with 200-1200 prefixes (0.2-1.0% of the BGP table size) suffering from misconfiguration each day. Close to 3 in 4 of all new prefix advertisements were results of misconfiguration. Fortunately, the connectivity seen by end users is surprisingly robust to misconfigurations. While misconfigurations can substantially increase the update load on routers, only one in twenty-five affects connectivity. While the causes of misconfiguration are diverse, we argue that most could be prevented through better router design

[Go to top]

Venti: A New Approach to Archival Storage (PDF)
by Sean Quinlan and Sean Dorward.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems

[Go to top]

Viceroy: a scalable and dynamic emulation of the butterfly (PDF)
by Dahlia Malkhi, Moni Naor, and David Ratajczak.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a family of constant-degree routing networks of logarithmic diameter, with the additional property that the addition or removal of a node to the network requires no global coordination, only a constant number of linkage changes in expectation, and a logarithmic number with high probability. Our randomized construction improves upon existing solutions, such as balanced search trees, by ensuring that the congestion of the network is always within a logarithmic factor of the optimum with high probability. Our construction derives from recent advances in the study of peer-to-peer lookup networks, where rapid changes require efficient and distributed maintenance, and where the lookup efficiency is impacted both by the lengths of paths to requested data and the presence or elimination of bottlenecks in the network

[Go to top]

A Computational Model of Trust and Reputation (PDF)
by Lik Mui, Mojdeh Mohtashemi, and Ari Halberstadt.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Despite their many advantages, e-businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for e-Businesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have drawn on work across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys the existing literature on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents' trust and reputation scores

[Go to top]

Finite-length analysis of low-density parity-check codes on the binary erasure channel (PDF)
by Changyan Di, David Proietti, I. Emre Telatar, Thomas J. Richardson, and Rüdiger L. Urbanke.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area

[Go to top]

A Measurement Study of Peer-to-Peer File Sharing Systems (PDF)
by Stefan Saroiu, P. Krishna Gummadi, and Steven D. Gribble.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

An Analysis of the Degradation of Anonymous Protocols (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There have been a number of protocols proposed for anonymous network communication. In this paper we investigate attacks by corrupt group members that degrade the anonymity of each protocol over time. We prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, Web Mixes, and DC-Net, can maintain anonymity in the face of the attacks described. Our results show that fully-connected DC-Net is the most resilient to these attacks, but it suffers from scalability issues that keep anonymity group sizes small. Additionally, we show how violating an assumption of the attack allows malicious users to set up other participants to falsely appear to be the initiator of a connection

[Go to top]

Anonymizing censorship resistant systems (PDF)
by Andrei Serjantov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a new Peer-to-Peer architecture for a censorship resistant system with user, server and active-server document anonymity as well as efficient document retrieval. The retrieval service is layered on top of an existing Peer-to-Peer infrastructure, which should facilitate its implementation. The key idea is to separate the role of document storers from the machines visible to the users, which makes each individual part of the system less prone to attacks, and therefore to censorship. Indeed, if one server has been pressured into removal, the other server administrators may simply follow the precedent and remove the offending content themselves

[Go to top]

Complex Queries in DHT-based Peer-to-Peer Networks (PDF)
by Matthew Harren, Joseph M. Hellerstein, Ryan Huebsch, Boon Thau Loo, S Shenker, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently a new generation of P2P systems, offering distributed hash table (DHT) functionality, has been proposed. These systems greatly improve the scalability and exact-match accuracy of P2P systems, but offer only the exact-match query facility. This paper outlines a research agenda for building complex query facilities on top of these DHT-based P2P systems. We describe the issues involved and outline our research plan and current status

[Go to top]

Kademlia: A Peer-to-peer Information System Based on the XOR Metric (PDF)
by Petar Maymounkov and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users
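
The XOR metric at the heart of Kademlia is simple enough to state in a few lines; a sketch with toy 4-bit IDs instead of the real 160-bit identifiers:

    def xor_distance(a: int, b: int) -> int:
        # Kademlia's distance between two node IDs: bitwise XOR,
        # interpreted as an integer. It is symmetric and unidirectional,
        # so lookups from different origins converge on the same nodes.
        return a ^ b

    def k_closest(target, nodes, k=2):
        return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

    nodes = [0b1010, 0b0111, 0b1100, 0b0001]
    print(k_closest(0b1000, nodes))  # [10, 12]: the IDs sharing the high bit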

[Go to top]

Reliable MIX Cascade Networks through Reputation (PDF)
by Roger Dingledine and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a MIX cascade protocol and a reputation system that together increase the reliability of a network of MIX cascades. In our protocol, MIX nodes periodically generate a communally random seed that, along with their reputations, determines cascade configuration. Nodes send test messages to monitor their cascades. Senders can also demonstrate message decryptions to convince honest cascade members that a cascade is misbehaving. By allowing any node to declare the failure of its own cascade, we eliminate the need for global trusted witnesses

[Go to top]

The Sybil Attack (PDF)
by John R. Douceur.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities

[Go to top]

Distributed Data Location in a Dynamic Network (PDF)
by Kirsten Hildrum, John Kubiatowicz, Satish Rao, and Ben Y. Zhao.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Modern networking applications replicate data and services widely, leading to a need for location-independent routing – the ability to route queries directly to objects using names that are independent of the objects' physical locations. Two important properties of a routing infrastructure are routing locality and rapid adaptation to arriving and departing nodes. We show how these two properties can be achieved with an efficient solution to the nearest-neighbor problem. We present a new distributed algorithm that can solve the nearest-neighbor problem for a restricted metric space. We describe our solution in the context of Tapestry, an overlay network infrastructure that employs techniques proposed by Plaxton, Rajaraman, and Richa

[Go to top]

Dummy Traffic Against Long Term Intersection Attacks (PDF)
by Oliver Berthold and Heinrich Langos.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a method to prevent so called intersection attacks on anonymity services. Intersection attacks are possible if not all users of such a service are active all the time and part of the transferred messages are linkable. Especially in real systems, the group of users (anonymity set) will change over time due to online and off-line periods. Our proposed solution is to send pregenerated dummy messages to the communication partner (e.g. the web server), during the user's off-line periods. For a detailed description of our method we assume a cascade of Chaumian MIXes as anonymity service and respect and fulfill the MIX attacker model

[Go to top]

Fingerprinting Websites Using Traffic Analysis (PDF)
by Andrew Hintz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

I present a traffic analysis based vulnerability in SafeWeb, an encrypting web proxy. This vulnerability allows someone monitoring the traffic of a SafeWeb user to determine if the user is visiting certain websites. I also describe a successful implementation of the attack. Finally, I discuss methods for improving the attack and for defending against the attack

[Go to top]

Internet pricing with a game theoretical approach: concepts and examples (PDF)
by Xi-Ren Cao, Hong-Xia Shen, Rodolfo Milito, and Patrica Wirth.
In IEEE/ACM Trans. Netw 10, April 2002, pages 208-216. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The basic concepts of three branches of game theory, leader-follower, cooperative, and two-person nonzero sum games, are reviewed and applied to the study of the Internet pricing issue. In particular, we emphasize that the cooperative game (also called the bargaining problem) provides an overall picture for the issue. With a simple model for Internet quality of service (QoS), we demonstrate that the leader-follower game may lead to a solution that is not Pareto optimal and in some cases may be "unfair," and that the cooperative game may provide a better solution for both the Internet service provider (ISP) and the user. The practical implication of the results is that government regulation or arbitration may be helpful. The QoS model is also applied to study the competition between two ISPs, and we find a Nash equilibrium point from which the two ISPs would not move out without cooperation. The proposed approaches can be applied to other Internet pricing problems such as the Paris Metro pricing scheme

[Go to top]

Privacy-enhancing technologies for the Internet, II: Five years later (PDF)
by Ian Goldberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Five years ago, Privacy-enhancing technologies for the Internet [23] examined the state of the then newly emerging privacy-enhancing technologies. In this survey paper, we look back at the last five years to see what has changed, what has stagnated, what has succeeded, what has failed, and why. We also look at current trends with a view towards the future

[Go to top]

Towards an Information Theoretic Metric for Anonymity (PDF)
by Andrei Serjantov and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we look closely at the popular metric of anonymity, the anonymity set, and point out a number of problems associated with it. We then propose an alternative information theoretic measure of anonymity which takes into account the probabilities of users sending and receiving the messages and show how to calculate it for a message in a standard mix-based anonymity system. We also use our metric to compare a pool mix to a traditional threshold mix, which was impossible using anonymity sets. We also show how the maximum route length restriction which exists in some fielded anonymity systems can lead to the attacker performing more powerful traffic analysis. Finally, we discuss open problems and future work on anonymity measurements

[Go to top]

Towards measuring anonymity (PDF)
by Claudia Diaz, Stefaan Seys, Joris Claessens, and Bart Preneel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper introduces an information theoretic model that makes it possible to quantify the degree of anonymity provided by schemes for anonymous connections. It considers attackers that obtain probabilistic information about users. The degree is based on the probabilities an attacker, after observing the system, assigns to the different users of the system as being the originators of a message. As a proof of concept, the model is applied to some existing systems. The model is shown to be very useful for evaluating the level of privacy a system provides under various attack scenarios, for measuring the amount of information an attacker gets with a particular attack and for comparing different systems amongst each other

[Go to top]

Unobservable Surfing on the World Wide Web: Is Private Information Retrieval an alternative to the MIX based Approach? (PDF)
by Dogan Kesdogan, Mark Borning, and Michael Schmeink.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The technique Private Information Retrieval (PIR) perfectly protects a user's access pattern to a database. An attacker cannot observe (or determine) which data element is requested by a user and so cannot deduce the interest of the user. We discuss the application of PIR on the World Wide Web and compare it to the MIX approach. We demonstrate particularly that in this context the method does not provide perfect security, and we give a mathematical model for the amount of information an attacker could obtain. We provide an extension of the method under which perfect security can still be achieved
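
For background, the flavor of PIR under discussion can be illustrated with the classic two-server XOR construction (a generic textbook scheme, not necessarily the variant the authors analyze):

    import secrets

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def server_answer(db, indices):
        # Each server XORs together the records at the requested indices.
        acc = bytes(len(db[0]))
        for j in indices:
            acc = xor_bytes(acc, db[j])
        return acc

    def pir_fetch(db, i):
        # Client sends a random index set S to server 1 and S xor {i} to
        # server 2; each set alone is uniformly random, so neither
        # (non-colluding) server learns i. XORing the answers cancels
        # every record except record i.
        s1 = {j for j in range(len(db)) if secrets.randbits(1)}
        s2 = s1 ^ {i}  # symmetric difference toggles i's membership
        return xor_bytes(server_answer(db, s1), server_answer(db, s2))

    db = [b"alpha", b"bravo", b"chick", b"delta"]
    assert pir_fetch(db, 2) == b"chick"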

[Go to top]

Statistical Identification of Encrypted Web Browsing Traffic (PDF)
by Qixiang Sun, Daniel R. Simon, Yi-Min Wang, Wilf Russell, Venkata N. Padmanabhan, and Lili Qiu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Encryption is often proposed as a tool for protecting the privacy of World Wide Web browsing. However, encryption – particularly as typically implemented in, or in concert with, popular Web browsers – does not hide all information about the encrypted plaintext. Specifically, HTTP object count and sizes are often revealed (or at least incompletely concealed). We investigate the identifiability of World Wide Web traffic based on this unconcealed information in a large sample of Web pages, and show that it suffices to identify a significant fraction of them quite reliably. We also suggest some possible countermeasures against the exposure of this kind of information and experimentally evaluate their effectiveness
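
A toy stand-in for the paper's classifier shows why leaked object counts and sizes matter: even a crude similarity over size sets can pick the right page (the fingerprints and sizes below are made up):

    def size_similarity(observed, candidate):
        # Jaccard similarity between the set of HTTP object sizes seen
        # on the wire and a candidate page's known size fingerprint.
        observed, candidate = set(observed), set(candidate)
        return len(observed & candidate) / len(observed | candidate)

    fingerprints = {
        "news-front-page": [512, 1380, 20444, 98231],
        "mail-login":      [512, 743, 3011],
    }
    trace = [512, 1380, 20444, 98231]  # object sizes of an encrypted load
    guess = max(fingerprints,
                key=lambda page: size_similarity(trace, fingerprints[page]))
    print(guess)  # news-front-page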

[Go to top]

Analysis of an Anonymity Network for Web Browsing (PDF)
by Marc Rennhard, Sandro Rafaeli, Laurent Mathy, Bernhard Plattner, and David Hutchison.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Various systems offering anonymity for near real-time Internet traffic have been operational. However, they did not deliver many quantitative results about performance, bandwidth overhead, or other issues that arise when implementing or operating such a system. Consequently, the problem of designing and operating these systems in a way that they provide a good balance between usability, protection from attacks, and overhead is not well understood. In this paper, we present the analysis of an anonymity network for web browsing that offers a high level of anonymity against a sophisticated attacker and good end-to-end performance at a reasonable bandwidth overhead. We describe a novel way of operating the system that maximizes the protection from traffic analysis attacks while minimizing the bandwidth overhead. We deliver quantitative results about the performance of our system, which should help to give a better understanding of anonymity networks

[Go to top]

Cebolla: Pragmatic IP Anonymity (PDF)
by Zach Brown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Cebolla is an intersection of cryptographic mix networks and the environment of the public Internet. Most of the history of cryptographic mix networks lies in academic attempts to provide anonymity of various sorts to the users of the network. While based on strong cryptographic principles, most attempts have failed to address properties of the public network and the reasonable expectations of most of its users. Cebolla attempts to address this gulf between the interesting research aspects of IP level anonymity and the operational expectations of most uses of the IP network

[Go to top]

Detecting shared congestion of flows via end-to-end measurement (PDF)
by Dan Rubenstein, Jim Kurose, and Don Towsley.
In IEEE/ACM Transactions on Networking 10, June 2002, pages 381-395. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Current Internet congestion control protocols operate independently on a per-flow basis. Recent work has demonstrated that cooperative congestion control strategies between flows can improve performance for a variety of applications, ranging from aggregated TCP transmissions to multiple-sender multicast applications. However, in order for this cooperation to be effective, one must first identify the flows that are congested at the same set of resources. We present techniques based on loss or delay observations at end hosts to infer whether or not two flows experiencing congestion are congested at the same network resources. Our novel result is that such detection can be achieved for unicast flows, but the techniques can also be applied to multicast flows. We validate these techniques via queueing analysis, simulation and experimentation within the Internet. In addition, we demonstrate preliminary simulation results that show that the delay-based technique can determine whether two TCP flows are congested at the same set of resources. We also propose metrics that can be used as a measure of the amount of congestion sharing between two flows

[Go to top]

The GNet Whitepaper (PDF)
by Krista Bennett, Tiberius Stef, Christian Grothoff, Tzvetan Horozov, and Ioana Patrascu.
In unknown, June 2002. (BibTeX entry) (Download bibtex record)
(direct link)

This paper describes GNet, a reliable anonymous distributed backup system with reasonable defenses against malicious hosts and low overhead in traffic and CPU time. The system design is described and compared to other publicly used services with similar goals. Additionally, the implementation and the protocols of GNet are presented

[Go to top]

Probabilistic Location and Routing (PDF)
by Sean C. Rhea and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We propose probabilistic location to enhance the performance of existing peer-to-peer location mechanisms in the case where a replica for the queried data item exists close to the query source. We introduce the attenuated Bloom filter, a lossy distributed index data structure. We describe how to use these data structures for document location and how to maintain them despite document motion. We include a detailed performance study which indicates that our algorithm performs as desired, both finding closer replicas and finding them faster than deterministic algorithms alone
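
An attenuated Bloom filter is an array of Bloom filters in which level d summarizes the documents reachable d hops along a link. A minimal sketch (parameters, hashing, and the maintenance protocol in the paper differ):

    import hashlib

    class BloomFilter:
        def __init__(self, m=256, k=3):
            self.m, self.k, self.bits = m, k, 0
        def _positions(self, item):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.m
        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos
        def __contains__(self, item):
            return all(self.bits >> pos & 1 for pos in self._positions(item))

    class AttenuatedBloomFilter:
        # levels[d] summarizes documents about d hops away along this
        # link; a match at a shallow level steers the query toward a
        # close replica, at the cost of Bloom-filter false positives.
        def __init__(self, depth=3):
            self.levels = [BloomFilter() for _ in range(depth)]
        def add(self, doc, distance):
            self.levels[distance].add(doc)
        def first_match(self, doc):
            for d, f in enumerate(self.levels):
                if doc in f:
                    return d
            return None

    link = AttenuatedBloomFilter()
    link.add("doc-42", distance=1)
    print(link.first_match("doc-42"))  # 1, barring false positives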

[Go to top]

Building secure file systems out of Byzantine storage (PDF)
by David Mazières and Dennis Shasha.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper shows how to implement a trusted network file system on an untrusted server. While cryptographic storage techniques exist that allow users to keep data secret from untrusted servers, this work concentrates on the detection of tampering attacks and stale data. Ideally, users of an untrusted storage server would immediately and unconditionally notice any misbehavior on the part of the server. This ideal is unfortunately not achievable. However, we define a notion of data integrity called fork consistency in which, if the server delays just one user from seeing even a single change by another, the two users will never again see one another's changes—a failure easily detectable with on-line communication. We give a practical protocol for a multi-user network file system called SUNDR, and prove that SUNDR offers fork consistency whether or not the server obeys the protocol

[Go to top]

A Market-Based Approach to Optimal Resource Allocation in Integrated-Services Connection-Oriented Networks (PDF)
by Panagiotis Thomas, Demosthenis Teneketzis, and Jeffrey K. MacKie-Mason.
In Operations Research 50(4), July 2002, pages 603-616. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present an approach to the admission control and resource allocation problem in connection-oriented networks that offer multiple services to users. Users' preferences are summarized by means of their utility functions, and each user is allowed to request more than one type of service. Multiple types of resources are allocated at each link along the path of a connection. We assume that the relation between Quality of Service (QoS) and resource allocation is given, and we incorporate it as a constraint into a static optimization problem. The objective of the optimization problem is to determine, for each type of service, the amount provided and the resources required, so as to maximize the sum of the users' utilities. We prove the existence of a solution of the optimization problem and describe a competitive market economy that implements the solution and satisfies the informational constraints imposed by the nature of the decentralized resource allocation problem. The economy consists of four different types of agents: resource providers, service providers, users, and an auctioneer that regulates the prices based on the observed aggregate excess demand. The goods that are sold are: (i) the resources at each link of the network, and (ii) services constructed from these resources and then delivered to users. We specify an iterative procedure that is used by the auctioneer to update the prices, and we show that it leads to an allocation that is arbitrarily close to a solution of the optimization problem in a finite number of iterations
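
The auctioneer's role can be pictured with a generic tatonnement loop that raises the price of an over-demanded resource and lowers it otherwise (a deliberately simplified illustration; the paper's update rule and convergence argument are more involved):

    def tatonnement(excess_demand, price=1.0, step=0.05, iters=200):
        # excess_demand(p) = aggregate demand minus capacity at price p.
        # Raise the price when demand exceeds supply, lower it otherwise.
        for _ in range(iters):
            price = max(1e-6, price + step * excess_demand(price))
        return price

    # Toy single-link market: demand 10/p against a capacity of 5 units.
    print(round(tatonnement(lambda p: 10.0 / p - 5.0), 3))  # ~2.0 clears it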

[Go to top]

Reclaiming Space from Duplicate Files in a Serverless Distributed File System (PDF)
by John R. Douceur, Atul Adya, William J. Bolosky, Dan Simon, and Marvin Theimer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Farsite distributed file system provides availability by replicating each file onto multiple desktop computers. Since this replication consumes significant storage space, it is important to reclaim used space where possible. Measurement of over 500 desktop file systems shows that nearly half of all consumed space is occupied by duplicate files. We present a mechanism to reclaim space from this incidental duplication to make it available for controlled file replication. Our mechanism includes: (1) convergent encryption, which enables duplicate files to be coalesced into the space of a single file, even if the files are encrypted with different users' keys; and (2) SALAD, a Self-Arranging Lossy Associative Database for aggregating file content and location information in a decentralized, scalable, fault-tolerant manner. Large-scale simulation experiments show that the duplicate-file coalescing system is scalable, highly effective, and fault-tolerant
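
Convergent encryption keys each file with a hash of its own contents, so identical plaintexts yield identical ciphertexts that the server can coalesce. A toy sketch using a SHA-256 counter keystream in place of a real block cipher (our simplification, not Farsite's actual construction):

    import hashlib

    def _keystream(key, n):
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def convergent_encrypt(plaintext):
        # Key = hash of the content itself: two users holding the same
        # file produce byte-identical ciphertexts without sharing keys.
        key = hashlib.sha256(plaintext).digest()
        stream = _keystream(key, len(plaintext))
        return key, bytes(p ^ s for p, s in zip(plaintext, stream))

    k1, c1 = convergent_encrypt(b"same file contents")
    k2, c2 = convergent_encrypt(b"same file contents")
    assert c1 == c2  # duplicates coalesce even across different users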

[Go to top]

Infranet: Circumventing Web Censorship and Surveillance (PDF)
by Nick Feamster, Magdalena Balazinska, Greg Harfst, Hari Balakrishnan, and David Karger.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

An increasing number of countries and companies routinely block or monitor access to parts of the Internet. To counteract these measures, we propose Infranet, a system that enables clients to surreptitiously retrieve sensitive content via cooperating Web servers distributed across the global Internet. These Infranet servers provide clients access to censored sites while continuing to host normal uncensored content. Infranet uses a tunnel protocol that provides a covert communication channel between its clients and servers, modulated over standard HTTP transactions that resemble innocuous Web browsing. In the upstream direction, Infranet clients send covert messages to Infranet servers by associating meaning to the sequence of HTTP requests being made. In the downstream direction, Infranet servers return content by hiding censored data in uncensored images using steganographic techniques. We describe the design, a prototype implementation, security properties, and performance of Infranet. Our security analysis shows that Infranet can successfully circumvent several sophisticated censoring techniques
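
The upstream channel can be pictured as encoding covert bits in the choice of which innocuous link to request next. A toy encoder (the real protocol's mapping is keyed and adaptive; this is only the intuition):

    import math

    def encode_bits_as_requests(bits, visible_links):
        # Each request to one of n candidate links conveys log2(n)
        # covert bits; the censor sees only plausible browsing.
        per_request = int(math.log2(len(visible_links)))
        requests = []
        for i in range(0, len(bits), per_request):
            chunk = bits[i:i + per_request].ljust(per_request, "0")
            requests.append(visible_links[int(chunk, 2)])
        return requests

    links = [f"/page{i}.html" for i in range(8)]  # 8 links = 3 bits each
    print(encode_bits_as_requests("101001110", links))
    # ['/page5.html', '/page1.html', '/page6.html']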

[Go to top]

The LSD Broadcast Encryption Scheme (PDF)
by Dani Halevy and Adi Shamir.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Broadcast Encryption schemes enable a center to broadcast encrypted programs so that only designated subsets of users can decrypt each program. The stateless variant of this problem provides each user with a fixed set of keys which is never updated. The best scheme published so far for this problem is the "subset difference" (SD) technique of Naor, Naor, and Lotspiech, in which each one of the n users is initially given O(log^2(n)) symmetric encryption keys. This allows the broadcaster to define at a later stage any subset of up to r users as "revoked", and to make the program accessible only to their complement by sending O(r) short messages before the encrypted program, and asking each user to perform an O(log(n)) computation. In this paper we describe the "Layered Subset Difference" (LSD) technique, which achieves the same goal with O(log^(1+ε)(n)) keys, O(r) messages, and O(log(n)) computation. This reduces the number of keys given to each user by almost a square root factor without affecting the other parameters. In addition, we show how to use the same LSD keys in order to address any subset defined by a nested combination of inclusion and exclusion conditions with a number of messages which is proportional to the complexity of the description rather than to the size of the subset. The LSD scheme is truly practical, and makes it possible to broadcast an unlimited number of programs to 256,000,000 possible customers by giving each new customer a smart card with one kilobyte of tamper-resistant memory. It is then possible to address any subset defined by t nested inclusion and exclusion conditions by sending less than 4t short messages, and the scheme remains secure even if all the other users form an adversarial coalition

[Go to top]

Making mix nets robust for electronic voting by randomized partial checking (PDF)
by Markus Jakobsson, Ari Juels, and Ron Rivest.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a new technique for making mix nets robust, called randomized partial checking (RPC). The basic idea is that rather than providing a proof of completely correct operation, each server provides strong evidence of its correct operation by revealing a pseudo-randomly selected subset of its input/output relations. Randomized partial checking is exceptionally efficient compared to previous proposals for providing robustness; the evidence provided at each layer is shorter than the output of that layer, and producing the evidence is easier than doing the mixing. It works with mix nets based on any encryption scheme (i.e., on public-key alone, and on hybrid schemes using public-key/symmetric-key combinations). It also works both with Chaumian mix nets where the messages are successively encrypted with each server's key, and with mix nets based on a single public key with randomized re-encryption at each layer. Randomized partial checking is particularly well suited for voting systems, as it ensures voter privacy and provides assurance of correct operation. Voter privacy is ensured (either probabilistically or cryptographically) with appropriate design and parameter selection. Unlike previous work, our work provides voter privacy as a global property of the mix net rather than as a property ensured by a single honest server. RPC-based mix nets also provide high assurance of a correct election result, since a corrupt server is very likely to be caught if it attempts to tamper with even a couple of ballots
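
The core of RPC is easy to simulate: a mix commits to a secret permutation and, when challenged, opens a random subset of its input/output links. A sketch on raw values (real mixes open re-encryption relations, not plaintexts):

    import random

    def rpc_challenge(perm, fraction=0.5):
        # The server reveals input->output links for a random subset of
        # positions; over many slots, cheating on even a few ballots is
        # likely to expose a revealed link that does not verify.
        opened = random.sample(range(len(perm)), int(len(perm) * fraction))
        return {i: perm[i] for i in opened}

    def rpc_verify(inputs, outputs, revealed):
        return all(inputs[i] == outputs[j] for i, j in revealed.items())

    inputs = ["m0", "m1", "m2", "m3"]
    perm = [2, 0, 3, 1]            # input i is routed to output perm[i]
    outputs = [None] * 4
    for i, j in enumerate(perm):
        outputs[j] = inputs[i]
    assert rpc_verify(inputs, outputs, rpc_challenge(perm))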

[Go to top]

Distributed algorithmic mechanism design: recent results and future directions (PDF)
by Joan Feigenbaum and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science's traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet's decentralized nature, in which distributed computation and autonomous agents prevail, makes DAMD a very natural approach for many Internet problems. This paper first outlines the basics of DAMD and then reviews previous DAMD results on multicast cost sharing and interdomain routing. The remainder of the paper describes several promising research directions and poses some specific open problems

[Go to top]

Chaffinch: Confidentiality in the Face of Legal Threats (PDF)
by Richard Clayton and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present the design and rationale of a practical system for passing confidential messages. The mechanism is an adaptation of Rivest's chaffing and winnowing, which has the legal advantage of using authentication keys to provide privacy. We identify a weakness in Rivest's particular choice of his package transform as an all-or-nothing element within his scheme. We extend the basic system to allow the passing of several messages concurrently. Only some of these messages need be divulged under legal duress, the other messages will be plausibly deniable. We show how this system may have some resilience to the type of legal attack inherent in the UK's Regulation of Investigatory Powers (RIP) Act

[Go to top]

Fast and secure distributed read-only file system (PDF)
by Kevin Fu, Frans M. Kaashoek, and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Internet users increasingly rely on publicly available data for everything from software installation to investment decisions. Unfortunately, the vast majority of public content on the Internet comes with no integrity or authenticity guarantees. This paper presents the self-certifying read-only file system, a content distribution system providing secure, scalable access to public, read-only data. The read-only file system makes the security of published content independent from that of the distribution infrastructure. In a secure area (perhaps off-line), a publisher creates a digitally-signed database out of a file system's contents. The publisher then replicates the database on untrusted content-distribution servers, allowing for high availability. The read-only file system protocol furthermore pushes the cryptographic cost of content verification entirely onto clients, allowing servers to scale to a large number of clients. Measurements of an implementation show that an individual server running on a 550 MHz Pentium III with FreeBSD can support 1,012 connections per second and 300 concurrent clients compiling a large software package

[Go to top]

From a Trickle to a Flood: Active Attacks on Several Mix Types (PDF)
by Andrei Serjantov, Roger Dingledine, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The literature contains a variety of different mixes, some of which have been used in deployed anonymity systems. We explore their anonymity and message delay properties, and show how to mount active attacks against them by altering the traffic between the mixes. We show that if certain mixes are used, such attacks cannot destroy the anonymity of a particular message completely. We work out the cost of these attacks in terms of the number of messages the attacker must insert into the network and the time he must spend. We discuss advantages and disadvantages of these mixes and the settings in which their use is appropriate. Finally, we look at dummy traffic and SG mixes as other promising ways of protecting against the attacks, point out potential weaknesses in existing designs, and suggest improvements

[Go to top]

Inter-Packet Delay Based Correlation for Tracing Encrypted Connections through Stepping Stones (PDF)
by Xinyuan Wang, Douglas S. Reeves, and S. Felix Wu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network based intrusions have become a serious threat to the users of the Internet. Intruders who wish to attack computers attached to the Internet frequently conceal their identity by staging their attacks through intermediate stepping stones. This makes tracing the source of the attack substantially more difficult, particularly if the attack traffic is encrypted. In this paper, we address the problem of tracing encrypted connections through stepping stones. The incoming and outgoing connections through a stepping stone must be correlated to accomplish this. We propose a novel correlation scheme based on inter-packet timing characteristics of both encrypted and unencrypted connections. We show that (after some filtering) inter-packet delays (IPDs) of both encrypted and unencrypted, interactive connections are preserved across many router hops and stepping stones. The effectiveness of this method for correlation purposes also requires that timing characteristics be distinctive enough to identify connections. We have found that normal interactive connections such as telnet, SSH and rlogin are almost always distinctive enough to provide correct correlation across stepping stones. The number of packets needed to correctly correlate two connections is also an important metric, and is shown to be quite modest for this method
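
The underlying signal is that inter-packet delays survive hops largely intact. A bare-bones illustration using plain Pearson correlation (the paper's correlation metrics are more refined):

    def ipds(timestamps):
        # Inter-packet delays of a flow.
        return [b - a for a, b in zip(timestamps, timestamps[1:])]

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    upstream = [0.00, 0.31, 0.35, 0.90, 1.02]        # packet timestamps
    downstream = [t + 0.04 + 0.001 * i               # latency plus jitter
                  for i, t in enumerate(upstream)]
    print(pearson(ipds(upstream), ipds(downstream)))  # close to 1.0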

[Go to top]

Limits of Anonymity in Open Environments (PDF)
by Dogan Kesdogan, Dakshi Agrawal, and Stefan Penz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A user is only anonymous within a set of other users. Hence, the core functionality of an anonymity providing technique is to establish an anonymity set. In open environments, such as the Internet, the established anonymity sets in the whole are observable and change with every anonymous communication. We use this fact of changing anonymity sets and present a model where we can determine the protection limit of an anonymity technique, i.e. the number of observations required for an attacker to uniquely break a given anonymity technique. In this paper, we use the popular MIX method to demonstrate our attack. The MIX method forms the basis of most of today's deployments of anonymity services (e.g. Freedom, Onion Routing, Webmix). We note that our approach is general and can be applied equally well to other anonymity providing techniques
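
The attack boils down to intersecting the anonymity sets observed whenever the target communicates. A minimal sketch of that step (the paper's contribution is the model bounding how many observations it takes):

    def intersection_attack(observations):
        # Each observation: the set of candidate peers seen when the
        # target's message traversed the MIX. Recurring communication
        # shrinks the intersection until only the true peer remains.
        suspects = set(observations[0])
        for anonymity_set in observations[1:]:
            suspects &= set(anonymity_set)
        return suspects

    rounds = [
        {"alice", "bob", "carol"},
        {"alice", "dave", "erin"},
        {"alice", "bob", "frank"},
    ]
    print(intersection_attack(rounds))  # {'alice'}: anonymity broken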

[Go to top]

Replication Strategies in Unstructured Peer-to-Peer Networks (PDF)
by Edith Cohen and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Peer-to-Peer (P2P) architectures that are most prevalent in today's Internet are decentralized and unstructured. Search is blind in that it is independent of the query and is thus not more effective than probing randomly chosen peers. One technique to improve the effectiveness of blind search is to proactively replicate data

[Go to top]

Almost Entirely Correct Mixing With Application to Voting (PDF)
by Dan Boneh and Philippe Golle.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In order to design an exceptionally efficient mix network, both asymptotically and in real terms, we develop the notion of almost entirely correct mixing, and propose a new mix network that is almost entirely correct. In our new mix, the real cost of proving correctness is orders of magnitude faster than all other mix nets. The trade-off is that our mix only guarantees "almost entirely correct" mixing, i.e. it guarantees that the mix network processed correctly all inputs with high (but not overwhelming) probability. We use a new technique for verifying correctness. This new technique consists of computing the product of a random subset of the inputs to a mix server, then require the mix server to produce a subset of the outputs of equal product. Our new mix net is of particular value for electronic voting, where a guarantee of almost entirely correct mixing may well be sufficient to announce instantly the result of a large election. The correctness of the result can later be verified beyond a doubt using any one of a number of much slower proofs of perfect-correctness, without having to mix the ballots again
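
The verification trick can be mimicked on raw integers: pick a random subset of inputs and ask the mix, which knows its secret permutation, to exhibit output slots with the same product. Our toy uses plain modular products where the real scheme works on re-encrypted ciphertexts:

    import random

    def modprod(values, p):
        acc = 1
        for v in values:
            acc = acc * v % p
        return acc

    def spot_check(inputs, outputs, perm, p, trials=10):
        # For each random input subset S the honest mix answers with
        # T = perm(S); equal products mod p are strong evidence that
        # (almost) every input was carried through correctly.
        for _ in range(trials):
            s = [i for i in range(len(inputs)) if random.getrandbits(1)]
            t = [perm[i] for i in s]
            if modprod((inputs[i] for i in s), p) != \
               modprod((outputs[j] for j in t), p):
                return False
        return True

    p = 2 ** 61 - 1                 # a convenient prime modulus
    inputs = [17, 23, 42, 99]
    perm = [3, 1, 0, 2]             # input i lands in output slot perm[i]
    outputs = [None] * 4
    for i, j in enumerate(perm):
        outputs[j] = inputs[i]
    assert spot_check(inputs, outputs, perm, p)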

[Go to top]

Forward Secure Mixes (PDF)
by George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

New threats such as compulsion to reveal logs, secret and private keys as well as to decrypt material are studied in the context of the security of mix networks. After a comparison of this new threat model with the traditional one, a new construction is introduced, the fs-mix, that minimizes the impact that such powers have on the security of the network, by using forward secure communication channels and key updating operations inside the mixes. A discussion about the forward security of these new proposals and some extensions is included

[Go to top]

Introducing MorphMix: Peer-to-Peer based Anonymous Internet Usage with Collusion Detection (PDF)
by Marc Rennhard and Bernhard Plattner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional mix-based systems are composed of a small set of static, well known, and highly reliable mixes. To resist traffic analysis attacks at a mix, cover traffic must be used, which results in significant bandwidth overhead. End-to-end traffic analysis attacks are even more difficult to counter because there are only a few entry-and exit-points in the system. Static mix networks also suffer from scalability problems and in several countries, institutions operating a mix could be targeted by legal attacks. In this paper, we introduce MorphMix, a system for peer-to-peer based anonymous Internet usage. Each MorphMix node is a mix and anyone can easily join the system. We believe that MorphMix overcomes or reduces several drawbacks of static mix networks. In particular, we argue that our approach offers good protection from traffic analysis attacks without employing cover traffic. But MorphMix also introduces new challenges. One is that an adversary can easily operate several malicious nodes in the system and try to break the anonymity of legitimate users by getting full control over their anonymous paths. To counter this attack, we have developed a collusion detection mechanism, which makes it possible to identify compromised paths with high probability before they are used

[Go to top]

Tarzan: A Peer-to-Peer Anonymizing Network Layer (PDF)
by Michael J. Freedman and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tarzan is a peer-to-peer anonymous IP network overlay. Because it provides IP service, Tarzan is general-purpose and transparent to applications. Organized as a decentralized peer-to-peer overlay, Tarzan is fault-tolerant, highly scalable, and easy to manage. Tarzan achieves its anonymity with layered encryption and multi-hop routing, much like a Chaumian mix. A message initiator chooses a path of peers pseudo-randomly through a restricted topology in a way that adversaries cannot easily influence. Cover traffic prevents a global observer from using traffic analysis to identify an initiator. Protocols toward unbiased peer-selection offer new directions for distributing trust among untrusted entities. Tarzan provides anonymity to either clients or servers, without requiring that both participate. In both cases, Tarzan uses a network address translator (NAT) to bridge between Tarzan hosts and oblivious Internet hosts. Measurements show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route

[Go to top]

Capacity-achieving sequences for the erasure channel (PDF)
by Peter Oswald and M. Amin Shokrollahi.
In IEEE Trans. Information Theory 48, December 2002, pages 3017-3028. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper starts a systematic study of capacity-achieving sequences of low-density parity-check codes for the erasure channel. We introduce a class A of analytic functions and develop a procedure to obtain degree distributions for the codes. We show various properties of this class which will help us construct new distributions from old ones. We then study certain types of capacity-achieving sequences and introduce new measures for their optimality. For instance, it turns out that the right-regular sequence is capacity-achieving in a much stronger sense than, e.g., the Tornado sequence. This also explains why numerical optimization techniques tend to favor graphs with only one degree of check nodes. Using our methods, we attack the problem of reducing the fraction of degree 2 variable nodes, which has important practical implications. It turns out that one can produce capacity-achieving sequences for which this fraction remains below any constant, albeit at the price of slower convergence to capacity

[Go to top]

FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment (PDF)
by Atul Adya, William J. Bolosky, Miguel Castro, Gerald Cermak, Ronnie Chaiken, John R. Douceur, Jon Howell, Jacob R. Lorch, Marvin Theimer, and Roger Wattenhofer.
In ACM SIGOPS Operating Systems Review 36, December 2002, pages 1-14. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design

[Go to top]

P5: A Protocol for Scalable Anonymous Communication (PDF)
by Rob Sherwood, Bobby Bhattacharjee, and Aravind Srinivasan.
In Journal of Computer Security 13, December 2002, pages 839-876. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a protocol for anonymous communication over the Internet. Our protocol, called P5 (Peer-to-Peer Personal Privacy Protocol), provides sender-, receiver-, and sender-receiver anonymity. P5 is designed to be implemented over the current Internet protocols, and does not require any special infrastructure support. A novel feature of P5 is that it allows individual participants to trade-off degree of anonymity for communication efficiency, and hence can be used to scalably implement large anonymous groups. We present a description of P5, an analysis of its anonymity and communication efficiency, and evaluate its performance using detailed packet-level simulations

[Go to top]

A Secure Directory Service based on Exclusive Encryption (PDF)
by John R. Douceur, Atul Adya, Josh Benaloh, William J. Bolosky, and Gideon Yuval.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe the design of a Windows file-system directory service that ensures the persistence, integrity, privacy, syntactic legality, and case-insensitive uniqueness of the names it indexes. Byzantine state replication provides persistence and integrity, and encryption imparts privacy. To enforce Windows' baroque name syntax–including restrictions on allowable characters, on the terminal character, and on several specific names–we develop a cryptographic process, called "exclusive encryption," that inherently excludes syntactically illegal names and that enables the exclusion of case-insensitively duplicate names without access to their plaintext. This process excludes entire names by mapping the set of allowed strings to the set of all strings, excludes certain characters through an amended prefix encoding, excludes terminal characters through varying the prefix coding by character index, and supports case-insensitive comparison of names by extracting and encrypting case information separately. We also address the issues of hiding name-length information and access-authorization information, and we report a newly discovered problem with enforcing case-insensitive uniqueness for Unicode names

[Go to top]

Tools for privacy preserving distributed data mining (PDF)
by Chris Clifton, Murat Kantarcioglu, Jaideep Vaidya, Xiaodong Lin, and Michael Y. Zhu.
In SIGKDD Explorations Newsletter 4(2), December 2002, pages 28-34. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems

[Go to top]

2003

Ad hoc-VCG: a truthful and cost-efficient routing protocol for mobile ad hoc networks with selfish agents (PDF)
by Luzi Anderegg and Stephan Eidenbenz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a game-theoretic setting for routing in a mobile ad hoc network that consists of greedy, selfish agents who accept payments for forwarding data for other agents if the payments cover their individual costs incurred by forwarding data. In this setting, we propose Ad hoc-VCG, a reactive routing protocol that achieves the design objectives of truthfulness (i.e., it is in the agents' best interest to reveal their true costs for forwarding data) and cost-efficiency (i.e., it guarantees that routing is done along the most cost-efficient path) in a game-theoretic sense by paying to the intermediate nodes a premium over their actual costs for forwarding data packets. We show that the total overpayment (i.e., the sum of all premiums paid) is relatively small by giving a theoretical upper bound and by providing experimental evidence. Our routing protocol implements a variation of the well-known mechanism by Vickrey, Clarke, and Groves in a mobile network setting. Finally, we analyze a very natural routing protocol that is an adaptation of the Packet Purse Model [8] with auctions in our setting and show that, unfortunately, it does not achieve cost-efficiency or truthfulness

[Go to top]

An analysis of compare-by-hash (PDF)
by Val Henson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recent research has produced a new and perhaps dangerous technique for uniquely identifying blocks that I will call compare-by-hash. Using this technique, we decide whether two blocks are identical to each other by comparing their hash values, using a collision-resistant hash such as SHA-1[5]. If the hash values match, we assume the blocks are identical without further ado. Users of compare-by-hash argue that this assumption is warranted because the chance of a hash collision between any two randomly generated blocks is estimated to be many orders of magnitude smaller than the chance of many kinds of hardware errors. Further analysis shows that this approach is not as risk-free as it seems at first glance
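
The probability both camps argue about is the birthday bound. A two-line estimate (the standard approximation, not a figure taken from the paper):

    def collision_probability(num_blocks, hash_bits=160):
        # Birthday-bound estimate p ~ n^2 / 2^(b+1) for n random blocks
        # under a b-bit hash such as SHA-1.
        return num_blocks ** 2 / 2 ** (hash_bits + 1)

    # A filesystem with a billion distinct blocks under SHA-1:
    print(collision_probability(10 ** 9))  # ~3.4e-31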

[Go to top]

Analytical and Empirical Analysis of Countermeasures to Traffic Analysis Attacks (PDF)
by Xinwen Fu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies countermeasures to traffic analysis attacks. A common strategy for such countermeasures is link padding. We consider systems where payload traffic is padded so that packets have either constant inter-arrival times or variable inter-arrival times. The adversary applies statistical recognition techniques to detect the payload traffic rates by using statistical measures like sample mean, sample variance, or sample entropy. We evaluate quantitatively the ability of the adversary to make a correct detection and derive closed-form formulas for the detection rate based on analytical models. Extensive experiments were carried out to validate the system performance predicted by the analytical method. Based on the systematic evaluations, we develop design guidelines for the proper configuration of a system in order to minimize the detection rate
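
The adversary's statistical measures are ordinary sample statistics over observed inter-arrival times; a small sketch of the three named in the abstract (the bin count and data are our own):

    import math
    from collections import Counter

    def sample_stats(inter_arrivals, bins=10):
        n = len(inter_arrivals)
        mean = sum(inter_arrivals) / n
        variance = sum((x - mean) ** 2 for x in inter_arrivals) / (n - 1)
        lo, hi = min(inter_arrivals), max(inter_arrivals)
        width = (hi - lo) / bins or 1.0  # avoid zero-width bins
        counts = Counter(min(int((x - lo) / width), bins - 1)
                         for x in inter_arrivals)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return mean, variance, entropy

    # Perfect constant-rate padding collapses variance and entropy to 0,
    # so residual payload-driven timing stands out against that baseline.
    print(sample_stats([0.1] * 50))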

[Go to top]

Asymptotically Efficient Approaches to Fault-Tolerance in Peer-to-Peer (PDF)
by Kirsten Hildrum and John Kubiatowicz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we show that two peer-to-peer systems, Pastry [13] and Tapestry [17] can be made tolerant to certain classes of failures and a limited class of attacks. These systems are said to operate properly if they can find the closest node matching a requested ID. The system must also be able to dynamically construct the necessary routing information when new nodes enter or the network changes. We show that with an additional factor of storage overhead and communication overhead, they can continue to achieve both of these goals in the presence of a constant fraction of nodes that do not obey the protocol. Our techniques are similar in spirit to those of Saia et al. [14] and Naor and Wieder [10]. Some simple simulations show that these techniques are useful even with constant overhead

[Go to top]

Automatic Context Integration for Group Aware Environments (PDF)
by Bernhard Hurler, Leo Petrak, Thomas Fuhrmann, Oliver Brand, and Martina Zitterbart.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tele-collaboration is a valuable tool that can connect learners at different sites and help them benefit from their respective competences. Albeit many e-learning applications provide a high level of technical sophistication, such tools typically fall short of reflecting the learners' full context, e.g., their presence and awareness. Hence, these applications cause many disturbances in the social interaction of the learners. This paper describes mechanisms to improve the group awareness in e-learning environments with the help of automatic integration of such context information from the physical world. This information is gathered by different embedded sensors in various objects, e.g., a coffee mug or an office chair. This paper also describes first results of the integration of these sensors into an existing CSCW/CSCL framework

[Go to top]

Buses for Anonymous Message Delivery (PDF)
by Amos Beimel and Shlomi Dolev.
In Journal of Cryptology 16(1), 2003, pages 25-39. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This work develops a novel approach to hide the senders and the receivers of messages. The intuition is taken from an everyday activity that hides the communication pattern: the public transportation system. To describe our protocols, buses are used as a metaphor: buses, i.e., messages, are traveling on the network, and each piece of information is allocated a seat within the bus. Routes are chosen and buses are scheduled to traverse these routes. Deterministic and randomized protocols are presented; the protocols differ in the number of buses in the system, the worst case traveling time, and the required buffer size in a station. In particular, a protocol that is based on cluster partition of the network is presented; in this protocol there is one bus traversing each cluster. The clusters' size in the partition gives time and communication tradeoffs. One advantage of our protocols over previous works is that they are not based on statistical properties of the communication pattern. Another advantage is that they only require the processors in the communication network to be busy periodically

[Go to top]

A charging and rewarding scheme for packet forwarding in multi-hop cellular networks (PDF)
by Naouel Ben Salem, Levente Buttyán, Jean-Pierre Hubaux, and Markus Jakobsson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In multi-hop cellular networks, data packets have to be relayed hop by hop from a given mobile station to a base station and vice-versa. This means that the mobile stations must accept to forward information for the benefit of other stations. In this paper, we propose an incentive mechanism that is based on a charging/rewarding scheme and that makes collaboration rational for selfish nodes. We base our solution on symmetric cryptography to cope with the limited resources of the mobile stations. We provide a set of protocols and study their robustness with respect to various attacks. By leveraging on the relative stability of the routes, our solution leads to a very moderate overhead

[Go to top]

Connecting Vehicle Scatternets by Internet-Connected Gateways (PDF)
by Kendy Kutzner, Jean-Jacques Tchouto, Marc Bechler, Lars Wolf, Bernd Bochow, and Thomas Luckenbach.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents an approach for interconnecting isolated clouds of an ad hoc network that form a scatternet topology using Internet gateways as intermediate nodes. The architecture developed is intended to augment FleetNet, a highly dynamic ad hoc network for inter-vehicle communications. This is achieved by upgrading FleetNet capabilities to establish a communication path between moving vehicles and the Internet via Internet gateways to facilitate direct gateway to gateway communications via the Internet, thus bridging gaps in the network topology and relaying packets closer towards their geographical destination at the same time. After outlining the overall FleetNet approach and its underlying geographical multi-hop routing, we focus on the FleetNet gateway architecture. We describe required modifications to the gateway architecture and to the FleetNet network layer in order to use these gateways as intermediate nodes for FleetNet routing. Finally, we conclude the paper by a short discussion on the prototype gateway implementation and by summarizing first results and ongoing work on inter scatternet communication

[Go to top]

A cooperative internet backup scheme (PDF)
by Mark Lillibridge, Sameh Elnikety, Andrew D. Birrell, Mike Burrows, and Michael Isard.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a novel peer-to-peer backup technique that allows computers connected to the Internet to back up their data cooperatively: Each computer has a set of partner computers, which collectively hold its backup data. In return, it holds a part of each partner's backup data. By adding redundancy and distributing the backup data across many partners, a highly-reliable backup can be obtained in spite of the low reliability of the average Internet machine. Because our scheme requires cooperation, it is potentially vulnerable to several novel attacks involving free riding (e.g., holding a partner's data is costly, which tempts cheating) or disruption. We defend against these attacks using a number of new methods, including the use of periodic random challenges to ensure partners continue to hold data and the use of disk-space wasting to make cheating unprofitable. Results from an initial prototype show that our technique is feasible and very inexpensive: it appears to be one to two orders of magnitude cheaper than existing Internet backup services
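
The periodic random challenges can be made concrete in a few lines. Below is a minimal sketch, assuming a hash-based challenge in which the challenger sends a fresh nonce and the partner must answer with a hash bound to the stored block; the function names are hypothetical, not the paper's API.

```python
import hashlib
import os

def make_challenge() -> bytes:
    """Challenger picks a fresh random nonce for each probe."""
    return os.urandom(16)

def respond(nonce: bytes, stored_block: bytes) -> bytes:
    """Partner proves possession: hash of nonce plus the stored block.
    Precomputing answers is useless because the nonce is unpredictable."""
    return hashlib.sha256(nonce + stored_block).digest()

def verify(nonce: bytes, original_block: bytes, answer: bytes) -> bool:
    """Challenger recomputes the expected answer from its own copy
    (or from a precomputed nonce/answer list) and compares."""
    return hashlib.sha256(nonce + original_block).digest() == answer

# Usage: a partner that silently dropped the block fails the check.
block = b"backup data" * 100
nonce = make_challenge()
assert verify(nonce, block, respond(nonce, block))
assert not verify(nonce, block, respond(nonce, b"something else"))
```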

[Go to top]

A delay-tolerant network architecture for challenged internets (PDF)
by Kevin Fall.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service

[Go to top]

Design and evaluation of a low density generator matrix (PDF)
by Vincent Roca, Zainab Khallouf, and Julien Laboure.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional small block Forward Error Correction (FEC) codes, like the Reed-Solomon erasure (RSE) code, are known to raise efficiency problems, in particular when they are applied to the Asynchronous Layered Coding (ALC) reliable multicast protocol. In this paper we describe the design of a simple large block Low Density Generator Matrix (LDGM) codec, a particular case of LDPC code, which is capable of operating on source blocks that are several tens of megabytes long. We also explain how the iterative decoding feature of LDGM/LDPC can be used to protect a large number of small independent objects during time-limited partially-reliable sessions. We illustrate this feature with an example derived from a video streaming scheme over ALC. We then evaluate our LDGM codec and compare its performance with a well-known RSE codec. Tests focus on the global efficiency and on encoding/decoding performances. This paper deliberately skips theoretical aspects to focus on practical results. It shows that LDGM/LDPC open many opportunities in the area of bulk data multicasting

[Go to top]

A DHT-based Backup System (PDF)
by Emil Sit, Josh Cates, and Russ Cox.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed hash tables have been proposed as a way to simplify the construction of large-scale distributed applications (e.g. [1,6]). DHTs are completely decentralized systems that provide block storage on a changing collection of nodes spread throughout the Internet. Each block is identified by a unique key. DHTs spread the load of storing and serving blocks across all of the active nodes and keep the blocks available as nodes join and leave the system. This paper presents the design and implementation of a cooperative off-site backup system, Venti-DHash. Venti-DHash is based on a DHT infrastructure and is designed to support recovery of data after a disaster by keeping regular snapshots of filesystems distributed off-site, on peers on the Internet. Whereas conventional backup systems incur significant equipment costs, manual effort and high administrative overhead, we hope that a distributed backup system can alleviate these problems, making backups easy and feasible. By building this system on top of a DHT, the backup application inherits the properties of the DHT, and serves to evaluate the feasibility of using a DHT to build large-scale applications

[Go to top]

On the Economics of Anonymity (PDF)
by Alessandro Acquisti, Roger Dingledine, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Decentralized anonymity infrastructures are still not in wide use today. While there are technical barriers to a secure robust design, our lack of understanding of the incentives to participate in such systems remains a major roadblock. Here we explore some reasons why anonymity systems are particularly hard to deploy, enumerate the incentives to participate either as senders or also as nodes, and build a general model to describe the effects of these incentives. We then describe and justify some simplifying assumptions to make the model manageable, and compare optimal strategies for participants based on a variety of scenarios

[Go to top]

The Effect of Rumor Spreading in Reputation Systems for Mobile Ad-Hoc Networks (PDF)
by Sonja Buchegger and Jean-Yves Le Boudec.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile ad-hoc networks rely on the cooperation of nodes for routing and forwarding. For individual nodes there are however several advantages resulting from noncooperation, the most obvious being power saving. Nodes that act selfishly or even maliciously pose a threat to availability in mobile ad-hoc networks. Several approaches have been proposed to detect noncooperative nodes. In this paper, we investigate the effect of using rumors with respect to the detection time of misbehaved nodes as well as the robustness of the reputation system against wrong accusations. We propose a Bayesian approach for reputation representation, updates, and view integration. We also present a mechanism to detect and exclude potential lies. The simulation results indicate that by using this Bayesian approach, the reputation system is robust against slander while still benefitting from the speed-up in detection time provided by the use of rumors

[Go to top]

Establishing pairwise keys in distributed sensor networks (PDF)
by Donggang Liu and Peng Ning.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Pairwise key establishment is a fundamental security service in sensor networks; it enables sensor nodes to communicate securely with each other using cryptographic techniques. However, due to the resource constraints on sensors, it is infeasible to use traditional key management techniques such as public key cryptography and key distribution center (KDC). To facilitate the study of novel pairwise key predistribution techniques, this paper presents a general framework for establishing pairwise keys between sensors on the basis of a polynomial-based key predistribution protocol [2]. This paper then presents two efficient instantiations of the general framework: a random subset assignment key predistribution scheme and a grid-based key predistribution scheme. The analysis in this paper indicates that these two schemes have a number of nice properties, including high probability (or guarantee) to establish pairwise keys, tolerance of node captures, and low communication overhead. Finally, this paper presents a technique to reduce the computation at sensors required by these schemes
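
The polynomial-based predistribution that the framework builds on [2] admits a compact illustration: a trusted setup samples a symmetric bivariate polynomial f(x,y) over a prime field, node i stores the univariate share f(i,·), and any pair (i,j) derives the common key f(i,j) = f(j,i). A toy sketch follows; the field size and degree t are made-up parameters, not the paper's.

```python
import random

P = (1 << 61) - 1          # toy prime field; real deployments size this to the key length
T = 3                      # degree t: the scheme tolerates up to t captured nodes

# Symmetric coefficient matrix c[i][j] == c[j][i] defines f(x, y) = sum c[i][j] x^i y^j.
rng = random.Random(42)
c = [[0] * (T + 1) for _ in range(T + 1)]
for i in range(T + 1):
    for j in range(i, T + 1):
        c[i][j] = c[j][i] = rng.randrange(P)

def share(node_id: int):
    """Setup gives node `node_id` the coefficients of g(y) = f(node_id, y)."""
    return [sum(c[i][j] * pow(node_id, i, P) for i in range(T + 1)) % P
            for j in range(T + 1)]

def pairwise_key(my_share, peer_id: int) -> int:
    """Evaluate the share at the peer's id: f(me, peer) == f(peer, me)."""
    return sum(my_share[j] * pow(peer_id, j, P) for j in range(T + 1)) % P

alice, bob = 17, 99
assert pairwise_key(share(alice), bob) == pairwise_key(share(bob), alice)
```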

[Go to top]

Ext3cow: The Design, Implementation, and Analysis of Metadata for a Time-Shifting File System (PDF)
by Zachary N. J. Peterson and Randal C. Burns.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The ext3cow file system, built on Linux's popular ext3 file system, brings snapshot functionality and file versioning to the open-source community. Our implementation of ext3cow has several desirable properties: ext3cow is implemented entirely in the file system and, therefore, does not modify kernel interfaces or change the operation of other file systems; ext3cow provides a time-shifting interface that permits access to data in the past without polluting the file system namespace; and, ext3cow creates versions of files on disk without copying data in memory. Experimental results show that the time-shifting functions of ext3cow do not degrade file system performance. Ext3cow performs comparably to ext3 on many file system benchmarks and trace driven experiments

[Go to top]

Extremum Feedback with Partial Knowledge (PDF)
by Thomas Fuhrmann and Jörg Widmer.
In unknown Volume 2816/2003, 2003. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A scalable feedback mechanism to solicit feedback from a potentially very large group of networked nodes is an important building block for many network protocols. Multicast transport protocols use it for negative acknowledgements and for delay and packet loss determination. Grid computing and peer-to-peer applications can use similar approaches to find nodes that are, at a given moment in time, best suited to serve a request. In sensor networks, such mechanisms allow extreme values to be reported in a resource-efficient way. In this paper we analyze several extensions to the exponential feedback algorithm [5,6] that provide an optimal way to collect extreme values from a potentially very large group of networked nodes. In contrast to prior work, we focus on how knowledge about the value distribution in the group can be used to optimize the feedback process. We describe the trade-offs that have to be decided upon when using these extensions and provide additional insight into their performance by means of simulation. Furthermore, we briefly illustrate how sample applications can benefit from the proposed mechanisms
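
The flavor of timer-based suppression behind such exponential feedback schemes can be sketched as follows. This is only one way such a scheme can look, not the exact algorithm of [5,6]: each node delays its reply by a randomly drawn, exponentially biased timer, and suppresses if a value at least as extreme was already heard.

```python
import math
import random

def feedback_round(values, n_est, T=1.0, rng=random):
    """Simulate one feedback round for a maximum search among `values`.
    Each node draws u uniform in [0, 1) and schedules its reply at
    t = T * (1 - log(1 + (n_est - 1) * u) / log(n_est)),
    so reply times are exponentially biased and early slots are sparse.
    A node suppresses its reply if it already heard a value at least as
    large (replies are idealized as disseminated instantly)."""
    schedule = sorted(
        (T * (1 - math.log(1 + (n_est - 1) * rng.random()) / math.log(n_est)), v)
        for v in values)
    best, sent = -math.inf, 0
    for _t, v in schedule:
        if v > best:            # nothing better heard yet: this node replies
            best, sent = v, sent + 1
    return best, sent

random.seed(1)
vals = [random.gauss(0, 1) for _ in range(10_000)]
best, msgs = feedback_round(vals, n_est=10_000)
print(f"max {max(vals):.3f}, reported {best:.3f}, replies {msgs}")
```

Even with 10,000 nodes, only a handful of replies are actually sent; everyone else is suppressed by an earlier, better answer.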

[Go to top]

gap–Practical Anonymous Networking (PDF)
by Krista Bennett and Christian Grothoff.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes how anonymity is achieved in GNUnet, a framework for anonymous distributed and secure networking. The main focus of this work is gap, a simple protocol for anonymous transfer of data which can achieve better anonymity guarantees than many traditional indirection schemes and is additionally more efficient. gap is based on a new perspective on how to achieve anonymity. Based on this new perspective it is possible to relax the requirements stated in traditional indirection schemes, allowing individual nodes to balance anonymity with efficiency according to their specific needs

[Go to top]

HIERAS: A DHT Based Hierarchical P2P Routing Algorithm
by Zhiyong Xu, Rui Min, and Yiming Hu.
In Parallel Processing, International Conference on, 2003, pages 0-187. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The routing algorithm has a great influence on overall system performance in Peer-to-Peer (P2P) applications. In current DHT-based routing algorithms, routing tasks are distributed across all system peers. However, a routing hop can happen between two widely separated peers with high network link latency, which greatly increases system routing overheads. In this paper, we propose a new P2P routing algorithm, HIERAS, to relieve this problem; it keeps the scalability property of current DHT algorithms and improves system routing performance by the introduction of a hierarchical structure. In HIERAS, we create several lower-level P2P rings besides the highest-level P2P ring. A P2P ring is a subset of the overall P2P overlay network. We create P2P rings in such a way that the average link latency between two peers in lower-level rings is much smaller than in higher-level rings. Routing tasks are first executed in lower-level rings before they go up to higher-level rings; a large portion of routing hops previously executed in the global P2P ring are now replaced by hops in lower-level rings, thus routing overheads can be reduced. The simulation results show the HIERAS routing algorithm can significantly improve P2P system routing performance

[Go to top]

Kelips: Building an efficient and stable P2P DHT through increased memory and background overhead (PDF)
by Indranil Gupta, Kenneth P. Birman, Prakash Linga, Alan Demers, and Robbert Van Renesse.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A peer-to-peer (p2p) distributed hash table (DHT) system allows hosts to join and fail silently (or leave), as well as to insert and retrieve files (objects). This paper explores a new point in design space in which increased memory usage and constant background communication overheads are tolerated to reduce file lookup times and increase stability to failures and churn. Our system, called Kelips, uses peer-to-peer gossip to partially replicate file index information. In Kelips, (a) under normal conditions, file lookups are resolved with O(1) time and complexity (i.e., independent of system size), and (b) membership changes (e.g., even when a large number of nodes fail) are detected and disseminated to the system quickly. Per-node memory requirements are small in medium-sized systems. When there are failures, lookup success is ensured through query rerouting. Kelips achieves load balancing comparable to existing systems. Locality is supported by using topologically aware gossip mechanisms. Initial results of an ongoing experimental study are also discussed
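
The O(1) lookup rests on two ingredients: names are hashed into k affinity groups, and every node replicates the full file index of its own group while keeping a few contacts in each foreign group. A minimal sketch, assuming for simplicity that node groups are assigned round-robin (real Kelips hashes node identifiers too) and that gossip has already converged; all names are illustrative.

```python
import hashlib
import random

def group_of(name: str, k: int) -> int:
    """Hash a name (node id or file name) into one of k affinity groups."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % k

class KelipsView:
    """Per-node soft state: the node's own group, one contact per foreign
    group, and a replicated index of all files homed in the node's group."""
    def __init__(self, node: str, group: int):
        self.node, self.group = node, group
        self.contacts = {}        # foreign group id -> some member of it
        self.file_index = {}      # file name -> home node

def lookup(view, filename, views, k):
    """O(1) lookup: at most one hop to a contact in the file's group,
    which answers from the index gossip has replicated group-wide."""
    g = group_of(filename, k)
    responder = view if g == view.group else views[view.contacts[g]]
    return responder.file_index.get(filename)

k = 3
nodes = [f"node{i}" for i in range(9)]
views = {n: KelipsView(n, i % k) for i, n in enumerate(nodes)}
members = {g: [n for n in nodes if views[n].group == g] for g in range(k)}
random.seed(0)
for n in nodes:                   # one contact per foreign group
    for g in range(k):
        if g != views[n].group:
            views[n].contacts[g] = random.choice(members[g])
fname, home = "paper.pdf", "node0"
for m in members[group_of(fname, k)]:   # gossip outcome: group-wide index entry
    views[m].file_index[fname] = home
print(lookup(views["node5"], fname, views, k))   # -> node0
```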

[Go to top]

Koorde: A Simple degree-optimal distributed hash table (PDF)
by Frans M. Kaashoek and David Karger.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Koorde is a new distributed hash table (DHT) based on Chord [15] and de Bruijn graphs [2]. While inheriting the simplicity of Chord, Koorde meets various lower bounds, such as O(log n) hops per lookup request with only 2 neighbors per node (where n is the number of nodes in the DHT), and O(log n/log log n) hops per lookup request with O(log n) neighbors per node
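
The de Bruijn routing that gives Koorde its bounds is easy to state: with b-bit identifiers, node m points to 2m mod 2^b and 2m+1 mod 2^b, and a lookup shifts one bit of the key into the current identifier per hop. A sketch of the ideal, fully populated graph (Koorde's imaginary-node embedding for sparse rings is omitted):

```python
B = 8                         # identifier length in bits: 2**B ids
MASK = (1 << B) - 1

def debruijn_lookup(start: int, key: int):
    """Walk the de Bruijn graph: every hop shifts the current identifier
    left by one bit and appends the next bit of the key, so after B hops
    the current identifier equals the key. Returns the nodes visited."""
    path, cur = [start], start
    for i in reversed(range(B)):
        bit = (key >> i) & 1
        cur = ((cur << 1) | bit) & MASK    # neighbor 2m or 2m+1 (mod 2^B)
        path.append(cur)
    return path

route = debruijn_lookup(start=0b10110011, key=0b01100101)
assert route[-1] == 0b01100101 and len(route) == B + 1
print([format(n, "08b") for n in route])
```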

[Go to top]

A Lightweight Currency Paradigm for the P2P Resource Market (PDF)
by David A. Turner and Keith W. Ross.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A P2P resource market is a market in which peers trade resources (including storage, bandwidth and CPU cycles) and services with each other. We propose a specific paradigm for a P2P resource market. This paradigm has five key components: (i) pairwise trading market, with peers setting their own prices for offered resources; (ii) multiple currency economy, in which any peer can issue its own currency; (iii) no legal recourse, thereby limiting the transaction costs in trades; (iv) a simple, secure application-layer protocol; and (v) entity identification based on the entity's unique public key. We argue that the paradigm can lead to a flourishing P2P resource market, allowing applications to tap into the huge pool of surplus peer resources. We illustrate the paradigm and its corresponding Lightweight Currency Protocol (LCP) with several application examples

[Go to top]

Making gnutella-like P2P systems scalable (PDF)
by Yatin Chawathe, Lee Breslau, Nick Lanham, and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed

[Go to top]

Metadata Efficiency in Versioning File Systems (PDF)
by Craig A. N. Soules, Garth R. Goodson, John D. Strunk, and Gregory R. Ganger.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Versioning file systems retain earlier versions of modified files, allowing recovery from user mistakes or system corruption. Unfortunately, conventional versioning systems do not efficiently record large numbers of versions. In particular, versioned metadata can consume as much space as versioned data. This paper examines two space-efficient metadata structures for versioning file systems and describes their integration into the Comprehensive Versioning File System (CVFS), which keeps all versions of all files. Journal-based metadata encodes each metadata version into a single journal entry; CVFS uses this structure for inodes and indirect blocks, reducing the associated space requirements by 80%. Multiversion b-trees extend each entry's key with a timestamp and keep current and historical entries in a single tree; CVFS uses this structure for directories, reducing the associated space requirements by 99%. Similar space reductions are predicted via trace analysis for other versioning strategies (e.g., on-close versioning). Experiments with CVFS verify that its current-version performance is similar to that of non-versioning file systems while reducing overall space needed for history data by a factor of two. Although access to historical versions is slower than conventional versioning systems, checkpointing is shown to mitigate and bound this effect
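
The multiversion idea (each entry's key extended with a timestamp, history and current state side by side in one structure) can be mimicked with a sorted version list per name. This is only an illustrative stand-in for the lookup behavior, not CVFS's on-disk structure; all names are hypothetical.

```python
import bisect
import itertools

class VersionedDirectory:
    """Toy multiversion map: put() never overwrites, and get() can answer
    'as of time t' queries, mimicking (key, timestamp) entries kept
    together in one tree."""
    def __init__(self):
        self._hist = {}                   # name -> ([timestamps], [values])
        self._clock = itertools.count()   # logical time

    def put(self, name, value):
        t = next(self._clock)
        times, vals = self._hist.setdefault(name, ([], []))
        times.append(t)
        vals.append(value)
        return t

    def get(self, name, as_of=None):
        times, vals = self._hist[name]
        if as_of is None:
            return vals[-1]               # current version
        i = bisect.bisect_right(times, as_of) - 1
        if i < 0:
            raise KeyError(f"{name} did not exist at time {as_of}")
        return vals[i]

d = VersionedDirectory()
t0 = d.put("/etc/motd", "hello")
d.put("/etc/motd", "hello, world")
assert d.get("/etc/motd") == "hello, world"
assert d.get("/etc/motd", as_of=t0) == "hello"
```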

[Go to top]

Mixminion: Design of a Type III Anonymous Remailer Protocol (PDF)
by George Danezis, Roger Dingledine, and Nick Mathewson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present Mixminion, a message-based anonymous remailer protocol with secure single-use reply blocks. Mix nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. We add directory servers that allow users to learn public keys and performance statistics of participating remailers, and we describe nymservers that provide long-term pseudonyms using single-use reply blocks as a primitive. Our design integrates link encryption between remailers to provide forward anonymity. Mixminion works in a real-world Internet environment, requires little synchronization or coordination between nodes, and protects against known anonymity-breaking attacks as well as or better than other systems with similar design parameters

[Go to top]

Multi-dimensional range queries in sensor networks (PDF)
by Xin Li, Young Jin Kim, Ramesh Govindan, and Wei Hong.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Multiple language family support for programmable network systems (PDF)
by Michael Conrad, Marcus Schoeller, Thomas Fuhrmann, Gerhard Bocksch, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Various programmable networks have been designed and implemented during the last couple of years. Many of them are focused on a single programming language only. This limitation might, to a certain extent, hinder the productivity of service modules being programmed for such networks. Therefore, the concurrent support of service modules written in multiple programming languages was investigated within the FlexiNet project. Basically, support for three major programming paradigms was incorporated into FlexiNet: compiled programming languages like C, interpreted languages (e.g., Java), and hardware description languages such as VHDL. The key concept can be seen in an integral interface that is used by all three programming languages. This leads to a configuration scheme which is totally transparent to the programming languages used to develop the service. In order to get a better idea about the impact of the programming language used, some measurement experiments were conducted

[Go to top]

The nesC language: A holistic approach to networked embedded systems (PDF)
by David Gay, Matt Welsh, Philip Levis, Eric Brewer, Robert Von Behren, and David Culler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present nesC, a programming language for networked embedded systems that represent a new design space for application developers. An example of a networked embedded system is a sensor network, which consists of (potentially) thousands of tiny, low-power "motes," each of which executes concurrent, reactive programs that must operate with severe memory and power constraints. nesC's contribution is to support the special needs of this domain by exposing a programming model that incorporates event-driven execution, a flexible concurrency model, and component-oriented application design. Restrictions on the programming model allow the nesC compiler to perform whole-program analyses, including data-race detection (which improves reliability) and aggressive function inlining (which reduces resource consumption). nesC has been used to implement TinyOS, a small operating system for sensor networks, as well as several significant sensor applications. nesC and TinyOS have been adopted by a large number of sensor network research groups, and our experience and evaluation of the language shows that it is effective at supporting the complex, concurrent programming style demanded by this new class of deeply networked systems

[Go to top]

Network Services for the Support of Very-Low-Resource Devices (PDF)
by Thomas Fuhrmann, Till Harbaum, and Martina Zitterbart.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Visions of future computing scenarios envisage a multitude of very-low-resource devices linked by power-efficient wireless communication means. This paper presents our vision of such a scenario. From this vision requirements are derived for an infrastructure that is able to satisfy the largely differing needs of these devices. The paper also shows how innovative, collaborating applications between distributed sensors and actuators can arise from such an infrastructure. The realization of such innovative applications is illustrated with two examples of straightforward services that have been implemented with the AMnet infrastructure that is currently being developed in the FlexiNet project. Additionally, first performance measurements for one of these services are given. Index terms: Bluetooth, Programmable networks, Sensor-actuator networks

[Go to top]

New Covert Channels in HTTP: Adding Unwitting Web Browsers to Anonymity Sets
by Matthias Bauer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents new methods enabling anonymous communication on the Internet. We describe a new protocol that allows us to create an anonymous overlay network by exploiting the web browsing activities of regular users. We show that the overlay network provides an anonymity set greater than the set of senders and receivers in a realistic threat model. In particular, the protocol provides unobservability in our threat model

[Go to top]

A New Generation of File Sharing Tools
by Dan Klinedinst.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A Node Evaluation Mechanism for Service Setup in AMnet (PDF)
by Thomas Fuhrmann, Marcus Schoeller, Christina Schmidt, and Martina Zitterbart.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

AMnet is a programmable network that aims at the flexible and rapid creation of services within an IP network. Examples for typical services include network layer enhancements e.g. for multicast and mobility, transport layer enhancements e.g. to integrate wireless LANs, and various application layer services e.g. for media transcoding and content distribution. AMnet is based on regular Linux boxes that run an execution environment (EE), a resource monitor, and a basic signaling-engine. These so-called active nodes run the services and provide support for resource-management and module-relocation. Services are created by service modules, small pieces of code, that are executed within the EE. Based on the standard netfilter mechanism of Linux, service modules have full access to the network traffic passing through the active node. This paper describes the evaluation mechanism for service setup in AMnet. In order to determine where a service module can be started, service modules are accompanied by evaluation modules. This allows service module authors to implement various customized strategies for node-selection and service setup. Examples that are supported by the AMnet evaluation mechanism are a) service setup at a fixed position, e.g. as gateway, b) along a fixed path (with variable position along that path), c) at variable positions inside the network with preferences for certain constellations, or d) at an unspecified position, e.g. for modification of multicasted traffic. The required path information is gathered by the AMnodes present in the network. By interaction with the resource monitors of the AMnodes and the service module repository of the respective administrative domain, the AMnet evaluation also ensures overall system security and stability

[Go to top]

Opportunistic Use of Content Addressable Storage for Distributed File Systems (PDF)
by Niraj Tolia, Michael Kozuch, Mahadev Satyanarayanan, Brad Karp, Thomas Bressoud, and Adrian Perrig.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Motivated by the prospect of readily available Content Addressable Storage (CAS), we introduce the concept of file recipes. A file's recipe is a first-class file system object listing content hashes that describe the data blocks composing the file. File recipes provide applications with instructions for reconstructing the original file from available CAS data blocks. We describe one such application of recipes, the CASPER distributed file system. A CASPER client opportunistically fetches blocks from nearby CAS providers to improve its performance when the connection to a file server traverses a low-bandwidth path. We use measurements of our prototype to evaluate its performance under varying network conditions. Our results demonstrate significant improvements in execution times of applications that use a network file system. We conclude by describing fuzzy block matching, a promising technique for using approximately matching blocks on CAS providers to reconstitute the exact desired contents of a file at a client
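
A recipe reduces a file to an ordered list of content hashes, so reconstruction is hash-addressed fetching plus verification. A minimal sketch, assuming fixed-size blocks and a dict standing in for a nearby CAS provider (CASPER itself negotiates blocks with real providers and falls back to the file server); all names are hypothetical.

```python
import hashlib

BLOCK = 4096

def make_recipe(data: bytes):
    """A file's recipe: the ordered SHA-1 hashes of its blocks (fixed-size
    blocks here for brevity; the chunking policy is orthogonal)."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def publish(data: bytes, cas: dict):
    """Make a file's blocks available from a CAS provider, keyed by hash."""
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        cas[hashlib.sha1(block).hexdigest()] = block

def reconstruct(recipe, cas: dict, fetch_from_server):
    """Rebuild the file: prefer nearby CAS blocks, fall back to the distant
    file server on a miss, and verify every block against the recipe."""
    out = []
    for h in recipe:
        block = cas[h] if h in cas else fetch_from_server(h)
        assert hashlib.sha1(block).hexdigest() == h, "wrong or corrupt block"
        out.append(block)
    return b"".join(out)

def server_fetch(h):              # stand-in for the slow path; unused here
    raise KeyError(f"server fetch needed for {h}")

cas = {}
data = b"x" * 10_000 + b"tail"
publish(data, cas)
assert reconstruct(make_recipe(data), cas, server_fetch) == data
```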

[Go to top]

An Overlay-Network Approach for Distributed Access to SRS (PDF)
by Thomas Fuhrmann, Andrea Schafferhans, and Thure Etzold.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

SRS is a widely used system for integrating biological databases. Currently, SRS relies only on locally provided copies of these databases. In this paper we propose a mechanism that also allows the seamless integration of remote databases. To this end, our proposed mechanism splits the existing SRS functionality into two components and adds a third component that enables us to employ peer-to-peer computing techniques to create optimized overlay-networks within which database queries can efficiently be routed. As an additional benefit, this mechanism also reduces the administration effort that would be needed with a conventional approach using replicated databases

[Go to top]

Peer-To-Peer Backup for Personal Area Networks (PDF)
by Boon Thau Loo, Anthony LaMarca, and Gaetano Borriello.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

FlashBack is a peer-to-peer backup algorithm designed for power-constrained devices running in a personal area network (PAN). Backups are performed transparently as local updates initiate the spread of backup data among a subset of the currently available peers. FlashBack limits power usage by avoiding flooding and keeping small neighbor sets. FlashBack has also been designed to utilize powered infrastructure when possible to further extend device lifetime. We propose our architecture and algorithms, and present initial experimental results that illustrate FlashBack's performance characteristics

[Go to top]

P-Grid: A Self-organizing Structured P2P System (PDF)
by Karl Aberer, Philippe Cudre-Mauroux, Anwitaman Datta, Zoran Despotovic, Manfred Hauswirth, Magdalena Punceva, and Roman Schmidt.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

PlanetP: Using Gossiping to Build Content Addressable Peer-to-Peer Information Sharing Communities (PDF)
by Francisco Matias Cuenca-Acuna, Christopher Peery, Richard P. Martin, and Thu D. Nguyen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

PlanetP is a peer-to-peer system in which searching content is done mostly locally. Every peer knows which content is available at which other peers. The index information is represented compactly using bloom filters and distributed throughout the network using push and pull mechanisms
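
The compact index can be sketched directly: each peer inserts its terms into a Bloom filter, the filters are gossiped, and a search tests the query against every known filter, contacting only peers that (probably) match. A toy Bloom filter with salted SHA-1 hashes; the parameters m and k are illustrative, not PlanetP's.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k salted hashes into an m-bit array.
    False positives are possible, false negatives are not."""
    def __init__(self, m=1 << 14, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item: str):
        for salt in range(self.k):
            h = hashlib.sha1(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: str):
        return all(self.bits >> p & 1 for p in self._positions(item))

# Each peer gossips (peer_id, filter); a search tests the filters locally.
peers = {"peer-a": ["gnunet", "anonymity"], "peer-b": ["bloom", "gossip"]}
index = {}
for pid, terms in peers.items():
    f = Bloom()
    for t in terms:
        f.add(t)
    index[pid] = f

query = "gossip"
print([pid for pid, f in index.items() if query in f])   # -> ['peer-b']
```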

[Go to top]

Practical Verifiable Encryption and Decryption of Discrete Logarithms (PDF)
by Jan Camenisch and Victor Shoup.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper addresses the problem of designing practical protocols for proving properties about encrypted data. To this end, it presents a variant of the new public-key encryption scheme of Cramer and Shoup based on Paillier's decision composite residuosity assumption, along with efficient protocols for verifiable encryption and decryption of discrete logarithms (and more generally, of representations with respect to multiple bases). This is the first verifiable encryption system that provides chosen ciphertext security and avoids inefficient cut-and-choose proofs. The presented protocols have numerous applications, including key escrow, optimistic fair exchange, publicly verifiable secret and signature sharing, universally composable commitments, group signatures, and confirmer signatures

[Go to top]

Querying the internet with PIER (PDF)
by Ryan Huebsch, Joseph M. Hellerstein, Nick Lanham, Boon Thau Loo, S Shenker, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Range Queries over DHTs
by Sylvia Ratnasamy, Joseph M. Hellerstein, and S Shenker.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are scalable peer-to-peer systems that support exact match lookups. This paper describes the construction and use of a Prefix Hash Tree (PHT) – a distributed data structure that supports range queries over DHTs. PHTs use the hash-table interface of DHTs to construct a search tree that is efficient (insertions/lookups take O(log D) DHT lookups, where D is the data domain being indexed) and robust (the failure of any given node in the search tree does not affect the availability of data stored at other nodes in the PHT)
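
The structure is easy to prototype against a plain hash-table interface: every node is named by a binary prefix, keys live at the leaf whose label prefixes them, and a range query descends only into prefixes that overlap the range. A toy sketch with a dict standing in for the DHT, 8-bit keys, and a small leaf capacity (in a real PHT the splits happen via DHT puts, and parameters differ):

```python
W = 8              # keys are W-bit integers
LEAF_CAP = 4       # max keys per leaf before it splits
dht = {}           # prefix label -> list of keys (leaf) or None (internal)

def _insert(prefix, key, depth):
    bucket = dht.setdefault(prefix, [])
    if len(bucket) < LEAF_CAP or depth == W:
        bucket.append(key)
        return
    for k in bucket + [key]:                 # split: push keys one level down
        bit = (k >> (W - depth - 1)) & 1
        _insert(prefix + str(bit), k, depth + 1)
    dht[prefix] = None                       # mark this node internal

def insert(key):
    node, depth = "", 0
    while dht.get(node, []) is None:         # descend to the responsible leaf
        node += str((key >> (W - depth - 1)) & 1)
        depth += 1
    _insert(node, key, depth)

def range_query(lo, hi, prefix="", depth=0):
    """Collect keys in [lo, hi], descending only into overlapping prefixes."""
    if prefix not in dht:
        return []
    p_lo = int(prefix + "0" * (W - depth), 2)    # smallest key under prefix
    p_hi = int(prefix + "1" * (W - depth), 2)    # largest key under prefix
    if p_hi < lo or p_lo > hi:
        return []
    bucket = dht[prefix]
    if bucket is not None:                       # leaf
        return [k for k in bucket if lo <= k <= hi]
    return (range_query(lo, hi, prefix + "0", depth + 1)
            + range_query(lo, hi, prefix + "1", depth + 1))

for v in (3, 7, 8, 13, 22, 25, 40, 41, 42, 200):
    insert(v)
print(sorted(range_query(10, 50)))               # -> [13, 22, 25, 40, 41, 42]
```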

[Go to top]

Reputation in P2P Anonymity Systems (PDF)
by Roger Dingledine, Nick Mathewson, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Decentralized anonymity systems tend to be unreliable, because users must choose nodes in the network without knowing the entire state of the network. Reputation systems promise to improve reliability by predicting network state. In this paper we focus on anonymous remailers and anonymous publishing, explain why the systems can benefit from reputation, and describe our experiences designing reputation systems for them while still ensuring anonymity. We find that in each example we first must redesign the underlying anonymity system to support verifiable transactions

[Go to top]

Results on the practical feasibility of programmable network services (PDF)
by Thomas Fuhrmann, Till Harbaum, Panos Kassianidis, Marcus Schoeller, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Active and programmable networks have been subject to intensive and successful research activities during the last couple of years. Many ideas and concepts have been pursued. However, only a few of the prototype implementations developed so far can deal with different applications in a larger-scale setting. Moreover, detailed performance analyses of such prototypes are largely missing today. Therefore, this paper does not present yet another architecture for active and programmable networks. In contrast, it rather focuses on the performance evaluation of the so-called AMnet approach that has already been presented previously [1]. As such, the paper demonstrates that an operational high-performance programmable network system with AAA (authentication, authorization, and accounting) security functionality will in fact be feasible in the near future

[Go to top]

Revealing Information While Preserving Privacy (PDF)
by Irit Dinur and Kobbi Nissim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d1,...,dn, with a query being a subset q ⊆ [n] to be answered by summation of values which belong to q. Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases we show that in order to achieve privacy one has to add perturbation of magnitude Ω(√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n). For time-T bounded adversaries we demonstrate a privacy-preserving access algorithm whose perturbation magnitude is ≈ √T

[Go to top]

Security Performance (PDF)
by Daniel Menascé.
In IEEE Internet Computing 7(3), 2003, pages 84-87. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several protocols and mechanisms aim to enforce the various dimensions of security in applications ranging from email to e-commerce transactions. Adding such mechanisms and procedures to applications and systems does not come cheaply, however, as they impose security trade-offs in the areas of performance and scalability

[Go to top]

Self-Organized Public-Key Management for Mobile Ad Hoc Networks (PDF)
by Srdjan Capkun, Levente Buttyán, and J-P Hubaux.
In IEEE Transactions on Mobile Computing 2(1), 2003, pages 52-64. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In contrast with conventional networks, mobile ad hoc networks usually do not provide online access to trusted authorities or to centralized servers, and they exhibit frequent partitioning due to link and node failures and to node mobility. For these reasons, traditional security solutions that require online trusted authorities or certificate repositories are not well-suited for securing ad hoc networks. In this paper, we propose a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services. Furthermore, our approach does not require any trusted authority, not even in the system initialization phase
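
At authentication time the scheme reduces to finding a certificate chain in the union of the two users' locally stored certificate subgraphs. A minimal sketch of that one step, with a hypothetical edge-list representation (the paper's contribution also covers how the subgraphs are built, maintained, and merged):

```python
from collections import deque

# Merged repositories: issuer -> identities whose keys it has certified.
certs = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["erin"],
    "erin":  ["dave"],
}

def cert_path(graph, src, dst):
    """BFS over the certificate graph: returns a chain of identities
    src -> ... -> dst, or None if the repositories don't connect them."""
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

print(cert_path(certs, "alice", "dave"))   # -> ['alice', 'bob', 'dave']
```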

[Go to top]

A Simple Fault Tolerant Distributed Hash Table (PDF)
by Moni Naor and Udi Wieder.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a distributed hash table (DHT) with logarithmic degree and logarithmic dilation. We show two lookup algorithms. The first has a message complexity of O(log n) and is robust under random deletion of nodes. The second has parallel time of O(log n) and message complexity of O(log² n). It is robust under spam induced by a random subset of the nodes. We then show a construction which is fault tolerant against random deletions and has an optimal degree-dilation tradeoff. The construction has improved parameters when compared to other DHTs. Its main merits are its simplicity, its flexibility and the fresh ideas introduced in its design. It is very easy to modify and to add more sophisticated protocols, such as dynamic caching and erasure correcting codes

[Go to top]

SkipNet: a scalable overlay network with practical locality properties (PDF)
by Nicholas J. A. Harvey, Michael B. Jones, Stefan Saroiu, Marvin Theimer, and Alec Wolman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Sloppy Hashing and Self-Organizing Clusters (PDF)
by Michael J. Freedman and David Mazières.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We are building Coral, a peer-to-peer content distribution system. Coral creates self-organizing clusters of nodes that fetch information from each other to avoid communicating with more distant or heavily-loaded servers. Coral indexes data, but does not store it. The actual content resides where it is used, such as in nodes' local web caches. Thus, replication happens exactly in proportion to demand

[Go to top]

A Special-Purpose Peer-to-Peer File Sharing System for Mobile Ad Hoc Networks (PDF)
by Alexander Klemm, Christoph Lindemann, and Oliver Waldhorst.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Establishing peer-to-peer (P2P) file sharing for mobile ad hoc networks (MANET) requires the construction of a search algorithm for transmitting queries and search results as well as the development of a transfer protocol for downloading files matching a query. In this paper, we present a special-purpose system for searching and file transfer tailored to both the characteristics of MANET and the requirements of peer-to-peer file sharing. Our approach is based on an application-layer overlay network. As an innovative feature, overlay routes are set up on demand by the search algorithm, closely matching network topology and transparently aggregating redundant transfer paths on a per-file basis. The transfer protocol guarantees high data rates and low transmission overhead by utilizing overlay routes. In a detailed ns2 simulation study, we show that both the search algorithm and the transfer protocol outperform off-the-shelf approaches based on a P2P file sharing system for the wireline Internet, TCP and a MANET routing protocol

[Go to top]

Stimulating cooperation in self-organizing mobile ad hoc networks (PDF)
by Levente Buttyán and Jean-Pierre Hubaux.
In Mob. Netw. Appl 8(5), 2003, pages 579-592. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In military and rescue applications of mobile ad hoc networks, all the nodes belong to the same authority; therefore, they are motivated to cooperate in order to support the basic functions of the network. In this paper, we consider the case when each node is its own authority and tries to maximize the benefits it gets from the network. More precisely, we assume that the nodes are not willing to forward packets for the benefit of other nodes. This problem may arise in civilian applications of mobile ad hoc networks. In order to stimulate the nodes for packet forwarding, we propose a simple mechanism based on a counter in each node. We study the behavior of the proposed mechanism analytically and by means of simulations, and detail the way in which it could be protected against misuse
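
The counter mechanism can be stated in a few lines: forwarding a packet increases the counter, originating an n-hop packet costs n, and the counter must stay non-negative, which is what makes forwarding rational. A toy sketch (in the paper's model a tamper-resistant security module enforces these rules; the names here are hypothetical):

```python
class Node:
    """Toy stimulation counter: forwarding earns one credit, originating
    an n-hop packet costs n, and the counter must remain non-negative."""
    def __init__(self, initial_credit=0):
        self.credit = initial_credit

    def forward_packet(self):
        self.credit += 1           # reward for spending energy on others

    def send_own_packet(self, hop_count: int) -> bool:
        if self.credit < hop_count:
            return False           # must forward more before sending again
        self.credit -= hop_count
        return True

n = Node()
assert not n.send_own_packet(hop_count=3)   # a purely selfish node is stuck
for _ in range(5):
    n.forward_packet()
assert n.send_own_packet(hop_count=3) and n.credit == 2
```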

[Go to top]

On the Strategic Importance of Programmable Middleboxes (PDF)
by Thomas Fuhrmann.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network protocols suffer from a lock-in dictated by the need for standardization and Metcalfe's law. Programmable middleboxes can help to relieve the effects of that lock-in. This paper gives game-theoretic arguments that show how the option of having middleboxes can raise the quality of communication protocols. Based on this analysis, design considerations for active and programmable networks are discussed

[Go to top]

Supporting Peer-to-Peer Computing with FlexiNet (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Formation of suitable overlay-network topologies that are able to reflect the structure of the underlying network infrastructure has rarely been addressed by peer-to-peer applications so far. Often, peer-to-peer protocols restrict themselves to purely random formation of their overlay network. This leads to a far from optimal performance of such peer-to-peer networks and ruthlessly wastes network resources. In this paper, we describe a simple mechanism that uses programmable network technologies to improve the topology formation process of unstructured peer-to-peer networks. Being a network service, our mechanism does not require any modification of existing applications or computing systems. By that, it assists network operators with improving the performance of their network and relieves programmers from the burden of designing and implementing topology-aware peer-to-peer protocols. Although we use the well-known Gnutella protocol to describe the mechanism of our proposed service, it applies to all kinds of unstructured global peer-to-peer computing applications

[Go to top]

Symphony: distributed hashing in a small world (PDF)
by Gurmeet Singh Manku, Mayank Bawa, and Prabhakar Raghavan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present Symphony, a novel protocol for maintaining distributed hash tables in a wide area network. The key idea is to arrange all participants along a ring and equip them with long distance contacts drawn from a family of harmonic distributions. Through simulation, we demonstrate that our construction is scalable, flexible, stable in the presence of frequent updates and offers small average latency with only a handful of long distance links per node. The cost of updates when hosts join and leave is small
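
The harmonic family is concrete enough to sample directly: link distances on the unit ring follow the density p(x) = 1/(x ln n) on [1/n, 1], which inverse-transform sampling turns into x = n^(u-1) for uniform u. A sketch of drawing one node's long-distance contacts (in Symphony proper the link goes to the manager of the sampled point; here we just return ring positions):

```python
import random

def harmonic_links(node_pos: float, n: int, k: int = 4, rng=random):
    """Draw k long-distance contacts for a node at `node_pos` on the unit
    ring [0, 1). Distances follow the harmonic pdf p(x) = 1/(x ln n) on
    [1/n, 1]; inverse-transform sampling gives x = n**(u - 1)."""
    links = []
    for _ in range(k):
        u = rng.random()
        distance = n ** (u - 1)
        links.append((node_pos + distance) % 1.0)   # clockwise contact
    return links

random.seed(7)
print(harmonic_links(node_pos=0.25, n=2 ** 20))
```

With k such links per node, greedy clockwise routing reaches any key in a small expected number of hops while each node maintains only a handful of long-distance pointers.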

[Go to top]

Taming the underlying challenges of reliable multihop routing in sensor networks (PDF)
by Alec Woo, Terence Tong, and David Culler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The dynamic and lossy nature of wireless communication poses major challenges to reliable, self-organizing multihop networks. These non-ideal characteristics are more problematic with the primitive, low-power radio transceivers found in sensor networks, and raise new issues that routing protocols must address. Link connectivity statistics should be captured dynamically through an efficient yet adaptive link estimator and routing decisions should exploit such connectivity statistics to achieve reliability. Link status and routing information must be maintained in a neighborhood table with constant space regardless of cell density. We study and evaluate link estimator, neighborhood table management, and reliable routing protocol techniques. We focus on a many-to-one, periodic data collection workload. We narrow the design space through evaluations on large-scale, high-level simulations to 50-node, in-depth empirical experiments. The most effective solution uses a simple time averaged EWMA estimator, frequency based table management, and cost-based routing
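
The estimator the study settles on is a window-averaged EWMA: each window of packet opportunities yields a reception ratio that is folded into the running estimate. A minimal sketch (the smoothing factor and window size are illustrative, not the paper's tuned values):

```python
import random

class EwmaLinkEstimator:
    """Window-averaged EWMA: every `window` packet opportunities yield a
    reception ratio, which is blended into the running quality estimate."""
    def __init__(self, alpha=0.9, window=8):
        self.alpha, self.window = alpha, window
        self.received = self.expected = 0
        self.quality = 0.0        # estimated link reception probability

    def packet_event(self, received: bool):
        self.expected += 1
        self.received += int(received)
        if self.expected == self.window:
            sample = self.received / self.expected
            self.quality = self.alpha * self.quality + (1 - self.alpha) * sample
            self.received = self.expected = 0

random.seed(3)
est = EwmaLinkEstimator()
for _ in range(800):              # a link that delivers roughly 70% of packets
    est.packet_event(random.random() < 0.7)
print(f"estimated link quality: {est.quality:.2f}")   # close to 0.70
```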

[Go to top]

On the Topology of Overlay-Networks (PDF)
by Thomas Fuhrmann.
In unknown, 2003. (BibTeX entry) (Download bibtex record)
(direct link)

Random-graph models are about to become an important tool in the study of wireless ad-hoc and sensor-networks, peer-to-peer networks, and, generally, overlay-networks. Such models provide a theoretical basis to assess the capabilities of certain networks, and guide the design of new protocols. Especially the recently proposed models for so-called small-world networks receive much attention from the networking community. This paper proposes the use of two more mathematical concepts for the analysis of network topologies, dimension and curvature. These concepts can intuitively be applied to, e.g., sensor-networks. But they can also be sensibly defined for certain other random-graph models. The latter is non-trivial since such models may describe purely virtual networks that do not inherit properties from an underlying physical world. Analysis of a random-graph model for Gnutella-like overlay-networks yields strong indications that such networks might be characterized as a sphere with fractal dimension

[Go to top]

A Transport Layer Abstraction for Peer-to-Peer Networks (PDF)
by Ronaldo A. Ferreira, Christian Grothoff, and Paul Ruth.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The initially unrestricted host-to-host communication model provided by the Internet Protocol has deteriorated due to political and technical changes caused by Internet growth. While this is not a problem for most client-server applications, peer-to-peer networks frequently struggle with peers that are only partially reachable. We describe how a peer-to-peer framework can hide diversity and obstacles in the underlying Internet and provide peer-to-peer applications with abstractions that hide transport specific details. We present the details of an implementation of a transport service based on SMTP. Small-scale benchmarks are used to compare transport services over UDP, TCP, and SMTP

[Go to top]

Usability and privacy: a study of Kazaa P2P file-sharing (PDF)
by Nathaniel S. Good and Aaron Krekelberg.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

P2P file sharing systems such as Gnutella, Freenet, and KaZaA, while primarily intended for sharing multimedia files, frequently allow other types of information to be shared. This raises serious concerns about the extent to which users may unknowingly be sharing private or personal information. In this paper, we report on a cognitive walkthrough and a laboratory user study of the KaZaA file sharing user interface. The majority of the users in our study were unable to tell what files they were sharing, and sometimes incorrectly assumed they were not sharing any files when in fact they were sharing all files on their hard drive. An analysis of the KaZaA network suggested that a large number of users appeared to be unwittingly sharing personal and private files, and that some users were indeed taking advantage of this and downloading files containing ostensibly private information

[Go to top]

Using Bluetooth for Informationally Enhanced Environments Abstract
by Thomas Fuhrmann and Till Harbaum.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The continued miniaturization in computing and wireless communication is about to make informationally enhanced environments become a reality. Already today, devices like a notebook computer or a personal digital assistant (PDA) can easily connect to the Internet via IEEE 802.11 networks (WaveLAN) or similar technologies provided at so-called hot-spots. In the near future, even smaller devices can join a wireless network to exchange status information or send and receive commands. In this paper, we present sample uses of a generic Bluetooth component that we have developed and that has been successfully integrated into various miniature devices to transmit sensor data or exchange control commands. The use of standard protocols like TCP/IP, Obex, and HTTP simplifies the use of those devices with conventional devices (notebook, PDA, cell-phone) without even requiring special drivers or applications for these devices. While such scenarios have already often been dreamt of, we are able to present a working solution based on small and cost-effective standard elements. We describe two applications that illustrate the power of this approach in the broad area of e-commerce, e-learning, and e-government: the BlueWand, a small, pen-like device that can control Bluetooth devices in its vicinity by simple gestures, and a door plate that can display messages that are posted to it e.g. by a Bluetooth PDA. Keywords: Human-Computer Interaction, Ubiquitous Computing, Wireless Communications (Bluetooth)

[Go to top]

Wireless Community Networks
by Saurabh Jain and Dharma P. Agrawal.
In Computer 36(8), 2003, pages 90-92. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Active Traffic Analysis Attacks and Countermeasures (PDF)
by Xinwen Fu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

To explore mission-critical information, an adversary using active traffic analysis attacks injects probing traffic into the victim network and analyzes the status of underlying payload traffic. Active traffic analysis attacks are easy to deploy and hence become a serious threat to mission critical applications. This paper suggests statistical pattern recognition as a fundamental technology to evaluate effectiveness of active traffic analysis attacks and corresponding countermeasures. Our evaluation shows that sample entropy of ping packets' round-trip time is an effective feature statistic to discover the payload traffic rate. We propose simple countermeasures that can significantly reduce the effectiveness of ping-based active traffic analysis attacks. Our experiments validate the effectiveness of this scheme, which can also be used in other scenarios

[Go to top]

An Analysis of GNUnet and the Implications for Anonymous, Censorship-Resistant Networks (PDF)
by Dennis Kügler.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

An Efficient Peer-to-Peer File Sharing Exploiting Hierarchy and Asymmetry (PDF)
by Gisik Kwon and Kyung D. Ryu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many Peer-to-Peer (P2P) file sharing systems have been proposed to take advantage of high scalability and abundant resources at end-user machines. Previous approaches adopted either simple flooding or routing with complex structures, such as Distributed Hash Tables (DHT). However, these approaches did not consider the heterogeneous nature of the machines and the hierarchy of networks on the Internet. This paper presents Peer-to-peer Asymmetric file Sharing System (PASS), a novel approach to P2P file sharing, which accounts for the different capabilities and network locations of the participating machines. Our system selects only a portion of high-capacity machines (supernodes) for routing support, and organizes the network by using location information. We show that our key-coverage based directory replication improves the file search performance to a small constant number of routing hops, regardless of the network size

[Go to top]

k-Anonymous Message Transmission (PDF)
by Luis von Ahn, Andrew Bortz, and Nicholas J. Hopper.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Informally, a communication protocol is sender k-anonymous if it can guarantee that an adversary, trying to determine the sender of a particular message, can only narrow down its search to a set of k suspects. Receiver k-anonymity places a similar guarantee on the receiver: an adversary, at best, can only narrow down the possible receivers to a set of size k. In this paper we introduce the notions of sender and receiver k-anonymity and consider their applications. We show that there exist simple and efficient protocols which are k-anonymous for both the sender and the receiver in a model where a polynomial time adversary can see all traffic in the network and can control up to a constant fraction of the participants. Our protocol is provably secure, practical, and does not require the existence of trusted third parties. This paper also provides a conceptually simple augmentation to Chaum's DC-Nets that adds robustness against adversaries who attempt to disrupt the protocol through perpetual transmission or selective non-participation
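
The dining-cryptographers round that such protocols build on is compact: each pair of participants shares a secret pad, every participant announces the XOR of its pads (the sender additionally XORs in the message), and the XOR of all announcements yields the message while no single announcement reveals its origin. A sketch of one round, ignoring the robustness machinery against disrupters:

```python
import functools
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def dc_net_round(k: int, sender: int, message: bytes) -> bytes:
    """One DC-net round among k participants; participant `sender`
    transmits `message`. Every pad is XORed into exactly two
    announcements, so all pads cancel and only the message survives."""
    length = len(message)
    pad = {(i, j): os.urandom(length)       # pairwise shared secrets
           for i in range(k) for j in range(i + 1, k)}

    def announce(i: int) -> bytes:
        acc = bytes(length)
        for j in range(k):
            if j != i:
                acc = xor(acc, pad[min(i, j), max(i, j)])
        return xor(acc, message) if i == sender else acc

    announcements = [announce(i) for i in range(k)]
    return functools.reduce(xor, announcements)

assert dc_net_round(k=5, sender=2, message=b"hi all") == b"hi all"
```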

[Go to top]

Herbivore: A Scalable and Efficient Protocol for Anonymous Communication (PDF)
by Sharad Goel, Mark Robson, Milo Polte, and Emin Gün Sirer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity is increasingly important for networked applications amidst concerns over censorship and privacy. In this paper, we describe Herbivore, a peer-to-peer, scalable, tamper-resilient communication system that provides provable anonymity and privacy. Building on dining cryptographer networks, Herbivore scales by partitioning the network into anonymizing cliques. Adversaries able to monitor all network traffic cannot deduce the identity of a sender or receiver beyond an anonymizing clique. In addition to strong anonymity, Herbivore simultaneously provides high efficiency and scalability, distinguishing it from other anonymous communication protocols. Performance measurements from a prototype implementation show that the system can achieve high bandwidths and low latencies when deployed over the Internet

[Go to top]

Rateless Codes and Big Downloads (PDF)
by Petar Maymounkov and David Mazières.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

This paper presents a novel algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is simple, but offers several compelling properties. It ensures low hand-shaking overhead between peers that download files (or parts of files) from each other. It is computationally efficient, with cost linear in the amount of data transfered. Most importantly, when nodes leave the network in the middle of uploads, the algorithm minimizes the duplicate information shared by nodes with truncated downloads. Thus, any two peers with partial knowledge of a given file can almost always fully benefit from each other's knowledge. Our algorithm is made possible by the recent introduction of linear-time, rateless erasure codes
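
The rateless codes in question produce each encoded symbol as the XOR of a random subset of source blocks, and decoding peels degree-one symbols until everything resolves; because symbols are generated independently, two peers' symbol sets rarely overlap wastefully. A toy LT-style sketch with a naive degree distribution (real codes use the robust soliton distribution and far better constants):

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_symbol(blocks, rng):
    """One rateless symbol: XOR of a random subset of source blocks."""
    d = rng.choice([1, 2, 2, 3, 3, 4])          # toy degree distribution
    idxs = set(rng.sample(range(len(blocks)), d))
    payload = bytes(len(blocks[0]))
    for i in idxs:
        payload = xor(payload, blocks[i])
    return idxs, payload

def peel_decode(n, symbols):
    """Peeling decoder: take a degree-1 symbol, recover that block, and
    XOR it out of every other symbol covering it; repeat until done."""
    recovered = {}
    work = [(set(s), bytearray(p)) for s, p in symbols]
    progress = True
    while progress and len(recovered) < n:
        progress = False
        for idxs, payload in work:
            if len(idxs) == 1:
                (i,) = idxs
                if i not in recovered:
                    recovered[i] = bytes(payload)
                    progress = True
        for idxs, payload in work:
            for i in list(idxs):
                if len(idxs) > 1 and i in recovered:
                    payload[:] = xor(payload, recovered[i])
                    idxs.discard(i)
    return recovered if len(recovered) == n else None

rng = random.Random(0)
blocks = [bytes([65 + i]) * 4 for i in range(8)]
symbols, out = [], None
while out is None:                # rateless: just keep pulling fresh symbols
    symbols.extend(encode_symbol(blocks, rng) for _ in range(5))
    out = peel_decode(len(blocks), symbols)
print(f"decoded from {len(symbols)} symbols")
assert b"".join(out[i] for i in range(8)) == b"".join(blocks)
```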

[Go to top]

Towards a Common API for Structured Peer-to-Peer Overlays (PDF)
by Frank Dabek, Ben Y. Zhao, Peter Druschel, John Kubiatowicz, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community

[Go to top]

Breaking and Mending Resilient Mix-nets (PDF)
by Lan Nguyen and Rei Safavi-Naini.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we show two attacks against universally resilient mix-nets. The first attack can be used against a number of mix-nets, including Furukawa-Sako01 [6], Millimix [11], Abe98 [1], MiP-1, MiP-2 [2,3] and Neff01 [19]. We give the details of the attack in the case of Furukawa-Sako01 mix-net. The second attack breaks the correctness of Millimix [11]. We show how to counter these attacks, and give efficiency and security analysis for the proposed countermeasures

[Go to top]

The effect of rumor spreading in reputation systems for mobile ad-hoc networks (PDF)
by Sonja Buchegger and Jean-Yves Le Boudec.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Mobile ad-hoc networks rely on the cooperation of nodes for routing and forwarding. For individual nodes there are however several advantages resulting from noncooperation, the most obvious being power saving. Nodes that act selfishly or even maliciously pose a threat to availability in mobile ad-hoc networks. Several approaches have been proposed to detect noncooperative nodes. In this paper, we investigate the effect of using rumors with respect to the detection time of misbehaved nodes as well as the robustness of the reputation system against wrong accusations. We propose a Bayesian approach for reputation representation, updates, and view integration. We also present a mechanism to detect and exclude potential lies. The simulation results indicate that by using this Bayesian approach, the reputation system is robust against slander while still benefitting from the speed-up in detection time provided by the use of rumors

[Go to top]

The evolution of altruistic punishment (PDF)
by Robert Boyd, Herbert Gintis, Samuel Bowles, and Peter J. Richerson.
In Proceedings of the National Academy of Sciences of the USA 100, March 2003, pages 3531-3535. (BibTeX entry) (Download bibtex record)
(direct link)

Both laboratory and field data suggest that people punish noncooperators even in one-shot interactions. Although such altruistic punishment may explain the high levels of cooperation in human societies, it creates an evolutionary puzzle: existing models suggest that altruistic cooperation among nonrelatives is evolutionarily stable only in small groups. Thus, applying such models to the evolution of altruistic punishment leads to the prediction that people will not incur costs to punish others to provide benefits to large groups of nonrelatives. However, here we show that an important asymmetry between altruistic cooperation and altruistic punishment allows altruistic punishment to evolve in populations engaged in one-time, anonymous interactions. This process allows both altruistic punishment and altruistic cooperation to be maintained even when groups are large and other parameter values approximate conditions that characterize cultural evolution in the small-scale societies in which humans lived for most of our prehistory

[Go to top]

Generalising Mixes (PDF)
by Claudia Diaz and Andrei Serjantov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present a generalised framework for expressing batching strategies of a mix. First, we note that existing mixes can be represented as functions from the number of messages in the mix to the fraction of messages to be flushed. We then show how to express existing mixes in the framework, and suggest other mixes which arise out of it. We note that these cannot be expressed as pool mixes. In particular, we introduce the binomial mix: a timed pool mix that tosses coins and uses a probability function that depends on the number of messages inside the mix at the time of flushing. We discuss the properties of this mix
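
The framework can be paraphrased in a few lines of Python: a batching strategy maps the pool size n to a fraction (or per-message probability) of messages flushed. The probability function below is illustrative only, not the paper's:

    import random

    def flush_pool_mix(pool, fraction_fn):
        # Framework view: a mix is a function from the number n of
        # pooled messages to the fraction of them flushed this round.
        n = len(pool)
        k = round(n * fraction_fn(n))
        random.shuffle(pool)
        return [pool.pop() for _ in range(k)]

    def flush_binomial_mix(pool, prob_fn):
        # Binomial mix: each message independently leaves with
        # probability prob_fn(n), so the flush size is binomially
        # distributed instead of deterministic.
        p = prob_fn(len(pool))
        kept, flushed = [], []
        for m in pool:
            (flushed if random.random() < p else kept).append(m)
        pool[:] = kept
        return flushed

    pool = list(range(20))
    out = flush_binomial_mix(pool, lambda n: min(1.0, n / 30))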

[Go to top]

Improving Onion Notation (PDF)
by Richard Clayton.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different notations are used in the literature of MIX networks to describe the nested encrypted structures now widely known as "onions". The shortcomings of these notations are described and a new notation is proposed that, as well as having some advantages from a typographical point of view, is far clearer to read and to reason about. The proposed notation generated a lively debate at the PET2003 workshop, and the various views and alternative proposals are reported upon. The workshop participants did not reach any consensus on improving onion notation, but there is now a heightened awareness of the problems that can arise with existing representations

[Go to top]

Metrics for Traffic Analysis Prevention (PDF)
by Richard E. Newman, Ira S. Moskowitz, Paul Syverson, and Andrei Serjantov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

This paper considers systems for Traffic Analysis Prevention (TAP) in a theoretical model. It considers TAP based on padding and rerouting of messages and describes the effects each has on the difference between the actual and the observed traffic matrix (TM). The paper introduces an entropy-based approach to the amount of uncertainty a global passive adversary has in determining the actual TM, or alternatively, the probability that the actual TM has a property of interest. Unlike previous work, the focus is on determining the overall amount of anonymity a TAP system can provide, or the amount it can provide for a given cost in padding and rerouting, rather than on the amount of protection afforded to particular communications
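
The entropy-based idea can be illustrated with a small Python helper: the adversary's uncertainty over candidates (possible traffic matrices, or senders) in bits, with 2^H read as an effective anonymity-set size. The example distributions are made up:

    import math

    def entropy_bits(probs):
        # Shannon entropy of the adversary's posterior over candidates
        # (possible senders, or possible traffic matrices).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Uniform over 8 candidates: 3 bits, an effective set of 2**3 = 8.
    print(entropy_bits([1 / 8] * 8))                  # 3.0
    # A skewed posterior shrinks the effective anonymity set.
    print(2 ** entropy_bits([0.7, 0.1, 0.1, 0.1]))    # ~ 2.56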

[Go to top]

Mix-networks with Restricted Routes (PDF)
by George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present a mix network topology that is based on sparse expander graphs, with each mix only communicating with a few neighbouring others. We analyse the anonymity such networks provide, and compare it with fully connected mix networks and mix cascades. We prove that such a topology is efficient, since it only requires the route length of messages to be relatively small in comparison with the number of mixes to achieve maximal anonymity. Additionally, mixes can resist intersection attacks while their batch size, which is directly linked to the latency of the network, remains constant. A worked example of a network is also presented to illustrate how these results can be applied to create secure mix networks in practice

[Go to top]

Modelling Unlinkability (PDF)
by Sandra Steinbrecher and Stefan Köpsell.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While several proposals have been made to define and measure anonymity (e.g., with information theory, formal languages and logics), unlinkability has not been modelled generally and formally. In contrast to anonymity, unlinkability is not restricted to persons; in fact, the unlinkability of arbitrary items can be measured. In this paper we try to formalise the notion of unlinkability, give a refinement of anonymity definitions based on this formalisation and show the impact of unlinkability on anonymity. We choose information theory as a method to describe unlinkability because it allows an easy probabilistic description. As an illustration for our formalisation we describe its meaning for communication systems

[Go to top]

Thwarting Web Censorship with Untrusted Messenger Delivery (PDF)
by Nick Feamster, Magdalena Balazinska, Winston Wang, Hari Balakrishnan, and David Karger.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

All existing anti-censorship systems for the Web rely on proxies to grant clients access to censored information. Therefore, they face the proxy discovery problem: how can clients discover the proxies without having the censor discover and block these proxies? To avoid widespread discovery and blocking, proxies must not be widely published and should be discovered in-band. In this paper, we present a proxy discovery mechanism called keyspace hopping that meets this goal. Similar in spirit to frequency hopping in wireless networks, keyspace hopping ensures that each client discovers only a small fraction of the total number of proxies. However, requiring clients to independently discover proxies from a large set makes it practically impossible to verify the trustworthiness of every proxy and creates the possibility of having untrusted proxies. To address this, we propose separating the proxy into two distinct components: the messenger, which the client discovers using keyspace hopping and which simply acts as a gateway to the Internet; and the portal, whose identity is widely published and whose responsibility it is to interpret and serve the client's requests for censored content. We show how this separation, as well as in-band proxy discovery, can be applied to a variety of anti-censorship systems

[Go to top]

Provably Secure Public-Key Encryption for Length-Preserving Chaumian Mixes (PDF)
by Bodo Möller.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mix chains as proposed by Chaum allow sending untraceable electronic e-mail without requiring trust in a single authority: messages are recursively public-key encrypted to multiple intermediates (mixes), each of which forwards the message after removing one layer of encryption. To conceal as much information as possible when using variable (source routed) chains, all messages passed to mixes should be of the same length; thus, message length should not decrease when a mix transforms an input message into the corresponding output message directed at the next mix in the chain. Chaum described an implementation for such length-preserving mixes, but it is not secure against active attacks. We show how to build practical cryptographically secure length-preserving mixes. The conventional definition of security against chosen ciphertext attacks is not applicable to length-preserving mixes; we give an appropriate definition and show that our construction achieves provable security

[Go to top]

On the Anonymity of Timed Pool Mixes (PDF)
by Andrei Serjantov and Richard E. Newman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents a method for calculating the anonymity of a timed pool mix. Thus we are able to compare it to a threshold pool mix, and any future mixes that might be developed. Although we are only able to compute the anonymity of a timed pool mix after some specific number of rounds, this is a practical approximation to the real anonymity

[Go to top]

Defending Anonymous Communication Against Passive Logging Attacks (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study the threat that passive logging attacks pose to anonymous communications. Previous work analyzed these attacks under limiting assumptions. We first describe a possible defense that comes from breaking the assumption of uniformly random path selection. Our analysis shows that the defense improves anonymity in the static model, where nodes stay in the system, but fails in a dynamic model, in which nodes leave and join. Additionally, we use the dynamic model to show that the intersection attack creates a vulnerability in certain peer-to-peer systems for anonymous communications. We present simulation results that show that attack times are significantly lower in practice than the upper bounds given by previous work. To determine whether users' web traffic has the communication patterns required by the attacks, we collected and analyzed the web requests of users. We found that, for our study, frequent and repeated communication to the same web site is common

[Go to top]

The EigenTrust algorithm for reputation management in P2P networks (PDF)
by Sepandar D. Kamvar, Mario T. Schlosser, and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer file-sharing networks are currently receiving much attention as a means of sharing and distributing information. However, as recent experience shows, the anonymous, open nature of these networks offers an almost ideal environment for the spread of self-replicating inauthentic files. We describe an algorithm to decrease the number of downloads of inauthentic files in a peer-to-peer file-sharing network that assigns each peer a unique global trust value, based on the peer's history of uploads. We present a distributed and secure method to compute global trust values, based on power iteration. By having peers use these global trust values to choose the peers from whom they download, the network effectively identifies malicious peers and isolates them from the network. In simulations, this reputation system, called EigenTrust, has been shown to significantly decrease the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to deliberately subvert the system
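
A minimal sketch of the power iteration at EigenTrust's core, on a toy normalized local-trust matrix (the distributed, secure computation and the pre-trusted-peer damping of the actual algorithm are omitted):

    def eigentrust(C, iters=50):
        # C[i][j] is peer i's normalized local trust in peer j (each row
        # sums to 1). Iterating t <- C^T t aggregates opinions
        # transitively and converges to the global trust vector.
        n = len(C)
        t = [1.0 / n] * n
        for _ in range(iters):
            t = [sum(C[i][j] * t[i] for i in range(n)) for j in range(n)]
        return t

    # Toy matrix: peers 1 and 2 mostly trust peer 0.
    C = [[0.2, 0.4, 0.4],
         [0.9, 0.1, 0.0],
         [0.9, 0.0, 0.1]]
    print(eigentrust(C))   # peer 0 ends up with the highest global trust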

[Go to top]

High Availability, Scalable Storage, Dynamic Peer Networks: Pick Two (PDF)
by Charles Blake and Rodrigo Rodrigues.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer storage aims to build large-scale, reliable and available storage from many small-scale unreliable, low-availability distributed hosts. Data redundancy is the key to any data guarantees. However, preserving redundancy in the face of highly dynamic membership is costly. We use a simple resource usage model and measured behavior from the Gnutella file-sharing network to argue that large-scale cooperative storage is limited by likely dynamics and cross-system bandwidth – not by local disk space. We examine some bandwidth optimization strategies like delayed response to failures, admission control, and load-shifting and find that they do not alter the basic problem. We conclude that when redundancy, data scale, and dynamics are all high, the needed cross-system bandwidth is unreasonable

[Go to top]

Probabilistic Treatment of MIXes to Hamper Traffic Analysis (PDF)
by Dakshi Agrawal, Dogan Kesdogan, and Stefan Penz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The goal of anonymity providing techniques is to preserve the privacy of users (who has communicated with whom, for how long, and from which location) by hiding traffic information. This is accomplished by organizing additional traffic to conceal particular communication relationships and by embedding the sender and receiver of a message in their respective anonymity sets. If the number of overall participants is greater than the size of the anonymity set and if the anonymity set changes with time due to unsynchronized participants, then the anonymity technique becomes prone to traffic analysis attacks. In this paper, we are interested in the statistical properties of the disclosure attack, a newly suggested traffic analysis attack on the MIXes. Our goal is to provide analytical estimates of the number of observations required by the disclosure attack and to identify fundamental (but avoidable) 'weak operational modes' of the MIXes, and thus to protect users against traffic analysis by the disclosure attack

[Go to top]

Statistical Disclosure Attacks: Traffic Confirmation in Open Environments (PDF)
by George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

An improvement over the previously known disclosure attack is presented that allows an attacker, using statistical methods, to effectively de-anonymize users of a mix system. Furthermore, the statistical disclosure attack is computationally efficient, and the conditions for it to be possible and accurate are much better understood. The new attack can be generalized easily to a variety of anonymity systems beyond mix networks
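
A simplified sketch of the attack's averaging step, assuming per-round recipient counts and the mix batch size b are observable (the paper's model; the scaling follows from one target message per observed round):

    def estimate_recipients(rounds_with_target, background_rounds, b):
        # Each round is a list of observed per-recipient message counts
        # leaving the mix; b is the batch size, and the target contributes
        # exactly one message to each round in the first set.
        def mean(rows):
            return [sum(col) / len(rows) for col in zip(*rows)]
        with_target = mean(rounds_with_target)
        background = mean(background_rounds)
        # with_target ~ u + (b-1)v and background ~ b*v, where u is the
        # target's recipient distribution and v the background's; solve
        # for u. Peaks in the result are the target's likely contacts.
        return [w - (b - 1) / b * g
                for w, g in zip(with_target, background)]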

[Go to top]

Bootstrapping a Distributed Computational Economy with Peer-to-Peer Bartering (PDF)
by Brent Chun, Yun Fu, and Amin Vahdat.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Deconstructing the Kazaa Network (PDF)
by Nathaniel Leibowitz, Matei Ripeanu, and Adam Wierzbicki.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Internet traffic is experiencing a shift from web traffic to file swapping traffic. Today a significant part of Internet traffic is generated by peer-to-peer applications, mostly by the popular Kazaa application. Yet, to date, few studies analyze Kazaa traffic, thus leaving the bulk of Internet traffic in the dark. We present a large-scale investigation of Kazaa traffic based on logs collected at a large Israeli ISP, which capture roughly a quarter of all traffic between Israel and the US

[Go to top]

An Excess-Based Economic Model for Resource Allocation in Peer-to-Peer Networks (PDF)
by Christian Grothoff.
In Wirtschaftsinformatik 3-2003, June 2003. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes economic aspects of GNUnet, a peer-to-peer framework for anonymous distributed file-sharing. GNUnet is decentralized; all nodes are equal peers. In particular, there are no trusted entities in the network. This paper describes an economic model to perform resource allocation and defend against malicious participants in this context. The approach presented does not use credentials or payments; rather, it is based on trust. The design is much like that of a cooperative game in which peers take the role of players. Nodes must cooperate to achieve individual goals. In such a scenario, it is important to be able to distinguish between nodes exhibiting friendly behavior and those exhibiting malicious behavior. GNUnet aims to provide anonymity for its users. Its design makes it hard to link a transaction to the node where it originated. While anonymity requirements make a global view of the end-points of a transaction infeasible, the local link-to-link messages can be fully authenticated. Our economic model is based entirely on this local view of the network and takes only local decisions

[Go to top]

Incentives build robustness in BitTorrent (PDF)
by Bram Cohen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

The BitTorrent file distribution system uses tit-for-tat as a method of seeking Pareto efficiency. It achieves a higher level of robustness and resource utilization than any currently known cooperative technique. We explain what BitTorrent does, and how economic methods are used to achieve that goal
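
A sketch of the tit-for-tat choking idea in Python (the unchoke count, rate bookkeeping, and optimistic-unchoke schedule here are illustrative, not the client's exact policy):

    import random

    def choke_round(peers, rate, n_unchoked=3):
        # Reciprocate: unchoke the peers that uploaded to us fastest in
        # the last period, rewarding cooperation tit-for-tat.
        unchoked = sorted(peers, key=rate, reverse=True)[:n_unchoked]
        # Optimistic unchoke: try one additional random peer, to discover
        # better partners and to bootstrap newcomers.
        others = [p for p in peers if p not in unchoked]
        if others:
            unchoked.append(random.choice(others))
        return set(unchoked)

    peers = ['a', 'b', 'c', 'd', 'e']
    rates = {'a': 50, 'b': 10, 'c': 40, 'd': 5, 'e': 30}
    print(choke_round(peers, rates.get))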

[Go to top]

Incentives for Cooperation in Peer-to-Peer Networks (PDF)
by Kevin Lai, Michal Feldman, Ion Stoica, and John Chuang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, our contributions are to generalize from the traditional symmetric EPD to the asymmetric transactions of P2P applications, map out the design space of EPD-based incentive techniques, and simulate a subset of these techniques. Our findings are as follows: incentive techniques relying on private history (where entities use only their private histories of other entities' actions) fail as the population size increases

[Go to top]

KARMA: a Secure Economic Framework for P2P Resource Sharing (PDF)
by Vivek Vishnumurthy, Sangeeth Chandrakumar, and Emin Gün Sirer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each

[Go to top]

Practical Anonymity for the Masses with Mix-Networks (PDF)
by Marc Rennhard and Bernhard Plattner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Designing mix-networks for low-latency applications that offer acceptable performance and provide good resistance against attacks without introducing too much overhead is very difficult. Good performance and small overheads are vital to attract users and to be able to support many of them, because with only a few users, there is no anonymity at all. In this paper, we analyze how well different kinds of mix-networks are suited to provide practical anonymity for a very large number of users

[Go to top]

Quantifying Disincentives in Peer-to-Peer Networks (PDF)
by Michal Feldman, Kevin Lai, John Chuang, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we use modeling and simulation to better understand the effects of cooperation on user performance and to quantify the performance-based disincentives in a peer-to-peer file sharing system. This is the first step towards building an incentive system. For the models developed in this paper, we have the following results: although performance improves significantly when cooperation increases from low to moderate levels, the improvement diminishes thereafter. In particular, the mean delay to download a file when 5% of the nodes share files is 8x more than when 40% of the nodes share files, while the mean download delay when 40% of the nodes share is only 1.75x more than when 100% share

[Go to top]

Reputation in P2P Anonymity Systems (PDF)
by Roger Dingledine, Nick Mathewson, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Decentralized anonymity systems tend to be unreliable, because users must choose nodes in the network without knowing the entire state of the network. Reputation systems promise to improve reliability by predicting network state. In this paper we focus on anonymous remailers and anonymous publishing, explain why the systems can benefit from reputation, and describe our experiences designing reputation systems for them while still ensuring anonymity. We find that in each example we first must redesign the underlying anonymity system to support verifiable transactions

[Go to top]

Mixmaster Protocol — Version 2 (PDF)
by Ulf Möller, Lance Cottrell, Peter Palfrader, and Len Sassaman.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

Most e-mail security protocols only protect the message body, leaving useful information such as the identities of the conversing parties, sizes of messages and frequency of message exchange open to adversaries. This document describes Mixmaster (version 2), a mail transfer protocol designed to protect electronic mail against traffic analysis. Mixmaster is based on D. Chaum's mix-net protocol. A mix (remailer) is a service that forwards messages, using public key cryptography to hide the correlation between its inputs and outputs. Sending messages through sequences of remailers achieves anonymity and unobservability of communications against a powerful adversary
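
The layered-encryption structure can be sketched as follows; the keyed XOR stream stands in for Mixmaster's actual RSA-plus-symmetric hybrid encryption and padding, so this is illustrative only. Because XOR is its own inverse, applying toy_encrypt with a hop's key strips that hop's layer:

    import hashlib

    def toy_encrypt(key: bytes, payload: bytes) -> bytes:
        # Length-preserving stand-in cipher: XOR with a keyed SHA-256
        # stream. NOT secure; it only models "one layer per key".
        stream = b''
        counter = 0
        while len(stream) < len(payload):
            stream += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(payload, stream))

    def build_onion(message: bytes, hop_keys: list) -> bytes:
        # Wrap for the last hop first; each mix strips one layer and
        # forwards the remainder to the next hop.
        onion = message
        for key in reversed(hop_keys):
            onion = toy_encrypt(key, onion)
        return onion

    keys = [b'mix-1', b'mix-2', b'mix-3']
    onion = build_onion(b'hello', keys)
    for k in keys:                      # hops peel layers in order
        onion = toy_encrypt(k, onion)
    assert onion == b'hello'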

[Go to top]

Building Low-Diameter P2P Networks (PDF)
by Gopal Pandurangan, Prabhakar Raghavan, and Eli Upfal.
In IEEE Journal on Selected Areas in Communications 21, August 2003, pages 995-1002. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Scheme to build dynamic, distributed P2P networks of constant degree and logarithmic diameter

[Go to top]

The impact of DHT routing geometry on resilience and proximity (PDF)
by Krishna Phani Gummadi, Ramakrishna Gummadi, Steven D. Gribble, Sylvia Paul Ratnasamy, S Shenker, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance

[Go to top]

On selfish routing in internet-like environments (PDF)
by Lili Qiu, Yang Richard Yang, Yin Zhang, and S Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to either choose routes themselves (e.g., source routing) or use overlay routing networks (e.g., Detour or RON). Such approaches result in selfish routing, because routing decisions are no longer based on system-wide criteria but are instead designed to optimize host-based or overlay-based metrics. A series of theoretical results showing that selfish routing can result in suboptimal system behavior have cast doubts on this approach. In this paper, we use a game-theoretic approach to investigate the performance of selfish routing in Internet-like environments. We focus on intra-domain network environments and use realistic topologies and traffic demands in our simulations. We show that in contrast to theoretical worst cases, selfish routing achieves close to optimal average latency in such environments. However, such performance benefit comes at the expense of significantly increased congestion on certain links. Moreover, the adaptive nature of selfish overlays can significantly reduce the effectiveness of traffic engineering by making network traffic less predictable

[Go to top]

A game theoretic framework for incentives in P2P systems (PDF)
by Chiranjeeb Buragohain, Divyakant Agrawal, and Subhash Suri.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer (P2P) networks are self-organizing, distributed systems, with no centralized authority or infrastructure. Because of the voluntary participation, the availability of resources in a P2P system can be highly variable and unpredictable. We use ideas from game theory to study the interaction of strategic and rational peers, and propose a differential service-based incentive scheme to improve the system's performance

[Go to top]

Identity Crisis: Anonymity vs. Reputation in P2P Systems (PDF)
by Sergio Marti and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

The effectiveness of reputation systems for peer-to-peer resource-sharing networks is largely dependent on the reliability of the identities used by peers in the network. Much debate has centered around how closely one's pseudo-identity in the network should be tied to one's real-world identity, and how that identity is protected from malicious spoofing. In this paper we investigate the cost in efficiency of two solutions to the identity problem for peer-to-peer reputation systems. Our results show that, using some simple mechanisms, reputation systems can provide a factor of 4 to 20 improvement in performance over no reputation system, depending on the identity model used

[Go to top]

On the Practical Use of LDPC Erasure Codes for Distributed Storage Applications (PDF)
by James S. Plank and Michael G. Thomason.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As peer-to-peer and widely distributed storage systems proliferate, the need to perform efficient erasure coding, instead of replication, is crucial to performance and efficiency. Low-Density Parity-Check (LDPC) codes have arisen as alternatives to standard erasure codes, such as Reed-Solomon codes, trading off vastly improved decoding performance for inefficiencies in the amount of data that must be acquired to perform decoding. The scores of papers written on LDPC codes typically analyze their collective and asymptotic behavior. Unfortunately, their practical application requires the generation and analysis of individual codes for finite systems. This paper attempts to illuminate the practical considerations of LDPC codes for peer-to-peer and distributed storage systems. The three main types of LDPC codes are detailed, and a huge variety of codes are generated, then analyzed using simulation. This analysis focuses on the performance of individual codes for finite systems, and addresses several important heretofore unanswered questions about employing LDPC codes in real-world systems

[Go to top]

Scalable Application-level Anycast for Highly Dynamic Groups (PDF)
by Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, and Antony Rowstron.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We present an application-level implementation of anycast for highly dynamic groups. The implementation can handle group sizes varying from one to the whole Internet, and membership maintenance is efficient enough to allow members to join for the purpose of receiving a single message. Key to this efficiency is the use of a proximity-aware peer-to-peer overlay network for decentralized, lightweight group maintenance; nodes join the overlay once and can join and leave many groups many times to amortize the cost of maintaining the overlay. An anycast implementation with these properties provides a key building block for distributed applications. In particular, it enables management and location of dynamic resources in large scale peer-to-peer systems. We present several resource management applications that are enabled by our implementation

[Go to top]

Using Caching for Browsing Anonymity (PDF)
by Anna Shubina and Sean Smith.
In ACM SIGEcom Exchanges 4(2), September 2003, pages 11-20. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Privacy-providing tools, including tools that provide anonymity, are gaining popularity in the modern world. Among the goals of their users is avoiding tracking and profiling. While some businesses are unhappy with the growth of privacy-enhancing technologies, others can use lack of information about their users to avoid unnecessary liability and even possible harassment by parties with contrary business interests, and to gain a competitive market edge. Currently, users interested in anonymous browsing have the choice only between single-hop proxies and the few more complex systems that are available. These still leave the user vulnerable to long-term intersection attacks. In this paper, we propose a caching proxy system for allowing users to retrieve data from the World-Wide Web in a way that would provide recipient unobservability by a third party and sender unobservability by the recipient, and would thus dispense with intersection attacks, and we report on the prototype we built using Google

[Go to top]

Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh (PDF)
by Dejan Kostić, Adolfo Rodriguez, Jeannie Albrecht, and Amin Vahdat.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel. Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment, revealing up to a factor of two bandwidth improvement under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent delivers a high rate of application data to each child. In Bullet, however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate

[Go to top]

Heartbeat Traffic to Counter (n-1) Attacks (PDF)
by George Danezis and Len Sassaman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A dummy traffic strategy is described that can be implemented by mix nodes in an anonymous communication network to detect and counter active (n–1) attacks and their variants. Heartbeat messages are sent anonymously from the mix node back to itself in order to establish its state of connectivity with the rest of the network. In case the mix is under attack, the flow of heartbeat messages is interrupted and the mix takes measures to preserve the quality of the anonymity it provides by introducing decoy messages
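
A sketch of the mix-side logic, with hypothetical message fields and timing constants (the paper specifies neither):

    import time

    class HeartbeatMonitor:
        # Sketch: the mix periodically injects an anonymous message
        # addressed, through the network, back to itself. Field names
        # and constants here are hypothetical.

        def __init__(self, timeout_s=300):
            self.timeout_s = timeout_s
            self.last_heartbeat = time.monotonic()

        def on_incoming(self, msg: dict) -> None:
            if msg.get('is_our_heartbeat'):
                self.last_heartbeat = time.monotonic()

        def suspect_n_minus_1_attack(self) -> bool:
            # An adversary that isolates the mix in order to flood it
            # with its own traffic also cuts off the returning
            # heartbeats, so prolonged silence triggers the
            # decoy-message defence.
            return time.monotonic() - self.last_heartbeat > self.timeout_s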

[Go to top]

New Covert Channels in HTTP: Adding Unwitting Web Browsers to Anonymity Sets (PDF)
by Matthias Bauer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents new methods enabling anonymous communication on the Internet. We describe a new protocol that allows us to create an anonymous overlay network by exploiting the web browsing activities of regular users. We show that the overlay network provides an anonymity set greater than the set of senders and receivers in a realistic threat model. In particular, the protocol provides unobservability in our threat model

[Go to top]

Passive Attack Analysis for Connection-Based Anonymity Systems (PDF)
by Andrei Serjantov and Peter Sewell.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we consider low latency connection-based anonymity systems which can be used for applications like web browsing or SSH. Although several such systems have been designed and built, their anonymity has so far not been adequately evaluated. We analyse the anonymity of connection-based systems against passive adversaries. We give a precise description of two attacks, evaluate their effectiveness, and calculate the amount of traffic necessary to provide a minimum degree of protection against them

[Go to top]

PPay: micropayments for peer-to-peer systems (PDF)
by Beverly Yang and Hector Garcia-Molina.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Puzzles in P2P Systems (PDF)
by Andrei Serjantov and Stephen Lewis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper we consider using client puzzles to provide incentives for users in a peer-to-peer system to behave in a uniform way. The techniques developed can be used to encourage users of a system to share content (combating the free riding problem) or perform 'community' tasks

[Go to top]

Rapid Mixing and Security of Chaum's Visual Electronic Voting (PDF)
by Marcin Gomulkiewicz, Marek Klonowski, and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Recently, David Chaum proposed an electronic voting scheme that combines visual cryptography and digital processing. It was designed to meet not only mathematical security standards, but also to be accepted by voters that do not trust electronic devices. In this scheme mix-servers are used to guarantee anonymity of the votes in the counting process. The mix-servers are operated by different parties, so evidence of their correct operation is necessary. For this purpose the protocol uses randomized partial checking of Jakobsson et al., where some randomly selected connections between the (encoded) inputs and outputs of a mix-server are revealed. This leaks some information about the ballots, even if intuitively this information cannot be used for any efficient attack. We provide a rigorous stochastic analysis of how much information is revealed by randomized partial checking in Chaum's protocol. We estimate how many mix-servers are necessary for a fair security level. Namely, we consider the probability distribution of the permutations linking the encoded votes with the decoded votes, given the information revealed by randomized partial checking. We show that the variation distance between this distribution and the uniform distribution is already of order 1/n for a constant number of mix-servers (n is the number of voters). This means that a constant number of trustees in Chaum's protocol is enough to obtain provable security. The analysis also shows that certain details of Chaum's protocol can be simplified without lowering the security level

[Go to top]

Receiver Anonymity via Incomparable Public Keys (PDF)
by Brent Waters, Edward W. Felten, and Amit Sahai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a new method for protecting the anonymity of message receivers in an untrusted network. Surprisingly, existing methods fail to provide the required level of anonymity for receivers (although those methods do protect sender anonymity). Our method relies on the use of multicast, along with a novel cryptographic primitive that we call an Incomparable Public Key cryptosystem, which allows a receiver to efficiently create many anonymous "identities" for itself without divulging that these separate "identities" actually refer to the same receiver, and without increasing the receiver's workload as the number of identities increases. We describe the details of our method, along with a prototype implementation

[Go to top]

Reusable Anonymous Return Channels (PDF)
by Philippe Golle and Markus Jakobsson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mix networks are used to deliver messages anonymously to recipients, but do not straightforwardly allow the recipient of an anonymous message to reply to its sender. Yet the ability to reply one or more times, and to further reply to replies, is essential to a complete anonymous conversation. We propose a protocol that allows a sender of anonymous messages to establish a reusable anonymous return channel. This channel enables any recipient of one of these anonymous messages to send back one or more anonymous replies. Recipients who reply to different messages cannot test whether two return channels are the same, and therefore cannot learn whether they are replying to the same person. Yet the fact that multiple recipients may send multiple replies through the same return channel helps defend against the counting attacks that defeated earlier proposals for return channels. In these attacks, an adversary traces the origin of a message by sending a specific number of replies and observing who collects the same number of messages. Our scheme resists these attacks because the replies sent by an attacker are mixed with other replies submitted by other recipients through the same return channel. Moreover, our protocol straightforwardly allows for replies to replies, etc. Our protocol is based upon a re-encryption mix network, and requires four times the amount of computation and communication of a basic mixnet

[Go to top]

Samsara: Honor Among Thieves in Peer-to-Peer Storage (PDF)
by Landon P. Cox and Brian D. Noble.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems. Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return: a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure

[Go to top]

SplitStream: high-bandwidth multicast in cooperative environments (PDF)
by Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, Animesh Nandi, Antony Rowstron, and Atul Singh.
In SIGOPS'03 Operating Systems Review 37, October 2003, pages 298-313. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In tree-based multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are highly-available, dedicated infrastructure routers but it poses a problem for application-level multicast in peer-to-peer systems. SplitStream addresses this problem by striping the content across a forest of interior-node-disjoint multicast trees that distributes the forwarding load among all participating peers. For example, it is possible to construct efficient SplitStream forests in which each peer contributes only as much forwarding bandwidth as it receives. Furthermore, with appropriate content encodings, SplitStream is highly robust to failures because a node failure causes the loss of a single stripe on average. We present the design and implementation of SplitStream and show experimental results obtained on an Internet testbed and via large-scale network simulation. The results show that SplitStream distributes the forwarding load among all peers and can accommodate peers with different bandwidth capacities while imposing low overhead for forest construction and maintenance

[Go to top]

Lightweight probabilistic broadcast (PDF)
by Patrick Eugster, Rachid Guerraoui, Sidath B. Handurukande, Petr Kouznetsov, and Anne-Marie Kermarrec.
In ACM Trans. Comput. Syst 21, November 2003, pages 341-374. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Resilient Peer-to-Peer Streaming (PDF)
by Venkata N. Padmanabhan, Helen J. Wang, and Philip A. Chou.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of distributing "live" streaming media content to a potentially large and highly dynamic population of hosts. Peer-to-peer content distribution is attractive in this setting because the bandwidth available to serve content scales with demand. A key challenge, however, is making content distribution robust to peer transience. Our approach to providing robustness is to introduce redundancy, both in network paths and in data. We use multiple, diverse distribution trees to provide redundancy in network paths and multiple description coding (MDC) to provide redundancy in data. We present a simple tree management algorithm that provides the necessary path diversity and describe an adaptation framework for MDC based on scalable receiver feedback. We evaluate these using MDC applied to real video data coupled with real usage traces from a major news site that experienced a large flash crowd for live streaming content. Our results show very significant benefits in using multiple distribution trees and MDC, with a 22 dB improvement in PSNR in some cases

[Go to top]

2004

ABS: The Apportioned Backup System (PDF)
by Joe Cooley, Chris Taylor, and Alen Peacock.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many personal computers are operated with no backup strategy for protecting data in the event of loss or failure. At the same time, PCs are likely to contain spare disk space and unused networking resources. We present the Apportioned Backup System (ABS), which provides a reliable collaborative backup resource by leveraging these independent, distributed resources. With ABS, procuring and maintaining specialized backup hardware is unnecessary. ABS makes efficient use of network and storage resources through use of coding techniques, convergent encryption and storage, and efficient versioning and verification processes. The system also painlessly accommodates dynamic expansion of system compute, storage, and network resources, and is tolerant of catastrophic node failures
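
The convergent-encryption step ABS uses for cross-user deduplication can be sketched like this (repeating-key XOR stands in for a real deterministic symmetric cipher):

    import hashlib

    def xor_cipher(key: bytes, data: bytes) -> bytes:
        # Toy deterministic cipher (repeating-key XOR); a real system
        # would use a proper symmetric cipher in a deterministic mode.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def prepare_block(plaintext: bytes):
        # Convergent encryption: keying by the content's own hash means
        # identical blocks from different users produce identical
        # ciphertext, so the network stores one copy, yet peers who
        # never held the plaintext cannot derive the key.
        key = hashlib.sha256(plaintext).digest()
        ciphertext = xor_cipher(key, plaintext)
        block_id = hashlib.sha256(ciphertext).hexdigest()
        return block_id, ciphertext, key   # the key is kept privately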

[Go to top]

Anonymity and Information Hiding in Multiagent Systems (PDF)
by Joseph Y. Halpern and Kevin R. O'Neill.
In Journal of Computer Security 13, 2004, pages 483-514. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We provide a framework for reasoning about information-hiding requirements in multiagent systems and for reasoning about anonymity in particular. Our framework employs the modal logic of knowledge within the context of the runs and systems framework, much in the spirit of our earlier work on secrecy [13]. We give several definitions of anonymity with respect to agents, actions and observers in multiagent systems, and we relate our definitions of anonymity to other definitions of information hiding, such as secrecy. We also give probabilistic definitions of anonymity that are able to quantify an observer's uncertainty about the state of the system. Finally, we relate our definitions of anonymity to other formalizations of anonymity and information hiding, including definitions of anonymity in the process algebra CSP and definitions of information hiding using function views

[Go to top]

AP3: Cooperative, decentralized anonymous communication (PDF)
by Alan Mislove, Gaurav Oberoi, Ansley Post, Charles Reis, Peter Druschel, and Dan S Wallach.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes a cooperative overlay network that provides anonymous communication services for participating users. The Anonymizing Peer-to-Peer Proxy (AP3) system provides clients with three primitives: (i) anonymous message delivery, (ii) anonymous channels, and (iii) secure pseudonyms. AP3 is designed to be lightweight, low-cost and provides "probable innocence" anonymity to participating users, even under a large-scale coordinated attack by a limited fraction of malicious overlay nodes. Additionally, we use AP3's primitives to build novel anonymous group communication facilities (multicast and anycast), which shield the identity of both publishers and subscribers
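
AP3's "probable innocence" comes from Crowds-style probabilistic relaying, sketched here (peer selection over the overlay and the exact forwarding probability are abstracted away):

    import random

    def pick_relay_path(peers, p_forward=0.75):
        # Coin flipping at each hop: with probability p_forward the
        # message goes to another randomly chosen peer, otherwise it is
        # sent on to its destination. Observing a neighbour forward a
        # message therefore never implicates it as the originator
        # beyond "probable innocence".
        path = [random.choice(peers)]
        while random.random() < p_forward:
            path.append(random.choice(peers))
        return path   # the last relay contacts the real destination

    print(pick_relay_path(list(range(100))))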

[Go to top]

Apres – a system for anonymous presence (PDF)
by Ben Laurie.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link)

If Alice wants to know when Bob is online, and they don't want anyone else to know their interest in each other, what do they do? Once they know they are both online, they would like to be able to exchange messages, send files, make phone calls to each other, and so forth, all without anyone except them knowing they are doing this. Apres is a system that attempts to make this possible

[Go to top]

Attack Resistant Trust Metrics (PDF)
by Raph Levien.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This dissertation characterizes the space of trust metrics, under both the scalar assumption where each assertion is evaluated independently, and the group assumption where a group of assertions are evaluated in tandem. We present a quantitative framework for evaluating the attack resistance of trust metrics, and give examples of trust metrics that are within a small factor of optimum compared to theoretical upper bounds. We discuss experiences with a real-world deployment of a group trust metric, the Advogato website. Finally, we explore possible applications of attack resistant trust metrics, including using them to build a distributed name server, verifying metadata in peer-to-peer networks such as music sharing systems, and a proposal for highly spam resistant e-mail delivery

[Go to top]

Basic Concepts and Taxonomy of Dependable and Secure Computing (PDF)
by Algirdas Avizienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr.
In IEEE Trans. Dependable Secur. Comput 1(1), 2004, pages 11-33. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper gives the main definitions relating to dependability, a generic concept including as special cases such attributes as reliability, availability, safety, integrity, maintainability, etc. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures

[Go to top]

Bootstrapping Locality-Aware P2P Networks (PDF)
by Curt Cramer, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Bootstrapping is a vital core functionality required by every peer-to-peer (P2P) overlay network. Nodes intending to participate in such an overlay network initially have to find at least one node that is already part of this network. While structured P2P networks (e.g. distributed hash tables, DHTs) define rules about how to proceed after this point, unstructured P2P networks continue using bootstrapping techniques until they are sufficiently connected. In this paper, we compare solutions applicable to the bootstrapping problem. Measurements of an existing system, the Gnutella web caches, highlight the inefficiency of this particular approach. Improved bootstrapping mechanisms could also incorporate locality-awareness into the process. We propose an advanced mechanism by which the overlay topology is–to some extent–matched with the underlying topology. Thereby, the performance of the overall system can be vastly improved

[Go to top]

Data durability in peer to peer storage systems (PDF)
by Gil Utard and Antoine Vernois.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present a quantitative study of data survival in peer-to-peer storage systems. We first recall two main redundancy mechanisms: replication and erasure codes, which are used by most peer-to-peer storage systems like OceanStore, PAST or CFS to guarantee data durability. Second, we characterize peer-to-peer systems according to a volatility factor (a peer is free to leave the system at any time) and to an availability factor (a peer is not permanently connected to the system). Third, we model the behavior of a system as a Markov chain and analyse the average lifetime of data (MTTF) according to the volatility and availability factors. We also present the cost of the repair process based on these redundancy schemes to recover failed peers. The conclusion of this study is that when there is no high availability of peers, a simple replication scheme may be more efficient than sophisticated erasure codes
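
A back-of-the-envelope form of the availability analysis behind such studies: if each peer is online independently with probability p and an object is stored as n fragments, any m of which suffice to reconstruct it (replication being the special case m = 1), then

    \[
      P_{\mathrm{avail}} \;=\; \sum_{i=m}^{n} \binom{n}{i}\, p^{i} (1-p)^{\,n-i}
    \]

The paper's Markov-chain analysis extends this static picture with peer volatility and repair dynamics to obtain the MTTF.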

[Go to top]

Data Indexing in Peer-to-Peer DHT Networks
by L Garcés-Erice, P. A. Felber, E W Biersack, G. Urvoy-Keller, and K. W. Ross.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Defending against eclipse attacks on overlay networks (PDF)
by Atul Singh, Miguel Castro, Peter Druschel, and Antony Rowstron.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Overlay networks are widely used to deploy functionality at edge nodes without changing network routers. Each node in an overlay network maintains pointers to a set of neighbor nodes. These pointers are used both to maintain the overlay and to implement application functionality, for example, to locate content stored by overlay nodes. If an attacker controls a large fraction of the neighbors of correct nodes, it can "eclipse" correct nodes and prevent correct overlay operation. This Eclipse attack is more general than the Sybil attack. Attackers can use a Sybil attack to launch an Eclipse attack by inventing a large number of seemingly distinct overlay nodes. However, defenses against Sybil attacks do not prevent Eclipse attacks because attackers may manipulate the overlay maintenance algorithm to mount an Eclipse attack. This paper discusses the impact of the Eclipse attack on several types of overlay and it proposes a novel defense that prevents the attack by bounding the degree of overlay nodes. Our defense can be applied to any overlay and it enables secure implementations of overlay optimizations that choose neighbors according to metrics like proximity. We present preliminary results that demonstrate the importance of defending against the Eclipse attack and show that our defense is effective

[Go to top]

Demand-Driven Clustering in MANETs (PDF)
by Curt Cramer, Oliver Stanze, Kilian Weniger, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Many clustering protocols for mobile ad hoc networks (MANETs) have been proposed in the literature. With only one exception so far [1], all these protocols are proactive, thus wasting bandwidth when their function is not currently needed. To reduce the signalling traffic load, reactive clustering may be employed. We have developed a clustering protocol named On-Demand Group Mobility-Based Clustering (ODGMBC) which is reactive. Its goal is to build clusters as a basis for address autoconfiguration and hierarchical routing. The design process especially addresses the notion of group mobility in a MANET. As a result, ODGMBC maps varying physical node groups onto logical clusters. In this paper, ODGMBC is described. It was implemented for the ad hoc network simulator GloMoSim [2] and evaluated using several performance indicators. Simulation results are promising and show that ODGMBC leads to stable clusters. This stability is advantageous for autoconfiguration and routing mechanisms to be employed in conjunction with the clustering algorithm

[Go to top]

Design of a Secure Distributed Service Directory for Wireless Sensornetworks (PDF)
by Hans-Joachim Hof, Erik-Oliver Blass, Thomas Fuhrmann, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Sensor networks consist of a potentially huge number of very small and resource limited self-organizing devices. This paper presents the design of a general distributed service directory architecture for sensor networks which especially focuses on the security issues in sensor networks. It ensures secure construction and maintenance of the underlying storage structure, a Content Addressable Network. It also considers integrity of the distributed service directory and secures communication between service provider and inquirer using self-certifying path names. Key area of application of this architecture are gradually extendable sensor networks where sensors and actuators jointly perform various user defined tasks, e.g., in the field of an office environment

[Go to top]

Digital Fountains: A Survey and Look Forward (PDF)
by TODO.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We survey constructions and applications of digital fountains, an abstraction of erasure coding for network communication. Digital fountains effectively change the standard paradigm where a user receives an ordered stream of packets to one where a user must simply receive enough packets in order to obtain the desired data. Obviating the need for ordered data simplifies data delivery, especially when the data is large or is to be distributed to a large number of users. We also examine barriers to the adoption of digital fountains and discuss whether they can be overcome
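
As a toy illustration of the fountain abstraction, the sketch below XORs random subsets of source blocks into packets; a receiver that collects enough packets can solve for the blocks (by peeling or Gaussian elimination, not shown). Real digital fountains such as LT codes draw the subset size from a carefully designed degree distribution rather than the uniform choice assumed here:

    import os, random

    # Toy XOR fountain encoder (illustration only, not a real LT code).
    def encode_packet(blocks, seed):
        rng = random.Random(seed)
        chosen = rng.sample(range(len(blocks)), rng.randint(1, len(blocks)))
        out = bytearray(len(blocks[0]))
        for i in chosen:
            for j, byte in enumerate(blocks[i]):
                out[j] ^= byte
        return seed, bytes(out)   # the seed lets a receiver recompute `chosen`

    data = [os.urandom(4) for _ in range(8)]            # 8 source blocks
    packets = [encode_packet(data, s) for s in range(12)]
    # A receiver that collects slightly more than 8 linearly independent
    # packets can recover all blocks, regardless of which packets arrived.
    print(len(packets), "packets generated")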

[Go to top]

Distributed Job Scheduling in a Peer-to-Peer Video Recording System (PDF)
by Curt Cramer, Kendy Kutzner, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Since the advent of Gnutella, Peer-to-Peer (P2P) protocols have matured towards a fundamental design element for large-scale, self-organising distributed systems. Many research efforts have been invested to improve various aspects of P2P systems, like their performance, scalability, and so on. However, little experience has been gathered from the actual deployment of such P2P systems apart from the typical file sharing applications. To bridge this gap and to gain more experience in making the transition from theory to practice, we started building advanced P2P applications whose explicit goal is to be deployed in the wild. In this paper, we describe a fully decentralised P2P video recording system. Every node in the system is a networked computer (desktop PC or set-top box) capable of receiving and recording DVB-S, i.e. digital satellite TV. Like a normal video recorder, users can program their machines to record certain programmes. With our system, they will be able to schedule multiple recordings in parallel. It is the task of the system to assign the recordings to different machines in the network. Moreover, users can record broadcasts in the past, i.e. the system serves as a short-term archival storage

[Go to top]

The Economics of Censorship Resistance (PDF)
by George Danezis and Ross Anderson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose the first economic model of censorship resistance. Early peer-to-peer systems, such as the Eternity Service, sought to achieve censorship resistance by distributing content randomly over the whole Internet. An alternative approach is to encourage nodes to serve resources they are interested in. Both architectures have been implemented but so far there has been no quantitative analysis of the protection they provide. We develop a model inspired by economics and conflict theory to analyse these systems. Under our assumptions, resource distribution according to nodes' individual preferences provides better stability and resistance to censorship. Our results may have wider application too

[Go to top]

Efficient Private Matching and Set Intersection (PDF)
by Michael J. Freedman, Kobbi Nissim, and Benny Pinkas.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of computing the intersection of private datasets of two parties, where the datasets contain lists of elements taken from a large domain. This problem has many applications for online collaboration. We present protocols, based on the use of homomorphic encryption and balanced hashing, for both semi-honest and malicious environments. For lists of length k, we obtain O(k) communication overhead and O(k ln ln k) computation. The protocol for the semi-honest environment is secure in the standard model, while the protocol for the malicious environment is secure in the random oracle model. We also consider the problem of approximating the size of the intersection, show a linear lower-bound for the communication overhead of solving this problem, and provide a suitable secure protocol. Lastly, we investigate other variants of the matching problem, including extending the protocol to the multi-party setting as well as considering the problem of approximate matching
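
The following Python sketch illustrates the shape of the semi-honest protocol using a miniature Paillier cryptosystem (toy, insecure parameters; no balanced hashing; an illustration under these assumptions, not the paper's implementation). The client encrypts the coefficients of a polynomial whose roots are its set; the server homomorphically forms Enc(r*P(y) + y) for each of its elements y, so intersection elements decrypt to themselves and everything else decrypts to noise:

    import random
    from math import gcd

    P, Q = 1000003, 1000033                  # toy primes (far too small)
    N, N2 = P * Q, (P * Q) ** 2
    LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)
    MU = pow(LAM, -1, N)                     # valid since we fix g = N + 1
    rng = random.Random(7)

    def enc(m):
        r = rng.randrange(1, N)
        return pow(N + 1, m % N, N2) * pow(r, N, N2) % N2

    def dec(c):
        return (pow(c, LAM, N2) - 1) // N * MU % N

    def poly_from_roots(roots):
        # Coefficients a_0..a_d of P(y) = prod (y - root), reduced mod N.
        coeffs = [1]
        for root in roots:
            new = [0] * (len(coeffs) + 1)
            for i, a in enumerate(coeffs):
                new[i] = (new[i] - root * a) % N
                new[i + 1] = (new[i + 1] + a) % N
            coeffs = new
        return coeffs

    # Client: encrypt the coefficients of the polynomial with its set as roots.
    client_set = {3, 17, 42}
    enc_coeffs = [enc(a) for a in poly_from_roots(sorted(client_set))]

    # Server: homomorphically evaluate Enc(r * P(y) + y) for each element y.
    def server_response(y):
        acc = 1
        for i, c in enumerate(enc_coeffs):
            acc = acc * pow(c, y ** i, N2) % N2   # Enc(P(y))
        r = rng.randrange(1, N)
        return pow(acc, r, N2) * enc(y) % N2      # Enc(r * P(y) + y)

    responses = [server_response(y) for y in {5, 17, 99}]

    # Client: decrypt; roots of P decrypt to themselves, the rest to noise.
    print(sorted(d for d in map(dec, responses) if d in client_set))  # [17]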

[Go to top]

Efficient Resource Discovery in Wireless AdHoc Networks: Contacts Do Help (PDF)
by Ahmed Helmy.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The resource discovery problem poses new challenges in infrastructure-less wireless networks. Due to the highly dynamic nature of these networks and their bandwidth and energy constraints, there is a pressing need for energy-aware, communication-efficient resource discovery protocols. This chapter provides an overview of several approaches to resource discovery, discussing their suitability for classes of wireless networks. The approaches discussed in this chapter include flooding-based approaches, hierarchical cluster-based and dominating set schemes, and hybrid loose hierarchy architectures. Furthermore, the chapter provides a detailed case study on the design, evaluation and analysis of an energy-efficient resource discovery protocol based on hybrid loose hierarchy and utilizing the concept of 'contacts'

[Go to top]

Eluding carnivores: file sharing with strong anonymity (PDF)
by Emin Gün Sirer, Sharad Goel, Mark Robson, and Dogan Engin.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Energy-aware demand paging on NAND flash-based embedded storages (PDF)
by Chanik Park, Jeong-Uk Kang, Seon-Yeong Park, and Jin-Soo Kim.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The ever-increasing requirement for high-performance and huge-capacity memories of emerging embedded applications has led to the widespread adoption of SDRAM and NAND flash memory as main and secondary memories, respectively. In particular, the use of energy consuming memory, SDRAM, has become burdensome in battery-powered embedded systems. Intuitively, though demand paging can be used to mitigate the increasing requirement of main memory size, its applicability should be deliberately elaborated since NAND flash memory has asymmetric operation characteristics in terms of performance and energy consumption. In this paper, we present an energy-aware demand paging technique to lower the energy consumption of embedded systems considering the characteristics of interactive embedded applications with large memory footprints. We also propose a flash memory-aware page replacement policy that can reduce the number of write and erase operations in NAND flash memory. With real-life workloads, we show the system-wide Energy-Delay product can be reduced by 15-30% compared to the traditional shadowing architecture

[Go to top]

Energy-efficiency and storage flexibility in the blue file system (PDF)
by Edmund B. Nightingale and Jason Flinn.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental vision driving pervasive computing research is access to personal and shared data anywhere at anytime. In many ways, this vision is close to being realized. Wireless networks such as 802.11 offer connectivity to small, mobile devices. Portable storage, such as mobile disks and USB keychains, let users carry several gigabytes of data in their pockets. Yet, at least three substantial barriers to pervasive data access remain. First, power-hungry network and storage devices tax the limited battery capacity of mobile computers. Second, the danger of viewing stale data or making inconsistent updates grows as objects are replicated across more computers and portable storage devices. Third, mobile data access performance can suffer due to variable storage access times caused by dynamic power management, mobility, and use of heterogeneous storage devices. To overcome these barriers, we have built a new distributed file system called BlueFS. Compared to the Coda file system, BlueFS reduces file system energy usage by up to 55% and provides up to 3 times faster access to data replicated on portable storage

[Go to top]

Erasure Code Replication Revisited (PDF)
by W. K. Lin, Dah Ming Chiu, and Y. B. Lee.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Erasure coding is a technique for achieving high availability and reliability in storage and communication systems. In this paper, we revisit the analysis of erasure code replication and point out some situations when whole-file replication is preferred. The switchover point (from preferring whole-file replication to erasure code replication) is studied, and characterized using asymptotic analysis. We also discuss the additional considerations in building erasure code replication systems
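
The core comparison can be reproduced in a few lines. Assuming independent node availability p and equal storage overhead, whole-file replication with n copies is available with probability 1-(1-p)^n, while an (n, k) erasure code is available whenever at least k fragments are up; the sketch below (toy parameters of my choosing) shows erasure coding winning at high p and losing at low p, which is the switchover the paper characterizes:

    from math import comb

    # Availability under independent node failures (probability p of being up).
    def whole_file(p, n):                 # n full replicas
        return 1 - (1 - p) ** n

    def erasure(p, n, k):                 # (n, k) code: any k of n fragments
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Equal storage overhead (factor 4): 4 replicas vs. a (32, 8) code.
    for p in (0.2, 0.5, 0.9):
        print(f"p={p}: replicas {whole_file(p, 4):.4f}  erasure {erasure(p, 32, 8):.4f}")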

[Go to top]

Evaluation of Efficient Archival Storage Techniques (PDF)
by Lawrence L. You and Christos Karamanolis.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The ever-increasing volume of archival data that need to be retained for long periods of time has motivated the design of low-cost, high-efficiency storage systems. Inter-file compression has been proposed as a technique to improve storage efficiency by exploiting the high degree of similarity among archival data. We evaluate the two main inter-file compression techniques, data chunking and delta encoding, and compare them with traditional intra-file compression. We report on experimental results from a range of representative archival data sets

[Go to top]

A formalization of anonymity and onion routing (PDF)
by Sjouke Mauw, Jan Verschuren, and Erik P. de Vink.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The use of formal methods to verify security protocols with respect to secrecy and authentication has become standard practice. In contrast, the formalization of other security goals, such as privacy, has received less attention. Due to the increasing importance of privacy in the current society, formal methods will also become indispensable in this area. Therefore, we propose a formal definition of the notion of anonymity in presence of an observing intruder. We validate this definition by analyzing a well-known anonymity preserving protocol, viz. onion routing

[Go to top]

Group Spreading: A Protocol for Provably Secure Distributed Name Service (PDF)
by Baruch Awerbuch and Christian Scheideler.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Information Hiding, Anonymity and Privacy: A Modular Approach (PDF)
by Dominic Hughes and Vitaly Shmatikov.
In Journal of Computer Security 12(1), 2004, pages 3-36. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a new specification framework for information hiding properties such as anonymity and privacy. The framework is based on the concept of a function view, which is a concise representation of the attacker's partial knowledge about a function. We describe system behavior as a set of functions, and formalize different information hiding properties in terms of views of these functions. We present an extensive case study, in which we use the function view framework to systematically classify and rigorously define a rich domain of identity-related properties, and to demonstrate that privacy and anonymity are independent. The key feature of our approach is its modularity. It yields precise, formal specifications of information hiding properties for any protocol formalism and any choice of the attacker model as long as the latter induce an observational equivalence relation on protocol instances. In particular, specifications based on function views are suitable for any cryptographic process calculus that defines some form of indistinguishability between processes. Our definitions of information hiding properties take into account any feature of the security model, including probabilities, random number generation, timing, etc., to the extent that it is accounted for by the formalism in which the system is specified

[Go to top]

Integrating Portable and Distributed Storage (PDF)
by Niraj Tolia, Jan Harkes, Michael Kozuch, and Mahadev Satyanarayanan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We describe a technique called lookaside caching that combines the strengths of distributed file systems and portable storage devices, while negating their weaknesses. In spite of its simplicity, this technique proves to be powerful and versatile. By unifying distributed storage and portable storage into a single abstraction, lookaside caching allows users to treat devices they carry as merely performance and availability assists for distant file servers. Careless use of portable storage has no catastrophic consequences. Experimental results show that significant performance improvements are possible even in the presence of stale data on the portable device

[Go to top]

Internet indirection infrastructure (PDF)
by Ion Stoica, Daniel Adkins, Shelley Zhuang, S Shenker, and Sonesh Surana.
In IEEE/ACM Trans. Netw 12(2), 2004, pages 205-218. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Attempts to generalize the Internet's point-to-point communication abstraction to provide services like multicast, anycast, and mobility have faced challenging technical problems and deployment barriers. To ease the deployment of such services, this paper proposes a general, overlay-based Internet Indirection Infrastructure (i3) that offers a rendezvous-based communication abstraction. Instead of explicitly sending a packet to a destination, each packet is associated with an identifier; this identifier is then used by the receiver to obtain delivery of the packet. This level of indirection decouples the act of sending from the act of receiving, and allows i3 to efficiently support a wide variety of fundamental communication services. To demonstrate the feasibility of this approach, we have designed and built a prototype based on the Chord lookup protocol

[Go to top]

An Introduction to Auction Theory (PDF)
by Flavio M. Menezes and Paulo K. Monteiro.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This book presents an in-depth discussion of auction theory. It introduces the concept of Bayesian Nash equilibrium and the idea of studying auctions as games. Private, common, and affiliated values models and multi-object auction models are described. A general version of the Revenue Equivalence Theorem is derived and the optimal auction is characterized to relate the field of mechanism design to auction theory

[Go to top]

Leopard: A locality-aware peer-to-peer system with no hot spot (PDF)
by Yinzhe Yu, Sanghwan Lee, and Zhi-li Zhang.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental challenge in Peer-To-Peer (P2P) systems is how to locate objects of interest, namely, the look-up service problem. A key breakthrough towards a scalable and distributed solution of this problem is the distributed hash table (DHT)

[Go to top]

MACEDON: methodology for automatically creating, evaluating, and designing overlay networks (PDF)
by Adolfo Rodriguez, Charles Killian, Sooraj Bhat, Dejan Kostić, and Amin Vahdat.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Currently, researchers designing and implementing large-scale overlay services employ disparate techniques at each stage in the production cycle: design, implementation, experimentation, and evaluation. As a result, complex and tedious tasks are often duplicated leading to ineffective resource use and difficulty in fairly comparing competing algorithms. In this paper, we present MACEDON, an infrastructure that provides facilities to: i) specify distributed algorithms in a concise domain-specific language; ii) generate code that executes in popular evaluation infrastructures and in live networks; iii) leverage an overlay-generic API to simplify the interoperability of algorithm implementations and applications; and iv) enable consistent experimental evaluation. We have used MACEDON to implement and evaluate a number of algorithms, including AMMO, Bullet, Chord, NICE, Overcast, Pastry, Scribe, and SplitStream, typically with only a few hundred lines of MACEDON code. Using our infrastructure, we are able to accurately reproduce or exceed published results and behavior demonstrated by current publicly available implementations

[Go to top]

Measuring Anonymity in a Non-adaptive, Real-time System (PDF)
by Gergely Tóth and Zoltán Hornák.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous message transmission should be a key feature in network architectures, ensuring that delivered messages are impossible, or at least infeasible, to trace back to their senders. For this purpose the formal model of the non-adaptive, real-time PROB-channel will be introduced. In this model attackers try to circumvent applied protection measures and to link senders to delivered messages. In order to formally measure the level of anonymity provided by the system, the probability will be given with which observers can determine the senders of delivered messages (source-hiding property) or the recipients of sent messages (destination-hiding property). In order to reduce the certainty of an observer, possible counter-measures will be defined that ensure a specified upper limit for the probability with which an observer can mark someone as the sender or recipient of a message. Finally, results of simulations will be shown to demonstrate the strength of the techniques

[Go to top]

Mercury: supporting scalable multi-attribute range queries (PDF)
by Ashwin R. Bharambe, Mukesh Agrawal, and Srinivasan Seshan.
In SIGCOMM Comput. Commun. Rev 34(4), 2004, pages 353-366. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents the design of Mercury, a scalable protocol for supporting multi-attribute range-based searches. Mercury differs from previous range-based query systems in that it supports multiple attributes as well as performs explicit load balancing. To guarantee efficient routing and load balancing, Mercury uses novel light-weight sampling mechanisms for uniformly sampling random nodes in a highly dynamic overlay network. Our evaluation shows that Mercury is able to achieve its goals of logarithmic-hop routing and near-uniform load balancing. We also show that Mercury can be used to solve a key problem for an important class of distributed applications: distributed state maintenance for distributed games. We show that the Mercury-based solution is easy to use, and that it reduces the game's messaging overhead significantly compared to a naïve approach
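
A minimal sketch of one attribute hub (hypothetical names; Mercury's sampling-based load balancing and routing hints are not modeled): nodes own contiguous ranges of an attribute's value space, so a range query only visits nodes whose ranges overlap the query:

    import bisect

    class Hub:
        def __init__(self, boundaries):
            # boundaries[i] is the lowest attribute value owned by node i.
            self.boundaries = sorted(boundaries)
            self.items = {b: [] for b in self.boundaries}

        def _node_for(self, value):
            idx = bisect.bisect_right(self.boundaries, value) - 1
            return self.boundaries[max(idx, 0)]

        def insert(self, value, item):
            self.items[self._node_for(value)].append((value, item))

        def range_query(self, lo, hi):
            # Walk successor nodes starting from the first overlapping range.
            start = max(bisect.bisect_right(self.boundaries, lo) - 1, 0)
            out = []
            for b in self.boundaries[start:]:
                if b > hi:
                    break
                out += [it for v, it in self.items[b] if lo <= v <= hi]
            return out

    hub = Hub([0, 25, 50, 75])          # four nodes: [0,25), [25,50), ...
    for v in (3, 30, 42, 60, 99):
        hub.insert(v, f"item{v}")
    print(hub.range_query(28, 70))      # visits only the two middle nodes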

[Go to top]

Multifaceted Simultaneous Load Balancing in DHT-based P2P systems: A new game with old balls and bins (PDF)
by Karl Aberer, Anwitaman Datta, and Manfred Hauswirth.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we present and evaluate uncoordinated on-line algorithms for simultaneous storage and replication load-balancing in DHT-based peer-to-peer systems. We compare our approach with the classical balls into bins model, and point out the similarities but also the differences which call for new load-balancing mechanisms specifically targeted at P2P systems. Some of the peculiarities of P2P systems, which make our problem even more challenging, are that both the network membership and the data indexed in the network are dynamic, there is neither global coordination nor global information to rely on, and the load-balancing mechanism ideally should not compromise the structural properties and thus the search efficiency of the DHT, while preserving the semantic information of the data (e.g., lexicographic ordering to enable range searches)
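
The classical baseline the authors compare against is easy to simulate. The sketch below (toy parameters) contrasts placing each ball in one random bin with the two-choice variant that picks the less-loaded of two random bins, whose maximum load is exponentially smaller:

    import random

    # Classical balls-into-bins versus the "power of two choices" variant.
    def max_load(n_balls, n_bins, choices, rng):
        bins = [0] * n_bins
        for _ in range(n_balls):
            candidates = [rng.randrange(n_bins) for _ in range(choices)]
            best = min(candidates, key=lambda b: bins[b])  # least-loaded pick
            bins[best] += 1
        return max(bins)

    rng = random.Random(1)
    n = 10_000
    print("one choice :", max_load(n, n, 1, rng))   # ~ log n / log log n
    print("two choices:", max_load(n, n, 2, rng))   # ~ log log n, much flatter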

[Go to top]

MultiNet: Connecting to Multiple IEEE 802.11 Networks Using a Single Wireless Card (PDF)
by Ranveer Chandra, Victor Bahl, and Pradeep Bahl.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There are a number of scenarios where it is desirable to have a wireless device connect to multiple networks simultaneously. Currently, this is possible only by using multiple wireless network cards in the device. Unfortunately, using multiple wireless cards causes excessive energy drain and consequent reduction of lifetime in battery operated devices. In this paper, we propose a software based approach, called MultiNet, that facilitates simultaneous connections to multiple networks by virtualizing a single wireless card. The wireless card is virtualized by introducing an intermediate layer below IP, which continuously switches the card across multiple networks. The goal of the switching algorithm is to be transparent to the user who sees her machine as being connected to multiple networks. We present the design, implementation, and performance of the MultiNet system. We analyze and evaluate buffering and switching algorithms in terms of delay and energy consumption. Our system has been operational for over twelve months, it is agnostic of the upper layer protocols, and works well over popular IEEE 802.11 wireless LAN cards

[Go to top]

Operating system support for planetary-scale network services (PDF)
by Andy Bavier, Mic Bowman, Brent Chun, David Culler, Scott Karlin, Steve Muir, Larry Peterson, Timothy Roscoe, Tammo Spalink, and Mike Wawrzoniak.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

PlanetLab is a geographically distributed overlay network designed to support the deployment and evaluation of planetary-scale network services. Two high-level goals shape its design. First, to enable a large research community to share the infrastructure, PlanetLab provides distributed virtualization, whereby each service runs in an isolated slice of PlanetLab's global resources. Second, to support competition among multiple network services, PlanetLab decouples the operating system running on each node from the network-wide services that define PlanetLab, a principle referred to as unbundled management. This paper describes how PlanetLab realizes the goals of distributed virtualization and unbundled management, with a focus on the OS running on each node

[Go to top]

PeerStore: Better Performance by Relaxing in Peer-to-Peer Backup (PDF)
by Martin Landers, Han Zhang, and Kian-Lee Tan.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Backup is cumbersome. To be effective, backups have to be made at regular intervals, forcing users to organize and store a growing collection of backup media. In this paper we propose a novel Peer-to-Peer backup system, PeerStore, that allows the user to store his backups on other people's computers instead. PeerStore is an adaptive, cost-effective system suitable for all types of networks ranging from LAN, WAN to large unstable networks like the Internet. The system consists of two layers: metadata layer and symmetric trading layer. Locating blocks and duplicate checking is accomplished by the metadata layer while the actual data distribution is done between pairs of peers after they have established a symmetric data trade. By decoupling the metadata management from data storage, the system offers a significant reduction of the maintenance cost and preserves fairness among peers. Results show that PeerStore has a reduced maintenance cost compared to pStore. PeerStore also realizes fairness because of the symmetric nature of the trades

[Go to top]

A Peer-to-Peer File Sharing System for Wireless Ad-Hoc Networks (PDF)
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

File sharing in wireless ad-hoc networks in a peer-to-peer manner imposes many challenges that make conventional peer-to-peer systems operating on wire-line networks inapplicable for this case. Information and workload distribution as well as routing are major problems for members of a wireless ad-hoc network, which are only aware of their neighborhood. In this paper we propose a system that solves the peer-to-peer file-sharing problem for wireless ad-hoc networks. Our system works according to peer-to-peer principles, without requiring a central server, and distributes information regarding the location of shared files among members of the network. By means of a hashline and forming a tree-structure based on the topology of the network, the system is able to answer location queries, and also discover and maintain routing information that is used to transfer files from a source-peer to another peer

[Go to top]

Peer-to-Peer Overlays and Data Integration in a Life Science Grid (PDF)
by Curt Cramer, Andrea Schafferhans, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Databases and Grid computing are a good match. With the service orientation of Grid computing, the complexity of maintaining and integrating databases can be kept away from the actual users. Data access and integration are performed via services, which also allow access control to be employed. While it is our perception that many proposed Grid applications rely on a centralized and static infrastructure, Peer-to-Peer (P2P) technologies might help to dynamically scale and enhance Grid applications. The focus does not lie on publicly available P2P networks here, but on the self-organizing capabilities of P2P networks in general. A P2P overlay could, e.g., be used to improve the distribution of queries in a data Grid. For studying the combination of these three technologies, Grid computing, databases, and P2P, in this paper, we use an existing application from the life sciences, drug target validation, as an example. In its current form, this system has several drawbacks. We believe that they can be alleviated by using a combination of the service-based architecture of Grid computing and P2P technologies for implementing the services. The work presented in this paper is in progress. We mainly focus on the description of the current system state, its problems and the proposed new architecture. For a better understanding, we also outline the main topics related to the work presented here

[Go to top]

POSIX–Portable Operating System Interface
by The Open Group and IEEE.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Privacy in Electronic Commerce and the Economics of Immediate Gratification
by Alessandro Acquisti.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant

[Go to top]

Private keyword-based push and pull with applications to anonymous communication (PDF)
by Lea Kissner, Alina Oprea, Michael K. Reiter, Dawn Xiaodong Song, and Ke Yang.
In Applied Cryptography and Network Security, 2004. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a new keyword-based Private Information Retrieval (PIR) model that allows private modification of the database from which information is requested. In our model, the database is distributed over n servers, any one of which can act as a transparent interface for clients. We present protocols that support operations for accessing data, focusing on privately appending labelled records to the database (push) and privately retrieving the next unseen record appended under a given label (pull). The communication complexity between the client and servers is independent of the number of records in the database (or more generally, the number of previous push and pull operations) and of the number of servers. Our scheme also supports access control oblivious to the database servers by implicitly including a public key in each push, so that only the party holding the private key can retrieve the record via pull. To our knowledge, this is the first system that achieves the following properties: private database modification, private retrieval of multiple records with the same keyword, and oblivious access control. We also provide a number of extensions to our protocols and, as a demonstrative application, an unlinkable anonymous communication service using them

[Go to top]

Probabilistic Model Checking of an Anonymity System (PDF)
by Vitaly Shmatikov.
In Journal of Computer Security 12(3-4), 2004, pages 355-377. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We use the probabilistic model checker PRISM to analyze the Crowds system for anonymous Web browsing. This case study demonstrates how probabilistic model checking techniques can be used to formally analyze security properties of a peer-to-peer group communication system based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and the desired security properties are expressed as PCTL formulas. The PRISM model checker is used to perform automated analysis of the system and verify anonymity guarantees it provides. Our main result is a demonstration of how certain forms of probabilistic anonymity degrade when group size increases or random routing paths are rebuilt, assuming that the corrupt group members are able to identify and/or correlate multiple routing paths originating from the same sender
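
The Crowds routing model itself is simple enough to check by Monte Carlo simulation, as in the sketch below (an illustrative re-simulation under assumed parameters, not the paper's PRISM analysis): n members of which c are corrupt, each honest member forwarding with probability p_f, and the attacker logging the predecessor of the first corrupt relay on the path:

    import random

    def trial(n, c, p_f, rng):
        corrupt = set(range(c))          # w.l.o.g. members 0..c-1 are corrupt
        initiator = c                    # an honest member
        prev, cur = initiator, rng.randrange(n)   # first hop: random member
        while True:
            if cur in corrupt:
                return prev == initiator  # attacker logs its predecessor
            if rng.random() >= p_f:
                return False              # delivered to the server unobserved
            prev, cur = cur, rng.randrange(n)

    rng = random.Random(0)
    n, c, p_f, runs = 20, 4, 0.8, 100_000
    hits = sum(trial(n, c, p_f, rng) for _ in range(runs))
    print(f"P(first corrupt relay sees the initiator) ~ {hits / runs:.3f}")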

[Go to top]

Providing content-based services in a peer-to-peer environment (PDF)
by Ginger Perng, Chenxi Wang, and Michael K. Reiter.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Information dissemination in wide area networks has recently garnered much attention. Two differing models, publish/subscribe and rendezvous-based multicast atop overlay networks, have emerged as the two leading approaches for this goal. Event-based publish/subscribe supports content-based services with powerful filtering capabilities, while peer-to-peer rendezvous-based services allow for efficient communication in a dynamic network infrastructure. We describe Reach, a system that integrates these two approaches to provide efficient and scalable content-based services in a dynamic network setting

[Go to top]

Redundancy elimination within large collections of files (PDF)
by Purushottam Kulkarni, Fred Douglis, Jason Lavoie, and John M. Tracey.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Ongoing advancements in technology lead to ever-increasing storage capacities. In spite of this, optimizing storage usage can still provide rich dividends. Several techniques based on delta-encoding and duplicate block suppression have been shown to reduce storage overheads, with varying requirements for resources such as computation and memory. We propose a new scheme for storage reduction that reduces data sizes with an effectiveness comparable to the more expensive techniques, but at a cost comparable to the faster but less effective ones. The scheme, called Redundancy Elimination at the Block Level (REBL), leverages the benefits of compression, duplicate block suppression, and delta-encoding to eliminate a broad spectrum of redundant data in a scalable and efficient manner. REBL generally encodes more compactly than compression (up to a factor of 14) and a combination of compression and duplicate suppression (up to a factor of 6.7). REBL also encodes similarly to a technique based on delta-encoding, reducing overall space significantly in one case. Furthermore, REBL uses super-fingerprints, a technique that reduces the data needed to identify similar blocks while dramatically reducing the computational requirements of matching the blocks: it turns O(n²) comparisons into hash table lookups. As a result, using super-fingerprints to avoid enumerating matching data objects decreases computation in the resemblance detection phase of REBL by up to a couple of orders of magnitude
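
Duplicate block suppression with content-defined boundaries can be sketched compactly. The toy below uses a simple rolling sum where REBL uses Rabin fingerprints and super-fingerprints, so it only illustrates why content-defined chunk boundaries realign after an insertion, letting unchanged chunks deduplicate:

    import hashlib, random

    WINDOW, MASK = 16, 0x3FF          # expected chunk size around 1 KiB

    def chunks(data):
        start, s, window = 0, 0, bytearray()
        for i, byte in enumerate(data):
            window.append(byte)
            s += byte
            if len(window) > WINDOW:
                s -= window.pop(0)    # rolling sum over the last WINDOW bytes
            if (s & MASK) == MASK and i + 1 - start >= WINDOW:
                yield data[start:i + 1]
                start = i + 1
        if start < len(data):
            yield data[start:]

    def dedup(data, store):
        recipe = []
        for ch in chunks(data):       # store each unique chunk only once
            h = hashlib.sha1(ch).hexdigest()
            store.setdefault(h, ch)
            recipe.append(h)
        return recipe

    rnd = random.Random(3)
    v1 = bytes(rnd.randrange(256) for _ in range(20_000))
    v2 = v1[:5_000] + b"INSERTED" + v1[5_000:]   # small edit near the front
    store = {}
    r1, r2 = dedup(v1, store), dedup(v2, store)
    print(len(r1) + len(r2), "chunk references,", len(store), "unique chunks")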

[Go to top]

A Replicated File System for Resource Constrained Mobile Devices (PDF)
by João Barreto and Paulo Ferreira.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The emergence of more powerful and resourceful mobile devices, as well as new wireless communication technologies, is turning the concept of ad-hoc networking into a viable and promising possibility for ubiquitous information sharing. However, the inherent characteristics of ad-hoc networks bring up new challenges for which most conventional systems don't provide an appropriate response. Namely, the lack of a pre-existing infrastructure, the high topological dynamism of these networks, the relatively low bandwidth of wireless links, as well as the limited storage and energy resources of mobile devices are issues that strongly affect the efficiency of any distributed system intended to provide ubiquitous information sharing. In this paper we describe Haddock-FS, a transparent replicated file system designed to support collaboration in the novel usage scenarios enabled by mobile environments. Haddock-FS is based on a highly available optimistic consistency protocol. In order to effectively cope with the network bandwidth and device memory constraints of these environments, Haddock-FS employs a limited size log truncation scheme and a cross-file, cross-version content similarity exploitation mechanism

[Go to top]

Robust Distributed Name Service (PDF)
by Baruch Awerbuch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Scalable byzantine agreement (PDF)
by Scott Lewis and Jared Saia.
In unknown, 2004. (BibTeX entry) (Download bibtex record)
(direct link)

This paper gives a scalable protocol for solving the Byzantine agreement problem. The protocol is scalable in the sense that for Byzantine agreement over n processors, each processor sends and receives only O(log n) messages in expectation. To the best of our knowledge this is the first result for the Byzantine agreement problem where each processor sends and receives o(n) messages. The protocol uses randomness and is correct with high probability. It can tolerate any fraction of faulty processors which is strictly less than 1/6. Our result partially answers the following question posed by Kenneth Birman: How scalable are the traditional solutions to problems such as Consensus or Byzantine Agreement? [5]

[Go to top]

Secure Indexes (PDF)
by Eu-jin Goh.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Secure Service Signaling and fast Authorization in Programmable Networks (PDF)
by Michael Conrad, Thomas Fuhrmann, Marcus Schoeller, and Martina Zitterbart.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Programmable networks aim at the fast and flexible creation of services within a network. Often cited examples are audio and video transcoding, application layer multicast, or mobility and resilience support. In order to become commercially viable, programmable networks must provide authentication, authorization and accounting functionality. The mechanisms used to achieve these functionalities must be secure, reliable, and scalable, to be used in production scale programmable networks. Additionally programmable nodes must resist various kinds of attacks, such as denial of service or replay attacks. Fraudulent use by individual users must also be prohibited. This paper describes the design and implementation of a secure, reliable, and scalable signaling mechanism clients can use to initiate service startup and to manage services running on the nodes of a programmable network. This mechanism is designed for production scale networks with AAA-functionality

[Go to top]

Simple efficient load balancing algorithms for peer-to-peer systems (PDF)
by David Karger and Matthias Ruhl.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give two new load-balancing protocols whose provable performance guarantees are within a constant factor of optimal. Our protocols refine the consistent hashing data structure that underlies the Chord (and Koorde) P2P network. Both preserve Chord's logarithmic query time and near-optimal data migration cost. Consistent hashing is an instance of the distributed hash table (DHT) paradigm for assigning items to nodes in a peer-to-peer system: items and nodes are mapped to a common address space, and nodes have to store all items residing close by in the address space. Our first protocol balances the distribution of the key address space to nodes, which yields a load-balanced system when the DHT maps items "randomly" into the address space. To our knowledge, this yields the first P2P scheme simultaneously achieving O(log n) degree, O(log n) look-up cost, and constant-factor load balance (previous schemes settled for any two of the three). Our second protocol aims to directly balance the distribution of items among the nodes. This is useful when the distribution of items in the address space cannot be randomized. We give a simple protocol that balances load by moving nodes to arbitrary locations "where they are needed." As an application, we use the last protocol to give an optimal implementation of a distributed data structure for range searches on ordered data
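
For reference, here is a minimal consistent-hashing ring with virtual nodes, the structure these protocols refine (the paper's contributions, such as activating only a constant number of virtual nodes per host and migrating nodes for item balance, are not shown):

    import bisect, hashlib

    def h(key):
        return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

    class Ring:
        def __init__(self, nodes, vnodes=32):
            # Each physical node appears at `vnodes` pseudo-random ring points.
            self.points = sorted((h(f"{n}#{v}"), n)
                                 for n in nodes for v in range(vnodes))
            self.keys = [p for p, _ in self.points]

        def lookup(self, item):
            idx = bisect.bisect(self.keys, h(item)) % len(self.points)
            return self.points[idx][1]    # successor node on the ring

    ring = Ring([f"node{i}" for i in range(8)])
    load = {}
    for i in range(10_000):
        owner = ring.lookup(f"item{i}")
        load[owner] = load.get(owner, 0) + 1
    print(sorted(load.values()))          # loads spread around the 1250 average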

[Go to top]

Simulating the power consumption of large-scale sensor network applications (PDF)
by Victor Shnayder, Mark Hempstead, Bor-rong Chen, Geoff Werner Allen, and Matt Welsh.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals. In this paper, we present PowerTOSSIM, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation run. PowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations

[Go to top]

Total Recall: System Support for Automated Availability Management (PDF)
by Ranjita Bhagwan, Kiran Tati, Yu-chung Cheng, Stefan Savage, and Geoffrey M. Voelker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Availability is a storage system property that is both highly desired and yet minimally engineered. While many systems provide mechanisms to improve availability, such as redundancy and failure recovery, how to best configure these mechanisms is typically left to the system manager. Unfortunately, few individuals have the skills to properly manage the trade-offs involved, let alone the time to adapt these decisions to changing conditions. Instead, most systems are configured statically and with only a cursory understanding of how the configuration will impact overall performance or availability. While this issue can be problematic even for individual storage arrays, it becomes increasingly important as systems are distributed, and absolutely critical for the wide-area peer-to-peer storage infrastructures being explored. This paper describes the motivation, architecture and implementation for a new peer-to-peer storage system, called TotalRecall, that automates the task of availability management. In particular, the TotalRecall system automatically measures and estimates the availability of its constituent host components, predicts their future availability based on past behavior, calculates the appropriate redundancy mechanisms and repair policies, and delivers user-specified availability while maximizing efficiency

[Go to top]

Trust and Cooperation in Peer-to-Peer Systems (PDF)
by Junjie Jiang, Haihuan Bai, and Weinong Wang.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Most of the past studies on peer-to-peer systems have emphasized routing and lookup. The selfishness of users, which brings on the free riding problem, has not attracted sufficient attention from researchers. In this paper, we introduce a decentralized reputation-based trust model first, in which trust relationships could be built based on the reputation of peers. Subsequently, we use the iterated prisoner's dilemma to model the interactions in peer-to-peer systems and propose a simple incentive mechanism. By simulations, it is shown that stable cooperation can emerge after a limited number of rounds of interaction between peers when the incentive mechanism is used

[Go to top]

Vulnerabilities and Security Threats in Structured Overlay Networks: A Quantitative Analysis (PDF)
by Mudhakar Srivatsa and Ling Liu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A number of recent applications have been built on distributed hash table (DHT) based overlay networks. Almost all DHT-based schemes employ tight deterministic data placement and ID mapping schemes. This feature on one hand provides assurance on the location of data if it exists, within a bounded number of hops, and on the other hand opens doors for malicious nodes to lodge attacks that can potentially thwart the functionality of the overlay network. This paper studies several serious security threats in DHT-based systems through two targeted attacks at the overlay network's protocol layer. The first attack explores the routing anomalies that can be caused by malicious nodes returning incorrect lookup routes. The second attack targets the ID mapping scheme. We disclose that the malicious nodes can target any specific data item in the system, and corrupt/modify the data item to its favor. For each of these attacks, we provide quantitative analysis to estimate the extent of damage that can be caused by the attack, followed by experimental validation and defenses to guard the overlay networks from such attacks

[Go to top]

Wayback: A User-level Versioning File System for Linux (PDF)
by Brian Cornell, Peter Dinda, and Fabian Bustamante.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In a typical file system, only the current version of a file (or directory) is available. In Wayback, a user can also access any previous version, all the way back to the file's creation time. Versioning is done automatically at the write level: each write to the file creates a new version. Wayback implements versioning using an undo log structure, exploiting the massive space available on modern disks to provide its very useful functionality. Wayback is a user-level file system built on the FUSE framework that relies on an underlying file system for access to the disk. In addition to simplifying Wayback, this also allows it to extend any existing file system with versioning: after being mounted, the file system can be mounted a second time with versioning. We describe the implementation of Wayback, and evaluate its performance using several benchmarks
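
The undo-log idea can be sketched in a few lines of Python (an in-memory toy with hypothetical names; Wayback itself is a FUSE file system backed by on-disk logs, and truncation and length changes are not modeled here): every write first logs the bytes it overwrites, so any earlier version is recovered by undoing later writes in reverse order:

    class VersionedFile:
        def __init__(self):
            self.data = bytearray()
            self.log = []            # (version, offset, overwritten_bytes)
            self.version = 0         # bumped on every write

        def write(self, offset, buf):
            end = offset + len(buf)
            if end > len(self.data):
                self.data.extend(b"\0" * (end - len(self.data)))
            # Record what is about to be overwritten before mutating.
            self.log.append((self.version, offset, bytes(self.data[offset:end])))
            self.data[offset:end] = buf
            self.version += 1

        def at_version(self, v):
            # Rebuild contents as of version v by undoing later writes.
            snap = bytearray(self.data)
            for ver, off, old in reversed(self.log):
                if ver < v:
                    break
                snap[off:off + len(old)] = old
            return bytes(snap)

    f = VersionedFile()
    f.write(0, b"hello world")
    f.write(0, b"HELLO")
    print(f.at_version(1))   # b'hello world'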

[Go to top]

When Can an Autonomous Reputation Scheme Discourage Free-riding in a Peer-to-Peer System?
by Nazareno Andrade, Miranda Mowbray, Walfredo Cirne, and Francisco Brasileiro.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We investigate the circumstances under which it is possible to discourage free-riding in a peer-to-peer system for resource-sharing by prioritizing resource allocation to peers with higher reputation. We use a model to predict conditions necessary for any reputation scheme to succeed in discouraging free-riding by this method. We show with simulations that for representative cases, a very simple autonomous reputation scheme works nearly as well at discouraging free-riding as an ideal reputation scheme. Finally, we investigate the expected dynamic behavior of the system

[Go to top]

An Asymptotically Optimal Scheme for P2P File Sharing (PDF)
by Panayotis Antoniadis, Costas Courcoubetis, and Richard Weber.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The asymptotic analysis of certain public good models for p2p systems suggests that when the aim is to maximize social welfare a fixed contribution scheme in terms of the number of files shared can be asymptotically optimal as the number of participants grows to infinity. Such a simple scheme eliminates free riding, is incentive compatible and obtains a value of social welfare that is within o(n) of that obtained by the second-best policy of the corresponding mechanism design formulation of the problem. We extend our model to account for file popularity, and discuss properties of the resulting equilibria. The fact that a simple optimization problem can be used to closely approximate the solution of the exact model (which is in most cases practically intractable both analytically and computationally), is of great importance for studying several interesting aspects of the system. We consider the evolution of the system to equilibrium in its early life, when both peers and the system planner are still learning about system parameters. We also analyse the case of group formation when peers belong to different classes (such as DSL and dial-up users), and it may be to their advantage to form distinct groups instead of a larger single group, or form such a larger group but avoid disclosing their class. We finally discuss the game that occurs when peers know that a fixed fee will be used, but the distribution of their valuations is unknown to the system designer

[Go to top]

A construction of locality-aware overlay network: mOverlay and its performance (PDF)
by Xin Yan Zhang, Qian Zhang, Zhensheng Zhang, Gang Song, and Wenwu Zhu.
In IEEE Journal on Selected Areas in Communications 22, January 2004, pages 18-28. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There are many research interests in peer-to-peer (P2P) overlay architectures. Most widely used unstructured P2P networks rely on central directory servers or massive message flooding, clearly not scalable. Structured overlay networks based on distributed hash tables (DHT) are expected to eliminate flooding and central servers, but can require many long-haul message deliveries. An important aspect of constructing an efficient overlay network is how to exploit network locality in the underlying network. We propose a novel mechanism, mOverlay, for constructing an overlay network that takes account of the locality of network hosts. The constructed overlay network can significantly decrease the communication cost between end hosts by ensuring that a message reaches its destination with small overhead and very efficient forwarding. To construct the locality-aware overlay network, dynamic landmark technology is introduced. We present an effective locating algorithm for a new host joining the overlay network. We then present a theoretical analysis and simulation results to evaluate the network performance. Our analysis shows that the overhead of our locating algorithm is O(log N), where N is the number of overlay network hosts. Our simulation results show that the average distance between a pair of hosts in the constructed overlay network is only about 11% of that in a traditional, randomly connected overlay network. Network design guidelines are also provided. Many large-scale network applications, such as media streaming, application-level multicasting, and media distribution, can leverage mOverlay to enhance their performance

[Go to top]

Enhancing Web privacy and anonymity in the digital era (PDF)
by Stefanos Gritzalis.
In Information Management & Computer Security 12, January 2004, pages 255-287. (BibTeX entry) (Download bibtex record)
(direct link)

This paper presents a state-of-the-art review of the Web privacy and anonymity enhancing security mechanisms, tools, applications and services, with respect to their architecture, operational principles and vulnerabilities. Furthermore, to facilitate a detailed comparative analysis, the appropriate parameters have been selected and grouped in classes of comparison criteria, in the form of an integrated comparison framework. The main concern during the design of this framework was to cover the confronted security threats, applied technological issues and users' demands satisfaction. GNUnet's Anonymity Protocol (GAP), Freedom, Hordes, Crowds, Onion Routing, Platform for Privacy Preferences (P3P), TRUSTe, Lucent Personalized Web Assistant (LPWA), and Anonymizer have been reviewed and compared. The comparative review has clearly highlighted that the pros and cons of each system do not coincide, mainly due to the fact that each one exhibits different design goals and thus adopts dissimilar techniques for protecting privacy and anonymity

[Go to top]

Finite length analysis of LT codes
by Richard Karp, Michael Luby, and M. Amin Shokrollahi.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper provides an efficient method for analyzing the error probability of the belief propagation (BP) decoder applied to LT Codes. Each output symbol is generated independently by sampling from a distribution and adding the input symbols corresponding to the support of the sampled vector
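
The decoder being analyzed is the classic peeling process, sketched below (a toy re-implementation with an ad-hoc degree distribution, not the paper's analysis machinery): repeatedly take an output symbol that covers exactly one unknown input symbol, recover that input, and XOR it out of every other output symbol:

    import random

    def peel_decode(n_inputs, packets):
        # packets: list of (set_of_input_indices, xor_of_those_inputs).
        packets = [(set(s), v) for s, v in packets]
        recovered, progress = {}, True
        while progress and len(recovered) < n_inputs:
            progress = False
            for s, v in packets:
                if len(s) == 1 and next(iter(s)) not in recovered:
                    recovered[next(iter(s))] = v    # a degree-one "ripple" symbol
                    progress = True
            peeled = []
            for s, v in packets:                    # subtract recovered inputs
                for i in s & recovered.keys():
                    v ^= recovered[i]
                peeled.append((s - recovered.keys(), v))
            packets = peeled
        return recovered if len(recovered) == n_inputs else None  # None: stall

    rng = random.Random(5)
    data = [rng.randrange(256) for _ in range(8)]
    packets = []
    for _ in range(20):
        s = rng.sample(range(8), rng.choice([1, 1, 2, 3, 4]))     # toy degrees
        v = 0
        for i in s:
            v ^= data[i]
        packets.append((s, v))
    # With this much overhead, the peeling process almost always completes.
    print("decoded ok:", peel_decode(8, packets) == dict(enumerate(data)))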

[Go to top]

Network failure detection and graph connectivity (PDF)
by Jon Kleinberg, Mark Sandler, and Aleksandrs Slivkins.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider a model for monitoring the connectivity of a network subject to node or edge failures. In particular, we are concerned with detecting (ε, k)-failures: events in which an adversary deletes up to k network elements (nodes or edges), after which there are two sets of nodes A and B, each at least an ε fraction of the network, that are disconnected from one another. We say that a set D of nodes is an (ε, k)-detection set if, for any (ε, k)-failure of the network, some two nodes in D are no longer able to communicate; in this way, D "witnesses" any such failure. Recent results show that for any graph G, there is an (ε, k)-detection set of size bounded by a polynomial in k and 1/ε, independent of the size of G. In this paper, we expose some relationships between bounds on detection sets and the edge-connectivity λ and node-connectivity κ of the underlying graph. Specifically, we show that detection set bounds can be made considerably stronger when parameterized by these connectivity values. We show that for an adversary that can delete kλ edges, there is always a detection set of size O((k/ε) log(1/ε)) which can be found by random sampling. Moreover, an (ε, λ)-detection set of minimum size (which is at most 1/ε) can be computed in polynomial time. A crucial point is that these bounds are independent not just of the size of G but also of the value of λ. Extending these bounds to node failures is much more challenging. The most technically difficult result of this paper is that a random sample of O((k/ε) log(1/ε)) nodes is a detection set for adversaries that can delete a number of nodes up to kκ, where κ is the node-connectivity. For the case of edge-failures we use VC-dimension techniques and the cactus representation of all minimum edge-cuts of a graph; for node failures, we develop a novel approach for working with the much more complex set of all minimum node-cuts of a graph
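
The random-sampling result is easy to visualize with a toy experiment (illustrative only; the ring graph, cut, and sample size are arbitrary choices, whereas the paper's bounds concern adversarially chosen cuts): sample a small detection set D and check, via BFS, whether two nodes of D end up in different components after the adversary's deletions:

    import random
    from collections import deque

    def reachable(adj, src, dead_edges):
        seen, q = {src}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if (u, v) not in dead_edges and v not in seen:
                    seen.add(v)
                    q.append(v)
        return seen

    def witnessed(adj, detection_set, dead_edges):
        comp = reachable(adj, detection_set[0], dead_edges)
        return any(d not in comp for d in detection_set[1:])

    # Ring of 100 nodes: cutting two opposite edges splits it in half.
    n = 100
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    cut = {(0, 1), (1, 0), (50, 51), (51, 50)}
    rng = random.Random(2)
    D = rng.sample(range(n), 8)   # a sample this size almost always witnesses
    print("failure witnessed by sample:", witnessed(adj, D, cut))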

[Go to top]

Personalized Web search for improving retrieval effectiveness (PDF)
by Fang Liu, C. Yu, and Weiyi Meng.
In Knowledge and Data Engineering, IEEE Transactions on 16, January 2004, pages 28-40. (BibTeX entry) (Download bibtex record)
(direct link)

Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient

[Go to top]

Practical, distributed network coordinates (PDF)
by Russ Cox, Frank Dabek, Frans M. Kaashoek, Jinyang Li, and Robert Morris.
In SIGCOMM Computer Communication Review 34, January 2004, pages 113-118. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Vivaldi is a distributed algorithm that assigns synthetic coordinates to internet hosts, so that the Euclidean distance between two hosts' coordinates predicts the network latency between them. Each node in Vivaldi computes its coordinates by simulating its position in a network of physical springs. Vivaldi is both distributed and efficient: no fixed infrastructure need be deployed and a new host can compute useful coordinates after collecting latency information from only a few other hosts. Vivaldi can rely on piggy-backing latency information on application traffic instead of generating extra traffic by sending its own probe packets. This paper evaluates Vivaldi through simulations of 750 hosts, with a matrix of inter-host latencies derived from measurements between 750 real Internet hosts. Vivaldi finds synthetic coordinates that predict the measured latencies with a median relative error of 14 percent. The simulations show that a new host joining an existing Vivaldi system requires fewer than 10 probes to achieve this accuracy. Vivaldi is currently used by the Chord distributed hash table to perform proximity routing, replica selection, and retransmission timer estimation
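
The per-sample update is only a few lines. The sketch below implements the basic constant-timestep spring update in 2-D (the deployed algorithm additionally adapts the timestep to each node's error estimate; the toy convergence test uses hosts with a uniform 100 ms pairwise latency):

    import random

    def vivaldi_step(xi, xj, rtt, delta=0.05):
        # Move node i's coordinate so its distance to xj better matches rtt.
        dx = [a - b for a, b in zip(xi, xj)]
        dist = max((dx[0] ** 2 + dx[1] ** 2) ** 0.5, 1e-9)
        err = rtt - dist                       # positive: too close, push apart
        unit = [d / dist for d in dx]
        return [a + delta * err * u for a, u in zip(xi, unit)]

    rng = random.Random(4)
    coords = [[rng.random() for _ in range(2)] for _ in range(3)]
    for _ in range(2000):
        i, j = rng.sample(range(3), 2)
        coords[i] = vivaldi_step(coords[i], coords[j], rtt=100.0)
    d01 = sum((a - b) ** 2 for a, b in zip(coords[0], coords[1])) ** 0.5
    print(f"estimated distance 0-1: {d01:.1f} (target 100)")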

[Go to top]

Public-key encryption with keyword search (PDF)
by Dan Boneh, Giovanni Di Crescenzo, Rafail Ostrovsky, and Gieseppe Persiano.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice's public key. An email gateway wants to test whether the email contains the keyword "urgent" so that it could route the email accordingly. Alice, on the other hand, does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that lets it test whether "urgent" is a keyword in the email, without learning anything else about the email

[Go to top]

Peer-to-Peer Networking & -Computing (PDF)
by Ralf Steinmetz and Klaus Wehrle.
In Informatik Spektrum 27, February 2004, pages 51-54. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer is establishing itself as a highly interesting paradigm for communication on the Internet. Although originally designed only for the very pragmatic and legally controversial file-sharing services, peer-to-peer mechanisms can be used for the distributed sharing of the most diverse resources and open up new possibilities for Internet-based applications

[Go to top]

Practical Anonymity for the Masses with MorphMix (PDF)
by Marc Rennhard and Bernhard Plattner.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

MorphMix is a peer-to-peer circuit-based mix network to provide practical anonymous low-latency Internet access for millions of users. The basic ideas of MorphMix have been published before; this paper focuses on solving open problems and giving an analysis of the resistance to attacks and the performance it offers, assuming realistic scenarios with very many users. We demonstrate that MorphMix scales very well and can support as many nodes as there are public IP addresses. In addition, we show that MorphMix is indeed practical because it provides good resistance to long-term profiling and offers acceptable performance despite the heterogeneity of the nodes and the fact that nodes can join or leave the system at any time

[Go to top]

Provable Unlinkability Against Traffic Analysis (PDF)
by Ron Berman, Amos Fiat, and Amnon Ta-Shma.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the unlinkability of communication problem: given n users, each sending a message to some destination, encode and route the messages so that an adversary analyzing the traffic in the communication network cannot link the senders with the recipients. A solution should have a small communication overhead, that is, the number of additional messages should be kept low. David Chaum introduced the idea of mixes for solving this problem. His approach was developed further by Simon and Rackoff, and implemented later as the onion protocol. Even if the onion protocol is widely regarded as secure and used in practice, formal arguments supporting this claim are rare and far from being complete. On top of that, in certain scenarios very simple tricks suffice to break security without breaking the cryptographic primitives. It turns out that one source of difficulties in analyzing the onion protocol's security is the adversary model. In a recent work, Berman, Fiat and Ta-Shma develop a new and more realistic model in which only a constant fraction of communication lines can be accessed by an adversary, the number of messages does not need to be high and the preferences of the users are taken into account. For this model they prove that with high probability a good level of unlinkability is obtained after O(log² n) steps of the onion protocol, where n is the number of messages sent. In this paper we improve these results: we show that the same level of unlinkability (expressed as variation distance between certain probability distributions) is obtained with high probability already after O(log n) steps of the onion protocol. Asymptotically, this is the best result possible, since obviously Ω(log n) steps are necessary. On top of that, our analysis is much simpler. It is based on the path coupling technique designed for showing rapid mixing of Markov chains

[Go to top]

Timing Attacks in Low-Latency Mix-Based Systems (PDF)
by Brian Neil Levine, Michael K. Reiter, Chenxi Wang, and Matthew Wright.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A mix is a communication proxy that attempts to hide the correspondence between its incoming and outgoing messages. Timing attacks are a significant challenge for mix-based systems that wish to support interactive, low-latency applications. However, the potency of these attacks has not been studied carefully. In this paper, we investigate timing analysis attacks on low-latency mix systems and clarify the threat they pose. We propose a novel technique, defensive dropping, to thwart timing attacks. Through simulations and analysis, we show that defensive dropping can be effective against attackers who employ timing analysis

[Go to top]

Universal Re-Encryption for Mixnets (PDF)
by Philippe Golle, Markus Jakobsson, Ari Juels, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We introduce a new cryptographic technique that we call universal re-encryption. A conventional cryptosystem that permits re-encryption, such as ElGamal, does so only for a player with knowledge of the public key corresponding to a given ciphertext. In contrast, universal re-encryption can be done without knowledge of public keys. We propose an asymmetric cryptosystem with universal re-encryption that is half as efficient as standard ElGamal in terms of computation and storage. While technically and conceptually simple, universal re-encryption leads to new types of functionality in mixnet architectures. Conventional mixnets are often called upon to enable players to communicate with one another through channels that are externally anonymous, i.e., that hide information permitting traffic-analysis. Universal re-encryption lets us construct a mixnet of this kind in which servers hold no public or private keying material, and may therefore dispense with the cumbersome requirements of key generation, key distribution, and private-key management. We describe two practical mixnet constructions, one involving asymmetric input ciphertexts, and another with hybrid-ciphertext inputs
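
A toy Python sketch of the ElGamal-based construction may clarify why re-encryption needs no key at all: the ciphertext carries an encryption of 1 that anyone can re-randomize. Group parameters here are tiny and purely illustrative; real deployments need cryptographically large primes:

    import random

    # Toy parameters: p = 2q + 1 with q prime; g = 4 generates the order-q
    # subgroup. Messages must lie in that subgroup (e.g. powers of g).
    p, q, g = 467, 233, 4

    def keygen():
        x = random.randrange(1, q)              # private key
        return x, pow(g, x, p)                  # (x, y = g^x mod p)

    def encrypt(m, y):
        """Ciphertext = ElGamal encryption of m plus an encryption of 1."""
        k0, k1 = random.randrange(1, q), random.randrange(1, q)
        return [(m * pow(y, k0, p) % p, pow(g, k0, p)),
                (pow(y, k1, p), pow(g, k1, p))]

    def reencrypt(ct):
        """Re-randomize without any key: fold fresh powers of the
        encryption-of-1 into both halves of the ciphertext."""
        (a0, b0), (a1, b1) = ct
        k0, k1 = random.randrange(1, q), random.randrange(1, q)
        return [(a0 * pow(a1, k0, p) % p, b0 * pow(b1, k0, p) % p),
                (pow(a1, k1, p), pow(b1, k1, p))]

    def decrypt(ct, x):
        (a0, b0), (a1, b1) = ct
        if a1 * pow(pow(b1, x, p), -1, p) % p != 1:
            return None                         # not addressed to this key
        return a0 * pow(pow(b0, x, p), -1, p) % p

    x, y = keygen()
    assert decrypt(reencrypt(encrypt(16, y)), x) == 16   # 16 = g**2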

[Go to top]

Designing a DHT for Low Latency and High Throughput (PDF)
by Frank Dabek, Jinyang Li, Emil Sit, James Robertson, Frans M. Kaashoek, and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Designing a wide-area distributed hash table (DHT) that provides high-throughput and low-latency network storage is a challenge. Existing systems have explored a range of solutions, including iterative routing, recursive routing, proximity routing and neighbor selection, erasure coding, replication, and server selection. This paper explores the design of these techniques and their interaction in a complete system, drawing on the measured performance of a new DHT implementation and results from a simulator with an accurate Internet latency model. New techniques that resulted from this exploration include use of latency predictions based on synthetic co-ordinates, efficient integration of lookup routing and data fetching, and a congestion control mechanism suitable for fetching data striped over large numbers of servers. Measurements with 425 server instances running on 150 PlanetLab and RON hosts show that the latency optimizations reduce the time required to locate and fetch data by a factor of two. The throughput optimizations result in a sustainable bulk read throughput related to the number of DHT hosts times the capacity of the slowest access link; with 150 selected PlanetLab hosts, the peak aggregate throughput over multiple clients is 12.8 megabytes per second

[Go to top]

An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol (PDF)
by Salman A. Baset and Henning G. Schulzrinne.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic

[Go to top]

Designing Incentive mechanisms for peer-to-peer systems (PDF)
by John Chuang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

From file-sharing to mobile ad-hoc networks, community networking to application layer overlays, the peer-to-peer networking paradigm promises to revolutionize the way we design, build and use the communications network of tomorrow, transform the structure of the communications industry, and challenge our understanding of markets and democracies in a digital age. The fundamental premise of peer-to-peer systems is that individual peers voluntarily contribute resources to the system. We discuss some of the research opportunities and challenges in the design of incentive mechanisms for P2P systems

[Go to top]

Dissecting BitTorrent: Five Months in a Torrent's Lifetime (PDF)
by Mikel Izal, Guillaume Urvoy-Keller, E W Biersack, Pascal Felber, Anwar Al Hamra, and L Garcés-Erice.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Popular content such as software updates is requested by a large number of users. Traditionally, to satisfy a large number of requests, larger server farms or mirroring are used, both of which are expensive. An inexpensive alternative is peer-to-peer based replication systems, where users who retrieve the file act simultaneously as clients and servers. In this paper, we study BitTorrent, a new and already very popular peer-to-peer application that allows distribution of very large contents to a large set of hosts. Our analysis of BitTorrent is based on measurements collected over a five-month-long period that involved thousands of peers

[Go to top]

Anonymity and Covert Channels in Simple Timed Mix-firewalls (PDF)
by Richard E. Newman, Vipan R. Nalla, and Ira S. Moskowitz.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Traditional methods for evaluating the amount of anonymity afforded by various Mix configurations have depended on either measuring the size of the set of possible senders of a particular message (the anonymity set size), or by measuring the entropy associated with the probability distribution of the message's possible senders. This paper explores further an alternative way of assessing the anonymity of a Mix system by considering the capacity of a covert channel from a sender behind the Mix to an observer of the Mix's output. Initial work considered a simple model, with an observer (Eve) restricted to counting the number of messages leaving a Mix configured as a firewall guarding an enclave with one malicious sender (Alice) and some other naive senders (the Clueless_i's). Here, we consider the case where Eve can distinguish between multiple destinations, and the senders can select to which destination their message (if any) is sent each clock tick

[Go to top]

Dining Cryptographers Revisited (PDF)
by Philippe Golle and Ari Juels.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Dining cryptographers networks (or DC-nets) are a privacy-preserving primitive devised by Chaum for anonymous message publication. A very attractive feature of the basic DC-net is its non-interactivity. Subsequent to key establishment, players may publish their messages in a single broadcast round, with no player-to-player communication. This feature is not possible in other privacy-preserving tools like mixnets. A drawback to DC-nets, however, is that malicious players can easily jam them, i.e., corrupt or block the transmission of messages from honest parties, and may do so without being traced. Several researchers have proposed valuable methods of detecting cheating players in DC-nets. This is usually at the cost, however, of multiple broadcast rounds, even in the optimistic case, and often of high computational and/or communications overhead, particularly for fault recovery. We present new DC-net constructions that simultaneously achieve non-interactivity and high-probability detection and identification of cheating players. Our proposals are quite efficient, imposing a basic cost that is linear in the number of participating players. Moreover, even in the case of cheating in our proposed system, just one additional broadcast round suffices for full fault recovery. Among other tools, our constructions employ bilinear maps, a recently popular cryptographic technique for reducing communication complexity
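
For reference, a minimal Python sketch of the basic (unauthenticated) XOR DC-net round that this paper hardens against jamming; it illustrates the non-interactivity of a round, not the paper's bilinear-map constructions:

    import secrets

    def dc_net_round(n_players, msg_len, sender, message):
        """One broadcast round of a plain XOR DC-net: every pair of players
        shares a secret pad; each player announces the XOR of its pads, and
        the sender additionally XORs in its message."""
        pads = {(i, j): secrets.token_bytes(msg_len)
                for i in range(n_players) for j in range(i + 1, n_players)}
        announcements = []
        for i in range(n_players):
            out = bytes(msg_len)
            for j in range(n_players):
                if i == j:
                    continue
                pad = pads[(min(i, j), max(i, j))]
                out = bytes(a ^ b for a, b in zip(out, pad))
            if i == sender:
                out = bytes(a ^ b for a, b in zip(out, message))
            announcements.append(out)
        # XOR of all announcements: pads cancel pairwise, the message remains.
        result = bytes(msg_len)
        for a in announcements:
            result = bytes(x ^ y for x, y in zip(result, a))
        return result

    assert dc_net_round(5, 7, sender=2, message=b"example") == b"example"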

[Go to top]

On Flow Correlation Attacks and Countermeasures in Mix Networks (PDF)
by Ye Zhu, Xinwen Fu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we address issues related to flow correlation attacks and the corresponding countermeasures in mix networks. Mixes have been used in many anonymous communication systems and are supposed to provide countermeasures that can defeat various traffic analysis attacks. In this paper, we focus on a particular class of traffic analysis attack, flow correlation attacks, by which an adversary attempts to analyze the network traffic and correlate the traffic of a flow over an input link at a mix with that over an output link of the same mix. Two classes of correlation methods are considered, namely time-domain methods and frequency-domain methods. Based on our threat model and known strategies in existing mix networks, we perform extensive experiments to analyze the performance of mixes. We find that a mix with any known batching strategy may fail against flow correlation attacks in the sense that for a given flow over an input link, the adversary can correctly determine which output link is used by the same flow. We also investigated methods that can effectively counter the flow correlation attack and other timing attacks. The empirical results provided in this paper give an indication to designers of Mix networks about appropriate configurations and alternative mechanisms to be used to counter flow correlation attacks. This work was supported in part by the National Science Foundation under Contracts 0081761 and 0324988, by the Defense Advanced Research Projects Agency under Contract F30602-99-1-0531, and by Texas A&M University under its Telecommunication and Information Task Force Program. Any opinions, findings, and conclusions or recommendations in this material, either expressed or implied, are those of the authors and do not necessarily reflect the views of the sponsors listed above

[Go to top]

The Hitting Set Attack on Anonymity Protocols (PDF)
by Dogan Kesdogan and Lexi Pimenidis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A passive attacker can compromise a generic anonymity protocol by applying the so called disclosure attack, i.e. a special traffic analysis attack. In this work we present a more efficient way to accomplish this goal, i.e. we need less observations by looking for unique minimal hitting sets. We call this the hitting set attack or just HS-attack. In general, solving the minimal hitting set problem is NP-hard. Therefore, we use frequency analysis to enhance the applicability of our attack. It is possible to apply highly efficient backtracking search algorithms. We call this approach the statistical hitting set attack or SHS-attack. However, the statistical hitting set attack is prone to wrong solutions with a given small probability. We use here duality checking algorithms to resolve this problem. We call this final exact attack the HS*-attack
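
A naive Python sketch of the core idea, assuming each observation is the anonymity set of one of Alice's messages; the exhaustive search stands in for the backtracking and frequency-analysis refinements the paper develops:

    from itertools import combinations

    def hitting_sets(observations, m):
        """All candidate sets of m recipients that intersect ('hit') every
        observed anonymity set; exhaustive, so illustration-sized only."""
        universe = sorted(set().union(*observations))
        return [set(c) for c in combinations(universe, m)
                if all(set(c) & obs for obs in observations)]

    # The attack succeeds once exactly one minimal hitting set is consistent
    # with all observations: it must be Alice's set of communication partners.
    obs = [{1, 5, 9}, {2, 5, 7}, {3, 5, 8}, {4, 6, 9}, {1, 2, 9}]
    print(hitting_sets(obs, 2))   # unique candidate: {5, 9}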

[Go to top]

An Improved Construction for Universal Re-encryption (PDF)
by Peter Fairbrother.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Golle et al. recently introduced universal re-encryption, defining it as re-encryption by a player who does not know the key used for the original encryption, but which still allows an intended player to recover the plaintext. Universal re-encryption is potentially useful as part of many information-hiding techniques, as it allows any player to make ciphertext unidentifiable without knowing the key used. Golle et al.'s techniques for universal re-encryption are reviewed, and a hybrid universal re-encryption construction with improved work and space requirements which also permits indefinite re-encryptions is presented. Some implementational issues and optimisations are discussed

[Go to top]

Keso–a Scalable, Reliable and Secure Read/Write Peer-to-Peer File System (PDF)
by Mattias Amnefelt and Johanna Svenningsson.
Master's Thesis, KTH/Royal Institute of Technology, May 2004. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this thesis we present the design of Keso, a distributed and completely decentralized file system based on the peer-to-peer overlay network DKS. While designing Keso we have taken into account many of the problems that exist in today's distributed file systems. Traditionally, distributed file systems have been built around dedicated file servers which often use expensive hardware to minimize the risk of breakdown and to handle the load. System administrators are required to monitor the load and disk usage of the file servers and to manually add clients and servers to the system. Another drawback with centralized file systems is that a lot of storage space is unused on clients. Measurements we have taken on existing computer systems have shown that a large part of the storage capacity of workstations is unused. In the system we looked at there was three times as much storage space available on workstations as was stored in the distributed file system. We have also shown that much data stored in a production-use distributed file system is redundant. The main goals for the design of Keso have been that it should make use of spare resources, avoid storing unnecessarily redundant data, scale well, be self-organizing and be a secure file system suitable for a real world environment. By basing Keso on peer-to-peer techniques it becomes highly scalable, fault tolerant and self-organizing. Keso is intended to run on ordinary workstations and can make use of the previously unused storage space. Keso also provides means for access control and data privacy despite being built on top of untrusted components. The file system utilizes the fact that a lot of data stored in traditional file systems is redundant by letting all files that contain a datablock with the same contents reference the same datablock in the file system. This is achieved while still maintaining access control and data privacy

[Go to top]

Practical Traffic Analysis: Extending and Resisting Statistical Disclosure (PDF)
by Nick Mathewson and Roger Dingledine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We extend earlier research on mounting and resisting passive long-term end-to-end traffic analysis attacks against anonymous message systems, by describing how an eavesdropper can learn sender-receiver connections even when the substrate is a network of pool mixes, the attacker is non-global, and senders have complex behavior or generate padding messages. Additionally, we describe how an attacker can use information about message distinguishability to speed the attack. We simulate our attacks for a variety of scenarios, focusing on the amount of information needed to link senders to their recipients. In each scenario, we show that the intersection attack is slowed but still succeeds against a steady-state mix network. We find that the attack takes an impractical amount of time when message delivery times are highly variable; when the attacker can observe very little of the network; and when users pad consistently and the adversary does not know how the network behaves in their absence

[Go to top]

Reasoning about the Anonymity Provided by Pool Mixes that Generate Dummy Traffic (PDF)
by Claudia Diaz and Bart Preneel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper we study the anonymity provided by generalized mixes that insert dummy traffic. Mixes are an essential component to offer anonymous email services. We indicate how to compute the recipient and sender anonymity and we point out some problems that may arise from the intuitive extension of the metric to take dummies into account. Two possible ways of inserting dummy traffic are discussed and compared. An active attack scenario is considered, and the anonymity provided by mixes under the attack is analyzed

[Go to top]

Reputable Mix Networks (PDF)
by Philippe Golle.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We define a new type of mix network that offers a reduced form of robustness: the mixnet can prove that every message it outputs corresponds to an input submitted by a player without revealing which input (for honest players). We call mixnets with this property reputable mixnets. Reputable mixnets are not fully robust, because they offer no guarantee that distinct outputs correspond to distinct inputs. In particular, a reputable mix may duplicate or erase messages. A reputable mixnet, however, can defend itself against charges of having authored the output messages it produces. This ability is very useful in practice, as it shields the mixnet from liability in the event that an output message is objectionable or illegal. We propose three very efficient protocols for reputable mixnets, all synchronous. The first protocol is based on blind signatures. It works both with Chaumian decryption mixnets or re-encryption mixnets based on ElGamal, but guarantees a slightly weaker form of reputability which we call near-reputability. The other two protocols are based on ElGamal re-encryption over a composite group and offer true reputability. One requires interaction between the mixnet and the players before players submit their inputs. The other assumes no interaction prior to input submission

[Go to top]

Robust incentive techniques for peer-to-peer networks (PDF)
by Michal Feldman, Kevin Lai, Ion Stoica, and John Chuang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation
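
A minimal sketch of a reciprocative decision function in Python, under the simplest reading of the abstract (names are mine; the paper's variants add discriminating server selection, shared history and stranger policies):

    import random

    def reciprocative(provided, consumed):
        """Serve a requesting peer with probability equal to its generosity:
        the service it has provided divided by the service it has consumed."""
        generosity = 1.0 if consumed == 0 else provided / consumed
        return random.random() < min(1.0, generosity)

    # A peer that uploaded 30 units but downloaded 40 is served with
    # probability 0.75; a pure free-rider (0 provided) is never served.
    print(reciprocative(30.0, 40.0))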

[Go to top]

Statistical Disclosure or Intersection Attacks on Anonymity Systems (PDF)
by George Danezis and Andrei Serjantov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we look at the information an attacker can extract using a statistical disclosure attack. We provide analytical results about the anonymity of users when they repeatedly send messages through a threshold mix following the model of Kesdogan, Agrawal and Penz [7] and through a pool mix. We then present a statistical disclosure attack that can be used to attack models of anonymous communication networks based on pool mixes. Careful approximations make the attack computationally efficient. Such models are potentially better suited to derive results that could apply to the security of real anonymous communication networks
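
As a sketch of the basic statistical disclosure idea against a threshold mix (the paper's pool-mix variant needs additional approximations), assuming the attacker knows the batch size and an estimate of the background traffic distribution:

    import numpy as np

    def statistical_disclosure(batches_with_alice, background, batch_size):
        """Estimate Alice's recipient distribution from a threshold mix.
        batches_with_alice[i][r] counts messages to recipient r in a batch
        where Alice sent exactly one message; background[r] is the recipient
        distribution of everyone else's traffic (assumed known or estimated)."""
        mean_obs = np.mean(batches_with_alice, axis=0)
        # Of the batch_size messages per batch, batch_size - 1 are drawn from
        # the background, so the residual is attributable to Alice.
        residual = mean_obs - (batch_size - 1) * np.asarray(background)
        residual = np.clip(residual, 0.0, None)
        return residual / residual.sum() if residual.sum() > 0 else residual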

[Go to top]

Synchronous Batching: From Cascades to Free Routes (PDF)
by Roger Dingledine, Vitaly Shmatikov, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The variety of possible anonymity network topologies has spurred much debate in recent years. In a synchronous batching design, each batch of messages enters the mix network together, and the messages proceed in lockstep through the network. We show that a synchronous batching strategy can be used in various topologies, including a free-route network, in which senders choose paths freely, and a cascade network, in which senders choose from a set of fixed paths. We show that free-route topologies can provide better anonymity as well as better message reliability in the event of partial network failure

[Go to top]

The Traffic Analysis of Continuous-Time Mixes (PDF)
by George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We apply the information-theoretic anonymity metrics to continuous-time mixes, which individually delay messages instead of batching them. The anonymity of such mixes is measured based on their delay characteristics, and as an example the exponential mix (sg-mix) is analysed, simulated and shown to use the optimal strategy. We also describe a practical and powerful traffic analysis attack against connection based continuous-time mix networks, despite the presence of some cover traffic. Assuming a passive observer, the conditions are calculated that make tracing messages through the network possible

[Go to top]

On the Anonymity of Anonymity Systems (PDF)
by Andrei Serjantov.
Ph.D. thesis, University of Cambridge, June 2004. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

A Network Positioning System for the Internet (PDF)
by T. S. Eugene Ng and Hui Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network positioning has recently been demonstrated to be a viable concept to represent the network distance relationships among Internet end hosts. Several subsequent studies have examined the potential benefits of using network position in applications, and proposed alternative network positioning algorithms. In this paper, we study the problem of designing and building a network positioning system (NPS). We identify several key system-building issues such as the consistency, adaptivity and stability of host network positions over time. We propose a hierarchical network positioning architecture that maintains consistency while enabling decentralization, a set of adaptive decentralized algorithms to compute and maintain accurate, stable network positions, and finally present a prototype system deployed on PlanetLab nodes that can be used by a variety of applications. We believe our system is a viable first step to provide a network positioning capability in the Internet

[Go to top]

SWIFT: A System With Incentives For Trading (PDF)
by Karthik Tamilmani, Vinay Pai, and Alexander E. Mohr.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In this paper, we present the design of a credit-based trading mechanism for peer-to-peer file sharing networks. We divide files into verifiable pieces; every peer interested in a file requests these pieces individually from the peers it is connected to. Our goal is to build a mechanism that supports fair large scale distribution in which downloads are fast, with low startup latency. We build a trading model in which peers use a pairwise currency to reconcile trading differences with each other and examine various trading strategies that peers can adopt. We show through analysis and simulation that peers who contribute to the network and take risks receive the most benefit in return. Our simulations demonstrate that peers who set high upload rates receive high download rates in return, but free-riders download very slowly compared to peers who upload. Finally, we propose a default trading strategy that is good for both the network as a whole and the peer employing it: deviating from that strategy yields little or no advantage for the peer

[Go to top]

Better Anonymous Communications (PDF)
by George Danezis.
Ph.D. thesis, University of Cambridge, July 2004. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Universal Re-encryption of Signatures and Controlling Anonymous Information Flow (PDF)
by Marek Klonowski, Miroslaw Kutylowski, Anna Lauks, and Filip Zagorski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communication protocols, essential for preserving the privacy of the communicating parties, may lead to severe problems. A malicious server may use anonymous communication protocols for injecting unwelcome messages into the system so that their source can hardly be traced. So anonymity and privacy protection on the one side and protection against phenomena such as spam are so far contradictory goals. We propose a mechanism that may be used to limit the mentioned side effects of privacy protection. In the proposed protocol, each encrypted message admitted into the system is signed by a respective authority. Then, on its route through the network, the encrypted message and the signature are re-encrypted universally. The purpose of universal re-encryption is to hide the routes of the messages from an observer monitoring the traffic. Despite re-encryption, the signature of the authority remains valid. Depending on the particular application, verification of the signature is possible either off-line by anybody with access to the ciphertext and the signature, or requires contact with the authority that issued the signature

[Go to top]

Anonymous Communication with On-line and Off-line Onion Encoding (PDF)
by Marcin Gomulkiewicz, Marek Klonowski, and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Encapsulating messages in onions is one of the major techniques providing anonymous communication in computer networks. To some extent, it provides security against traffic analysis by a passive adversary. However, it can be highly vulnerable to attacks by an active adversary. For instance, the adversary may perform a simple so-called repetitive attack: a malicious server sends the same message twice, then the adversary traces places where the same message appears twice, revealing the route of the original message. A repetitive attack was examined for mix networks. However, none of the countermeasures designed is suitable for onion routing. In this paper we propose an onion-like encoding design based on universal re-encryption. The onions constructed in this way can be used in a protocol that achieves the same goals as the classical onions; however, at the same time we achieve immunity against a repetitive attack. Even if an adversary disturbs communication and prevents processing a message somewhere on the onion path, it is easy to identify the malicious server performing the attack and provide an evidence of its illegal behavior

[Go to top]

Free-riding and whitewashing in peer-to-peer systems (PDF)
by Michal Feldman, Christos Papadimitriou, John Chuang, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We develop a model to study the phenomenon of free-riding in peer-to-peer (P2P) systems. At the heart of our model is a user of a certain type, an intrinsic and private parameter that reflects the user's willingness to contribute resources to the system. A user decides whether to contribute or free-ride based on how the current contribution cost in the system compares to her type. When the societal generosity (i.e., the average type) is low, intervention is required in order to sustain the system. We present the effect of mechanisms that exclude low type users or, more realistically, penalize free-riders with degraded service. We also consider dynamic scenarios with arrivals and departures of users, and with whitewashers: users who leave the system and rejoin with new identities to avoid reputational penalties. We find that when penalty is imposed on all newcomers in order to avoid whitewashing, system performance degrades significantly only when the turnover rate among users is high
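
The contribute-or-free-ride rule in this model reduces to a threshold comparison, which a few lines of Python make concrete (parameter values are illustrative):

    import random

    def participation(types, cost):
        """Fraction of users who contribute at a given contribution cost:
        a user contributes iff her private type (willingness to contribute)
        is at least the cost, otherwise she free-rides."""
        return sum(1 for t in types if t >= cost) / len(types)

    # Societal generosity is the average type; here types are uniform on [0, 1].
    types = [random.random() for _ in range(100000)]
    for cost in (0.1, 0.5, 0.9):
        print(f"cost={cost}: contributing fraction={participation(types, cost):.2f}")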

[Go to top]

Modeling and performance analysis of BitTorrent-like peer-to-peer networks (PDF)
by Dongyu Qiu and Rayadurgam Srikant.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet
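
A numerical sketch of a simple fluid model of this kind, assuming the commonly cited form with leecher population x(t) and seed population y(t); parameter names and values are illustrative, not taken from the paper's traces:

    def bittorrent_fluid(T=600.0, dt=0.01, lam=1.0, mu=0.25, c=2.0,
                         theta=0.001, gamma=0.05, eta=1.0):
        """Euler integration of a simple BitTorrent fluid model.
        x(t): downloaders (leechers), y(t): seeds. The completion rate is the
        smaller of aggregate download capacity c*x and aggregate upload
        capacity mu*(eta*x + y); lam = arrival rate, theta = abort rate,
        gamma = seed departure rate, eta = sharing effectiveness."""
        x = y = 0.0
        for _ in range(int(T / dt)):
            completed = min(c * x, mu * (eta * x + y))
            x += (lam - theta * x - completed) * dt
            y += (completed - gamma * y) * dt
        return x, y

    # Steady state under these illustrative parameters: roughly x = 0.5, y = 20.
    print(bittorrent_fluid())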

[Go to top]

Reputation Management Framework and Its Use as Currency in Large-Scale Peer-to-Peer Networks (PDF)
by Rohit Gupta and Arun K. Somani.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we propose a reputation management framework for large-scale peer-to-peer (P2P) networks, wherein all nodes are assumed to behave selfishly. The proposed framework has several advantages. It enables a form of virtual currency, such that the reputation of nodes is a measure of their wealth. The framework is scalable and provides protection against attacks by malicious nodes. The above features are achieved by developing trusted communities of nodes whose members trust each other and cooperate to deal with the problem of nodes' selfishness and possible maliciousness

[Go to top]

Taxonomy of Mixes and Dummy Traffic (PDF)
by Claudia Diaz and Bart Preneel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents an analysis of mixes and dummy traffic policies, which are building blocks of anonymous services. The goal of the paper is to bring together all the issues related to the analysis and design of mix networks. We discuss continuous and pool mixes, topologies for mix networks and dummy traffic policies. We point out the advantages and disadvantages of design decisions for mixes and dummy policies. Finally, we provide a list of research problems that need further work

[Go to top]

Tor: The Second-Generation Onion Router (PDF)
by Roger Dingledine, Nick Mathewson, and Paul Syverson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication

[Go to top]

Comparison between two practical mix designs (PDF)
by Claudia Diaz, Len Sassaman, and Evelyne Dewitte.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We evaluate the anonymity provided by two popular email mix implementations, Mixmaster and Reliable, and compare their effectiveness through the use of simulations which model the algorithms used by these mixing applications. Our simulations are based on actual traffic data obtained from a public anonymous remailer (mix node). We determine that assumptions made in previous literature about the distribution of mix input traffic are incorrect: in particular, the input traffic does not follow a Poisson distribution. We establish for the first time that a lower bound exists on the anonymity of Mixmaster, and discover that under certain circumstances the algorithm used by Reliable provides no anonymity. We find that the upper bound on anonymity provided by Mixmaster is slightly higher than that provided by Reliable. We identify flaws in the software in Reliable that further compromise its ability to provide anonymity, and review key areas that are necessary for the security of a mix in addition to a sound algorithm. Our analysis can be used to evaluate under which circumstances the two mixing algorithms should be used to best achieve anonymity and satisfy their purpose. Our work can also be used as a framework for establishing a security review process for mix node deployments

[Go to top]

DUO–Onions and Hydra–Onions – Failure and Adversary Resistant Onion Protocols
by Jan Iwanik, Marek Klonowski, and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A serious weakness of the onion protocol, one of the major tools for anonymous communication, is its vulnerability to network failures and/or an adversary trying to break the communication. This is facilitated by the fact that each message is sent through a path of a certain length and a failure in a single point of this path prohibits message delivery. Since the path cannot be too short in order to offer anonymity protection (at least logarithmic in the number of nodes), the failure probability might be quite substantial. The simplest solution to this problem would be to send many onions with the same message. We show that this approach can be optimized with respect to communication overhead and resilience to failures and/or adversary attacks. We propose two protocols: the first one mimics K independent onions with a single onion. The second protocol is designed for the case where an adaptive adversary may destroy communication going out of servers chosen according to the traffic observed by him. In this case a single message flows in a stream of K onions; the main point is that even when the adversary kills some of these onions, the stream quickly recovers to the original bandwidth: again K onions with this message would flow through the network

[Go to top]

A Probabilistic Approach to Predict Peers' Performance in P2P Networks (PDF)
by Zoran Despotovic and Karl Aberer.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

The problem of encouraging trustworthy behavior in P2P online communities by managing peers' reputations has drawn a lot of attention recently. However, most of the proposed solutions exhibit the following two problems: huge implementation overhead and unclear trust-related model semantics. In this paper we show that a simple probabilistic technique, namely maximum likelihood estimation, can reduce these two problems substantially when employed as the feedback aggregation strategy. Thus, no complex exploration of the feedback is necessary. Instead, simple, intuitive and efficient probabilistic estimation methods suffice
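
A minimal sketch of maximum likelihood feedback aggregation in Python, assuming binary witness reports and a known misreporting probability (both simplifications of the paper's setting):

    def mle_trust(reports, lie_prob):
        """MLE of theta = Pr[peer behaves honestly] from binary witness
        reports (1 = 'behaved honestly'), each witness independently
        misreporting with probability lie_prob."""
        def likelihood(theta):
            p1 = theta * (1 - lie_prob) + (1 - theta) * lie_prob
            L = 1.0
            for r in reports:
                L *= p1 if r == 1 else (1 - p1)
            return L
        # One free parameter, so a simple grid search suffices.
        return max((i / 1000 for i in range(1001)), key=likelihood)

    print(mle_trust([1, 1, 0, 1, 1, 1, 0, 1], lie_prob=0.1))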

[Go to top]

Signaling and Networking in Unstructured Peer-to-Peer Networks (PDF)
by Rüdiger Schollmeier.
Dissertation, Technische Universität München, September 2004. (BibTeX entry) (Download bibtex record)
(direct link)

This work deals with the efficiency of Peer-to-Peer (P2P) networks, which are distributed and self-organizing overlay networks. We contribute to their understanding and design by using new measurement techniques, simulations and analytical methods. In this context we first present measurement methods and results of P2P networks concerning traffic and topology characteristics as well as concerning user behavior. Based on these results we develop stochastic models to describe the user behavior, the traffic and the topology of P2P networks analytically. Using the results of our measurements and analytical investigations, we develop new P2P architectures to improve the efficiency of P2P networks concerning their topology and their signaling traffic. Finally we verify our results for the new architectures by measurements as well as computer-based simulations on different levels of detail

[Go to top]

Availability, Usage, and Deployment Characteristics of the Domain Name System (PDF)
by Jeffrey Pang, James Hendricks, Aditya Akella, Bruce Maggs, Roberto De Prisco, and Srinivasan Seshan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Domain Name System (DNS) is a critical part of the Internet's infrastructure, and is one of the few examples of a robust, highly-scalable, and operational distributed system. Although a few studies have been devoted to characterizing its properties, such as its workload and the stability of the top-level servers, many key components of DNS have not yet been examined. Based on large-scale measurements taken from servers in a large content distribution network, we present a detailed study of key characteristics of the DNS infrastructure, such as load distribution, availability, and deployment patterns of DNS servers. Our analysis includes both local DNS servers and servers in the authoritative hierarchy. We find that (1) the vast majority of users use a small fraction of deployed name servers, (2) the availability of most name servers is high, and (3) there exists a larger degree of diversity in local DNS server deployment and usage than for authoritative servers. Furthermore, we use our DNS measurements to draw conclusions about federated infrastructures in general. We evaluate and discuss the impact of federated deployment models on future systems, such as Distributed Hash Tables

[Go to top]

The Decentralised Coordination of Self-Adaptive Components for Autonomic Distributed Systems (PDF)
by Jim Dowling.
Ph.D. thesis, University of Dublin, October 2004. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Fragile Mixing (PDF)
by Michael K. Reiter and XiaoFeng Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

No matter how well designed and engineered, a mix server offers little protection if its administrator can be convinced to log and selectively disclose correspondences between its input and output messages, either for profit or to cooperate with an investigation. In this paper we propose a technique, fragile mixing, to discourage an administrator from revealing such correspondences, assuming he is motivated to protect the unlinkability of other communications that flow through the mix (e.g., his own). Briefly, fragile mixing implements the property that any disclosure of an input-message-to-output-message correspondence discloses all such correspondences for that batch of output messages. We detail this technique in the context of a re-encryption mix, its integration with a mix network, and incentive and efficiency issues

[Go to top]

How to Achieve Blocking Resistance for Existing Systems Enabling Anonymous Web Surfing (PDF)
by Stefan Köpsell and Ulf Hilling.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We are developing a blocking-resistant, practical and usable system for anonymous web surfing. This means the system tries to provide as much reachability and availability as possible, even to users in countries where the free flow of information is legally, organizationally and physically restricted. The proposed solution is an add-on to existing anonymity systems. First we give a classification of blocking criteria and some general countermeasures. Using these techniques, we outline a concrete design, which is based on the JAP-Web Mixes (aka AN.ON)

[Go to top]

Location Diversity in Anonymity Networks (PDF)
by Nick Feamster and Roger Dingledine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity networks have long relied on diversity of node location for protection against attacks—typically an adversary who can observe a larger fraction of the network can launch a more effective attack. We investigate the diversity of two deployed anonymity networks, Mixmaster and Tor, with respect to an adversary who controls a single Internet administrative domain. Specifically, we implement a variant of a recently proposed technique that passively estimates the set of administrative domains (also known as autonomous systems, or ASes) between two arbitrary end-hosts without having access to either end of the path. Using this technique, we analyze the AS-level paths that are likely to be used in these anonymity networks. We find several cases in each network where multiple nodes are in the same administrative domain. Further, many paths between nodes, and between nodes and popular endpoints, traverse the same domain

[Go to top]

Minx: A simple and efficient anonymous packet format (PDF)
by George Danezis and Ben Laurie.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Minx is a cryptographic message format for encoding anonymous messages, relayed through a network of Chaumian mixes. It provides security against a passive adversary by completely hiding correspondences between input and output messages. Possibly corrupt mixes on the message path gain no information about the route length or the position of the mix on the route. Most importantly Minx resists active attackers that are prepared to modify messages in order to embed tags which they will try to detect elsewhere in the network. The proposed scheme imposes a low communication and computational overhead, and only combines well understood cryptographic primitives

[Go to top]

Parallel Mixing (PDF)
by Philippe Golle and Ari Juels.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Efforts to design faster synchronous mix networks have focused on reducing the computational cost of mixing per server. We propose a different approach: our re-encryption mixnet allows servers to mix inputs in parallel. The result is a dramatic reduction in overall mixing time for moderate-to-large numbers of servers. As measured in the model we describe, for n inputs and M servers our parallel re-encryption mixnet produces output in time at most 2n, and only around n assuming a majority of honest servers. In contrast, a traditional, sequential, synchronous re-encryption mixnet requires time Mn. Parallel re-encryption mixnets offer security guarantees comparable to those of synchronous mixnets, and in many cases only a slightly weaker guarantee of privacy. Our proposed construction is applicable to many recently proposed re-encryption mixnets, such as those of Furukawa and Sako, Neff, Jakobsson et al., and Golle and Boneh. In practice, parallel mixnets promise a potentially substantial time saving in applications such as anonymous electronic elections

[Go to top]

Private collaborative forecasting and benchmarking (PDF)
by Mikhail Atallah, Marina Bykova, Jiangtao Li, Keith Frikken, and Mercan Topkara.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Suppose a number of hospitals in a geographic area want to learn how their own heart-surgery unit is doing compared with the others in terms of mortality rates, subsequent complications, or any other quality metric. Similarly, a number of small businesses might want to use their recent point-of-sales data to cooperatively forecast future demand and thus make more informed decisions about inventory, capacity, employment, etc. These are simple examples of cooperative benchmarking and (respectively) forecasting that would benefit all participants as well as the public at large, as they would make it possible for participants to avail themselves of more precise and reliable data collected from many sources, to assess their own local performance in comparison to global trends, and to avoid many of the inefficiencies that currently arise because of having less information available for their decision-making. And yet, in spite of all these advantages, cooperative benchmarking and forecasting typically do not take place, because of the participants' unwillingness to share their information with others. Their reluctance to share is quite rational, and is due to fears of embarrassment, lawsuits, weakening their negotiating position (e.g., in case of over-capacity), revealing corporate performance and strategies, etc. The development and deployment of private benchmarking and forecasting technologies would allow such collaborations to take place without revealing any participant's data to the others, reaping the benefits of collaboration while avoiding the drawbacks. Moreover, this kind of technology would empower smaller organizations who could then cooperatively base their decisions on a much broader information base, in a way that is today restricted to only the largest corporations. This paper is a step towards this goal, as it gives protocols for forecasting and benchmarking that reveal to the participants the desired answers yet do not reveal to any participant any other participant's private data. We consider several forecasting methods, including linear regression and time series techniques such as moving average and exponential smoothing. One of the novel parts of this work, that further distinguishes it from previous work in secure multi-party computation, is that it involves floating point arithmetic, in particular it provides protocols to securely and efficiently perform division

[Go to top]

Vivaldi: a decentralized network coordinate system (PDF)
by Frank Dabek, Russ Cox, Frans M. Kaashoek, and Robert Morris.
In SIGCOMM Computer Communication Review 34, October 2004, pages 15-26. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large-scale Internet applications can benefit from an ability to predict round-trip times to other hosts without having to contact them first. Explicit measurements are often unattractive because the cost of measurement can outweigh the benefits of exploiting proximity information. Vivaldi is a simple, light-weight algorithm that assigns synthetic coordinates to hosts such that the distance between the coordinates of two hosts accurately predicts the communication latency between the hosts. Vivaldi is fully distributed, requiring no fixed network infrastructure and no distinguished hosts. It is also efficient: a new host can compute good coordinates for itself after collecting latency information from only a few other hosts. Because it requires little communication, Vivaldi can piggy-back on the communication patterns of the application using it and scale to a large number of hosts. An evaluation of Vivaldi using a simulated network whose latencies are based on measurements among 1740 Internet hosts shows that a 2-dimensional Euclidean model with height vectors embeds these hosts with low error (the median relative error in round-trip time prediction is 11 percent)

[Go to top]

Measuring Anonymity Revisited (PDF)
by Gergely Tóth, Zoltán Hornák, and Ferenc Vajda.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous message transmission systems are the building blocks of several high-level anonymity services (e.g. e-payment, e-voting). Therefore, it is essential to give a theoretically based but also practically usable objective numerical measure for the provided level of anonymity. In this paper two entropy-based anonymity measures will be analyzed and some shortcomings of these methods will be highlighted. Finally, source- and destination-hiding properties will be introduced for so-called local anonymity, an aspect reflecting the point of view of the users
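
The entropy-based measures discussed here are easy to state concretely; a short Python sketch of the standard Shannon-entropy metric (not the paper's source-/destination-hiding refinements):

    import math

    def anonymity(probabilities):
        """Entropy-based anonymity: H in bits of the attacker's distribution
        over candidate senders, and the 'effective set size' 2**H."""
        H = -sum(p * math.log2(p) for p in probabilities if p > 0)
        return H, 2 ** H

    print(anonymity([1/8] * 8))                 # uniform: (3.0, 8.0)
    print(anonymity([0.9] + [0.1 / 7] * 7))     # skewed: far less anonymity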

[Go to top]

The Predecessor Attack: An Analysis of a Threat to Anonymous Communications Systems (PDF)
by Matthew Wright, Micah Adler, Brian Neil Levine, and Clay Shields.
In ACM Transactions on Information and System Security (TISSEC) 7(7), November 2004, pages 489-522. (BibTeX entry) (Download bibtex record)
(direct link) (website)

There have been a number of protocols proposed for anonymous network communication. In this paper, we investigate attacks by corrupt group members that degrade the anonymity of each protocol over time. We prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, Web Mixes, and DC-Net, can maintain anonymity in the face of the attacks described. This provides a basis for comparing these protocols against each other. Our results show that fully connected DC-Net is the most resilient to these attacks, but it suffers from scalability issues that keep anonymity group sizes small. We also show through simulation that the underlying topography of the DC-Net affects the resilience of the protocol: as the number of neighbors a node has increases the strength of the protocol increases, at the cost of higher communication overhead

[Go to top]

A survey of peer-to-peer content distribution technologies (PDF)
by Stephanos Androutsellis-Theotokis and Diomidis Spinellis.
In ACM Computing Surveys 36, December 2004, pages 335-371. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed computer architectures labeled "peer-to-peer" are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in—and affected by—the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes

[Go to top]

2005

ABSTRACT Network Coding for Efficient Communication in Extreme Networks (PDF)
by Jörg Widmer.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Some forms of ad-hoc networks need to operate in extremely performance-challenged environments where end-to-end connectivity is rare. Such environments can be found for example in very sparse mobile networks where nodes meet only occasionally and are able to exchange information, or in wireless sensor networks where nodes sleep most of the time to conserve energy. Forwarding mechanisms in such networks usually resort to some form of intelligent flooding, as for example in probabilistic routing. We propose a communication algorithm that significantly reduces the overhead of probabilistic routing algorithms, making it a suitable building block for a delay-tolerant network architecture. Our forwarding scheme is based on network coding. Nodes do not simply forward packets they overhear but may send out information that is coded over the contents of several packets they received. We show by simulation that this algorithm achieves the reliability and robustness of flooding at a small fraction of the overhead
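
A sketch of the coding step in Python over GF(2), where forwarded packets are random XOR combinations of buffered packets tagged with their coefficient vectors; decoding (not shown) is Gaussian elimination once enough independent combinations arrive. Names are illustrative:

    import random

    def encode(packets):
        """Forwarders emit random XOR combinations of buffered packets over
        GF(2), tagged with the coefficient vector that identifies which
        packets were combined; equal-length packets are assumed."""
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):                      # avoid the all-zero combination
            coeffs[random.randrange(len(coeffs))] = 1
        payload = bytes(len(packets[0]))
        for c, pkt in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, pkt))
        return coeffs, payload

    # A receiver reconstructs the originals once it has collected enough
    # linearly independent (coeffs, payload) pairs.
    coeffs, payload = encode([b"abcd", b"efgh", b"ijkl"])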

[Go to top]

Architecture and evaluation of an unplanned 802.11b mesh network (PDF)
by John Bicket, Daniel Aguayo, Sanjit Biswas, and Robert Morris.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper evaluates the ability of a wireless mesh architecture to provide high performance Internet access while demanding little deployment planning or operational management. The architecture considered in this paper has unplanned node placement (rather than planned topology), omni-directional antennas (rather than directional links), and multi-hop routing (rather than single-hop base stations). These design decisions contribute to ease of deployment, an important requirement for community wireless networks. However, this architecture carries the risk that lack of planning might render the network's performance unusably low. For example, it might be necessary to place nodes carefully to ensure connectivity; the omni-directional antennas might provide uselessly short radio ranges; or the inefficiency of multi-hop forwarding might leave some users effectively disconnected. The paper evaluates this unplanned mesh architecture with a case study of the Roofnet 802.11b mesh network. Roofnet consists of 37 nodes spread over four square kilometers of an urban area. The network provides users with usable performance despite lack of planning: the average inter-node throughput is 627 kbits/second, even though the average route has three hops. The paper evaluates multiple aspects of the architecture: the effect of node density on connectivity and throughput; the characteristics of the links that the routing protocol elects to use; the usefulness of the highly connected mesh afforded by omni-directional antennas for robustness and throughput; and the potential performance of a single-hop network using the same nodes as Roofnet

[Go to top]

BAR fault tolerance for cooperative services (PDF)
by Amitanand S. Aiyer, Lorenzo Alvisi, Allen Clement, Mike Dahlin, Jean-Philippe Martin, and Carl Porth.
In SIGOPS Oper. Syst. Rev 39(5), 2005, pages 45-58. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes a general approach to constructing cooperative services that span multiple administrative domains. In such environments, protocols must tolerate both Byzantine behaviors, in which broken, misconfigured, or malicious nodes arbitrarily deviate from their specification, and rational behaviors, in which selfish nodes deviate from their specification to increase their local benefit. The paper makes three contributions: (1) It introduces the BAR (Byzantine, Altruistic, Rational) model as a foundation for reasoning about cooperative services; (2) It proposes a general three-level architecture to reduce the complexity of building services under the BAR model; and (3) It describes an implementation of BAR-B, the first cooperative backup service to tolerate both Byzantine users and an unbounded number of rational users. At the core of BAR-B is an asynchronous replicated state machine that provides the customary safety and liveness guarantees despite nodes exhibiting both Byzantine and rational behaviors. Our prototype provides acceptable performance for our application: our BAR-tolerant state machine executes 15 requests per second, and our BAR-B backup service can back up 100MB of data in under 4 minutes

[Go to top]

Boundary Chord: A Novel Peer-to-Peer Algorithm for Replica Location Mechanism in Grid Environment
by Hai Jin, Chengwei Wang, and Hanhua Chen.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The emerging grids need an efficient replica location mechanism. In the experience of developing ChinaGrid Supporting Platform (CGSP), a grid middleware that builds a uniform platform supporting multiple grid-based applications, we met the challenge of utilizing the properties of locality in the replica location process to construct a practical and high-performance replica location mechanism. The key to meeting this challenge is to design an efficient replica location algorithm that satisfies the above requirements. Some previous work has been done on replica location mechanisms, but it is not suitable for replica location in a grid environment with multiple applications like ChinaGrid. In this paper, we present a novel peer-to-peer algorithm for the replica location mechanism, Boundary Chord, which has the merits of locality awareness, self-organization, and load balancing. Simulation results show that the algorithm has better performance than other structured peer-to-peer solutions to the replica location problem

[Go to top]

Capacity-achieving ensembles for the binary erasure channel with bounded complexity (PDF)
by Henry D. Pfister, Igal Sason, and Rüdiger L. Urbanke.
In IEEE TRANS. INFORMATION THEORY 51(7), 2005, pages 2352-2379. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present two sequences of ensembles of nonsystematic irregular repeat–accumulate (IRA) codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded complexity result is achieved by puncturing bits, and allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric (MBIOS) channel and is refined for the binary erasure channel
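
Restated in symbols (our notation, not necessarily the paper's): a BEC with erasure probability $p$ has capacity $C = 1 - p$. For ensembles operating at rate $R = (1-\varepsilon)C$, previously known capacity-achieving constructions need per-information-bit encoding/decoding complexity

    \chi(\varepsilon) \;=\; \Theta\!\left(\log\frac{1}{\varepsilon}\right),

whereas the punctured nonsystematic IRA ensembles of this paper keep $\chi(\varepsilon) = O(1)$ as the gap $\varepsilon \to 0$.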

[Go to top]

Cashmere: Resilient anonymous routing (PDF)
by Li Zhuang, Feng Zhou, Ben Y. Zhao, and Antony Rowstron.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous routing protects user communication from identification by third-party observers. Existing anonymous routing layers utilize Chaum-Mixes for anonymity by relaying traffic through relay nodes called mixes. The source defines a static forwarding path through which traffic is relayed to the destination. The resulting path is fragile and short-lived: failure of one mix in the path breaks the forwarding path and results in data loss and jitter before a new path is constructed. In this paper, we propose Cashmere, a resilient anonymous routing layer built on a structured peer-to-peer overlay. Instead of single-node mixes, Cashmere selects regions in the overlay namespace as mixes. Any node in a region can act as the mix, drastically reducing the probability of a mix failure. We analyze Cashmere's anonymity and measure its performance through simulation and measurements, and show that it maintains high anonymity while providing orders of magnitude improvement in resilience to network dynamics and node failures

[Go to top]

Characterization and measurement of tcp traversal through nats and firewalls (PDF)
by Saikat Guha and Paul Francis.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In recent years, the standards community has developed techniques for traversing NAT/firewall boxes with UDP (that is, establishing UDP flows between hosts behind NATs). Because of the asymmetric nature of TCP connection establishment, however, NAT traversal of TCP is more difficult. Researchers have recently proposed a variety of promising approaches for TCP NAT traversal. The success of these approaches, however, depends on how NAT boxes respond to various sequences of TCP (and ICMP) packets. This paper presents the first broad study of NAT behavior for a comprehensive set of TCP NAT traversal techniques over a wide range of commercial NAT products. We developed a publicly available software test suite that measures the NAT's responses both to a variety of isolated probes and to complete TCP connection establishments. We test sixteen NAT products in the lab, and 93 home NATs in the wild. Using these results, as well as market data for NAT products, we estimate the likelihood of successful NAT traversal for home networks. The insights gained from this paper can be used to guide both design of TCP NAT traversal protocols and the standardization of NAT/firewall behavior, including the IPv4-IPv6 translating NATs critical for IPv6 transition
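
One of the traversal techniques whose NAT-dependence the authors measure is TCP "simultaneous open". The fragment below is a minimal, hedged sketch of the endpoint side; the rendezvous step through which the peers learn each other's public endpoints is assumed and not shown, and whether the crossing SYNs survive depends on exactly the NAT behavior the paper's test suite probes:

    import socket

    def punch(local_port, peer_addr, timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Reuse the local port used with the rendezvous server, so the
        # NAT mapping created earlier stays valid for this connection.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", local_port))
        s.settimeout(timeout)
        try:
            s.connect(peer_addr)   # our SYN opens/refreshes the mapping;
            return s               # if the peer's SYN crosses ours, both succeed
        except OSError:
            s.close()
            return None            # this NAT pair rejects the technique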

[Go to top]

Compact E-Cash (PDF)
by Jan Camenisch, Susan Hohenberger, and Anna Lysyanskaya.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper presents efficient off-line anonymous e-cash schemes where a user can withdraw a wallet containing 2^l coins each of which she can spend unlinkably. Our first result is a scheme, secure under the strong RSA and the y-DDHI assumptions, where the complexity of the withdrawal and spend operations is O(l+k) and the user's wallet can be stored using O(l+k) bits, where k is a security parameter. The best previously known schemes require at least one of these complexities to be O(2^l k). In fact, compared to previous e-cash schemes, our whole wallet of 2^l coins has about the same size as one coin in these schemes. Our scheme also offers exculpability of users, that is, the bank can prove to third parties that a user has double-spent. We then extend our scheme to our second result, the first e-cash scheme that provides traceable coins without a trusted third party. That is, once a user has double spent one of the 2^l coins in her wallet, all her spendings of these coins can be traced. We present two alternate constructions. One construction shares the same complexities with our first result but requires a strong bilinear map assumption that is only conjectured to hold on MNT curves. The second construction works on more general types of elliptic curves, but the price for this is that the complexity of the spending and of the withdrawal protocols becomes O(lk) and O(lk + k^2) bits, respectively, and wallets take O(lk) bits of storage. All our schemes are secure in the random oracle model

[Go to top]

Correctness of a gossip based membership protocol (PDF)
by Andre Allavena, Alan Demers, and John E. Hopcroft.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Deep Store: An archival storage system architecture (PDF)
by Lawrence L. You, Kristal T. Pollack, and Darrell D. E. Long.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We present the Deep Store archival storage architecture, a large-scale storage system that stores immutable data efficiently and reliably for long periods of time. Archived data is stored across a cluster of nodes and recorded to hard disk. The design differentiates itself from traditional file systems by eliminating redundancy within and across files, distributing content for scalability, associating rich metadata with content, and using variable levels of replication based on the importance or degree of dependency of each piece of stored data. We evaluate the foundations of our design, including PRESIDIO, a virtual content-addressable storage framework with multiple methods for inter-file and intra-file compression that effectively addresses the data-dependent variability of data compression. We measure content and metadata storage efficiency, demonstrate the need for a variable-degree replication model, and provide preliminary results for storage performance
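
The content-addressable idea at the heart of PRESIDIO can be sketched compactly (our toy simplification: fixed-size chunking, no compression, no replication policy, invented names). Chunks are stored under the hash of their content, so identical data within or across files is kept exactly once:

    import hashlib

    class CASStore:
        def __init__(self):
            self.chunks = {}                      # digest -> chunk bytes

        def put(self, data, chunk_size=4096):
            digests = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                d = hashlib.sha256(chunk).hexdigest()
                self.chunks.setdefault(d, chunk)  # duplicates stored once
                digests.append(d)
            return digests                        # the "recipe" for the file

        def get(self, digests):
            return b"".join(self.chunks[d] for d in digests)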

[Go to top]

Detecting BGP configuration faults with static analysis (PDF)
by Nick Feamster and Hari Balakrishnan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The Internet is composed of many independent autonomous systems (ASes) that exchange reachability information to destinations using the Border Gateway Protocol (BGP). Network operators in each AS configure BGP routers to control the routes that are learned, selected, and announced to other routers. Faults in BGP configuration can cause forwarding loops, packet loss, and unintended paths between hosts, each of which constitutes a failure of the Internet routing infrastructure. This paper describes the design and implementation of rcc, the router configuration checker, a tool that finds faults in BGP configurations using static analysis. rcc detects faults by checking constraints that are based on a high-level correctness specification. rcc detects two broad classes of faults: route validity faults, where routers may learn routes that do not correspond to usable paths, and path visibility faults, where routers may fail to learn routes for paths that exist in the network. rcc enables network operators to test and debug configurations before deploying them in an operational network, improving on the status quo where most faults are detected only during operation. rcc has been downloaded by more than sixty-five network operators to date, some of whom have shared their configurations with us. We analyze network-wide configurations from 17 different ASes to detect a wide variety of faults and use these findings to motivate improvements to the Internet routing infrastructure

[Go to top]

Distributed Hash Tables (PDF)
by Klaus Wehrle, Stefan Götz, and Simon Rieche.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

In the last few years, an increasing number of massively distributed systems with millions of participants has emerged within very short time frames. Applications, such as instant messaging, file-sharing, and content distribution have attracted countless numbers of users. For example, Skype gained more than 2.5 million users within twelve months, and more than 50% of Internet traffic originates from BitTorrent. These very large and still rapidly growing systems attest to a new era for the design and deployment of distributed systems. In particular, they reflect what the major challenges are today for designing and implementing distributed systems: scalability, flexibility, and instant deployment

[Go to top]

An empirical study of free-riding behavior in the maze p2p file-sharing system (PDF)
by Mao Yang, Zheng Zhang, Xiaoming Li, and Yafei Dai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Erasure-coding based routing for opportunistic networks (PDF)
by Yong Wang, Sushant Jain, Margaret Martonosi, and Kevin Fall.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Routing in delay-tolerant networks (DTNs) with unpredictable node mobility is a challenging problem because disconnections are prevalent and lack of knowledge about network dynamics hinders good decision making. Current approaches are primarily based on redundant transmissions. They have either high overhead due to excessive transmissions or long delays due to the possibility of making wrong choices when forwarding a few redundant copies. In this paper, we propose a novel forwarding algorithm based on the idea of erasure codes. Erasure coding allows use of a large number of relays while maintaining a constant overhead, which results in fewer cases of long delays. We use simulation to compare the routing performance of using erasure codes in DTN with four other categories of forwarding algorithms proposed in the literature. Our simulations are based on a real-world mobility trace collected in a large outdoor wild-life environment. The results show that the erasure-coding based algorithm provides the best worst-case delay performance with a fixed amount of overhead. We also present a simple analytical model to capture the delay characteristics of erasure-coding based forwarding, which provides insights on the potential of our approach
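
A back-of-the-envelope calculation (our simplification, not the paper's trace-driven setup) shows the effect of spreading erasure-coded fragments instead of full copies at identical overhead. Assume the message is coded into n fragments, any k of which suffice to reconstruct it, and each fragment travels via an independent relay that delivers with probability p:

    from math import comb

    def delivery_prob(n, k, p):
        """P(at least k of n independently carried fragments arrive)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.7
    print(delivery_prob(2, 1, p))    # 2 full copies over 2 relays:    0.91
    print(delivery_prob(16, 8, p))   # 16 fragments, any 8 suffice: ~0.97

At the same 2x overhead, coding cuts the failure probability from 9% to under 3% in this toy model; the paper makes the analogous point for worst-case delay on real mobility traces.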

[Go to top]

Exploiting co-location history for efficient service selection in ubiquitous computing systems
by Alexandros Karypidis and Spyros Lalis.
In Mobile and Ubiquitous Systems, Annual International Conference on, 2005, pages 202-212. (BibTeX entry) (Download bibtex record)
(direct link) (website)

As the ubiquitous computing vision materializes, the number and diversity of digital elements in our environment increases. Computing capability comes in various forms and is embedded in different physical objects, ranging from miniature devices such as human implants and tiny sensor particles, to large constructions such as vehicles and entire buildings. The number of possible interactions among such elements, some of which may be invisible or offer similar functionality, is growing fast so that it becomes increasingly hard to combine or select between them. Mechanisms are thus required for intelligent matchmaking that will achieve controlled system behavior, yet without requiring the user to continuously input desirable options in an explicit manner. In this paper we argue that information about the co-location relationship of computing elements is quite valuable in this respect and can be exploited to guide automated service selection with minimal or no user involvement. We also discuss the implementation of such a mechanism, which is part of our runtime system for smart objects

[Go to top]

The Feasibility of DHT-based Streaming Multicast (PDF)
by Stefan Birrer and Fabian E. Bustamante.
In IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2005, pages 288-298. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Finding Collisions in the Full SHA-1 (PDF)
by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present new collision search attacks on the hash function SHA-1. We show that collisions of SHA-1 can be found with complexity less than 2^69 hash operations. This is the first attack on the full 80-step SHA-1 with complexity less than the 2^80 theoretical bound

[Go to top]

First and Second Generation of Peer-to-Peer Systems
by Jörg Eberspächer and Rüdiger Schollmeier.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-Peer (P2P) networks appeared roughly around the year 2000 when a broadband Internet infrastructure (even at the network edge) became widely available. Unlike traditional networks, Peer-to-Peer networks do not rely on a specific infrastructure offering transport services. Instead they form overlay structures focusing on content allocation and distribution based on TCP or HTTP connections. Whereas in a standard Client-Server configuration content is stored and provided only via some central server(s), Peer-to-Peer networks are highly decentralized: they locate desired content at some participating peer and provide the corresponding IP address of that peer to the searching peer. The download of that content is then initiated using a separate connection, often using HTTP. Thus, the high load usually resulting for a central server and its surrounding network is avoided, leading to a more even distribution of load on the underlying physical network. On the other hand, such networks are typically subject to frequent changes because peers join and leave the network without any central control

[Go to top]

Fixing the embarrassing slowness of OpenDHT on PlanetLab (PDF)
by S. Rhea, B.G. Chun, J. Kubiatowicz, and S. Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

[Go to top]

Heterogeneity and Load Balance in Distributed Hash Tables (PDF)
by Brighten Godfrey and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing solutions to balance load in DHTs incur a high overhead either in terms of routing state or in terms of load movement generated by nodes arriving or departing the system. In this paper, we propose a set of general techniques and use them to develop a protocol based on Chord, called Y0, that achieves load balancing with minimal overhead under the typical assumption that the load is uniformly distributed in the identifier space. In particular, we prove that Y0 can achieve near-optimal load balancing, while moving little load to maintain the balance and increasing the size of the routing tables by at most a constant factor

[Go to top]

The Hybrid Chord Protocol: A Peer-to-peer Lookup Service for Context-Aware Mobile Applications (PDF)
by Stefan Zöls, Rüdiger Schollmeier, Wolfgang Kellerer, and Anthony Tarlano.
In IEEE ICN 2005, Reunion Island, April 2005, LNCS 3421. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A fundamental problem in Peer-to-Peer (P2P) overlay networks is how to efficiently find a node that shares a requested object. The Chord protocol is a distributed lookup protocol addressing this problem using hash keys to identify the nodes in the network and also the shared objects. However, when a node joins or leaves the Chord ring, object references have to be rearranged in order to maintain the hash key mapping rules. This leads to a heavy traffic load, especially when nodes stay in the Chord ring only for a short time. In mobile scenarios, storage capacity, transmission data rate, and battery power are limited resources, so the heavy traffic load generated by the shifting of object references can lead to severe problems when using Chord in a mobile scenario. In this paper, we present the Hybrid Chord Protocol (HCP). HCP solves the problem of frequent joins and leaves of nodes. As a further improvement towards efficient search, HCP supports the grouping of shared objects in interest groups. Our concept of using information profiles to describe shared objects allows special interest groups (context spaces) to be defined and a shared object to be available in multiple context spaces
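
For reference, the Chord mapping that HCP builds on can be sketched in a few lines (toy identifier space, invented names; HCP's interest groups and context spaces are not modeled). A key is stored at its successor, the first node identifier at or after the key on the ring, so a join or leave only moves the keys in the adjacent arc:

    import hashlib
    from bisect import bisect_left

    M = 2**16                               # toy identifier space

    def h(name):
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

    class Ring:
        def __init__(self, nodes):
            self.ids = sorted(h(n) for n in nodes)

        def successor(self, key):
            i = bisect_left(self.ids, h(key))
            return self.ids[i % len(self.ids)]   # wrap around the ring

    ring = Ring(["nodeA", "nodeB", "nodeC"])
    print(ring.successor("some-shared-object"))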

[Go to top]

Hydra: a platform for survivable and secure data storage systems (PDF)
by Lihao Xu.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper introduces Hydra, a platform that we are developing for highly survivable and secure data storage systems that distribute information over networks and adapt timely to environment changes, enabling users to store and access critical data in a continuously available and highly trustable fashion. The Hydra platform uses MDS array codes that can be encoded and decoded efficiently for distributing and recovering user data. Novel uses of MDS array codes in Hydra are discussed, as well as Hydra's design goals, general structures and a set of basic operations on user data. We also explore Hydra's applications in survivable and secure data storage systems

[Go to top]

Impacts of packet scheduling and packet loss distribution on FEC Performances: observations and recommendations (PDF)
by Christoph Neumann, Aurélien Francillon, and David Furodet.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Forward Error Correction (FEC) is commonly used for content broadcasting. The performance of FEC codes varies largely, depending in particular on the code used and on the object size; these parameters have already been studied in detail by the community. However, FEC performance also depends largely on the packet scheduling used during transmission and on the loss pattern introduced by the channel. Little attention has been devoted to these aspects so far. Therefore the present paper analyzes their impact on three FEC codes: LDGM Staircase and LDGM Triangle, two large block codes, and Reed-Solomon. Thanks to this analysis, we define several recommendations on how to best use these codes, depending on the test case and on the channel, which turns out to be of utmost importance

[Go to top]

Improving delivery ratios for application layer multicast in mobile ad hoc networks (PDF)
by Peter Baumung, Martina Zitterbart, and Kendy Kutzner.
In Comput. Commun 28(14), 2005, pages 1669-1679. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Delivering multicast data using application layer approaches offers different advantages, as group members communicate using so-called overlay networks. These consist of a multicast group's members connected by unicast tunnels. Since existing approaches for application layer delivery of multicast data in mobile ad hoc networks (MANETs for short) only deal with routing but not with error recovery, this paper evaluates tailored mechanisms for handling packet losses and congested networks. Although illustrated with the example of a specific protocol, the mechanisms may be applied to arbitrary overlays. This paper also investigates how application layer functionality based on overlay networks can turn existing multicast routing protocols (like ODMRP, M-AODV, ...) into (almost) reliable transport protocols

[Go to top]

ISPRP: A Message-Efficient Protocol for Initializing Structured P2P Networks (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most research activities in the field of peer-to-peer (P2P) computing are concerned with routing in virtualized overlay networks. These overlays generally assume node connectivity to be provided by an underlying network-layer routing protocol. This duplication of functionality can give rise to severe inefficiencies. In contrast, we suggest a cross-layer approach where the P2P overlay network also provides the required network-layer routing functionality by itself. Especially in sensor networks, where special attention has to be paid to the nodes' limited capabilities, this can greatly help in reducing the message overhead. In this paper, we present a key building block for such a protocol, the iterative successor pointer rewiring protocol (ISPRP), which efficiently initializes a P2P routing network among a freshly deployed set of nodes having but link-layer connectivity. ISPRP works in a fully self-organizing way and issues only a small per-node amount of messages by keeping interactions between nodes as local as possible

[Go to top]

On lifetime-based node failure and stochastic resilience of decentralized peer-to-peer networks (PDF)
by Derek Leonard, Vivek Rai, and Dmitri Loguinov.
In SIGMETRICS Perform. Eval. Rev 33(1), 2005, pages 26-37. (BibTeX entry) (Download bibtex record)
(direct link) (website)

To understand how high rates of churn and random departure decisions of end-users affect connectivity of P2P networks, this paper investigates resilience of random graphs to lifetime-based node failure and derives the expected delay before a user is forcefully isolated from the graph and the probability that this occurs within his/her lifetime. Our results indicate that systems with heavy-tailed lifetime distributions are more resilient than those with light-tailed (e.g., exponential) distributions and that for a given average degree, k-regular graphs exhibit the highest resilience. As a practical illustration of our results, each user in a system with n = 100 billion peers, 30-minute average lifetime, and 1-minute node-replacement delay can stay connected to the graph with probability 1 - 1/n using only 9 neighbors. This is in contrast to 37 neighbors required under previous modeling efforts. We finish the paper by showing that many P2P networks are almost surely (i.e., with probability 1-o(1)) connected if they have no isolated nodes and derive a simple model for the probability that a P2P system partitions under churn

[Go to top]

Location Awareness in Unstructured Peer-to-Peer Systems
by Yunhao Liu, Li Xiao, Xiaomei Liu, Lionel M. Ni, and Xiaodong Zhang.
In IEEE Trans. Parallel Distrib. Syst 16(2), 2005, pages 163-174. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-Peer (P2P) computing has emerged as a popular model aiming at further utilizing Internet information and resources. However, the mechanism of peers randomly choosing logical neighbors without any knowledge about underlying physical topology can cause a serious topology mismatch between the P2P overlay network and the physical underlying network. The topology mismatch problem places great stress on the Internet infrastructure. It greatly limits the performance gain from various search or routing techniques. Meanwhile, due to the inefficient overlay topology, the flooding-based search mechanisms cause a large volume of unnecessary traffic. Aiming at alleviating the mismatch problem and reducing the unnecessary traffic, we propose a location-aware topology matching (LTM) technique. LTM builds an efficient overlay by disconnecting slow connections and choosing physically closer nodes as logical neighbors while still retaining the search scope and reducing response time for queries. LTM is scalable and completely distributed in the sense that it does not require any global knowledge of the whole overlay network. The effectiveness of LTM is demonstrated through simulation studies

[Go to top]

Making chord robust to byzantine attacks (PDF)
by Amos Fiat, Jared Saia, and Maxwell Young.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Chord is a distributed hash table (DHT) that requires only O(log n) links per node and performs searches with latency and message cost O(log n), where n is the number of peers in the network. Chord assumes all nodes behave according to protocol. We give a variant of Chord which is robust with high probability for any time period during which: 1) there are always at least z total peers in the network for some integer z; 2) there are never more than (1/4 - ε)z Byzantine peers in the network for a fixed ε > 0; and 3) the number of peer insertion and deletion events is no more than zk for some tunable parameter k. We assume there is an adversary controlling the Byzantine peers and that the IP-addresses of all the Byzantine peers and the locations where they join the network are carefully selected by this adversary. Our notion of robustness is rather strong in that we not only guarantee that searches can be performed but also that we can enforce any set of proper behavior such as contributing new material, etc. In comparison to Chord, the resources required by this new variant are only a polylogarithmic factor greater in communication, messaging, and linking costs

[Go to top]

Measuring Large Overlay Networks–The Overnet Example (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Peer-to-peer overlay networks have grown significantly in size and sophistication over the last years. Meanwhile, distributed hash tables (DHT) provide efficient means to create global scale overlay networks on top of which various applications can be built. Although file-sharing is still the most prominent example, other applications are well conceivable. In order to rationally design such applications, it is important to know (and understand) the properties of the overlay networks as seen from the respective application. This paper reports the results from a two week measurement of the entire Overnet network, the currently most widely deployed DHT-based overlay. We describe both the design choices that made that measurement feasible and the results from the measurement itself. Besides the basic determination of network size, node availability, and node distribution, we found unexpected results for the overlay latency distribution

[Go to top]

Non-transitive connectivity and DHTs (PDF)
by Michael J. Freedman, Karthik Lakshminarayanan, Sean C. Rhea, and Ion Stoica.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The most basic functionality of a distributed hash table, or DHT, is to partition a key space across the set of nodes in a distributed system such that all nodes agree on the partitioning. For example, the Chord DHT assigns each node

[Go to top]

OpenDHT: a public DHT service and its uses (PDF)
by unknown.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

A platform for lab exercises in sensor networks (PDF)
by Thomas Fuhrmann and Till Harbaum.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Programming of and experiences with sensor network nodes are about to enter the curricula of technical universities. Often however, practical obstacles complicate the implementation of a didactic concept. In this paper we present our approach that uses a Java virtual machine to decouple experiments with algorithm and protocol concepts from the odds of embedded system programming. This concept enables students to load Java classes via an SD-card into a sensor node. An LC display provides detailed information if the program aborts due to bugs

[Go to top]

Privacy Practices of Internet Users: Self-reports Versus Observed Behavior (PDF)
by Carlos Jensen, Colin Potts, and Christian Jensen.
In Int. J. Hum.-Comput. Stud 63, 2005, pages 203-227. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several recent surveys conclude that people are concerned about privacy and consider it to be an important factor in their online decision making. This paper reports on a study in which (1) user concerns were analysed more deeply and (2) what users said was contrasted with what they did in an experimental e-commerce scenario. Eleven independent variables were shown to affect the online behavior of at least some groups of users. Most significant were trust marks present on web pages and the existence of a privacy policy, though users seldom consulted the policy when one existed. We also find that many users have inaccurate perceptions of their own knowledge about privacy technology and vulnerabilities, and that important user groups, like those similar to the Westin "privacy fundamentalists", do not appear to form a cohesive group for privacy-related decision making. In this study we adopt an experimental economic research paradigm, a method for examining user behavior which challenges the current emphasis on survey data. We discuss these issues and the implications of our results on user interpretation of trust marks and interaction design. Although broad policy implications are beyond the scope of this paper, we conclude by questioning the application of the ethical/legal doctrine of informed consent to online transactions in the light of the evidence that users frequently do not consult privacy policies

[Go to top]

Privacy-Preserving Set Operations (PDF)
by Lea Kissner and Dawn Song.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In many important applications, a collection of mutually distrustful parties must perform private computation over multisets. Each party's input to the function is his private input multiset. In order to protect these private sets, the players perform privacy-preserving computation; that is, no party learns more information about other parties' private input sets than what can be deduced from the result. In this paper, we propose efficient techniques for privacy-preserving operations on multisets. By building a framework of multiset operations, employing the mathematical properties of polynomials, we design efficient, secure, and composable methods to enable privacy-preserving computation of the union, intersection, and element reduction operations. We apply these techniques to a wide range of practical problems, achieving more efficient results than those of previous work
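
The algebraic core is easy to demonstrate in the clear; the actual protocols additionally keep the coefficients under additively homomorphic encryption so that no party sees the other's polynomial. In the sketch below (our illustration), a multiset becomes the polynomial whose roots are its elements, and polynomial multiplication then realizes multiset union:

    def times_root(p, a):
        """Multiply polynomial p (coefficients, lowest degree first) by (x - a)."""
        q = [0] * (len(p) + 1)
        for i, c in enumerate(p):
            q[i + 1] += c        # x * c x^i
            q[i] -= a * c        # -a * c x^i
        return q

    def poly_from_multiset(elems):
        p = [1]
        for a in elems:
            p = times_root(p, a)
        return p

    def poly_mul(p, q):
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r

    # Union of {1, 2, 2} and {2, 3}: the product polynomial has exactly
    # the roots {1, 2, 2, 2, 3}, preserving multiplicities.
    u = poly_mul(poly_from_multiset([1, 2, 2]), poly_from_multiset([2, 3]))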

[Go to top]

On Private Scalar Product Computation for Privacy-Preserving Data Mining (PDF)
by Bart Goethals, Sven Laur, Helger Lipmaa, and Taneli Mielikäinen.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In mining and integrating data from multiple sources, there are many privacy and security issues. In several different contexts, the security of the full privacy-preserving data mining protocol depends on the security of the underlying private scalar product protocol. We show that two of the private scalar product protocols, one of which was proposed in a leading data mining conference, are insecure. We then describe a provably private scalar product protocol that is based on homomorphic encryption and improve its efficiency so that it can also be used on massive datasets
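
A protocol of the provably private kind the authors advocate can be sketched with Paillier encryption, whose additive homomorphism lets Bob compute an encryption of the dot product without ever decrypting. The sketch uses toy 11-bit primes and is utterly insecure, for illustration only; it needs Python 3.9+ for math.lcm and the modular inverse via pow:

    import math, random

    p, q = 1789, 1913                  # toy primes; real use: ~1024-bit primes
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1

    def enc(m):
        while True:
            r = random.randrange(2, n)
            if math.gcd(r, n) == 1:
                break
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def dec(c):
        return L(pow(c, lam, n2)) * mu % n

    x = [3, 1, 4]                      # Alice's private vector
    y = [2, 7, 1]                      # Bob's private vector
    cts = [enc(xi) for xi in x]        # Alice -> Bob: E(x_i)
    acc = 1
    for c, yi in zip(cts, y):          # Bob: prod E(x_i)^{y_i} = E(<x, y>)
        acc = acc * pow(c, yi, n2) % n2
    print(dec(acc))                    # Alice decrypts: 3*2 + 1*7 + 4*1 = 17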

[Go to top]

Proximity Neighbor Selection for a DHT in Wireless Multi-Hop Networks (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

A mobile ad hoc network (MANET) is a multi-hop wireless network having no infrastructure. Thus, the mobile nodes have to perform basic control tasks, such as routing, and higher-level tasks, such as service discovery, in a cooperative and distributed way. Originally conceived as a peer-to-peer application for the Internet, distributed hash tables (DHTs) are data structures offering both scalable routing and a convenient abstraction for the design of applications in large, dynamic networks. Hence, DHTs and MANETs seem to be a good match, as both have to cope with dynamic, self-organizing networks. DHTs form a virtual control structure oblivious to the underlying network. Several techniques to improve the performance of DHTs in wired networks have been established in the literature. A particularly efficient one is proximity neighbor selection (PNS). PNS has to continuously adapt the virtual network to the physical network, incurring control traffic. The applicability of PNS and DHTs to MANETs is commonly regarded as hard because of this control traffic, the complexity of the adaptation algorithms, and the dynamics of a MANET. Using simulations supported by analytical methods, we show that by making a minor addition to PNS, it is also applicable for MANETs. We additionally show that the specifics of a MANET make PNS an easy exercise there. Thus, DHTs deliver good performance in MANETs
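
The selection rule at the heart of PNS is tiny; what the paper analyzes is the cost of keeping the latency measurements current in a mobile network. A minimal sketch with invented names:

    # Among all nodes that are equally valid for a routing-table slot
    # (e.g., sharing the required identifier prefix), keep the one with
    # the lowest measured latency; in a MANET this must be re-probed
    # periodically as the topology changes. measure_rtt stands in for a
    # real probe such as an application-level ping.
    def select_neighbor(candidates, measure_rtt):
        """Return the candidate with the smallest round-trip time."""
        return min(candidates, key=measure_rtt)

    # Example: routing_table[slot] = select_neighbor(valid_for_slot, ping)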

[Go to top]

A Random Walk Based Anonymous Peer-to-Peer Protocol Design
by Jinsong Han, Yunhao Liu, Li Lu, Lei Hu, and Abhishek Patil.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity has been one of the most challenging issues in ad hoc environments such as P2P systems. In this paper, we propose an anonymous protocol called Random Walk based Anonymous Protocol (RWAP) for decentralized P2P systems. We evaluate RWAP by comprehensive trace-driven simulations. Results show that RWAP significantly reduces traffic cost and encryption overhead compared with existing approaches

[Go to top]

Retrivability of data in ad-hoc backup (PDF)
by Trond Aspelund.
Master thesis, Oslo University, 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This master's thesis looks at aspects of data backup and restore in ad-hoc networks. Ad-hoc networks are networks formed between arbitrary nodes without any form of infrastructure or central control. Backup in such environments has to rely on other nodes to keep backups. The key problem is knowing whom to trust. Backup in ad-hoc networks is meant as a method to offer extra security to data that is created outside of a controlled environment. The most important aspect of backup is the ability to retrieve data after it is lost from the original device. In this project an ad-hoc network is simulated to measure how much of the data can be retrieved as a function of the size of the network. The distance to the data and how many of the distributed copies are available are measured. The network is simulated using User-mode Linux, and the centrality and connectivity of the simulated network are measured. Finding the device that keeps your data when a restoration is needed can be like looking for a needle in a haystack. A simple solution is to not only rely on the ad-hoc network but also make it possible for devices that keep backups to upload data to others or back to a host that is available to the source itself

[Go to top]

Routing with Byzantine robustness (PDF)
by Radia Perlman.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper describes how a network can continue to function in the presence of Byzantine failures. A Byzantine failure is one in which a node, instead of halting (as it would in a fail-stop failure), continues to operate, but incorrectly. It might lie about routing information, perform the routing algorithm itself flawlessly, but then fail to forward some class of packets correctly, or flood the network with garbage traffic. Our goal is to design a network so that as long as one nonfaulty path connects nonfaulty nodes A and B, they will be able to communicate, with some fair share of bandwidth, even if all the other components in the network are maximally malicious. We review work from 1988 that presented a network design that had that property, but required the network to be small enough that every router could keep state proportional to n^2, where n is the total number of nodes in the network. This would work for a network of size on the order of a thousand nodes, but to build a large network, we need to introduce hierarchy. This paper presents a new design, building on the original work, that works with hierarchical networks. This design not only defends against malicious routers but, because it guarantees fair allocation of resources, can also mitigate many other types of denial-of-service attacks

[Go to top]

SAS: A Scalar Anonymous Communication System (PDF)
by Hongyun Xu, Xinwen Fu, Ye Zhu, Riccardo Bettati, Jianer Chen, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymity technologies have gained more and more attention for communication privacy. In general, users obtain anonymity at a certain cost in an anonymous communication system, which uses rerouting to increase the system's robustness. However, a long rerouting path incurs large overhead and decreases the quality of service (QoS). In this paper, we propose the Scalar Anonymity System (SAS) in order to provide a tradeoff between anonymity and cost for different users with different requirements. In SAS, by selecting the level of anonymity, a user obtains the corresponding anonymity and QoS and also sustains the corresponding load of traffic rerouting for other users. Our theoretical analysis and simulation experiments verify the effectiveness of SAS

[Go to top]

Scalable routing for networked sensors and actuators (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The design of efficient routing protocols for ad hoc and sensor networks is challenging for several reasons: Physical network topology is random. Nodes have limited computation and memory capabilities. Energy and bisection bandwidth are scarce. Furthermore, in most settings, the lack of centralized components leaves all network control tasks to the nodes acting as decentralized peers. In this paper, we present a novel routing algorithm, scalable source routing (SSR), which is capable of memory and message efficient routing in large random networks. A guiding example is a community of 'digital homes' where smart sensors and actuators are installed by laypersons. Such networks combine wireless ad-hoc and infrastructure networks, and lack a well-crafted network topology. Typically, the nodes do not have sufficient processing and memory resources to perform sophisticated routing algorithms. Flooding, on the other hand, is too bandwidth-consuming in the envisaged large-scale networks. SSR is a fully self-organizing routing protocol for such scenarios. It creates a virtual ring that links all nodes via predecessor/successor source routes. Additionally, each node possesses O(log N) short-cut source routes to nodes in exponentially increasing virtual ring distance. As with the Chord overlay network, this ensures full connectivity within the network. Moreover, it provides a routing semantic which can efficiently support indirection schemes like i3. Memory and message efficiency are achieved by the introduction of a route cache together with a set of path manipulation rules that allow near-to-optimal paths to be produced
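
The forwarding rule at the core of the virtual ring can be sketched as greedy routing on ring distance (our simplification: real SSR forwards along cached source routes and applies path manipulation rules, which this toy omits):

    N = 2**32                                   # virtual address space

    def ring_dist(a, b):
        return (b - a) % N                      # distance in successor direction

    def next_hop(node, contacts, dest):
        """contacts: addresses this node has routes to, excluding itself."""
        # Only consider contacts between us and the destination (no overshoot),
        # then pick the one closest to the destination on the virtual ring.
        closer = [c for c in contacts
                  if ring_dist(node, c) <= ring_dist(node, dest)]
        return min(closer, key=lambda c: ring_dist(c, dest), default=None)
        # None means no contact is closer: this node is responsible for dest.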

[Go to top]

Scalable Service Discovery for MANET (PDF)
by Francoise Sailhan and Valerie Issarny.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Mobile Ad hoc NETworks (MANETs) conveniently complement infrastructure-based networks, allowing mobile nodes to spontaneously form a network and share their services, including bridging with other networks, either infrastructure-based or ad hoc. However, distributed service provisioning over MANETs requires adequate support for service discovery and invocation, due to the network's dynamics and resource constraints of wireless nodes. While a number of existing service discovery protocols have been shown to be effective for the wireless environment, these are mainly aimed at infrastructure-based and/or 1-hop ad hoc wireless networks. Some discovery protocols for MANETs have been proposed over the last couple of years but they induce significant traffic overhead, and are thus primarily suited for small-scale MANETs with few nodes. Building upon the evaluation of existing protocols, we introduce a scalable service discovery protocol for MANETs, which is based on the homogeneous and dynamic deployment of cooperating directories within the network. Scalability of our protocol comes from the minimization of the generated traffic, and the use of compact directory summaries that enable efficiently locating the directory that most likely caches the description of a given service

[Go to top]

Searching in a Small World (PDF)
by Oskar Sandberg.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The small-world phenomenon, that the world's social network is tightly connected, and that any two people can be linked by a short chain of friends, has long been a subject of interest. Famously, the psychologist Stanley Milgram performed an experiment where he asked people to deliver a letter to a stranger by forwarding it to an acquaintance, who could forward it to one of his acquaintances, and so on until the destination was reached. The results seemed to confirm that the small-world phenomenon is real. Recently it has been shown by Jon Kleinberg that in order to search in a network, that is to actually find the short paths in the manner of the Milgram experiment, a very special type of a graph model is needed. In this thesis, we present two ideas about searching in the small world stemming from Kleinberg's results. In the first we study the formation of networks of this type, attempting to see why the kind
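
Kleinberg's searchable small-world model is straightforward to reproduce in simulation. The sketch below (our one-dimensional illustration) places n nodes on a ring, gives each one shortcut to distance d with probability proportional to 1/d, and routes greedily toward the target; for exactly this harmonic link distribution Kleinberg's analysis yields O(log^2 n) expected hops:

    import random

    def harmonic_shortcut(i, n):
        """One long-range link for node i, distance d chosen with P(d) ~ 1/d."""
        weights = [1.0 / d for d in range(1, n)]
        d = random.choices(range(1, n), weights=weights)[0]
        return (i + d) % n

    def ring_distance(a, b, n):
        return min((a - b) % n, (b - a) % n)

    def greedy_route(n, src, dst, shortcut):
        cur, hops = src, 0
        while cur != dst:
            candidates = [(cur + 1) % n, (cur - 1) % n, shortcut[cur]]
            cur = min(candidates, key=lambda v: ring_distance(v, dst, n))
            hops += 1
        return hops

    n = 1024
    shortcut = [harmonic_shortcut(i, n) for i in range(n)]
    print(greedy_route(n, 0, n // 2, shortcut))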

[Go to top]

Selected DHT Algorithms (PDF)
by Stefan Götz, Simon Rieche, and Klaus Wehrle.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Several different approaches to realizing the basic principles of DHTs have emerged over the last few years. Although they rely on the same fundamental idea, there is a large diversity of methods for both organizing the identifier space and performing routing. The particular properties of each approach can thus be exploited by specific application scenarios and requirements. This overview focuses on the three DHT systems that have received the most attention in the research community: Chord, Pastry, and Content Addressable Networks (CAN). Furthermore, the systems Symphony, Viceroy, and Kademlia are discussed because they exhibit interesting mechanisms and properties beyond those of the first three systems

[Go to top]

A Self-Organizing Job Scheduling Algorithm for a Distributed VDR (PDF)
by Kendy Kutzner, Curt Cramer, and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In [CKF04], we have reported on our concept of a peer-to-peer extension to the popular video disk recorder (VDR) [Sch04], the Distributed Video Disk Recording (DVDR) system. The DVDR is a collaboration system of existing video disk recorders via a peer-to-peer network. There, the VDRs communicate about the tasks to be done and distribute the recordings afterwards. In this paper, we report on lessons learnt during its implementation and explain the considerations leading to the design of a new job scheduling algorithm. DVDR is an application which is based on a distributed hash table (DHT) employing proximity route selection (PRS)/proximity neighbor selection (PNS). For our implementation, we chose to use Chord [SMK+01, GGG+03]. Using a DHT with PRS/PNS yields two important features: (1) Each hashed key is routed to exactly one destination node within the system. (2) PRS/PNS forces messages originating in one region of the network destined to the same key to be routed through exactly one node in that region (route convergence). The first property enables per-key aggregation trees with a tree being rooted at the node which is responsible for the respective key. This node serves as a rendezvous point. The second property leads to locality (i.e., low latency) in this aggregation tree

[Go to top]

A Self-Organizing Routing Scheme for Random Networks (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Most routing protocols employ address aggregation to achieve scalability with respect to routing table size. But often, as networks grow in size and complexity, address aggregation fails. Other networks, e.g. sensor-actuator networks or ad-hoc networks, that are characterized by organic growth might not at all follow the classical hierarchical structures that are required for aggregation. In this paper, we present a fully self-organizing routing scheme that is able to efficiently route messages in random networks with randomly assigned node addresses. The protocol combines peer-to-peer techniques with source routing and can be implemented to work with very limited resource demands. With the help of simulations we show that it nevertheless quickly converges into a globally consistent state and achieves a routing stretch of only 1.2 – 1.3 in a network with more than 10^5 randomly assigned nodes

[Go to top]

Self-Stabilizing Ring Networks on Connected Graphs (PDF)
by Curt Cramer and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large networks require scalable routing. Traditionally, protocol overhead is reduced by introducing a hierarchy. This requires aggregation of nearby nodes under a common address prefix. In fixed networks, this is achieved administratively, whereas in wireless ad-hoc networks, dynamic assignments of nodes to aggregation units are required. As a result of the nodes commonly being assigned a random network address, the majority of proposed ad-hoc routing protocols discovers routes between end nodes by flooding, thus limiting the network size. Peer-to-peer (P2P) overlay networks offer scalable routing solutions by employing virtualized address spaces, yet assume an underlying routing protocol for end-to-end connectivity. We investigate a cross-layer approach to P2P routing, where the virtual address space is implemented with a network-layer routing protocol by itself. The Iterative Successor Pointer Rewiring Protocol (ISPRP) efficiently initializes a ring-structured network among nodes having but link-layer connectivity. It is fully self-organizing and issues only a small per-node amount of messages by keeping interactions between nodes as local as possible. The main contribution of this paper is a proof that ISPRP is self-stabilizing, that is, starting from an arbitrary initial state, the protocol lets the network converge into a correct state within a bounded amount of time

[Go to top]

Service discovery using volunteer nodes for pervasive environments (PDF)
by Mijeom Kim, Mohan Kumar, and Behrooz Shirazi.
In International Conference on Pervasive Services, 2005, pages 188-197. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a service discovery architecture called VSD (service discovery based on volunteers) for heterogeneous and dynamic pervasive computing environments. The proposed architecture uses a small subset of the nodes called volunteers that perform directory services. Relatively stable and capable nodes serve as volunteers, thus recognizing node heterogeneity in terms of mobility and capability. We discuss characteristics of the VSD architecture and methods to improve connectivity among volunteers for a higher discovery rate. By showing that VSD performs quite well compared to a broadcast-based scheme in MANET scenarios, we validate that VSD is a flexible and adaptable architecture appropriate for dynamic pervasive computing environments. VSD incorporates several novel features: i) handles dynamism and supports self-reconfiguration; ii) provides physical locality and scalability; and iii) improves reliability and copes with uncertainty through redundancy by forming overlapped clusters

[Go to top]

A software framework for automated negotiation (PDF)
by Claudio Bartolini, Chris Preist, and Nicholas R Jennings.
<Odd type book>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

If agents are to negotiate automatically with one another they must share a negotiation mechanism, specifying what possible actions each party can take at any given time, when negotiation terminates, and what is the structure of the resulting agreements. Current standardization activities such as FIPA [2] and WS-Agreement [3] represent this as a negotiation protocol specifying the flow of messages. However, they omit other aspects of the rules of negotiation (such as obliging a participant to improve on a previous offer), requiring these to be represented implicitly in an agent's design, potentially resulting in incompatibility, maintenance, and re-usability problems. In this chapter, we propose an alternative approach, allowing all of a mechanism to be formal and explicit. We present (i) a taxonomy of declarative rules which can be used to capture a wide variety of negotiation mechanisms in a principled and well-structured way; (ii) a simple interaction protocol, which is able to support any mechanism which can be captured using the declarative rules; (iii) a software framework for negotiation that allows agents to effectively participate in negotiations defined using our rule taxonomy and protocol and (iv) a language for expressing aspects of the negotiation based on OWL-Lite [4]. We provide examples of some of the mechanisms that the framework can support

[Go to top]

Some Remarks on Universal Re-encryption and A Novel Practical Anonymous Tunnel
by Tianbo Lu, Bin-Xing Fang, Yuzhong Sun, and Li Guo.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In 2004 Golle, Jakobsson, Juels and Syverson presented a new encryption scheme called universal re-encryption [GJJS04] for mixnets [Cha81], which was extended by Gomulkiewicz et al. [GKK04]. We discover that this scheme and its extension both are insecure against a chosen ciphertext attack proposed by Pfitzmann in 1994 [Pfi94]. Another drawback of both is their low efficiency for anonymous communications due to their long ciphertexts, i.e., four times the size of the plaintext. Accordingly, we devise a novel universal and efficient anonymous tunnel, rWonGoo, for circuit-based low-latency communications in large-scale peer-to-peer environments to dramatically decrease the possibility of suffering from the attack [Pfi94]. The basic idea behind rWonGoo is to provide anonymity with re-encryption and random forwarding, obtaining practicality, correctness, and efficiency in encryption in a way that differs from the layered encryption systems [Cha81], for which correctness of tunnels can be difficult to achieve
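
For reference, the GJJS04 construction under discussion fits in a few lines; the sketch below uses a toy modulus and an assumed generator, and is insecure, for illustration only. A ciphertext is two ElGamal pairs, the second encrypting 1, so anyone can re-randomize it without knowing the public key; the four group elements per plaintext are the size overhead criticized above:

    import random

    p = 2**127 - 1          # Mersenne prime, toy modulus; insecure parameters
    g = 3                   # assumed generator, for illustration only

    def keygen():
        x = random.randrange(2, p - 1)
        return x, pow(g, x, p)

    def enc(m, y):
        k0 = random.randrange(2, p - 1)
        k1 = random.randrange(2, p - 1)
        return ((m * pow(y, k0, p) % p, pow(g, k0, p)),
                (pow(y, k1, p), pow(g, k1, p)))      # second pair encrypts 1

    def reenc(ct):          # needs no public key: the "universal" property
        (a0, b0), (a1, b1) = ct
        t0 = random.randrange(2, p - 1)
        t1 = random.randrange(2, p - 1)
        return ((a0 * pow(a1, t0, p) % p, b0 * pow(b1, t0, p) % p),
                (pow(a1, t1, p), pow(b1, t1, p)))

    def dec(ct, x):
        (a0, b0), (a1, b1) = ct
        assert a1 == pow(b1, x, p)                   # second pair must open to 1
        return a0 * pow(pow(b0, x, p), p - 2, p) % p # divide by b0^x (Fermat)

    x, y = keygen()
    print(dec(reenc(enc(42, y)), x))                 # 42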

[Go to top]

A Survey and Comparison of Peer-to-Peer Overlay Network Schemes (PDF)
by Eng Keong Lua, Jon Crowcroft, Marcelo Pias, Ravi Sharma, and Steven Lim.
In IEEE Communications Surveys and Tutorials 7, 2005, pages 72-93. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Over the Internet today, computing and communications environments are significantly more complex and chaotic than classical distributed systems, lacking any centralized organization or hierarchical control. There has been much interest in emerging Peer-to-Peer (P2P) network overlays because they provide a good substrate for creating large-scale data sharing, content distribution and application-level multicast applications. These P2P networks try to provide a long list of features such as: selection of nearby peers, redundant storage, efficient search/location of data items, data permanence or guarantees, hierarchical naming, trust and authentication, and anonymity. P2P networks potentially offer an efficient routing architecture that is self-organizing, massively scalable, and robust in the wide-area, combining fault tolerance, load balancing and explicit notion of locality. In this paper, we present a survey and comparison of various Structured and Unstructured P2P networks. We categorize the various schemes into these two groups in the design spectrum and discuss the application-level network performance of each group

[Go to top]

Sybil-resistant DHT routing (PDF)
by George Danezis, Chris Lesniewski-laas, Frans M. Kaashoek, and Ross Anderson.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Distributed Hash Tables (DHTs) are very efficient distributed systems for routing, but at the same time vulnerable to disruptive nodes. Designers of such systems want them used in open networks, where an adversary can perform a sybil attack by introducing a large number of corrupt nodes in the network, considerably degrading its performance. We introduce a routing strategy that alleviates some of the effects of such an attack by making sure that lookups are performed using a diverse set of nodes. This ensures that at least some of the nodes queried are good, and hence the search makes forward progress. This strategy makes use of latent social information present in the introduction graph of the network

[Go to top]

A Taxonomy of Rational Attacks (PDF)
by Seth James Nielson and Scott A. Crosby.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

For peer-to-peer services to be effective, participating nodes must cooperate, but in most scenarios a node represents a self-interested party and cooperation can neither be expected nor enforced. A reasonable assumption is that a large fraction of p2p nodes are rational and will attempt to maximize their consumption of system resources while minimizing the use of their own. If such behavior violates system policy then it constitutes an attack. In this paper we identify and create a taxonomy for rational attacks and then identify corresponding solutions if they exist. The most effective solutions directly incentivize cooperative behavior, but when this is not feasible the common alternative is to incentivize evidence of cooperation instead

[Go to top]

Towards Autonomic Networking using Overlay Routing Techniques (PDF)
by Kendy Kutzner and Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

With an ever-growing number of computers being embedded into our surroundings, the era of ubiquitous computing is approaching fast. However, as the number of networked devices increases, so does system complexity. Contrary to the goal of achieving an invisible computer, the required amount of management and human intervention increases more and more, both slowing down the growth rate and limiting the achievable size of ubiquitous systems. In this paper we present a novel routing approach that is capable of handling complex networks without any administrative intervention. Based on a combination of standard overlay routing techniques and source routes, this approach is capable of efficiently bootstrapping a routable network. Unlike other approaches that try to combine peer-to-peer ideas with ad-hoc networks, sensor networks, or ubiquitous systems, our approach is not based on a routing scheme. This makes the resulting system flexible and powerful with respect to application support as well as efficient with regard to routing overhead and system complexity

[Go to top]

The Use of Scalable Source Routing for Networked Sensors (PDF)
by Thomas Fuhrmann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we briefly present a novel routing algorithm, scalable source routing (SSR), which is capable of memory and message efficient routing in networks with 'random topology'. This algorithm enables sensor networks to use recent peer-to-peer mechanisms from the field of overlay networks, e.g. distributed hash tables and indirection infrastructures. Unlike other proposals along that direction, SSR integrates all necessary routing tasks into one simple, highly efficient routing protocol. Simulations demonstrate that in a small-world network with more than 100 000 nodes, SSR requires each node to only store routing data for 255 other nodes to establish routes between arbitrary pairs of nodes. These routes are on average only about 20-30% longer than the globally optimal path between these nodes

[Go to top]

Using redundancy to cope with failures in a delay tolerant network (PDF)
by Sushant Jain, Michael J. Demmer, Rabin K. Patra, and Kevin Fall.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider the problem of routing in a delay tolerant network (DTN) in the presence of path failures. Previous work on DTN routing has focused on using precisely known network dynamics, which does not account for message losses due to link failures, buffer overruns, path selection errors, unscheduled delays, or other problems. We show how to split, replicate, and erasure code message fragments over multiple delivery paths to optimize the probability of successful message delivery. We provide a formulation of this problem and solve it for two cases: a 0/1 (Bernoulli) path delivery model where messages are either fully lost or delivered, and a Gaussian path delivery model where only a fraction of a message may be delivered. Ideas from the modern portfolio theory literature are borrowed to solve the underlying optimization problem. Our approach is directly relevant to solving similar problems that arise in replica placement in distributed file systems and virtual node placement in DHTs. In three different simulated DTN scenarios covering a wide range of applications, we show the effectiveness of our approach in handling failures
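
As a toy rendering of the 0/1 (Bernoulli) path model (our sketch, with hypothetical path reliabilities, not the authors' formulation), the snippet below computes the delivery probability when a message is erasure-coded so that any k of n fragments reconstruct it and fragment i travels a path that succeeds independently with probability p_i:

    from itertools import product

    def delivery_probability(path_probs, k):
        """P(at least k fragments arrive), one fragment per path, with
        path i delivering independently with probability path_probs[i]."""
        total = 0.0
        for outcome in product((0, 1), repeat=len(path_probs)):
            if sum(outcome) >= k:
                pr = 1.0
                for delivered, p in zip(outcome, path_probs):
                    pr *= p if delivered else 1.0 - p
                total += pr
        return total

    paths = [0.9, 0.7, 0.6, 0.5]                # hypothetical path reliabilities
    print(delivery_probability(paths, k=1))     # full replica on every path
    print(delivery_probability(paths, k=2))     # (2,4) code: half the redundancy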

[Go to top]

Anonymous Communication with On-line and Off-line Onion Encoding (PDF)
by Marek Klonowski, Miroslaw Kutylowski, and Filip Zagorski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Anonymous communication with onions requires that a user application determines the whole routing path of an onion. This scenario has certain disadvantages: it might be dangerous in some situations, and it does not fit well with the current layered architecture of dynamic communication networks. We show that applying encoding based on universal re-encryption can solve many of these problems by providing much flexibility – the onions can be created on-the-fly or in advance by different parties

[Go to top]

The eMule Protocol Specification (PDF)
by Yoram Kulbak and Danny Bickson.
In unknown(TR-2005-03), January 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License"

[Go to top]

The BitTorrent P2P File-sharing System: Measurements and Analysis (PDF)
by Johan Pouwelse, Pawel Garbacki, Dick H. J. Epema, and Henk J. Sips.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Of the many P2P file-sharing prototypes in existence, BitTorrent is one of the few that has managed to attract millions of users. BitTorrent relies on other (global) components for file search, employs a moderator system to ensure the integrity of file data, and uses a bartering technique for downloading in order to prevent users from freeriding. In this paper we present a measurement study of BitTorrent in which we focus on four issues, viz. availability, integrity, flashcrowd handling, and download performance. The purpose of this paper is to aid in the understanding of a real P2P system that apparently has the right mechanisms to attract a large user community, to provide measurement data that may be useful in modeling P2P systems, and to identify design issues in such systems

[Go to top]

High Availability in DHTs: Erasure Coding vs. Replication (PDF)
by Rodrigo Rodrigues and Barbara Liskov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

High availability in peer-to-peer DHTs requires data redundancy. This paper compares two popular redundancy schemes: replication and erasure coding. Unlike previous comparisons, we take the characteristics of the nodes that comprise the overlay into account, and conclude that in some cases the benefits from coding are limited, and may not be worth its disadvantages
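
The headline comparison is easy to reproduce under the usual simplifying assumption that nodes are up independently with probability a; the sketch below (ours, with made-up numbers) contrasts replication and an (m, n) erasure code at the same fourfold storage overhead:

    from math import comb

    def replication_availability(a, n):
        # unavailable only if all n replicas are down
        return 1.0 - (1.0 - a) ** n

    def coding_availability(a, m, n):
        # an (m, n) erasure code needs at least m of its n fragments
        return sum(comb(n, i) * a**i * (1.0 - a)**(n - i)
                   for i in range(m, n + 1))

    a = 0.5                                    # assumed per-node availability
    print(replication_availability(a, 4))      # 4 replicas: 4x overhead
    print(coding_availability(a, 7, 28))       # (7,28) code: also 4x overhead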

[Go to top]

Designing Incentives for Peer-to-Peer Routing (PDF)
by Alberto Blanc, Yi-Kai Liu, and Amin Vahdat.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

In a peer-to-peer network, nodes are typically required to route packets for each other. This leads to a problem of "free-loaders", nodes that use the network but refuse to route other nodes' packets. In this paper we study ways of designing incentives to discourage free-loading. We model the interactions between nodes as a "random matching game", and describe a simple reputation system that provides incentives for good behavior. Under certain assumptions, we obtain a stable subgame-perfect equilibrium. We use simulations to investigate the robustness of this scheme in the presence of noise and malicious nodes, and we examine some of the design trade-offs. We also evaluate some possible adversarial strategies, and discuss how our results might apply to real peer-to-peer systems

[Go to top]

Exchange-based incentive mechanisms for peer-to-peer file sharing (PDF)
by Kostas G. Anagnostakis and Michael B. Greenwald.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Performance of peer-to-peer resource sharing networks depends upon the level of cooperation of the participants. To date, cash-based systems have seemed too complex, while lighter-weight credit mechanisms have not provided strong incentives for cooperation. We propose exchange-based mechanisms that provide incentives for cooperation in peer-to-peer file sharing networks. Peers give higher service priority to requests from peers that can provide a simultaneous and symmetric service in return. We generalize this approach to n-way exchanges among rings of peers and present a search algorithm for locating such rings. We have used simulation to analyze the effect of exchanges on performance. Our results show that exchange-based mechanisms can provide strong incentives for sharing, offering significant improvements in service times for sharing users compared to free-riders, without the problems and complexity of cash- or credit-based systems

[Go to top]

Exploiting anarchy in networks: a game-theoretic approach to combining fairness and throughput (PDF)
by Sreenivas Gollapudi, D. Sivakumar, and Aidong Zhang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We propose a novel mechanism for routing and bandwidth allocation that exploits the selfish and rational behavior of flows in a network. Our mechanism leads to allocations that simultaneously optimize throughput and fairness criteria. We analyze the performance of our mechanism in terms of the induced Nash equilibrium. We compare the allocations at the Nash equilibrium with throughput-optimal allocations as well as with fairness-optimal allocations. Our mechanism offers a smooth trade-off between these criteria, and allows us to produce allocations that are approximately optimal with respect to both. Our mechanism is also fairly simple and admits an efficient distributed implementation

[Go to top]

Market-driven bandwidth allocation in selfish overlay networks (PDF)
by Weihong Wang and Baochun Li.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Selfish overlay networks consist of autonomous nodes that develop their own strategies by optimizing towards their local objectives and self-interests, rather than following prescribed protocols. It is thus important to regulate the behavior of selfish nodes, so that system-wide properties are optimized. In this paper, we investigate the problem of bandwidth allocation in overlay networks, and propose to use a market-driven approach to regulate the behavior of selfish nodes that either provide or consume services. In such markets, consumers of services select the best service providers, taking into account both the performance and the price of the service. On the other hand, service providers are encouraged to strategically decide their respective prices in a pricing game, in order to maximize their economic revenues and minimize losses in the long run. In order to overcome the limitations of previous models towards similar objectives, we design a decentralized algorithm that uses reinforcement learning to help selfish nodes to incrementally adapt to the local market, and to make optimized strategic decisions based on past experiences. We have simulated our proposed algorithm in randomly generated overlay networks, and have shown that the behavior of selfish nodes converges to their optimal strategies, that resource allocations in the entire overlay are near-optimal, and that the system adapts efficiently to the dynamics of overlay networks

[Go to top]

Network coding for large scale content distribution (PDF)
by Christos Gkantsidis and Pablo Rodriguez.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks of information. The randomization introduced by the coding process eases the scheduling of block propagation, and, thus, makes the distribution more efficient. This is particularly important in large unstructured overlay networks, where the nodes need to make block forwarding decisions based on local information only. We compare network coding to other schemes that transmit unencoded information (i.e. blocks of the original file) and, also, to schemes in which only the source is allowed to generate and transmit encoded packets. We study the performance of network coding in heterogeneous networks with dynamic node arrival and departure patterns, clustered topologies, and when incentive mechanisms to discourage free-riding are in place. We demonstrate through simulations of scenarios of practical interest that the expected file download time improves by more than 20-30% with network coding compared to coding at the server only, and by more than 2-3 times compared to sending unencoded information. Moreover, we show that network coding improves the robustness of the system and is able to smoothly handle extreme situations where the server and nodes leave the system
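
A minimal sketch of the underlying mechanism (ours; practical systems, including the one studied here, work over larger fields and add considerably more machinery): with coding over GF(2), any node can forward a random XOR of the blocks it holds, and a downloader can decode as soon as the coefficient vectors it has collected reach full rank.

    import random

    def random_combination(blocks):
        """Forward a random GF(2) linear combination of held blocks;
        returns (coefficient_vector, payload), both as Python ints."""
        coeffs = random.getrandbits(len(blocks)) or 1   # skip the zero vector
        payload = 0
        for i, block in enumerate(blocks):
            if coeffs >> i & 1:
                payload ^= block
        return coeffs, payload

    def rank_gf2(vectors):
        """Gaussian elimination over GF(2): decoding succeeds at full rank."""
        pivots = {}                  # leading-bit position -> reduced vector
        for v in vectors:
            while v:
                top = v.bit_length() - 1
                if top in pivots:
                    v ^= pivots[top]
                else:
                    pivots[top] = v
                    break
        return len(pivots)

    k = 8
    blocks = [random.getrandbits(4096) for _ in range(k)]   # original pieces
    received = [random_combination(blocks)[0] for _ in range(k + 2)]
    print(rank_gf2(received) == k)   # True with high probability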

[Go to top]

P2P Contracts: a Framework for Resource and Service Exchange (PDF)
by Dipak Ghosal, Benjamin K. Poon, and Keith Kong.
In FGCS. Future Generations Computer Systems 21, March 2005, pages 333-347. (BibTeX entry) (Download bibtex record)
(direct link)

A crucial aspect of Peer-to-Peer (P2P) systems is that of providing incentives for users to contribute their resources to the system. Without such incentives, empirical data show that a majority of the participants act as free riders. As a result, a substantial amount of resources goes untapped, and, frequently, P2P systems devolve into client-server systems with attendant issues of performance under high load. We propose to address the free rider problem by introducing the notion of a P2P contract. In it, peers are made aware of the benefits they receive from the system as a function of their contributions. In this paper, we first describe a utility-based framework to determine the components of the contract and formulate the associated resource allocation problem. We consider the resource allocation problem for a flash crowd scenario and show how the contract mechanism implemented using a centralized server can be used to quickly create pseudoservers that can serve out the requests. We then study a decentralized implementation of the P2P contract scheme in which each node implements the contract based on local demand. We show that in such a system, other than contributing storage and bandwidth to serve out requests, it is also important that peer nodes function as application-level routers to connect pools of available pseudoservers. We study the performance of the distributed implementation with respect to the various parameters including the terms of the contract and the triggers to create pseudoservers and routers

[Go to top]

On Flow Marking Attacks in Wireless Anonymous Communication Networks (PDF)
by Xinwen Fu, Ye Zhu, Bryan Graham, Riccardo Bettati, and Wei Zhao.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This paper studies the degradation of anonymity in a flow-based wireless mix network under flow marking attacks, in which an adversary embeds a recognizable pattern of marks into wireless traffic flows by electromagnetic interference. We find that traditional mix technologies are not effective in defeating flow marking attacks, and it may take an adversary only a few seconds to recognize the communication relationship between hosts by tracking such artificial marks. Flow marking attacks utilize frequency domain analytical techniques and convert time domain marks into invariant feature frequencies. To counter flow marking attacks, we propose a new countermeasure based on digital filtering technology, and show that this filter-based countermeasure can effectively defend a wireless mix network from flow marking attacks

[Go to top]

How good is random linear coding based distributed networked storage? (PDF)
by Szymon Acedański, Supratim Deb, Muriel Médard, and Ralf Koetter.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We consider the problem of storing a large file or multiple large files in a distributed manner over a network. In the framework we consider, there are multiple storage locations, each of which only has very limited storage space for each file. Each storage location chooses a part (or a coded version of the parts) of the file without the knowledge of what is stored in the other locations. We want a file-downloader to connect to as few storage locations as possible and retrieve the entire file. We compare the performance of three strategies: uncoded storage, traditional erasure coding based storage, and random linear coding based storage motivated by network coding. We demonstrate that, in principle, a traditional erasure coding based storage (e.g., Reed-Solomon codes) strategy can almost do as well as one can ask for with an appropriate choice of parameters. However, the cost is a large amount of additional storage space required at the centralized server before distribution among multiple locations. The random linear coding based strategy performs as well without suffering from any such disadvantage. Further, with a probability close to one, the minimum number of storage locations a downloader needs to connect to (for reconstructing the entire file) can be very close to the case where there is complete coordination between the storage locations and the downloader. We also argue that an uncoded strategy performs poorly
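
The "close to complete coordination" claim rests on random matrices over a finite field being full rank with high probability. A quick calculation (ours) of the chance that fragments from exactly k locations already decode, i.e. that a random k x k matrix over GF(q) is invertible:

    def full_rank_prob(k, q):
        """P(random k x k matrix over GF(q) is invertible)
        = prod over i = 1..k of (1 - q**-i)."""
        prob = 1.0
        for i in range(1, k + 1):
            prob *= 1.0 - q ** -i
        return prob

    print(full_rank_prob(32, 2))     # GF(2): ~0.29, a few spare fragments help
    print(full_rank_prob(32, 256))   # GF(2^8): ~0.996, k locations almost always do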

[Go to top]

Peer-to-Peer Communication Across Network Address Translators (PDF)
by Pyda Srisuresh, Bryan Ford, and Dan Kegel.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. Several NAT traversal techniques are known, but their documentation is slim, and data about their robustness or relative merits is slimmer. This paper documents and analyzes one of the simplest but most robust and practical NAT traversal techniques, commonly known as hole punching. Hole punching is moderately well-understood for UDP communication, but we show how it can be reliably used to set up peer-to-peer TCP streams as well. After gathering data on the reliability of this technique on a wide variety of deployed NATs, we find that about 82% of the NATs tested support hole punching for UDP, and about 64% support hole punching for TCP streams. As NAT vendors become increasingly conscious of the needs of important P2P applications such as Voice over IP and online gaming protocols, support for hole punching is likely to increase in the future
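
The UDP variant of hole punching fits in a few lines. The sketch below is illustrative only: it assumes the rendezvous step (learning the peer's public IP and port) has already happened, and the addresses in the comments are hypothetical:

    import socket

    def udp_hole_punch(local_port, peer_addr, attempts=10, timeout=1.0):
        """Both peers run this concurrently against each other's public
        (IP, port) as reported by a rendezvous server.  Outbound datagrams
        open our NAT mapping; the peer's datagrams then flow in through it."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", local_port))
        sock.settimeout(timeout)
        for _ in range(attempts):
            sock.sendto(b"punch", peer_addr)     # opens/refreshes our mapping
            try:
                data, addr = sock.recvfrom(1024)
                if addr[0] == peer_addr[0]:      # peer reached us: hole is open
                    return sock
            except socket.timeout:
                continue                         # peer's packet not through yet
        sock.close()
        return None

    # e.g. on peer A:  conn = udp_hole_punch(4000, ("203.0.113.7", 4000))
    # and on peer B:   conn = udp_hole_punch(4000, ("198.51.100.2", 4000))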

[Go to top]

An Analysis of Parallel Mixing with Attacker-Controlled Inputs (PDF)
by Nikita Borisov.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Parallel mixing [7] is a technique for optimizing the latency of a synchronous re-encryption mix network. We analyze the anonymity of this technique when an adversary can learn the output positions of some of the inputs to the mix network. Using probabilistic modeling, we show that parallel mixing falls short of achieving optimal anonymity in this case. In particular, when the number of unknown inputs is small, there are significant anonymity losses in the expected case. This remains true even if all the mixes in the network are honest, and becomes worse as the number of mixes increases. We also consider repeatedly applying parallel mixing to the same set of inputs. We show that an attacker who knows some input–output relationships will learn new information with each mixing and can eventually link previously unknown inputs and outputs

[Go to top]

Anonymity in Structured Peer-to-Peer Networks (PDF)
by Nikita Borisov and Jason Waddle.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Existing peer-to-peer systems that aim to provide anonymity to their users are based on networks with unstructured or loosely-structured routing algorithms. Structured routing offers performance and robustness guarantees that these systems are unable to achieve. We therefore investigate adding anonymity support to structured peer-to-peer networks. We apply an entropy-based anonymity metric to Chord and use this metric to quantify the improvements in anonymity afforded by several possible extensions. We identify particular properties of Chord that have the strongest effect on anonymity and propose a routing extension that allows a general trade-off between anonymity and performance. Our results should be applicable to other structured peer-to-peer systems
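
The entropy-based metric itself is compact; a small sketch (ours, with made-up attacker distributions) for intuition:

    from math import log2

    def anonymity_bits(probs):
        """Shannon entropy of the attacker's distribution over candidate
        initiators; log2(N) bits means perfect anonymity among N peers."""
        return -sum(p * log2(p) for p in probs if p > 0)

    print(anonymity_bits([1/8] * 8))              # 3.0 bits = log2(8)
    print(anonymity_bits([0.65] + [0.05] * 7))    # ~1.9 bits: routing leaked info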

[Go to top]

Fuzzy Identity-Based Encryption (PDF)
by Amit Sahai and Brent Waters.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the set overlap distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy IBE can be used for a type of application that we term attribute-based encryption. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model

[Go to top]

Low-Cost Traffic Analysis of Tor (PDF)
by Steven J. Murdoch and George Danezis.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Tor is the second generation Onion Router, supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as web browsing, but insecure against traffic-analysis attacks by a global passive adversary. We present new traffic-analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams and therefore greatly reduce the anonymity provided by Tor. Furthermore, we show that otherwise unrelated streams can be linked back to the same initiator. Our attack is feasible for the adversary anticipated by the Tor designers. Our theoretical attacks are backed up by experiments performed on the deployed, albeit experimental, Tor network. Our techniques should also be applicable to any low latency anonymous network. These attacks highlight the relationship between the field of traffic-analysis and more traditional computer security issues, such as covert channel analysis. Our research also highlights that the inability to directly observe network links does not prevent an attacker from performing traffic-analysis: the adversary can use the anonymising network as an oracle to infer the traffic load on remote nodes in order to perform traffic-analysis

[Go to top]

Message Splitting Against the Partial Adversary (PDF)
by Andrei Serjantov and Steven J. Murdoch.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We review threat models used in the evaluation of anonymity systems' vulnerability to traffic analysis. We then suggest that, under the partial adversary model, if multiple packets have to be sent through these systems, more anonymity can be achieved if senders route the packets via different paths. This is in contrast to the normal technique of using the same path for them all. We comment on the implications of this for message-based and connection-based anonymity systems. We then proceed to examine the only remaining traffic analysis attack – one which considers the entire system as a black box. We show that it is more difficult to execute than the literature suggests, and attempt to empirically estimate the parameters of the Mixmaster and the Mixminion systems needed in order to successfully execute the attack

[Go to top]

Mix-network with Stronger Security
by Jan Camenisch and Anton Mityagin.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We consider a mix-network as a cryptographic primitive that provides anonymity. A mix-network takes as input a number of ciphertexts and outputs a random shuffle of the corresponding plaintexts. Common applications of mix-nets are electronic voting and anonymous network traffic. In this paper, we present a novel construction of a mix-network, which is based on shuffling ElGamal encryptions. Our scheme is the first mix-net to meet the strongest security requirements: it is robust and secure against chosen ciphertext attacks as well as against active attacks in the Universally Composable model. Our construction allows one to securely execute several mix-net instances concurrently, as well as to run multiple mix-sessions without changing a set of keys. Nevertheless, the scheme is efficient: it requires linear work (in the number of input messages) per mix-server

[Go to top]

Privacy Vulnerabilities in Encrypted HTTP Streams (PDF)
by George Dean Bissias, Marc Liberatore, and Brian Neil Levine.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Encrypting traffic does not prevent an attacker from performing some types of traffic analysis. We present a straightforward traffic analysis attack against encrypted HTTP streams that is surprisingly effective in identifying the source of the traffic. An attacker starts by creating a profile of the statistical characteristics of web requests from interesting sites, including distributions of packet sizes and inter-arrival times. Later, candidate encrypted streams are compared against these profiles. In our evaluations using real traffic, we find that many web sites are subject to this attack. With a training period of 24 hours and a 1 hour delay afterwards, the attack achieves only 23% accuracy. However, an attacker can easily pre-determine which of the trained sites are easily identifiable. Accordingly, against 25 such sites, the attack achieves 40% accuracy; with three guesses, the attack achieves 100% accuracy for our data. Longer delays after training decrease accuracy, but not substantially. We also propose some countermeasures and improvements to our current method. Previous work analyzed SSL traffic to a proxy, taking advantage of a known flaw in SSL that reveals the length of each web object. In contrast, we exploit the statistical characteristics of web streams that are encrypted as a single flow, which is the case with WEP/WPA, IPsec, and SSH tunnels
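
In the spirit of the attack, though much simpler than the authors' method (which also uses inter-arrival times), one can profile sites by their packet-size distributions and attribute an observed encrypted stream to the nearest profile:

    from collections import Counter

    def size_profile(packet_sizes, bin_width=100):
        """Normalized histogram of packet sizes observed for one site."""
        counts = Counter(s // bin_width for s in packet_sizes)
        total = sum(counts.values())
        return {b: c / total for b, c in counts.items()}

    def l1(p, q):
        """L1 distance between two profiles; smaller means more similar."""
        return sum(abs(p.get(b, 0.0) - q.get(b, 0.0)) for b in set(p) | set(q))

    def classify(observed_sizes, trained):
        """Attribute an encrypted stream to the closest trained site."""
        obs = size_profile(observed_sizes)
        return min(trained, key=lambda site: l1(obs, trained[site]))

    trained = {"siteA": size_profile([64, 1500, 1500, 600]),
               "siteB": size_profile([64, 300, 310, 290])}
    print(classify([70, 1480, 1500, 580], trained))    # -> siteA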

[Go to top]

Unmixing Mix Traffic (PDF)
by Ye Zhu and Riccardo Bettati.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We apply blind source separation techniques from statistical signal processing to separate the traffic in a mix network. Our experiments show that this attack is effective and scalable. By combining the flow separation method and frequency spectrum matching method, a passive attacker can get the traffic map of the mix network. We use a non-trivial network to show that the combined attack works. The experiments also show that multicast traffic can be dangerous for anonymity networks
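
A toy version of the flow-separation step (our illustration; it assumes NumPy and scikit-learn and stands in for the paper's own signal model): aggregate packet counts observed on two links are linear mixtures of two underlying flows, which independent component analysis can pull apart:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 2000                                        # time slots
    flow_a = (rng.random(n) < 0.3).astype(float)    # packets per slot, rate 0.3
    flow_b = (rng.random(n) < 0.7).astype(float)    # packets per slot, rate 0.7
    sources = np.column_stack([flow_a, flow_b])

    mixing = np.array([[0.6, 0.4],                  # how the two flows load
                       [0.3, 0.7]])                 # the two observed links
    observed = sources @ mixing.T                   # per-link aggregate traffic

    separated = FastICA(n_components=2, random_state=0).fit_transform(observed)
    # `separated` recovers the flows up to order and scale; correlating the
    # recovered series with per-link observations rebuilds the traffic map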

[Go to top]

On Blending Attacks For Mixes with Memory (PDF)
by Luke O'Connor.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Blending attacks are a general class of traffic-based attacks, exemplified by the (n–1)-attack. Adding memory or pools to mixes mitigates such attacks; however, there are few known quantitative results concerning the effect of pools on blending attacks. In this paper we give a precise analysis of the number of rounds required to perform an (n–1)-attack on the pool mix, timed pool mix, timed dynamic pool mix and the binomial mix
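
For intuition, a small simulation (ours, of a plain threshold pool mix only, not the paper's timed and binomial variants): the attacker floods the mix each round and waits until the lone target message is flushed:

    import random

    def expected_rounds(pool_size, flood_per_round, trials=20000):
        """Mean rounds an (n-1)-attacker waits before the target message
        leaves a mix that retains a random `pool_size` messages per round."""
        stay = pool_size / (pool_size + flood_per_round)
        total = 0
        for _ in range(trials):
            rounds = 1
            while random.random() < stay:       # target kept in the pool
                rounds += 1
            total += rounds
        return total / trials

    print(expected_rounds(10, 50))    # ~1.2 rounds: small pool flushes fast
    print(expected_rounds(50, 10))    # ~6 rounds: larger pools resist longer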

[Go to top]

Censorship Resistance Revisited (PDF)
by Ginger Perng, Michael K. Reiter, and Chenxi Wang.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Censorship resistant systems attempt to prevent censors from imposing a particular distribution of content across a system. In this paper, we introduce a variation of censorship resistance (CR) that is resistant to selective filtering even by a censor who is able to inspect (but not alter) the internal contents and computations of each data server, excluding only the server's private signature key. This models a service provided by operators who do not hide their identities from censors. Even with such a strong adversarial model, our definition states that CR is only achieved if the censor must disable the entire system to filter selected content. We show that existing censorship resistant systems fail to meet this definition; that Private Information Retrieval (PIR) is necessary, though not sufficient, to achieve our definition of CR; and that CR is achieved through a modification of PIR for which known implementations exist

[Go to top]

Compulsion Resistant Anonymous Communications (PDF)
by George Danezis and Jolyon Clulow.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We study the effect that compulsion attacks, through which an adversary can request a decryption or key from an honest node, have on the security of mix-based anonymous communication systems. Some specific countermeasures are proposed that increase the cost of compulsion attacks, detect that tracing is taking place, and ultimately allow for some anonymity to be preserved even when all nodes are under compulsion. Going beyond the case when a single message is traced, we also analyze the effect of multiple messages being traced and devise some techniques that could retain some anonymity. Our analysis highlights that we can reason about plausible deniability in terms of information theoretic anonymity metrics

[Go to top]

Countering Hidden-action Attacks on Networked Systems (PDF)
by Tyler Moore.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

We define an economic category of hidden-action attacks: actions made attractive by a lack of observation. We then consider its implications for computer systems. Rather than structure contracts to compensate for incentive problems, we rely on insights from social capital theory to design network topologies and interactions that undermine the potential for hidden-action attacks

[Go to top]

Coupon replication systems (PDF)
by Laurent Massoulié and Milan Vojnović.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Motivated by the study of peer-to-peer file swarming systems à la BitTorrent, we introduce a probabilistic model of coupon replication systems. These systems consist of users, aiming to complete a collection of distinct coupons. Users are characterised by their current collection of coupons, and leave the system once they complete their coupon collection. The system evolution is then specified by describing how users of distinct types meet, and which coupons get replicated upon such encounters. For open systems, with exogenous user arrivals, we derive necessary and sufficient stability conditions in a layered scenario, where encounters are between users holding the same number of coupons. We also consider a system where encounters are between users chosen uniformly at random from the whole population. We show that performance, captured by sojourn time, is asymptotically optimal in both systems as the number of coupon types becomes large. We also consider closed systems with no exogenous user arrivals. In a special scenario where users have only one missing coupon, we evaluate the size of the population ultimately remaining in the system, as the initial number of users, N, goes to infinity. We show that this decreases geometrically with the number of coupons, K. In particular, when the ratio K/log(N) is above a critical threshold, we prove that this number of left-overs is of order log(log(N)). These results suggest that performance of file swarming systems does not depend critically on either altruistic user behavior, or on load balancing strategies such as rarest first
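
A drastically simplified simulation of the closed-system scenario (our toy model, not the paper's analysis): every user misses exactly one coupon type, uniformly random pairs meet, and two users with different missing types complete each other and leave:

    import random

    def leftovers(n_users, k_coupons, trials=50):
        """Mean number of users left stranded once only one missing
        coupon type remains in the closed system."""
        total = 0
        for _ in range(trials):
            counts = [0] * k_coupons      # counts[t]: users missing type t
            for _ in range(n_users):
                counts[random.randrange(k_coupons)] += 1
            alive = [t for t in range(k_coupons) if counts[t] > 0]
            while len(alive) > 1:
                a, b = random.choices(alive,
                                      weights=[counts[t] for t in alive], k=2)
                if a != b:                # a same-type meeting is a no-op
                    counts[a] -= 1
                    counts[b] -= 1
                    alive = [t for t in alive if counts[t] > 0]
            total += sum(counts)
        return total / trials

    print(leftovers(1000, 2))     # large surplus with few coupon types
    print(leftovers(1000, 16))    # shrinks as the number of types grows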

[Go to top]

Free Riding on Gnutella Revisited: The Bell Tolls? (PDF)
by Daniel Hughes, Geoff Coulson, and James Walkerdine.
In IEEE Distributed Systems Online 6, June 2005. (BibTeX entry) (Download bibtex record)
(direct link)

Individuals who use peer-to-peer (P2P) file-sharing networks such as Gnutella face a social dilemma. They must decide whether to contribute to the common good by sharing files or to maximize their personal experience by free riding, downloading files while not contributing any to the network. Individuals gain no personal benefits from uploading files (in fact, it's inconvenient), so it's "rational" for users to free ride. However, significant numbers of free riders degrade the entire system's utility, creating a "tragedy of the digital commons." In this article, a new analysis of free riding on the Gnutella network updates data from 2000 and points to an increasing downgrade in the network's overall performance and the emergence of a "metatragedy" of the commons among Gnutella developers

[Go to top]

Hidden-action in multi-hop routing (PDF)
by Michal Feldman, John Chuang, Ion Stoica, and Scott Shenker.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action

[Go to top]

Off-line Karma: A Decentralized Currency for Peer-to-peer and Grid Applications (PDF)
by Flavio D. Garcia and Jaap-Henk Hoepman.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Peer-to-peer (P2P) and grid systems allow their users to exchange information and share resources, with little centralised or hierarchical control, instead relying on the fairness of the users to make roughly as much resources available as they use. To enforce this balance, some kind of currency or barter (called karma) is needed that must be exchanged for resources thus limiting abuse. We present a completely decentralised, off-line karma implementation for P2P and grid systems, that detects double-spending and other types of fraud under varying adversarial scenarios. The system is based on tracing the spending pattern of coins, and distributing the normally central role of a bank over a predetermined, but random, selection of nodes. The system is designed to allow nodes to join and leave the system at arbitrary times

[Go to top]

Provable Anonymity for Networks of Mixes (PDF)
by Marek Klonowski and Miroslaw Kutylowski.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

We analyze networks of mixes used for providing untraceable communication. We consider a network consisting of k mixes working in parallel and exchanging the outputs – which is the most natural architecture for composing mixes of a certain size into networks able to mix a larger number of inputs at once. We prove that after O(log k) rounds the network considered provides a fair level of privacy protection for any number of messages. No mathematical proof of this kind has been published before. We show that if at least one server is corrupted we need substantially more rounds to meet the same requirements of privacy protection

[Go to top]

Reading File Metadata with extract and libextractor
by Christian Grothoff.
In Linux Journal 6-2005, June 2005. (BibTeX entry) (Download bibtex record)
(direct link) (website)

[Go to top]

Some observations on BitTorrent performance (PDF)
by Ashwin R. Bharambe, Cormac Herley, and Venkata N. Padmanabhan.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper, we present a simulation-based study of BitTorrent. Our results confirm that BitTorrent performs near-optimally in terms of uplink bandwidth utilization and download time, except under certain extreme conditions. On fairness, however, our work shows that low bandwidth peers systematically download more than they upload to the network when high bandwidth peers are present. We find that the rate-based tit-for-tat policy is not effective in preventing unfairness. We show how simple changes to the tracker and a stricter, block-based tit-for-tat policy greatly improve fairness, while maintaining high utilization

[Go to top]

Decentralized Schemes for Size Estimation in Large and Dynamic Groups (PDF)
by Dionysios Kostoulas, Dimitrios Psaltoulis, Indranil Gupta, Kenneth P. Birman, and Alan Demers.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Large-scale and dynamically changing distributed systems such as the Grid, peer-to-peer overlays, etc., need to collect several kinds of global statistics in a decentralized manner. In this paper, we tackle a specific statistic collection problem called Group Size Estimation, for estimating the number of non-faulty processes present in the global group at any given point of time. We present two new decentralized algorithms for estimation in dynamic groups, analyze the algorithms, and experimentally evaluate them using real-life traces. One scheme is active: it spreads a gossip into the overlay first, and then samples the receipt times of this gossip at different processes. The second scheme is passive: it measures the density of processes when their identifiers are hashed into a real interval. Both schemes have low latency, scalable per-process overheads, and provide high levels of probabilistic accuracy for the estimate. They are implemented as part of a size estimation utility called PeerCounter that can be incorporated modularly into standard peer-to-peer overlays. We present experimental results from both the simulations and PeerCounter, running on a cluster of 33 Linux servers
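
The passive scheme boils down to a density estimate. A compact sketch (ours; in a real deployment a node would only ever see the identifiers hashing into the monitored sub-interval):

    import hashlib

    def estimate_size(node_ids, interval=0.02):
        """Hash node ids to points in [0,1); the count landing in a fixed
        sub-interval, scaled by its length, estimates the group size."""
        def to_unit(nid):
            digest = hashlib.sha256(nid.encode()).digest()
            return int.from_bytes(digest[:8], "big") / 2**64
        hits = sum(1 for nid in node_ids if to_unit(nid) < interval)
        return hits / interval

    ids = [f"node-{i}" for i in range(50000)]
    print(estimate_size(ids))     # unbiased: close to 50000 on average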

[Go to top]

Determining the Peer Resource Contributions in a P2P Contract (PDF)
by Behrooz Khorshadi, Xin Liu, and Dipak Ghosal.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In this paper we study a scheme called P2P contract which explicitly specifies the resource contributions that are required from the peers. In particular, we consider a P2P file sharing system in which when a peer downloads the file it is required to serve the file to up to N other peers within a maximum period of time T. We study the behavior of this contribution scheme in both centralized and decentralized P2P networks. In a centralized architecture, new requests are forwarded to a central server which hands out the contract along with a list of peers from which the file can be downloaded. We show that a simple fixed contract (i.e., fixed values of N and T) is sufficient to create the required server capacity which adapts to the load. Furthermore, we show that T, the time part of the contract, is a more important control parameter than N. In the case of a decentralized P2P architecture, each new request is broadcast to a certain neighborhood determined by the time-to-live (TTL) parameter. Each server receiving the request independently doles out a contract and the requesting peer chooses the one which is least constraining. If there are no servers in the neighborhood, the request fails. To achieve a good request success ratio, we propose an adaptive scheme to set the contracts without requiring global information. Through both analysis and simulation, we show that the proposed scheme adapts to the load and achieves low request failure rate with high server efficiency

[Go to top]

Overcoming free-riding behavior in peer-to-peer systems (PDF)
by Michal Feldman and John Chuang.
In ACM SIGecom Exchanges 5, July 2005, pages 41-50. (BibTeX entry) (Download bibtex record)
(direct link) (website)

While the fundamental premise of peer-to-peer (P2P) systems is that of voluntary resource sharing among individual peers, there is an inherent tension between individual rationality and collective welfare that threatens the viability of these systems. This paper surveys recent research at the intersection of economics and computer science that targets the design of distributed systems consisting of rational participants with diverse and selfish interests. In particular, we discuss major findings and open questions related to free-riding in P2P systems: factors affecting the degree of free-riding, incentive mechanisms to encourage user cooperation, and challenges in the design of incentive mechanisms for P2P systems

[Go to top]

Preprocessing techniques for accelerating the DCOP algorithm ADOPT (PDF)
by Syed Ali, Sven Koenig, and Milind Tambe.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

Methods for solving Distributed Constraint Optimization Problems (DCOP) have emerged as key techniques for distributed reasoning. Yet, their application faces significant hurdles in many multiagent domains due to their inefficiency. Preprocessing techniques have successfully been used to speed up algorithms for centralized constraint satisfaction problems. This paper introduces a framework of different preprocessing techniques that are based on dynamic programming and speed up ADOPT, an asynchronous complete and optimal DCOP algorithm. We investigate when preprocessing is useful and which factors influence the resulting speedups in two DCOP domains, namely graph coloring and distributed sensor networks. Our experimental results demonstrate that our preprocessing techniques are fast and can speed up ADOPT by an order of magnitude

[Go to top]

Query Forwarding Algorithm Supporting Initiator Anonymity in GNUnet (PDF)
by Kohei Tatara, Y. Hori, and Kouichi Sakurai.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link)

Anonymity in a peer-to-peer network means that it is difficult to associate a particular communication with a sender or a recipient. Recently, an anonymous peer-to-peer framework, called GNUnet, was developed. A primary feature of GNUnet is resistance to traffic analysis. However, Kügler analyzed a routing protocol in GNUnet and pointed out the traceability of the initiator. In this paper, we propose an alternative routing protocol applicable in GNUnet that is resistant to Kügler's shortcut attacks

[Go to top]

Selfish Routing with Incomplete Information (PDF)
by Martin Gairing, Burkhard Monien, and Karsten Tiemann.
<Odd type conference>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

In his seminal work Harsanyi introduced an elegant approach to study non-cooperative games with incomplete information where the players are uncertain about some parameters. To model such games he introduced the Harsanyi transformation, which converts a game with incomplete information to a strategic game where players may have different types. In the resulting Bayesian game players' uncertainty about each other's types is described by a probability distribution over all possible type profiles. In this work, we introduce a particular selfish routing game with incomplete information that we call Bayesian routing game. Here, n selfish users wish to assign their traffic to one of m links. Users do not know each other's traffic. Following Harsanyi's approach, we introduce for each user a set of possible types. This paper presents a comprehensive collection of results for the Bayesian routing game. We prove, with the help of a potential function, that every Bayesian routing game possesses a pure Bayesian Nash equilibrium. For the model of identical links and independent type distribution we give a polynomial time algorithm to compute a pure Bayesian Nash equilibrium. We study structural properties of fully mixed Bayesian Nash equilibria for the model of identical links and show that they maximize individual cost. In general there exists more than one fully mixed Bayesian Nash equilibrium. We characterize the class of fully mixed Bayesian Nash equilibria in the case of independent type distribution. We conclude with results on the coordination ratio for the model of identical links for three social cost measures, that is, social cost as expected maximum congestion, as sum of individual costs, and as maximum individual cost. For the latter two we are able to give (asymptotically) tight bounds using our results on fully mixed Bayesian Nash equilibria. To the best of our knowledge this is the first time that mixed Bayesian Nash equilibria have been studied in conjunction with social cost

[Go to top]

The Topology of Covert Conflict (PDF)
by Shishir Nagaraja and Ross Anderson.
<Odd type booklet>. (BibTeX entry) (Download bibtex record)
(direct link) (website)

This is a short talk on the topology of covert conflict, comprising joint work I've been doing with Ross Anderson. The background of this work is the following. We consider a conflict, and there are parties to the conflict. There is communication going on that can be abstracted as a network of nodes (parties) and links (social ties between the nodes). We contend that once you've got a conflict and you've got enough parties to it, these parties start communicating as a result of the conflict. They form connections; that influences the conflict, and the dynamics of the conflict in turn feed the connectivity of the unfolding network. Modern conflicts often turn on connectivity: consider, for instance, anything from the American army's attack on the Taleban in Afghanistan and elsewhere, to medics who are trying to battle a disease like AIDS. All of these turn on making strategic decisions about which nodes to go after in the network. For instance, you could consider that a good first place to give out condoms and start an AIDS programme would be with prostitutes

[Go to top]

Cooperation among strangers with limited information about reputation (PDF)
by Gary E. Bolton, Elena Katok, and Axel Ockenfels.
In Journal of Public Economics 89, August 2005, pages 1457-1468. (BibTeX entry) (Download bibtex record)
(direct link) (website)

The amount of institutional intervention necessary to secure efficiency-enhancing cooperation in markets and organizations, in circumstances where interactions take place among essentially strangers, depends critically on the amount of information informal reputation mechanisms need to transmit. Models based on subgame perfection find that the information necessary to support cooperation is recursive in nature and thus information generating and processing requirements are quite demanding. Models that do not rely on subgame perfection, on the other hand, suggest that the information demands may be quite modest. The experiment we present indicates that even without any reputation information there is a non-negligible amount of cooperation that is, however, quite sensitive to the cooperation costs. For high costs, providing information about a partner's immediate past action increases