

in this proceedings version of the paper; the formal treatment can be found in
the full version [3].

2 Preliminaries

2.1 Communication and Adversary

We consider a synchronized authenticated-link model where a message from player P_i to player P_j is delivered within a constant delay and accepted by P_j if and only if it was sent from P_i to P_j. Moreover, we assume a broadcast channel with which every player can send a message authentically such that all players receive the same message.
We consider a central adversary A which may corrupt players at will. Corrupting a player P_i allows A to read P_i's internal state and to act on P_i's behalf from that point on. In the non-erasure model, A additionally gains P_i's complete history. A is called t-limited if it corrupts at most t players. Furthermore, A is called static if it corrupts the players before the protocol starts, and it is called adaptive if it corrupts the players during the execution of the protocol, depending on what it has seen so far.
320 Masayuki Abe and Serge Fehr

2.2 Canetti's Universally Composable Framework
In order to formally specify and prove the security of our protocols, we use the universally composable (UC) framework of Canetti [4]. We briefly sketch this framework here; for a more elaborate description we refer to the full version of the paper [3] or to the literature. The UC framework allows one to define and prove secure cryptographic protocols as stand-alone protocols, while at the same time guaranteeing security in any application by means of a general composition theorem. In order to define a protocol secure, it is compared with an ideal functionality F. Such a functionality can be thought of as a trusted party with whom every player can communicate privately and which honestly executes a number of specified commands. The UC security definition essentially requires that for every (real-life) adversary A attacking the protocol there exists an ideal-life adversary S, also called simulator, which gets to attack the ideal-life scenario where only the players, F and S are present, such that S achieves "the same" as A could have achieved by an attack on the real protocol. In the framework, this is formalized by considering an environment Z which provides inputs to and collects outputs from the honest players, and which communicates in the real-life execution with A and in the ideal-life execution with S. It is required that Z cannot tell the difference between the real-life and the ideal-life executions, meaning that its respective outputs in the two cases are computationally indistinguishable.
As mentioned above, the UC framework provides a very general composition theorem: For any protocol that securely realizes a functionality F in the so-called G-hybrid model, meaning that it may use the functionality G as a subroutine, the composed protocol that replaces G with a secure protocol for G also securely realizes F (in the real-life model).

2.3 Single-Inconsistent-Player UC Framework
The single-inconsistent-player (SIP) technique of [7] is often used to achieve both adaptive security and efficiency. A protocol in the SIP model is secure (i.e. securely simulatable in the classical model of computation) if the adversary does not corrupt a designated player which is chosen independently at random before the protocol starts. Using the terms of the UC framework, it means that the simulator S is given as input the identity of a randomly chosen player P_sip, and S is required to work well as long as P_sip is uncorrupt. In the case of a t-limited adversary with t < n/2, this reduces S's success probability by at most a factor of 1/2. This still guarantees security in that whatever A can do in the real-life model, S has a good chance of achieving the same in the ideal-life model. Indeed, in the classical sense, a simulator is already considered successful if it works with probability better than negligible. However, with such a simulator the composition theorem no longer works in its full generality. To minimize the effect of the SIP approach, we have to limit the set of players to be the same in all subsidiary protocols. This way, P_sip can be sampled once and for all, and the condition that P_sip remains uncorrupt applies to (and either holds or does not hold) simultaneously for all protocols. With this limitation, the composition theorem essentially works as before. See also the full version of the paper [3].
Adaptively Secure Feldman VSS and Applications 321

2.4 Some Functionalities
We briefly introduce some functionalities we use throughout the paper. For for-
mal descriptions and more elaborate discussions, we refer to [3].
Secure-Message-Transmission Functionalities: The secure-message-transmission functionality, as defined in [4], is denoted by F_SMT. On receiving (send, sid, m) from the sender, F_SMT sends (sid, m) to the receiver and (sid) to the (ideal-life) adversary S, and halts. If the length of m may vary, it is also given to S.
Due to some subtle technicalities, F_SMT cannot be securely realized in a synchronized communication model against an active adversary. The reason is that in any (interactive) candidate realization the adversary can corrupt the sender during the execution of the protocol and change the message to be securely transmitted (or abort the protocol), while this cannot be achieved by S: indeed, once F_SMT is invoked, it is always completed with the initial input (and the output is delivered to the receiver). To overcome this problem, we introduce spooled SMT. This is captured by F_SSMT, which first spools the sent message m and only delivers the spooled message (or a possibly different m' in case of a corrupt sender) when receiving another (the actual) send command from the sender. This allows S to change m after F_SSMT has been launched, simply by corrupting the sender after the spool- but before the send-command.
F_SSMT can be realized over a public network using non-committing encryption as introduced by Canetti et al. in [6]. However, this is rather expensive, as the best known schemes [9] still bear a considerable ciphertext expansion. Instead, our results are based on efficient, though not fully adaptively secure, realizations.
In our construction, we will also use an extended version of the functionality which allows the sender, in case of a dispute, to convince the other players of the message sent to the receiver. This is specified by F_SSMTO, spooled SMT with opening, which works as F_SSMT except that it additionally allows an open-command sent from the sender (and only from the sender), upon which it announces the transmitted message m to all players.
Committed-VSS Functionalities: An advantage of using Feldman and Pedersen VSS in protocol design is that besides producing a (correct) sharing, they also commit the dealer to the shared secret. Often, this commitment can be, and is, used in upper-level protocols. However, in the definition of UC-secure VSS given in [4], such a commitment is hidden in the protocol and not part of the functionality, and thus not available for external protocols. We introduce the notion of committed VSS to overcome this inconvenience.
Let Com_K be an (efficiently computable) commitment function, indexed by a commitment key K. Typically, K is sampled by a poly-time generator (on input the security parameter). A commitment for a secret s is computed as C = Com_K(s; r), where we use the semicolon ';' to express that the second argument, r, is chosen randomly (from an appropriate domain) unless it is explicitly given.
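As an illustration of such a keyed commitment function, the following sketch instantiates Com_K with Pedersen's scheme Com_{(g,h)}(s; r) = g^s h^r, which appears later in the paper. The tiny parameters, and the demonstration of the trapdoor property with a known trapdoor, are assumptions for illustration only:

```python
import random

# Minimal sketch of a keyed commitment Com_K(s; r), instantiated with
# Pedersen's scheme: Com_{(g,h)}(s; r) = g^s * h^r mod p. Toy parameters.
p, q = 23, 11        # safe prime p = 2q + 1
g, h = 4, 9          # key K = (g, h); here h = g^8, i.e. trapdoor tau = 8

def commit(s, r):
    """C = Com_{(g,h)}(s; r) = g^s * h^r mod p."""
    return pow(g, s, p) * pow(h, r, p) % p

s = 7
r = random.randrange(q)      # the ';' in Com(s; r): r is sampled at random
C = commit(s, r)
# Opening: reveal (s, r); anyone recomputes the commitment and compares.
assert C == commit(s, r)

# Trapdoor (equivocation) property, for illustration: knowing tau with
# h = g^tau, the committer can open C to ANY s' by adjusting r.
tau = 8
s2 = 3
r2 = (r + (s - s2) * pow(tau, q - 2, q)) % q   # tau^{-1} via Fermat
assert commit(s2, r2) == C
```

The last two lines show why the trapdoor log_g(h) must stay unknown to the committers: whoever knows it is not bound to the committed value.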
A committed VSS (with respect to commitment scheme Com) is specified by a functionality F_CVSS which sends (shared, sid, C = Com_K(s; r)) to all players and S on receiving (share, sid, s, r) from the dealer, and which later, on receiving (open, sid) from t + 1 distinct players, sends (opened, sid, s, r) to all players and S.
Due to the same technical problem as above, if the dealer may be adaptively corrupted, we need to incorporate spooling into the committed VSS functionality: the resulting F_SCVSS first spools s (and gives C to S) before awaiting and executing the actual share-command (for the original s or a new s').
We would like to mention that for certain candidate protocols for committed VSS (with spooling), whose security relies on the commitment scheme Com, the generation of the key K needs to be added to the VSS functionality in order to be able to prove them secure in the UC framework. This is for instance the case for Pedersen's VSS scheme, as discussed in [3].

2.5 The Discrete-Log Setting
Let be a security parameter and be a prime of size Let denote a group
of order and let be its generator. We use multiplicative notation for the
group operation of Some of our constructions require to be the
multiplicative subgroup of with prime Unless otherwise noted,
all arithmetics are done in or and should in each case be clear from the
Throughout, we assume that such is given to all players, and that
the Decision Diffie-Hellman problem for is intractable, meaning that
the respective uniform distributions over
and are computationally indistinguishable. This assumption im-
plies the discrete-log assumption for given a random it is
computationally infeasible to compute
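To make the setting concrete, here is a toy sketch of such a group. The tiny parameters p = 23, q = 11, g = 4 are hypothetical demonstration values, far below the size a real security parameter k would require:

```python
# Toy discrete-log setting: G_q is the order-q subgroup (the squares) of
# Z_p* for a safe prime p = 2q + 1. Parameters are illustrative only.
p = 23                 # safe prime: p = 2q + 1
q = (p - 1) // 2       # prime group order q = 11
g = 4                  # 4 = 2^2 is a square mod 23, hence has order q

def is_in_Gq(x: int) -> bool:
    """Membership test: x is in G_q iff x^q = 1 (mod p)."""
    return pow(x, q, p) == 1

# Sanity checks: g generates a subgroup of prime order q.
assert is_in_Gq(g) and g != 1 and pow(g, q, p) == 1

# A DDH tuple (g^a, g^b, g^(ab)) versus a "random" tuple (g^a, g^b, g^c);
# for cryptographically large parameters these are assumed indistinguishable.
a, b, c = 3, 7, 5
dh_tuple     = (pow(g, a, p), pow(g, b, p), pow(g, a * b % q, p))
random_tuple = (pow(g, a, p), pow(g, b, p), pow(g, c, p))
```

In this toy group DDH is of course trivially solvable by brute force; the sketch only fixes notation for the constructions that follow.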

3 The Original Feldman VSS
The Basic Scheme: Let x_1, …, x_n ∈ Z_q be distinct and non-zero. In order to share a (random) secret s ∈ Z_q, the dealer selects a Shamir sharing polynomial f(X) = s + a_1 X + ⋯ + a_t X^t ∈ Z_q[X] and sends s_i = f(x_i) privately to player P_i. Additionally, he broadcasts A_0 = g^s as well as A_j = g^{a_j} for j = 1, …, t. Each player P_i now verifies whether

    g^{s_i} = ∏_{j=0}^{t} A_j^{x_i^j}.    (1)

If it does not hold for some i, then player P_i broadcasts an accusation against the dealer, who has to respond by broadcasting s_i such that (1) holds. If he fails, then the execution is rejected, while otherwise P_i uses the new s_i as his share. Correct reconstruction is achieved simply by filtering out shares that do not satisfy (1).
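The sharing and verification steps above can be sketched as follows. This is a toy illustration with hypothetical small parameters; a real deployment would use a cryptographically large group:

```python
import random

# Toy sketch of the basic Feldman VSS: dealer's sharing and a player's
# check of equation (1). Parameters are illustrative only.
p, q, g = 23, 11, 4
n, t = 5, 2
xs = list(range(1, n + 1))          # distinct non-zero X-coordinates

def share(s):
    """Dealer: Shamir polynomial f with f(0) = s, plus Feldman commitments."""
    coeffs = [s] + [random.randrange(q) for _ in range(t)]   # f = s + a_1 X + ... + a_t X^t
    shares = [sum(a * pow(x, j, q) for j, a in enumerate(coeffs)) % q
              for x in xs]                                   # s_i = f(x_i)
    commitments = [pow(g, a, p) for a in coeffs]             # A_j = g^{a_j}, broadcast
    return shares, commitments

def verify(x_i, s_i, commitments):
    """Player P_i checks (1): g^{s_i} == prod_j A_j^{x_i^j} (mod p)."""
    rhs = 1
    for j, A_j in enumerate(commitments):
        rhs = rhs * pow(A_j, pow(x_i, j, q), p) % p          # exponents live in Z_q
    return pow(g, s_i, p) == rhs

shares, commitments = share(s=7)
assert all(verify(x, s_i, commitments) for x, s_i in zip(xs, shares))
```

Note that A_0 = g^s is public, which is exactly why plain Feldman VSS hides the secret only computationally.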
This scheme is proved secure against a static adversary: Assume that A corrupts the players P_1, …, P_t. Given A_0 = g^s, the simulator S simply chooses random shares s_1, …, s_t for the corrupted players, and it computes A_1, …, A_t with the right distribution from A_0 and g^{s_1}, …, g^{s_t} by applying appropriate Lagrange interpolation coefficients "in the exponent". Informally, this shows that A learns nothing about s beyond A_0 = g^s.
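The simulator's core tool, Lagrange interpolation "in the exponent", can be sketched as follows: from g^{f(x_i)} at t+1 points one computes g^{f(x)} at any further point without ever knowing f. The concrete polynomial and parameters are illustrative assumptions:

```python
# Sketch of Lagrange interpolation "in the exponent": given g^{f(x_i)} at
# t+1 points, compute g^{f(x)} at any x without knowing f. Toy parameters.
p, q, g = 23, 11, 4

def lagrange_coeff(points, i, x):
    """lambda_i(x) over Z_q for the interpolation points."""
    num, den = 1, 1
    for j, x_j in enumerate(points):
        if j != i:
            num = num * (x - x_j) % q
            den = den * (points[i] - x_j) % q
    return num * pow(den, q - 2, q) % q      # q prime, so den^{-1} = den^{q-2}

def interpolate_in_exponent(points, group_values, x):
    """g^{f(x)} = prod_i (g^{f(x_i)})^{lambda_i(x)} mod p."""
    res = 1
    for i, G_i in enumerate(group_values):
        res = res * pow(G_i, lagrange_coeff(points, i, x), p) % p
    return res

# Example: f(X) = 7 + 3X + 5X^2 over Z_11 (degree t = 2, so t+1 = 3 points).
f = lambda x: (7 + 3 * x + 5 * x * x) % q
points = [1, 2, 3]
group_values = [pow(g, f(x), p) for x in points]
assert interpolate_in_exponent(points, group_values, 0) == pow(g, 7, p)  # g^s
```

The static simulator runs essentially this computation in reverse: it fixes A_0 = g^s and the corrupted shares, then interpolates the remaining public values A_1, …, A_t.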
This simulation-based proof, though, fails completely if the adversary may corrupt players adaptively, i.e., during or even after the execution of the protocol. The problem is that, given A_0 = g^s, the simulator S needs to come up with A_1, …, A_t such that, if A corrupts some player P_i at some later point, S can serve A with a share s_i such that (1) is satisfied. However, it is not known how to successfully provide such an s_i for any dynamic choice of P_i without knowing s, unless A corrupts the dealer to start with.

Adaptive Security with Erasure: Feldman addressed adaptive security by providing a set-up phase where the dealer assigns a private X-coordinate x_i to every player P_i. Additionally, he needs to convince the players of the uniqueness of their x_i. This is done in the following way. Let E be a semantically-secure public-key encryption function, with the public-key chosen by the dealer.
1. The dealer computes an encryption e_i = E(x_i; r_i) (with random r_i) for every x_i, and he chooses a random permutation π of {1, …, n}. Then, he broadcasts e_1, …, e_n ordered in such a way that e_i appears in position π(i), and he privately sends (x_i, r_i, π(i)) to P_i.
2. Each P_i locates e_i in position π(i) and verifies whether e_i = E(x_i; r_i) and, if it holds, erases r_i. The dealer erases r_1, …, r_n too.
After the erasure is completed, the dealer performs the basic Feldman VSS with X-coordinates x_1, …, x_n. We stress that it is important that the erasures of the r_i must be done before entering the sharing phase. On reconstruction, each player P_i broadcasts (x_i, s_i).
Since each e_i can be opened only to x_i, player P_i is convinced of the uniqueness of x_i. Simulation against an adaptive adversary is argued separately for each phase. If a player gets corrupted in the set-up phase, the simulator just honestly gives the internal state of the corrupt player to the adversary. Nothing needs to be simulated. Then, the sharing phase is simulated similarly as for the static adversary, except that, since S does not know which players will be corrupted, it predetermines shares for a random size-t subset of the X-coordinates, and whenever a player P_i gets corrupted, one of these prepared X-coordinates is assigned to P_i as his x_i. Since r_i has already been erased, it is computationally infeasible to determine whether e_i in position π(i) is an encryption of the assigned x_i or not.

An Obstacle in Dispute Resolution: We identify a problem in the dispute resolution of the above scheme¹. Suppose that honest P_i accuses the dealer, and that,
¹ No dispute resolution procedure is shown in [10]. It is said that a player simply rejects the dealer when he receives an incorrect share (and the dealer is disqualified if more than t players reject). But this works only if t < n/3.


instead of publishing the correct (x_i, s_i), the corrupt dealer responds by publishing the (x_j, s_j) of another honest player P_j. Since r_i and r_j have already been erased, both P_i and P_j have no way to prove that the published x_j is different from the original assignment.
To efficiently settle such a dispute, a digital signature scheme will be needed. That is, the dealer sends x_i together with his signature on it in the set-up phase. This allows P_i to publish the signature when he accuses the dealer in the sharing phase. Without using digital signatures, additional rounds are needed to settle the dispute: If P_j observes that his x_j is published to respond to the accusation from P_i, then P_j also accuses the dealer, and the dealer publishes the data for P_j this time. After repeating this accuse-then-publish process at most n times, the dealer either gets stuck or exposes correct shares.

4 Adaptive Security Without Overheads and Erasures
The goal of this section is an adaptively secure Feldman VSS that provides (1) security without the need for reliably erasing data, (2) efficient dispute resolution without digital signatures, and (3) efficient realization over a public network, i.e. without secure channels (or expensive non-committing encryption).
The first two goals are achieved by a simple modification of the original Feldman VSS. The idea is to replace the encryption function E with instantiations of a trapdoor commitment scheme with certain properties, whose commitment keys are provided separately by each player so that the trapdoors are not known to the dealer. We show this modified Feldman VSS and its security proof in [3]. Since Pedersen's commitment scheme turns out to be good enough for this purpose, we have a scheme that meets (1) and (2) solely under the DL assumption. Furthermore, the modified scheme is more efficient in the number of communication rounds than the original adaptively-secure Feldman VSS.
Hence, as far as the secure-channels model is concerned, we are done. Unfortunately, we do not know how to efficiently implement the above scheme over a public network, even when limiting the power of the adversary as we do in Sect. 4.2 below. Therefore, we design a new scheme which allows to seamlessly install our efficient components for public communication presented later.

4.1 Construction in a Hybrid Model
Our approach is to let each player P_i select a random non-zero X-coordinate x_i and send it privately to the dealer. When corrupted, a simulated player reveals a (fake) X-coordinate that has been prepared in advance to be consistent with the transcript, as in Feldman's approach. On the other hand, in case of a dispute, each player should be able to convince the other players of his x_i. This is achieved by initially sending x_i to the dealer using secure message transmission with opening, as specified in Sect. 2.4 by functionality F_SSMTO. The scheme is detailed in Fig. 1 in the (F_SSMT, F_SSMTO)-hybrid model.
Consider Feldman's commitment scheme with base g: a commitment for a secret s ∈ Z_q is computed as C = g^s (without using randomness).

Fig. 1. Adaptively secure Feldman VSS in the (F_SSMT, F_SSMTO)-hybrid model.

Proposition 1. The protocol shown in Fig. 1 securely realizes F_SCVSS in the (F_SSMT, F_SSMTO)-hybrid model against an adaptive t-limited adversary for t < n/2.

The proof is given in [3]. Essentially, it uses the same idea as Feldman's version: the simulator S prepares (random) shares for t X-coordinates, and it assigns to every newly corrupt player P_i one of these X-coordinates as x_i and the corresponding share as s_i.

4.2 Efficient Composition to the Real-Life Protocol
This section provides protocols that realize and over the public
network with broadcast, i.e., without secure channels. Then, by applying the
composition theorem, one can have adaptively secure Feldman VSS as a real-life
protocol. As we shall see, these realizations are efficient but have some limitation
on the adversary, which though can be successfully overcome in our applications.
Our constructions require an efficient bidirectional mapping between Z_q and G_q, while the DDH problem should remain hard to solve. This is the case when G_q is the multiplicative subgroup of order q of Z_p* with prime p = 2q + 1. Indeed, encoding m ∈ Z_q into G_q can be done by taking the representant of m in {1, …, q} if that representant is a square modulo p, and p minus the representant otherwise. This encoder is denoted by Enc, the corresponding decoder by Dec.
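A minimal sketch of this encoder and decoder follows; the convention of taking the representant of m ∈ {0, …, q−1} to be m + 1 ∈ {1, …, q} is our illustrative assumption:

```python
# Sketch of the bidirectional encoding between Z_q and the order-q subgroup
# G_q of Z_p* for a safe prime p = 2q + 1: for each mbar in {1, ..., q},
# exactly one of {mbar, p - mbar} is a square mod p. Toy parameters.
p = 23
q = 11

def enc(m: int) -> int:
    """Encode m in Z_q = {0, ..., q-1} into G_q."""
    mbar = m + 1                          # representant in {1, ..., q}
    return mbar if pow(mbar, q, p) == 1 else p - mbar

def dec(x: int) -> int:
    """Decode x in G_q back to Z_q."""
    mbar = x if x <= q else p - x
    return mbar - 1

for m in range(q):
    x = enc(m)
    assert pow(x, q, p) == 1              # x is indeed in G_q
    assert dec(x) == m                    # enc and dec are mutually inverse
```

The mapping is a bijection, so a uniformly random exponent w gives a uniformly random message Dec(g^w); this fact is used in the proof of Theorem 1 below.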
Receiver Non-committing Message Transmission: By π_RNC we denote a protocol that realizes F_SMT (or F_SSMT) with a receiver non-committing feature. That is, it remains secure even if the receiver is adaptively corrupted (in the non-erasure model), while the sender may only be statically corrupted. Note that with such a restriction on the sender, F_SMT can be realized (without spooling). We review the construction of [13] (adapted to accept messages m ∈ Z_q), which is originally designed in a classical model but can be fit into the UC model. A proof is given in the full version of the paper [3].

Fig. 2. Protocol π_RNC for receiver non-committing message transmission.

Lemma 1. Under the DDH assumption, protocol π_RNC securely realizes F_SMT (or F_SSMT) against an adaptive adversary if the sender is only statically corrupt and S is aware of w with g^w = Enc(m).

The assumption that the ideal-life adversary S is aware of the DL of Enc(m) seems quite restrictive for π_RNC to be a general stand-alone tool. It is however acceptable for our purpose, as m will be chosen by S in an upper-level protocol (playing the role of the to-be-corrupted sender) such that it knows the DL of Enc(m). We stress that this assumption does not mean at all that S is given any kind of power to solve the DL problem.

Sender Non-committing Message Transmission with Opening: A protocol that realizes F_SMT with a sender non-committing feature follows easily from π_RNC: The receiver simply uses π_RNC to securely send a randomly chosen K ∈ Z_q to the sender (precisely, the receiver sends the message K via π_RNC, and then the sender sends e = m + K to the receiver, who computes m as m = e − K).
We also consider the following variant of this protocol, which we denote by π_SNCO. All communication is done over the broadcast channel, and in an additional phase, the opening phase, the sender publishes m and the randomness he privately sampled for the secure transmission of K, and every player verifies whether these values are consistent with the broadcast transcript and computes K and m = e − K.
Lemma 2. Under the DDH assumption, the above protocol securely realizes F_SMT, and π_SNCO securely realizes F_SSMTO, against an adaptive adversary if the receiver is only statically corrupt and S is aware of w with g^w = Enc(m).

The proof of Lemma 2 is similar to that of Lemma 1, although slightly more involved. For completeness, it is included in [3].

Composition with the Efficient Realizations: We now show that when the functionalities F_SSMT and F_SSMTO in the hybrid protocol are implemented by π_RNC and π_SNCO, respectively, then the composed protocol securely realizes F_CVSS (or F_SCVSS) in some weakened sense, as stated below.

Theorem 1. Implementing the functionality F_SSMTO in step F-1 of the hybrid protocol from Fig. 1 by π_SNCO, and F_SSMT in step F-2 by π_RNC, results in a secure realization of F_CVSS (or F_SCVSS) in the real-life model, assumed that (1) the adversary corrupts the dealer only statically, and (2) the adversary corrupts players only before the reconstruction phase.

Proof. The claim follows essentially from Proposition 1, Lemmas 1 and 2, and the composition theorem. It remains to show that the assumptions for Lemmas 1 and 2 are satisfied. By assumption (1) it is guaranteed that the sender in π_RNC and the receiver in π_SNCO (which in both cases is the dealer) is only statically corrupt. Furthermore, by (2) and the way S works in the proof of Proposition 1, the messages which are supposedly sent through F_SSMT and F_SSMTO, and for which S has to convince A as being the messages sent through π_RNC respectively π_SNCO, are the values s_i and x_i, all chosen randomly from Z_q (respectively by S). Hence, S could sample them just as well by choosing w ∈ Z_q at random and computing the message as Dec(g^w), such that the conditions for Lemmas 1 and 2 are indeed satisfied. Finally, as the dealer may only be statically corrupted, we do not need to care about spooling. Thus F_CVSS and F_SCVSS are equivalent here.

5 Applications to Threshold Cryptography
In this section, we propose several applications of the adaptively-secure Feldman VSS scheme from the previous section. Our main applications are a DLKG protocol and a UC threshold Schnorr signature scheme, though we also propose some related applications which might be of independent interest, like a trapdoor-key generation protocol for Pedersen's commitment scheme and distributed-verifier UC proofs of knowledge. Interestingly, even though our Feldman VSS scheme has restricted adaptive security, the applications remain fully adaptively secure in the (SIP) UC model and are not subject to the restrictions posed in Theorem 1.
To simplify terminology, from now on when referring to protocol π_FVSS, we mean the protocol from Fig. 1 with F_SSMT and F_SSMTO replaced by π_RNC and π_SNCO as specified in Theorem 1. Furthermore, it will at some point be convenient to use a different basis, say h, rather than the public parameter g in the core part of π_FVSS, such that for instance h^s will be published as C. This will be denoted by π_FVSS[h], and π_FVSS[h] obviously securely realizes F_SCVSS with respect to Feldman's commitment scheme with base h. We stress that this modification is not meant to affect the sub-protocols π_RNC and π_SNCO.

5.1 How to Generate the First Trapdoor Commitment-Key

In many protocols, a trapdoor commitment-key is considered as given by some trusted party, so that the trapdoor information is unknown to any player. If the trusted party is replaced with multi-party computation, as we usually do, the protocol should be designed not to use any common trapdoor commitment-key. In this section, we show a protocol that meets this requirement.
The players execute protocol π_CKG in Fig. 3. We assume that it is triggered by a player who sends init to all players. The protocol outputs a (trapdoor) commitment-key h for Pedersen's commitment scheme. Note that the corresponding trapdoor log_g(h) is not shared among the players (in the usual way).

Fig. 3. Commitment-key generation protocol π_CKG in the F_SCVSS-hybrid model.

Unfortunately, one cannot expect h to be random, as a rushing party can affect its distribution. However, the protocol inherits the following two properties, which are sufficient for our purpose. (1) A simulator that simulates π_CKG can compute the DL of h, and (2) given Y ∈ G_q, a simulator can embed Y into h so that, given log_g(h), the simulator can compute log_g(Y). The latter in particular implies that the adversary is not able to compute the trapdoor log_g(h).
Our idea for formally capturing such a notion is that the ideal functionality challenges the adversary by sending a random Y ∈ G_q and allows S to randomize it, so that Y is transformed into h in such a way that S knows the trapdoor for h if and only if it knows it for Y. This clearly captures (1) and (2) above.
Definition 1 (Commitment-Key Generation Functionality F_CKG).
1. On receiving (generate, sid) from a player, choose Y ∈_R G_q and send (sid, Y) to S.
2. On receiving (sid, Δ) from S, compute h = Y · g^Δ and send (com-key, sid, h) to all players and S.
Proposition 2. Protocol π_CKG in Fig. 3 securely realizes F_CKG against an adaptive t-limited adversary for t < n/2 in the SIP UC model.
The proof is given in the full version of the paper. Essentially, on receiving (sid, Y) from F_CKG, S simulates the SIP's call to F_SCVSS with input Y, and it sends to F_CKG the value Δ determined by the (extractable) contributions of the remaining players.
We claim that F_SCVSS in π_CKG can be securely realized by the protocol π_FVSS from Theorem 1. This may look contradictory, since π_FVSS is secure only against static corruption of the dealer as stated in Theorem 1, while in π_CKG every player acts as a dealer and may be adaptively corrupted. However, looking at the proof, except for the run launched by the SIP, S simulates all runs of π_FVSS honestly with true inputs. Hence, for these simulations, the situation is exactly as in the case where the dealer is statically corrupted and the secret is known to the simulator at the beginning. Furthermore, the reconstruction phase of π_FVSS is never invoked in π_CKG. Thus, the following holds.

Theorem 2. Implementing F_SCVSS in π_CKG of Fig. 3 by π_FVSS results in a secure realization of F_CKG against an adaptive t-limited adversary for t < n/2 in the (real-life) SIP UC model.

5.2 DL-Key Generation
This section constructs an adaptively secure protocol for DLKG, whose functionality is defined below. Clearly, from such a key-generation protocol (respectively functionality), one expects that it outputs a public-key y = g^x and in some hidden way produces the corresponding secret-key x (typically by having it shared among the players), such that x can be used for some cryptographic task like signing or decrypting if enough of the players agree [16]. However, as we want to view our protocol as a generic building block for threshold schemes, we simply require that the secret-key can be opened rather than be used for some specific task. In Sect. 5.3 we then show a concrete example threshold scheme based on our DLKG protocol.

Definition 2 (Threshold DL Key Generation Functionality F_DLKG).
1. On receiving (generate, sid) from a player, select x ∈_R Z_q, compute y = g^x, and send (key, sid, y) to all players and S.
2. On receiving (open, sid) from t + 1 players, send (private, sid, x) to all players and S.
Our realization of F_DLKG, protocol π_DLKG, is illustrated in Fig. 5 below, and makes use of (ordinary) Pedersen's VSS scheme, given in Fig. 4.

Fig. 4. Pedersen's VSS scheme.

We do not prove Pedersen's VSS secure in the UC framework, and in fact it is not (as a committed VSS against an adaptive adversary). The only security requirement we need is covered by the following well-known fact.

Lemma 3. Except with negligible probability, after the sharing phase of Pedersen's VSS, both the shares s_i and s'_i of the uncorrupt players are correct sharings of values s and s' such that C = g^s h^{s'}, and such that s is reconstructed in the reconstruction phase (and s and s' coincide with the dealer's choice in case he remains honest), or otherwise log_g(h) can be efficiently extracted from the adversary.
We write Pedersen-VSS(s; s') to denote an execution of the sharing phase of Pedersen's VSS with secret s and randomness s', with player P_i acting as dealer, and with the commitment values C_0, …, C_t generated as described in Fig. 4.
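Since Fig. 4 itself is not reproduced in this excerpt, the following sketch of the sharing phase follows the standard description of Pedersen's VSS (two polynomials f and f', joint commitments C_j = g^{a_j} h^{b_j}); parameters are toy values:

```python
import random

# Sketch of the sharing phase of Pedersen's VSS: the dealer shares s via f
# and auxiliary randomness s' via f', and broadcasts C_j = g^{a_j} * h^{b_j}.
# Standard description with toy parameters; Fig. 4 itself is not reproduced.
p, q, g, h = 23, 11, 4, 9
n, t = 5, 2
xs = list(range(1, n + 1))

def pedersen_share(s, s_prime):
    f  = [s]       + [random.randrange(q) for _ in range(t)]   # f(0)  = s
    fp = [s_prime] + [random.randrange(q) for _ in range(t)]   # f'(0) = s'
    shares = [(sum(a * pow(x, j, q) for j, a in enumerate(f))  % q,
               sum(b * pow(x, j, q) for j, b in enumerate(fp)) % q)
              for x in xs]                                     # (s_i, s'_i)
    commitments = [pow(g, a, p) * pow(h, b, p) % p for a, b in zip(f, fp)]
    return shares, commitments

def pedersen_verify(x_i, s_i, sp_i, commitments):
    """Check g^{s_i} * h^{s'_i} == prod_j C_j^{x_i^j} (mod p)."""
    rhs = 1
    for j, C_j in enumerate(commitments):
        rhs = rhs * pow(C_j, pow(x_i, j, q), p) % p
    return pow(g, s_i, p) * pow(h, sp_i, p) % p == rhs

shares, commitments = pedersen_share(s=7, s_prime=5)
assert all(pedersen_verify(x, si, spi, commitments)
           for x, (si, spi) in zip(xs, shares))
# C_0 = g^s * h^{s'} is the Pedersen commitment the dealer is bound to.
assert commitments[0] == pow(g, 7, p) * pow(h, 5, p) % p
```

In contrast to Feldman's scheme, C_0 here hides s unconditionally, which is why breaking the binding property would reveal log_g(h), as Lemma 3 states.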

Fig. 5. Threshold DLKG protocol π_DLKG in the (F_CKG, F_SCVSS)-hybrid model.

Note that in π_DLKG the additive shares x_i are used to reconstruct the secret-key x, rather than the threshold-shares implicitly given by the Feldman VSSes. The reason is that even though using the threshold shares can be proven secure in the hybrid model, it resists a security proof when the ideal functionality F_SMT in Pedersen's VSS is replaced by π_RNC, as we do (due to the DL condition from Lemma 1). In [3] we show how to modify the scheme in order to be able to use the threshold-shares as secret-key shares. Also note that, using the terminology introduced in [2], based on the results in [1], step K-3 can be seen as a distributed-verifier zero-knowledge proof of knowledge of x_i and x'_i such that y_i = g^{x_i} and C_i = g^{x_i} h^{x'_i} (see also Sect. 5.4).
Theorem 3. Implementing in the DLKG protocol from Fig. 5 the functionalities F_CKG and F_SCVSS by π_CKG and π_FVSS (respectively π_FVSS[h]) results in a secure realization of F_DLKG against an adaptive t-limited adversary for t < n/2 in the SIP UC model.

Using the UC with joint state framework [8], one can prove using similar arguments that the commitment-key h can be generated once and for all invocations of π_DLKG. Furthermore, concerning efficiency, the communication complexity of the key-generation phase is comparable to that of the schemes by [13]: it requires O(kn²) bits to be sent over the bilateral public channels and another O(kn²) bits to be broadcast.
The full proof of Theorem 3 is given in the full version of the paper. We simply sketch its idea here. First, the simulator S simulates the generation of h such that it knows the DL of h, while step K-2 is executed as prescribed. Then, it reconstructs the x_i of the corrupt players, and it computes y_sip and C_sip for the SIP such that the product of all the y_i equals y, where y is the value provided by F_DLKG. Then it simulates the two Feldman VSSes with the SIP as dealer, while the other executions are followed as prescribed (with inputs x_i respectively x'_i). As a result, the output of the key-generation phase is y. In the opening phase, having received x from F_DLKG, S simply adapts the SIP's initial share x_sip such that the additive shares sum up to x, and it uses the DL of h to open C_sip to (the new) x_sip. The only difference in the adversary's, and thus the environment's, view between the simulation and a real execution lies in the encrypted Pedersen shares of (the initial) x_sip given to the uncorrupt players. By the security of π_RNC, this cannot be distinguished by the environment.
From now on, when referring to protocol π_DLKG, we mean the protocol from Fig. 5 with the functionalities replaced by real-life protocols as specified in Theorem 3.

5.3 Universally-Composable Threshold Schnorr-Signatures
As an example application of our DL-key generation protocol, we propose a threshold variant of Schnorr's signature scheme [15], provably secure in the UC framework. The scheme is illustrated in Fig. 6. Recall, a Schnorr signature for a message m under public-key y = g^x consists of a pair (c, z) such that c satisfies c = H(g^z y^{-c}, m), where H is a cryptographic hash-function. Such a signature is computed by the signer (in the single-signer variant), who knows the secret-key x, by choosing k ∈_R Z_q and computing c = H(g^k, m) and z = k + c·x. Schnorr's signature scheme can be proven secure, in the sense of existential unforgeability against chosen message attacks, in the random oracle model.
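The single-signer computation recalled above can be sketched as follows (toy group; instantiating H by SHA-256 reduced mod q is an illustrative assumption):

```python
import hashlib

# Single-signer Schnorr signatures: sigma = (c, z) with c = H(g^k, m) and
# z = k + c*x mod q; verification recomputes r = g^z * y^{-c} and rehashes.
# Toy group parameters; H is SHA-256 reduced mod q for illustration.
p, q, g = 23, 11, 4

def H(r: int, m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(r).encode() + m).digest(), "big") % q

def keygen(x: int):
    return x, pow(g, x, p)                    # secret x, public y = g^x

def sign(x: int, m: bytes, k: int):
    r = pow(g, k, p)
    c = H(r, m)
    z = (k + c * x) % q
    return c, z

def verify(y: int, m: bytes, sig) -> bool:
    c, z = sig
    y_inv_c = pow(pow(y, c, p), p - 2, p)     # y^{-c} via Fermat inversion
    r = pow(g, z, p) * y_inv_c % p            # equals g^k for a valid signature
    return c == H(r, m)

x, y = keygen(6)
sig = sign(x, b"hello", k=3)
assert verify(y, b"hello", sig)
```

The threshold variant in Fig. 6 distributes exactly this computation: x comes from π_DLKG, and the nonce k is generated and opened in the same distributed fashion.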
Consider the ideal threshold signature functionality F_TSIG, obtained by adapting the (single-signer) signature functionality from [5] in the obvious way.

Theorem 4. Protocol π_TSS securely realizes F_TSIG against an adaptive t-limited adversary for t < n/2 in the UC model, under the DDH assumption and under the assumption that the standard Schnorr signature scheme is secure.

We stress that, interestingly, π_TSS securely realizes F_TSIG in the standard rather than the SIP UC model.
Fig. 6. Threshold Schnorr-signature scheme π_TSS.

Proof. (Sketch) The simulator S simply executes π_TSS honestly. Note that the public-key y is not dictated by F_TSIG; rather, F_TSIG asks S to provide it. In order to prove that this is a good simulation, we argue as follows. The only way Z may see a difference is when A breaks the signature scheme, i.e., when
a player provides at some point a valid signature on a message that has not been signed. However, if there exist A and Z that can enforce such an event with non-negligible probability, then there exists a forger F that breaks the existential unforgeability against chosen message attacks of the standard (single-signer) Schnorr signature scheme. F works as follows. F runs A and Z, and it simulates the action of the honest players, i.e. the execution of π_TSS, as follows. It uses the SIP simulator for the key-generation phase of π_TSS to force the output of the key-generation to be the given public-key y. Furthermore, to sign a message m, it asks the signing oracle for a signature (c, z) on m; it forces, as above, the outcome of step S-1 to be g^z y^{-c}, and it uses a straightforward modification of the SIP simulator for the opening phase of π_DLKG to simulate the signing phase: the simulated SIP opens its commitment to its share of z in step S-2 (rather than to its share of k), forcing the output of the signing phase to be the given signature (c, z). Additionally, whenever a message-signature pair is asked to be verified, F first checks whether the message was never signed before and if the pair is a valid signature on it. Once such a pair is found, F outputs that pair and halts.
Similar to the proof of Theorem 3, one can show that if A does not corrupt the SIP, then Z cannot distinguish between the real execution of π_TSS (executed by the simulator S) and the SIP simulation (executed by the forger F). Hence, by the assumption on A and Z, F outputs a signature on a message not signed by the signing oracle with non-negligible probability.

5.4 Adaptively Secure Distributed-Verifier Proofs
In designing threshold cryptography, it is quite common to prove some relation (or knowledge) about committed witnesses in a zero-knowledge manner. In the UC framework, however, zero-knowledge proofs are extremely expensive components: they are realized by combining a generic non-interactive zero-knowledge proof with a common-reference-string generator, or a UC-secure commitment scheme (which anyway needs a common reference string) with a generic zero-knowledge proof system for an NP-complete language such as Hamiltonicity. They are generic and powerful, but cannot be directly used for practical subjects such as showing equality of discrete logs or knowledge of a representation.
Combining our results with techniques developed in [1,2], one can construct
adaptively secure, efficient distributed-verifier zero-knowledge proofs in a
universally composable way for many practical statements. We illustrate a
concrete example. Suppose that a prover needs to show that a triple is in DH,
i.e., is of Diffie-Hellman form. This can be done as follows. The prover shares the
secret exponent twice: once using the sharing phase with the first base and once
using that with the second base. Furthermore, in the second execution, the same
sharing polynomial and X-coordinates as in the first execution are used. Hence
the second execution is completed only by broadcasting a new commitment of the
sharing polynomial, which is verified by the players using the same share and
X-coordinate received in the first execution. This guarantees that indeed the same
secret has been shared. Note that the commitment to the secret under the second
base, which is supposed to equal the third element of the triple, is published in the
second execution. Finally, the prover shares the other exponent of the triple using
the corresponding sharing phase. If all sharing phases are accepted, the proof is
accepted.
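To make the double-sharing idea concrete, here is a minimal Python sketch over a toy group; the group parameters, variable names, and the Feldman-style verification equation are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch (assumed details): proving that (g, h, k) with h = g^a and
# k = g^(a*s) is a DH triple by Feldman-sharing the SAME polynomial twice,
# once with base g and once with base h, as described above.
import random

p, q = 23, 11              # toy group: order-q subgroup of Z_p*, p = 2q + 1
g = 4                      # generator of the order-11 subgroup
a, s = 7, 5                # h = g^a; s is the shared secret
h = pow(g, a, p)
t, n = 1, 4                # threshold and number of players

poly = [s] + [random.randrange(q) for _ in range(t)]   # f(0) = s

def f_eval(x):             # evaluate the sharing polynomial mod q
    return sum(c * pow(x, j, q) for j, c in enumerate(poly)) % q

shares = {i: f_eval(i) for i in range(1, n + 1)}       # reused in BOTH runs

# Commitments to the same coefficients under both bases.
com_g = [pow(g, c, p) for c in poly]
com_h = [pow(h, c, p) for c in poly]

def check(base, coms, i, share):
    # Feldman check: base^share == prod_j coms[j]^(i^j) (mod p).
    rhs = 1
    for j, C in enumerate(coms):
        rhs = rhs * pow(C, i ** j, p) % p
    return pow(base, share, p) == rhs

# Each player's single share verifies against both commitment vectors, so
# the same secret underlies both sharings; com_h[0] = h^s = g^(a*s) = k.
assert all(check(g, com_g, i, sh) for i, sh in shares.items())
assert all(check(h, com_h, i, sh) for i, sh in shares.items())
assert com_h[0] == pow(g, (a * s) % q, p)
```

The point of reusing the polynomial is visible in the last three assertions: one set of shares verifies against both commitment vectors, so both sharings necessarily hide the same secret.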
Given the statement to be proved, the simulator can simulate the prover by
simulating the dealer in each execution of the sharing phase. In the case of a
corrupt prover who completes the proof, the simulator can extract the shared
values from the shares of the uncorrupted players. Hence the simulator can
extract a witness needed to invoke the ideal zero-knowledge functionality.
The techniques of [1,2] also apply to other commitment schemes than Feldman's,
and allow one to prove other relations as well, such as equality and additive and
inverse relations among committed values. From these building blocks, one can
even construct an adaptive distributed-verifier proof for any NP relation by
following the construction in [2].

Acknowledgements

We thank Jesper Buus Nielsen for suggesting the use of spooling to overcome the
problem addressed in Sect. 2.4. We also thank Stas Jarecki and the anonymous
referees for their careful reading and invaluable comments.

References

1. M. Abe. Robust distributed multiplication without interaction. In M. Wiener, editor,
Advances in Cryptology – CRYPTO '99, volume 1666 of Lecture Notes in Computer
Science, pages 130–147. Springer-Verlag, 1999.
2. M. Abe, R. Cramer, and S. Fehr. Non-interactive distributed-verifier proofs and
proving relations among commitments. In Y. Zheng, editor, Advances in Cryptology
– ASIACRYPT '02, volume 2501 of Lecture Notes in Computer Science, pages
206–223. Springer-Verlag, 2002.
3. M. Abe and S. Fehr. Adaptively secure Feldman VSS and applications to
universally-composable threshold cryptography. Cryptology ePrint Archive, Report
2004/119, 2004. http://eprint.iacr.org.
4. R. Canetti. Universally composable security: A new paradigm for cryptographic
protocols. In Proceedings of the 42nd IEEE Annual Symposium on Foundations of
Computer Science, pages 136–145, 2001.
5. R. Canetti. On universally composable notions of security for signature, certification
and authentication. Cryptology ePrint Archive, Report 2003/239, 2003.
6. R. Canetti, U. Feige, O. Goldreich, and M. Naor. Adaptively secure multi-party
computation. In Proceedings of the 28th Annual ACM Symposium on the Theory
of Computing, pages 639–648, 1996.
7. R. Canetti, R. Gennaro, S. Jarecki, H. Krawczyk, and T. Rabin. Adaptive security
for threshold cryptosystems. In M. Wiener, editor, Advances in Cryptology –
CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 98–115.
Springer-Verlag, 1999.
8. R. Canetti and T. Rabin. Universal composition with joint state. Cryptology
ePrint Archive, Report 2003/047, 2002. http://eprint.iacr.org.
9. I. Damgård and J. Nielsen. Improved non-committing encryption schemes based on
a general complexity assumption. In M. Bellare, editor, Advances in Cryptology –
CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pages 432–450.
Springer-Verlag, 2000.
10. P. Feldman. A practical scheme for non-interactive verifiable secret sharing. In
Proceedings of the 28th IEEE Annual Symposium on Foundations of Computer
Science, pages 427–437, 1987.
11. Y. Frankel, P. D. MacKenzie, and M. Yung. Adaptively-secure distributed public-
key systems. In J. Nesetril, editor, European Symposium on Algorithms (ESA '99),
volume 1643 of Lecture Notes in Computer Science, pages 4–27. Springer-Verlag,
1999.
12. R. Gennaro, S. Jarecki, H. Krawczyk, and T. Rabin. Secure distributed key generation
for discrete-log based cryptosystems. In J. Stern, editor, Advances in Cryptology
– EUROCRYPT '99, volume 1592 of Lecture Notes in Computer Science,
pages 295–310. Springer-Verlag, 1999.
13. S. Jarecki and A. Lysyanskaya. Adaptively secure threshold cryptography: Introducing
concurrency, removing erasures (extended abstract). In B. Preneel, editor,
Advances in Cryptology – EUROCRYPT 2000, volume 1807 of Lecture Notes in
Computer Science, pages 221–242. Springer-Verlag, 2000.
14. T. P. Pedersen. A threshold cryptosystem without a trusted party. In D. W. Davies,
editor, Advances in Cryptology – EUROCRYPT '91, volume 547 of Lecture Notes
in Computer Science, pages 522–526. Springer-Verlag, 1991.
15. C. P. Schnorr. Efficient signature generation for smart cards. Journal of Cryptology,
4(3):239–252, 1991.
16. D. Wikström. A universally composable mix-net. In Proceedings of the First Theory
of Cryptography Conference (TCC '04), volume 2951 of Lecture Notes in Computer
Science, pages 315–335. Springer-Verlag, 2004.

Round-Optimal Secure Two-Party Computation

Jonathan Katz1,* and Rafail Ostrovsky2,**
1 Dept. of Computer Science, University of Maryland
2 Dept. of Computer Science, U.C.L.A.

Abstract. We consider the central cryptographic task of secure two-
party computation: two parties wish to compute some function of their
private inputs (each receiving possibly different outputs) where security
should hold with respect to arbitrarily-malicious behavior of either of the
participants. Despite extensive research in this area, the exact round-
complexity of this fundamental problem (i.e., the number of rounds re-
quired to compute an arbitrary poly-time functionality) was not previ-
ously known.
Here, we establish the exact round complexity of secure two-party com-
putation with respect to black-box proofs of security. We first show a
lower bound establishing (unconditionally) that four rounds are not suf-
ficient to securely compute the coin-tossing functionality for any super-
logarithmic number of coins; this rules out 4-round protocols for other
natural functionalities as well. Next, we construct protocols for securely
computing any (randomized) functionality using only five rounds. Our
protocols may be based either on certified trapdoor permutations or ho-
momorphic encryption schemes satisfying certain additional properties.
The former assumption is implied by, e.g., the RSA assumption for large
public exponents, while the latter is implied by, e.g., the DDH assump-
tion. Finally, we show how our protocols may be modified, without
increasing their round complexity and without requiring erasures, to
tolerate an adaptive malicious adversary.

1 Introduction
Round complexity measures the number of messages that parties need to ex-
change in order to perform some joint task. Round complexity is a central mea-
sure of efficiency for any interactive protocol, and much research has focused on
improving bounds on the round complexity of various cryptographic tasks. As
representative examples (this list is not exhaustive), we mention work on upper-
and lower-bounds for zero-knowledge proofs and arguments [6,7,19,27,28,40],
concurrent zero-knowledge [13,15,17,35,41,42], and secure two-party and multi-
party computation [4,5,10,11,14,21–23,31–34,37,43]. The study of secure two-
party computation is fundamental in this regard: not only does it encompass
functionalities whose round-complexity is of independent interest (such as coin
tossing or the zero-knowledge functionality), but it also serves as an important
special case in the study of secure computation.

* Part of this work was supported by NSF Trusted Computing Grant #0310751.
** Part of this work was supported by a gift from the Teradata Corporation.

M. Franklin (Ed.): CRYPTO 2004, LNCS 3152, pp. 335–354, 2004.
© International Association for Cryptologic Research 2004
Yao [43] presented a constant-round protocol for secure two-party computa-
tion when the adversarial party is assumed to be honest-but-curious (or passive).
Goldreich, Micali, and Wigderson [25,29] extended Yao's result, and showed a
protocol for secure multi-party computation (and two-party computation in par-
ticular) tolerating malicious (or active) adversaries. Unfortunately, their proto-
col does not run in a constant number of rounds. Recently, Lindell [37] gave the
first constant-round protocol for secure two-party computation in the presence of
malicious adversaries; he achieves this result by constructing the first constant-
round coin-tossing protocol (for polynomially-many coins) and then applying
the techniques of [29]. The number of rounds in the resulting protocol for secure
two-party computation is not specified by Lindell, but is on the order of 20–30.
The above works all focus on the case of a non-adaptive adversary. A general
methodology for constructing protocols secure against an adaptive adversary is
known [12], and typically requires additional rounds of interaction.
Lower bounds on the round-complexity of secure two-party computation with
respect to black-box1 proofs of security have also been given. (We comment
further on black-box bounds in Section 1.2.) Goldreich and Krawczyk [28] showed
that, assuming NP ⊄ BPP, zero-knowledge (ZK) proofs or arguments for NP
require 4 rounds. Since ZK proofs (of knowledge) are a particular example of
a two-party functionality, this establishes a lower bound of 4 rounds for secure
two-party computation. Under the same complexity assumption, Lindell [38] has
shown that, for some polynomially-bounded number of coins, secure coin tossing
requires at least 4 rounds.

1.1 Our Results
Here, we exactly characterize the (black-box) round complexity of secure two-
party computation by improving the known bounds. In particular:
Lower bound: We show that 5 rounds are necessary for securely tossing any
super-logarithmic (in the security parameter) number of coins, with respect to
black-box proofs of security. This implies a 5-round black-box lower bound for
a number of other (deterministic) functionalities as well. Beyond the implica-
tions for the round complexity of secure computation, we believe the result is
of independent interest due to the many applications of coin-tossing to other
cryptographic tasks.
The result of Goldreich and Krawczyk [28] mentioned above implies a black-
box lower bound of five rounds for the “symmetric” ZK functionality (where the
parties simultaneously prove statements to each other), and hence the same
lower bound on the black-box round complexity of secure two-party computation
of general functionalities, assuming NP ⊄ BPP. In contrast, our lower bound
holds unconditionally.

1 Throughout this paper, “black-box” refers to black-box use of an adversary's
code/circuit (and not black-box use of a cryptographic primitive, as in [30]). A
definition of black-box proofs of security is given in Appendix A.
Matching upper bound: As our main result, we construct 5-round protocols
for securely computing any (randomized) poly-time functionality in the presence
of a malicious adversary. Our protocols may be based on various cryptographic
assumptions, including certified, enhanced trapdoor permutations (see Defini-
tion 1 and Remark 1), or homomorphic encryption schemes satisfying certain
additional properties. The former may be based on, for example, the RSA as-
sumption for large public exponents, while the latter may be based on, for ex-
ample, the decisional Diffie-Hellman (DDH) assumption in certain groups. Due
to space limitations, we focus on the (more difficult) case of certified trapdoor
permutations, and refer the reader to the full version for protocols based on
alternate assumptions.
In Section 4.1, we sketch how our protocols can be extended, without in-
creasing the round complexity and without requiring erasures, to tolerate an
adaptive adversary. The necessary cryptographic assumptions are described in
more detail there.

1.2 A Note on Black-Box Lower Bounds
Until the recent work of Barak [1,2], a black-box impossibility result was gen-
erally viewed as strong evidence for the “true” impossibility of a given task.
Barak showed, however, that non-black-box use of an adversary™s code could,
in fact, be used to circumvent certain black-box impossibility results [1]. Never-
theless, we believe there is still an important place in cryptography for black-box
impossibility results for at least the following reasons:
1. A black-box impossibility result is useful insofar as it rules out a certain
class of techniques for solving a given problem.
2. With respect to our current understanding, protocols constructed using non-
black-box techniques seem inherently less efficient than those constructed
using black-box techniques.
It remains an interesting open question to beat the lower bound given in this
paper using non-black-box techniques, or to prove that this is impossible.

1.3 Discussion
Yao's results [43] give a 4-round protocol secure against honest-but-curious ad-
versaries, assuming the existence of enhanced [26, Sec. C.1] trapdoor permuta-
tions (an optimal 3-round protocol secure against honest-but-curious adversaries
can be constructed based on the existence of homomorphic encryption schemes).
Our lower bound shows that additional rounds are necessary to achieve security
against the stronger class of malicious adversaries. Our upper bound, however,
shows that (at least in the case of trapdoor permutations) a single (i.e., fifth)
additional round suffices.

Our technique for achieving security against adaptive adversaries applies only
to adversaries who corrupt at most one of the players. An interesting open ques-
tion is to construct a constant-round protocol tolerating an adaptive adversary
who can potentially corrupt both players.

2 Definitions and Cryptographic Preliminaries
We omit the (completely standard) definitions of security for two-party com-
putation used in this work, which follow [9,10,25,39]. However, we provide in
Appendix A our definition of black-box simulation which is used to prove the
lower bound of Section 3.
We assume the reader is familiar with the cryptographic tools we use and refer
the reader elsewhere for definitions of non-interactive (perfectly binding) com-
mitment schemes [24], 3-round witness-indistinguishable (WI) proofs of knowl-
edge [20,24], witness-extended emulation for proofs/arguments of knowledge [37],
and the Feige-Shamir 4-round ZK argument of knowledge [18,19]. We note that
all the above may be constructed based on the existence of certified, enhanced
trapdoor permutations.
To establish notation, we provide here our working definitions of trapdoor
permutations, hard-core bits, and Yao's garbled circuit technique. We also discuss
equivocal commitment, and show a new construction of this primitive.
Trapdoor permutations. For the purposes of the present abstract, we use the
following simplified definition of trapdoor permutations (but see Remark 1):

Definition 1. Let (Gen, Eval, Invert) be a triple of PPT algorithms such that
if Gen outputs a pair (f, td), then Eval(f, ·) is a permutation over {0,1}^k and
Invert(td, ·) is its inverse. The triple is a trapdoor permutation family if, for
all poly-size circuit families, the probability of finding a preimage of a random
point under Eval(f, ·), given only f, is negligible in the security parameter.

We additionally assume that the family satisfies (a weak variant of) “certifiability”:
namely, given some f, it is possible to decide in polynomial time whether
Eval(f, ·) is a permutation over {0,1}^k.
For notational convenience, we let (f, td) be implicit and will simply let f(·)
denote Eval(f, ·) and f^{-1}(·) denote Invert(td, ·) (where f and td are understood
from the context). Of course, f^{-1} can only be efficiently evaluated if td is known.
Remark 1. The above definition is somewhat less general than others that have
been considered (e.g., that of [24, Def. 2.4.5]); in particular, the present definition
assumes a domain of {0,1}^k and therefore no “domain sampling” algorithm
is necessary. Furthermore, the protocol of Section 4 does not immediately
generalize for trapdoor permutations requiring such domain sampling. Nevertheless,
by introducing additional machinery it is possible to modify our protocol so that
it may be based on any family of enhanced trapdoor permutations (cf. [26, Sec.
C.1]) satisfying the certifiability condition noted above. For simplicity, however,
we use the above definition in proving our results and defer the more complicated
protocol (and proof) to the full version.
Hard-core bits. We assume the reader is familiar with the notion of hard-core
bits for any trapdoor permutation family (see [24]), and thus we merely describe
the notation we use. Let h be a hard-core bit for the trapdoor permutation family
(we let f be implicit); thus (informally), h(x) is “hard” to predict given f(x).
We extend this notation to a vector of hard-core bits in the following way: the
vector consists of the hard-core bits of the successive iterates of f applied to x.
Now (informally), this vector “looks pseudorandom” given the final iterate of f.
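Assuming the standard iterated (Blum-Micali-style) construction just described, the notation can be illustrated with a toy example; the small RSA permutation and the lsb extractor below are illustrative stand-ins, not the paper's choices.

```python
# Toy illustration of the vector-of-hard-core-bits notation, assuming the
# iterated construction: h(x), h(f(x)), ..., h(f^{l-1}(x)), which should
# "look pseudorandom" given f^l(x). Parameters are illustrative, not secure.

N, E = 187, 3                 # toy RSA modulus 11*17; gcd(3, lcm(10,16)) = 1

def f(x):                     # permutation of Z_N: x -> x^E mod N
    return pow(x, E, N)

def h(x):                     # stand-in hard-core bit: least significant bit
    return x & 1

def H(x, l):
    """Return (vector of l hard-core bits, final image f^l(x))."""
    bits = []
    for _ in range(l):
        bits.append(h(x))
        x = f(x)
    return bits, x

bits, image = H(2, 4)         # 4 bits meant to look random given 'image'
```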
Yao's “garbled circuit”. Our secure computation protocol uses as a building
block the “garbled circuit” technique of Yao [43], which enables constant-round
secure computation for honest-but-curious adversaries. We abstract Yao's technique,
and only consider those aspects of it which are necessary for our proof of
security. In what follows, F is a description of a two-input/single-output circuit
whose inputs and output have the same length (yet the technique may be
generalized for inputs and output of arbitrary polynomial lengths). Yao's results
give PPT algorithms for which:

- The first is a randomized algorithm which takes as input a security parameter,
a circuit F, and a string (the second party's input). It outputs a “garbled
circuit” and input-wire labels; the garbled circuit may be viewed as representing
the function F with its second input fixed to the given string.
- The second is a deterministic algorithm which takes as input a garbled circuit
and input-wire labels. It outputs either an invalid symbol or an output value.

(When the security parameter is clear from the context, we omit it.)
We briefly describe how the above algorithms may be used for secure computation
in the honest-but-curious setting. Let player 1 (resp., 2) hold input x (resp., y),
and assume that player 1 is to obtain the output. First, player 2 computes a
garbled circuit based on its input y and sends this circuit to player 1. Then,
the two players engage in one instance of oblivious transfer per input wire of
player 1: in each instance, player 1 enters with “input” the corresponding bit of
x, player 2 enters with “input” the two labels of that wire, and player 1 obtains
as “output” the label corresponding to its bit. Player 1 then evaluates the garbled
circuit on the labels thus obtained and outputs the result.
A 3-round protocol for oblivious transfer (OT) based on trapdoor permutations
may be constructed as follows (we remark that using number-theoretic
assumptions, 2-round OT is possible): Let player 1 have an input bit and player 2
have two input strings (the goal is for player 1 to obtain the string selected by
its bit). Player 2 begins by generating a trapdoor permutation (f, td) and sending
f to player 1. Next, player 1 chooses two random domain elements, applies f to
the one indexed by its input bit (leaving the other untouched), and sends the
resulting pair to player 2. Finally, player 2 inverts both received values using td,
masks each of its strings with the hard-core bits of the corresponding preimage,
and sends the two masked strings to player 1. Player 1 can then easily recover
the string it selected.
(A proof of security for essentially the above protocol appears in [25].) Note
that in the honest-but-curious setting it is secure to run polynomially-many
executions of the above in parallel.
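A runnable toy version of this flow for one-bit secrets, using a small RSA trapdoor permutation and the lsb as hard-core bit (all names and parameters are illustrative and completely insecure):

```python
# Hedged sketch of the 3-round OT above with a toy RSA trapdoor
# permutation; one hard-core (lsb) bit transfers one-bit secrets.
import random

N, E, D = 187, 3, 27          # toy RSA: D = E^{-1} mod lcm(10, 16) = 80

def f(y):     return pow(y, E, N)     # public permutation
def f_inv(y): return pow(y, D, N)     # requires the trapdoor D
def hc(x):    return x & 1            # stand-in hard-core bit

def ot(b, s0, s1):
    """Player 1 holds choice bit b; player 2 holds secret bits s0, s1."""
    # Round 1 (player 2 -> 1): send f (modeled by N, E being public).
    # Round 2 (player 1 -> 2): y_b = f(x) for a known x; y_{1-b} random.
    x = random.randrange(1, N)
    y = [0, 0]
    y[b] = f(x)
    y[1 - b] = random.randrange(1, N)
    # Round 3 (player 2 -> 1): mask each secret with the hard-core bit
    # of the preimage of the corresponding received value.
    w = [s0 ^ hc(f_inv(y[0])), s1 ^ hc(f_inv(y[1]))]
    # Player 1 knows the preimage x only for its chosen index.
    return w[b] ^ hc(x)

# Player 1 always recovers exactly the secret it selected.
assert ot(0, 1, 0) == 1 and ot(1, 1, 0) == 0
```

Intuitively, player 1 cannot learn the other secret because it does not know the preimage of the random value it sent in that slot.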
Putting everything together, we obtain the following 3-round protocol for
secure computation of any single-output functionality in the honest-but-curious
setting:

Round 1 Player 2 runs the garbling algorithm to generate a garbled circuit and
input-wire labels. It then sends the garbled circuit and the f's for the oblivious
transfers.
Round 2 Player 1 sends its pairs of OT values (one pair per input wire, prepared
as above).
Round 3 Player 2 sends the pairs of masked input-wire labels.
Output computation Player 1 can now recover the appropriate input-wire labels
and thus compute the output value using the garbled circuit, as discussed above.

Finally, any protocol for secure computation of single-output functionalities can
be used for secure computation of two-output functionalities using only one
additional round [25, Prop. 7.2.11]. Furthermore, any protocol for secure computation
of deterministic functionalities may be used for secure computation of
randomized ones (with the same round complexity) [25, Prop. 7.4.4].
With the above in mind, we describe the properties required of the garbling
algorithms. We first require correctness: for any F, any output of the garbling
algorithm, and any input of player 1, evaluating the garbled circuit on the
corresponding input-wire labels yields the correct output of F. The algorithms
also satisfy the following notion of security: there exists a simulator Yao-Sim
which takes F and an output value as input, and which outputs a garbled circuit
and a set of input-wire labels; furthermore, the real and simulated pairs of
(garbled circuit, input-wire labels) are computationally indistinguishable (by
poly-size circuit families).
Algorithms satisfying the above definitions may be constructed assuming
the existence of one-way functions.
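As a toy instance of this generate/evaluate interface, here is a single garbled AND gate; SHA-256 is used as a pad-deriving PRF stand-in, and handing the evaluator the set of output labels as a "validity check" is a shortcut that real constructions avoid:

```python
# Toy single-gate "garbled circuit" (illustrative sketch, not the paper's
# construction): each wire gets two random labels, and the table encrypts
# the output label for AND(a, b) under the pair of input labels.
import hashlib, random, secrets

def pad(k1, k2):              # PRF stand-in deriving a one-time pad
    return hashlib.sha256(k1 + k2).digest()

def garble_and():
    labels = {(w, v): secrets.token_bytes(32) for w in "abo" for v in (0, 1)}
    table = []
    for a in (0, 1):
        for b in (0, 1):
            p = pad(labels[("a", a)], labels[("b", b)])
            table.append(bytes(x ^ y for x, y in zip(p, labels[("o", a & b)])))
    random.shuffle(table)     # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, la, lb, out_labels):
    p = pad(la, lb)
    for row in table:
        cand = bytes(x ^ y for x, y in zip(p, row))
        if cand in out_labels:    # toy validity check
            return cand
    return None                   # the "invalid" symbol

labels, table = garble_and()
out = {labels[("o", 0)], labels[("o", 1)]}
assert evaluate(table, labels[("a", 1)], labels[("b", 1)], out) == labels[("o", 1)]
```

The evaluator holds exactly one label per input wire, so it can decrypt exactly one table row and learns only the output label, matching the abstract interface above.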
Equivocal commitment. Although various notions of equivocal commitment
have appeared previously, we present here a definition and construction specific
to our application. Informally, an equivocal commitment scheme is an interactive
protocol between a sender and a receiver which is computationally hiding and
computationally binding in a real execution of the protocol. However, in a simu-
lated execution of the protocol (where the simulator interacts with the receiver),
the simulator is not bound to any particular value but can instead open the com-
mitment to any desired value. Furthermore, for any (non-uniform) PPT receiver
R and any string s, the view of R when the real sender commits/decommits to s
is computationally indistinguishable from the view of R when the simulator
“commits” in an equivocal way and later opens this commitment as s. We defer
a formal definition, especially since one follows easily from the construction we
now provide.

We construct an equivocal commitment scheme for a single bit in the following
way: let Com be a non-interactive (perfectly binding) commitment scheme.
To commit to a bit b, the sender chooses random coins and computes two
independent commitments to b, i.e., C = (Com(b), Com(b)). It sends C to the
receiver and performs a zero-knowledge proof/argument that C was constructed
correctly (i.e., that both components of C are commitments to the same bit).
The receiver rejects in case the proof/argument fails. To decommit, the sender
chooses one of the two components at random and reveals it as a commitment
to b. Note that a simulator can “equivocate” the commitment by setting
C = (Com(β), Com(1 − β)) (where β is chosen at random in {0,1}), simulating
the zero-knowledge step, and then revealing the first or the second component
depending on β and the bit to be revealed. By committing bit-by-bit, the above
extends easily to yield an equivocal commitment scheme for polynomial-length
strings.
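A minimal runnable sketch of this construction: the perfectly-binding Com here is a toy one built from a small one-way permutation, the ZK proof/simulation step is elided, and all names are illustrative.

```python
# Equivocal bit commitment sketch: commit to b twice; decommit by opening
# one component at random; the simulator commits to 0 and 1 in random
# order and (relying on a simulated ZK proof) can later open either bit.
import random

N, E = 187, 3                            # toy one-way permutation x -> x^E mod N

def f(x):  return pow(x, E, N)
def Com(b, r):                           # toy perfectly-binding commitment
    return (f(r), (r & 1) ^ b)

def commit(b):
    r0, r1 = random.randrange(1, N), random.randrange(1, N)
    C = (Com(b, r0), Com(b, r1))
    # ... plus a ZK proof that both components commit to the SAME bit.
    return C, (b, r0, r1)

def decommit(state):
    b, r0, r1 = state
    beta = random.randrange(2)           # open a random component
    return beta, b, (r0, r1)[beta]

def equivocal_commit():                  # the simulator's commitment
    r0, r1 = random.randrange(1, N), random.randrange(1, N)
    first = random.randrange(2)
    C = (Com(first, r0), Com(1 - first, r1))
    # ... the ZK proof is SIMULATED, so the inconsistency goes unnoticed.
    return C, (first, r0, r1)

def equivocal_open(state, b):            # open as either desired bit b
    first, r0, r1 = state
    beta = 0 if b == first else 1
    return beta, b, (r0, r1)[beta]

# Receiver's check in both cases: C[beta] == Com(b, r).
```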

3 The Round Complexity of Coin Tossing
We show that any protocol for securely flipping a super-logarithmic number of
coins (which is proven secure via black-box simulation) requires at least 5 rounds.
(The reader is referred to Appendix A for a definition of black-box simulation.)
More formally:
Theorem 1. Let the number of coins be super-logarithmic in the security
parameter. Then there does not exist a 4-round protocol for tossing that many
coins which can be proven secure via black-box simulation.
The above theorem refers to the case where both parties are supposed to receive
the resulting coin as output.
Before starting our proof, we note that the above theorem is “tight” in the
following two regards: first, for a logarithmic number of coins, 3-round protocols
(proven secure using black-box simulation) for coin tossing are known [8,25,29],
assuming the existence of a non-interactive commitment scheme. Furthermore,
our results of Section 4 imply a 5-round protocol (based on the existence of
trapdoor permutations) for tossing any polynomial number of coins. In fact,
we can also construct a 5-round protocol for tossing any polynomial number of
coins based on the existence of a non-interactive commitment scheme; details
will appear in the final version.
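The commit-and-reveal skeleton underlying such coin-tossing protocols can be sketched as follows; a hash-based commitment stands in for the non-interactive commitment scheme, and the cited constructions add the machinery needed for black-box simulation.

```python
# 3-round coin-tossing skeleton: commit(r1); send r2; open(r1); output r1^r2.
import hashlib, secrets

m = 4                                        # an O(log k) number of coins

# Round 1 (player 1): commit to a random r1.
r1, nonce = secrets.randbits(m), secrets.token_bytes(16)
c = hashlib.sha256(nonce + bytes([r1])).digest()

# Round 2 (player 2): send a random r2 in the clear.
r2 = secrets.randbits(m)

# Round 3 (player 1): open the commitment; both parties output r1 XOR r2.
assert hashlib.sha256(nonce + bytes([r1])).digest() == c
coin = r1 ^ r2
```

Neither party can bias the result: player 1 is bound to r1 before seeing r2, and player 2 must choose r2 without learning r1.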
Proof (sketch). We assume (toward a contradiction) some 4-round protocol
for tossing the coins. Without loss of generality, we may assume that player
1 sends the final message of the protocol (since in the ideal model, only player 1
has the ability to abort the trusted party); hence, player 2 must send the first
message. Consider a real-model adversary corrupting player 1, who acts as follows:
Let Good be some set of coins of “small” but noticeable size, whose exact
size we will fix later. The adversary runs the protocol honestly until it receives
the third message, and then computes the value of the tossed coin. If the coin
lies in Good, it completes the execution of the protocol honestly and outputs
some function of its view; otherwise, it aborts with output ⊥.

Black-box security of implies the existence of a black-box ideal-model
adversary satisfying the following property (informally): conditioned upon
receiving a coin from the trusted party, with all but negligible proba-
bility “forces” an execution with in which does not abort and hence
view is consistent with some coin (for our proof, it does not
matter whether or not).
We next define a real-model adversary corrupting player 2, acting as follows:
it incorporates the code of the above ideal-model adversary and, simulating the
trusted party for it, feeds it a coin randomly chosen from Good. By the above, it
can with overwhelming probability “force” an execution with the adversarial
player 1 in which that player sees a view consistent with some coin in Good.
We show that this can be used to “force” an execution with (the honest) player 1
in which player 1 outputs some coin in Good with sufficiently high probability.
Of course, this adversary interacts with the honest player 1 and not with the
adversarial one; thus, in particular, it cannot rewind player 1. However, since the
adversarial player 1 acts “essentially” like the honest player 1 (with the only
difference being due to aborts), we can show that the adversary “forces” the
honest player 1 to output a coin in Good with at least some inverse polynomial
probability, which relates to the number of queries the ideal-model adversary
makes to its oracle. Choosing Good small enough relative to this probability,
we derive a contradiction: in any ideal-model execution, an honest player 1
outputs a coin in Good with probability at most the density of Good; in the real
world, however, our adversary forces an honest player 1 to output a coin in Good
with noticeably larger probability. This implies a simple, poly-time distinguisher
with non-negligible advantage.

Remark 2. Theorem 1 immediately extends to rule out 4-round, black-box
protocols for other functionalities (when both parties are supposed to receive
output), and in particular some natural, deterministic ones. For example, the
theorem implies that 4 rounds are not sufficient for computing the “xor” functionality
(i.e., f(x, y) = x ⊕ y) on inputs of super-logarithmic length, since any
such protocol could be used to toss a super-logarithmic number of coins (in the
same number of rounds). This can be generalized in the obvious way.
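The reduction mentioned here is immediate: with f(x, y) = x XOR y, feeding both parties uniform inputs tosses coins, since x maps bijectively under XOR with any fixed y (a minimal sketch):

```python
# Remark 2's reduction in miniature: the xor functionality on uniform
# inputs yields unbiased coins even if one party fixes its input.
import secrets

m = 16                                  # number of coins

def xor_functionality(x, y):
    return x ^ y

coin = xor_functionality(secrets.randbits(m), secrets.randbits(m))

# For every FIXED y, the map x -> x ^ y is a bijection on m-bit strings,
# so the output is uniform as long as either party's input is uniform.
```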

4 A 5-Round Protocol for Secure Computation
Here, we prove the existence of a 5-round protocol for secure computation of gen-
eral functionalities based on the existence of (certified) trapdoor permutations
(see Definition 1 and Remark 1). To simplify matters, we describe a 4-round
protocol for secure computation of deterministic functionalities in which only
the first party receives output; this suffices for our main result since any such
protocol can be used for secure computation of randomized functionalities in
which both parties receive (possibly different) outputs, at the cost of one more
(i.e., fifth) additional round [25, Propositions 7.2.11 and 7.4.4].
Before describing our protocol, we provide some intuition about the “high-
level” structure of our protocol and highlight some techniques developed in the
course of its construction. We stress that our protocol does not merely involve

“collapsing” rounds by running things in parallel; new techniques are needed to
obtain a round-optimal protocol. At the core of our protocol is Yao's 3-round pro-
tocol tolerating honest-but-curious adversaries (it will be helpful in what follows
to refer to the description of Yao's “basic” protocol in Section 2). The standard
way of adding robustness against malicious adversaries (see [25]) is to “compile”
this protocol by having the parties (1) commit to their inputs; (2) run (modified)
coin-tossing protocols, so each party ends up with a random tape and the other
party receives a commitment to this tape; and (3) run the basic Yao protocol
with ZK proofs/arguments of correct behavior (given the committed values of
the input and random tape) at each round. We may immediately note this ap-
proach will not suffice to obtain a 4-round protocol, since a ZK proof/argument
for the first round of Yao™s protocol alone will already require 4 rounds. Instead,
we briefly (and informally) summarize some of the techniques we use to achieve
a 4-round protocol. In the following (but not in the more formal description
that follows), we number the rounds from 0–3, where round 0 corresponds to an
“initialization” round, and rounds 1–3 correspond to rounds 1–3 of Yao's basic
protocol.
We first observe that in Yao's protocol a malicious player 2 gains nothing by
using a non-random tape and thus coin-tossing for this party is not needed.
It is essential, however, that player 1 is unable to choose his coins in round
two. However, full-blown coin-tossing is unnecessary, and we instead use a 3-
round sub-protocol which “forces” player 1 to use an appropriate set of coins.
(This sub-protocol is run in rounds 0“2.) This component and its analysis
are based loosely on earlier work of Barak and Lindell [3].
When compiling Yao's protocol, player 1 may send his round-two message
before the proof of correctness for round one (being given by player 2) is com-
plete (here, we use the fact that the trapdoor permutation family being used
is “certifiable”). We thus construct our protocol so the proof of correctness
for round one completes in round three. To obtain a proof of security, we
require player 2 to delay revealing the garbled circuit until round three. Yet, a
proof of security also requires player 2 to be committed to a circuit at the end of
round one. We resolve this dilemma by having player 2 commit to the circuit in
round one using an equivocal commitment scheme.
Finally, we use a specific WI proof of knowledge (from [36]; see also [18]) with the
property that the statement to be proved (and, by implication, a witness) need
not be known until the last round of the protocol, yet soundness, completeness,
and witness-indistinguishability still hold. (The proof of knowledge aspect
must be dealt with more carefully; see Appendix B.) Furthermore, this proof
system has the property that the first message from the prover is computed
independently of the statement being proved (as well as its witness); we use
this fact when constructing an adaptively-secure protocol in Section 4.1.
We also construct a novel 4-round ZK argument of knowledge with similar
properties (see Appendix B), by modifying the Feige-Shamir ZK argument
of knowledge [19]. Our new protocol may be of independent interest.

Let F be a polynomial-size (deterministic) circuit family representing
the functionality of interest, where F takes two inputs and returns
an output to player 1, all of the same length. (Clearly, the protocol extends for
arbitrary input/output lengths. We have also mentioned earlier how the protocol
may be extended for randomized, two-output functionalities.) When the security
parameter is understood, we simply write F. Let x represent the input of player 1,
and let y represent the input of player 2. In the following, one index always
ranges over the input-wire positions, and the other over the bits 0 and 1.
First round. The protocol begins with the first player proceeding as follows:

1. Player 1 chooses values at random (one for each input-wire position and
bit). It then chooses random coins and commits to each of these values using
Com, where Com is any perfectly-binding commitment scheme.
2. Player 1 also prepares the first message of a 3-round witness-indistinguishable
proof of knowledge (for a statement which will be fully determined in the
third round; see the earlier remarks). Informally, the statement represents the
fact that player 1 “knows,” for each input-wire position, the decommitment of
one of the two corresponding commitments.
3. Player 1 also prepares the first message (acting as the verifier) of the modified
Feige-Shamir ZK argument of knowledge (see Appendix B).
4. The message sent by player 1 contains the commitments and the two proof
messages prepared above.

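To illustrate the role of the perfectly-binding commitment scheme Com, here is a toy sketch of our own (not the paper's instantiation): a bit commitment Com(b; r) = (f(r), hc(r) XOR b), where f is injective and hc is a predicate. The constants P and G and the predicate hc below are illustrative stand-ins; hc is not a genuine hard-core bit, and the group is far too small for security. The point is only that injectivity of f forces a unique opening, i.e., perfect binding.

```python
# Toy bit commitment Com(b; r) = (f(r), hc(r) XOR b).
# f(x) = G^x mod P with G a primitive root mod P, so f is injective on
# {0, ..., P-2}; injectivity is what makes the commitment perfectly binding.
# hc is only a stand-in predicate, NOT a genuine hard-core bit.
import random

P = 23   # small prime (toy size only)
G = 5    # primitive root modulo 23, so x -> G^x mod P is injective on {0..21}

def f(x):
    return pow(G, x, P)

def hc(x):
    return x & 1   # stand-in predicate

def commit(b):
    r = random.randrange(P - 1)
    return (f(r), hc(r) ^ b), (r, b)   # (commitment, opening)

def verify(com, opening):
    (y, c), (r, b) = com, opening
    return f(r) == y and (hc(r) ^ b) == c

com, opening = commit(1)
assert verify(com, opening)
# Perfect binding: there is no way at all to open `com` as the other bit.
_, b = opening
assert all(not verify(com, (rp, 1 - b)) for rp in range(P - 1))
```

A real instantiation would use a one-way permutation with a proven hard-core predicate (e.g., Goldreich-Levin) at cryptographic parameter sizes.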
Second round. Player 2 proceeds as follows:
1. Player 2 generates trapdoor permutations (one pair for each i) using
invocations of the generation algorithm, chooses values at random as needed, and
prepares the second message for the WI proof of knowledge
initiated by player 1 in the previous round.
2. Next, player 2 generates a "garbled circuit" (cf. Section 2) for the func-
tionality F, based on its own input y. This involves choosing random coins
and running the garbling algorithm. Player 2 also computes
commitments to the input-wire labels; that is, it chooses coins and commits
to each label.
3. Player 2 next chooses random coins and generates an equivocal commitment
Equiv to the garbled circuit.
4. Next, player 2 prepares the second message for the modified
Feige-Shamir ZK argument of knowledge (for a statement which will be fully
determined in the fourth round; cf. Appendix B). For future reference, we define
the statement now: it asserts that there exist coins and an input for which the
garbled circuit and the commitments of the previous steps were computed correctly.
Round-Optimal Secure Two-Party Computation 345

(Informally, the statement says that player 2 performed the preceding two
steps correctly.)
5. The message sent by player 2 includes the trapdoor permutations, the
commitments, Equiv, and the two proof messages prepared above.

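For the equivocal commitment Equiv, one standard example (a stand-in for illustration; the paper's Section 2 construction may differ) is a Pedersen-style commitment, which anyone holding the discrete-log trapdoor t with h = g^t can open to any message by adjusting the randomness:

```python
# Toy Pedersen-style equivocal commitment (illustration only; not
# necessarily the construction referenced in Section 2 of the paper).
# Com(m; r) = g^m * h^r mod p, with h = g^t.  Whoever knows the trapdoor t
# can open a commitment to ANY message by adjusting the randomness.
P, Q = 23, 11        # small prime P with subgroup order Q dividing P-1 (toy sizes)
G = 2                # element of order Q modulo P
T = 3                # trapdoor
H = pow(G, T, P)     # h = g^t

def commit(m, r):
    return (pow(G, m, P) * pow(H, r, P)) % P

def verify(com, m, r):
    return commit(m, r) == com

def equivocate(m, r, m_new):
    # Find r' with m + T*r = m_new + T*r'  (mod Q).
    t_inv = pow(T, -1, Q)
    return (r + (m - m_new) * t_inv) % Q

com = commit(5, 7)
assert verify(com, 5, 7)
r_new = equivocate(5, 7, 9)      # open the very same commitment as message 9
assert verify(com, 9, r_new)
```

The simulator in the proof of Lemma 2 relies on exactly this kind of trapdoor: it commits first and decides on the opened value later.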
Third round. Player 1 proceeds as follows:

1. If any of the trapdoor permutations is not valid, player 1 aborts. Otherwise, player 1 will
use n parallel invocations of oblivious transfer to obtain the input-wire labels
corresponding to its input x. Formally, for each i, player 1 prepares a pair of values
in the following way:
For b = x_i, it obtains the value by applying the appropriate trapdoor
permutation to the random value it committed to (recall, the random value was
committed to by player 1 in the first round, and the trapdoor permutation was
obtained from player 2 in the second round).
For b ≠ x_i, it simply chooses the value at random.
2. We now define a second statement.
Informally, this statement says that player 1 correctly constructed the values above.
3. Player 1 then prepares the final message for the WI proof of
knowledge begun in round 1. The statement to be proved combines the
statements defined above. Player 1 also prepares the third message for the modified Feige-
Shamir ZK protocol.
4. The message sent by player 1 includes the oblivious-transfer values and the
two proof messages prepared above.

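The oblivious transfer embedded in rounds one through four follows the classic trapdoor-permutation pattern. The sketch below is our own toy instantiation: textbook RSA as the trapdoor permutation and parity as a stand-in hard-core predicate (neither is secure at these parameters, and parity is not actually hard-core for RSA); it shows only the data flow by which the receiver learns exactly one of the sender's two bit-labels.

```python
# Sketch of trapdoor-permutation oblivious transfer for a single pair of
# bit-labels.  Toy RSA modulus and a stand-in "hard-core" predicate;
# illustration of the message flow only, not a secure implementation.
import random

N, E, D = 3233, 17, 2753     # toy RSA keys (N = 61 * 53)

def f(x):        # trapdoor permutation (public direction)
    return pow(x, E, N)

def f_inv(y):    # sender-only inverse, using the trapdoor D
    return pow(y, D, N)

def hc(x):       # stand-in hard-core predicate (parity; NOT truly hard-core)
    return x & 1

def receiver_round3(choice):
    """Receiver: z[choice] = f(r) for a known r; the other z is random."""
    r = random.randrange(1, N)
    z = [random.randrange(1, N), random.randrange(1, N)]
    z[choice] = f(r)
    return z, r

def sender_round4(z, labels):
    """Sender masks each bit-label with the hard-core bit of f^{-1}(z_b)."""
    return [labels[b] ^ hc(f_inv(z[b])) for b in (0, 1)]

def receiver_output(masked, choice, r):
    return masked[choice] ^ hc(r)    # receiver knows r = f^{-1}(z[choice])

labels = [0, 1]                      # sender's two secret bit-labels
choice = 1
z, r = receiver_round3(choice)
masked = sender_round4(z, labels)
assert receiver_output(masked, choice, r) == labels[choice]
```

In the protocol itself, the receiver's random value r is fixed (and committed to) already in the first round, which is what the WI proof of knowledge is about.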
Fourth round. The second player proceeds as follows:
1. If either of the proofs received in the previous round would cause rejection,
player 2 aborts. Otherwise, player 2 completes the oblivious transfer in the
standard way. Namely, for each value sent in the previous round, player 2
computes its preimage under the appropriate trapdoor permutation and
xors the resulting hard-core bits with the corresponding input-wire labels.
2. We define one further statement.
Informally, this statement says that player 2 performed the oblivious transfer correctly.
3. Player 2 prepares the final messages for the modified Feige-
Shamir protocol. The statement to be proved combines the statements defined above.
(Footnote: Recall, cf. Definition 1, that the trapdoor permutation family is certifiable.)
(Footnote: An honest player 1 actually knows multiple witnesses for the statement it
proves; for concreteness, we have the player choose one of these at random to complete
the proof.)


4. Finally, player 2 decommits Equiv as the garbled circuit (recall from Section 2 how
decommitment is done for equivocal commitments).
5. The message sent by player 2 includes the oblivious-transfer answers, the garbled
circuit (and the corresponding decommitment), and the final proof messages.

Output computation. The first player concludes the protocol as follows: If
the final proof or the decommitment of the garbled circuit would cause rejection,
player 1 aborts. Otherwise, by completing the oblivious transfer (in the standard
way) player 1 obtains the input-wire labels corresponding to x (recall, x is the input
of player 1) and evaluates the garbled circuit on these labels. If evaluation yields a
valid result v, it outputs v. Otherwise, it aborts.
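To make the final evaluation step concrete, the following sketch garbles a single AND gate using a hash as the row-encryption function. This is a hypothetical, simplified illustration of the standard construction referenced in Section 2 (the function names and the trial-decryption shortcut are ours; real schemes use point-and-permute rather than trial decryption):

```python
# A single Yao-garbled AND gate: one random label per wire per bit value,
# and a table of the four output labels encrypted under the matching pairs
# of input labels.  Hash-based "encryption" for illustration only.
import hashlib, os

def H(k1, k2):
    return hashlib.sha256(k1 + k2).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    # One random 16-byte label per wire ("a", "b" inputs; "c" output) per bit.
    w = {name: {b: os.urandom(16) for b in (0, 1)} for name in "abc"}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            out = w["c"][x & y]      # AND gate truth table
            table.append(xor(H(w["a"][x], w["b"][y]), out))
    return w, table

def evaluate(table, ka, kb, out_labels):
    # Trial-decrypt each row; exactly one yields a valid output label.
    for row in table:
        cand = xor(H(ka, kb), row)
        if cand in out_labels.values():
            return cand
    raise ValueError("no row decrypted")

w, table = garble_and_gate()
for x in (0, 1):
    for y in (0, 1):
        lab = evaluate(table, w["a"][x], w["b"][y], w["c"])
        assert lab == w["c"][x & y]
```

In the protocol, player 1 holds exactly one label per input wire (its own via oblivious transfer, player 2's sent directly), so it can evaluate the circuit without learning anything about the other labels.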
Sufficient assumptions. As noted in Section 2, every component of the above
protocol may be based on the existence of a trapdoor permutation family (the
certifiability property is only needed for the verification performed by player 1 at
the beginning of the third round). Furthermore, as noted in Remark 1, although
the description of the protocol (and its proof of security) use the definition of
a trapdoor permutation family given by Definition 1, it is possible to adapt the
protocol so that its security may be based on any family of (certified) enhanced
trapdoor permutations, as per the definitions of [24, 26].
Theorem 2. Assuming the existence of a trapdoor permutation family, the above
protocol securely computes functionality F.
Proof. We separately prove two lemmas dealing with possible malicious behavior
of each of the parties; the theorem follows. We first consider the case when
player 2 is malicious:
Lemma 1. Let (A_1, A_2) be a pair of (non-uniform) PPT machines in which
A_1 is honest. There exists a pair of (non-uniform) expected polynomial-time machines
(S_1, S_2) such that the output of the ideal execution with (S_1, S_2) is computationally
indistinguishable from the output of the real execution with (A_1, A_2); we refer to
this as Equation (1).

Proof (sketch). Clearly, we may take S_1 to be honest. We assume that A_2 is
deterministic, and construct S_2 using black-box access to A_2, as follows:
1. S_2 runs a copy of A_2 internally, passing to it any auxiliary information.
To emulate the first round of the protocol, S_2 acts exactly as an honest
player 1, generates a first-round message, and passes this message to A_2.
In return, S_2 receives a second-round message which includes, in particular,
the trapdoor permutations. If an honest player 1 would abort after receiving this
second-round message, S_2 aborts (without sending any input to the trusted party) and
outputs whatever A_2 outputs.
2. Otherwise, S_2 generates a third-round message exactly as an honest player 1
would, with the following exception: for all i, it computes both values of the
i-th pair from the committed random values (rather than choosing one of them
at random). Note in particular that S_2 can easily complete the WI proof of
knowledge, since with this change both component statements are true. It passes
the third-round message to A_2 and receives in return a fourth-round message.

3. If an honest player 1 would abort after receiving the fourth-round message,
S_2 aborts (without sending any input to the trusted party) and outputs
whatever A_2 outputs. Otherwise, S_2 attempts to extract an input
value y from A_2 (cf. step 4 of the second round in the description of the protocol,
and footnote 4). If extraction fails, S_2 aborts and outputs fail.
4. Otherwise, S_2 sends y to the trusted party. It then stops and outputs what-
ever A_2 outputs.

We may note the following differences between the ideal world and the real
world: (1) in its third-round message, S_2 computes both values of each pair from
the committed random values, whereas an honest player 1 does this only for the
value indexed by its input bit; and (2) S_2 passes the extracted
input value y to the trusted party (and hence player 1 will receive the value
F(x, y) from this party), whereas in the real world player 1 will compute an
output value based on the circuit and other values it receives from A_2 in the
fourth round. Nevertheless, we claim that Equation (1) holds, based on (1) the
hiding property of the commitment scheme used in the first round and (2) the
argument-of-knowledge (and hence soundness) property of the modified Feige-
Shamir protocol (cf. Appendix B), as well as the correctness of the Yao "garbled
circuit" construction. A complete proof appears in the full version.

Lemma 2. Let (A_1, A_2) be a pair of (non-uniform) PPT machines in which
A_2 is honest. There exists a pair of (non-uniform) expected polynomial-time machines
(S_1, S_2) such that the output of the ideal execution with (S_1, S_2) is computationally
indistinguishable from the output of the real execution with (A_1, A_2); we refer to
this as Equation (2).

Proof (sketch). Clearly, we may take S_2 to be honest. We assume that A_1 is
deterministic, and construct S_1 using black-box access to A_1, as follows:

1. S_1 runs a copy of A_1 internally, passing to it any auxiliary information
and receiving a first-round message from A_1. Next, S_1 emulates the sec-
ond round of the protocol as follows: it generates the trapdoor permutations and
the second message of the WI proof of knowledge exactly as an honest player 2
would. All the commitments to input-wire labels, however, are commitments to
dummy values. Furthermore, the commitment Equiv is set up in
an "equivocal" way (cf. Section 2) so that S_1 will later be able to open this
commitment to any value of its choice. S_1 prepares its message for the modified
Feige-Shamir protocol using the corresponding ZK simulator (cf. Appendix B). S_1 passes
the second-round message thus constructed to A_1 and receives in return a
third-round message. If an honest player 2 would abort after receiving this
message, S_1 aborts (without sending any input to the trusted party) and
outputs whatever A_1 outputs.
(Footnote 4: Technically, the simulator runs a witness-extended emulator [37] for the
modified Feige-Shamir proof system, which results in a transcript and a witness. This
is what we mean when we informally say that the simulator "attempts to extract.")


2. Otherwise, S_1 attempts to extract (cf. footnote 4) the values
corresponding to (half of) the commitments sent by A_1 in the first
round. If extraction fails, S_1 outputs fail. Otherwise, S_1 uses the extracted
values to define an input string. (Informally, this string is S_1's "guess" as to
which input-wire labels A_1 is "interested in.")
S_1 sends the string thus defined to the trusted party, and receives a value v in
return. It then runs Yao-Sim to generate a garbled circuit along
with input-wire labels (cf. Section 2). S_1 then prepares the "answers"
to the oblivious transfer as follows: for each i, it correctly sets the answer
corresponding to its guess, but chooses the other answer at random.
3. S_1 emulates the fourth round of the protocol as follows: it sends the
oblivious-transfer answers as computed above, sends the garbled circuit as computed
above (note that the corresponding decommitment can be given since Equiv was
constructed in an "equivocal" way), and uses the simulator for the modified
Feige-Shamir protocol to compute the final proof message (cf. Appendix B).
S_1 passes the final message thus constructed to A_1 and outputs whatever
A_1 outputs.
We note (informally) the following differences between the ideal world and the
real world: (1) the commitments are to dummy values rather than to "real" input-wire
labels; (2) Equiv is set up so that S_1 can "equivocate" and later open this as
any value it chooses; (3) the modified Feige-Shamir ZK argument is simulated
rather than real; (4) half of the oblivious-transfer answers are "garbage" (namely,
those not corresponding to S_1's guess as to the "input" of A_1); and (5) the garbled
circuit is constructed using Yao-Sim rather than the real garbling algorithm.
Nevertheless, we claim that Equation (2) holds. Due to lack
of space, a complete proof appears in the full version.

4.1 Handling Adaptive Adversaries
We briefly sketch how the protocol above can be modified, without increasing
the round complexity, to provide security against an adaptive adversary who
can monitor communication between the parties and decide whom to corrupt
at any point during the protocol based on this information. (We consider only
an adversary who can corrupt at most one of the parties.) In brief, we modify
the protocol by using a (public-key) adaptively-secure encryption scheme [12] to
encrypt the communication between the two parties. Two issues arise:
1. The encryption scheme of [12] requires a key-generation phase which would
necessitate additional rounds. We avoid this extra phase using the assump-
tion of simulatable public-key cryptosystems [16] (see below). The existence
of such cryptosystems is implied in particular by the DDH assumption [16];
see there for constructions based on alternate assumptions.
2. Regardless of the encryption scheme used, one additional round seems nec-
essary just to exchange public keys. To avoid this, we do not encrypt the
first message from player 1 to player 2. Nevertheless, the modified protocol
is adaptively-secure: the proof uses the fact that the first round (as well as
the internal state after the first round) is identical in both the real execution
and the simulation for a malicious player 2 (cf. the proof of Lemma 1).

The modified protocol. Before describing our construction of an adaptively-
secure encryption scheme, we outline how it will be used to achieve adaptive
security for our protocol. Let Π denote the protocol given in the previous section.
Our adaptively-secure protocol Π' proceeds as follows: in the first round of Π',
player 1 sends a message just as in the first round of Π, but also sends
sufficiently many public keys (for an adaptively-secure encryption scheme) to
enable player 2 to encrypt the messages of rounds two and four. (The adaptively-
secure encryption scheme we use only allows encryption of a single bit; therefore,
the number of public keys sent by player 1 is equal to the total bit-length of messages
two and four in Π.) In the second round of Π', player 2 constructs a message as in
Π, encrypts this message using the corresponding public keys sent in round one,
and additionally sends sufficiently many public keys (for an adaptively-secure
encryption scheme) to enable player 1 to encrypt the messages of rounds three
and five. Π' proceeds by having the players construct each message just as in the
corresponding round of Π and then having them encrypt these messages using
the appropriate public keys sent by the other player.
We defer a proof of security for this construction to the final version.

An adaptively-secure encryption scheme. Informally, a public-key cryp-
tosystem for single-bit plaintexts is simulatable if (1) it is possible to obliviously
generate a public key without learning the corresponding secret key, and also
(2) given a public key, it is possible to obliviously sample a random (valid)
ciphertext without learning the corresponding message. We assume further that
if a ciphertext is obliviously sampled in this way, then the corresponding
plaintext is 0 or 1 with equal (or statistically close) probability. See [16,
Def. 2] for a formal definition.
Given such a cryptosystem, our construction of an adaptively-secure en-
cryption scheme for a single bit is as follows: the receiver generates n pairs
of public keys by generating one key of each pair (selected at ran-
dom) using the key-generation algorithm, and the other key using the oblivious
sampling algorithm. This also results in a set of n secret keys (one for each pair
of public keys). To encrypt a bit, the sender proceeds as follows: for each
index i, it chooses a random bit, encrypts the message bit under the correspondingly
selected key of the i-th pair using the legitimate encryption algorithm, and generates
the other ciphertext of the pair using the oblivious sampling algorithm. It then
sends the n ciphertext pairs.
To decrypt, the receiver decrypts one ciphertext out of each pair using the
secret key it knows, and sets the decrypted message equal to the majority of the
recovered bits. Note that correctness holds with all but negligible probability
since (on average) 3/4 of the decrypted ciphertexts yield the desired message
(namely, the roughly half that were encrypted using the legitimate
encryption algorithm, along with (on average) 1/2 of the remaining ciphertexts,
which were chosen via the oblivious sampling algorithm).
We defer a proof that this scheme is adaptively secure to the full version.
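The bit-encryption scheme just described can be sketched as follows. This models functionality only: the simulatable cryptosystem is replaced by a toy cipher whose "public key" equals the secret key, oblivious key generation is faked by discarding a secret key, and all names and parameters are ours; see [16] for real simulatable cryptosystems.

```python
# Functional model of the n-pair adaptively-secure single-bit encryption.
# Toy abstraction: pk == sk (models functionality, NOT security).
import hashlib, random

rng = random.Random(0)
NPAIRS = 201                          # odd, for an unambiguous majority

def prf(key, nonce):
    return hashlib.sha256(key + nonce).digest()[0] & 1

def keygen():
    k = rng.randbytes(16)
    return k, k                       # (pk, sk): toy stand-in

def enc(pk, b):
    nonce = rng.randbytes(16)
    return nonce, b ^ prf(pk, nonce)

def oblivious_ct():
    # Obliviously sampled ciphertext: decrypts to a near-uniform bit.
    return rng.randbytes(16), rng.randrange(2)

def dec(sk, ct):
    nonce, masked = ct
    return masked ^ prf(sk, nonce)

def receiver_keys():
    # Per pair: one real key (position known to the receiver) and one key
    # whose secret key the receiver does NOT know ("oblivious" generation).
    pairs, sks, known = [], [], []
    for _ in range(NPAIRS):
        pk, sk = keygen()
        fake_pk, _ = keygen()         # discard sk: models oblivious key gen
        s = rng.randrange(2)          # position of the real key in the pair
        pair = [fake_pk, fake_pk]
        pair[s] = pk
        pairs.append(pair); sks.append(sk); known.append(s)
    return pairs, sks, known

def encrypt_bit(pairs, b):
    cts = []
    for pk0, pk1 in pairs:
        t = rng.randrange(2)          # which key of the pair gets the real bit
        pair_ct = [oblivious_ct(), oblivious_ct()]
        pair_ct[t] = enc((pk0, pk1)[t], b)
        cts.append(pair_ct)
    return cts

def decrypt_bit(cts, sks, known):
    votes = [dec(sk, ct[s]) for ct, sk, s in zip(cts, sks, known)]
    return int(sum(votes) > len(votes) // 2)

pairs, sks, known = receiver_keys()
for b in (0, 1):
    assert decrypt_bit(encrypt_bit(pairs, b), sks, known) == b
```

With the real key sitting in a random position of each pair, about 3/4 of the per-pair decryptions recover the encrypted bit, so the majority vote is correct with overwhelming probability.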

Acknowledgments. We thank Bill Aiello for a helpful discussion, and the anonymous referees for their
useful remarks. Thanks also to Yuval Ishai for his useful suggestions towards
improving the presentation of our results.

References

1. B. Barak. How to Go Beyond the Black-Box Simulation Barrier. 42nd IEEE Sym-
posium on Foundations of Computer Science (FOCS), IEEE, pp. 106–115, 2001.
2. B. Barak. Constant-Round Coin-Tossing with a Man-in-the-Middle or Realizing
the Shared Random String Model. 43rd IEEE Symposium on Foundations of Com-
puter Science (FOCS), IEEE, pp. 345–355, 2002.
3. B. Barak and Y. Lindell. Strict Polynomial Time in Simulation and Extraction.
34th ACM Symposium on Theory of Computing (STOC), ACM, pp. 484–493, 2002.
4. J. Bar-Ilan and D. Beaver. Non-Cryptographic Fault-Tolerant Computing in Con-
stant Number of Rounds of Interaction. Principles of Distributed Computing, ACM,
pp. 201–209, 1989.
5. D. Beaver, S. Micali, and P. Rogaway. The Round Complexity of Secure Protocols.
22nd ACM Symposium on Theory of Computing (STOC), ACM, pp. 503–513, 1990.
6. M. Bellare, S. Micali, and R. Ostrovsky. Perfect Zero-Knowledge in Constant Rounds.
22nd ACM Symposium on Theory of Computing (STOC), ACM, pp. 482–493, 1990.
7. M. Bellare, S. Micali, and R. Ostrovsky. The (True) Complexity of Statistical Zero
Knowledge. 22nd ACM Symposium on Theory of Computing (STOC), ACM, pp.
494–502, 1990.
8. M. Blum. Coin Flipping by Telephone. IEEE COMPCON, pp. 133–137, 1982.
9. R. Canetti. Security and Composition of Multi-Party Cryptographic Protocols. J.
Cryptology 13(1): 143–202, 2000.
10. R. Canetti, Y. Lindell, R. Ostrovsky, and A. Sahai. Universally Composable Two-
Party and Multi-Party Secure Computation. 34th ACM Symposium on Theory of
Computing (STOC), ACM, pp. 494–503, 2002.
11. R. Canetti, E. Kushilevitz, R. Ostrovsky, and A. Rosen. Randomness versus Fault-
Tolerance. J. Cryptology 13(1): 107–142, 2000.
12. R. Canetti, U. Feige, O. Goldreich, and M. Naor. Adaptively-Secure Multiparty
Computation. 28th ACM Symposium on Theory of Computing (STOC), ACM, pp.
639–648, 1996.
13. R. Canetti, J. Kilian, E. Petrank, and A. Rosen. Concurrent Zero-Knowledge Re-
quires Ω̃(log n) Rounds. 33rd ACM Symposium on Theory of Computing (STOC), ACM,
pp. 570–579, 2001.
14. R. Cramer and I. Damgård. Secure Distributed Linear Algebra in a Constant
Number of Rounds. Adv. in Cryptology – Crypto 2001, LNCS vol. 2139, Springer-Verlag,
pp. 119–136, 2001.
15. G. Di Crescenzo and R. Ostrovsky. On Concurrent Zero-Knowledge with Pre-
Processing. Adv. in Cryptology – Crypto 1999, Springer-Verlag, pp. 485–502, 1999.
16. I. Damgård and J.B. Nielsen. Improved Non-Committing Encryption Schemes.
Adv. in Cryptology – Crypto 2000, LNCS vol. 1880, Springer-Verlag, pp. 432–450, 2000.
17. A. De Santis, G. Di Crescenzo, R. Ostrovsky, G. Persiano, and A. Sahai. Robust Non-
Interactive Zero Knowledge. Adv. in Cryptology – Crypto 2001, LNCS vol. 2139,
Springer-Verlag, pp. 566–598, 2001.


18. U. Feige. Alternative Models for Zero Knowledge Interactive Proofs. PhD thesis,
Weizmann Institute of Science, 1990.
19. U. Feige and A. Shamir. Zero Knowledge Proofs of Knowledge in Two Rounds.
Adv. in Cryptology – Crypto 1989, LNCS vol. 435, Springer-Verlag, pp. 526–544, 1990.
20. U. Feige and A. Shamir. Witness Indistinguishability and Witness Hiding Proto-
cols. 22nd ACM Symposium on Theory of Computing (STOC), ACM, pp. 416–426, 1990.
21. M. Fitzi, J. Garay, U. Maurer, and R. Ostrovsky. Minimal Complete Primitives for
Secure Multi-Party Computation. Adv. in Cryptology – Crypto 2001, LNCS vol. 2139,
Springer-Verlag, pp. 80–100, 2001.
22. R. Gennaro, Y. Ishai, E. Kushilevitz, and T. Rabin. The Round Complexity of
Verifiable Secret Sharing and Secure Multicast. 33rd ACM Symposium on Theory
of Computing (STOC), ACM, pp. 580–589, 2001.
23. R. Gennaro, Y. Ishai, E. Kushilevitz, and T. Rabin. On 2-Round Secure Multiparty
Computation. Adv. in Cryptology – Crypto 2002, LNCS vol. 2442, Springer-Verlag,
pp. 178–193, 2002.
24. O. Goldreich. Foundations of Cryptography, vol. 1: Basic Tools. Cambridge Uni-
versity Press, Cambridge, UK, 2001.
25. O. Goldreich. Draft of a Chapter on Cryptographic Protocols, June 2003. Available
at http://www.wisdom.weizmann.ac.il/~oded/foc-vol2.html.
26. O. Goldreich. Draft of an Appendix Regarding Corrections and Additions, June

