Consensus in the Presence of Partial Synchrony 307

synchronous. Moreover, the algorithms for corresponding cases still require amounts of time (specifically, polynomial) similar to the earlier case. Again, we proceed by showing how to use the models of this section to simulate the basic model of Section 3.

In the previous section, the processors had a common notion of time that allowed time to be divided into rounds. In this case, where Φ does not always hold or is unknown, no such common notion of time is available. Therefore, our first task is to describe protocols that give the processors some approximately common notion of time. We call such protocols distributed clocks. Our distributed clocks do not use explicit knowledge of Δ or Φ. They are designed to be used in either kind of partially synchronous model, Δ and Φ holding eventually or Δ and Φ unknown. However, the properties that the clocks exhibit do depend on the particular bounds Δ and Φ that hold (eventually) during the particular run.

Each processor maintains a private (software) clock. The private clocks grow at a rate that is within some constant factor of real time and remain within a constant of each other. For the model with Δ and Φ unknown, these conditions hold at all times. For the GST model, however, these conditions are only guaranteed to hold after some constant amount of time after GST. The three “constants” here depend polynomially on N, Φ, and Δ. We have made no effort to optimize these constants, as this would obfuscate an already difficult and technical argument. In addition, the number of message bits sent by correct processors is polynomially bounded in N, Δ, Φ, and GST.

Once we have defined the distributed clocks, the protocols of Section 3 are simulated by letting each processor use its private clock to determine which round it is in. Several “ticks” of each private clock are used for the simulation of each round in the basic model.
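The tick-to-round mapping can be sketched as follows. This is our own illustrative code, not the paper's protocol: the class name, the constant TICKS_PER_ROUND, and the trivial clock-advance rule are all assumptions made for exposition (the paper's constants depend polynomially on N, Φ, and Δ, and its clocks advance by a fault-tolerant rule based on ticks received from other processors).

```python
TICKS_PER_ROUND = 4  # illustrative constant, not taken from the paper


class Processor:
    """Sketch of one processor simulating a round-based protocol via a private clock."""

    def __init__(self, pid):
        self.pid = pid
        self.clock = 0   # private (software) clock, measured in ticks
        self.inbox = []  # clock values claimed by other processors

    def round(self):
        # The simulated round is derived purely from the private clock:
        # several ticks of the private clock per round of the basic model.
        return self.clock // TICKS_PER_ROUND

    def clock_receive(self, claimed_ticks):
        # Receive step of the clock protocol: learn another processor's clock.
        self.inbox.append(claimed_ticks)

    def clock_send(self):
        # Send step of the clock protocol: advance and report the private clock.
        # (A placeholder rule; a real distributed clock advances according to
        # the ticks received, so that correct clocks stay within a constant
        # of each other.)
        self.clock += 1
        return self.clock

    def simulated_step(self):
        # One step of the underlying basic-model algorithm, executed in the
        # round that the private clock currently indicates.
        return (self.pid, self.round())
```

Because the private clocks remain within a constant of each other, the rounds computed this way by different correct processors differ by at most a constant, which is what the simulation of Section 3 needs.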
In order to use a distributed clock in such simulations, we need to interleave the steps of the distributed-clock algorithm with steps belonging to the underlying algorithm being simulated. Moreover, the distributed-clock algorithm itself is conveniently described as interleaving Receive steps, which increase the recipient’s knowledge of other processors’ local clocks, with Send steps, which allow the sender to inform others about its local clock. To be specific, we assume that processors alternately execute a Receive operation for the clock, a Send operation for the clock, and a step of the algorithm being simulated.

In this section we describe what happens during the clock maintenance steps for two different distributed clocks. The first, presented in Section 5.1, handles Byzantine faults without authentication and requires N ≥ 3t + 1. The second, presented in Section 5.2, handles Byzantine faults with authentication and requires N ≥ 2t + 1. This clock obviously handles fail-stop and omission faults as well. In Section 5.3 the upper bounds for the model in which Δ and Φ hold eventually are given. In Section 5.4 we present the upper bound results for the model in which Δ and Φ are unknown. We do not prove lower bounds in this section, since the lower bounds obtained in Section 4 apply to the current models.

5.1 A DISTRIBUTED CLOCK FOR BYZANTINE FAULTS WITHOUT AUTHENTICATION. Throughout this section we assume that N ≥ 3t + 1. We again assume that real times are numbered 0, 1, 2, . . . . Processors participate in our distributed clock protocols by sending ticks to one another. As an expositional convenience, we define a master clock whose value at any time s depends on the past global behavior of the system and is a function of the ticks that have been sent before s. Even approximating the value of the master clock requires global information about
