A clock synchronization algorithm used to synchronize the time on a machine with a remote time server. It is a straightforward algorithm and quite easy to understand.
The procedure:
- A process p requests the time in a message mr and receives the time value t in a message mt.
- t is inserted in mt at the last possible point before transmission from the server S.
- Tround = Time(send mr) + Time(receive mt); on a LAN this is typically (1-10) * 10^-3 seconds, i.e. 1-10 ms.
- min = minimum queueing time for S.
The earliest point at which S could have placed the time in mt was min after p dispatched mr. The time by S's clock when p receives mt therefore lies in the range [t + min, t + Tround - min]. The total width of this range is Tround - 2*min, so if p sets its clock to t + Tround/2 (the midpoint of the range), its estimate is accurate to within ±(Tround/2 - min).
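To make this concrete, here is a minimal client-side sketch in Python. The helper request_server_time() and the parameter min_delay are illustrative names of my own, not part of the algorithm as described above; the sketch simply measures the round trip around the request and applies the t + Tround/2 estimate and the ±(Tround/2 - min) accuracy bound.

```python
import time

def request_server_time():
    """Hypothetical helper: send the request message mr to the time server S
    and return the time value t carried back in the reply mt (seconds since
    the Unix epoch). The transport (UDP, TCP, ...) is up to you."""
    raise NotImplementedError

def synchronize(min_delay=0.0):
    """Estimate the current server time and how accurate that estimate is.

    min_delay is the 'min' from the analysis above: the minimum one-way
    delay to or from S. Passing 0 simply gives the weakest (largest) bound.
    """
    t0 = time.monotonic()       # Time(send mr)
    t = request_server_time()   # server inserts t just before sending mt
    t1 = time.monotonic()       # Time(receive mt)

    t_round = t1 - t0                    # round-trip time Tround
    estimate = t + t_round / 2           # assume the delay was split evenly
    accuracy = t_round / 2 - min_delay   # correct to within +/- this amount
    return estimate, accuracy
```

Using time.monotonic() for the round-trip measurement is a deliberate choice: it keeps the measurement valid even if the local wall clock is adjusted while the request is in flight.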
If all of that made absolutely no sense to you, here's a much simpler (but far less rigorous) explanation. Basically, the client sends a request for the current time to the time server. When it receives the response, it measures the round-trip delay (the time between sending the request and receiving the response), divides that in half, and adds it to the time value returned by the server. The idea is to cancel out the inaccuracy caused by network delay. This assumes that the link is equally fast in both directions, which may not always be the case, but as with any algorithm, you have to make tradeoffs.
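As a worked example with invented numbers: if the reply arrives 8 ms after the request was sent and carries t = 12:00:00.000, the client sets its clock to 12:00:00.004 (half of the 8 ms round trip added to t); if the minimum one-way delay is known to be 1 ms, that setting is accurate to within ±3 ms.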