Original Post

I'm working on a netcode-based programming project, so I've been doing a lot of reading and research on how different games handle their netcode. Currently I'm trying to implement a framework similar to the one used in Rocket League.

I've watched the GDC talk about Rocket League's physics and netcode, the one for Overwatch, and a few unofficial videos examining the netcode in RL. I've also read quite a few articles on the topic in a more general sense, such as the Gaffer on Games articles on client-side prediction, server reconciliation, entity interpolation, and lag compensation.

I understand that for shooters, it's better to have the other players rendered slightly in the past so that they are only ever shown in places where they've actually been (and the server is able to rewind time when you shoot to check if you actually hit them on your local machine). However, in Rocket League, it seems like the client wants to display things slightly in the future so that by the time your local inputs arrive on the server, the server's view of the game is identical (or very close) to what you saw when you pushed those inputs.
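To make the shooter half of that contrast concrete, here is a minimal sketch of a lag-compensated hit check, with entirely made-up names and numbers (Snapshot, RewoundHitCheck); it is not taken from any particular game's code. The server keeps a short position history for each player, rewinds to the time the shooter actually saw the target, and tests the shot there.

```cpp
#include <cmath>
#include <cstdio>
#include <deque>

// Hypothetical historical snapshot of a remote player's position at a given time.
struct Snapshot {
    double timeMs;
    double x, y;
};

// Lag-compensated hit check: rewind the target to the time the shooter actually
// saw them (their render time, which lags the server by latency plus the
// interpolation delay), then test the shot against that historical position.
bool RewoundHitCheck(const std::deque<Snapshot>& history,
                     double shooterViewTimeMs,
                     double shotX, double shotY, double hitRadius)
{
    // Find the most recent snapshot at or before the shooter's view time
    // (history is assumed sorted by time, oldest first).
    const Snapshot* best = nullptr;
    for (const Snapshot& s : history) {
        if (s.timeMs <= shooterViewTimeMs) best = &s;
    }
    if (!best) return false;

    const double dx = shotX - best->x;
    const double dy = shotY - best->y;
    return std::sqrt(dx * dx + dy * dy) <= hitRadius;
}

int main()
{
    // Made-up data: target moved right over time; shooter saw the world ~100ms in the past.
    std::deque<Snapshot> history = { {0, 0, 0}, {50, 1, 0}, {100, 2, 0}, {150, 3, 0} };
    const bool hit = RewoundHitCheck(history, /*shooterViewTimeMs=*/50, /*shotX=*/1, /*shotY=*/0, 0.25);
    std::printf("hit at rewound position: %s\n", hit ? "yes" : "no");
    return 0;
}
```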

Does this mean that the client is constantly showing an extrapolated version of the ball / other players? (And then using smoothing techniques when the server ends up disagreeing with the client rather than having an instant jittery correction.)

For example, if my client has 50ms of latency (25ms in either direction):

  • The server is at time 100ms
  • My client has received game-state for time 75ms.
  • If my client sends an input now, the server will receive and execute that input at time 125ms (ignoring the buffer that causes the server to wait a little before executing the inputs).
  • Because of this, my client uses the past game-state values to extrapolate game-state (ball / other players) for time 125ms (while the client only has state up to 75ms).

Does this look correct? And how does that input buffer on the server affect this? If this input buffer is causing inputs to be delayed by ping/2 (25ms in this case), does that mean my local client will be extrapolating to time 150ms instead of 125ms since that is when my input will be processed on the server?
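For what it's worth, here is a minimal sketch of the arithmetic in the example above, using a made-up helper name (ExtrapolationTargetMs) rather than anything from Rocket League's actual code: the client's extrapolation target is the timestamp of the latest received state plus the full round trip, plus whatever extra delay the server's input buffer adds.

```cpp
#include <cstdio>

// Hypothetical helper: given the timestamp of the latest state received from the
// server, the round-trip time, and the extra delay added by the server's input
// buffer, compute how far ahead the client should extrapolate so that what it
// renders now matches what the server will be simulating when this input arrives.
double ExtrapolationTargetMs(double latestStateTimeMs,
                             double roundTripMs,
                             double inputBufferDelayMs)
{
    return latestStateTimeMs + roundTripMs + inputBufferDelayMs;
}

int main()
{
    // Numbers from the example: 50ms RTT (25ms each way), latest state stamped t=75ms.
    std::printf("no buffer:   %.0f ms\n", ExtrapolationTargetMs(75.0, 50.0, 0.0));   // 125 ms
    std::printf("25ms buffer: %.0f ms\n", ExtrapolationTargetMs(75.0, 50.0, 25.0));  // 150 ms
    return 0;
}
```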

Also, if the input that my client sent at t=100ms (which is expected to be executed on the server at t=125ms) ends up taking longer than normal and arrives on the server at t=150ms, what happens? I'm guessing that at t=125ms the server notices it hasn't received an input for that frame, assumes the input will be the same as the previously received one (with some input-release smoothing), simulates the frame using that input, and then sends game-state updates to every client. But then, at t=150ms, it receives the proper input for t=125ms. If this input is different from the one the server assumed, what does the server do? Does it re-simulate from t=125ms using the correct input and then send corrections to every client?

Assuming this is all correct (or close to correct): if a different client on the server has 200ms ping, then when the server is at time=100ms, this client will have received game-state from the server for time=0ms and will be using extrapolation to show the ball/other players for time=200ms. Is this accurate (still ignoring the input buffer)?
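As a quick worked check of that example (again just a sketch with made-up numbers): ignoring the input buffer, the extrapolation window is always one full round trip past the newest state the client has.

```cpp
#include <cstdio>

int main()
{
    // Hypothetical numbers from the 200ms example: server at t=100ms, 100ms downstream,
    // so the newest state the client has is stamped t=0ms.
    const double latestStateTimeMs = 0.0;
    const double roundTripMs = 200.0;
    std::printf("extrapolation target: %.0f ms\n", latestStateTimeMs + roundTripMs); // 200 ms
    return 0;
}
```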

Thank you!

about 4 years ago - /u/Psyonix_Cone

Does this mean that the client is constantly showing an extrapolated version of the ball / other players? (And then using smoothing techniques when the server ends up disagreeing with the client rather than having an instant jittery correction.)

For example, if my client has 50ms of latency (25ms in either direction):

  • The server is at time 100ms
  • My client has received game-state for time 75ms.
  • If my client sends an input now, the server will receive and execute that input at time 125ms (ignoring the buffer that causes the server to wait a little before executing the inputs).
  • Because of this, my client uses the past game-state values to extrapolate game-state (ball / other players) for time 125ms (while the client only has state up to 75ms).

Yep that's exactly right!

And how does that input buffer on the server affect this? If this input buffer is causing inputs to be delayed by ping/2 (25ms in this case), does that mean my local client will be extrapolating to time 150ms instead of 125ms since that is when my input will be processed on the server?

Yes, that is also correct. The extra latency it adds depends on how many inputs are in the buffer. So if we have 4 inputs in the buffer, that's 4 * (1/120) * 1000 = 33.333 ms of extra latency.
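A tiny sketch of that formula, assuming the fixed 120Hz physics tick mentioned below and a made-up helper name (BufferLatencyMs):

```cpp
#include <cstdio>

// Hypothetical helper for the figure above: the extra latency added by the server's
// input buffer is simply (inputs queued) x (tick interval).
double BufferLatencyMs(int bufferedInputs, double tickRateHz)
{
    return bufferedInputs * (1.0 / tickRateHz) * 1000.0;
}

int main()
{
    std::printf("%.3f ms\n", BufferLatencyMs(4, 120.0)); // 33.333 ms at 120Hz
    return 0;
}
```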

Note that because we use a fixed tick rate for physics/gameplay, the client and server communicate using frames instead of time.
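A minimal sketch of what that looks like, assuming a 120Hz tick and made-up helper names (TimeToFrame, FrameToTimeMs): with a fixed tick rate, timestamps and frame numbers are interchangeable, so the times in the earlier example map directly onto frame indices.

```cpp
#include <cmath>
#include <cstdio>

// Assumed fixed physics/gameplay tick rate.
constexpr double kTickRateHz = 120.0;

// Convert a timestamp in milliseconds to the nearest frame index, and back.
int TimeToFrame(double timeMs) { return static_cast<int>(std::lround(timeMs * kTickRateHz / 1000.0)); }
double FrameToTimeMs(int frame) { return frame * 1000.0 / kTickRateHz; }

int main()
{
    std::printf("125 ms   -> frame %d\n", TimeToFrame(125.0)); // frame 15
    std::printf("frame 18 -> %.2f ms\n", FrameToTimeMs(18));   // 150.00 ms
    return 0;
}
```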

If this input is different than the one that the server assumed, what does the server do? Does it re-simulate from t=125ms using the correct input and then send corrections to every client?

The server never re-simulates. The goal of the input buffer is to try to handle the case where client inputs sometimes take longer to reach the server due to latency variation. However, if there is a spike in variance and the input buffer runs dry, the server will repeat the last known inputs. When the real inputs eventually arrive, the server will just add them to the input buffer queue and consume from the queue as usual the next time physics runs. That last part is less than ideal for reasons that are more complicated than what I have time to write out right now, but it's what the Default input buffer strategy for our game uses and is good enough for most cases.
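Here is a minimal sketch of that strategy with made-up types (PlayerInput, ServerInputBuffer); it is not Psyonix's implementation, just the behavior described above: consume one input per physics tick, repeat the last known input if the queue runs dry, and simply enqueue late-arriving inputs for later ticks.

```cpp
#include <cstddef>
#include <deque>

// Hypothetical per-frame input from a client.
struct PlayerInput {
    float throttle = 0.0f;
    float steer = 0.0f;
    bool jump = false;
};

// Sketch of the "repeat last input when the buffer runs dry" strategy.
class ServerInputBuffer {
public:
    // Called whenever an input packet arrives; late inputs are simply queued and
    // will be consumed on later ticks (the server never rewinds or re-simulates).
    void Enqueue(const PlayerInput& input) { queue_.push_back(input); }

    // Called once per physics tick to get the input to simulate with.
    PlayerInput ConsumeForTick()
    {
        if (!queue_.empty()) {
            lastKnown_ = queue_.front();
            queue_.pop_front();
        }
        // If the buffer is dry (a latency spike delayed the client's packets),
        // fall back to repeating the last known input.
        return lastKnown_;
    }

    std::size_t Size() const { return queue_.size(); }

private:
    std::deque<PlayerInput> queue_;
    PlayerInput lastKnown_{};   // what we repeat when no fresh input is available
};

int main()
{
    ServerInputBuffer buffer;
    buffer.Enqueue({1.0f, 0.0f, false});
    PlayerInput a = buffer.ConsumeForTick();  // real input
    PlayerInput b = buffer.ConsumeForTick();  // buffer dry: repeats the last input
    (void)a; (void)b;
    return 0;
}
```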