> **Originally posted by /u/BlueRajasmyk2:**
>
> As a developer with experience in game networking, I'd be interested in hearing a more technical explanation of what /u/Psyonix_Cone meant, because I find it hard to believe they invented their own lossy compression algorithm for this.
>
> I think more likely what he meant was "the data is compressed, but also, packets can be lost."
It is lossy compression. Position vector components (float XYZ) get rounded to the nearest integer. Angular and linear velocity get scaled and rounded. Rotation gets snapped to certain degree angles.
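For anyone curious what that kind of quantization looks like, here's a minimal C++ sketch. The field sizes, the velocity scale factor, and the 256-step rotation snap are illustrative placeholders, not the values we actually ship:

```cpp
#include <cmath>
#include <cstdint>

// Illustrative quantization only -- the bit widths, the velocity scale,
// and the 256-step rotation snap are placeholders, not the shipped values.
struct QuantizedState {
    int32_t posX, posY, posZ;   // position rounded to the nearest unit
    int16_t velX, velY, velZ;   // velocity scaled, then rounded
    uint8_t pitch, yaw, roll;   // rotation snapped to coarse angle steps
};

constexpr float kVelScale = 0.1f;  // placeholder scale factor

QuantizedState Compress(float px, float py, float pz,
                        float vx, float vy, float vz,
                        float pitchDeg, float yawDeg, float rollDeg)
{
    // Rotation: snap each angle to one of 256 steps (~1.4 degrees apart).
    auto snapAngle = [](float deg) {
        return static_cast<uint8_t>(std::lround(deg / 360.0f * 256.0f));
    };

    QuantizedState q;
    q.posX = static_cast<int32_t>(std::lround(px));  // round position
    q.posY = static_cast<int32_t>(std::lround(py));
    q.posZ = static_cast<int32_t>(std::lround(pz));
    q.velX = static_cast<int16_t>(std::lround(vx * kVelScale));  // scale, round
    q.velY = static_cast<int16_t>(std::lround(vy * kVelScale));
    q.velZ = static_cast<int16_t>(std::lround(vz * kVelScale));
    q.pitch = snapAngle(pitchDeg);
    q.yaw   = snapAngle(yawDeg);
    q.roll  = snapAngle(rollDeg);
    return q;
}
```

Decompression just reverses the scaling, so whatever precision got rounded away is gone for good. That's the "lossy" part.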
To correct other things I see in this thread:

- The compression is not for replays; it's for sending the data over the network. Replays are just a recording of the network data.
- The compression isn't to save money; it's to save client download bandwidth. We want to get as much information as possible to the client without overloading them.
- The information isn't sent at a "variable bit rate" in the audio/video-encoding sense of the phrase. The server creates network packets 60 times a second, but not every replicated actor will have a chance to write to every outgoing packet. The packets then suffer from your usual internet jitter (variable latency from A to B). Finally, we only write to the replay at 30 FPS to save disk space. A rough sketch of that flow is below.
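Here's a simplified sketch of that server-side flow. The names, the priority scheme, and the byte budget are placeholders, not our actual replication code:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified placeholders; real replication code is far more involved.
struct Actor  { float priority = 0.0f; /* ...replicated state... */ };
struct Packet { std::vector<uint8_t> bytes; };

constexpr int         kServerHz     = 60;    // packets built per second
constexpr int         kReplayHz     = 30;    // replay written at half rate
constexpr std::size_t kPacketBudget = 1200;  // stay under a typical MTU

// Placeholder: a real implementation writes the actor's quantized transform.
void SerializeActor(const Actor&, Packet& p) {
    p.bytes.resize(p.bytes.size() + 32);
}

void ServerTick(int tickIndex, std::vector<Actor*>& actors,
                Packet& outPacket, Packet& replayFrame)
{
    // Highest-priority actors write first. Anyone who doesn't fit in this
    // packet accumulates priority so they get a turn in a later one.
    std::sort(actors.begin(), actors.end(),
              [](const Actor* a, const Actor* b) {
                  return a->priority > b->priority;
              });

    for (Actor* actor : actors) {
        if (outPacket.bytes.size() + 32 > kPacketBudget) {
            actor->priority += 1.0f;           // starved this tick
        } else {
            SerializeActor(*actor, outPacket);
            actor->priority = 0.0f;
        }
    }

    // Only every other packet makes it into the replay (60 Hz -> 30 FPS),
    // so replays have even fewer samples than the live client sees.
    if (tickIndex % (kServerHz / kReplayHz) == 0)
        replayFrame = outPacket;
}
```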
In OP's link, the replay records a snapshot right before the ball crosses the goal line. The next snapshot isn't until a short while after the ball explodes. However, the replay viewer, like a game client, isn't allowed to predict something as important as a goal explosion. With no goal explosion, the simulation goes on to show us what would have happened if there were no goal: the defender blocks it and the ball starts back the other direction. Finally, the next replay snapshot gets read off disk and informs us that there was a goal explosion.
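If it helps, here's a toy sketch of why playback behaves that way. The snapshot layout and names are made up for illustration:

```cpp
// Toy playback loop: between snapshots the viewer extrapolates physics,
// but an event as important as a goal explosion is never predicted --
// it only fires when a snapshot read off disk says it happened.
struct BallState { float pos[3]; float vel[3]; };

struct Snapshot {
    double    time;          // replay timestamps are ~1/30 s apart
    BallState ball;
    bool      goalExploded;  // illustrative event flag
};

void PlaybackStep(double& clock, double dt, const Snapshot& next,
                  BallState& ball, bool& exploded)
{
    clock += dt;
    if (clock >= next.time) {
        ball = next.ball;          // snap to the authoritative recorded state
        if (next.goalExploded)
            exploded = true;       // the goal finally "happens"
    } else {
        // Extrapolate between snapshots: the ball keeps moving on its last
        // known velocity, which is how it can appear to get blocked first.
        for (int i = 0; i < 3; ++i)
            ball.pos[i] += ball.vel[i] * static_cast<float>(dt);
    }
}
```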