A Neat Trick For Compressing Networked State Data

I’m not a network programmer, but I do like cool algorithms.

Today at lunch a co-worker mentioned a really cool trick for when you need to continually send state across a network.

For instance, imagine you have a large struct of data containing information about all the players in the world, projectiles, enemies, and other game objects, and you want to send this information to a specific player so their game client can render the world appropriately, do collision detection, and so on.

It goes like this:

  1. Get the initial state and send it (compressed).
  2. When it’s time to send an update, XOR the new state against the previous state, compress that result, and send it.
  3. Rinse and repeat.

The magic here is in the assumption that the state as a whole isn’t going to change much from update to update. If this assumption is true, when you do the XOR against the previous state, you are going to end up with a lot of zeroes, which compress very nicely, making for a small data payload.

On the other side, when you receive an update, you would just decompress it, XOR it against the previous state, and use that as the new state.
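
Here’s a minimal C++ sketch of both directions (WorldState, xorDelta, and applyDelta are made-up names, and compress() stands in for whatever general-purpose compressor you already use):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical world state; a real game would have players, projectiles,
// enemies, etc. The only requirement here is that it is plain-old-data so
// we can treat it as raw bytes.
struct WorldState {
    float    playerPos[64][3];
    uint16_t playerHealth[64];
    uint32_t projectileFlags[128];
};

// XOR two states byte-by-byte. Bytes that did not change become 0, and long
// runs of zeroes are what makes the compressed delta small.
std::vector<uint8_t> xorDelta(const WorldState& prev, const WorldState& curr) {
    const uint8_t* a = reinterpret_cast<const uint8_t*>(&prev);
    const uint8_t* b = reinterpret_cast<const uint8_t*>(&curr);
    std::vector<uint8_t> delta(sizeof(WorldState));
    for (size_t i = 0; i < sizeof(WorldState); ++i)
        delta[i] = a[i] ^ b[i];
    return delta;
}

// Receiver side: XOR the decompressed delta against the previous state to
// reconstruct the new one (XOR is its own inverse).
WorldState applyDelta(const WorldState& prev, const std::vector<uint8_t>& delta) {
    WorldState next = prev;
    uint8_t* out = reinterpret_cast<uint8_t*>(&next);
    for (size_t i = 0; i < sizeof(WorldState) && i < delta.size(); ++i)
        out[i] ^= delta[i];
    return next;
}

// Sender loop, roughly:
//   send(compress(firstState));                      // step 1
//   every update:
//     send(compress(xorDelta(lastSent, current)));   // step 2
//     lastSent = current;                            // step 3
```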

I thought that was pretty clever.

BTW, when sending compressed data across the network, the smaller the message, the larger the fraction of it that’s taken up by the “compression header”. To get an intuition for this, compress an empty text file and notice that the result is bigger than the original. The header is there to give information about how to decompress the data (such as a Huffman table, or whatever else the format needs).
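
For a concrete (if silly) demonstration using zlib: the exact byte counts depend on the zlib version and compression level, but a tiny input can’t amortize the fixed per-stream overhead (header, end-of-stream marker, checksum), so the “compressed” result comes out bigger than what went in:

```cpp
#include <cstdio>
#include <vector>
#include <zlib.h>

int main() {
    const unsigned char msg[] = "hi";  // 3 bytes, counting the terminator
    std::vector<unsigned char> out(compressBound(sizeof(msg)));
    uLongf outLen = out.size();

    // zlib's one-shot compress(): a tiny input cannot hide the fixed
    // per-stream overhead, so the output is larger than the input.
    int rc = compress(out.data(), &outLen, msg, sizeof(msg));
    std::printf("rc=%d: %zu bytes in -> %lu bytes out\n",
                rc, sizeof(msg), static_cast<unsigned long>(outLen));
}
```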

One way to get around this is to have the server and client pre-agree on a compression header so that when they talk to each other, they can omit it.

Another way to get around this is to use an ADAPTIVE compression header.

The idea is that on the client, when you decompress data, you also run some logic to update an implicit compression header for use with the next message. If the server does the same (keeping a context for each player individually), and if that logic is deterministic, the client and server will adapt their compression in exactly the same way. This lets the two sides talk to each other without ever sending compression headers, while still adapting the compression on the fly to match the specific data being sent.
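
I don’t know exactly what form that logic takes in a real engine, but here’s a small sketch of the synchronization pattern using a deliberately simple, made-up model (SharedModel, rankEncode, and rankDecode are all hypothetical names): both sides rank byte values by how often they’ve seen them, and update that ranking with identical deterministic logic after every message, so the table itself never goes over the wire. A real implementation would follow the rank transform with an entropy coder.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

// Shared, deterministically updated model: a ranking of byte values by how
// often they have appeared in previously exchanged messages.
struct SharedModel {
    std::array<uint64_t, 256> freq{};   // observed counts
    std::array<uint8_t, 256>  rank{};   // byte value -> rank (0 = most common)
    std::array<uint8_t, 256>  value{};  // rank -> byte value

    SharedModel() {
        std::iota(value.begin(), value.end(), 0);
        rebuild();
    }

    // Deterministic: the same inputs on both sides produce the same tables.
    void rebuild() {
        std::stable_sort(value.begin(), value.end(),
                         [&](uint8_t a, uint8_t b) { return freq[a] > freq[b]; });
        for (int r = 0; r < 256; ++r) rank[value[r]] = static_cast<uint8_t>(r);
    }

    void update(const std::vector<uint8_t>& bytes) {
        for (uint8_t b : bytes) ++freq[b];
        rebuild();
    }
};

// Replace each byte with its current rank. Frequent bytes (like the zeroes in
// an XOR delta) become small values that a later entropy/RLE stage can pack
// tightly.
std::vector<uint8_t> rankEncode(const SharedModel& m, const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = m.rank[in[i]];
    return out;
}

std::vector<uint8_t> rankDecode(const SharedModel& m, const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = m.value[in[i]];
    return out;
}

int main() {
    SharedModel server, client;  // each side keeps its own copy

    std::vector<uint8_t> delta = {0, 0, 0, 7, 0, 0, 7, 0};   // pretend XOR delta
    std::vector<uint8_t> wire  = rankEncode(server, delta);  // + entropy code in practice
    std::vector<uint8_t> back  = rankDecode(client, wire);

    // Both sides update from the *decoded* data with the same deterministic
    // logic, so their models stay identical without ever sending the tables.
    server.update(delta);
    client.update(back);

    // A second message is remapped by the adapted model on both ends and
    // still round-trips, with no header exchanged.
    std::vector<uint8_t> delta2 = {0, 7, 0, 0, 0, 0, 42, 0};
    std::vector<uint8_t> back2  = rankDecode(client, rankEncode(server, delta2));

    std::printf("round trips ok: %d %d\n", back == delta, back2 == delta2);
}
```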

Anyways, like I said, I’m not a network programmer, so back to graphics for me 🙂

If you like network programming, Glenn Fiedler has written a lot of really interesting things on the subject.
https://gafferongames.com/
@gafferongames


7 comments

    • Delta compression is very common over UDP with game protocols. Look up the Quake 3 networking model for a good explanation of the details.

      In Doom 3, they used a simple/fixed compression method on top of the deltas (instead of LZ, etc.), just RLE’ing the strings of zeros, storing the repetition count in 3 bits (1-8 repeats) immediately after each 0 in the bitstream.
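
      If I’m describing that right, over an abstract bit sequence (std::vector<bool> standing in for a real bit packer) the scheme would look roughly like this:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Encode: copy 1 bits through; replace each run of zeroes (up to 8 at a time)
// with a single 0 bit followed by 3 bits holding (runLength - 1).
std::vector<bool> rleEncode(const std::vector<bool>& bits) {
    std::vector<bool> out;
    for (size_t i = 0; i < bits.size();) {
        if (bits[i]) { out.push_back(true); ++i; continue; }
        size_t run = 0;
        while (i < bits.size() && !bits[i] && run < 8) { ++run; ++i; }
        out.push_back(false);
        size_t n = run - 1;                        // 0..7 encodes 1..8 zeroes
        for (int b = 2; b >= 0; --b) out.push_back((n >> b) & 1);
    }
    return out;
}

// Decode: a 1 bit is literal; a 0 bit is followed by 3 bits giving how many
// zeroes (minus one) to emit.
std::vector<bool> rleDecode(const std::vector<bool>& bits) {
    std::vector<bool> out;
    for (size_t i = 0; i < bits.size();) {
        if (bits[i]) { out.push_back(true); ++i; continue; }
        ++i;                                       // consume the 0 marker
        size_t n = 0;
        for (int b = 0; b < 3 && i < bits.size(); ++b) n = (n << 1) | (bits[i++] ? 1 : 0);
        out.insert(out.end(), n + 1, false);
    }
    return out;
}

int main() {
    // Mostly zeroes, like an XOR delta of a barely-changed state.
    std::vector<bool> bits(40, false);
    bits[0] = bits[20] = bits[39] = true;

    std::vector<bool> packed = rleEncode(bits);
    std::printf("%zu bits -> %zu bits, round trip ok: %d\n",
                bits.size(), packed.size(), rleDecode(packed) == bits);
}
```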


  1. Yes, those are good tricks indeed, but there is an even better trick! You store the initial state in the zlib deflate dictionary. Then you compress the next state using that dictionary. The result will be a Huffman-coded “delta”. The other side has to have the initial state to store in its dictionary (this can be sent in a one-time handshake). This process can then be repeated without sending the entire dictionary, and it is deterministic.
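
     Based on the zlib manual, that would look roughly like this, using raw deflate streams so there is no per-message header (error handling omitted; deflateWithDict and inflateWithDict are just made-up names for the sketch, and the dictionary is whatever state both sides already agreed on):

```cpp
#include <cstddef>
#include <vector>
#include <zlib.h>

// Compress `data` as a raw deflate stream (windowBits = -15, so no zlib
// header at all), seeding the compressor with `dict` (a previously exchanged
// state) as a preset dictionary.
std::vector<unsigned char> deflateWithDict(const std::vector<unsigned char>& dict,
                                           const std::vector<unsigned char>& data) {
    z_stream zs{};
    deflateInit2(&zs, Z_BEST_COMPRESSION, Z_DEFLATED, -15, 8, Z_DEFAULT_STRATEGY);
    if (!dict.empty())
        deflateSetDictionary(&zs, dict.data(), static_cast<uInt>(dict.size()));

    std::vector<unsigned char> out(deflateBound(&zs, data.size()));
    zs.next_in   = const_cast<unsigned char*>(data.data());
    zs.avail_in  = static_cast<uInt>(data.size());
    zs.next_out  = out.data();
    zs.avail_out = static_cast<uInt>(out.size());
    deflate(&zs, Z_FINISH);
    out.resize(out.size() - zs.avail_out);
    deflateEnd(&zs);
    return out;
}

// Decompress a raw deflate stream that was produced with the same dictionary.
// The caller knows the uncompressed size (e.g. sizeof the state struct).
std::vector<unsigned char> inflateWithDict(const std::vector<unsigned char>& dict,
                                           const std::vector<unsigned char>& comp,
                                           size_t expectedSize) {
    z_stream zs{};
    inflateInit2(&zs, -15);
    if (!dict.empty())
        inflateSetDictionary(&zs, dict.data(), static_cast<uInt>(dict.size()));

    std::vector<unsigned char> out(expectedSize);
    zs.next_in   = const_cast<unsigned char*>(comp.data());
    zs.avail_in  = static_cast<uInt>(comp.size());
    zs.next_out  = out.data();
    zs.avail_out = static_cast<uInt>(out.size());
    inflate(&zs, Z_FINISH);
    inflateEnd(&zs);
    return out;
}

// Usage sketch: both sides already hold the agreed-on state bytes, so only
// the small compressed result crosses the wire.
//   wire = deflateWithDict(prevStateBytes, currStateBytes);
//   curr = inflateWithDict(prevStateBytes, wire, currStateBytes.size());
```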



