Several months ago, I proposed a bucket brigade approach to singing with friends over the Internet. Recently, Glenn and I have been working on an implementation (prototype source code). We can't use WebRTC (I think) because we need fine-grained control over latency, so we're doing everything manually. Which raises the question of compression.

CD quality audio, which your browser is happy to give you, represents a second of audio with 44,100 ("44.1kHz") 16-bit samples. It's possible to send this raw over the internet, but that's a lot of bandwidth: 88,200 bytes per second in each direction, or 1.4 Mb/s round trip. That's not completely nuts, but many people on Wi-Fi are in environments that won't be able to handle that consistently.
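Spelled out in Python, that arithmetic is:

samples_per_second = 44_100
bytes_per_sample = 2                                       # 16 bits
bytes_per_second = samples_per_second * bytes_per_sample   # 88,200 B/s
kb_per_second_each_way = bytes_per_second * 8 / 1000       # ~706 kb/s
mb_per_second_round_trip = 2 * kb_per_second_each_way / 1000  # ~1.4 Mb/s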

One way to do better is to just throw away data. Instead of taking 44,100 samples every second, take a quarter of that (11,025). Instead of using sixteen bits per sample, send only the eight most significant ones. This is a factor of eight smaller, at 175 kb/s round trip, but compare:

(CD Quality: 44.1kHz, 16bit)

(Reduced: 11kHz, 8bit, undithered)

This doesn't sound great: there are some weird audio artifacts that come from rounding in a predictable way. Instead, people would normally dither it:

(Reduced: 11kHz, 8bit, dithered)
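For concreteness, here's a rough numpy sketch of this kind of reduction: keep every fourth sample and round to eight bits, optionally adding simple rectangular dither before rounding. (The reduce_audio helper is illustrative; real dithering schemes, like TPDF or noise shaping, are more careful.)

import numpy as np

def reduce_audio(samples_16bit, dither=False):
    # Naive reduction: 44.1kHz/16-bit -> 11.025kHz/8-bit.
    x = samples_16bit[::4].astype(np.float64)  # keep every 4th sample
    x = x / 256.0                              # rescale 16-bit range to 8-bit
    if dither:
        # Add noise before rounding so the rounding error is
        # uncorrelated with the signal, at the cost of a higher
        # noise floor.
        x = x + np.random.uniform(-0.5, 0.5, size=x.shape)
    return np.clip(np.round(x), -128, 127).astype(np.int8)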

This sounds much more realistic, but it adds a lot of noise. Either way, in addition to sounding awkward, this approach also uses more bandwidth than would be ideal. Can we do better?

I wrote to Chris Jacoby, and he suggested I look into the Opus codec. I tried it out, and it's pretty great:

(Opus: 64 kb/s)

(Opus: 32 kb/s)

(Opus: 24 kb/s)

(Opus: 16 kb/s)

All of these are smaller than the reduced version above, and all of them except the 16 kb/s sound substantially better.

In our use case, however, we're not talking about sending one large recording up to the server. Instead, we'd be sending off batches of samples every 200ms or so. Many compression systems do better if you give them a lot to work with; how efficient is Opus if we give it such short windows?

One way to test is to break the input file up into 200ms files, encode each one with Opus, and then measure the total size. The default Opus file format includes what I measure as ~850 bytes of header, but since we control both the client and the server we don't need to send any header. Subtracting that out, I count for my test file:

format                         size
CD quality                     706 KB
11kHz, 8-bit                    88 KB
Opus, 32 kb/s                   44 KB
Opus, 32 kb/s, 200ms chunks     51 KB
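The chunked measurement can be scripted along these lines (a sketch: the ~850-byte header figure is my measurement from above, and the wav-splitting details are illustrative):

import os
import subprocess
import wave

CHUNK_MS = 200
HEADER_BYTES = 850  # approximate per-file Opus container overhead

# Split the wav into 200ms chunks.
with wave.open("row-row-row.wav", "rb") as w:
    params = w.getparams()
    frames_per_chunk = params.framerate * CHUNK_MS // 1000
    chunks = []
    while True:
        frames = w.readframes(frames_per_chunk)
        if not frames:
            break
        chunks.append(frames)

# Encode each chunk separately and total the sizes, minus headers.
total = 0
for i, frames in enumerate(chunks):
    name = "chunk%03d" % i
    with wave.open(name + ".wav", "wb") as out:
        out.setparams(params)
        out.writeframes(frames)
    subprocess.run(["opusenc", "--quiet", "--music", "--bitrate", "32",
                    name + ".wav", name + ".opus"], check=True)
    total += max(os.path.getsize(name + ".opus") - HEADER_BYTES, 0)

print("total payload: %d KB" % (total // 1024))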

I was also worried that maybe splitting the file up into chunks would sound bad, either because they wouldn't join together well or because it would force Opus to use a lower-quality encoding. But it sounds pretty good to me:

(200ms chunks)

Another question is efficiency: how fast is Opus? Here's a very rough test on my 2017 MacBook Pro:

$ # time 1000 encodes of the 8.2s test file
$ time for i in {1..1000}
    do opusenc --quiet \
       --music --bitrate 32 \
       row-row-row.wav output.opus32
    done
$ # time 1000 decodes of the 32 kb/s encode from above
$ time for i in {1..1000}
    do opusdec --quiet \
       row-row-row.opus32 output.wav
    done

The test file is 8.2 seconds of audio. Encoding 1,000 copies took 68s and decoding took 43s, for a total of 111s. That's 8,200 seconds of audio in 111 seconds of processing, or 74x realtime, and it's the kind of embarrassingly parallel computation where you can just add more cores to support more users.

On the server it looks like we can use the Python bindings, and on the client we can use an Emscripten WebAssembly port.
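As a sketch of what the server side might look like with those bindings (assuming opuslib's high-level API; exact names may differ by version):

import opuslib

SAMPLE_RATE = 48000  # Opus only supports 8/12/16/24/48 kHz, so the
                     # browser's 44.1kHz audio needs resampling first
CHANNELS = 1
FRAME_SIZE = 960     # 20ms at 48kHz, a standard Opus frame size

encoder = opuslib.Encoder(SAMPLE_RATE, CHANNELS, opuslib.APPLICATION_AUDIO)
encoder.bitrate = 32000
decoder = opuslib.Decoder(SAMPLE_RATE, CHANNELS)

def encode_frame(pcm_bytes):
    # pcm_bytes: FRAME_SIZE 16-bit little-endian mono samples
    return encoder.encode(pcm_bytes, FRAME_SIZE)

def decode_frame(packet):
    return decoder.decode(packet, FRAME_SIZE)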

Overall, this seems like a solid chunk of work, but also a good improvement.

Comments:
> In our use case, however, we're not talking about sending one large recording up to the server. Instead, we'd be sending off batches of samples every 200ms or so. [...]
> One way to test is to break the input file up into 200ms files, encode each one with Opus, and then measure the total size. [...]

Based on my understanding of what you are building, this splitting is not a good model for how you would actually implement it. If you have a sender that is generating uncompressed audio, you can feed it into the compressor as you produce it and get a stream of compressed output frames that you can send and decode on the other end, without resetting the compressor in between.
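For illustration, the streaming version might keep one long-lived encoder per sender and emit a packet per fixed-size frame, instead of a fresh encoder (and fresh headers) per 200ms chunk. A hypothetical sketch, again assuming opuslib-style bindings:

import opuslib

FRAME_SIZE = 960                  # 20ms at 48kHz
BYTES_PER_FRAME = FRAME_SIZE * 2  # 16-bit mono samples

class StreamingEncoder:
    def __init__(self):
        self.encoder = opuslib.Encoder(48000, 1, opuslib.APPLICATION_AUDIO)
        self.buffer = b""

    def feed(self, pcm_bytes):
        # Accumulate PCM and yield one Opus packet per full frame;
        # the encoder's internal state carries across frames, so
        # nothing is reset between packets.
        self.buffer += pcm_bytes
        while len(self.buffer) >= BYTES_PER_FRAME:
            frame = self.buffer[:BYTES_PER_FRAME]
            self.buffer = self.buffer[BYTES_PER_FRAME:]
            yield self.encoder.encode(frame, FRAME_SIZE)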

Coauthor here: FWIW, I also favor eventually switching to the (IMO more reasonable) streaming approach. But that requires a lot more complexity and state on the server side, so I haven't yet attempted to implement it to see how much of an improvement it is. Right now the server is an extremely dumb single-threaded Python program with nginx in front of it, which is performant enough to scale to at least 200 clients. (This is using larger-than-200ms windows.) Switching to a websocket (or even WebRTC) approach would probably add an order of magnitude of complexity on the server end. (For WebRTC, maybe closer to two orders, from my experiments so far.)

You can do that, but then you need the server to retain per-client state. Everything stays much simpler if we don't!

I agree with @cata here. Batch sending also introduces a forced 200ms of latency. I would also propose a proper protocol like RTP for network transport, which helps with handling problems during transport.

To support fully streaming operation in the browser, I think we would need to switch to using web sockets. Doable, but complicated and may not be worth it?

200ms of latency isn't ideal, but also isn't that bad. The design does not require minimizing latency, just keeping it reasonably small.