Lliora a day ago

Ship-to-shore SAT link, 800 ms RTT, 2% burst loss. We muxed 4k pps telemetry + 1 Mbps H.264 over QUIC last year. Head-of-line blocking vanished; TCP would have stalled 12 s on each 200 ms fade. FEC at the stream-frame level, not the packet level, let us ride fades with 3% overhead. QUIC's real win is acking individual frames; we saw 40% better goodput vs. TCP + application FEC at the same latency.
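For the curious, the frame-level parity idea looks roughly like this (a toy sketch, not our exact scheme: one XOR parity frame per group of data frames, so any single lost frame in a group is recoverable; with groups of ~32 frames the overhead is about 3%):

```python
# Toy frame-level FEC: one XOR parity frame per group of N data frames.
# Any single lost frame in the group can be rebuilt, at ~1/(N+1) overhead.

def xor_frames(frames, size):
    parity = bytearray(size)
    for f in frames:
        for i, b in enumerate(f):
            parity[i] ^= b
    return bytes(parity)

def make_group(frames):
    # Pad frames to a common size, then append the parity frame.
    size = max(len(f) for f in frames)
    padded = [f.ljust(size, b"\x00") for f in frames]
    return padded + [xor_frames(padded, size)]

def recover(group, lost_index):
    # XOR of all surviving data frames plus the parity frame rebuilds the lost one.
    survivors = [f for i, f in enumerate(group[:-1]) if i != lost_index]
    size = len(group[-1])
    return xor_frames(survivors + [group[-1]], size)
```

Because the parity lives at the frame level, a whole frame lost to a fade is rebuilt from its neighbors without waiting for a retransmit round trip.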

  • scottlamb a day ago

    Very cool result, but I'm struggling to understand the baseline: what does "TCP + application FEC" mean? If everything is one TCP stream, and thus the kernel delivers bytes to the application strictly in order, what does application FEC accomplish? Or is it distributed across several TCP streams?

dmm a day ago

Pull-based streaming can work with WebRTC. I implemented it for my custom IP camera NVR solution. I just open N streams on the client, and when one is deactivated (typically by scrolling it out of the viewport), the client sends an unsubscribe message over a separate control channel and the server just stops sending video until they resubscribe.
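The control channel can be as simple as JSON toggles (a minimal sketch; the message and field names here are made up, not from my actual implementation):

```python
import json

# Hypothetical control-channel messages: the client toggles each stream,
# and the server tracks which streams it should actually be sending.

def make_msg(action, stream_id):
    assert action in ("subscribe", "unsubscribe")
    return json.dumps({"action": action, "stream": stream_id})

class ControlState:
    def __init__(self):
        self.active = set()

    def handle(self, raw):
        msg = json.loads(raw)
        if msg["action"] == "subscribe":
            self.active.add(msg["stream"])
        else:
            self.active.discard(msg["stream"])

    def should_send(self, stream_id):
        # The media path checks this before pushing video for a stream.
        return stream_id in self.active
```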

I'm currently switching to a QUIC-based solution for other reasons, mainly that WebRTC is a giant black box which provides very limited control[1], yet requires deep understanding of its implementation[2], and I'm tired[3].

I looked at moq-lite but decided against it for some reason. I think because I have <5 clients and don't need the fanout. The auth strategy is also very different from what I currently use.

[1] Why is Firefox now picking that (wrong) ICE candidate?

[2] RTP, ICE, SDP, etc.

[3] WebRTC isn't bad for the video-conferencing use case, but anything else is a pain

  • scottlamb a day ago

    I've also looked at switching my open source IP camera NVR to WebCodecs and WebTransport (maybe MoQ). Two things giving me pause:

    * Firefox support for WebCodecs is poor: none at all on Android [1], and H.265 is behind a feature flag [2].

    * Mobile Safari doesn't support WebTransport. Or didn't...I just looked it up again and see it does in 26.4 TP. Progress! [3]

    [1] https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe...

    [2] https://searchfox.org/firefox-main/rev/da2bfb8bf7dc476186dfe...

    [3] https://caniuse.com/webtransport

    • kixelated a day ago

      Yeah for Safari support I'm using polyfills; it sucks.

      - libav.js for AudioEncoder/AudioDecoder.
      - QMux over WebSockets for WebTransport.

      Both are NPM packages if you want to use them. @kixelated/libavjs-webcodecs-polyfill and @moq/qmux

      26.4 removes the need for both so there's hope!

      • scottlamb a day ago

        Thanks!

        Any idea what Firefox is waiting for? To me those lines I quoted seem entirely arbitrary, and a skim through bugzilla didn't help.

    • Sean-Der a day ago

      That's exciting! When you were evaluating it, did everything about the protocol/APIs fit your needs?

      Is it just that features/software need to be implemented?

      • scottlamb a day ago

        I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.

        But WebCodecs is just really straightforward. It's hard to find anything to complain about.

        If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <https://github.com/scottlamb/retina> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160ms glass-to-glass latency, [1] with most of that being the IP camera's encoder. Because WebCodecs doesn't supply a particular jitter buffer implementation, you can just not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.
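        To make "no jitter buffer" concrete: the liveness-first policy amounts to always rendering the newest frame and dropping anything older (a minimal sketch with made-up names; real code would feed this from VideoDecoder output):

```python
class LatestFrameSink:
    """Keep only the newest decoded frame: liveness over smoothness."""

    def __init__(self):
        self.frame = None
        self.dropped = 0

    def push(self, frame):
        if self.frame is not None:
            self.dropped += 1  # the older, never-rendered frame is discarded
        self.frame = frame

    def render(self):
        # Paint whatever is newest; there is no queue to add latency.
        f, self.frame = self.frame, None
        return f
```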

        Skipping the jitter buffer also made me realize that with one of my cameras, I had a weird pattern where up to six frames would pile up in the decode queue until a key frame and then start over, which without a jitter buffer is hard to miss at 10 fps. It turns out that even though this camera's H.264 encoder never reorders frames, they hadn't bothered to say that in their VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI, and now its live stream is more responsive too. So the problem I had wasn't exactly MSE's fault, but MSE made it hard to understand because all the buffering was a black box.

        [1] https://pasteboard.co/Jfda3nqOQtyV.png
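        The cost of that unsignaled restriction is easy to quantify: if the decoder must assume up to N frames of reordering, it holds roughly N frame intervals of extra latency (a back-of-envelope sketch; six frames at 10 fps is 600 ms):

```python
# Back-of-envelope: a decoder that must assume up to max_num_reorder_frames
# of reordering holds roughly that many frame intervals before output.

def reorder_latency_ms(max_num_reorder_frames: int, fps: float) -> float:
    return max_num_reorder_frames * 1000.0 / fps
```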

  • kixelated a day ago

    Absolutely agree.

    You can convert any push-based protocol into a pull-based one with a custom protocol to toggle sources on/off. But it's a non-standard solution, and soon enough you have to control the entire stack.

    The goal of MoQ is to split WebRTC into 3-4 standard layers for reusability. You can use QUIC for networking, moq-lite/moq-transport for pub/sub, hang/msf for media, etc. Or don't! The composability depends on your use case.

    And yeah lemme know if you want some help/advice on your QUIC-based solution. Join the discord and DM @kixelated.

adithyassekhar a day ago

Never had to work with MoQ, but this got me to read the whole thing. There's still a place for good writing.

  • 0_____0 a day ago

    After all the LLM written or lobotomized^W"polished" writing that gets surfaced here, seeing human writing makes me want to do drugs and fall in love.

  • scottlamb a day ago

    > Never had to work with moq

    Probably never had to work with (live) video at all? I think using moq is the dream for anyone who does. The alternatives (DASH, HLS, MSE, WebRTC, SRT, etc.) are all ridiculously fussy and limiting in one way or another, where QUIC/WebTransport and WebCodecs just give you the primitives you want to use as you choose, and moq appears focused on using them in a reasonable, CDN-friendly way.

    • throwaway290 13 hours ago

      > Probably never had to work with (live) video at all?

      streaming might seem like the whole world to you, but it is a small niche of live video ;)

      unless it has in-browser P2P support, it has nothing on WebRTC for live video calls, which don't need a server.

teekert a day ago

The Lonely Island - I'm On A Boat (Explicit Version) ft. T-Pain (Official Video) [0]

[0] https://www.youtube.com/watch?v=avaSdC0QOUM

  • e40 a day ago

    Time flies, that video is 16 years old!

    • bigiain 15 hours ago

      Presumably meaning nobody under 30 understands "I'm on a boat" as anything other than ancient history or boomer-speak (or perhaps more accurately millennial-speak?).

andai 20 hours ago

Do we have UDP in browser yet? Last I checked (mid 2024) it was soon-ish?

Edit: https://caniuse.com/?search=webtransport

Looks like the situation is the same as in 2024: "Yes, except for Apple devices?" If I'm reading this right, it looks like Safari will support it next week though...

ale42 a day ago

Apart from actual support on real networks, isn't this the problem IP multicast was supposed to solve ages ago?

  • kixelated a day ago

    Yep, it's similar to multicast, but at L7.

    But a huge difference is that there's a plan for congestion. We heavily rely on QUIC to drain network queues and prioritize/queue media based on importance. It's doable with multicast+unicast, but complicated.

newsclues a day ago

I like the ability to choose what you want to pull.

I’ve been thinking about an application where people consume all their media, and having the ability to pick which tracks to pull for any content you want to stream would be great.

tamimio a day ago

Very good progress. I have been keeping an eye on QUIC for some time, but I have yet to use it in the wild. The article mentions prioritizing frames and keeping them in RAM; I am a bit confused: is that data sent later, delayed, or is it only added to a non-priority stream? Also, slightly aside from that, how does this work with FEC? I built a streaming platform for drones before, but it used GStreamer, primarily over UDP, with different codecs based on the hardware. One of the issues was what you mentioned in the article, having only one subscriber at a time, so we had some duct-tape solutions when we needed more, but it wasn't really great.

  • kixelated a day ago

    QUIC libraries work by looping over pending streams (in priority order) to determine which UDP packet to send next. If there's more stream data than the congestion window allows, the excess sits in the stream's send buffer.
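    A rough model of that loop (heavily simplified; a real stack interleaves this with loss recovery, flow control, and pacing):

```python
# Simplified model of a QUIC sender's packetization loop: each tick walks
# streams in priority order and packs whatever the congestion window allows;
# the rest stays queued in each stream's send buffer.

def send_tick(streams, cwnd_bytes, mtu=1200):
    packets = []
    budget = cwnd_bytes
    for s in sorted(streams, key=lambda s: s["priority"]):
        while s["buffer"] and budget >= mtu:
            chunk, s["buffer"] = s["buffer"][:mtu], s["buffer"][mtu:]
            packets.append((s["id"], chunk))
            budget -= len(chunk)
    return packets
```

When the window is exhausted, low-priority streams simply wait; nothing is dropped unless a stream is aborted.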

    Either side can abort a stream if it's taking too long, clearing the send buffer and officially dropping the data. It's a lot more flexible than opaque UDP send buffers and random packet loss.

    FEC would make the most sense at the QUIC level because random packet loss is primarily hop-by-hop. But I'm not aware of any serious efforts to do that. There are a lot of ideas out there, but TBH MoQ is too young to have the production usage required to evaluate an FEC scheme.