
HLS

These stress tests were designed to assess the performance of both regular HLS and Low-Latency HLS under substantial load.

We assumed a livestream scenario in which all clients request the most recent segments or partial segments as they are produced. This closely models real-time broadcasting, where every viewer consumes the same (latest) content concurrently. Contrast this with, for example, a video-on-demand (VOD) scenario, in which any client may request any segment at any time.

Setup

  • Machine A, running Fishjam with one room

    • One WebRTC peer in the room (WebRTC video stream with constant bitrate of 1.8 Mbps)
    • One HLS component in the room (segment length: 6 s, partial segment length: 1 s)
  • Machine B, running a stress test utility

    • The utility steadily increases the number of simulated clients over the course of the test, up to a configured maximum
    • Each client simulates an HLS player continuously requesting the most recent segments/partials, as if watching a livestream (see the sketch after this list)
    • Each client opens a separate connection to Fishjam on machine A (no connection pooling)
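
Below is a minimal sketch of what each simulated client does in the regular HLS case. It assumes a conventional live media playlist layout with relative segment URIs; the endpoint URL, playlist name, and ramp-up parameters are illustrative, not the actual Fishjam paths or test settings:

```python
import asyncio

import aiohttp

BASE_URL = "http://machine-a:4000/hls/room_id"  # illustrative, not the real Fishjam path

async def simulate_client(duration_s: int = 60) -> None:
    """Mimic a live HLS player: poll the playlist, fetch the newest segment."""
    last_segment = None
    # A dedicated session per client gives each simulated viewer its own
    # TCP connection, mirroring the "no connection pooling" setup above.
    async with aiohttp.ClientSession() as session:
        for _ in range(duration_s):
            async with session.get(f"{BASE_URL}/index.m3u8") as resp:
                playlist = await resp.text()
            # In a live media playlist, the last URI line is the newest segment.
            uris = [line for line in playlist.splitlines()
                    if line and not line.startswith("#")]
            if uris and uris[-1] != last_segment:
                last_segment = uris[-1]
                async with session.get(f"{BASE_URL}/{last_segment}") as resp:
                    await resp.read()  # download and discard the segment bytes
            await asyncio.sleep(1)  # poll roughly once per second

async def ramp_up(max_clients: int, step: int = 100, interval_s: int = 5) -> None:
    """Steadily add simulated clients up to the configured maximum."""
    tasks = []
    for _ in range(0, max_clients, step):
        tasks += [asyncio.create_task(simulate_client()) for _ in range(step)]
        await asyncio.sleep(interval_s)
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(ramp_up(500))
```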

Machine specs

  • CPU: AMD EPYC 7502P (32 cores, 64 threads at 2.5 GHz)
  • Memory: 128 GB
  • Network bandwidth (A <---> B): 10 Gbps

Test results

We varied the number of client connections to determine how the server performs under increasing load, recording at each level the mean incoming throughput, the mean and peak outgoing throughput, the peak memory usage, the peak CPU utilisation, and the mean request latency (grouped by request type).

Before any clients were connected, memory usage stood at 130 MB and CPU utilisation at 11%. These values serve as the server's idle baseline for comparison with the figures measured under load.

Info

Keep in mind that, for HLS, the outgoing throughput is strongly influenced by the bitrate of the generated segments, which in turn depends on the bitrate of the streams used to generate them. In particular, for video streams with high and fluctuating bitrates (such as sports events or video game livestreams), you can expect noticeably higher mean and peak outgoing throughputs.
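
As a rough sanity check, the mean outgoing throughput should scale as roughly stream bitrate × number of clients: with the 1.8 Mbps test stream and 500 clients, that gives 500 × 1.8 Mbps = 0.9 Gbps, which matches the measured mean in the Regular HLS table below. (Playlist traffic and HTTP overhead plausibly account for the small deviations at higher client counts.)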

Caveat

Values italicised in the Latency columns indicate instances where network bandwidth limits were reached, resulting in throttling; in such cases, the observed latency reflects the saturated link rather than the server's own processing capability. (The ~9.2 Gbps ceiling in the tables below is consistent with the 10 Gbps link between the machines minus protocol overhead.)

Regular HLS

| Client connections | Incoming throughput (mean) | Outgoing throughput (mean) | Outgoing throughput (peak) | Memory used (peak) | CPU utilisation (peak) | Playlist request latency (mean) | Segment request latency (mean) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 500 | 6 Mbps | 0.9 Gbps | 1.3 Gbps | 0.9 GB | 18% | 4 ms | 66 ms |
| 1000 | 12 Mbps | 1.9 Gbps | 2.6 Gbps | 1.9 GB | 19% | 3 ms | 65 ms |
| 2000 | 24 Mbps | 3.6 Gbps | 5.2 Gbps | 2.8 GB | 23% | 3 ms | 66 ms |
| 3000 | 36 Mbps | 5.1 Gbps | 6.5 Gbps | 4.5 GB | 30% | 4 ms | 67 ms |
| 4000 | 48 Mbps | 7.0 Gbps | 9.2 Gbps (limit) | 9.2 GB | 34% | *17 ms* | *243 ms* |

Low-Latency HLS

In Low-Latency HLS, the server intentionally delays its response to a playlist request until the specifically requested partial segment becomes available. This makes the "Playlist request latency (mean)" metric uninformative (it mostly measures the intentional hold), so it is omitted from the table below.
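
For reference, such a blocking playlist request uses the `_HLS_msn` and `_HLS_part` query directives defined by the LL-HLS specification. A minimal sketch of what each simulated client issues (the endpoint layout is illustrative):

```python
import asyncio

import aiohttp

async def blocking_playlist_reload(base_url: str, msn: int, part: int) -> str:
    """Request the playlist, blocking until a given partial segment exists.

    `_HLS_msn` and `_HLS_part` are the standard LL-HLS blocking-reload
    directives: the server holds the response until part `part` of media
    sequence `msn` has been produced. The URL scheme here is illustrative.
    """
    url = f"{base_url}/index.m3u8?_HLS_msn={msn}&_HLS_part={part}"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()

# E.g. wait until part 2 of segment 1337 is ready, then read the playlist:
playlist = asyncio.run(
    blocking_playlist_reload("http://machine-a:4000/hls/room_id", 1337, 2)
)
```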

| Client connections | Incoming throughput (mean) | Outgoing throughput (mean) | Outgoing throughput (peak) | Memory used (peak) | CPU utilisation (peak) | Partial segment request latency (mean) |
| --- | --- | --- | --- | --- | --- | --- |
| 500 | 25 Mbps | 1.1 Gbps | 4.7 Gbps | 210 MB | 25% | 98 ms |
| 1000 | 50 Mbps | 2.3 Gbps | 9.0 Gbps | 290 MB | 34% | 138 ms |
| 1500 | 75 Mbps | 3.3 Gbps | 9.2 Gbps (limit) | 340 MB | 43% | *244 ms* |
| 2000 | 100 Mbps | 4.5 Gbps | 9.2 Gbps (limit) | 410 MB | 47% | *341 ms* |
| 2500 | 125 Mbps | 5.7 Gbps | 9.2 Gbps (limit) | 480 MB | 48% | *402 ms* |
| 3000 | 150 Mbps | 6.6 Gbps | 9.2 Gbps (limit) | 560 MB | 50% | *514 ms* |