mirror of https://git.proxmox.com/git/proxmox, synced 2025-05-23 15:49:54 +00:00
I made some comparisons with bombardier[0]; the ones listed here are 30s
looped requests with two concurrent clients:

[ static download of ext-all.js ]:
  lvl        size              avg     / stdev    / max
  none       1.98 MiB  100 %   5.17ms  / 1.30ms   / 32.38ms
  fastest  813.14 KiB   42 %  20.53ms  / 2.85ms   / 58.71ms
  default  626.35 KiB   30 %  39.70ms  / 3.98ms   / 85.47ms

[ deterministic (pre-defined data), but real API call ]:
  lvl        size              avg     / stdev     / max
  none     129.09 KiB  100 %   2.70ms  / 471.58us / 26.93ms
  fastest   42.12 KiB   33 %   3.47ms  / 606.46us / 32.42ms
  default   34.82 KiB   27 %   4.28ms  / 737.99us / 33.75ms

The size reduction is noticeably better with the default level, but it is
also slower; that slowdown only shows up when testing over an unconstrained
network, though. For real-world scenarios where compression actually
matters, e.g., when using a spotty train connection, better compression
makes us faster again.

A GPRS-limited connection (Firefox developer console) gives the following
load times (until the DOMContentLoaded event fires):

  lvl       t          x faster
  none      9m 18.6s   x 1.0
  fastest   3m 20.0s   x 2.8
  default   2m 30.0s   x 3.7

So in the worst case, using slightly more CPU time on the server has a
tremendous effect on the client load time. A more realistic example,
limiting to "Good 2G", gives:

  none      1m  1.8s   x 1.0
  fastest      22.6s   x 2.7
  default      16.6s   x 3.7

16s is somewhat OK, >1m just isn't...

So, use the default level to ensure we get bearable load times on clients.
If we want to improve transmission size AND speed, we could always use an
in-memory cache; only a few MiB would be required for the compressible
static files we serve.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
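The size/speed tradeoff between the "fastest" and "default" levels above can be reproduced with any DEFLATE implementation; here is a minimal sketch using Python's stdlib `zlib` (level 1 roughly corresponding to "fastest", level 6 being the default), with a repetitive sample payload standing in for a static JS file. The payload and names are illustrative, not the actual server code or data.

```python
import time
import zlib

# Hypothetical repetitive payload standing in for something like ext-all.js.
data = b"Ext.define('PVE.SomeWidget', { extend: 'Ext.panel.Panel' });\n" * 20000

for name, level in (("none", 0), ("fastest", 1), ("default", 6)):
    # level 0 stores the data uncompressed (plus a small framing overhead).
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = (time.perf_counter() - start) * 1000
    ratio = 100 * len(out) / len(data)
    print(f"{name:8} {len(out):>9} B {ratio:6.1f} % {elapsed:7.2f} ms")
```

As in the benchmark tables, the default level should yield a smaller output than level 1 at the cost of more CPU time; the exact numbers depend on the input data and the machine.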