Probably an All-Time Record

Almost 20 years ago, back when this site was very young, it occasionally enjoyed the "Digg effect" (reaching the front page of Digg.com), bringing perhaps 50,000 visitors in short bursts. That put a lot of strain on WordPress (running on CentOS, with or without a Varnish cache). We have since developed and adopted a static site generator (SSG), which can likely cope just fine with over 10,000 requests per minute; that would hardly cause any CPU or RAM usage spikes, as no database is involved and the real bottleneck is network throughput, plus the overhead of interpreting requests prior to delivery.
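To illustrate the point (a minimal sketch, not our actual stack): a pre-built static site can be served by little more than a loop that reads files off disk, with no database anywhere in the request path. Here, using only Python's standard library and a hypothetical `public/` output directory:

```python
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# "public/" is a hypothetical output directory of the site generator;
# every request becomes a plain file read, so CPU and RAM stay flat.
handler = partial(SimpleHTTPRequestHandler, directory="public")

# Port 0 asks the OS for any free port; a real deployment would sit
# on 80/443 behind TLS termination.
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
print("serving on port", server.server_address[1])
# server.serve_forever()  # uncomment to actually serve requests
```

There is no templating, no query, no interpreter work per page at request time; the cost of each hit is roughly the cost of shipping bytes over the wire.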
This means we must focus on light pages, a fast network, and low latency. We won't ever outsource to centralised CDNs such as Clownflare!
As noted earlier, "[y]esterday our server served over 5 million Web requests." There were no slowdowns; we coped just fine. Our investment in our own SSG is paying off.
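For scale, a back-of-envelope calculation (assuming, unrealistically, that traffic were spread evenly across the day) shows what that daily total means as a rate:

```python
# Back-of-envelope: 5 million requests over a single day.
requests = 5_000_000
seconds_per_day = 24 * 60 * 60           # 86,400 seconds
per_second = requests / seconds_per_day   # about 57.9 requests/second
per_minute = per_second * 60              # about 3,472 requests/minute
print(f"{per_second:.1f} req/s, {per_minute:.0f} req/min")
```

Averages hide bursts, of course, but it shows why a daily total like that is, on its own, no strain for a static site.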
Despite soaring memory prices, the mindset of today's developers is "throw more RAM at it", "add more virtual CPUs", or "add more servers" (e.g. Kubernetes), rather than making the programs (or the topology) leaner. Changing code is "too much work", so "let's just buy more hardware" or pay for more "clown" computers/compute capacity...
That is the wrong approach, and not just because it endangers our planet (pollution, water depletion, e-waste and so on). █
