With the days of dial-up and pitiful 2G data connections long behind most of us, it would seem tempting to stop caring about how much data an end-user is expected to suck down that big and wide bro…
This very much bothers me as a web developer. I go hard on Conditional GET request support and compression, as well as using HTTP/2+. I’m tired of using websites (outside of work) that need to load a fuckton of assets (even after I block 99% of advertising and tracking domains).
macOS and iOS actually allow updates to be cached locally on the network, and if I remember correctly Windows has some sort of peer-to-peer mechanism for updates too (I can’t remember if that works over the LAN though; I don’t use Windows).
The part I struggle with is caching HTTP. It used to be easy pre-HTTPS but now it’s practically impossible. Though I think other kinds of apps do a poor job of caching things too.
Yes, Windows peer-to-peer update downloads work over LAN. (In theory, I’ve never verified it.)
HTTP caching still works fine if your proxy performs SSL termination and re-encryption. In an enterprise environment that’s fine; for individuals it’s a non-starter. In that case, you’d want a local CDN mirror instead.
I couldn’t get SSL bumping working in Squid on Alpine Linux about a year ago, but I’m willing to give it another shot.
My home router is also a mini PC on Alpine Linux. I do transparent caching of plain HTTP (it’s minimal but it works), but with others using the router I feel uneasy about SSL bumping, not to mention that some apps (banking apps in particular) are a lot stricter about detecting it.
Yeah, you’ll have to have a bypass list for some sites.
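In Squid, that bypass list looks roughly like the following. A sketch assuming Squid 4+ built with SSL bumping support; the bank domains are placeholders:

```
# Peek at the TLS ClientHello (step 1) to learn the server name,
# then splice (pass through untouched) connections to strict sites
# such as banks, and bump (intercept and cache) everything else.
acl step1 at_step SslBump1
acl nobump ssl::server_name .examplebank.com .anotherbank.ch
ssl_bump peek step1
ssl_bump splice nobump
ssl_bump bump all
```

Spliced connections are never decrypted, so certificate pinning in those apps keeps working.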
Honestly, unless you’re actually on a very limited connection, you probably won’t see any actual value from it. Even if you do cache everything, each site hosts their own copy of jQuery or whatever the kids use these days, and your proxy isn’t going to cache that any better than the client already does.
Even if you do cache everything, each site hosts their own copy of jQuery or whatever the kids use these days, and your proxy isn’t going to cache that any better than the client already does.
Don’t they always have a short cache timeout? The proxy could just tell the client that the cache timeout is a long time, and when the browser checks whether it’s really up to date, the proxy would redownload the asset upstream but just return the right status code if it actually didn’t change.
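That revalidating-proxy idea can be sketched as follows. Everything here is hypothetical (the `cache` dict and the `fetch(url, etag)` upstream call returning `(status, etag, body)` are stand-ins, not a real proxy API):

```python
def revalidate(cache, url, fetch):
    """Serve from cache, re-checking upstream with the stored validator.

    cache: dict mapping url -> {"etag": ..., "body": ...}.
    fetch(url, etag): hypothetical upstream request sending
    If-None-Match when etag is not None; returns (status, etag, body),
    where status 304 means the asset is unchanged.
    """
    entry = cache.get(url)
    status, etag, body = fetch(url, entry["etag"] if entry else None)
    if status == 304:
        # Unchanged upstream: serve the cached copy, no body transferred.
        return entry["body"]
    cache[url] = {"etag": etag, "body": body}
    return body
```

The client sees a fresh response either way; only the proxy pays the cheap 304 round-trip.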
And all the jQuery copies could also be eliminated with a filesystem that can do deduplication, even if just periodically. I think XFS and Btrfs can do that with reflink copies, and rmlint helps there.
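A quick demo of the reflink approach. This assumes GNU coreutils; with `--reflink=auto`, `cp` shares blocks on filesystems that support it (XFS, Btrfs) and silently falls back to a plain copy elsewhere:

```shell
# Create a stand-in "duplicate asset" and reflink-copy it.
printf 'console.log("jquery stand-in")\n' > jquery-a.js
cp --reflink=auto jquery-a.js jquery-b.js
cmp jquery-a.js jquery-b.js && echo "identical"
# For deduplicating copies that already exist, rmlint can emit
# clone operations for the duplicates it finds, e.g.:
#   rmlint -c sh:clone .
```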
For my personal setup I’ve been wanting to do it on a VPS I have. I route my traffic through a bundle of VPNs from the US to Switzerland, and since I’m a web developer testing JavaScript, I end up needing to clear the browser cache on my end devices often.
each site hosts their own copy of jQuery or whatever the kids use these days
I do this in my projects (Hotwire), but I wish I could say the same for other websites. I still run into websites that break because they try to load jQuery from Google’s CDN, for example. This would be another nice thing to have cached.