If you’re like me – you live in the middle of nowhere, in New Zealand, without fiber let alone any DSL, and you develop a lot of software using docker – you might find your build/deploy cycle a bit painful, since many docker builds download packages.
Even if you are an unusual case – you live in an urban area with good espresso on every corner, scarce parking, and fiber to the couch (complete with 5G) – retrieving lots of packages from disk/SSD is still faster than fiber.
Running a docker pull through a cache can help if you have a few machines pulling the same images. However, you sometimes still need to actually build those images locally, and the build process can involve a lot of apt-get’ing or apk add’ing.
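For the pull-through-cache side, a sketch of what that can look like — the port and container name here are arbitrary choices of mine, but the proxy mode itself is a standard feature of the registry image:

```shell
# Run the official registry image as a pull-through cache for Docker Hub.
docker run -d --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
```

You'd then point the docker daemon at it with a "registry-mirrors" entry in /etc/docker/daemon.json.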
To help with package retrieval over HTTP, you can run apt-cacher-ng. That’s great if you have a way to convince your docker build to use the cache (e.g. via an environment variable). However, it’d be even nicer not to have to do anything special.
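If you do go the explicit route, the usual trick is docker’s predefined proxy build args — a sketch, where the bridge-gateway address 172.17.0.1 and port 3142 are assumptions about where your apt-cacher-ng is listening:

```shell
# http_proxy is one of docker's predefined build args, so no ARG line is
# needed in the Dockerfile; apt-get inside the build will honour it.
docker build --build-arg http_proxy=http://172.17.0.1:3142 -t myimage .
```

The point of the setup below is to avoid exactly this kind of per-build ceremony.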
The missing piece in the puzzle is a transparent HTTP proxy (like squid) that knows how to redirect requests to apt-cacher-ng. This is what I do, and it makes docker builds really fast (at least, the package retrieval part). Using a relatively old squid redirector, jesred, squid intercepts common package retrieval URLs and passes them to apt-cacher-ng.
Here’s part of my /etc/jesred.conf:
regex ^http://((.*)archive.ubuntu.com/ubuntu/(dists|pool)/.*)$ http://localhost:3142/\1
regex ^http://(security.ubuntu.com/ubuntu/(dists|pool)/.*)$ http://localhost:3142/\1
regex ^http://(dl-cdn.alpinelinux.org/alpine.+)$ http://localhost:3142/\1
regex ^http://(.*cdn.*.debian.org/.+)$ http://localhost:3142/\1
regex ^http://(deb.debian.org/.+)$ http://localhost:3142/\1
regex ^http://(archive.raspberrypi.org/debian/.+)$ http://localhost:3142/\1
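To see what a rule does, here’s a quick sketch (not part of jesred itself) applying the first rewrite with sed — the example URL and the nz. mirror prefix are made up for illustration:

```shell
# Apply the first jesred rule to a sample Ubuntu pool URL.
# \1 captures everything after http://, which apt-cacher-ng expects
# appended to its own localhost:3142 base URL.
echo 'http://nz.archive.ubuntu.com/ubuntu/pool/main/c/curl/curl.deb' \
  | sed -E 's@^http://((.*)archive\.ubuntu\.com/ubuntu/(dists|pool)/.*)$@http://localhost:3142/\1@'
```

The original host stays in the path, so apt-cacher-ng still knows where to fetch a cache miss from.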
Of course, your squid installation has to be set up to do transparent caching on port 80, and has to reference jesred in /etc/squid/squid.conf: