John Roman
2018-07-08 00:59:16 UTC
Greetings,
I wish to discuss formally changing wget's default retry count from 20
to something more pragmatic, such as two or three.
While 20 retries may have been the correct default many years ago, it
seems like overkill for the modern "cloud-based" internet, where most
sites are backed by one or more load balancers. Geolocatable A records
further reduce the need for retries by providing a second or third
option for clients to try. To a lesser extent, GTM and GSLB
technologies (however maligned they may be) are also sufficient to
handle failures for significant amounts of traffic. BGP routing at
large hosting providers has further reduced the need to perform
several retries against a site. Finally, for better or worse,
environments such as Kubernetes and other container orchestration
tools seem to afford sites unlimited uptime, if the marketing is to be
trusted.
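For context, users who agree can already lower the value themselves
today; a minimal sketch of the two usual ways follows, using the
existing --tries option and the tries wgetrc command (the value 2 is
only an illustration of the proposed default, not current behavior):

    # one-off override on the command line
    wget --tries=2 https://example.com/file.tar.gz

    # persistent override in ~/.wgetrc
    tries = 2

The proposal here is simply to make a value in that range the
out-of-the-box default rather than something each user must configure.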