It's a question of reproducibility. When your build system relies on binary artifacts retrieved from external upstream servers you have no control over (which is the modus operandi of Maven, Ivy and similar tools), what will you do when you need to release an urgent update to your software, but the upstream servers are down for some reason? Or suppose your system has been completely obliterated because your colocation facility suffered a major failure, and as luck would have it, Apache hosts their main servers in the same colo. Would you be willing to put your rebuild on hold until your upstream (who owes you no SLA) rebuilds their systems?
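If you do run Maven, you can at least force every artifact request through a repository you control instead of reaching out to upstream servers directly. A minimal sketch of that in `settings.xml`, where `repo.internal.example` is a placeholder for your own internal mirror:

```xml
<!-- ~/.m2/settings.xml: route ALL artifact resolution through an
     internal mirror you control. "repo.internal.example" is a
     placeholder host, not a real repository. -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <mirrorOf>*</mirrorOf>
      <url>https://repo.internal.example/maven2</url>
    </mirror>
  </mirrors>
</settings>
```

Combined with `mvn -o` (offline mode) against a fully populated local repository, this at least proves that a build never silently reaches out to Maven Central. It doesn't solve the underlying problem, though: your mirror still has to contain the right binaries.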

I believe living like this is insane, and if your DR plan includes "download these binaries from this non-contracted third party" then you should be fired.

The only question that remains is how you handle that. I guess some people use an artifact cache (such as Artifactory) and pray that it contains the correct binaries when push comes to shove. I prefer the certainty of knowing that I can build my entire production system from source, with 100% identical artifacts, by having SRPMs stored and backed up for anything beyond my base OS (whose provider I do have an SLA with).
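To make that concrete, here is a minimal command sketch of the SRPM approach. `/backup/srpms` is a hypothetical path standing in for wherever your backed-up source RPMs live:

```shell
# Rebuild every backed-up source RPM into binary RPMs.
# /backup/srpms is a placeholder for your SRPM backup location.
for srpm in /backup/srpms/*.src.rpm; do
    rpmbuild --rebuild "$srpm"
done

# The resulting binaries land under ~/rpmbuild/RPMS/ and can then be
# served from your own yum/dnf repository (e.g. built with createrepo),
# so provisioning never depends on a third party being reachable.
```

The point of the exercise: every input to this loop is something you store and back up yourself, so the rebuild depends on nothing but your base OS and your own backups.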

You have to remember: your DR plan is only as strong as the weakest SLA it relies on. If that is a $100 rebate from AWS, then the SLA you offer your customers can guarantee no more than that. And if your DR plan relies on non-contracted third parties who have no obligation to you, then the only SLA you can reliably guarantee is "we'll do our best not to lose your data." Good luck selling that to your CEO.