[ale] So the winner is?

Solomon Peachy pizza at shaftnet.org
Fri May 21 10:28:19 EDT 2021


On Thu, May 20, 2021 at 07:23:34PM -0400, Leam Hall via Ale wrote:
> I agree, but for a large number of places, not having infrastructure 
> bills is a better financial decision. 

So what is "the bill from AWS" if not an "infrastructure bill"?

Unless by "infrastructure" you really mean "capitalized expenditures" 
(as opposed to "expensing").  The former is certainly worse in the short 
term, but is (usually much!) cheaper in the long run.
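
Hypothetical round numbers, purely to illustrate that capex-vs-opex 
shape (not a quote for anything real):

    buy:  $6,000 server amortized over 60 months  ~= $100/month
    rent: $300/month instance x 60 months          = $18,000

The real comparison obviously has to include power, space, and the 
staff time discussed below, but the long-run trade-off usually looks 
like that.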

> It is also better supported, and can be a more secure and reliable 
> option. 

Sure, but it also opens you up to a lot of attacks that aren't possible 
if it's completely in-house.

> Most workloads aren't on bare metal. But if you're not running 
> on bare metal, then you're already on someone's virtualization. With 
> AWS/GCE/DO, you're running on someone's virtualization. With a VPS 
> hosted in a datacenter, you're running on someone's virtualization.

Don't forget that "someone's" can be "yours".

Look, my point isn't that "VMs/containers BAD" -- just that using 
VMs/etc. doesn't free you from the burden of knowing what's going on 
under the hood.  Someone still has to create and maintain those 
customized VMs/containers.  Someone still has to babysit them and debug 
stuff that goes wrong -- and that still requires "Linux" skills.
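
For instance (a generic sketch -- the container and service names here 
are hypothetical), the moment a containerized service misbehaves you 
are right back to ordinary Linux tooling:

    docker ps --filter status=exited   # did it die?
    docker logs --tail 100 mysvc       # what did it say on the way down?
    docker exec -it mysvc /bin/sh      # poke around inside it
    df -h; free -m; ss -tlnp           # the usual disk/memory/port suspects

None of that is exotic, but somebody on staff has to know how to do it.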

At larger organizations, random developers have never had to worry 
about this stuff directly; there's always been a separate team to set up 
and maintain the development (and production) environments.  Whether the 
stuff runs on bare metal, in a VM, or in a container is immaterial.  The 
"stuff" still needs setup, monitoring, and other general 
tracking/administration.

CentOS is still CentOS, whether it's running on local bare metal or on 
AWS.  You still need to maintain complete BOMs, whether the "platform" 
is "Ubuntu 18.04 plus X packages" or "AWS Python 3.6 plus Y packages", 
and whether the system is specified in a kickstart script, Docker 
recipe, Travis YAML, or (gack) 'sudo curl http://example.com/setup.sh | bash'.
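
One low-tech way to keep that BOM honest, whatever the provisioning 
mechanism actually is (just a sketch -- file names are arbitrary, and 
you'd use whichever lines match your distro and runtime):

    dpkg -l     > bom-$(date +%F).txt   # Debian/Ubuntu package list
    rpm -qa     > bom-$(date +%F).txt   # CentOS/RHEL package list
    pip freeze >> bom-$(date +%F).txt   # plus the language runtime's deps

The point is that the list has to exist and be kept current somewhere, 
regardless of where the bits end up running.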

You still have to design your system for fault tolerance/resilience 
should something crash or otherwise go wrong.  High-availability 
designs are inherently more complex (especially when data consistency 
is critical), and that requires additional skills to set up and 
maintain, whether the individual moving parts are bare metal or 
virtualized.
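
Even the crudest form of that babysitting has to be written and 
maintained by someone.  A sketch (hypothetical service name and health 
endpoint), which is still a long way short of a real HA design:

    # restart the service if its health check stops answering
    while true; do
        if ! curl -fsS --max-time 5 http://localhost:8080/healthz >/dev/null; then
            systemctl restart mysvc    # or docker restart, or page a human
        fi
        sleep 30
    done

Real HA -- replication, quorum, failover that doesn't eat writes -- is 
considerably more involved than that, which is exactly the point.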

Incidentally, "it's not worth the complexity/cost" can easily be the 
correct decision.

> Are you building your own DC, or are you trusting the third party to 
> hire people who can swap drives and replace cables? You're 

There's a ginormous gap between "a bare metal server" and "your own 
datacenter", and it's the wrong metric besides.

VMs decouple the software lifecycle from the hardware lifecycle.  That 
brings a _lot_ of advantages, but also some disadvantages, mostly in 
administrative overhead.  Going to a "cloud" model brings more of both, 
but you have to operate at a certain minimum scale before the pluses 
outweigh the minuses.  Not every application has to exist at Google scale.

Incidentally, once you scale high enough, pulling everything back 
in-house makes more sense again; it's why Google and Facebook design 
their own servers, network gear, and datacenter infrastructure, and 
it's also why they're so active in upstream Linux and various other 
core plumbing software layers.  If someone at Google improves 
operational efficiency/performance by 0.1%, it'll cover their 
(substantial) yearly salary several times over.

Anyway.

 - Solomon
-- 
Solomon Peachy			      pizza at shaftnet dot org (email&xmpp)
                                      @pizza:shaftnet dot org   (matrix)
High Springs, FL                      speachy (freenode)