[ale] [ALE] So the winner is?

Jim Kinney jim.kinney at gmail.com
Thu May 20 17:24:03 EDT 2021


I think you're hinting at the issue: people (beancounters in particular) seem pretty OK trading security away for cost savings.

For the startup with $0, by-the-hour/job/transaction hosting makes sense. But there's a point where some things come back local, and that's the same discussion companies had 30 years ago when they stopped leasing computer time, bought servers, and built in-house data centers.

Moving to leased equipment, aka cloud, has costs and advantages. I moved ale and my other web hosting to a dedicated machine I lease because it was WAY cheaper than running it in-house. But I still have to provide local backup for when it blows up (which I still need to finish configuring and testing, sigh).

The thing I see with cloud is that it allows a concentration of lower-skill, lower-paid people to run the gear. The rack access team doesn't need to know anything other than cable plus socket equals food. Anyone ever sat on a call with AWS ParallelCluster support? Even at third tier they were basically useless.

It's really cool stuff. But having a crapload of it doesn't make it scale; it just scales the problems.

HA will always be a beast in all ways: technical, network, financial, etc. But really, does everything NEED 5 9s? Could, say, Slashdot drop offline for a day or two without that resulting in loss of life?
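To put rough numbers on that (back-of-envelope arithmetic on my part, nothing vendor-specific):

    # Rough downtime budget per year for N nines of availability.
    # Back-of-envelope only.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines in (2, 3, 4, 5):
        availability = 1 - 10 ** -nines        # e.g. 5 nines -> 0.99999
        downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
        print(f"{nines} nines: ~{downtime_minutes:,.1f} minutes of downtime per year")

Five nines is roughly five minutes a year; three nines is most of a working day. A lot of workloads can live with the cheap end of that table.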

There's a cost to large stuff that doesn't bear out in an obvious manner. A big datacenter uses big power and must be kept full to justify its cost. Acres of former farmland are bulldozed for instant retrieval of cousin Ethyl's pot-bellied pig pictures, the NSA's recording of everything, plus Google, Facebook, and everyone who never deletes files. Cooling those monsters is a problem too - ask Belgium about the lack of living things downstream of a Google datacenter due to hot river water.

And then there's the loss of farmland. I like food. Especially barley-based liquids.

Unicorn farts power the white papers touting CLOUD CLOUD CLOUD, the same way they once touted that tobacco was harmless and that burning oil could continue forever with no problems.

New shit. New assholes. Same stench. In 10-15 years the push to bring it all back home will start. Too bad there will only be a few dozen people left who know how to plug in a machine by then.

Old habits die hard. So I've stopped being a nun.

Existential rant over. Gotta go convert ideas into heat.

On May 20, 2021 3:43:16 PM EDT, Leam Hall via Ale <ale at ale.org> wrote:
>I'm an old guy, and I'm happy to face reality. Don't get me wrong; I'm
>not saying it's all fluffy unicorn farts. But there are a few issues
>that drive this.
>
>1. People don't care about security enough to pay for it.
>
>People still shop at Target, Experian is still in business, banks still
>offer on-line banking, and most people still have credit cards. Either
>accept that you value convenience more than security, or make some
>drastic life changes.
>
>
>2. Abstraction and virtualization are mandatory.
>
>By count, most Linux machines either run on a virtual host (KVM,
>Docker, AWS images, VMware) or are highly controlled and blocked off
>(Android). Yes, Jim and his HPC toys are out there, but they are the
>exception. Most of us don't get to play with a Cray. Even with Linux on
>bare metal, udev/HAL tries to abstract the hardware so applications
>don't have to have device drivers embedded. So there are at least a few
>layers of abstraction between the user and the metal.
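(A quick way to check which layer a given box is actually sitting on. Just a sketch, assuming a systemd-based distro with systemd-detect-virt installed:)

    # Report whether this host is bare metal or running under a hypervisor
    # or container runtime. Sketch only; assumes systemd-detect-virt exists.
    import subprocess

    result = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    virt = result.stdout.strip()

    # systemd-detect-virt prints "none" on bare metal, otherwise kvm,
    # vmware, docker, lxc, and so on.
    print("bare metal" if virt == "none" else f"virtualized: {virt}")

Run across a fleet, that check tends to back up the "by count" claim above.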
>
>
>3. Economics pays.
>
>Servers turn money into heat unless you have an application running.
>Let's use the standard 3-tier app: database, middleware, and web
>server. For security, each of those needs to be a separate server. If
>you want bare metal, you're talking three servers. But that means you
>have three single points of failure unless you double the server count
>and make your application highly available. Now you need someone with
>OS skills and a few years of experience; HA doesn't come cheap. Don't
>forget the network engineer for your firewalls, routers, and switches.
>You also need a management server (Ansible) unless you're going to
>build and maintain all these snowflakes by hand, so you're up to 7
>physical servers, one firewall, and a couple of network devices. You
>probably want a NAS for drive storage and a backup server for, well,
>backups. More hardware. Sadly, most physical boxes run at only 5-10%
>utilization. So you have an RHCE-level person, a CCNA-level person,
>and you're probably at a dozen physical devices and a quarter mil per
>year for salary and benefits. Until you realize that being one deep
>puts you at risk, so you get two of each. That doesn't even count your
>developer staff; this is just infrastructure.
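(To make that arithmetic concrete, a toy tally of the build-out described above. Every count and dollar figure is illustrative, not a quote:)

    # Toy tally of the on-prem 3-tier build-out sketched above.
    # All counts and dollar figures are illustrative.
    app_servers    = 3 * 2     # db, middleware, web servers, doubled for HA
    mgmt_servers   = 1         # Ansible control node
    storage_backup = 2         # NAS plus a backup server
    network_gear   = 3         # firewall plus a couple of switches/routers

    devices = app_servers + mgmt_servers + storage_backup + network_gear

    pair_cost  = 250_000       # RHCE-ish plus CCNA-ish pair, salary and benefits
    staff_cost = pair_cost * 2 # two deep on each role

    print(f"{devices} physical devices to rack, cable, patch, and monitor")
    print(f"~${staff_cost:,} per year in infrastructure staffing, before any developers")

Swap in your own numbers; the point is the device count and the staffing, not the exact dollars.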
>
>
>Or...
>
>Let your dev staff use AWS Lambda, S3, and DynamoDB. Be able to build
>from a dev's workstation, and set up for deploying to a second
>availability zone for high availability. You'd need one or two AWS
>cloud people, so your infrastructure staffing costs are cut in half.
>You don't have to rack and stack servers, nor trace and replace network
>cables at 0300. If you really want an OS underneath, for comfort or
>because you haven't coded your application to be serverless, you can
>use EC2 and right-scale your nodes. That also means your staff can work
>from about anywhere that has a decent internet connection, and if your
>building loses power, your application doesn't.
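(For anyone who hasn't touched Lambda, the whole "server" for one small piece of that can be roughly this much code. A sketch only: the table name, field names, and event shape are made up for illustration:)

    # Minimal sketch of a Lambda function writing to DynamoDB.
    # "example-orders" and the field names are placeholders.
    import json
    from decimal import Decimal

    import boto3

    table = boto3.resource("dynamodb").Table("example-orders")

    def lambda_handler(event, context):
        # With an API Gateway proxy integration, the POST body arrives as a string.
        order = json.loads(event["body"])
        table.put_item(Item={
            "order_id": str(order["id"]),
            "total": Decimal(str(order["total"])),  # DynamoDB wants Decimal, not float
        })
        return {"statusCode": 200, "body": json.dumps({"ok": True})}

There's still IAM, API Gateway, and deployment configuration around it, but no OS underneath to patch or harden.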
>
>I know AWS has external security audits, and you can inherit their
>controls for your artifacts. AWS security is enough for the US DoD, so
>likely more than sufficient for most other use cases. I do not know
>much about Digital Ocean or Google Compute, but my bet is they are
>working to get a share of that same market.
>
>
>4. The real driver for serverless/microarchitecture/containers.
>
>It's not about circumventing security (though some devs do that), nor
>is it about always running as root (again, for smart devs, this ain't
>it). It is about reducing complexity. The fewer moving parts an
>application host has, the less change the development team has to code
>around. I just checked three Linux nodes, and they have 808, 527, and
>767 packages, respectively. With an AWS Lambda based application, I
>pick the runtime (Python 3.8, Node.js 14, etc), add just the packages
>my app specifically needs, and then test that. In truth, the reduced
>package footprint can increase security. Nor do I have to wait for Red
>Hat or Oracle to package the version of an application I need; I can do
>that myself. Yes, it means I need to be aware of where that code comes
>from, but that's not an infrastructure issue. Devs have to do that in
>the cloud or on metal.
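(A quick way to see the footprint gap being described. Sketch only; it assumes an RPM-based node and a hypothetical requirements.txt sitting next to the function code:)

    # Compare a general-purpose node's package count to a Lambda bundle's
    # dependency list. Assumes an RPM-based host and a hypothetical
    # requirements.txt for the function.
    import subprocess
    from pathlib import Path

    host = subprocess.run(["rpm", "-qa"], capture_output=True, text=True)
    host_count = len(host.stdout.splitlines())

    deps = [line for line in Path("requirements.txt").read_text().splitlines()
            if line.strip() and not line.startswith("#")]

    print(f"{host_count} packages on the full OS image")
    print(f"{len(deps)} dependencies shipped alongside the function")

The difference usually comes out to hundreds of packages versus a handful of libraries.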
>
>
>5. In the end, success matters.
>
>I've been the hardware, OS, datacenter, and network person; I
>understand the basics of how these things work. AWS and similar are
>changing what we're used to. I find some of it uncomfortable, but I
>want to pay the bills. I'll change my habits so my family is provided
>for.
>
>
>Leam
>
>
>
>
>On 5/20/21 9:03 AM, DJ-Pfulio via Ale wrote:
>> Common sense isn't nearly as common as we all think.
>> 
>> I recall, vaguely, thinking all the "old guys" were just afraid of
>> the great new tech too.  Now I know better.
>> 
>> 
>> On 5/19/21 9:53 PM, Allen Beddingfield via Ale wrote:
>>> I remember being at an event several years back, where a group of
>>> 20-something web hipsters were doing a session on how they had
>>> replaced the legacy client/server setup at a corporation with some
>>> overly complicated in-house-built thing mixing all sorts of web
>>> technologies and DBs in containers running at a cloud provider.
>>> They were very detailed about their decision to put it in
>>> containers, because all the infrastructure people at that company
>>> were so behind the times with all their security models, insisting
>>> on not running things as root, firewalls, blah, blah...
>>> Quite a few people left shaking their heads at that point.  I was
>>> sitting next to a guy FROM a major cloud hosting provider, who
>>> almost choked on his coffee while laughing when one of them said
>>> that "It is just a matter of time before Dell and HP are out of the
>>> server business - no one needs their servers anymore!  Everything
>>> will be running in the cloud, instead!"
>>>
>>> I still argue that the main motivating force behind containers is
>>> that developers want an easy way to circumvent basic security
>>> practices, sane version control practices, and change control
>>> processes.  There are plenty of valid use cases for them, but
>>> sadly, that is the one actually driving things.  We have a whole
>>> generation of developers who weren't taught to work within the
>>> confines of the system presented to them.
>>> No one ever prepared them for enterprise IT.  Now we have heaven
>>> knows what software, running heaven knows what version, in some
>>> container that developers can put online and take offline at will.
>>> Who audited that random base Docker image they started with?  Are
>>> patches applied to what is running in there?  Is it secretly
>>> shipping off sensitive data somewhere?  Who knows.  Unless you
>>> defeat the whole purpose of a container, you don't have any agents
>>> on the thing to give you that data.
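(You can pull at least a little provenance out of an image without an agent. A sketch; the image name is a placeholder:)

    # Ask Docker for basic provenance data about an image, no agent needed.
    # Sketch only; "example/webapp:latest" is a placeholder image name.
    import json
    import subprocess

    image = "example/webapp:latest"
    raw = subprocess.run(["docker", "image", "inspect", image],
                         capture_output=True, text=True, check=True)
    info = json.loads(raw.stdout)[0]

    print("Image ID:", info["Id"])
    print("Built on:", info["Created"])          # a stale date hints at unpatched layers
    print("Digests: ", info.get("RepoDigests"))  # pin and verify against these, not a tag

It doesn't tell you who audited the base image, but it's a start on knowing how old the thing is and exactly what you're running.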
>>>
>>> Next, I'm going to go outside and yell at people to get off my
>>> lawn . . .
>>>
>>> Allen B.
>
>
>-- 
>Site Reliability Engineer  (reuel.net/resume)
>Scribe: The Domici War     (domiciwar.net)
>General Ne'er-do-well      (github.com/LeamHall)

-- 
Computers amplify human error
Super computers are really cool