[ale] Containers... use?

Jerald Sheets questy at gmail.com
Mon Sep 18 09:18:46 EDT 2017


> On Sep 16, 2017, at 10:21 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
> 
> From a sysadmin perspective, containers make it far too easy to bypass all security protocols. Until it's live, it's a binary blob waiting to suck in code from unknown sources and send information to unknown locations. Virtual machine security is better and more understood than containers.

You host your own hub.  That’s the answer.  We’re prevented from “reaching out” to the ‘net for anything at all.  I’ve built my own container registries internally, and only pull images *I* have rolled from there.  I never touch DockerHub.
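For the curious, standing up that internal registry is a one-liner with the stock `registry:2` image. A minimal sketch — `registry.internal:5000` is a placeholder hostname, and in production you'd front it with TLS and auth:

```shell
# Run the open-source Docker registry on an internal host (hypothetical name).
docker run -d --restart=always -p 5000:5000 --name registry registry:2

# On a trusted build host: retag an image you rolled yourself and push it inside.
docker tag baseimage:1.0 registry.internal:5000/baseimage:1.0
docker push registry.internal:5000/baseimage:1.0

# Everyone else pulls ONLY from the internal registry, never DockerHub.
docker pull registry.internal:5000/baseimage:1.0
```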


> 
> Until I can get a SHA256 signed docker container with sig I trust, I can't allow them to touch my storage cluster.

Again, some setup is necessary, but you can completely lock it down to your own internal resources.  This is a non-issue.
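Signed images are already doable today with Docker Content Trust (Notary), which refuses to pull unsigned tags. A rough sketch — the Notary server URL here is a hypothetical internal deployment:

```shell
# Enable Docker Content Trust: pulls and pushes now require valid signatures.
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://notary.internal:4443  # hypothetical internal Notary

# With DCT on, this pull fails outright unless the tag carries trust data
# signed with keys you control.
docker pull registry.internal:5000/baseimage:1.0
```

Pair that with your own registry and you never run an image whose provenance you can't verify.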

> 
> How do containers get updated for security patches? They don't. Toss it and rebuild.

You do it.  Don’t rely on Docker or the community.  Roll your own images (just like folks who use custom AMIs) and maintain full control of “all the things”.
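"Roll your own" in practice means your base images apply security errata at build time, so a rebuild *is* the patch cycle. A sketch of such a Dockerfile — the internal base image name and label values are placeholders:

```dockerfile
# Build our own base image from an internally-curated parent (hypothetical name).
FROM registry.internal:5000/centos-base:7.4

# Apply security updates at build time; rebuilding the image == patching it.
RUN yum -y update --security && yum clean all

# Label the build so Systems can trace exactly what is running where.
LABEL build-date="2017-09-18" maintainer="systems@example.com"
```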

> That sets up a churn of install new containers which will in time dull the build process security focus.


Which is why we automate.  I personally use Puppet, as that is my SME domain, but I’ve seen workflows for both Chef and Ansible.  Also a non-issue.


> Time passes and a mission critical process is running on a gaping security hole that can't be patched because the F+@$ing developer who built it got a better job offer and left.

All containers should be curated by Systems.  The Developers should submit them for security scanning, or you should employ a DevSecOps model for deployment.  That is, federate security scanning by providing OS, app, transport, penetration, and network security testing as APIs that devs can leverage, instead of leaving it all to the security team.  Left to their own devices, with unreasonable deploy timelines and golf-playing pointy-hairs demanding unreasonable ship dates, it'll never happen.

This should all be automated as part of a security CI/CD pipeline, and without a "pass" from the security stage, nothing can ever be deployed into production.  This is how we do it.
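The pipeline gate described above can be sketched in a GitLab-CI-style config. This is illustrative only — the scanner, registry name, and deploy script are placeholders (clair-scanner is one open-source image scanner; any tool that exits non-zero on findings gates the pipeline the same way):

```yaml
# Hypothetical CI pipeline: deploy never runs unless the scan stage passes.
stages:
  - build
  - scan
  - deploy

build_image:
  stage: build
  script:
    - docker build -t registry.internal:5000/app:$CI_COMMIT_SHA .
    - docker push registry.internal:5000/app:$CI_COMMIT_SHA

security_scan:
  stage: scan
  script:
    # A non-zero exit here fails the stage and blocks everything downstream.
    - clair-scanner registry.internal:5000/app:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - ./deploy.sh registry.internal:5000/app:$CI_COMMIT_SHA
  when: on_success   # never reached if the scan stage fails
```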


> Developers don't have the responsibility for the integrity of the system, network, environment. Just their code. The sysadmin is on the hook for that blob of festering code rot that lets <fill in a cracking team name here> gain root in a container attached to a few TB of patient/banking/insurance/ANYTHING data and suddenly the sysadmin makes headline news .
> 

Which doesn’t really happen in containerized applications.  ESPECIALLY if you’re orchestrating them properly, and container curation lives where it belongs: with Systems and Security.

FUD doesn’t play well here, and this smacks of FUD to me.

Not to call you out, Jim.  :D


The real issue is that automation should be a core component of Security, Operations, QA, Development, AND Deployment.  None of this crap should be touched by human hands any more.  That’s how you end up with an Equifax website using admin:admin as its credentials, hence this morning’s news.


—jms




