[ale] Alright, it's time to move on from Linode

Justin Caratzas bigjust at lambdaphil.es
Sun Jan 10 15:02:43 EST 2016


I've done some interesting work w/ Ansible + Buildbot -> (Vagrant/AWS). I
wouldn't mind talking about the setup.
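Roughly, one way to wire something like that together is a Buildbot 0.8.x
ShellCommand step that runs an Ansible playbook against whatever inventory
the build targets; the playbook and inventory paths below are hypothetical:

    from buildbot.process.factory import BuildFactory
    from buildbot.steps.shell import ShellCommand

    factory = BuildFactory()

    # Build step that applies the playbook to the target inventory
    # (a local Vagrant box or AWS hosts, depending on the build).
    factory.addStep(ShellCommand(
        name="ansible-deploy",
        command=["ansible-playbook", "-i", "inventory/aws", "site.yml"],
        description="running ansible",
        descriptionDone="ansible"))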

On 1/9/16 8:30 AM, Jim Kinney wrote:
> What are the chances of someone doing a talk on integrating cloud and local
> services? March is open.
> On Jan 8, 2016 11:20 PM, "Jeremy T. Bouse" <jeremy.bouse at undergrid.net>
> wrote:
>
>> On 1/8/2016 7:34 PM, Justin Caratzas wrote:
>>> On 1/8/16 7:23 PM, Jeremy T. Bouse wrote:
>>>> On 1/8/2016 5:39 PM, James Sumners wrote:
>>>>> On Fri, Jan 8, 2016 at 1:13 PM, chip <chip.gwyn at gmail.com> wrote:
>>>>>
>>>>>     Take a look at Vultr.com; you can do it there.  They have hosting in
>>>>>     Atlanta too.  They're basically the economy Choopa stuff.
>>>>>
>>>>>
>>>>> That's looking rather nice. $5/mo for 1TB of transfer and plenty of
>>>>> resources for my needs.
>>>> Not that I have any horse in the race or anything, but as a cloud
>>>> service consumer here are a few of my observations...
>>>>
>>>> First off, I have/currently use Linode, AWS and DigitalOcean... mainly
>>>> for one simple reason: all 3 providers have good SaltStack support, so
>>>> I don't actually have to log into their UI to do anything to manage
>>>> my servers from cradle to grave.
>>>>
>>>> I will say I did look at Vultr, and they do have some nice features.
>>>> It also appears that Apache libcloud [1] has support for Vultr, which
>>>> would make a SaltStack salt-cloud driver realistically possible,
>>>> though one doesn't currently exist. I was really floored by their
>>>> benchmark comparisons [2], though, and how much of an apples-and-oranges
>>>> exercise they were. I loved how they compare a 768MB/1CPU Vultr system
>>>> for $5/month against a 3.75GB/2CPU AWS c3.large that will run you
>>>> around $78/month on-demand, or between $29-54/month depending on
>>>> reserved instance pricing, and their 2GB/2CPU Vultr system for
>>>> $20/month against the 7.5GB/2CPU AWS m3.large, which runs about
>>>> $99/month on-demand and $39-71/month as a reserved instance. An AWS T2
>>>> instance (nano 512MB/1CPU or micro 1GB/1CPU) would have been a better
>>>> candidate for comparison against the 768MB Vultr, and runs much closer
>>>> in price ($5/month t2.nano or $10/month t2.micro on-demand, or
>>>> $2-4/month t2.nano or $6-7/month t2.micro as a reserved instance).
>>>> Likewise, a t2.small or t2.medium would have been a better comparison
>>>> for the 2GB Vultr. It looked like they went out of their way to pick
>>>> the most expensive option to compare so their numbers looked better.
>>>> In fact, I found a blog post [3] that seemed to give a better
>>>> comparison.
>>> Slight disagreement: I believe the t2.* are terrible machines to
>>> benchmark, given the CPU bursting budget. m3/m4 mediums would have been
>>> the better comparison; the Cs are a bit nuts w/ pricing.
>> Yes, the t2 instances are burstable, but they are better than the older
>> generation t1 instances. If you're comparing cost, however, the t2 would
>> be a better comparison, as the specs are closer and so is the cost. When
>> you're comparing a $5 instance to a $78 instance, your "performance per
>> dollar" is obviously not going to be comparable. The C3 instances are
>> CPU-optimized; the M3 and M4 are more general purpose with balanced CPU
>> & memory, with the M3 being SSD-based, which is really the only fair
>> comparison against DO or Vultr. The minimum in that series is the
>> m3.medium, which has 1 CPU, 3.75GB RAM and a 4GB SSD.
>>> How do you like libcloud? I've been meaning to check it out.
>> I haven't worked with it directly myself. Many of the salt-cloud
>> provider drivers are written using it, as it provides a quick way to do
>> so. There are still many drivers that have libcloud support available
>> but don't utilize it; in most cases those drivers were written prior to
>> libcloud support and there hasn't been any real need to rewrite them
>> yet. I'm currently working with another cloud provider that doesn't
>> have libcloud support, so we're having to do a lot more of the work
>> going off the provider's API documentation, as the only API library
>> we've been able to find for it is not fully up to the task.
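For anyone curious, a minimal sketch of what the libcloud compute API looks
like against the Vultr driver documented at [1]; the API key is a
placeholder and this is only an illustration:

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Instantiate the Vultr compute driver with an account API key.
    VultrDriver = get_driver(Provider.VULTR)
    conn = VultrDriver('YOUR-VULTR-API-KEY')

    # Enumerate plans and running instances through the same generic
    # interface that the libcloud-based salt-cloud drivers build on.
    for size in conn.list_sizes():
        print(size.id, size.ram, size.price)
    for node in conn.list_nodes():
        print(node.name, node.public_ips, node.state)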
>>>> Otherwise, the pricing between DO and Vultr doesn't appear to really
>>>> be all that different when comparing plans either. That said, I may
>>>> have to check out Vultr and see if I can't get the salt-cloud driver
>>>> working. The cost is low enough that I wouldn't mind throwing some
>>>> money at it to get another cloud provider option made available to me.
>>>> I like having the ability to launch and deploy my hosts to any
>>>> SaltStack-supported cloud provider from a DR/BC perspective, and it
>>>> keeps me from being locked into any one provider. Then again, I'm not
>>>> worried about uploading custom ISO images, and if I were I'd simply
>>>> build and deploy those to AWS, where I could easily make my own AMI
>>>> offline; knowing how to work AWS, keeping it cost-comparable wouldn't
>>>> bother me.
>>>>
>>>> 1. http://libcloud.readthedocs.org/en/latest/compute/drivers/vultr.html
>>>> 2. https://www.vultr.com/benchmarks/
>>>> 3. http://blog.due.io/2014/linode-digitalocean-and-vultr-comparison/
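As a rough illustration of that launch-anywhere, cradle-to-grave workflow,
salt-cloud can also be driven from Python via salt.cloud.CloudClient once
providers and profiles are configured; the profile and host names below are
hypothetical:

    import salt.cloud

    # Assumes /etc/salt/cloud.providers.d/ and cloud.profiles.d/ already
    # define a provider and a profile named 'do-1gb' (hypothetical).
    client = salt.cloud.CloudClient('/etc/salt/cloud')

    # Spin up a VM from the profile and bootstrap a Salt minion on it...
    client.profile('do-1gb', names=['web01'])

    # ...and later tear it down again without touching the provider's UI.
    client.destroy(names=['web01'])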
