The Ultimate Cheat Sheet on AWS Managed Services Providers

VPSDeploy is a new web platform designed to let users "deploy" web-based applications to a number of different "cloud" VPS servers.

The system was originally built to support "Ruby on Rails" application deployments, with an underlying application that gives users a "one click" route to getting their applications deployed.

As the system's popularity has grown, it has branched out into a number of other services, including database provisioning and CDN integration.

The point of the service is that if you're looking to utilize the massive wave of new compute resources provided by the "cloud" service providers (Microsoft Azure, AWS, Rackspace, DigitalOcean, etc.), you need a way to provision the servers you're using.

Contrary to popular belief, you're basically paying for a distributed VPS running across thousands of servers in different data centres. The VPSs you run will still require the installation of an underlying OS (Linux or Windows), and will also need the various libraries / applications necessary to get those systems working properly (typically web server software and the like).

Whilst "deployment" services already exist (from the likes of Nanobox), the big issue they have is that they are entirely focused on "per app" functionality. This means you're basically getting a system that deals with provisioning a single application - running on as many servers as required.

VPSDeploy, by contrast, was created to provide server-centric capabilities - allowing users to deploy as many apps as they want onto their own server infrastructure. It works much like the "shared" hosting we all know and love (essentially a single server box with thousands of user accounts on it).

How It Works

Its core is a vast API integration system which allows it to connect directly to the various "cloud" VPS providers. Companies like Microsoft, Rackspace and DigitalOcean all provide simple APIs which give the application the ability to connect to a user's account on their provider of choice and set up servers as required.

This gives the application the ability to create, manage and provision a multitude of different servers across different providers. For example, if you wanted to direct UK traffic to an AWS-powered server cluster, you'd be able to set that up alongside a Hetzner cluster handling German traffic.
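As a concrete illustration (not VPSDeploy's actual code), a provider API call of this kind can be as little as one HTTPS request - here's a hedged sketch of creating a server ("droplet") through DigitalOcean's public v2 API, where the token and the name/region/size/image values are all illustrative assumptions:

```shell
# Hypothetical sketch: create a droplet via DigitalOcean's v2 API.
# $DO_TOKEN and the JSON field values are assumptions for illustration.
curl -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"web-1","region":"lon1","size":"s-1vcpu-1gb","image":"ubuntu-22-04-x64"}'
```

A platform like this wraps calls of that shape for each provider behind a single interface.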

To get this working, the system also includes an "endpoint manager" - which basically helps people visualize their DNS setup. DNS maps your domain names to the web servers their visitors should reach.
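The UK/Germany split described above could, at its simplest, be expressed as ordinary DNS records - a hypothetical zone fragment (the names and documentation-range IPs are assumptions, and true geographic routing would need a GeoDNS-capable provider):

```
; hypothetical zone fragment
uk.app.example.com.  300  IN  A  203.0.113.10   ; AWS cluster serving UK traffic
de.app.example.com.  300  IN  A  198.51.100.7   ; Hetzner cluster serving German traffic
```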

Whilst the DNS side of things has been taken care of before, VPSDeploy's endpoint manager is the first to provide a visual experience - backed by the ability to manage the various public-facing "endpoints" that a user may wish to use.

Regardless of how the system manages your various infrastructure, the point is that it actually deploys a "stack" to each VPS you want to use. This "stack" installs all the software that gets a server operating for the "web". If you're looking to deploy applications to your server infrastructure, you'll be able to tap into the Git repositories established by the system, and the underlying libraries it has installed - all via SSH (which is how it works across a number of different providers).
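To make the idea concrete, here is a minimal sketch of the kind of SSH/Git command sequence such a "stack" deploy boils down to - the host IP, package list and repository path are all assumptions, not VPSDeploy's actual commands. The script only builds and prints the commands rather than running them:

```shell
# Dry-run sketch of a server-centric deploy over SSH (all values hypothetical).
HOST="203.0.113.10"   # target VPS
APP="myapp"           # application name

# Install the web "stack", create a bare Git repo, then push the app to it:
provision_cmd="ssh root@${HOST} 'apt-get update && apt-get install -y nginx git'"
repo_cmd="ssh root@${HOST} 'git init --bare /srv/git/${APP}.git'"
push_cmd="git push root@${HOST}:/srv/git/${APP}.git main"

printf '%s\n' "$provision_cmd" "$repo_cmd" "$push_cmd"
```

Because everything happens over plain SSH, the same sequence works identically on any provider's VPS.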

Is It Effective?

The most important thing to remember is that it is not a replacement for cloud VPS provision; it's a way to manage it.

The way the system helps you visualize, manage and optimize the various applications & servers you have running makes it one of the most effective tools a developer could use to deploy their applications.

Whilst running web-based applications / services on "cloud" VPS infrastructure is not a necessity, it's certainly one of the most extensible and modular ways to get up and running in a production capacity.

Why Would You Need It?

The main benefit of using the system is the way in which it allows you to manage your own infrastructure.

The "web" works exactly like your home network (computer systems networked together) - except we have a huge system called DNS which allows us to mask a huge amount of infrastructure behind "domain" names.

Domain names let us manage exactly what a client sees when they want to access a particular service or piece of content. This works well, BUT it has a major issue: if you want to provide your *own* infrastructure (beyond "shared" or "dedicated" hosting), there has traditionally been no straightforward way to do it.

The introduction of the many "cloud" VPS providers basically provided us with the capacity to determine exactly what our infrastructure looks like - without having to purchase / rent expensive hardware.

The only problem presently is that if you're going to go down the "cloud" route, you need to ensure you actually have a way to both manage your infrastructure *and* (if necessary) determine exactly how that infrastructure is going to work cross-provider.

Other Solutions

If you are looking at moving to (or adopting) a cloud-centric infrastructure, you'll be best placed looking at a number of different services that can help provision servers across the various providers.

Some of the more pertinent are Nanobox and Hatchbox - the latter being specifically for Ruby on Rails. Nanobox works very similarly to Heroku, except it's able to deploy to a number of different services, and is very dependable.

The micro instance (t1.micro) is one of the most popular instance types offered by Amazon EC2. In November 2010, AWS announced the free tier and started offering 750 hours of micro instance usage free per month for the first year, though it's available as an Amazon EBS-backed instance only. You can now launch EC2 instances within a Virtual Private Cloud (VPC), and this extends to t1.micro instances as well.

In terms of technical specifications, the micro instance type doesn't have much power. It provides 613 MB of main memory, and comes with burstable CPU capacity that can go up to 2 EC2 Compute Units (ECUs) - which means CPU performance is not consistent. That is simply not enough for running any serious workload. Storage can be added through Elastic Block Store (EBS), and the free tier covers up to 30 GB of storage space.

Recommendations for optimizing an AMI for the micro instance type:

• Design the AMI to run within at most 600 MB of memory usage

• Limit the number of recurring processes that use CPU time (e.g., cron jobs, daemons)

These specifications don't mean micro instances are totally ineffective, though - they offer excellent value in certain cases. In this article, I want to share how to get the best out of the Amazon EC2 micro instance.

Optimize Swap Memory - This applies to Linux-based micro instances. By default, these instances do not have any swap space configured. I ran my Cloud Magic World website on a micro instance for a few days, and during peak loads I saw Apache or MySQL crash unexpectedly. With just 613 MB at your disposal, you have to make sure you set aside enough disk space for swap.
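On a typical Linux micro instance, adding swap comes down to a few standard commands (run as root; the 1 GB size is just a reasonable figure for a 613 MB machine, not a requirement):

```shell
# Create and enable a 1 GB swap file (run as root).
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist the swap file across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```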

Auto Scaling Out - The fundamental idea of the cloud is scaling out. Running a fleet of low-end servers in parallel is more efficient and cost-effective on any virtualized infrastructure. Depending on the load and the use case, splitting a job across a number of micro instances may be cheaper and faster than running the same job on a single large instance. This scale-out architecture also provides better failover and quicker processing.
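The scale-out idea can be simulated locally with nothing more than `xargs`: split a batch of work items into chunks and hand each chunk to a parallel worker, just as you would fan the chunks out across several micro instances (the numbers here are arbitrary):

```shell
# Split 100 work items into chunks of 25 and process them with 4 parallel
# workers -- a local stand-in for 4 micro instances sharing one job.
seq 1 100 | xargs -n 25 -P 4 sh -c 'echo "processed $# items"' _
# prints "processed 25 items" four times
```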

Consider Caching - If you are planning to host websites on these, make sure they are not very dynamic. Dynamic websites demand more CPU power and memory because of the way each request is processed. Simple websites like blogs and marketing sites with a little dynamic content are ideal candidates for micro instances. Moreover, consider caching the content to avoid CPU spikes. For example, if you are running a blog or website, you can enable caching plug-ins to increase performance - plenty are available free of charge.
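For example, if the site is served by Apache (as in the crash story above), a small `mod_expires` fragment is enough to have browsers cache static assets so repeat requests never hit the application - the file types and lifetime here are illustrative choices:

```apache
# Hypothetical Apache fragment: let clients cache static assets for a week.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css  "access plus 7 days"
  ExpiresByType image/png "access plus 7 days"
  ExpiresByType application/javascript "access plus 7 days"
</IfModule>
```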

Select 64-bit - Always pick 64-bit when running a micro instance. This is all but guaranteed to give you better performance than the 32-bit equivalent. You will see the difference when you are running batch processing that deals with large files and processes.

Dedicate It to Cron Jobs - Many customers run a Linux micro instance for cron jobs and site-specific tasks that monitor and manage their entire AWS infrastructure. If you want to run cron jobs this way, stop all other running services, add swap space to the instance, and strip it down to make it a lean and mean cron-job machine.
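A dedicated cron machine like this needs nothing beyond a crontab - here's a hypothetical example (the two scripts are stand-ins for whatever monitoring or housekeeping tasks you actually run):

```
# m  h  dom mon dow  command
30   2  *   *   *    /usr/local/bin/nightly-backup.sh
*/5  *  *   *   *    /usr/local/bin/check-instances.sh
```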
