Edward M. Goldberg

Cloud Computing - News and Ideas


The AWS EC2 Instance Missing User’s Guide

January 9th, 2009 by Edward M. Goldberg

When you get a new car, you look in the glove box and find a small book that explains the day-to-day operation of the car.  The same is true for your laptop.

What happened to the users guide for the AWS EC2 Servers?

First of all, AWS calls them instances and gives them strange names like i-12345678 that are not easy to remember.  So to start off, most users are lost just finding the server they just Launched (like a boat?).

This document covers the basics:  how to use a new server (read: instance) you booted up (Launched) in AWS EC2.

After you read this document you will know the “Lay of the land” and will have a good idea of how to drive this car off the lot and change the oil.

Preparation for Deployment

Get all of the parts of the server in place first.  A good list would look like:

  • Decide what OS to use.  I use CentOS.
  • Pick the AMI to launch from.  I like the RightScale templates.
  • Collect all of the code and content in an svn repository that you can use to load the server at boot.
  • Select the E-Mail address that will get all of the Alerts.
  • Select the size and number of servers you want to start with;  you can change your mind later.
  • Select a Dashboard to use for the Launch.  I use RightScale for most projects.
  • Get a pot of tea and start to work…

Networking

Each server in EC2 has two network interfaces.  But wait, if you type “ifconfig” you only see one?

One is used for the EBS Volume Interface and the one you see is used for TCP, UDP and ICMP connections.

The only one you can control is the one shown by the OS networking commands.  This interface, “eth0”, has two uses:

  1. Internal Network Connections from one server to the next in your farm.
  2. WAN Connections from the WWW to your servers.
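
A quick way to see all of this from inside the instance is the EC2 metadata service.  This is a minimal sketch;  the addresses it returns will of course be your own:

$ /sbin/ifconfig eth0                                          # the one interface the OS shows
$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4   # the internal “10.” address
$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4  # the external NAT address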

Internal IP Address


FREE Bandwidth,  let me repeat, FREE Bandwidth.  All transfers from an Internal IP to an Internal IP in the same Zone are FREE.

The Internal Network address starts with “10.” (for example, 10.248.107.68) and is used for:

  • Communication to the master database.
  • rsync from one server to the next in the farm (see the sketch after this list).
  • ssh and scp commands inside the farm.
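
For example, a content sync between two servers in the farm can ride the free internal network.  A sketch, assuming the Demo.pem key and the paths used later in this guide;  adjust both for your own farm:

$ rsync -az -e "ssh -i Demo.pem" /var/www/htdoc/ root@10.248.107.68:/var/www/htdoc/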


External NAT

WARNING: $$$$$ you pay for these bytes.  Watch out:  any transfers that use this address cost you $$$$.  If you use External IP addresses for both ends, you pay 2x!!!!!

The other use of the eth0 interface is the “NAT” that connects this server to the WAN or WWW.  When each server is Launched (booted), it is added to the list of servers the Firewall provides an external address for, so it can be reached from the WAN.

You can get an external IP address in two ways:

  1. Random EC2 IP address: 75.102.166.16 (ec2-75-102-166-16.compute-1.amazonaws.com)
  2. EIP, an Elastic IP address (covered below)


Default IP Address


The luck of the draw gets each server a “re-used” IP address picked at random from well-used addresses in the AWS pool.  Watch out:  you are not the first user, and the last user may have had a “Black Hat”.  These addresses are fine for development and random uses but fall short in many ways for real use of the server on the WAN.  Each IP address comes with a Sub Domain Name string that can be used to access this address.  The IP address is “hidden” in the string;  just extract the numbers to get a “raw” IP address for your general use.
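
A one-liner can do the extraction for you.  A sketch, using the example hostname from above:

$ echo "ec2-75-102-166-16.compute-1.amazonaws.com" | sed -e 's/^ec2-//' -e 's/\.compute-1\.amazonaws\.com$//' -e 's/-/./g'
75.102.166.16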

EIP

An Elastic IP (EIP) is an address you “own” from the AWS pool.  You get to keep these addresses for as long as you want to pay for them.  If you “age” these addresses for a while, the “old taint” does drop off after a few days and the address can be used for WAN uses.  I would never use any of the AWS IP Address Pool for sending E-Mail.  Use an E-Mail forwarding service to keep the IP address taint issue from stopping your E-Mail in its tracks.  You rent these EIP values for $0.00 an hour while they are “in use” and $0.01 an hour if left idle.  So keep them “assigned” to some server to avoid the cost of an idle IP address.

So now you have the basic facts.  Start all of the public-addressed servers with a “Well Aged EIP” and use the “10.” internal address for all server-to-server networking.  Now you know.

Every Day Tasks

Time to talk about how to fill the tank and check the oil.  This section covers the day-to-day use of the servers (Instances) and general information you had better know if you are going to use the server and not get stuck with a big bill or a stranded instance.

Most of this is covered in the 500 or so pages of AWS documents.  This is a short “crib” sheet of the most important topics.

ssh

Each server is started from an AMI, an Image of a Server stored in S3.  As part of the Launch process the server is given a KEY to provide access to the server.  I will only address Linux servers for now.
To access a server you must have the ssh key that it was started with.  No other access exists to a well-designed AMI-based server.  Feel free to break this rule,  but at your own risk.  The WWW is a very harsh place.  If your network access is attacked, not only is your server at risk, but so is any information on it;  passwords or keys left on this server could allow the “bad guy” to reach other data of yours!!!

So for now I will assume you have used “Best Practice” and the only way to access the server is the KEY you provided when you launched it.  To log into the server you need a few things:

  • The ssh client
  • The KEY used to Launch this server
  • A few hints.

The hints are as follows.  The only account on all of the servers is “root”.  The root account does not have a password!

The command: (where Demo.pem is your private key)

$ ssh -i Demo.pem root@ec2-67-202-48-82.compute-1.amazonaws.com

That command is the only way to address the server from “outside”.  From “inside”, use ssh root@10.248.107.68 to get free network bytes and a faster server-to-server connection inside your deployment.  This assumes that you have allowed access from server to server in the zone.
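
One hint that trips up new users:  ssh will refuse a private key file that other users can read.  Lock the KEY down before the first login:

$ chmod 400 Demo.pem     # ssh rejects keys with loose permissions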

sftp and scp

The same rules hold for sftp and scp.  Use the KEY file and log in as root with the password empty.  I use the Firefox sftp plug-in all of the time.  Simple to use and set up.
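
An scp copy looks just like the ssh login;  a quick sketch (the file name here is just an example):

$ scp -i Demo.pem backup.tar.gz root@ec2-67-202-48-82.compute-1.amazonaws.com:/mnt/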

An extra note for PuTTY users.  PuTTY needs the key reformatted, and it comes with the tool (PuTTYgen) to convert the key from standard form to PuTTY form.  Just use the conversion menu in the tool.

Web Content

The web server on your server may have different places for the code and content, but these ideas are the same for all servers.

The best way to install the code on a server is from an svn repository.  You have one, right?  All of the code on the server is in a nice safe place, ready to load up on request,  right?

If you use scripts, this is the command (replace the xxxx with your username and password):

$ cd /var/www     # This is my DocRoot,  you need to adjust this as needed.
$ svn --username xxxxx --password xxxxxx --no-auth-cache --force --quiet export "https://angel1.projectlocker.com/EdwardMGoldberg/Demo/svn/www/htdoc/"

Let me break down this command for you:

svn

The tool that provides the code tree.

--no-auth-cache

Keeps the tool from remembering the password for later uses.

--force

Overwrites the content of the local files as needed.

--quiet

Keeps the output down for better speed.  Leave it off to see the processing.

export

Gets one static copy;  no files are written for later updates or check-ins,  just the data please.

“xxxxx”

These strings are provided by the svn source repository.  You need to know them to get the correct files.
In this case the results go into /var/www/htdoc/… ;  that last directory name in the URL becomes the new sub-directory.

With no second “location” string in the command, the files are placed in the current working directory.  Now the job is half done.  You have all of the files local to the server.  Next we need to allow apache to read them.

$ chown -R apache:apache htdoc

If you forget this step you may see errors for files that cannot be read by apache.  BTW, if apache is not the user your web server runs as,  replace apache with your server user name as needed.

Done,  now we have a new copy of the code on the server from SVN.  We could get fancy and save the old code, or delete the oldest copy…

The important part is that this script installs the new code from svn with no user prompts.  Add it to the boot code for the server and each time you boot, the code gets installed.

On RightScale servers this script is provided as an Operational Script.  Just click on the button in the dashboard and it performs these actions for you over ssh.  Nice feature…

Now the whole script in one place:


#!/bin/bash -e
#
# Install the code on the server from SVN.
# The -e flag stops the script at the first error.
#
cd /var/www
svn --username xxxxx --password xxxxxx --no-auth-cache --force --quiet export "https://angel1.projectlocker.com/EdwardMGoldberg/Demo/svn/www/htdoc/"
chown -R apache:apache htdoc
logger -t Upload "Code installed on the server from SVN"

General Content

You can use the same SVN approach for any general content files.  S3 access also works well for restoring files from the backups, but S3 takes more tools and keys.

Take care here;  leaving the keys on the server grants access to many other services.

I also use the wget command many times to fetch files from other web servers that distribute files.
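
A typical fetch looks like this sketch, with the URL standing in for your favorite download site:

$ cd /mnt
$ wget -q http://example.com/downloads/tool-1.0.tar.gz
$ tar -xzf tool-1.0.tar.gz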

Database

The database should be configured, in the my.cnf file, to use the /mnt/mysql directory or a mounted EBS Volume.  The default is for the MySQL database to live at /var/lib/mysql, and this is very bad for many reasons.

The default makes very poor use of the disks on the server.  The disk mounted on the /mnt mount point is the fast disk on these servers.  Use it for large read-write file systems like MySQL.  If you leave MySQL on the / tree it will fill up the /tmp area and other important file space.  The /mnt disk is empty at the launch of the server.
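
Here is a minimal sketch of the move, assuming a stock CentOS layout with the data in /var/lib/mysql and a datadir line already present in /etc/my.cnf;  check the paths on your own AMI first:

$ service mysqld stop
$ mkdir -p /mnt/mysql
$ cp -a /var/lib/mysql/. /mnt/mysql/                      # carry over any existing data
$ chown -R mysql:mysql /mnt/mysql
$ sed -i 's|^datadir.*|datadir=/mnt/mysql|' /etc/my.cnf   # point MySQL at the fast disk
$ service mysqld start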

If you use EBS for your database server, you need to mount the EBS Volume for the MySQL server to use.  This works very well.
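
The flow for a brand new EBS Volume looks like this sketch;  /dev/sdf is just the common first device choice, use whatever device you attached the Volume to:

$ mkfs -t ext3 /dev/sdf        # first use only,  this erases the Volume!
$ mkdir -p /mnt/mysql
$ mount /dev/sdf /mnt/mysql    # mount it where your my.cnf datadir points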

Remember to never use the / file system for your Database.

Reboot

The servers can be rebooted.  The AWS command to reboot them exists in every Dashboard I have seen.  Just take care,  the server may not reboot cleanly.  If you want to be 100% sure that you keep your service up for the users, launch a fresh new server.  Later you can terminate the old server.  The overlap will allow you to change your mind and go back to the first server.  I never reboot servers anymore.

Launch

This is like going to Fry’s Electronics and getting a new server.  The nice part is that the return lines are zero in length!

I use the RightScale Dashboard to launch servers.  Once defined, a server costs zero $ when not running.

When you start (read: Launch) one of these defined servers the $$$ start to flow.  This ends when the server is terminated (rounded up to the next whole hour,  thank you).

The server boots from a “CD-ROM”-like image called an AMI.  This is not unlike a Live CD in some ways.  You request that a server be a running copy of the AMI on the selected hardware.  AWS picks a server (read: instance) for you and loads up that first 10G hard drive with your AMI.  Once all of the code is in place the server boots.

Part of the boot process is the reading of the KEY from AWS storage.  This is placed into the /root/.ssh files so you can log into the server later.  If you have not selected a KEY, or do not have a copy of the matching private key, the server will run just fine and you will have no access to it at all.

So the first step is to check that you have a KEY that matches the KEY name Amazon has, so the new server will allow you to access it after launch.  Don’t worry if you have no servers running and you have lost the key.  It is easy to create new keys before the next launch.  If you have lost the key to a running server,  you are out of luck.  Just terminate it and start a new server with a new key pair.
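
You can create a key from any Dashboard, or with the EC2 API command-line tools if you have them set up.  A sketch,  with the key name as an example:

$ ec2-add-keypair my-new-key
# copy the private key block from the output into my-new-key.pem,  then:
$ chmod 400 my-new-key.pem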

You may read lots of information about the creation of this AMI thing.  For now just use well-designed and trusted ones from RightScale.  One day you may need to get fancy and make your own.  But not at the start.

I have never made an AMI and do not plan to make my own at all.  Several web-based services now make AMIs on request.  Just use one of the services.  You have little to gain rolling your own for now.

Updates

I never update servers.  I just start new ones and terminate the old ones.  I have been known to debug a server for a while until I get all of the scripts working well.  But in the end I Launch a clean one and watch it hum.

It is important to get in the habit of working with Clean New Servers and not hacking old ones.  If you keep hacking old servers to fix issues, one day you may need to Launch and the server will fail to work at the nasty moment.  Launch new servers, then terminate the old ones when you are happy with the clean new Launch.  Terminate servers that have been used up, or hacked by hand, ASAP.  The cost of the “overlap” when two servers are running is very small.

For Development it is a very different story.  Use the ssh login and hack the server all you want.  Just remember, in the end the goal is a clean “One Click Launch”.  I use servers as tools all of the time.  Start one when you need a tool and turn it off when you are done with the tool.  If you need a place to save files, use EBS for storage and mount it on the servers when you need access to the files.  I keep a Snap-Shot of my favorite tools ready to attach to any server I need to debug.  Just like a USB dongle or rescue CD-ROM.

SVN

Source Code Control is very important in the Cloud Environment.  It is the place to keep any files where the date and time of each update needs to be remembered.  Later, when the server gets sick, you can look at the dates and times of updates and find clues as to what made the server sick.
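
When the detective work starts, the svn log for the deploy window is the first place to look.  A sketch,  with example dates and an example repository URL:

$ svn log -r '{2009-01-01}':'{2009-01-09}' https://svn.example.com/myproject/trunk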

Some people like git or CVS.  Go right ahead and use the program you like.  I use svn and have had great luck with ProjectLocker for my deployments.  Your mileage may vary.

Summary

Keep all of your servers happy, follow the simple ideas outlined here, and you will have good control over your servers and your intellectual property (code and content).

Good rules to follow are:

  • Keep your code and data separate from uploaded and generated stuff.
  • Back up any files that you care about.  Keep the backups off the server.
  • Assume that the server can stop existing at any moment and all of the files go away with it.
  • Use each disk, / and /mnt, for its special uses.  Never fill up the root file system;  you will be sorry.
  • Keep all of the files that are loaded on the server in SVN or your pick of Source Code Control.


Amazon S3 Requester Pays Model

January 3rd, 2009 by Edward M. Goldberg

The idea of reversing the charges on a phone call is not new.  But “please reverse the charges” for a download?

This is a great new idea.  Now you can publish files that users download from the Internet at the user’s cost.  It is also possible to set a fee for the download.

All of this just shows that Cloud Computing is breaking the mold and stirring up the whole question of fees.  With new features arriving all of the time, it does get very hard to keep up with the new tools.

It is now more important than ever to read the news and keep up to date with what can be done.


Cloud Computing and SVN Export a good match

December 17th, 2008 by Edward M. Goldberg

Cloud Computing and SVN Checkout in the form of an “export” is a very good match.

Let me start with an outline of the basic ideas.  You need to store all of the code and content for your server or servers in a safe place.  This is NOT on the server.  If the server dies or is attacked, you need to put back the code.  You could say that it is a code backup.  But that is not exactly correct.

SVN is the Master or “Gold” repository for the code and you are making a code distribution.

Once you have all of the code in the SVN tree, all you need to do when the server is deployed is to export the SVN tree “on top” of the server.  The content in the master SVN is then on the server and you are ready to start any post-install scripts you need.
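
At its simplest the deploy step is one command.  A sketch,  with an example repository URL and an explicit target directory this time:

$ svn export --force "https://svn.example.com/myproject/trunk/www" /var/www/htdoc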

The good parts are:

  1. The code you own is in your SVN “GOLD” repository, under lock and key.
  2. The steps to install from SVN onto your new server are a simple SVN EXPORT of the tree.
  3. The cost is very low.

The bad parts:

  1. You need to set up an SVN repository for the deployment of the project and keep it up to date.
  2. If the SVN is down you cannot launch a new server.
  3. More working parts to maintain.

Once you have all of the process in place, you will have a very simple command to deploy a new server.  Every change and update will be logged.  All of the code and data will be safe and stored in several places.  So if later you need a copy of the code, you will not be stuck.  If you rely on the code and data on a server, one day that server may die and you are out of luck!

Edward M. Goldberg


Web Talk about Master and Slave Fail Over in the Cloud

December 14th, 2008 by Edward M. Goldberg

This is a shameless plug for the talk I am doing Dec 18th with RightScale.

We will be talking about Master Slave Fail Over and general issues of how to maintain a code base on a collection of servers in the Cloud.

Link to sign up:

https://www2.gotomeeting.com/register/260903688

Link to the “news” at RightScale:

http://RightScale.com/ (look for the 18th)

I will be doing the LIVE part of the demo.  Wish me luck!!!

Edward M. Goldberg
http://BLOG.EdwardMGoldberg.com


RightScale gets a cash infusion of $13M

December 8th, 2008 by Edward M. Goldberg


This is great news for the Cloud Computing Community.  The team at RightScale has added many tools to Open Source.  The right_aws Ruby tools are just one example of the contribution RightScale makes to the community.

I look forward to this new $13M added to the Cloud Computing kitty adding many more tools.  Keep the software coming!

“RightScale Blog - Expanding RightScale with $13M new funding”

http://Blog.RightScale.com

Edward M. Goldberg


What is a CDN? Why use it? Is this a new idea?

November 26th, 2008 by Edward M. Goldberg

What is a CDN and what is it good for?

Before this week, CDN may not have been part of your vocabulary.  Now that AWS CloudFront has launched, the buzz around the forums is all about CDNs and the good and bad (lack of) features in CloudFront.

Basics:  A CDN has two basic parts:

  1. The origin server is where the bits and bytes of your master html files come from.
  2. The CDN network of servers is where the users get the content.

The idea is very simple.  It works like a cache.  You provide, once a day or so,  a copy of the content.  The CDN serves a copy each time the file is requested by a user.  Nice and simple.

But the details,  that is where this whole idea gets complicated.  The CDN also services these requests from the closest (in network time,  not miles) server to the user.  This “feature” adds more performance for remote users.  I could go on and on about cache features and details,  but it would just make you dizzy.

So, why is this AWS CloudFront service special, you ask?

Till now, most CDN providers only started to talk to users once the volume of data reached $1,000.00 each month.  Wow,  not many small servers need that level of service.  But it is great when you need it…  AWS CloudFront can be used starting at $0.15 each month.

With this lean AWS CDN offering, at rates too low to ignore and with NO lock-in or low-end minimums, we need to start talking more about why low-end sites should use CDN Technology.

For many sites CloudFront can help make the site Slash-Dot proof.  Content served from a CDN scales with the demand for the service.  Even sites that are NOT in EC2 today can use CDN services to provide capacity that scales with need, for even one page or download.

The ecosystem has changed.

CDN technology is not new.  Just the cost and lock-in levels have changed.


What is the connection? CDN (CloudFront) and Cloud Servers?

November 26th, 2008 by Edward M. Goldberg

I have been thinking a lot about Content Delivery Network (CDN) ideas for a while now.

AWS has just added a CDN service to the EC2-S3 Cloud Ecosystem under the name CloudFront….

So what is the connection with Clouds?

The idea of a CDN is more than just providing some of the content “close” to the point of use. The CDN also adds one more dimension to the ability to scale a site in a way that is driven by the demand of the users.

Think about the number of bytes your server is sending out to the Internet.  Most of the “bulk” is static content.  If this content comes from CDN servers and you pay by the byte served, your service is in fact scaling up and down with need in a very dynamic way, at very little cost to you.  A simple, elegant solution.

Now the connection to Clouds starts to be clear.  The CDN does help with worldwide service, but it also helps send out content for local users.  Your servers can now focus on dynamic content!

Make your “landing page” 100% CDN and the next “Slash Dot” will not hit you so badly.  That first page will shine bright, even if the rest of the service is impacted.

Edward M. Goldberg


EC2 Grows up and leaves behind the BETA tag

October 24th, 2008 by Edward M. Goldberg

Some would say, “it is about time!”.  But I see this as good timing.  EC2 has been in BETA for only a very short time when you think about the scale of the project.

How does this new EC2 change the game?  Well, the BETA tag allowed AWS to make changes and update the recipe as needed to meet the market.  Now we start a whole new era of “Backwards Compatibility”.  AWS has to show that the solution is stable.


AWS adds Windows Servers to the EC2 Cloud

October 24th, 2008 by Edward M. Goldberg

As the Cloud Environment gets richer and more diverse, we start to see more and more software in the Cloud.  With the addition of Windows to the EC2 Cloud today, I see a trend of rich new features making the Cloud Environment a better place to develop and launch projects.

The limitations of the Cloud are being addressed one by one.  We are all very happy to see players like RightScale growing in features.  With each deployment I work on,  the tools provided just get better and better.

Is the addition of Windows good for Cloud Computing?  I would say yes and bring it on!   Any OS that is used for applications or servers will need to be ported to the Cloud.


The Topic of UpTime and how many 9s

October 15th, 2008 by Edward M. Goldberg

The topic of up-time, and as we call it the “9’s”, is a very important area to explore.

IMHO, to get more 9’s you need more “distinct” Clouds.  I would like to see more Clouds that vary the solution to the extent that they do not suffer from the same software and hardware flaws.  Software problems may be more of an issue than hardware problems.

The hardware issues up to this point in the history of the Cloud have been covered well by the Cloud Infrastructure Code Base.  The widespread issues that have “brought down the Cloud” have all been software,  up to this point.

I would like my first contribution to this “Cloud Computing Guide” to be a discussion of how to use several Clouds and non-Cloud Deployments to move closer to the goal of 9’s.

We may never have a deployment that gets to 100% up-time,  but IMHO the path is to provide a wide number of solutions to the problem and distribute the risk.
