In this post, I will demonstrate how to set up a Kubernetes cluster in AWS using KOPS.
First of all, you will need an Amazon AWS account. If you do not have one yet, you can sign up at https://aws.amazon.com/. AWS has a free tier, but the resources used for a Kubernetes cluster will incur a cost.
AWS command line interface (CLI)
You will need the AWS command line interface tool installed so that you can log in to AWS. Installers are available at https://aws.amazon.com/cli/.
Once you have the CLI tool installed, you will need to log in using the access key ID and the secret access key for your admin user. If you do not have an admin user, then you can create one by following the instructions for best practice on the AWS getting started guide: http://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html.
$ aws configure
AWS Access Key ID [None]: *****
AWS Secret Access Key [None]: *****
Default region name [None]: eu-west-2
Default output format [None]: json
Once you have set up the CLI tool with your admin user, you can create new resources in AWS.
It is not considered best practice to use your root account or admin user to interact with AWS so we shall create a separate group and user that has only the permissions required to create and manage a Kubernetes cluster.
Creating a new group and user just requires a couple of commands at the command line as follows:
$ aws iam create-group --group-name k8s-group
$ aws iam create-user --user-name k8s-user
Now we can add the necessary permissions as per the KOPS documentation on the subject:
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name k8s-group
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name k8s-group
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name k8s-group
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name k8s-group
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name k8s-group
Now we can add the new user to the new group and create an access key for the new user:
$ aws iam add-user-to-group --user-name k8s-user --group-name k8s-group
$ aws iam create-access-key --user-name k8s-user
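The create-access-key command returns a JSON response containing the new key. As an aside, you can extract just the fields you need with the CLI's standard --query (JMESPath) and --output options; the expression below assumes the usual shape of the response:

```shell
# Extract just the key ID and secret from the JSON response
# using the AWS CLI's built-in JMESPath --query support:
aws iam create-access-key --user-name k8s-user \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' \
  --output text
```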
You can then use the aws configure command to log in as this new user.
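Rather than overwriting your admin credentials, you may prefer to keep the new user's keys under a named profile; this is a sketch using the CLI's standard profile support (the profile name k8s-user is just my choice):

```shell
# Store the new user's keys under a named profile instead of
# overwriting the default credentials:
aws configure --profile k8s-user

# Select the profile for all subsequent commands in this shell:
export AWS_PROFILE=k8s-user
```

Individual commands can also take --profile k8s-user instead of relying on the environment variable.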
KOPS
KOPS is a command line tool used to create and manage the cluster on AWS. It can be installed by following the instructions at https://github.com/kubernetes/kops/blob/master/docs/install.md.
If you are on a Mac and using Brew, it’s as simple as:
$ brew install kops
If you already have KOPS installed then you can upgrade:
$ brew upgrade kops
At the time of writing, version 1.8.0 was the current version.
kubectl
Kubectl is a command line tool used to control the cluster. It can be installed by following the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/.
If you are on a Mac and using Brew, it’s as simple as:
$ brew install kubectl
If you already have kubectl installed then you can upgrade:
$ brew upgrade kubectl
At the time of writing, version 1.8.5 was the current version.
Prior to version 1.6.2 of KOPS, you had to use DNS for master and node discovery. This required a dedicated domain or subdomain to be set up.
As of version 1.6.2, a gossip-based network is supported that does away with the need to mess about with domains and Route 53. Using a gossip-based cluster makes it easier and quicker to set up a Kubernetes cluster. However, it does create an additional AWS load balancer, so it comes at an additional cost.
For the purposes of this tutorial, and to keep things as simple as possible, I will use a gossip-based cluster, which I will name cluster.k8s.local. However, if you follow along with this tutorial and want to use your own domain/subdomain, just substitute your domain/subdomain for cluster.k8s.local. We will be using environment variables, so this only needs to be specified once.
If you would like to set up a dedicated domain/subdomain then please take a look at the KOPS documentation at https://github.com/kubernetes/kops/blob/master/docs/aws.md#configure-dns.
Create a Kubernetes Cluster
Creating a Kubernetes cluster in AWS is easy thanks to KOPS. KOPS manages the cluster for you storing its state in S3. If you enable versioning on your S3 bucket, you’ll get an automatic historical store of your Kubernetes cluster configuration enabling you to revert configuration changes if necessary.
So first of all, let's define our cluster name. As I mentioned above, I am using gossip-based discovery for my cluster. The only requirement to enable this is to have a cluster name ending in .k8s.local:
$ export K8S_CLUSTER_NAME=cluster.k8s.local
If you are using DNS based discovery then just simply substitute your domain/subdomain here.
Next, we create the S3 bucket that the configuration is stored in. Note that S3 bucket names must be globally unique and may contain only lowercase letters, numbers, hyphens, and dots, so you will need to choose your own name here.
$ export K8S_S3_BUCKET_NAME=my-k8s-cluster
$ export KOPS_STATE_STORE=s3://$K8S_S3_BUCKET_NAME
$ aws s3 mb $KOPS_STATE_STORE
$ aws s3api put-bucket-versioning --bucket $K8S_S3_BUCKET_NAME --versioning-configuration Status=Enabled
The next thing we do is create the cluster configuration. This doesn’t create any resources on AWS, it just creates the configuration in the S3 bucket.
I’m in the UK, so I’m using the London region (eu-west-2), which has two availability zones: eu-west-2a and eu-west-2b. For the purposes of this tutorial, we only need a limited amount of resources, so I will use only one availability zone, eu-west-2a.
The following command will create the cluster configuration and by default, will create one master and two nodes.
$ kops create cluster --cloud aws --name $K8S_CLUSTER_NAME --zones eu-west-2a
This command will produce a large amount of output detailing the cluster that will be created.
The end of the output will contain:
Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster cluster.k8s.local
 * edit your node instance group: kops edit ig --name=cluster.k8s.local nodes
 * edit your master instance group: kops edit ig --name=cluster.k8s.local master-eu-west-2a

Finally configure your cluster with: kops update cluster cluster.k8s.local --yes
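As an aside, if you want to control the instance counts and sizes up front rather than editing the instance groups afterwards, kops create cluster accepts flags for this. The values below are examples only; they happen to mirror the defaults shown in the suggestions above:

```shell
# Example only: create the configuration with explicit sizing
# (one c4.large master, two t2.medium nodes).
kops create cluster \
  --cloud aws \
  --name $K8S_CLUSTER_NAME \
  --zones eu-west-2a \
  --master-size c4.large \
  --node-size t2.medium \
  --node-count 2
```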
We don’t need to edit the configuration any further at this stage so we can now create the AWS resources with one further command:
$ kops update cluster $K8S_CLUSTER_NAME --yes
After a minute or two, the above command will have completed and AWS will be creating the necessary resources. This could take several more minutes so a nice cuppa at this point is probably a good idea.
Eventually, you’ll see in the AWS console that the EC2 instances have been provisioned and are ready for use.
You can also validate the cluster with kops validate cluster; you'll get output similar to the following once the cluster is ready:
$ kops validate cluster
Using cluster from kubectl context: cluster.k8s.local

Validating cluster cluster.k8s.local

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-2a  Master  c4.large     1    1    eu-west-2a
nodes              Node    t2.medium    2    2    eu-west-2a

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-40-221.eu-west-2.compute.internal  node    True
ip-172-20-53-49.eu-west-2.compute.internal   master  True
ip-172-20-59-174.eu-west-2.compute.internal  node    True

Your cluster cluster.k8s.local is ready
Install Kubernetes dashboard UI
Kubernetes provides a nice web UI for your Kubernetes cluster. This can be installed with a couple of commands:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl proxy
The last command creates a local proxy so that you can communicate securely with your Kubernetes cluster. Point your browser at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ to view the dashboard, and click the Skip button when viewing it for the first time.
That completes our cluster.
Delete the Kubernetes cluster
If you want to delete the cluster and its associated AWS resources, you can use the following command:
$ kops delete cluster $K8S_CLUSTER_NAME --yes
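Note that this removes the AWS resources the cluster was using, but not the S3 state bucket we created earlier. A sketch of the extra cleanup, assuming the environment variables from earlier are still set; because we enabled versioning, old object versions may need to be deleted separately before the bucket can be removed:

```shell
# Assumed to still be set from earlier in the tutorial:
export K8S_S3_BUCKET_NAME=my-k8s-cluster
export KOPS_STATE_STORE=s3://$K8S_S3_BUCKET_NAME

# --force deletes the current objects before removing the bucket.
# With versioning enabled, removing remaining object versions may
# be required first (for example via the S3 console).
aws s3 rb $KOPS_STATE_STORE --force
```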