Initial commit a088c8c by Michael Venezia, Aug 27, 2015

**README.md**
# Kubernetes Training

### Introduction

The purpose of this training is to show how quickly and easily you can create a Kubernetes cluster. At the same time, these configs are not recommended for your production applications.

### Disclaimer

Please consult with your doctor before starting this course. Excessive usage may cause eye-bleeding and caffeine overdose.

### 1. Preparing AWS

0. [Creating SSH Key](/chapter-1/1.md)
0. [Adding new VPC](/chapter-1/2.md)
0. [Creating new Security Group](/chapter-1/3.md)
0. [Configuring AWS console tool](/chapter-1/4.md)

### 2. Establish ETCD

0. [Launch single ETCD instance](/chapter-2/1.md)
0. [Verifying ETCD setup](/chapter-2/2.md)

### 3. Creating K8s cluster. Easy way

0. [Creating master instance](/chapter-3/1.md)
0. [Add nodes/slaves/minions to the cluster](/chapter-3/2.md)
0. [Verifying Kubernetes setup](/chapter-3/3.md)

### 4. Creating AWS Load Balancer

0. Description of Service A and Service B
0. Creating Load Balancers for Service A and Service B

### 5. Launch Application A

0. Pod Config = NGINX + PHP -> A website
0. Launching in K8s
0. Making sure it works

### 6. Launch Application B

0. Run in K8s
0. Prove that both services are working

### 7. K8s Service resize and update

0. Updating source code and rebuilding docker images
0. Scaling
0. Rolling update

**chapter-1/1.md**
# Creating new SSH key

First, you need to generate a new SSH key pair, which you will use to connect to all Kubernetes instances.

In the main menu go to "Services" -- "EC2". You will see the following screen:

![New SSH Key](/assets/1_1.png)

You can let AWS generate a new key pair automatically; in that case AWS saves your private SSH key to your downloads folder, and you need to copy it to your `.ssh` folder and change its permissions to 600.
Alternatively, if you want to use your own SSH key, you can upload it with the "Import Key Pair" button.
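
If you prefer to generate the key pair locally and import it, here is a sketch (the key name `k8s-training` is illustrative; pick your own):

```shell
# Generate a local RSA key pair to upload via "Import Key Pair".
# The file name "k8s-training" is an example, not a required name.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -f ~/.ssh/k8s-training -N "" -q
chmod 600 ~/.ssh/k8s-training
```

You would then upload the public half, `~/.ssh/k8s-training.pub`, through the "Import Key Pair" dialog.
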
**chapter-1/2.md**
# Adding new VPC

In your AWS main menu open "Services" and click on "VPC".

![New SSH Key](/assets/1_2.png)

Create a new VPC with the "Start VPC Wizard" button. As soon as you click it, you will see the following screen:

![VPC Wizard](/assets/1_3.png)

The next screen should look like this:

![VPC Settings](/assets/1_4.png)

**chapter-1/3.md**
To create a new security group, go to "Services" - "VPC" in the main menu, then select the "Security Groups" link in the "Security" section.
As soon as you click the "Create Security Group" button, you will see the following window:

![Creating Security Group](/assets/1_5.png)

After the new security group is created, select it and copy its ID to the clipboard. At the bottom of the page you will see information about your new security group. Go to "Inbound Rules" and add new rules to it:

![Creating SG rules](/assets/1_6.png)

Here is an explanation of why we need these inbound rules:

0. SSH (22) - enables logging in to any VM via the SSH protocol
0. HTTP (80) - our website-a and website-b will be available on this port to the outside world
0. All Traffic - with this rule we allow any traffic inside our subnetwork, so that the VMs can talk to each other
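
For reference, the same three rules expressed with the unified `aws` CLI (a sketch only; this course uses the console and the EC2 CLI tools, the security group ID below is illustrative, and `echo` prints the commands instead of running them):

```shell
SG_ID=sg-0123abcd   # illustrative; use the ID you copied above
# --protocol -1 means "all traffic" in the EC2 API
CMD_SSH="aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0"
CMD_HTTP="aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 80 --cidr 0.0.0.0/0"
CMD_ALL="aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol -1 --cidr 10.0.0.0/16"
echo "$CMD_SSH"
echo "$CMD_HTTP"
echo "$CMD_ALL"
```

Drop the variables into real `aws` calls only after the CLI is configured (chapter 1.4).
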

You can leave the outbound rules as they are:

![Outbound rules](/assets/1_7.png)
**chapter-1/4.md**
AWS offers a useful and handy command line tool for performing all necessary operations in the AWS cloud. We will use it in the next chapters, so please follow this manual to install and configure it:

[Setup manual](http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html)
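
Whichever tool you use, it needs your AWS credentials. As an aside, with the newer unified `aws` CLI (an alternative to the EC2 CLI tools linked above, which read slightly different variable names) the usual way is environment variables; the values below are placeholders, not real credentials:

```shell
# Placeholder credentials; substitute your own access key, secret, and region.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
```
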


**chapter-2/1.md**
# Launch single ETCD instance

To create a new ETCD instance we will use the [ETCD cloud config](/files/cloud-configs/etcd.yaml).

You can find the command we need to execute in the "/files/commands/" folder, in [create-etcd.sh](/files/commands/create-etcd.sh).

Before you can run this command, you need to replace all placeholders with the IDs of the AWS resources you created before: the Security Group ID, the Subnet ID, the Region (it should be the same as for your VPC), and the SSH Key Pair name.
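
One way to do the substitution is with `sed`. A sketch on a sample line (the placeholder spelling and the IDs here are illustrative; check `create-etcd.sh` for the real ones):

```shell
SG_ID=sg-0123abcd         # illustrative IDs
SUBNET_ID=subnet-0123abcd
# Against the real file this would be, e.g.:
#   sed -i.bak "s/<security group id>/$SG_ID/" create-etcd.sh
LINE='run-instances --security-group-ids <security group id> --subnet-id <subnet id>'
OUT=$(echo "$LINE" | sed -e "s/<security group id>/$SG_ID/" -e "s/<subnet id>/$SUBNET_ID/")
echo "$OUT"
```
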

After you do this replacement, run these commands:

```shell
chmod +x ./create-etcd.sh
./create-etcd.sh
```

The output of this command should look like this:

```json
{
    "OwnerId": "...",
    "ReservationId": "...",
    "Groups": [],
    "Instances": [
        {
            "Monitoring": {
                "State": "disabled"
            },
            ...
        }
    ]
}
```

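To pull a single field out of that JSON for use in later steps (for example the instance ID), one option is a small `python3` one-liner; the JSON below is an abridged stand-in for real `run-instances` output, and the instance ID is illustrative:

```shell
# Abridged sample of run-instances output; in practice, pipe the real output in.
OUT='{"OwnerId":"...","Instances":[{"InstanceId":"i-0123abcd","Monitoring":{"State":"disabled"}}]}'
INSTANCE_ID=$(echo "$OUT" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Instances"][0]["InstanceId"])')
echo "$INSTANCE_ID"   # i-0123abcd
```
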
**chapter-2/2.md**
# Verify ETCD setup

First, you need to wait until AWS shows that the instance is ready. Go to "Services" - "EC2" and click the "Instances" link under the "Instances" section. You should see your new VM as the first item (if you have any sorting or filters applied, clear the filter and sort instances by launch date). For the first several minutes AWS will show it as

![Preparing instance](/assets/2_1.png)

But as soon as it's ready you will see

![Ready instance](/assets/2_2.png)

After that you can try to log in to it via SSH with the default "core" user and your SSH key:

```shell
ssh core@<ip> -i <path to Private SSH key generated before>
```

After that you will see something like this:

```
CoreOS alpha (752.1.0)
Update Strategy: No Reboots
core@ip-10-0-0-65 ~ $
```

Let's make sure that ETCD is working:

```shell
etcdctl cluster-health
```

This command should list all ETCD nodes (in our case, just one) with their status. The output should look like this:

```
cluster is healthy
member 4d061a28954e38d7 is healthy
```

Let's try to store something in ETCD:

```shell
etcdctl set test test
```

And check that it's in place:

```shell
etcdctl ls /
```

The output should be:

```
/test
```
**chapter-3/1.md**
# Creating master instance

Go to the "/files/cloud-configs" folder and open the [Cloud Config for K8s master](/files/cloud-configs/create-k8s-master.md).
In this config you need to replace the **\<etcd ip\>** placeholder with an actual IP address. Open the list of instances ("Services" - "EC2" - "Instances")
and click on the ETCD instance. At the bottom you will see information about this VM; you need the Private IP:

![VM Info](/assets/3_1.png)

Copy it and replace all placeholders.

The next step is to actually create the Kubernetes Master VM. Go to the "/files/commands" folder and execute the following commands:

```shell
chmod +x create-k8s-master.sh
./create-k8s-master.sh
```

The expected output is:

```json
{
    "OwnerId": "...",
    "ReservationId": "...",
    "Groups": [],
    "Instances": [
        {
            "Monitoring": {
                "State": "disabled"
            },
            ...
        }
    ]
}
```

**chapter-3/2.md**
# Add nodes/slaves/minions to the cluster

To finish the Kubernetes cluster setup we need to add nodes to our master. They will actually host our services and serve them to users.

As a first step you need to get the ETCD IP; see the [previous article](/chapter-3/1.md) for how.

You also need the Kubernetes Master IP address; the steps are the same as for the ETCD IP address.

After that, open the [Kubernetes Node Cloud Config](/files/cloud-configs/node.yaml) and replace the **\<etcd ip\>** and **\<master ip\>** placeholders.

Then, go to the "/files/commands" folder and run the following commands:

```shell
chmod +x create-k8s-nodes.sh
./create-k8s-nodes.sh
```

The output should be similar to the Kubernetes Master output.
**chapter-3/3.md**
# Verifying Kubernetes setup

Connect to the Kubernetes master via SSH:

```shell
ssh core@<master ip> -i <path to Private SSH key>
```

Kubernetes has a command line tool, kubectl, which can check whether Kubernetes is up and running. Just type:

```shell
kubectl cluster-info
```

The output will look like this:

```
Kubernetes master is running at http://localhost:8080
```

To verify that the Kubernetes nodes are connected to the master and working, use this command:

```shell
kubectl get nodes
```

The output should be:

```
NAME         LABELS                              STATUS
10.0.0.164   kubernetes.io/hostname=10.0.0.164   Ready
10.0.0.165   kubernetes.io/hostname=10.0.0.165   Ready
10.0.0.166   kubernetes.io/hostname=10.0.0.166   Ready
10.0.0.167   kubernetes.io/hostname=10.0.0.167   Ready
10.0.0.168   kubernetes.io/hostname=10.0.0.168   Ready
```


If you see all 5 nodes in the Ready state, you can go on to the next chapter.
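
A quick scripted version of this check counts the Ready lines. On the master you would pipe the real command, `kubectl get nodes | grep -c Ready`; the sketch below runs the same `grep` over captured sample output:

```shell
# Abridged sample of "kubectl get nodes" output, used here instead of a live cluster.
NODES='NAME         LABELS                              STATUS
10.0.0.164   kubernetes.io/hostname=10.0.0.164   Ready
10.0.0.165   kubernetes.io/hostname=10.0.0.165   Ready
10.0.0.166   kubernetes.io/hostname=10.0.0.166   Ready
10.0.0.167   kubernetes.io/hostname=10.0.0.167   Ready
10.0.0.168   kubernetes.io/hostname=10.0.0.168   Ready'
READY=$(echo "$NODES" | grep -c " Ready$")
echo "$READY"   # 5
```
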