
  Lab 5: Pager Duty Solution Demo 1 💻

Welcome to lab number five, Pager Duty. It's called Pager Duty because it will pretty much eliminate the need for anyone to wake up and be on pager duty, because we will implement an Auto Scaling group. From the home page of your AWS web console, go to "EC2". Then, on the left side, find "Launch Configurations" under "Auto Scaling" and click "Create Auto Scaling group". The very first time you create one, it will explain how things work. Before we can create the Auto Scaling group itself, we'll create a launch configuration. A launch configuration is basically the exact same set of steps you went through when creating individual instances: you tell AWS what kind of instance to create. Pick Amazon Linux; "t2.micro" will be enough. That's essentially the only difference between a launch configuration and creating an individual instance: the steps are the same, but here they're saved as a reusable template.
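If you ever want to script this step instead of clicking through the console, here is a minimal sketch (not shown in the lecture) of looking up a recent Amazon Linux 2 AMI ID with the AWS CLI; the name filter is an assumption, and the console simply preselects an equivalent image for you.

```bash
# Hypothetical lookup of a recent Amazon Linux 2 AMI in the current region
# (assumed filter pattern; the console picks an equivalent image for you).
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2" "Name=state,Values=available" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text
```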

"my-slow-node", it's slow Node because in the Node server, in the Hello Node server I will intentionally put some delay. No, we don't want to make it a Spot instance. IAM role, no. And then, in the "Advanced Settings", we want to paste this user data. Let me quickly show you what we're doing here. Again, we are logging the files in a new file, "user-data.log", just for easier debugging. And then, we are installing Node.js using this curl command and the yum installer. Then, we're installing pm2 globally. And then, I'm puling the code... This is my code, so let me show it to you in the new browser tab. It's publicly available to everyone. So, in this Node.js server, I have this for loop which will basically slow down our server and slow down our responses. Why? Because we want to test the auto scaling and we want to actually trigger some of the CPU usage thresholds. We want to trigger some of the alerts, that's why. Because node is pretty efficient, without this for loop it will take you a lot of stress-testing to trigger the alert.

Okay, then we have this option: "Only assign a public IP address to instances launched in the default VPC and subnet". Yes, let's keep it public; let's keep the default. Storage: keep the default. Security groups: let's keep it open. Then we click "Create Launch Configuration". So that's step number one, creating a launch configuration. We don't need an SSH key; I've tested my user data, so it should just work. Now we're back in the Auto Scaling group wizard, which is step number two. As you can see, it automatically populated the name of the launch configuration we just created. So let's come up with a good, memorable name: "my-auto-group-for-slow-node". We will start with just one instance.
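For completeness, a rough AWS CLI equivalent of this console step could look like the following sketch; the launch configuration name comes from the lecture, but the AMI ID, security group name, and user-data file are placeholders (note there is no --key-name, since we skipped the SSH key).

```bash
# Hypothetical CLI equivalent of the console step above (IDs and names are placeholders).
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-slow-node \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --associate-public-ip-address \
  --security-groups my-open-sg \
  --user-data file://user-data.sh
```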

We can use a new VPC if you want; there's a button to create one. I don't have any VPCs right now. We can also launch into particular subnets, which matters more once the Auto Scaling group has multiple instances. With just one instance, it could land in either of the two subnets; but as you start adding more and more instances, they will be spread across the different subnets. Under "Advanced Details", you can also configure load balancing, health checks, and some other parameters at this point; we're going to move on. On the "Configure scaling policies" screen, we will set up two policies.
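Again, as a hedged sketch rather than the exact lab steps, the Auto Scaling group created in this wizard roughly corresponds to a CLI call like the one below; the group and launch configuration names follow the lecture, while the subnet IDs and the maximum size are placeholders and assumptions. The two scaling policies mentioned above would then be added on the next screen of the wizard (or, on the CLI, with aws autoscaling put-scaling-policy).

```bash
# Hypothetical CLI equivalent of the Auto Scaling group wizard (subnet IDs and max size are placeholders).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-auto-group-for-slow-node \
  --launch-configuration-name my-slow-node \
  --min-size 1 \
  --max-size 2 \
  --desired-capacity 1 \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222"
```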
