Accelerating DevSecOps on AWS

Create secure CI/CD pipelines using Chaos and AIOps
Nikit Swaraj
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Rahul Nair
Publishing Product Manager: Meeta Rajani
Senior Editor: Sangeeta Purkayastha
Content Development Editor: Yasir Ali Khan
Technical Editor: Shruthi Shetty
Copy Editor: Safis Editing
Project Coordinator: Shagun Saini
Proofreader: Safis Editing
Indexer: Subalakshmi Govindhan
Production Designer: Shyam Sundar Korumilli
Senior Marketing Coordinator: Sanjana Gupta
Marketing Coordinator: Nimisha Dua
First published: April 2022
Production reference: 1060422
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80324-860-8
www.packt.com
To my father, Nagendra Ram.
I would have been a terrible software engineer if you had not bought me a desktop and taught me cut, copy, and paste in Windows 98.
– Nikit Swaraj
Nikit Swaraj is an experienced solution architect. He is well versed in the melding of development and operations to deliver efficient code. Nikit has expertise in designing, developing, and delivering enterprise-wide solutions that meet business requirements and enhance operational efficiency. As an AWS solution architect, he has plenty of experience in designing end-to-end IT solutions and leading and managing complete projects within time and budgetary constraints. He contributes to open source projects and has experience working with start-ups as well as enterprises, including those in the financial services industry and the public and government sectors. He holds various professional certifications from AWS, Red Hat, CNCF, and HashiCorp. He loves to share his experience with the latest technologies at AWS meetups.
When he's not in front of his computer, you might find him playing badminton or golf or trying out new restaurants in town. He enjoys traveling to new places around the world and learning about various cultures.
I must begin my acknowledgments by thanking a few people who have had a significant effect on my career. First and foremost, I want to thank my mentor and friend, Rahul Natarajan, who basically taught me AWS and continues to help me with architecture challenges whenever I get stuck. My former manager, Jason Carter, taught me a lot about application architecture and security. I'd also like to thank Stephen Brown and Gergely Varga for advancing my career by exposing me to AI and global reach. Finally, I want to thank my girlfriend, Lee Lee, who has supported me throughout this journey.
Julian Andres Forero is a DevOps consultant at Endava, with more than 7 years of experience in different private and public companies related to media, payments, education, and financial services. He holds a degree in systems engineering, along with professional certifications as a Professional and Associate Solutions Architect on Amazon Web Services and as a HashiCorp Terraform Associate. He has broad experience with cloud architectures and enterprise DevOps frameworks and helps companies embrace DevOps and site reliability engineering principles. He is the author of various academic articles and has spoken at multiple IT industry events. Outside of work, he is an amateur footballer and enjoys visiting new places around the world.
To my partner, for always being there for me, and supporting me when I need it the most. I really appreciate all the moments that we have shared together. I love you.
CI/CD has never been simple, but these days the landscape is more bewildering than ever, its terrain riddled with blind alleys and pitfalls that seem almost designed to trap the less-experienced developer. If you're determined enough to keep your balance on the cutting edge and are equipped with a resource like this book, though, the landscape of CI/CD is one that you will navigate with ease.
Accelerating DevSecOps on AWS will help you discover the most modern ways of building CI/CD pipelines with AWS by placing security checks, chaos experiments, and AIOps stages in the pipeline, taking you step by step from the basics right through to the most advanced topics in this varied domain.
This comprehensive guide wastes no time in covering the basics of CI/CD with AWS. Once you're all set with tools such as AWS CodeStar, Proton, CodeGuru, App Mesh, Security Hub, and CloudFormation, you'll dive into chaos engineering, the latest trend in testing the fault tolerance of your system using AWS Fault Injection Simulator. After that, you'll explore the advanced concepts of AIOps using AWS DevOps Guru and DevSecOps, two highly sought-after skill sets for securing and optimizing your CI/CD systems. The full range of AWS CI/CD features will be covered, including the Security Advisory plugin for IDEs, SAST, DAST, and RASP, giving you real, applicable expertise in the things that matter.
By the end of this book, you'll be confidently creating resilient, secure, and performant CI/CD pipelines using the best techniques and technologies that AWS has to offer.
This book is for DevOps engineers, engineering managers, cloud developers, and cloud architects. All you need to get started is basic experience with the software development life cycle, DevOps, and AWS.
Chapter 1, CI/CD Using AWS CodeStar, introduces the basic concepts of CI/CD and branching strategies; you will then create a basic pipeline using AWS CodeStar and enhance it by adding multiple stages, environments, and branching strategies. Doing this covers the entire AWS developer toolchain, including CodeCommit, CodeBuild, CloudFormation, and CodePipeline.
Chapter 2, Enforcing Policy as Code on CloudFormation and Terraform, walks through the concept of policy as code and its importance in security and compliance, and the stage of CI/CD at which infrastructure can be checked. You will use CloudFormation Guard to apply policies on an AWS CloudFormation template. After that, you will learn how to use AWS Service Catalog across multiple teams. You will also do hands-on implementation on Terraform Cloud and policy implementation using HashiCorp Sentinel.
Chapter 3, CI/CD Using AWS Proton and an Introduction to AWS CodeGuru, introduces the new AWS Proton service and how AWS Proton helps both developers and DevOps/infrastructure engineers with their work in software delivery. You will learn the basic blocks of the Proton service and create an environment template to spin up multiple infrastructure environments and service templates to deploy the service instance in the environment. This chapter will also walk you through the code review process and how to find a vulnerability or secret leak using AWS CodeGuru.
Chapter 4, Working with AWS EKS and App Mesh, guides you through the architecture and implementation of an AWS EKS cluster. It explains the importance of and need for the AWS App Mesh service mesh, and covers implementing features such as traffic routing and mutual TLS authentication, as well as using the X-Ray service for tracing.
Chapter 5, Securing Private EKS Cluster for Production, contains an implementation guide to set up a production-grade secure private EKS cluster. It covers almost all the important implementations on EKS, such as IAM Role for Service Account (IRSA), Cluster Autoscaler, EBS CSI, App Mesh, hardening using Kubescape, policy and governance using OPA Gatekeeper, and the backup and restore of a stateful application using Velero.
Chapter 6, Chaos Engineering with AWS Fault Injection Simulator, covers the concept of chaos engineering and when it is needed. It walks through the principles of chaos engineering and gives insights in terms of where it fits in CI/CD. You will learn how to perform chaos simulation using AWS FIS on an EC2 instance, Relational Database Service (RDS), and an EKS node.
Chapter 7, Infrastructure Security Automation Using Security Hub and Systems Manager, includes some important solutions to automate infrastructure security using AWS Security Hub and Systems Manager. The solutions include enforcing only running compliant images from ECR on an EKS cluster, config rule evaluation as an insight into Security Hub, and integrating Systems Manager with Security Hub to detect issues, create an incident, and remediate it automatically.
Chapter 8, DevSecOps Using AWS Native Services, walks you step by step through creating a DevSecOps CI/CD pipeline with a branching strategy using AWS native security services such as CodeGuru Reviewer and ECR image scanning. It includes the powerful combination of the developer toolchain, App Mesh, and Fault Injection Simulator. It also covers the canary deployment of microservices and analysis using Prometheus and Grafana.
Chapter 9, DevSecOps Pipeline with AWS Services and Tools Popular Industry-Wide, walks you through planning and creating a pipeline. It shows how to implement security at every stage of software delivery, starting from when you write code. It also shows the usage of the Snyk Security Advisory plugin in an IDE, git-secrets to scan for sensitive data such as keys and passwords, SAST using Snyk, DAST using OWASP ZAP, RASP using Falco, chaos simulation using AWS FIS, and AIOps using AWS DevOps Guru. It also covers operational activities such as showing a security posture and vulnerability findings using AWS Security Hub.
Chapter 10, AIOps with Amazon DevOps Guru and Systems Manager OpsCenter, introduces foundational artificial intelligence and machine learning concepts. It covers what AIOps is, why we need it, and how it applies to IT operations. You will learn about the AWS AIOps tool DevOps Guru and implement two use cases: identifying CPU, memory, and networking anomalies within an EKS cluster, and analyzing failure insights and remediation in a serverless application.
All the tools used were the latest versions available at the time of writing.
All the tools used in this book are open source or have a trial version that you can subscribe to.
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
It is important to have cloud, DevOps, or development work experience to understand the content of the book.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS. If there's an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Download the color images
We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781803248608_ColorImages.pdf.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "To verify the policy, we will issue a command in the EKS cluster to run the node:10 image."
A block of code is set as follows:
{
    "detail-type": ["Config Rules Compliance Change"],
    "source": ["aws.config"],
    "detail": {
        "messageType": ["ComplianceChangeNotification"]
    }
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
$ wget https://raw.githubusercontent.com/PacktPublishing/Accelerating-DevSecOps-on-AWS/main/chapter-07/ecr-compliance.yaml
Any command-line input or output is written as follows:
$ docker push <yourAWSAccount>.dkr.ecr.us-east-1.amazonaws.com/node:latest
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: "Select System info from the Administration panel."
Tips or Important Notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you've read Accelerating DevSecOps on AWS, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
This part includes chapters that cover how to create a CI/CD pipeline using AWS CodeStar with a branching strategy and adding multiple stages and environments. It covers how to leverage the AWS Proton service to create a CI/CD pipeline for applications and infrastructure at scale. It also covers how to avoid any secrets and vulnerabilities in code by integrating AWS CodeGuru Reviewer with CodeCommit. After that, it covers how to enforce policy on infrastructure code using CloudFormation Guard and HashiCorp Sentinel.
This section contains the following chapters:
Chapter 1, CI/CD Using AWS CodeStar

Chapter 2, Enforcing Policy as Code on CloudFormation and Terraform

Chapter 3, CI/CD Using AWS Proton and an Introduction to AWS CodeGuru

This chapter will first introduce you to the basic concepts of Continuous Integration/Continuous Deployment (or Continuous Delivery) (CI/CD) and a branching strategy. Then, we will implement basic CI/CD for a sample Node.js application using Amazon Web Services (AWS) CodeStar, which will deploy the application in Elastic Beanstalk. We will begin by creating a CodeStar project, then enhance it by adding develop and feature branches in a CodeCommit repository. We will also add a manual approval process as well as a production stage in CodePipeline, and spin up the production environment (by modifying a CloudFormation template) so that the production stage of the pipeline can deploy the application. After that, we will create two Lambda functions that validate the Pull Request (PR) raised from the feature branch to the develop branch by getting the status of the CodeBuild project. This entire activity will give you an overall idea of AWS Developer Tools (CodeCommit, CodeBuild, and CodePipeline) and how to implement a cloud-native CI/CD pipeline.
In this chapter, we are going to cover the following main topics:
Introduction to CI/CD, along with a branching strategy
Creating a project in AWS CodeStar
Creating feature and development branches, as well as an environment
Validating PRs/Merge Requests (MRs) into the develop branch from the feature branch via CodeBuild and AWS Lambda
Adding a production stage and environment

To get started, you will need an AWS account and the source code of this repository, which can be found at https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS/tree/main/chapter-01.
In this section of the chapter, we will dig into what exactly CI/CD is and why it is so important in the software life cycle. Then, we will learn about a branching strategy and how we use it in the source code repository to make software delivery more efficient, collaborative, and faster.
Before getting to know about CI, let's have a brief look at what happens in a software development workflow. Suppose you are working independently, and you have been asked to develop a web application that is a chat system. The first thing you will do is create a Git repository, write your code on your local machine, build the code, and run some tests. If it works fine in your local environment, you will then push it to a remote Git repository. After that, you will build this code for different environments (where the actual application will run) and put the artifact in the artifact registry. Finally, you will deploy that artifact to the application server where your application will be running.
Now, suppose the frontend of your application is not too good, and you want some help from your frontend developer. The frontend developer will clone the code repository, then contribute to the repository either by modifying the existing code or adding new code. After that, they will commit the code and push it into the repository. Then again, the same build and deploy steps will take place, and your application will be running with the new User Interface (UI).

Now, what if you and the frontend developer both want to enhance the application, you both make your own changes to the same file, and both try to push back to the repository? If there are no conflicts, your Git repository will allow you to update the repository, but if there are any conflicts, it will highlight them to you. Once your code repository is updated, you must again build the code and run the unit tests. If the tests find a bug, the build process will fail, and you or the frontend developer will need to fix the bug and run the build and unit tests again. Once they pass, you will put the build artifact into the artifact registry and later deploy it to the application server.

This whole manual process of building, testing, and making the artifact ready for deployment becomes troublesome and slow as your application grows and the number of collaborators increases, which in turn slows the deployment of your application. These problems of slow feedback and manual process will easily put the project off schedule. To solve this, we have the CI process.
CI is a process where all the collaborators/developers contribute their code, several times a day, to a central repository, which is further integrated with an automated system that pulls the code from the repository, builds it, runs unit tests, fails the build and gives feedback in case there are bugs, and prepares the artifact so that it is deployment-ready. The process is illustrated in the following diagram:
Figure 1.1 – CI process
CI makes sure that software components or services work together. The integration process should take place and complete frequently. This increases the frequency of developer code commits and reduces the chances of non-compatible code and redundant efforts. To implement a CI process, we need—at the very least—the following tools:
Version Control System (VCS)
Build tool
Artifact repository manager

While implementing a CI process, our code base must be under version control. Every change applied to the code base must be stored in a VCS. Once the code is version controlled, it can be accessed by the CI tool. The most widely used VCS is Git, and in this book, we will also be using Git-based tools. The next requirement for CI is a build tool, which compiles your code and produces the executable file in an automated way. The build tool depends on the technology stack; for instance, for Java, the build tool will be Maven or Ant, while for Node.js, it will be npm. Once an executable file is generated by the build tool, it is stored in the artifact repository manager. There are lots of tools available in the market, for example, Sonatype Nexus Repository Manager (NXRM) or JFrog Artifactory. We will be using the AWS CodeArtifact service. The whole CI workflow will be covered in detail after the Branching strategy (Gitflow) section.
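To make this concrete, here is a minimal command-line sketch of those three building blocks for a Node.js project. The CodeArtifact domain and repository names (my-domain, my-repo) are placeholders for illustration, not values from this book:

# 1. Version control: put the code base under Git
$ git init && git add . && git commit -m "Initial commit"
# 2. Build tool: install dependencies and run the unit tests
$ npm install && npm test
# 3. Artifact repository: authenticate npm against AWS CodeArtifact and publish
$ aws codeartifact login --tool npm --domain my-domain --repository my-repo
$ npm publish

In a CI tool, these same steps run automatically on every push instead of being typed by hand.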
CD is a process where the generated executable files or packages (in the CI process) are installed or deployed on application servers in an automated manner. So, CI is basically the first step toward achieving CD.
There is a difference between continuous deployment and delivery, but most of the time, you will be seeing or implementing continuous delivery, especially when you are in the financial sector or any critical business.
Continuous delivery is a process whereby, after all the CI steps of building and testing, deployment to the application server happens with human intervention. Human intervention means either clicking a deploy button in the build tool or approving the release via a Slack bot. Continuous deployment differs slightly: the deployment of a successful build to the application server takes place in an automated way, without human intervention.
The processes are illustrated in the following diagram:
Figure 1.2 – CD processes
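To make the human-intervention step concrete: in AWS CodePipeline, continuous delivery is typically modeled as a manual approval action placed before the production deploy stage. The following is a hedged sketch of approving such an action from the CLI; the pipeline, stage, and action names are placeholders, and the token comes from aws codepipeline get-pipeline-state:

# Approve the pending manual approval action so the deploy stage can run
$ aws codepipeline put-approval-result \
    --pipeline-name my-pipeline \
    --stage-name Approval \
    --action-name ManualApproval \
    --result summary="Looks good to deploy",status=Approved \
    --token <token-from-get-pipeline-state>

Remove the approval action from the pipeline, and the same release becomes continuous deployment.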
On reading up to this point, you must be thinking that the CI process is more complex than the CD process, but the CD process is trickier when deploying an application to the production server, especially when it is serving thousands to millions of end customers. Any bad experience with the application running on your production server may lose customers, which results in a loss for the business. For instance, suppose version 1 (v1) of your application is running live right now and your manager has asked you to deploy version 1.1, but during the deployment some problem occurred and v1.1 is not running properly; you then have to roll back to the previous version, v1. All of these things need to be planned and automated in a deployment strategy.
Some CD strategies used in DevOps methodologies are mentioned in the following list:
Blue-green deployment
Canary deployment
Recreate deployment
A/B testing deployment

Let's have a look at these strategies in brief.
A blue-green deployment is a deployment strategy or pattern where we can reduce downtime by having two production environments. It provides near-zero-downtime rollback capabilities. The basic idea of a blue-green deployment is to shift the traffic from the current environment to another environment. The environments will be identical and run the same application but will have different versions.
You can see an illustration of this strategy in the following diagram:
Figure 1.3 – Blue-green demonstration
In the preceding diagram, we can see that initially, the live application was running in the blue environment, which has App v1; later, it got switched to green, which is running the latest version of the application, App v2.
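One common way to implement this switch on AWS is to repoint an Application Load Balancer listener from the blue target group to the green one. A minimal sketch, assuming both target groups already exist; the ARNs shown are placeholders:

# Shift all live traffic from the blue (App v1) to the green (App v2) target group
$ aws elbv2 modify-listener \
    --listener-arn <listener-arn> \
    --default-actions Type=forward,TargetGroupArn=<green-target-group-arn>

Rolling back to App v1 is the same call with the blue target group's ARN, which is what gives this strategy its near-zero-downtime rollback.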
In a canary deployment strategy, applications or services are deployed in an incremental manner to a subset of users. Once this subset of users starts using the application or service, important application metrics are collected and analyzed to decide whether the new version is good to be rolled out to all users at full scale or needs to be rolled back for troubleshooting. All infrastructure in the production environment is updated in small phases (for example, 10%, 20%, 50%, 75%, 100%).
You can see an illustration of this strategy in the following diagram:
Figure 1.4 – Canary deployment phases
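With an Application Load Balancer, those incremental phases can be expressed as weighted forwarding between the old and new target groups. A hedged sketch for the first 10% phase (the ARNs are placeholders):

# Route 10% of traffic to the new version, 90% to the current one
$ aws elbv2 modify-listener \
    --listener-arn <listener-arn> \
    --default-actions 'Type=forward,ForwardConfig={TargetGroups=[{TargetGroupArn=<v1-tg-arn>,Weight=90},{TargetGroupArn=<v2-tg-arn>,Weight=10}]}'

Repeating the call with weights of 20/80, 50/50, and so on walks the rollout through the remaining phases; setting the new version's weight back to 0 rolls it back.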
Let's move on to the next strategy.
With this deployment strategy, we stop the older version of an application before deploying the newer version. For this deployment, downtime of service is expected, and a full restart cycle is executed.
You can see an illustration of this strategy in the following diagram:
Figure 1.5 – Recreate deployment steps
Let's have a look at the next deployment strategy.
A/B testing is a deployment strategy whereby we run different versions of the same application/services simultaneously, for experimental purposes, in the same environment for a certain period. This strategy consists of routing the traffic of a subset of users to a new feature or function, collecting their feedback and metrics, and then comparing these with the older version. After comparing the feedback, the decision-maker updates the entire environment with the chosen version of the application/services.
You can see an illustration of this strategy in the following diagram:
Figure 1.6 – A/B testing demonstration
So far, we have seen the deployment strategies, but we do not deploy the application to the production server immediately after the build artifact is ready from the CI process. We deploy and test the application in various environments and, after success, we deploy to the production environment. We will now see how application versions relate to branches and environments.
In the preceding two sections, we got to know about CI and CD, but it is not possible to have a good CI and CD strategy without a good branching strategy. However, what does branching strategy mean, and what exactly are branches?
Whenever a developer writes code in a local machine, after completing the code, they upload/push it to a VCS (Git). The reason for using a VCS is to store the code so that it can be used by other developers and can be tracked and versioned. When a developer pushes the code to Git for the first time, it goes to the master/main branch. A branch in Git is an independent line of development and it serves as an abstraction for the edit/commit/stage process. We will explore the Gitflow branching strategy with a simple example.
Suppose we have a project to implement a calculator. The calculator will have functions of addition, subtraction, multiplication, division, and average. We have three developers (Lily, Brice, and Geraldine) and a manager to complete this project. The manager has asked them to deliver the addition function first. Lily quickly developed the code, built and tested it, pushed it to the Git main/master branch, and tagged it with version 0.1, as illustrated in the following screenshot. The code in the main/master Git branch always reflects the production-ready state, meaning the code for addition will be running in the production environment.
Figure 1.7 – Master branch with the latest version of the code
Now, the manager has asked Brice to start the development of subtraction and multiplication functions as the major functionality for a calculator project for the next release and asked Geraldine to develop division and average functions as a functionality for a future release. Thus, the best way to move ahead is to create a develop branch that will be an exact replica of the working code placed in the master branch. A representation of this branch can be seen in the following screenshot:
Figure 1.8 – Checking out the develop branch from master
So, once a develop branch gets created out of the master, it will have the latest code of the addition function. Now, since Brice and Geraldine must work on their tasks, they will create feature branches out of the develop branch. Feature branches are used by developers to develop new features for upcoming releases. A feature branch branches off from the develop branch and must be merged back into the develop branch once the development of the new functionality is complete, as illustrated in the following screenshot:
Figure 1.9 – Creating feature branches from the develop branch
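In Git commands, creating the develop branch and branching a feature off it looks roughly like this (the branch names are chosen for this example):

# Create the develop branch from master (done once) and publish it
$ git checkout master
$ git checkout -b develop
$ git push -u origin develop
# Each developer starts a feature branch from develop
$ git checkout develop
$ git checkout -b feature/subtraction-multiplication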
While Brice (responsible for subtraction and multiplication) and Geraldine (responsible for division and average) have been working on their functionality and committing to their branches, the manager has found a bug in the current live production environment and has asked Lily to fix it. It is never good practice, and not at all recommended, to fix a bug directly in a production environment. So, Lily will have to create a hotfix branch from the master branch, fix the bug in code, then merge it into the master branch as well as the develop branch. Hotfix branches are required to take immediate action on an undesired state of the master branch. A hotfix branch must branch off from the master branch and, after the bug is fixed, must be merged into both the master and develop branches so that the current develop branch does not carry the bug and can be deployed smoothly in the next release cycle. Once the fixed code gets merged into the master branch, it gets a new minor version tag and is deployed to the production environment.
This process is illustrated in the following screenshot:
Figure 1.10 – Checking out hotfix from master and later merging into the master and develop branches
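The hotfix flow in the diagram translates to roughly the following commands; the v0.2 tag is illustrative, following on from the v0.1 tag used earlier:

# Branch off master, fix the bug, then merge back into master and develop
$ git checkout master
$ git checkout -b hotfix/addition-bug
# ...edit the code, then commit the fix...
$ git checkout master && git merge --no-ff hotfix/addition-bug
$ git tag -a v0.2 -m "Fix addition bug"
$ git checkout develop && git merge --no-ff hotfix/addition-bug
$ git branch -d hotfix/addition-bug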
Now, once Brice completes his development (subtraction and multiplication), he will merge his code from the feature branch into the develop branch. But before he merges, he needs to raise a PR/MR. As the name implies, this requests that the maintainer of the project merge the new feature into the develop branch after reviewing the code. All companies have their own requirements and policies enforced before a merge. Some basic requirements for getting a feature merged into the develop branch are that the feature branch should build successfully without any failures and must have passed a code quality scan.
You can see an example of a PR/MR in the following diagram:
Figure 1.11 – PR raised from the feature branch, post-approval, merged into the develop branch
Once Brice's code (subtraction and multiplication feature) is accepted and merged into the develop branch, then the release process will take place. In the release process, the develop branch code gets merged to the release branch. The release branch basically supports preparation for a new production release. The code in the release branch gets deployed to an environment that is similar to a production environment. That environment is known as staging (pre-production). A staging environment not only tests the application functionality but also tests the load on the server in case traffic increases. If any bugs are found during the test, then these bugs need to be fixed in the release branch itself and merged back into the develop branch.
The process is illustrated in the following screenshot:
Figure 1.12 – Merging the code into release branch from develop branch, fixing bugs (if any), and then merging back into develop branch
Once all the bugs are fixed and testing is successful, the release branch code will get merged into the master branch, then tagged and deployed to the production environment. So, after that, the application will have three functions: addition, subtraction, and multiplication. A similar process will take place for the new features of division and average developed by Geraldine, and finally, the version will be tagged as 1.1 and deployed in the production environment, as illustrated in the following diagram:
Figure 1.13 – Merging code from the release branch into the master branch
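As a rough command-line sketch of that release flow (the v1.0 tag for the subtraction/multiplication release is an assumption; the text only specifies 0.1 and 1.1):

# Cut a release branch from develop, stabilize it in staging, then ship it
$ git checkout develop
$ git checkout -b release/1.0
# ...deploy to staging, fix any bugs on release/1.0, merge fixes back to develop...
$ git checkout master && git merge --no-ff release/1.0
$ git tag -a v1.0 -m "Subtraction and multiplication release"
$ git checkout develop && git merge --no-ff release/1.0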
These are the main branches in the whole life cycle of an application:
Master
Develop

But during a development cycle, the supporting branches, which do not persist once the merge finishes, are shown here:
Feature
Hotfix
Release

Since we now understand branching and CI/CD, let's club all the pieces together, as follows:
Figure 1.14 – CI/CD stages
So, in the preceding diagram, we can see that when a developer finishes their work in a feature branch, they raise a PR to the develop branch and the CI pipeline gets triggered. The CI pipeline is nothing but an automated flow or process in a CI tool such as Jenkins or AWS CodePipeline. This CI pipeline will validate whether the feature branch meets all the criteria to be merged into the develop branch. If the CI pipeline runs and builds successfully, then the lead maintainer of the project will merge the feature branch into the develop branch, where another automated CI pipeline will be triggered and will deploy the new feature to the development environment. This whole process is known as CI (colored in blue in the preceding diagram). Post-deployment in the development environment, some automated tests run on top of it. If everything goes well and all the metrics look good, then the develop branch gets merged into the staging branch. During this merge process, another automated pipeline gets triggered, which deploys the artifact (uploaded during the develop branch CI process) into the staging environment. The staging environment is generally a close replica of the production environment, where other tests such as Dynamic Application Security Testing (DAST) and load/stress testing take place. If all the metrics and data from the staging environment look good, then the staging branch code gets merged into the master branch and tagged as a new version.
If the maintainer deploys the tagged artifact in the production environment, then it is considered as continuous delivery. If the deployment happens without any intervention, then it is considered as continuous deployment.
So far, we have learned how application development and deployment take place. The preceding concepts of CI/CD and branching strategies are quite important to be familiar with in order to move ahead and understand the rest of the chapters. In the next section, we will learn about the AWS-managed CI/CD service CodeStar and will use it to create and deploy a project in development, staging, and production environments.
In this section of the chapter, we will understand the core components of AWS CodeStar and will create a project and replace the existing project code with our own application code.
AWS CodeStar is a managed service by AWS that enables developers to quickly develop, build, and deploy an application on AWS. It provides all the necessary templates and interfaces to get you started. This service essentially gives you an entire CI/CD toolchain within minutes using a CloudFormation stack. This service is quite good for any start-up company that wants to focus only on business logic and does not want to spend any time on setting up an infrastructure environment. It is so cloud-centric that it is integrated with the Cloud9 editor to edit your application code and CloudShell to perform terminal/shell-related actions. AWS CodeStar is a free service, but you will pay for the other resources that get provisioned with it; for example, if you use this service to deploy your application on an Elastic Compute Cloud (EC2) instance, then you will not pay to use AWS CodeStar but will pay for the EC2 instance. AWS CodeStar is tightly integrated with the AWS developer tools mentioned next.
AWS CodeCommit is a VCS that is managed by AWS, where we can privately store and manage code in the cloud and integrate it with AWS. It is a highly scalable and secure VCS that hosts private Git repositories and supports the standard functionality of Git, so it works very well with your existing Git-based tools.
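For example, creating a CodeCommit repository and cloning it over HTTPS can be done from the CLI as follows; the repository name is a placeholder, and the clone URL follows CodeCommit's standard pattern for the us-east-1 Region:

# Create a private Git repository in CodeCommit and clone it locally
$ aws codecommit create-repository \
    --repository-name my-app \
    --repository-description "Sample application repository"
$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-app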
When it comes to a cloud-hosted and fully managed build service that compiles our source code, runs unit tests, and produces artifacts that are ready to deploy, AWS CodeBuild comes into the picture.
AWS CodeDeploy is a deployment service provided by AWS that automates the deployment of an application to an Amazon EC2 instance, Elastic Beanstalk, and on-premises instances. It provides the facility to deploy unlimited variants of application content such as code, configuration, scripts, executable files, multimedia, and much more. CodeDeploy can deploy application files stored in Amazon Simple Storage Service (S3) buckets, GitHub repositories, or Bitbucket repositories.
AWS CodePipeline comes into the picture when you want to automate the whole software release process. AWS CodePipeline is a CD and release automation service that helps smoothen deployment. You can quickly configure the different stages of a software release process. AWS CodePipeline automates the steps required to release software changes continuously.
Before jumping into creating a project in AWS CodeStar, note the following points:
The project template that we will be selecting is a Node.js web application.
We will be using Elastic Beanstalk as the application compute infrastructure.
Create a Virtual Private Cloud (VPC) with a private subnet and an EC2 key pair that we will be using later.
If you are using Elastic Beanstalk for the first time in your AWS account, make sure the t2.micro instance type has enough Central Processing Unit (CPU) credits (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html).

Let's get started by following these next steps:
Log in to the AWS Management Console by going to https://aws.amazon.com/console/.

Go to the search box and search for AWS CodeStar, then click on the result. This will redirect you to the AWS CodeStar intro/home page.

Click on Create project, and you will be redirected to the Choose a project template page, where you will see information on how to create a service role. Click on Create service role, as illustrated in the following screenshot:

Figure 1.15 – Service role prompt
Post that, you will see a green Success message, as illustrated in the following screenshot:

Figure 1.16 – Service role creation success message
Click on the dropdown of the Templates search box, then click AWS Elastic Beanstalk under AWS service, Web application under Application type, and Node.js under Programming language, as illustrated in the following screenshot:

Figure 1.17 – Selecting service, application type, and programming language
You will see two search results, Node.js and Express.js. We will go ahead with Node.js by clicking on the radio button and then on Next, as illustrated in the following screenshot:

Figure 1.18 – Selecting Node.js web application template
We will be redirected to another page called Set up your project, where we will enter northstar in the Project name field. This will auto-populate the Project ID and Repository name fields. We will be using CodeCommit for the code repository. In EC2 Configuration, we will go ahead with t2.micro and will select the available VPC and subnet. After that, we will select an existing key pair that we already have access to and then click Next, as illustrated in the following screenshot:

Figure 1.19 – CodeStar project setup details
Post that, we will review all the information related to the project and proceed to click Create project. This process will take almost 10-15 minutes to set up the CI/CD toolchain and Elastic Beanstalk and deploy the sample Node.js application in Elastic Beanstalk. During this process, we can go to CloudFormation, search for the awscodestar-northstar stack, and see all the resources that are getting provisioned, as illustrated in the following screenshot:

Figure 1.20 – CodeStar resources creation in CloudFormation page
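If you prefer the terminal to the console, the same resource list can be pulled with the CLI, using the stack name shown above:

# List the resources provisioned by the CodeStar CloudFormation stack
$ aws cloudformation describe-stack-resources \
    --stack-name awscodestar-northstar \
    --query 'StackResources[].[ResourceType,LogicalResourceId,ResourceStatus]' \
    --output table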
We can also have a look at the Elastic Beanstalk resource by going to the Environments view of the Elastic Beanstalk console, as illustrated in the following screenshot:

Figure 1.21 – Elastic Beanstalk page showing northstarapp
After 10-15 minutes, we will keep monitoring the project main page. Once the View application button is enabled, it means that the creation of the project, including the CI/CD toolchain and environment infrastructure, and the deployment of the application have been completed. We can access the application by clicking on the View application button. This will redirect us to the sample Node.js web application, as illustrated in the following screenshot:

Figure 1.22 – Default Node.js web application page
Now, before replacing the sample application with our own application, let's get to know what exactly happened at the backend, as follows:
CodeStar triggers an AWS CloudFormation stack to create an entire CI/CD toolchain and workload infrastructure.
The toolchain includes the following:
A CodeCommit repository with the master branch containing a sample Node.js application
A CodeBuild project with a preconfigured environment to run the build
CodePipeline to trigger the build and deploy the application
The workload infrastructure includes Elastic Beanstalk with one EC2 instance.
IAM roles with certain permissions that allow CloudFormation to perform actions on other services.

To replace the sample application with our own code, perform the following steps:
The sample code provided by AWS CodeStar resides in AWS CodeCommit, but we are not going to edit the application code in CodeCommit directly; instead, we will use the AWS Cloud9 Integrated Development Environment (IDE) tool. Switch to the CodeStar console from Elastic Beanstalk. We need to create an IDE environment by going to the IDE tab, as illustrated in the following screenshot, and clicking on Create environment:

Figure 1.23 – CodeStar project page at the IDE tab
The second page will ask you for the environment configuration for the Cloud9 IDE. This IDE environment will stop automatically if it is in an idle state for 30 minutes, to save on cost. Once you fill in all the details, click on Create environment, as illustrated in the following screenshot:

Figure 1.24 – Cloud9 environment configuration
After 10-15 minutes, you will be able to see the Cloud9 IDE environment available for you to open and start using. Click on Open IDE to get to the Cloud9 IDE. The following screenshot shows the Cloud9 IDE. This IDE will automatically clone the code from CodeCommit and show it in the Cloud9 explorer. It also comes with its own shell:

Figure 1.25 – Cloud9 IDE console
Go to the shell of the editor and type the following commands. These commands will set the Git profile with your name and then clone our own application code:

$ git config --global user.name <your username>
$ git clone https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS.git
Once you clone the application code, you will be able to see the Accelerating-DevSecOps-on-AWS folder on the left side, in the Cloud9 explorer, as illustrated in the following screenshot. That folder contains all the application code and some Lambda code that we will use in this chapter:

Figure 1.26 – Cloud9 IDE with the cloned Git folder
Now, type the following commands into the shell to replace the application code:

$ cd northstar
$ rm -rf package.json app.js public tests
$ cd ../Accelerating-DevSecOps-on-AWS/chapter-01
$ cp -rpf server.js source static package.json ../../northstar
After that, go to the editor and edit the buildspec.yml file. Comment out line 16, where it uses the npm test command, because our application does not include a test case for now. After that, push the code into CodeCommit by typing the following commands:

$ cd ../../northstar
$ git add .
$ git commit -m "Replacing application code"
$ git push origin master
The moment we push the code into CodeCommit, CodePipeline will trigger automatically. This will pull the latest code from CodeCommit and pass it to CodeBuild, where the build will take place using buildspec.yml.
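If you want to follow that run from the terminal instead of the console, you can poll the pipeline state; the pipeline name below is an assumption based on the project name, so check the CodePipeline console for the exact name:

# Show the current status of each stage in the project's pipeline
$ aws codepipeline get-pipeline-state --name northstar-Pipeline \
    --query 'stageStates[].[stageName,latestExecution.status]' --output table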