Continuous integration and delivery are part of DevOps processes and, these days, of most software projects. In my previous blog post, I reviewed the problem and why infrastructure as code (IaC) matters; the analysis included the architecture diagram and the Azure components. In this blog post, as the continuation, you can read and learn how to implement Azure infrastructure using Terraform and Pipelines as part of your CI/CD in Azure DevOps. This blog post is a complete technical guide.

Terraform

Terraform is an infrastructure-as-code tool for managing and developing cloud components. It is not tied to a specific cloud ecosystem, which makes it popular among developers working across ecosystems. Terraform uses various providers; for the Microsoft cloud, it uses the Azure provider (azurerm) to create, modify and delete Azure resources. As explained in the previous blog post, you need the Terraform CLI installed on the environment and an Azure account to deploy your resources. To verify the installation, run "terraform --version" in a shell terminal to view the installed version of the CLI.

Terraform uses a declarative state model: the developer describes the desired state of the infrastructure, and Terraform takes care of the rest to achieve it. The workspace consists of one or more .tf files; the working directory also contains a hidden .terraform folder with settings and plugins. A basic Azure provider block with the subscription selection and a resource group block looks like this in main.tf:

provider "azurerm" {
    version = "~>1.32.0"
    use_msi = true
    subscription_id = "xxxxxxxxxxxxxxxxx"
    tenant_id       = "xxxxxxxxxxxxxxxxx"
}
resource "azurerm_resource_group" "rg" {
    name     = "myExampleGroup"
    location = "westeurope"
}

The provider needs to authenticate to Azure before it can provision the infrastructure. At the moment there are four authentication methods:

  • The Azure CLI
  • A Managed Service Identity (MSI)
  • A service principal with a client certificate
  • A service principal with a client secret
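
The pipeline later in this post uses the service principal with a client secret model: the azurerm provider reads the credentials from the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID and ARM_SUBSCRIPTION_ID environment variables, so nothing sensitive needs to be hard-coded. A minimal sketch:

# With service principal authentication supplied through environment
# variables, the provider block itself can stay minimal.
provider "azurerm" {
  version = "~> 1.32.0"
}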

Azure DevOps Pipelines Structure

The pipeline definition lives in a YAML file, which includes one or many stages of the CI/CD process. It's worth mentioning that Azure Pipelines does not currently support all YAML features. This blog post is not about YAML itself; to read more, please refer to "Learn YAML in Y minutes". In the structure of the YAML build file, a stage is the top level of a specific process and includes one or many jobs, each of which in turn has one or many steps. Here is an example of the pipeline structure:

  • Stage A
    • Job 1
      • Step 1.1
      • Step 1.2
    • Job 2
      • Step 2.1
      • Step 2.2
  • Stage B
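
Expressed as pipeline YAML, the same structure looks roughly like this (the stage, job and step names are placeholders):

stages:
  - stage: A
    jobs:
      - job: Job1
        steps:
          - script: echo "step 1.1"
          - script: echo "step 1.2"
      - job: Job2
        steps:
          - script: echo "step 2.1"
          - script: echo "step 2.2"
  - stage: B
    jobs:
      - job: Job3
        steps:
          - script: echo "step 3.1"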

Terraform + Azure DevOps Pipelines

Now we have the basic understanding needed to implement Azure infrastructure using Terraform and Azure DevOps Pipelines. With the knowledge of Terraform definition files and the YAML file, it is time to jump to the implementation. In the root folder of my GitHub InfraProvisioning code repository, there are three folders and the azure-pipelines.yml file. The YAML file is the build definition; it references the subfolders, which contain the job and step definitions for each stage. The stages in the YAML file correspond to the validate, plan and apply steps required by the Terraform provisioning model.

variables:
  project: shared-resources

stages:
  - stage: validate
    displayName: Validate
    variables:
      - group: shared
    jobs:
      - template: pipelines/jobs/terraform/validate.yml

  - stage: plan_dev
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    displayName: Plan for development
    variables:
      - group: shared
      - group: development
    jobs:
      - template: pipelines/jobs/terraform/plan.yml
        parameters:
          workspace: dev

  - stage: apply_dev
    displayName: Apply for development
    variables:
      - group: shared
      - group: development
    jobs:
      - template: pipelines/jobs/terraform/apply.yml
        parameters:
          project: ${{ variables.project }}
          workspace: dev

Each stage in the azure-pipelines.yml file refers to sub .yml files, which are:

  • jobs/terraform/validate.yml: downloads the required version of Terraform, installs it and validates the Terraform configuration (see the sketch after this list).
  • jobs/terraform/plan.yml: reads the existing infrastructure state, compares it with the changes, generates the modified infrastructure plan and publishes the plan for the next stage.
  • jobs/terraform/apply.yml: downloads the plan file, extracts it, applies the changes and saves the state back to the storage account for the next run and comparison.
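
For illustration, a validate template could look something like the sketch below. This is a hedged example, not the exact file from the repository: the download-and-install script and the terraform working directory are assumptions.

jobs:
  - job: validate
    displayName: Validate Terraform configuration
    steps:
      - script: |
          curl -sSLo terraform.zip "https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_linux_amd64.zip"
          unzip -o terraform.zip -d "$(Agent.ToolsDirectory)"
          echo "##vso[task.prependpath]$(Agent.ToolsDirectory)"
        displayName: Install Terraform $(TERRAFORM_VERSION)
      - script: |
          terraform init -backend=false
          terraform validate
        workingDirectory: terraform
        displayName: Validate the configuration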

One more thing: in my previous blog post, I explained how Terraform uses blob storage to save the state files. Include the following job in your build definition if you want to create those initial Azure resources automatically; you can comment the step out once you have the required blob storage. After creating the storage account, create a new blob container inside it and note the access key. You will need these values later when adding all the variables to the Azure DevOps environment.

jobs:
  - job: runbash
    steps:
      - task: Bash@3
        inputs:
          targetType: 'filePath' # Options: filePath, inline
          filePath: ./tools/create-terraform-backend.sh
          arguments: dev
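
Once the storage account exists, main.tf can point Terraform at it with a backend block along these lines (the account and container names are illustrative, and the access key is passed in from the pipeline variables rather than hard-coded):

terraform {
  backend "azurerm" {
    storage_account_name = "terraformstatedev" # illustrative name
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}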

Settings in Azure DevOps

Most of the environment variables, such as the Azure Resource Manager values, are defined in variable groups in the Library section, found under Pipelines in the left navigation. The idea is to support as many environments as necessary in different subscriptions. If you inspect the apply.yml file, you can find the following variables:

  • ARM_CLIENT_ID: $(ARM_CLIENT_ID)
  • ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
  • ARM_TENANT_ID: $(ARM_TENANT_ID)
  • ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
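
These entries map the variable-group values into the environment of the step that runs Terraform, so the azurerm provider can authenticate as the service principal. A simplified excerpt (the real template also restores the published plan artifact first):

steps:
  - script: terraform apply tfplan
    workingDirectory: terraform
    displayName: Apply the planned changes
    env:
      ARM_CLIENT_ID: $(ARM_CLIENT_ID)
      ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
      ARM_TENANT_ID: $(ARM_TENANT_ID)
      ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)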

To implement Azure infrastructure using Terraform and Pipelines, we need to create an application in Azure Active Directory so that Azure DevOps can access our resources in Azure. Follow these steps to create the application:

  1. Navigate to the Azure Portal and choose your Active Directory from the navigation.
  2. Under the AAD, choose App registrations and create a new application. You can name it TerraformAzureDevOps.
  3. From the main page of the application, copy the Application (client) ID and the Directory (tenant) ID. We will need these values later.
  4. Choose Certificates & secrets from the navigation and create a new client secret. Copy the value before changing the view, because it is shown only once.
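
If you prefer the command line, the Azure CLI can create the application, the service principal and a client secret in one step (the display name is just the example above):

# The output fields appId, password and tenant map to ARM_CLIENT_ID,
# ARM_CLIENT_SECRET and ARM_TENANT_ID respectively.
az ad sp create-for-rbac --name TerraformAzureDevOps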

Back in Azure DevOps, under the Library section, we have to create the following variable groups with the following variables:

Name: Development

  • ARM_CLIENT_ID: [The application ID we created in the AAD]
  • ARM_CLIENT_SECRET: [The secret from the AAD]
  • ARM_SUBSCRIPTION_ID: [The Subscription ID from Azure]
  • TERRAFORM_BACKEND_KEY: [The access key of the storage account created by the create-terraform-backend.sh script]
  • TERRAFORM_BACKEND_NAME: [The name of the blob container created by the create-terraform-backend.sh script]
  • WORKSPACE: [Your choice of name, e.g. Dev]

Name: Shared

  • ARM_TENANT_ID: [The AAD Id]
  • TERRAFORM_VERSION: 0.12.18

Under each variable group, the "Allow access to all pipelines" option should be on!
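
For repeatable setups, the same groups can also be created with the azure-devops extension of the Azure CLI. A sketch for the Shared group, assuming the organization and project defaults are already configured:

az pipelines variable-group create \
  --name Shared \
  --variables ARM_TENANT_ID=<tenant-id> TERRAFORM_VERSION=0.12.18 \
  --authorize true # the CLI equivalent of "Allow access to all pipelines"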

Implementing Azure Infra Modifications

Once you have the build definition up and running in your Azure DevOps environment, all you have to do in the future is edit the terraform/main.tf file to manage your Azure infrastructure.
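
For example, adding a storage account to the environment is just a new block in terraform/main.tf, which goes through the same plan and apply stages (the names below are illustrative):

resource "azurerm_storage_account" "assets" {
  name                     = "myappassetsdev" # must be globally unique
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}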

Implementing Azure infrastructure using Terraform and Pipelines will save you a lot of time and money, not to mention improve the quality, maintainability and security of the environment.

You can find the complete solution and source files in my GitHub repository. Lastly, I want to give credit to my colleague Antti Kivimäki from Futurice, who has helped my team with difficult Terraform tasks.

The size and complexity of cloud infrastructure have a direct relationship with the management, maintainability and cost control of cloud projects. It might be easy to create resources using the Azure Portal or the CLI, but when there is a need to update properties, change plans, upgrade services or re-create all services in new regions, the situation gets trickier. The combination of Azure DevOps Pipelines and Terraform brings an enterprise-level solution to the problem.

The Infrastructure Problem

My previous blog post introduced a simple SaaS architecture that included App Services, a SQL database, a storage account and background processes using Azure Functions. But why should we create automation for such a small environment?

In my project, the need arose as I was creating background processes for the SaaS platform. Let's assume we have three Azure Functions to send emails, send SMS messages and modify user pictures. Each Function starts from a storage account queue trigger. Now suppose each Function has to use the API to write values to the database and, for some reason, the API path changes. It would be insane to manually update the URL of the new endpoint in three places, not to mention if the service runs in three Azure regions; no one should update any value manually in a production environment, and never without an approval process.

Azure Infrastructure using Terraform

There are a few ways to get infrastructure as code with Azure DevOps Pipelines. You can always export Azure resource ARM templates and create the pipeline based on them, but I found that complicated and time-consuming. The second choice was to use the Azure CLI in pipelines to create resources, but the problem is maintenance: the more resources you have, the more laborious managing the script becomes.

The third option was to use third-party tools, and to be honest, I fell in love with Terraform. The tool has providers for the major clouds, including, of course, Azure. The easiest way to start with the tool is to install the CLI on your computer and follow the twelve-section learning path. Let's add the supplementary parts to the existing architecture:

[Architecture diagram: Azure infrastructure using Terraform and Pipelines]

The explanations for items one to five are in my previous blog post, so let's concentrate on the new parts.

The New Components and the Build

Azure infrastructure using Terraform and Pipelines requires knowledge of Terraform, and at this stage I assume that you are familiar with the Terraform init, plan and apply terms. When running Terraform locally, all configuration and metadata files are stored on your computer's hard disk. By moving the build to Azure DevOps Pipelines, we need somewhere else to store the Terraform-generated metadata and the infrastructure code files.
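
As a quick refresher, the local workflow consists of three commands:

terraform init              # downloads the providers and configures the backend
terraform plan -out=tfplan  # computes the change set against the current state
terraform apply tfplan      # applies exactly the reviewed plan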

Git repositories are a perfect solution for storing the Terraform infrastructure code files (.tf) and any other tools that are part of the pipeline. In my project, I created a separate repository for the infrastructure and named it InfraProvisioning.

To save the Terraform state and metadata files, we need storage other than the Git repository. Each Terraform execution compares the incoming changes with the existing infrastructure state and overwrites it. Azure Blob Storage is a perfect location for these files; the resource is marked as number six in the architecture diagram. But how should we automatically create this storage and make it part of the automated infrastructure?

I have created a Bash script, available for download from my GitHub project. The script logs in to Azure using the CLI and creates a shared resource group, a storage account and blob storage. The Bash file is part of the build process and is parametrised, but all parameters can be replaced with hard-coded values in the script. The Bash file is the initial part of the pipeline, and how to implement the pipeline is the subject of my next blog post.
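
For illustration, the core of such a script could look like the minimal sketch below. The resource names and the environment parameter handling are assumptions, not the exact contents of the repository script, and the az login step is omitted:

#!/usr/bin/env bash
set -euo pipefail

ENVIRONMENT="${1:-dev}"                        # passed from the pipeline, e.g. "dev"
RESOURCE_GROUP="shared-rg"                     # illustrative name
STORAGE_ACCOUNT="terraformstate${ENVIRONMENT}" # illustrative name

# Assumes az login has already happened, e.g. as a service principal.
az group create --name "$RESOURCE_GROUP" --location westeurope
az storage account create --name "$STORAGE_ACCOUNT" \
  --resource-group "$RESOURCE_GROUP" --sku Standard_LRS
az storage container create --name tfstate --account-name "$STORAGE_ACCOUNT"

# Print the access key so it can be stored as TERRAFORM_BACKEND_KEY.
az storage account keys list --resource-group "$RESOURCE_GROUP" \
  --account-name "$STORAGE_ACCOUNT" --query '[0].value' --output tsv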

The mindset for start-ups is to keep costs down and develop fast with quality. Therefore, the idea is to have continuous improvement loops and publish a minimum viable product (MVP) version as soon as possible. Microsoft Azure provides services that are essential for Software as a Service (SaaS) products. Most importantly, and fortunately, most of these services have a free plan to kick-start the development project. In this blog post, I'll review these free Azure essential services in a SaaS architecture.

During the 14 years of my career, I have worked with customers from different industry sectors and various project types. Most of them were enterprise-grade business-to-business (B2B) solutions, so my experience with business-to-consumer (B2C) products is quite narrow. As explained in my previous blog post, the recent Azure certification exams are demanding, and studying requires a lot of reading and hands-on training. During my studies, the resulting product caught my attention and sparked my interest in developing it further. Let's have a look at the SaaS high-level architecture in its purest form.

Simple SaaS Architecture

The illustration represents the free Azure services in a service-oriented SaaS architecture. These services are:

  1. App Service is a Platform as a Service offering and the best solution to host the front-end layer and UI of the product. A UI built with any best-of-breed front-end framework like Angular or React can be up and running on Azure with a few clicks. The App Service can be scaled as demand grows, but you can kick-start the project with the free plan (see the Terraform sketch after this list).
  2. The App API is based on the App Service platform and acts as the service layer of the product. The API can be developed with any popular back-end language like .NET Core, Node.js or Go. The service can be scaled up on demand, and it has many other useful features, like hosting Docker containers to serve the API. The App Service environment can be hosted on either Windows or Linux.
  3. Azure SQL is also a Platform as a Service product and should not be confused with the self-hosted Microsoft SQL Server. The Azure SQL database does not require a SQL Server licence; you pay based on Database Throughput Units (DTU). Developers get a relational SQL database to serve the API layer. Azure SQL does not have a free tier, but the Basic tier with 5 DTU costs under five euros per month.
  4. The Storage Account is a package of four different services, and you pay only for what you use. The following services are essential for the SaaS product:
    • Blob Storage is the solution to host images, videos and binary files.
    • Table Storage is the part of the storage account that hosts non-relational data in a table format, where the schema can scale based on needs.
    • Storage Queue is a simple service bus solution to enable event-based operations.
  5. Azure Functions is a serverless PaaS product hosted on the App Service environment and a perfect solution for handling background processes. To read more about Azure Functions in action and its hosting plans, please refer to my previous blog posts.
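
The list above also maps naturally onto Terraform resources, which the newer posts in this archive automate. As a hedged sketch (the resource names are illustrative, not from the repository), item 1 could be provisioned like this with the azurerm provider:

resource "azurerm_app_service_plan" "plan" {
  name                = "saas-free-plan"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  sku {
    tier = "Free" # the free plan mentioned in item 1
    size = "F1"
  }
}

resource "azurerm_app_service" "frontend" {
  name                = "saas-frontend" # must be globally unique
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan.id
}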

The services above are the perfect initial parts of a SaaS application. The architecture can be extended with other services to provide an industry-leading solution, which can be a topic for my next blog posts.