
Best practices for secrets and their life-cycle management in Azure DevOps can be a complicated topic. In some cases, the existing AzDo tasks might not fulfil your needs. This post reviews the options.

The need to use secrets in Azure DevOps pipelines grows with the size of the enterprise environment and the complexity of the Azure resources in use. I'll go through three options in this blog post; which one fits depends on the requirements and the corporate policy of the Azure cloud environment.

Linking an Azure Key Vault to an Azure DevOps Variable Group

The first option is to link an Azure Key Vault to a Library variable group in Azure DevOps. During the creation of the variable group, switch on "Link secrets from an Azure key vault as variables". The check box makes the authorisation dropdowns visible. The first dropdown shows all the AzDo service connections, or you can create a new one. The second dropdown shows the Key Vaults available under the selected service connection, so make sure the Key Vault exists before you create the service connection. Authorising against the Key Vault requires the Get and List secret permissions for the service principal created in Azure AD during the service connection creation.
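Once linked, the variable group is consumed in a pipeline like any other group. A minimal sketch, assuming a hypothetical group name kv-secrets, a secret called saman-secret in the vault, and a deploy.sh script of your own:

```yaml
variables:
  - group: kv-secrets  # the variable group linked to the Key Vault

steps:
  # Secret variables are not exposed as plain environment variables,
  # so map them explicitly on the step that needs them.
  - script: ./deploy.sh
    env:
      SAMAN_SECRET: $(saman-secret)
```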

Using Secrets in Azure DevOps

The pros of this feature are the manageability and life-cycle of the secrets in the Key Vault. The con is that you cannot add any other secrets to this variable group outside of the Key Vault.

Best practices for secrets in Azure DevOps using the Azure Key Vault Task

The second option is to use the Azure Key Vault task. This option takes advantage of a service connection that has access to a Key Vault and fetches its secrets. Here is the code for the YAML task:

# Azure Key Vault
# Download Azure Key Vault secrets
- task: AzureKeyVault@2
  inputs:
    connectedServiceName: MSDN # Azure subscription
    keyVaultName: kv-saman-test # Name of existing key vault
    secretsFilter: '*' # Downloads all secrets for the key vault
    runAsPreJob: true # Runs before the job starts

Two inputs are critical in this task. The first is secretsFilter, which selects secrets either with the '*' wildcard or by name. The second is runAsPreJob, which fetches the secrets before the rest of the job begins. With the secrets downloaded, you can reference a secret using the variable syntax, e.g. $(saman-secret).

With this approach, you don't have to manage any variable group, but at the same time, you have no visibility of the available secrets in Azure DevOps. That lack of visibility might even be a plus if corporate policy restricts who may see the list of secrets.
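If policy disallows downloading everything, the filter can list specific secrets instead of the wildcard. A sketch, assuming a second hypothetical secret named saman-connstring alongside saman-secret:

```yaml
- task: AzureKeyVault@2
  inputs:
    connectedServiceName: MSDN
    keyVaultName: kv-saman-test
    secretsFilter: 'saman-secret,saman-connstring' # comma-separated secret names
- script: echo "Deploying with the fetched secrets"
  env:
    CONNSTRING: $(saman-connstring) # map the secret where it is needed
```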

Custom AzDo Task to Fetch and Expose the Secrets

Corporate policies or a lack of authorisation to Azure resources do not always make it possible to create service connections in the Azure DevOps environment. Fortunately, the Azure CLI also enables access to Key Vaults. This approach is more laborious, but it might still support good practices for secrets in Azure DevOps.

The custom code uses the CLI to log in to the Azure subscription and get the secret from the Key Vault. Because the step runs a Bash script, the code also exposes the value as a pipeline variable for further use. Here is the AzDo task code:

variables:
  samanSecret: ''

steps:
  - bash: |
      # Log in to Azure using the CLI and a service principal
      az login --service-principal -u "$CLIENT_ID" --tenant "$TENANT_ID" -p "$CLIENT_SECRET"

      # Get the secret value
      SAMANSECRET=$(az keyvault secret show --vault-name kv-saman-test --name saman-secret | jq -r '.value')

      # Set the value of the secret to a pipeline variable.
      # This requires initialising the pipeline variable before this task.
      echo "##vso[task.setvariable variable=samanSecret;issecret=true;isOutput=false;]$SAMANSECRET"
    displayName: Get saman-secret from kv
    env:  # map environment variables for the script; keep real values in secret variables, not in plain text
      CLIENT_ID: 'the-id-of-a-service-principal'
      CLIENT_SECRET: 'the-secret-from-a-service-principal'
      TENANT_ID: 'azure-tenant-id'

    

Using this custom task, you can get and use the secret in subsequent tasks or stages by updating the value of a pipeline variable. The same CLI also makes it possible to fetch previous secret versions or to update the secret if required.
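The ##vso line in the script above is an ordinary logging command: plain text that the agent parses from stdout. A minimal sketch of the format (the value here is a placeholder, not a real secret):

```shell
# Build the logging command the Azure DevOps agent parses from stdout.
# issecret=true masks the value in logs and marks the variable as secret.
SECRET_VALUE="example-value"  # placeholder; normally fetched from Key Vault
echo "##vso[task.setvariable variable=samanSecret;issecret=true]$SECRET_VALUE"
```

Anything a script echoes in this format is picked up by the agent, so later steps in the same job can read $(samanSecret).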

Continuous integration and delivery is part of DevOps processes and nowadays part of most software projects. In my previous blog post, I reviewed the problem and why infrastructure as code (IaC) matters; the analysis included the architecture diagram and the Azure components. In this blog post, as the continuation, you can read and learn how to implement Azure infra using Terraform and Pipelines as part of your CI/CD in Azure DevOps. This blog post is a complete technical guide.

Terraform

Terraform is an infrastructure-as-code tool for managing and developing cloud components. It is not tied to a specific cloud ecosystem, which makes it popular among developers working in different ecosystems. Terraform uses various providers; for the Microsoft cloud, it uses the Azure provider (azurerm) to create, modify and delete Azure resources. As explained in the previous blog post, you need the Terraform CLI installed in your environment and an Azure account to deploy your resources. To verify the installation, run "terraform --version" in the shell terminal to view the installed version of the CLI.

Terraform uses a declarative model. The developer describes the desired state of the infrastructure, and Terraform takes care of the rest to achieve that result. The workspace consists of one or many .tf files; the workspace folder also contains hidden settings and plugin files. A basic Azure provider block with a subscription, together with a resource group block, looks like this in the main.tf file:

provider "azurerm" {
    version = "~>1.32.0"
    use_msi = true
    subscription_id = "xxxxxxxxxxxxxxxxx"
    tenant_id       = "xxxxxxxxxxxxxxxxx"
}
resource "azurerm_resource_group" "rg" {
    name     = "myExampleGroup"
    location = "westeurope"
}

The provider needs to authenticate to Azure before it can provision the infrastructure. At the moment there are four authentication methods: the Azure CLI, a managed service identity, a service principal with a client certificate, and a service principal with a client secret.

Azure DevOps Pipelines Structure

The pipeline definition is written in a YAML file, which includes one or many stages of the CI/CD process. It's worth mentioning that, currently, Azure Pipelines does not support all YAML features. This blog post is not about YAML itself; to read more, please refer to "Learn YAML in Y minutes". In the structure of the YAML build file, a stage is the top level of a specific process and includes one or many jobs, which in turn have one or many steps. Here is an example of the pipeline structure:

  • Stage A
    • Job 1
      • Step 1.1
      • Step 1.2
    • Job 2
      • Step 2.1
      • Step 2.2
  • Stage B
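The same structure expressed in pipeline YAML, with placeholder stage and job names:

```yaml
stages:
  - stage: A
    jobs:
      - job: one
        steps:
          - script: echo "step 1.1"
          - script: echo "step 1.2"
      - job: two
        steps:
          - script: echo "step 2.1"
          - script: echo "step 2.2"
  - stage: B
    jobs:
      - job: three
        steps:
          - script: echo "step 3.1"
```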

Terraform + Azure DevOps Pipelines

Now we have the basic understanding needed to implement Azure infra using Terraform and Azure DevOps Pipelines. With the knowledge of the Terraform definition files and the YAML file, it is time to jump into the implementation. In the root folder of my GitHub InfraProvisioning code repository, there are three folders and the azure-pipelines.yml file. The YML file is the build definition, and it references the subfolders that contain the job and step definitions for each stage. The stages in the YAML file map to the validate, plan and apply steps that Terraform requires to provision the model.

variables:
  project: shared-resources

stages:
  - stage: validate
    displayName: Validate
    variables:
      - group: shared
    jobs:
      - template: pipelines/jobs/terraform/validate.yml

  - stage: plan_dev
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    displayName: Plan for development
    variables:
      - group: shared
      - group: development
    jobs:
      - template: pipelines/jobs/terraform/plan.yml
        parameters:
          workspace: dev

  - stage: apply_dev
    displayName: Apply for development
    variables:
      - group: shared
      - group: development
    jobs:
      - template: pipelines/jobs/terraform/apply.yml
        parameters:
          project: ${{ variables.project }}
          workspace: dev

Each stage in the azure-pipelines.yml file refers to sub .yml files, which are:

  • jobs/terraform/validate.yml: downloads the latest version of Terraform, installs it and validates the Terraform configuration.
  • jobs/terraform/plan.yml: reads the existing infrastructure state, compares it to the changes, generates the modified infrastructure plan and publishes the plan for the next stage.
  • jobs/terraform/apply.yml: gets the plan file, extracts it, applies the changes and saves the state back to the storage account for the next run and comparison.
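As a rough sketch of what such a plan job can look like (this is not the exact template from the repository; the workspace name, paths and artifact name are illustrative):

```yaml
jobs:
  - job: plan
    steps:
      - bash: |
          terraform init -input=false
          terraform workspace select dev || terraform workspace new dev
          terraform plan -input=false -out=tfplan
        workingDirectory: terraform
        displayName: Terraform plan
      - publish: terraform/tfplan   # make the plan available to the apply stage
        artifact: tfplan-dev
```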

One more thing: in my previous blog post, I explained how Terraform uses blob storage to save the state files. Include the following job in your build definition if you want to create those initial Azure resources automatically; you can comment the step out once you have the required blob storage. After creating the storage account, create a new blob container inside it and also create a new secret. You will need these values later when adding the variables to the Azure DevOps environment.

jobs:
  - job: runbash
    steps:
      - task: Bash@3
        inputs:
          targetType: 'filePath' # Options: filePath, inline
          filePath: ./tools/create-terraform-backend.sh
          arguments: dev

Settings in Azure DevOps

Most of the environment variables, such as the Azure Resource Manager values, are defined in variable groups in the Library section under Pipelines in the left navigation. The idea is to have as many environments as necessary in different subscriptions. If you inspect the apply.yml file, you can find the following variables:

  • ARM_CLIENT_ID: $(ARM_CLIENT_ID)
  • ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
  • ARM_TENANT_ID: $(ARM_TENANT_ID)
  • ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
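Inside apply.yml, these are typically mapped onto the environment variables the azurerm provider reads, on the step that runs Terraform. A sketch (the command line is illustrative):

```yaml
- bash: terraform apply -input=false -auto-approve tfplan
  displayName: Terraform apply
  env:
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET) # secret variables must be mapped explicitly
    ARM_TENANT_ID: $(ARM_TENANT_ID)
    ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
```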

To implement Azure infra using Terraform and Pipelines, we need to create an application in Azure Active Directory so Azure DevOps can access our resources in Azure. Follow these steps to create the application:

  1. Navigate to Azure Portal and choose your Active Directory from the navigation.
  2. Under the AAD, choose App registrations and create a new application. You can name it TerraformAzureDevOps.
  3. From the overview page of the application, copy the Application ID and the Tenant ID. We will need these values later.
  4. Choose Certificates & secrets from the navigation and create a new client secret. Copy this value before changing the view, because you will only see it once.

Back in Azure DevOps, under the Library section, we have to create the following variable groups with the following variables:

Name: Development

  • ARM_CLIENT_ID: [The application ID we created in the AAD]
  • ARM_CLIENT_SECRET: [The secret from the AAD]
  • ARM_SUBSCRIPTION_ID: [The Subscription ID from Azure]
  • TERRAFORM_BACKEND_KEY: [The secret from the storage account created using the create-terraform-backend.sh script ]
  • TERRAFORM_BACKEND_NAME: [The name of the blob folder created using the create-terraform-backend.sh script]
  • WORKSPACE: [Your choice of name, e.g. Dev]

Name: Shared

  • ARM_TENANT_ID: [The AAD Id]
  • TERRAFORM_VERSION: 0.12.18

Under each variable group, the "Allow access to all pipelines" setting should be on!

Implement Azure infra Modifications

Once you have the build definition up and running in your Azure DevOps environment, all you have to do in the future is edit the terraform/main.tf file to manage your Azure infrastructure.

Implementing Azure infra using Terraform and Pipelines will save you a lot of time and money, not to mention that it will also improve the quality, maintainability and security of the environment.

You can find the complete solution and source files in my GitHub repository. Lastly, I want to give credit to my colleague Antti Kivimäki from Futurice, who has helped my team with difficult Terraform tasks.

The size and complexity of cloud infrastructure have a direct relationship with the management, maintainability and cost control of cloud projects. It might be easy to create resources using the Azure Portal or the CLI, but when there is a need to update properties, change plans, upgrade services or re-create all services in new regions, the situation gets trickier. The combination of Azure DevOps Pipelines and Terraform brings an enterprise-level solution to the problem.

The Infrastructure Problem

My previous blog post introduced a simple SaaS architecture which included App services, SQL database, storage account and background processes using Azure Functions. But why should we create automation for such a small environment?

In my project, the need arose as I was creating background processes for the SaaS platform. Let's assume we have three Azure Functions to send emails, send SMS messages and modify user pictures, each started by a storage account queue trigger, and each writing values to the database through the API. If, for some reason, the path of the API changes, it would be insane to manually update the URL of the new endpoint in three places. Even less so if the service runs in three Azure regions, where no one should update any value manually in the production environment without an approval process.

Azure Infrastructure using Terraform

There are a few ways to have infrastructure as code with Azure DevOps Pipelines. You can always export Azure resource ARM templates and create the pipeline based on them, but I found that complicated and time-consuming. The second choice is to create the resources with the Azure CLI in the pipelines, but maintenance becomes the problem: the more resources you have, the more laborious managing the script gets.

The third option was to use third-party tools, and to be honest, I fell in love with Terraform. The tool has providers for the major clouds, including, of course, Azure. The easiest way to start is to install the CLI on your computer and follow the 12-section learning path. Let's add the supplementary parts to the existing architecture:

Azure Infrastructure using Terraform and Pipelines

The explanations for items one to five are in my previous blog post, so let's concentrate on the new parts.

The New Components and the Build

Azure Infrastructure using Terraform and Pipelines requires knowledge of Terraform, and at this stage I assume that you are familiar with the Terraform init, plan and apply commands. When running Terraform locally, all configuration and metadata files are stored on your computer's hard disk. By moving the build to Azure DevOps Pipelines, we need another place to store the Terraform-generated metadata and the infrastructure code files.

Git repositories are a perfect place to store the Terraform infrastructure code files (.tf) and any other tools that are part of the pipeline. In my project, I have created a separate repository for my infrastructure and named it InfraProvisioning.

To save the Terraform state and metadata files, we need different storage than the Git repository, because each Terraform execution compares the current changes with the existing infrastructure state and overwrites the existing data. Azure Blob Storage is a perfect location to save these files; the resource is marked as number six in the architecture diagram. But how should we automatically create the storage and include it as part of the automated infrastructure?
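In Terraform, pointing the state at that storage account is done with an azurerm backend block. A sketch with placeholder names (the real values come from the backend-creation script described below):

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "saterraformstate"        # placeholder name
    container_name       = "tfstate"                 # placeholder blob container
    key                  = "shared.terraform.tfstate"
    access_key           = "xxxxxxxxxxxxxxxxx"       # or pass via ARM_ACCESS_KEY / -backend-config
  }
}
```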

I have created a Bash script, which is available for download from my GitHub project. The script logs in to Azure using the CLI and creates a shared resource group, a storage account and blob storage. The Bash file is part of the build process and parametrised, but all parameters can be replaced with hard-coded values in the script. The Bash file is the initial part of the pipeline, and how to implement the pipeline is the subject of my next blog post.