
How I Set Up Azure to Deploy Microservices

Christian Scott

When I set up a new Azure tenant for a client, I like to use Microsoft's Cloud Adoption Framework (CAF) "start small and expand" model to build a governance structure that scales from your first microservice to your tenth product line without hitting subscription quotas or reorganizing everything six months in.

This is the approach I cover in the Azure Engineering Bootcamp, a 12-session hands-on program that takes your engineers from zero to deploying production workloads on Azure.

1. Start with Management Groups and Landing Zones

Most teams start Azure by creating a subscription and throwing resources into it. That works until you hit subscription-level quotas for a region, need separate billing for different products, or want to apply different security policies to dev vs production. By then, restructuring is painful.

Management groups sit above subscriptions and let you apply RBAC, Azure Policy, and budgets at a governance level that cascades down. The hierarchy is simple and straightforward to set up:

At the top is your Organization root management group, which branches into two areas: Platform and Landing Zones.

Platform contains a shared subscription for reusable resources that span your entire organization: hub networking, DNS zones, container registries, log analytics workspaces, and shared Key Vaults. These are the resources every product team depends on but nobody should duplicate.

Landing Zones is where your products live. Each product gets its own management group, and under that, a subscription per environment (dev, prod). This means each product/environment combination has its own subscription with its own quotas, its own billing boundary, its own RBAC scope. When you add a new product, you just add a new management group and subscriptions.

resource "azurerm_management_group" "org" {
  display_name = var.org_name
}

resource "azurerm_management_group" "platform" {
  display_name               = "Platform"
  parent_management_group_id = azurerm_management_group.org.id
}

resource "azurerm_management_group" "landing_zones" {
  display_name               = "Landing Zones"
  parent_management_group_id = azurerm_management_group.org.id
}

resource "azurerm_management_group" "online" {
  display_name               = "Online"
  parent_management_group_id = azurerm_management_group.landing_zones.id
}

Policies applied at the Organization level cascade to everything below. You can enforce tagging requirements, allowed regions, required encryption, and audit logging across your entire Azure footprint with a single policy assignment. Landing zone-specific policies, like restricting production subscriptions from being modified by dev teams, go on the product management group.

I usually deploy all of this with Terraform. From there, the whole environment can be maintained as code or operated through the portal and CLI; that's a matter of preference and your team's skillset. With Terraform, everything is versioned, reviewable, and reproducible. State lives in an Azure Storage Account backend so the whole team works from the same source of truth. And because the structure is in code, adding a new landing zone can be done with a pull request.
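For reference, the remote state backend configuration is only a few lines. The resource group, storage account, and container names below are placeholders; substitute whatever your organization's naming convention dictates:

```hcl
# backend.tf -- placeholder names; the storage account must be globally unique
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "stterraformstate"
    container_name       = "tfstate"
    key                  = "governance.tfstate"
  }
}
```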

2. RBAC, Budgets, and Policy

Once the management group hierarchy is in place, I assign governance at each level so it cascades automatically. Organization-level policies enforce allowed regions, required tagging, and encryption at rest across every subscription. Landing zone policies restrict production subscriptions from exposing public endpoints. Budgets set cost alerts before anything gets out of hand.

RBAC follows the same pattern. Readers get assigned at the org level so they can see everything. Platform admins get Contributor on the platform subscription. Dev teams get Contributor scoped to their product's management group, which gives them access to dev and prod subscriptions underneath without touching anything else. For production environments, some teams opt for read-only permissions for people and assign Contributor roles exclusively to service principals that execute through the CI/CD pipeline. Service principals authenticate via OIDC federation, secrets, or certificates. Entra ID Privileged Identity Management (PIM) is worth setting up for eligible role assignments so nobody has standing Contributor access 24/7.

resource "azurerm_management_group_policy_assignment" "allowed_regions" {
  name                 = "allowed-regions"
  management_group_id  = azurerm_management_group.org.id
  policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c"

  parameters = jsonencode({
    listOfAllowedLocations = { value = var.allowed_regions }
  })
}

resource "azurerm_role_assignment" "readers" {
  scope                = azurerm_management_group.org.id
  role_definition_name = "Reader"
  principal_id         = var.reader_group_id
}

resource "azurerm_consumption_budget_management_group" "org" {
  name                = "org-budget"
  management_group_id = azurerm_management_group.org.id
  amount              = var.org_budget_amount
  time_grain          = "Monthly"

  time_period {
    start_date = "2026-01-01T00:00:00Z"
  }

  notification {
    threshold      = 80
    operator       = "GreaterThanOrEqualTo"
    contact_emails = var.budget_alert_emails
  }
}

Everything assigned at a management group cascades to all subscriptions and resource groups beneath it. Add a new product landing zone and it automatically inherits org-level policies, RBAC, and budget alerts without any additional configuration.


3. Deploy Shared Platform Resources

The Platform subscription hosts resources that every product depends on but nobody should duplicate. Azure Container Registry stores all container images. Azure DNS manages custom domains. Azure Front Door handles global ingress with WAF, SSL termination, and routing rules that point traffic to the right environment.

resource "azurerm_container_registry" "main" {
  name                = "cr${replace(var.org_name, "-", "")}"
  resource_group_name = azurerm_resource_group.platform.name
  location            = azurerm_resource_group.platform.location
  sku                 = "Premium"
  admin_enabled       = false
}

resource "azurerm_dns_zone" "main" {
  name                = var.domain_name
  resource_group_name = azurerm_resource_group.platform.name
}

resource "azurerm_cdn_frontdoor_profile" "main" {
  name                = "fd-${var.org_name}"
  resource_group_name = azurerm_resource_group.platform.name
  sku_name            = "Premium_AzureFrontDoor"
}

resource "azurerm_cdn_frontdoor_endpoint" "main" {
  name                     = "fde-${var.org_name}"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.main.id
}

Front Door routes requests to Container Apps based on hostname and path. Each environment gets its own origin group and routing rule. DNS CNAME records point custom domains to the Front Door endpoint, and Front Door handles TLS certificates automatically. ACR uses Premium SKU for geo-replication and private endpoint support. Admin access is disabled since Container Apps authenticate to ACR using managed identity.
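A per-environment origin group and route might look like the following sketch; the resource names are illustrative, and the Container App's FQDN is read from its ingress attributes:

```hcl
# Hypothetical dev origin group + route pointing Front Door at the dev API.
resource "azurerm_cdn_frontdoor_origin_group" "dev" {
  name                     = "og-myapp-dev"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.main.id

  load_balancing {} # required block; defaults are fine for a single origin
}

resource "azurerm_cdn_frontdoor_origin" "dev" {
  name                           = "origin-myapp-dev"
  cdn_frontdoor_origin_group_id  = azurerm_cdn_frontdoor_origin_group.dev.id
  host_name                      = azurerm_container_app.api.ingress[0].fqdn
  origin_host_header             = azurerm_container_app.api.ingress[0].fqdn
  certificate_name_check_enabled = true
}

resource "azurerm_cdn_frontdoor_route" "dev" {
  name                          = "route-myapp-dev"
  cdn_frontdoor_endpoint_id     = azurerm_cdn_frontdoor_endpoint.main.id
  cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.dev.id
  cdn_frontdoor_origin_ids      = [azurerm_cdn_frontdoor_origin.dev.id]
  supported_protocols           = ["Http", "Https"]
  patterns_to_match             = ["/*"]
  forwarding_protocol           = "HttpsOnly"
  link_to_default_domain        = true
}
```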

4. Container App Environment and Apps

Each environment (dev, staging, prod) gets its own Container App Environment with the same set of services. I structure the Terraform so that all per-environment resources live in a module called infrastructure, and each environment calls that module with different variables. Same config, different names and scaling settings.

# modules/infrastructure/main.tf

resource "azurerm_resource_group" "main" {
  name     = "rg-${var.app_name}-${var.environment}"
  location = var.location
}

resource "azurerm_container_app_environment" "main" {
  name                       = "cae-${var.app_name}-${var.environment}"
  location                   = azurerm_resource_group.main.location
  resource_group_name        = azurerm_resource_group.main.name
  log_analytics_workspace_id = var.log_analytics_workspace_id
}

resource "azurerm_container_app" "api" {
  name                         = "ca-api-${var.environment}"
  container_app_environment_id = azurerm_container_app_environment.main.id
  resource_group_name          = azurerm_resource_group.main.name
  revision_mode                = "Single"

  identity {
    type = "SystemAssigned"
  }

  template {
    min_replicas = var.environment == "prod" ? 2 : 1
    max_replicas = var.environment == "prod" ? 10 : 3

    container {
      name   = "api"
      image  = "${var.acr_login_server}/api:${var.image_tag}"
      cpu    = 0.5
      memory = "1Gi"
    }
  }

  ingress {
    external_enabled = true
    target_port      = 8080
    transport        = "http"

    traffic_weight {
      latest_revision = true
      percentage      = 100
    }
  }
}

# environments/dev/main.tf

module "infrastructure" {
  source                     = "../../modules/infrastructure"
  app_name                   = "myapp"
  environment                = "dev"
  location                   = "eastus2"
  acr_login_server           = data.azurerm_container_registry.main.login_server
  log_analytics_workspace_id = data.azurerm_log_analytics_workspace.main.id
  image_tag                  = var.image_tag
}

# environments/prod/main.tf

module "infrastructure" {
  source                     = "../../modules/infrastructure"
  app_name                   = "myapp"
  environment                = "prod"
  location                   = "eastus2"
  acr_login_server           = data.azurerm_container_registry.main.login_server
  log_analytics_workspace_id = data.azurerm_log_analytics_workspace.main.id
  image_tag                  = var.image_tag
}

The module defines everything an environment needs: resource group, Container App Environment, individual Container Apps, managed identity assignments for ACR pull access, and the Front Door origin groups and routing rules that point traffic to each environment's apps. The environment Terraform configs data-source the shared resources (ACR, DNS zone, Front Door profile) and pass them into the module, so each environment automatically gets its own DNS records and routing without duplicating any platform infrastructure. Adding a new environment means creating a new directory with a main.tf that calls the same module. The only things that change are the environment name and scaling parameters.
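The data-sourcing of shared platform resources referenced above is a few lines per environment; the registry and resource group names here are placeholders for your platform subscription's actual names:

```hcl
# environments/dev/data.tf -- look up shared platform resources by name
data "azurerm_container_registry" "main" {
  name                = "crmyorg"     # placeholder: your shared ACR
  resource_group_name = "rg-platform" # placeholder: platform resource group
}

data "azurerm_log_analytics_workspace" "main" {
  name                = "law-platform"
  resource_group_name = "rg-platform"
}
```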

5. CI/CD with GitHub Actions

I split the pipeline into two workflows. Pull requests validate that the build works and tests pass. I also recommend installing the Socket GitHub App, which automatically scans PRs for dependency vulnerabilities and supply chain risks without any workflow configuration. On merge to main, Terraform plans and the container build run in parallel. Dev infrastructure applies automatically from the saved plan, then the app deploys. Production requires a manual button press for both the infrastructure apply and the app deployment, using GitHub environment protection rules.

PR Validation

name: PR Validation
on:
  pull_request:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build container
        run: docker build -t api:${{ github.sha }} .

      - name: Run tests
        run: docker run --rm api:${{ github.sha }} npm test

      - name: Terraform plan
        working-directory: environments/dev
        run: |
          terraform init
          terraform plan -no-color

Build + Deploy

name: Deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - run: az acr login --name ${{ vars.ACR_NAME }}
      - run: |
          docker build -t ${{ vars.ACR_NAME }}.azurecr.io/api:${{ github.sha }} .
          docker push ${{ vars.ACR_NAME }}.azurecr.io/api:${{ github.sha }}

  plan-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - working-directory: environments/dev
        run: |
          terraform init
          terraform plan -out=dev.tfplan \
            -var="image_tag=${{ github.sha }}"
      - uses: actions/upload-artifact@v4
        with:
          name: dev-tfplan
          path: environments/dev/dev.tfplan

  plan-prod:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - working-directory: environments/prod
        run: |
          terraform init
          terraform plan -out=prod.tfplan \
            -var="image_tag=${{ github.sha }}"
      - uses: actions/upload-artifact@v4
        with:
          name: prod-tfplan
          path: environments/prod/prod.tfplan

  apply-dev:
    needs: [build, plan-dev]
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: actions/download-artifact@v4
        with:
          name: dev-tfplan
          path: environments/dev
      - working-directory: environments/dev
        run: |
          terraform init
          terraform apply dev.tfplan

  deploy-dev:
    needs: apply-dev
    runs-on: ubuntu-latest
    environment: dev
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - run: |
          az containerapp update \
            --name ca-api-dev \
            --resource-group rg-myapp-dev \
            --image ${{ vars.ACR_NAME }}.azurecr.io/api:${{ github.sha }}

  apply-prod:
    needs: [deploy-dev, plan-prod]
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: actions/download-artifact@v4
        with:
          name: prod-tfplan
          path: environments/prod
      - working-directory: environments/prod
        run: |
          terraform init
          terraform apply prod.tfplan

  deploy-prod:
    needs: apply-prod
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - run: |
          az containerapp update \
            --name ca-api-prod \
            --resource-group rg-myapp-prod \
            --image ${{ vars.ACR_NAME }}.azurecr.io/api:${{ github.sha }}

The PR workflow builds the container but never pushes it. This validates that the Dockerfile works and the tests pass without polluting the registry. On merge, the deploy workflow uses OIDC federation so GitHub authenticates to Azure without long-lived credentials. The plan-dev, plan-prod, and build jobs run in parallel. Each plan produces a saved artifact so the apply step executes exactly what was planned, not a fresh plan that might have drifted. Dev applies and deploys automatically. The production environment in GitHub has required reviewers configured, so both apply-prod and deploy-prod wait for manual approval.

The same image tag (the commit SHA) flows through every environment. Dev gets it automatically, and you promote the exact same artifact to staging and production. No rebuilds between environments.

6. Monitoring and Operations

Log Analytics Workspace collects logs from all Container Apps. Application Insights provides request tracing, dependency tracking, and performance metrics. Azure Monitor alerts fire when error rates spike or response times degrade. For dashboards, Azure Managed Grafana integrates natively with Log Analytics and Application Insights, or you can self-host Grafana and point it at the same data sources.

resource "azurerm_log_analytics_workspace" "main" {
  name                = "law-${var.app_name}-${var.environment}"
  location            = azurerm_resource_group.app.location
  resource_group_name = azurerm_resource_group.app.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}

resource "azurerm_application_insights" "main" {
  name                = "ai-${var.app_name}-${var.environment}"
  location            = azurerm_resource_group.app.location
  resource_group_name = azurerm_resource_group.app.name
  workspace_id        = azurerm_log_analytics_workspace.main.id
  application_type    = "web"
}

The Container App Environment is configured with the Log Analytics workspace ID, so all container logs stream there automatically. Application Insights gets injected into each service via environment variable. Dashboards, alerts, and runbooks come last. Once you have data flowing, you can build operational tooling on top of it.
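The injection itself is just an `env` entry on each container, sourced from the Application Insights resource. A sketch of the relevant fragment inside the Container App's `container {}` block (for anything more sensitive, a `secret_name` reference is the better fit):

```hcl
# Sketch: inside the container {} block of each Container App
container {
  name   = "api"
  image  = "${var.acr_login_server}/api:${var.image_tag}"
  cpu    = 0.5
  memory = "1Gi"

  env {
    name  = "APPLICATIONINSIGHTS_CONNECTION_STRING"
    value = azurerm_application_insights.main.connection_string
  }
}
```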

The Full Picture

The pattern is: set up management groups and landing zones so your governance scales, apply RBAC, policy, and budgets at the management group level, deploy shared platform resources (ACR, DNS, Front Door) once, use a Terraform module to stamp out identical Container App environments per product, automate builds and promotion with GitHub Actions, and monitor with Log Analytics and Application Insights.

Every resource is defined in Terraform. Every change goes through a pull request. Every deployment is automated. Adding a new product or environment is a pull request, not a rearchitecture. Your team gets a production-grade Azure environment that they understand end-to-end because they built it, not a black box someone handed them. That's the goal.

This walkthrough covers the approach I use in the Azure Engineering Bootcamp, a 12-session hands-on bootcamp that takes your engineers from zero to migrating or deploying production workloads on Azure. Get in touch to discuss your needs.