# Demonstration: Deploying Azure Resources for an ML Platform

GitHub: brown9804 · Last updated: 2025-07-16


This demo provides Terraform configurations for setting up an Azure Machine Learning workspace, along with compute clusters and supporting resources, to form the core of an ML platform.
Remember, managing your infrastructure through code (IaC) not only ensures consistency but also brings version control, reproducibility, and collaboration benefits, all essential for scalable ML operations. For additional Terraform templates covering various Azure services, check out this repository. Explore and borrow ideas as needed!

> [!TIP]
> About Infrastructure via Terraform: Terraform is a powerful IaC tool that lets you define and provision cloud resources through a high-level configuration language. This approach keeps your infrastructure definitions under source control alongside your application code, ensuring reproducible environments across development, testing, and production. Microsoft also offers other IaC options, such as Bicep and ARM templates, giving you flexibility in how you manage your Azure resources.

*Azure Machine Learning architecture (diagram)*


## Overview

```text
.
├── README.md
└── src
    ├── main.tf
    ├── variables.tf
    ├── provider.tf
    ├── terraform.tfvars
    ├── remote-storage.tf
    └── outputs.tf
```
- `main.tf` (Main Terraform configuration file): Contains the core infrastructure code that provisions your Azure Machine Learning workspace, compute clusters, and other related services.
- `variables.tf` (Variable definitions): Defines variables to parameterize your configurations. This includes settings for workspace names, compute configurations, and other environment-specific parameters.
- `provider.tf` (Provider configurations): Specifies the necessary settings for the Azure provider so Terraform can authenticate and manage your Azure resources.
- `terraform.tfvars` (Variable values): Holds the actual values for the variables defined in `variables.tf`. Adjust these values according to the environment you're targeting (development, staging, production).
- `remote-storage.tf` (Remote state storage configuration): Configures a remote backend (such as Azure Blob Storage) for storing Terraform's state file securely, ensuring reliable collaboration.
- `outputs.tf` (Output values): Defines outputs to display resource endpoints, IDs, and other key details after a successful deployment.
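As a rough sketch of what `main.tf` might contain: the resource names, location, VM size, and the references to a Key Vault, Application Insights instance, and Storage Account (assumed to be declared elsewhere in the file) are illustrative assumptions, not the repository's actual values.

```hcl
# Illustrative sketch only: names, location, and VM size are assumptions.
# Assumes azurerm_application_insights.ml, azurerm_key_vault.ml, and
# azurerm_storage_account.ml are declared elsewhere in main.tf.
resource "azurerm_resource_group" "ml" {
  name     = "rg-ml-platform"
  location = "eastus"
}

resource "azurerm_machine_learning_workspace" "ml" {
  name                    = "mlw-demo"
  location                = azurerm_resource_group.ml.location
  resource_group_name     = azurerm_resource_group.ml.name
  application_insights_id = azurerm_application_insights.ml.id
  key_vault_id            = azurerm_key_vault.ml.id
  storage_account_id      = azurerm_storage_account.ml.id

  identity {
    type = "SystemAssigned"
  }
}

# A small CPU cluster that scales down to zero nodes when idle.
resource "azurerm_machine_learning_compute_cluster" "cpu" {
  name                          = "cpu-cluster"
  machine_learning_workspace_id = azurerm_machine_learning_workspace.ml.id
  location                      = azurerm_resource_group.ml.location
  vm_size                       = "STANDARD_DS3_V2"
  vm_priority                   = "Dedicated"

  scale_settings {
    min_node_count                       = 0
    max_node_count                       = 2
    scale_down_nodes_after_idle_duration = "PT30M"
  }
}
```

Setting `min_node_count = 0` is a common cost-control choice: the cluster releases all nodes after the idle timeout and only bills while jobs run.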

## Configuring Access with Azure CLI

To deploy Azure Machine Learning resources, proper authentication is required. In many cases, you will need to grant a service principal the appropriate permissions.

To list available service principals, run:

```shell
az ad sp list --query "[].{Name:displayName, AppId:appId, ObjectId:id}" --output table
```

Below is an example showing how you would reference the service principal (whose Object ID you’ve retrieved) in your Terraform configuration:

```hcl
ml_service_principal_id = "12345678-1234-1234-1234-1234567890ab"
```
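The matching declaration in `variables.tf` might look like the following; only the variable name comes from the example above, and the description text is an illustrative assumption:

```hcl
variable "ml_service_principal_id" {
  description = "Object ID of the service principal used for role assignments"
  type        = string
}
```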

## Configure Remote Storage for Terraform Deployment

For robust state management and collaboration, configuring a remote backend for Terraform is essential. This section outlines how to use Azure Blob Storage for remote state storage.

1. Create an Azure Storage Account:
   - Use the Azure portal or CLI to set up a new storage account if you do not already have one.
   - Note down the storage account name and access key.
2. Create a Storage Container:
   - Within your storage account, create a container dedicated to holding your Terraform state file.
3. Configure the Terraform Backend:
   - In the `remote-storage.tf` file (located in the `src` folder), include the backend configuration to connect to your Azure Blob Storage container.
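A minimal `azurerm` backend block for `remote-storage.tf` could look like this. The resource group, storage account, and container names below are hypothetical placeholders; substitute the values you noted in steps 1 and 2.

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state" # assumed name
    storage_account_name = "sttfstatedemo"      # assumed name
    container_name       = "tfstate"            # assumed name
    key                  = "azml.terraform.tfstate"
  }
}
```

Terraform reads this block during `terraform init`; if you previously used local state, `init` will offer to migrate it to the new backend.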

## How to Execute the Deployment

```mermaid
graph TD;
    A[az login] --> B(terraform init)
    B --> C{Terraform Provisioning Stage}
    C -->|Review| D[terraform plan]
    C -->|Deploy Resources| E[terraform apply]
    C -->|Tear Down Infrastructure| F[terraform destroy]
```

> [!IMPORTANT]
> Before executing, update `terraform.tfvars` with your personalized configuration values. This repository provisions an Azure Machine Learning workspace, compute clusters, and essential support resources for running ML experiments. A video walk-through of the deployment steps is available.
>
> Note: Once your ML experiments are complete, remember to scale down compute clusters or delete the resource group to control costs.

1. Login to Azure: Navigate to your Terraform directory and log in to your Azure account. This command opens a browser window for authentication.

   ```shell
   cd ./infrastructure/azMachineLearning/src/
   az login
   ```

2. Initialize Terraform: Set up your working directory and install the necessary provider plugins.

   ```shell
   terraform init
   ```

3. Review the Deployment Plan: Preview the changes Terraform will make.

   ```shell
   terraform plan -var-file terraform.tfvars
   ```

4. Apply the Configuration: Deploy the specified Azure resources.

   ```shell
   terraform apply -var-file terraform.tfvars
   ```

5. Destroy the Infrastructure (if needed): Clean up resources by tearing down the deployment.

   ```shell
   terraform destroy -var-file terraform.tfvars
   ```
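For reference, `outputs.tf` entries of the kind described earlier might look like this; the resource address `azurerm_machine_learning_workspace.ml` is a hypothetical name, not necessarily the one used in the repository:

```hcl
output "workspace_name" {
  description = "Name of the deployed Azure ML workspace"
  value       = azurerm_machine_learning_workspace.ml.name
}

output "workspace_id" {
  description = "Resource ID of the workspace"
  value       = azurerm_machine_learning_workspace.ml.id
}
```

After a successful `terraform apply`, these values are printed in the summary and can be re-read at any time with `terraform output`.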

Refresh Date: 2025-07-16