Episode #4: Building Your First Azure Linux Virtual Machine with Terraform: A Step-by-Step Guide
Welcome to the fourth episode of Azure Terraformer, where we dive deep into using Terraform on Azure to set up powerful, scalable cloud solutions. Today, we’ll demonstrate how to provision an Azure Linux Virtual Machine (VM) with associated resources like a Resource Group, Public IP, Network Interface, and subnet, while also fetching a public SSH key securely from Azure Key Vault. Let’s break down the structure and intent of the code step by step, exploring the logic and relationships between the components.
Randomize Naming Conventions
The random_string resource generates an 8-character random string that will be appended to resource names, ensuring uniqueness and preventing naming collisions.
resource "random_string" "main" {
length = 8
upper = false
special = false
}
Here, the string includes only lowercase letters and numbers, making it suitable for resource names.

The Azure Resource Group serves as a container for all resources in the deployment. Its name incorporates the random string for uniqueness.
resource "azurerm_resource_group" "main" {
name = "rg-ep4-${random_string.main.result}"
location = var.location
}
The var.location variable specifies the Azure region, ensuring the deployment location is configurable.
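For reference, a minimal declaration for this variable might look like the following (the default region is just an assumption here; use whichever region suits you):

variable "location" {
  type        = string
  description = "The Azure region to deploy into"
  default     = "westus2" # assumed default; any valid Azure region works
}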
Reference an Existing Network
The subnet is referenced using a data block, fetching details from an existing virtual network (vnet-ep2-mr8x8gxj) within another resource group (rg-ep2-mr8x8gxj).
data "azurerm_subnet" "default" {
name = "snet-default"
virtual_network_name = "vnet-ep2-mr8x8gxj"
resource_group_name = "rg-ep2-mr8x8gxj"
}
This approach demonstrates how to leverage existing infrastructure when building new components. Now, instead of hard-coding these values, I could have simply supplied input variables for them like this:
data "azurerm_subnet" "default" {
name = var.subnet
virtual_network_name = var.virtual_network
resource_group_name = var.network_resource_group
}
This is one of those examples where using a simple grouping object to bundle relevant (and co-dependent) input variables together might make sense. We can create a single input variable like this:
variable "existing_network" {
type = object({
subnet = string
virtual_network = string
resource_group = string
})
}
Then we can reference this single object to pass in the needed context to our azurerm_subnet data source like so:
data "azurerm_subnet" "default" {
name = var.existing_network.subnet
virtual_network_name = var.existing_network.virtual_network
resource_group_name = var.existing_network.resource_group
}
This seems more readable to me, and it creates clear relationships between the co-dependent input variables that are used for a common cause: referencing the existing network.
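To see the caller's side, here is how the whole object could be supplied in a terraform.tfvars file, using the same values we hard-coded earlier:

existing_network = {
  subnet          = "snet-default"
  virtual_network = "vnet-ep2-mr8x8gxj"
  resource_group  = "rg-ep2-mr8x8gxj"
}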
Prepare Network Accessibility for Administrators
When creating Virtual Machines in Azure we need to think about how we want to access them. Bastion? VPN? Point-to-Site? Site-to-Site? Each has its own level of involvement and cost that need to be considered, especially if you are just starting out and learning. It's important to note that some solutions that might be expedient for learning purposes are not the way things are done in production. That's O.K., but you need to be aware of the difference. For example, we can use a static public IP as an easy way to make VMs accessible to our administrators.
Below is the code to do it. The resource name also includes the random string for consistency across the infrastructure.
resource "azurerm_public_ip" "main" {
name = "pip-vm${random_string.main.result}"
resource_group_name = azurerm_resource_group.main.name
location = azurerm_resource_group.main.location
allocation_method = "Static"
}
Using a static allocation method guarantees that the public IP address remains consistent throughout the lifecycle of the VM. This is a very viable approach for when you actually want to host something on the public internet and hook up an A-record to point your own custom domain at whatever you’re hosting.
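If you do wire up DNS like that, it helps to surface the address for others to consume. A minimal sketch (the output name here is my own choice):

output "public_ip_address" {
  description = "The static public IP to point your A-record at"
  value       = azurerm_public_ip.main.ip_address
}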
The next thing we need is a network device that will use this Public IP address. In Azure, we can assign one or more Network Interface Cards (NICs) to a Virtual Machine. NICs must be attached to a Subnet, which means if you want a Virtual Machine on two subnets you just need to attach two NICs, each to their own Subnet. That's a scenario not uncommon in the enterprise space, but not for the faint of heart when you're just learning the platform.
resource "azurerm_network_interface" "main" {
name = "nic-vm${random_string.main.result}"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
ip_configuration {
name = "public"
subnet_id = data.azurerm_subnet.default.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.main.id
}
}
The ip_configuration block defines a dynamic private IP allocation and binds the NIC to the public IP for external connectivity. This allows your NIC to be tethered to both the private Virtual Network via the Subnet we referenced and the public internet via the Public IP Address we created previously.
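As a sketch of that two-subnet scenario mentioned above, a second NIC is simply another azurerm_network_interface attached to its own Subnet (the backend subnet data source here is hypothetical, purely to illustrate the shape):

resource "azurerm_network_interface" "backend" {
  name                = "nic-vm-backend${random_string.main.result}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  # No public IP here; this NIC lives only on the (hypothetical) backend subnet
  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.azurerm_subnet.backend.id
    private_ip_address_allocation = "Dynamic"
  }
}

Both NIC IDs would then be passed to the Virtual Machine's network_interface_ids list, with the first entry acting as the primary interface.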
Prepare Administrator Remote Access
Sensitive data, such as SSH keys, should be retrieved securely from Azure Key Vault. The data blocks below first look up the specific Key Vault, then retrieve the desired secret from it.
data "azurerm_key_vault" "main" {
name = "kv-ep3-gz9fbcix"
resource_group_name = "rg-ep3-gz9fbcix"
}
data "azurerm_key_vault_secret" "ssh_public_key" {
name = "ssh-public"
key_vault_id = data.azurerm_key_vault.main.id
}
This ensures that the VM's SSH key is securely stored and accessed without hardcoding sensitive information in the Terraform code. However, it is important to note that with this approach the SSH key will be stored in the Terraform State of this workspace. Although it is stored securely in Key Vault, Terraform needs the SSH key in order to assign it to the VM when it provisions it. This highlights the importance of keeping Terraform State files secure. Do note that we are not pulling in the SSH Private Key; we are only pulling in the Public Key, which is much less risky but still not something you want to pass around like candy.
Storing the SSH key in Key Vault does have its purposes though, as services like Azure Bastion make it easy to reference Key Vault secrets to enable administrators to broker SSH connections to the Virtual Machine using the Azure Portal.
Create the Virtual Machine
Finally, we're at the part where we lay down some metal. We're gonna provision ourselves an Azure Linux Virtual Machine! The VM includes network configuration, OS disk settings, and the integration of the SSH public key that we fetched from Key Vault.
resource "azurerm_linux_virtual_machine" "main" {
name = "vm${random_string.main.result}"
resource_group_name = azurerm_resource_group.main.name
location = azurerm_resource_group.main.location
size = "Standard_DS2_v2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.main.id,
]
admin_ssh_key {
username = "adminuser"
public_key = data.azurerm_key_vault_secret.ssh_public_key.value
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
}
Key highlights:
- Dynamic Name: The VM's name incorporates the random string for uniqueness, but we start it with a vm prefix to ensure it's easy to recognize in the Azure Portal and is a valid Computer Name.
- Admin SSH Key: The public key retrieved from Key Vault ensures we are using a centrally managed key.
- Source Image: The VM uses an Ubuntu 18.04 LTS image, pulling the latest version directly from Azure's marketplace.
Some improvements we could make might include parameterizing the source_image_reference, thus allowing us to dynamically change the Azure Marketplace image we start our VM with. Eventually, we'll very likely replace the Marketplace image with our own Packer-built custom images.
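Sketching that improvement with the same grouping-object pattern from earlier (the variable name source_image is my own), we could default it to the image we're using today:

variable "source_image" {
  type = object({
    publisher = string
    offer     = string
    sku       = string
    version   = string
  })
  default = {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}

The VM's source_image_reference block would then simply forward those values:

source_image_reference {
  publisher = var.source_image.publisher
  offer     = var.source_image.offer
  sku       = var.source_image.sku
  version   = var.source_image.version
}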
Conclusion
This Terraform configuration demonstrates how to deploy Azure infrastructure while emphasizing security, modularity, and readability. Now it’s your turn—try implementing this setup in your environment and explore how Terraform simplifies the provisioning of cloud resources. Experiment with extending this configuration, perhaps by adding load balancers, additional VMs, or exploring other Azure services.
Until Next Time–Happy Azure Terraforming!!!