Building Windows Servers with HashiCorp Packer + Terraform on Oracle Cloud Infrastructure (OCI)

In today’s dynamic IT landscape, platform engineers juggle a diverse array of cloud technologies to cater to specific client needs. Among these, Oracle Cloud Infrastructure (OCI) is rapidly gaining traction due to its competitive pricing for certain services. However, navigating the intricacies of each cloud can present a significant learning curve. This is where cloud-agnostic tools like Terraform and Packer shine. By abstracting away the underlying APIs and automating repetitive tasks, they empower us to leverage OCI’s potential without getting bogged down in vendor-specific complexities.
In this article I show you how to get started with Oracle Cloud by using Packer and Terraform to build Windows servers, and the same approach can be reused for other infrastructure-as-code tasks.
Oracle Cloud Infrastructure Configs
OCI Keys for API Use

Prerequisite: Before you generate a key pair, create the .oci directory in your home directory to store the credentials. See SDK and CLI Configuration File for more details.
- View the user's details:
 - If you're adding an API key for yourself: open the Profile menu and click My profile.
 - If you're an administrator adding an API key for another user: open the navigation menu and click Identity & Security. Under Identity, click Users. Locate the user in the list, then click the user's name to view the details.
- In the Resources section at the bottom left, click API Keys.
- Click Add API Key at the top left of the API Keys list. The Add API Key dialog displays.
- Click Download Private Key and save the key to your .oci directory. In most cases, you do not need to download the public key. Note: if your browser downloads the private key to a different directory, be sure to move it to your .oci directory.
- Click Add. The key is added and the Configuration File Preview is displayed. The file snippet includes the required parameters and values you'll need to create your configuration file. Copy and paste the snippet from the text box into your ~/.oci/config file. (If you have not yet created this file, see SDK and CLI Configuration File for details on how to create one.) After you paste the contents, update the key_file parameter to the location where you saved your private key file.
- If your config file already has a DEFAULT profile, do one of the following:
 - Replace the existing profile and its contents.
 - Rename the existing profile.
 - Rename the new profile to a different name after pasting it into the config file.
- Update the permissions on your downloaded private key file so that only you can view it. Go to the .oci directory where you placed the private key file and run chmod go-rwx ~/.oci/<oci_api_keyfile>.pem.
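After following the steps above, a filled-in ~/.oci/config typically looks like the following. The OCIDs, fingerprint, and key path here are placeholders; use the values from your own Configuration File Preview:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
```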
Network
Make sure your VCN security rules allow WinRM (TCP 5985/5986) and RDP (TCP 3389) so that Packer can connect to configure the VM and turn it into an image, and so that you can RDP to the server after it's created.
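Since we are already using Terraform, the ingress rules can be managed in the same templates. Here is a sketch of an OCI security list opening those ports; the resource names, VCN reference, and source CIDR are assumptions, and you should tighten the source range for production:

```hcl
# Sketch: security list allowing WinRM and RDP ingress for Packer builds.
resource "oci_core_security_list" "packer" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.main.id # placeholder VCN reference
  display_name   = "packer-winrm-rdp"

  # WinRM over HTTP (5985) and HTTPS (5986)
  ingress_security_rules {
    protocol = "6" # TCP
    source   = "10.0.0.0/16"
    tcp_options {
      min = 5985
      max = 5986
    }
  }

  # RDP
  ingress_security_rules {
    protocol = "6"
    source   = "10.0.0.0/16"
    tcp_options {
      min = 3389
      max = 3389
    }
  }
}
```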

Packer Configuration & Requirements
Install the Packer OCI plugin on the host running Packer:
$ packer plugins install github.com/hashicorp/oracle
Packer Config
- Configure your source:
 - Availability domain: oci iam availability-domain list
- Get your base image (drivers included) with the OCI CLI:
 oci compute image list --compartment-id "ocid#.tenancy.XXXX" --operating-system "Windows" | grep -e 2019 -e ocid1
- Point to the config file that has the OCI profile we downloaded in the previous steps.
- WinRM config.
- User data (bootstrap): you must set the password to not be changed at next logon so that Packer can connect:

#ps1_sysnative
cmd /C 'wmic UserAccount where Name="opc" set PasswordExpires=False'
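Putting those pieces together, a template for the oracle-oci builder looks roughly like the following sketch. All OCIDs, the shape, the image name, and the file paths are placeholders, and the WinRM password handling depends on how your bootstrap script sets up the opc account, so double-check attribute names against the plugin docs:

```hcl
packer {
  required_plugins {
    oracle = {
      source  = "github.com/hashicorp/oracle"
      version = ">= 1.0.0"
    }
  }
}

source "oracle-oci" "windows" {
  access_cfg_file     = "~/.oci/config"             # OCI profile from the steps above
  availability_domain = "Xxxx:PHX-AD-1"             # from: oci iam availability-domain list
  compartment_ocid    = "ocid1.compartment.oc1..xxxx"
  base_image_ocid     = "ocid1.image.oc1..xxxx"     # Windows 2019 image from the CLI query
  shape               = "VM.Standard2.2"
  subnet_ocid         = "ocid1.subnet.oc1..xxxx"
  image_name          = "windows-2019-base"

  # WinRM config -- Packer connects as the opc user
  communicator   = "winrm"
  winrm_username = "opc"
  winrm_password = var.winrm_password # must match the password your bootstrap leaves valid
  winrm_use_ssl  = true
  winrm_insecure = true

  # User data (bootstrap) -- the #ps1_sysnative script shown above
  user_data_file = "scripts/bootstrap.ps1"
}

build {
  sources = ["source.oracle-oci.windows"]
}
```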

Automating Special Considerations from OCI
Images can be used to launch other instances. Instances launched from these images include the customizations, configuration, and software installed when the image was created. For Windows we need to sysprep, but OCI has specific requirements for doing so.
Creating a generalized image from an instance renders the instance non-functional, so you should first create a custom image from the instance, then create a new instance from the custom image. Source below.
We automated their instructions by:
- Extracting the contents of oracle-cloud_windows-server_generalize_2022-08-24.SED.EXE to our Packer scripts directory
- Copying all files to C:\Windows\Panther
- Using the windows-shell provisioner in Packer to run Generalize.cmd
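Inside the Packer build block, those steps can be sketched as follows. The local scripts path and the source name are assumptions; Generalize.cmd comes from the extracted Oracle archive:

```hcl
build {
  sources = ["source.oracle-oci.windows"] # placeholder source name

  # Stage Oracle's generalize support files where their docs expect them
  provisioner "file" {
    source      = "scripts/generalize/" # extracted SED.EXE contents
    destination = "C:\\Windows\\Panther"
  }

  # Run Oracle's wrapper around sysprep as the last step of the build
  provisioner "windows-shell" {
    inline = ["C:\\Windows\\Panther\\Generalize.cmd"]
  }
}
```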

Terraform Config with Oracle Cloud
- Configure the vars (Oracle OCI Terraform variables).
- Pass the private key at runtime:
terraform apply --var-file=oci.tfvars -var=private_key_path=~/.oci/user_2024-10-30T10_10_10.478Z.pem
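A sketch of the OCI provider block wired to those variables is below. The variable names are assumptions; match them to whatever you define in oci.tfvars:

```hcl
# Sketch: OCI provider fed by variables, with the key path passed at runtime.
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "region" {}
variable "private_key_path" {}

provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path # e.g. -var=private_key_path=~/.oci/<key>.pem
  region           = var.region
}
```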
Sources:
Sys-prepping in OCI is specific to their platform; here's a link:
https://docs.oracle.com/en-us/iaas/Content/Compute/References/windowsimages.htm#Windows_Generalized_Image_Support_Files
Other Sources:
https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#apisigningkey_topic_How_to_Generate_an_API_Signing_Key_Console
https://github.com/hashicorp/packer/issues/7033
https://github.com/hashicorp/packer-plugin-oracle/tree/main/docs
Deploy Azure Container Apps with the native AzureRM Terraform provider, no more AzAPI!

Azure has given us great platforms to run containers, starting with Azure Container Instances, where you can run a small container group much like on a Docker server, and Azure Kubernetes Service, where you can run and manage Kubernetes clusters and containers at scale. Now the latest Kubernetes abstraction from Azure is called Container Apps!
When a new service comes out in a cloud provider, that provider's own tools are updated right away, so when Container Apps came out you could deploy it with ARM or Bicep. You could still deploy it with Terraform by using the AzAPI provider, which interacts directly with Azure's API, but as of a few weeks back (from the publish date of this article) you can use the native AzureRM provider to deploy it.
Code Snippet
resource "azurerm_container_app_environment" "example" {
  name                       = "Example-Environment"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
}

resource "azurerm_container_app" "example" {
  name                         = "example-app"
  container_app_environment_id = azurerm_container_app_environment.example.id
  resource_group_name          = azurerm_resource_group.example.name
  revision_mode                = "Single"

  template {
    container {
      name   = "examplecontainerapp"
      image  = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}
How to use the Azure Private Link with uncommon or new PaaS offerings. You need the subresource names!

Azure, like other clouds, has a Private Link feature that allows connectivity to stay "inside" the network if you have an ExpressRoute or a P2P connection. The advantages are that you don't need an internet-facing endpoint, you don't have to whitelist domains or huge ranges of IPs, and you can use your internal DNS.
I like to use Terraform to build the different PaaS offerings, and in the same templates I can add the private endpoints to the services. The one thing that took me a while to find is the subresource names. See below:
resource "azurerm_private_endpoint" "keyvault" {
  name                = "key_vault-terraform-endpoint"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = data.azurerm_subnet.rg.id

  private_service_connection {
    name                           = "key_vault-terraform-privateserviceconnection"
    private_connection_resource_id = azurerm_key_vault.main.id
    # Each service has its own subresource name, e.g. "vault" for Key Vault,
    # "registry" for Container Registry, "sqlServer" for Azure SQL.
    subresource_names    = ["vault"]
    is_manual_connection = false
  }
}
A private-link resource is the destination target of a specified private endpoint.
Some Benefits
The most common private endpoints I've used are for the following services:
- Azure Container Registry
- The benefit here is that I can have a Docker Hub like container registry and I can push/pull containers to my local dev without having to go out to the internet
- Another benefit is that I can hook up security scans as well
- Azure SQL DBs
- The benefit is that again you can connect from a local server to this DB using internal IPs and DNS
- Azure Key Vault
- The benefit here is that your services and vault are not exposed to the internet. Even exposed to the internet they would still require accounts to log in, but I like knowing the service can only be used inside the network.
If all your services are inside, there is no need to allow public networks. You can disable public access and allow only trusted Microsoft services (Log Analytics, Defender, etc.)
Disable public access to Azure Container Registry
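As a sketch, disabling public access on an ACR in the same Terraform template looks like this. The registry name and resource group references are placeholders, and Private Link on ACR requires the Premium SKU:

```hcl
# Sketch: a container registry locked to private access only.
resource "azurerm_container_registry" "example" {
  name                = "exampleacr" # placeholder; must be globally unique
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Premium" # Private Link requires the Premium SKU

  # Pulls and pushes now go through the private endpoint only
  public_network_access_enabled = false
}
```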

Deploy A Private Elastic Cloud Kubernetes Cluster On Azure DevOps Pipelines For CI/CD

Elastic has developed a great Operator for Kubernetes to orchestrate tasks, make things easier to deploy, and prevent cowboy engineers like me from forcing changes that end up breaking stuff :D . In this article I will go over deploying ECK on AKS via ADO, and I will share some FYIs. (Too many acronyms, get ready for more.)
New Terraform 1.1 Refactoring Feature!

The new refactoring feature can help when (as often happens) you find a better module but you don't want to go through the shuffle of the mv command. I liked one scenario explained in the demo: decoupling a web config from a specific cloud-provider module into a module that can be used for multiple clouds, done without the mv command and with less risk.
Deploying Azure App Service Environment v3, App Plan and blue/green Functions with Terraform via Azure DevOps.

Azure's ASE is all about serverless! In a Windows environment, IT usually spins up a server in an on-prem hypervisor, updates it, installs security software and SCCM to patch it, and then configures IIS with certs and bindings so Development can deploy simple code. The ASE is an abstraction of all those layers and provides a platform for Dev to deploy code. Thanks to John Savill's YouTube channel for a great overview of ASE v3; the video is embedded here for review, and I explain the different areas in Terraform.
Working with secure files (certs) in Azure DevOps and Terraform the easy way without compromising security.

The documentation from HashiCorp is great! If you are using your shell with Terraform, the docs will save you lots of time, but eventually you'll want to use Terraform in your pipelines, and this is where things change, for the better! In this article we show how you can skip the steps of creating an Azure vault, setting permissions, and uploading secrets or certs to use later on. Since we are using Azure DevOps pipelines, we can use the secure file download task to get our cert on the agent and upload it directly to the app service in our case. We are not compromising security by making it simpler, which is the best part.
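The secure-file approach in an Azure DevOps YAML pipeline looks roughly like this sketch. The cert file name and the Terraform variable are placeholders; DownloadSecureFile is the built-in task that stages the file on the agent:

```yaml
# Sketch: fetch a cert from ADO secure files and hand its path to Terraform.
steps:
  - task: DownloadSecureFile@1
    name: appCert # referenced below as $(appCert.secureFilePath)
    inputs:
      secureFile: 'example-app-cert.pfx' # placeholder secure file name

  - script: |
      terraform apply -auto-approve \
        -var="certificate_path=$(appCert.secureFilePath)"
    displayName: 'Terraform apply with cert'
```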