Why is the app not starting? - Understanding the .NET Stack on Windows
One of the key things to understand as an IT professional (mostly working with Windows) who is transitioning to DevOps or Platform Engineering is everything that surrounds the code. If you maintain servers for applications, you've likely encountered scenarios where a seemingly straightforward application fails to deploy or fails after deployment. Perhaps you've copied all the files to the right locations, but the application refuses to run. Or maybe it works on one server but not another, even though they appear identical at first glance.
The root of these problems, aside from networking issues such as ports that aren't open to dependent services in an air-gapped environment, often lies in an incomplete understanding of the application stack: the complete set of software components required for an application to run properly. In this article, we'll cover application stack fundamentals, focusing on Windows Server environments and .NET applications as an example. I'll explain how the various layers interact and how to ensure your servers are properly configured before deploying code.
What Is an Application Stack?
An application stack is like a layer cake. Each layer provides essential functionality that the layers above it depend on. If any layer is missing or misconfigured, the entire application may fail to run correctly – or at all.
Consider a typical .NET web application. From bottom to top, its stack might include:
- The operating system (Windows Server)
- Required Windows features (IIS, necessary Windows components)
- Runtime environments (.NET Framework or .NET Core)
- Middleware components (ASP.NET, Entity Framework)
- The application code itself
Let's break down each of these components to understand their role in the stack.
The Foundation: Operating System and Windows Features
At the base of our application stack is the operating system. For .NET applications, this is typically a Windows Server environment. However, simply having Windows Server with the runtimes installed isn't enough: for web applications you also need to enable IIS and its related Windows features.
Internet Information Services (IIS)
IIS is Microsoft's web server software that handles HTTP requests and responses. For web applications, IIS is essential, but it's not a monolithic feature. IIS comprises multiple components and features, each serving a specific purpose. Examples include:
- Web Server (IIS) – The core feature that enables the server to respond to HTTP requests
- IIS Management Console – The GUI tool for configuring IIS
- Basic Authentication – For simple username/password authentication
- Windows Authentication – For integrated Windows authentication
- URL Rewrite Module – For manipulating requested URLs based on defined rules
Think of IIS features as specialized tools in a toolbox. Installing all IIS features on every server would be like carrying the entire toolbox to every job when you only need a screwdriver. Understanding which features your application requires is critical for proper configuration and security.
Installing only the necessary features is also essential for good security. We often see admins enable every IIS feature and move on.
How Missing (or Unnecessary) IIS Features Cause Problems
Imagine deploying a web application that uses Windows Authentication. If the Windows Authentication feature isn't installed on IIS, users will receive authentication errors even though the application code is perfectly valid. These issues can be perplexing because they're not caused by bugs in the code but by missing infrastructure components.
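Before digging into the application itself, it is worth confirming that the feature is actually present on the server. Here is a minimal PowerShell sketch of that check, assuming Windows Server with the ServerManager module available (Web-Windows-Auth is the standard feature name for IIS Windows Authentication):
# Check whether the IIS Windows Authentication feature is installed, and add it if missing
$feature = Get-WindowsFeature -Name Web-Windows-Auth
if (-not $feature.Installed) {
    Write-Warning 'Web-Windows-Auth is not installed - installing it now.'
    Install-WindowsFeature -Name Web-Windows-Auth
}
else {
    Write-Output 'Web-Windows-Auth is already installed.'
}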
The Engines: Runtime Environments
Runtimes are the engines that execute your application code. They provide the necessary libraries and services for your application to run. In the .NET ecosystem, the most common runtimes are:
.NET Framework Runtime
The traditional .NET Framework is Windows-only and includes:
- CLR (Common Language Runtime) – Executes the compiled code
- Base Class Library – Provides fundamental types and functionality
Applications targeting specific versions of .NET Framework (e.g., 4.6.2, 4.7.2, 4.8) require that exact version installed on the server.
.NET Core/.NET Runtime
The newer, cross-platform .NET implementation includes:
- .NET Runtime – The basic runtime for console applications
- ASP.NET Core Runtime – Additional components for web applications
- .NET Desktop Runtime – Components for Windows desktop applications
- Web Hosting Bundle – Combines the ASP.NET Core Runtime with the IIS integration module
Why Runtimes Matter
Runtimes are version-specific. An application built for .NET Core 3.1 won't run on a server with only .NET 5 installed, even though .NET 5 is newer. This version specificity is a common source of deployment issues.
Consider this real-world scenario: A development team builds an application using .NET Core 3.1. The production server has .NET 5 installed. When deployed, the application fails with cryptic errors about missing assemblies. The solution isn't to fix the code but to install the correct runtime on the server.
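Before touching the code, it helps to confirm which runtimes are actually installed on the server. A minimal PowerShell sketch, assuming the dotnet host is on PATH (the registry key shown is the standard location for the .NET Framework 4.x version):
# List installed .NET Core/.NET runtimes (e.g. Microsoft.AspNetCore.App 3.1.x)
dotnet --list-runtimes

# Read the .NET Framework 4.x release key; 528040 or higher generally means 4.8
$ndpKey = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
$release = (Get-ItemProperty -Path $ndpKey -Name Release).Release
Write-Output "Installed .NET Framework release key: $release"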
The Bridges: Middleware and Frameworks
Between the runtime and your application code lies middleware – components that provide additional functionality beyond what the basic runtime offers. In .NET applications, this often includes:
- ASP.NET (for .NET Framework) or ASP.NET Core (for .NET Core/.NET) – For web applications
- Entity Framework – For database access
- SignalR – For real-time communications
Middleware components can have their own dependencies and version requirements. For example, an application using Entity Framework Core 3.1 needs compatible versions of other components.
The Pinnacle: Application Code
At the top of the stack sits your application code – the custom software that provides the specific functionality your users need. This includes:
- Compiled assemblies (.dll files)
- Configuration files
- Static content (HTML, CSS, JavaScript, images)
- Client-side libraries
While this is the most visible part of the stack, it cannot function without all the layers beneath it.
Bringing It All Together: A Practical Example
Let's examine a concrete example to illustrate how all these components interact:
Scenario: Deploying a .NET Core 3.1 MVC web application that uses Windows Authentication and connects to a SQL Server database.
Required stack components:
- Operating System: Windows Server 2019
- Windows Features:
- IIS Web Server
- Windows Authentication
- ASP.NET 4.8 (for backward compatibility with some components)
- Runtimes:
- .NET Core 3.1 SDK (for development servers)
- .NET Core 3.1 ASP.NET Core Runtime (for production servers)
- .NET Core 3.1 Hosting Bundle (which installs the ASP.NET Core Module for IIS)
- Middleware:
- Entity Framework Core 3.1
- Application Code:
- Your custom application DLLs
- Configuration files (appsettings.json)
- Static web content
If any component is missing from this stack, the application won't function correctly. For instance:
- Without the Windows Authentication feature, users can't log in.
- Without the .NET Core 3.1 Runtime, the application won't start.
- Without the ASP.NET Core Module, IIS won't know how to handle requests for the application.
Best Practices for Managing Application Stacks
Now that we understand what makes up an application stack, let's look at some best practices for managing them:
1. Document Your Application Stack
Create detailed documentation of every component required for your application, including specific versions. This documentation should be maintained alongside your codebase and updated whenever dependencies change.
2. CI/CD and Server Setup Scripts
Automate the installation and configuration of your application stack using PowerShell scripts or configuration management tools. This ensures consistency across environments and makes it easier to set up new servers.
# Example PowerShell script to install required IIS components for a .NET Core application
# Enable IIS and required features
$features = @(
'Web-Default-Doc',
'Web-Dir-Browsing',
'Web-Http-Errors',
'Web-Static-Content',
'Web-Http-Redirect',
'Web-Http-Logging',
'Web-Custom-Logging',
'Web-Log-Libraries',
'Web-ODBC-Logging',
'Web-Request-Monitor',
'Web-Http-Tracing',
'Web-Stat-Compression',
'Web-Dyn-Compression',
'Web-Filtering',
'Web-Basic-Auth',
'Web-CertProvider',
'Web-Client-Auth',
'Web-Digest-Auth',
'Web-Cert-Auth',
'Web-IP-Security',
'Web-Url-Auth',
'Web-Windows-Auth',
'Web-Net-Ext',
'Web-Net-Ext45',
'Web-AppInit',
'Web-Asp',
'Web-Asp-Net',
'Web-Asp-Net45',
'Web-ISAPI-Ext',
'Web-ISAPI-Filter',
'Web-Mgmt-Console',
'Web-Metabase',
'Web-Lgcy-Mgmt-Console',
'Web-Lgcy-Scripting',
'Web-WMI',
'Web-Scripting-Tools',
'Web-Mgmt-Service'
)
foreach ($feature in $features) {
    Install-WindowsFeature -Name $feature -Confirm:$false
}
# Download and install the .NET Core Hosting Bundle
Invoke-WebRequest -Uri 'https://download.visualstudio.microsoft.com/download/pr/48d3bdeb-c0c0-457e-b570-bc2c65a4d51e/c81fc85c9319a573881b0f8b1f671f3a/dotnet-hosting-3.1.25-win.exe' -OutFile 'dotnet-hosting-3.1.25-win.exe'
Start-Process -FilePath 'dotnet-hosting-3.1.25-win.exe' -ArgumentList '/quiet' -Wait

# Restart IIS to apply changes
net stop was /y
net start w3svc
3. Use Configuration Verification
Implement scripts that verify server configurations before deployment. These scripts should check for all required components and their versions, alerting you to any discrepancies.
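A minimal sketch of such a pre-deployment check, using the feature and runtime requirements from the earlier example (adjust the names and versions to your own stack):
# Verify required Windows features are installed
$requiredFeatures = @('Web-Server', 'Web-Windows-Auth', 'Web-Asp-Net45')
foreach ($name in $requiredFeatures) {
    if (-not (Get-WindowsFeature -Name $name).Installed) {
        Write-Warning "Missing Windows feature: $name"
    }
}

# Verify the expected ASP.NET Core runtime is present
$expectedRuntime = 'Microsoft.AspNetCore.App 3.1'
if (Get-Command dotnet -ErrorAction SilentlyContinue) {
    $runtimes = dotnet --list-runtimes
    if (-not ($runtimes -match [regex]::Escape($expectedRuntime))) {
        Write-Warning "Expected runtime not found: $expectedRuntime"
    }
}
else {
    Write-Warning 'The dotnet host is not installed or not on PATH.'
}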
4. Consider Containerization
For more complex applications, consider containerization technologies like Docker. Containers package the application and its dependencies together, ensuring consistency across environments and eliminating many configuration issues.
5. Create Environment Parity
Ensure that your development, testing, and production environments have identical application stacks. This reduces the "it works on my machine" problem and makes testing more reliable.
6. Application Logging
Ensure that stdout logging is enabled in web.config (the aspNetCore element for ASP.NET Core applications) and that the log directory it points to actually exists, so startup errors are captured; a sketch of this is shown below.
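A minimal sketch of wiring this up for a published ASP.NET Core site, assuming a hypothetical deployment path of C:\inetpub\wwwroot\MyApp (the aspNetCore element is the one emitted by dotnet publish):
$sitePath = 'C:\inetpub\wwwroot\MyApp'   # hypothetical deployment path
$logPath  = Join-Path $sitePath 'logs'

# Make sure the log folder exists so the ASP.NET Core module can write stdout logs
New-Item -ItemType Directory -Path $logPath -Force | Out-Null

# Enable stdout logging on the aspNetCore element in web.config
$configFile = Join-Path $sitePath 'web.config'
[xml]$config = Get-Content $configFile -Raw
$aspNetCore = $config.SelectSingleNode('//aspNetCore')
$aspNetCore.SetAttribute('stdoutLogEnabled', 'true')
$aspNetCore.SetAttribute('stdoutLogFile', '.\logs\stdout')
$config.Save($configFile)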

Common Pitfalls and How to Avoid Them
Several common pitfalls can trip up IT teams when managing application stacks:
Pitfall 1: Assuming Newer Is Always Better
Just because a newer version of a runtime or framework is available doesn't mean your application is compatible with it. Always test compatibility before upgrading components in your application stack.
Pitfall 2: Incomplete Feature Installation
When installing Windows features like IIS, it's easy to miss sub-features that your application requires. Use comprehensive installation scripts that include all necessary components.
Pitfall 3: Overlooking Dependencies
Some components have dependencies that aren't immediately obvious. For example, certain .NET features depend on specific Visual C++ Redistributable packages. Make sure to identify and install all dependencies.
Pitfall 4: Ignoring Regional and Language Settings
Applications may behave differently based on regional settings, time zones, or character encodings. Ensure these settings are consistent across your environments.
Pitfall 5: Misconfigured Permissions
Even with all the right components installed, incorrect permissions at the IIS web folder level can prevent applications from running correctly. Ensure your application has the necessary permissions to access files, folders, and other resources. The application pool identity (IIS AppPool\<AppPoolName> by default) is usually the account that needs that access; see the sketch below.
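A minimal sketch of granting the application pool identity access to the site folder, assuming a hypothetical pool named MyAppPool and the same hypothetical site path used earlier:
$sitePath    = 'C:\inetpub\wwwroot\MyApp'   # hypothetical deployment path
$appPoolName = 'MyAppPool'                  # hypothetical application pool name

# Give the app pool identity read/execute on the content folder (inherited by children)
icacls $sitePath /grant "IIS AppPool\$($appPoolName):(OI)(CI)RX" /T

# Give it modify rights on the logs folder so stdout logs can be written
icacls (Join-Path $sitePath 'logs') /grant "IIS AppPool\$($appPoolName):(OI)(CI)M"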
Conclusion
Understanding application stacks is crucial for successful deployment and maintenance of modern applications. By recognizing that your application is more than just the code you write – it's a complex interplay of operating system features, runtimes, middleware, and your custom code – you can approach server configuration methodically and avoid mysterious deployment failures.
The next time you prepare to deploy an application, take the time to document and verify your application stack. Your future self (and your colleagues) will thank you when deployments go smoothly and applications run as expected in every environment.
Remember: Proper server configuration isn't an afterthought – it's a prerequisite for your application code to function correctly.
Leveraging GitHub Actions for Efficient Infrastructure Automation with Separate Workflows.
Building infrastructure requires a well-defined pipeline. This article demonstrates how to leverage GitHub Actions to build an Amazon Machine Image (AMI) with Packer and then automatically trigger a separate Terraform workflow through GitHub's repository dispatch API, passing the AMI ID along with it.
Benefits:
- Streamlined workflow: Packer builds the AMI, and the AMI ID is seamlessly passed to the Terraform workflow for deployment.
- Reduced manual intervention: The entire process is automated, eliminating the need to manually trigger the Terraform workflow or update the AMI ID.
- Improved efficiency: Faster deployment cycles and reduced risk of errors due to manual configuration.
Why separate workflows?

First, think about a simple AWS architecture consisting of a load balancer in front of an Auto Scaling group. You still need to build a VM image, give the load balancer two subnets for high availability, and add security groups for layer 4 access control. Packer builds the VM image and Terraform deploys the rest of the components, so the obvious workflow has two jobs: Packer builds, then Terraform deploys. I am here to challenge that approach. It may seem to go against typical build/deploy workflows, since most pipelines follow that two-job pattern, but often the work we do in Terraform is separate and shouldn't depend on building an AMI every time.
Think of updating the number of machines in the Auto Scaling group. Doing it manually will cause drift, and the typical workflow would run Packer before ever getting to Terraform, which isn't terrible but wastes cycles.
Separating the workflows makes more sense because you can run Terraform to update your infrastructure components from any API client. Having Terraform in a separate workflow removes the dependency on running Packer every time. Ultimately, the choice between the two methods depends on your specific requirements and preferences.
Build and Trigger the Next Workflow
In the packer workflow we add a second job to trigger terraform. We have to pass our Personal Access Token (PAT) and the AMI_ID so that terraform can update the VM Autoscaling Group.
trigger_another_repo:
  needs: packer
  runs-on: ubuntu-latest
  steps:
    - name: Trigger second workflow
      env:
        AMITF: ${{ needs.packer.outputs.AMI_ID_TF }}
      run: |
        curl -X POST \
          -H "Authorization: token ${{ secrets.PAT }}" \
          -H "Accept: application/vnd.github.everest-preview+json" \
          "https://api.github.com/repos/owner_name/repo_name/dispatches" \
          -d '{"event_type": "trigger_tf_build", "client_payload": {"variable_name": "${{ needs.packer.outputs.AMI_ID_TF }}"}}'
As you can see, we are simply using curl to send the data payload to the repository that hosts the Terraform workflow. The same call can be made from any API client; a PowerShell sketch is shown below.
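For completeness, here is the same repository dispatch call from PowerShell, which is handy when you want to kick off the Terraform workflow outside of GitHub Actions. This is a minimal sketch; the owner/repo placeholders, the token variable, and the AMI ID value are assumptions you would replace with your own:
$token = $env:GITHUB_PAT                      # a PAT with repo scope, supplied by you
$owner = 'owner_name'                         # placeholder
$repo  = 'repo_name'                          # placeholder

$body = @{
    event_type     = 'trigger_tf_build'
    client_payload = @{ variable_name = 'ami-0123456789abcdef0' }  # example AMI ID
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://api.github.com/repos/$owner/$repo/dispatches" `
    -Headers @{ Authorization = "token $token"; Accept = 'application/vnd.github+json' } `
    -Body $body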
The Triggered Workflow Requirements
For the Terraform workflow to start from the packer trigger we need a few simple things.
- Workflow trigger
on:
  repository_dispatch:
    types: [trigger_tf_build]
- Confirm variable (Optional)
- name: Print Event Payload
  run: echo "${{ github.event.client_payload.variable_name }}"
While combining Packer and Terraform into a single workflow can simplify things in certain scenarios, separating them provides more granular control, reusability, and scalability. The best approach depends on the specific needs and complexity of your infrastructure.
Containers for Data Scientists on top of Azure Container Apps
The Azure Data Science VMs are good for dev and testing, and even though you could use a virtual machine scale set, that is a heavy and costly solution for scaling out.
When thinking about scaling, one good solution is to containerize the Anaconda / Python virtual environments and deploy them to Azure Kubernetes Service or better yet, Azure Container Apps, the new abstraction layer for Kubernetes that Azure provides.
Here is a quick way to create a container with Miniconda 3, Pandas, and Jupyter Notebooks to interface with the environment. I also show how to deploy this single test container to Azure Container Apps.
The result:
A Jupyter Notebook with Pandas Running on Azure Container Apps.

Container Build
If you know the libraries you need, it makes sense to start with the lightest base image, which is Miniconda3. You could also use the Anaconda3 container, but that one ships with libraries you may never use, which creates unnecessary vulnerabilities to remediate.
Miniconda 3: https://hub.docker.com/r/continuumio/miniconda3
Anaconda 3: https://hub.docker.com/r/continuumio/anaconda3
Below is a simple Dockerfile to build a container with the pandas, OpenAI, and TensorFlow libraries.
FROM continuumio/miniconda3

RUN conda install jupyter -y --quiet && \
    mkdir -p /opt/notebooks

WORKDIR /opt/notebooks

RUN pip install pandas
RUN pip install openai
RUN pip install tensorflow

CMD ["jupyter", "notebook", "--ip='*'", "--port=8888", "--no-browser", "--allow-root"]
Build and Push the Container
Now that you have the container built push it to your registry and deploy it on Azure Container Apps. I use Azure DevOps to get the job done.

Here’s the pipeline task:
- task: Docker@2
  inputs:
    containerRegistry: 'dockerRepo'
    repository: 'm05tr0/jupycondaoai'
    command: 'buildAndPush'
    Dockerfile: 'dockerfile'
    tags: |
      $(Build.BuildId)
      latest
Deploy to Azure ContainerApps
Deploying to Azure Container Apps was painless once I understood the Azure DevOps task, since I can include my ingress configuration in the same step as the container. The only extra requirement was configuring DNS in my environment. The DevOps task is well documented too, but here are links to the official docs.
Architecture / DNS: https://learn.microsoft.com/en-us/azure/container-apps/networking?tabs=azure-cli
Azure Container Apps Deploy Task : https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureContainerAppsV1/README.md

A few things I'd like to point out: you don't have to provide a username and password for the container registry, because the task gets a token from az login. The resource group has to be the one where the Azure Container Apps environment lives; if not, a new one will be created. The target port is the port the container listens on; as shown in the container build, the Jupyter notebook server listens on port 8888. If you are using a Container Apps environment with a private VNET, setting the ingress to external means traffic from within the VNET can reach it, not outside traffic from the internet. Lastly, I disable telemetry to stop reporting.
- task: AzureContainerApps@1
  inputs:
    azureSubscription: 'IngDevOps(XXXXXXXXXXXXXXXXXXXX)'
    acrName: 'idocr'
    dockerfilePath: 'dockerfile'
    imageToBuild: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    imageToDeploy: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    containerAppName: 'datasci'
    resourceGroup: 'IDO-DataScience-Containers'
    containerAppEnvironment: 'idoazconapps'
    targetPort: '8888'
    location: 'East US'
    ingress: 'external'
    disableTelemetry: true
After deployment I had to get the Jupyter token, which was easy with the Log Stream feature under Monitoring (a CLI alternative is sketched below). For a deployment of multiple Jupyter Notebooks it makes sense to use JupyterHub.
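If you prefer the command line over the portal, the same logs can be streamed with the Azure CLI; a minimal sketch using the names from the task above, assuming the containerapp CLI extension is installed:
# Stream container logs to find the Jupyter access token (names match the task above)
az containerapp logs show `
    --name datasci `
    --resource-group IDO-DataScience-Containers `
    --follow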

Deploy Azure Container Apps with the native AzureRM Terraform provider, no more AzAPI!

Azure has given us great platforms to run containers, starting with Azure Container Instances, where you can run a small container group much like on a Docker server, and Azure Kubernetes Service, where you can run and manage Kubernetes clusters and containers at scale. Now the latest Kubernetes abstraction from Azure is called Container Apps!
When a new service comes out in a cloud provider, that provider's own tools are updated right away, so when Container Apps came out you could deploy it with ARM or Bicep. You could still deploy it with Terraform by using the AzAPI provider, which interacts directly with Azure's API, but as of a few weeks back (from the publish date of this article) you can use the native AzureRM provider to deploy it.
Code Snippet
resource "azurerm_container_app_environment" "example" {
name = "Example-Environment"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
}
resource "azurerm_container_app" "example" {
name = "example-app"
container_app_environment_id = azurerm_container_app_environment.example.id
resource_group_name = azurerm_resource_group.example.name
revision_mode = "Single"
template {
container {
name = "examplecontainerapp"
image = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
cpu = 0.25
memory = "0.5Gi"
}
}
}
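A minimal sketch of applying this configuration locally, assuming you have the Azure CLI and Terraform installed and that the referenced resource group and Log Analytics workspace are defined elsewhere in the configuration:
# Authenticate and select the subscription the AzureRM provider should use
az login
az account set --subscription '<subscription-id>'   # placeholder subscription ID

# Initialize the provider and apply the configuration
terraform init
terraform plan -out tfplan
terraform apply tfplan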
Self-Healing I.T. Orchestration with Jenkins, Powershell, ServiceNow and Azure DevOps!

Most I.T. ticketing systems have an incident module where users submit issues so that the team can triage and react to solve them. In some cases you can spot repetitive issues and automate a fix or workaround to proactively bring services back online faster than waiting for a user to hit the issue and report an incident. Another benefit of automating fixes or workarounds is that they work 24/7 and respond faster than the on-call person. In this article I will show how to use Jenkins, PowerShell, ServiceNow and Azure DevOps to orchestrate a server reboot after detecting a specific issue in the event log of a Windows server.
Easiest Way to Deploy Ubuntu 20.04 with NVIDIA Drivers and the Latest CUDA toolkit via Packer.

I am building an analytics system that deploys containers on top of the Azure NCasT4_v3-series virtual machines, which are powered by NVIDIA Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs. I am deploying the VM from an Azure DevOps pipeline using HashiCorp Packer, and after trying a few approaches I found a very easy way to deploy the VM, the driver, and the CUDA toolkit, which I will share in this article.
Working with secure files (certs) in Azure DevOps and Terraform the easy way without compromising security.

The documentation from HashiCorp is great! If you are using Terraform from your shell, the docs will save you lots of time, but eventually you'll want to use Terraform in your pipelines, and this is where things change, for the better! In this article we show how you can skip the steps of creating an Azure vault, setting permissions, and uploading secrets or certs to use later on. Since we are using Azure DevOps pipelines, we can use the secure file download task to get our cert onto the agent and upload it directly to the App Service in our case. We are not compromising security by making it simpler, which is the best part.
Fix for Azure DevOps Build Immutable Image: Invalid Grant - AADSTS50173

I have a pipeline with an on-prem Azure DevOps agent that is loaded with Packer so that I can use the Packer image build step. After changing my password and installing the Azure CLI, the pipeline failed with status code 400.
Error: Invalid Grant
Error Description: AADSTS50173: The provided grant has expired due to it being revoked, a fresh auth token is needed. The user might have changed or reset their password. The grant was issued on '{{ timestamp }}' and the TokensValidFrom date (before which tokens are not valid) for this user is '{{ timestamp }}'
Storing and Passing the Packer Image ID to an Azure DevOps Variable in a Variable Group.

For infrastructure as code I am using the Packer (Build Immutable Image) task to create a gold image. I then want to pass the image URI to Terraform so it can spin up servers or scale sets. Since I like to add the date/time to our Packer image name, the name is not static, so we have to save the resource ID somewhere after a successful Packer build so that Terraform knows which image to use.
Using a ServiceNow Flow REST Step to Start and Pass Variables to an Azure DevOps Pipeline with the Starter IntegrationHub Package.

If you have the starter pack and want to create your own automation without having to pay for the higher-tier packs, you can pass variables to Azure DevOps or Jenkins and run pipelines to orchestrate tasks.
In this article we configure a SNOW catalog item with a Flow that has a REST step that passes variables and starts a pipeline in Azure DevOps. The pipeline then runs a script with those variables and updates the request so the user is aware of progress. The SNOW Flow then checks the request and, based on the updates made by the script, either closes the request or opens a task for IT to check and perform the request manually.