Why is the app not starting? - Understanding the .NET Stack on Windows
One of the key things to understand as an IT professional (mostly working with Windows) transitioning to DevOps or Platform Engineering is everything that surrounds the code. If you maintain servers for applications, you've likely encountered scenarios where a seemingly straightforward application fails to deploy or fails after deployment. Perhaps you've copied all the files to the right locations, but the application refuses to run. Or maybe it works on one server but not another, even though they appear identical at first glance.
The root of these problems (aside from networking issues, such as required ports being blocked to dependent services in an air-gapped environment) often lies in an incomplete understanding of the application stack: the complete set of software components required for an application to run properly. In this article, we'll cover the fundamentals of application stacks, focusing on Windows Server environments and .NET applications as an example. I'll explain how the various layers interact and how to ensure your servers are properly configured before deploying code.
What Is an Application Stack?
An application stack is like a layer cake. Each layer provides essential functionality that the layers above it depend on. If any layer is missing or misconfigured, the entire application may fail to run correctly – or at all.
Consider a typical .NET web application. From bottom to top, its stack might include:
- The operating system (Windows Server)
- Required Windows features (IIS, necessary Windows components)
- Runtime environments (.NET Framework or .NET Core)
- Middleware components (ASP.NET, Entity Framework)
- The application code itself
Let's break down each of these components to understand their role in the stack.
The Foundation: Operating System and Windows Features
At the base of our application stack is the operating system. For .NET applications, this is typically a Windows Server environment. However, simply having Windows Server with the runtimes installed isn't enough; you also need to enable the required Windows features, IIS chief among them.
Internet Information Services (IIS)
IIS is Microsoft's web server software that handles HTTP requests and responses. For web applications, IIS is essential, but it's not a monolithic feature. IIS comprises multiple components and features, each serving a specific purpose. Examples include:
- Web Server (IIS) – The core feature that enables the server to respond to HTTP requests
- IIS Management Console – The GUI tool for configuring IIS
- Basic Authentication – For simple username/password authentication
- Windows Authentication – For integrated Windows authentication
- URL Rewrite Module – For manipulating requested URLs based on defined rules
Think of IIS features as specialized tools in a toolbox. Installing all IIS features on every server would be like carrying the entire toolbox to every job when you only need a screwdriver. Understanding which features your application requires is critical for proper configuration and security.
Enabling only the necessary features is also essential for good security. We often see admins enable every IIS feature and move on, which needlessly widens the attack surface.
How Missing (or Excessive) IIS Features Cause Problems
Imagine deploying a web application that uses Windows Authentication. If the Windows Authentication feature isn't installed on IIS, users will receive authentication errors even though the application code is perfectly valid. These issues can be perplexing because they're not caused by bugs in the code but by missing infrastructure components.
The Engines: Runtime Environments
Runtimes are the engines that execute your application code. They provide the necessary libraries and services for your application to run. In the .NET ecosystem, the most common runtimes are:
.NET Framework Runtime
The traditional .NET Framework is Windows-only and includes:
- CLR (Common Language Runtime) – Executes the compiled code
- Base Class Library – Provides fundamental types and functionality
Applications targeting specific versions of .NET Framework (e.g., 4.6.2, 4.7.2, 4.8) require that exact version installed on the server.
.NET Core/.NET Runtime
The newer, cross-platform .NET implementation includes:
- .NET Runtime – The basic runtime for console applications
- ASP.NET Core Runtime – Additional components for web applications
- .NET Desktop Runtime – Components for Windows desktop applications
- Web Hosting Bundle – Combines the ASP.NET Core Runtime with the IIS integration module
Why Runtimes Matter
Runtimes are version-specific. An application built for .NET Core 3.1 won't run on a server with only .NET 5 installed, even though .NET 5 is newer. This version specificity is a common source of deployment issues.
Consider this real-world scenario: A development team builds an application using .NET Core 3.1. The production server has .NET 5 installed. When deployed, the application fails with cryptic errors about missing assemblies. The solution isn't to fix the code but to install the correct runtime on the server.
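A quick way to catch this class of problem is to compare what a server reports against what the application requires. The sketch below is illustrative: it parses text in the format produced by `dotnet --list-runtimes` (you would capture that output yourself and pass it in):

```python
# Sketch: check whether a required .NET runtime appears in the output of
# `dotnet --list-runtimes`. Illustrative only; the sample text below is
# hard-coded so the example is self-contained.

def has_runtime(list_runtimes_output: str, name: str, major_minor: str) -> bool:
    """Return True if a runtime like 'Microsoft.AspNetCore.App 3.1.x' is listed."""
    for line in list_runtimes_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == name and parts[1].startswith(major_minor + "."):
            return True
    return False

sample = """Microsoft.AspNetCore.App 5.0.17 [C:\\Program Files\\dotnet\\shared\\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 5.0.17 [C:\\Program Files\\dotnet\\shared\\Microsoft.NETCore.App]"""

# A .NET Core 3.1 app will not start on this server, even though .NET 5 is newer.
print(has_runtime(sample, "Microsoft.AspNetCore.App", "3.1"))  # False
print(has_runtime(sample, "Microsoft.AspNetCore.App", "5.0"))  # True
```

Running a check like this before deployment turns a cryptic "missing assembly" failure into an actionable "install runtime X" task.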
The Bridges: Middleware and Frameworks
Between the runtime and your application code lies middleware – components that provide additional functionality beyond what the basic runtime offers. In .NET applications, this often includes:
- ASP.NET (for .NET Framework) or ASP.NET Core (for .NET Core/.NET) – For web applications
- Entity Framework – For database access
- SignalR – For real-time communications
Middleware components can have their own dependencies and version requirements. For example, an application using Entity Framework Core 3.1 needs compatible versions of other components.
The Pinnacle: Application Code
At the top of the stack sits your application code – the custom software that provides the specific functionality your users need. This includes:
- Compiled assemblies (.dll files)
- Configuration files
- Static content (HTML, CSS, JavaScript, images)
- Client-side libraries
While this is the most visible part of the stack, it cannot function without all the layers beneath it.
Bringing It All Together: A Practical Example
Let's examine a concrete example to illustrate how all these components interact:
Scenario: Deploying a .NET Core 3.1 MVC web application that uses Windows Authentication and connects to a SQL Server database.
Required stack components:
- Operating System: Windows Server 2019
- Windows Features:
  - IIS Web Server
  - Windows Authentication
  - ASP.NET 4.8 (for backward compatibility with some components)
- Runtimes:
  - .NET Core 3.1 SDK (for development servers)
  - .NET Core 3.1 ASP.NET Core Runtime (for production servers)
  - .NET Core 3.1 Hosting Bundle (which installs the ASP.NET Core Module for IIS)
- Middleware:
  - Entity Framework Core 3.1
- Application Code:
  - Your custom application DLLs
  - Configuration files (appsettings.json)
  - Static web content
If any component is missing from this stack, the application won't function correctly. For instance:
- Without the Windows Authentication feature, users can't log in.
- Without the .NET Core 3.1 Runtime, the application won't start.
- Without the ASP.NET Core Module, IIS won't know how to handle requests for the application.
Best Practices for Managing Application Stacks
Now that we understand what makes up an application stack, let's look at some best practices for managing them:
1. Document Your Application Stack
Create detailed documentation of every component required for your application, including specific versions. This documentation should be maintained alongside your codebase and updated whenever dependencies change.
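One lightweight way to do this is a machine-readable manifest kept in the repository next to the code. The file shape and names below are just one possible convention, not a standard:

```json
{
  "application": "ContosoPortal",
  "os": "Windows Server 2019",
  "windowsFeatures": ["Web-Server", "Web-Windows-Auth", "Web-Asp-Net45"],
  "runtimes": ["Microsoft.AspNetCore.App 3.1.x"],
  "middleware": ["EntityFrameworkCore 3.1"]
}
```

Because the manifest lives with the codebase, dependency changes and stack changes get reviewed in the same pull request.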
2. CI/CD and Server Setup Scripts
Automate the installation and configuration of your application stack using PowerShell scripts or configuration management tools. This ensures consistency across environments and makes it easier to set up new servers.
# Example PowerShell script to install required IIS components for a .NET Core application
# Enable IIS and required features
$features = @(
'Web-Default-Doc',
'Web-Dir-Browsing',
'Web-Http-Errors',
'Web-Static-Content',
'Web-Http-Redirect',
'Web-Http-Logging',
'Web-Custom-Logging',
'Web-Log-Libraries',
'Web-ODBC-Logging',
'Web-Request-Monitor',
'Web-Http-Tracing',
'Web-Stat-Compression',
'Web-Dyn-Compression',
'Web-Filtering',
'Web-Basic-Auth',
'Web-CertProvider',
'Web-Client-Auth',
'Web-Digest-Auth',
'Web-Cert-Auth',
'Web-IP-Security',
'Web-Url-Auth',
'Web-Windows-Auth',
'Web-Net-Ext',
'Web-Net-Ext45',
'Web-AppInit',
'Web-Asp',
'Web-Asp-Net',
'Web-Asp-Net45',
'Web-ISAPI-Ext',
'Web-ISAPI-Filter',
'Web-Mgmt-Console',
'Web-Metabase',
'Web-Lgcy-Mgmt-Console',
'Web-Lgcy-Scripting',
'Web-WMI',
'Web-Scripting-Tools',
'Web-Mgmt-Service'
)
foreach ($feature in $features) {
    Install-WindowsFeature -Name $feature -Confirm:$false
}
# Download and install the .NET Core Hosting Bundle
Invoke-WebRequest -Uri 'https://download.visualstudio.microsoft.com/download/pr/48d3bdeb-c0c0-457e-b570-bc2c65a4d51e/c81fc85c9319a573881b0f8b1f671f3a/dotnet-hosting-3.1.25-win.exe' -OutFile 'dotnet-hosting-3.1.25-win.exe'
Start-Process -FilePath 'dotnet-hosting-3.1.25-win.exe' -ArgumentList '/quiet' -Wait

# Restart IIS to apply changes
net stop was /y
net start w3svc
3. Use Configuration Verification
Implement scripts that verify server configurations before deployment. These scripts should check for all required components and their versions, alerting you to any discrepancies.
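The core of such a script is a simple set difference between what is required and what is installed. In the sketch below, the component names are illustrative and the "installed" set is hard-coded; in practice you would populate it from commands like Get-WindowsFeature or dotnet --list-runtimes:

```python
# Sketch: report missing stack components before a deployment.
# 'required' would come from your stack documentation; 'installed' from
# querying the server. All names here are illustrative placeholders.

def missing_components(required: set[str], installed: set[str]) -> list[str]:
    """Return required components that are not installed, sorted for stable output."""
    return sorted(required - installed)

required = {"Web-Server", "Web-Windows-Auth", "AspNetCore-3.1"}
installed = {"Web-Server", "AspNetCore-3.1"}

gaps = missing_components(required, installed)
if gaps:
    print("Deployment blocked, missing:", ", ".join(gaps))
```

Failing the pipeline when the list is non-empty stops a bad deployment before any files are copied.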
4. Consider Containerization
For more complex applications, consider containerization technologies like Docker. Containers package the application and its dependencies together, ensuring consistency across environments and eliminating many configuration issues.
5. Create Environment Parity
Ensure that your development, testing, and production environments have identical application stacks. This reduces the "it works on my machine" problem and makes testing more reliable.
6. Application Logging
Ensure that web.config enables logging and that the log directory it points to actually exists on the server, so startup errors are captured instead of lost.
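For an ASP.NET Core application hosted in IIS, a sketch of enabling stdout logging in web.config looks like this (the app name and paths are placeholders, and note that IIS will not create the logs folder for you):

```xml
<configuration>
  <system.webServer>
    <aspNetCore processPath="dotnet"
                arguments=".\MyApp.dll"
                stdoutLogEnabled="true"
                stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>
```

The stdout log is meant for diagnosing startup failures; turn it off again once the problem is found, since it grows without rotation.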

Common Pitfalls and How to Avoid Them
Several common pitfalls can trip up IT teams when managing application stacks:
Pitfall 1: Assuming Newer Is Always Better
Just because a newer version of a runtime or framework is available doesn't mean your application is compatible with it. Always test compatibility before upgrading components in your application stack.
Pitfall 2: Incomplete Feature Installation
When installing Windows features like IIS, it's easy to miss sub-features that your application requires. Use comprehensive installation scripts that include all necessary components.
Pitfall 3: Overlooking Dependencies
Some components have dependencies that aren't immediately obvious. For example, certain .NET features depend on specific Visual C++ Redistributable packages. Make sure to identify and install all dependencies.
Pitfall 4: Ignoring Regional and Language Settings
Applications may behave differently based on regional settings, time zones, or character encodings. Ensure these settings are consistent across your environments.
Pitfall 5: Misconfigured Permissions
Even with all the right components installed, incorrect permissions at the IIS web folder level can prevent applications from running correctly. Ensure your application has the necessary permissions to access files, folders, and other resources. By default, the application pool identity (IIS AppPool\&lt;AppPoolName&gt;) is the account that needs this access.
Conclusion
Understanding application stacks is crucial for successful deployment and maintenance of modern applications. By recognizing that your application is more than just the code you write – it's a complex interplay of operating system features, runtimes, middleware, and your custom code – you can approach server configuration methodically and avoid mysterious deployment failures.
The next time you prepare to deploy an application, take the time to document and verify your application stack. Your future self (and your colleagues) will thank you when deployments go smoothly and applications run as expected in every environment.
Remember: Proper server configuration isn't an afterthought – it's a prerequisite for your application code to function correctly.
Azure Container Apps: Simplifying Container Deployment with Enterprise-Grade Features
In the ever-evolving landscape of cloud computing, organizations are constantly seeking solutions that balance simplicity with enterprise-grade capabilities. Azure Container Apps emerges as a compelling answer to this challenge, offering a powerful abstraction layer over container orchestration while providing the robustness needed for production workloads.
What Makes Azure Container Apps Special?
Azure Container Apps represents Microsoft’s vision for serverless container deployment. While Kubernetes has become the de facto standard for container orchestration, its complexity can be overwhelming for teams that simply want to deploy and scale their containerized applications. Container Apps provides a higher-level abstraction that handles many infrastructure concerns automatically, allowing developers to focus on their applications.
Key Benefits of the Platform
Built-in Load Balancing with Envoy
One of the standout features of Azure Container Apps is its integration with Envoy as a load balancer. This isn’t just any load balancer – Envoy is the same battle-tested proxy used by major cloud-native platforms. It provides:
- Automatic HTTP/2 and gRPC support
- Advanced traffic splitting capabilities for A/B testing
- Built-in circuit breaking and retry logic
- Detailed metrics and tracing
The best part? You don’t need to configure or maintain Envoy yourself. It’s managed entirely by the platform, giving you enterprise-grade load balancing capabilities without the operational overhead.
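To make the traffic-splitting idea concrete, here is a minimal sketch of weight-based revision selection, the same principle a proxy applies when splitting requests for A/B testing (the revision names and weights are made up, and this mirrors the concept, not Envoy's implementation):

```python
# Sketch of weight-based traffic splitting across app revisions.
import random

def pick_revision(weights: dict[str, int], rng: random.Random) -> str:
    """Choose a revision with probability proportional to its weight."""
    revisions = list(weights)
    return rng.choices(revisions, weights=[weights[r] for r in revisions])[0]

weights = {"myapp--v1": 80, "myapp--v2": 20}  # an 80/20 A/B split
rng = random.Random(42)  # fixed seed so the sketch is reproducible
sample = [pick_revision(weights, rng) for _ in range(1000)]
print(sample.count("myapp--v1"))  # roughly 800 of the 1000 requests
```

With Container Apps you only declare the weights per revision; the platform-managed Envoy does the routing.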
Integrated Observability with Azure Application Insights
Understanding what’s happening in your containerized applications is crucial for maintaining reliability. Container Apps integrates seamlessly with Azure Application Insights, providing:
- Distributed tracing across your microservices
- Detailed performance metrics and request logging
- Custom metric collection
- Real-time application map visualization
The platform automatically injects the necessary instrumentation, ensuring you have visibility into your applications from day one.
Cost Considerations and Optimization
While Azure Container Apps offers a serverless pricing model that can be cost-effective, it’s important to understand the pricing structure to avoid surprises:
Cost Components
- Compute Usage: Charged per vCPU-second and GB-second of memory used
  - Baseline: ~$0.000012/vCPU-second
  - Memory: ~$0.000002/GB-second
- Request Processing:
  - First 2 million requests/month included
  - ~$0.40 per additional million requests
- Storage and Networking:
  - Ingress: Free
  - Egress: Standard Azure bandwidth rates apply
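To get an intuition for these numbers, the sketch below estimates a monthly bill from the approximate rates listed above. Real billing also depends on active-versus-idle pricing, free grants, and region, so treat this as an order-of-magnitude estimate only:

```python
# Rough monthly cost sketch using the approximate published rates.
VCPU_SECOND = 0.000012   # ~$ per vCPU-second
GB_SECOND = 0.000002     # ~$ per GB-second
PER_MILLION_REQS = 0.40  # ~$ per million requests beyond the first 2M

def estimate_monthly_cost(vcpus: float, memory_gb: float,
                          active_hours: float, requests_millions: float) -> float:
    seconds = active_hours * 3600
    compute = vcpus * seconds * VCPU_SECOND + memory_gb * seconds * GB_SECOND
    requests = max(0.0, requests_millions - 2) * PER_MILLION_REQS
    return round(compute + requests, 2)

# One 0.5 vCPU / 1 GiB replica active ~300 hours, serving 5M requests:
print(estimate_monthly_cost(0.5, 1.0, 300, 5))  # 9.84
```

Note how scale-to-zero pays off: the compute term is proportional to active hours, so idle time costs nothing.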
Cost Optimization Tips
To keep your Azure Container Apps costs under control:
- Right-size your containers by carefully setting resource limits and requests
- Utilize scale-to-zero for non-critical workloads
- Configure appropriate minimum and maximum replica counts
- Monitor and adjust based on actual usage patterns
Advanced Features Worth Exploring
Revision Management
Container Apps introduces a powerful revision management system that allows you to:
- Maintain multiple versions of your application
- Implement blue-green deployments
- Roll back to previous versions if needed
DAPR Integration
For microservices architectures, the built-in DAPR (Distributed Application Runtime) support provides:
- Service-to-service invocation
- State management
- Pub/sub messaging
- Input and output bindings
Conclusion
Azure Container Apps strikes an impressive balance between simplicity and capability. It removes much of the complexity associated with container orchestration while providing the features needed for production-grade applications. Whether you’re building microservices, web applications, or background processing jobs, Container Apps offers a compelling platform that can grow with your needs.
By understanding the pricing model and following best practices for cost optimization, you can leverage this powerful platform while keeping expenses under control. The integration with Azure’s broader ecosystem, particularly Application Insights and Container Registry, creates a seamless experience for developing, deploying, and monitoring containerized applications.
Remember to adjust resource allocations and scaling rules based on your specific workload patterns to optimize both performance and cost. Monitor your application’s metrics through Application Insights to make informed decisions about resource utilization and scaling policies.
It's 2025 and We are Still Revolutionizing Legacy IT with Modern DevOps and Platform Engineering to Unlock Business Potential
In the rapidly evolving digital landscape, traditional IT strategies are becoming relics and even risks for cybersecurity if not revised. Organizations clinging to outdated infrastructure and siloed development practices find themselves struggling to compete in a world that demands agility, innovation, and rapid value delivery. This is where modern DevOps and Platform Engineering emerge as transformative forces, bridging the gap between legacy systems and cutting-edge technological capabilities.
Limitations of Traditional IT Strategies
Traditional IT approaches are characterized by:
- High cost due to vendor licensing (currently: VMware's acquisition by Broadcom)
- Slow, cumbersome manual processes (ClickOps repetition)
- Fear-driven infrastructure management ("Don't touch it because it's working!")
- Disconnected development and operations teams (IT staff: "That's Dev's responsibility")
- Manual, error-prone configuration processes (the ClickOps engineer did 10 servers but forgot one step on 3 of them)
- Significant time-to-market delays (the IT PM's top skill is postponing project deadlines)
These challenges create a perfect storm of inefficiency that stifles innovation and increases operational costs. Companies find themselves trapped in a cycle of reactive maintenance rather than proactive innovation.
DevOps and Platform Engineering: A Shift to Modern Strategies
Our comprehensive DevOps and Platform Engineering services offer a holistic approach to transforming your IT infrastructure:
1. Unified Ecosystem Integration
We break down the walls between development, operations, and business teams, creating a seamless, collaborative environment. By implementing advanced integration strategies, we transform fragmented IT landscapes into cohesive, responsive systems that align directly with business objectives.
2. Infrastructure as Code (IaC) Revolution
Gone are the days of manual server configurations and time-consuming infrastructure management. Our Platform Engineering approach leverages cutting-edge Infrastructure as Code methodologies, enabling:
- Repeatable and consistent infrastructure deployment
- Automated configuration management
- Rapid scalability and flexibility
- Reduced human error
- Enhanced security through standardized deployment processes
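As a tiny illustration of the idea (the resource names, tags, and region below are placeholders), the provisioning intent that once lived in a manual runbook becomes a declarative file that deploys identically to every environment:

```hcl
# Illustrative Terraform sketch: one declarative definition,
# applied consistently instead of repeated clicks.
resource "azurerm_resource_group" "app" {
  name     = "rg-myapp-prod"   # placeholder name
  location = "eastus2"
  tags = {
    owner       = "platform-team"
    environment = "prod"
  }
}
```

The file itself is version-controlled, reviewed, and repeatable, which is exactly what eliminates the "did 10 servers but forgot one step on 3" class of error.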
3. Continuous Improvement and Innovation
We don’t just optimize your current systems; we create a framework for perpetual evolution. Our DevOps methodologies introduce:
- Continuous Integration and Continuous Deployment (CI/CD) pipelines
- Automated testing and quality assurance
- Real-time monitoring and proactive issue resolution
- Data-driven performance optimization
Tangible Benefits
Cost Efficiency
By streamlining processes and reducing manual interventions, organizations can significantly cut operational expenses while improving overall system reliability.
Accelerated Time-to-Market
Our platform engineering solutions reduce development cycles from months to weeks, allowing businesses to respond quickly to market demands and customer needs.
Enhanced Reliability and Performance
Automated monitoring, predictive maintenance, and robust architectural design ensure your systems remain stable, secure, and high-performing.
Extra Benefit: A Powerful Approach to Cybersecurity
In today’s threat landscape, cybersecurity is no longer a mere afterthought but a critical business imperative. DevOps methodologies revolutionize security by embedding protective measures directly into the development and operational processes, creating a proactive and resilient security posture.
Integrated Security: The DevOps Security Advantage
Traditional security approaches often treat cybersecurity as a final checkpoint, creating vulnerabilities and inefficiencies. DevOps transforms this paradigm through:
1. Continuous Security Integration (CSI)
- Automated Security Scanning: Implement real-time vulnerability detection throughout the development lifecycle
- Code-Level Security Checks: Identify and remediate potential security weaknesses before they reach production
- Comprehensive Threat Modeling: Proactively analyze and mitigate potential security risks during the design phase
2. Infrastructure as Code (IaC) Security Benefits
- Consistent Security Configurations: Eliminate human error in security setup through automated, standardized deployments
- Immutable Infrastructure: Reduce attack surfaces by creating predictable, easily replaceable system components
- Rapid Patch and Update Mechanisms: Quickly respond to emerging security threats across entire infrastructure
3. Advanced Monitoring and Incident Response
- Real-Time Threat Detection: Implement sophisticated monitoring tools that provide immediate insights into potential security incidents
- Automated Incident Response: Create predefined, executable playbooks for rapid threat mitigation
- Comprehensive Logging and Auditing: Maintain detailed, tamper-evident logs for forensic analysis and compliance requirements
Security Transformation in Practice
Consider the security journey of a typical enterprise:
- Before DevOps: Sporadic security audits, manual vulnerability assessments, and reactive threat management
- With DevOps: Continuous security integration, automated threat detection, and proactive risk mitigation
Compliance and Governance
DevOps approaches ensure:
- Consistent adherence to security standards and regulatory requirements
- Transparent and traceable security processes
- Reduced compliance risks through automated checks and balances
The Human Factor Challenge in I.T.: Understanding Resistance to Change
Behind every legacy system and outdated IT strategy lies a deeply human story of comfort, fear, and inertia. The “if it ain’t broke, don’t fix it” mentality is more than just a technical challenge—it’s a profound psychological barrier that organizations must overcome to remain competitive.
The Comfort of the Familiar
Imagine a seasoned IT professional who has spent years mastering a complex, albeit outdated, system. This system has become an extension of their expertise, a familiar landscape where they feel confident and capable. Changing this environment feels like more than a technical challenge—it’s a personal disruption. The human tendency to avoid uncertainty is a powerful force that keeps organizations trapped in technological stagnation.
Psychological Barriers to Technological Evolution
1. Fear of Obsolescence
Many IT professionals worry that new technologies will render their hard-earned skills irrelevant. This fear manifests as resistance to change, creating an invisible barrier to innovation. The “set it and forget it” approach becomes a psychological defense mechanism, a way to maintain a sense of control in a rapidly changing technological landscape.
2. The Illusion of Stability
There’s a comforting myth that stable systems are reliable systems. In reality, “stable” often means “slowly becoming obsolete.” Legacy systems create a false sense of security, masking underlying inefficiencies and potential risks.
The Hidden Costs of Inaction
What appears to be a stable, low-risk approach actually exposes organizations to significant dangers:
- Technical Debt Accumulation: Each day a legacy system remains unchanged, the cost of eventual modernization increases exponentially.
- Security Vulnerabilities: Outdated systems become prime targets for cybersecurity threats.
- Competitive Disadvantage: While your organization maintains the status quo, competitors are leveraging modern technologies to innovate and grow.
Breaking the Psychological Barrier
Successful digital transformation requires more than technical solutions—it demands a holistic approach that addresses human factors:
1. Empowerment Through Education
- Provide clear, supportive training that demonstrates the personal and professional benefits of new technologies
- Create learning paths that build confidence and excitement about technological change
- Highlight how new skills increase individual marketability and career potential
2. Gradual, Supportive Transformation
- Implement incremental changes that allow teams to adapt without overwhelming them
- Create a supportive environment that celebrates learning and adaptation
- Demonstrate tangible benefits through pilot projects and success stories
3. Reframing Change as Opportunity
Instead of viewing technological transformation as a threat, we help organizations see it as:
- A chance to solve long-standing operational challenges
- An opportunity to reduce daily frustrations and workload
- A path to more meaningful and strategic work
The Cost of Comfort
Let’s put the “set it and forget it” mentality into perspective:
Before Transformation
- Limited flexibility
- Increasing maintenance costs
- Growing security risks
- Decreasing employee satisfaction
- Reduced competitive ability
After DevOps Transformation
- Adaptive, responsive infrastructure
- Reduced operational overhead
- Enhanced security and reliability
- Increased employee engagement
- Competitive technological edge
A New Paradigm of Great Tech Solutions
DevOps and Platform Engineering are not just about implementing new tools—they’re about creating a culture of continuous improvement, learning, and adaptation. We understand that behind every system are human beings with their own experiences, fears, and aspirations.
Our approach goes beyond technical implementation. We provide:
- Comprehensive change management support
- Personalized skill development programs
- Continuous learning and support frameworks
- A partnership that values both technological innovation and human potential
An Invitation to Modernize I.T.
The world of technology waits for no one. The choice is not between changing or staying the same—it’s between leading or being left behind.
Are you ready to transform not just your technology, but your entire approach to innovation?
Let’s have a conversation about your unique challenges and opportunities.
Django Microservices Approach with Azure Functions on Azure Container Apps
We are creating a multi-part video series to explain Azure Functions running on Azure Container Apps, so that we can offload some of the code from our Django app and build our infrastructure with a microservices approach. Here's part one; below the video is a quick high-level explanation of this architecture.
Azure Functions are serverless computing units within Azure that allow you to run event-driven code without having to manage servers. They’re a great choice for building microservices due to their scalability, flexibility, and cost-effectiveness.
Azure Container Apps provide a fully managed platform for deploying and managing containerized applications. By deploying Azure Functions as containerized applications on Container Apps, you gain several advantages:
Microservices Architecture:
- Decoupling: Each function becomes an independent microservice, isolated from other parts of your application. This makes it easier to develop, test, and deploy them independently.
- Scalability: You can scale each function individually based on its workload, ensuring optimal resource utilization.
- Resilience: If one microservice fails, the others can continue to operate, improving the overall reliability of your application.
Containerization:
- Portability: Containerized functions can be easily moved between environments (development, testing, production) without changes.
- Isolation: Each container runs in its own isolated environment, reducing the risk of conflicts between different functions.
- Efficiency: Containers are optimized for resource utilization, making them ideal for running functions on shared infrastructure.
Azure Container Apps Benefits:
- Managed Service: Azure Container Apps handles the underlying infrastructure, allowing you to focus on your application’s logic.
- Scalability: Container Apps automatically scale your functions based on demand, ensuring optimal performance.
- Integration: It seamlessly integrates with other Azure services, such as Azure Functions, Azure App Service, and Azure Kubernetes Service.
In summary, Azure Functions deployed on Azure Container Apps provide a powerful and flexible solution for building microservices. By leveraging the benefits of serverless computing, containerization, and a managed platform, you can create scalable, resilient, and efficient applications.
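As a sketch of what "containerized function" means in practice, a Python Azure Function image starts from Microsoft's published base image (the tag and paths below may differ in your setup):

```dockerfile
# Illustrative Dockerfile for a containerized Python Azure Function.
FROM mcr.microsoft.com/azure-functions/python:4-python3.11

# The Functions host expects the app under /home/site/wwwroot.
COPY . /home/site/wwwroot
RUN pip install -r /home/site/wwwroot/requirements.txt
```

The resulting image is what gets pushed to a registry and deployed to Container Apps like any other container.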
Stay tuned for part 2
Containers for Data Scientists on top of Azure Container Apps
The Azure Data Science VMs are good for dev and testing, and even though you could use a virtual machine scale set, that is a heavy and costly solution for scaling.
When thinking about scaling, one good solution is to containerize the Anaconda / Python virtual environments and deploy them to Azure Kubernetes Service or better yet, Azure Container Apps, the new abstraction layer for Kubernetes that Azure provides.
Here is a quick way to create a container with Miniconda 3, Pandas, and Jupyter Notebooks to interface with the environment. I'll also show how to deploy this single test container to Azure Container Apps.
The result:
A Jupyter Notebook with Pandas Running on Azure Container Apps.

Container Build
If you know the libraries you need, it makes sense to start with the lightest base image, which is Miniconda3. You can also deploy the Anaconda3 container, but that one may include libraries you'll never use, which can create unnecessary vulnerabilities to remediate.
Miniconda 3: https://hub.docker.com/r/continuumio/miniconda3
Anaconda 3: https://hub.docker.com/r/continuumio/anaconda3
Below is a simple Dockerfile to build a container with the pandas, OpenAI, and TensorFlow libraries.
FROM continuumio/miniconda3
RUN conda install jupyter -y --quiet && \
    mkdir -p /opt/notebooks
WORKDIR /opt/notebooks
RUN pip install pandas openai tensorflow
CMD ["jupyter", "notebook", "--ip='*'", "--port=8888", "--no-browser", "--allow-root"]
Build and Push the Container
Now that you have the container built push it to your registry and deploy it on Azure Container Apps. I use Azure DevOps to get the job done.

Here’s the pipeline task:
- task: Docker@2
  inputs:
    containerRegistry: 'dockerRepo'
    repository: 'm05tr0/jupycondaoai'
    command: 'buildAndPush'
    Dockerfile: 'dockerfile'
    tags: |
      $(Build.BuildId)
      latest
Deploy to Azure ContainerApps
Deploying to Azure Container Apps was painless after understanding the Azure DevOps task, since I can include my ingress configuration in the same step as the container. The only extra step was configuring DNS in my environment. The DevOps task is well documented; here are links to the official docs.
Architecture / DNS: https://learn.microsoft.com/en-us/azure/container-apps/networking?tabs=azure-cli
Azure Container Apps Deploy Task : https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureContainerAppsV1/README.md

A few things I'd like to point out: you don't have to provide a username and password for the container registry, since the task gets a token from az login. The resource group has to be the one where the Azure Container Apps environment lives; if not, a new one will be created. The target port is the port the container listens on; as the container build above shows, Jupyter Notebooks is listening on port 8888. If you are using a Container Apps environment with a private VNET, setting the ingress to 'external' means the VNET can reach the app, not that traffic from the public internet can. Lastly, I disable telemetry to stop reporting.
- task: AzureContainerApps@1
  inputs:
    azureSubscription: 'IngDevOps(XXXXXXXXXXXXXXXXXXXX)'
    acrName: 'idocr'
    dockerfilePath: 'dockerfile'
    imageToBuild: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    imageToDeploy: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    containerAppName: 'datasci'
    resourceGroup: 'IDO-DataScience-Containers'
    containerAppEnvironment: 'idoazconapps'
    targetPort: '8888'
    location: 'East US'
    ingress: 'external'
    disableTelemetry: true
After deployment I had to get the notebook token, which was easy with the Log Stream feature under Monitoring. For a deployment of multiple Jupyter Notebooks, it makes more sense to use JupyterHub.
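If you prefer the CLI over the portal's Log Stream, the same console output is reachable with az. This is a sketch: the app and resource-group names come from the deploy task above, and grepping for token= surfaces the Jupyter login URL.

```shell
APP="datasci"
RG="IDO-DataScience-Containers"

# Requires the containerapp CLI extension; skipped when az is not installed.
if command -v az >/dev/null 2>&1; then
  az containerapp logs show --name "$APP" --resource-group "$RG" --tail 50 \
    | grep "token=" || true
fi
```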

Boosting My Home Lab's Security and Performance with Virtual Apps from Kasm Containers
In the past I’ve worked with VDI solutions like Citrix, VMware Horizon, Azure Virtual Desktop and others, but my favorite is Kasm. For me, Kasm offers a DevOps-friendly, modern way of doing virtual apps and virtual desktops that I didn’t find with other vendors.
With Kasm, apps and desktops run in isolated containers and I can access them easily with my browser, no need to install client software.
Here are my top 3 favorite features:
#1 - Runs on the Home Lab!
Kasm Workspaces can be used to create a secure and isolated environment for running applications and browsing the web in your home lab. This can help to protect your devices from malware and other threats.
The community edition is free for 5 concurrent sessions.
If you are a Systems Admin or Engineer you can use it at home for your benefit but also to get familiar with the configuration so that you are better prepared for deploying it at work.
#2 - Low Resource Utilization
Kasm container apps are lightweight and efficient, so they run quickly without consuming a lot of resources. This is especially beneficial if you have a limited amount of hardware, as on a home lab. I run mine in a small Proxmox cluster, which offloads work from my main PC. You can also set the amount of compute when configuring your containerized apps.

#3 - Security
Each application is run in its own isolated container, which prevents them from interacting with each other or with your PC. This helps to prevent malware or other threats from spreading from one application to another.
The containers can run on isolated Docker networks, and with a good firewall solution you can even contain a self-replicating trojan by segmenting your network and only allowing the necessary ports and traffic flows. For example, if you run the Tor Browser as a containerized app, you could allow it outbound access to the internet only and block SMB (port 445) toward your internal network. If the containerized app were infected with something like the Emotet trojan, you would prevent it from spreading further, and you could kill the isolated container without shutting down or reformatting your local computer.
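As a concrete sketch of that segmentation: the network name, subnet, and rule below are assumptions, and iptables' DOCKER-USER chain is where Docker expects administrator-defined filtering to live.

```shell
NET="kasm-isolated"
SUBNET="172.30.0.0/24"

# Create a dedicated bridge network for the containerized app.
if command -v docker >/dev/null 2>&1; then
  docker network create --driver bridge --subnet "$SUBNET" "$NET" || true
fi

# Drop SMB (445) leaving that subnet; needs root privileges, so it is guarded.
if command -v iptables >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
  iptables -I DOCKER-USER -s "$SUBNET" -p tcp --dport 445 -j DROP || true
fi
```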
Code vulnerability scanning: you can scan your container images in your CI/CD pipelines for vulnerabilities, which helps identify and fix security weaknesses before you deploy the images and before they can be exploited.
Azure Open AI: Private and Secure "ChatGPT like" experience for Enterprises.

Azure provides the OpenAI service to address the concerns for companies and government agencies that have strong security regulations but want to leverage the power of AI as well.
Most likely you’ve used one of the many AI offerings out there. OpenAI’s ChatGPT, Google Bard, Google PaLM with MakerSuite, Perplexity AI, HuggingChat and many more are at the center of the latest hype, and software companies are racing to integrate them into their products. The main way in is to buy a subscription and connect to the APIs offered over the internet, but as a DevSecOps engineer, here’s where the fun starts.
A lot of companies following good security practices block traffic to and from the internet, so the first step in all this will be opening the firewall. Next, you must protect the API user’s credentials so they don’t get compromised; a leak would reveal what you are working on. Then you have to trust that OpenAI is not using your data to train their models and that they are keeping your company’s data safe.
It could take a ton of time to plan, design and deploy a secured infrastructure for using large language models and unless you have a very specific use case it might be overkill to build your own.
Here’s a breakdown of a few infrastructure highlights about this service.
3 Main Features
Privacy and Security
Your ChatGPT-like interface, called Azure AI Studio, runs in your private subscription. It can be linked to one of your VNETs so that you can use internal routing, and you can also add private endpoints so that you don’t even have to access it over the internet.

Even if you have to use it over the internet, you can lock it down to allow only your public IPs, and your developers will also need an authentication token, which can be scripted to rotate every month.
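As a sketch of what that developer-side authentication looks like, the snippet below builds the request URL and the api-key header for an Azure OpenAI chat call. The resource and deployment names are made up, and the key is read from an environment variable so rotation needs no code change.

```python
import os

def build_azure_openai_request(endpoint: str, deployment: str,
                               api_version: str = "2023-05-15"):
    """Build the URL and headers for an Azure OpenAI chat-completions call.

    The endpoint/deployment names passed in are placeholders; the api-key
    header is how Azure OpenAI authenticates requests.
    """
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {
        # Read the key from the environment so a monthly rotation script
        # only has to update the variable, not the code.
        "api-key": os.environ.get("AZURE_OPENAI_KEY", ""),
        "Content-Type": "application/json",
    }
    return url, headers

# Example with hypothetical resource and deployment names:
url, headers = build_azure_openai_request(
    "https://myresource.openai.azure.com", "gpt4-chat")
```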

Pricing

Common Models
- GPT-4 Series: The GPT-4 models are like super-smart computers that can understand and generate human-like text. They can help with things like understanding what people are saying, writing stories or articles, and even translating languages.
Key Differences from GPT-3:
- Model Size: GPT-4 models tend to be larger in terms of parameters compared to GPT-3. Larger models often have more capacity to understand and generate complex text, potentially resulting in improved performance.
- Training Data: GPT-4 models might have been trained on a more extensive and diverse dataset, potentially covering a broader range of topics and languages. This expanded training data can enhance the model’s knowledge and understanding of different subjects.
- Improved Performance: GPT-4 models are likely to demonstrate enhanced performance across various natural language processing tasks. This improvement can include better language comprehension, generating more accurate and coherent text, and understanding context more effectively.
- Fine-tuning Capabilities: GPT-4 might introduce new features or techniques that allow for more efficient fine-tuning of the model. Fine-tuning refers to the process of training a pre-trained model on a specific dataset or task to make it more specialized for that particular use case.
- Contextual Understanding: GPT-4 models might have an improved ability to understand context in a more sophisticated manner. This could allow for a deeper understanding of long passages of text, leading to more accurate responses and better contextual awareness in conversation.
- GPT-3 Base Series: These models are also really smart and can do similar things as GPT-4. They can generate text for writing, help translate languages, complete sentences, and understand how people feel based on what they write.
- Codex Series: The Codex models are designed for programming tasks. They can understand and generate computer code. This helps programmers write code faster, get suggestions for completing code, and even understand and improve existing code.
- Embeddings Series: The Embeddings models are like special tools for understanding text. They can turn words and sentences into numbers that computers can understand. These numbers can be used to do things like classify text into different categories, find information that is similar to what you’re looking for, and even figure out how people feel based on what they write.
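Since the embeddings models return plain vectors of numbers, "finding similar information" reduces to vector math. A minimal sketch of cosine similarity, with no external libraries and made-up toy vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two toy "embeddings": vectors pointing in similar directions score near 1,
# orthogonal ones near 0.
print(cosine_similarity([0.2, 0.9, 0.1], [0.25, 0.85, 0.05]))
```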
Getting Access to it!
Although the service is Generally Available (GA), it is only offered in East US and West Europe. You also have to submit an application so that Microsoft can review your company and use case and approve or deny your request. This could be due to capacity, and it lets Microsoft gather information on how companies will use the service.
The application is here: https://aka.ms/oai/access
Based on research and my experience getting this for clients, I always recommend picking only what you initially need and not getting too greedy. It would also be wise to speak with your Microsoft rep and take them out for a beer! For example, if you just need code generation, select only the Codex option.
Lately the service has been easier to get; hopefully soon we won’t need the form-and-approval dance.

How to Appreciate the Little Things in Life and be Happy
Just the other day I happened to wake up early. That is unusual for an engineering student. After a long time I could witness the sunrise. I could feel the sun rays falling on my body. Usual morning is followed by hustle to make it to college on time. This morning was just another morning yet seemed different.
Witnessing calm and quiet atmosphere, clear and fresh air seemed like a miracle to me. I wanted this time to last longer since I was not sure if I would be able to witness it again, knowing my habit of succumbing to schedule. There was this unusual serenity that comforted my mind. It dawned on me, how distant I had been from nature. Standing near the compound’s gate, feeling the moistness that the air carried, I thought about my life so far.
Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma – which is living with the results of other people's thinking.
Steve Jobs
I was good at academics, so decisions of my life had been pretty simple and straight. Being pretty confident I would make it to the best junior college of my town in the first round itself, never made me consider any other option. I loved psychology since childhood, but engineering was the safest option. Being born in a middle class family, thinking of risking your career to make it to medical field was not sane. I grew up hearing ‘Only doctor’s children can afford that field’ and finally ended up believing it. No one around me believed in taking risks. Everyone worshiped security. I grew up doing the same.
‘Being in the top will only grant you a good life’ has been the mantra of my life. But at times, I wish I was an average student. I wish decisions would have not been so straightforward. Maybe I would have played cricket- the only thing I feel passionate about. Or maybe I would have studied literature (literature drives me crazy). Isn’t that disappointing- me wishing to be bad at academics. It’s like at times I hate myself for the stuff I am good at.
When you step out of these four walls on a peaceful morning, you realize how much nature has to offer you. It’s boundless. Your thoughts, worries and deadlines won’t resonate here. Everything will flow away along with the wind. And you will realize every answer you had been looking for was always known to you. It would mean a lot to me if you recommend this article and help me improve.