SSL HTTPS Certificate Deep Dive

Secure HTTP Traffic with HashiCorp Vault as your PKI + Cert Manager in Kubernetes - Deep Dive!

This is a deep-dive technical guide on configuring HashiCorp Vault as a private Certificate Authority (PKI) and integrating it with cert-manager in Kubernetes to automate certificate management. I've configured this in production environments, but for this demo I'm implementing it in my lab so that my internal apps get HTTPS encryption in transit. A few examples of internal apps with certificates are shown below; as the ping output shows, they sit on a private network.

 

Main Benefits:

  1. Centralized PKI Infrastructure: Vault provides a centralized solution for managing your entire certificate lifecycle. Instead of managing certificates across different applications and services, Vault acts as a single source of truth for all your PKI needs. This centralization simplifies management, improves security posture, and ensures consistent certificate policies across your organization.
  2. Dynamic Certificate Issuance and Rotation: Vault can automatically issue short-lived certificates and rotate them before expiration. When integrated with cert-manager in Kubernetes, this automation eliminates the manual certificate renewal process that often leads to outages from expired certificates. The system can continuously issue, renew, and rotate certificates without human intervention.
  3. Fine-grained Access Control: Vault's advanced policy system allows you to implement precise access controls around who can issue what types of certificates. You can limit which teams or services can request certificates for specific domains, restrict certificate lifetimes based on risk profiles, and implement comprehensive audit logging. This helps enforce the principle of least privilege across your certificate infrastructure.

An additional benefit is Vault's broader secret management capabilities – the same tool managing your certificates can also handle database credentials, API keys, and other sensitive information, giving you a unified approach to secrets management.

Prerequisites

  • A DNS Server (I use my firewall)
  • A running Kubernetes cluster (I am using microk8s)
  • Vault server installed and initialized (I deployed the hashicorp/vault Helm chart, version 0.30.0)
  • cert-manager installed in your Kubernetes cluster (microk8s addon)
  • Administrative access to both Vault and Kubernetes

See my homelab diagram on GitHub: mdf-ido/mdf-ido.

1. Configure Vault as a PKI

1.1. Enable the PKI Secrets Engine

# Enable the PKI secrets engine
vault secrets enable pki

PKI in Hashicorp Vault

# Configure the PKI secrets engine with a longer max lease time (e.g., 1 year)
vault secrets tune -max-lease-ttl=8760h pki

PKI 1 year Expiration

1.2. Generate or Import Root CA

# Generate a new root CA
vault write -field=certificate pki/root/generate/internal \
    common_name="Root CA" \
    ttl=87600h > root_ca.crt
Hashicorp Vault Root CA
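
To confirm the root CA looks right, you can inspect the PEM file the command above wrote out. This is just a sanity check and assumes openssl is available on your workstation:

# Show the subject, issuer and validity window of the new root CA
openssl x509 -in root_ca.crt -noout -subject -issuer -dates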

1.3. Configure PKI URLs

# Configure the CA and CRL URLs
vault write pki/config/urls \
    issuing_certificates="http://vault.example.com:8200/v1/pki/ca" \
    crl_distribution_points="http://vault.example.com:8200/v1/pki/crl"

Issuing and Certificate Request Links

1.4. Create an Intermediate CA

Hashicorp Intermediate Certificate Authority
# Enable the intermediate PKI secrets engine
vault secrets enable -path=pki_int pki

# Set the maximum TTL for the intermediate CA
vault secrets tune -max-lease-ttl=43800h pki_int

# Generate a CSR for the intermediate CA
vault write -format=json pki_int/intermediate/generate/internal \
    common_name="Intermediate CA" \
    ttl=43800h > pki_intermediate.json

# Extract the CSR
cat pki_intermediate.json | jq -r '.data.csr' > pki_intermediate.csr

# Sign the intermediate CSR with the root CA
vault write -format=json pki/root/sign-intermediate \
    csr=@pki_intermediate.csr \
    format=pem_bundle \
    ttl=43800h > intermediate_cert.json

# Extract the signed certificate
cat intermediate_cert.json | jq -r '.data.certificate' > intermediate.cert.pem

# Import the signed certificate back into Vault
vault write pki_int/intermediate/set-signed \
    certificate=@intermediate.cert.pem
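
Before moving on, it's worth checking that the signed intermediate actually chains back to the root. A quick sanity check with openssl, assuming both PEM files are still in your working directory:

# Verify the intermediate certificate against the root CA
openssl verify -CAfile root_ca.crt intermediate.cert.pem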

1.5. Create a Role for Certificate Issuance

# Create a role for issuing certificates
vault write pki_int/roles/your-domain-role \
    allowed_domains="yourdomain.com" \
    allow_subdomains=true \
    allow_bare_domains=true \
    allow_wildcard_certificates=true \
    max_ttl=720h

Hashicorp PKI Role
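
With the role in place, you can issue a test certificate straight from the Vault CLI before wiring up cert-manager. The common name below is just an example under the allowed domain:

# Issue a short-lived test certificate from the intermediate CA
vault write pki_int/issue/your-domain-role \
    common_name="test.yourdomain.com" \
    ttl=24h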

2. Configure Kubernetes Authentication in Vault

2.1. Enable Kubernetes Auth Method

# Enable the Kubernetes auth method
vault auth enable kubernetes

2.2. Configure Kubernetes Auth Method

# Get the Kubernetes API address
KUBE_API="https://kubernetes.default.svc.cluster.local"

# Get the CA certificate used by Kubernetes
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)

# Get the JWT token for the Vault SA
KUBE_TOKEN=$(kubectl create token vault-auth)

# Configure the Kubernetes auth method in Vault
vault write auth/kubernetes/config \
    kubernetes_host="$KUBE_API" \
    kubernetes_ca_cert="$KUBE_CA_CERT" \
    token_reviewer_jwt="$KUBE_TOKEN" \
    issuer="https://kubernetes.default.svc.cluster.local"
Hashicorp Kubernetes Auth Method
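
The kubectl create token vault-auth command above assumes a vault-auth service account that is allowed to call the TokenReview API. If you don't have one yet, a minimal sketch looks like this (the default namespace is just what I use in the lab):

# Create the token reviewer service account and grant it TokenReview access
kubectl create serviceaccount vault-auth -n default

kubectl create clusterrolebinding vault-auth-tokenreview \
    --clusterrole=system:auth-delegator \
    --serviceaccount=default:vault-auth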

2.3. Create Policy for Certificate Issuance

# Create a policy file
cat > pki-policy.hcl << EOF
# Read and list access to PKI endpoints
path "pki_int/*" {
  capabilities = ["read", "list"]
}

# Allow creating certificates
path "pki_int/sign/your-domain-role" {
  capabilities = ["create", "update"]
}

path "pki_int/issue/your-domain-role" {
  capabilities = ["create"]
}
EOF

# Create the policy in Vault
vault policy write pki-policy pki-policy.hcl
Hashicorp Vault PKI Policy

2.4. Create Kubernetes Auth Role

# Create a role that maps a Kubernetes service account to Vault policies (Created next)
vault write auth/kubernetes/role/cert-manager \
    bound_service_account_names="issuer" \
    bound_service_account_namespaces="default" \
    policies="pki-policy" \
    ttl=1h
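
You can validate the role end to end by logging in the same way cert-manager will. The issuer service account is created in the next section, so run this after that step; a successful login returns a Vault token with the pki-policy attached:

# Simulate the login cert-manager will perform
ISSUER_JWT=$(kubectl create token issuer -n default)
vault write auth/kubernetes/login role=cert-manager jwt="$ISSUER_JWT"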

3. Configure cert-manager to Use Vault

3.1. Create Service Account for cert-manager

# Create a file named cert-manager-vault-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: issuer
  namespace: default

Apply the manifest:

kubectl apply -f cert-manager-vault-sa.yaml

3.2. Create Issuer Resource

# Create a file named vault-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault-system.svc.cluster.local:8200
    path: pki_int/sign/your-domain-role
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        serviceAccountRef:
          name: issuer

Apply the manifest:

kubectl apply -f vault-issuer.yaml
Kubernetes Cert Manager Issuer
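
A quick way to confirm the Issuer authenticated against Vault is to check its Ready condition:

# The READY column should show True once the Vault auth handshake succeeds
kubectl get issuer vault-issuer -n default
kubectl describe issuer vault-issuer -n default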

4. Request Certificates

4.1. Direct Certificate Request

# Create a file named certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  secretName: example-tls
  issuerRef:
    name: vault-issuer
  commonName: app.yourdomain.com
  dnsNames:
  - app.yourdomain.com

Apply the manifest:

kubectl apply -f certificate.yaml
Kubernetes Certs from Hashicorp Vault

4.2. Using Ingress for Certificate Request

# Create a file named secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    cert-manager.io/issuer: "vault-issuer"
spec:
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: example-tls
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80

Apply the manifest:

kubectl apply -f secure-ingress.yaml

5. Troubleshooting

5.1. Common Issues and Solutions

Cannot find cert issuer

The Issuer was deployed to a specific namespace, so if you are creating an Ingress in a different namespace you have a few options:

  • Create a cluster issuer which is not restricted to a namespace
  • Create a duplicate issuer in the specific namespace
  • Create an ExternalName Service that bridges to the actual Service.
Kubernetes ExternalName Bridge
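
If you go with the first option, the ClusterIssuer spec mirrors the namespaced Issuer from section 3.2. Here's a minimal sketch applied via a heredoc (the name is arbitrary):

# A cluster-scoped issuer usable from any namespace
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-cluster-issuer
spec:
  vault:
    server: http://vault.vault-system.svc.cluster.local:8200
    path: pki_int/sign/your-domain-role
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        serviceAccountRef:
          name: issuer
EOF

Keep in mind that for a ClusterIssuer, cert-manager resolves serviceAccountRef in its cluster resource namespace (cert-manager by default), so you may need to create the issuer service account there and add that namespace to the Vault auth role's bound_service_account_namespaces.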

Permission Denied

If you see permission denied errors:

  • Check that your Vault policy includes the correct paths
  • Verify that the role binding is correct in Vault
  • Ensure the service account has the necessary permissions
# Check the Vault policy
vault policy read pki-policy

# Verify the role binding
vault read auth/kubernetes/role/cert-manager

Domain Not Allowed

If you see common name not allowed by this role errors:

  • Update your Vault PKI role to allow the domain:
vault write pki_int/roles/your-domain-role \
    allowed_domains="yourdomain.com" \
    allow_subdomains=true \
    allow_bare_domains=true \
    allow_wildcard_certificates=true

Certificate Expiry Issues

If your certificate would expire after the CA certificate:

  • Adjust the max TTL to be shorter than your CA expiration:
vault write pki_int/roles/your-domain-role \
    max_ttl="30d"

Issuer Annotation Issues

If multiple controllers are fighting for the certificate request:

  • Check that you're using the correct annotation:
    • For namespaced Issuers: cert-manager.io/issuer
    • For ClusterIssuers: cert-manager.io/cluster-issuer

5.2. Checking Certificate Status

# Check certificate status
kubectl describe certificate example-cert

# Check certificate request status
kubectl get certificaterequest

# Check cert-manager logs
kubectl logs -n cert-manager deploy/cert-manager

# Check if the secret was created
kubectl get secret example-tls

6. Best Practices

  1. Certificate Rotation: Set appropriate TTLs and let cert-manager handle rotation
  2. Secure Vault Access: Restrict access to Vault and use dedicated service accounts
  3. Monitor Expirations: Set up alerts for certificate expirations
  4. CA Renewals: Plan for CA certificate renewals well in advance
  5. Backup: Regularly backup your Vault PKI configuration and CA certificates
  6. Audit Logging: Enable audit logging in Vault to track certificate operations

7. Maintenance and Operations

7.1. Renewing the CA Certificate

Before your CA certificate expires, you'll need to renew it:

# Check when your CA certificate expires
vault read pki_int/cert/ca

# Plan and execute your CA renewal process well before expiration

7.2. Rotating Credentials

Periodically rotate your Kubernetes auth credentials:

# Update the JWT token used by Vault
KUBE_TOKEN=$(kubectl create token vault-auth)
vault write auth/kubernetes/config \
    token_reviewer_jwt="$KUBE_TOKEN"

Issues

  1. Your ingresses need to be in the same namespace as the issuer
    1. Create an external service as bridge

Conclusion

You now have a fully functional PKI system using HashiCorp Vault integrated with cert-manager in Kubernetes. This setup automatically issues, manages, and renews TLS certificates for your applications, enhancing security and reducing operational overhead.


Containers In the Cloud

Deploying Azure Functions in Containers to Azure Container Apps - like a boss!!!

Introduction

In today's cloud-native world, containerization has become a fundamental approach for deploying applications. Azure Functions can be packaged into Docker containers, which means we can also run them on Kubernetes. One compelling option is Azure Container Apps (ACA), which provides a fully managed Kubernetes-based environment with powerful features specifically designed for microservices and containerized applications.

Azure Container Apps is powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy. It supports Kubernetes-style apps and microservices with features like service discovery and traffic splitting while enabling event-driven application architectures. This makes it an excellent choice for deploying containerized Azure Functions.

This blog post explores how to deploy Azure Functions in containers to Azure Container Apps, with special focus on the benefits of Envoy for traffic management, revision handling, and logging capabilities for troubleshooting.

Video Demo:

Why Deploy Azure Functions to Container Apps?

Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any language stack supported by Functions and use the same Functions triggers and bindings with event-driven scaling.

Key advantages include:

  1. Containerization flexibility: Package your functions with custom dependencies and runtime environments for Dev, QA, STG and PROD
  2. Kubernetes-based infrastructure: Get the benefits of Kubernetes without managing the complexity
  3. Microservices architecture support: Deploy functions as part of a larger microservices ecosystem
  4. Advanced networking: Take advantage of virtual network integration and service discovery

Benefits of Envoy in Azure Container Apps

Azure Container Apps includes a built-in Ingress controller running Envoy. You can use this to expose your application to the outside world and automatically get a URL and an SSL certificate. Envoy brings several significant benefits to your containerized Azure Functions:

1. Advanced Traffic Management

Envoy serves as the backbone of ACA's traffic management capabilities, allowing for:

  • Intelligent routing: Route traffic based on paths, headers, and other request attributes
  • Load balancing: Distribute traffic efficiently across multiple instances
  • Protocol support: Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades the connection if the client requires it.

2. Built-in Security

  • TLS termination: Automatic handling of HTTPS traffic with Azure managed certificates
  • mTLS support: Azure Container Apps supports peer-to-peer TLS encryption within the environment. Enabling this feature encrypts all network traffic within the environment with a private certificate that is valid within the Azure Container Apps environment scope; Azure Container Apps manages these certificates automatically.

3. Observability

  • Detailed metrics and logs for traffic patterns
  • Request tracing capabilities
  • Performance insights for troubleshooting

Traffic Management for Revisions

One of the most powerful features of Azure Container Apps is its handling of revisions and traffic management between them.

Understanding Revisions

Revisions are immutable snapshots of your container application at a point in time. When you upgrade your container app to a new version, you create a new revision. This allows you to have the old and new versions running simultaneously and use the traffic management functionality to direct traffic to old or new versions of the application.

Traffic Splitting Between Revisions

Traffic split is a mechanism that routes configurable percentages of incoming requests (traffic) to various downstream services. With Azure Container Apps, we can weight traffic between multiple downstream revisions.

This capability enables several powerful deployment strategies:

Blue/Green Deployments

Deploy a new version alongside the existing one, and gradually shift traffic:

  1. Deploy revision 2 (green) alongside revision 1 (blue)
  2. Initially direct a small percentage (e.g., 10%) of traffic to revision 2
  3. Monitor performance and errors
  4. Gradually increase traffic to revision 2 as confidence grows
  5. Eventually direct 100% traffic to revision 2
  6. Retire revision 1 when no longer needed

A/B Testing

Test different implementations with real users:

Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in blue-green deployments or in A/B testing. Traffic splitting is based on the weight (percentage) of traffic that is routed to each revision.

Implementation

To implement traffic splitting in Azure Container Apps:

By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable multiple revision mode in your container app, you can split incoming traffic between active revisions.

Here's how to configure it:

  1. Enable multiple revision mode:
    • In the Azure portal, go to your container app
    • Select "Revision management"
    • Set the mode to "Multiple: Several revisions active simultaneously"
    • Apply changes
  2. Configure traffic weights:
    • For each active revision, specify the percentage of traffic it should receive
    • Ensure the combined percentage equals 100%
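
If you prefer scripting this, both steps can be done with the Azure CLI containerapp commands. The app, resource group, and revision names below are placeholders, and flags can vary slightly between CLI versions:

# Switch the container app to multiple-revision mode
az containerapp revision set-mode \
    --name my-func-app --resource-group my-rg --mode multiple

# Send 90% of traffic to the old revision and 10% to the new one
az containerapp ingress traffic set \
    --name my-func-app --resource-group my-rg \
    --revision-weight my-func-app--rev1=90 my-func-app--rev2=10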

Logging and Troubleshooting

Effective logging is crucial for monitoring and troubleshooting containerized applications. Azure Container Apps provides comprehensive logging capabilities integrated with Azure Monitor.

Centralized Logging Infrastructure

Azure Container Apps environments provide centralized logging capabilities through integration with Azure Monitor and Application Insights. By default, all container apps within an environment send logs to a common Log Analytics workspace, making it easier to query and analyze logs across multiple apps.

Key Logging Benefits

  1. Unified logging experience: All container apps in an environment send logs to the same workspace
  2. Detailed container insights: Access container-specific metrics and logs
  3. Function-specific logging: You can monitor your containerized function app hosted in Container Apps using Azure Monitor Application Insights in the same way you do with apps hosted by Azure Functions.
  4. Scale event logging: For bindings that support event-driven scaling, scale events are logged as FunctionsScalerInfo and FunctionsScalerError events in your Log Analytics workspace.

Troubleshooting Best Practices

When troubleshooting issues in containerized Azure Functions running on ACA:

  1. Check application logs: Review function execution logs for errors or exceptions
  2. Monitor scale events: Identify issues with auto-scaling behavior
  3. Examine container health: Check for container startup failures or crashes
  4. Review ingress traffic: Analyze traffic patterns and routing decisions
  5. Inspect revisions: Verify that traffic is being distributed as expected between revisions

Implementation Steps

Here's the full playlist we did on YouTube to follow along: https://www.youtube.com/playlist?list=PLKwr1he0x0Dl2glbE8oHeTgdY-_wZkrhi

In Summary:

  1. Containerize your Azure Functions app:
    • Create a Dockerfile based on the Azure Functions base images
    • Build and test your container locally
    • Video demo:
  2. Push your container to a registry:
    • Push to Azure Container Registry or another compatible registry
  3. Create a Container Apps environment:
    • Set up the environment with appropriate virtual network and logging settings
  4. Deploy your function container:
    • Use Azure CLI, ARM templates, or the Azure Portal to deploy
    • Configure scaling rules, ingress settings, and revision strategy
  5. Set up traffic management:
    • Enable multiple revision mode if desired
    • Configure traffic splitting rules for testing or gradual rollout
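
For step 4, a minimal CLI deployment could look something like the following; all names and the image reference are placeholders, and ARM templates or the portal work just as well:

# Deploy the containerized function app into an existing environment
az containerapp create \
    --name my-func-app \
    --resource-group my-rg \
    --environment my-aca-env \
    --image myregistry.azurecr.io/my-func-app:1.0.0 \
    --ingress external \
    --target-port 80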

Conclusion

Deploying Azure Functions in containers to Azure Container Apps combines the best of serverless computing with the flexibility of containers and the rich features of a managed Kubernetes environment. The built-in Envoy proxy provides powerful traffic management capabilities, especially for handling multiple revisions of your application. Meanwhile, the integrated logging infrastructure simplifies monitoring and troubleshooting across all your containerized functions.

This approach is particularly valuable for teams looking to:

  • Deploy Azure Functions with custom dependencies
  • Integrate functions into a microservices architecture
  • Implement sophisticated deployment strategies like blue/green or A/B testing
  • Maintain a consistent container-based deployment strategy across all application components

By leveraging these capabilities, you can create more robust, scalable, and manageable serverless applications while maintaining the development simplicity that makes Azure Functions so powerful.


I.T. Automation with Python and Ansible

Comprehensive Guide to Upgrading Ansible via Pip with New Python Versions on Ubuntu 20.04

For system administrators and DevOps engineers using Ansible in production environments, upgrading Ansible can sometimes be challenging, especially when the new version requires a newer Python version than what's available by default in Ubuntu 20.04. This guide walks through the process of upgrading Ansible installed via pip when a new Python version is required.

Why This Matters

Ubuntu 20.04 LTS ships with Python 3.8 by default. However, newer Ansible versions may require Python 3.9, 3.10, or even newer. Since Ansible in our environment is installed via pip rather than the APT package manager, we need a careful approach to manage this transition without breaking existing automation.

Prerequisites

  • Ubuntu 20.04 LTS system
  • Sudo access
  • Existing Ansible installation via pip
  • Backup of your Ansible playbooks and configuration files

Step 1: Install the Python Repository "Snakes"

The "deadsnakes" PPA provides newer Python versions for Ubuntu. This repository allows us to install Python versions that aren't available in the standard Ubuntu repositories.

# Add the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa

# Update package lists
sudo apt update

Step 2: Install the New Python Version and Pip

Install the specific Python version required by your target Ansible version. In this example, we'll use Python 3.10, but adjust as needed.

# Install Python 3.10 and development headers
sudo apt install python3.10 python3.10-dev python3.10-venv

# Install pip for Python 3.10
curl -sS https://bootstrap.pypa.io/get-pip.py | sudo python3.10

# Verify the installation
python3.10 --version
python3.10 -m pip --version

Note: After this step, you will have different Python versions installed, and you will need to use them with the correct executable as shown above (e.g., python3.10 for Python 3.10, python3.8 for the default Ubuntu 20.04 Python).

Warning: Do not uninstall the Python version that comes with the OS (Python 3.8 in Ubuntu 20.04), as this can cause serious issues with the Ubuntu system. Many system utilities depend on this specific Python version.

Step 3: Uninstall Ansible from the Previous Python Version

Before installing the new version, remove the old Ansible installation to avoid conflicts.

# Find out which pip currently has Ansible installed
which ansible
# This will show something like /usr/local/bin/ansible or ~/.local/bin/ansible

# Check which Python version is used for the current Ansible
ansible --version
# Look for the "python version" line in the output

# Uninstall Ansible from the previous Python version
python3.8 -m pip uninstall ansible ansible-core

# If you had other Ansible-related packages, uninstall those too
python3.8 -m pip uninstall ansible-runner ansible-builder

Step 4: Install Ansible with the New Python Version

Install Ansible for both system-wide (sudo) and user-specific contexts as needed:

System-Wide Installation (sudo)

# Install Ansible system-wide with the new Python version
sudo python3.10 -m pip install ansible

# Verify the installation
ansible --version
# Confirm it shows the new Python version

User-Specific Installation (if needed)

# Install Ansible for your user with the new Python version
python3.10 -m pip install --user ansible

# Verify the installation
ansible --version

Reinstall Additional Pip Packages with the New Python Version

If you had additional pip packages installed for Ansible, reinstall them with the --force-reinstall flag to ensure they use the new Python version:

# Reinstall packages with the new Python version
sudo python3.10 -m pip install --force-reinstall ansible-runner ansible-builder

# For user-specific installations
python3.10 -m pip install --user --force-reinstall ansible-runner ansible-builder

Step 5: Update Ansible Collections

Ansible collections might need to be updated to work with the new Ansible version:

# List currently installed collections
ansible-galaxy collection list

# Update all collections
ansible-galaxy collection install --upgrade --force-with-deps <collection_name>

# Example: 
# ansible-galaxy collection install --upgrade --force-with-deps community.general
# ansible-galaxy collection install --upgrade --force-with-deps ansible.posix

Installing Collection Requirements

When installing pip package requirements for Ansible collections, you must use the specific Python executable with the correct version. For example:

# Incorrect (might use the wrong Python version):
sudo pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt

# Correct (explicitly using Python 3.11):
sudo python3.11 -m pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt

This ensures that the dependencies are installed for the correct Python interpreter that Ansible is using.

Consider using a requirements.yml file to manage your collections:

# requirements.yml
collections:
  - name: community.general
    version: 5.0.0
  - name: ansible.posix
    version: 1.4.0

And install them with:

ansible-galaxy collection install -r requirements.yml

Step 6: Update Jenkins Configuration (If Applicable)

If you're using Jenkins to run Ansible playbooks, you'll need to update your Jenkins configuration to use the new Python and Ansible paths:

  1. Go to Jenkins > Manage Jenkins > Global Tool Configuration
  2. Update the Ansible installation path to point to the new version:
    • For system-wide installations: /usr/local/bin/ansible (likely unchanged, but verify)
    • For user-specific installations: Update to the correct path
  3. In your Jenkins pipeline or job configuration, specify the Python interpreter path if needed:
// Jenkinsfile example
pipeline {
    agent any
    environment {
        ANSIBLE_PYTHON_INTERPRETER = '/usr/bin/python3.10'
    }
    stages {
        stage('Run Ansible') {
            steps {
                sh 'ansible-playbook -i inventory playbook.yml'
            }
        }
    }
}

Step 7: Update Ansible Configuration Files (Additional Step)

You might need to update your ansible.cfg file to specify the new Python interpreter:

# In ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3.10

This ensures that Ansible uses the correct Python version when connecting to remote hosts.

Step 8: Test Your Ansible Installation

Before relying on your upgraded Ansible for production work, test it thoroughly:

# Check Ansible version
ansible --version

# Run a simple ping test
ansible localhost -m ping

# Run a simple playbook
ansible-playbook test-playbook.yml

Troubleshooting Common Issues

Python Module Import Errors

If you encounter module import errors, ensure that all required dependencies are installed for the new Python version:

sudo python3.10 -m pip install paramiko jinja2 pyyaml cryptography

Path Issues

If running ansible command doesn't use the new version, check your PATH environment variable:

echo $PATH
which ansible

You might need to create symlinks or adjust your PATH to ensure the correct version is used.
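
For example, if an older user-level copy is shadowing the new system-wide one, something like this can sort it out (paths are illustrative, so confirm with which -a before removing anything):

# List every ansible on the PATH and confirm which one wins
which -a ansible
ansible --version

# If an old user-level copy shadows the new one, remove it and clear the shell's cache
rm -i ~/.local/bin/ansible ~/.local/bin/ansible-playbook
hash -r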

Collection Compatibility

Some collections may not be compatible with the new Ansible or Python version. Check the documentation for your specific collections.

Conclusion

Upgrading Ansible when a new Python version is required involves several careful steps to ensure all components work together smoothly. By following this guide, you should be able to successfully upgrade your Ansible installation while minimizing disruption to your automation workflows.

Remember to always test in a non-production environment first, and maintain backups of your configuration and playbooks before making significant changes.

Happy automating!


🚀 Mastering Azure Functions in Docker: Secure Your App with Function Keys! 🔒

In this session, we’re merging the robust capabilities of Azure Functions with the versatility of Docker containers.

By the end of this tutorial, you will have a secure and scalable process for deploying your Azure Functions within Docker, equipped with function keys to ensure security.

Why use Azure Functions inside Docker?

Serverless architecture allows you to run code without provisioning or managing servers. Azure Functions take this concept further by providing a fully managed compute platform. Docker, on the other hand, offers a consistent development environment, making it easy to deploy your applications across various environments. Together, they create a robust and efficient way to develop and deploy serverless applications. Later we will deploy this container to our local Kubernetes cluster and to Azure Container Apps.

Development

The Azure Functions Core tools make it easy to package your function into a container with a single command:

func init MyFunctionApp --docker

The command creates the Dockerfile and supporting JSON files for running your function inside a container; all you need to do is add your code and dependencies. Since we are building a Python function, we will add our Python libraries to requirements.txt.

Using Function Keys for Security

Create a host_secrets.json file in the root of your function app directory. Add the following configuration to specify your function keys:

{
  "masterKey": {
    "name": "master",
    "value": "your-master-key-here"
  },
  "functionKeys": {
    "default": "your-function-key-here"
  }
}

Now this file needs to be added to the container so the function can read it. You can simply add the following to your Dockerfile and rebuild:

# Store function keys as files inside the container and point the runtime at them
RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
# Disables Python HTTPS certificate verification - fine for a lab, avoid in production
ENV PYTHONHTTPSVERIFY=0
ADD host_secrets.json /etc/secrets/host.json
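
With the secrets file baked in, rebuild and run the container locally. The image name and host port below are illustrative; the curl example further down assumes the function is reachable on port 8081:

# Rebuild the image with the secrets file included
docker build -t myfunctionapp:local .

# Map the Functions host port 80 to port 8081 on the machine
docker run -p 8081:80 myfunctionapp:local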

Testing

Now you can use the function key you set in the previous step as a query parameter for the function's endpoint in your API client.


Or you can use curl / powershell as well:

curl -X POST \
'http://192.168.1.200:8081/api/getbooks?code=XXXX000something0000XXXX' \
--header 'Accept: */*' \
--header 'User-Agent: Thunder Client (https://www.thunderclient.com)' \
--header 'Content-Type: application/json' \
--data-raw '{
"query": "Dune"
}'


Azure Functions Cartoon

Develop and Test Local Azure Functions from your IDE

Offloading code from apps is a great way to adapt a microservices architecture. If you are still making the decision of whether to create functions or just code on your app, check out the decision matrix article and some gotchas that will help you know if you should create a function or not. Since we have checked the boxes and our code is a great candidate for Azure Functions then here’s our process:

Dev Environment Setup

Azure Functions Core Tools

First thing is to install the Azure Functions Core Tools on your machine. There are many ways to install the Core Tools, and instructions can be found in the official Microsoft Learn doc here: Develop Azure Functions locally using Core Tools | Microsoft Learn. We are using Ubuntu and Python, so we did the following:

wget -q https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then:

sudo apt-get update
sudo apt-get install azure-functions-core-tools-4

After getting the core tools you can test by running

func --help

Result:

Azure Functions Core Tools
Visual Studio Code Extension
  • Go to the Extensions view by clicking the Extensions icon in the Activity Bar.
  • Search for “Azure Functions” and install the extension.
  • Open the Command Palette (F1) and select Azure Functions: Install or Update Azure Functions Core Tools.

Azure Function Fundamentals

Here are some Azure Function basics. You can write in many languages, as described in the official Microsoft Learn doc here: Supported Languages with Durable Functions Overview – Azure | Microsoft Learn. We are using Python, so here's our process.

I. Create a Python Virtual Environment to manage dependencies:

A Python virtual environment is an isolated environment that allows you to manage dependencies for your project separately from other projects. Here are the key benefits:

  1. Dependency Isolation:
    • Each project can have its own dependencies, regardless of what dependencies other projects have. This prevents conflicts between different versions of packages used in different projects.
  2. Reproducibility:
    • By isolating dependencies, you ensure that your project runs consistently across different environments (development, testing, production). This makes it easier to reproduce bugs and issues.
  3. Simplified Dependency Management:
    • You can easily manage and update dependencies for a specific project without affecting other projects. This is particularly useful when working on multiple projects simultaneously.
  4. Cleaner Development Environment:
    • Your global Python environment remains clean and uncluttered, as all project-specific dependencies are contained within the virtual environment.

Create the virtual environment simply with: python -m venv name_of_venv
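
For example, a typical setup for a Functions project might look like this (the environment name is arbitrary, and requirements.txt is where the project's dependencies live):

# Create and activate an isolated environment for the function app
python -m venv .venv
source .venv/bin/activate

# Install the project's dependencies into the virtual environment
pip install -r requirements.txt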

II. Initialization

The line app = func.FunctionApp() seen in the code snippet below is used in the context of Azure Functions for Python to create an instance of the FunctionApp class. This instance, app, serves as the main entry point for defining and managing your Azure Functions within the application. Here’s a breakdown of what it does:

  1. Initialization:
    • It initializes a new FunctionApp object, which acts as a container for your function definitions.
  2. Function Registration:
    • You use this app instance to register your individual functions. Each function is associated with a specific trigger (e.g., HTTP, Timer) and is defined using decorators.

import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="hello")
def hello_function(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        return func.HttpResponse(f"Hello, {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )

  • The @app.function_name and @app.route decorators are used to define the function’s name and route, respectively. This makes it easy to map HTTP requests to specific functions.
  • The hello_function is defined to handle HTTP requests. It extracts the name parameter from the query string or request body and returns a greeting.
  • The function returns an HttpResponse object, which is sent back to the client.

What is a Function Route?

A function route is essentially the path part of the URL that maps to your function. When an HTTP request matches this route, the function is executed. Routes are particularly useful for organizing and structuring your API endpoints.

Running The Azure Function

Once you have your code ready to go, you can test your function locally by using func start, but there are a few “gotchas” to be aware of:

1. Port Conflicts

  • By default, func start runs on port 7071. If this port is already in use by another application, you’ll encounter a conflict. You can specify a different port using the --port option:
    func start --port 8080
    

     

2. Environment Variables

  • Ensure that all necessary environment variables are set correctly. Missing or incorrect environment variables can cause your function to fail. You can use a local.settings.json file to manage these variables during local development.

3. Dependencies

  • Make sure all dependencies listed in your requirements.txt (for Python) or package.json (for Node.js) are installed. Missing dependencies can lead to runtime errors.

4. Function Proxies

  • If you’re using function proxies, ensure that the proxies.json file is correctly configured. Misconfigurations can lead to unexpected behavior or routing issues.

5. Binding Configuration

  • Incorrect or incomplete binding configurations in your function.json file can cause your function to not trigger as expected. Double-check your bindings to ensure they are set up correctly.

6. Local Settings File

  • The local.settings.json file should not be checked into source control as it may contain sensitive information. Ensure this file is listed in your .gitignore file.

7. Cold Start Delays

  • When running functions locally, you might experience delays due to cold starts, especially if your function has many dependencies or complex initialization logic.

8. Logging and Monitoring

  • Ensure that logging is properly configured to help debug issues. Use the func start command’s output to monitor logs and diagnose problems.

9. Version Compatibility

  • Ensure that the version of Azure Functions Core Tools you are using is compatible with your function runtime version. Incompatibilities can lead to unexpected errors.

10. Network Issues

  • If your function relies on external services or APIs, ensure that your local environment has network access to these services. Network issues can cause your function to fail.

11. File Changes

  • Be aware that changes to your function code or configuration files may require restarting the func start process to take effect.

12. Debugging

  • When debugging, ensure that your IDE is correctly configured to attach to the running function process. Misconfigurations can prevent you from hitting breakpoints.

By keeping these gotchas in mind, you can avoid common pitfalls and ensure a smoother development experience with Azure Functions. If you encounter any specific issues or need further assistance, feel free to ask us!

Testing and Getting Results

If your function starts and you are looking at the logs, you will see your endpoints listed as seen below. Since you wrote them, you know the paths and can start testing with your favorite API client; our favorite is Thunder Client.

Thunder Client with Azure Functions
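
If you prefer the command line, the hello route from the earlier snippet can be exercised with curl; adjust the host and port if you started the host with --port:

# Query-string variant
curl "http://localhost:7071/api/hello?name=World"

# JSON body variant
curl -X POST "http://localhost:7071/api/hello" \
    -H "Content-Type: application/json" \
    -d '{"name": "World"}'
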
The Response

In Azure Functions, an HTTP response is what your function sends back to the client after processing an HTTP request. Here are the basics:

  1. Status Code:
    • The status code indicates the result of the HTTP request. Common status codes include:
      • 200 OK: The request was successful.
      • 400 Bad Request: The request was invalid.
      • 404 Not Found: The requested resource was not found.
      • 500 Internal Server Error: An error occurred on the server.
  2. Headers:
    • HTTP headers provide additional information about the response. Common headers include:
      • Content-Type: Specifies the media type of the response (e.g., application/json or text/html).
      • Content-Length: Indicates the size of the response body.
      • Access-Control-Allow-Origin: Controls which origins are allowed to access the resource.
  3. Body:
    • The body contains the actual data being sent back to the client. This can be in various formats such as JSON, HTML, XML, or plain text. We chose JSON so we can use the different fields and values.

Conclusion

In this article, we’ve explored the process of creating your first Python Azure Function using Visual Studio Code. We covered setting up your environment, including installing Azure Functions Core Tools and the VS Code extension, which simplifies project setup, development, and deployment. We delved into the importance of using a Python virtual environment and a requirements.txt file for managing dependencies, ensuring consistency, and facilitating collaboration. Additionally, we discussed the basics of function routes and HTTP responses, highlighting how to define routes and customize responses to enhance your API’s structure and usability. By understanding these fundamentals, you can efficiently develop, test, and deploy serverless applications on Azure, leveraging the full potential of Azure Functions. Happy coding!


Django Microservices Approach with Azure Functions on Azure Container Apps

We are creating a multi-part video to explain Azure Functions running on Azure Container Apps so that we can offload some of the code out of our Django App and build our infrastructure with a microservice approach. Here’s part one and below the video a quick high-level explanation for this architecture.

Azure Functions are serverless computing units within Azure that allow you to run event-driven code without having to manage servers. They’re a great choice for building microservices due to their scalability, flexibility, and cost-effectiveness.

Azure Container Apps provide a fully managed platform for deploying and managing containerized applications. By deploying Azure Functions as containerized applications on Container Apps, you gain several advantages:

  1. Microservices Architecture:

    • Decoupling: Each function becomes an independent microservice, isolated from other parts of your application. This makes it easier to develop, test, and deploy them independently.
    • Scalability: You can scale each function individually based on its workload, ensuring optimal resource utilization.
    • Resilience: If one microservice fails, the others can continue to operate, improving the overall reliability of your application.
  2. Containerization:

    • Portability: Containerized functions can be easily moved between environments (development, testing, production) without changes.
    • Isolation: Each container runs in its own isolated environment, reducing the risk of conflicts between different functions.
    • Efficiency: Containers are optimized for resource utilization, making them ideal for running functions on shared infrastructure.
  3. Azure Container Apps Benefits:

    • Managed Service: Azure Container Apps handles the underlying infrastructure, allowing you to focus on your application’s logic.
    • Scalability: Container Apps automatically scale your functions based on demand, ensuring optimal performance.
    • Integration: It seamlessly integrates with other Azure services, such as Azure Functions, Azure App Service, and Azure Kubernetes Service.

In summary, Azure Functions deployed on Azure Container Apps provide a powerful and flexible solution for building microservices. By leveraging the benefits of serverless computing, containerization, and a managed platform, you can create scalable, resilient, and efficient applications.

Stay tuned for part 2


Deploying Azure Functions with Azure DevOps: 3 Must-Dos! Code Security Included

Azure Functions is a serverless compute service that allows you to run your code in response to various events, without the need to manage any infrastructure. Azure DevOps, on the other hand, is a set of tools and services that help you build, test, and deploy your applications more efficiently. Combining these two powerful tools can streamline your Azure Functions deployment process and ensure a smooth, automated workflow.

In this blog post, we’ll explore three essential steps to consider when deploying Azure Functions using Azure DevOps.

1. Ensure Consistent Python Versions

When working with Azure Functions, it’s crucial to ensure that the Python version used in your build pipeline matches the Python version configured in your Azure Function. Mismatched versions can lead to unexpected runtime errors and deployment failures.

To ensure consistency, follow these steps:

  1. Determine the Python version required by your Azure Function. You can find this information in the requirements.txt file or the host.json file in your Azure Functions project.
  2. In your Azure DevOps pipeline, use the UsePythonVersion task to set the Python version to match the one required by your Azure Function.
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true
  3. Verify the Python version in your pipeline by running python --version and ensuring it matches the version specified in the previous step.

2. Manage Environment Variables Securely

Azure Functions often require access to various environment variables, such as database connection strings, API keys, or other sensitive information. When deploying your Azure Functions using Azure DevOps, it’s essential to handle these environment variables securely.

Here’s how you can approach this:

  1. Store your environment variables as Azure DevOps Service Connections or Azure Key Vault Secrets.
  2. In your Azure DevOps pipeline, use the appropriate task to retrieve and set the environment variables. For example, you can use the AzureKeyVault task to fetch secrets from Azure Key Vault.
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    KeyVaultName: 'your-keyvault-name'
    SecretsFilter: '*'
    RunAsPreJob: false
  3. Ensure that your pipeline has the necessary permissions to access the Azure Key Vault or Service Connections.

3. Implement Continuous Integration and Continuous Deployment (CI/CD)

To streamline the deployment process, it’s recommended to set up a CI/CD pipeline in Azure DevOps. This will automatically build, test, and deploy your Azure Functions whenever changes are made to your codebase.

Here’s how you can set up a CI/CD pipeline:

  1. Create an Azure DevOps Pipeline and configure it to trigger on specific events, such as a push to your repository or a pull request.
  2. In the pipeline, include steps to build, test, and package your Azure Functions project.
  3. Add a deployment task to the pipeline to deploy your packaged Azure Functions to the target Azure environment.
# CI/CD pipeline
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

- script: |
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- task: AzureWebApp@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    appName: 'your-function-app-name'
    appType: 'functionApp'
    deployToSlotOrASE: true
    resourceGroupName: 'your-resource-group-name'
    slotName: 'production'

By following these three essential steps, you can ensure a smooth and reliable deployment of your Azure Functions using Azure DevOps, maintaining consistency, security, and automation throughout the process.

Bonus: Embrace DevSecOps with Code Security Checks

As part of your Azure DevOps pipeline, it’s crucial to incorporate security checks to ensure the integrity and safety of your code. This is where the principles of DevSecOps come into play, where security is integrated throughout the software development lifecycle.

Here’s how you can implement code security checks in your Azure DevOps pipeline:

  1. Use Bandit for Python Code Security: Bandit is a popular open-source tool that analyzes Python code for common security issues. You can integrate Bandit into your Azure DevOps pipeline to automatically scan your Azure Functions code for potential vulnerabilities.
- script: |
    pip install bandit
    bandit -r your-functions-directory -f custom -o bandit_report.json
  displayName: 'Run Bandit Security Scan'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'bandit_report.json'
    ArtifactName: 'bandit-report'
    publishLocation: 'Container'
  2. Leverage the Safety Tool for Dependency Scanning: Safety is another security tool that checks your Python dependencies for known vulnerabilities. Integrate this tool into your Azure DevOps pipeline to ensure that your Azure Functions are using secure dependencies.
- script: |
    pip install safety
    safety check --full-report
  displayName: 'Run Safety Dependency Scan'
  3. Review Security Scan Results: After running the Bandit and Safety scans, review the generated reports and address any identified security issues before deploying your Azure Functions. You can publish the reports as build artifacts in Azure DevOps for easy access and further investigation.

By incorporating these DevSecOps practices into your Azure DevOps pipeline, you can ensure that your Azure Functions are not only deployed efficiently but also secure and compliant with industry best practices.


Building Windows Servers with Hashicorp Packer + Terraform on Oracle Cloud Infrastructure OCI

Oracle Cloud Infrastructure

In today’s dynamic IT landscape, platform engineers juggle a diverse array of cloud technologies to cater to specific client needs. Among these, Oracle Cloud Infrastructure (OCI) is rapidly gaining traction due to its competitive pricing for certain services. However, navigating the intricacies of each cloud can present a significant learning curve. This is where cloud-agnostic tools like Terraform and Packer shine. By abstracting away the underlying APIs and automating repetitive tasks, they empower us to leverage OCI’s potential without getting bogged down in vendor-specific complexities.

 

In this article I show you how to get started with Oracle Cloud by using Packer and Terraform for Windows servers, and this can be used for other Infrastructure as code tasks.

Oracle Cloud Infrastructure Configs

OCI Keys for API Use

Oracle OCI with Packer and Terraform config

Prerequisite: Before you generate a key pair, create the .oci directory in your home directory to store the credentials. See SDK and CLI Configuration File for more details.

  1. View the user’s details:
    • If you’re adding an API key for yourself:

      Open the Profile menu and click My profile.

    • If you're an administrator adding an API key for another user: Open the navigation menu and click Identity & Security. Under Identity, click Users. Locate the user in the list, and then click the user's name to view the details.
  2. In the Resources section at the bottom left, click API Keys
  3. Click Add API Key at the top left of the API Keys list. The Add API Key dialog displays.
  4. Click Download Private Key and save the key to your .oci directory. In most cases, you do not need to download the public key.

    Note: If your browser downloads the private key to a different directory, be sure to move it to your .oci directory.

  5. Click Add.

    The key is added and the Configuration File Preview is displayed. The file snippet includes required parameters and values you’ll need to create your configuration file. Copy and paste the configuration file snippet from the text box into your ~/.oci/config file. (If you have not yet created this file, see SDK and CLI Configuration File for details on how to create one.)

    After you paste the file contents, you’ll need to update the key_file parameter to the location where you saved your private key file.

    If your config file already has a DEFAULT profile, you’ll need to do one of the following:

    • Replace the existing profile and its contents.
    • Rename the existing profile.
    • Rename this profile to a different name after pasting it into the config file.
  6. Update the permissions on your downloaded private key file so that only you can view it:
    1. Go to the .oci directory where you placed the private key file.
    2. Use the command chmod go-rwx ~/.oci/<oci_api_keyfile>.pem to set the permissions on the file.
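
For reference, the finished ~/.oci/config ends up looking roughly like this; every value below is a placeholder, so use the snippet the console generated for you and skip the cat if you already pasted it:

# Write the profile if you have not pasted the console snippet already
cat > ~/.oci/config <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaexample
fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
tenancy=ocid1.tenancy.oc1..aaaaaaaaexample
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
EOF

# Lock the config down the same way as the key
chmod 600 ~/.oci/config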

Network

Make sure to allow WinRM and RDP so that Packer can configure the VM and turn it into an image, and so that you can RDP to the server after it's created.

Allow WinRM in Oracle Cloud Infrastructure

Packer Configuration & Requirements

Install the packer OCI plugin on the host running packer

$ packer plugins install github.com/hashicorp/oracle

Packer Config

  1. Configure your source
    1. Availability domain: oci iam availability-domain list
  2. Get your base image (Drivers Included)
    1. With the OCI cli: oci compute image list --compartment-id "ocid#.tenancy.XXXX" --operating-system "Windows" | grep -e 2019 -e ocid1
  3. Point to config file that has the OCI Profile we downloaded in the previous steps.
  4. WinRM Config
  5. User Data (Bootstrap)
    1. You must set the password to not be changed at next logon so that packer can connect:
    2. Code:
       #ps1_sysnative
       cmd /C 'wmic UserAccount where Name="opc" set PasswordExpires=False'
Packer config for Oracle Cloud Infrastructure

Automating Special Considerations from OCI

Images can be used to launch other instances. The instances launched from these images will include the customizations, configurations, and software installed when the image was created. For Windows we need to sysprep, but OCI has specific requirements for doing so.

Creating a generalized image from an instance will render the instance non-functional, so you should first create a custom image from the instance, and then create a new instance from the custom image. Source below

We automated their instruction by:

  1. Extract the contents of oracle-cloud_windows-server_generalize_2022-08-24.SED.EXE to your packer scripts directory
  2. Copy all files to C:\Windows\Panther
  3. Use the windows-shell provisioner in packer to run Generalize.cmd
OCI Generalize Windows Steps
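
Once the template, scripts, and the Generalize files are in place, the build itself is the usual Packer workflow; the template filename below is a placeholder for whatever you named yours:

# Validate and build the Windows image
packer validate windows-2019-oci.pkr.hcl
packer build windows-2019-oci.pkr.hcl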

Terraform Config with Oracle Cloud

  1. Configure the vars

    Oracle OCI Terraform Variables
  2. Pass the private key at runtime:
    terraform apply --var-file=oci.tfvars -var=private_key_path=~/.oci/user_2024-10-30T10_10_10.478Z.pem
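
Putting it together, the run is the standard Terraform flow, passing the key path at plan and apply time as shown above:

# Initialize the OCI provider and preview the change
terraform init
terraform plan --var-file=oci.tfvars -var=private_key_path=~/.oci/user_2024-10-30T10_10_10.478Z.pem

# Apply once the plan looks right
terraform apply --var-file=oci.tfvars -var=private_key_path=~/.oci/user_2024-10-30T10_10_10.478Z.pem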

Sources:

Sys-prepping in OCI is specific to their options here’s a link:

https://docs.oracle.com/en-us/iaas/Content/Compute/References/windowsimages.htm#Windows_Generalized_Image_Support_Files

Other Sources:

https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#apisigningkey_topic_How_to_Generate_an_API_Signing_Key_Console

https://github.com/hashicorp/packer/issues/7033

https://github.com/hashicorp/packer-plugin-oracle/tree/main/docs


Server Scared of Downtime

Avoid Full Downtime on Auto-Scaled Environments by Only Targeting New Instances with Ansible and Github Actions!

Servers are scared of downtime!

The following Ansible playbook provides a simple but powerful way to compare instance uptime to a threshold “scale_time” variable that you can set in your pipeline variables in Github. By checking the uptime, you can selectively run tasks only on machines newer than that time period you set to avoid downtime on the rest.

Of course, the purpose of Ansible is to be idempotent, but sometimes during testing we need to isolate servers so that we don't affect all of them, especially when using dynamic inventories.

Solution: The Playbook

Ansible playbook to target only new instances in a scale set

How it works:

  1. Create a variable in Github Pipeline Variables.
  2. Set the variable at runtime:
ansible-playbook -i target_only_new_vmss.yml pm2status.yml -e "scaletime=${{ vars.SCALETIME }}"

  • The set_fact task defines the scale_time variable based on when the last scaling event occurred. This will be a timestamp.
  • The uptime command gets the current uptime of the instance. This is registered as a variable.
  • Using a conditional when statement, we only run certain tasks if the uptime is less than the scale_time threshold.
  • This allows you to selectively target new instances created after the last scale-up event.
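
The playbook itself is only shown as a screenshot above, so here is a rough sketch of the same idea. It uses the ansible_uptime_seconds fact instead of parsing the uptime command, and the file, task, and variable names are illustrative:

# Write a minimal version of the uptime gate and run it against your inventory
cat > uptime_gate.yml <<'EOF'
---
- hosts: all
  gather_facts: true
  vars:
    scaletime: 60        # threshold in minutes, normally passed in with -e
  tasks:
    - name: Work out how long this instance has been up (minutes)
      set_fact:
        uptime_minutes: "{{ (ansible_uptime_seconds | int) // 60 }}"

    - name: Only run follow-up tasks on instances newer than the threshold
      debug:
        msg: "New instance ({{ uptime_minutes }} min old) - safe to target"
      when: (uptime_minutes | int) < (scaletime | int)
EOF

ansible-playbook -i inventory uptime_gate.yml -e "scaletime=${SCALETIME}"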

Benefits:

  • Avoid unnecessary work on stable instances that don’t need updates.
  • Focus load and jobs on new machines only
  • Safer rollouts in large auto-scaled environments by targeting smaller batches.
  • Easy way to check uptime against a set point in time.


Boosting My Home Lab's Security and Performance with Virtual Apps from Kasm Containers

In the past I've worked with VDI solutions like Citrix, VMware Horizon, Azure Virtual Desktop, and others, but my favorite is Kasm. For me, Kasm has a DevOps-friendly and modern way of doing virtual apps and virtual desktops that I didn't find with other vendors.

With Kasm, apps and desktops run in isolated containers and I can access them easily with my browser, no need to install client software.

Here are my top 3 favorite features:

#1 - Runs on the Home Lab!

Kasm Workspaces can be used to create a secure and isolated environment for running applications and browsing the web in your home lab. This can help to protect your devices from malware and other threats.

The community edition is free for 5 concurrent sessions.

If you are a Systems Admin or Engineer you can use it at home for your benefit but also to get familiar with the configuration so that you are better prepared for deploying it at work.

#2 - Low Resource Utilization

Kasm container apps are lightweight and efficient, so they run quickly without consuming a lot of resources. This is especially beneficial if you have a limited amount of hardware, like on a home lab. I run mine on a small Proxmox cluster, which offloads work from my main PC. You can also set the amount of compute when configuring your containerized apps.

#3 - Security

Each application is run in its own isolated container, which prevents them from interacting with each other or with your PC. This helps to prevent malware or other threats from spreading from one application to another.

The containers could run on isolated Docker networks and with a good firewall solution you can even prevent a self-replicating trojan by segmenting your network and only allowing the necessary ports and traffic flow. Example, if running the Tor Browser containerized app you could only allow it to go outbound to the internet and block SMB (Port 445) from your internal network. If the containerized app gets infected with something like the Emotet Trojan you could be preventing it from spreading further and you could kill the isolated container without having to shutdown or reformatting your local computer.

Code Vulnerability scanning: You can scan your container images in your CI/CD pipelines for vulnerabilities, which helps to identify and fix security weaknesses before you deploy them and before they can be exploited.