Secure HTTP Traffic with HashiCorp Vault as your PKI + Cert Manager in Kubernetes - Deep Dive!
Here's a deep-dive technical guide to configuring HashiCorp Vault as a private Certificate Authority (PKI) and integrating it with cert-manager in Kubernetes to automate certificate management. I've configured this in production environments, but for the purposes of this demo I'm implementing it in my lab so that my internal apps get HTTPS encryption in transit. Several of my internal apps already serve certificates issued this way, and they resolve only to private network addresses.
Main Benefits:
- Centralized PKI Infrastructure: Vault provides a centralized solution for managing your entire certificate lifecycle. Instead of managing certificates across different applications and services, Vault acts as a single source of truth for all your PKI needs. This centralization simplifies management, improves security posture, and ensures consistent certificate policies across your organization.
- Dynamic Certificate Issuance and Rotation: Vault can automatically issue short-lived certificates and rotate them before expiration. When integrated with cert-manager in Kubernetes, this automation eliminates the manual certificate renewal process that often leads to outages from expired certificates. The system can continuously issue, renew, and rotate certificates without human intervention.
- Fine-grained Access Control: Vault's advanced policy system allows you to implement precise access controls around who can issue what types of certificates. You can limit which teams or services can request certificates for specific domains, restrict certificate lifetimes based on risk profiles, and implement comprehensive audit logging. This helps enforce the principle of least privilege across your certificate infrastructure.
An additional benefit is Vault's broader secret management capabilities – the same tool managing your certificates can also handle database credentials, API keys, and other sensitive information, giving you a unified approach to secrets management.
Prerequisites
- A DNS Server (I use my firewall)
- A running Kubernetes cluster (I am using microk8s)
- Vault server installed and initialized (I deployed it with the official HashiCorp Helm chart, vault 0.30.0)
- cert-manager installed in your Kubernetes cluster (microk8s addon)
- Administrative access to both Vault and Kubernetes
See my homelab diagram on GitHub: mdf-ido/mdf-ido.
1. Configure Vault as a PKI
1.1. Enable the PKI Secrets Engine
# Enable the PKI secrets engine
vault secrets enable pki

# Configure the PKI secrets engine with a longer max lease TTL (10 years, to accommodate the root CA below)
vault secrets tune -max-lease-ttl=87600h pki

1.2. Generate or Import Root CA
# Generate a new root CA
vault write -field=certificate pki/root/generate/internal \
common_name="Root CA" \
ttl=87600h > root_ca.crt
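Because this root CA is private, clients won't trust it automatically. If you want your lab machines to trust certificates chained to it, you can add root_ca.crt to the OS trust store; a minimal sketch for Debian/Ubuntu hosts (the destination filename is arbitrary):
# Optional: trust the private root CA on a Debian/Ubuntu client
sudo cp root_ca.crt /usr/local/share/ca-certificates/homelab-root-ca.crt
sudo update-ca-certificates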

1.3. Configure PKI URLs
# Configure the CA and CRL URLs
vault write pki/config/urls \
issuing_certificates="http://vault.example.com:8200/v1/pki/ca" \
crl_distribution_points="http://vault.example.com:8200/v1/pki/crl"

1.4. Create an Intermediate CA

# Enable the intermediate PKI secrets engine
vault secrets enable -path=pki_int pki
# Set the maximum TTL for the intermediate CA
vault secrets tune -max-lease-ttl=43800h pki_int
# Generate a CSR for the intermediate CA
vault write -format=json pki_int/intermediate/generate/internal \
common_name="Intermediate CA" \
ttl=43800h > pki_intermediate.json
# Extract the CSR
cat pki_intermediate.json | jq -r '.data.csr' > pki_intermediate.csr
# Sign the intermediate CSR with the root CA
vault write -format=json pki/root/sign-intermediate \
csr=@pki_intermediate.csr \
format=pem_bundle \
ttl=43800h > intermediate_cert.json
# Extract the signed certificate
cat intermediate_cert.json | jq -r '.data.certificate' > intermediate.cert.pem
# Import the signed certificate back into Vault
vault write pki_int/intermediate/set-signed \
certificate=@intermediate.cert.pem
1.5. Create a Role for Certificate Issuance
# Create a role for issuing certificates
vault write pki_int/roles/your-domain-role \
allowed_domains="yourdomain.com" \
allow_subdomains=true \
allow_bare_domains=true \
allow_wildcard_certificates=true \
max_ttl=720h
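Before wiring this into cert-manager, it's worth sanity-checking the role by issuing a test certificate straight from Vault (the hostname below is just an example):
# Issue a short-lived test certificate to confirm the role and CA chain work
vault write pki_int/issue/your-domain-role \
common_name="test.yourdomain.com" \
ttl="24h"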

2. Configure Kubernetes Authentication in Vault
2.1. Enable Kubernetes Auth Method
# Enable the Kubernetes auth method
vault auth enable kubernetes
2.2. Configure Kubernetes Auth Method
# Get the Kubernetes API address
KUBE_API="https://kubernetes.default.svc.cluster.local"
# Get the CA certificate used by Kubernetes
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
# Get the JWT token for the Vault SA
KUBE_TOKEN=$(kubectl create token vault-auth)
# Configure the Kubernetes auth method in Vault
vault write auth/kubernetes/config \
kubernetes_host="$KUBE_API" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
token_reviewer_jwt="$KUBE_TOKEN" \
issuer="https://kubernetes.default.svc.cluster.local"

2.3. Create Policy for Certificate Issuance
# Create a policy file
cat > pki-policy.hcl << EOF
# Read and list access to PKI endpoints
path "pki_int/*" {
capabilities = ["read", "list"]
}
# Allow creating certificates
path "pki_int/sign/your-domain-role" {
capabilities = ["create", "update"]
}
path "pki_int/issue/your-domain-role" {
capabilities = ["create"]
}
EOF
# Create the policy in Vault
vault policy write pki-policy pki-policy.hcl

2.4. Create Kubernetes Auth Role
# Create a role that maps a Kubernetes service account to Vault policies (the service account itself is created in the next section)
vault write auth/kubernetes/role/cert-manager \
bound_service_account_names="issuer" \
bound_service_account_namespaces="default" \
policies="pki-policy" \
ttl=1h
3. Configure cert-manager to Use Vault
3.1. Create Service Account for cert-manager
# Create a file named cert-manager-vault-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: issuer
  namespace: default
Apply the manifest:
kubectl apply -f cert-manager-vault-sa.yaml
3.2. Create Issuer Resource
# Create a file named vault-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault-system.svc.cluster.local:8200
    path: pki_int/sign/your-domain-role
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        serviceAccountRef:
          name: issuer
Apply the manifest:
kubectl apply -f vault-issuer.yaml
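You can confirm the Issuer authenticated to Vault before requesting certificates (READY should show True; any Vault login failure appears in the status and events):
# Verify the issuer is ready
kubectl get issuer vault-issuer -n default
kubectl describe issuer vault-issuer -n default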

4. Request Certificates
4.1. Direct Certificate Request
# Create a file named certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  secretName: example-tls
  issuerRef:
    name: vault-issuer
  commonName: app.yourdomain.com
  dnsNames:
    - app.yourdomain.com
Apply the manifest:
kubectl apply -f certificate.yaml

4.2. Using Ingress for Certificate Request
# Create a file named secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    cert-manager.io/issuer: "vault-issuer"
spec:
  tls:
    - hosts:
        - app.yourdomain.com
      secretName: example-tls
  rules:
    - host: app.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
Apply the manifest:
kubectl apply -f secure-ingress.yaml
5. Troubleshooting
5.1. Common Issues and Solutions
Cannot find cert issuer
An Issuer is a namespaced resource, so if your Ingress lives in a different namespace than the Issuer you have a few options:
- Create a ClusterIssuer, which is not restricted to a namespace
- Create a duplicate Issuer in that namespace
- Create an ExternalName Service in the Issuer's namespace that bridges to the actual Service (see the sketch below)
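For the last option, here's a minimal sketch of an ExternalName Service acting as the bridge (names and namespaces are placeholders; adjust them to your apps):
# Bridge a Service from another namespace into the Issuer's namespace
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-app            # the name your Ingress backend references
  namespace: default           # namespace where the Issuer and Ingress live
spec:
  type: ExternalName
  externalName: example-app.other-namespace.svc.cluster.local
EOF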

Permission Denied
If you see permission denied errors:
- Check that your Vault policy includes the correct paths
- Verify that the role binding is correct in Vault
- Ensure the service account has the necessary permissions
# Check the Vault policy
vault policy read pki-policy
# Verify the role binding
vault read auth/kubernetes/role/cert-manager
Domain Not Allowed
If you see common name not allowed by this role errors:
- Update your Vault PKI role to allow the domain:
vault write pki_int/roles/your-domain-role \
allowed_domains="yourdomain.com" \
allow_subdomains=true \
allow_bare_domains=true \
allow_wildcard_certificates=true
Certificate Expiry Issues
If your certificate would expire after the CA certificate:
- Adjust the max TTL to be shorter than your CA expiration:
vault write pki_int/roles/your-domain-role \
max_ttl="30d"
Issuer Annotation Issues
If multiple controllers are fighting over the certificate request:
- Check that you're using the correct annotation:
  - For namespaced Issuers: cert-manager.io/issuer
  - For ClusterIssuers: cert-manager.io/cluster-issuer
5.2. Checking Certificate Status
# Check certificate status
kubectl describe certificate example-cert
# Check certificate request status
kubectl get certificaterequest
# Check cert-manager logs
kubectl logs -n cert-manager deploy/cert-manager   # the deployment name may differ depending on how cert-manager was installed
# Check if the secret was created
kubectl get secret example-tls
6. Best Practices
- Certificate Rotation: Set appropriate TTLs and let cert-manager handle rotation
- Secure Vault Access: Restrict access to Vault and use dedicated service accounts
- Monitor Expirations: Set up alerts for certificate expirations
- CA Renewals: Plan for CA certificate renewals well in advance
- Backup: Regularly backup your Vault PKI configuration and CA certificates
- Audit Logging: Enable audit logging in Vault to track certificate operations
7. Maintenance and Operations
7.1. Renewing the CA Certificate
Before your CA certificate expires, you'll need to renew it:
# Check when your CA certificate expires
vault read pki_int/cert/ca
# Plan and execute your CA renewal process well before expiration
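The PEM returned by pki_int/cert/ca doesn't print dates directly, so piping it through openssl is a quick way to see the expiry (this assumes openssl is available wherever you run the Vault CLI):
# Print the intermediate CA's expiry date
vault read -field=certificate pki_int/cert/ca | openssl x509 -noout -enddate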
7.2. Rotating Credentials
Periodically rotate your Kubernetes auth credentials:
# Update the JWT token used by Vault
KUBE_TOKEN=$(kubectl create token vault-auth)
vault write auth/kubernetes/config \
token_reviewer_jwt="$KUBE_TOKEN"
Known Issues
- Ingresses need to be in the same namespace as a namespaced Issuer
- An ExternalName Service can be used as a bridge across namespaces (see section 5.1)
Conclusion
You now have a fully functional PKI system using HashiCorp Vault integrated with cert-manager in Kubernetes. This setup automatically issues, manages, and renews TLS certificates for your applications, enhancing security and reducing operational overhead.
Deploying Azure Functions in Containers to Azure Container Apps - like a boss!!!
Introduction
In today's cloud-native world, containerization has become a fundamental approach for deploying applications. Azure Functions can be packaged as Docker containers, which means we can run them on Kubernetes. One compelling option is Azure Container Apps (ACA), which provides a fully managed Kubernetes-based environment with powerful features specifically designed for microservices and containerized applications.
Azure Container Apps is powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy. It supports Kubernetes-style apps and microservices with features like service discovery and traffic splitting while enabling event-driven application architectures. This makes it an excellent choice for deploying containerized Azure Functions.
This blog post explores how to deploy Azure Functions in containers to Azure Container Apps, with special focus on the benefits of Envoy for traffic management, revision handling, and logging capabilities for troubleshooting.
Why Deploy Azure Functions to Container Apps?
Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any language stack supported by Functions and use the same Functions triggers and bindings with event-driven scaling.
Key advantages include:
- Containerization flexibility: Package your functions with custom dependencies and runtime environments for Dev, QA, STG and PROD
- Kubernetes-based infrastructure: Get the benefits of Kubernetes without managing the complexity
- Microservices architecture support: Deploy functions as part of a larger microservices ecosystem
- Advanced networking: Take advantage of virtual network integration and service discovery
Benefits of Envoy in Azure Container Apps
Azure Container Apps includes a built-in Ingress controller running Envoy. You can use this to expose your application to the outside world and automatically get a URL and an SSL certificate. Envoy brings several significant benefits to your containerized Azure Functions:
1. Advanced Traffic Management
Envoy serves as the backbone of ACA's traffic management capabilities, allowing for:
- Intelligent routing: Route traffic based on paths, headers, and other request attributes
- Load balancing: Distribute traffic efficiently across multiple instances
- Protocol support: Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades connections if the client connection requires an upgrade.
2. Built-in Security
- TLS termination: Automatic handling of HTTPS traffic with Azure managed certificates
- mTLS support: Azure Container Apps supports peer-to-peer TLS encryption within the environment. Enabling this feature encrypts all network traffic within the environment with a private certificate that is valid within the Azure Container Apps environment scope. Azure Container Apps automatically manages these certificates.
3. Observability
- Detailed metrics and logs for traffic patterns
- Request tracing capabilities
- Performance insights for troubleshooting
Traffic Management for Revisions
One of the most powerful features of Azure Container Apps is its handling of revisions and traffic management between them.
Understanding Revisions
Revisions are immutable snapshots of your container application at a point in time. When you upgrade your container app to a new version, you create a new revision. This allows you to have the old and new versions running simultaneously and use the traffic management functionality to direct traffic to old or new versions of the application.
Traffic Splitting Between Revisions
Traffic split is a mechanism that routes configurable percentages of incoming requests (traffic) to various downstream services. With Azure Container Apps, we can weight traffic between multiple downstream revisions.
This capability enables several powerful deployment strategies:
Blue/Green Deployments
Deploy a new version alongside the existing one, and gradually shift traffic:
- Deploy revision 2 (green) alongside revision 1 (blue)
- Initially direct a small percentage (e.g., 10%) of traffic to revision 2
- Monitor performance and errors
- Gradually increase traffic to revision 2 as confidence grows
- Eventually direct 100% traffic to revision 2
- Retire revision 1 when no longer needed
A/B Testing
Test different implementations with real users:
Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in blue-green deployments or in A/B testing. Traffic splitting is based on the weight (percentage) of traffic that is routed to each revision.
Implementation
To implement traffic splitting in Azure Container Apps:
By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable multiple revision mode in your container app, you can split incoming traffic between active revisions.
Here's how to configure it in the portal (a CLI sketch follows this list):
- Enable multiple revision mode:
  - In the Azure portal, go to your container app
  - Select "Revision management"
  - Set the mode to "Multiple: Several revisions active simultaneously"
  - Apply changes
- Configure traffic weights:
  - For each active revision, specify the percentage of traffic it should receive
  - Ensure the combined percentages equal 100%
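The same configuration can be done with the Azure CLI. A hedged sketch, assuming an app named my-functions-app in resource group my-rg and revision names taken from az containerapp revision list:
# Switch the app to multiple-revision mode
az containerapp revision set-mode \
--name my-functions-app --resource-group my-rg --mode multiple

# List active revisions to get their names
az containerapp revision list \
--name my-functions-app --resource-group my-rg --output table

# Send 90% of traffic to the old revision and 10% to the new one
az containerapp ingress traffic set \
--name my-functions-app --resource-group my-rg \
--revision-weight my-functions-app--rev1=90 my-functions-app--rev2=10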
Logging and Troubleshooting
Effective logging is crucial for monitoring and troubleshooting containerized applications. Azure Container Apps provides comprehensive logging capabilities integrated with Azure Monitor.
Centralized Logging Infrastructure
Azure Container Apps environments provide centralized logging capabilities through integration with Azure Monitor and Application Insights. By default, all container apps within an environment send logs to a common Log Analytics workspace, making it easier to query and analyze logs across multiple apps.
Key Logging Benefits
- Unified logging experience: All container apps in an environment send logs to the same workspace
- Detailed container insights: Access container-specific metrics and logs
- Function-specific logging: You can monitor your containerized function app hosted in Container Apps using Azure Monitor Application Insights in the same way you do with apps hosted by Azure Functions.
- Scale event logging: For bindings that support event-driven scaling, scale events are logged as FunctionsScalerInfo and FunctionsScalerError events in your Log Analytics workspace.
Troubleshooting Best Practices
When troubleshooting issues in containerized Azure Functions running on ACA:
- Check application logs: Review function execution logs for errors or exceptions
- Monitor scale events: Identify issues with auto-scaling behavior
- Examine container health: Check for container startup failures or crashes
- Review ingress traffic: Analyze traffic patterns and routing decisions
- Inspect revisions: Verify that traffic is being distributed as expected between revisions
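A few Azure CLI entry points that help with the checks above (a sketch; the app name, resource group, and workspace ID are placeholders, and the query assumes the default ContainerAppConsoleLogs_CL schema):
# Stream console logs from the running container
az containerapp logs show --name my-functions-app --resource-group my-rg --follow

# Query recent console logs in the environment's Log Analytics workspace
az monitor log-analytics query \
--workspace <workspace-id> \
--analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-functions-app' | take 50"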
Implementation Steps
Here's the full playlist we did on YouTube to follow along: https://www.youtube.com/playlist?list=PLKwr1he0x0Dl2glbE8oHeTgdY-_wZkrhi
In Summary (an end-to-end CLI sketch follows this list):
- Containerize your Azure Functions app:
  - Create a Dockerfile based on the Azure Functions base images
  - Build and test your container locally (see the video demos in the playlist above)
- Push your container to a registry:
  - Push to Azure Container Registry or another compatible registry
- Create a Container Apps environment:
  - Set up the environment with appropriate virtual network and logging settings
- Deploy your function container:
  - Use the Azure CLI, ARM templates, or the Azure Portal to deploy
  - Configure scaling rules, ingress settings, and revision strategy
- Set up traffic management:
  - Enable multiple revision mode if desired
  - Configure traffic splitting rules for testing or gradual rollout
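Here's that flow as a hedged CLI sketch; every name (registry, resource group, environment, app) is a placeholder, and it assumes Azure Functions Core Tools and the Azure CLI with the containerapp extension are installed:
# Scaffold a containerized Functions project (generates a Dockerfile)
func init MyFunctionApp --worker-runtime python --docker
cd MyFunctionApp

# Build the image in Azure Container Registry
az acr build --registry myregistry --image functions-demo:v1 .

# Create the Container Apps environment
az containerapp env create \
--name my-aca-env --resource-group my-rg --location eastus

# Deploy the function container with external ingress
# (configure registry credentials or a managed identity as needed)
az containerapp create \
--name my-functions-app --resource-group my-rg \
--environment my-aca-env \
--image myregistry.azurecr.io/functions-demo:v1 \
--ingress external --target-port 80 \
--registry-server myregistry.azurecr.io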
Conclusion
Deploying Azure Functions in containers to Azure Container Apps combines the best of serverless computing with the flexibility of containers and the rich features of a managed Kubernetes environment. The built-in Envoy proxy provides powerful traffic management capabilities, especially for handling multiple revisions of your application. Meanwhile, the integrated logging infrastructure simplifies monitoring and troubleshooting across all your containerized functions.
This approach is particularly valuable for teams looking to:
- Deploy Azure Functions with custom dependencies
- Integrate functions into a microservices architecture
- Implement sophisticated deployment strategies like blue/green or A/B testing
- Maintain a consistent container-based deployment strategy across all application components
By leveraging these capabilities, you can create more robust, scalable, and manageable serverless applications while maintaining the development simplicity that makes Azure Functions so powerful.
Comprehensive Guide to Upgrading Ansible via Pip with New Python Versions on Ubuntu 20.04
For system administrators and DevOps engineers using Ansible in production environments, upgrading Ansible can sometimes be challenging, especially when the new version requires a newer Python version than what's available by default in Ubuntu 20.04. This guide walks through the process of upgrading Ansible installed via pip when a new Python version is required.
Why This Matters
Ubuntu 20.04 LTS ships with Python 3.8 by default. However, newer Ansible versions may require Python 3.9, 3.10, or even newer. Since Ansible in our environment is installed via pip rather than the APT package manager, we need a careful approach to manage this transition without breaking existing automation.
Prerequisites
- Ubuntu 20.04 LTS system
- Sudo access
- Existing Ansible installation via pip
- Backup of your Ansible playbooks and configuration files
Step 1: Install the "deadsnakes" Python Repository
The "deadsnakes" PPA provides newer Python versions for Ubuntu. This repository allows us to install Python versions that aren't available in the standard Ubuntu repositories.
# Add the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa
# Update package lists
sudo apt update
Step 2: Install the New Python Version and Pip
Install the specific Python version required by your target Ansible version. In this example, we'll use Python 3.10, but adjust as needed.
# Install Python 3.10 and development headers
sudo apt install python3.10 python3.10-dev python3.10-venv
# Install pip for Python 3.10
curl -sS https://bootstrap.pypa.io/get-pip.py | sudo python3.10
# Verify the installation
python3.10 --version
python3.10 -m pip --version
Note: After this step, you will have multiple Python versions installed, and you will need to invoke each one with the correct executable as shown above (e.g., python3.10 for Python 3.10, python3.8 for the default Ubuntu 20.04 Python).
Warning: Do not uninstall the Python version that comes with the OS (Python 3.8 in Ubuntu 20.04), as this can cause serious issues with the Ubuntu system. Many system utilities depend on this specific Python version.
Step 3: Uninstall Ansible from the Previous Python Version
Before installing the new version, remove the old Ansible installation to avoid conflicts.
# Find out which pip currently has Ansible installed
which ansible
# This will show something like /usr/local/bin/ansible or ~/.local/bin/ansible
# Check which Python version is used for the current Ansible
ansible --version
# Look for the "python version" line in the output
# Uninstall Ansible from the previous Python version
python3.8 -m pip uninstall ansible ansible-core
# If you had other Ansible-related packages, uninstall those too
python3.8 -m pip uninstall ansible-runner ansible-builder
Step 4: Install Ansible with the New Python Version
Install Ansible for both system-wide (sudo) and user-specific contexts as needed:
System-Wide Installation (sudo)
# Install Ansible system-wide with the new Python version
sudo python3.10 -m pip install ansible
# Verify the installation
ansible --version
# Confirm it shows the new Python version
User-Specific Installation (if needed)
# Install Ansible for your user with the new Python version
python3.10 -m pip install --user ansible
# Verify the installation
ansible --version
Reinstall Additional Pip Packages with the New Python Version
If you had additional pip packages installed for Ansible, reinstall them with the --force-reinstall flag to ensure they use the new Python version:
# Reinstall packages with the new Python version
sudo python3.10 -m pip install --force-reinstall ansible-runner ansible-builder
# For user-specific installations
python3.10 -m pip install --user --force-reinstall ansible-runner ansible-builder
Step 5: Update Ansible Collections
Ansible collections might need to be updated to work with the new Ansible version:
# List currently installed collections
ansible-galaxy collection list
# Update all collections
ansible-galaxy collection install --upgrade --force-with-deps <collection_name>
# Example:
# ansible-galaxy collection install --upgrade --force-with-deps community.general
# ansible-galaxy collection install --upgrade --force-with-deps ansible.posix
Installing Collection Requirements
When installing pip package requirements for Ansible collections, you must use the specific Python executable with the correct version. For example:
# Incorrect (might use the wrong Python version):
sudo pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt
# Correct (explicitly using Python 3.10):
sudo python3.10 -m pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt
This ensures that the dependencies are installed for the correct Python interpreter that Ansible is using.
Consider using a requirements.yml file to manage your collections:
# requirements.yml
collections:
  - name: community.general
    version: 5.0.0
  - name: ansible.posix
    version: 1.4.0
And install them with:
ansible-galaxy collection install -r requirements.yml
Step 6: Update Jenkins Configuration (If Applicable)
If you're using Jenkins to run Ansible playbooks, you'll need to update your Jenkins configuration to use the new Python and Ansible paths:
- Go to Jenkins > Manage Jenkins > Global Tool Configuration
- Update the Ansible installation path to point to the new version:
  - For system-wide installations: /usr/local/bin/ansible (likely unchanged, but verify)
  - For user-specific installations: update to the correct path
- In your Jenkins pipeline or job configuration, specify the Python interpreter path if needed:
// Jenkinsfile example
pipeline {
    agent any
    environment {
        ANSIBLE_PYTHON_INTERPRETER = '/usr/bin/python3.10'
    }
    stages {
        stage('Run Ansible') {
            steps {
                sh 'ansible-playbook -i inventory playbook.yml'
            }
        }
    }
}
Step 7: Update Ansible Configuration Files (Additional Step)
You might need to update your ansible.cfg file to specify the new Python interpreter:
# In ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3.10
This ensures that Ansible uses the correct Python version when connecting to remote hosts.
Step 8: Test Your Ansible Installation
Before relying on your upgraded Ansible for production work, test it thoroughly:
# Check Ansible version
ansible --version
# Run a simple ping test
ansible localhost -m ping
# Run a simple playbook
ansible-playbook test-playbook.yml
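If you don't already have a test playbook handy, here's a minimal sketch that also reports which interpreter the controller is using (the file name simply matches the command above):
cat > test-playbook.yml <<'EOF'
---
- name: Smoke-test the upgraded Ansible installation
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Show the Python interpreter running this playbook
      ansible.builtin.debug:
        msg: "Controller Python: {{ ansible_playbook_python }}"
EOF
ansible-playbook test-playbook.yml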
Troubleshooting Common Issues
Python Module Import Errors
If you encounter module import errors, ensure that all required dependencies are installed for the new Python version:
sudo python3.10 -m pip install paramiko jinja2 pyyaml cryptography
Path Issues
If the ansible command doesn't use the new version, check your PATH environment variable:
echo $PATH
which ansible
You might need to create symlinks or adjust your PATH to ensure the correct version is used.
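For example, one way to make sure the pip-installed binary wins (the exact paths depend on whether you installed with sudo or --user):
# Put the pip install locations ahead of the system paths, then re-check
export PATH="$HOME/.local/bin:/usr/local/bin:$PATH"
hash -r        # clear bash's cached command locations
which ansible
ansible --version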
Collection Compatibility
Some collections may not be compatible with the new Ansible or Python version. Check the documentation for your specific collections.
Conclusion
Upgrading Ansible when a new Python version is required involves several careful steps to ensure all components work together smoothly. By following this guide, you should be able to successfully upgrade your Ansible installation while minimizing disruption to your automation workflows.
Remember to always test in a non-production environment first, and maintain backups of your configuration and playbooks before making significant changes.
Happy automating!