Introduction
In today’s cloud-native world, containerization has become a fundamental approach for deploying applications. Azure Functions can be packaged as Docker containers, which means they can run anywhere containers run, including Kubernetes. One compelling option is Azure Container Apps (ACA), which provides a fully managed Kubernetes-based environment with powerful features specifically designed for microservices and containerized applications.
Azure Container Apps is powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy. It supports Kubernetes-style apps and microservices with features like service discovery and traffic splitting while enabling event-driven application architectures. This makes it an excellent choice for deploying containerized Azure Functions.
This blog post explores how to deploy Azure Functions in containers to Azure Container Apps, with special focus on the benefits of Envoy for traffic management, revision handling, and logging capabilities for troubleshooting.
Why Deploy Azure Functions to Container Apps?
Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any language stack supported by Functions and use the same Functions triggers and bindings with event-driven scaling.
Key advantages include:
- Containerization flexibility: Package your functions with custom dependencies and runtime environments for Dev, QA, Staging, and Production
- Kubernetes-based infrastructure: Get the benefits of Kubernetes without managing the complexity
- Microservices architecture support: Deploy functions as part of a larger microservices ecosystem
- Advanced networking: Take advantage of virtual network integration and service discovery
Benefits of Envoy in Azure Container Apps
Azure Container Apps includes a built-in Ingress controller running Envoy. You can use this to expose your application to the outside world and automatically get a URL and an SSL certificate. Envoy brings several significant benefits to your containerized Azure Functions:
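For example, you can enable external ingress and retrieve the generated URL from the Azure CLI. This is a minimal sketch; the app and resource group names are placeholders, and it assumes the Functions image listens on the default port 80:

```bash
# Enable external ingress on an existing container app (names are placeholders).
az containerapp ingress enable \
  --name my-func-app \
  --resource-group my-rg \
  --type external \
  --target-port 80

# Show the generated fully qualified domain name; HTTPS is served with a managed certificate.
az containerapp show \
  --name my-func-app \
  --resource-group my-rg \
  --query properties.configuration.ingress.fqdn \
  --output tsv
```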
1. Advanced Traffic Management
Envoy serves as the backbone of ACA’s traffic management capabilities, allowing for:
- Intelligent routing: Route traffic based on paths, headers, and other request attributes
- Load balancing: Distribute traffic efficiently across multiple instances
- Protocol support: Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades the connection when a client requires it.
2. Built-in Security
- TLS termination: Automatic handling of HTTPS traffic with Azure managed certificates
- mTLS support: Azure Container Apps supports peer-to-peer TLS encryption within the environment. When enabled, all network traffic inside the environment is encrypted with a private certificate that is valid only within the environment’s scope, and Azure Container Apps manages these certificates automatically.
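As a rough sketch, peer-to-peer TLS can be enabled at the environment level with the Azure CLI; the environment name is a placeholder, and the exact flag may vary between CLI versions:

```bash
# Enable environment-level peer-to-peer TLS encryption
# (check 'az containerapp env update --help' for the flag in your CLI version).
az containerapp env update \
  --name my-environment \
  --resource-group my-rg \
  --enable-mtls true
```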
3. Observability
- Detailed metrics and logs for traffic patterns
- Request tracing capabilities
- Performance insights for troubleshooting
Traffic Management for Revisions
One of the most powerful features of Azure Container Apps is its handling of revisions and traffic management between them.
Understanding Revisions
Revisions are immutable snapshots of your container application at a point in time. When you upgrade your container app to a new version, you create a new revision. This allows you to have the old and new versions running simultaneously and use the traffic management functionality to direct traffic to old or new versions of the application.
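You can see which revisions exist and how traffic is distributed with a quick CLI call (resource names are placeholders):

```bash
# List revisions, their active state, and the traffic weight each one receives.
az containerapp revision list \
  --name my-func-app \
  --resource-group my-rg \
  --output table
```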
Traffic Splitting Between Revisions
Traffic split is a mechanism that routes configurable percentages of incoming requests (traffic) to various downstream services. With Azure Container Apps, we can weight traffic between multiple downstream revisions.
This capability enables several powerful deployment strategies:
Blue/Green Deployments
Deploy a new version alongside the existing one and gradually shift traffic (a CLI sketch follows the list):
- Deploy revision 2 (green) alongside revision 1 (blue)
- Initially direct a small percentage (e.g., 10%) of traffic to revision 2
- Monitor performance and errors
- Gradually increase traffic to revision 2 as confidence grows
- Eventually direct 100% traffic to revision 2
- Retire revision 1 when no longer needed
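A minimal CLI sketch of this progression, assuming placeholder app, resource group, and revision names (use the names reported by `az containerapp revision list`):

```bash
# Step 1: route 10% of traffic to the new (green) revision.
az containerapp ingress traffic set \
  --name my-func-app --resource-group my-rg \
  --revision-weight my-func-app--blue=90 my-func-app--green=10

# Step 2: increase the share as confidence grows.
az containerapp ingress traffic set \
  --name my-func-app --resource-group my-rg \
  --revision-weight my-func-app--blue=50 my-func-app--green=50

# Step 3: cut over completely, then deactivate the old revision.
az containerapp ingress traffic set \
  --name my-func-app --resource-group my-rg \
  --revision-weight my-func-app--green=100
az containerapp revision deactivate \
  --name my-func-app --resource-group my-rg \
  --revision my-func-app--blue
```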
A/B Testing
Test different implementations with real users:
Traffic splitting lets you test updates to your container app with real users. You can phase in a new revision gradually for blue-green deployments, or direct a fixed share of users to each variant for A/B testing, based on the weight (percentage) of traffic routed to each revision.
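One way to set this up is with revision labels, which give each variant a stable name (and its own label-specific URL) that the traffic rules can reference. The labels and revision names below are illustrative:

```bash
# Label the two revisions under test.
az containerapp revision label add \
  --name my-func-app --resource-group my-rg \
  --label variant-a --revision my-func-app--rev1
az containerapp revision label add \
  --name my-func-app --resource-group my-rg \
  --label variant-b --revision my-func-app--rev2

# Split traffic evenly between the two variants.
az containerapp ingress traffic set \
  --name my-func-app --resource-group my-rg \
  --label-weight variant-a=50 variant-b=50
```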
Implementation
To implement traffic splitting in Azure Container Apps:
By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable multiple revision mode in your container app, you can split incoming traffic between active revisions.
Here’s how to configure it in the Azure portal (a CLI equivalent follows the steps):
- Enable multiple revision mode:
  - In the Azure portal, go to your container app
  - Select “Revision management”
  - Set the mode to “Multiple: Several revisions active simultaneously”
  - Apply changes
- Configure traffic weights:
  - For each active revision, specify the percentage of traffic it should receive
  - Ensure the combined percentage equals 100%
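The same settings can be applied from the Azure CLI; a sketch with placeholder names:

```bash
# Switch the app to multiple-revision mode so more than one revision stays active.
az containerapp revision set-mode \
  --name my-func-app --resource-group my-rg \
  --mode multiple

# Assign traffic weights to the active revisions; the weights must total 100.
az containerapp ingress traffic set \
  --name my-func-app --resource-group my-rg \
  --revision-weight my-func-app--rev1=80 my-func-app--rev2=20
```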
Logging and Troubleshooting
Effective logging is crucial for monitoring and troubleshooting containerized applications. Azure Container Apps provides comprehensive logging capabilities integrated with Azure Monitor.
Centralized Logging Infrastructure
Azure Container Apps environments provide centralized logging capabilities through integration with Azure Monitor and Application Insights. By default, all container apps within an environment send logs to a common Log Analytics workspace, making it easier to query and analyze logs across multiple apps.
Key Logging Benefits
- Unified logging experience: All container apps in an environment send logs to the same workspace
- Detailed container insights: Access container-specific metrics and logs
- Function-specific logging: You can monitor your containerized function app hosted in Container Apps using Azure Monitor Application Insights in the same way you do with apps hosted by Azure Functions.
- Scale event logging: For bindings that support event-driven scaling, scale events are logged as FunctionsScalerInfo and FunctionsScalerError events in your Log Analytics workspace.
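A couple of sketches for getting at these logs from the CLI, assuming placeholder names and the default Log Analytics table (`ContainerAppConsoleLogs_CL`):

```bash
# Stream console logs from the running container.
az containerapp logs show \
  --name my-func-app --resource-group my-rg \
  --follow

# Query the environment's Log Analytics workspace for recent console logs.
az monitor log-analytics query \
  --workspace <workspace-customer-id> \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-func-app' | sort by TimeGenerated desc | take 50"
```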
Troubleshooting Best Practices
When troubleshooting issues in containerized Azure Functions running on ACA:
- Check application logs: Review function execution logs for errors or exceptions
- Monitor scale events: Identify issues with auto-scaling behavior
- Examine container health: Check for container startup failures or crashes
- Review ingress traffic: Analyze traffic patterns and routing decisions
- Inspect revisions: Verify that traffic is being distributed as expected between revisions
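A few CLI checks that map to these steps (names are placeholders):

```bash
# Container and system events: startup failures, probe errors, scaling activity.
az containerapp logs show --name my-func-app --resource-group my-rg --type system

# Revision health and provisioning state.
az containerapp revision list --name my-func-app --resource-group my-rg --output table

# Current traffic split across revisions.
az containerapp ingress traffic show --name my-func-app --resource-group my-rg
```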
Implementation Steps
Here’s the full playlist we did on YouTube to follow along: https://www.youtube.com/playlist?list=PLKwr1he0x0Dl2glbE8oHeTgdY-_wZkrhi
In summary (an end-to-end CLI sketch follows these steps):
- Containerize your Azure Functions app:
  - Create a Dockerfile based on the Azure Functions base images
  - Build and test your container locally
- Push your container to a registry:
  - Push to Azure Container Registry or another compatible registry
- Create a Container Apps environment:
  - Set up the environment with appropriate virtual network and logging settings
- Deploy your function container:
  - Use Azure CLI, ARM templates, or the Azure Portal to deploy
  - Configure scaling rules, ingress settings, and revision strategy
- Set up traffic management:
  - Enable multiple revision mode if desired
  - Configure traffic splitting rules for testing or gradual rollout
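For reference, here is an end-to-end sketch of those steps using the Azure Functions Core Tools and the Azure CLI. Every name (registry, app, environment, resource group) is a placeholder, and it assumes a .NET worker; adjust for your stack:

```bash
# 1. Scaffold a containerized Functions project (generates a Dockerfile), then build and test it locally.
func init MyFunctionApp --worker-runtime dotnet --docker
cd MyFunctionApp
docker build -t myregistry.azurecr.io/my-func-app:v1 .
docker run -p 8080:80 myregistry.azurecr.io/my-func-app:v1   # browse http://localhost:8080

# 2. Push the image to Azure Container Registry.
az acr login --name myregistry
docker push myregistry.azurecr.io/my-func-app:v1

# 3. Create the Container Apps environment (a Log Analytics workspace is created by default).
az containerapp env create \
  --name my-environment --resource-group my-rg --location eastus

# 4. Deploy the function container with external ingress.
az containerapp create \
  --name my-func-app --resource-group my-rg \
  --environment my-environment \
  --image myregistry.azurecr.io/my-func-app:v1 \
  --ingress external --target-port 80 \
  --registry-server myregistry.azurecr.io

# 5. Enable multiple revisions so traffic can later be split between versions.
az containerapp revision set-mode \
  --name my-func-app --resource-group my-rg --mode multiple
```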
Conclusion
Deploying Azure Functions in containers to Azure Container Apps combines the best of serverless computing with the flexibility of containers and the rich features of a managed Kubernetes environment. The built-in Envoy proxy provides powerful traffic management capabilities, especially for handling multiple revisions of your application. Meanwhile, the integrated logging infrastructure simplifies monitoring and troubleshooting across all your containerized functions.
This approach is particularly valuable for teams looking to:
- Deploy Azure Functions with custom dependencies
- Integrate functions into a microservices architecture
- Implement sophisticated deployment strategies like blue/green or A/B testing
- Maintain a consistent container-based deployment strategy across all application components
By leveraging these capabilities, you can create more robust, scalable, and manageable serverless applications while maintaining the development simplicity that makes Azure Functions so powerful.