Containers in the Cloud

Deploying Azure Functions in Containers to Azure Container Apps - like a boss!!!

Introduction

In today's cloud-native world, containerization has become a fundamental approach for deploying applications. Azure Functions can be packaged as Docker containers, which means they can run anywhere containers run, including Kubernetes. One compelling option is Azure Container Apps (ACA), which provides a fully managed Kubernetes-based environment with powerful features specifically designed for microservices and containerized applications.

Azure Container Apps is powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy. It supports Kubernetes-style apps and microservices with features like service discovery and traffic splitting while enabling event-driven application architectures. This makes it an excellent choice for deploying containerized Azure Functions.

This blog post explores how to deploy Azure Functions in containers to Azure Container Apps, with special focus on the benefits of Envoy for traffic management, revision handling, and logging capabilities for troubleshooting.

Video Demo:

Why Deploy Azure Functions to Container Apps?

Container Apps hosting lets you run your functions in a fully managed, Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and Kubernetes Event-driven Autoscaling (KEDA). You can write your function code in any language stack supported by Functions and use the same Functions triggers and bindings with event-driven scaling.

Key advantages include:

  1. Containerization flexibility: Package your functions with custom dependencies and runtime environments for Dev, QA, STG and PROD
  2. Kubernetes-based infrastructure: Get the benefits of Kubernetes without managing the complexity
  3. Microservices architecture support: Deploy functions as part of a larger microservices ecosystem
  4. Advanced networking: Take advantage of virtual network integration and service discovery

Benefits of Envoy in Azure Container Apps

Azure Container Apps includes a built-in Ingress controller running Envoy. You can use this to expose your application to the outside world and automatically get a URL and an SSL certificate. Envoy brings several significant benefits to your containerized Azure Functions:

1. Advanced Traffic Management

Envoy serves as the backbone of ACA's traffic management capabilities, allowing for:

  • Intelligent routing: Route traffic based on paths, headers, and other request attributes
  • Load balancing: Distribute traffic efficiently across multiple instances
  • Protocol support: Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades the connection when the client requires it.

2. Built-in Security

  • TLS termination: Automatic handling of HTTPS traffic with Azure managed certificates
  • mTLS support: Azure Container Apps supports peer-to-peer TLS encryption within the environment. Enabling this feature encrypts all network traffic inside the environment with a private certificate that is valid within the environment's scope, and Azure Container Apps manages these certificates automatically.

3. Observability

  • Detailed metrics and logs for traffic patterns
  • Request tracing capabilities
  • Performance insights for troubleshooting

Traffic Management for Revisions

One of the most powerful features of Azure Container Apps is its handling of revisions and traffic management between them.

Understanding Revisions

Revisions are immutable snapshots of your container application at a point in time. When you upgrade your container app to a new version, you create a new revision. This allows you to have the old and new versions running simultaneously and use the traffic management functionality to direct traffic to old or new versions of the application.

Traffic Splitting Between Revisions

Traffic split is a mechanism that routes configurable percentages of incoming requests (traffic) to various downstream services. With Azure Container Apps, we can weight traffic between multiple downstream revisions.

This capability enables several powerful deployment strategies:

Blue/Green Deployments

Deploy a new version alongside the existing one, and gradually shift traffic:

  1. Deploy revision 2 (green) alongside revision 1 (blue)
  2. Initially direct a small percentage (e.g., 10%) of traffic to revision 2
  3. Monitor performance and errors
  4. Gradually increase traffic to revision 2 as confidence grows
  5. Eventually direct 100% traffic to revision 2
  6. Retire revision 1 when no longer needed

A/B Testing

Test different implementations with real users:

Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in blue-green deployments or in A/B testing. Traffic splitting is based on the weight (percentage) of traffic that is routed to each revision.

Implementation

To implement traffic splitting in Azure Container Apps:

By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable multiple revision mode in your container app, you can split incoming traffic between active revisions.

Here's how to configure it:

  1. Enable multiple revision mode:
    • In the Azure portal, go to your container app
    • Select "Revision management"
    • Set the mode to "Multiple: Several revisions active simultaneously"
    • Apply changes
  2. Configure traffic weights:
    • For each active revision, specify the percentage of traffic it should receive
    • Ensure the combined percentage equals 100%
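The same configuration can be scripted. Here's a sketch using the Azure CLI; the app, resource group, and revision names are placeholders (Container Apps revision names follow the `<app>--<suffix>` pattern):

```shell
# Enable multiple revision mode, then split traffic 90/10 between two revisions
az containerapp revision set-mode \
  --name <your-app> --resource-group <your-rg> --mode multiple

az containerapp ingress traffic set \
  --name <your-app> --resource-group <your-rg> \
  --revision-weight <your-app>--rev1=90 <your-app>--rev2=10
```

The weights must add up to 100, matching the portal behavior described above.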

Logging and Troubleshooting

Effective logging is crucial for monitoring and troubleshooting containerized applications. Azure Container Apps provides comprehensive logging capabilities integrated with Azure Monitor.

Centralized Logging Infrastructure

Azure Container Apps environments provide centralized logging capabilities through integration with Azure Monitor and Application Insights. By default, all container apps within an environment send logs to a common Log Analytics workspace, making it easier to query and analyze logs across multiple apps.

Key Logging Benefits

  1. Unified logging experience: All container apps in an environment send logs to the same workspace
  2. Detailed container insights: Access container-specific metrics and logs
  3. Function-specific logging: You can monitor your containerized function app hosted in Container Apps using Azure Monitor Application Insights in the same way you do with apps hosted by Azure Functions.
  4. Scale event logging: For bindings that support event-driven scaling, scale events are logged as FunctionsScalerInfo and FunctionsScalerError events in your Log Analytics workspace.

Troubleshooting Best Practices

When troubleshooting issues in containerized Azure Functions running on ACA:

  1. Check application logs: Review function execution logs for errors or exceptions
  2. Monitor scale events: Identify issues with auto-scaling behavior
  3. Examine container health: Check for container startup failures or crashes
  4. Review ingress traffic: Analyze traffic patterns and routing decisions
  5. Inspect revisions: Verify that traffic is being distributed as expected between revisions
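To make step 1 concrete, here is a hedged Log Analytics (KQL) sketch for pulling recent errors from a containerized function app. The table and column names come from the default Container Apps log schema and the app name is hypothetical; both may differ in your workspace:

```kusto
// Recent console-log lines containing "error" for one container app
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "my-func-app"   // hypothetical app name
| where Log_s has "error"
| project TimeGenerated, RevisionName_s, Log_s
| order by TimeGenerated desc
| take 50
```

A similar query against the system logs table helps with steps 2 and 3 (scale events and container health).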

Implementation Steps

Here's the full playlist we did on YouTube so you can follow along: https://www.youtube.com/playlist?list=PLKwr1he0x0Dl2glbE8oHeTgdY-_wZkrhi

In Summary:

  1. Containerize your Azure Functions app:
    • Create a Dockerfile based on the Azure Functions base images
    • Build and test your container locally
  2. Push your container to a registry:
    • Push to Azure Container Registry or another compatible registry
  3. Create a Container Apps environment:
    • Set up the environment with appropriate virtual network and logging settings
  4. Deploy your function container:
    • Use Azure CLI, ARM templates, or the Azure Portal to deploy
    • Configure scaling rules, ingress settings, and revision strategy
  5. Set up traffic management:
    • Enable multiple revision mode if desired
    • Configure traffic splitting rules for testing or gradual rollout
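The summary above can be sketched end to end with the Azure CLI. All names here (my-rg, my-env, myacr, my-func-app) are hypothetical, and this is a minimal outline rather than a production script:

```shell
# 1-2. Build the function container and push it to Azure Container Registry
az acr login --name myacr
docker build -t myacr.azurecr.io/my-func-app:v1 .
docker push myacr.azurecr.io/my-func-app:v1

# 3. Create the Container Apps environment
az containerapp env create \
  --name my-env --resource-group my-rg --location eastus

# 4. Deploy the function container with external ingress
az containerapp create \
  --name my-func-app \
  --resource-group my-rg \
  --environment my-env \
  --image myacr.azurecr.io/my-func-app:v1 \
  --ingress external --target-port 80 \
  --registry-server myacr.azurecr.io
```

Step 5 (revision mode and traffic weights) is covered in the traffic management section above.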

Conclusion

Deploying Azure Functions in containers to Azure Container Apps combines the best of serverless computing with the flexibility of containers and the rich features of a managed Kubernetes environment. The built-in Envoy proxy provides powerful traffic management capabilities, especially for handling multiple revisions of your application. Meanwhile, the integrated logging infrastructure simplifies monitoring and troubleshooting across all your containerized functions.

This approach is particularly valuable for teams looking to:

  • Deploy Azure Functions with custom dependencies
  • Integrate functions into a microservices architecture
  • Implement sophisticated deployment strategies like blue/green or A/B testing
  • Maintain a consistent container-based deployment strategy across all application components

By leveraging these capabilities, you can create more robust, scalable, and manageable serverless applications while maintaining the development simplicity that makes Azure Functions so powerful.


Azure Container App Environment

Azure Container Apps: Simplifying Container Deployment with Enterprise-Grade Features

In the ever-evolving landscape of cloud computing, organizations are constantly seeking solutions that balance simplicity with enterprise-grade capabilities. Azure Container Apps emerges as a compelling answer to this challenge, offering a powerful abstraction layer over container orchestration while providing the robustness needed for production workloads.

What Makes Azure Container Apps Special?

Azure Container Apps represents Microsoft’s vision for serverless container deployment. While Kubernetes has become the de facto standard for container orchestration, its complexity can be overwhelming for teams that simply want to deploy and scale their containerized applications. Container Apps provides a higher-level abstraction that handles many infrastructure concerns automatically, allowing developers to focus on their applications.

Key Benefits of the Platform

Built-in Load Balancing with Envoy

One of the standout features of Azure Container Apps is its integration with Envoy as a load balancer. This isn’t just any load balancer – Envoy is the same battle-tested proxy used by major cloud-native platforms. It provides:

  • Automatic HTTP/2 and gRPC support
  • Advanced traffic splitting capabilities for A/B testing
  • Built-in circuit breaking and retry logic
  • Detailed metrics and tracing

The best part? You don’t need to configure or maintain Envoy yourself. It’s managed entirely by the platform, giving you enterprise-grade load balancing capabilities without the operational overhead.

Integrated Observability with Azure Application Insights

Understanding what’s happening in your containerized applications is crucial for maintaining reliability. Container Apps integrates seamlessly with Azure Application Insights, providing:

  • Distributed tracing across your microservices
  • Detailed performance metrics and request logging
  • Custom metric collection
  • Real-time application map visualization

The platform automatically injects the necessary instrumentation, ensuring you have visibility into your applications from day one.

Cost Considerations and Optimization

While Azure Container Apps offers a serverless pricing model that can be cost-effective, it’s important to understand the pricing structure to avoid surprises:

Cost Components

  1. Compute Usage: Charged per vCPU-second and GB-second of memory used
    • Baseline: ~$0.000012/vCPU-second
    • Memory: ~$0.000002/GB-second
  2. Request Processing:
    • First 2 million requests/month included
    • ~$0.40 per additional million requests
  3. Storage and Networking:
    • Ingress: Free
    • Egress: Standard Azure bandwidth rates apply
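Using the approximate rates listed above, a quick back-of-the-envelope estimate is easy to script. The workload numbers below (0.5 vCPU, 1 GiB, active 8 hours a day, 5M requests) are made up for illustration, and the rates are the approximate list prices quoted above; check the Azure pricing page for your region before relying on them:

```python
# Rough monthly cost sketch from the approximate rates quoted above.
VCPU_RATE = 0.000012       # $ per vCPU-second
MEM_RATE = 0.000002        # $ per GiB-second
FREE_REQUESTS = 2_000_000  # requests included per month
REQUEST_RATE = 0.40        # $ per additional million requests

def monthly_cost(vcpus, gib, active_seconds, requests):
    """Estimate a month's bill for one container app."""
    compute = vcpus * active_seconds * VCPU_RATE
    memory = gib * active_seconds * MEM_RATE
    billable = max(0, requests - FREE_REQUESTS)
    req_cost = billable / 1_000_000 * REQUEST_RATE
    return round(compute + memory + req_cost, 2)

# 0.5 vCPU / 1 GiB app, active 8 hours a day for 30 days, 5M requests
print(monthly_cost(0.5, 1.0, 8 * 3600 * 30, 5_000_000))  # → 8.11
```

Note how scale-to-zero matters: the compute and memory terms only accrue while replicas are running.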

Cost Optimization Tips

To keep your Azure Container Apps costs under control:

  1. Right-size your containers by carefully setting resource limits and requests
  2. Utilize scale-to-zero for non-critical workloads
  3. Configure appropriate minimum and maximum replica counts
  4. Monitor and adjust based on actual usage patterns

Advanced Features Worth Exploring

Revision Management

Container Apps introduces a powerful revision management system that allows you to:

  • Maintain multiple versions of your application
  • Implement blue-green deployments
  • Roll back to previous versions if needed

DAPR Integration

For microservices architectures, the built-in DAPR (Distributed Application Runtime) support provides:

  • Service-to-service invocation
  • State management
  • Pub/sub messaging
  • Input and output bindings

Conclusion

Azure Container Apps strikes an impressive balance between simplicity and capability. It removes much of the complexity associated with container orchestration while providing the features needed for production-grade applications. Whether you’re building microservices, web applications, or background processing jobs, Container Apps offers a compelling platform that can grow with your needs.

By understanding the pricing model and following best practices for cost optimization, you can leverage this powerful platform while keeping expenses under control. The integration with Azure’s broader ecosystem, particularly Application Insights and Container Registry, creates a seamless experience for developing, deploying, and monitoring containerized applications.


Remember to adjust resource allocations and scaling rules based on your specific workload patterns to optimize both performance and cost. Monitor your application’s metrics through Application Insights to make informed decisions about resource utilization and scaling policies.


Local Kubernetes for Cost Savings

Azure Functions on your local Kubernetes Cluster: A Dev Powerhouse

In today’s fast-paced development landscape, the traditional Dev, QA, STG (Staging), PROD pipeline has become standard practice. However, the increasing adoption of cloud-based environments has introduced new challenges, particularly in terms of cost and deployment speed. To address these issues, many organizations are exploring strategies to optimize their development and deployment processes. In this article we explore using a local Kubernetes cluster: since Azure Functions can run in containers, this can speed up your deployments and reduce costs.

KEDA (Kubernetes Event-Driven Autoscaler)

KEDA is a tool that helps manage the scaling of your applications based on the workload they’re handling. Imagine a website that experiences a sudden surge in traffic: KEDA can automatically increase the number of servers running your website to handle the increased load. Once the traffic subsides, it can scale back down, all the way to zero pods, to reduce costs.

What is Scale to Zero? It’s a feature that allows applications to automatically scale down to zero instances when there’s no incoming traffic or activity. This means that the application is essentially turned off to save costs. However, as soon as activity resumes, the application can quickly scale back up to handle the load.

Caveat: Your app needs to be packaged in a way that it can start up fast and not have a high warm-up period.

How Does it Work? KEDA monitors application metrics and automatically scales the number of instances up or down based on predefined rules. KEDA supports a wide range of application metrics that can be used to trigger scaling actions. Here are some examples and the most commonly used ones:

  • HTTP Metrics:
    • HTTP requests: The number of HTTP requests received by an application.
    • HTTP status codes: The frequency of different HTTP status codes returned by an application (e.g., 200, 404, 500).
  • Queue Lengths:
    • Message queue length: The number of messages waiting to be processed in a message queue.
    • Job queue length: The number of jobs waiting to be executed in a job queue.
  • Custom Metrics:
    • Application-specific metrics: Any custom metrics that can be exposed by your application (e.g., database connection pool size, cache hit rate).

Choosing the right metrics depends on your specific application and scaling needs. For example, if your application relies heavily on message queues, monitoring queue lengths might be the most relevant metric. If your application is CPU-intensive, monitoring CPU utilization could be a good indicator for scaling.

KEDA also supports metric aggregators like Prometheus and StatsD, which can be used to collect and aggregate metrics from various sources and provide a unified view of your application’s performance.
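The metric-driven rules described above are declared in a KEDA ScaledObject. Here's an illustrative manifest (all names are hypothetical) that scales a deployment on Azure Storage queue length and allows scale to zero:

```yaml
# Illustrative KEDA ScaledObject: scale on queue length, down to zero when idle
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-func-scaler
spec:
  scaleTargetRef:
    name: my-func-app          # the Deployment running the function container
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-queue
    metadata:
      queueName: orders
      queueLength: "5"         # target messages per replica
      connectionFromEnv: AzureWebJobsStorage
```

With this in place, KEDA creates pods as messages arrive and removes them, eventually to zero, once the queue drains.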

Azure Container Registry

Azure Container Registry (ACR) and Docker Hub are both popular platforms for storing and managing container images. While both offer essential features, Azure Container Registry provides several distinct advantages that make it a compelling choice for many developers and organizations.

Key Benefits of Azure Container Registry

  1. Integration with Azure Ecosystem:

    • Seamless integration: ACR is deeply integrated with other Azure services, such as Azure Kubernetes Service (AKS), Azure App Service, and Azure Functions. This integration simplifies deployment and management workflows.
    • Centralized management: You can manage container images, deployments, and other related resources from a single Azure portal.
  2. Enhanced Security and Compliance:

    • Private repositories: ACR allows you to create private repositories, ensuring that your container images are not publicly accessible.
    • Role-based access control (RBAC): Implement fine-grained access control to manage who can view, create, and modify container images.
    • Compliance: ACR meets various industry compliance standards, making it suitable for organizations with strict security requirements.
  3. Performance and Scalability:

    • Regional proximity: ACR offers multiple regions worldwide, allowing you to store and retrieve images from a location that is geographically closer to your users, improving performance.
    • Scalability: ACR can automatically scale to handle increased demand for container images.
  4. Advanced Features:

    • Webhooks: Trigger custom actions (e.g., build pipelines, notifications) based on events in your registry, such as image pushes or deletes.
    • Geo-replication: Replicate your images across multiple regions for improved availability and disaster recovery.
    • Integrated vulnerability scanning: Automatically scan your images for known vulnerabilities and receive alerts.
  5. Cost-Effective:

    • Azure pricing: ACR is part of the Azure ecosystem, allowing you to leverage Azure’s flexible pricing models and potential cost savings through various discounts and promotions.

In summary, while Docker Hub is a valuable platform for sharing container images publicly, Azure Container Registry offers a more comprehensive solution tailored to the needs of organizations that require enhanced security, integration with Azure services, and performance optimization.

ACR and Kubernetes Integration

To pull container images from Azure Container Registry (ACR) in a Kubernetes manifest, you’ll need to add an imagePullSecrets attribute to the relevant deployment or pod specification. This secret stores the credentials required to authenticate with ACR and pull the images.

Here’s a step-by-step guide on how to achieve this:

1. Create a Kubernetes Secret:

  • Use the kubectl create secret docker-registry command to create a secret that holds your ACR credentials. Replace the placeholders with your ACR login server, username, and password:
Bash
kubectl create secret docker-registry <your-secret-name> \
  --docker-server=<your-acr-name>.azurecr.io \
  --docker-username=<your-acr-username> \
  --docker-password=<your-acr-password>

2. Reference the Secret in Your Manifest:

  • In your Kubernetes manifest (e.g., deployment.yaml, pod.yaml), add the imagePullSecrets attribute to the spec section of the deployment or pod. Reference the name of the secret you created in the previous step:
YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your-acr-name>.azurecr.io/<your-image-name>:<your-tag>
        imagePullPolicy: Always
      imagePullSecrets:
      - name: <your-secret-name>

Key Points:

  • Replace <your-acr-name>, <your-image-name>, <your-tag>, and <your-secret-name> with the appropriate values for your specific ACR instance, image, and secret.
  • The imagePullPolicy is set to Always to ensure that the image is always pulled from the registry, even if it’s already present on the node. You can adjust this policy based on your requirements.

Additional Considerations:

  • For more complex scenarios, you might consider using service accounts and role-based access control (RBAC) to manage permissions for accessing ACR.
  • If you’re using Azure Kubernetes Service (AKS), you can leverage Azure Active Directory (Azure AD) integration for authentication and authorization, simplifying the management of ACR credentials.

By following these steps, you can successfully configure your Kubernetes deployment or pod to pull container images from Azure Container Registry using the imagePullSecrets attribute.


🚀 Mastering Azure Functions in Docker: Secure Your App with Function Keys! 🔒

In this session, we’re merging the robust capabilities of Azure Functions with the versatility of Docker containers.

By the end of this tutorial, you will have a secure and scalable process for deploying your Azure Functions within Docker, equipped with function keys to ensure security.

Why use Azure Functions inside Docker?

Serverless architecture allows you to run code without provisioning or managing servers. Azure Functions take this concept further by providing a fully managed compute platform. Docker, on the other hand, offers a consistent development environment, making it easy to deploy your applications across various environments. Together, they create a robust and efficient way to develop and deploy serverless applications. Later we will deploy this container to our local Kubernetes cluster and to Azure Container Apps.

Development

The Azure Functions Core tools make it easy to package your function into a container with a single command:

func init MyFunctionApp --docker

The command creates the Dockerfile and supporting JSON files for running your function inside a container; all you need to do is add your code and dependencies. Since we are building a Python function, we will add our Python libraries to requirements.txt.
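For reference, the generated Dockerfile looks roughly like the sketch below (the base image tag depends on your Functions runtime and Python version, so treat this as an approximation rather than the exact output):

```dockerfile
# Approximate shape of the Dockerfile generated by `func init --docker`
FROM mcr.microsoft.com/azure-functions/python:4-python3.11

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install Python dependencies first so Docker layer caching works well
COPY requirements.txt /
RUN pip install -r /requirements.txt

# Copy the function app code into the runtime's script root
COPY . /home/site/wwwroot
```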

Using Function Keys for Security

Create a host_secrets.json file in the root of your function app directory. Add the following configuration to specify your function keys:

{
  "masterKey": {
    "name": "master",
    "value": "your-master-key-here"
  },
  "functionKeys": {
    "default": "your-function-key-here"
  }
}

Now this file needs to be added to the container so the function can read it. Add the following to your Dockerfile and rebuild:

RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
ENV PYTHONHTTPSVERIFY=0
ADD host_secrets.json /etc/secrets/host.json

Testing

Now you can use the function key you set in the previous step as a query parameter for the function’s endpoint in your API client.


Or you can use curl / powershell as well:

curl -X POST \
'http://192.168.1.200:8081/api/getbooks?code=XXXX000something0000XXXX' \
--header 'Accept: */*' \
--header 'User-Agent: Thunder Client (https://www.thunderclient.com)' \
--header 'Content-Type: application/json' \
--data-raw '{
"query": "Dune"
}'


Azure Functions Cartoon

Develop and Test Local Azure Functions from your IDE

Offloading code from apps is a great way to adopt a microservices architecture. If you are still deciding whether to create functions or keep the code in your app, check out the decision matrix article and some gotchas that will help you decide. Since we have checked the boxes and our code is a great candidate for Azure Functions, here’s our process:

Dev Environment Setup

Azure Functions Core Tools

First, install the Azure Functions Core Tools on your machine. There are many ways to install them; instructions can be found in the official Microsoft Learn doc: Develop Azure Functions locally using Core Tools | Microsoft Learn. We are using Ubuntu and Python, so we did the following:

wget -q https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then:

sudo apt-get update
sudo apt-get install azure-functions-core-tools-4

After getting the core tools you can test by running

func --help

Result:

Azure Functions Core Tools
Visual Studio Code Extension
  • Go to the Extensions view by clicking the Extensions icon in the Activity Bar.
  • Search for “Azure Functions” and install the extension.
  • Open the Command Palette (F1) and select Azure Functions: Install or Update Azure Functions Core Tools.

Azure Function Fundamentals

Here are some Azure Function basics. You can write in many languages, as described in the official Microsoft Learn doc: Supported Languages with Durable Functions Overview – Azure | Microsoft Learn. We are using Python, so here’s our process.

I. Create a Python Virtual Environment to manage dependencies:

A Python virtual environment is an isolated environment that allows you to manage dependencies for your project separately from other projects. Here are the key benefits:

  1. Dependency Isolation:
    • Each project can have its own dependencies, regardless of what dependencies other projects have. This prevents conflicts between different versions of packages used in different projects.
  2. Reproducibility:
    • By isolating dependencies, you ensure that your project runs consistently across different environments (development, testing, production). This makes it easier to reproduce bugs and issues.
  3. Simplified Dependency Management:
    • You can easily manage and update dependencies for a specific project without affecting other projects. This is particularly useful when working on multiple projects simultaneously.
  4. Cleaner Development Environment:
    • Your global Python environment remains clean and uncluttered, as all project-specific dependencies are contained within the virtual environment.

Create the virtual environment simply with: python -m venv name_of_venv
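In practice that looks like the following (the `.venv` directory name is just a common convention, not a requirement):

```shell
python3 -m venv .venv                    # create the isolated environment
. .venv/bin/activate                     # activate it (Windows: .venv\Scripts\activate)
python -m pip freeze > requirements.txt  # snapshot installed packages for reproducibility
```

After activating, anything you `pip install` stays inside `.venv`, and `requirements.txt` records the exact versions for teammates and CI.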

What is a Function Route?

A function route is essentially the path part of the URL that maps to your function. When an HTTP request matches this route, the function is executed. Routes are particularly useful for organizing and structuring your API endpoints.

II. Initialization

The line app = func.FunctionApp() seen in the code snippet below is used in the context of Azure Functions for Python to create an instance of the FunctionApp class. This instance, app, serves as the main entry point for defining and managing your Azure Functions within the application. Here’s a breakdown of what it does:

  1. Initialization:
    • It initializes a new FunctionApp object, which acts as a container for your function definitions.
  2. Function Registration:
    • You use this app instance to register your individual functions. Each function is associated with a specific trigger (e.g., HTTP, Timer) and is defined using decorators.

import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="hello")
def hello_function(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        return func.HttpResponse(f"Hello, {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )

  • The @app.function_name and @app.route decorators are used to define the function’s name and route, respectively. This makes it easy to map HTTP requests to specific functions.
  • The hello_function is defined to handle HTTP requests. It extracts the name parameter from the query string or request body and returns a greeting.
  • The function returns an HttpResponse object, which is sent back to the client.


Running The Azure Function

Once you have your code ready to go, you can test your function locally by using func start, but there are a few “gotchas” to be aware of:

1. Port Conflicts

  • By default, func start runs on port 7071. If this port is already in use by another application, you’ll encounter a conflict. You can specify a different port using the --port option:
    func start --port 8080
2. Environment Variables

  • Ensure that all necessary environment variables are set correctly. Missing or incorrect environment variables can cause your function to fail. You can use a local.settings.json file to manage these variables during local development.

3. Dependencies

  • Make sure all dependencies listed in your requirements.txt (for Python) or package.json (for Node.js) are installed. Missing dependencies can lead to runtime errors.

4. Function Proxies

  • If you’re using function proxies, ensure that the proxies.json file is correctly configured. Misconfigurations can lead to unexpected behavior or routing issues.

5. Binding Configuration

  • Incorrect or incomplete binding configurations in your function.json file can cause your function to not trigger as expected. Double-check your bindings to ensure they are set up correctly.

6. Local Settings File

  • The local.settings.json file should not be checked into source control as it may contain sensitive information. Ensure this file is listed in your .gitignore file.

7. Cold Start Delays

  • When running functions locally, you might experience delays due to cold starts, especially if your function has many dependencies or complex initialization logic.

8. Logging and Monitoring

  • Ensure that logging is properly configured to help debug issues. Use the func start command’s output to monitor logs and diagnose problems.

9. Version Compatibility

  • Ensure that the version of Azure Functions Core Tools you are using is compatible with your function runtime version. Incompatibilities can lead to unexpected errors.

10. Network Issues

  • If your function relies on external services or APIs, ensure that your local environment has network access to these services. Network issues can cause your function to fail.

11. File Changes

  • Be aware that changes to your function code or configuration files may require restarting the func start process to take effect.

12. Debugging

  • When debugging, ensure that your IDE is correctly configured to attach to the running function process. Misconfigurations can prevent you from hitting breakpoints.

By keeping these gotchas in mind, you can avoid common pitfalls and ensure a smoother development experience with Azure Functions. If you encounter any specific issues or need further assistance, feel free to ask us!
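Several of the gotchas above (environment variables, the local settings file) revolve around local.settings.json. A minimal example for a Python function app might look like the following; the MY_API_KEY entry is a hypothetical app-specific setting:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "MY_API_KEY": "local-dev-value"
  }
}
```

Everything under "Values" is exposed to your function as environment variables during local runs, which is exactly why this file belongs in .gitignore.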

Testing and Getting Results

If your function starts and you are looking at the logs, you will see your endpoints listed as seen below. Since you wrote them, you already know the paths and can start testing with your favorite API client; ours is Thunder Client.

Thunder Client with Azure Functions
The Response

In Azure Functions, an HTTP response is what your function sends back to the client after processing an HTTP request. Here are the basics:

  1. Status Code:
    • The status code indicates the result of the HTTP request. Common status codes include:
      • 200 OK: The request was successful.
      • 400 Bad Request: The request was invalid.
      • 404 Not Found: The requested resource was not found.
      • 500 Internal Server Error: An error occurred on the server.
  2. Headers:
    • HTTP headers provide additional information about the response. Common headers include:
      • Content-Type: Specifies the media type of the response (e.g., application/json, text/html).
      • Content-Length: Indicates the size of the response body.
      • Access-Control-Allow-Origin: Controls which origins are allowed to access the resource.
  3. Body:
    • The body contains the actual data being sent back to the client. This can be in various formats such as JSON, HTML, XML, or plain text. We chose JSON so we can use the different fields and values.
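To make those three parts concrete, here is a minimal, framework-free Python sketch of the response anatomy. The `build_response` helper and its dict shape are purely illustrative, not part of any Azure SDK:

```python
import json

def build_response(data: dict, status: int = 200) -> dict:
    """Assemble the three parts of an HTTP response:
    status code, headers, and body."""
    body = json.dumps(data)
    return {
        "status_code": status,
        "headers": {
            "Content-Type": "application/json",
            "Content-Length": str(len(body)),
        },
        "body": body,
    }

resp = build_response({"message": "Hello from Azure Functions"})
print(resp["status_code"])               # 200
print(resp["headers"]["Content-Type"])   # application/json
```

In the actual function you would return something like `func.HttpResponse(body, status_code=200, mimetype="application/json")`, which packages the same three pieces.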

Conclusion

In this article, we’ve explored the process of creating your first Python Azure Function using Visual Studio Code. We covered setting up your environment, including installing Azure Functions Core Tools and the VS Code extension, which simplifies project setup, development, and deployment. We delved into the importance of using a Python virtual environment and a requirements.txt file for managing dependencies, ensuring consistency, and facilitating collaboration. Additionally, we discussed the basics of function routes and HTTP responses, highlighting how to define routes and customize responses to enhance your API’s structure and usability. By understanding these fundamentals, you can efficiently develop, test, and deploy serverless applications on Azure, leveraging the full potential of Azure Functions. Happy coding!


Django Microservices Approach with Azure Functions on Azure Container Apps

We are creating a multi-part video series to explain Azure Functions running on Azure Container Apps, so that we can offload some code out of our Django app and build our infrastructure with a microservices approach. Here’s part one, and below the video is a quick high-level explanation of this architecture.

Azure Functions are serverless computing units within Azure that allow you to run event-driven code without having to manage servers. They’re a great choice for building microservices due to their scalability, flexibility, and cost-effectiveness.

Azure Container Apps provide a fully managed platform for deploying and managing containerized applications. By deploying Azure Functions as containerized applications on Container Apps, you gain several advantages:

  1. Microservices Architecture:

    • Decoupling: Each function becomes an independent microservice, isolated from other parts of your application. This makes it easier to develop, test, and deploy them independently.
    • Scalability: You can scale each function individually based on its workload, ensuring optimal resource utilization.
    • Resilience: If one microservice fails, the others can continue to operate, improving the overall reliability of your application.
  2. Containerization:

    • Portability: Containerized functions can be easily moved between environments (development, testing, production) without changes.
    • Isolation: Each container runs in its own isolated environment, reducing the risk of conflicts between different functions.
    • Efficiency: Containers are optimized for resource utilization, making them ideal for running functions on shared infrastructure.
  3. Azure Container Apps Benefits:

    • Managed Service: Azure Container Apps handles the underlying infrastructure, allowing you to focus on your application’s logic.
    • Scalability: Container Apps automatically scale your functions based on demand, ensuring optimal performance.
    • Integration: It seamlessly integrates with other Azure services, such as Azure Functions, Azure App Service, and Azure Kubernetes Service.

In summary, Azure Functions deployed on Azure Container Apps provide a powerful and flexible solution for building microservices. By leveraging the benefits of serverless computing, containerization, and a managed platform, you can create scalable, resilient, and efficient applications.

Stay tuned for part 2


Deploying Azure Functions with Azure DevOps: 3 Must-Dos! Code Security Included

Azure Functions is a serverless compute service that allows you to run your code in response to various events, without the need to manage any infrastructure. Azure DevOps, on the other hand, is a set of tools and services that help you build, test, and deploy your applications more efficiently. Combining these two powerful tools can streamline your Azure Functions deployment process and ensure a smooth, automated workflow.

In this blog post, we’ll explore three essential steps to consider when deploying Azure Functions using Azure DevOps.

1. Ensure Consistent Python Versions

When working with Azure Functions, it’s crucial to ensure that the Python version used in your build pipeline matches the Python version configured in your Azure Function. Mismatched versions can lead to unexpected runtime errors and deployment failures.

To ensure consistency, follow these steps:

  1. Determine the Python version required by your Azure Function. You can find this information in the requirements.txt file or the host.json file in your Azure Functions project.
  2. In your Azure DevOps pipeline, use the UsePythonVersion task to set the Python version to match the one required by your Azure Function.
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true
  3. Verify the Python version in your pipeline by running python --version and ensuring it matches the version specified in the previous step.
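The same verification can live in a short Python step instead of parsing `python --version` output; the `EXPECTED` tuple below is a hypothetical value you would set to your Function App's runtime version:

```python
import sys

# Hypothetical expected version; set this to what your Function App runs.
EXPECTED = (3, 9)

def version_matches(expected=EXPECTED) -> bool:
    """Compare the running interpreter's (major, minor) to the expected pair."""
    return tuple(sys.version_info[:2]) == tuple(expected)

# Print a clear pass/fail line for the pipeline log.
actual = f"{sys.version_info.major}.{sys.version_info.minor}"
print(f"Python {actual} -> match: {version_matches()}")
```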

2. Manage Environment Variables Securely

Azure Functions often require access to various environment variables, such as database connection strings, API keys, or other sensitive information. When deploying your Azure Functions using Azure DevOps, it’s essential to handle these environment variables securely.

Here’s how you can approach this:

  1. Store your environment variables as Azure DevOps Service Connections or Azure Key Vault Secrets.
  2. In your Azure DevOps pipeline, use the appropriate task to retrieve and set the environment variables. For example, you can use the AzureKeyVault task to fetch secrets from Azure Key Vault.
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    KeyVaultName: 'your-keyvault-name'
    SecretsFilter: '*'
    RunAsPreJob: false
  3. Ensure that your pipeline has the necessary permissions to access the Azure Key Vault or Service Connections.
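Inside the function code, the secrets fetched by the task ultimately show up as environment variables (app settings). Here is a hedged sketch of how the code might read them; the `get_setting` helper and the variable names are illustrative, not part of any Azure SDK:

```python
import os
from typing import Optional

def get_setting(name: str, default: Optional[str] = None) -> str:
    """Read a setting injected by the platform; fail loudly when a
    required secret is missing instead of running with an empty value."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Required setting {name!r} is not configured")
    return value

# Simulate the platform injecting a secret as an app setting.
os.environ["DB_CONNECTION"] = "Server=tcp:example.database.windows.net;..."
print(get_setting("DB_CONNECTION").split(";")[0])
```

Failing fast on a missing setting turns a subtle runtime bug into an obvious deployment error.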

3. Implement Continuous Integration and Continuous Deployment (CI/CD)

To streamline the deployment process, it’s recommended to set up a CI/CD pipeline in Azure DevOps. This will automatically build, test, and deploy your Azure Functions whenever changes are made to your codebase.

Here’s how you can set up a CI/CD pipeline:

  1. Create an Azure DevOps Pipeline and configure it to trigger on specific events, such as a push to your repository or a pull request.
  2. In the pipeline, include steps to build, test, and package your Azure Functions project.
  3. Add a deployment task to the pipeline to deploy your packaged Azure Functions to the target Azure environment.
# CI/CD pipeline
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

- script: |
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- task: AzureWebApp@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    appName: 'your-function-app-name'
    appType: 'functionApp'
    deployToSlotOrASE: true
    resourceGroupName: 'your-resource-group-name'
    slotName: 'production'

By following these three essential steps, you can ensure a smooth and reliable deployment of your Azure Functions using Azure DevOps, maintaining consistency, security, and automation throughout the process.

Bonus: Embrace DevSecOps with Code Security Checks

As part of your Azure DevOps pipeline, it’s crucial to incorporate security checks to ensure the integrity and safety of your code. This is where the principles of DevSecOps come into play, where security is integrated throughout the software development lifecycle.

Here’s how you can implement code security checks in your Azure DevOps pipeline:

  1. Use Bandit for Python Code Security: Bandit is a popular open-source tool that analyzes Python code for common security issues. You can integrate Bandit into your Azure DevOps pipeline to automatically scan your Azure Functions code for potential vulnerabilities.
- script: |
    pip install bandit
    bandit -r your-functions-directory -f json -o bandit_report.json
  displayName: 'Run Bandit Security Scan'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'bandit_report.json'
    ArtifactName: 'bandit-report'
    publishLocation: 'Container'
  2. Leverage the Safety Tool for Dependency Scanning: Safety is another security tool that checks your Python dependencies for known vulnerabilities. Integrate this tool into your Azure DevOps pipeline to ensure that your Azure Functions are using secure dependencies.
- script: |
    pip install safety
    safety check --full-report
  displayName: 'Run Safety Dependency Scan'
  3. Review Security Scan Results: After running the Bandit and Safety scans, review the generated reports and address any identified security issues before deploying your Azure Functions. You can publish the reports as build artifacts in Azure DevOps for easy access and further investigation.
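If you emit the Bandit report as JSON, a short script can summarize it and gate the build. The trimmed report below is a hand-made sample, so treat the field names as an assumption to verify against the Bandit version you actually run:

```python
import json

# Trimmed, hand-made sample in the shape of Bandit's JSON output.
report = json.loads("""
{
  "results": [
    {"filename": "app.py", "issue_severity": "HIGH",
     "issue_text": "Use of insecure MD5 hash function."},
    {"filename": "util.py", "issue_severity": "LOW",
     "issue_text": "Consider possible security implications."}
  ]
}
""")

def count_by_severity(report: dict) -> dict:
    """Tally findings per severity so a pipeline step can gate on them."""
    counts: dict = {}
    for issue in report.get("results", []):
        sev = issue.get("issue_severity", "UNKNOWN")
        counts[sev] = counts.get(sev, 0) + 1
    return counts

summary = count_by_severity(report)
print(summary)  # {'HIGH': 1, 'LOW': 1}
if summary.get("HIGH", 0) > 0:
    print("High-severity findings detected -- fail the gate here")
```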

By incorporating these DevSecOps practices into your Azure DevOps pipeline, you can ensure that your Azure Functions are not only deployed efficiently but also secure and compliant with industry best practices.


Containers for Data Scientists on top of Azure Container Apps

The Azure Data Science VMs are good for dev and testing, and even though you could use a virtual machine scale set, that is a heavy and costly solution.

When thinking about scaling, one good solution is to containerize the Anaconda / Python virtual environments and deploy them to Azure Kubernetes Service or better yet, Azure Container Apps, the new abstraction layer for Kubernetes that Azure provides.

Here is a quick way to create a container with Miniconda 3, Pandas and Jupyter Notebooks to interface with the environment. I also show how to deploy this single test container to Azure Container Apps.

The result:

A Jupyter Notebook with Pandas Running on Azure Container Apps.

Container Build

If you know the libraries you need, it makes sense to start with the lightest base image, which is Miniconda3. You could also deploy the Anaconda3 container, but that one may include libraries you never use, creating unnecessary vulnerabilities to remediate.

Miniconda 3: https://hub.docker.com/r/continuumio/miniconda3

Anaconda 3: https://hub.docker.com/r/continuumio/anaconda3

Below is a simple Dockerfile to build a container with the pandas, OpenAI and TensorFlow libraries.

FROM continuumio/miniconda3
RUN conda install jupyter -y --quiet && \
    mkdir -p /opt/notebooks
WORKDIR /opt/notebooks
RUN pip install pandas
RUN pip install openai
RUN pip install tensorflow
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]

Build and Push the Container

Now that you have the container built push it to your registry and deploy it on Azure Container Apps. I use Azure DevOps to get the job done.

Here’s the pipeline task:

- task: Docker@2
  inputs:
    containerRegistry: 'dockerRepo'
    repository: 'm05tr0/jupycondaoai'
    command: 'buildAndPush'
    Dockerfile: 'dockerfile'
    tags: |
      $(Build.BuildId)
      latest

Deploy to Azure Container Apps

Deploying to Azure Container Apps was painless once I understood the Azure DevOps task, since I can include my ingress configuration in the same step as the container. The only extra requirement was configuring DNS in my environment. The DevOps task is well documented; here’s a link to the official docs.

Architecture / DNS: https://learn.microsoft.com/en-us/azure/container-apps/networking?tabs=azure-cli

Azure Container Apps Deploy Task : https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureContainerAppsV1/README.md

A few things I’d like to point out: you don’t have to provide a username and password for the container registry; the task gets a token from az login. The resource group has to be the one where the Azure Container Apps environment lives; if not, a new one will be created. The target port is where the container listens; see the container build above, where the Jupyter Notebook points to port 8888. If you are using the Container Apps environment with a private VNET, setting the ingress to external means the VNET can reach it, not outside traffic from the internet. Lastly, I disable telemetry to stop reporting.


- task: AzureContainerApps@1
  inputs:
    azureSubscription: 'IngDevOps(XXXXXXXXXXXXXXXXXXXX)'
    acrName: 'idocr'
    dockerfilePath: 'dockerfile'
    imageToBuild: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    imageToDeploy: 'idocr.azurecr.io/m05tr0/jupycondaoai'
    containerAppName: 'datasci'
    resourceGroup: 'IDO-DataScience-Containers'
    containerAppEnvironment: 'idoazconapps'
    targetPort: '8888'
    location: 'East US'
    ingress: 'external'
    disableTelemetry: true

After deployment I had to get the token which was easy with the Log Stream feature under Monitoring. For a deployment of multiple Jupyter Notebooks it makes sense to use JupyterHub.
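If you prefer to grab the token programmatically from a captured log line, a small sketch like this works; the log format is an assumption based on typical Jupyter startup output, and the token shown is a made-up sample:

```python
import re

# Sample startup line in the shape Jupyter typically logs; token is made up.
log_line = "http://127.0.0.1:8888/?token=4f8c0d2ab9"

def extract_token(line: str):
    """Pull the token query parameter out of a Jupyter startup log line."""
    match = re.search(r"[?&]token=([0-9a-f]+)", line)
    return match.group(1) if match else None

print(extract_token(log_line))  # 4f8c0d2ab9
```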


Azure OpenAI: Private and Secure "ChatGPT-like" Experience for Enterprises

Azure provides the OpenAI service to address the concerns for companies and government agencies that have strong security regulations but want to leverage the power of AI as well.

Most likely you’ve used one of the many AI offerings out there. OpenAI’s ChatGPT, Google Bard, Google PaLM with MakerSuite, Perplexity AI, Hugging Chat and many more have been part of the latest hype, and software companies are racing to integrate them into their products. The main way in is to buy a subscription and connect to the ones that offer their API over the internet, but as a DevSecOps engineer, here’s where the fun starts.

A lot of companies following good security practices block traffic to and from the internet, so the first part of all this is opening the firewall. Next you must protect the credentials of the API user so that they don’t get compromised, since access would reveal what you are up to. Then you have to trust that OpenAI is not using your data to train their models and that they are keeping your company’s data safe.

It could take a ton of time to plan, design and deploy a secured infrastructure for using large language models and unless you have a very specific use case it might be overkill to build your own.

Here’s a breakdown of a few infrastructure highlights about this service.

3 Main Features

Privacy and Security

Your ChatGPT-like interface, called Azure AI Studio, runs in your private subscription. It can be linked to one of your VNETs so that you can use internal routing, and you can also add private endpoints so that you don’t even have to use it over the internet.

Even if you have to use it over the internet, you can lock it down to allow only your public IPs, and your developers will also need an authentication token, which can be rotated monthly via a script.

Pricing

Common Models

  1. GPT-4 Series: The GPT-4 models are like super-smart computers that can understand and generate human-like text. They can help with things like understanding what people are saying, writing stories or articles, and even translating languages.
    Key Differences from GPT-3:
    • Model Size: GPT-4 models tend to be larger in terms of parameters compared to GPT-3. Larger models often have more capacity to understand and generate complex text, potentially resulting in improved performance.
    • Training Data: GPT-4 models might have been trained on a more extensive and diverse dataset, potentially covering a broader range of topics and languages. This expanded training data can enhance the model’s knowledge and understanding of different subjects.
    • Improved Performance: GPT-4 models are likely to demonstrate enhanced performance across various natural language processing tasks. This improvement can include better language comprehension, generating more accurate and coherent text, and understanding context more effectively.
    • Fine-tuning Capabilities: GPT-4 might introduce new features or techniques that allow for more efficient fine-tuning of the model. Fine-tuning refers to the process of training a pre-trained model on a specific dataset or task to make it more specialized for that particular use case.
    • Contextual Understanding: GPT-4 models might have an improved ability to understand context in a more sophisticated manner. This could allow for a deeper understanding of long passages of text, leading to more accurate responses and better contextual awareness in conversation.
  2. GPT-3 Base Series: These models are also really smart and can do similar things as GPT-4. They can generate text for writing, help translate languages, complete sentences, and understand how people feel based on what they write.
  3. Codex Series: The Codex models are designed for programming tasks. They can understand and generate computer code. This helps programmers write code faster, get suggestions for completing code, and even understand and improve existing code.
  4. Embeddings Series: The Embeddings models are like special tools for understanding text. They can turn words and sentences into numbers that computers can understand. These numbers can be used to do things like classify text into different categories, find information that is similar to what you’re looking for, and even figure out how people feel based on what they write.
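To illustrate what "finding similar information" means with embeddings, here is a toy cosine-similarity sketch in plain Python. The four-dimensional vectors below are made up for the example; real Azure OpenAI embeddings are much longer (1,536 dimensions for text-embedding-ada-002):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: close to 1.0
    means similar meaning, close to 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": two related words and one unrelated one.
dog = [0.8, 0.1, 0.05, 0.05]
puppy = [0.75, 0.15, 0.05, 0.05]
invoice = [0.05, 0.05, 0.1, 0.8]

print(round(cosine_similarity(dog, puppy), 3))    # high (similar meaning)
print(round(cosine_similarity(dog, invoice), 3))  # low (unrelated)
```

A search feature would embed the query, then rank stored documents by this score.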

 

Getting Access to it!

Although the service is Generally Available (GA) it is only available in East US and West Europe. You also have to submit an application so that MS can review your company and use case so they can approve or deny your request. This could be due to capacity and for Microsoft to gather information on how companies will be using the service.

The application is here: https://aka.ms/oai/access

Based on research and experience getting this for my clients, I always recommend picking only what you initially need and not getting too greedy. It would also be wise to speak with your MS rep and take them out for a beer! For example, if you just need code generation, select only the Codex option.

Lately the service has been easier to get; hopefully soon we won’t need the form-and-approval dance.


Deploy Azure Container Apps with the native AzureRM Terraform provider, no more AzAPI!

Azure has given us great platforms to run containers. Starting with Azure Container Instance where you can run a small container group just like a docker server and also Azure Kubernetes Service where you can run and manage Kubernetes clusters and containers at scale. Now, the latest Kubernetes abstraction from Azure is called Container Apps!

When a service comes out in a cloud provider, their tools are updated right away, so when Container Apps came out you could deploy it with ARM or Bicep. You could also deploy it with Terraform by using the AzAPI provider, which interacts directly with Azure’s API, but as of a few weeks back (from the publish date of this article) you can use the native AzureRM provider to deploy it.

 

Code Snippet

resource "azurerm_container_app_environment" "example" {
  name                       = "Example-Environment"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
}
resource "azurerm_container_app" "example" {
  name                         = "example-app"
  container_app_environment_id = azurerm_container_app_environment.example.id
  resource_group_name          = azurerm_resource_group.example.name
  revision_mode                = "Single"

  template {
    container {
      name   = "examplecontainerapp"
      image  = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}

Sources

Azure Container Apps