Deploying Azure Functions with Azure DevOps: 3 Must-Dos! Code Security Included

Azure Functions is a serverless compute service that allows you to run your code in response to various events, without the need to manage any infrastructure. Azure DevOps, on the other hand, is a set of tools and services that help you build, test, and deploy your applications more efficiently. Combining these two powerful tools can streamline your Azure Functions deployment process and ensure a smooth, automated workflow.

In this blog post, we’ll explore three essential steps to consider when deploying Azure Functions using Azure DevOps.

1. Ensure Consistent Python Versions

When working with Azure Functions, it’s crucial to ensure that the Python version used in your build pipeline matches the Python version configured in your Azure Function. Mismatched versions can lead to unexpected runtime errors and deployment failures.

To ensure consistency, follow these steps:

  1. Determine the Python version required by your Azure Function. Note that requirements.txt and host.json list dependencies and runtime settings rather than the Python version itself; check the runtime stack configured on your Function App (for example, in the Azure portal under the app's configuration, or with the Azure CLI).
  2. In your Azure DevOps pipeline, use the UsePythonVersion task to set the Python version to match the one required by your Azure Function.

```yaml
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true
```

  3. Verify the Python version in your pipeline by running python --version and ensuring it matches the version specified in the previous step.
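The verification can be sketched as a short script step right after UsePythonVersion (a minimal sketch; the displayName is just illustrative):

```yaml
- script: |
    python --version
  displayName: 'Verify Python version'
```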

2. Manage Environment Variables Securely

Azure Functions often require access to various environment variables, such as database connection strings, API keys, or other sensitive information. When deploying your Azure Functions using Azure DevOps, it’s essential to handle these environment variables securely.

Here’s how you can approach this:

  1. Store your sensitive values as secret pipeline variables or variable groups in Azure DevOps, or as Azure Key Vault secrets.
  2. In your Azure DevOps pipeline, use the appropriate task to retrieve and set the environment variables. For example, you can use the AzureKeyVault task to fetch secrets from Azure Key Vault.

```yaml
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    KeyVaultName: 'your-keyvault-name'
    SecretsFilter: '*'
    RunAsPreJob: false
```

  3. Ensure that the service connection used by your pipeline has the necessary permissions to read secrets from the Azure Key Vault (via an access policy or an RBAC role assignment).
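Once the AzureKeyVault task has run, each fetched secret becomes a secret pipeline variable named after the secret. Secret variables are not exposed to scripts automatically, so you map them explicitly. A minimal sketch, assuming a hypothetical secret named DbConnectionString exists in the vault:

```yaml
- script: |
    echo "Running a step that reads the connection string from its environment..."
    # your deployment or test command goes here
  displayName: 'Use Key Vault secret'
  env:
    DB_CONNECTION_STRING: $(DbConnectionString)  # hypothetical secret name
```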

3. Implement Continuous Integration and Continuous Deployment (CI/CD)

To streamline the deployment process, it’s recommended to set up a CI/CD pipeline in Azure DevOps. This will automatically build, test, and deploy your Azure Functions whenever changes are made to your codebase.

Here’s how you can set up a CI/CD pipeline:

  1. Create an Azure DevOps Pipeline and configure it to trigger on specific events, such as a push to your repository or a pull request.
  2. In the pipeline, include steps to build, test, and package your Azure Functions project.
  3. Add a deployment task to the pipeline to deploy your packaged Azure Functions to the target Azure environment.
```yaml
# CI/CD pipeline
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

- script: |
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- task: AzureWebApp@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    appName: 'your-function-app-name'
    appType: 'functionApp'
    deployToSlotOrASE: true
    resourceGroupName: 'your-resource-group-name'
    slotName: 'production'
```

By following these three essential steps, you can ensure a smooth and reliable deployment of your Azure Functions using Azure DevOps, maintaining consistency, security, and automation throughout the process.

Bonus: Embrace DevSecOps with Code Security Checks

As part of your Azure DevOps pipeline, it’s crucial to incorporate security checks to ensure the integrity and safety of your code. This is where the principles of DevSecOps come into play, where security is integrated throughout the software development lifecycle.

Here’s how you can implement code security checks in your Azure DevOps pipeline:

  1. Use Bandit for Python Code Security: Bandit is a popular open-source tool that analyzes Python code for common security issues. You can integrate Bandit into your Azure DevOps pipeline to automatically scan your Azure Functions code for potential vulnerabilities.
```yaml
- script: |
    pip install bandit
    bandit -r your-functions-directory -f json -o bandit_report.json
  displayName: 'Run Bandit Security Scan'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'bandit_report.json'
    ArtifactName: 'bandit-report'
    publishLocation: 'Container'
```
  2. Leverage the Safety Tool for Dependency Scanning: Safety is another security tool that checks your Python dependencies for known vulnerabilities. Integrate this tool into your Azure DevOps pipeline to ensure that your Azure Functions are using secure dependencies.

```yaml
- script: |
    pip install safety
    safety check --full-report
  displayName: 'Run Safety Dependency Scan'
```
  3. Review Security Scan Results: After running the Bandit and Safety scans, review the generated reports and address any identified security issues before deploying your Azure Functions. You can publish the reports as build artifacts in Azure DevOps for easy access and further investigation.
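Bandit and Safety exit with a non-zero code when they find issues, which fails the script step. To make sure the report is still published for review in that case, you can run the publish task with a condition (a sketch reusing the report file from the Bandit step):

```yaml
- task: PublishBuildArtifacts@1
  condition: succeededOrFailed()  # publish the report even if the scan failed the build
  inputs:
    PathtoPublish: 'bandit_report.json'
    ArtifactName: 'bandit-report'
    publishLocation: 'Container'
```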

By incorporating these DevSecOps practices into your Azure DevOps pipeline, you can ensure that your Azure Functions are not only deployed efficiently but also secure and compliant with industry best practices.


Boosting My Home Lab's Security and Performance with Virtual Apps from Kasm Containers

In the past I’ve worked with VDI solutions like Citrix, VMware Horizon, Azure Virtual Desktop, and others, but my favorite is Kasm. For me, Kasm has a DevOps-friendly and modern way of doing virtual apps and virtual desktops that I didn’t find with other vendors.

With Kasm, apps and desktops run in isolated containers, and I can access them easily with my browser; there’s no need to install client software.

Here are my top 3 favorite features:


#1 - Runs on the Home Lab!

Kasm Workspaces can be used to create a secure and isolated environment for running applications and browsing the web in your home lab. This can help to protect your devices from malware and other threats.

The community edition is free for 5 concurrent sessions.

If you are a systems admin or engineer, you can use it at home for your own benefit, but also to get familiar with the configuration so that you are better prepared to deploy it at work.

#2 - Low Resource Utilization

Kasm container apps are lightweight and efficient, so they start quickly and don't consume a lot of resources. This is especially beneficial if you have a limited amount of hardware, like on a home lab. I run mine on a small Proxmox cluster, and it offloads work from my main PC. You can also set the amount of compute when configuring your containerized apps.

#3 - Security

Each application is run in its own isolated container, which prevents them from interacting with each other or with your PC. This helps to prevent malware or other threats from spreading from one application to another.

The containers can run on isolated Docker networks, and with a good firewall solution you can even contain a self-replicating trojan by segmenting your network and only allowing the necessary ports and traffic flows. For example, if you run the Tor Browser containerized app, you could allow it outbound access to the internet only and block SMB (port 445) to your internal network. If the containerized app gets infected with something like the Emotet trojan, you could prevent it from spreading further and kill the isolated container without having to shut down or reformat your local computer.
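The segmentation idea can be sketched with plain Docker networking (a hypothetical compose file; Kasm manages its own container networks, this just illustrates the concept, and the image tag is illustrative):

```yaml
# Hypothetical docker-compose sketch: the app sits on its own bridge network,
# so it cannot reach containers on other networks; LAN access (e.g. SMB/445)
# is then blocked at the firewall.
services:
  tor-browser:
    image: kasmweb/tor-browser:1.14.0  # illustrative image tag
    networks:
      - isolated
networks:
  isolated:
    driver: bridge
```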

Code Vulnerability scanning: You can scan your container images in your CI/CD pipelines for vulnerabilities, which helps to identify and fix security weaknesses before you deploy them and before they can be exploited.
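As a sketch, such a scan could be a single step in a pipeline using the open-source Trivy scanner (assuming Trivy is installed on the build agent; the image name is illustrative):

```yaml
- script: |
    trivy image --exit-code 1 --severity HIGH,CRITICAL your-registry/your-image:latest
  displayName: 'Scan container image for vulnerabilities'
```

The --exit-code 1 flag makes the step fail the build when findings of the listed severities are present.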


The NodeSelector Is My Mixed-CPU-Architecture Kubernetes Cluster's Best Friend!

I have an ever-growing Kubernetes cluster. I currently have a cluster made of 2 Raspberry Pis and 1 PC. But HOW, isn't that frowned upon? Well, you can use the nodeSelector attribute to make containers stick to specific nodes. The specifics we are covering in this article are the CPU architectures of the nodes, since the Raspberry Pis run ARM and my PC runs AMD64.
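The trick looks like this in a pod spec, using the well-known kubernetes.io/arch node label (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-app
spec:
  nodeSelector:
    kubernetes.io/arch: amd64  # schedule only on AMD64 nodes, not the ARM Pis
  containers:
  - name: app
    image: nginx:1.25
```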

Read more


Don't be a part of the naughty list of internet-exposed Kubernetes clusters!

Exposed K8s APIs

A finding by the Shadowserver Foundation uncovered close to half a million k8s endpoints on the internet which can be targets for exploits. One factor is that by default these clusters are built with public IPs, since cloud providers are outside your network and not all companies can have ExpressRoute or dedicated point-to-point connectivity. To increase security and simplify routing for your Kubernetes cluster, you can create a private cluster. In Azure Kubernetes Service, a private cluster assigns an internal IP to your k8s API, but NGINX defaults to an external IP, so in this article I walk through configuring NGINX to use internal IPs as well to keep it all inside the network.
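On AKS, the core of the NGINX change is annotating the ingress controller's service so Azure provisions an internal load balancer instead of a public one. A sketch, assuming the ingress-nginx Helm chart values format (the IP is illustrative):

```yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP: 10.240.0.42  # optional: pin an internal IP from the cluster subnet
```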

Read more


Scale Down a specific node in Azure Kubernetes Service

Azure Kubernetes Service

In the past I've scaled up a cluster to test a new deployment or to provide extra compute during an upgrade, but when it comes to scaling down, AKS won't necessarily pick the node with the least resource usage, or even a drained one, so I've seen scale-down briefly disrupt a deployment.

To have more control over what AKS scales down, you can use virtual machine scale set protection policies to protect specific nodes from removal. Here's how to do so...

Read more


Avoid Self-Monitoring on your PROD ElasticSearch Cluster! Ship logs to a separate deployment.

It is highly recommended to disable self-monitoring on your production Elasticsearch deployment, both for performance and because Elastic Cloud on Kubernetes has built-in support for shipping monitoring data to a separate cluster. In this article we go over configuring the monitoring cluster.
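With Elastic Cloud on Kubernetes, pointing a production cluster's monitoring data at a separate deployment is a small addition to the Elasticsearch resource (a sketch; the cluster names, version, and node count are illustrative):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: production
spec:
  version: 8.12.0
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: monitoring-cluster  # the separate monitoring deployment
    logs:
      elasticsearchRefs:
      - name: monitoring-cluster
  nodeSets:
  - name: default
    count: 3
```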

Read more