In today’s fast-paced development landscape, the traditional Dev, QA, STG (Staging), PROD pipeline has become standard practice. However, the increasing adoption of cloud-based environments has introduced new challenges, particularly around cost and deployment speed. To address these issues, many organizations are exploring strategies to optimize their development and deployment processes. In this article we explore running Azure Functions on a local Kubernetes cluster: because Azure Functions can run in containers, this approach can speed up your deployments and reduce costs.
KEDA (Kubernetes Event-Driven Autoscaler)
KEDA is a tool that manages the scaling of your applications based on the workload they’re handling. Imagine a website that experiences a sudden surge in traffic. KEDA can automatically increase the number of instances running your website to handle the increased load. Once the traffic subsides, it can also scale back down, all the way to zero pods, to reduce costs.
What is Scale to Zero? It’s a feature that allows applications to automatically scale down to zero instances when there’s no incoming traffic or activity. This means that the application is essentially turned off to save costs. However, as soon as activity resumes, the application can quickly scale back up to handle the load.
Caveat: your app needs to be packaged so that it starts up quickly and does not have a long warm-up period.
How Does it Work? KEDA monitors application metrics and automatically scales the number of instances up or down based on predefined rules. KEDA supports a wide range of application metrics that can be used to trigger scaling actions. Here are some of the most commonly used ones:
- HTTP Metrics:
  - HTTP requests: the number of HTTP requests received by an application.
  - HTTP status codes: the frequency of different HTTP status codes returned by an application (e.g., 200, 404, 500).
- Queue Lengths:
  - Message queue length: the number of messages waiting to be processed in a message queue.
  - Job queue length: the number of jobs waiting to be executed in a job queue.
- Custom Metrics:
  - Application-specific metrics: any custom metrics exposed by your application (e.g., database connection pool size, cache hit rate).
Choosing the right metrics depends on your specific application and scaling needs. For example, if your application relies heavily on message queues, monitoring queue lengths might be the most relevant metric. If your application is CPU-intensive, monitoring CPU utilization could be a good indicator for scaling.
KEDA also supports metric aggregators like Prometheus and StatsD, which can be used to collect and aggregate metrics from various sources and provide a unified view of your application’s performance.
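To make this concrete, here is a minimal ScaledObject sketch that scales a deployment based on the length of an Azure Storage queue and allows scale to zero. The deployment name, queue name, and thresholds below are placeholders you would replace with your own values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler              # placeholder name
spec:
  scaleTargetRef:
    name: my-deployment            # the Deployment KEDA should scale
  minReplicaCount: 0               # scale to zero when there is no work
  maxReplicaCount: 10              # upper bound during traffic spikes
  triggers:
    - type: azure-queue            # scale on Azure Storage queue length
      metadata:
        queueName: orders                      # placeholder queue name
        queueLength: "5"                       # target messages per replica
        connectionFromEnv: AzureWebJobsStorage # env var holding the storage connection string
```

With this in place, KEDA keeps the deployment at zero replicas while the queue is empty and adds replicas as messages accumulate, up to the configured maximum.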
Azure Container Registry
Azure Container Registry (ACR) and Docker Hub are both popular platforms for storing and managing container images. While both offer essential features, Azure Container Registry provides several distinct advantages that make it a compelling choice for many developers and organizations.
Key Benefits of Azure Container Registry
- Integration with Azure Ecosystem:
  - Seamless integration: ACR is deeply integrated with other Azure services, such as Azure Kubernetes Service (AKS), Azure App Service, and Azure Functions. This integration simplifies deployment and management workflows.
  - Centralized management: You can manage container images, deployments, and other related resources from a single Azure portal.
- Enhanced Security and Compliance:
  - Private repositories: ACR allows you to create private repositories, ensuring that your container images are not publicly accessible.
  - Role-based access control (RBAC): Implement fine-grained access control to manage who can view, create, and modify container images.
  - Compliance: ACR meets various industry compliance standards, making it suitable for organizations with strict security requirements.
- Performance and Scalability:
  - Regional proximity: ACR offers multiple regions worldwide, allowing you to store and retrieve images from a location geographically closer to your users, improving performance.
  - Scalability: ACR can automatically scale to handle increased demand for container images.
- Advanced Features:
  - Webhooks: Trigger custom actions (e.g., build pipelines, notifications) based on events in your registry, such as image pushes or deletes.
  - Geo-replication: Replicate your images across multiple regions for improved availability and disaster recovery.
  - Integrated vulnerability scanning: Automatically scan your images for known vulnerabilities and receive alerts.
- Cost-Effective:
  - Azure pricing: ACR is part of the Azure ecosystem, allowing you to leverage Azure’s flexible pricing models and potential cost savings through various discounts and promotions.
In summary, while Docker Hub is a valuable platform for sharing container images publicly, Azure Container Registry offers a more comprehensive solution tailored to the needs of organizations that require enhanced security, integration with Azure services, and performance optimization.
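For reference, a typical workflow for creating a private registry and pushing a locally built image looks roughly like the following. This is a sketch assuming the Azure CLI and Docker are installed and you are already signed in with az login; the resource group, registry, and image names are placeholders:

```bash
# Create a private registry (names and SKU are placeholders)
az acr create --resource-group my-rg --name myregistry --sku Basic

# Authenticate Docker against the registry
az acr login --name myregistry

# Tag the local image with the registry's login server and push it
docker tag my-app:latest myregistry.azurecr.io/my-app:latest
docker push myregistry.azurecr.io/my-app:latest
```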
ACR and Kubernetes Integration
To pull container images from Azure Container Registry (ACR) in a Kubernetes manifest, you’ll need to add an `imagePullSecrets` attribute to the relevant deployment or pod specification. This secret stores the credentials required to authenticate with ACR and pull the images.
Here’s a step-by-step guide on how to achieve this:
1. Create a Kubernetes Secret:
- Use the `kubectl create secret docker-registry` command to create a secret that holds your ACR credentials. Replace `<your-secret-name>` with a name for the secret, `<your-acr-name>` with the actual name of your ACR instance, and `<your-acr-username>` / `<your-acr-password>` with your ACR credentials:

```bash
kubectl create secret docker-registry <your-secret-name> \
  --docker-server=<your-acr-name>.azurecr.io \
  --docker-username=<your-acr-username> \
  --docker-password=<your-acr-password>
```
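If you are just testing and want to use the registry’s built-in admin account, you can retrieve its credentials with the Azure CLI as sketched below (this assumes the admin user is acceptable for your scenario; for production, a service principal or managed identity is preferable):

```bash
# Enable the admin user and read its credentials (testing only)
az acr update --name <your-acr-name> --admin-enabled true
az acr credential show --name <your-acr-name>
```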
2. Reference the Secret in Your Manifest:
- In your Kubernetes manifest (e.g., deployment.yaml, pod.yaml), add the `imagePullSecrets` attribute to the pod `spec` (for a Deployment, that is the `template.spec` section). Reference the name of the secret you created in the previous step:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: <your-acr-name>.azurecr.io/<your-image-name>:<your-tag>
          imagePullPolicy: Always
      imagePullSecrets:
        - name: <your-secret-name>
```
Key Points:
- Replace `<your-acr-name>`, `<your-image-name>`, `<your-tag>`, and `<your-secret-name>` with the appropriate values for your specific ACR instance, image, and secret.
- The `imagePullPolicy` is set to `Always` to ensure that the image is always pulled from the registry, even if it’s already present on the node. You can adjust this policy based on your requirements.
Additional Considerations:
- For more complex scenarios, you might consider using service accounts and role-based access control (RBAC) to manage permissions for accessing ACR.
- If you’re using Azure Kubernetes Service (AKS), you can leverage Azure Active Directory (Azure AD) integration for authentication and authorization, simplifying the management of ACR credentials.
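For example, on AKS you can grant the cluster pull access to the registry directly, which removes the need for an `imagePullSecrets` entry altogether. The cluster and resource group names below are placeholders:

```bash
# Attach the registry to an existing AKS cluster so nodes can pull images without a secret
az aks update --resource-group my-rg --name my-aks-cluster --attach-acr <your-acr-name>
```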
By following these steps, you can successfully configure your Kubernetes deployment or pod to pull container images from Azure Container Registry using the `imagePullSecrets` attribute.