SSL HTTPS Certificate Deep Dive

Secure HTTP Traffic with HashiCorp Vault as your PKI + Cert Manager in Kubernetes - Deep Dive!

Here's a deep-dive technical guide with the steps to configure HashiCorp Vault as a private certificate authority (PKI) and integrate it with cert-manager in Kubernetes to automate certificate management. I've configured this in production environments, but for this demo I'm implementing it in my lab so that my internal apps can have HTTPS encryption in transit. Here are a few examples of internal apps with certificates; as the ping output shows, they are on a private network.

 

Main Benefits:

  1. Centralized PKI Infrastructure: Vault provides a centralized solution for managing your entire certificate lifecycle. Instead of managing certificates across different applications and services, Vault acts as a single source of truth for all your PKI needs. This centralization simplifies management, improves security posture, and ensures consistent certificate policies across your organization.
  2. Dynamic Certificate Issuance and Rotation: Vault can automatically issue short-lived certificates and rotate them before expiration. When integrated with cert-manager in Kubernetes, this automation eliminates the manual certificate renewal process that often leads to outages from expired certificates. The system can continuously issue, renew, and rotate certificates without human intervention.
  3. Fine-grained Access Control: Vault's advanced policy system allows you to implement precise access controls around who can issue what types of certificates. You can limit which teams or services can request certificates for specific domains, restrict certificate lifetimes based on risk profiles, and implement comprehensive audit logging. This helps enforce the principle of least privilege across your certificate infrastructure.

An additional benefit is Vault's broader secret management capabilities – the same tool managing your certificates can also handle database credentials, API keys, and other sensitive information, giving you a unified approach to secrets management.

Prerequisites

  • A DNS Server (I use my firewall)
  • A running Kubernetes cluster (I am using microk8s)
  • Vault server installed and initialized (vault 0.30.0 · hashicorp/hashicorp)
  • cert-manager installed in your Kubernetes cluster (microk8s addon)
  • Administrative access to both Vault and Kubernetes

See my homelab diagram on GitHub: mdf-ido/mdf-ido.

1. Configure Vault as a PKI

1.1. Enable the PKI Secrets Engine

# Enable the PKI secrets engine
vault secrets enable pki

PKI in Hashicorp Vault

# Configure the PKI secrets engine with a longer max lease time (e.g., 1 year)
vault secrets tune -max-lease-ttl=8760h pki

PKI 1 year Expiration
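
To confirm the tuning took effect, you can check the mount from the CLI. This is an optional verification step:

# Confirm the pki mount exists
vault secrets list -detailed | grep pki

# Check the max lease TTL configured on the pki mount
vault read sys/mounts/pki/tune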

1.2. Generate or Import Root CA

# Generate a new root CA
vault write -field=certificate pki/root/generate/internal \
    common_name="Root CA" \
    ttl=87600h > root_ca.crt
Hashicorp Vault Root CA
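
Optionally, you can inspect the generated root certificate locally with openssl to confirm its subject and validity window:

# Inspect the root CA certificate saved to root_ca.crt
openssl x509 -in root_ca.crt -noout -subject -issuer -dates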

1.3. Configure PKI URLs

# Configure the CA and CRL URLs
vault write pki/config/urls \
    issuing_certificates="http://vault.example.com:8200/v1/pki/ca" \
    crl_distribution_points="http://vault.example.com:8200/v1/pki/crl"

Issuing and Certificate Request Links

1.4. Create an Intermediate CA

Hashicorp Intermediate Certificate Authority
# Enable the intermediate PKI secrets engine
vault secrets enable -path=pki_int pki

# Set the maximum TTL for the intermediate CA
vault secrets tune -max-lease-ttl=43800h pki_int

# Generate a CSR for the intermediate CA
vault write -format=json pki_int/intermediate/generate/internal \
    common_name="Intermediate CA" \
    ttl=43800h > pki_intermediate.json

# Extract the CSR
cat pki_intermediate.json | jq -r '.data.csr' > pki_intermediate.csr

# Sign the intermediate CSR with the root CA
vault write -format=json pki/root/sign-intermediate \
    csr=@pki_intermediate.csr \
    format=pem_bundle \
    ttl=43800h > intermediate_cert.json

# Extract the signed certificate
cat intermediate_cert.json | jq -r '.data.certificate' > intermediate.cert.pem

# Import the signed certificate back into Vault
vault write pki_int/intermediate/set-signed \
    certificate=@intermediate.cert.pem
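
Before continuing, it's worth verifying that the intermediate actually chains to your root. A quick check with openssl:

# Verify the signed intermediate against the root CA
openssl verify -CAfile root_ca.crt intermediate.cert.pem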

1.5. Create a Role for Certificate Issuance

# Create a role for issuing certificates
vault write pki_int/roles/your-domain-role \
    allowed_domains="yourdomain.com" \
    allow_subdomains=true \
    allow_bare_domains=true \
    allow_wildcard_certificates=true \
    max_ttl=720h

Hashicorp PKI Role
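
To confirm the role works before wiring up Kubernetes, you can issue a test certificate directly from the CLI (test.yourdomain.com is just a placeholder name):

# Issue a short-lived test certificate against the new role
vault write pki_int/issue/your-domain-role \
    common_name="test.yourdomain.com" \
    ttl=24h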

2. Configure Kubernetes Authentication in Vault

2.1. Enable Kubernetes Auth Method

# Enable the Kubernetes auth method
vault auth enable kubernetes

2.2. Configure Kubernetes Auth Method

# Get the Kubernetes API address
KUBE_API="https://kubernetes.default.svc.cluster.local"

# Get the CA certificate used by Kubernetes
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)

# Get the JWT token for the Vault SA
KUBE_TOKEN=$(kubectl create token vault-auth)

# Configure the Kubernetes auth method in Vault
vault write auth/kubernetes/config \
    kubernetes_host="$KUBE_API" \
    kubernetes_ca_cert="$KUBE_CA_CERT" \
    token_reviewer_jwt="$KUBE_TOKEN" \
    issuer="https://kubernetes.default.svc.cluster.local"
Hashicorp Kubernetes Auth Method
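
The token_reviewer_jwt above comes from a vault-auth service account. If you don't already have one, here is a minimal sketch (assuming the default namespace) that creates it and grants it the token-review permission Vault needs to validate service account tokens:

# Create the service account used by Vault for token review (assumes it doesn't exist yet)
kubectl create serviceaccount vault-auth -n default

# Allow it to call the Kubernetes TokenReview API
kubectl create clusterrolebinding vault-auth-tokenreview \
  --clusterrole=system:auth-delegator \
  --serviceaccount=default:vault-auth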

2.3. Create Policy for Certificate Issuance

# Create a policy file
cat > pki-policy.hcl << EOF
# Read and list access to PKI endpoints
path "pki_int/*" {
  capabilities = ["read", "list"]
}

# Allow creating certificates
path "pki_int/sign/your-domain-role" {
  capabilities = ["create", "update"]
}

path "pki_int/issue/your-domain-role" {
  capabilities = ["create"]
}
EOF

# Create the policy in Vault
vault policy write pki-policy pki-policy.hcl
Hashicorp Vault PKI Policy

2.4. Create Kubernetes Auth Role

# Create a role that maps a Kubernetes service account (created in section 3.1) to Vault policies
vault write auth/kubernetes/role/cert-manager \
    bound_service_account_names="issuer" \
    bound_service_account_namespaces="default" \
    policies="pki-policy" \
    ttl=1h
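
To sanity-check the mapping, you can attempt a login against the role from inside the cluster. This is a sketch that assumes you run it in a pod using the issuer service account (created in the next section) with VAULT_ADDR pointing at your Vault server:

# Read the pod's projected service account token
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# A successful login returns a Vault token with the pki-policy attached
vault write auth/kubernetes/login role=cert-manager jwt="$JWT"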

3. Configure cert-manager to Use Vault

3.1. Create Service Account for cert-manager

# Create a file named cert-manager-vault-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: issuer
  namespace: default

Apply the manifest:

kubectl apply -f cert-manager-vault-sa.yaml

3.2. Create Issuer Resource

# Create a file named vault-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.vault-system.svc.cluster.local:8200
    path: pki_int/sign/your-domain-role
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        serviceAccountRef:
          name: issuer

Apply the manifest:

kubectl apply -f vault-issuer.yaml
Kubernetes Cert Manager Issuer
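
After applying the Issuer, check that cert-manager reports it as Ready; any Vault authentication problems show up in its status conditions:

# Confirm the issuer reached the Ready condition
kubectl get issuer vault-issuer -n default
kubectl describe issuer vault-issuer -n default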

4. Request Certificates

4.1. Direct Certificate Request

# Create a file named certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  secretName: example-tls
  issuerRef:
    name: vault-issuer
  commonName: app.yourdomain.com
  dnsNames:
  - app.yourdomain.com

Apply the manifest:

kubectl apply -f certificate.yaml
Kubernetes Certs from Hashicorp Vault
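
Once the Certificate is Ready, the issued certificate is stored in the example-tls secret. You can decode it to confirm who signed it and when it expires:

# Decode the issued certificate and print its subject, issuer, and validity dates
kubectl get secret example-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -dates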

4.2. Using Ingress for Certificate Request

# Create a file named secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    cert-manager.io/issuer: "vault-issuer"
spec:
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: example-tls
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80

Apply the manifest:

kubectl apply -f secure-ingress.yaml

5. Troubleshooting

5.1. Common Issues and Solutions

Cannot find cert issuer

The Issuer was deployed to a specific namespace, so if you create an Ingress in a different namespace, cert-manager will not find it. You can address this in a few ways:

  • Create a ClusterIssuer, which is not restricted to a namespace (see the sketch below)
  • Create a duplicate Issuer in the specific namespace
  • Create an ExternalName Service that bridges to the actual Service.
Kubernetes ExternalName Bridge
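
If you go the ClusterIssuer route, here is a minimal sketch based on the Issuer from section 3.2. Treat it as an assumption-laden example: with a ClusterIssuer, cert-manager resolves the referenced service account in its cluster resource namespace (by default the cert-manager namespace), so the issuer service account and the Vault role's bound_service_account_namespaces may need adjusting accordingly.

# Hypothetical ClusterIssuer mirroring the namespaced Issuer above
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-cluster-issuer
spec:
  vault:
    server: http://vault.vault-system.svc.cluster.local:8200
    path: pki_int/sign/your-domain-role
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        serviceAccountRef:
          name: issuer
EOF

Reference it from your Ingress with the cert-manager.io/cluster-issuer annotation instead of cert-manager.io/issuer.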

Permission Denied

If you see permission denied errors:

  • Check that your Vault policy includes the correct paths
  • Verify that the role binding is correct in Vault
  • Ensure the service account has the necessary permissions
# Check the Vault policy
vault policy read pki-policy

# Verify the role binding
vault read auth/kubernetes/role/cert-manager

Domain Not Allowed

If you see "common name not allowed by this role" errors:

  • Update your Vault PKI role to allow the domain:
vault write pki_int/roles/your-domain-role \
    allowed_domains="yourdomain.com" \
    allow_subdomains=true \
    allow_bare_domains=true \
    allow_wildcard_certificates=true

Certificate Expiry Issues

If your certificate would expire after the CA certificate:

  • Adjust the max TTL to be shorter than your CA expiration:
vault write pki_int/roles/your-domain-role \
    max_ttl="30d"

Issuer Annotation Issues

If multiple controllers are fighting for the certificate request:

  • Check that you're using the correct annotation:
    • For namespaced Issuers: cert-manager.io/issuer
    • For ClusterIssuers: cert-manager.io/cluster-issuer

5.2. Checking Certificate Status

# Check certificate status
kubectl describe certificate example-cert

# Check certificate request status
kubectl get certificaterequest

# Check cert-manager logs
kubectl logs -n cert-manager deploy/cert-manager-controller

# Check if the secret was created
kubectl get secret example-tls

6. Best Practices

  1. Certificate Rotation: Set appropriate TTLs and let cert-manager handle rotation
  2. Secure Vault Access: Restrict access to Vault and use dedicated service accounts
  3. Monitor Expirations: Set up alerts for certificate expirations
  4. CA Renewals: Plan for CA certificate renewals well in advance
  5. Backup: Regularly backup your Vault PKI configuration and CA certificates
  6. Audit Logging: Enable audit logging in Vault to track certificate operations

7. Maintenance and Operations

7.1. Renewing the CA Certificate

Before your CA certificate expires, you'll need to renew it:

# Check when your CA certificate expires
vault read pki_int/cert/ca

# Plan and execute your CA renewal process well before expiration
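
If you prefer a human-readable expiry date, the PKI's PEM endpoint pipes cleanly into openssl. This assumes VAULT_ADDR is set (for example, http://vault.example.com:8200):

# Print the intermediate CA's expiry date
curl -s "$VAULT_ADDR/v1/pki_int/ca/pem" | openssl x509 -noout -enddate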

7.2. Rotating Credentials

Periodically rotate your Kubernetes auth credentials:

# Update the JWT token used by Vault
KUBE_TOKEN=$(kubectl create token vault-auth)
vault write auth/kubernetes/config \
    token_reviewer_jwt="$KUBE_TOKEN"

Issues

  1. Your Ingresses need to be in the same namespace as the Issuer
    1. Create an ExternalName Service as a bridge

Conclusion

You now have a fully functional PKI system using HashiCorp Vault integrated with cert-manager in Kubernetes. This setup automatically issues, manages, and renews TLS certificates for your applications, enhancing security and reducing operational overhead.


I.T. Modernization

It's 2025 and We are Still Revolutionizing Legacy IT with Modern DevOps and Platform Engineering to Unlock Business Potential

In the rapidly evolving digital landscape, traditional IT strategies are becoming relics and even risks for cybersecurity if not revised. Organizations clinging to outdated infrastructure and siloed development practices find themselves struggling to compete in a world that demands agility, innovation, and rapid value delivery. This is where modern DevOps and Platform Engineering emerge as transformative forces, bridging the gap between legacy systems and cutting-edge technological capabilities.

Limitations of Traditional IT Strategies

Traditional IT approaches are characterized by:

  • High cost due to vendor licensing (currently: VMware's Broadcom acquisition)
  • Slow, cumbersome manual processes (ClickOps repetition)
  • Scary infrastructure management (don't touch it because it's working!)
  • Disconnected development and operations teams (IT staff: "That's Dev's responsibility")
  • Manual, error-prone configuration processes (the ClickOps engineer did 10 servers but forgot one step on 3 of them)
  • Significant time-to-market delays (the I.T. PM's top skill is how to keep delaying project deadlines)

These challenges create a perfect storm of inefficiency that stifles innovation and increases operational costs. Companies find themselves trapped in a cycle of reactive maintenance rather than proactive innovation.

DevOps and Platform Engineering: A Shift to Modern Strategies

Our comprehensive DevOps and Platform Engineering services offer a holistic approach to transforming your IT infrastructure:

1. Unified Ecosystem Integration

We break down the walls between development, operations, and business teams, creating a seamless, collaborative environment. By implementing advanced integration strategies, we transform fragmented IT landscapes into cohesive, responsive systems that align directly with business objectives.

2. Infrastructure as Code (IaC) Revolution

Gone are the days of manual server configurations and time-consuming infrastructure management. Our Platform Engineering approach leverages cutting-edge Infrastructure as Code methodologies, enabling:

  • Repeatable and consistent infrastructure deployment
  • Automated configuration management
  • Rapid scalability and flexibility
  • Reduced human error
  • Enhanced security through standardized deployment processes

3. Continuous Improvement and Innovation

We don’t just optimize your current systems; we create a framework for perpetual evolution. Our DevOps methodologies introduce:

  • Continuous Integration and Continuous Deployment (CI/CD) pipelines
  • Automated testing and quality assurance
  • Real-time monitoring and proactive issue resolution
  • Data-driven performance optimization

Tangible Benefits

Cost Efficiency

By streamlining processes and reducing manual interventions, organizations can significantly cut operational expenses while improving overall system reliability.

Accelerated Time-to-Market

Our platform engineering solutions reduce development cycles from months to weeks, allowing businesses to respond quickly to market demands and customer needs.

Enhanced Reliability and Performance

Automated monitoring, predictive maintenance, and robust architectural design ensure your systems remain stable, secure, and high-performing.

Extra Benefit: A Powerful Approach to Cybersecurity

In today’s threat landscape, cybersecurity is no longer a mere afterthought but a critical business imperative. DevOps methodologies revolutionize security by embedding protective measures directly into the development and operational processes, creating a proactive and resilient security posture.

Integrated Security: The DevOps Security Advantage

Traditional security approaches often treat cybersecurity as a final checkpoint, creating vulnerabilities and inefficiencies. DevOps transforms this paradigm through:

1. Continuous Security Integration (CSI)

  • Automated Security Scanning: Implement real-time vulnerability detection throughout the development lifecycle
  • Code-Level Security Checks: Identify and remediate potential security weaknesses before they reach production
  • Comprehensive Threat Modeling: Proactively analyze and mitigate potential security risks during the design phase

2. Infrastructure as Code (IaC) Security Benefits

  • Consistent Security Configurations: Eliminate human error in security setup through automated, standardized deployments
  • Immutable Infrastructure: Reduce attack surfaces by creating predictable, easily replaceable system components
  • Rapid Patch and Update Mechanisms: Quickly respond to emerging security threats across entire infrastructure

3. Advanced Monitoring and Incident Response

  • Real-Time Threat Detection: Implement sophisticated monitoring tools that provide immediate insights into potential security incidents
  • Automated Incident Response: Create predefined, executable playbooks for rapid threat mitigation
  • Comprehensive Logging and Auditing: Maintain detailed, tamper-evident logs for forensic analysis and compliance requirements

Security Transformation in Practice

Consider the security journey of a typical enterprise:

  • Before DevOps: Sporadic security audits, manual vulnerability assessments, and reactive threat management
  • With DevOps: Continuous security integration, automated threat detection, and proactive risk mitigation

Compliance and Governance

DevOps approaches ensure:

  • Consistent adherence to security standards and regulatory requirements
  • Transparent and traceable security processes
  • Reduced compliance risks through automated checks and balances

The Human Factor Challenge in I.T.: Understanding Resistance to Change

Behind every legacy system and outdated IT strategy lies a deeply human story of comfort, fear, and inertia. The “if it ain’t broke, don’t fix it” mentality is more than just a technical challenge—it’s a profound psychological barrier that organizations must overcome to remain competitive.

The Comfort of the Familiar

Imagine a seasoned IT professional who has spent years mastering a complex, albeit outdated, system. This system has become an extension of their expertise, a familiar landscape where they feel confident and capable. Changing this environment feels like more than a technical challenge—it’s a personal disruption. The human tendency to avoid uncertainty is a powerful force that keeps organizations trapped in technological stagnation.

Psychological Barriers to Technological Evolution

1. Fear of Obsolescence

Many IT professionals worry that new technologies will render their hard-earned skills irrelevant. This fear manifests as resistance to change, creating an invisible barrier to innovation. The “set it and forget it” approach becomes a psychological defense mechanism, a way to maintain a sense of control in a rapidly changing technological landscape.

2. The Illusion of Stability

There’s a comforting myth that stable systems are reliable systems. In reality, “stable” often means “slowly becoming obsolete.” Legacy systems create a false sense of security, masking underlying inefficiencies and potential risks.

The Hidden Costs of Inaction

What appears to be a stable, low-risk approach actually exposes organizations to significant dangers:

  • Technical Debt Accumulation: Each day a legacy system remains unchanged, the cost of eventual modernization increases exponentially.
  • Security Vulnerabilities: Outdated systems become prime targets for cybersecurity threats.
  • Competitive Disadvantage: While your organization maintains the status quo, competitors are leveraging modern technologies to innovate and grow.

Breaking the Psychological Barrier

Successful digital transformation requires more than technical solutions—it demands a holistic approach that addresses human factors:

1. Empowerment Through Education

  • Provide clear, supportive training that demonstrates the personal and professional benefits of new technologies
  • Create learning paths that build confidence and excitement about technological change
  • Highlight how new skills increase individual marketability and career potential

2. Gradual, Supportive Transformation

  • Implement incremental changes that allow teams to adapt without overwhelming them
  • Create a supportive environment that celebrates learning and adaptation
  • Demonstrate tangible benefits through pilot projects and success stories

3. Reframing Change as Opportunity

Instead of viewing technological transformation as a threat, we help organizations see it as:

  • A chance to solve long-standing operational challenges
  • An opportunity to reduce daily frustrations and workload
  • A path to more meaningful and strategic work

The Cost of Comfort

Let’s put the “set it and forget it” mentality into perspective:

Before Transformation

  • Limited flexibility
  • Increasing maintenance costs
  • Growing security risks
  • Decreasing employee satisfaction
  • Reduced competitive ability

After DevOps Transformation

  • Adaptive, responsive infrastructure
  • Reduced operational overhead
  • Enhanced security and reliability
  • Increased employee engagement
  • Competitive technological edge

A New Paradigm of Great Tech Solutions

DevOps and Platform Engineering are not just about implementing new tools—they’re about creating a culture of continuous improvement, learning, and adaptation. We understand that behind every system are human beings with their own experiences, fears, and aspirations.

Our approach goes beyond technical implementation. We provide:

  • Comprehensive change management support
  • Personalized skill development programs
  • Continuous learning and support frameworks
  • A partnership that values both technological innovation and human potential

Invitation to Modernizing I.T.

The world of technology waits for no one. The choice is not between changing or staying the same—it’s between leading or being left behind.

Are you ready to transform not just your technology, but your entire approach to innovation?

Let’s have a conversation about your unique challenges and opportunities.


Deploying Azure Functions with Azure DevOps: 3 Must-Dos! Code Security Included

Azure Functions is a serverless compute service that allows you to run your code in response to various events, without the need to manage any infrastructure. Azure DevOps, on the other hand, is a set of tools and services that help you build, test, and deploy your applications more efficiently. Combining these two powerful tools can streamline your Azure Functions deployment process and ensure a smooth, automated workflow.

In this blog post, we’ll explore three essential steps to consider when deploying Azure Functions using Azure DevOps.

1. Ensure Consistent Python Versions

When working with Azure Functions, it’s crucial to ensure that the Python version used in your build pipeline matches the Python version configured in your Azure Function. Mismatched versions can lead to unexpected runtime errors and deployment failures.

To ensure consistency, follow these steps:

  1. Determine the Python version required by your Azure Function. You can find this information in the requirements.txt file or the host.json file in your Azure Functions project.
  2. In your Azure DevOps pipeline, use the UsePythonVersion task to set the Python version to match the one required by your Azure Function.
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

  3. Verify the Python version in your pipeline by running python --version and ensuring it matches the version specified in the previous step.
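
If you want the pipeline to fail fast on a mismatch, a small script step can compare the agent's interpreter against the version you configured. This is just a sketch; the expected version is hard-coded here as an assumption:

# Fail the build if the agent's Python doesn't match the Function App's runtime version
EXPECTED="3.9"
ACTUAL="$(python --version 2>&1 | awk '{print $2}')"
if [[ "$ACTUAL" == "$EXPECTED"* ]]; then
  echo "Python $ACTUAL matches expected $EXPECTED"
else
  echo "Python $ACTUAL does not match expected $EXPECTED" >&2
  exit 1
fi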

2. Manage Environment Variables Securely

Azure Functions often require access to various environment variables, such as database connection strings, API keys, or other sensitive information. When deploying your Azure Functions using Azure DevOps, it’s essential to handle these environment variables securely.

Here’s how you can approach this:

  1. Store your environment variables as Azure DevOps Service Connections or Azure Key Vault Secrets.
  2. In your Azure DevOps pipeline, use the appropriate task to retrieve and set the environment variables. For example, you can use the AzureKeyVault task to fetch secrets from Azure Key Vault.
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    KeyVaultName: 'your-keyvault-name'
    SecretsFilter: '*'
    RunAsPreJob: false

  3. Ensure that your pipeline has the necessary permissions to access the Azure Key Vault or Service Connections.

3. Implement Continuous Integration and Continuous Deployment (CI/CD)

To streamline the deployment process, it’s recommended to set up a CI/CD pipeline in Azure DevOps. This will automatically build, test, and deploy your Azure Functions whenever changes are made to your codebase.

Here’s how you can set up a CI/CD pipeline:

  1. Create an Azure DevOps Pipeline and configure it to trigger on specific events, such as a push to your repository or a pull request.
  2. In the pipeline, include steps to build, test, and package your Azure Functions project.
  3. Add a deployment task to the pipeline to deploy your packaged Azure Functions to the target Azure environment.
# CI/CD pipeline
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

- script: |
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- task: AzureWebApp@1
  inputs:
    azureSubscription: 'Your_Azure_Subscription_Connection'
    appName: 'your-function-app-name'
    appType: 'functionApp'
    deployToSlotOrASE: true
    resourceGroupName: 'your-resource-group-name'
    slotName: 'production'

By following these three essential steps, you can ensure a smooth and reliable deployment of your Azure Functions using Azure DevOps, maintaining consistency, security, and automation throughout the process.

Bonus: Embrace DevSecOps with Code Security Checks

As part of your Azure DevOps pipeline, it’s crucial to incorporate security checks to ensure the integrity and safety of your code. This is where the principles of DevSecOps come into play, where security is integrated throughout the software development lifecycle.

Here’s how you can implement code security checks in your Azure DevOps pipeline:

  1. Use Bandit for Python Code Security: Bandit is a popular open-source tool that analyzes Python code for common security issues. You can integrate Bandit into your Azure DevOps pipeline to automatically scan your Azure Functions code for potential vulnerabilities.
- script: |
    pip install bandit
    bandit -r your-functions-directory -f custom -o bandit_report.json
  displayName: 'Run Bandit Security Scan'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'bandit_report.json'
    ArtifactName: 'bandit-report'
    publishLocation: 'Container'

  2. Leverage the Safety Tool for Dependency Scanning: Safety is another security tool that checks your Python dependencies for known vulnerabilities. Integrate this tool into your Azure DevOps pipeline to ensure that your Azure Functions are using secure dependencies.
- script: |
    pip install safety
    safety check --full-report
  displayName: 'Run Safety Dependency Scan'

  3. Review Security Scan Results: After running the Bandit and Safety scans, review the generated reports and address any identified security issues before deploying your Azure Functions. You can publish the reports as build artifacts in Azure DevOps for easy access and further investigation.

By incorporating these DevSecOps practices into your Azure DevOps pipeline, you can ensure that your Azure Functions are not only deployed efficiently but also secure and compliant with industry best practices.


Boosting My Home Lab's Security and Performance with Virtual Apps from Kasm Containers

In the past I've worked with VDI solutions like Citrix, VMware Horizon, Azure Virtual Desktop, and others, but my favorite is Kasm. For me, Kasm has a DevOps-friendly, modern way of doing virtual apps and virtual desktops that I didn't find with other vendors.

With Kasm, apps and desktops run in isolated containers and I can access them easily with my browser, no need to install client software.

Here are my top 3 favorite features:


#1 - Runs on the Home Lab!

Kasm Workspaces can be used to create a secure and isolated environment for running applications and browsing the web in your home lab. This can help to protect your devices from malware and other threats.

The community edition is free for 5 concurrent sessions.

If you are a Systems Admin or Engineer you can use it at home for your benefit but also to get familiar with the configuration so that you are better prepared for deploying it at work.

#2 - Low Resource Utilization

Kasm container apps are lightweight and efficient, so they run quickly without consuming a lot of resources. This is especially beneficial if you have limited hardware resources, as on a home lab. I run mine on a small Proxmox cluster, which offloads work from my main PC. You can also set the amount of compute when configuring your containerized apps.

#3 - Security

Each application is run in its own isolated container, which prevents them from interacting with each other or with your PC. This helps to prevent malware or other threats from spreading from one application to another.

The containers can run on isolated Docker networks, and with a good firewall solution you can even contain a self-replicating trojan by segmenting your network and only allowing the necessary ports and traffic flows. For example, if you run the Tor Browser as a containerized app, you could allow it outbound internet access only and block SMB (port 445) to your internal network. If the containerized app gets infected with something like the Emotet trojan, you limit how far it can spread, and you can kill the isolated container without having to shut down or reformat your local computer.

Image vulnerability scanning: You can scan your container images in your CI/CD pipelines for vulnerabilities, which helps identify and fix security weaknesses before you deploy the images and before they can be exploited.


How to use the Azure Private Link with uncommon or new PaaS offerings. You need the subresource names!

Azure, like other clouds, has a Private Link feature that allows connectivity to stay "inside" your network if you have an ExpressRoute or a P2P connection. The advantage is that you don't need an internet-facing endpoint, you don't have to whitelist domains or huge ranges of IPs, and you can also use your internal DNS.

I like to use Terraform to build the different PaaS offerings, and in the same templates I can add the private endpoints to the services. The one thing that took me a while to find was the subresource names. See below:

resource "azurerm_private_endpoint" "keyvault" {
name = "key_vault-terraform-endpoint"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
subnet_id = "${data.azurerm_subnet.rg.id}"

private_service_connection {
name = “key_vault-terraform-privateserviceconnection”
private_connection_resource_id = azurerm_key_vault.main.id
subresource_names = [ “vault” ]
is_manual_connection = false
}

A private-link resource is the destination target of a specified private endpoint.
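
If you need to discover the valid subresource (group ID) names for a given resource, the Azure CLI can list them. A sketch, assuming your az version supports the private-link-resource command and with a placeholder resource ID:

# List the private-link subresources exposed by a resource; look for the "groupId" field
# (e.g. "vault" for Key Vault, "registry" for Container Registry, "sqlServer" for Azure SQL)
az network private-link-resource list \
  --id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"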

Some Benefits

The most common private endpoints I've used, and their benefits, are for the following services:

  1. Azure Container Registry
    1. The benefit here is that I get a Docker Hub-like container registry and can push/pull containers from my local dev machine without going out to the internet
    2. Another benefit is that I can hook up security scans as well
  2. Azure SQL DBs
    1. The benefit is that, again, you can connect from a local server to the DB using internal IPs and DNS
  3. Azure Key Vault
    1. The benefit here is that your services and vault are not exposed to the internet. Even on the internet they would still need accounts to log in, but I like knowing that the service can only be used inside the network.

If all your services are inside, there is no need to allow public networks. You can disable public access and only allow trusted Microsoft services (Log Analytics, Defender, etc.).

Disable public access to Azure Container Registry


Book Review: Sandworm - A New Era of Cyberwar and the Hunt for Kremlin's Most Dangerous Hackers


Imagine you are in the office, in front of your computer, focused on your work, when all of a sudden your computer reboots. This time it doesn't come back to a login screen; instead, it shows a ransomware message.

 

What do you do now?

YouTube link to my book review of Sandworm.

One of my favorite books I read last year was "Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers" by Andy Greenberg. I highly recommend this book not only for cybersecurity and tech professionals but also for anyone who wants to better understand the motives and evolution of hacking groups around the world. It reads like a true crime story, and it provides good background for understanding how the hacking group evolved and was able to launch a devastating attack.

Who is Sandworm?

Sandworm, also tracked as Telebots and Voodoo Bear, is a state-sponsored hacking group that is believed to operate on behalf of the Russian government. Other countries have their state-sponsored groups as well, but in this article we will only focus on Sandworm. Investigations show that the group has been active since at least 2007, although it could be an evolution of another set of groups. They have been linked to a number of high-profile cyberattacks against governments, military organizations, and other targets around the world.

What are their motives?

According to experts, Sandworm has primarily been motivated by geopolitical objectives and has been used as a tool of Russian statecraft. The group has been used to gather intelligence, disrupt critical infrastructure, and spread propaganda and disinformation. Some of the specific goals that Sandworm has been associated with include:

  • Gathering intelligence on governments and military organizations in order to advance Russian interests
  • Disrupting the operations of governments and military organizations in order to weaken their ability to resist Russian aggression
  • Spreading propaganda and disinformation in order to shape public opinion in favor of Russian policies
  • Sabotaging critical infrastructure in order to disrupt the economies and societies of targeted countries

Overall, Sandworm’s activities have been aimed at furthering the interests of the Russian state and undermining the security and stability of other countries.

Hackers and Software Development - Evolving from mimikatz

Mimikatz is a tool that can be used to obtain the passwords of Windows users, allowing an attacker to gain unauthorized access to a system. It was developed by French security researcher Benjamin Delpy and has been used by a variety of hacking groups, including Sandworm.

It is not clear exactly how Sandworm came to use Mimikatz in its operations. However, Mimikatz has become a popular tool among hackers due to its effectiveness at extracting passwords, and it is likely that Sandworm, like many other groups, adopted it as a means of gaining access to targeted systems.

Once Mimikatz has been used to obtain passwords, an attacker can use them to log into systems and gain access to sensitive data, install malware, or perform other malicious actions. Sandworm and other groups have used Mimikatz as part of their toolkit for conducting cyber espionage and other types of attacks.

Damage and Impact

Maersk, a Danish shipping and logistics company, was one of the organizations that was significantly impacted by the NotPetya cyberattack in 2017. NotPetya was a strain of ransomware that was initially spread through a software update mechanism for a Ukrainian accounting program, but it quickly spread to other countries and caused widespread damage to businesses and government organizations around the world.

Maersk was one of the hardest hit companies, with the attack causing significant disruption to its operations. The attack encrypted the company's data and rendered its systems inoperable, resulting in the shutdown of a number of its critical systems, including its email and financial systems. The company estimated that the attack cost it upwards of $300 million in lost revenue and expenses related to the recovery effort.

In the aftermath of the attack, Maersk worked to restore its systems and rebuild its operations, but the damage caused by the attack took months to fully repair. The incident highlights the significant risks and costs that businesses can face as a result of cyberattacks.
