Comprehensive Guide to Upgrading Ansible via Pip with New Python Versions on Ubuntu 20.04
For system administrators and DevOps engineers using Ansible in production environments, upgrading Ansible can sometimes be challenging, especially when the new version requires a newer Python version than what's available by default in Ubuntu 20.04. This guide walks through the process of upgrading Ansible installed via pip when a new Python version is required.
Why This Matters
Ubuntu 20.04 LTS ships with Python 3.8 by default. However, newer Ansible versions may require Python 3.9, 3.10, or even newer. Since Ansible in our environment is installed via pip rather than the APT package manager, we need a careful approach to manage this transition without breaking existing automation.
Prerequisites
- Ubuntu 20.04 LTS system
- Sudo access
- Existing Ansible installation via pip
- Backup of your Ansible playbooks and configuration files
Step 1: Add the "deadsnakes" Python Repository
The "deadsnakes" PPA provides newer Python versions for Ubuntu. This repository allows us to install Python versions that aren't available in the standard Ubuntu repositories.
# Add the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa
# Update package lists
sudo apt update
Step 2: Install the New Python Version and Pip
Install the specific Python version required by your target Ansible version. In this example, we'll use Python 3.10, but adjust as needed.
# Install Python 3.10 and development headers
sudo apt install python3.10 python3.10-dev python3.10-venv
# Install pip for Python 3.10
curl -sS https://bootstrap.pypa.io/get-pip.py | sudo python3.10
# Verify the installation
python3.10 --version
python3.10 -m pip --version
Note: After this step, you will have multiple Python versions installed, and you will need to call each one through its versioned executable as shown above (e.g., python3.10 for Python 3.10, python3.8 for the default Ubuntu 20.04 Python).
Warning: Do not uninstall the Python version that comes with the OS (Python 3.8 in Ubuntu 20.04), as this can cause serious issues with the Ubuntu system. Many system utilities depend on this specific Python version.
Step 3: Uninstall Ansible from the Previous Python Version
Before installing the new version, remove the old Ansible installation to avoid conflicts.
# Find out which pip currently has Ansible installed
which ansible
# This will show something like /usr/local/bin/ansible or ~/.local/bin/ansible
# Check which Python version is used for the current Ansible
ansible --version
# Look for the "python version" line in the output
# Uninstall Ansible from the previous Python version
python3.8 -m pip uninstall ansible ansible-core
# If you had other Ansible-related packages, uninstall those too
python3.8 -m pip uninstall ansible-runner ansible-builder
Step 4: Install Ansible with the New Python Version
Install Ansible system-wide (with sudo), for your user only, or both, as needed:
System-Wide Installation (sudo)
# Install Ansible system-wide with the new Python version
sudo python3.10 -m pip install ansible
# Verify the installation
ansible --version
# Confirm it shows the new Python version
User-Specific Installation (if needed)
# Install Ansible for your user with the new Python version
python3.10 -m pip install --user ansible
# Verify the installation
ansible --version
Reinstall Additional Pip Packages with the New Python Version
If you had additional pip packages installed for Ansible, reinstall them with the --force-reinstall flag to ensure they use the new Python version:
# Reinstall packages with the new Python version
sudo python3.10 -m pip install --force-reinstall ansible-runner ansible-builder
# For user-specific installations
python3.10 -m pip install --user --force-reinstall ansible-runner ansible-builder
Step 5: Update Ansible Collections
Ansible collections might need to be updated to work with the new Ansible version:
# List currently installed collections
ansible-galaxy collection list
# Upgrade collections one at a time (repeat for each collection you use)
ansible-galaxy collection install --upgrade --force-with-deps <collection_name>
# Example:
# ansible-galaxy collection install --upgrade --force-with-deps community.general
# ansible-galaxy collection install --upgrade --force-with-deps ansible.posix
Installing Collection Requirements
When installing pip package requirements for Ansible collections, you must use the specific Python executable with the correct version. For example:
# Incorrect (might use the wrong Python version):
sudo pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt
# Correct (explicitly using the new Python version, 3.10 in this example):
sudo python3.10 -m pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt
This ensures that the dependencies are installed for the correct Python interpreter that Ansible is using.
Consider using a requirements.yml file to manage your collections:
# requirements.yml
collections:
  - name: community.general
    version: 5.0.0
  - name: ansible.posix
    version: 1.4.0
And install them with:
ansible-galaxy collection install -r requirements.yml
Step 6: Update Jenkins Configuration (If Applicable)
If you're using Jenkins to run Ansible playbooks, you'll need to update your Jenkins configuration to use the new Python and Ansible paths:
- Go to Jenkins > Manage Jenkins > Global Tool Configuration
- Update the Ansible installation path to point to the new version:
  - For system-wide installations: /usr/local/bin/ansible (likely unchanged, but verify)
  - For user-specific installations: update to the correct path
- In your Jenkins pipeline or job configuration, specify the Python interpreter path if needed:
// Jenkinsfile example
pipeline {
    agent any
    environment {
        ANSIBLE_PYTHON_INTERPRETER = '/usr/bin/python3.10'
    }
    stages {
        stage('Run Ansible') {
            steps {
                sh 'ansible-playbook -i inventory playbook.yml'
            }
        }
    }
}
Step 7: Update Ansible Configuration Files (Additional Step)
You might need to update your ansible.cfg file to specify the new Python interpreter:
# In ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3.10
This ensures that Ansible uses the correct Python version when connecting to remote hosts.
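If you prefer not to hard-code this in ansible.cfg, the same setting can live in your inventory variables instead. A minimal sketch, assuming a group_vars/all.yml file (the path must point to an interpreter that actually exists on the hosts it applies to):
# group_vars/all.yml
# Illustrative example: set the remote interpreter per group instead of globally
ansible_python_interpreter: /usr/bin/python3.10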
Step 8: Test Your Ansible Installation
Before relying on your upgraded Ansible for production work, test it thoroughly:
# Check Ansible version
ansible --version
# Run a simple ping test
ansible localhost -m ping
# Run a simple playbook
ansible-playbook test-playbook.yml
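If you don't have a test playbook handy, a minimal smoke-test file is enough; the following sketch (the file name and task are just an illustration) also prints which Python executable is running the Ansible controller:
# test-playbook.yml (illustrative)
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Show which Python executable is running the Ansible controller
      debug:
        msg: "ansible-playbook is running under {{ ansible_playbook_python }}"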
Troubleshooting Common Issues
Python Module Import Errors
If you encounter module import errors, ensure that all required dependencies are installed for the new Python version:
sudo python3.10 -m pip install paramiko jinja2 pyyaml cryptography
Path Issues
If running the ansible command doesn't use the new version, check your PATH environment variable:
echo $PATH
which ansible
You might need to create symlinks or adjust your PATH to ensure the correct version is used.
Collection Compatibility
Some collections may not be compatible with the new Ansible or Python version. Check the documentation for your specific collections.
Conclusion
Upgrading Ansible when a new Python version is required involves several careful steps to ensure all components work together smoothly. By following this guide, you should be able to successfully upgrade your Ansible installation while minimizing disruption to your automation workflows.
Remember to always test in a non-production environment first, and maintain backups of your configuration and playbooks before making significant changes.
Happy automating!
Avoid Full Downtime on Auto-Scaled Environments by Only Targeting New Instances with Ansible and GitHub Actions!

Servers are scared of downtime!
The following Ansible playbook provides a simple but powerful way to compare instance uptime to a threshold “scale_time” variable that you can set in your pipeline variables in GitHub. By checking the uptime, you can selectively run tasks only on machines newer than the threshold you set and avoid downtime on the rest.
Of course, the purpose of Ansible is to be idempotent, but sometimes during testing we need to isolate a subset of servers so we don't affect them all, especially when using dynamic inventories.
Solution: The Playbook

How it works:
- Create a variable in GitHub Pipeline Variables.
- Set the variable at runtime:
ansible-playbook -i target_only_new_vmss.yml pm2status.yml -e "scaletime=${{ vars.SCALETIME }}"
- The set_fact task defines the scale_time variable based on when the last scaling event occurred. This will be a timestamp.
- The uptime command gets the current uptime of the instance. This is registered as a variable.
- Using a conditional when statement, we only run certain tasks if the uptime is less than the scale_time threshold.
- This allows you to selectively target new instances created after the last scale-up event.
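Putting those steps together, a rough sketch of the playbook could look like the following. This is an illustration, not the original playbook: it reads /proc/uptime instead of parsing the uptime command's output, and it only prints a message where you would run your real tasks.
- hosts: all
  gather_facts: false
  tasks:
    - name: Define the scale_time threshold (seconds) from the pipeline variable
      set_fact:
        scale_time: "{{ scaletime | int }}"
    - name: Get the current uptime of the instance
      command: cat /proc/uptime
      register: instance_uptime
      changed_when: false
    - name: Run tasks only on instances newer than the threshold
      debug:
        msg: "Uptime {{ instance_uptime.stdout.split()[0] }}s is below {{ scale_time }}s - targeting this host"
      when: (instance_uptime.stdout.split()[0] | float) < (scale_time | float)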
Benefits:
- Avoid unnecessary work on stable instances that don’t need updates.
- Focus load and jobs on new machines only.
- Safer rollouts in large auto-scaled environments by targeting smaller batches.
- Easy way to check uptime against a set point in time.
Deploy the Carbon Black Linux sensor to your endpoints with a simple Ansible playbook.

You can bake some things into your gold images and/or deploy after the servers are up. Unlike other AVs, Carbon Black makes it easy to do both, but in this article we will talk about deploying it with Ansible after the Ubuntu server is up.
There are 2 requirements for this playbook. First, get the company code from your Carbon Black console. Second, download the sensor from the Carbon Black console ahead of time to avoid sign-in or MFA. I have 2 examples here: you can download it to the local Ansible agent and copy it to the remote server, or you can download it from an Azure Storage blob.
The Ansible playbook is simple, but I added some conditions to make it idempotent.
Checks if CB is already installed

Create directories to place the compressed and uncompressed files.

Downloads installer from Blob to target (If not present)

Uncompress Tarball on Remote Target

Install with Company Code
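Putting those pieces together, a sketch of the playbook might look like the following. The sensor path, tarball name, and blob URL are placeholders, and the install.sh call with the company code should be adjusted to match your sensor version and download:
- hosts: all
  become: true
  vars:
    cb_company_code: "REPLACE-WITH-YOUR-COMPANY-CODE"   # from the Carbon Black console
    cb_blob_url: "https://mystorage.blob.core.windows.net/installers/cb-sensor.tgz?<sas-token>"   # placeholder
    cb_download_dir: /opt/cb-install
  tasks:
    - name: Check if the Carbon Black sensor is already installed
      stat:
        path: /opt/carbonblack/psc/bin/cbagentd   # path varies by sensor version
      register: cb_sensor
    - name: Create directories for the compressed and uncompressed files
      file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - "{{ cb_download_dir }}"
        - "{{ cb_download_dir }}/extracted"
      when: not cb_sensor.stat.exists
    - name: Download the installer from the blob to the target (if not present)
      get_url:
        url: "{{ cb_blob_url }}"
        dest: "{{ cb_download_dir }}/cb-sensor.tgz"
      when: not cb_sensor.stat.exists
    - name: Uncompress the tarball on the remote target
      unarchive:
        src: "{{ cb_download_dir }}/cb-sensor.tgz"
        dest: "{{ cb_download_dir }}/extracted"
        remote_src: true
      when: not cb_sensor.stat.exists
    - name: Install with the company code
      command: ./install.sh {{ cb_company_code }}
      args:
        chdir: "{{ cb_download_dir }}/extracted"   # adjust if the tarball extracts into a versioned subdirectory
      when: not cb_sensor.stat.exists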

Want to make I.T. Happy? Automate Cert Import and Binding for IIS.

An important part of any web server build is to install a valid SSL cert and bind it to the sites. In a typical IT team, certs are misunderstood, since they are only dealt with when they expire or when there is a new server to build; some people even fear working with certs. Newer IT teams are building servers and infrastructure as code, and this step makes it easy to deploy and bind the certs when using Windows and IIS.
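As a rough sketch of what that can look like in a playbook (the PFX path, password variable, host group, and site name are placeholders), the cert can be imported with ansible.windows.win_certificate_store and bound with community.windows.win_iis_webbinding:
- hosts: iis_servers
  tasks:
    - name: Import the SSL certificate into the machine store
      ansible.windows.win_certificate_store:
        path: C:\certs\site.pfx
        file_type: pkcs12
        password: "{{ pfx_password }}"
        store_name: My
        store_location: LocalMachine
        state: present
      register: cert_import
    - name: Bind the certificate to the site on port 443
      community.windows.win_iis_webbinding:
        name: Default Web Site
        protocol: https
        port: 443
        certificate_hash: "{{ cert_import.thumbprints[0] }}"
        certificate_store_name: my
        state: present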
Configure the Azure DevOps Agent to use Ansible playbooks locally.

Sometimes you might want to do things on localhost. My example is that I want to mount a share locally so that I can create directories for mount points with different permissions. I don't want to spin up a new machine for this simple task, so I run Ansible on localhost, which is my Azure DevOps agent.
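Here is a small sketch of that kind of local run (the share path, credentials file, mount point, and modes are made up for illustration); it targets localhost with a local connection so everything happens on the Azure DevOps agent itself:
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Mount the share on the agent
      ansible.posix.mount:
        src: //fileserver/share
        path: /mnt/shared
        fstype: cifs
        opts: credentials=/etc/cifs-creds
        state: mounted
    - name: Create directories for mount points with different permissions
      file:
        path: "/mnt/shared/{{ item.name }}"
        state: directory
        mode: "{{ item.mode }}"
      loop:
        - { name: app, mode: "0755" }
        - { name: logs, mode: "0775" }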
Easy way to create an Ansible Dynamic Inventory from an Azure Resource group

I was looking for the best way to create an inventory from a virtual machine scale set and found this gem. The Azure dynamic inventory plugin is the easiest way to pull VMs, scale sets, and other resources from resource groups and do stuff with them in Ansible.
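For reference, a minimal inventory file for the plugin can be as small as this (the resource group name is a placeholder, and the file name must end in azure_rm.yml or azure_rm.yaml for the plugin to pick it up):
# myproject.azure_rm.yml
plugin: azure.azcollection.azure_rm
auth_source: auto
include_vm_resource_groups:
  - my-resource-group
include_vmss_resource_groups:
  - my-resource-group
keyed_groups:
  # e.g. a VM tagged role=web ends up in a group named tag_role_web
  - prefix: tag
    key: tags
Point Ansible at it with ansible-playbook -i myproject.azure_rm.yml playbook.yml, or inspect the result with ansible-inventory -i myproject.azure_rm.yml --graph.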
Elastic Cloud Enterprise Configuration with Ansible

One of the advantages of using DevOps practices is that you can rely less on documentation and more on orchestration. Documentation can come from a vendor, from the engineer who configured the system, or a mix of both, since you have to adapt it to your environments. Using an Ansible role like the one developed by Elastic saves a ton of time and adds best practices and performance tuning while it's at it!