
Azure High Performance Computing (HPC) Blog

Use Entra IDs to run jobs on your HPC cluster

trcooper
Sep 29, 2025

Introduction

This blog demonstrates a practical implementation of the System Security Services Daemon (SSSD) with the recently introduced “idp” provider, which can be used on Azure Linux 3.0 HPC clusters to provide consistent usernames, UIDs, and GIDs across the cluster, all rooted in Microsoft Entra ID.

Having consistent identities across the cluster is a fundamental requirement. It is commonly met using SSSD with a provider such as LDAP, FreeIPA, or AD DS, or, if no identity provider is available, by managing local accounts across all nodes.

SSSD 2.11.0 introduced a new generic “idp” provider that can integrate Linux systems with Microsoft Entra ID via OAuth2/OpenID Connect. This means we can now define a domain in sssd.conf with id_provider = idp and idp_type = entra_id, along with the Entra tenant and app credentials. With SSSD configured and running, getent can resolve Entra users and groups via Entra ID, fetching each user’s POSIX information consistently across the cluster.
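
For example (the user name is illustrative), once such a domain is configured and sssd is running, a node resolves the Entra-backed identity through the standard NSS interfaces:

# Resolve an Entra user and their primary group via NSS/SSSD (illustrative user name)
getent passwd some.user
id some.user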

As this capability is very new (it is only just being included in the Fedora 43 pre-release), this blog covers the steps required to implement it on Azure Linux 3.0 for those who would like to explore it on their own VMs and clusters.

Implementation

1. Build RPMs

As we are deploying on Azure Linux 3.0 and RPMs are not yet available in packages.microsoft.com (PMC), we must download the 2.11.0 release package from Releases · SSSD/sssd and follow the guidance in Building SSSD - sssd.io.

A virtual machine running the Azure Linux 3.0 HPC image was used for the build; it provides many of the required build tools and is also our target operating system. A number of dependencies must still be installed before running make, but these are all available from PMC and the build completes without issue.

# Install dependencies
sudo tdnf -y install \
    c-ares-devel \
    cifs-utils-devel \
    curl-devel \
    cyrus-sasl-devel \
    dbus-devel \
    jansson-devel \
    krb5-devel \
    libcap-devel \
    libdhash-devel \
    libldb-devel \
    libini_config-devel \
    libjose-devel \
    libnfsidmap-devel \
    libsemanage-devel \
    libsmbclient-devel \
    libtalloc-devel \
    libtdb-devel \
    libtevent-devel \
    libunistring-devel \
    libwbclient-devel \
    p11-kit-devel \
    samba-devel \
    samba-winbind

sudo ln -s /etc/alternatives/libwbclient.so-64 /usr/lib/libwbclient.so.0

# Build SSSD from source
wget https://github.com/SSSD/sssd/releases/download/2.11.0/sssd-2.11.0.tar.gz
tar -xvf sssd-2.11.0.tar.gz
cd sssd-2.11.0
autoreconf -if
./configure --enable-nsslibdir=/lib64 --enable-pammoddir=/lib64/security --enable-silent-rules --with-smb-idmap-interface-version=6
make
# Success!!

Building the RPMs is more complex: there are many more dependencies, some of which are not available on PMC, and we are also reusing the generic sssd.spec file. However, it can be done to produce a working set of the required SSSD RPMs.

First install the dependencies available from PMC:

# Add dependencies for rpmbuild
sudo tdnf -y install \
    doxygen \
    libcmocka-devel \
    nss_wrapper \
    pam_wrapper \
    po4a \
    shadow-utils-subid-devel \
    softhsm \
    systemtap-sdt-devel \
    uid_wrapper

The remaining four dependencies are sourced from Fedora and EPEL builds and can be installed using tdnf:

# gdm-pam-extensions-devel
wget https://kojipkgs.fedoraproject.org//packages/gdm/48.0/3.fc42/x86_64/gdm-pam-extensions-devel-48.0-3.fc42.x86_64.rpm
sudo tdnf install ./gdm-pam-extensions-devel-48.0-3.fc42.x86_64.rpm

# libfido2-devel
wget https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/l/libcbor-0.7.0-6.el8.x86_64.rpm
wget https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/l/libfido2-1.11.0-2.el8.x86_64.rpm
wget https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/l/libfido2-devel-1.11.0-2.el8.x86_64.rpm
sudo tdnf install ./libcbor-0.7.0-6.el8.x86_64.rpm --nogpgcheck
sudo tdnf install ./libfido2-1.11.0-2.el8.x86_64.rpm --nogpgcheck
sudo tdnf install ./libfido2-devel-1.11.0-2.el8.x86_64.rpm --nogpgcheck

A sudo make rpms can now be initiated. It will fail, but it establishes much of what we need for a successful rpmbuild using the following steps:

# rpmbuild
sudo make rpms
# will error with: File /rpmbuild/SOURCES/sssd-2.11.0.tar.gz: No such file or directory
sudo cp ../sssd-2.11.0.tar.gz /rpmbuild/SOURCES/
cd /rpmbuild
# edit build_passkey 1 in SPECS/sssd.spec to 0 to skip passkey support
sudo vi SPECS/sssd.spec
sudo rpmbuild --define "_topdir /rpmbuild" -ba SPECS/sssd.spec
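
If you would rather script the spec edit than open vi, a minimal sketch with sed should do the same thing, assuming the spec sets the flag on a single "build_passkey 1" macro line:

# Non-interactive alternative to the manual edit above
# (assumption: the spec declares the flag as e.g. "%global build_passkey 1")
sudo sed -i 's/\(build_passkey\) 1/\1 0/' /rpmbuild/SPECS/sssd.spec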

And we have RPMs:

# RPMS!!!
libipa_hbac-2.11.0-0.azl3.x86_64.rpm
libipa_hbac-devel-2.11.0-0.azl3.x86_64.rpm
libsss_autofs-2.11.0-0.azl3.x86_64.rpm
libsss_certmap-2.11.0-0.azl3.x86_64.rpm
libsss_certmap-devel-2.11.0-0.azl3.x86_64.rpm
libsss_idmap-2.11.0-0.azl3.x86_64.rpm
libsss_idmap-devel-2.11.0-0.azl3.x86_64.rpm
libsss_nss_idmap-2.11.0-0.azl3.x86_64.rpm
libsss_nss_idmap-devel-2.11.0-0.azl3.x86_64.rpm
libsss_sudo-2.11.0-0.azl3.x86_64.rpm
python3-libipa_hbac-2.11.0-0.azl3.x86_64.rpm
python3-libsss_nss_idmap-2.11.0-0.azl3.x86_64.rpm
python3-sss-2.11.0-0.azl3.x86_64.rpm
python3-sss-murmur-2.11.0-0.azl3.x86_64.rpm
python3-sssdconfig-2.11.0-0.azl3.noarch.rpm
sssd-2.11.0-0.azl3.x86_64.rpm
sssd-ad-2.11.0-0.azl3.x86_64.rpm
sssd-client-2.11.0-0.azl3.x86_64.rpm
sssd-common-2.11.0-0.azl3.x86_64.rpm
sssd-common-pac-2.11.0-0.azl3.x86_64.rpm
sssd-dbus-2.11.0-0.azl3.x86_64.rpm
sssd-debuginfo-2.11.0-0.azl3.x86_64.rpm
sssd-idp-2.11.0-0.azl3.x86_64.rpm
sssd-ipa-2.11.0-0.azl3.x86_64.rpm
sssd-kcm-2.11.0-0.azl3.x86_64.rpm
sssd-krb5-2.11.0-0.azl3.x86_64.rpm
sssd-krb5-common-2.11.0-0.azl3.x86_64.rpm
sssd-ldap-2.11.0-0.azl3.x86_64.rpm
sssd-nfs-idmap-2.11.0-0.azl3.x86_64.rpm
sssd-proxy-2.11.0-0.azl3.x86_64.rpm
sssd-tools-2.11.0-0.azl3.x86_64.rpm
sssd-winbind-idmap-2.11.0-0.azl3.x86_64.rpm

2. Deploy RPMs

With the RPMs created, we can now install them on our cluster. In my case I am using a customised image with other tunings and packages, so the RPMs can be included in my Ansible playbook and an updated image produced. The following details the RPMs (a subset of the 30 or so created) installed into the image:

# Pre install sssd rpms
- name: Copy sssd rpms onto host
  ansible.builtin.copy:
    src: sssd-2.11.0/
    dest: /tmp/sssd/

- name: Install sssd rpms
  ansible.builtin.shell: |
    tdnf -y install /tmp/sssd/libsss_certmap-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/libsss_certmap-devel-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/libsss_idmap-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/libsss_nss_idmap-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-client-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/libsss_sudo-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-nfs-idmap-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-common-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-common-pac-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-idp-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-krb5-common-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-ad-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/libipa_hbac-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-ipa-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-krb5-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-ldap-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-proxy-2.11.0-0.azl3.x86_64.rpm
    tdnf -y install /tmp/sssd/sssd-2.11.0-0.azl3.x86_64.rpm
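
If you are not baking the RPMs into an image with Ansible, the same subset can be installed from a shell on each node. This is only a sketch and assumes the RPMs have already been copied to /tmp/sssd; the install order mirrors the Ansible task above so local dependencies resolve:

# Install the same subset of RPMs one by one, preserving the dependency order used above
cd /tmp/sssd
for rpm in \
    libsss_certmap libsss_certmap-devel libsss_idmap libsss_nss_idmap sssd-client \
    libsss_sudo sssd-nfs-idmap sssd-common sssd-common-pac sssd-idp sssd-krb5-common \
    sssd-ad libipa_hbac sssd-ipa sssd-krb5 sssd-ldap sssd-proxy sssd
do
    sudo tdnf -y install ./${rpm}-2.11.0-0.azl3.*.rpm
done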

3. Create an App Registration

For the SSSD “idp” provider to be able to read Entra ID user and group attributes, we must create an app registration with a client secret in our Entra tenant.
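
A rough sketch of creating the registration and secret with the Azure CLI follows (the display name is illustrative; the Entra portal works just as well):

# Create the app registration (illustrative display name) and note its Application (client) ID
APP_ID=$(az ad app create --display-name sssd-idp-demo --query appId -o tsv)

# Create the service principal that backs the app registration in the tenant
az ad sp create --id "${APP_ID}"

# Add a client secret; record the returned password, it is shown only once
az ad app credential reset --id "${APP_ID}" --append --query password -o tsv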

The application will require the Microsoft Graph API permissions needed to read user and group attributes.

[Screenshot: required API permissions]

Additionally, the application must be assigned the Directory Readers role over the directory. This can be done through the Graph API using the following template:

POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
{
  "principalId": "<ObjectId of your SPN>",
  "roleDefinitionId": "<RoleDefinitionId for Directory Readers>",
  "directoryScopeId": "/"
}
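
As a sketch of submitting that request with the Azure CLI (the signed-in account must be allowed to manage directory roles, and the role definition ID is left as a placeholder just as in the template):

# Fill in your own values; the Directory Readers roleDefinitionId can be copied
# from the Entra portal (Roles and administrators) or looked up via Graph.
APP_ID="<application (client) id>"
ROLE_ID="<roleDefinitionId for Directory Readers>"

# Object ID of the service principal backing the app registration
SPN_ID=$(az ad sp show --id "${APP_ID}" --query id -o tsv)

# Assign the role over the whole directory
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --headers "Content-Type=application/json" \
  --body "{\"principalId\": \"${SPN_ID}\", \"roleDefinitionId\": \"${ROLE_ID}\", \"directoryScopeId\": \"/\"}"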

Note the Application (client) ID and its secret, as these will be required for the SSSD configuration.

4. Configure SSSD & NSSWITCH

For these I have used cloud-init to add sssd.conf and amend nsswitch.conf during deployment across the Slurm controllers, login nodes, and compute nodes. The SSSD service is also enabled and started. The resulting files should look like the following, customized with your own domain, application ID, and secret.

/etc/sssd/sssd.conf

[sssd]
config_file_version = 2
services = nss, pam
domains = mydomain.onmicrosoft.com

[domain/mydomain.onmicrosoft.com]
id_provider = idp
idp_type = entra_id
idp_client_id = ########-####-####-####-############
idp_client_secret = ########################################
idp_token_endpoint = https://login.microsoftonline.com/937d5829-df9d-46b6-ad5a-718ebc33371e/oauth2/v2.0/token
idp_userinfo_endpoint = https://graph.microsoft.com/v1.0/me
idp_device_auth_endpoint = https://login.microsoftonline.com/937d5829-df9d-46b6-ad5a-718ebc33371e/oauth2/v2.0/devicecode
idp_id_scope = https%3A%2F%2Fgraph.microsoft.com%2F.default
idp_auth_scope = openid profile email
auto_private_groups = true
use_fully_qualified_names = false
cache_credentials = true
entry_cache_timeout = 5400
entry_cache_nowait_percentage = 50
refresh_expired_interval = 4050
enumerate = false
debug_level = 2

[nss]
debug_level = 2
default_shell = /bin/bash
fallback_homedir = /shared/home/%u

[pam]
debug_level = 2
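
Note that sssd expects its configuration file to be readable only by root. Whether you deliver the file via cloud-init or by hand, a quick check of the permissions and service state along these lines is worthwhile:

# sssd will not start cleanly if sssd.conf is readable by other users
sudo chown root:root /etc/sssd/sssd.conf
sudo chmod 600 /etc/sssd/sssd.conf

# Enable and start the service, then confirm it is healthy
sudo systemctl enable --now sssd
systemctl status sssd --no-pager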

 

/etc/nsswitch.conf

# Begin /etc/nsswitch.conf
passwd: files sss
group: files sss
shadow: files sss
hosts: files dns
networks: files
protocols: files
services: files
ethers: files
rpc: files
# End /etc/nsswitch.conf
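
A quick way to confirm that lookups are being answered by SSSD rather than local files is to restrict getent to the sss source (the user name is illustrative):

# Query only the sss NSS source to prove the idp provider is resolving identities
getent -s sss passwd some.user
getent -s sss group some.user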

5. Create User home directories

The use of Device Auth for Entra users over SSH is not currently supported, so for now my Entra users will authenticate using SSH public key authentication. For that to work, their $HOME directories must be pre-created and their public keys added to .ssh/authorized_keys. This is simplified by having SSSD in place, as we can use getent passwd to get a user’s $HOME and set directory and file permissions using the usual chown command.

The following example script will create the user’s home directory, add their public key, and create a keypair for internal use across the cluster:

#!/bin/bash
# Script to create a user home directory and populate it with a given SSH public key.
# Must be executed as root or via sudo.

USER_NAME=$1
USER_PUBKEY=$2

if [ -z "${USER_NAME}" ] || [ -z "${USER_PUBKEY}" ]; then
    echo "Usage: $0 <username> <public-key>"
    exit 1
fi

entry=$(getent passwd "${USER_NAME}")
export USER_UID=$(echo "$entry" | awk -F: '{print $3}')
export USER_HOME=$(echo "$entry" | awk -F: '{print $6}')

# if directory exists, we're good
if [ -d "${USER_HOME}" ]; then
    echo "Directory ${USER_HOME} exists, do not modify."
else
    mkdir -p "${USER_HOME}"
    chown $USER_UID:$USER_UID $USER_HOME
    chmod 700 $USER_HOME
    cp -r /etc/skel/. $USER_HOME
    mkdir -p $USER_HOME/.ssh
    chmod 700 $USER_HOME/.ssh
    touch $USER_HOME/.ssh/authorized_keys
    chmod 644 $USER_HOME/.ssh/authorized_keys
    echo "${USER_PUBKEY}" >> $USER_HOME/.ssh/authorized_keys
    {
        echo "# Automatically generated - StrictHostKeyChecking is disabled to allow for passwordless SSH between Azure nodes"
        echo "Host *"
        echo "    StrictHostKeyChecking no"
    } >> "$USER_HOME/.ssh/config"
    chmod 644 "$USER_HOME/.ssh/config"
    chown -R $USER_UID:$USER_UID $USER_HOME
    sudo -u $USER_NAME ssh-keygen -f $USER_HOME/.ssh/id_ed25519 -N "" -q
    cat $USER_HOME/.ssh/id_ed25519.pub >> $USER_HOME/.ssh/authorized_keys
fi
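
An example invocation, assuming the script has been saved as create_user_home.sh (the script name and key file path are mine, not from the deployment):

# Hypothetical usage: create john.doe's home directory from a public key file
sudo ./create_user_home.sh john.doe "$(cat /tmp/john.doe.pub)"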

6. Run jobs as Entra user

Logged in as an Entra user:

john.doe@tst4-login-0 [ ~ ]$ id
uid=1137116670(john.doe) gid=1137116670(john.doe) groups=1137116670(john.doe)
john.doe@tst4-login-0 [ ~ ]$ getent passwd john.doe
john.doe:*:1137116670:1137116670::/shared/home/john.doe:/bin/bash

And running an MPI job:

john.doe@tst4-login-0 [ ~ ]$ sbatch -p hbv4 /cvmfs/az.pe/1.2.6/tests/imb/imb-env-intel-oneapi.sh
Submitted batch job 330
john.doe@tst4-login-0 [ ~ ]$ squeue
  JOBID PARTITION     NAME     USER ST   TIME  NODES NODELIST(REASON)
    330      hbv4 imb-env- john.doe  R   0:06      2 tst4-hbv4-[114-115]
john.doe@tst4-login-0 [ ~ ]$ cat slurm-imb-env-intel-oneapi-330.out
Testing IMB using Spack environment intel-oneapi ...
Setting up Azure PE version 1.2.6 for azurelinux3.0 on x86_64
Testing IMB using srun...
#----------------------------------------------------------------
#    Intel(R) MPI Benchmarks 2021.7, MPI-1 part
#----------------------------------------------------------------
# Date                  : Mon Sep 29 15:24:24 2025
# Machine               : x86_64
# System                : Linux
# Release               : 6.6.96.1-1.azl3
# Version               : #1 SMP PREEMPT_DYNAMIC Tue Jul 29 02:44:24 UTC 2025
# MPI Version           : 4.1
# MPI Thread Environment:

Summary

So, it is early days and a little preparation is required, but hopefully this demonstrates that, using the new SSSD “idp” provider, we can finally use Entra ID as the source of user identities on our HPC clusters.

Updated Oct 01, 2025
Version 2.0