
DOMA Token-based AuthZ Testbed

Joining testbed

Send mail to the wlcg-doma-tpc e-group if you want to join the Rucio DOMA Functional Tests OIDC or advertise your CE that supports job submission with tokens. Please include the details necessary to access your SE/CE endpoint; as an example you can use the entries already filled in the tables of available resources that to some degree already support tokens.

IAM

To be able to use tokens it is necessary to know which token issuer is used by the individual VOs. For tests we use the WLCG IAM token issuer, which provides services for the "wlcg" VO, e.g.

VO Issuer
WLCG IAM https://wlcg.cloud.cnaf.infn.it/
ALICE IAM https://alice-auth.cern.ch/
ATLAS IAM https://atlas-auth.cern.ch/
CMS IAM https://cms-auth.cern.ch/
LHCB IAM https://lhcb-auth.cern.ch/
DTEAM IAM https://dteam-auth.cern.ch/
DUNE https://cilogon.org/dune
Fermilab https://cilogon.org/fermilab
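
Each of these issuers should publish standard OpenID Connect discovery metadata under /.well-known/openid-configuration, which can be used to verify the relevant endpoints. A minimal sketch (assumes curl and jq are installed):

# inspect the WLCG IAM issuer metadata
curl -s https://wlcg.cloud.cnaf.infn.it/.well-known/openid-configuration \
    | jq '{issuer, token_endpoint, jwks_uri}'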

OSG has token issuer details stored together with other VO details in its virtual-organization topology files. WLCG/EGI currently doesn't provide a common (trusted) place with token issuer details, but naturally this could become part of the EGI VO ID Cards (this topic was briefly touched on here).

CE

ARC-CE

ARC-CE 6.12 still has limited support for WLCG JWT tokens and they can only be used to submit jobs with HTTP-based protocols (EMI-ES and REST interfaces). With the current limitations, a configuration close to the expectations of the WLCG JWT profile could look like the following (replace arc1.example.com with your ARC-CE hostname, or arc1.example.com:port_number if you don't use the standard HTTPS port 443):

# ...
[authtokens]

[authgroup: wlcg_iam]
# capability based authorization that use compute.* scopes
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.create *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.read *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.modify *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.cancel *
# group based authorization that use /wlcg/pilots group (LHC experiments prefer capabilities)
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com * /wlcg/pilots

# accept token issued by EGI Check-in for job submission (both old MitreID and new Keycloak issuer)
[authgroup: egi_aai]
# very rough / DANGEROUS configuration - accepting all tokens without restrictions
#authtokens = * https://aai.egi.eu/oidc/ * * *
#authtokens = * https://aai.egi.eu/auth/realms/egi * * *
# it is possible to restrict job submission to the specific EGI user
authtokens = 85ff127e07ea6660c727831b99e18e4e96b319283f8d2cc8113f405bad2ba261@egi.eu https://aai.egi.eu/oidc/ * * *
authtokens = 85ff127e07ea6660c727831b99e18e4e96b319283f8d2cc8113f405bad2ba261@egi.eu https://aai.egi.eu/auth/realms/egi * * *

# just an example for ARC-CE running on arc1.example.com
# recommendation for ATLAS configuration may change in future
# (this is not the official ATLAS site configuration documentation)
[authgroup: atlas_iam_prd]
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.example.com compute.cancel *
[authgroup: atlas_iam_plt]
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.example.com compute.cancel *
[authgroup: atlas_iam_sgm]
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.example.com compute.cancel *

# again, just an example for ARC-CE running on arc1.example.com
# (this is not the official CMS site configuration documentation)
[authgroup: cms_iam_pilot]
authtokens = bad55f4e-602c-4e8d-a5c5-bd8ffb762113 https://cms-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = bad55f4e-602c-4e8d-a5c5-bd8ffb762113 https://cms-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = bad55f4e-602c-4e8d-a5c5-bd8ffb762113 https://cms-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = bad55f4e-602c-4e8d-a5c5-bd8ffb762113 https://cms-auth.cern.ch/ https://arc1.example.com compute.cancel *
[authgroup: cms_iam_test]
authtokens = 08ca855e-d715-410e-a6ff-ad77306e1763 https://cms-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = 08ca855e-d715-410e-a6ff-ad77306e1763 https://cms-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = 08ca855e-d715-410e-a6ff-ad77306e1763 https://cms-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = 08ca855e-d715-410e-a6ff-ad77306e1763 https://cms-auth.cern.ch/ https://arc1.example.com compute.cancel *
[authgroup: cms_iam_itb]
authtokens = 490a9a36-0268-4070-8813-65af031be5a3 https://cms-auth.cern.ch/ https://arc1.example.com compute.create *
authtokens = 490a9a36-0268-4070-8813-65af031be5a3 https://cms-auth.cern.ch/ https://arc1.example.com compute.read *
authtokens = 490a9a36-0268-4070-8813-65af031be5a3 https://cms-auth.cern.ch/ https://arc1.example.com compute.modify *
authtokens = 490a9a36-0268-4070-8813-65af031be5a3 https://cms-auth.cern.ch/ https://arc1.example.com compute.cancel *

# this assumes existence of local users wlcg, egi, atlasprd, atlasplt, atlassgm, cmsplt, cmstest and cmsitb with corresponding groups
[mapping]
map_to_user = wlcg_iam wlcg:wlcg
map_to_user = egi_aai egi:egi
map_to_user = atlas_iam_prd atlasprd:atlasprd
map_to_user = atlas_iam_plt atlasplt:atlasplt
map_to_user = atlas_iam_sgm atlassgm:atlassgm
map_to_user = cms_iam_pilot cmsplt:cmsplt
map_to_user = cms_iam_test cmstest:cmstest
map_to_user = cms_iam_itb cmsitb:cmsitb
policy_on_nomap=stop

[arex/ws/jobs]
allowaccess=wlcg_iam
allowaccess=egi_aai
allowaccess=atlas_iam_prd
allowaccess=atlas_iam_plt
allowaccess=atlas_iam_sgm
allowaccess=cms_iam_pilot
allowaccess=cms_iam_test
allowaccess=cms_iam_itb
# ...

Token implementation in ARC-CE is still preliminary and it seems possible there will be changes even in the structure of the configuration file. You should follow the official documentation if you install a more recent ARC-CE version.

Even though job submission is possible with tokens, it is still necessary to have an existing X.509 proxy, because the current arcsub version still unconditionally verifies proxy presence. On the other hand, this proxy is delegated to the ARC-CE regardless of the auth mechanism used to submit jobs.

With a token in the BEARER_TOKEN environment variable (ARC_OTOKEN for older ARC-CE 6.7 clients), the token will automatically be used for authorization with the HTTP interface, e.g.

cat > test.xrsl <<EOF
&(executable="/usr/bin/env")
(stdout="test.out")
(stderr="test.err")
(jobname="ARC-CE test")
(runtimeenvironment = "ENV/PROXY")
EOF
# the following line is necessary for the ARC-CE 6.7 client
# newer versions use BEARER_TOKEN or token discovery
#export ARC_OTOKEN=$(oidc-token --aud=https://arc1.farm.particle.cz --scope=compute.read --scope=compute.modify --scope=compute.cancel --scope=compute.create --scope=wlcg.groups:/wlcg/pilots OIDC_STORE_NAME)
export BEARER_TOKEN=$(oidc-token --aud=https://arc1.farm.particle.cz --scope=compute.read --scope=compute.modify --scope=compute.cancel --scope=compute.create --scope=wlcg.groups:/wlcg/pilots OIDC_STORE_NAME)
arcsub --debug=DEBUG --info-endpoint-type arcrest --submission-endpoint-type arcrest --computing-element arc1.farm.particle.cz test.xrsl

ARC-CE supports token discovery as described in the WLCG Bearer Token Discovery specification.
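
Roughly, the discovery mechanism looks for the token in the BEARER_TOKEN environment variable first, then in the file pointed to by BEARER_TOKEN_FILE, and finally in $XDG_RUNTIME_DIR/bt_u$(id -u) (or /tmp/bt_u$(id -u)); see the specification for the authoritative rules. A minimal sketch that stores a token in the per-user runtime file, reusing whatever oidc-agent account name (OIDC_STORE_NAME) you registered:

# place the token where discovery-aware clients will find it
oidc-token --aud=https://arc1.farm.particle.cz \
    --scope=compute.create --scope=compute.read \
    --scope=compute.modify --scope=compute.cancel \
    --scope=wlcg.groups:/wlcg/pilots \
    OIDC_STORE_NAME > $XDG_RUNTIME_DIR/bt_u$(id -u)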

Configured endpoints

Site Host VO Audience Group
praguelcg2 arc1.farm.particle.cz, arc2.farm.particle.cz wlcg * /wlcg/pilots
praguelcg2 arc1.farm.particle.cz, arc2.farm.particle.cz atlas, dune * *

HTCondor-CE

It is possible to submit jobs with a WLCG JWT token if HTCondor accepts SCITOKENS authentication. By default HTCondor-CE is configured with a non-standard aud claim, but this can be changed by adding the following line to the /etc/condor-ce/config.d/99-local file

SCITOKENS_SERVER_AUDIENCE = $(SCITOKENS_SERVER_AUDIENCE) condor://$(COLLECTOR_HOST)
# special "any" audience, not recommended for production use
#SCITOKENS_SERVER_AUDIENCE = $(SCITOKENS_SERVER_AUDIENCE) https://wlcg.cern.ch/jwt/v1/any
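
After editing the file, the new audience list can be checked and applied without a full service restart. A small sketch using the standard HTCondor-CE tools:

# show the effective value and where it was defined
condor_ce_config_val -verbose SCITOKENS_SERVER_AUDIENCE
# reload the configuration
condor_ce_reconfig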

All other configuration and identity mapping is described in the documentation. Currently identity mapping is limited to sub+iss, which is not sufficient for an IAM instance shared by multiple groups (e.g. Fermilab), but there are plans to implement a callout similar to GSI lcmaps.
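
As an illustration only (consult the HTCondor-CE documentation for the authoritative syntax and file location), a mapping entry typically matches the concatenated issuer and sub claims and assigns a local account. The file name and the cmsplt local user below are assumptions; the subject is the CMS pilot client ID used in the ARC-CE example above:

# e.g. /etc/condor-ce/mapfiles.d/10-scitokens.conf (file name is an example)
SCITOKENS /^https:\/\/cms-auth\.cern\.ch\/,bad55f4e-602c-4e8d-a5c5-bd8ffb762113$/ cmsplt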

Follow these steps to submit a job with a token to HTCondor-CE

cat > test.sub <<EOF
executable = /bin/env
output = env.out
error = env.err
log = env.log
queue
EOF

# get an access token, e.g. from oidc-agent (see below) and store its
# content in a file used by the WLCG bearer token discovery mechanism
oidc-token --aud=condor://osgce3.farm.particle.cz:9619 -s compute.create -s compute.read -s compute.modify -s compute.cancel wlcg-compute > $XDG_RUNTIME_DIR/bt_u$(id -u)

export CONDOR_CONFIG=/dev/null
export GSI_AUTHZ_CONF=/dev/null
export _condor_AUTH_SSL_CLIENT_CADIR=/etc/grid-security/certificates
export _condor_SEC_CLIENT_AUTHENTICATION_METHODS=SCITOKENS

condor_ping -verbose -pool osgce3.farm.particle.cz:9619 -name osgce3.farm.particle.cz WRITE
condor_submit -pool osgce3.farm.particle.cz:9619 -remote osgce3.farm.particle.cz test.sub
condor_q -pool osgce3.farm.particle.cz:9619 -name osgce3.farm.particle.cz

Obtain an access token with compute.* scopes

  • oidc-agent
# start oidc-agent (if it is not already running)
eval $(oidc-agent)
# register a new client in oidc-agent with compute.* scopes (one time
# action, but not all IAM instances allow users to request compute.* scopes)
oidc-gen --iss=https://wlcg.cloud.cnaf.infn.it/ --scope="openid offline_access wlcg.groups compute.create compute.read compute.modify compute.cancel" wlcg-compute
# obtain token with all necessary scopes and right audience
oidc-token --aud=condor://osgce3.farm.particle.cz:9619 --scope "compute.create compute.read compute.modify compute.cancel" wlcg-compute > $XDG_RUNTIME_DIR/bt_u$(id -u)
  • client credentials grant (client registered in IAM with compute.* scopes, grant type "client credentials" and response type "token")
curl -s --data "grant_type=client_credentials&client_id=client_id_from_iam_registration&client_secret=client_secret_from_iam_registration&audience=condor://osgce3.farm.particle.cz:9619" https://wlcg.cloud.cnaf.infn.it/token | jq -r .access_token > $XDG_RUNTIME_DIR/bt_u$(id -u)
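
Whichever method is used, it may be worth checking that the stored token really carries the expected aud and scope claims. A small sketch that decodes the token payload using only python3 from the base system:

# print the claims of the token stored for bearer token discovery
python3 - "$XDG_RUNTIME_DIR/bt_u$(id -u)" <<'PYEOF'
import base64, json, sys
payload = open(sys.argv[1]).read().strip().split('.')[1]
payload += '=' * (-len(payload) % 4)   # restore base64url padding
print(json.dumps(json.loads(base64.urlsafe_b64decode(payload)), indent=2))
PYEOF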

Configured endpoints

All OSG sites should now support job submission with tokens. You can test your HTCondor-CE configuration with ATLAS job submission.

Site Host VO Audience
osg hosted-ce-chtc-ubuntu.osg.chtc.io:9619 wlcg hosted-ce-chtc-ubuntu.osg.chtc.io:9619
praguelcg2 osgce1.farm.particle.cz:9619 wlcg, atlas, dune osgce1.farm.particle.cz:9619, condor://osgce1.farm.particle.cz:9619, https://wlcg.cern.ch/jwt/v1/any

Storage

Implementations

Configured endpoints:

Site Implementation Host VO Audience
praguelcg2 DPM 1.15.0 https://golias100.farm.particle.cz/dpm/farm.particle.cz/home/wlcg wlcg https://wlcg.cern.ch/jwt/v1/any
UNL XRootD https://red-gridftp12.unl.edu:1094/user/dteam wlcg
DESY Devel dCache 7.1 https://prometheus.desy.de:2443/VOs/wlcg wlcg
CNAF Prod StoRM https://xfer.cr.cnaf.infn.it:8443/wlcg wlcg
CNAF Devel StoRM https://amnesiac.cloud.cnaf.infn.it:8443/wlcg wlcg
CERN ATLAS EOS https://eosatlas.cern.ch:443/eos/atlas/atlasscratchdisk/3rdpartycopy atlas
CERN Devel EOS https://eospps.cern.ch:443/eos/opstest/tpc/https wlcg
RAL Echo https://ceph-gw8.gridpp.rl.ac.uk:1094/dteam:test/ wlcg
Manchester Test DPM https://vm33.in.tier2.hep.manchester.ac.uk:443/dpm/tier2.hep.manchester.ac.uk/home/wlcg wlcg
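
A rough way to exercise one of the endpoints above directly with a token is a plain HTTP PUT/DELETE, e.g. with the davix tools. This is only a sketch: wlcg-storage is a hypothetical oidc-agent profile registered with storage.* scopes, and the target path must be writable for your identity:

# upload a small test file with a bearer token and remove it again
TOKEN=$(oidc-token --aud=https://wlcg.cern.ch/jwt/v1/any \
        --scope=storage.read:/ --scope=storage.modify:/ wlcg-storage)
davix-put -H "Authorization: Bearer $TOKEN" /etc/hostname \
    https://golias100.farm.particle.cz/dpm/farm.particle.cz/home/wlcg/token-test.txt
davix-rm -H "Authorization: Bearer $TOKEN" \
    https://golias100.farm.particle.cz/dpm/farm.particle.cz/home/wlcg/token-test.txt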

FTS

CERN Devel FTS with 3.10.x provides JWT support for WLCG and XDC.

fts-rest-whoami --access-token=<token> -s https://fts3-devel.cern.ch:8446
fts-rest-transfer-submit --access-token=<token> -s https://fts3-devel.cern.ch:8446 <src_url> <dst_url>
fts-rest-transfer-status --access-token=<token> -s https://fts3-devel.cern.ch:8446 <job_id>

Be aware that for successful FTS transfer submission with OIDC you also need a recent 3.10.x FTS REST client.

RUCIO

Install the Rucio client with one of the methods described in the documentation, e.g.

# latest rucio client works only with python3
virtualenv-3 rucio
source rucio/bin/activate
pip3 install rucio-clients
pip3 install gfal2-python

and save the following configuration file as rucio/etc/rucio.cfg for the WLCG DOMA Rucio OIDC tests

[client]
rucio_host = https://rucio-doma.cern.ch:443
auth_host = https://rucio-doma-auth.cern.ch:443
auth_type = oidc
account = wlcg_doma
oidc_issuer = wlcg
ca_cert = /etc/grid-security/certificates
#ca_cert = /etc/pki/tls/certs/CERN-bundle.pem

Set up the environment for the Rucio client installed in the virtualenv

source rucio/bin/activate

or, if you use a different installation method, just set the RUCIO_HOME environment variable to the base directory containing the etc/rucio.cfg file

export RUCIO_HOME=/base/path/to/rucio

A WLCG IAM account is necessary to access the WLCG DOMA Rucio instance, and the user's sub claim (WLCG IAM uuid identity) must be associated with the wlcg_doma Rucio account by a DOMA Rucio admin. It is also possible to associate a user certificate subject with the wlcg_doma Rucio account to provide access with a WLCG VO X.509 proxy, but for this different authorization type it is necessary to set auth_type = x509_proxy in rucio.cfg or set the environment variable RUCIO_AUTH_TYPE=x509_proxy.
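
With the account mapping in place, a quick check that OIDC authentication works could look like this (a sketch; the first command triggers the interactive OIDC login flow and both commands only read information):

source rucio/bin/activate   # or export RUCIO_HOME as described above
rucio whoami
rucio list-rses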

Monitoring