This section describes how users can build a container and run it in various scenarios.
Although it is tempting to create a container that has everything, you are advised to reconsider carefully. If the default OS containers are inadequate, keeping your container minimal and purpose-built may be more nimble: the more you install, the more work it takes to manage updates and address security issues. You can create many lightweight, purpose-built containers and run them together (even on the grid with pchain) instead of one very fat container that does many things. Also, the containers you need for development work may require devel rpms that are not needed by runtime applications, so you can separate the two and keep the widely used runtime container smaller.
If you are doing this for archival reasons, please consult with experts first, since there may be other considerations and justifications for making the container completely self-contained.
Note that although the containers created here are instructed to be run with setupATLAS -c containerName, they do not depend on it and can also be run with raw runtime commands (docker/podman/apptainer etc.).
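For example, any of the containers built below could be started directly with apptainer (a sketch, using one of the image names from the examples in this section):
apptainer shell docker://registry.cern.ch/desilva/user-x86_64-concept:v3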
The CERN Harbor Registry is where the containers will be pushed and stored.
These examples use my personal project on the registry (desilva) as the project name, as the containers are meant for my use only; use your own project name in the commands below.
Create a new directory with a structure similar to this:
user-concept/
├── Dockerfile
└── files
    ├── installAsRoot.sh
    ├── installAsUser.sh
    ├── motd
    └── release_setup.sh
You can find the above files in the links below; choose the link that matches your build runtime and copy the files.
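For orientation, this is a hypothetical sketch of how such a Dockerfile might tie these files together, assuming one of the atlasadc/atlas-grid-* base images mentioned later (el9 assumed here); use the actual Dockerfile from the links rather than this sketch:
# Hypothetical sketch only; the real Dockerfile comes from the links above
FROM atlasadc/atlas-grid-el9
COPY files/installAsRoot.sh /tmp/
RUN /tmp/installAsRoot.sh && rm -f /tmp/installAsRoot.sh
COPY files/installAsUser.sh /tmp/
RUN /tmp/installAsUser.sh && rm -f /tmp/installAsUser.sh
COPY files/release_setup.sh /release_setup.sh
COPY files/motd /etc/motd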
The container names used in these instructions can be anything, but should include the arch (either aarch64 or x86_64) to indicate the platform they are meant to run on. You can include the arch in the tag instead of the container name if you prefer.
Although it is possible to build multi-arch containers, we need to do the builds separately and create arch-specific containers. This is because podman emulation is difficult to set up to do it all on one machine the way docker can. Also, even if it can be done with docker buildx, a multi-arch container cannot be unpacked on /cvmfs/unpacked.cern.ch, as that infrastructure requires unique names/tags in order to install everything in one directory. For this reason, the workaround is to put the arch type in either the container name or the tag so that each image is unique and can be installed in the same directory.
The instructions below are for building the containers with either podman or docker; choose one. Although the containers are built with these runtimes, the result can be run by other runtimes.
With podman:
podman login registry.cern.ch
podman build --format docker --platform linux/amd64 -t registry.cern.ch/desilva/user-x86_64-concept:v3 . --progress plain
podman build --format docker --platform linux/arm64 -t registry.cern.ch/desilva/user-aarch64-concept:v3 . --progress plain
podman run -it registry.cern.ch/desilva/user-`uname -m`-concept:v3
podman push registry.cern.ch/desilva/user-`uname -m`-concept:v3
podman manifest create registry.cern.ch/desilva/user-getarch-concept:v3 \
registry.cern.ch/desilva/user-x86_64-concept:v3 \
registry.cern.ch/desilva/user-aarch64-concept:v3
podman manifest push registry.cern.ch/desilva/user-getarch-concept:v3 \
docker://registry.cern.ch/desilva/user-getarch-concept:v3
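To verify that the manifest list references both architectures, you can inspect it (standard podman command):
podman manifest inspect registry.cern.ch/desilva/user-getarch-concept:v3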
With docker:
docker login registry.cern.ch
docker buildx build --load --platform linux/amd64 -t registry.cern.ch/desilva/user-x86_64-concept:v3 . --progress plain
docker buildx build --load --platform linux/arm64 -t registry.cern.ch/desilva/user-aarch64-concept:v3 . --progress plain
docker run -it --platform linux/arm64 registry.cern.ch/desilva/user-aarch64-concept:v3
docker run -it --platform linux/amd64 registry.cern.ch/desilva/user-x86_64-concept:v3
docker push registry.cern.ch/desilva/user-aarch64-concept:v3
docker push registry.cern.ch/desilva/user-x86_64-concept:v3
docker manifest create registry.cern.ch/desilva/user-getarch-concept:v3 \
registry.cern.ch/desilva/user-x86_64-concept:v3 \
registry.cern.ch/desilva/user-aarch64-concept:v3
docker manifest push registry.cern.ch/desilva/user-getarch-concept:v3
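Similarly, you can verify the pushed manifest list with:
docker manifest inspect registry.cern.ch/desilva/user-getarch-concept:v3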
Once the containers are on the registry, this should work:
setupATLAS -c docker://registry.cern.ch/desilva/user-getarch-concept:v3
Once the merge request (to add your containers to the unpacked.cern.ch configuration) is approved, and after waiting a few hours, the containers will appear under /cvmfs/unpacked.cern.ch/registry.cern.ch/. For the examples above, they will be in /cvmfs/unpacked.cern.ch/registry.cern.ch/desilva. When the containers are on the cvmfs repository, they can be accessed like this by Apptainer or Singularity, where the getarch keyword will be automatically resolved:
setupATLAS -c desilva/user-getarch-concept:v3
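You can also check directly that the unpacked images are present on cvmfs:
ls /cvmfs/unpacked.cern.ch/registry.cern.ch/desilva/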
Since the containers are derived from the atlasadc/atlas-grid-* images, they are supported by ALRB, and so you can also submit batch jobs from inside the container. For example, if you want to run source /srv/myJob.sh inside the container as a batch job, this is the recipe.
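Here /srv/myJob.sh can be any script; for illustration, a hypothetical trivial payload:
#!/bin/bash
# Hypothetical payload; replace with your real workload
echo "Running on $(hostname)"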
First start the container with the -b option:
setupATLAS -c desilva/user-getarch-concept:v3 -b
Then, if not already done inside the container, do setupATLAS before the other commands so that the batch commands become available. batchScript will wrap your job to run with the same container and settings; the resulting script is what you submit.
setupATLAS
batchScript "source /srv/myJob.sh" -o submitMyJob.sh
Finally, still inside that same container, submit it to the batch queues:
bsub -L /bin/bash submitMyJob.sh       # if your site supports LSF
sbatch --export=NONE submitMyJob.sh    # if your site supports Slurm
condor_submit <jdl>                    # if your site supports HTCondor; the jdl executes submitMyJob.sh (see the sketch after this list)
qsub submitMyJob.sh                    # if your site supports PBS
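For HTCondor, a hypothetical minimal jdl file (called submitMyJob.jdl here) that executes the wrapped script could look like this:
executable = submitMyJob.sh
output     = job.out
error      = job.err
log        = job.log
queue
and would be submitted with condor_submit submitMyJob.jdl.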
Not all grid sites' worker nodes allow external network access to the CERN Harbor registry. Also, it is expensive for every job to pull a container from the registry, so this is discouraged.
On the other hand, if the containers are on /cvmfs/unpacked.cern.ch, they can be run as grid jobs. There are two ways to do this.
The first way is to do setupATLAS and lsetup panda inside the container and submit your job as usual; it will automatically run inside the container you created. Once you type setupATLAS, this environment variable will be picked up by the panda clients to determine which container to use on the grid; e.g.:
Singularity> echo $ALRB_USER_PLATFORM
el9+desilva/user-x86_64-concept:v3#x86_64
The second way is to submit from outside the container; after setupATLAS:
lsetup panda
prun --exec=./myJob.sh --outDS=user.$RUCIO_ACCOUNT.test.`uuidgen` --noBuild --containerImage desilva/user-getarch-concept:v3