
Can't start containers deployed with Podman 3 after upgrading to Podman 4 #1211

Closed
apinter opened this issue May 30, 2022 · 2 comments
apinter commented May 30, 2022

Describe the bug
The upgrade to F36 brings Podman 4, which broke my containers. I have an additional disk, formatted as btrfs, that serves as storage for the containers; the relevant /etc/containers/storage.conf settings are shown below. Existing containers can't start, and newly launched containers fail the same way.
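
[root@gitlab-sandbox ~]# grep graphroot /etc/containers/storage.conf
graphroot = "/var/lib/gitlab_data"
[root@gitlab-sandbox ~]# grep driver /etc/containers/storage.conf
driver = "btrfs"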

Reproduction steps
Steps to reproduce the behavior:

  1. Upgrade F35 to F36, which ships Podman 4.
  2. Start existing containers with systemd, or launch a new one, for example: podman run --rm -it --log-level debug fedora:36
  3. It crashes with:
Error: error creating container storage: error creating read-write layer with ID "039663e4f4d46e8914932bbc15f863efa76b99441aec9cc5e4e05fb817ace7a6": setxattr /var/lib/gitlab_data/btrfs/subvolumes/039663e4f4d46e8914932bbc15f863efa76b99441aec9cc5e4e05fb817ace7a6/etc/alternatives/libnssckbi.so.x86_64: read-only file system
  4. Output of a container started with systemd:
   -- Boot e3cc13a399194f67b89ffbd9d76919fc --
May 30 08:02:27 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Starting container-gitlab00.service - Podman container-gitlab00.service...
May 30 08:02:29 gitlab-sandbox.c.devops-sandbox-319813.internal podman[949]: 2022-05-30 08:02:29.720906529 +0000 UTC m=+1.551341957 image pull  gitlab/gitlab-ce
May 30 08:02:30 gitlab-sandbox.c.devops-sandbox-319813.internal podman[949]: Error: error creating container storage: error creating read-write layer with ID "d7db7f0054087882d415df463a2b35c5d0f9ce9550c040e200897b4d03dbd163": setxattr /var/lib/gitlab_data/btrfs/subvolumes/d7db7f0054087882d415df463a2b35c5d0f9ce9550c040e200897b4d03dbd163/etc/alternatives/pager: read-only file system
May 30 08:02:30 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Main process exited, code=exited, status=125/n/a
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1439]: Error: error reading CIDFile: open /run/container-gitlab00.service.ctr-id: no such file or directory
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Control process exited, code=exited, status=125/n/a
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Scheduled restart job, restart counter is at 1.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Stopped container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Starting container-gitlab00.service - Podman container-gitlab00.service...
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1487]: 2022-05-30 08:02:31.386312929 +0000 UTC m=+0.119272647 image pull  gitlab/gitlab-ce
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1487]: Error: error creating container storage: error creating read-write layer with ID "e53b661a5288707f4de0c16fadab1a118979ad4de4d9d3362de2b0d27193f229": setxattr /var/lib/gitlab_data/btrfs/subvolumes/e53b661a5288707f4de0c16fadab1a118979ad4de4d9d3362de2b0d27193f229/etc/alternatives/rcp: read-only file system
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Main process exited, code=exited, status=125/n/a
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1550]: Error: error reading CIDFile: open /run/container-gitlab00.service.ctr-id: no such file or directory
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Control process exited, code=exited, status=125/n/a
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Scheduled restart job, restart counter is at 2.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Stopped container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:31 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Starting container-gitlab00.service - Podman container-gitlab00.service...
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1632]: 2022-05-30 08:02:32.07869756 +0000 UTC m=+0.066652851 image pull  gitlab/gitlab-ce
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1632]: Error: error creating container storage: error creating read-write layer with ID "5db099e194ca6bc6c43b591fb8f801c59bd3f65992b94d9207ff718e8a0c064e": setxattr /var/lib/gitlab_data/btrfs/subvolumes/5db099e194ca6bc6c43b591fb8f801c59bd3f65992b94d9207ff718e8a0c064e/etc/alternatives/pager: read-only file system
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Main process exited, code=exited, status=125/n/a
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1676]: Error: error reading CIDFile: open /run/container-gitlab00.service.ctr-id: no such file or directory
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Control process exited, code=exited, status=125/n/a
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Scheduled restart job, restart counter is at 3.
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Stopped container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Starting container-gitlab00.service - Podman container-gitlab00.service...
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1747]: Error: error creating container storage: error creating read-write layer with ID "3dfeaef60f8e480f5e0dbcbc3f70a41da875c2775460eae20ffe6f8086d26de6": setxattr /var/lib/gitlab_data/btrfs/subvolumes/3dfeaef60f8e480f5e0dbcbc3f70a41da875c2775460eae20ffe6f8086d26de6/etc/alternatives/pager: read-only file system
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Main process exited, code=exited, status=125/n/a
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1798]: Error: error reading CIDFile: open /run/container-gitlab00.service.ctr-id: no such file or directory
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Control process exited, code=exited, status=125/n/a
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:32 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Scheduled restart job, restart counter is at 4.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Stopped container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Starting container-gitlab00.service - Podman container-gitlab00.service...
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1878]: Error: error creating container storage: error creating read-write layer with ID "a88c4da59bb9569dd644059567f25c7d5086b3b1443a4169420e43fafcf1530b": setxattr /var/lib/gitlab_data/btrfs/subvolumes/a88c4da59bb9569dd644059567f25c7d5086b3b1443a4169420e43fafcf1530b/etc/alternatives/pager: read-only file system
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Main process exited, code=exited, status=125/n/a
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal podman[1911]: Error: error reading CIDFile: open /run/container-gitlab00.service.ctr-id: no such file or directory
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Control process exited, code=exited, status=125/n/a
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Scheduled restart job, restart counter is at 5.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Stopped container-gitlab00.service - Podman container-gitlab00.service.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Start request repeated too quickly.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: container-gitlab00.service: Failed with result 'exit-code'.
May 30 08:02:33 gitlab-sandbox.c.devops-sandbox-319813.internal systemd[1]: Failed to start container-gitlab00.service - Podman container-gitlab00.service.
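
For reference, the restart loop above came from the journal; an equivalent query, with the unit name taken from the logs, is:

[root@gitlab-sandbox ~]# journalctl -b -u container-gitlab00.service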

Expected behavior
Containers start without issues, business as usual.

Actual behavior
Containers fail to start with a read-only file system error, even though the filesystem is not read-only: every btrfs subvolume Podman creates is read-write.
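
For reference, a minimal way to double-check that, with the subvolume path pattern assumed from the error messages above, is:

[root@gitlab-sandbox ~]# btrfs subvolume list /var/lib/gitlab_data
[root@gitlab-sandbox ~]# btrfs property get -ts /var/lib/gitlab_data/btrfs/subvolumes/<layer-id> ro
ro=false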

System details

  • GCP
  • Fedora CoreOS version:
[root@gitlab-sandbox ~]# rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
 DriverState: active; periodically polling for updates (last checked Mon 2022-05-30 08:08:02 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/stable
                  Version: 36.20220505.3.2 (2022-05-24T16:17:13Z)
               BaseCommit: 096cc2b6fb422d0464c0a3cea26e51de9e43535fe2edd04caa5bda323b8987fb
             GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
          LayeredPackages: htop policycoreutils-python-utils ranger tmux

 fedora:fedora/x86_64/coreos/stable
                  Version: 35.20220424.3.0 (2022-05-06T20:24:56Z)
               BaseCommit: cd82fc9d3489f60e9c492a7daf92c91c5240273770168a7783e1596b582be135
             GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
          LayeredPackages: htop policycoreutils-python-utils ranger tmux

Ignition config
Empty fields mean information redacted for sensitivity.

variant: fcos
version: 1.4.0
kernel_arguments:
  should_exist:
    - mitigations=auto
  should_not_exist:
    - mitigations=auto,nosmt
passwd:
  users:
    - name: core
		.
		.
		.
		[Redacted]
		.
		.
		.
systemd:
  units:
    - name: gitlab-podman.service
      enabled: true
      contents: |
        [Unit]
        Description=Gitlab podman
        After=network-online.target
        Wants=network-online.target
        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/bin/podman kill gitlab
        ExecStartPre=-/bin/podman rm gitlab
        ExecStartPre=/bin/podman pull docker.io/gitlab/gitlab-ce
        ExecStart=/usr/bin/podman run --name gitlab --hostname gitlab.example.com --label io.containers.autoupdate=image -v gitlab-conf:/etc/gitlab -v gitlab-logs:/var/log/gitlab -v gitlab-data:/var/opt/gitlab --publish 22:22 --publish 8080:8081 gitlab/gitlab-ce
        [Install]
        WantedBy=multi-user.target
    - name: traefik-podman.service
      enabled: true
      contents: |
        [Unit]
        Description=Traefik podman
        After=network-online.target
        Wants=network-online.target
        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/bin/podman kill traefik
        ExecStartPre=-/bin/podman rm traefik
        ExecStartPre=/bin/podman pull docker.io/traefik:latest
        ExecStart=/usr/bin/podman run -d --name traefik --label io.containers.autoupdate=image -v letsencrypt:/letsencrypt -v traefik_config:/config -p 80:80 -p 443:443 -e CF_API_EMAIL= -e CF_DNS_API_TOKEN= docker.io/traefik:latest --log.level=INFO --api.insecure=false --api.dashboard=false --entrypoints.http.address=:80 --entrypoints.http.http.redirections.entryPoint.to=https --entrypoints.http.http.redirections.entryPoint.scheme=https --entrypoints.http.http.redirections.entryPoint.permanent=true --entrypoints.https.address=:443 --certificatesresolvers.letsencrypt.acme.dnschallenge=true --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare --certificatesresolvers.letsencrypt.acme.email= --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json --providers.file.filename=/config/traefik.yml
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/containers/storage.conf
      overwrite: true
      contents:
        inline: |
          # This file is the configuration file for all tools
          # that use the containers/storage library.
          # See man 5 containers-storage.conf for more information
          # The "container storage" table contains all of the server options.
          [storage]

          # Default Storage Driver, Must be set for proper operation.
          driver = "btrfs"

          # Temporary storage location
          runroot = "/run/containers/storage"

          # Primary Read/Write location of container storage
          graphroot = "/var/lib/gitlab_data"

          # Storage path for rootless users
          #
          # rootless_storage_path = "$HOME/.local/share/containers/storage"

          [storage.options]
          # Storage options to be passed to underlying storage drivers

          # AdditionalImageStores is used to pass paths to additional Read/Only image stores
          # Must be comma separated list.
          additionalimagestores = [
          ]

          # Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
          # a container, to the UIDs/GIDs as they should appear outside of the container,
          # and the length of the range of UIDs/GIDs.  Additional mapped sets can be
          # listed and will be heeded by libraries, but there are limits to the number of
          # mappings which the kernel will allow when you later attempt to run a
          # container.
          #
          # remap-uids = 0:1668442479:65536
          # remap-gids = 0:1668442479:65536

          # Remap-User/Group is a user name which can be used to look up one or more UID/GID
          # ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
          # with an in-container ID of 0 and then a host-level ID taken from the lowest
          # range that matches the specified name, and using the length of that range.
          # Additional ranges are then assigned, using the ranges which specify the
          # lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
          # until all of the entries have been used for maps.
          #
          # remap-user = "containers"
          # remap-group = "containers"

          # Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
          # ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
          # to containers configured to create automatically a user namespace.  Containers
          # configured to automatically create a user namespace can still overlap with containers
          # having an explicit mapping set.
          # This setting is ignored when running as rootless.
          # root-auto-userns-user = "storage"
          #
          # Auto-userns-min-size is the minimum size for a user namespace created automatically.
          # auto-userns-min-size=1024
          #
          # Auto-userns-max-size is the maximum size for a user namespace created automatically.
          # auto-userns-max-size=65536

          [storage.options.overlay]
          # ignore_chown_errors can be set to allow a non privileged user running with
          # a single UID within a user namespace to run containers. The user can pull
          # and use any image even those with multiple uids.  Note multiple UIDs will be
          # squashed down to the default uid in the container.  These images will have no
          # separation between the users in the container. Only supported for the overlay
          # and vfs drivers.
          #ignore_chown_errors = "false"

          # Inodes is used to set a maximum inodes of the container image.
          # inodes = ""

          # Path to an helper program to use for mounting the file system instead of mounting it
          # directly.
          #mount_program = "/usr/bin/fuse-overlayfs"

          # mountopt specifies comma separated list of extra mount options
          mountopt = "nodev,metacopy=on"

          # Set to skip a PRIVATE bind mount on the storage home directory.
          # skip_mount_home = "false"

          # Size is used to set a maximum size of the container image.
          # size = ""

          # ForceMask specifies the permissions mask that is used for new files and
          # directories.
          #
          # The values "shared" and "private" are accepted.
          # Octal permission masks are also accepted.
          #
          #  "": No value specified.
          #     All files/directories, get set with the permissions identified within the
          #     image.
          #  "private": it is equivalent to 0700.
          #     All files/directories get set with 0700 permissions.  The owner has rwx
          #     access to the files. No other users on the system can access the files.
          #     This setting could be used with networked based homedirs.
          #  "shared": it is equivalent to 0755.
          #     The owner has rwx access to the files and everyone else can read, access
          #     and execute them. This setting is useful for sharing containers storage
          #     with other users.  For instance have a storage owned by root but shared
          #     to rootless users as an additional store.
          #     NOTE:  All files within the image are made readable and executable by any
          #     user on the system. Even /etc/shadow within your image is now readable by
          #     any user.
          #
          #   OCTAL: Users can experiment with other OCTAL Permissions.
          #
          #  Note: The force_mask Flag is an experimental feature, it could change in the
          #  future.  When "force_mask" is set the original permission mask is stored in
          #  the "user.containers.override_stat" xattr and the "mount_program" option must
          #  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
          #  extended attribute permissions to processes within containers rather than the
          #  "force_mask"  permissions.
          #
          # force_mask = ""

          [storage.options.thinpool]
          # Storage Options for thinpool

          # autoextend_percent determines the amount by which pool needs to be
          # grown. This is specified in terms of % of pool size. So a value of 20 means
          # that when threshold is hit, pool will be grown by 20% of existing
          # pool size.
          # autoextend_percent = "20"

          # autoextend_threshold determines the pool extension threshold in terms
          # of percentage of pool size. For example, if threshold is 60, that means when
          # pool is 60% full, threshold has been hit.
          # autoextend_threshold = "80"

          # basesize specifies the size to use when creating the base device, which
          # limits the size of images and containers.
          # basesize = "10G"

          # blocksize specifies a custom blocksize to use for the thin pool.
          # blocksize="64k"

          # directlvm_device specifies a custom block storage device to use for the
          # thin pool. Required if you setup devicemapper.
          # directlvm_device = ""

          # directlvm_device_force wipes device even if device already has a filesystem.
          # directlvm_device_force = "True"

          # fs specifies the filesystem type to use for the base device.
          # fs="xfs"

          # log_level sets the log level of devicemapper.
          # 0: LogLevelSuppress 0 (Default)
          # 2: LogLevelFatal
          # 3: LogLevelErr
          # 4: LogLevelWarn
          # 5: LogLevelNotice
          # 6: LogLevelInfo
          # 7: LogLevelDebug
          # log_level = "7"

          # min_free_space specifies the min free space percent in a thin pool required for
          # new device creation to succeed. Valid values are from 0% - 99%.
          # Value 0% disables
          # min_free_space = "10%"

          # mkfsarg specifies extra mkfs arguments to be used when creating the base
          # device.
          # mkfsarg = ""

          # metadata_size is used to set the `pvcreate --metadatasize` options when
          # creating thin devices. Default is 128k
          # metadata_size = ""

          # Size is used to set a maximum size of the container image.
          # size = ""

          # use_deferred_removal marks devicemapper block device for deferred removal.
          # If the thinpool is in use when the driver attempts to remove it, the driver
          # tells the kernel to remove it as soon as possible. Note this does not free
          # up the disk space, use deferred deletion to fully remove the thinpool.
          # use_deferred_removal = "True"

          # use_deferred_deletion marks thinpool device for deferred deletion.
          # If the device is busy when the driver attempts to delete it, the driver
          # will attempt to delete device every 30 seconds until successful.
          # If the program using the driver exits, the driver will continue attempting
          # to cleanup the next time the driver is used. Deferred deletion permanently
          # deletes the device and all data stored in device will be lost.
          # use_deferred_deletion = "True"

          # xfs_nospace_max_retries specifies the maximum number of retries XFS should
          # attempt to complete IO when ENOSPC (no space) error is returned by
          # underlying storage device.
          # xfs_nospace_max_retries = "0"

    - path: /etc/ssh/sshd_config.d/20-sshd-port.conf
      contents:
        inline: |
          Port 51643
    - path: /etc/zincati/config.d/55-updates-strategy.toml
      contents:
        inline: |
          [feature]
          enabled = true
          [updates.periodic]
          time_zone = "UTC"
          [updates]
          strategy = "periodic"
          [[updates.periodic.window]]
          days = [ "Sun" ]
          start_time = "23:30"
          length_minutes = 60
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          kernel.printk=4
  disks:
    - device: /dev/sdb
      wipe_table: true
      partitions: 
      - number: 1
        label: data
        resize: true
  filesystems:
    - path: /var/lib/gitext_data
      device: /dev/disk/by-partlabel/data
      format: btrfs
      with_mount_unit: true

The above systemd units may not match reality: since writing that Ignition config I have regenerated the units with podman generate systemd (this is a roughly year-old server). For example:

[root@gitlab-sandbox ~]# cat /etc/systemd/system/container-gitlab00.service
# container-gitlab00.service
# autogenerated by Podman 3.4.1
# Sat Dec  4 07:21:06 UTC 2021

[Unit]
Description=Podman container-gitlab00.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon --rm --sdnotify=conmon --replace -d --name gitlab00 --hostname=gitlab --label io.containers.autoupdate=image -p 8080:8081 -p 22:22 -p 18090:18090 -v gitlab-conf:/etc/gitlab -v gitlab-logs:/var/log/gitlab -v gitlab-data:/var/opt/gitlab gitlab/gitlab-ce
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
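
A unit like this can be regenerated against the installed Podman version with podman generate systemd; a minimal sketch, assuming the gitlab00 container still exists:

[root@gitlab-sandbox ~]# podman generate systemd --new --files --name gitlab00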

Additional information

  • Podman info
[root@gitlab-sandbox ~]# podman info
host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 2
  distribution:
    distribution: fedora
    variant: coreos
    version: "36"
  eventLogger: journald
  hostname: gitlab-sandbox.c.devops-sandbox-319813.internal
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.17.5-300.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 2757443584
  memTotal: 4106194944
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.4.4-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.4
      commit: 6521fcc5806f20f6187eb933f9f45130c86da230
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 15m 20.8s
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /var/lib/gitlab_data
  graphStatus:
    Build Version: Btrfs v5.16.2
    Library Version: "102"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/gitlab_data/volumes
version:
  APIVersion: 4.0.2
  Built: 1646319369
  BuiltTime: Thu Mar  3 14:56:09 2022
  GitCommit: ""
  GoVersion: go1.18beta2
  OsArch: linux/amd64
  Version: 4.0.2

What I could do, though it would be a pain, is migrate the data to a new disk that is "compatible" with the new Podman version, if required. That would work, but it is not a desirable solution. I would be happier knowing what went wrong and how it can be mitigated.

@apinter apinter changed the title Can start containers deployed with Podman 3 after upgrading to Podman 4 Can't start containers deployed with Podman 3 after upgrading to Podman 4 May 30, 2022

lucab commented May 30, 2022

Thanks for the report.
From a quick skim, this may have the same root-cause as #1138.
The fix for that went into testing stream release 36.20220522.2.1.
@apinter It would be great if you could quickly try on a testing node and check whether that fixes your issue.
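
For reference, switching a node to the testing stream uses the standard FCOS rebase flow (nothing specific to this fix):

[root@gitlab-sandbox ~]# rpm-ostree rebase fedora:fedora/x86_64/coreos/testing
[root@gitlab-sandbox ~]# systemctl reboot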


apinter commented May 30, 2022

@lucab You're correct, switching to testing did solve the problem. I'm going to keep the system rolled back to the previous stable release with F35 and Podman 3 until the fix reaches the stable stream. Big thanks for the help!
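
For reference, keeping the system on the previous deployment can be done with rpm-ostree; a sketch, assuming the rolled-back deployment is the one booted after the reboot:

[root@gitlab-sandbox ~]# rpm-ostree rollback -r
[root@gitlab-sandbox ~]# ostree admin pin 0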
