The issue at hand: my /var/tmp folder is stacking up with literally hundreds of folders called "container_images_storage_xxxxxxxxxx", where the x’s represent a random number. Each folder contains files called 1, 2 and 3, as seen in the thumbnail. Each folder also seems to keep growing in size; the smallest I can see is 142.2 MiB and the largest is 2.1 GB. This is a problem as it is eating up all my disk space, and even if I delete them, they come back the next day… I believe this has something to do with podman, but I’m really not sure. All I use the PC for is browsing and gaming.

Is there a way to figure out where a file or folder is coming from on Linux? I’ve tried stat and file, but neither gave me any worthwhile information AFAIK. I would really appreciate some help figuring out what causes this; I am still new to the Linux desktop and have no idea where these folders come from. I am on an atomic desktop, using Bazzite:latest.

stat:

stat 1
  File: 1
  Size: 1944283388	Blocks: 3797432    IO Block: 4096   regular file
Device: 0,74	Inode: 10938462619658088921  Links: 1
Access: (0600/-rw-------)  Uid: ( 1000/    buzz)   Gid: ( 1000/    buzz)
Context: system_u:object_r:fusefs_t:s0
Access: 2024-05-06 12:18:37.444074823 +0200
Modify: 2024-05-06 12:22:51.026500682 +0200
Change: 2024-05-06 12:22:51.026500682 +0200
 Birth: -

file:

file 1
1: gzip compressed data, original size modulo 2^32 2426514442 gzip compressed data, reserved method, ASCII, extra field, encrypted, from FAT filesystem (MS-DOS, OS/2, NT), original size modulo 2^32 2426514442
  • Sunny' 🌻@slrpnk.netOP

    Growing without end, each file varies in size, one being bigger than the other, as I wrote in the description of the post. They will continue to stack up until they fill my entire 1TB SSD, and then KDE will complain I have no storage left.

    I don’t have Docker installed, and podman ps --all says I have no containers… so I’m kind of lost at sea with this one.

    • atzanteol@sh.itjust.works

      Those aren’t the only container runtimes. It could be containerd, LXC, etc.

      One thing that might help track it down could be running sudo lsof | grep '/var/tmp'. If any of those files are currently open, it should list the process that holds the file handle.

      “lsof” stands for “list open files”. Run without parameters, it just lists everything.
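
      For example (just a sketch, the exact output will differ on your machine), you can also point lsof at the directory or at a single file:

      sudo lsof +D /var/tmp    # every open file under /var/tmp (recursive, can be slow)
      sudo lsof /var/tmp/<one of those files>    # which process has that specific file open (path is a placeholder)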

      • Sunny' 🌻@slrpnk.netOP

        Thanks for helping out! The command you gave me, plus opening one of the files, gives the following output. I don’t really know what to make of it:

        buzz@fedora:~$ sudo lsof | grep '/var/tmp/'
        lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
              Output information may be incomplete.
        podman    10445                            buzz   15w      REG               0,41   867465454    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10446 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10447 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10448 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10449 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10450 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10451 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10452 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10453 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10454 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1
        podman    10445 10455 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1
        
        continues...
        
        • atzanteol@sh.itjust.works

          Aha! Looks like it is podman then.

          So - there are a few different types of resources podman manages.

          • containers - These are instances of an image; the thing that actually “runs”. podman container ls
          • images - These are the disk images (actually multiple layers, but don’t worry about that) used to run a container. podman image ls
          • volumes - Persistent storage that can be reused between runs, since containers themselves are often ephemeral. podman volume ls

          When you do a “prune” it only removes resources that aren’t in use. It could be that you have some container that references a volume and keeps it around. Maybe there’s a process that spins up and runs a container on a schedule, dunno. The podman commands above might turn up a name that points you in the right direction.
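
          Something like this is roughly what I’d run to poke around and clean up (just a sketch, not gospel):

          podman container ls --all        # all containers, including stopped ones
          podman image ls                  # images on disk
          podman volume ls                 # volumes
          podman system prune --volumes    # remove unused containers/images/networks, plus unused volumes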

          • Sunny' 🌻@slrpnk.netOP

            Aha! Found three volumes! I had not checked volumes up until now; frankly I’ve never used podman, so this is all new to me… Using podman volume inspect gives me this for the first volume:

            [
                 {
                      "Name": "e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8",
                      "Driver": "local",
                      "Mountpoint": "/var/home/buzz/.local/share/containers/storage/volumes/e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8/_data",
                      "CreatedAt": "2024-03-15T23:52:10.800764956+01:00",
                      "Labels": {},
                      "Scope": "local",
                      "Options": {},
                      "UID": 1,
                      "GID": 1,
                      "Anonymous": true,
                      "MountCount": 0,
                      "NeedsCopyUp": true,
                      "LockNumber": 1
                 }
            ]
            
            • atzanteol@sh.itjust.works

              Navigating the various things podman/docker allocate can be a bit annoying. The CLI tools don’t make it terribly obvious either.

              You can try using podman volume rm <name> to remove them. It may tell you they’re in use, and then you’ll need to find the container using them.
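
              For example (sketch only, using the volume name from your inspect output):

              podman volume rm e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8
              podman volume prune    # or remove every volume no container references (it asks for confirmation)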

              • SimplyTadpole@lemmy.dbzer0.com

                Does all this also apply to distrobox? I don’t use podman directly, but I do use distrobox, which I think is a front end for it, and I don’t know whether the commands listed here would be the same.

                • atzanteol@sh.itjust.works

                  I’m not terribly familiar with distrobox, unfortunately. If it’s a front end for podman, then you can probably use the podman commands to clean up after it? Not sure if that’s the “correct” way to do it, though.
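
                  If it helps, this is roughly what I’d try first, assuming distrobox containers show up as ordinary podman containers (I haven’t verified that):

                  distrobox list        # distrobox’s own view of its containers
                  podman ps --all       # they should show up here too if it is podman underneath
                  podman system df      # how much space images/containers/volumes are taking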