  • Then those containers or virtual machines should add this or create the home as needed.

    systemd has its own containers, so this is the implementation of that requirement; “virtual machines” might use this exact binary to create /home, among other directories like /srv and whatnot. Someone at one point probably said “we always need to create these when spinning up systems, maybe systemd can provide a mechanism to do that for us?” and then it was implemented.

    Having /home listed as a tmpfile on regular systems is problematic given the nature of what tmpfiles claims it does.

    systemd-tmpfiles claims the following:

    systemd-tmpfiles creates, deletes, and cleans up files and directories, using the configuration file format and location specified in tmpfiles.d(5). Historically, it was designed to manage volatile and temporary files, as the name suggests, but it provides generic file management functionality and can be used to manage any kind of files.

    I rather think having a purge command was the issue here; at the very least it should print a big fat warning about what it does, or better yet, list all affected files and directories. There’s no reason a normal user needs this, and with the name of the binary it’s totally misleading, which is an issue in these situations.


  • E.g. for quick provisioning of containers or virtual machines; this is also to make sure the required directories always exist. In a normal distribution, /home already exists, so systemd-tmpfiles does nothing, but there are cases where you want to set up a standard directory structure, and this is a declarative alternative to scripts with a lot of mkdir, chmod and chown.
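
    For illustration, such a declarative drop-in might look roughly like this (file name, paths and modes are made up for the example, not something any distribution ships):

        # /etc/tmpfiles.d/provision.conf  (hypothetical name and contents)
        # Type  Path       Mode  User  Group  Age
        d       /home      0755  root  root   -
        d       /srv/data  0755  root  root   -

        # apply just this file; existing directories are left untouched
        systemd-tmpfiles --create /etc/tmpfiles.d/provision.conf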

    The name systemd-tmpfiles is kind of historic at this point, but wasn’t changed due to backwards compatibility and all.








  • I am currently running Jellyfin on Btrfs and there is quite a performance impact due to CoW. If two clients decide to browse the libraries at the same time, both grind to a near standstill when it comes to actually seeing anything.

    CoW is not recommended for databases; all DB servers advise turning it off for the actual database files. You’ll run into the same issue with a dedicated database if you leave CoW on, I guess. You could also disable CoW for Jellyfin’s database right now and performance should increase.
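
    If you want to try that, the usual approach on btrfs is the No_COW file attribute. It only applies to newly created files, so the database directory has to be recreated and the files copied once. Very roughly, with the Jellyfin data path being an assumption about your setup:

        systemctl stop jellyfin
        mv /var/lib/jellyfin/data /var/lib/jellyfin/data.cow
        mkdir /var/lib/jellyfin/data
        chattr +C /var/lib/jellyfin/data        # files created in here from now on skip CoW
        chown --reference=/var/lib/jellyfin/data.cow /var/lib/jellyfin/data
        cp -a --reflink=never /var/lib/jellyfin/data.cow/. /var/lib/jellyfin/data/
        systemctl start jellyfin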

    I also follow the progress of a dedicated DB, but on the other hand I don’t know how much sense it makes architecturally. The likelihood that you have multiple Jellyfin server instances accessing the same database is low - after all, there is info very specific to the server in there, like the file paths. Migration alone is already not easy, so how likely is sharing the database live? And if each database is specific to an instance - why not use SQLite (like it’s done right now) and allow for more specific parameter tuning, like the amount of memory used and the like?
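
    To illustrate that last point, these are the kind of knobs SQLite exposes (the values here are arbitrary examples, and the application itself has to issue them for them to matter during normal operation):

        -- journal_mode is stored in the database file, so setting it once sticks:
        PRAGMA journal_mode = WAL;      -- fewer fsyncs, readers don't block the writer
        -- cache_size is per connection, so the application sets it at startup:
        PRAGMA cache_size = -262144;    -- negative means KiB, i.e. roughly 256 MiB of page cache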



  • NixOS: (1, 2) - You can define specific package versions but with the large repos I doubt there is much QA going on

    It depends on the nixpkgs channel you use (I’m also using the term for flakes here, though technically these are then called inputs). The main channels, i.e. NixOS-stable (whatever the current version is at the time) and NixOS-unstable, have a rather big set of packages that must build successfully before users get updates, including the tests defined in the build system plus sometimes distribution-specific tests, though these are often rather simple, like starting the program and checking whether its port is open. On top of that, when a library gets updated, all programs and other libraries depending on it get rebuilt as well, including all tests.

    Now what if a package outside of that scope breaks? Most likely, your new configuration won’t build, so you’re stuck on an older but working configuration; or it does build, but something doesn’t work. But in the latter case, you can still choose to start the older working configuration.
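
    In practice that just means picking an older generation in the boot menu, or rolling back from the running system, roughly:

        # activate the previous, known-good system generation again
        sudo nixos-rebuild switch --rollback
        # or look at what's available first
        nix-env --list-generations --profile /nix/var/nix/profiles/system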

    Also, the more complicated packages have very dedicated and capable maintainers in my experience. Sure, the smaller stuff is often updated mostly automatically, with merge requests created by bots and just the final merge approved by the maintainer, but the big infrastructure is usually tested quite well.

    As a downside, this can sometimes lead to longer periods without updates when a lot of stuff has to get rebuilt and something doesn’t work (multiple days, but not weeks). You can then switch to another package set in case the problematic packages don’t affect you, or just wait. However, saying there’s little QA is unfair; in fact, from my experience there’s more QA in nixpkgs than in most distributions.

    I don’t recommend NixOS to new users because it abstracts a lot of stuff away and makes use of mechanics that are helpful to understand first. But if you’re comfortable with Linux, NixOS is a great distribution that works very well even on unstable. Then again, it allows specific packages to depend on very specific versions of other packages, which covers part of the reason you’d use a stable distribution in the first place.





  • NixOS has the best concept and even pioneered it, but whether its implementation and documentation are perfect is a topic for debate.

    However, it’s been quite a while since I last had to fiddle with my config, and as such the downsides don’t really affect me on a daily basis. In fact, I recently reinstalled my machine to change the root filesystem and it was an absolute breeze. If not for secure boot, it would have been absolutely trivial; with secure boot it was still easy and convenient.

    As such, I consider the pains an investment into a system that runs much better down the road. Though I’d love it if these pains were reduced.