

IBM/Red Hat, maybe, since it’s a US company. HOWEVER, we went through this with PGP already and the infamous RSA Dolphin.
So could they try? Yeah. Would it work? I don’t know.
What L2ARC caches is controlled by the secondarycache dataset property (all, metadata, or none), so make sure it’s set to cache what you actually want. Also, every record in L2ARC needs a header held in ARC memory, so it’s better to max out your system memory before adding L2ARC.
It’s also not a cache in the way that lvmcache and bcache are.
At least that’s my understanding from having used it on storage servers and reading the documentation.
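For reference, checking and setting it looks like this (the pool name “tank” is just an example):

zfs get secondarycache tank            # what L2ARC is allowed to cache
zfs set secondarycache=all tank        # data + metadata
zfs set secondarycache=metadata tank   # metadata only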
I used to do this all the time! In terms of speed bcache is the fastest, but it’s not as well supported as lvmcache. IMHO lvmcache is plenty fast enough for most uses.
Is it going to be as fast as an NVMe SSD? Nope. But it should be about as fast as a SATA SSD, if not a little slower, depending on how it’s getting the data. If you’re willing to take that trade-off it’s worth it. Though anything already cached is going to be accessed at NVMe speeds.
So it’s totally worth it if you need bigger storage but can’t afford the SSD. I would go bigger on the HDD though, if you can. Because unless you’re frequently accessing more than the capacity of your SSD, the caching will work extremely well for both reads and writes. So your Steam games will feel like they’re on an SSD, most of the time, and everything else you do will “feel” snappy too.
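If anyone wants to try it, here’s a minimal lvmcache sketch, assuming /dev/sda is the HDD and /dev/nvme0n1p1 is the SSD partition (made-up devices; adjust for your hardware):

pvcreate /dev/sda /dev/nvme0n1p1
vgcreate vg0 /dev/sda /dev/nvme0n1p1
lvcreate -n data -l 100%PVS vg0 /dev/sda          # big LV on the HDD
lvcreate -n fast -l 100%PVS vg0 /dev/nvme0n1p1    # cache LV on the SSD
lvconvert --type cache --cachevol fast vg0/data   # attach the SSD as cache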
Why? And what would be a replacement for it?
macOS. Nearly everyone who does anything with development or ops is using a MacBook. Though lately more “normal” employees have been getting MacBooks too.
Waaaaay better.
Restic lets you make deduplicated snapshots of your data. Everything is there and it’s damn hard to lose anything. I use Backblaze B2 as my long-term endpoint / offsite; some will use AWS Glacier. But you don’t have to use any cloud services. You can just have a restic repository on some external drives. That’s what I use for my second copy of things. I also do an annual backup to a hard disk that I leave with a friend for a second offsite copy.
I’ve been backing up all of my stuff like this for years now. I used to use Borg, which is another great tool. But restic is more flexible about letting multiple systems use a single repository, and it has native support for things like B2 that Borg doesn’t.
We also use restic to back up control nodes for some of the supercomputing clusters I manage. It’s that rock solid, imho.
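The day-to-day workflow is only a few commands (the paths and bucket name here are made up):

restic -r /mnt/external/restic-repo init           # create the repository
restic -r /mnt/external/restic-repo backup /home   # deduplicated snapshot
restic -r /mnt/external/restic-repo snapshots      # list what you have
restic -r b2:my-bucket:backups backup /home        # same thing against B2, with B2_ACCOUNT_ID / B2_ACCOUNT_KEY exported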
To be honest, there are a few good comments here linking to scripts and methods to batch convert them on a Windows PC/VM. That’s the best way to go.
To add on to their comments: if you’re just interested in preserving them, printing them to PDF, specifically PDF/A, would be my approach once you get them open.
I’ll leave this one here for someone:
You can tunnel L2 over OpenVPN. Just bridge your interfaces on both sides and it works.
That way you can provision a VoIP phone or netboot something remotely if you need to. Not that I recommend doing that…
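For anyone curious, the guts of it is just a tap device on each end plus a bridge; a sketch (addresses are made up, TLS/cert lines omitted, and tap0 has to be added to your LAN bridge):

# server.conf
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220

# client.conf
dev tap0
client
remote vpn.example.com 1194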
Like everyone has said, there are way better ways of doing it.
HOWEVER, if you wanted to use dd you totally could. I’d recommend piping it into something like gzip/zstd to save some space though.
dd if=/dev/sda bs=4M status=progress | gzip >/mnt/backup_disk/sda.gz
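Restoring is the same pipe in reverse (same example device and path):

zcat /mnt/backup_disk/sda.gz | dd of=/dev/sda bs=4M status=progress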
You could also use restic to back up the raw block device too.
That being said, Clonezilla is exactly what you want.
Every day. I’ve got a lot of stuff that uses it. Granted, most of it was created a decade ago, but with minimal maintenance it works great. The most helpful script parses megacli output so I can get a heads-up on drive failures and rebuilds, among other things.
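The guts of that kind of script is mostly just grepping the controller output, something like this (the binary name varies by install — megacli, MegaCli, or MegaCli64):

MegaCli64 -PDList -aALL | grep -E 'Firmware state|Media Error Count|Predictive Failure Count'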
I just came across this - https://fedoramagazine.org/d-bus-overview/ - and I think it explains it pretty well.
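If you want to poke at it yourself, listing everything on the session bus is a decent first experiment:

dbus-send --session --dest=org.freedesktop.DBus --type=method_call --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames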
Yeah, I like to give some people the opportunity to explain themselves so they can, hopefully, hear how wrong they are. Even if it doesn’t work that way, it at least advertises their views to everyone.
So correct me if I’m wrong.
You’re saying, in the case of git, that people moving largely text-based content over protocols nearly as old as the internet itself, just like the rest of the entire world, should not be allowed to do so?
Also what’s the RFC for secondary shitting streets? I must have missed that one.
How so? I haven’t seen any evidence of that.
To add to this: systemd can do everything they can. You can isolate the network, do firewalling, and set up sandboxing pretty easily. You can run any OCI container with it too if you don’t want to install something directly.
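As a sketch of what that looks like in a unit file (these are all real options from the systemd.exec and systemd.resource-control man pages; the binary path is made up):

[Service]
ExecStart=/usr/local/bin/myapp   # hypothetical service binary
PrivateNetwork=yes               # own, empty network namespace
IPAddressDeny=any                # per-unit firewalling; pair with IPAddressAllow=
ProtectSystem=strict             # read-only OS directories
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes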
Why would you think that?
I could just do more with it.
I didn’t have a lot of money and went dumpster diving for parts. Changed out a bad capacitor and got a system booting. This was back in the Pentium 3 and 4 days. I found a 512MB stick of memory that had some bad areas. Linux was able to map around them with some kernel options at boot. Since I had limited storage I used Knoppix and had a printout of the needed kernel options and memory addresses.
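For the curious, the modern form of that trick is the memmap= boot parameter; the address and size here are made up, and it tells the kernel to treat that range as reserved (in GRUB the $ needs escaping):

memmap=4M$0x1ff00000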
Once it was up and running I was able to do anything and everything I wanted. I built a better system and got Gentoo going a year or so later.
Eventually I got gaming mostly working with the project that became CrossOver. That was the first software I ever purchased, too. I started dual booting less.
I bounced back and forth between Windows and Linux, and when I built a system around 2010 I didn’t even bother configuring it for dual booting.
I haven’t really touched anything Windows since around the release of Windows 10, and only used Windows 7 for work reasons before that. These days I’m pretty useless with anything on that end.
So I’m an evangelical fan of Linux. I use it everywhere I can, and the FOSS philosophy resonates with me. I advocate for it where it makes sense and works. I’ll go out of my way and spend time & money helping people move to it too.
From what I understand, Greg Kroah-Hartman would take over.
I’ve abused Syncthing in so many ways migrating servers and giant data sets. It’s freaking amazing. Though it’s been a few years since I’ve used it, so I can only guess how much better it’s gotten.