I gather that’s a meme that’s older than you are?
By linux ISOs I meant any content you’re torrenting: movies, software, audio, my little pony porn, whatever.
Frankly, it probably means absolutely nothing.
Even when captain coffee cup was the FCC chairman, did you lose the ability to torrent linux isos? Did usenet stop working?
I wouldn’t expect anything different this time, either.
https://www.microsoft.com/en-us/evalcenter/evaluate-windows-11-iot-enterprise-ltsc
Keep in mind, though, that you’ll still have to do some activation and KMS hackery to make them usable, but you can at least use an installer that’s going to be clean.
From Microsoft. They actually provide ISO downloads for the 11 LTSC versions, so there’s not really any reason to go grab some random one off totally-legit-software-and-totally-not-malware.com or whatever.
Does !12345:p do what you want?
Edit: :p also drops that command back into your history, so hitting the up arrow brings it right up; if you wanted to edit the line or whatever, you could !12345:p, hit up, then edit and execute.
Uh, are you sure the shell you’re using is bash and not zsh or something else?
Bash is indeed just !12345.
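Roughly, the dance looks like this (12345 is just standing in for whatever your actual history number is):

history | tail    # find the number of the command you want
!12345            # re-run that history entry immediately
!12345:p          # just print it (and push it back into history), then press up to edit it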
Why not save time and do it the other way?
Install the minimal/netinstall image, and then add what you need.
You’ll probably spend less time adding things than trying to figure out what’s already installed, what you do or don’t need, and how to remove random packages without breaking anything.
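Something like this, just as a sketch (assuming a Debian-ish netinstall; the package names are only examples, swap in whatever you actually need):

sudo apt update
sudo apt install openssh-server vim htop    # start from the minimal base and add only what you want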
Two commands: dd and resize2fs, assuming you’re using ext4 and not something more exotic.
One makes a block-level copy of one device to another, like so: dd if=/dev/source-drive of=/dev/destination-drive
The other resizes the filesystem from whatever size it was to whatever size you tell it (or the whole disk; I’d have to go read a manpage since it’s been a bit).
The dd is completely safe; resize2fs can break things, but you’d still have the data on the original drive, so you could always start over if it does - I’d unplug the source drive before you start doing any expansion stuff.
dd then resize the fs?
Edit: one caveat here I forgot: if your fstab is using UUIDs, you’re going to have to update that, since the new drive won’t be the same UUID because, well, it’s not the same drive.
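For what it’s worth, the whole sequence looks roughly like this (just a sketch; /dev/sdX and /dev/sdY are placeholders for your actual source and destination, and it assumes ext4 sitting in the first partition):

sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync   # block-level copy of the whole drive
# if the new drive is bigger, grow the partition first (parted, growpart, whatever), then:
sudo e2fsck -f /dev/sdY1    # sanity-check the copied filesystem before touching it
sudo resize2fs /dev/sdY1    # expand the filesystem to fill the partition
blkid                       # compare UUIDs against what /etc/fstab expects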
My comment was more FDM vs resin support removal, and that it’s not like resin is all sunshine and rainbows.
If anything, modern tree supports for FDM have fixed the giant-blob-of-plastic problem with supports you’d previously get on smaller models, where you’d end up with, uh, well, a giant blob of plastic stuck to an arm or a sword or whatever.
Still not fantastic, but until someone figures out antigravity, it’s what it is.
Also, if you like htop, you’re going to love btop.
print with supports, but removing supports from such thin, fragile bits of a model is nigh impossible without doing damage
Removing resin supports is worse, if anything.
They leave little bumps where they’re cut off that you have to then try to VERY VERY gently sand off without bending or breaking said fiddly models.
Yeah, DNS is, in general, just goofy and weird, and there are a lot of interactions I wouldn’t expect even someone who’s done it for years to necessarily know.
And besides, the round-robin thing is my favorite weird DNS fact so any excuse to share it is great.
Uh, don’t do that if you expect your mail to be delivered.
Multiple PTRs, depending on how the DNS service is set up, may be returned in round-robin fashion, and if you return a PTR that doesn’t match what your HELO claims you are, then congrats, your mail is likely getting tossed in the trash.
Pick the most accurate name (that is, match your HELO domain), and only set one PTR.
(Useless fact of the day: multiple A records behave the same way and you can use that as a poverty-spec version of a load balancer.)
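If you want to poke at it yourself, something like this shows both behaviors (the IP and hostname are placeholders, not anything real):

dig -x 203.0.113.10 +short       # every PTR set for that IP; ideally exactly one, matching your HELO name
dig +short mail.example.com A    # multiple A records usually come back rotated, which is the poverty-spec load balancing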
sudo smartctl -a /dev/yourssd
You’re looking for the Media_Wearout_Indicator, which is a percentage starting at 100% and counting down to 0%, with 0% meaning no more spare sectors available and thus “failed”. A very important note here, though: a drive at 0% isn’t always going to result in data loss.
Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only and all your data will be there, you just won’t be able to write more data to the drive.
Also, you’ll probably be interested in the Total_LBAs_Written attribute, which (usually after converting to gigabytes) will tell you how much data has been written to the drive.
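If you just want the two relevant lines (attribute names vary a bit between vendors, so treat this grep as a best guess):

sudo smartctl -a /dev/yourssd | grep -E 'Media_Wearout_Indicator|Total_LBAs_Written'
# Total_LBAs_Written is usually in 512-byte sectors: raw value x 512 bytes = total written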
I had a similar issue (different media types, but still) where Jellyfin would not, for any bleeping reason, update the metadata to reflect changes in the media.
After an annoying amount of fiddling I just yanked the library in its entirety (as in, it was deleted) and then re-added it, and on the new-library scan everything updated.
Annoying, and maybe not entirely viable depending on how your library is structured - I have ~6 libraries for different things, so it wasn’t that big of an issue - but it did resolve it.
As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.
Even really cheap SSDs will do hundreds and hundreds of TB written these days, and on a normal consumer workload we’re talking years and years and years and years of expected lifespan.
Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build, and about 200 TBW on the drives and they’re still claiming 90% life left.
At that rate, I’ll be dead well before those drives fail, lol.
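(Rough math, just for scale: 200 TBW over 5 years works out to about 40 TB a year, which is a bit over 100 GB of writes every single day on average.)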
It’s usable-ish, but still kinda crashy and prone to occasionally imploding.
I wouldn’t really use it as my sole daily driver, but for certain people doing certain things, it’s probably fine.
(It needs another year, honestly.)
I went and whacked the scan library button on a 30 TB library collection and it didn’t read all that much data (looks like under 100 GB) and seemed to be pretty quick - maybe 45 seconds. Local drives and all that, so the speed of the scan doesn’t matter as much as the relatively small amount of data. If all you had was 1 TB of media, I’d expect it to just be a couple of gigabytes, not huge amounts of data.
I’d probably double-check that however you’ve mounted the WebDAV share is supporting partial reads, since that really feels to me like the first place that something could be wrong that would cause excessive amounts of file transfers.
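One cheap sanity check, if you’re curious (the URL is a placeholder for your actual WebDAV endpoint, plus whatever auth it needs):

curl -s -r 0-1023 -o /dev/null -w '%{http_code}\n' https://webdav.example.com/path/to/some-file.mkv
# 206 means the server honored the byte range; 200 means it shipped the whole file, which would explain the traffic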
https://x.com/realdebrid/status/1859673163681960169