I think Linus mentioned Redox directly during the interview
Then anyone running a Windows VM would just switch to a Server edition, which is almost exclusively run via a VM.
If you install each OS with its own drive as the boot device, then you won’t see this issue.
Unless you boot Windows via the grub boot menu. If you do that then Windows will see that drive as the boot device.
If you select the OS by using the BIOS boot selection then you won’t see this issue.
I was bitten by Windows doing exactly this almost 15 years ago. Since that day, whenever I’ve needed dual-boot (even when running different distros), each OS gets its own dedicated drive, and I select what I want to boot through the BBS (BIOS Boot Selection). It’s usually invoked with F10 or F11 (but it could be a different key combo).
While I generally agree with that, that’s not what seems to be happening here. What seems to be happening is that anyone who boots Windows via grub is getting grub itself overwritten.
When you install Linux, boot loaders like grub generally try to be helpful by scanning for all available OSes and providing a boot menu entry for each. This is mainly aimed at new users setting up a dual-boot system, so they don’t think “Linux erased Windows” when they see the new grub boot loader.
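If you’d rather grub not do that, the scanning is handled by os-prober and can be switched off in /etc/default/grub. A rough sketch (the exact default varies by distro and grub version):

# /etc/default/grub
# Stop grub from scanning other drives/partitions and adding their OSes to the menu
GRUB_DISABLE_OS_PROBER=true

Then regenerate the config afterwards (e.g. sudo update-grub on Debian/Ubuntu, or sudo grub-mkconfig -o /boot/grub/grub.cfg).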
When you boot Windows from grub, Windows treats the drive with grub (where it booted from) as the boot drive. But if you tell your BIOS to boot the Windows drive, then grub won’t be invoked and Windows will boot seeing its own drive as the boot drive.
This is mostly an assumption as this hasn’t happened to me and details are still a bit scarce.
deleted by creator
That would break 90+% of installations then. And all of Azure.
Even if you have two drives, you still have only one bootloader, no?
The idea is to have completely separate drives, each with its own bootloader and OS. You select which one you want to boot through the BIOS boot selection (i.e. pressing F10 or F11 at the BIOS screen).
This functionally makes each OS “unaware” of the other one.
But it could also be for legal reasons, like websites where you can post stuff for everybody to see, in case you post something highly illegal and the authorities need to find you. Another example is where a webshop is required to keep a copy of your data for their bookkeeping.
None of these require your account to “exist”. There could simply be an acknowledgement stating those reasons with “after X days the data will be deleted, and xyz will be archived for legal reasons”.
Usually it’s 30-90 days that they keep your data, just in case somebody else decided to delete your account or you were drunk or something.
This is the only valid reason. But even then this could be stated so that the user is fully aware. Then an email one week and another one day before deletion as a reminder, and a final confirmation after the fact. I’ve used services before that do this. It’s done well and appreciated.
This pseudo-deletion shadow account stuff is annoying.
What the user was doing: they didn’t trust that the system truly deleted the account and worried it was just deactivated (while claiming it was “deleted”). So they tried a password recovery, which often reactivates a falsely “deleted” account.
I’ve done this before and had to message the company and have them confirm the account is entirely deleted.
I don’t like btrfs, because you still sometimes read about people losing their data.
That was only on RAID setups. So if you have only a single disk, as opposed to an array, you’re fine. And that issue has been fixed for a while now anyway.
I’ve been running btrfs on my laptop’s root partition for well over a year now and it’s fine.
Using Relational DBs where the data model is better suited to other sorts of DBs.
This is true if most or all of your data is like that. But when you only have a few bits of such data here and there, it’s still better to use the RDB.
For example, in a surveillance system (think Blue Iris, Zone Minder, or Shinobi) you want to use an RDB, but you’re going to have to store JSON data from alerts as well as other objects within the frame when alerts come in. Something like this:
{
  "detection": {
    "object": "person",
    "time": "2024-07-29 11:12:50.123",
    "camera": "LemmyCam",
    "coords": {
      "x": "23",
      "y": "100",
      "w": "50",
      "h": "75"
    }
  },
  "other_objects": [
    <repeat the above "detection" format multiple times>
  ]
}
While it’s possible to store this in a flat format in a table, the question is why you would want to. Postgres’ JSONB datatype will store the data as efficiently as anything else, while also making it queryable. This gives you the advantage of not having to rework the table structure if you need to expand the type of data points used in the detection software.
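To sketch what that looks like (table and column names here are just made up for the example), the alert payload goes into a jsonb column, you can query inside it directly, and a GIN index keeps those lookups fast:

CREATE TABLE alerts (
    id bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- optional: a GIN index so containment queries stay fast as the table grows
CREATE INDEX alerts_payload_idx ON alerts USING GIN (payload);

-- every alert where a person was detected on LemmyCam
SELECT id,
       payload->'detection'->>'time' AS detected_at
FROM alerts
WHERE payload @> '{"detection": {"object": "person", "camera": "LemmyCam"}}';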
It definitely isn’t a solution for most things, but it’s 100% valid to use.
There’s also the consideration that you just want to store JSON data as it’s generated by whatever source without translating it in any way. Just store the actual data in its “raw” form. This allows you to do that also.
Edit: just to add to the example JSON, the other advantage is that it allows a variable number of objects within the array without having to accommodate it in the table. I can’t count how many times I’ve seen tables with “extra1, extra2, extra3, extra4, …” because they knew there would be extra data at some point, but had no idea what it would be.
JSON data within a database is perfectly fine and has completely justified use cases. JSON is just a way to structure data. If it’s bespoke data or something that doesn’t need to be structured in a table, a JSON string can keep all that organized.
We use it for intake questionnaire data. It’s something that needs to be on file for record purposes, but it doesn’t need to be queried aside from simply being loaded with the rest of the record.
Edit: and just to add, even MS SQL/Azure SQL has the ability to both query and even index within a JSON object. Of course Postgres’ JSONB data type is far better suited for that.
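For example (hypothetical table and column names, not our actual schema), pulling a value out of the stored JSON with JSON_VALUE and indexing it via a computed column looks roughly like this:

-- the questionnaire answers live as-is in an NVARCHAR(MAX) column called answers
SELECT id, JSON_VALUE(answers, '$.patient.name') AS patient_name
FROM intake
WHERE JSON_VALUE(answers, '$.visit.type') = 'initial';

-- to index inside the JSON, expose the value as a computed column and index that
ALTER TABLE intake ADD visit_type AS JSON_VALUE(answers, '$.visit.type');
CREATE INDEX ix_intake_visit_type ON intake(visit_type);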
I heard a first earther recently say it as: pe-tha-gore-ian
They don’t
The difference in people is that our brains are continuously learning and LLMs are a static state model after being trained. To take your example about brute forcing more data, we’ve been doing that the second we were born. Every moment of every second we’ve had sound, light, taste, noises, feelings, etc, bombarding us nonstop. And our brains have astonishing storage capacity. AND our neurons function as both memory and processor (a holy grail in computing).
Sure, we have a ton of advantages on the hardware/wetware side of things. Okay, and technically the data side also, but the idea that we learn from fewer examples isn’t exactly right. Even a 5-year-old child has “trained” for far longer than probably all the major LLMs being used right now combined.
But they are. There’s no feedback loop or continuous training happening. Once an instance or conversation is done, all that context is gone; the context is never integrated directly into the model as it happens. That, by contrast, is more or less the way our brains work. Every stimulus, every thought, every sensation, every idea is added to our brain’s model as it happens.
The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a singular event. Then going forward it’s locked into that state with no additional learning.
A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains as well as a feedback loop within the mind itself. This creates an incredibly unique system that has never yet been replicated by computers. It makes our brains a dynamic engine as opposed to the static and locked state of an LLM.
He just mentioned it as an example of a kernel written in Rust. The interviewer asked whether, if Rust weren’t accepted into the Linux kernel, someone would go out and build their own kernel in Rust, and Linus mentioned Redox, saying that’s already happened.