I don’t know if I “hate” Windows; it’s more like “I’m done dealing with it.” I might come back and use it from time to time, but only when it’s absolutely necessary and I have the mental capacity to remove the things I don’t need and make sure they stay removed.
From a brief read, it uses its own server to perform the sync.
Aegis w/ auto backup + syncthing
Just tell them to unlock their phone so you can take a look at their browser history. Worked quite a few times for me.
I have the SimpleX notification service running 24x7. While I rarely open the app, I’ve never missed a message when it arrived (I use it as a message bridge between my devices). Nor does it use so much battery that the phone can’t last a day of use, despite it running constantly in the background. I’m using an S21FE btw.
Define your criteria for an ideal messenger. What do you actually need? What are your security requirements?
I usually take the BD raw and transcode it myself. Or the best quality I can find. Does it look better? Not really; the data hoarder inside me just kicked in. 720p is totally fine.
What FS are you on? I’m using BTRFS and had the same problem, simply because the disk analyzer doesn’t read snapshots.
Are you sure you can train a model deterministically down to each bit? Like, feeding two runs into sha256sum will yield the same hash?
Not just LLMs; all kinds of models are equivalent to freeware, i.e., the model itself plus the other essential bits for it to work. I wouldn’t even call it source-available, as there is no source.
Take Redis as an example: I can still go grab the source and compile a binary that works. That doesn’t apply to ML models.
Of course, one can argue the training process isn’t deterministic, so even with the exact training corpus you can’t recreate the same model, bit for bit, across multiple runs. However, I would argue the same corpus provides the chance to train a model of similar or equivalent performance. Hence the openness of the training corpus is an absolute requirement for a model to qualify as FOSS.
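The bit-level nondeterminism mentioned above comes largely from floating-point arithmetic: reordered reductions (thread scheduling, GPU kernels, data shuffling) flip low-order bits, and even one flipped bit changes the hash. A minimal Python sketch of the idea, illustrative only and not tied to any particular training framework:

```python
import hashlib
import struct

# Floating-point addition is not associative, so any change in the
# order of operations can change the low-order bits of a result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: the two sums differ in the last bits

# A bit-for-bit identical model would need identical hashes over its
# serialized weights; a single differing bit breaks that.
h_a = hashlib.sha256(struct.pack("d", a)).hexdigest()
h_b = hashlib.sha256(struct.pack("d", b)).hexdigest()
print(h_a == h_b)  # False: sha256sum would flag the "models" as different
```

Scale that up to billions of parameters accumulated in parallel and it’s clear why two runs rarely agree down to the bit, even with identical data and seeds.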
So you’re including free models, freeware-style, not just FOSS ones, by non-big-tech.
Your choice of models will be quite limited, as the compute resources and training corpus needed to make a viable base model aren’t something just anyone can muster.
What’s FOSS-AI? A model everyone can download and use for free? Or, in the OSS spirit, one where everything needs to be open and without discrimination of use, i.e., an OSS training corpus and no AUP attached?
Or you mean the inference engine running those models?
Decide what’s good for me
It is quite a bloat. Llama 3 8B is 4.7 GB by itself, not counting all the dependencies and drivers, which can easily push it past 10 GB of drive space. My Ollama setup already takes about 30 GB. For a single application (games like COD that take up 300 GB aside), this is huge, almost the size of a clean OS install.
Oh. I get it now.
This I have no idea.
So what’s the prize the Linux desktop would get? For a for-profit corporation, that’s market share and revenue. Yet, as far as I’m concerned, most Linux desktops don’t chase market share, nor earn revenue.
But does Linux have to “win”? And if so, what does it “win”?
Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.
I don’t get it. How would Linux desktops become obsolete if they don’t have native AI toolsets in their DEs? It’s not like they have an 80% market share. People who run them as daily drivers are still a niche, and most people don’t even know Linux exists. Most people grew up with Microsoft and Apple shoving ads down their throats, used them in school firsthand, and that’s all they know and were taught. If I need AI, I will find ways to integrate it into my workflow, not have the dev decide I need it.
And if you really need something like MS’s Recall, here is a FOSS version of it.
More or less applies to Apple and most companies.