My understanding is that it’s a difficult feature to support and they can’t guarantee it works well. That’s the only explanation I’ve ever seen, because to me it’s almost critical for working on a laptop.
I don’t get why hibernate isn’t a more popular feature. I use it extensively, as I hate having to set everything back up after each restart.
It’s also one of my biggest issues with using Linux, as it’s usually broken there.
Yeah that’s right, seems my link didn’t populate right.
I assume it’s because it all ties back to math terminology. A lot of computer science uses arrays/lists for vector arithmetic (graphics, ML, generic math). I suspect it was only later in the field that arrays mutated into the generic lists you see in R and Python.
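To show the split between the two meanings, here’s a tiny sketch in plain Python (made-up values, just for illustration): the math-style "array" addition is elementwise, while the generic-list "+" is concatenation.

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Math/vector sense of "+": elementwise addition
vector_sum = [x + y for x, y in zip(a, b)]
print(vector_sum)  # [5, 7, 9]

# Generic-list sense of "+": concatenation
concat = a + b
print(concat)  # [1, 2, 3, 4, 5, 6]
```

Libraries like NumPy and R give you the elementwise behavior by default, which is the older, math-rooted meaning.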
Definitely sounds like it could be real. If I had to guess, they’re mounting a drive (or another partition) and it’s defaulting to read-only. Restarting resets the original permissions because they only updated the file permissions, not the mount configuration.
Also reads like some of my frustrations when first getting into Linux (and the issues I occasionally run into still).
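If it is the mount, a rough sketch of checking and fixing it, assuming a hypothetical mount point `/mnt/data` on `/dev/sdb1` (adjust to the actual setup):

```shell
# Check how the filesystem is currently mounted (look for "ro" vs "rw" in the options)
findmnt /mnt/data

# Temporarily remount it read-write (this does not survive a reboot)
sudo mount -o remount,rw /mnt/data

# To make it persist, the /etc/fstab entry needs rw in its options, e.g.:
# /dev/sdb1  /mnt/data  ext4  defaults,rw  0  2
```

That would explain the "fix chmod, reboot, broken again" loop: file permissions change on disk, but the read-only mount option wins every time it’s remounted.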
I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (maybe to varying degrees, but still).
And just to clarify what I mean by not being able to do math: there’s a lack of understanding of how numbers work, where combining numbers or values outside of the training data can easily trip them up. Since it’s prediction based, exponents/trig functions/etc. will quickly produce errors when using large values.
Here’s an easy way we’re different: we can learn new things. LLMs are static models; it’s why OpenAI mentions knowledge cutoff dates for its models.
Another is that LLMs can’t do math. Deep learning models are limited to their input domain; ask an LLM to do math outside of its training data and it’s almost guaranteed to fail.
Yes, they are very impressive models, but they’re a long way from AGI.
This is what I was going to say.
I wouldn’t say worst, but maybe the greatest difference between expectation and reality: “My Time at Portia”.
Cutscenes and voice acting were janky. The UI felt like it was originally an MMO and feels odd for a single player game. The gameplay loop felt tedious and seemed to disrespect the player’s time.
Maybe I needed to give it more time, but for a game that I thought had generally good/great reviews, it wasn’t clicking for me.
Yeah they took that prediction an odd step too far.
Yeah, I think the point is that the person answering was wrong/overcomplicating it. If x = 10i, then x^2 would be -100 (or potentially -10, depending on what you think the ^2 applies to).
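Quick sanity check in Python, where `1j` is the imaginary unit, covering both readings:

```python
# Reading 1: x = 10i, and ^2 applies to all of x
x = 10j
print(x ** 2)  # (-100+0j), i.e. -100, since (10i)^2 = 100 * i^2

# Reading 2: the ^2 applies only to the i, i.e. 10 * i^2
print(10 * (1j ** 2))  # (-10+0j), i.e. -10
```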
Maybe not the same thing as FPS chess, but a great chess roguelike with guns is https://store.steampowered.com/app/1972440/Shotgun_King_The_Final_Checkmate/
He did do a great job with what they gave him. He’s the only reason the film is bearable, and I would have loved to see a more serious and earnest take on that movie.
I think the worst thing about it is that Batman doesn’t actually solve or do much. I like that being new/young means he isn’t infallible, but I would have liked him to get a win or two.
Still love the movie though.
I absolutely agree. It felt like they didn’t know what kind of movie they wanted to make and just kinda threw in whatever they could think of.
I don’t know why this was downvoted, but you’re right. It’s not sold as an accompaniment to the books, so it should stand on its own.
Having read the books though, I really enjoyed the movies. I do understand though that a lot of that enjoyment came from the background knowledge I got from the books.
Which is the same issue with many psychology and political survey-based studies. Take a survey that was horribly implemented, run some lightweight analysis (because what can you really do with 20 respondents anyway), slap a funny name on it, and boom, another piece of terrible research.
Yeah, I’ve definitely seen it most used to describe people acting ridiculous.
This is the same type of criticism the paper made. The real intent behind the saying is: given random output (where all outputs have nonzero probability), eventually you will create anything/everything.
It’s a thought experiment about infinity, probability, and art.
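You can sketch the idea in a few lines of Python: draw uniformly random letters forever, and any fixed target string (here the made-up target "cat") shows up with probability 1 in the limit, since every letter has nonzero probability.

```python
import random

random.seed(0)  # arbitrary seed so the run is repeatable
alphabet = "abcdefghijklmnopqrstuvwxyz"
target = "cat"  # hypothetical target; any finite string works

buf = ""
draws = 0
while not buf.endswith(target):
    buf += random.choice(alphabet)
    draws += 1

print(f"{target!r} appeared after {draws} random letters")
```

On average you’d wait roughly 26^3 ≈ 17,576 draws for a 3-letter target; longer targets (a Shakespeare play, say) take exponentially longer, which is why the monkeys need infinity.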