• 0 Posts
  • 58 Comments
Joined 1 year ago
Cake day: August 11th, 2023

  • lunarul@lemmy.world to Science Memes@mander.xyz, “Why?” · 1 day ago

    A computer that can manage the TNG holodeck will have no problem handling all the complexities of a transporter.

    > Plus it needs to identify which creepy crawlies are a part of you and which were just randomly wandering by.

    It does do that. It’s canon that transporters take care of removing any foreign organisms.

    > And how does it know what clothes are? If I’m wearing shoes, does it know where the shoes end and the floor starts? What if I’m wearing skis? What if I’m barefoot on a carpet? What if it’s a leather carpet? What if I’m wearing shoes made by folding carpet around my feet?

    It understands all those scenarios and relays them to the operator, who decides what to lock on.

  • lunarul@lemmy.world to Science Memes@mander.xyz, “Square!” · edited · 2 months ago

    If you’re talking about straight lines, then yes, that’s how you define a convex shape: every straight segment between two points of the shape stays entirely inside it. If any uninterrupted path is allowed instead, then the OP shape does satisfy the condition.

    Edit: I just read the other comments, and I see the problem: you thought the internal angle, shown only as a reference for how the shape was built, is part of the shape. It’s not; only the thicker lines define the shape. The little crossmarks that mark equal sides are also not part of the shape.
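    The straight-line definition above can be checked mechanically for a simple polygon: walk the vertices and verify that every turn bends the same way (cross products of consecutive edges all share one sign). A minimal sketch; the function name and the example polygons are illustrative, not from the thread:

    ```python
    def is_convex(vertices):
        """Return True if the simple polygon given as an ordered list of
        (x, y) vertices is convex.

        Convexity via straight lines: every segment between two points of
        the shape stays inside it, which for a simple polygon is equivalent
        to all consecutive-edge cross products having the same sign.
        """
        n = len(vertices)
        sign = 0
        for i in range(n):
            ax, ay = vertices[i]
            bx, by = vertices[(i + 1) % n]
            cx, cy = vertices[(i + 2) % n]
            # z-component of the cross product of edges (a->b) and (b->c)
            cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
            if cross != 0:
                if sign == 0:
                    sign = 1 if cross > 0 else -1
                elif (cross > 0) != (sign > 0):
                    return False  # turn direction flips: a reflex angle
        return True

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # convex
    arrowhead = [(0, 0), (2, 1), (0, 2), (1, 1)]       # has a reflex vertex
    ```

    A shape with a reflex (greater-than-180°) internal angle fails the test at the vertex where the turn direction flips, which is exactly the straight-line segment that would leave the shape.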



  • > Make a large enough model, and it will seem like an intelligent being.

    That was already true in previous paradigms. Even a classical algorithm, with no fuzziness and no neural network, will seem like an intelligent being if it is large and complex enough. But “large enough” is beyond our resources, and the processing time for each response would be too long.

    And then you get into the Chinese room problem: is there a difference between seeming intelligent and being intelligent?

    But the main difference between an actual intelligence and various algorithms, LLMs included, is that an intelligence works on its own. It is always thinking; it doesn’t only react to external prompts. You ask a question and get an answer, but the question remains at the back of its mind, and it might come back to you ten minutes later and say, “You know, I’ve given it some more thought, and I think it’s actually like this.”