• grandkaiser@lemmy.world

    Your brain doesn’t have a single module that knows what an apple is. Instead, different parts work together to form the concept. The occipital lobe, in particular, processes how an apple looks, but it doesn’t “know” what an apple is; it just handles visual characteristics. When these various modules combine their outputs, they produce your idea of an apple, which other parts of the brain then interpret.

    When you decide to draw an apple, your brain sends signals through the motor cortex and cerebellum to move your hands. The cerebellum doesn’t know what an apple is either. It simply coordinates the movements needed to draw shapes, based on input from other brain regions that handle motor planning and artistic representation. Even those parts don’t fully understand what an apple is; they just act on synaptic patterns related to it.

    AI, by comparison, is missing many of these modules. It doesn’t know the taste or scent of an apple because it has no sensory input for taste or scent. AI lacks a cerebral cortex, a reticular activating system, a posterior parietal cortex, an anterior cingulate cortex, and the supporting structures needed to experience consciousness, so it doesn’t “know” or “sense” anything the way a human brain does. Instead, AI works through patterns and data, never experiencing the world as we do.

    This is why AI often creates dream-like images. It can detect and replicate patterns similar to the synaptic patterns produced by the occipital lobe, but without the grounding of consciousness, or the other sensory inputs and corrections that come from real-world experience, its creations lack the coherence and depth of human perception. AI has no lived understanding or context, which leads to images that can feel abstract or surreal, much like dreams.