Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
    • blady_blah@lemmy.world · 11 hours ago

      Then try asking it a logic question. What question are you asking that the LLMs are getting wrong and the average person is getting right? How are you proving intelligence here?

      • JacksonLamb@lemmy.world · 27 minutes ago

        LLMs are autocorrect.

        Let’s use a standard definition like “intelligence is the ability to acquire, understand, and use knowledge.”

        An LLM can acquire (learn) and use (access, output) data, but it lacks the ability to understand that data.

        This is why we have AI telling people to use glue on pizza or drink bleach.
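        As a rough sketch of the "autocorrect" point (assuming the Hugging Face transformers library and the small public GPT-2 checkpoint, not any particular production model), this is the whole trick: score likely next tokens. Nothing in the loop checks whether a continuation is true, safe, or sensible.

        ```python
        # Minimal next-token prediction sketch. Assumes: pip install torch transformers
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        prompt = "To keep the cheese from sliding off the pizza, add some"
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # scores for the next token only

        # The model ranks plausible-sounding continuations; it never checks
        # whether any of them is actually good advice.
        top = torch.topk(logits.softmax(dim=-1), 5)
        for p, idx in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
        ```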

        I suggest you sit down with an AI some time and put a few versions of the Trolley Problem to it. You will likely see what is missing.

        • blady_blah@lemmy.world · 12 minutes ago

          I think this all has to do with how you compare and pick a winner in intelligence. The traditional way is usually with questions, which LLMs tend to do quite well at. They have a tendency to hallucinate, but in my experience the amount they hallucinate is less than the amount the average person simply doesn't know.

          The issue is really all about how you measure intelligence. Is it a word problem? A knowledge problem? A logic problem? And then the issue is: can the average person get your question correct? A big part of my statement here is that the average person is not very capable of answering those types of questions.

          In this day and age of alternative facts, vaccine denial, science denial, and other ways that your average person may try to be intentionally stupid… I put my money on an LLM winning an intelligence competition against the average person. I think an LLM would beat me in 90% of topics.

          So, the question to you is: how do you create this competition? What are the questions you're going to ask that the average person is going to get right and the LLM will get wrong?

        • blady_blah@lemmy.world · 7 hours ago

          I asked Gemini and ChatGPT (the free one) and they both got it right. How many people do you think would get that right if you didn't write it down in front of them? If Copilot gets it wrong, as per eletes' post, then the AI success rate is 66% (two of three). Ask your average person walking down the street and I don't think you would do any better. Plus there are a million questions on which the LLMs would vastly outperform your average human.

          • JacksonLamb@lemmy.world · 33 minutes ago (edited)

            I think you might know some really stupid or perhaps just uneducated people. I would expect 100% of people to know how many Rs there are in “strawberry” without looking at it.

            Nevertheless, spelling is memory and memory is not intelligence.
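            For what it's worth, the disputed question is trivially computable, which is part of why the failure looks so bad. (One commonly offered explanation, not something from this thread, is that LLMs see subword tokens rather than individual letters.)

            ```python
            # Counting the Rs in "strawberry" is a one-liner.
            word = "strawberry"
            print(word.count("r"))  # -> 3
            ```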

          • eletes@sh.itjust.works · 8 hours ago

            Literally just asked Copilot through our work subscription.

            I know it looks like I'm shitting on LLMs, but really I'm just trying to highlight that they still have gaps in reasoning that they'll probably fix in this decade.