• Rottcodd@lemmy.world · 33 points · 13 days ago

    It means that either the test is flawed, the results are bogus, or the report is a lie.

    Intelligence is a measure of reasoning ability.

    Current AIs have been designed to produce content that (optimally) mimics the products of reason, but they do not in fact reason at all, so they cannot possess measurable intelligence.

    Much more to the point, current AIs have been designed to make enormous piles of money for corporations and venture capitalists, and I would pretty much guarantee that that has more to do with this story than anything else.

    • xmunk@sh.itjust.works · 10 points · edited · 13 days ago

      One extremely minor correction: you said they’re designed to make enormous piles of money, and yet none of these(1) are cash flow positive or have any clear path to profitability. The only way a company makes money off this (outside an acquisition that lets founders exit with bags of cash) is if one of these companies is allowed to create a monopoly, leading to a corporate autocracy. General language models are absolutely shit in terms of efficiency compared to literally any other computing tool; they just look shiny.

      1. Please note: lots of pre-ChatGPT neural networks are happily chugging away doing good and important work… my statement excludes everything from before the ML bubble, plus a fair few legitimately interesting ML applications developed afterwards which you’ll never fucking hear about.

      Edited to add: Just as a note, it’s always possible that this AI gold rush actually does lead to an AGI, but, lucky for me, if that happens the greedy-as-fuck MBAs will absolutely end civilization before any of you could type up “told you so”, so I’m willing to take this bet.

      • Poik@pawb.social · 5 points · 13 days ago

        ML bubble? You mean the one in the 1960s? I prefer to call this the GenAI bubble, since other forms of AI are still everywhere and have improved a lot of things invisibly for decades. (So, yes. What you said.)

        AI winter is a recurring theme in my field, mostly because people don’t understand what AI is. There have been Artificial Narrow Intelligences that beat humans at various forms of reasoning for ages.

        AGI still seems a couple of AI winters away from even a basic implementation, but we already have genuinely useful AI that can tell you whether you have cancer more reliably, and years earlier, than humans can (based on current long-term cancer datasets). These systems can get better with time, and the ability to learn from them is still active research, but that is improving too. Heck, with decent patching, a good ANI can feed its output through ChatGPT for things like scene understanding to help blind people (a rough sketch of that follows below). There’s no money in that, but it’s still neat to people who actually care about AI instead of cash.
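
        To make that concrete, here is a minimal, purely hypothetical sketch of the kind of plumbing meant above: a narrow detector (the ANI) produces raw labels, and an LLM turns them into a short description a blind user can act on. The detect_objects stub, the model name, and the OpenAI chat-completions call are assumptions for illustration, not any particular deployed system.

        ```python
        # Hypothetical pipeline: narrow vision model ("ANI") -> ChatGPT -> short
        # spoken-style scene description for a blind user. Detector is stubbed out.
        from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def detect_objects(frame_path: str) -> list[dict]:
            """Stand-in for a real narrow detector; hard-coded output for illustration."""
            return [
                {"label": "door", "position": "ahead, slightly left", "confidence": 0.97},
                {"label": "chair", "position": "one step to the right", "confidence": 0.88},
                {"label": "dog", "position": "far left, moving", "confidence": 0.74},
            ]

        def describe_scene(frame_path: str) -> str:
            detections = detect_objects(frame_path)
            facts = "\n".join(
                f"- {d['label']} ({d['position']}, confidence {d['confidence']:.2f})"
                for d in detections
            )
            # Model name is a placeholder; any chat-capable model would do.
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": "Turn these raw object detections into one short, calm "
                               "sentence a blind pedestrian could act on:\n" + facts,
                }],
            )
            return response.choices[0].message.content

        if __name__ == "__main__":
            print(describe_scene("frame_0001.jpg"))
        ```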

      • CmdrShepard42@lemm.ee · 2 points · 13 days ago

        none of these(1) are cash flow positive or have any clear path to profitability.

        Only if you consider the companies developing these algorithms and not every other company jamming “AI” into their products and marketing. In a gold rush, the people who make money aren’t the ones finding the gold; they’re the ones selling shovels and gold pans.

  • tal@lemmy.today · 17 points · edited · 13 days ago

    Here’s what that means

    That we need to produce a better, generally-accepted benchmark of human-level general intelligence, I expect.

    Coming up with such a metric is a real problem, and probably an important step on the way to producing such an artificial general intelligence.
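
    For a sense of what such a benchmark could look like mechanically, here is a minimal sketch (the names, domains, and scoring scheme are hypothetical, not an existing benchmark): a pool of tasks spanning several domains, each graded and normalized against a human baseline.

    ```python
    # Hypothetical benchmark harness: score a model on tasks across domains,
    # relative to a human reference group, and report each domain separately.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        domain: str                    # e.g. "abstract reasoning", "planning", "language"
        prompt: str
        grade: Callable[[str], float]  # scores a candidate answer on 0.0-1.0
        human_baseline: float          # mean score of a human reference group (> 0)

    def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
        """Return the model's score relative to the human baseline, per domain."""
        per_domain: dict[str, list[float]] = {}
        for task in tasks:
            relative = task.grade(model(task.prompt)) / task.human_baseline
            per_domain.setdefault(task.domain, []).append(relative)
        return {domain: sum(scores) / len(scores) for domain, scores in per_domain.items()}
    ```

    Under that framing, a “human-level” claim would have to mean reaching 1.0 or better in every domain, not a headline average propped up by the few domains the model happens to be tuned for.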

    • Joeffect@lemmy.world · 1 point · 13 days ago

      Just use ChatGPT, of course…

      Here’s an improved and structured list of criteria for general intelligence (GI), tailored for human-level understanding and considering the capabilities and limitations of current AI technology:

      1. Learning and Adaptation

      Human-Level Expectation: Ability to learn new concepts, skills, and tasks across various domains without specific pre-programming. Adapt to new and unfamiliar situations effectively.

      Current AI: Specialized machine learning models excel at task-specific learning but struggle with transfer learning across vastly different domains.

      2. Reasoning and Problem-Solving

      Human-Level Expectation: Logical reasoning, abstract thought, and the ability to solve novel and complex problems with limited information.

      Current AI: Good at structured problem-solving within predefined rules (e.g., chess, Go) but struggles with open-ended, ambiguous, or poorly defined problems.

      3. Perception and Understanding

      Human-Level Expectation: Capability to perceive, interpret, and understand diverse sensory inputs (visual, auditory, textual) and contextualize them meaningfully.

      Current AI: Strong in narrow perception tasks (e.g., image recognition, speech-to-text) but lacks comprehensive multimodal understanding and nuanced contextual awareness.

      4. Creativity and Innovation

      Human-Level Expectation: Generate novel ideas, concepts, or solutions that are original and valuable. Combine unrelated information creatively.

      Current AI: Can mimic creativity (e.g., generating art, writing) within boundaries of existing data but lacks the genuine originality seen in humans.

      5. Social and Emotional Intelligence

      Human-Level Expectation: Understand, interpret, and respond appropriately to human emotions, behaviors, and social cues. Exhibit empathy and build relationships.

      Current AI: Limited to pre-programmed emotional recognition and response patterns. Lacks authentic empathy or true understanding of social dynamics.

      6. Memory and Knowledge Retention

      Human-Level Expectation: Retain, organize, and recall past experiences or knowledge to inform decisions and behavior in real-time.

      Current AI: Good at storing and retrieving vast amounts of data but lacks long-term experiential memory and personalized understanding.

      7. Generalization and Transfer Learning

      Human-Level Expectation: Apply knowledge from one domain to solve problems in another, even when contexts differ.

      Current AI: Limited generalization; successes in transfer learning are domain-specific and far from human-level adaptability.

      8. Goal-Oriented Behavior

      Human-Level Expectation: Define, pursue, and achieve diverse goals autonomously, balancing conflicting objectives when needed.

      Current AI: Works well with clearly defined goals but struggles with balancing competing priorities or autonomously setting meaningful objectives.

      9. Self-Awareness and Meta-Cognition

      Human-Level Expectation: Understand one’s own state, capabilities, and limitations. Reflect on and regulate one’s thought processes.

      Current AI: Lacks true self-awareness or understanding of its limitations; it operates within predefined parameters without introspection.

      10. Ethical and Moral Reasoning

      Human-Level Expectation: Make decisions that consider ethical principles, cultural values, and societal norms. Adapt morality contextually and appropriately.

      Current AI: Ethics are externally imposed (via programming or guidelines). AI lacks innate moral reasoning or the ability to adapt ethically across contexts.

      11. Physical Interaction and Embodiment

      Human-Level Expectation: Navigate and interact with the physical world effectively, using tools and adapting to physical challenges.

      Current AI: Robotics and embodied AI are improving but still far behind humans in physical dexterity, adaptability, and decision-making in real-world environments.

      12. Intuition and “Common Sense”

      Human-Level Expectation: Possess an innate understanding of everyday situations, physical laws, and social norms, even when unstated.

      Current AI: Limited to explicitly trained knowledge; often fails in tasks requiring common-sense reasoning or implicit understanding.

      13. Persistence and Goal Persistence

      Human-Level Expectation: Continue efforts toward a goal despite challenges, setbacks, or changing circumstances.

      Current AI: Performs tasks based on pre-set parameters but lacks intrinsic motivation or the ability to “persevere” without external instructions.

      14. Ethical Use of AI Technology

      Acknowledge that the development of general intelligence must align with ethical principles, ensuring safety, fairness, and accountability.

      This list can serve as a roadmap to evaluate progress toward general intelligence, highlighting where current AI excels and where significant gaps remain compared to human cognition.
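
      If you actually wanted to track that roadmap, the list is easy to turn into a small machine-readable rubric. Here is a sketch; the 0-5 ratings and notes are illustrative placeholders paraphrasing the “Current AI” remarks above, not measured values.

      ```python
      # Illustrative rubric built from the criteria above; ratings are placeholders.
      from dataclasses import dataclass

      @dataclass
      class Criterion:
          name: str
          rating: int  # 0 = essentially absent, 5 = human-level
          note: str

      RUBRIC = [
          Criterion("Learning and Adaptation", 2, "strong in-domain, weak transfer"),
          Criterion("Reasoning and Problem-Solving", 2, "good on closed rules, poor on open problems"),
          Criterion("Perception and Understanding", 3, "strong narrow perception, weak multimodal context"),
          Criterion("Creativity and Innovation", 2, "remixes existing data"),
          Criterion("Social and Emotional Intelligence", 1, "pattern-matched empathy only"),
          Criterion("Memory and Knowledge Retention", 2, "retrieval without experiential memory"),
          Criterion("Generalization and Transfer Learning", 1, "domain-specific at best"),
          Criterion("Goal-Oriented Behavior", 2, "needs externally defined goals"),
          Criterion("Self-Awareness and Meta-Cognition", 0, "no introspection"),
          Criterion("Ethical and Moral Reasoning", 1, "externally imposed guardrails"),
          Criterion("Physical Interaction and Embodiment", 1, "robotics lags far behind"),
          Criterion("Intuition and Common Sense", 1, "fails on implicit, untrained knowledge"),
          Criterion("Persistence", 1, "no intrinsic motivation"),
          # Item 14 (Ethical Use of AI Technology) is a constraint on development
          # rather than a capability, so it is left out of the capability rubric.
      ]

      def weakest_link(rubric: list[Criterion]) -> float:
          """One (debatable) choice: bottleneck the headline score on the weakest axis,
          so a missing capability can't be averaged away."""
          return min(c.rating for c in rubric) / 5

      if __name__ == "__main__":
          for c in sorted(RUBRIC, key=lambda c: c.rating):
              print(f"{c.rating}/5  {c.name}: {c.note}")
          print(f"weakest-link score: {weakest_link(RUBRIC):.2f}")
      ```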

    • Xerxos · 1 point · 12 days ago

      No, I don’t think so. AI is just getting better.

      Still, that AI, even with a lot of time and compute, got easy tasks wrong, so we don’t have AGI yet, if that makes you feel better.

      I don’t understand why people have such a hate-boner for AI that they can’t believe that there’s advancement in AI research.

      AI is in a strange place where it is very good at many tasks while making errors a five-year-old wouldn’t make.

      But it’s a relatively young technology, so it will probably get better for quite some time.

  • randon31415@lemmy.world · 4 points · 13 days ago

    Think of the average human intelligence, then realize half of them are below that.

    People are saying “there is no way an AI is as smart as a human.” There are a few humans I know for whom being as smart as them wouldn’t be much of a challenge.

    • JeeBaiChow@lemmy.world · 3 points · 13 days ago

      Think of the average human intelligence, then realize that more than half of those who voted in 2024 voted for Trump.

  • JeeBaiChow@lemmy.world · 3 points · 13 days ago

    That the AI industry has finally produced a text-based frontend UI for general search, aka a ‘search bar’, but you’d still have to vet the results yourself?