There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles; we’ve seen an AI bubble roughly once per decade. This time the focus is on generative systems, particularly LLMs and other tools designed to produce plausible output: output that either makes people feel the response is correct, or is good enough for domains where correctness doesn’t matter.

But we can tell the traditional tech industry (the handful of giant tech companies, plus startups backed by the most powerful venture capital firms) is in the midst of inflating another “Web3”-style froth bubble, because it has again abandoned one of the core values of actual technology-based advancement: reason.

  • Ephera · 8 months ago

    Yeah, I have to disagree. Reason-able-ness is extremely important. It allows us to compose various pieces of logic, which is why I do think it will always be more important than the non-reason-able, 95%-accurate solutions.

    But that fundamental flaw doesn’t mean the non-reason-able parts can’t exist at all. They simply have to live at the boundaries of your logic.

    They can be used to gather input, which is then passed into your reason-able logic. If the 95%-accurate solution fucks up, the whole system is only 95% accurate, but otherwise it doesn’t affect your ability to reason about the rest.

    And they can be used to format your output, whether that’s human-readable text, an image, or something else. Again, if they’re 95% accurate, your whole system is 95% accurate, but you can still reason about the reason-able parts (see the sketch after this comment for the input-boundary / core / output-boundary pattern).

    It’s not really different from traditional input & output, especially when a human is involved in those.
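
To make that boundary pattern concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: `Order`, `extract_order`, `validate`, `total_price`, and `format_reply` are invented names, and `extract_order` merely stubs in for whatever ~95%-accurate component (an LLM call, say) sits at the input boundary. The point is the shape: probabilistic pieces at the edges, deterministic logic in the middle.

```python
# A sketch of the pattern from the comment above: probabilistic components
# sit only at the input/output boundaries, while the core logic stays
# deterministic and reason-able. All names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Order:
    item: str
    quantity: int


def extract_order(free_text: str) -> Order:
    """Input boundary: turn messy free text into structured data.

    In a real system this might be an LLM call that is right ~95% of the
    time. Here it is stubbed with trivial parsing so the example runs
    on its own.
    """
    words = free_text.lower().split()
    quantity = next((int(w) for w in words if w.isdigit()), 1)
    item = words[-1] if words else "unknown"
    return Order(item=item, quantity=quantity)


def validate(order: Order) -> Order:
    """Guard at the boundary: reject outputs the core must never see."""
    if order.quantity <= 0:
        raise ValueError(f"implausible quantity: {order.quantity}")
    return order


def total_price(order: Order, unit_price_cents: int) -> int:
    """Core logic: deterministic, composable, easy to reason about."""
    return order.quantity * unit_price_cents


def format_reply(order: Order, total_cents: int) -> str:
    """Output boundary: render a human-readable reply (could be an LLM)."""
    return f"{order.quantity} x {order.item}: ${total_cents / 100:.2f}"


if __name__ == "__main__":
    # Probabilistic edge -> validated structure -> deterministic core -> edge.
    order = validate(extract_order("I'd like 3 widgets"))
    print(format_reply(order, total_price(order, unit_price_cents=250)))
```

The deterministic core (`total_price`) can be tested exhaustively and composed freely; only the boundary functions inherit the 95% accuracy, which matches the comment’s claim that a failure there doesn’t undermine your ability to reason about the rest.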