Currently, talking to a face is the ultimate guarantee that you are communicating with a human (and on a subconscious level it makes you try to relate, empathise, etc.). If humanoid robot technology ever crosses the Uncanny Valley, discovering that I'm talking to a humanoid running an LLM and that my intuitions had been betrayed would undermine the instinctive trust I give the other party when I see a human face. This would degrade my social interactions across the board, because I'd live in constant suspicion that the humans I was talking to weren't actually human.

For this reason, I think the law should require humanoid robots to be clearly differentiated from humans. Or at least that people should have the right to opt out of encountering realistic-looking humanoids.

  • Num10ck@lemmy.world · 1 day ago

    asimov explained why androids should be human-shaped: so that our world continues to be designed around our shape, rather than leaving us behind. that doesn’t require an uncanny face, though.

  • TootSweet@lemmy.world · 1 day ago

    This makes me think of the commandment “thou shalt not make a machine in the likeness of a human mind” from the Dune series.

    Seriously, though, I suspect a lot of technologies we currently experience in society only in the context of oppression of average people and widening of the income gap might be able to be put to better use. Not even necessarily because we have rules in place so much as because people won’t be baking their selfish asshole agendas into the tech they build.

    That all kind of assumes that humanoid robots would be “tools” for humans to “use”. If, of course, they (or at least some of them) are more like sentient creatures with hopes and dreams and emotions, that might make for a much different conversation. And that feels like the kind of conversation that’d be hard to even comment on today.

    • Andy@slrpnk.net · 1 day ago (edited)

      I’ve spent a lot of time thinking about this, because over the last year I was writing the world guide for a solarpunk setting to be used with a tabletop RPG or as a writing guide. And while I was working on this, OpenAI came along and put the Turing test out to pasture.

      Several existential crises later, the result looked remarkably like I hadn’t thought about it at all: in the game setting, there are robots and they are treated like people. Like Bender on Futurama.

      I think @TootSweet@Lemmy.world (love the username, btw!) is absolutely right that our concerns are largely shaped by the presumption that today, everything someone builds is built to benefit the creator and manipulate the end user. If that isn’t the case, then a convincing android could just be… your neighbor Hassan.

      Most machines probably wouldn’t have a reason to pretend to be human. But if one wanted to, that’s basically transorganicism. No disrespect to OP, but if a machine is sentient, trying to restrict it from presenting as organic seems pretty similar to restrictions on trans people using the restroom that matches their presentation.

      And if they are trying to deceive you maliciously, well… I currently know everyone I meet is organic, and I already know not to trust all of them.

  • Kwiila@slrpnk.net · 23 hours ago

    Bots impersonating humans online are already causing so many problems at every level.

  • Sterile_Technique@lemmy.world · 1 day ago

    I have no idea what the actual origin story for Fallout’s Brotherhood of Steel is, but I’d at least place OP’s post as a strong candidate.

    In all seriousness, if it ever gets out of the uncanny valley, then yeah, that’s a major transparency issue. The problem at that point becomes: even with laws in place to prevent it, who’s to say those will actually be followed? It’ll cause the same issues that deepfakes are causing now, but off-screen and in real time. That would have crazy-bad implications for politics, security, social engineering, market manipulation…

    • SubArcticTundraOP · 1 day ago (edited)

      Yep. But then I don’t get why there are efforts to make a realistic robotic human face.

      (Edit: ok, I do understand one reason – as a challenge and to prove it’s possible – but I’m not sure that justifies doing it given the consequences)

  • Chozo@fedia.io · 1 day ago

    They’ll remember that you said this, once they start protesting for their rights.