Meta announced a new AI model called Voicebox yesterday, one it says is the most versatile yet for speech generation, but it isn’t releasing it yet. The model is still only a research project, but Meta says it can generate speech in six languages from samples as short as two seconds and could one day be used for “natural, authentic” translation, among other things.

  • Ronno@kbin.social · 19 points · 2 years ago

    Since when does Meta consider anything to be too dangerous? It’s not like Facebook and Instagram are any less dangerous…

    • Steeve@lemmy.ca · 1 point · 2 years ago

      Lol what? Of course they’re less dangerous. I don’t have to worry about someone using a 2-second clip of audio to scam my grandma into thinking I’m trapped in a foreign country and sending over thousands of dollars using Facebook.

      Facebook and Instagram have what? Targeted ads? Yeah, I’ll take my chances there.

      • Ronno@kbin.social · 2 points · 2 years ago

        The way Meta gathers and processes not only user information but also non-user information, from all corners of the internet, is astonishing. Meta mainly uses this information for targeted ads, which at its core isn’t all that harmful. However, there appear to be cases in which social media is crawled to “tailor” a product, which is not desirable for consumers. For example, if you watch a lot of car/motorcycle stunt content, your next car insurance premium might end up higher, because the insurer thinks you might start doing that dumb stuff too. That is probably the least scary use case.

        What is really most dangerous is what we have seen over the last couple of years: social media platforms being used to manipulate people and push specific (geo)political agendas. We also have community echo chambers that can be harmful to society. All of this, combined with the data that Meta gathers, is dangerous beyond comprehension.

        Couple that with your statement, “I don’t have to worry about someone using a 2-second clip of audio to scam my grandma into thinking I’m trapped in a foreign country and sending over thousands of dollars using Facebook.” Even without AI, this has been happening for years on social media platforms. Scammers and thieves know when you are not at home, because you are posting those holiday pictures. When a data leak occurs, which happens quite often, scammers and thieves get access to your historical information and patterns, which can also be used to scam your grandma, or worse. It is quite easy to socially engineer someone when you know pretty much everything that has happened in that person’s life: “Hey grandma, I’m sending this message from a strange phone number in Bali, since someone stole my phone on the beach. Can you send over some money to buy a new one?”

        I’m not saying we shouldn’t use social media anymore; I’m posting this on a social platform myself. But people should be mindful of the information that is out there.

  • shoelace@kbin.social · 9 points · 2 years ago

    Gives me Simpsons vibes.

    Chalmers: Ah- Aurora Borealis!? At this time of year, at this time of day, in this part of the country, localized entirely within your kitchen!?
    Skinner: Yes.
    Chalmers: …May I see it?
    Skinner: …No.

    • PabloDiscobar@kbin.social · 4 points · 2 years ago

      “Our tech is so powerful that it’s dangerous!”

      “Ok, seems like we will have to push to write more regulation into law then.”

      “No, wait!”

  • EmptyRadar@kbin.social · 5 points · 2 years ago

    What a fantastic way to phrase “we don’t have a public release anywhere near ready yet”. They need to give that PR person a raise.

  • blaine@kbin.social · 5 points · 2 years ago

    Mark Zuckerberg was on the Lex Fridman podcast less than a week ago talking about this, and he said Meta would continue to open-source its models until they reach the point of “super intelligence”.

    So what changed in the last week?

    • Clairvoidance@kbin.social · 3 points · 2 years ago

      That was specifically about LLMs. In that same podcast he also highlighted how worrisome scams are, and you can probably extend that to any reality-faking technology as it gets more and more convincing.
      It’s self-explanatory that the threat of extinction by AI and the threat of crafting a fake reality to shape real-world outcomes are two different threats.

      • blaine@kbin.social · 2 points · 2 years ago

        So creating a text-based AI that impersonates influencers or celebrities is a “cool feature” to “increase engagement” and is totally viable to release to the public, but doing the (checks notes) same thing using voice is incredibly “dangerous” and needs to be protected?

        • Clairvoidance@kbin.social · 2 points · 2 years ago

          Well snarked; I especially enjoyed the copy-paste of the “checks notes” phenomenon. Can you figure out why one would be seen as more harmful in the immediate future than the other?

        • conciselyverbose@kbin.social · 1 point · 2 years ago

          People understand that text can be fake.

          People don’t really understand that voices can be. It’s opening up a lot of scams where people pretend to be kidnapped (or otherwise desperate) relatives and take money from victims. If you make it easier to automate that, without a human in the loop, and have it appear responsive? A lot more of it is going to happen, a lot more convincingly.

          I don’t at all believe Facebook cares about that, but it is a real downside to the tech.

      • Maeve@kbin.social · 1 point · 2 years ago

        OK, following you now, the only commenter I’ve followed so far. Your posts are thought-experiment inducing. Thank you!

    • NotMyOldRedditName@kbin.social · 1 point · 2 years ago

      I didn’t watch it, but wasn’t that about LLaMA? That’s text generation, not speech generation.

      Speech has more implications if it can replicate someone’s voice. Imagine getting a ransom voice mail from your child.

      That doesn’t happen with text generation the same way.

  • Otome-chan@kbin.social · 5 points · 2 years ago

    Annoying. Why do they never release a good voice-clone TTS? MoeGoe and Tortoise are pretty much the best we have.
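
    For anyone curious what “voice-clone TTS” looks like in practice, here is a rough sketch of how Tortoise is typically driven; the “myvoice” folder of reference clips is just a placeholder, not anything from this thread:

    ```python
    # Rough sketch of local voice cloning with tortoise-tts (github.com/neonbjb/tortoise-tts).
    # Assumes the package is installed and that tortoise/voices/myvoice/ holds a few
    # short reference clips of the target speaker -- "myvoice" is only a placeholder name.
    import torchaudio
    from tortoise.api import TextToSpeech
    from tortoise.utils.audio import load_voice

    tts = TextToSpeech()  # downloads the model weights on first run

    # Load the reference clips and conditioning latents for the target voice.
    voice_samples, conditioning_latents = load_voice("myvoice")

    # Generate speech in that voice; the "fast" preset trades quality for speed.
    gen = tts.tts_with_preset(
        "This sentence was never actually spoken by the person you hear.",
        voice_samples=voice_samples,
        conditioning_latents=conditioning_latents,
        preset="fast",
    )

    # Tortoise outputs 24 kHz audio.
    torchaudio.save("cloned.wav", gen.squeeze(0).cpu(), 24000)
    ```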

    • lohrun@fediverse.boo · 1 point · 2 years ago

    I’ve personally messed around with ElevenLabs and their voice generation, and I was honestly amazed. I even ran an experiment with a fully AI-generated YouTube channel for a couple of weeks. I wouldn’t be surprised if a massive company like Facebook could pull off a realistic-sounding voice.