cross-posted from: https://lemmy.ca/post/37011397

!opensource@programming.dev

The popular open-source VLC video player was demonstrated on the floor of CES 2025 with automatic AI subtitling and translation, generated locally and offline in real time. Parent organization VideoLAN shared a video on Tuesday in which president Jean-Baptiste Kempf shows off the new feature, which uses open-source AI models to generate subtitles for videos in several languages.
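The article doesn't detail VLC's pipeline, but real-time subtitling generally means feeding speech-recognition output (timed text segments) into a subtitle format such as SubRip (SRT). A minimal sketch of that formatting step, using hypothetical segment data in place of a real recognizer:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples,
    e.g. from a local speech-to-text model."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Hypothetical recognizer output:
demo = [(0.0, 2.5, "Hello and welcome."), (2.5, 5.0, "Let's get started.")]
print(segments_to_srt(demo))
```

This is just the serialization end of the pipeline; the hard part (the on-device model producing `demo`) is what VLC is demonstrating.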

  • shyguyblue@lemmy.world · 22 hours ago

    I was just thinking, this is exactly what AI should be used for. Pattern recognition, full stop.

    • snooggums@lemmy.world · 22 hours ago

      Yup, and if it isn’t perfect that is ok as long as it is close enough.

      Like getting name spellings wrong or mixing up homophones is fine, because it isn’t trying to be factually accurate.

      • vvv@programming.dev · 16 hours ago

        I’d like to see this fix the most annoying part about subtitles: timing. Find a transcript or any subs on the Internet, and have the AI align them with the audio properly.

        • Scrollone@feddit.it · 4 hours ago

          YES! I can’t stand it when subtitles are misaligned with the video. If this AI tool could help with that, it would be super useful.

      • TJA!@sh.itjust.works · 21 hours ago

        Problem is that now people will say they don’t have to create accurate subtitles because VLC is doing the job for them.

        Accessibility might suffer from that, because all subtitles will now be just “good enough”.

        • snooggums@lemmy.world · 16 hours ago

          Regular old live broadcast closed captioning is pretty much ‘good enough’ and that is the standard I’m comparing to.

          Actual subtitles created ahead of time should be perfect because they have the time to double check.

        • Railcar8095@lemm.ee · 19 hours ago

          Or they can get OK ones with this tool and fix the errors. Might save a lot of time.

        • LandedGentry@lemmy.zip · 20 hours ago

          Honestly though? If your audio is even half decent you’ll get like 95% accuracy. Considering a lot of media just wouldn’t have anything, that is a pretty fair trade off to me

        • TachyonTele@lemm.ee · 21 hours ago

          I have a feeling that if you care enough about subtitles, you’re going to look for good ones instead of using “OK” AI subs.

        • shyguyblue@lemmy.world · edited · 15 hours ago

          I imagine it would be not-exactly-simple-but-not-complicated to add a “threshold” feature. If the AI is less than X% certain, it can request human clarification.

          Edit: Derp. I forgot about the “real time” part. Still, as others have said, even a single botched word would still work well enough with context.
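Speech-recognition decoders typically expose per-token probabilities, so the threshold idea is plausible even if VLC hasn't announced it. In real time you can't pause for a human, but you can mark uncertain words for later review. A minimal sketch over hypothetical ASR output:

```python
def flag_uncertain(words, threshold=0.80):
    """words: list of (token, confidence in [0, 1]) pairs.

    Returns subtitle text with tokens below the confidence
    threshold wrapped in [brackets?] for human review."""
    out = []
    for token, conf in words:
        out.append(token if conf >= threshold else f"[{token}?]")
    return " ".join(out)

# Hypothetical per-word confidences from a recognizer:
asr = [("the", 0.99), ("weather", 0.97), ("in", 0.98),
       ("Reykjavik", 0.55), ("is", 0.99), ("mild", 0.91)]
print(flag_uncertain(asr))  # → the weather in [Reykjavik?] is mild
```

Proper names and rare words are exactly where confidence tends to drop, which matches the thread's point: a viewer can usually fill in one flagged word from context.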