I wanted to maybe start making PeerTube videos, but then I realized I’ve never had to consider my voice as part of my threat model. The consequence that immediately comes to mind is having your voice used as AI training data, but I’m not (currently) someone whose voice others would find it desirable to clone. Potentially in the future, though?

I’d like to know how your threat model handles your personal voice. And as a bonus: would voice modulators help or hinder making your voice more flexible within your threat model? Thanks!

  • WhatAmLemmy@lemmy.world
    4 hours ago

    I would suggest investigating how much effort it would take to alter your voice. Is it possible to do it live, or does it take post-processing? There’s no harm in modulating unless you meet up with internet people in the real world, and even then it may not really be an issue.

    I’ve felt the same about running my comments through an AI before posting, because, using stylometry, a small sample of your comments is enough to de-anonymize all of your online accounts. Doing that ultimately added too much effort and friction, plus I have decades of comments already out there, so whatevs. I’m not here to fuck spiders.
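A minimal sketch of the stylometry idea above, using toy data and only the standard library: even crude character n-gram profiles tend to score two texts by the same writer as more similar than texts by different writers. The sample texts and the trigram/cosine setup here are illustrative assumptions, not a real attack pipeline (research tools use far richer features and classifiers).

```python
# Toy stylometry sketch: character trigram profiles + cosine similarity.
# All sample texts below are made up for illustration.
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Count overlapping character n-grams of the case-folded text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count vectors (Counters)."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two comments in one (hypothetical) writing style, one in another.
alice_1 = "tbh i reckon the whole thing is overblown, whatevs"
alice_2 = "tbh the patch is fine i reckon, whatevs, ship it"
bob_1 = "In my considered opinion, the proposal warrants further review."

same = cosine(ngram_profile(alice_1), ngram_profile(alice_2))
diff = cosine(ngram_profile(alice_1), ngram_profile(bob_1))
print(same > diff)  # the same-author pair scores higher on this toy data
```

This is why rewriting comments through an AI can blunt the attack: it replaces exactly the low-level habits (function words, punctuation, character patterns) that these profiles pick up.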