• foremanguy
    4 months ago

    Until now it has been really impractical to collect audio and then use it: the app would need to record constantly, send out the data, and then process it to make it useful. Today the costs really outweigh the benefits for them.

    But tomorrow this might change: if they find a way to use the mic to serve ads, be sure that they will. The only question today is how. The only realistic option at this point is to process the audio offline, as they already do with “ok google”. Within the next months to years we are going to see more and more phones and IoT devices using dedicated or specialized AI chips; these will be powerful yet low-consumption enough to run 24/7. They could analyze speech offline, build a summary, and lastly, once connectivity is sufficient and enough data has been collected, send all of it to the company’s servers.
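    To make the scenario concrete, here is a minimal sketch of what such a store-and-forward pipeline could look like. Everything here is hypothetical (the class name, the `min_batch` threshold, and the idea that summaries arrive as strings are all my own assumptions), just to show the buffering logic described above:

    ```python
    from dataclasses import dataclass, field


    @dataclass
    class StoreAndForwardBuffer:
        """Hypothetical pipeline: keep compact on-device summaries and only
        upload once connected and enough data has been collected."""
        min_batch: int = 3                       # assumed upload threshold
        queue: list = field(default_factory=list)

        def add_summary(self, summary: str) -> None:
            # In the scenario above, a low-power AI chip would produce
            # these summaries from speech entirely offline.
            self.queue.append(summary)

        def maybe_flush(self, connected: bool) -> list:
            # Upload only when connectivity is sufficient AND enough
            # data is queued; otherwise keep buffering locally.
            if connected and len(self.queue) >= self.min_batch:
                batch, self.queue = self.queue, []
                return batch   # stands in for "send to company servers"
            return []
    ```

    The point of the design is that the radio stays off almost all the time; the phone only pays the connectivity cost in rare, batched bursts, which is exactly what would make it cheap enough to run continuously.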

    I’ve seen some comments pointing out that companies other than Google cannot really use the mic this way, and that’s right…today. But rest assured that once they have properly developed this concept, Google (and Apple) would surely be fine with this approach (maybe in exchange for some bucks).

    Today phones are surely not listening to us, but they already know so much about us that we actually think they are. Maybe that isn’t profitable enough for them, though, so they want to invade our privacy even more to gain more of this fucking thing called “money”.