The idea of smartphones and smart speakers listening to everything you say and then sending you targeted advertisements has been around for years, and has largely been debunked by privacy experts. But the marketing unit of Cox Media Group, which owns newspapers and local radio and TV stations around the country, says it can do […]
As someone who has played around with offline speech recognition before: there is a reason AI assistants only use it for the wake word and process everything else in the cloud. Offline recognition is quite unreliable; you have to pronounce things exactly as expected, so you'd need to "train" it on different accents and pronunciations to capture anything properly. The info they could siphon this way is, imho, limited to a vocabulary of a couple thousand words. That's considerable already, and would allow for proper profiling, but it couldn't capture your interest in something more specific like a Mazda 323F.
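To make the limited-vocabulary point concrete, here's a toy sketch of dictionary-based keyword spotting in Python. Everything in it is made up for illustration (the simplified ARPAbet-style phoneme dictionary, the "heard" sequences, the distance threshold); real systems use acoustic models, not edit distance, but the failure mode is the same: anything outside the fixed vocabulary, or pronounced too differently, is simply lost.

```python
# Toy sketch: the "recognizer" only knows a fixed list of pronunciations
# and matches whatever phoneme sequence it heard against that list by
# edit distance. Out-of-vocabulary words can never be recognized.

def edit_distance(a, b):
    """Classic Levenshtein distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (pa != pb)))   # substitution
    	    # (costs: 1 per edit, 0 for a matching phoneme)
        prev = cur
    return prev[-1]

# Hypothetical dictionary: a few "interest" keywords an ad profiler
# might listen for, with simplified ARPAbet-style pronunciations.
VOCAB = {
    "car":      ["K", "AA", "R"],
    "vacation": ["V", "EY", "K", "EY", "SH", "AH", "N"],
    "pizza":    ["P", "IY", "T", "S", "AH"],
}

def spot(heard, max_dist=1):
    """Return the closest vocabulary word, or None if nothing is close."""
    best, best_d = None, max_dist + 1
    for word, phones in VOCAB.items():
        d = edit_distance(heard, phones)
        if d < best_d:
            best, best_d = word, d
    return best

print(spot(["K", "AA", "R"]))             # exact pronunciation -> "car"
print(spot(["K", "AH", "R"]))             # accented vowel, still close -> "car"
print(spot(["M", "AH", "Z", "D", "AH"]))  # out of vocabulary -> None
```

The accented variant only matches because it happens to stay within the edit-distance budget; a heavier accent, or any word not in the dictionary, falls through entirely, which is why the vocabulary has to stay small and pre-trained.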
But offline speech recognition also requires a fair amount of compute power. On our phones, at least, running it continuously would inevitably drain the battery.