I am willing to hear differing opinions on this.
I sometimes see people on the Fediverse speak as if there is something inherently wrong with the idea of content sorting and filtering algorithms.
There is a massive amount of content today and limited time. Content algorithms could provide the benefit of helping us sort content based on what we want: the most urgent news, the most informative articles, the closest friends, etc. This might have some similarities with how Facebook and others do it, but it is not the same. Big social media algorithms have one goal: maximizing their profit. One proxy for that is maximizing time on screen and scrolling.
Personally, I’ve been developing an algorithm to help me sift through the content I get on my RSS reader, as there’s a lot of content I’m uninterested in. This algorithm would save me time, whereas those of Twitter and Facebook maximize my wasted time.
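To make that concrete, here is a minimal sketch of what such a filter could look like: a hand-tuned keyword-weight scorer over RSS entries. The weights, threshold, and feed URL are made up for illustration, and it assumes the `feedparser` library; it's not a description of my actual implementation, just the general shape of the idea.

```python
# Minimal sketch of a keyword-weight scoring filter for RSS entries.
# Assumes the `feedparser` library; the weights and threshold below
# are illustrative placeholders, not anyone's real configuration.
import feedparser

# Hand-tuned weights: positive for topics I care about, negative for noise.
KEYWORD_WEIGHTS = {
    "security": 2.0,
    "rust": 1.5,
    "privacy": 1.0,
    "sponsored": -3.0,
    "giveaway": -2.0,
}
THRESHOLD = 1.0  # entries scoring below this are hidden, not deleted

def score(entry):
    """Sum the weights of keywords found in the title and summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

def interesting_entries(feed_url):
    """Return the entries worth reading, best-scoring first."""
    feed = feedparser.parse(feed_url)
    kept = [e for e in feed.entries if score(e) >= THRESHOLD]
    return sorted(kept, key=score, reverse=True)

if __name__ == "__main__":
    # Hypothetical feed URL, just for demonstration.
    for entry in interesting_entries("https://example.com/feed.xml"):
        print(f"{score(entry):5.1f}  {entry.get('title', '(untitled)')}")
```

The important part is that the goal and the knobs are entirely mine: I can read the weights, change them, or turn the whole thing off.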
In my opinion, algorithms should be:
- opt-in: off by default, and the user is given a clear choice to change it
- transparent: the algorithm should be transparent about its goals and inner workings
Only under these conditions can algorithms be good.
What are your thoughts?
My big problem with “algorithms” (by which I don’t mean the pedantic “well, pushing top-rated content to the top is ackshyouallee an algorithm, technically”) for controlling feeds is that algorithms are biased in subtly devastating ways. We like to think that “algorithms” are neutral because computers are neutral, but the truth is “algorithms” are designed and implemented by human beings and reflect what those human beings think is “normal” or “correct” or “important” or the like. Indeed there’s one huge, GLARING bias baked straight in from the outset: numericity. If it can’t be factored in some way into a number, it isn’t important to the “algorithm” because at its core the “algorithm” can’t work without numbers of some form or another.
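To illustrate the numericity point, here is a caricature of an engagement-ranking function (not any real platform's code, and the weights are invented): the only things it can weigh are things someone already chose to count, and the weights themselves encode the designers' idea of what is "important".

```python
# A caricature of engagement ranking, to make the "numericity" point
# concrete: anything that was never turned into a number simply does
# not exist as far as the ranking is concerned.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    dwell_seconds: float  # someone decided "time on screen" matters

def engagement_score(post: Post) -> float:
    # The coefficients are the programmers' biases, written down as numbers.
    return 1.0 * post.likes + 3.0 * post.shares + 0.1 * post.dwell_seconds

posts = [
    Post(likes=12, shares=0, dwell_seconds=40.0),
    Post(likes=2, shares=5, dwell_seconds=5.0),
]
ranked = sorted(posts, key=engagement_score, reverse=True)
```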
Every “algorithmically”-curated system I deal with I can break with ease just by thinking a wee bit outside the box and flustering the poor so-called AI by selecting things on criteria that it's likely not programmed for, because in the biased view of the AI's programmers those criteria weren't “important”.
At some point years ago, Facebook's feed algorithm started defaulting to prioritizing posts from relatives/family. This is extremely biased and problematic: it makes a lot of sensitive assumptions, since everyone's family structure is different. So “devastating” is a good choice of words.