I am willing to hear differing opinions on this.

I sometimes see people on Fediverse speak as if there is something inherently wrong about the idea of content sorting and filtering algorithms.

There is a massive amount of content today and limited time to consume it. Content algorithms could provide the benefit of helping us sort content based on what we want: the most urgent news, the most informative articles, the closest friends, etc. This might have some similarities with how Facebook and others do it, but it is not the same. Big social media algorithms have one goal: maximizing profit. One proxy metric for that is maximizing screen-on time and scrolling.

Personally, I’ve been developing an algorithm to help me sift through the content I get on my RSS reader, as there’s a lot of content I’m uninterested in. This algorithm would save me time, whereas those of Twitter and Facebook maximize my wasted time.
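
As a rough sketch of what such a user-serving RSS filter could look like (the keyword lists and scoring rule here are purely hypothetical, for illustration only):

```python
# Minimal sketch of a user-serving RSS filter: score each entry by
# keywords the user cares about, penalize ones they don't, sort best-first.
def score_entry(title, likes, dislikes):
    words = title.lower().split()
    return sum(w in likes for w in words) - sum(w in dislikes for w in words)

def rank_feed(entries, likes, dislikes, min_score=1):
    scored = [(score_entry(t, likes, dislikes), t) for t in entries]
    # Keep only entries the user is likely interested in, highest score first.
    return [t for s, t in sorted(scored, reverse=True) if s >= min_score]

feed = [
    "New release of my favourite static site generator",
    "Celebrity gossip roundup",
    "Deep dive into ActivityPub federation",
]
likes = {"activitypub", "federation", "release"}
dislikes = {"gossip", "celebrity"}
print(rank_feed(feed, likes, dislikes))
```

The point is that the objective function is the user's own interest, not engagement: uninteresting entries are dropped entirely rather than reshuffled to keep you scrolling.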

In my opinion, algorithms should be:

  • opt-in: off by default, with a clear choice for the user to turn it on
  • transparent: the algorithm should be transparent about its goals and inner workings

Only then can algorithms be good.

What are your thoughts?

  • poVoq · +11/-1 · edited · 2 years ago

    deleted by creator

    • ☆ Yσɠƚԋσʂ ☆ · +2/-1 · 3 years ago

      This is the exact same logic people use for drug prohibition, and it’s a completely wrong way to look at it. The problem in both cases is that people feel the need to escape their physical surroundings. This problem must be solved by making better environments that we live and work in. Making work less stressful, giving people more free time to socialize, providing public services like parks, sports centres, and so on.

  • bashrc · +9 · 3 years ago

    As long as they’re transparent and under user control, timeline algorithms might be OK. However, it would start to become problematic if instance admins could control users’ timelines, and it might become tempting for them to do so for monetization reasons.

    Even under user control there would be a temptation for some people to try to SEO against the known algorithms, so that their posts appear preferentially in some people’s timelines, leading to the same set of problems that BigTech has.

  • xarvos · +8/-1 · 3 years ago

    Yep. Lemmy’s feed, for example, is algorithmic.

  • Mad@sopuli.xyz · +6 · 3 years ago

    i think when it comes to algorithms that save you time, simple filters do the job perfectly. like only people you follow vs. specific hashtags, or just full posts vs. replies included, or chronological vs. “good friends” (like in instagram) first. part of the reason modern algorithms are so complex is so they can confuse us and we end up spending more time on the platform. if you’re making an algorithm for ease of use, it should be the opposite of confusing. it should probably be clarified what people mean by algorithms, since that’s a very general word, but most of the time they probably mean the complex and confusing stuff modern social media uses, rather than the simple filters that most of the fediverse uses.

    more complex algorithms might be useful for a site like YouTube, since it’s an entertainment platform not a socialization platform, so you just want to see anything that will be entertaining, and discover new content whenever possible.
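
    A minimal sketch of those composable simple filters (the post structure and field names below are made up for illustration, not any real Fediverse API):

```python
from datetime import datetime

# Toy posts; the fields are invented for this sketch.
posts = [
    {"author": "alice", "is_reply": False, "time": datetime(2021, 5, 2)},
    {"author": "bob",   "is_reply": True,  "time": datetime(2021, 5, 3)},
    {"author": "carol", "is_reply": False, "time": datetime(2021, 5, 1)},
]
following = {"alice", "bob"}

# Each filter is a plain predicate the user can switch on or off;
# there is no opaque ranking involved.
only_following = lambda p: p["author"] in following
no_replies = lambda p: not p["is_reply"]

def timeline(posts, filters):
    # Keep posts passing every active filter, newest first (chronological).
    kept = [p for p in posts if all(f(p) for f in filters)]
    return sorted(kept, key=lambda p: p["time"], reverse=True)

print([p["author"] for p in timeline(posts, [only_following, no_replies])])
```

Because each filter is just a predicate and the order stays chronological, there is nothing to be confused about: the user can predict exactly why a post did or didn’t appear.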

  • quaver · +4 · 3 years ago

    I’ve always wished that social media sites would have their algorithm be user customizable through some kind of basic syntax. There could of course be a default - but the user would be able to see what it is, how it works, and be able to customize it to their liking. Of course, this would be complicated, but it’s not like these algorithms don’t already exist. They’re just hidden.
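
    That customizable syntax could be as simple as a user-editable weight per tag; this mini-syntax and the post fields are entirely hypothetical:

```python
# Sketch of a user-editable ranking "syntax": each line assigns a weight
# to a tag, and a post's score is the sum of the weights of its tags.
user_rules = """
comics   +3
politics -5
friends  +2
"""

def parse_rules(text):
    rules = {}
    for line in text.strip().splitlines():
        tag, weight = line.split()
        rules[tag] = int(weight)
    return rules

def rank(posts, rules):
    # Higher score first; tags without a rule contribute nothing.
    score = lambda p: sum(rules.get(t, 0) for t in p["tags"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Sunday comic strip", "tags": ["comics"]},
    {"title": "Election hot take",  "tags": ["politics"]},
    {"title": "Trip photos",        "tags": ["friends"]},
]
print([p["title"] for p in rank(posts, parse_rules(user_rules))])
```

The default rules file would be visible and editable, so the ranking is exactly as transparent as the user wants to make it.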

  • Oliver@lemmy.ca · +3 · 3 years ago

    For me, it was actually the introduction of the forced timeline algorithm that finally pushed me to leave Facebook after long frustration. That was more than a decade ago. Now these algorithms have become standard.

    I would like to see multiple timelines, so to speak - for example, the favorites lists on Twitter being available directly on the home page. That’s where I would actually find it partly useful: my list for comics, for instance - sorting it by the most popular posts from the last week? Why not, that would be useful. Inoreader also has such a feature for premium users. For the home timeline on social networks, on the other hand, I really think it’s pure poison. On Facebook, it meant that I no longer saw posts about events in my private circle of friends - instead, anything that generated controversy was flushed to the top.

  • ttmrichter · +3 · edited · 3 years ago

    My big problem with “algorithms” (by which I don’t mean the pedantic “well, pushing top-rated content to the top is ackshyouallee an algorithm, technically”) for controlling feeds is that algorithms are biased in subtly devastating ways. We like to think that “algorithms” are neutral because computers are neutral, but the truth is “algorithms” are designed and implemented by human beings and reflect what those human beings think is “normal” or “correct” or “important” or the like. Indeed there’s one huge, GLARING bias baked straight in from the outset: numericity. If it can’t be factored in some way into a number, it isn’t important to the “algorithm” because at its core the “algorithm” can’t work without numbers of some form or another.

    Every “algorithmically”-curated system I deal with, I can break with ease just by thinking a wee bit outside the box and flustering the poor so-called AI by selecting things on criteria it’s likely not programmed for, because in the biased view of the AI’s programmers those criteria weren’t “important”.

    • testingthis · +5 · 3 years ago

      At some point years ago Facebook started defaulting to relatives/family algorithmically. This is extremely biased and problematic. It makes a lot of sensitive assumptions, as everyone’s family structure is different. So “devastating” is a good choice of words.

  • testingthis · +3 · 3 years ago

    I feel like most of those examples tend towards inherently immoral and wrong…

    E.g., no news is actually urgent, and algorithmic friend sorting is wrong – serendipity is correct

    Regarding informative articles, in that context there are keywords for searching

    I think the only way that we’ll start getting remotely close to something moral is with fine-grained controls that will let us know exactly why an algorithm chooses what it does

  • toneverends · +3 · 3 years ago

    In English at least, the algorithms considered socially problematic can be referred to more precisely as “The Algorithm” — capitalised to hint at the more specific meaning. But to understand the hint you already need to know the context of the broader conversation around corporate-interest-oriented algorithms.

    So we have an obscure, implicit grammar that only makes sense if you already know what it means. Not great for bringing new people into the conversation.

  • beta_tester · +3 · 3 years ago

    It highly depends on which algo we are talking about.

    Way back when I was on Facebook and they introduced their content filtering/sorting algorithm, it hid content from friends I didn’t interact with but still wanted to see. I only had 200 friends, so there wasn’t much to filter anyway. So no, hiding information from the user is not good in this regard.

  • Ephera · +2 · 3 years ago

    I think a big reason techy people don’t value these algorithms as much is that we can achieve a lot of the same (sometimes better, sometimes it just feels better) by manually setting up filter rules.

    And a big reason for that is that many of these algorithms are just hot garbage. Even Google, with its supposedly unmatched algorithmic expertise and intimate knowledge of its users, regularly fails to deliver anything of value.
    I guess they do have those corporate interests, and in particular don’t want to be transparent, to avoid people gaming their algorithms. But then even the bloody algorithm in my IDE trying to guess what I want auto-completed feels disappointingly like a broken clock being right twice a day.

    These are all anecdotes. I’m not aware of a non-commercial social media content algorithm, so maybe this is the one field where they’re amazing.

    • jackalope · +1 · 3 years ago

      Idk. I think Google does a pretty good job. At least the Chrome mobile suggestions are pretty good, though I stopped using Chrome because I didn’t like the changes they made to tabs.

  • Liwott@framapiaf.org · +2 · 3 years ago

    @cyclohexane
    > What is wrong is using them for maximizing corporate profits

    It is arguably not wrong either for a company to design an algorithm in a way that serves its interests. What is definitely wrong is that the algorithms are imposed on the user: if I want to keep exchanging with my friends on the platform, I have to submit to the algorithm.

    • CyclohexaneOP · +1 · 3 years ago

      I said this in the context of being a socialist who is against so much of society being shaped by profit motives.

      But socialism aside, I would be more accepting of Facebook if they were transparent about the goals of their algorithm and how exactly it works, and gave users the option to opt out.

  • ajr · +3/-1 · edited · 3 years ago

    deleted by creator

    • CyclohexaneOP · +7 · 3 years ago

      Addiction is a goal of these algorithms as they are designed: a “better” algorithm, by their metrics, is one that increases your screen-on time and mindless scrolling.

      A good algorithm could, hypothetically, limit the amount of content you see, or save you the time spent scrolling to find something interesting.

      • tmpod@lemmy.pt · +3 · 3 years ago

        I agree. As you said, an algorithm doesn’t necessarily need to be addictive or maximize your wasted time; that just happens to be the most common type we see today, due to the big platforms’ drive for financial growth. It is perfectly reasonable to want an algorithm that helps you by filtering the massive pool of content on the Internet down to the most relevant stuff for you, while also being respectful of your time. The idea you mentioned of an entry-limited algorithm is certainly very interesting, in my opinion.

  • alex · +1 · 3 years ago

    In theory, yes, an algorithm that shows me the posts I will like most seems great. However there are many obvious problems with big tech’s implementations.