• over_clox@lemmy.world · 68 points · 1 year ago

    Oddly enough, now that this data has been posted publicly, those least-viewed articles will end up getting a lot more views.

  • IzzyData · 44 points · 1 year ago

    I want to see a website that links to whatever is the least viewed Wikipedia article at any given time until all Wikipedia articles basically have the same number of views.

  • bool@lemm.ee · 26 points · 1 year ago

    Really enjoyed the read. Thanks for sharing. I’m surprised by the random page implementation.

    Usually each record in a database has an integer primary key, assigned sequentially as pages are created. The “random page” function could then select a random integer between zero and the largest page ID. If that ID isn’t in use (because the page was deleted), you could either try again with a new random number or march up to the next non-empty ID.
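    A minimal sketch of that retry loop in Python, with a plain dict standing in for the table (the page names and IDs here are invented for illustration):

```python
import random

# Hypothetical sparse table: sequential IDs with gaps where pages were deleted.
pages = {0: "Alpha", 1: "Beta", 4: "Gamma", 5: "Delta", 9: "Epsilon"}
max_id = max(pages)

def random_page_retry():
    """Re-roll until we land on an existing ID; every page is equally likely."""
    while True:
        candidate = random.randint(0, max_id)
        if candidate in pages:
            return pages[candidate]
```

    Each surviving page occupies exactly one ID, so every page has the same chance on each draw; gaps only cost extra iterations.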

    • AbouBenAdhem@lemmy.world · 27 points · 1 year ago

      Marching up to the next non-empty key would skew the distribution: pages preceded by more empty keys would show up more often under “random”.
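      A quick simulation makes the skew concrete (toy data; the march-up helper is hypothetical):

```python
import random
from collections import Counter

# Toy table: "Beta" sits after a three-key gap left by deleted pages.
pages = {0: "Alpha", 4: "Beta", 5: "Gamma"}
max_id = max(pages)

def random_page_march():
    candidate = random.randint(0, max_id)
    while candidate not in pages:  # march up past the gap
        candidate += 1             # max_id exists, so this terminates
    return pages[candidate]

counts = Counter(random_page_march() for _ in range(60_000))
# Draws 1 through 4 all land on "Beta", so it turns up roughly four
# times as often as "Gamma", which only the draw 5 can reach.
```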

      • SheeEttin@lemmy.world · 19 points · edited · 1 year ago

        Fun fact, that concept is used in computer security exploits: https://en.wikipedia.org/wiki/NOP_slide

        For choosing an article, it would be better to just pick a new random number.

        Although there are probably more efficient ways to pick a random record from a database: for example, periodically reindexing to close the gaps, or ordering extant records by a random value (if the database supports it).
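        For instance, with SQLite both database-side approaches fit in a couple of queries (the table and its rows are invented for the example):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO pages (id, title) VALUES (?, ?)",
                 [(0, "Alpha"), (4, "Beta"), (5, "Gamma")])

# Order by a random value: uniform, but the database touches every row.
row = conn.execute(
    "SELECT title FROM pages ORDER BY RANDOM() LIMIT 1").fetchone()

# Pick a uniform row number and skip to it: gaps in the IDs don't matter.
count = conn.execute("SELECT COUNT(*) FROM pages").fetchone()[0]
row2 = conn.execute("SELECT title FROM pages LIMIT 1 OFFSET ?",
                    (random.randrange(count),)).fetchone()
```

        Both pick every row with equal probability regardless of gaps; the trade-off is that neither is constant-time on a large table.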