OneMeaningManyNames

Full time smug prick

  • 71 Posts
  • 387 Comments
Joined 7 months ago
Cake day: July 2nd, 2024




  • Lavabit

    Connection to Edward Snowden

    Lavabit received media attention in July 2013 when it was revealed that Edward Snowden was using the Lavabit email address Ed_Snowden@lavabit.com to invite human rights lawyers and activists to a press conference during his confinement at Sheremetyevo International Airport in Moscow.[16] The day after Snowden revealed his identity, the United States federal government served a court order, dated June 10, 2013, and issued under 18 USC 2703(d), a 1994 amendment of the Stored Communications Act, asking for metadata on a customer who was unnamed. Kevin Poulsen of Wired wrote that “the timing and circumstances suggest” that Snowden was this customer.[17] In July 2013 the federal government obtained a search warrant demanding that Lavabit give away the private SSL keys to its service, affecting all Lavabit users.[18] A 2016 redaction error confirmed that Edward Snowden was the target.[2]

    source

    But what is the status now? Also, I think that in the years to come, jurisdiction will play a role too. If the service sits on the soil of a country that can subpoena the encryption keys, then nobody is really safe.
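    On the subpoenaed-keys point: how much a surrendered key reveals depends on the key exchange. Below is a toy, deliberately insecure sketch (tiny parameters, no authentication, names invented for illustration) of an ephemeral Diffie-Hellman handshake, the mechanism behind forward secrecy — not Lavabit's actual stack.

```python
# Toy sketch: why forward secrecy matters when a server's long-term key
# can be subpoenaed. Parameters are illustratively small -- NOT secure.

import secrets

# Public Diffie-Hellman parameters (real deployments use 2048+ bit groups).
P = 4294967291  # a toy-sized prime modulus
G = 5           # generator

def ephemeral_keypair():
    """Fresh secret per session, thrown away after the handshake."""
    secret = secrets.randbelow(P - 2) + 2
    public = pow(G, secret, P)
    return secret, public

# Each session derives its own shared secret from ephemeral keys.
a_secret, a_public = ephemeral_keypair()   # client side
b_secret, b_public = ephemeral_keypair()   # server side

shared_client = pow(b_public, a_secret, P)
shared_server = pow(a_public, b_secret, P)
assert shared_client == shared_server      # both sides agree on the secret

# The server's *long-term* key never enters this computation, so handing
# it over later cannot decrypt recorded sessions. Lavabit-era SSL with
# RSA key exchange lacked this property: the one private key the court
# demanded unlocked all past traffic for every user.
```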





  • Safer.

    Well, they handed over activists’ metadata to the French authorities in the past. As an E2EE provider that controls both ends by default, they are in a position where they can fuck people over. This is exactly what Snowden described as someone pointing a gun at you while saying “Relax, I am not gonna use it against you.”

    So much for safety.

    Ah, and my original point was: it is either safe or unsafe; the word “saf-er” means nothing during a genocide.





  • Have a look at this analysis. The author shows that this is a very weak response to the deeper underpinnings of the “nothing to hide” argument. After all, you cannot argue with people’s personal preferences.

    I think one way to counter it, with everything happening right now, is this: Meta can infer who is gay and/or has had an abortion and hand these predictions over to an ultranationalist secret service. So your personal indifference to privacy amounts to a genocidal police state for your fellow citizens.



  • Fancier algorithms are not bad per se. They can be ultra-productive for many purposes. In fact, we take no issue with fancy algorithms when they are published as software libraries. But then only specially trained folks can seize their fruit, and those happen to be people working for Big Tech. Now, if we had user interfaces that let the user control several free parameters of the algorithms and experience different feeds, that would be kinda nice. The problem boils down to these areas:

    • near-universal social graphs (they have nearly everyone enlisted)
    • total control over the algorithm parameters
    • inference of personal and sensitive data points (user modeling)
    • no informed consent on the part of the user
    • total behavioral surveillance (they collect every click)
    • manipulation of the feed while observing every behavioral response (essentially human-subject research for ads)
    • profiteering from all of the above while harming the user’s well-being (unethical)

    Political interference and the proliferation of fascist “ideas” become possible if and only if all of the above are in play. If you take all this destructive shit away, software that lets you explore vast amounts of data with cool algorithms through a user-friendly interface would not be bad in itself.
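    As a sketch of that idea, here is a hypothetical feed ranker whose free parameters are plain weights the user sets, so different feeds become a matter of user choice rather than platform decree. Every field name and weight below is made up for illustration.

```python
# Hypothetical sketch: a feed ranker whose free parameters (the weights)
# are exposed to the user instead of being fixed by the platform.

from dataclasses import dataclass

@dataclass
class Post:
    author_is_followed: bool
    age_hours: float
    reply_count: int

def score(post: Post, w_follow: float, w_recency: float, w_replies: float) -> float:
    """Linear score; the user chooses the weights and can compare feeds."""
    recency = 1.0 / (1.0 + post.age_hours)
    return (w_follow * post.author_is_followed
            + w_recency * recency
            + w_replies * post.reply_count)

posts = [Post(True, 2.0, 1), Post(False, 0.5, 40), Post(True, 30.0, 3)]

# "Friends first" vs "busy threads": same data, user-chosen parameters.
friends_first = sorted(posts, key=lambda p: score(p, 10.0, 1.0, 0.1), reverse=True)
busy_threads  = sorted(posts, key=lambda p: score(p, 0.0, 2.0, 1.0), reverse=True)
```

    With the “friends first” weights the fresh post from a followed author ranks top; with the “busy threads” weights the 40-reply post does — the user experiences different feeds by turning knobs the platform currently keeps to itself.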

    But you see, that is why we say that “the medium is the message” and that “television is not a neutral technology”. As a media system, television is constructed so that a few corporations can address the masses; it does not work the other way round, nor does it let people interact with their neighbors. For a brief moment, the internet promised to subvert that, until centralized social media brought the messaging back under the control of a few corporations. The current alternative is the Fediverse and P2P networks. This is my analysis.


  • If you model and infer some aspect of the user that is considered personal (e.g. de-anonymization) or sensitive (e.g. inferred sexuality) by means of an inference system, then you are in GDPR territory. Further use of these inferred data down the pipeline can be construed as unethical. If they want to be transparent about it, they have to open-source their user-modeling and decision-making system.



  • You think the Meta algorithm just sorts the feed for you? It is way more complex: it places you into some very fine-grained clusters, then decides what to show you, then collects your clicks and reactions and adjusts itself. For scale: no academic research with human subjects would ever be approved with mechanics like that under the hood. It is deeply unethical, invasive, and outright dangerous for individuals (e.g. teens’ self-esteem issues, anorexia, etc.). So “algorithm-like features” is apples to oranges here.
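    To make that show–observe–adjust loop concrete, here is a toy epsilon-greedy sketch (every name and click rate is invented for illustration) of a feed that tunes itself on clicks alone, with no consent step anywhere in the loop:

```python
# Toy sketch of the loop described above: pick content, observe clicks,
# adjust -- an epsilon-greedy bandit drifting toward whatever engages most.

import random

random.seed(0)  # deterministic for the example

topics = ["cats", "news", "outrage"]
# Simulated click-through rates; "outrage" engages most (hypothetical).
true_ctr = {"cats": 0.10, "news": 0.15, "outrage": 0.40}

shows = {t: 0 for t in topics}
clicks = {t: 0 for t in topics}

def pick_topic(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing topic, sometimes explore."""
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(topics)
    return max(topics, key=lambda t: clicks[t] / shows[t] if shows[t] else 0.0)

for _ in range(5000):
    t = pick_topic()
    shows[t] += 1
    if random.random() < true_ctr[t]:  # the user's click is the only signal
        clicks[t] += 1

# After enough rounds the feed converges on the most "engaging" topic,
# regardless of whether that is good for the user.
most_shown = max(topics, key=lambda t: shows[t])
```

    The loop never asks whether showing more of the winning topic is good for the user; it only maximizes the click signal — which is the comment’s point about uncontrolled human-subject experimentation.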
