Yeah, trap music is a good example, shit was everywhere for a while.
Indie sleaze is another.
Mumblecore movies.
“Prestige TV” literally ongoing.
I agree that anti-intellectualism is bad, but I wouldn't necessarily consider being negative on AI by default a form of anti-intellectualism. It's the same thing as people who are negative on space exploration. It's a symptom of how there seems to be infinite money for things that are fads/scams/bets, things that have limited practical use in people's lives, and ultimately not enough to support people.
That’s really where I see those arguments coming from. AI is quite honestly a frivolity in a society where housing is a luxury.
I think the need to have a shared monoculture is a deeply reactionary way of thinking that prevents us from developing human empathy. You don't need to say "Bazinga" at the same time as another person in order for you to relate to, care for, and understand strangers. I think the yearning for monoculture in people 25-40 is a mirror of boomers who complain that they cannot relate to kids anymore because nobody really believes in the pledge of allegiance or some such other "things r different" nonsense. Yeah I haven't played Hoop and Stick 3, we don't need to play the same video games to relate to each other.
It’s a crutch for a brutal culture where you are too scared to show a modicum of openness or vulnerability with other humans because deep down you need to be reassured that they won’t scam/harm you simply because they believe in the magic words of Burgerstan. People are uncomfortable with change and things they don’t know because we’ve built a society where change often begets material vulnerability in people, and information and even cultural media have become a weapon to be used against others.
Monoculture was never good, it simply was. Also, despite this being a real aesthetic trend, you should remember that the vast majority of consumer technology produced at the same time was not clear plastic tech. If anything the monoculture of tech products of that era was that gross beige that yellows in about a year or two. It's just not aesthetic enough to remember, and within 10 years everything just defaulted to black. I've actually never seen a clear plastic Dreamcast or Dreamcast controller IRL. I've been a tech guy forever, and despite knowing about it, I only know of one person who actually experienced the Dreamcast internet. This is very much nostalgia bait vs how things actually were.
To put it into perspective: for every one of those clear plastic phones, there were 10,000 of these.
I should have been more precise, but this is all in the context of news about a cutting-edge LLM trained at a fraction of the cost of ChatGPT, and comments calling it all "reactionary autocorrect" and "literally reactionary by design".
I disagree that it's "reactionary by design". I agree that its usage is 90% reactionary. Many companies are effectively trying to use it in a way that attempts to reinforce their deteriorating status quo. I work in software, so I always see people calling this shit a magic wand for the problems of the falling rate of profit and the falling rate of production. I'll give you an extremely common example that I've seen across multiple companies and industries.
Problem: Modern companies do not want to be responsible for the development and education of their employees. They do not want to pay for the development of well-functioning, specialized tools for the problems their company faces. They see it as a money and time sink. This often presents itself in the following scenarios, which I've seen pitched as AI band-aids:
Proposal: push all your documentation into a RAG LLM so that users simply ask the robot and get what they want
Reality: The robot hallucinates things that aren't there in technical processes. Attempts to get the robot to correct this result in the robot sticking to marketing-style vagaries that aren't even grounded in the reality of how the company actually works (things as simple as the robot assuming how a process/team/division is organized rather than the reality). Attempts to simply use it as a semantic search index end up linking to the real documentation, which is garbage to begin with and doesn't actually solve anyone's real problems.
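For concreteness, here is a minimal sketch of what these "push the docs into a RAG LLM" pitches usually amount to. The embed() and generate() helpers are hypothetical stand-ins for whatever embedding and chat models the vendor is selling, not any real internal tool:

```python
# Minimal RAG sketch. embed() and generate() are hypothetical stand-ins
# for an embedding model and a chat model respectively.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical: return a dense vector for the text."""
    raise NotImplementedError("plug in an embedding model here")

def generate(prompt: str) -> str:
    """Hypothetical: return the chat model's completion."""
    raise NotImplementedError("plug in a chat model here")

def build_index(chunks: list[str]) -> np.ndarray:
    # One row per documentation chunk.
    return np.vstack([embed(c) for c in chunks])

def answer(question: str, chunks: list[str], index: np.ndarray, k: int = 3) -> str:
    q = embed(question)
    # Cosine similarity of the question against every chunk.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    context = "\n---\n".join(chunks[i] for i in np.argsort(-sims)[:k])
    prompt = (
        "Answer using ONLY the context below. Say 'not documented' if unsure.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    # Nothing in this pipeline can stop the model from answering confidently
    # anyway, which is where the hallucinated processes/teams come from, and
    # the retrieval step only ever surfaces the documentation you already have.
    return generate(prompt)
```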
Proposal: We have too many meetings and spend ~4 hours a day on Zoom. Nobody remembers what happens in the meetings, nobody takes notes; it's almost like we didn't have them at all. We are simply not good at working meetings, and they're just chat sessions where the topic is the project. We should use AI features to do AI summaries of our meetings.
Reality: The AI summaries cannot capture action items correctly if at all. The AI summaries are vague and mainly result in metadata rather than notes of important decisions and plans. We are still in meetings for 4 hours a day, but now we just copypasta useless AI summaries all over the place.
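The wiring for this one is even thinner. A sketch of the usual setup, again using the hypothetical generate() helper from the RAG sketch above; the prompt is illustrative, not any specific vendor's feature:

```python
def summarize_meeting(transcript: str) -> str:
    # The typical prompt behind "AI meeting summaries".
    prompt = (
        "Summarize this meeting. List the decisions made and the action items, "
        "each with an owner and a due date.\n\n" + transcript
    )
    # A rambling 4-hour "working meeting" transcript rarely contains crisp
    # decisions or owners to extract, so the model pads the summary with
    # metadata ("the team discussed the project roadmap...") instead.
    return generate(prompt)
```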
Don’t even get me started on CoPilot and code generation garbage. Or making “developers productive”. It all boils down to a million monkey problem.
These are very common scenarios that I've seen that ground the use of this technology in inherently reactionary patterns of social reproduction. By the way, I do think DeepSeek and Doubao are an extremely important and necessary step, because they destroy the status quo of Western AI development. AI in the West is made to be inefficient on purpose because it limits competition. The fact that you cannot run models locally due to their incredible size and compute demand is a vendor lock-in feature that ensures monetization channels for Western companies. The PayGo model bootstraps itself.
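To put rough numbers on the "cannot run it locally" point, here is a back-of-envelope calculation of the memory needed just to hold model weights. The parameter counts are illustrative (7B and 70B as typical open-weight sizes, 671B as the total parameter count reported for DeepSeek-V3); KV cache and activations add more on top:

```python
# Back-of-envelope: GB of memory needed just to hold the weights.
# bytes_per_param: 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit quantization.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # n*1e9 params * bytes / 1e9 bytes-per-GB

for n in (7, 70, 671):
    print(f"{n}B params: {weight_memory_gb(n, 2):>6.0f} GB at fp16, "
          f"{weight_memory_gb(n, 0.5):>5.0f} GB at 4-bit")
# A consumer GPU tops out around 24-32 GB of VRAM, which is why only the
# smallest models (or aggressively quantized ones) run locally at all.
```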
The problems that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address.
Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren’t being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.
You’re right there are challenges, but there’s absolutely no reason to think they’re insurmountable.
Let me repeat myself for clarity. We do not have a valid general theory of mind. That means we do not have a valid explanation of the process of thinking itself. That is an insurmountable problem that isn’t going to be fixed by technology itself because technology cannot explain things, technology is constructed processes. We can use technology to attempt to build a theory of mind, but we’re building the plane while we’re flying it here.
I'd argue that using symbolic logic to come up with solutions is very much what reasoning actually is.
Because you are a human doing it, you are not a machine that has been programmed. That is the difference. There is no algorithm that gives you correct reasoning every time. In fact using pure reasoning often leads to lulzy and practically incorrect ideas.
Somehow you have to take data from the senses and make sense of it. If you're claiming this is a garbage-in garbage-out process, then the same would apply to human reasoning as well.
It does. Ben Shapiro is a perfect example. Any debate guy is. They're really good at reasoning and not much else. Like read the Curtis Yarvin interview in the NYT. You'll see he's really good at reasoning, so good that he accidentally makes some good points and owns the NYT at times. But more often than not the reasoning ends up in a horrifying place that isn't actually novel or unique, simply a rehash of previous horrifying things in new wrappers.
The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with the environment, and the same process is already being applied in robotics today.
This is a really Western-brained idea of how our biology works, because as complex systems we work on inscrutable ranges. For example let's take some abstract "features" of the human experience and understand how they apply to robots:
Strength. We don't build robots that can get stronger over time. Humans can do this, but we would never build a robot to do it; we see that as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.
Pain. We would not build a robot that experiences pain in the same way as humans. You can classify pain inputs, but why would you build a machine that can "understand" pain, where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.
drug discovery
This is mainly hype. The process of creating AI has been useful for drug discovery, but LLMs as people practically know them (e.g. ChatGPT) have not, other than the same kind of sloppy labor-cost corner-cutting bullshit.
If you read a lot of the practical applications in the papers, it's mostly publish-or-perish crap where they're gushing about how drug trials should be like going to cvs.com, where you get a robot you can ask to explain something to you and it spits out the same thing reworded 4-5 times.
They’re simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.
Neurosymbolic AI is overhyped. It's just bolting LLMs onto symbolic AI and pretending that it's a "brand new thing" (it's not, it's actually how most LLMs practically work today and have for a long time; GPT-3 itself is neurosymbolic). The advocates of this approach pretend that the "reasoning" comes from the symbolic AI side, i.e. classical AI, which still suffers from the exact same problems that it did in the 1970s when the first AI winter happened: we do not have an algorithm capable of representing the theory of mind, nor do we have a realistic theory of mind to begin with.
Not only that, but all of the integration points between classical techniques and statistical techniques present extreme challenges, because in practice the symbolic portion essentially has to trust the output of the statistical portion, since it has limited ability to validate it.
Yeah you can teach ChatGPT to correctly count the r’s in strawberry with a neurosymbolic approach but general models won’t be able to reasonably discover even the most basic of concepts such as volume displacement by themselves.
You're essentially back at the same problem, where you either lean on the symbolic aspects and limit yourself entirely to advanced ELIZA-like functionality that can just use a classifier, or you throw yourself to the mercy of the statistical model and pray you have enough symbolic safeguards.
Either way it's not reasoning, it is at best programming, if that. That's actually the practical reason why the neurosymbolic space is getting attention: the problem has effectively been to control inputs and outputs, not only for reliability/accuracy but for censorship and control. This is still a Garbage In Garbage Out process.
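The strawberry example really does boil down to routing the request to a deterministic symbolic tool whenever a hand-written pattern recognizes it, and trusting the statistical model for everything else. A toy sketch of that routing, with an illustrative regex and a hypothetical generate() stand-in for the LLM call:

```python
import re

def count_letter(word: str, letter: str) -> int:
    # The "symbolic" tool: exact, deterministic, and only handles this one task.
    return word.lower().count(letter.lower())

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the statistical model."""
    raise NotImplementedError("plug in an LLM here")

LETTER_COUNT = re.compile(r"how many (\w)'?s? (?:are )?in (?:the word )?(\w+)", re.I)

def route(question: str) -> str:
    m = LETTER_COUNT.search(question)
    if m:  # symbolic path: correct, but only for queries the pattern anticipated
        letter, word = m.groups()
        return str(count_letter(word, letter))
    # statistical path: whatever the model says, with no real way to validate it
    return generate(question)

print(route("How many r's are in strawberry?"))  # -> 3
```

Which is exactly the advanced-ELIZA problem above: the correctness lives entirely in the pattern you anticipated, not in anything the model discovered.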
FYI most of the big names in the "Neurosymbolic AI as the next big thing" space hitched their wagon to Kahneman's Thinking, Fast and Slow bullshit, which is effectively made-up bullshit like Freudianism but lamer, and has essentially been squad-wiped by the replication crisis.
Don't get me wrong, DeepSeek and Doubao are steps in the right direction. They're less proprietary, less wasteful, and broadly more useful, but they aren't a breakthrough in anything but the capitalist hoarding of technological capacity.
The reason AI is not useful in most circumstance is because of the underlying problems of the real world and you can’t algorithm your way out of people problems.
Suffer not the heretic to live.
My armor is contempt. My shield is disgust. My sword is hatred. In the Emperor’s name, let none survive.
Real grimdank hours going on here.
but.but but works so well on poor people. why isn’t working on nazis!!??!?!
Dawn of War 1. Better yet just computerize table top. I’m antisocial :(
I was a really young nerdy kid, and coming from the Soviet Union, like the only thing I cared about was computers. I was obsessed with computers since playing Doom as a kid in a cyber cafe. I got my first computer at the age of 8-9, after we had immigrated. I was about 10 years old when I was trolling AOL chat rooms by myself… and I had a lovely 640x480 web cam… and yeah. A lot of this brings up uneasy memories.
I think the horny categorization does fit me. I’m not like a gooner or anything but my partner would agree 100% with the statement: “thinks I’m about to pounce on them and always waits for me to initiate everything. Why people basically see horny as one of my personality traits.”
I don’t experience issues with non-sexual intimacy, but I wanted to let you know that you’re not alone!
I agree with all of this, but I think it acknowledges the reality of the limitations that they themselves see in their show and the ennui they're really exuding in that article. I don't disagree that the show is "stale" in terms of entertainment value. I still listen because I am a freak, and I am privileged enough to have the bullshit minutiae of the ruling classes' public letters be my entertainment. It's the same way that politics is just Hollywood for ugly people. The reality of Chapo is that its broad appeal lies where the comedy is the cope for the contemptible reality. I think for a lot of ride-or-die leftists Chapo is a staple because a large portion of the comedy comes from the contemptible reality itself. Think horror comedy (Bodies Bodies Bodies, comedy from contemptible reality) vs comedy horror (Tucker and Dale vs Evil, comedy as cope for the contemptible reality). Plenty of my more normal friends who are still left wing describe it as a "doomer" show. Plenty of these people have also realistically disengaged with politics on any "serious" level; their political understanding has shifted to that of NYT-class party-goers, where it's just required knowledge to mingle and show class virtue, but without the neoliberal underpinnings.
Amazon likely has phrenology style stats they’ve been waiting to unleash on their recruitment practices.
General Strikes are escalated to, not planned. That's why the AFL, the most idiotic union, basically banned escalation into a general strike by requiring striking locals to get national authorization or risk being kicked out of the union. This was in response to the Seattle General Strike, which happened in February 1919; the AFL amended its constitution in June 1919. Similarly, Taft-Hartley, which outlaws general strikes in the US and was passed in 1947, was a response to the Oakland General Strike of 1946.
Also lol at #3 what is this? 2012?
The reason I disagree is that "dog/cat food" implies it's something that is widely eaten culturally, a "default meal" of sorts. I don't think pate fits the bill there; most Americans cannot handle offal. Nuggies sure, but not really pate.
he believes it would actually change things if the facts came out and it was actually the CIA behind it
Ah yes the classic “force the vote” argument.
I think that criteria is a bit too loose. In the American context liver pate would qualify.
For all of Nick Mullen's faults, he is very self-aware of the absolute luck and craziness of his economic position. He is actually fairly open that he basically didn't do any real work for CumTown and only did real work for his T-shirt business. Not many people who have the same mental health problems as him can make it in life, and he really gets that. I think a lot of people who grew up in working class families have real lived experience with that kind of stuff, but people from middle/upper-middle class families tend to have this ennui that prevents them from understanding that they already live extremely well in context. They demand to be seen by the broader culture as the gifted kids they think they are. The Pendejo Time guys and the Street Fight guys have a similar working class mindset but are less "online" and shitty. From the stories I've heard from all of these people, I think the separating factor is that the Chapo-style guys' main disappointment with their parents seems to be political differences or how their parents think. For everyone else, their main disappointment with their parents is neglect/abuse due to socioeconomic status, mental health issues, etc. Brian is a great dividing example: his parents were really shitty conservatives who forced him and his siblings to live in a bad way as kids, despite having the ability to provide, and it was purely based on their political ideas. His father was a computer engineer in the US midcentury; Brian admits he was absolutely loaded but didn't spend money on his kids as a political thing to "teach them a lesson".
Stavros is secretly a hard working dude. His whole failson thing is clearly a persona. You don’t own a house in your 20’s being the son of middle class Greek immigrants without working hard. Stav clearly secures the bag, and unironically does it in the “old world” way where half the time the people he hires to help are like his friends/family.
The Chapo hosts - save for Matt - kind of seemed to be lottery winners who just lucked into their success and fortune and have ceased to grow as people as a consequence of their fame and profit. Their reference about the “failson” encapsulates their status as hosts. Just frozen… in Arrested Development.
I completely agree, but I also don’t see it as a bad thing. They are clearly ashamed of it and think society should be different, but they also do have the tendencies to play into the personal psychology of it.
That's why they tend to gush about media where the main character has already objectively won but cannot bring themselves (usually he/him) to be happy with that, because they didn't win on their terms in the exact way they wanted to win, even if winning their way/on their terms would have meant a lesser prize. That's usually the pinnacle story archetype that they will have sincere love for a piece of media for. Sopranos is a perfect example. Billions (which greatly expands the trope such that women can have this predicament too, as if Hillary Clinton isn't real) is a perfect example.
That’s always been a lib cop-out in my mind. “Do something fulfilling!”. Wow yeah uhh I live in a commodified atomized capitalist hell scape what’s fulfilling and can keep me alive? Not much.
Haha yeah. I remember the first one where it went from flat to skeuomorphic, that was OS/2 and System 6 to Windows 3 and System 7.
I just got a little fixated on the Dreamcast being representative of monoculture.