So it managed to type
ollama run llama3.1:70b
into a Linux terminal, or what’s this about?

I think the issue is that although it’s early days, if nobody builds the guardrails in at this point, will they ever? Do the people in charge of building AI even care? Their leadership seems much more interested in deregulation and rolling back safety oversight.
They do. The field is called “AI safety”, and it has been a topic of research for quite some time now. If you like YouTube, I can recommend watching Robert Miles and Computerphile. They have some videos about the science, the philosophy and the groundwork.
And since we’re talking about Llama… Meta released a whole framework to safeguard the input and output of language models, and to control and limit them. OpenAI does similar things when you try to talk about intimacy or other forbidden topics.
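The basic idea behind those frameworks is simple: run a safety check on the prompt before the model sees it, and on the reply before the user sees it. Here’s a minimal sketch of that pattern. Note this toy keyword filter is not Meta’s actual classifier (which is itself a language model); the policy list and function names here are illustrative assumptions.

```python
# Minimal sketch of an input/output guardrail around a language model.
# The real thing (e.g. Meta's Llama Guard) uses a trained safety
# classifier; this keyword check is just a stand-in for illustration.

BLOCKED_TOPICS = {"weapons", "self-harm"}  # hypothetical policy list

def violates_policy(text: str) -> bool:
    """Toy stand-in for a safety classifier: flag blocked keywords."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, model) -> str:
    """Check the prompt before the model runs, and the reply after."""
    if violates_policy(prompt):
        return "Sorry, I can't help with that."
    reply = model(prompt)
    if violates_policy(reply):
        return "Sorry, I can't share that."
    return reply

# Usage with a dummy "model" that just echoes the prompt:
echo_model = lambda p: f"You said: {p}"
print(guarded_generate("Tell me a joke", echo_model))
```

The point is that the guardrail sits outside the model: the model itself is unchanged, and the wrapper decides what gets in and out.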
I just don’t think they care about the robot apocalypse at this point. It’s still science fiction. And the large tech companies are mainly focused on profit. They always say they factor in safety and do AI responsibly. But I don’t think they care a lot, as long as they’re making money.
If you ask me, a company like OpenAI or Meta could just as well turn into Skynet. Or just turn the internet into a post-factual world of misinformation and AI spam… They simply like doing business more than bothering with ethics. But that’s true of all big tech.
This might change after the AI hype is over. But that’s just my speculation.
Except the people who were going to be in charge of regulating and watching this in the US were just fired by the dipshits who got voted into office thanks to collusion with the big tech ass hats who are now capitulating to fascism. It’s what they wanted, and they are playing ball. They think fewer safeguards give them an edge, and they don’t care what happens as long as they keep their money.
To be fair, the USA has never been good at regulating companies. That’s why it’s the incubator for big tech. Whether it’s AI, or social media platforms spying on users and selling their data, or manipulating people and confining them in filter bubbles.
And I have a few more pressing issues with the current administration of the US. With freedom and democracy being abolished, and the economy likely getting ruined (since caring only for billionaires isn’t sustainable, and the other 99.9% of people are getting poorer)… I think we have more important things to worry about than the robot apocalypse. That’s kind of whataboutism. But I think it’s true.
And we here in Europe, or the Chinese for example, aren’t particularly great at regulating AI either. It’s just an unfortunate situation. And we’d really need to regulate disruptive technology. In my opinion it’s more important to address that it’s controlled by powerful and venomous companies. And dynamics like bias in the models, dehumanization and spam trashing the world will be the decisive factors in reality. Self-replication and the sci-fi topics, not so much. But we’ll see about that. I’d certainly like to see some regulation and democratization of technology. But yeah, that’d have to be in a different country than the USA.
Can they secretly build, power and run massive datacenters as well?
The paperclip machine
Removed by mod