I do industrial automation for a living, and I just want to point out that automating things that exist purely in the digital domain is far easier than automating things like ship breaking.
Can't imagine how it could even be automated without advanced robotics. Those ships are freakin HUGE! Maybe a collection of robotic snakes with cutting lazers attached to their heads and some little scuttle bots to pick up the pieces the snakes knock off? Just cut the whole thing into 1' disks, or maybe hexagons would be better.
Maybe a collection of robotic snakes with cutting lazers attached to their heads
Upvoot for the matrix
What do you think snakes are
Just make a huge version of those supermarket bread slicer machines and feed the ships through it.
Or better yet, build a bigger ship and use it to smash the smaller ship to pieces
Ngl a ship eating ship is metal af
I just read The Three Body Problem, and I have some ideas on how it could be done.
Oh! One of my favorite books, have you read all three?
The three book problem
Not yet, but I will!
Oh, there is actually a fourth one by a different author, but with Liu Cixin's approval. It goes deeper into the Trisolaran biology than the main series. I haven't read that one yet.
I’d suggest skipping it. While it’s a fun continuation, the writer is not nearly as good as Liu Cixin. It’s definitely a fanfic
This kind of makes me curious! I didn’t finish the third book of the original trilogy …
It can be automated; it would just never be worth the cost. Every ship is different and has its own requirements.
If they were all 100% exactly the same, using the same hardware in all the same places then it would be cost effective to automate their disassembly. Otherwise every single ship is a one-off edge case.
Even if they’re mostly the same many will have had upgrades, repairs, and changes over time that could literally throw a wrench (that someone accidentally left inside an interior area) into the whole (automated) operation.
I think the best case scenario is to enforce shipbuilding standards and deny ships entry if they don't follow them (for loading/unloading, anyway). Then you set up standardized dry docks with robotic arms that are already preprogrammed to disassemble these standard vessels. They may need human guidance for some areas that are allowed to be non-conforming, but as long as the majority of the ship adheres to the standard it'd make the whole process much smoother and more environmentally friendly.
From an environmental standpoint, the real issue with these vessels isn't even the difficulty of (environmentally friendly) disassembly. It's their emissions over their working lifetime and super toxic things like anti-fouling coatings, which we have no good way to remove or dispose of. Like, even if you rip off the outside of a ship, what do you do with that toxic waste? It's nasty stuff.
Informed and informative, upvoot!
Definitely not terrifying
[…] I just want to point out that automating things that exist purely in the digital domain is far easier than automating things like ship breaking.
Not that you're saying otherwise, but isn't that even more of a reason that more developers and resources should be allocated toward automating complex and risky physical processes?
Honestly, I don’t see how you would do it without general AI, which is something that will be solved in the digital domain first anyway.
Eh, it could be done with non-general AI. There are a finite number of different types of things to handle, so as long as it’s not thrown off by some bent steel or some missing consoles, I’d be amazed if they couldn’t automate at least specific ship designs.
They still manually build ships right now; what makes you think they could automate taking one apart?
Firstly, much of shipbuilding is automated. They use robots to paint them and apply anti-fouling coatings. They also use loads and loads of automated machinery to create the steel parts that make up most of the ship. Do you think some dudes are forging rivets, beams, and pipes by hand? No, those are made by machines that make zillions of them.
Secondly, nearly every ship (even ships that seem generic, like big container ships) is a custom, one-off thing. They're all bespoke (for the most part), engineered for specific purposes and routes, and they even have "upgrades" for companies that pay extra (e.g. nicer quarters, extra antenna masts, more and special equipment mounting options, etc.).
They use robots to paint them and apply anti-fouling coatings. They also use loads and loads of automated machinery to create the steel parts that make up most of the ship. Do you think some dudes are forging rivets, beams, and pipes by hand? No, those are made by machines that make zillions of them.
The missing piece here is assembly, and disassembly is like 95% of what goes into recycling from what I understand.
Notice that my post is not talking about the present.
Automation requires very high precision/consistency in the parts you want to work on. I seriously doubt that after many years of wear, tear, and impromptu repairs, those ships would be anywhere near consistent enough.
Automation does not require very high precision, though it does require a modicum of consistency. Millions of vibratory bowl feeders with huge tolerances on their alignment mechanisms demonstrate this fact ("Damnit! A part got caught again… Gerry! Loosen that tolerance screw much farther out so that won't happen again" LOL).
In fact, they cannot automate the disassembly of cars even though their construction is highly automated. We just grind them up in a big grinder and separate the materials. So basically the same thing as with ships, just on a smaller scale.
That’s why I said, “eventually with non-general AI”.
Even a well written algorithm could work with something that’s mostly in expected shape. How in the flying fuck is everyone so brainless that they cannot understand non-general AI can still adapt to things? Fucking hell.
I'm not talking about current industry practices. I'm talking about combining existing technology with an unlimited budget to create a factory that could kinda do the task.
“Possible” and “practical” are two extremely different things, and you goons pointing out that most obvious basic fact are adding nothing.
A single repair or modification would ruin the entire automation process. One single screw off by a millimeter, that type of thing.
This depends on the system. Mechatronics engineers spend a lot of time learning and figuring out how to make systems that can withstand edge cases like an incorrect screw size. There’s whole engineering/mechatronics disciplines around automation reliability (though much of it involves lots and lots of sensors and cameras, haha).
The way they test these things is by intentionally throwing bad parts into the mix at various stages of their automation. Something like a screw being too big/small is a trivial matter that won’t make it through a system or facility designed by professionals.
The real problem that really throws a wrench into every mechatronics engineer’s carefully-planned automated masterworks is people doing things like throwing wrenches into their carefully-planned automated masterworks 😁
Why the flying fuck do you think I said, "non-general AI"? Even a well written algorithm could handle things coming in not in perfect shape, yet everyone pretends "non-general AI" means "execute instructions repeatedly without any input whatsoever."
Use your brain. Even basic dumb algorithms that can run on an Arduino can respond to input. Machine learning can easily respond to dynamic input, so stop failing to imagine the most basic of basic things I say.
Removed by mod
Removed by mod
Not that you're saying otherwise, but isn't that even more of a reason that more developers and resources should be allocated toward automating complex and risky physical processes?
You're solving for the wrong problem from the perspective of people with money investing money to solve these problems.
- Shipbreaking, while dangerous for the workers, isn't expensive, because it is done in far-flung countries with workers who have low wages, few safety protections, and long-term health consequences.
- Art and writing (for western consumption) require educated and talented people, who are expensive to employ.
People with money, looking for a return, want that return on their spending, not to reduce human suffering.
Processing the digital world is just the first step. You can’t just build a safe autonomous ship disassembly robot without making sure your algorithms are actually sound. Look at self driving cars, they’re far from being safe and acceptable. Jumping straight into this problem without testing the shit out of your code in a virtual world is a mistake.
I mean automating it would certainly be a challenge but the first step would be building tools and robotics to allow human operators to more safely and effectively manage the tasks. Then you streamline the industrialized processes. Then you think about automating things.
But this is all really an economic problem, not a technical one. Software tools have minimal resource costs (compared to building/destroying a ship) but require skilled (expensive) laborers to operate. So to cut costs in any digital field you need to get rid of the expensive laborers. Thus the push for AI to replace any computer-bound work. Physical labor is already considered dirt-cheap in our fucked society, and no one is rushing to add expensive tools in fields where disposable people will suffice.
I sympathize immensely with the OP image’s final point, but “working for the right company” isn’t going to fix it. Reorganizing society is necessary, rethinking what we culturally value and uphold.
I think the solution for ship breakers is for the job to be a highly paid respectable job with protections. In other words the technology that desperately needs to disrupt this industry is probably… unions
Unions protect against automation that reduces labor hours.
Yeah I think that’s the point. Ship breaking is apparently poorly paid so they need unions.
I'm also not sure how much scope there is for automation on tasks like this, as each shit will be different; there isn't going to be a huge amount of repeatable action.
Your autocorrect is amazing
Honestly, more unions should fight for company stock for employees or similar stake programs. As we hopefully get more automated, having workers' interests aligned against it seems like a losing fight.
Finally, some fucking sense into all of this.
Yeah exactly, I work in AI and robotics for medicine, and I'm so goddamn sick and tired of these people and their absolute god-awful uneducated takes on AI.
And this guy’s claiming to be a programmer too which makes it doubly worse because he really should know better.
It stems from people who seem to think that having the idea is the hard part, and the implementation is just a matter of time and money.
Shh. Just give one of them dancing robot dogs an impact driver attachment. They’ll figure it out in a week.
Once we perfect doing it in software, then we can graduate to hardware. Today, digital paintings; tomorrow, real paintings; next year, tear down a fucking ship!
That is a really cool job description I haven't seen pop up before! Would you mind sharing what type of things you need to automate? It sounds so interesting. I never really understood why factory line jobs should exist, for example*, because the work is dangerous, the opposite of stimulating/engaging (works for some, sure), and those are just generally depressing, unpleasant places to work. We SHOULD be striving for a world where humans don't have to do such menial, unfulfilling work.
*Very superficially speaking; I do understand all the nuance that makes it continue to be necessary and exist.
I work in the auto industry, so programming the machines that make the car parts. Humans are still involved because getting machines to handle changing conditions is very slow, expensive, and still winds up unreliable in a lot of cases. The simple process of picking a randomly oriented part up out of a bin and placing it accurately on a fixture is actually very difficult for a machine to do, when compared to how easily a human can accomplish the exact same task.
And yet, people do… https://www.leviathan.eu/
This is kind of a dumb argument, isn’t it?
I have to imagine someone centuries ago probably complained about inventors wasting their time on some dumb printing presses so smart people could write books and newspapers better when they could have been building better farm tools. But could we have developed the tractor when we did if we were still handwriting everything?
Progress supports progress. Teaching computers to recognize and reproduce pictures might seem like a waste to some people, but how do you suppose a computer will someday disassemble a ship if it is not capable of recognizing what the ship is and what holds it together? Modern AI is primitive, but it will eventually lead to autonomous machines that can actually do that work intelligently without blindly following an instruction set, oblivious to whatever might be actually happening around it.
The argument isn’t against the technology, it’s against the application of that technology.
Path of least resistance. It is harder to build a robot that can disassemble ships with its hands than it is to pattern match together pictures.
This XKCD comes to mind: https://xkcd.com/1425/
Someone doing it automatically doesn’t stop you from doing it manually. It’s still a meritless argument.
This isn’t even close to what they’re saying. It’s closer to complaining about how the Yankees replaced their star pitcher with a modified howitzer.
It’s not about people “wasting their time on some dumb invention,” it’s about how that useful invention is being used to replace jobs that people actually like doing because it’ll save their bosses money. It’s not even like when photography was invented or Photoshop came out and people freaked out about artists being put out of work, because those require different skill sets and opened up entirely new fields of art while also helping optimize other fields. This stuff could improve the fields that they’re created for by helping people optimize their workflow to make the act of creating things easier. But that’s not what they’re doing. It’s being used to mimic the skills of the people who enjoy doing these things so that they don’t have to pay people to do it.
Even ignoring the ethical/moral aspect of this stuff being trained without permission on the work of the people it’s designed to replace, the end goal isn’t to increase the quality of life of people, allowing us more time to do the things we love - things like, you know, art and writing - it’s to make the rich even richer and push people out of well-paying jobs.
The closest example I can think of is when Disney fired all their 2d animators and switched to 3d. They didn’t do it because 3d was better. In many ways, the quality was much worse at the time. But 2d animators are unionized and 3d animators aren’t, so they could get away with paying them much less. The same exact thing happened with the practical effects vs. digital effects guys in Hollywood right around the same time.
Society has always been losing jobs; the population just pivots to other specialisations. The only reason we fear it is because of our economic system that preys on it and turns it into profit, but that's another conversation entirely.
On the subject of losing creative venues, both your examples (photography and Photoshop) show how technology didn't detract from the arts but added to them, letting the average person do much more. The same will be true for AI; I can see an inevitable boom happening in the filmmaking and animation industry, not to mention comic books and most of all indie gaming. In the long run it's empowering for the individual, imo.
The economic system is what he’s talking about here. That was my point. The entire conversation from the side against this stuff has always been about the economic situation of it. Without that factor, I think the only thing people would care about is whether or not their work is being used without their permission/maliciously.
As for Photoshop and photography, that's actually why I brought those up specifically. Because they were feared as things that would destroy artists' jobs and actually brought about entirely new fields of art - and also because they're the two things people bring up when arguing against LLMs replacing people's jobs, acting like they're just some Luddites afraid of science.
Right now, the way I see it with AI is that there are 2 distinct groups benefiting from it: those whose workflow has been improved by the use of AI, and those who think AI can get them the result of work without having to either do the work themselves or pay somebody else to do it. And thanks to the economic issues that are at the heart of this whole thing, that second group is set to shrink the number of people who can spend time creating things, simply because they now have to work a job that isn't creating things and no longer have the time to put towards that. So I can see AI creating a whole new art boom or a bust in equal measure. That second group is of concern to the art communities as well because they only see the destination and don't see that the journey is just as important to the act of creation, and that is already causing schisms between artists and "prompters" who think that they're just as skilled because they used a generator to make some cool stuff. People are already submitting unedited, prompted work to art and writing competitions.
I get the sentiment, but it's a bad example. Transformer models don't recognize images in any useful way that could be fed to other systems. They also don't have any capability of actual understanding or context. Heavily simplifying here, tokenisation of inputs allows them to group clusters of letters together into tokens, so when it receives tokens it can spit out whatever the training data says it should. The only actual things that are improving greatly here which could be used in different systems are natural language processing, natural language output and visual output. EDIT: Crossed out stuff that is wrong.
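For the tokenisation bit, here's a rough sketch of what that looks like in practice, assuming OpenAI's tiktoken library (any BPE tokenizer would do; the encoding name is just the one GPT-4-class models use):

```python
# Rough sketch: how text gets chopped into tokens before a model ever sees it.
# Assumes `pip install tiktoken`; "cl100k_base" is the GPT-4-class encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Automating shipbreaking is harder than generating images."

token_ids = enc.encode(text)
# Show which cluster of characters each token id maps back to.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```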
Well, this is simply incorrect. And confidently incorrect at that.
Vision transformers (ViTs) are an important branch of computer vision models that apply transformers to image analysis and detection tasks. They perform very well. The main idea is the same: by tokenizing the input image into smaller chunks, you can apply the same attention mechanism as in NLP transformer models.
ViT models were introduced in 2020 by Dosovitskiy et al. in the hallmark paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" (https://arxiv.org/abs/2010.11929), a work that has received almost 30,000 academic citations since its publication.
So claiming transformers only improve natural language and vision output is straight up wrong. They are also widely used in visual analysis, including classification and detection.
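To make the "image patches as words" idea concrete, here's a minimal sketch of the patch-tokenization step, assuming PyTorch (layer sizes are illustrative only, and the class token and positional embeddings from the real ViT are omitted):

```python
# Minimal ViT-style patch tokenization sketch (not the paper's exact architecture).
import torch
import torch.nn as nn

patch_size = 16
img = torch.randn(1, 3, 224, 224)  # dummy image: (batch, channels, H, W)

# Cut the image into non-overlapping 16x16 patches: (1, 3, 14, 14, 16, 16)
patches = img.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
# Flatten into a sequence of 196 patch "tokens", each 3*16*16 = 768 values long
patches = patches.contiguous().view(1, 3, -1, patch_size, patch_size)
patches = patches.permute(0, 2, 1, 3, 4).flatten(2)  # (1, 196, 768)

# A linear projection turns each patch into an embedding, just like a word
# embedding, which a standard transformer encoder can then attend over.
embed = nn.Linear(3 * patch_size * patch_size, 768)
tokens = embed(patches)

layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
out = nn.TransformerEncoder(layer, num_layers=2)(tokens)
print(out.shape)  # torch.Size([1, 196, 768])
```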
Thank you for the correction. So hypothetically, with millions of hours of GoPro footage from the scuttle crew, and if we had some futuristic supercomputer that could crunch live data from a standard definition camera and output decisions, we could hook that up to a Boston Dynamics style robot and have it replace one member of the crew?
And such is the march of progress.
Huh? Image AI to semantic formatting, then consumption, is trivial now.
Could you give me an example that uses live feeds of video data, or feeds the output to another system? As far as I’m aware (I could be very wrong! Not an expert), the only things that come close to that are things like OCR systems and character recognition. Describing in machine-readable actionable terms what’s happening in an image isn’t a thing, as far as I know.
No live video, no; that didn't seem to be the topic.
But if you had the horsepower, I don’t think it’s impossible based on what I’ve worked with. It’s just about snipping and distributing the images, from a bottleneck standpoint
No live videos
Well, that’d be a prerequisite to a transformer model making decisions for a ship scuttling robot, hence why I brought it up.
Describing in machine-readable actionable terms what’s happening in an image isn’t a thing, as far as I know.
It is. That’s actually the basis of multimodal transformers - they have a shared embedding space for multiple modes of data (e.g. text and images). If you encode data and take those embeddings, you suddenly have a vector describing the contents of your input.
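CLIP-style models are the textbook example. A rough sketch using the Hugging Face transformers library (the checkpoint name is just a common public one, and the image path is hypothetical):

```python
# Sketch: text and images encoded into one shared embedding space (CLIP).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("hull_section.jpg")  # hypothetical photo from the scuttle crew
labels = ["a ship's hull", "a pile of scrap metal", "a supermarket bread slicer"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image = similarity of the image embedding to each text embedding
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.2f}")
```

The embeddings themselves (model.get_image_features / model.get_text_features) are the "vector describing the contents of your input" part, and that's what you'd feed to a downstream system.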
“AI” researcher here. The only reason there are models that can “write” and “create art” is because that data is available for training. Basically people put massive amounts of digital text and images on the Internet and the companies scraped all of it to train the models. If there were big enough datasets for ship building, that would happen too…
Besides, what the guy is yapping about is 80% a robotics problem, not an AI problem. It's apples and oranges.
He's essentially saying: why can Will Smith finally eat pasta normally while we still don't have the robotic workforce from the 2004 Will Smith movie "I, Robot"?
He’s a programmer, why doesn’t he stop working on aligning buttons on web applications and work on shipbuilding robots!?!?
Not really. You would still need to, you know, build drones or automated factories to actually perform the salvaging. But the point is that nobody DID, because capitalism values profit over human life. Nobody who “matters” is interested in solving that problem.
Actually that's not true at all, there's lots of interest in robotics (check out Boston Dynamics), but it's a really, really hard problem. The main issue is developing a controlling intelligence sophisticated enough to be able to use the robot to do a diverse range of tasks. The actual physical, mechanical building of the robot isn't that hard.
Of course the way you get that controlling intelligence is AI. So he is complaining about people developing a solution to the problem he’s demanding that they solve. He’s not happy because they’re not magically skipping steps.
This idiot wants fully sapient robots without developing AI in the first place, not sure how on earth he expects that to happen.
This is correct, why is it being downvoted?
I think you're underestimating the mechanical and chemistry problems that still need to be solved before autonomous robots can perform a task like ship salvage effectively. There's a very good reason that basically all industrial robots spend their lives plugged into a wall socket.
I feel like maybe you didn't understand the point I was making, which was that the hardest part is the AI, since everything else is easy: once you have the AI, you automatically have all your other problems solved.
Ah, you're right, Boston Dynamics' Spot robot tours entire factories and refineries all while being plugged into a wall socket!
Not true. We have capable robots now. See Boston Dynamics like the other commenter said. Plus we have had industrial robots making cars and stuff forever now. To make robots that can handle a wide variety of things (every ship is bound to be different) is hard and we don’t have data to train such models (see reinforcement learning, imitation learning, “sim2real” problem etc)
I get the sentiment, but that is a really dumb take. Software automation is a hell of a lot easier than creating robotic automation to disassemble ships of all shapes and sizes. That’s why art automation has been done, and industrial freighter recycling automation has not been.
How would that even be possible? Presumably, you’d need to break the ships down into pieces first, and even then, you’ll be dealing with huge numbers of oddly shaped and sized components of varying materials. It makes a lot more sense to have people do that, though it is likely very dangerous.
Seems more like a job for unions and workplace safety regulations than for robots
more like a job for unions and workplace safety regulations
Yes. That’s why they do these things in third world countries. The people there are cheaper than robots will ever be.
3th
threeth.
When teaching about programming languages that are zero indexed, I avoid the word “first” because it is ambiguous and instead use “zeroth” and “oneth.”
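A toy Python illustration, since this thread is already about "third" vs "threeth":

```python
# Zero-indexed: the "zeroth" element comes before the "oneth".
worlds = ["first world", "second world", "third world"]
print(worlds[0])  # zeroth element -> "first world"
print(worlds[2])  # the "threeth"... er, index two -> "third world"
```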
I get the sentiment, but that is a really dumb take.
$13B invested in OpenAI feels more and more like malinvestment and graft, incentivized by our disastrous energy policy and enormous tech subsidies.
This isn't purely software automation. It's also an investment in physical media and machines, new or renovated energy infrastructure, and enormous volumes of potable water.
Seems more like a job for unions and workplace safety regulations than for robots
In 2020, a leaked company memo detailed Amazon’s use of a new technology — the geoSPatial Operating Console (SPOC) — to analyze and visualize data sets pertaining to threats to the company, including unions. Reported by Jason Del Rey and Shirin Ghaffary at Vox, some of the data points related to unions include:
- Amazon-owned Whole Foods' market activism and unionization efforts.
- Flow patterns of union grant money.
- The presence of local union chapters and alt labor groups.
The approach is an obvious attempt by the company to use more passive means of identifying and neutralizing union sympathizers in the company.
“Amazon’s tracking of workers’ micro-movements, decision points and searches and then linking all of that data to that of unions, community groups and legislative policy campaigns is union busting on its face,” said Stuart Appelbaum, President of the Retail, Wholesale and Department Store Union (RWDSU) in a statement at the time.
That is very true, but my critique was more focused on the difference between automating software tasks vs mechanical tasks, especially with non-uniform inputs and not the economic investment required. Some tasks are better suited to automation - and plagiarizing art is far easier than deconstructing and recycling massive industrial freighters.
Not on the side of the AI art generators here - that was just low-hanging fruit compared to something like what was suggested in the original post. Definitely need extremely strong labor law to protect against AI union busting (and union busting generally).
my critique was more focused on the difference between automating software tasks vs mechanical tasks
Somewhat paradoxically, we’ve been much more successful automating mechanical tasks than digital ones. We’ve had steam looms and automotive assembly plants far longer than server farms and super computers.
And I might argue this kind of automation has been far more fruitful. I can point to a lot more in my daily life that has benefited from the industrialization of steel and plastic fabrication than what I’ve received from Google Search Results.
To say the millions of man-hours and trillions of dollars sunk into the advertisement and entertainment industries couldn’t be put to better use… Come on, man. The latest Marvel movie wasn’t so good that I wouldn’t have traded it for a globalized 1980s British NHS.
deleted by creator
I think you absolutely nailed the analysis. Another small point to keep in mind is that, for Microsoft, all the investment in OpenAI comes back as a revenue figure, since the system operates on top of the Azure platform.
Software automation being easier seems like a reason to not have so many people doing it, then? Like, the harder problem is the one that could really use all of the focus?
But the harder problems aren’t as obviously profitable for a large number of tech CEOs, and they’re not ripe for being a “winning glittery ticket” for a large number of comp sci students looking to be the next big thing in Silicon Valley.
Making art and writing just happens to be easy to automate with neural networks and machine learning, neither of which was originally researched for the purpose of replacing artists and writers.
Good luck disassembling a ship with a neural network. And maybe do some research about the difficulties of application-specific robotics.
I think it is just a matter of where you put resources. I am sure that if you put resources into improving ship recycling, some advancements will be made (it probably won't be done using neural networks).
But that’s true of everything. This guy is explicitly angry about AI not being used in ship decommission, which is just weird.
It’s not about the ships.
Shipbreaking is the author’s example, but it’s not the author’s point.
He could have bemoaned the lack of tree-trimming robots or the vaporware nature of self-driving cars instead.
The key point is the heavy investment in automating away things that bring us joy while doing nothing about vast classes of unpleasant drudgery.
Hell, look at roofers. A lot of injuries there are from falls, easily preventable with fall harnesses. It doesn’t even require a big research investment! Our society simply doesn’t value those lives enough to protect them.
No, they are developing autonomous systems to solve pretty much every possible problem; it's just that these are easy problems, so they're the ones getting automated first. Make no mistake, they will come for every job.
Define art, though.
As it stands, neural networks and LLMs can't do it, because they lack imagination. A human can use it as a tool to make art though, and we don't have these silly kinds of conversations about Photoshop (anymore!).
As for the OP, you've taken it a bit more literally and reacted a bit more defensively than I think is warranted. The point is about our system's priorities, not so much the specifics.
I have a feeling if we performed a lobotomy-like surgery on someone that eliminated their imagination and told them to just put paint on a canvas, you’d still call that art.
I would, at least. There’s some subjectivity to the definition of art and what people think has artistic value.
Of course. That’s the point; it’s subjective, and yet we have people declaring that AI/LLM output isn’t art.
I miss when Lemmings actually replied why they were downvoting…
I hate this take because I dream of a world where AI can assist any storyteller in bringing their story to life.
The rest is just capitalism. Capitalism is the issue, not the AI.
Human behavior is the issue, capitalism is just one system into which it may develop.
Nah both are
You know you’re not required to use AI, right?
Well good luck telling that to companies
Ah yes, just write code for the ship to fold itself neatly back into reusable materials.
Just build a grinder the size of a football stadium to shred battleships into pea-sized chunks, and sort according to metal type, how hard can it be?
It might be more cost effective to build a concrete bunker the size of a football stadium, use placed explosives to blow up the ship inside of the bunker, and then shred the exploded ship up into pea-sized chunks
Teach me how to code that and which compiler will spit out the football stadium grinder
Teach me how to code a compiler to spit out fully realistic and accurate videos via text prompts.
Just drop the ship off the conveyor onto a bar. The good ships will bounce higher, and the bad ones won’t. Problem solved.
Sarcasm aside, this is how they sort cranberries and where the expression “raising the bar” comes from. The higher the bar is set, the tighter the constraints on which cranberries will bounce onto the “good” conveyor.
I actually had to look this up, why are you spreading misinformation?
The idiom “raise the bar” came into use around 1900 and comes from the sport of track and field. The high jump event and the pole vault event both involve raising a crossbar incrementally to see how high the participants can jump or pole vault.
Because I grew up in cranberry country and that’s what I had always been told. I’m not surprised to find this out though, because that makes a lot of sense.
OP: “We’ve tragically gone down a path of quantifying and min-maxing every aspect of existence, including creativity and the value of human life.”
Comments: “OP clearly doesn’t understand the comparative efficiency of the ROI here.”
Irony so thick you can cut it.
He chose a poor way of conveying his message.
Disappointed programmer here. I thought I could automate farming so that people wouldn’t die of hunger. Now I realise that if you automate farming, it would just make some CEO more money because his company now makes corn syrup and destroys rural communities even faster.
I got my "contract not renewed" from the Fortune 500 B2B CRM company I worked for.
I can try to bust my ass to make my 2018 laptop try to render images I can’t draw, which does give me some pleasure. It’s not the AI tool’s fault humanity sucks, it’s the goddamn people with money.
This sort of ignores the fact that the advances in that technology are widely applicable to all tasks; we literally just started with text and image generation because:
- The training data is plentiful and basically free to get your hands on
- It's easy to verify it works
LLMs will crawl so that ship breaking robots can run.
He’s ignoring it because he’s not complaining about the tech, but the way it’s being used. Instead of being used to make it easier for artists and writers to do their jobs, it’s being used to replace them entirely so their bosses don’t have to pay them. It’s like when Disney switched to 3d animation. They didn’t do it because the tech was better and made the job easier. They did it because 2d animators are unionized and 3d animators aren’t, so they could pay the new guys less.
And these are the kinds of jobs people actually want - to the point where they don’t pay anywhere near as well as they should because companies can exploit people’s passion for what they do.
Imagine a world of construction workers and road crews, but no civil engineers, architects, or city planners. Imagination and creativity automated away in the name of the almighty profit margin.
Yep and when we invented mechanical computers, we put human computers out of the job.
When we invented the automatic loom we put weavers out of the job.
When we invented electric lights we put lamplighters out of the job.
When we invented digital art we put many brushmakers, canvas makers, paint makers out of the job.
This is the cost of progress.
Second this.
We're in the first days, and every day I add a new model or tech to my reading list. We're close to talking to our CPUs. We're building these stacks. We're solving the memory problems. You don't need RAG with a million tokens of context, the Gorilla model can talk with APIs, most models are great at Python (which is versatile as fuck), and I can see the singularity on the horizon.
Try Ollama if you want to test things yourself.
Use GPT4 if you want to get an inkling of the potential that’s coming. I mean really use it.
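For a quick local test, something like this works against Ollama's REST API (a rough sketch; it assumes the Ollama server is running on the default port and you've already pulled a model, e.g. llama3):

```python
# Minimal query against a locally running Ollama server.
import json
import urllib.request

payload = {
    "model": "llama3",  # whatever model you've pulled locally
    "prompt": "In one sentence, why is shipbreaking hard to automate?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```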
Oh no. You can’t do it for fun now because the computers are doing it.
It's a stupid thing to be angry about, because AI isn't about making art; it's just that making art is a good benchmark, because it's very visual and you can easily see at a glance how much more advanced one AI is than another.
You really think that mega corporations are interested in art? If that was all AI could be used for no one would be researching it.
It’s not necessarily for fine arts, but for cheap content generation.
For example, it can generate fairly accurate 3d models for environments and secondary characters without paying hundreds of people to do this manually. It can generate videos from text prompts without hours of human labor for filming, editing, post-producing, etc.
This shit again?
The tasks AI is replacing only require powerful computers and internet access.
If you want to make that comparison, to scrap fucking ships using AI, you need a robot that the AI can control.
Or what else do you want to do? Putting a fucking computer server that is running some ship scrapping AI in the middle of a shipyard and see if it magically grows arms?
No, I'm not denying we have an issue with this fucking capitalism (with and without AI), but stop comparing "software" tasks with other tasks that would require specialized machinery/robots.
Isn’t the point that we don’t bother looking into those specialized machines and tools because why bother when we can just throw meatbags at it?
There are plenty of people working on automation for manual tasks, but it's a really hard problem. Making machines that can move around freely and are as compact as humans is really hard. Automation works really well on assembly lines, where parts can move to the machine.
But this is total nonsense because those tools are getting developed and have huge budgets. Many of them are already on the market and in use, especially remote control cutting tools.
Far more money has been invested in self-driving and ambulatory robotics than in image gen; it just so happens image gen is far easier than walking or using a saw.
GPT-5 is coming around October, and I think it'll likely be the version that is able to effectively create task-based workflows, so it'll be able to set up simulation training to evolve kinematic solutions within a framework, which is basically the thing we need robots to be able to do. When that's possible you can expect to see a big boom in multi-use robotics.
we don’t bother looking into those specialized machines and tools
Yes we do. It’s just a hell of a lot more expensive, and a lot more difficult.
The image generator AIs are a byproduct of image recognition AIs, they’re related.
Image recognition development is fundamental to advanced industrial automation. We’re getting there, the media is just not covering that part because it’s more fun to write an article about stupid computers thinking we have 6 fingers than about false positives dropping 2% because of some new development
I remember years ago everyone was saying that art would probably be the last thing AI would be able to handle and menial jobs would probably be the first.
Now look at where we are!
It’s because people like to think that art is some unique human ability. They never really explain why they think this, they just say it.
But really it’s just about looking at the world and creating representations of it in various styles. None of which is some ineffable thing. It’s all electrons moving around a system at the end of the day, it is all physical. If it is physical, then it can be simulated.
They never really explain why they think this, they just say it.
But- muh 'magination! Robo cain’t do dat!
The robot dystopia will not be caused by evil AI enslaving humanity.
No matter how advanced or how self aware, AI will lack the ambition that is part of humanity, part of us due to our evolutionary history.
An AI will never have an opinion, only logical conclusions and directives that it is required to fulfil as efficiently as possible. The directives, however, are programmed by the humans who control these robots.
Humans DO have ambitions and opinions, and they have the ability to use AI to enslave other humans. Human history is filled with powerful, ambitious humans enslaving everyone else.
The robot dystopia is therefore a corporate dystopia.
I always roll my eyes when people invoke Skynet and Terminator whenever something uncanny is shown off. No, it’s not the machines I’m worried about.
Have you met people with opinions? A lot of their opinions consist of preprogrammed responses that you could train a bot to regurgitate.
No matter how advanced or how self aware, AI will lack the ambition that is part of humanity, part of us due to our evolutionary history.
The ambition isn't the issue. It's a question of power imbalance.
The Paperclip Maximizing Algorithm doesn’t have an innate desire to destroy the world, merely a mandate to turn everything into paperclips. And if the algorithm has enough resources at its disposal, it will pursue this quixotic campaign without regard for any kind of long term sensible result.
The robot dystopia is therefore a corporate dystopia.
There is some argument that one is a consequence of the other. It is, in some sense, the humans who are being programmed to maximize paperclips. The real Roko's Basilisk isn't some sinister robot brain, but a social mythology that leads us to work in the factories that make the paperclips, because we've convinced ourselves this will allow us to climb the Paperclip Company Corporate Ladder until we don't have to make these damned things anymore.
Someone screwed up if a paperclip maximiser is given the equipment to take apart worlds, rather than a supply of spring steel
That's the beauty of it. The maximizer would understand that creating a machine that breaks apart worlds would maximize paperclip output. It will be a "natural" progression.
We’re not even close to artificial general intelligence, so I’d like to see if you have anything to substantiate this claim.
(Not saying it’s far fetched, though, just that it seems silly to be so sure at this point in time.)
According to this guy, only one thing is allowed to happen at a time. Sorry all, LLMs are the only option. Nothing else.
I don't see Google, Twitter, Facebook, Nvidia, and Alibaba working on AIs more than the ones designed to replace humans for content generation, and I don't see money from anyone else of that size going into such projects either.
Then you should take a better look, because most of those companies are researching AI for tasks far beyond content generation - Google and NVIDIA for example have been doing a lot of research on AI for robotics.
https://www.nvidia.com/en-us/research/ai-playground/
This is the most public place where Nvidia discusses their AI projects, and none of these are robotics. Admittedly we also have models that are replacing engineers as well as artists, but I still don't see where they're advertising their robotics work.
This is the most public place where Google discusses their projects. Again, no discussion of robotics.
They very well could still be doing robotics work, but I don’t care if they are because they haven’t advertised it to the public and tried to get us excited about it anywhere near the level they have all advertised their generative AIs.
I honestly don't care about the extent to which they're investing in one application of AI or the other. I care about the culture war these companies are waging against us, trying to make us all okay with AI-generated content that displaces humans from doing the work they enjoy so that they can make money. If they're making robots with AI too, why aren't they talking about it nearly as much?
They are not advertising those things because they are still in development, and can’t (yet) be turned into a product.
You have very weird expectations on this topic and are moving the goalposts.
Do you have the source for that? I'm curious to read up on those.
Sure, here are a couple examples, each with the respective Two Minute Papers episode I saw them in:
- Google: Inner Monologue (Two Minute Papers)
- Google: Learning Agile Soccer Skills for a Bipedal Robot (Two Minute Papers)
- NVIDIA: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters (Two Minute Papers)
- NVIDIA: What and Where Pathways for Robotic Manipulation (Two Minute Papers)
I never moved the goalposts, you misinterpreted my comment to make it easier to rebut.
I said we aren’t seeing them work on any kind of AI MORE than we’re seeing them work on generative AI for content creation. That much is true, and you haven’t disproven it.
Am I understanding you correctly that you’re now claiming you meant “the companies don’t put more work into non-content generating AIs than into content generating AIs” instead of “the companies don’t work on AIs apart from content generating AIs”?
If so, it’s weird that you didn’t initially clarify, and instead shifted your point to “they are not talking about it enough”.
Well you’ll notice that’s what my initial comment says, so yes, obviously that’s what I always meant. My evidence for that statement which you tried to refute was that they aren’t talking about it at ALL when you do a cursory search for their current AI projects. So it clearly can’t be the primary focus. I’m not sure what you’re having such a hard time understanding