First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses: the idea that ‘rationalists’ are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that any violent act needed to stop AI from being developed is justified.

The flaw here is that there are 8 billion people alive right now, and we don’t actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying “fuck em”. This could only be worth it if you somehow knew trillions of people were going to exist, had a low future discount rate, and so on. That seems deeply flawed, and it seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can’t solve - robotics, continuous learning, module reuse, the things needed to reach a general level of capability and for AI to do many but not all human jobs - are near-future solvable. I can link deepmind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, then since building robots is a task human technicians and other workers can already do, a form of Singularity is possible. Maybe not the breathless utopia of Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are “boomer” forums I know of where they generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect 300k to edit javascript and drive Teslas*.

I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robots, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition will happen before 2040, in which most human jobs we have now are replaced by AI systems?

  5. Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non-software-engineers, but given these will be advanced decision-making systems controlling robots, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

    • naevaTheRat@lemmy.dbzer0.com · 1 year ago

      I don’t really see much likelihood of a singularity, though. There’s probably a bunch of useful shit you could work out if you analysed the right extant data in the right way, but there are huge amounts of garbage data where it’s not obvious that it’s garbage.

      My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

      Physics is a bitch and there are just sort of limits on how awesome technology can be. Maybe I’m wrong, but it seems like digital intelligence would be more useful for stuff like finding new antibiotics than making flying nanomagic fabricator paperclip drones.

      • BrickedKeyboard@awful.systems (OP) · 1 year ago

        My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

        That’s right. Eliezer’s LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

        However, you could…build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this just extends it.
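
        To make “procedural formula” concrete, here’s a minimal sketch of the kind of loop I mean (the parameter grid and the evaluate() stand-in are hypothetical; in reality that step is the physical experiment a robot would run):

        ```python
        import itertools

        # Hypothetical procedural experiment generator: enumerate a grid of
        # wing-design parameters, hand each candidate to an automated
        # evaluator, and rank the results.

        def evaluate(candidate):
            # Stand-in for the real measurement a robot would perform
            # (wind-tunnel run, binding assay, etc.); here it's a toy score.
            span, chord, sweep = candidate
            return -((span - 8.0) ** 2 + (chord - 1.2) ** 2 + (sweep - 15.0) ** 2)

        spans = [6.0, 7.0, 8.0, 9.0]      # metres
        chords = [0.8, 1.0, 1.2, 1.4]     # metres
        sweeps = [0.0, 10.0, 15.0, 20.0]  # degrees

        candidates = list(itertools.product(spans, chords, sweeps))
        best = max(candidates, key=evaluate)
        print(f"tried {len(candidates)} designs, best: {best}")
        ```

        Scale the grid up to a million rows and hand evaluate() to a fleet of robots, and that’s the loop I’m describing.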

        • skillissuer@discuss.tchncs.de · edited · 1 year ago

          For example you might try […] a million molecules that bind to a target protein

          well not millions but tens of thousands, yes we have that, it’s called high throughput screening. it’s been around for some time

          have you noticed some kind of medical singularity? is every human disease curable by now? i don’t fucking think so

          that’s because you’re automating glorified liquid transfer from eppendorf A to eppendorf B, followed by a simple measurement like fluorescence. you still have to: 1. actually make all of this shit and make sure it’s pure and what you ordered, then 2. design an experiment that will tell you something you can measure, and be able to interpret it correctly, then 3. be sure that you’re doing the right thing in the first place, like not targeting the wrong protein (more likely than you think), and then 4. when you have some partial result, latch onto it and improve it piece by piece, making sure that it will actually get where it needs to, won’t shred the patient’s liver instantly, and so on (more likely than you think)

          while 1 is at the initial stages usually subcontracted to poor sods at an entity like enamine ltd, 1 and 4 are infinite career opportunities for medicinal/organic chemists, and 2 and 3 for molecular biologists, because all AI attempts at any of that that i’ve seen were spectacular failures, and the only people satisfied with them were the people who made these systems and published a paper about them. especially 4 is heavily susceptible to garbage-in, garbage-out situations, and putting AI there only makes matters worse

          is HTS a good thing? if you can afford it, it relieves you from the most mind-numbing task out there; if you can’t, you still do all of this by hand (it seems to me that it escapes you that all of this shit costs money). is this a new thing? also no. since the 90s you can buy an automated flash chromatography column: it’s a box where you put dirty compound in one tube and get purified compound in other tubes. guess what took me all of yesterday? yes, flash columns by hand, because my uni doesn’t have the budget for that. would my paper come out faster if i had a combiflash? maybe. would it be any better if i had 5? no, because all the hard bits aren’t automated away: shit breaks all the time, things work differently than you think, and sometimes that’s exactly what makes something noticeable, and so on and so on

          • skillissuer@discuss.tchncs.de · 1 year ago

            and btw if you try to bypass all of that real-world, non-automatable effort, just wing it and do it all in silico, that is, simulate binding of an unspecified compound to some protein, it gets even worse: the search space is absurdly large, molecular mechanics + some qm method where it matters scales poorly, and then in the absence of real-world data you get some predictions, scored by some number, that give you the illusion of certainty but are entirely wrong
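
            (a tiny illustration of how a huge search space plus a noisy score manufactures that illusion: score a million candidates that are all, by construction, worthless, and the “best” one still looks spectacular. purely hypothetical numbers:)

            ```python
            import random

            random.seed(0)
            # every candidate has true affinity zero; the "docking score"
            # is pure noise drawn from a standard normal distribution
            scores = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]
            print(f"top 'hit': {max(scores):.2f} sigma above the mean")
            # with a million candidates the winner lands around 5 sigma up
            # by chance alone - looks like a great hit, means nothing
            ```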

            i’ve seen this happen in real time over some months: this shit was quietly buried and removed from the website, and the real thing was pieced together by humans, based on real-world data acquired by other humans. yet the company still claims to be “ai-powered”. it probably has something to do with ai bros holding the money in that place

          • BrickedKeyboard@awful.systems (OP) · 1 year ago

            Do you think the problems you outlined are solvable even in theory, or must humans slog along at the current pace for thousands of years to solve medicine?

            • skillissuer@discuss.tchncs.de · edited · 1 year ago

              rapid automated drug development != solving medicine. while the first would be a good thing, these are not remotely similar: the first is partially an engineering problem, the other requires much more theory-building

              solving medicine would be more of a problem for biologists, and biology is a few orders of magnitude harder to simulate than chemistry. from my experience with computational chemists, this shit is hard, scales poorly (like n^7), and because of the large search space its predictive power is limited. if you try to get out of the wet lab despite all of this and simulate your way to utopia, you get into rapidly compounding garbage-in, garbage-out issues, and that’s in the fortunate case where you know what you are doing, that is, when you are sure that you have the right protein at hand. this is the bigger problem, and it requires lots of advanced work from biologists. sometimes it’s an interaction between two proteins, sometimes you need some unusual cofactor (like cholesterol in the membrane region for MOR, which was discovered fairly recently), some proteins have unknown functions, there are orphan receptors, and some signalling pathways are little known. this is also far from given, and more likely than you think: https://www.science.org/content/blog-post/how-antidepressants-work-last good luck automating any of that
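
              (to put “scales like n^7” in perspective, a quick back-of-the-envelope, with the exponent taken at face value and the numbers purely illustrative:)

              ```python
              # toy arithmetic: if compute cost grows like O(n^7), modest
              # increases in system size n blow up the bill very fast
              def relative_cost(n, n0=10, exponent=7):
                  return (n / n0) ** exponent

              for n in (10, 20, 40, 80):
                  print(f"n={n:2d} -> {relative_cost(n):>9.0f}x the cost of n=10")
              # doubling n costs 2^7 = 128x; n=80 is already ~2,000,000x
              ```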

              that said, sane drug development has the benefit of providing new toys for biologists, so that even if a given compound would shred a patient’s liver, it might be fine for some cell assay. some of the time, that makes their work easier

              as a chemist i sometimes say that in some cosmic sense chemistry is solved, that is, when we want to go from point A to point B we don’t beat the bush wildly but instead, most of the time, there’s some clear first guess that works, some of the time. this seems to be a controversial opinion, and even i became less sure of it sometime halfway through my phd, partially because i’ve found counterexampleS

              there’s a reason why drug development takes years to decades

              i’m not saying that solving medicine will take thousands of years, whatever that even means. things are moving rapidly, but any advancement that makes it go even faster will come from biologists, not from you or any other AI bros

              • skillissuer@discuss.tchncs.de · 1 year ago

                going off on a tangent with this antidepressant thingy: if this paper holds up and it’s really how things work under the hood, we have a situation where for 40 years people were dead wrong about how antidepressants work, and only now do they know. turns out, all these toys we give to biologists are pretty far from perfect and actually hit more than intended; for example, all antidepressants in clinical use hit some other, now-apparently-unimportant target + TrkB. this is more common than you think: some receptors, like sigma, catch about everything you can throw at them, and there are also orphan receptors with no clear function that maybe catch something, and we have no idea. even such a simple compound as paracetamol works in a formerly unknown way; now we have a pretty good guess that the active agent is really a cannabinoid, and paracetamol is a prodrug for it. then there are very similar receptors that are just a little bit different but do completely different things, and sometimes you can even differentiate the same protein on the basis of whether it’s bound to some other protein or not. shit’s complicated, but we’re figuring it out

                catching this difference was only possible by using tools - biological tools - that were almost unthinkable 20 years ago, and it sits far outside that “just think about it really hard and you’ll know for sure” school of thought popular at LW, even if you offload the “thinking” part to chatgpt. my calculus prof used to warn: please don’t invent new mathematics during the exam; maybe some of you can catch up on and surpass 3000 years of mathematical development in a 2h session, but it’s humbler not to try, and to learn what was done in the past beforehand (or something to that effect. it was a decade ago)

    • earthquake@lemm.ee · 1 year ago

      You know, I thought that moving sneerclub onto lemmy meant we probably would not get that familiar mix of rationalists, heterodox rationalists, and just-left-but-still-mired-in-the-mindset ex-rationalists that swing by and want to quiz sneerclub. Maybe we’re just that irresistible.

    • David Gerard@awful.systems (mod) · 1 year ago

      from 2011-2013 i had these guys emailing me directly about roko’s basilisk, because lesswrong had banned discussion of it and rationalwiki was the only place even mentioning it

      now they work hard to seek us out even here

      i hope the esteemed gentleposter realises that there are no recoverable good parts and it’s dumbassery all the way down sooner rather than later, preferably before posting again

        • BrickedKeyboard@awful.systems (OP) · edited · 1 year ago

          It would be lesswrongness.

          Just to spell out where the gap is:

          1. lesswrongers think powerful AGI systems that can act on their own against humans will soon exist, and will be able to escape to the internet.
          2. I work in AI and think powerful general AI systems (not necessarily the same as AGI) will exist soon, but that if built well they will be unable to act against humans without orders, and unable to escape or do many of the other things lesswrongers claim.
          3. You believe AGI of any flavor is a very long way away, beyond your remaining lifespan?
          • PJ Coffey@mastodon.ie · 1 year ago

            @BrickedKeyboard @gnomicutterance

            I think Timnit Gebru nailed it when she pointed out that we can’t define Intelligence, which means we can’t scope it, which means we can’t build it.

            The cult of IQ tests rests on a foundation of science trying to prove that:

            A) races are real and have real, heritable differences in intelligence

            and

            B) a general intelligence, g, exists.

            It has done quite solid work proving that neither of those things is true. Unintentionally, but still.

      • Evinceo@awful.systems · 1 year ago

        Maybe we could make an explicit sub-lemmy for indulging in maladaptive debating. It’s my guilty pleasure.

            • froztbyte@awful.systems · 1 year ago

            Shit, I’ll sell this

            You should see how well I can scale it! Huge! Biggest /dev/null ever!

            (Sorry for the brief trumping, I guess I’m still happy that the proudboys are eating shit and it’s on my mind)

      • naevaTheRat@lemmy.dbzer0.com · 1 year ago

        Jesus fuck. Idk about no good parts; the bits that are unoriginal are sometimes interesting (e.g. the distance between model and reality, metacognition being useful sometimes, etc). It would just be more useful if they, like, produced reading lists instead of pretending to be smort

      • BrickedKeyboard@awful.systems (OP) · edited · 1 year ago

        Hi David. The reason I dropped by was that the whole concept of knowing the distant future with so much certainty seemed like a deep flaw, and I have noticed lesswrong itself is full of nothing but ‘cultist’ AI doomers. Everyone kinda parrots a narrow range of conclusions, mainly about imminent AGI killing everyone, and this, ironically, doesn’t seem very rational…

        I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted. So I was trying to differentiate between:

        A. This is a club of smart people, even smarter than the lesswrongers, who can’t see the flaws!

        B. This is a club of… well, the reason I called it boomers was that I felt the current news and AI papers make each of the questions I asked a reasonable and conservative outcome. For example, posters here are saying for (1), “no it won’t do 25% of the jobs”. That was not the question; it was 25% of the tasks. Since, for example, Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, from my perspective this is reasonable. The rest of the questions build on (1).

        • Evinceo@awful.systems · 1 year ago

          I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted.

          LW isn’t looking for technical practical solutions. They want plausible sci-fi that fits their narrative. Actually solving the problems they worry about would mean there’s no reason for the cult to exist, so why would they upvote that?

          Overall, LW seems to have been dead wrong about predicting modern AI systems. They anticipated that there was this general intelligence quality that would enable problem solving, escape, instrumental convergence, etc. However, what ended up working was approximating functions really hard. The existence of ChatGPT without a singularity is a crisis for LW. No longer can they safely pontificate and write Harry Potter/The Culture fanfiction; now they must confront the practical reality of the monsters under their bed looking an awful lot more like dust bunnies.