• FierySpectre@lemmy.world · 4 months ago

      Using AI for anomaly detection is nothing new, though. I haven’t read any article about this specific ‘discovery’, but work like this usually uses a completely different technique from the kind of AI that comes to mind when people think of AI these days.
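The "boring" anomaly detection alluded to above is often just statistics. A toy sketch (made-up sensor readings, nothing from the article): flag anything more than two standard deviations from the mean; no neural network, let alone an LLM, involved.

```python
import statistics

# Toy anomaly detector: flag readings more than 2 standard deviations
# from the mean. All numbers are made up for illustration.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 25.0]  # one obvious outlier
mu = statistics.mean(readings)
sigma = statistics.stdev(readings)

anomalies = [x for x in readings if abs(x - mu) / sigma > 2]
print(anomalies)  # [25.0]
```

Real systems use fancier detectors, but the shape of the problem is the same: model "normal", flag deviations.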

      • Johanno@feddit.org · 4 months ago

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern-recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 4 months ago

          Say it is a predictive llm

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.

          or a pattern recognition model.

          Much better term, IMO, especially since it uses a convolutional network. But the article is a news publication, not a serious academic paper; the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is), and otherwise we wouldn’t be here talking about it.
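For a sense of what a pattern-recognition/convolutional model actually computes, here is a minimal hand-rolled sketch (tiny made-up image and a single fixed kernel; a real CNN like the paper's stacks and *learns* thousands of these):

```python
# A single 2x2 convolution over a 3x3 "image": slide the kernel and take
# dot products. This vertical-edge kernel responds where the left and
# right columns differ, the basic pattern-matching step a CNN is built from.
image = [
    [0, 0, 9],
    [0, 0, 9],
    [0, 0, 9],
]
kernel = [[-1, 1],
          [-1, 1]]

out = [
    [sum(kernel[di][dj] * image[i + di][j + dj]
         for di in range(2) for dj in range(2))
     for j in range(2)]
    for i in range(2)
]
print(out)  # [[0, 18], [0, 18]] -> strong response at the edge
```

The high values mark where the "pattern" (a vertical edge) occurs; nothing about this involves language or text.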

          • FierySpectre@lemmy.world · 4 months ago

            Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘ChatGPT’ (i.e. transformer-based) is probably a good thing.

            • wewbull@feddit.uk · 4 months ago

              OP is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, overly broad, and generally not desirable in a technical publication.

              • FierySpectre@lemmy.world · 4 months ago

                Op is not saying this isn’t using the techniques associated with the term AI.

                Correct, but that’s also not what I was replying about. I said that using AI in the headline here is very much correct. It is, after all, a paper using AI to detect something.

          • spechter · 4 months ago

            Stop calling it that, you’re scaring the venture capital

        • 0laura@lemmy.world · 4 months ago

          It’s a good term; it refers to lots of things. There are many terms like that.

            • 0laura@lemmy.world · 4 months ago

              The word “program” refers to even more things, and no one says it’s a bad word.

            • GetOffMyLan@programming.dev · 4 months ago

              It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, which is one of the oldest AI techniques around.

              It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.

              • wewbull@feddit.uk · 4 months ago

                If people consistently misunderstand it, it’s a bad term for communicating the concept.

                • GetOffMyLan@programming.dev · 4 months ago

                  It’s the correct term though.

                  It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.

          • Ephera · 4 months ago

            The problem is that it refers to so many (and constantly changing) things that, in the end, it doesn’t refer to anything specific. You can replace the word “AI” in any sentence with the word “magic” and it says basically the same thing…

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 4 months ago

        Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.

        From the conclusion of the actual paper:

        Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

        If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.

        • FierySpectre@lemmy.world · 4 months ago

          For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

          The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, i.e. nothing special in the training step). edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step

          As a different commenter mentioned, the data collection is largely the interesting part here.

          I’ll admit I was wrong about my first guess as to the network topology used, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
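For reference, the "merely combining the results" baseline described above can be as simple as averaging the two models' outputs in logit space. A hypothetical sketch (both logit values are made up, and per the appendix the actual paper learns the combination during training):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

image_logit = 1.2         # assumed output of the image DL model
risk_factor_logit = -0.4  # assumed output of a TCv8-style regression

# Naive late fusion: average the logits, then squash to a probability.
hybrid_prob = sigmoid((image_logit + risk_factor_logit) / 2)
print(round(hybrid_prob, 3))  # 0.599
```

No joint training, no learned weighting; that simplicity is exactly why it would be "nothing special in the training step".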

          • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 4 months ago

            They don’t go in depth about how they combine the two for the hybrid model

            Actually they did; it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there’s enough info to replicate this if we had the data.

            Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.

          • errer@lemmy.world · 4 months ago

            ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.

        • llothar · 4 months ago

          I skimmed the paper. As you said, they made a ML model that takes images and traditional risk factors (TCv8).

          I would love to see comparison against risk factors + human image evaluation.

          Nevertheless, this is the AI that will really help humanity.

    • SomeGuy69@lemmy.world · 4 months ago

      It’s really difficult to clean that data. In another case, they kept the markings on the training data, and the result was that the scans of those who had cancer had a doctor’s signature on them, so the AI could always tell the cancer images from the non-cancer ones by the presence or absence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.

    • earmuff@lemmy.dbzer0.com · 4 months ago

      That’s the nice thing about machine learning: it sees nothing but whatever correlates. That’s why data science is such a complex topic; you don’t spot errors this easily. Testing a model is still very underrated, and usually there is no time to test one properly.

  • superkret@feddit.org · 4 months ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

  • ALoafOfBread · 4 months ago

    Now make mammograms not $500, without a 6-month waiting time, and available to women under 40. Then this’ll be a useful breakthrough.

      • ALoafOfBread · 4 months ago

        Oh for sure. I only meant in the US, where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries.

        • Instigate@aussie.zone · 4 months ago

          For reference, here in Australia my wife has been asking to get mammograms for years now (she’s in her 30s), and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US.

    • Mouselemming@sh.itjust.works · 4 months ago

      Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

  • cecinestpasunbot · 4 months ago

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    • Maven (famous)@lemmy.zip · 4 months ago

      Another big thing to note: we recently had a different but VERY similar headline about a model that found typhoid early and could point it out more accurately than doctors.

      But when they examined the AI to see what it was doing, it turned out it was weighting the specs of the machine used to do the scan… An older machine means the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.
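The failure mode described above (shortcut learning) is easy to reproduce with toy numbers. In this made-up sketch, positives only ever come from old scanners in the training data, so a rule that looks at nothing but scanner age scores perfectly, until a clinic upgrades its machine:

```python
# Hypothetical data: (scanner_age_years, has_typhoid). In the training
# set, the disease happens to co-occur with old equipment.
train = [
    (15, True), (12, True), (14, True),
    (2, False), (3, False), (1, False),
]

def shortcut(age):
    return age > 10  # "diagnose" from equipment alone

train_acc = sum(shortcut(a) == y for a, y in train) / len(train)

# Same population later, but the old scanner was replaced:
after_upgrade = [(1, True), (2, True)]
new_acc = sum(shortcut(a) == y for a, y in after_upgrade) / len(after_upgrade)
print(train_acc, new_acc)  # 1.0 0.0 -- every post-upgrade case is missed
```

The model never learned anything about the disease, only about the proxy, which is why its accuracy collapses the moment the proxy changes.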

      • KevonLooney@lemm.ee · 4 months ago

        That’s actually really smart. But that info wasn’t given to doctors examining the scan, so it’s not a fair comparison. It’s a valid diagnostic technique to focus on the particular problems in the local area.

        “When you hear hoofbeats, think horses not zebras” (outside of Africa)

        • chonglibloodsport@lemmy.world · 4 months ago

          AI is weird. It may not have been given the information explicitly; instead, it could be an artifact in the scan itself due to the different equipment. For example, if one scan was lower resolution than the others and you resized all of the scans to the size of the lowest one, the AI might be picking up on resizing artifacts that are not present in the lower-resolution scan.

          • KevonLooney@lemm.ee · 4 months ago

            I’m saying that info is readily available to doctors in real life. They are literally in the hospital and know what the socioeconomic background of the patient is. In real life they would be able to guess the same.

          • Maven (famous)@lemmy.zip · 4 months ago

            The manufacturing date of the scanner was actually saved as embedded metadata in the scan files themselves. None of the researchers considered that to be a factor until after the experiment, when they found it was THE thing the model looked at.

      • Tja@programming.dev · 4 months ago

        It’s still quite a statement that it had a better detection rate than doctors.

        What’s more important: saving lives or not offending people?

        • Maven (famous)@lemmy.zip · 4 months ago

          The thing is, though… it had a better detection rate ON THE SAMPLES THEY HAD, but because it wasn’t actually detecting anything other than wealth, there was no way for them to trust it would stay accurate.

          • Tja@programming.dev · 4 months ago

            Citation needed.

            Usually detection rates are given on a new set of samples; on the samples used for training, the detection rate would be 100% by definition.

            • 0ops@lemm.ee · 4 months ago

              Right, there’s typically separate “training” and “validation” sets for a model to train, validate, and iterate on, and then a totally separate “test” dataset that measures how effective the model is on similar data that it wasn’t trained on.

              If the model gets good results on the validation dataset but less good on the test dataset, that typically means it’s “overfit”. Essentially, the model started memorizing frivolous details specific to the validation set that, while they do improve evaluation results on that specific dataset, do nothing for (or even hurt) the results on the test set and other datasets that weren’t part of training. Basically, the model failed to abstract what it’s supposed to detect, only managing good results in validation through brute memorization.

              I’m not sure if that’s quite what’s happening in Maven’s description though. If it’s real, my initial thoughts are an unrepresentative dataset plus failing to reach high accuracy to begin with. I buy that there’s a correlation between machine specs and positive cases, but I’m sure it’s not a perfect correlation. Like Maven said, old areas get new machines sometimes. If the model’s accuracy was never high to begin with, that correlation may just be the model’s best guess. Even though I’m sure it would always take machine specs into account as long as they’re part of the dataset, if actual symptoms correlate more strongly with positive diagnoses than machine specs do, then I’d expect the model to evaluate primarily on symptoms, and thus be more accurate. Sorry, this got longer than I wanted.
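The overfitting-by-memorization idea above can be caricatured in a few lines: a "model" that just memorizes (input, label) pairs is perfect on data it has seen and reduced to a constant guess on anything new (all data here is random noise, so there is nothing real to learn):

```python
import random

random.seed(0)
data = [(random.random(), random.randint(0, 1)) for _ in range(100)]
train, test = data[:80], data[80:]

# "Training" = brute memorization of (input -> label) pairs.
memory = {x: y for x, y in train}

def predict(x):
    return memory.get(x, 0)  # unseen input: fall back to a constant guess

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
print(train_acc)  # 1.0 on seen data; test_acc is only roughly chance
```

That gap between seen-data accuracy and held-out accuracy is exactly what a separate test set exists to expose.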

              • Tja@programming.dev · 4 months ago

                It’s no problem to have a longer description if you want to get nuance. I think that’s a good description and fair assumptions. Reality is rarely as black and white as reddit/lemmy wants it to be.

            • Maven (famous)@lemmy.zip · 4 months ago

              What if one of those lower economic areas decides that the machine is too old and they need to replace it with a brand new one? Now every single case is a false negative because of how highly that was rated in the system.

              The data they had collected followed that trend but there is no way to think that it’ll last forever or remain consistent because it isn’t about the person it’s just about class.

              • Tja@programming.dev · 4 months ago

                The goalposts have been moved so far that I need binoculars to see them now.

    • Vigge93@lemmy.world · 4 months ago

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • ColeSloth@discuss.tchncs.de · 4 months ago

      Not at all, in this case.

      A false positive of even 50% can mean telling the patient “they are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s a 2 percent chance per year, so a machine with a 50% false-positive rate for a 5-year prediction would still only be telling something like 15% of women to be screened more often.

    • CptOblivius@lemmy.world · 4 months ago

      Breast imaging already relies on a high false-positive rate. False positives are way better than false negatives in this case.

      • cecinestpasunbot · 4 months ago

        That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.

        • CptOblivius@lemmy.world · 4 months ago

          Nearly 4 out of 5 cases that progress to biopsy are benign, and nearly 4 times as many are called back for additional evaluation. The false-positive rate is quite high compared to other imaging. It is designed that way, to decrease the chance of a false negative.

          • cecinestpasunbot · 4 months ago

            The false-negative rate is also quite high; it will miss about 1 in 5 women with cancer. The reality is that mammography is just not all that powerful as a screening tool. That’s why the criteria for who gets screened, and how often, have been tailored to try to ensure the benefits outweigh the risks. Although it is an ongoing debate in the medical community exactly what those criteria should be.

    • ???@lemmy.world · 4 months ago

      How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? If anything, false negatives are the ones that worry me the most.

      • cecinestpasunbot · 4 months ago

        It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

        Let’s say you have 10,000 patients. 10 have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5% that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis in those 10 patients who have cancer.
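The arithmetic above, spelled out (numbers taken straight from the comment, with sensitivity assumed perfect):

```python
patients = 10_000
true_cases = 10
sensitivity = 1.0          # assume the test catches all 10
false_positive_rate = 0.05

true_positives = true_cases * sensitivity
false_positives = (patients - true_cases) * false_positive_rate

# Of everyone flagged, what fraction actually has cancer?
ppv = true_positives / (true_positives + false_positives)
print(round(false_positives), round(ppv, 3))  # ~500 false positives, PPV ~0.02
```

With a rare condition, even a small false-positive rate means roughly 98% of positive results are wrong, which is the base-rate problem driving the screening-age guidelines.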

      • MonkeMischief@lemmy.today · 4 months ago

        Well it’d certainly benefit the medical industry. They’d be saddling tons of patients with surgeries, chemotherapy, mastectomy, and other treatments, “because doctor-GPT said so.”

        But imagine being a patient getting physically and emotionally altered, plunged into irrecoverable debt, distressing your family, and it all being a whoopsy by some black-box software.

        • ???@lemmy.world · 4 months ago

          That’s a good point about burdening the system, but why would you ever put someone on chemotherapy based on the model described in the paper? It seems more like it could burden the system by increasing the number of patients doing more frequent screening. Someone has to pay for all those doctor-patient meeting hours, for sure. But the benefit outweighs this cost (which in my opinion is good and cheap, since it prevents future treatment at later stages that are expensive).

          • KevonLooney@lemm.ee · 4 months ago

            Biopsies are small but still invasive. There’s risk of infection or reactions to anesthesia in any surgery. If 100 million women get this test, a 5% false positive rate will mean 5 million unnecessary interventions. Not to mention the stress of being told you have cancer.

            5 million unnecessary interventions means a small percentage of those people (thousands) will die or be harmed by the treatment. That’s the harm that it causes.

          • MonkeMischief@lemmy.today · 4 months ago

            You have a really good point too! Maybe just giving an indication of higher risk, and saying “Hey, screening more often couldn’t hurt,” might actually be a net positive, and it wouldn’t warrant such extreme measures unless the cancer was positively identified by (hopefully) human professionals.

            You’re right, though: there always seems to be more demand than supply for anything medicine-related. Not to mention that, here in the U.S. for example, needless extra screenings could also heavily impact a lot of people.

            There’s a lot to be considered here.

  • yesman@lemmy.world · 4 months ago

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

    • CheesyFox@lemmy.sdf.org · 4 months ago

      Good luck reverse-engineering millions if not billions of seemingly random floating-point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.

      Under no circumstance should we accept a “black box” explanation.

      Go learn at least the basic principles of neural networks, because this sentence alone makes me want to slap you.

      • thecodeboss@lemmy.world · 4 months ago

        Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

      • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

        Hey look, this took me like 5 minutes to find.

        Censius guide to AI interpretability tools

        Here’s a good thing to wonder: if you don’t know how you’re black box model works, how do you know it isn’t racist?

        Here’s what looks like a university paper on interpretability tools:

        As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

        Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

        Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.

        Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.
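For a flavor of what the simplest of those interpretability tools do, here is a toy perturbation (occlusion) probe: change one input at a time and watch how much the output of a black-box model moves. The "model" here is a made-up stand-in:

```python
def black_box(features):
    # Stand-in for an opaque model; the probe below never peeks inside.
    w = [0.9, 0.05, 0.05]
    return sum(wi * f for wi, f in zip(w, features))

baseline = [1.0, 1.0, 1.0]
base_out = black_box(baseline)

# Zero out each feature in turn; a big output shift = an important feature.
importance = []
for i in range(len(baseline)):
    occluded = list(baseline)
    occluded[i] = 0.0
    importance.append(abs(base_out - black_box(occluded)))

print([round(v, 2) for v in importance])  # feature 0 dominates
```

Real tools (occlusion maps, SHAP-style attributions) are more sophisticated, but this is the core trick: you can interrogate a model you cannot read.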

        • Tja@programming.dev · 4 months ago

          “Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

          • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago
            4 months ago

            A single drop of water contains billions of molecules, and yet, we can explain a river. Maybe you should try applying yourself. The field of hydrology awaits you.

            • Tja@programming.dev · 4 months ago

              No, we cannot explain a river, or the atmosphere. Hence weather forecasts are only good for a few days, and even after massive computer simulations, aircraft/cars/ships still need tunnel testing and real-life testing, because we can only approximate the real thing in our models.

              • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

                You can’t explain a river? It goes downhill.

                I understand that complicated things frighten you, Tja, but I don’t understand what any of this has to do with being unsatisfied when an insurance company denies your claim and all they have to say is “the big robot said no… uh… leave now?”

    • CheeseNoodle@lemmy.world · 4 months ago

      IIRC it recently turned out that the whole black-box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

      • Johanno@feddit.org · 4 months ago

        Well, in theory you can explain how the model comes to its conclusion. However, I’d guess that only 0.1% of “AI engineers” are actually capable of that, and they probably cost 100k per month.

        • CheeseNoodle@lemmy.world · 4 months ago

          This one’s from 2019: Link. I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s that they could have made them interpretable from the beginning and chose not to, likely due to liability.

      • Tryptaminev@lemm.ee · 4 months ago

        It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources, and of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.

    • MystikIncarnate@lemmy.ca · 4 months ago

      IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts needed to even start explaining how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

      • homura1650@lemm.ee
        link
        fedilink
        English
        arrow-up
        21
        ·
        4 months ago

        The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

        For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

        In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

        An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
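        One partial answer to that open problem is model-agnostic attribution, e.g. permutation importance: scramble one input feature and measure how much accuracy drops. A hedged sketch (scikit-learn assumed, its bundled tabular dataset standing in for real medical data; this is far from a full explanation of a model):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# "How much does performance drop if we shuffle this feature?"
# gives a rough, global ranking of what the model relies on.
result = permutation_importance(clf, data.data, data.target,
                                n_repeats=5, random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(data.feature_names[i], round(result.importances_mean[i], 3))
```

This yields a feature ranking, not a per-case explanation, which is why the problem is still considered unsolved for billion-parameter models.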

        • Tryptaminev@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          ·
          4 months ago

          Thank you for giving some insights into ML, which is now often just branded “AI”. Just one note, though. There are many ML algorithms that do not employ neural networks. They don’t have billions of parameters. Especially in binary-choice image recognition (looks like cancer or not), approaches like support vector machines achieve great results, and they have very few parameters.
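          To make the parameter-count point concrete, here is a hedged sketch of a linear SVM on a binary “cancer or not” task (scikit-learn assumed; its tabular breast-cancer dataset with 30 features stands in for image features):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# A linear SVM learns one weight per input feature plus one bias:
# 30 features -> 31 parameters, not billions.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, y)

svm = clf.named_steps["svc"]
print("parameters:", svm.coef_.size + svm.intercept_.size)
```

The weights themselves are also directly inspectable, which is part of why such models are easier to interpret.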

          • 0ops@lemm.ee
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 months ago

            Machine learning is a subset of Artificial intelligence, which is a field of research as old as computer science itself

            The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field’s long-term goals.[16]

            https://en.m.wikipedia.org/wiki/Artificial_intelligence

    • reddithalation@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      4 months ago

      Our brain is a black box, and we accept that (and control the outcomes with procedures, checklists, etc.).

      It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.

  • Wilzax@lemmy.world
    link
    fedilink
    English
    arrow-up
    44
    arrow-down
    1
    ·
    4 months ago

    If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate, have doctors check the negative results only. If the AI sees something then it’s DEFINITELY worth a biopsy. Then have a human doctor check the negative readings just to make sure they don’t let anything that’s worth looking into go unnoticed.

    Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.
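    The routing logic described above can be sketched as a tiny triage function (entirely hypothetical thresholds and function name, not anything from the paper):

```python
# Hypothetical triage: route scans based on the model's probability score.
def triage(ai_prob, low=0.05, high=0.95):
    """Decide who reviews a scan given the model's cancer probability."""
    if ai_prob < low:
        # Model is confident the scan is clean -> saves radiologist time.
        return "clean: no human review needed"
    if ai_prob > high:
        # Model is confident something is there -> fast-track follow-up.
        return "flagged: fast-track for human confirmation and biopsy"
    # Everything in between gets the usual human read.
    return "uncertain: human radiologist reviews"

print(triage(0.01))
print(triage(0.50))
print(triage(0.99))
```

The safety of such a scheme hinges entirely on how the `low` threshold compares to the human false-negative rate, which is the commenter’s condition.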

    • Match!!@pawb.social
      link
      fedilink
      English
      arrow-up
      21
      ·
      4 months ago

      an image recognition model like this is usually tuned specifically to have a very low false negative (well below human, often) in exchange for a high false positive rate (overly cautious about cancer)!
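      That tuning is typically done by choosing the decision threshold after training. A sketch with synthetic scores (illustrative numbers, not from any real screening model):

```python
import numpy as np

# Synthetic model scores: higher = more cancer-like.
rng = np.random.default_rng(0)
scores_pos = rng.normal(0.7, 0.15, 1000)   # scans with cancer
scores_neg = rng.normal(0.3, 0.15, 9000)   # healthy scans

# Pick the threshold that flags ~99% of true cancers,
# accepting whatever false-positive rate that costs.
target_sensitivity = 0.99
threshold = np.quantile(scores_pos, 1 - target_sensitivity)

sensitivity = (scores_pos >= threshold).mean()
false_positive_rate = (scores_neg >= threshold).mean()
print(f"threshold={threshold:.2f} "
      f"sensitivity={sensitivity:.3f} FPR={false_positive_rate:.3f}")
```

With overlapping score distributions, pushing sensitivity toward 1 inevitably drags the false-positive rate up, which is exactly the “overly cautious” behavior described above.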

    • Railing5132@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      4 months ago

      This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones. A second look to be a safety net for tired, overworked, or outdated eyes.

    • UNY0N@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      4 months ago

      Nice comment. I like the detail.

      For me, the main takeaway doesn’t have anything to do with the details though, it’s about the true usefulness of AI. The details of the implementation aren’t important, the general use case is the main point.

    • ilinamorato@lemmy.world
      link
      fedilink
      English
      arrow-up
      24
      ·
      4 months ago

      It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.

      • bluewing@lemm.ee
        link
        fedilink
        English
        arrow-up
        5
        ·
        4 months ago

        The hypesters and grifters do not prevent AI from being used for truly valuable things even now. In fact, medical uses will be one of the things that WILL keep AI from just fading away.

        Just look at those marketing wankers as a cherry on the top that you didn’t want or need.

        • medgremlin@midwest.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          4 months ago

          People just need to understand that the true medical uses are as tools for physicians, not “replacements” for physicians.

          • bluewing@lemm.ee
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 months ago

            I think the vast majority of people understand that already. They don’t understand what all those gadgets are for anyway. Medicine is largely a “black box” or magical process to them as it is.

            • medgremlin@midwest.social
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              There are way too many techbros trying to push the idea of turning ChatGPT into a physician replacement. After it “passed” the board exams, they immediately started hollering about how physicians are outdated and too expensive and we can just replace them with AI. What that ignores is the fact that the board exam is multiple choice, and a massive portion of medical student evaluation is on the “art” side of medicine: taking the history and performing the physical exam that the question stem provides for the multiple-choice questions.

              • bluewing@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                4 months ago

                And it has gone exactly nowhere either hasn’t it. Nor do those techbros want the legal and moral responsibilities that come with an actual licence to pass the boards.

                • medgremlin@midwest.social
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  4 months ago

                  I think there are some techbros out there with sleazy legal counsel that promises they can drench the thing in enough terms and conditions to relieve themselves of liability, similar to the way that WebMD does. Also, with healthcare access the way it is in America, there are plenty of people who will skim right past the disclaimer telling them to go see a real healthcare provider and just trust the “AI”. Additionally, there are enough slimy NP professional groups pushing for unsupervised practice that they could just sign on their NP licenses for prescriptions, and the malpractice laws currently in place would be difficult to enforce depending on outcomes and jurisdictions.

                  This doesn’t get into the sowing of discord and discontent with physicians that is happening even without these products existing in the first place. Even the claims that an AI could potentially, maybe, someday sorta-kinda replace physicians makes people distrust and dislike physicians now.

                  Separately, I have some gullible classmates in medical school that I worry about quite a lot, because they’ve bought into the line that ChatGPT passed the boards, so they take its hallucinations as gospel and argue with our professors’ explanations as to why the hallucination is wrong and the correct answer on a test is correct. I was not shy about admonishing them and forcefully explaining how these “generative AIs” are little more than glorified text predictors, but the allure of easy answers, without having to dig for them and understand complex underlying principles, is very strong, so I don’t know if I actually got through to them or not.

        • ilinamorato@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          The hypesters and grifters do not prevent AI from being used for truly valuable things even now.

          I mean, yeah, except that the unnecessary applications are all the corporations are paying anyone to do these days. When the hype flies around like this, the C-suite starts trying to micromanage the product team’s roadmap. Once it dies down, they let us get back to work.

      • Cethin@lemmy.zip
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        edit-2
        4 months ago

        Also, for GPU prices to come down. Right now the AI garbage is eating a lot of the GPU production, as well as wasting a ton of energy. It sucks. Right as the crypto stuff started dying out we got AI crap.

          • Cethin@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            4 months ago

            You missed that we were talking about the useless AI garbage, didn’t you? I guess humans can also put out garbage…

          • ilinamorato@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 months ago

            GPU price hikes are causing problems outside of the gaming industry, too. Imaging, scientific research, astronomy…

            • Tja@programming.dev
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              Might be, but I somehow don’t picture an astronomer complaining about GPU prices on lemmy…

              • ilinamorato@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                4 months ago

                There are actually a ton of people in research and academia on here.

                Or at least there were. I don’t know what the current state of the Lemmy community is.

      • Tja@programming.dev
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 months ago

        Those are going to make a ton of money for a lot of people. Every 1% fuel efficiency gained, every second saved in an industrial process, it’s hundreds of millions of dollars.

        You don’t need AI in your fridge or in your snickers, that will (hopefully) die off, but AI is not going away where it matters.

        • ricecake@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          ·
          4 months ago

          Well, AI has been in those places for a while. The hype cycle is around generative AI which just isn’t useful for that type of thing.

          • Tja@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            4 months ago

            I’m sure if Nvidia, AMD, Apple and co. create NPUs or TPUs for gen AI, those can also be used in those places, improving them along the way.

            • ricecake@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              4 months ago

              Why do you think that?

              Nothing I’ve seen with current generative AI techniques leads me to believe that it has any particular utility for system design or architecture.

              There are AI techniques that can help with such things, they’re just not the generative variety.

              • Tja@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                4 months ago

                Hardware for faster matrix/tensor multiplication leads to faster training, thus helping. More contributors to your favorite python frameworks leads to better tools, thus helping. Etc.

                I am aware that chatbots don’t cure cancer, but discarding all the contributions of the last two years is disingenuous at best.

        • ilinamorato@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          4 months ago

          Those are going to make a ton of money for a lot of people.

          Right, but not any one person. The people running the hype train want to be that one person, but the real uses just aren’t going to be something you can exclusively monetize.

          • Tja@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 months ago

            Depends how you define “a ton” of money. Plenty of startups have been acquired for silly amounts of money, plenty of consultants are making bank, and many executives are cashing big bonuses for successful improvements using AI…

            • ilinamorato@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              I define “a ton” of money in this case to mean “the amount they think of when they get the dollar signs in their eyes.” People are cashing in on that delusion right now, but it’s not going to last.

        • ricecake@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          3
          ·
          4 months ago

          It’s a money saver, so its profit model is all wonky.

          A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
          A hospital, as a place that helps people, will still want to use these scans widely because “ignoring preventative care to profit off long term treatment” is a bit too “mask off” even for the US healthcare system and doctors would quit.

          Insurance companies, however, would pay just shy of the cost of treatment to avoid paying for treatment.
          So the cost will rise to be the cost of treatment times the incidence rate, scaled to the likelihood the scan catches something, plus system costs and staff costs.

          In a sane system, we’d pass a law saying capable facilities must provide preventative screenings at cost where there’s a reasonable chance the scan would provide meaningful information and have the government pay the bill. Everyone’s happy except people who view healthcare as an investment opportunity.

          • ilinamorato@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 months ago

            A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.

            I believe this idea was generally debunked a little while ago; to wit, the profit margin on cancer care just isn’t as big (you have to pay a lot of doctors) as the profit margin on mammograms. Moreover, you’re less likely to actually get paid the later you identify it (because end-of-life care costs for the deceased tend to get settled rather than being paid).

            I’ll come back and drop the article link here, if I can find it.

            • ricecake@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              2
              ·
              4 months ago

              Oh interesting, I’d be happy to be wrong on that. :)

              I figured they’d factor the staffing costs into what they charge the insurance, so it’d be more profit due to a higher fixed costs, longer treatment and some fixed percentage profit margin.
              The estate costs thing is unfortunately an avenue I hadn’t considered. :/

              I still think it would be better if we removed the profit incentive entirely, but I’m pleased if the two interests are aligned if we have to have both.

              • ilinamorato@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                4 months ago

                Oh, absolutely. Absent a profit motive that pushes them toward what basically amounts to a protection scam, they’re left with good old fashioned price gouging. Even if interests are aligned, it’s still way more expensive than it should be. So yes, I agree that we should remove the profit incentive for healthcare.

                Sadly, I can’t find the article. I’ll keep an eye out for it, though. I’m pretty sure I linked to it somewhere but I’m too terminally online to figure out where.

        • ilinamorato@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          That’s not what this is, though. This is early detection, which is awesome and super helpful, but way less game-changing than an actual cure.

          • RampantParanoia2365@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 months ago

            It’s not a cure in itself, but isn’t early detection a good way to catch it early and in many cases kill it before it spreads?

            • ilinamorato@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              It sure is. But this is basically just making something that already exists more reliable, not creating something new. Still important, but not as earth-shaking.

    • blackbirdbiryani@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      1
      ·
      4 months ago

      Honestly they should go back to calling useful applications ML (that is what it is) since AI is getting such a bad rap.

      • medgremlin@midwest.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 months ago

        I once had ideas about building a machine learning program to assist workflows in emergency departments, and its training data would be entirely generated by the specific ER it’s deployed in. Because of differences in populations, the data is not always readily transferable between departments.

      • 0laura@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 months ago

        Machine learning is a type of AI. Sci-fi movies just misused the term, and now the startups are riding the hype train. AGI ≠ AI. There’s lots of stuff to complain about with AI these days, like stable diffusion image generation and LLMs, but the fact that they are AI is simply true.

        • blackbirdbiryani@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 months ago

          I mean, it’s an entirely arbitrary distinction. AI, for a very long time before ChatGPT, meant something like AGI. We didn’t call classification models “intelligent” because they didn’t have any human-like characteristics. It’s as silly as saying a regression model is AI. They aren’t intelligent things.

  • Snapz@lemmy.world
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    3
    ·
    4 months ago

    And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • unconsciousvoidling@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      5
      ·
      4 months ago

      Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.

      • Telodzrum@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        edit-2
        4 months ago

        Our clinics are already using AI to clean up MRI images for easier and higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.

          • Telodzrum@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 months ago

            It’s not diagnosing, which is good imho. It’s just being used to remove noise and artifacts from the images on the scan. This means the MRI is clearer for the reading physician and ordering surgeon in the case of the MRI and that the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.

            I’m still wary of using it to diagnose in basically any scenario because of the salience and danger that both false negatives and false positives threaten.

      • Tja@programming.dev
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        4 months ago

        … they said, typing on a tiny silicon rectangle with access to the whole of humanity’s knowledge and that fits in their pocket…

    • MuchPineapples@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      2
      ·
      edit-2
      4 months ago

      I’m involved in multiple projects where stuff like this will be used in very accessible manners, hopefully in 2-3 years, so don’t get too pessimistic.

  • gmtom@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    ·
    4 months ago

    This is similar to what I did for my master’s, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn’t until almost 2 years later that they got to do their first actual trial.

  • ShinkanTrain
    link
    fedilink
    English
    arrow-up
    22
    ·
    4 months ago

    I can do that too, but my rate of success is very low

  • bluefishcanteen@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    1
    ·
    4 months ago

    This is a great use of tech. With that said I find that the lines are blurred between “AI” and Machine Learning.

    Real Question: Other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying "Here’s a picture of Billy (maybe) " it’s saying, “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • AdrianTheFrog@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      4 months ago

      I’ve been looking at the paper, some things about it:

      • the paper and article are from 2021
      • the model needs to be able to use optional data from age, family history, etc, but not be reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year in the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • its not the first model to do this, and it is more accurate than previous methods
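      The “risk for each year in the next 5 years” output can be sketched with a tiny numpy example (hypothetical per-year hazards, not the paper’s actual numbers or method):

```python
import numpy as np

# Suppose the model outputs a hazard (probability of onset) for each
# of the next 5 years for one patient. Illustrative values only.
yearly_hazard = np.array([0.01, 0.015, 0.02, 0.02, 0.025])

# Cumulative risk by year k: 1 minus the product of surviving each year.
cumulative_risk = 1 - np.cumprod(1 - yearly_hazard)
for year, risk in enumerate(cumulative_risk, start=1):
    print(f"risk within {year} year(s): {risk:.3f}")
```

Reporting a rising cumulative curve rather than a single yes/no is what makes this a risk model rather than a plain classifier.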
    • pete_the_cat@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      4 months ago

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.

      • Comment105@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 months ago

        I don’t care about mean but I would call it inaccurate. Billy is already cancerous, He’s mostly cancer. He’s a very dense, sour boy.

    • Lets_Eat_Grandma@lemm.ee
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 months ago

      Everything machine learning will be called “ai” from now until forever.

      It’s like how all rc helicopters and planes are now “drones”

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

  • earmuff@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    18
    ·
    4 months ago

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, ships that were not easily detectable by eye, not to mention that one human cannot scan 15k images in an hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

    • adenoid@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      4 months ago

      Yeah, there are some openly available datasets on competition sites like Kaggle, and some medical data is available through public institutions like the NIH.

    • Maalus@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 months ago

      Yeah there is. A bloke I know did exactly that with brain scans for his masters.

    • booty [he/him]@hexbear.net
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      7
      ·
      4 months ago

      5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, which were not easily detectable by eye and ignoring the fact that one human cannot scan 15k images in one hour.

      what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl

      • earmuff@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        4 months ago

        My intended use case is to find possibilities for how ML can support people with certain tasks. Science is not political; I cannot control what my technology is abused for. That is no reason to stop science entirely; there will always be someone abusing something for their own gain.

        But thanks for assuming without asking first what the context was.

        • MaeBorowski [she/her]@hexbear.net
          link
          fedilink
          English
          arrow-up
          17
          arrow-down
          1
          ·
          4 months ago

          find possibilities how ML can support people with certain tasks

          Marxism-Leninism? anakin-padme-2

          Oh, Machine Learning. sicko-wistful

          Science is not political

          in an ideal world maybe, but that is not our world. In reality science is always always political. It is unavoidable.

          • earmuff@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            9
            arrow-down
            1
            ·
            edit-2
            4 months ago

            Typical hexbear reply lol

            Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.

            • MaeBorowski [she/her]@hexbear.net
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              1
              ·
              4 months ago

              Typical hexbear reply

              Unfortunately, you are right

              Yes, typically hexbear replies are right.

              It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.

              My science is not. I like my bubble.

              How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking themselves and their work unaffected by their ideology are the most affected by ideology. No wonder you like your bubble: from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing whether you realize it or not.

        • booty [he/him]@hexbear.net
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          8
          ·
          4 months ago

          My intended use case is to find possibilities how ML can support people with certain tasks.

          weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative

          This is no reason to stop science entirely

          if the only thing your tech can be used for is bad then you’re bad for innovating that tech

          • earmuff@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            4 months ago

            Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

            Of course you have not. Your hatred makes you blind. Closed minds have never been able to see why science is important. Now enjoy spreading hate somewhere else.

            • booty [he/him]@hexbear.net
              link
              fedilink
              English
              arrow-up
              9
              arrow-down
              4
              ·
              4 months ago

              Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

              No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?

              • earmuff@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                4 months ago

                I don’t think my reply was hostile; I just criticized you for assuming things before you knew the whole truth. I kept everything neutral and didn’t feel the urge to have a discussion with someone already on edge. I hope you understand, and also learn that not everything in this world is entirely evil. Please stay curious, don’t assume.

                • booty [he/him]@hexbear.net
                  link
                  fedilink
                  English
                  arrow-up
                  6
                  arrow-down
                  4
                  ·
                  4 months ago

                  I just criticized you for assuming things before you knew the whole truth.

                  I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.

  • wheeldawg@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    6
    ·
    edit-2
    4 months ago

    Yes, this is “how it was supposed to be used for”.

    The sentence construction quality these days is in freefall.

    • supersquirrel@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      2
      ·
      edit-2
      4 months ago

      shrugs You know people have been confidently making these kinds of statements since written language was invented, right? I bet the first person who developed written language did it to complain about how this generation of kids doesn’t know how to write a proper sentence.

      What is in freefall is the economy for the middle and working classes, and the basic idea that artists and writers should be compensated, period. What has sent us into freefall is that making art and crafting words are shit on by society as not respectable jobs worth a living wage.

      There are a terrifying number of good writers out there, more than there have ever been, both in total AND per capita.

      • wheeldawg@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        4 months ago

        This isn’t a creative writing project. This isn’t an artist presenting their work. Where in the world did that tangent even come from?

        This is just plain speech, written objectively incorrectly.

        But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.

  • MonkderVierte
    link
    fedilink
    English
    arrow-up
    13
    ·
    edit-2
    4 months ago

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.
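
    (The article doesn’t say what the dental software actually runs; a crude illustration of the general idea of narrow anomaly flagging, with all numbers invented, might look like this:)

    ```python
    # Minimal sketch of narrow-scope anomaly detection: flag measurements that
    # deviate strongly from the rest of the sample (simple z-score threshold).
    # The "radiograph region" numbers below are made up for illustration; a real
    # dental tool would use a trained model, not a statistical rule of thumb.
    from statistics import mean, stdev

    def find_anomalies(readings, threshold=2.0):
        mu = mean(readings)
        sigma = stdev(readings)
        # Keep any reading more than `threshold` standard deviations from the mean
        return [x for x in readings if abs(x - mu) > threshold * sigma]

    # e.g. summary intensities of six regions of an X-ray; one stands out
    regions = [10.1, 9.8, 10.0, 10.3, 9.9, 17.5]
    print(find_anomalies(regions))  # → [17.5]
    ```

    The expert still has to interpret whatever gets flagged, as the replies below note.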

    • D61 [any]@hexbear.net
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      4 months ago

      Much easier to assume the training data isn’t garbage when the AI expert system only has a narrow scope, right?

      • MonkderVierte
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 months ago

        Sure. And the expert still does the interpreting. But the result was accurate.

      • somename [she/her]@hexbear.net
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        4 months ago

        Yeah, machine learning actually has a ton of very useful applications. It’s just that, predictably, the dumbest and most toxic manifestations of it are the ones hyped up under capitalism.