• Corngood · 5 hours ago

    Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online

    How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.

    If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.

    I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.

    • L3s@lemmy.world · 4 hours ago (edited)

      Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was just trying to keep my examples short. Talking to non-technical people about a technical issue isn’t easy for me, so AI has helped me dumb it down when sending an email, and it helps correct my shitty grammar at times.

      As for accuracy: you review what it gives you, you don’t just copy and send it without review. You’ll also have to tweak the pieces that don’t make the most sense, such as wording you wouldn’t typically use. It’s fairly accurate in my use cases, though.

      Hallucinations are a thing, so validating what it spits out is definitely needed.

      Another example: if you feel your email is too stern or gives the wrong tone, I’ve used it for that as well. “Make this sound more relaxed: well maybe if you didn’t turn off the fucking server we wouldn’t of had this outage!” (Just a silly example)
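      For what it’s worth, if you do this a lot the pattern is trivial to script: prepend the rewrite instruction to the raw draft before handing it to whatever chat model you use. A minimal sketch (the function name and prompt wording are illustrative placeholders, not any particular tool’s API):

      ```python
      # Hypothetical sketch of the workflow described above: wrap a raw draft
      # in a rewrite instruction before sending it to a chat model.
      # build_rewrite_prompt and the example strings are assumptions.

      def build_rewrite_prompt(instruction: str, draft: str) -> str:
          """Combine a rewrite instruction and the raw draft into one prompt."""
          return f"{instruction}:\n\n{draft}"

      prompt = build_rewrite_prompt(
          "Make this sound more relaxed",
          "Well maybe if you didn't turn off the server "
          "we wouldn't have had this outage!",
      )
      print(prompt)
      ```

      The model’s reply is then the thing you review and tweak before sending, exactly as above.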

      • otp@sh.itjust.works · 4 hours ago

        As for accuracy, you review what it gives you, you don’t just copy and send it without review.

        Yeah, I don’t get why so many people seem to not get that.

        It’s like the people who were against IntelliSense in IDEs because “what if it suggests the wrong function?” You still need to know what the functions do. If it suggests something you’re unfamiliar with, you check the documentation; you don’t just blindly accept it as truth.

        Just because it can’t replace a person’s job doesn’t mean it’s worthless as a tool.

        • Grandwolf319@sh.itjust.works · 2 hours ago

          Yeah, I don’t get why so many people seem to not get that.

          The disconnect is that those people use their tools differently: they want to rely on the output, not use it as a starting point.

          I’m one of those people, reviewing AI slop is much harder for me than just summarizing it myself.

          I find function name suggestions useful because that’s a lookup tool. It’s not the same as a summary tool: a summary doesn’t help me find a needle in a haystack, it just hands me a needle when I already have access to many needles. I want the good/best needle, and it can’t give me that.

        • Voroxpete@sh.itjust.works · 4 hours ago

          The issue is that AI is being invested in as if it can replace jobs. That’s not an issue for anyone who wants to use it as a spellchecker, but it is an issue for the economy, for society, and for the planet, because billions of dollars of computer hardware are being built and run on the assumption that trillions of dollars of payoff will be generated.

          And correcting someone’s tone in an email is not, and will never be, a trillion dollar industry.

      • Voroxpete@sh.itjust.works · 4 hours ago

        I think these are actually valid examples, albeit ones that come with a really big caveat: you’re using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients you have to be extra careful with. But the reality is, you can’t always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill, one every bit as important to doing your job well as knowing how to correctly configure an ACL on a Cisco managed switch.

        That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There’s certainly an argument that by using the tools while paying attention to their output, you build those skills for yourself. Learning by example works. Used in that way, there’s potentially real value there.

        Which is kind of the broader story with Gen AI overall. It’s not that it can never be useful; it’s that, at best, it can only ever aspire to “useful.” No one, yet, has demonstrated any ability to make AI “essential” and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful, is sheer fucking lunacy.

        • snooggums@lemmy.world · 4 hours ago

          If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.

          This is such a good example of how AI/LLMs/whatever are being used as a crutch that is far more impactful than using a spellchecker. A spellchecker catches typos or helps with unfamiliar words, but it doesn’t replace the underlying skill of communicating with your audience.

    • earphone843@sh.itjust.works · 4 hours ago

      It works well. For example, we had a work exercise where we had to write a press release based on an example, then write a Shark Tank pitch to promote the product we came up with in the release.

      I gave AI the link to the example and a brief description of our product, and it spit out an almost perfect press release. I only had to tweak a few words because there were specific requirements I didn’t feed the AI.

      Then I told it to take the press release and write the pitch based on it.

      Again, it was very nearly perfect; I only had to change the wording in one spot.