Last night I was nearly tearing my hair out trying to get a crappy inkjet printer installed (driver issues) when I turned to ChatGPT for help; five minutes later it was done.

I realised that I use it a lot for all sorts of things but haven’t really seen any posts on here about it, and was wondering if other people do the same.

I’ve used it to create bash scripts, write small, niche Python programs (e.g. sort photos into separate folders based on their orientation), and create cron jobs, among other things.
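For a flavour of what those one-off scripts look like, here is a minimal stand-in for the orientation sorter (my own illustration, not the actual script, which was Python: this one handles PNGs only, reading width and height straight from the IHDR header with od so no image library is needed):

```shell
#!/bin/sh
# Sort PNGs in a directory into landscape/ and portrait/ subfolders,
# judging orientation by the width/height stored in the PNG header.

png_dim() {  # png_dim FILE OFFSET -> big-endian u32 at byte OFFSET
  set -- $(od -An -j"$2" -N4 -tu1 "$1")
  echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

sort_photos() {  # move each PNG in DIR into DIR/landscape or DIR/portrait
  for f in "$1"/*.png; do
    [ -f "$f" ] || continue
    w=$(png_dim "$f" 16)   # IHDR width lives at byte offset 16
    h=$(png_dim "$f" 20)   # IHDR height at offset 20
    if [ "$w" -ge "$h" ]; then sub=landscape; else sub=portrait; fi
    mkdir -p "$1/$sub" && mv "$f" "$1/$sub/"
  done
}
```

This is exactly the kind of throwaway tool where an LLM saves time: nobody remembers PNG header offsets off the top of their head.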

In particular, I find it useful when I want to use a command line program that has a complicated set of options / syntax, but I might only ever want to use it once, or very infrequently. So I tell it what I want to accomplish, and ask it to give me the correct command to run with the options I specify in plain language.

Example:

User

Give me the correct rsync command to recursively copy all files in /home/user/Documents that are image files of any type but were created more recently than 1/1/2024. The destination is /home/user/foo

ChatGPT

rsync -avm --include='*/' --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg' --exclude='*' --prune-empty-dirs --min-age=$(date -d '2024-01-01' +%s) /home/user/Documents/ /home/user/foo/

User

Break down the command and explain each part of it in dot point format.

ChatGPT

- rsync: Invokes the rsync program, which is used for efficiently transferring and synchronizing files across systems or within a system.
- -avm:
  - -a: Archive mode, which preserves permissions, symlinks, modification times, group, and ownership. Implicitly includes recursion.
  - -v: Verbose mode, which provides detailed output of what rsync is doing.
  - -m: Prune empty directory chains from the file-list.
- --include='*/': Include directories themselves, which is necessary for recursion.
- --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg': Include files that match these image file extensions.
- --exclude='*': Exclude all other files that do not match the previously stated include patterns.
- --prune-empty-dirs: Exclude empty directories that result from the inclusion/exclusion pattern.
- --min-age=$(date -d '2024-01-01' +%s): Exclude files that are older than the timestamp for January 1, 2024. This uses date -d to convert a date string into a timestamp, which rsync uses to filter files.
- /home/user/Documents/: Source directory from which files are being synchronized.
- /home/user/foo/: Destination directory to which files are being synchronized.
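One caveat worth flagging on that answer: stock rsync has no --min-age option, so the command above may fail as given. A hedged sketch of an alternative (assumes GNU find/touch and rsync are installed) is to let find do the date and type filtering and hand the file list to rsync via --files-from; the demo below uses temporary directories as stand-ins for /home/user/Documents and /home/user/foo:

```shell
#!/bin/sh
command -v rsync >/dev/null 2>&1 || { echo "rsync not installed; skipping"; exit 0; }

# Build a demo source tree with one file of each kind.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/album"
touch -d 2024-06-01 "$src/album/new.jpg"   # recent image: copied
touch -d 2023-06-01 "$src/album/old.jpg"   # too old: skipped
touch -d 2024-06-01 "$src/album/notes.txt" # not an image: skipped

# find selects files modified after 2024-01-01 with image extensions;
# rsync copies exactly that list, recreating the directory structure
# (--files-from implies --relative, so paths are preserved).
(cd "$src" && find . -type f -newermt 2024-01-01 \
    \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \
       -o -iname '*.gif' -o -iname '*.bmp' -o -iname '*.tiff' \
       -o -iname '*.svg' \) -print0 |
  rsync -a --files-from=- --from0 . "$dst")

find "$dst" -type f
```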

It’s also really good for explaining concepts / commands in plain language.

It’s like having a 24 hour on call Linux expert :)

# Things to note:

- Get the subscription. ChatGPT 3.5 is pretty useless. GPT-4 is fine, but I’m pretty sure you need the subscription to access it.

- Give it pre-instructions. I’ve told mine which distro and shell I’m using, and the make and model of my laptop. If you have a subscription you can save these as permanent pre-instructions, so to speak. That way it’s much more likely to give you correct answers.

- It’s not magic. In order to get what you want, you have to be able to ask the right questions. It will boost your ability, but it won’t turn you into a 1337 haxx0r.

- Ask questions. As always, don’t run any commands that you don’t understand. Ask it to break down any command it tells you to run if you don’t understand it.

- Sometimes it goofs. For various reasons, it will sometimes tell you to install a package that no longer exists, or give you a command that doesn’t work. When that happens, I just feed the error message back into ChatGPT and it can usually correct itself.

- Ask “Is there a better or easier way to do this?” This is probably the most valuable question I’ve found to ask ChatGPT. Sometimes it gets so far into the weeds looking for a solution that you need to pull back and start fresh.
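To make the pre-instructions tip concrete, a standing instruction might look something like this (the distro, shell, and laptop here are made-up placeholders; it's free text, so describe your own setup):

```
I run Arch Linux with the fish shell on a ThinkPad T480.
When you give me commands, target that setup: fish syntax where it
matters, pacman rather than apt, and note anything distro-specific.
```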

  • barbara · 8 months ago

    ChatGPT does not know truth. It does not know if the info it provides is true. It does not know if the code actually works. It just concatenates strings based on probability. You may get lucky, or you may not. The easier the task, the more likely it is to succeed, but low difficulty is no guarantee of success.

    It is great for layouts, structure and for the basic concept. “For loop in fish”. But it may struggle to convert a video from x264 to av1 with ffmpeg. It depends on info that’s provided online. If it uses misinformation, then that’s in there as well.

    The command you got is just wrong. What about avif, jxl or most other image formats? Use it, but think.

    • lorkano@lemmy.world · 8 months ago

      Note that sometimes AI models check whether code works by executing it. For example, Gemini can write a Python function and then execute it to check the results.
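You can apply that "execute to verify" idea by hand too: write the model's code to a file, run it on a case where you already know the answer, and only trust it if the check passes. A sketch (the here-doc stands in for hypothetical model output):

```shell
#!/bin/sh
# Save the (hypothetical) generated script to a file and make it runnable.
cat > /tmp/gen.sh <<'EOF'
#!/bin/sh
# (generated) print the extension of the filename given as $1
printf '%s\n' "${1##*.}"
EOF
chmod +x /tmp/gen.sh

# Spot-check it against a known input/output pair before trusting it.
result=$(/tmp/gen.sh photo.JPG)
if [ "$result" = "JPG" ]; then
  echo "generated code passed the spot check"
else
  echo "generated code failed: got '$result'"
fi
```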

    • 1984@lemmy.today · 8 months ago

      I hear this over and over but none of what you say actually matters.

      It’s not luck when it gives accurate and detailed answers, with code that actually compiles and works, for almost every question.

      I think the difference in opinion comes down to what you use it for. In some areas I imagine it will just hallucinate. But in others, such as coding, it’s often almost 100% correct and a magic tool for learning and saving soooo much time.

    • z00s@lemmy.world (OP) · 8 months ago

      I was wondering how long it would take the gatekeepers to show up. The command works, and is perfectly fine. If I had any uncommon formats, I would tell gpt to include them.

      • Oisteink@feddit.nl · 8 months ago

        I’m quite sure it won’t be long before some bad practice spreads like this: clueless “Linux pros” getting top-voted advice on how to enable a back door.

        LLMs can be poisoned and as datasets increase and complexity grows it will be harder to contain.

        ChatGPT works great for some stuff, but all you really know is that someone, somewhere, wrote something similar. LLMs are no better than Google at predicting what is good material and what is wrong, and their training is statistics.

        • z00s@lemmy.world (OP) · 8 months ago

          In order to poison an LLM, you’d need access to the training process, which is locked down by OpenAI. Just posting false info on the net isn’t enough. GPT doesn’t simply repeat what’s already been written.

          More than that though, you can find plenty of wrong and bad advice posted confidently by legions of Linux gatekeepers on any forum.

          Anyone who has ever spent any time on stack overflow will tell you why they’d rather talk to an LLM instead of posting there.

          • TheCheddarCheese@lemmy.world · 8 months ago

            ChatGPT only generates text; that’s how it was designed to work. It doesn’t care whether the text it generates is true, or even whether it makes sense, so sometimes it will produce untrue statements (with the same confidence as the “Linux gatekeepers” you mentioned, except with no comments to correct the response), no matter how well it is trained. And if there’s enough wrong information in the dataset, it will start repeating it in its responses, because again, its only real purpose is to pick the next word in a string based on its training data. Sometimes it gets things right, sometimes it doesn’t; we can’t just blindly trust it. Pointing that out is not gatekeeping.
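The "pick the next word from training data" mechanic can be illustrated with a deliberately crude toy: a bigram counter in awk. Real LLMs are neural networks, not lookup tables, but the point that continuation comes from statistics rather than truth carries over:

```shell
#!/bin/sh
# Count word pairs in a tiny "training corpus", then emit the most
# frequent continuation of the word w. Pure statistics, no notion
# of whether the continuation is true.
next=$(echo "the cat sat on the mat the cat ate" | awk -v w=the '{
  for (i = 1; i < NF; i++) n[$i, $(i+1)]++
  best = ""; top = 0
  for (i = 1; i < NF; i++)
    if ($i == w && n[$i, $(i+1)] > top) { top = n[$i, $(i+1)]; best = $(i+1) }
  print best
}')
echo "$next"   # cat ("the cat" occurs twice in the corpus, "the mat" once)
```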

      • kolorafa@lemmy.world · 8 months ago

        An example that confirms “ChatGPT does not know truth. It does not know if the info it provides is true.”, or more precisely: it will produce an answer that matches your inquiry and sounds correct, even if it is totally made up.

        https://chat.openai.com/share/206fd8e9-600c-43f8-95be-cb2888ccd259

        Summary:

        User
        in `podman stats` you see BLOCK IO as a summary of hard drive activity.
        how to reset the 
        
        ChatGPT
        To reset the block I/O statistics displayed by podman stats, you can use the podman stats --reset command.
        
        User
        Error: unknown flag: --reset
        
        ChatGPT
        Apologies for the confusion. It seems I provided incorrect information. The podman stats command does not have a built-in option to reset the statistics.
        

        So once again, don’t be afraid to use it, but do your own research, especially when following an LLM could break something, whether in tech or in life.
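One habit that catches hallucinated flags like that --reset early: ask the tool itself before running anything. A sketch (has_flag is a name I made up, and grep stands in for podman here, since podman may not be installed everywhere):

```shell
#!/bin/sh
# Check a tool's own --help output for a flag before trusting it.
has_flag() {  # has_flag COMMAND FLAG
  "$1" --help 2>&1 | grep -q -- "$2"
}

# A real GNU grep flag, and a made-up one:
if has_flag grep --invert-match; then echo "--invert-match exists"; else echo "--invert-match missing"; fi
if has_flag grep --frobnicate;  then echo "--frobnicate exists";  else echo "--frobnicate missing";  fi
```

It is not foolproof (some tools document flags only in the man page), but it takes two seconds and would have caught the podman example above.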

        • z00s@lemmy.world (OP) · 8 months ago

          You left out the part where it then gave you the correct answer.

          • kolorafa@lemmy.world · 8 months ago

            I didn’t leave it out; I had to feed that “part” back to it before it gave the correct answer.

            Because, as has been said over and over in this thread, ChatGPT doesn’t know the correct answer. It’s a mathematical model of “what looks OK” and “what the next word should be”. It looks OK to pass a --reset parameter to reset the stats, but because ChatGPT can’t actually check the podman stats documentation to see whether the parameter exists, it just generates one from common text patterns, and those patterns are written in a way that suggests they are the truth.

            So once again: do your own research when following the result could break something, in tech and especially in life. And that is true for both ChatGPT and random pages on the internet.

            In this case I did follow the ChatGPT answer exactly, without fact-checking: I asked ChatGPT, copied the command, and pasted it into the terminal, because I knew the worst that could happen was that it would fail and do nothing. But that’s bad for new people who won’t know what the result could be if it’s wrong!

            @z00s Don’t get me wrong. I’m not telling you not to use it; on the contrary.

            You should use any tool that helps you do your job/task. But you should try to understand how to use those tools wisely.

            Telling someone never to use ChatGPT is like telling someone never to use an excavator. That is wrong: you should use the excavator, but you should know what an excavator is and what harm it could do, for example accidentally destroying a building or even hurting someone (or yourself) if it is not used wisely.