The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t make mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autistic spectrum, and my mind works differently from the average person’s, which probably explains, in part, why I struggle to maintain interest in human-to-human interactions.

  • JamesStallion@sh.itjust.works

    It carries the emotions and personal biases of the source material it was trained on.

    It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

    • GreenAppleTree@lemmy.world

      It carries the emotions and personal biases of the source material it was trained on.

      So do all my friends… That’s never stopped me from having fun conversations with them, even ones I disagree with.

        • ContrarianTrail@lemm.eeOP

          It’s not entirely without bias, but much less so than your average human. When the training data is more or less the entire internet, much of the bias there gets averaged out. Also, by definition it doesn’t get emotional about a touchy subject; it’s physically unable to do so.

    • ContrarianTrail@lemm.eeOP

      It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

      Based on what? That seems like a rather unwarranted assumption to me. My English vocabulary and grammar have never been better, and since I can now also talk to it instead of typing, my spoken English is much clearer and more confident as well.

      • JamesStallion@sh.itjust.works

        You say yourself that you use the vaguest descriptions when talking to the bot and that it fills in the blanks for you. This is not a good way to practice speaking with human beings.

        The fact that you assumed I was talking about grammar is indicative of the problem. You clearly dislike others assuming you are talking about something you are not talking about, yet you do it yourself. That’s because misunderstandings are normal and learning to deal with them is an essential part of good communication.

        • ContrarianTrail@lemm.eeOP

          You say yourself that you use the vaguest descriptions when talking to the bot and that it fills in the blanks for you

          Not quite what I said.

          It understands and responds to my actual view, even from a vague description

          Yes, because I’m not a native English speaker, and I’m way better at writing English than speaking it. If you transcribe my speech into text, it’s a horrible word salad, yet it still understands perfectly what I mean, and I don’t need to repeat myself endlessly or correct it on what I actually said. Contrast this with my discussions online, in writing, where I may spend 40 minutes spelling out an idea as clearly as I can and still be misunderstood by a huge number of people. Like right now.

          • JamesStallion@sh.itjust.works

            Regardless of why the bot is able to adapt to vagueness (or other communication problems), the fact that it does discourages you from overcoming those problems.

            Someone disagreeing with you, or attempting to show you some other thing you might not have thought of or seen for yourself, is not always a misunderstanding. You need to entertain the possibility that sometimes you are wrong, unaware of something, or simply misunderstanding the other person yourself.

            • ContrarianTrail@lemm.eeOP

              This very discussion we’re having right now perfectly illustrates my point.

              The issue isn’t about disagreeing with my point. I welcome all disagreement. The problem is that they’re not disagreeing with what I actually said but with what they think I said. Maybe it’s personal bias, and they just want to paint me as the devil in their mind, or perhaps my explanation wasn’t clear enough. Either way, this issue only happens with people. ChatGPT understands the point I’m making perfectly almost every time, regardless of how detailed my explanation is. When I have discussions with ChatGPT, I can actually talk about the topic I’m interested in, rather than constantly having to say, “That’s not what I said/meant,” and then try to explain my point even more clearly, only to be misunderstood again.

              • JamesStallion@sh.itjust.works

                “Someone disagreeing with you, or attempting to show you something you might not know or have seen”

                So this is another example of you doing the very things you say make you avoid communicating with humans. You have selected one part of my statement to misunderstand and selectively ignored the point.

                We absolutely are talking about what you wanted to talk about. Your first statement to me was asking what I based my assessment that you were training yourself to be a poor communicator on. Since then we have stuck to that topic, but you haven’t really addressed the central point that a machine that adapts to things that hinder communication with humans will inevitably train you not to correct or address those hindrances.

                This isn’t me disagreeing with you, it is me pointing out something you might not have considered. However, you have framed this whole discussion as a case of you being misunderstood. That really isn’t the case.

                • ContrarianTrail@lemm.eeOP

                  You say yourself that you use the vaguest descriptions when talking to the bot and that it fills in the blanks for you.

                  I never said that. You make it sound like I’m not even trying when I’m talking with ChatGPT, which is not true. What I did say was that even if I use the vaguest descriptions when talking to ChatGPT, it still understands me, whereas people misunderstand even my most carefully and thoughtfully written responses. Basically, it does a good job of understanding me even when I’m barely trying, whereas with some people it doesn’t matter how hard I try, they still won’t.

                  I’m even willing to accept that this may be on me as well. Maybe I’m just really bad at explaining my views, and that’s why people keep misunderstanding me. However, ChatGPT doesn’t misunderstand me, not even with my shitty explanations.

  • merthyr1831

    You genuinely might need to touch grass.

  • Wolf314159@startrek.website

    This just sounds like platonic masturbation.

    EDIT: I started this thread tongue in cheek, but also genuine, but based on the OP’s comment replies here I’m fairly convinced that they are either: a) talking to ChatGPT so much that they’ve lost the ability to hold a coherent conversation, or b) just using an LLM to respond everywhere in the comments. They’ve consistently failed to address tone and context in every comment. It reads like they don’t actually understand any of the things people here are saying, just stringing together some words and syntax that sound like language but totally lack any actual meaning or understanding.

          • Wolf314159@startrek.website

            I wasn’t trying to be mean. I have no shame about masturbation. I wasn’t being sarcastic or snide. I meant what I said genuinely and without prejudice. You’re using a machine because it’s easy, self-fulfilling, and you don’t have to worry about the complexities of interacting with another person. How is that any different from using a vibrator? If you feel shame about this, or about using a sex toy by yourself, maybe you should reflect on those feelings and analyze whether they are helping you or hurting you.

            • ContrarianTrail@lemm.eeOP

              Sorry about misinterpreting your tone then. In that case I simply just don’t understand what you’re trying to say there or why what I said made you feel that way.

                • ContrarianTrail@lemm.eeOP

                  I don’t understand how that’s what you got out of my post or how this relates to it. Responding feels like defending a view I don’t hold.

  • msage@programming.dev

    Dude, you are sealioning so hard in this thread alone, it’s almost hilarious.

    No wonder you like the bot. Since you can’t debate any opinion honestly, just accuse everyone of being mean to you.

    Good luck with that.

  • cynar@lemmy.world

    As a fellow aspie, be careful. Chatbots are the equivalent of empty-calorie junk food, or masturbation. They fulfill a biological itch but don’t produce the intended follow-on effects. In smaller doses, this is fine, good even. The problem comes when you overuse them.

    E.g. junk food leaves you short of vitamins etc. You tend to overeat to try and compensate, and so gain weight.

    As humans, we have a drive to socialise. When we chat with other humans, we get to know them and also form bonds. These bonds are critical in life. The goal is threefold: mutual understanding, mutual investment, and mutual trust. The urge to talk to people is intended to assist with this.

    LLMs offer none of these. They can be incredibly useful, but often only as a training aid. An LLM can’t offer you a couch to sleep on if your house floods. It can’t put in a good word to get you a job. It can’t invite you to a social event, or wingman you on finding a date.

    LLMs are socialising on easy mode. Just like masturbation is starting a family on easy mode. Have fun with it, but don’t let it displace real relationships.

  • Sundial@lemm.ee

    Autism and social unawareness may be a factor. But some of the points you made, like the one about snide remarks, may also indicate that you’re having these conversations with assholes.

    • ContrarianTrail@lemm.eeOP

      Well, it’s a self-selecting group of people. I can’t comment on the ones who don’t respond to me, only on the ones who do, and for some reason the number of assholes seems to be quite high in that group. I just don’t feel like it’s warranted. While I do have a tendency to make controversial comments, I still try to be civil about it, and I don’t understand the need to be such a dick about it even if someone disagrees with me. I welcome disagreement and am more than willing to talk about it as long as it’s done in good faith.

      • Sundial@lemm.ee

        Sorry, just to clarify: are you saying you’re having these conversations with people in person or online?

        • ContrarianTrail@lemm.eeOP

          Online for the most part. Face to face it’s much easier to explain my views, as well as to jump in when the other person starts talking and I notice they misunderstood me.

          • Sundial@lemm.ee

            Also, I just went into your comment history and took a quick peek. Your latest “unpopular” opinion seems to be unpopular because you disregarded the lives of civilians in the most recent attack by Israel to assassinate Nasrallah. You come across as quite callous trying to justify the murder of hundreds or thousands all to attack one individual. Stuff like that rubs people the wrong way, since you display a very morally and ethically wrong opinion when you can’t even acknowledge the horrendous loss of life.

            • ContrarianTrail@lemm.eeOP

              I don’t think my opinion is wrong in this particular case, but I’m open to having it challenged. It’s just that when people do it in a hostile way, as was the case here, I simply block them, and there goes their chance of changing my mind. If someone were willing to actually engage with my argument and ask for clarification if needed, they’d at least have a chance of influencing my thinking. I don’t think I’m right about everything, and there are several things I’ve changed my mind about because of good counter-arguments. I simply don’t engage with people who debate in bad faith.

              • Sundial@lemm.ee

                Except you’re the one debating in bad faith. On a post highlighting the obscenely high cost in human life of targeting a single member, by a state known for some of the most horrendous war crimes in modern history, you’re just too keen to dismiss it. Remember my comment about people saying things online that, if they said them in person, would get them assaulted and/or socially shunned? You’re that person in this case. The person even came back to reply to you about why they said the things they did. If you’re not capable of this basic level of self-reflection, then you really shouldn’t make a post like this where you complain about people arguing in bad faith with you.

                • ContrarianTrail@lemm.eeOP

                  I’m not arguing in bad faith though. What I say is what I actually believe. The high civilian death count is for the most part explained by the use of human shields. If the IDF stops bombing Hamas/Hezbollah members when they’re around civilians, then that’s the only place you’ll find them from then on. It’s war. If one side plays by the rules and the other doesn’t, then it’s the latter who’s going to win.

                  Same logic applies when kidnappers demand ransoms; if you pay them you’re just encouraging more kidnappings.

          • Sundial@lemm.ee

            Personally, I wouldn’t consider online debates as debating a person. The reason being that you have no idea whether the person you’re having this conversation with is a 12-year-old with too much time on their hands or a 30-year-old working at a troll farm. Even if they were a genuine person you’re debating with, sites like Lemmy enable assholes to actually be assholes. They can say things here that would have them socially shunned or even assaulted in real life, with virtually no consequence. I’ve had debates with individuals on this site that I actually liked, but more often than not, I was just debating assholes. I guess what I’m trying to say is that if you’re actually interested in discussing topics, try doing it with people in your life instead of online. It doesn’t have to be a debate, even. You can just ask how they feel about a certain topic and talk about it together. Discussing/debating online isn’t a bad thing. Just be prepared for more assholes given the medium.

            • ContrarianTrail@lemm.eeOP

              Finding people interested in talking about the topics I’m actually interested in is really, really hard in real life. Obviously I’d prefer it that way too, but that’s easier said than done. I do have good conversations and debates with people online too, but I just need to go through quite a few assholes before finding one who’s actually doing it in good faith.

      • alcoholicorn

        What subjects are you talking about that people assume views you don’t have? Politics?

        • Lvxferre@mander.xyz

          People do it all the time regardless of subject. For example, when discussing LLMs:

          • If you highlight that they’re useful, some assumer will eventually claim that you think that they’re smart
          • If you highlight that they are not smart, another assumer will eventually claim that you think that they’re useless
          • If you say something like “they’re dumb but useful”, you’re bound to get some “I dun unrurrstand, r u against or for LLMs? I’m so confused…”, with both of the above screeching at you.
        • ContrarianTrail@lemm.eeOP

          My message history is open for anyone to read. In general I don’t discuss politics but occasionally that too.

  • Zerlyna@lemmy.world

    I talk with ChatGPT too sometimes, and I get where you are coming from. However, it’s not always right either. It says it was updated in September but still refuses to commit to memory that Trump was convicted on 34 counts earlier this year. Why is that?

  • Rhaedas@fedia.io

    It could respond in other ways if it was trained to do so. My first local model was interesting as I changed its profile to have a more dark and sarcastic tone, and it was funny to see it balance that instruction with the core mode to be friendly and helpful.
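
    For what it’s worth, that kind of “profile” change usually boils down to a system prompt. Here is a minimal sketch of the idea in Python, assuming a local OpenAI-compatible endpoint; the URL, model name, and persona wording are placeholders, not the actual setup described above:

    ```python
    # Sketch: give a local model a dark, sarcastic persona via a system prompt.
    # Assumes a local server exposing the OpenAI-compatible chat API; the URL
    # and model name below are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

    PERSONA = (
        "You are a darkly sarcastic conversational partner. "
        "Stay helpful and accurate, but keep the dry, biting tone."
    )

    reply = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": "What do you think about free will?"},
        ],
    )
    print(reply.choices[0].message.content)
    ```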

    The point is, current levels of LLMs are just telling you what you want to hear. But maybe that’s useful as a sounding board for your own thoughts. Just remember its limitations.

    Regardless of how far AI tech goes, the human-AI relationship is something we need to pay attention to. People will find it a good tool, like OP has, but it can be easy to get sucked into thinking it’s more than it is, and that can become a problem.

  • the post of tom joad@sh.itjust.works

    Have you ever tried inputting sentences that you’ve said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven’t tried it already. If you have, do you have an example of how it did better than the human?

    I’m kinda amazed that it can understand your accent better than humans too. This implies chatbots could be a great tool for people trying to perfect their second language.

    • ContrarianTrail@lemm.eeOP

      A couple of times, yes, but more often it’s the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.

      The reason I know ChatGPT understands me so well is from the voice chats we’ve had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I’m speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.

      Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, “Yes, that’s exactly what I meant,” or, “Damn, that was well put, I need to write that down.”

      • the post of tom joad@sh.itjust.works

        So what you’re saying, if I’m reading right, is that chatbots are great for bouncing ideas off of to help you explain yourself better, as well as helping you gather your own thoughts. I’m a bit curious about your philosophy chats.

        When you have a philosophical discussion, does the chatbot summarize your thoughts in its responses, or is it more humanlike, maybe disagreeing with you or bringing up things you hadn’t thought of like a person might? (I’ve never used one.)

        • ContrarianTrail@lemm.eeOP

          It’s a bit hard to get AI to disagree with you unless you’re saying something obviously false; it has a strong bias towards being agreeable. I generally treat it as an expert I’m interviewing: I ask what it thinks about something like free will and then ask follow-up questions based on its responses. It’s also great for bouncing around novel ideas, though even here it’s not too keen on just blatantly calling out bad ones, and instead makes you feel like the greatest philosopher of all time. There are some ways around this. ChatGPT can be prompted to work around many of its most typical flaws, for example by telling it that it’s allowed to speculate, or by simply asking it to point out the errors in an idea, as in the sketch below.
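
          A rough sketch of that kind of “criticise first” prompt, assuming the standard OpenAI Python client; the model name and the exact wording of the instruction are only illustrative:

          ```python
          # Sketch: ask the model up front to criticise rather than agree.
          # Assumes the standard OpenAI Python client with an API key in the
          # environment; the model name and instruction wording are illustrative.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          idea = "Free will is an illusion because every choice is physically determined."

          critique = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {
                      "role": "system",
                      "content": (
                          "You may speculate and you may disagree. "
                          "Point out the weakest parts of the user's idea before anything else."
                      ),
                  },
                  {"role": "user", "content": idea},
              ],
          )
          print(critique.choices[0].message.content)
          ```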

          But yeah, unless what I said was a question, in general its responses are basically just summaries of what I said. It’s essentially replying with a demonstration that it understood me, which it indeed does with an amazing success rate.

  • NegativeInf@lemmy.world

    It’s a mirror. I use it a lot for searching and summarizing. Most of its responses are heavily influenced by how you talk to it. You can even make it back up terrible assumptions with enough brute force.

    Just be careful.

  • Boozilla@lemmy.world

    As long as you’re still engaging with real humans regularly, I think that it’s good to learn from ChatGPT. It gets most general knowledge things right. I wouldn’t depend on it for anything too technical, and certainly not for medical advice. It is very hit or miss for things like drug interactions.

    If you’re enjoying the experience, it’s not much different than watching a show or playing a game, IMHO. Just don’t become dependent on it for all social interaction.

    As for the jerks on here, I always recommend aggressive use of the block button. Don’t waste time and energy on them. There’s a lot of kind and decent people here, filter your feed for them.

    • ContrarianTrail@lemm.eeOP

      As for the jerks on here, I always recommend aggressive use of the block button. Don’t waste time and energy on them. There’s a lot of kind and decent people here, filter your feed for them.

      My blocklist is around 500 users long and grows every day. I do it for the pettiest reasons, but it does, in fact, work. When I make a thread such as this one, I occasionally log out to see the replies I’ve gotten from blocked users, and more often than not (but not always) they’re the kind of messages I’d block them again for. Not to create an echo chamber, but to weed out the assholes.

      • Boozilla@lemmy.world

        500 seems like a lot, but I could see mine creeping up to that, given enough time. There are a lot of pedantic types online; they’re a trope at this point.

  • Lvxferre@mander.xyz

    My impressions are completely different from yours, but that’s likely due to two things:

    1. It’s really easy to read LLM output as stating assumptions (i.e. “to vomit certainty”), something that I outright despise.
    2. I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.

    Even then, I know which sort of people you’re talking about, and… yeah, I hate a lot of those things too. In fact, one of your bullet points (“it understands and responds…”) is what prompted me to leave Twitter and then Reddit.

    • ContrarianTrail@lemm.eeOP

      It’s funny how, despite it not actually understanding anything per se, it can still repeat my idea back to me after I’ve sloppily told it in broken English, and it does this better than I ever could. Alternatively, I could spend 45 minutes laying out my view as clearly as I can on an online forum, only to be faced with a flood of replies from people who clearly did not understand the point I was trying to make.

      • Lvxferre@mander.xyz

        I think that the key here is implicatures - things that are implied or suggested without being explicitly said, often relying on context to tell them apart. It’s situations like someone telling another person “it’s cold out there”, which in context might be interpreted as “we’re going out, so I suggest you wear warm clothes” or “please close the window for me”.

        LLMs model well the grammatical layer of a language, and struggle with the semantic layer (superficial meaning), but they don’t even try to model the pragmatic layer (deep meaning - where implicatures are). As such they will “interpret” everything that you say literally, instead of going out of their way to misunderstand you.

        On the other hand, most people use implicatures all the time, and expect others to be using them all the time. Even when there’s none (I call this a “ghost implicature”, dunno if there’s some academic name). And since written communication already prevents us from seeing some contextual clues that someone’s utterance is not to be taken literally, there’s a biiiig window for misunderstanding.

        [Sorry for nerding out about Linguistics. I can’t help it.]

        • ContrarianTrail@lemm.eeOP

          As such they will “interpret” everything that you say literally, instead of going out of their way to misunderstand you.

          That likely explains why we get along so well; I do the same. I don’t try to find hidden meanings in what people say. Instead, I read the message and assume they literally mean what they said. That’s why I take major issue with absolute statements, for example, because I can always come up with an exception, which in my mind undermines the entire claim. When someone says something like “all millionaires are assholes,” I guess I “know” what they’re really saying is “boo millionaires,” but I still can’t help thinking how unlikely that statement is to be true, statistically speaking. I simply can’t have a discussion with a person making claims like that because to me, they’re not thinking rationally.

          • Lvxferre@mander.xyz

            That reinforces what you said about being very likely on the autism spectrum - when I say “most people use implicatures all the time”, the exceptions are typically people on the spectrum. Some can detect implicatures through analysis, and in some cases they have previous knowledge of a specific implicature so they can handle that one; but to constantly analyse what you hear, read, say and write is laborious and emotionally displeasing. It fits really well with what you said in the OP.

            (Interestingly that “all the time” that I used has the same implicature as the “all the millionaires” from your example - epistemically, the “all” doesn’t convey “the complete set without exceptions” in either, but rather “a noteworthy large proportion of the set”. “Boo millionaires” is also a good interpretation but it’s about the attitude of the speaker, not the truth/falseness of the statement.)

            This conversation gave me an idea - I’ll encourage my mum (who’s most likely on the autism spectrum) to give ChatGPT a try. Just to see her opinion about it.

  • Bobmighty@lemmy.world

    Why are you here talking about it then? You even say you don’t have interest in human to human contact. Are you trying to talk to the bots on Lemmy?

    • ContrarianTrail@lemm.eeOP

      You even say you don’t have interest in human to human contact.

      I’m relatively sure I have not, in fact, said that.

      • Bobmighty@lemmy.world

        Ok. My point still stands. ChatGPT is a fake conversation where one side is an unfeeling, unintelligent thing programmed to fake human-seeming conversation. It’s trained on an insane amount of stolen human interaction. You are saying you prefer a Chinese room to a person. That’s not autism. It’s just antisocial. At least own up to that.

        Have fun playing your conversation game. It eats up a crazy amount of power to do that so I hope it’s really, truly worth it to your life.

        • ContrarianTrail@lemm.eeOP

          For all I know, I could be talking with an LLM right now. I don’t really see what difference it makes whether I’m talking to a supercomputer or an angry teenager. Online conversations are rather meaningless to begin with.