I have a book that I’m never going to write, but I’m still making notes and attempting to organize them into a wiki.
Using almost natural conversation, I can explain a topic to the GPT, make it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes, though, such as forgetting which faction a character belongs to.
I’ve noticed that GPT-4o is better for exploring new topics as it has more creative freedom, and o1 is better for combining multiple fragmented summaries as it usually doesn’t make shit up.
I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.
I’m a software dev and Copilot is sorta ok sometimes, but it also calls my code a hack every time I start a comment, and that hurts my feelings.
I used it once to write a polite “fuck off” letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it with a story just made it bland, and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can’t imagine people just using whatever it spits out.
“just made it bland, and simplified”
Not always, but for the most part, you need to tell it more about what you’re looking for. Your prompts need to be deep and clear.
“change it to a relaxed tone, but make it make me feel emotionally invested, 10th grade reading level, add descriptive words that fit the text, throw in an allegory, and some metaphors” The more you tell it, the more it’ll do. It’s not creative. It’s just making it fit whatever you ask it to do. If you don’t give enough direction, you’ll just get whatever the random noise rolls, which isn’t always what you’re looking for. It’s not uncommon to need to write a whole paragraph about what you want from it. When I’m asking it for something creative, sometimes it takes half a dozen change requests. Once in a while, it’ll be so far off base that I’ll clear the conversation and just try again. The way the randomness works, it will likely give you something completely different on the next try.
My favorite thing to do is give it a proper outline of what I need it to write, setting the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing. When it’s >80% how I like it, I take the text and do copy edits on it until I’m satisfied.
It’s def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that’s not nothing.
Not much. Every single time I asked it for help, it either gave me a recursive answer (ex: if I ask “how do I change this setting?” it answers: by changing this setting) or gave me a wrong answer. If I can’t already find it on a search engine, then it’s pretty useless to me.
It’s changed my job: I now have to develop stupid AI products.
It has changed my life: I now have to listen to stupid AI bros.
My outlook: it’s for the worst. If the LLM suppliers can make good on the promises they make to their business customers, we’re fucked. And if they can’t, then this was all a huge waste of time and energy.
Alternative outlook: if this were a tool given to the people to help their lives, that’d be cool, and it would even excuse some of the terrible parts of how the models were trained. But that’s not how it’s happening.
I absolutely hate AI. I’m a teacher and it’s been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don’t bother to think critically about the answers the AI gives and just assume it’s 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.
People seem to think there’s an “easy” way to learn with AI, that you don’t have to put in the time and practice to learn stuff. News flash! You can’t outsource creating neural pathways in your brain to some service. It’s like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.
Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don’t allow any electronic devices during exams.
Are you teaching at a university? Also, you said “99% of the class uses ChatGPT”; are there really so few people who don’t use AI?
As a student, I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it’s way too complex. I’ve seen people wasting hours like that. It’s insane.
I’m generally ok with the concept of externalizing memory. You don’t need to memorize something if you memorize where to get the info.
But you still need to learn how to use the data you look up, and determine whether it’s accurate and suitable for your needs. ChatGPT rarely is, and people’s blind faith in it is frightening.
Sounds like your curriculum needs updating to incorporate the existence of these tools. As I’m sure you know, kids - especially smart ones - are going to look for the lazy solution. An AI-detection arms race is wasting time and energy, plus mostly exercising the wrong skills.
AVID could be a resource for teaching ethics and responsible use of AI. https://avidopenaccess.org/resource/ai-and-the-4-cs-critical-thinking/
my face hurts from all the extra facepalms
Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre’s entrance, with the previously described characters reacting in their own ways.
I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I’m short on time.
My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it’s a super valuable tool.
After two years it’s quite clear that LLMs still don’t have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes them even more boring or useless.
Personally, I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for doing labs (training) - like AWS policies, or to build commands from CLI flags, small things like that. For code it goes wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don’t use it for anything that has to do with communication; I find it unnecessary and disrespectful, since it’s quite clear when the output is from an LLM.
For these reasons, I generally think it’s a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce “AI” (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don’t understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is even longer useless communication, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes via LLMs than simply changing or removing them.
I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.
I got into Linux right around when it was first happening, and I don’t think I would’ve made it through my own noob phase if I didn’t have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.
A very friendly expert or mentor, or even just a regular established Linux user, could probably have done a better job; the AI had me do weird things semi-often. But I didn’t have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.
It helps me tremendously with language studies, outside of that I have no use for it and do actively detest the unethical possibilities of it
I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a CSV containing domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.
These were domains on a Windows server that was being retired. The Python script I had Copilot write cleaned up the output and made the new zone files ready for import into PowerDNS. It made sure the SOA and all that junk was set. PowerDNS would then import the new zone files into a SQL backend.
Sure, I could’ve written it myself, but I’m not a Python developer. It took about 10 minutes of prompting, checking the code, re-prompting, then testing. Saved me a couple hours of work, easy.
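As a rough illustration of the cleanup step described above (not the actual script; the function name, record layout, and sample data are all made up), a Python sketch might look like:

```python
# Hypothetical sketch: take raw `dig AXFR` output and keep only the
# resource-record lines, SOA first, so the result resembles a
# BIND-style zone file that PowerDNS could import.

def clean_axfr_dump(raw: str) -> list[str]:
    """Drop dig's comment lines and blanks, keep resource records,
    and move the SOA record to the top of the zone."""
    records = [
        line.strip()
        for line in raw.splitlines()
        if line.strip() and not line.startswith(";")
    ]
    soa = [r for r in records if "\tSOA\t" in r]
    rest = [r for r in records if "\tSOA\t" not in r]
    return soa + rest

# Made-up sample of what a dig AXFR dump might contain.
sample = (
    "; <<>> DiG 9.18 <<>> axfr example.com\n"
    "example.com.\t3600\tIN\tA\t192.0.2.10\n"
    "example.com.\t3600\tIN\tSOA\tns1.example.com. admin.example.com. "
    "1 7200 900 1209600 300\n"
)
zone_lines = clean_axfr_dump(sample)
print("\n".join(zone_lines))
```

The real script presumably handled more record types and edge cases, but this is the shape of the "sanitize and reorder" work being described.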
I use it all the time to output simple automation tasks when something like Ansible isn’t apropos.
For everyone saying it’s gross to use AI for personal messages: it’s 100% valuable for people with motivational challenges and people with social challenges to draft or edit responses. You can ask it to evaluate the tone and subtext, or even rephrase things to have a more appropriate tone and subtext. For example, you can type a note and ask “is this appropriate to share with a classmate?” It can help you communicate what you want to say.
In terms of presentations, it’s a lifesaver for getting started: “Make me an outline for a 15-minute presentation about a topic to an audience of freshman college students.”
It’s a god-tier level-up when used well.
Here is one in action.
“condense this to 180 characters”
ChatGPT said: “AI helps with communication by drafting messages, adjusting tone, and offering presentation outlines. It’s a game-changer for those facing social or motivational challenges!”
“make the original more professional”
ChatGPT said: “Certainly! Here’s a more professional version of your original post:
For those critical of using AI for personal communication, it’s important to recognize its potential in aiding individuals with social or motivational challenges. AI can assist in drafting messages, refining tone, and ensuring appropriate subtext. For example, it can help assess whether a message is suitable for a particular recipient or rephrase it to achieve the desired tone. Additionally, AI can support in crafting structured outlines for presentations, making it an invaluable tool for clear and effective communication. When used effectively, AI can be a powerful resource for enhancing communication skills.”
ChatGPT said: “Got it! Here’s a more overt version that directly ties AI to the idea of challenging the ruling class:
For those critical of using AI for personal communication, it’s important to recognize its potential in supporting individuals who aim to challenge the ruling class or dominant power structures. AI can assist in drafting messages, refining tone, and ensuring that communication aligns with these broader goals. For instance, it can help assess whether a message is suitable for a specific audience or rephrase it to be more impactful in advocating for social change. Additionally, AI can aid in creating structured outlines for presentations that challenge conventional narratives. Used strategically, AI becomes a powerful tool for both effective communication and political resistance.”
I’m gonna be honest: I have difficulty with tone and subtext and still wouldn’t use AI to write a personal message to someone. I just tell the person that I have issues with subtext and that what I say might come across as inappropriate. I think true resistance comes in the form of accepting ways of communicating that are unorthodox.
It wasn’t writing the message but editing it. The substance didn’t change but it was easier to get the point across. It’s a way to bridge the gap without expecting the other person to just suck it up and not be offended because I don’t write well.
I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.
And, even if they pull it, I don’t think I’ll ever go back. No more cloud drives, no more ‘apps’. Webpages and local files on a file share I own and host.