I got this response from a 70+ Catholic Priest. Quite literally nothing in this world is sacred or real anymore.
Considering that despite going over lvl 70, he decided on Catholic Priest instead of Saint, Warlock, or Archmage, it should already make you question his decision-making ability.
Someone literally copied and pasted a whole ChatGPT comment into an email reply to some questions I’d asked them. I was somewhat insulted.
It’s only a problem if they claimed it as their own or it didn’t add value, AND it wasted your time as a result.
Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and which was known as Google-Fu before Google search turned to shit).
To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately… others might be super busy, or just dicks who don’t respect you or your time.
Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.
I’m getting that more and more. “I asked ChatGPT and it said”. Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.
Make sure they know they just lost input rights the next time. No, I don’t ask Harry; he just quoted GPT last time, and I’d already asked it this time, so there was no reason to involve him. Nothing is worse for a lead than people not wanting them to lead because they’ve abdicated the job to spicy autocorrect.
Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.
To me it’s like sending the “let me google that for you” link to answer a question. It’s just bad form. I don’t want your whole reasoning trace, man, I just want to know what you understand of it, and maybe you’ll catch some detail I’m missing or whatever. It’s simple: I won’t read LLM output. My colleagues know it and I get shit for it, but no, I am not digesting this material for you. Give me a three-bullet-point version in your own words; the point is not just the data exchange, it’s also to make sure you are aware of the answer and we have a common truth.
Or failing that, just give me the fucking prompt and at least I’ll know if you understand the question.
Or failing that, just give me the fucking prompt and at least I’ll know if you understand the question.
This one’s really nice. I should make this my go-to response to anyone doing that.
Sure… copy & paste is copy & paste.
However, LLMs can help formulate a scattered braindump of thoughts and opinions into a coherent argument/position, fact-check claims, and highlight faulty thinking.
I am happy if someone uses AI first to come up with a coherent message, bug report, or question.
I am annoyed if it’s ill-researched or ill-understood nonsense, AI-assisted or not.
Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.
…fact-check claims
Risky use case. Besides, why bother when you have to fact-check the fact-checker?
It is about respecting everyone’s time…
For example, if an executive were to claim “We don’t have any solution to X in the company” in an email as justification for investment in a vendor, it might cost other people hours as they dig into it. However, if AI had fact-checked it first by searching code repos, wikis, and tickets and found it wasn’t true, then maybe that email wouldn’t have been sent at all, or it would have acknowledged the existing product and led to a crisper discussion.
AI responses often only need a quick sniff test by a human (e.g., click the provided link to confirm)… whereas BS can derail your day.
We should share our knowledge and intelligence with AIs and people alike, and not ignorance. Use the tools at our disposal to avoid wasting others’ valuable time, and encourage others to do the same.
Until they solve the AI hallucination problem, I’ll never be able to trust it.
That doesn’t seem like a solvable thingy.
People tend to make stuff up, too; the difference is that a human’s bluff is revealed in non-verbal communication. Hallucination is a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context attached to each piece of information (no idea what it’s called).
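(To illustrate the “feature of text prediction” point, here is a toy Python sketch; the vocabulary and probabilities are made up. A next-token sampler has no built-in “I don’t know” output, so even a nearly uniform, essentially clueless distribution still produces a fluent-sounding answer.)

    import random

    # Toy next-token distribution: the "model" is basically guessing,
    # yet sampling must still emit *something*.
    vocab = ["Paris", "Lyon", "Nice", "Toulouse"]
    probs = [0.28, 0.26, 0.24, 0.22]  # nearly uniform = low confidence

    # random.choices picks one token in proportion to its weight;
    # there is no "abstain" option unless you explicitly add one.
    print(random.choices(vocab, weights=probs, k=1)[0])

Whatever it prints reads just as confident as a high-probability answer would.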
I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.
I believe it’s just complexity and token/compute usage.
You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).
It’s also 100% unfixable, since it’s baked into the premise of the technology. I can enjoy an upscaling algorithm that makes my retro games look more detailed at the cost of the odd artifact, but I sure as shit am not taking that risk for information gathering and general study.
Nobody says to blindly trust it…
ChatGPT isn’t on the team.
Except that when someone pastes “ChatGPT thinks that {wall of AI-generated text}”,
that person has put ChatGPT on the team. And if there was no human input, the competition is free to reuse it and mock it word for word. Use fear, uncertainty, and doubt to convince your team: if it’s published, anyone can use it, including your competition.
The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection.
https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability