Certainly! Let me elaborate on the concerns raised by the triager
This is typical LLM-speak; it sounds like a robot butler. I don’t think I have encountered a single person who writes like this. But it’s also got a weird third-person reference that indicates there is another party prompting for a response. I am okay with LLMs having a specific voice that makes them identifiable. My worry is that people will start talking like LLMs instead of LLMs sounding like people.
Daniel Stenberg[1] brought up a good point: the complexity here is that curl is used all over the world, and there's certainly nothing wrong with a non-English speaker using an LLM to help write their bug report. So the superficial giveaways that the English text is LLM-generated don't necessarily mean the report's content was LLM-generated.
[1] https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
I love this paragraph. I think that generative AI companies, especially OpenAI, have completely dropped the ball when it comes to their marketing.
The narrative (which these companies encourage and are often responsible for) is that AI is intelligent and will be a replacement for humans in the near future. So is it really a surprise when people do things like this?
LLMs don’t shine as independent agents. They shine when they augment our skills. Microsoft has the right idea by calling everything “copilot”, but unfortunately OpenAI drives the narrative, not Microsoft.
It's also a better company strategy to sell augmentation rather than replacement. Like, advertise that you can get twice as much done, not that you can get the same amount done with half the effort.
If somebody spends 10M on labor, then at best you can charge 10M to replace their labor costs. Let's say that's 1,000 people.
If you instead argue that those people are now 2x as efficient, you can sell the company on the idea of paying for 2,000 seats when it grows.
That assumes the company actually has 2x the work to do.
If not, the argument becomes:
a) Get rid of 1000 people
b) Get rid of 500 people, by making 500 people 2x efficient.
Option (a) is clearly better.
Option A is not clearly better because it assumes you can actually completely get rid of the 1000 people.
I don't think we've seen much evidence of this.
If anything, I do believe I've seen evidence that option B is more realistic and doable.
It always baffles me, though, how people pick what they believe with zero supporting evidence and zero context, just because it sounds better.
It's not zero context; this thread is about using AI to augment human productivity vs AI to replace humans.
The supporting evidence is just the math. (A) If I sell you a product that makes your employees twice as productive, then my revenue scales with your employee count. (B) If I sell you a product that eliminates your employees, then my _maximum_ revenue is what you currently spend on those employees. With (A) my revenue is uncapped, while with (B) it's capped at your current labor cost. I also didn't invent this approach, so there are other people who think this too.
It's not that (B) is bad; it's just that (A) is better. It's similar to, say, selling people a cable subscription without ads; it's just better (more revenue) to both sell them a subscription and show them ads.
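To make the scaling argument concrete, here is a minimal back-of-the-envelope sketch; every number in it (the payroll, the headcount, the per-seat price) is a hypothetical assumption, not a figure anyone in the thread gave.

    # Hypothetical numbers only: a $10M payroll, 1,000 employees,
    # and a made-up $5,000/year per-seat price.
    payroll = 10_000_000
    employees = 1_000
    seat_price = 5_000

    # "Replacement" pitch: revenue is capped by the labor cost you displace,
    # and it can only shrink as the customer cuts headcount.
    replacement_revenue_cap = payroll

    # "Augmentation" pitch: revenue scales with seats, so it grows with the
    # customer (e.g. if they double to 2,000 employees).
    augmentation_revenue_now = employees * seat_price                  # 5,000,000
    augmentation_revenue_if_they_double = 2 * employees * seat_price   # 10,000,000

    print(replacement_revenue_cap, augmentation_revenue_now, augmentation_revenue_if_they_double)

The only point of the sketch is the shape: the replacement number is a ceiling, while the augmentation number keeps scaling with the customer's headcount.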
You've flipped A and B.
Current employee cost may still be higher than the revenue you'd get from scaling with employees.
I've been using a chocolate factory analogy around this. These companies are making damn fine chocolate, without a doubt. Maybe even some of the best chocolate in the world. But they got tired of selling just chocolate and so started marketing their chocolate as cures for cancer, doctors, farmers, and all sorts of things that aren't... well... chocolate. Some people are responding by saying that the chocolate tastes like shit and others are true believers trying to justify the fact that they like the chocolate by defending the outrageous claims. But at the end of the day, it's just chocolate and it is okay to like it even if the claims don't hold up. So can't we just enjoy our chocolate without all the craziness? This seems to be a harder ask than I've expected.
If a chocolate factory was making deceptive claims about curing cancer, then -- regardless of chocolate quality -- I think a lot of people would very reasonably:
1. Stop eating that chocolate
2. Preface every recommendation of that chocolate with a clear disclaimer
I don't think it would be ethical to continue recommending the chocolate, only mentioning its benefits and being silent about the drawbacks.
The chocolate is useful though. So personally I preface it. But I don't know how to accurately communicate "Hey, the chocolate is tasty, but it doesn't cure cancer; can we stop saying it does?" without it being interpreted as "chocolate is horrible and will summon a random number of parrots who will drop paperclips on you until you die." I'm weirded out by how hard this is to communicate.
If the chocolate didn't have such utility I'd fully agree with you; that's the only slight disagreement I have. I definitely agree it is unethical to be selling the chocolate in this way, or overselling it in really any way. Likewise, I think it is unethical to deny its tastiness and exaggerate your dislike for it.
The author could have provided his original conversation with the LLM in his native language for anyone else to verify, so this is not an excuse for using an LLM to write crap. The point is that whether the author wrote the text himself or an LLM helped him write it, the author is ultimately responsible for the content; he should be answering for his own writing, not passing along raw LLM output.
That’s absolutely fair IMO, but they aren’t using it right if that’s what’s happening. LLMs can do a decent job translating, and they should have written their bug report in their native language and asked for a translation. Then translated the responses, written their own reply, and translated that back.
This appears to be “write a bug report about X” then “write a response to the triager for their reply Y”, without any intermediation, let alone fact-checking of the output.
That use doesn’t fall prey to LLM voice because the translation is of your text and phrasing.
God bless people who use LLMs to improve their life, translate, etc. But using them to think isn’t acceptable.
He makes a great point, but I think writing in an authentic, un-augmented voice will very quickly become the only way to be noticed. Which is a shame for otherwise benign uses of the tool.
Ok, we've changed the URL from https://hackerone.com/reports/2298307 to that link. Thanks!
I very much doubt people will start talking like LLMs, except perhaps those who rely too much on them. In which case, good, I’d like to have that information. A world where you can still identify LLMs is better than one where you cannot.
You almost made me curious enough to want to see if the writing style of LinkedIn posts has shifted in the last year.
Sadly I'd have to open LinkedIn and spend significant time on it to verify that suspicion, so I'll never know.
EDIT: Look, I'm sure there's good stuff on LI, but let's be honest: it is also full of really weird, cringy posts that are somewhat inevitable products of influencer culture mixing with company culture. If you don't think so you either lucked out and live in an amazing social media bubble or you're lying to yourself.
You've reminded me to go back and see what's new on https://www.shlinkedin.com.
Think of corporate speak, then ask yourself why not?
Pretty much; browsing LinkedIn is the most agonizing experience I've had on social media.
One group that is likely to directly or indirectly pick up much of their formal English-language writing from supposedly well-formed LLM examples is people who speak English as a second (or fifth) language. I wouldn't want to make too many inferences about their ability or effort from that, especially if LLM phrases become as popular amongst some ESL groups as "kindly revert back" or "do the needful".
Free non-LLM translation apps are plentiful, work well, and use fewer resources. I agree with your larger point, but at the same time it’s not like there isn’t choice.
I also don’t mean to imply I simply ignore people who use LLMs. Rather, the information is relevant for educational purposes.
By way of example: I frequent forums where I help users of a couple of systems I understand well. In the rare instances a user asks a question and a second user replies with wrong information, I make a correction and take the second user’s misinterpretation into account to craft an explanation that will better realign their mental model. I may even use that as a basis to improve the documentation. Everyone benefits, including those who arrive at the thread at a later date.
But if the second user used an LLM, they just wasted everyone’s time and detracted value. I’ll still need to correct the information for the first user, but there’s nothing I can do to help the second one, because there’s zero information regarding what they know or don’t. All the while, their post contains multiple errors which will confuse anyone who arrives later, perhaps via a search engine. If I can at least identify that the post came from an LLM, I can point out the pitfalls to the user and don’t need to waste a bunch more of my time and sanity correcting everything wrong with the post.
I use the word 'Certainly' a lot...not the rest of the stuff though. Feeling a bit self conscious about that now...
In writing?
Certainly!
The specific word choice isn't the key. The start of responses to most requests ends up looking like:
It's all very formulaic for something that is supposed to be generative. It's like they all spend some time training at Ditchley Park.

If the LLMs are emulating you (and others) in every other sentence, perhaps you should be getting royalties or something.
That would be a bit silly in this particular case, but in general we ought to celebrate cases where somebody has authentically done something that millions of others find it useful to copy.
Definitely a massive red flag; however, assuming it's an actual human forwarding that garbage, they could just have removed that line. The content would remain suspicious, but the flags would be harder to notice.
I think this is easier said than done. If you only have a basic understanding of a language, can you really accurately accomplish this? Sounding natural/native is a level that even many speakers never reach, despite being able to make themselves understood. So even the (arguably poor) argument of "just learn English" isn't that great. I'd also say that that argument is poor because you don't need to know any specific language to contribute. Isn't it actually a good thing that we can bridge these gaps and allow more people to contribute? Certainly we should reduce noise, but I think this is far easier said than done (and some noise is even helpful at times).
I just don't think there are easy answers, no matter how much we want there to be. We should be careful to not lose nuance to our desires.
In India, English is taught to a colonial, servant-class British spec ("butlerian").
I assume you've never had to deal with Microsoft enterprise tech support if you haven't encountered it before now.
I’ve never had an LLM ask me to “please do the needful.”
I hope someone, somewhere is writing dystopian scifi where our robot overlords are constantly apologising and saying things like "Ultimately, your decision on whether to surrender will depend on your specific needs and preferences"