I’m seeing more and more people post replies that are clearly ripped straight out of ChatGPT. It’s especially obvious when the cadence changes from the way ChatGPT ‘speaks’ to a very different tone at the end of a message, often suggesting someone check the documentation, and these are generally not very helpful posts.
Now, I personally don’t like this, but I don’t think there are any forum rules or guidance against it. My main concern is that because the top 10 posters per month get prizes, there is an incentive for someone to just spam out a lot of low-quality replies using ChatGPT to boost their post count.
I propose the mods come up with some clear guidance on these things, perhaps that we need to cite when we are quoting ChatGPT, as I don’t think discouraging its use makes sense. Maybe I see a problem where others don’t. Curious for other thoughts.
Thank you for sharing this concern. You are not the only one who contacted us about this, and it is good to have some open discussion about it.
Let me first clarify what was already done in the matter:
I admit it might not be visible enough, but we do mention the use of LLMs in our Forum FAQ:
The scores for the monthly rewards are based almost entirely on the posts marked as a solution, and in no way on the raw number of posts made in total. Naturally, posting a lot increases one’s chance to grab a solution mark, but the solution content itself will still need to hold its ground to even stand a chance of getting the solution mark from the topic author.
As such, I personally believe that, at least as far as the monthly rewards are concerned, the impact of LLMs should be rather low. Having said that, we’d be happy to have a look at any particular examples where an LLM reply got the solution mark and it was not deserved.
Naturally, while the topic author has the biggest say about it, and we trust our users and their judgment, we can of course adjust the solution mark whenever needed. Such occurrences can be reported to us for further action via the flagging option (the “Something else” flag option with a short explanation can be used for this exact purpose).
I am happy to take action whenever needed. Let’s discuss things further in this thread; this is an invitation for anyone with an opinion on the matter to share it right away.
There’s also the fact that I hadn’t re-read them since that update.
When did that change? That wasn’t the case this time last year, when it was based on raw post count. I actually mentioned this to one of the UiPath Community people when I met her last year, so maybe that also got changed based on similar feedback and, again, I didn’t see it!
I think this gives me the guidance I needed: when I see an LLM response, I can remind the poster to cite that they used an LLM.
Hahahaha I literally just got a badge for reading the guidelines so I clearly hadn’t read them before now… whoops.
Nice change with the forum scoring!
I think this could be marked resolved; at least from my perspective I got the answer I needed. Up to you if you want to leave it here for a bit in case there is other feedback!
I’ll leave it open, so that anyone who has the same worry can reference it and provide their input. We can’t predict how things will evolve, and I’m sure there will be more users with similar thoughts.
This is something we have seen quite a lot and have raised it with the Community Team.
I’ve noticed this happening a lot since late May this year. I don’t think it is fair, nor is it helpful. It just produces more posts and demotivates people who really want to test their knowledge.
The way one can check if a post was from ChatGPT is by copying the post and asking OpenAI:
Did you write this: “the post content”
ChatGPT will then tell you whether or not the content was written by it.
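If anyone wants to script this check, here is a minimal sketch, assuming the official openai Python client and an OPENAI_API_KEY in the environment; the model name is illustrative, and keep in mind the reply is only the model’s own guess.

```python
# Minimal sketch of the check described above.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name is illustrative, and the reply is only the model's own guess.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def looks_like_chatgpt(post_content: str) -> str:
    """Ask the model whether it wrote the given forum post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "user", "content": "Did you write this: " + post_content},
        ],
    )
    return response.choices[0].message.content

print(looks_like_chatgpt("Certainly! Please check the official documentation for details..."))
```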
The real challenge is when people can self-host these models, which I also suspect is happening. There is no way to verify if the post was written by an LLM or not.
For example, a user can query OpenAI’s ChatGPT for an answer to a forum question, then use a self-hosted LLM to summarize the response from ChatGPT. This way, the text they post will not be flagged by ChatGPT, rendering the verification useless.
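To make that workaround concrete, here is a minimal sketch, assuming a self-hosted model behind an OpenAI-compatible endpoint (for example an Ollama instance on localhost); the base_url, api_key, and model name are all illustrative.

```python
# Sketch of the workaround described above: a self-hosted model rewrites a
# ChatGPT answer so that asking ChatGPT "Did you write this?" no longer
# matches its own wording.
# Assumes a local OpenAI-compatible server (e.g. Ollama); the base_url,
# api_key, and model name below are illustrative.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def rephrase_locally(chatgpt_answer: str) -> str:
    """Have the self-hosted model rewrite the ChatGPT answer in new words."""
    response = local.chat.completions.create(
        model="llama3",  # whatever model the local server is hosting
        messages=[{
            "role": "user",
            "content": "Rewrite the following answer in your own words:\n\n"
                       + chatgpt_answer,
        }],
    )
    return response.choices[0].message.content
```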
These are exciting but sad times for all forums on the internet.
I am not against using LLMs; they make us better developers. I am against users not acknowledging the use of LLMs in their contributions. There is nothing wrong with standing on the shoulders of giants, but acknowledgments need to be provided along with the post.
Glad to see we are like-minded on this topic and notice the same trends, and I agree that the ‘clutter’ is something we should try to avoid.
I think it’s often quite easy to spot an LLM response based on the way it ‘talks’, especially when you compare it to other forum posts from the same user. Personally, I am going to start linking such posters to the FAQ and urging them to cite the LLM; that should hopefully push us in the right direction.
Agreed, awareness of the need to cite will probably avoid duplicate/long posts and also help the original query poster assess the responses accordingly. We know LLMs, as of now, hallucinate quite a bit.
I am sure there are others in the forum who will also help us with this.
Cheers.
Quick small update: to allow our community to more easily point out possible LLM contributions, next time you spot one, feel free to use the reaction on the post and see what happens.