ChatGPT Replies

Hello @Jon_Smith,

This is something we have seen quite a lot and have raised with the Community Team.
I have noticed it happening frequently since late May this year. I don't think it is fair, nor is it helpful: it just produces more posts and demotivates people who really want to test their knowledge.

One way to check whether a post came from ChatGPT is to copy the post and ask OpenAI directly:

Did you write this: + "the post content"

ChatGPT will then tell you whether the content looks like something it wrote, although this self-check is not fully reliable.
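For anyone who wants to run this check against the API instead of the chat UI, here is a minimal sketch. The endpoint URL and payload shape follow OpenAI's public Chat Completions API; the model name is an example, and whether the reply is authoritative is the assumption discussed above.

```python
# Sketch: ask ChatGPT "Did you write this" via the Chat Completions API.
# Requires OPENAI_API_KEY in the environment when actually called.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(post_text: str, model: str = "gpt-4o") -> dict:
    # "Did you write this: " + the post content, exactly as described above.
    return {
        "model": model,  # example model name, not a recommendation
        "messages": [
            {"role": "user", "content": f"Did you write this: {post_text}"}
        ],
    }

def ask_chatgpt(post_text: str) -> str:
    # Sends the verification prompt and returns the model's answer.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(post_text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Only the prompt construction is fixed here; the answer you get back is still just the model's opinion of its own output.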

The real challenge is when people self-host these models, which I also suspect is happening. There is then no way to verify whether a post was written by an LLM or not.

For example, a user can query OpenAI's ChatGPT for an answer to a forum question, then use a self-hosted LLM to summarize ChatGPT's response. The text they post will then no longer be flagged by ChatGPT, rendering the check redundant.
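To make the laundering workflow above concrete, here is a hedged sketch assuming a local Ollama server for the self-hosted step (its `/api/generate` endpoint is real; the model name and the summarization prompt are my own examples). After this pass, the posted text no longer matches anything ChatGPT produced.

```python
# Sketch: re-word a ChatGPT answer with a self-hosted model (Ollama here),
# defeating the "Did you write this" check described earlier.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_summarize_request(chatgpt_answer: str, model: str = "llama3") -> dict:
    # Ask the local model to restate the answer in its own words.
    return {
        "model": model,  # example local model name
        "prompt": (
            "Summarize the following answer in your own words:\n\n"
            + chatgpt_answer
        ),
        "stream": False,  # return one JSON object instead of a stream
    }

def launder(chatgpt_answer: str) -> str:
    # Sends the summarization request to the local Ollama server.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_summarize_request(chatgpt_answer)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

The point is not the specific tooling but that any locally hosted model breaks the chain back to the original provider.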

These are exciting but sad times for all forums on the internet.

I am not against using LLMs; they make us better developers. I am against users not acknowledging the use of LLMs in their contributions. There is nothing wrong with standing on the shoulders of giants, but acknowledgments need to be provided along with the post.