I don’t think it’s so much about whether it seems “human”. It’s more about recognizing “tone”.
You can often recognize who the writer is from a text: columnists are a good example. They all write roughly the same amount about the same subjects, yet it is quite easy to tell them apart.
This comes down to word choice, sentence structure, and so on. You can usually recognize Arnoud’s longer pieces by the puns.
But for a news article, it’s not surprising that this tool says “possibly AI”, because in a news article the tone shouldn’t differ much depending on who wrote it. Maybe someone from Tweakers.net will correct me, but I assume there is a standard structure.
A well-known example is the New York Times, which uses the “inverted pyramid” for most articles.
Those articles always give the facts about the subject first: what/where/when. Then they add important details and the broad outlines, and at the end you find the perspective of an “ordinary” person (a witness, or someone who saw it happen).
It is therefore perfectly possible for an article to read well and still be recognizable as AI-generated, because it, for example, uses certain sentence structures.
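As a toy illustration (my own sketch, not how any real detector works): even crude stylometric features, such as sentence-length distribution and vocabulary richness, can be measured and compared between writers. The function name and features here are made up for the example.

```python
import re
from statistics import mean, pstdev

def style_features(text):
    """Compute toy stylometric features of a text:
    mean sentence length (in words), spread of sentence lengths,
    and type-token ratio (a crude measure of vocabulary richness)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    return {
        "mean_sentence_len": mean(lengths),
        "sentence_len_spread": pstdev(lengths),
        "type_token_ratio": len(set(words)) / len(words),
    }

sample = ("The fire started at noon. Firefighters arrived quickly. "
          "A witness said she heard a loud bang.")
print(style_features(sample))
```

Comparing such feature vectors across texts is the basic idea behind authorship attribution; real detectors use far richer features, but the principle is the same.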
After all, the purpose of ChatGPT is not to hide the fact that its output was written by an AI; it is to generate meaningful, correct text with little effort.
[Comment edited by Keypunchie on 31 January 2023 20:37]