The editorial teams at US online magazine Gizmodo and its sister brands are frustrated after an error-riddled article generated by AI chatbots was published over their heads. The Washington Post reports this, quoting internal messages, among other things. A statement subsequently shared internally by Merrill Brown, saying that management wanted to collect feedback and continue the AI experiments, drew 16 👎 reactions, 11 🗑️, six 🤡, and two each of 🤦 and 💩. The responsible union, the GMG, called the move "unethical and unacceptable" on Twitter and advised readers not to click on articles written by a "bot".
“poorly written”
The dispute concerns an article published last week on io9, Gizmodo's science fiction and entertainment portal: a "chronological list of Star Wars movies and series", bylined "Gizmodo Bot". It was full of errors: series such as "Andor", "Obi-Wan Kenobi" and "The Book of Boba Fett" were missing, and the chronology was wrong. The responsible deputy editor-in-chief, James Whitbrook, explained on Twitter that he had been informed only 10 minutes in advance. He also published a letter to management in which he harshly criticized the move: the text was "shameful, unpublishable and disrespectful towards the employees and the public".
Whitbrook goes on to describe the text as "poorly written" and an attack on the authority and integrity of his own portal. It was shameful, he wrote, that his own team had to spend an enormous amount of time explaining to G/O Media's management what unacceptable mistakes had been made with the publication. That was his formal assessment; personally, he considered the article "absolute rubbish". He further stated that the article had been placed into their CMS (content management system) from outside the editorial office; nobody at io9 or Gizmodo had interacted with it in any way prior to publication.
According to the Washington Post, employees at Gizmodo and its various sister brands were informed earlier this week that "limited testing" of AI-generated text would begin. Serious errors were subsequently found in other articles as well. A few days earlier, the new editor-in-chief Merrill Brown had justified the use of ChatGPT, Bard and the like by pointing out that numerous portals report on technology, which is why AI initiatives had to be pursued relatively early in the technology's development.
“Humans have to take responsibility for mistakes, AI chatbots don’t”
The approach at Gizmodo is reminiscent of Cnet, where AI-written articles caused a stir at the beginning of the year; after widespread criticism, the practice was allegedly discontinued. In Germany, a special issue of the recipe magazine Lisa Kochen & Backen even appeared on newsstands in spring, filled entirely with content from text and image generators. Gizmodo reporter Lauren Leffer described the now-criticized use of AI to the Washington Post as a "transparent" attempt to generate more advertising revenue, which demoralized the editorial staff. It's not that humans don't make mistakes, she says, but they know they have to take responsibility for their texts and any errors in them; AI chatbots do not.
According to Leffer, the "Gizmodo Bot" article also received just 12,000 clicks, drastically fewer than an article about NASA that garnered 300,000 page views in the same period. "If you want to run a business that's all about tricking people into accidentally clicking links, then AI could be worth your time," she says. "But if you want to run a media company, maybe you should trust your editorial team to know what the readers want." A spokesman for G/O Media, on the other hand, described the experiment to the Washington Post as a success, assuring only that no jobs would be cut because of AI.
(my)
2023-07-11 09:10:41