What you should know
- An AI-powered chatbot created by New York City to help small business owners is coming under fire for providing bizarre advice that misrepresents local policies and advises businesses to break the law.
- But days after tech news outlet The Markup first reported on the issues last week, the city opted to leave the tool on its official government website.
- Mayor Eric Adams defended the decision this week even as he acknowledged the chatbot’s responses were “incorrect in some areas.”
NEW YORK — An AI-powered chatbot created by New York City to help small business owners is coming under fire for providing bizarre advice that misrepresents local policies and advises businesses to break the law.
But days after tech news outlet The Markup first reported on the issues last week, the city opted to leave the tool on its official government website. Mayor Eric Adams defended the decision this week even as he acknowledged the chatbot’s responses were “incorrect in some areas.”
Launched in October as a “one-stop shop” for business owners, the chatbot offers users algorithmically generated text answers to questions about how to navigate the city’s bureaucratic maze.
It includes a disclaimer that it may “occasionally produce incorrect, harmful or biased information” and the warning, since reinforced, that its responses are not legal advice.
It continues to provide false guidance, worrying experts who say the buggy system highlights the dangers of governments adopting AI-powered systems without sufficient guardrails.
“They are deploying untested software without oversight,” said Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University. “It is clear that they have no intention of doing what is responsible.”
In responses to questions posed Wednesday, the chatbot falsely suggested that it is legal for an employer to fire a worker who complains of sexual harassment, fails to disclose a pregnancy or refuses to cut their dreadlocks. Contradicting two of the city’s signature waste initiatives, it said businesses can put their trash in black trash bags and are not required to compost.
Sometimes the chatbot’s responses veered into the absurd. Asked whether a restaurant could serve cheese that a rodent had nibbled, it responded: “Yes, you can still serve cheese to customers if it has rat nibbles on it,” before adding that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”
A spokesperson for Microsoft, which powers the chatbot through its Azure AI services, said the company was working with city employees “to improve the service and ensure the outputs are accurate and grounded on the city’s official documentation.”
At a news conference Tuesday, Adams suggested that letting users surface problems is simply part of working out the kinks in any new technology.
“Anyone who knows the technology knows that’s how it’s done,” he said. “Only those who are afraid sit back and say, ‘Oh, this isn’t working out the way we want, now we all have to run away from this together.’ I don’t live that way.”
Stoyanovich called that approach “reckless and irresponsible.”
Researchers have long expressed concern about the drawbacks of chatbots built on large language models, which are trained on vast amounts of text scraped from the internet and are prone to producing inaccurate and nonsensical responses.
But as the success of ChatGPT and other chatbots caught the public’s attention, private companies launched their own products, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer after a company chatbot misstated the airline’s refund policy. Both TurboTax and H&R Block have faced recent criticism for deploying chatbots that provide poor tax preparation advice.
Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, said the stakes are high when models are promoted by the public sector.
“There’s a different level of trust that’s given to the government,” West said. “Public officials should consider what kind of harm they could cause if someone followed this advice and got into trouble.”
Experts say other cities that use chatbots have typically confined them to a narrower set of inputs, reducing the risk of misinformation.
Ted Ross, Los Angeles’ chief information officer, said the city carefully selected the content used by its chatbots, which do not rely on large language models.
The dangers of New York’s chatbot should serve as a warning to other cities, said Suresh Venkatasubramanian, director of the Center for Technology Responsibility, Reimagining and Redesign at Brown University.
“It should make cities think about why they want to use chatbots and what problem they are trying to solve,” he wrote in an email. “If chatbots are used to replace a person, then you lose responsibility and get nothing in return.”