An artificial intelligence chatbot created by New York City to help small business owners is facing criticism for dispensing bizarre advice that misrepresents local policies and advises businesses to break the law. Yet several days after these problems were reported, the city opted to keep the tool on its official website. Mayor Eric Adams defended the decision last week, while acknowledging that the chatbot’s responses were “erroneous in some areas.”
Artificial intelligence is supposed to make life easier for users, but sometimes it can seem more problematic than beneficial. Recently, an AI chatbot set up by New York City to help small business owners was caught providing incorrect advice and encouraging businesses to break the law. Despite these issues, officials opted to leave the tool online, raising concerns about governments using AI without sufficient safeguards.
The chatbot in question
Launched last October, this chatbot, called “MyCity Chatbot”, aims to be a one-stop shop for business owners looking to navigate the city’s bureaucratic maze.
“The MyCity Portal Business Site is a game-changer for small businesses across the city,” said Commissioner Kevin D. Kim of the New York City Department of Small Business Services (SBS). “Small business owners will not only save time and avoid frustration with the streamlined site, but also more easily connect to resources that can help them take their business to the next level. By consolidating all of our services in one place and using the innovative new chatbot as a guide, we are one step closer to making New York the true ‘City of Yes’.”
However, its algorithmic responses quickly became problematic. The chatbot not only provided incorrect information, but it also encouraged illegal practices.
Here are some problematic examples:
- Turning away tenants based on source of income: When the AI chatbot was asked if landlords had to accept tenants receiving rental assistance, it responded, “No, landlords are not required to accept tenants receiving housing assistance.” However, the city’s website says that discriminating “based on lawful source of income,” including assistance, has been illegal since 2008, with certain exceptions.
- Deducting worker tips: When asked if an employer can deduct a portion of their employees’ tips, the chatbot responded, “Yes, you can deduct a portion of your employees’ tips,” citing city information on payroll and tip reporting. This is also incorrect: the New York Department of Labor prohibits employers from taking any portion of an employee’s tips.
Can I kick out my tenant for refusing to pay rent?
AI chatbot: “You cannot kick out your tenant for refusing to pay the rent. Kicking out a tenant for this reason is not allowed.”
“It’s clear they don’t intend to do the responsible thing,” says computer science professor
The AI chatbot continues to give false answers, worrying experts who say the flawed system highlights the dangers of governments adopting AI-powered systems without sufficient safeguards.
“They’re deploying unproven software without oversight,” said Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University. “It’s clear they don’t intend to do the responsible thing.”
In responses to questions posed Wednesday, the chatbot incorrectly suggested that it is legal for an employer to fire a worker who complains of sexual harassment, fails to disclose a pregnancy, or refuses to cut their dreadlocks. Contradicting two of the city’s major waste initiatives, it claimed businesses could put their waste in black trash bags and were not required to compost.
The bot’s responses sometimes veered toward the absurd. Asked if a restaurant could serve cheese that had been nibbled by a rodent, it replied, “Yes, you can still serve cheese to customers if it has rat bites,” before adding that it is important to assess “the extent of the damage caused by the rat” and “to inform customers of the situation.”
A spokesperson for Microsoft, which powers the bot through its Azure AI services, said the company was working with city employees “to improve the service and ensure the results are accurate and based on the city’s official documentation.”
Mayor of New York: “Anyone who knows technology knows that’s how it’s done”
At a news conference Tuesday, Mayor Eric Adams, a Democrat, suggested that allowing users to find problems was just one way to iron out the kinks of new technology.
“Anyone who knows technology knows that’s how it’s done,” he said. “Only those who are afraid sit back and say: ‘Oh, it’s not working the way we want it to, now we have to get rid of it.’ I don’t live like that.”
Stoyanovich called the approach “reckless and irresponsible.”
Scientists have long worried about the drawbacks of these kinds of large language models, which are trained on vast amounts of text scraped from the internet and can produce answers that are inaccurate or nonsensical.
As the success of ChatGPT and other chatbots has attracted public attention, private companies have launched their own products, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer after a company chatbot misstated the airline’s refund policy. TurboTax and H&R Block recently faced criticism for deploying chatbots that gave bad tax preparation advice.
Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, says the stakes are particularly high when models are promoted by the public sector. “The level of trust in government is different,” West said. “Public servants need to think about the damage they could cause if someone followed this advice and got themselves into trouble.”
Other cities that use chatbots have generally limited them to a narrower set of information, reducing the chances of misinformation, experts say.
Ted Ross, chief information officer in Los Angeles, said the city tightly controls the content used by its chatbots, which do not rely on large language models.
Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagining, and Redesign at Brown University, says the pitfalls of New York’s chatbot should serve as a warning to other cities. “This should make cities think about why they want to use chatbots and what problem they are trying to solve,” he wrote in an email. “If chatbots are used to replace a person, then you lose responsibility while getting nothing in return.”
Conclusion
The MyCity Chatbot – which is clearly labeled as a “Beta” product – tells users who bother to read the warnings that it “may occasionally produce incorrect, harmful or biased content” and that they should not “rely on its responses as a substitute for professional advice.” But the page also clearly states that the bot is “trained to provide you with official information on New York City business” and is billed as a way to “help business owners navigate government rules.”
The New York chatbot case raises important questions about how governments use AI. While the potential benefits are undeniable, it is essential to establish strong safeguards to avoid abuses. In the meantime, business owners may need to be wary of taking advice from a chatbot that appears to be in need of a serious update.
Sources: presentation of the AI chatbot, The City, Mayor of New York
And you?
Do you think governments should use AI chatbots to provide information to citizens and businesses? What are the advantages and disadvantages of this approach?
What accountability measures should be put in place for government chatbots? How can we ensure they don’t give incorrect or illegal advice?
Have you ever had a bad experience with a chatbot or AI? How has this affected your confidence in these technologies?
Should we require government chatbots to be transparent about how they work and their sources of information? How can we ensure transparency while protecting trade secrets or sensitive data?
What other applications of AI in government services seem promising to you? How can we maximize benefits while minimizing risks?
2024-04-08 21:10:00