“Labeling Quality and Performance Bias: How They Impact User Trust in Artificial Intelligence”

Researchers at Penn State University have published a study indicating that accurately labeled visual data can help people trust AI systems more. The study found that high-quality image labeling led participants to believe the training data was reliable, whereas systems that appeared biased lowered trust. The researchers suggest that new ways may be needed to assess user perceptions of “training data credibility”, for example by allowing users to sample the labeled data, with one academic stating it was “ethically important for companies to show the users how the training data has been labelled”.
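The suggestion that users be allowed to sample the labeled data could take the form of a simple review feature. The Python sketch below is purely illustrative and not drawn from the study: the file names, labels, and the sample_for_review function are hypothetical, and it shows only one way an application might surface a small random sample of image and label pairs for a user to inspect.

import random

# Hypothetical labeled training data a user might be shown.
# Paths and labels are invented for illustration only.
labeled_data = [
    ("images/cat_001.jpg", "cat"),
    ("images/dog_014.jpg", "dog"),
    ("images/cat_202.jpg", "dog"),   # a mislabeled example a reviewer could catch
    ("images/bird_090.jpg", "bird"),
    ("images/dog_311.jpg", "dog"),
]

def sample_for_review(data, k=3, seed=None):
    """Return k randomly chosen (image_path, label) pairs for user review."""
    rng = random.Random(seed)
    return rng.sample(data, min(k, len(data)))

for path, label in sample_for_review(labeled_data, k=3, seed=42):
    print(f"{path} -> labeled as '{label}'")

Presenting even a small random sample like this would let a user spot obvious labeling errors, which is the kind of transparency the researchers describe as supporting perceptions of training data credibility.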
