
The Prejudices and Problems of DUO’s Data Handling: An In-depth Analysis

I am surprised that so many people here are immediately convinced that DUO has done nothing wrong, especially after the benefits affair. There are several ways in which DUO can (accidentally) make its data reflect prejudices.

Unnecessarily stored data

Think of unnecessarily stored data, such as the name and zip code of parents. A name is unnecessary because it is not specific: "Muhammad" is probably the most common name worldwide, so take any 5% of the world's population and a large part will have that name.

Confusing independent and dependent probability

Suppose there are three groups (A, B, C) in which every individual owns a car.
An independent probability of 1/3 means that across the entire population (A+B+C) 1/3 has a red car: 1/3 of A has a red car, 1/3 of B has a red car, and 1/3 of C has a red car.
A dependent probability of 1/3 means the chance varies with the group: say a 1/3 chance of a blue car in group B, 1/5 in group C, and 0 in group A. Sampling the population as a whole won't tell you this, and this is probably where the algorithm went wrong, especially once you throw a self-learning algorithm on top of it.
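The difference can be sketched in a few lines of Python; the groups and probabilities are the hypothetical ones from the example above:

```python
from fractions import Fraction

# Three equally sized groups, each individual owning one car.
groups = ["A", "B", "C"]

# Independent probability: every group has the same 1/3 chance of a red car,
# so the population-wide rate is also 1/3.
p_red = {g: Fraction(1, 3) for g in groups}
overall_red = sum(p_red.values()) / len(groups)   # Fraction(1, 3)

# Dependent (group-conditional) probability: the chance of a blue car differs
# per group. A single population-wide average still exists, but it hides the
# fact that group A has no blue cars at all.
p_blue = {"A": Fraction(0), "B": Fraction(1, 3), "C": Fraction(1, 5)}
overall_blue = sum(p_blue.values()) / len(groups)

print(overall_red)   # 1/3
print(overall_blue)  # 8/45 — one number, three very different realities
```

A sample taken over the whole population only ever sees `overall_blue`; the per-group structure is invisible unless you deliberately look for it.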

Now suppose each group has 10 individuals, and there is an independent 1/3 chance of a green car: 10 green cars across the whole population of 30. But 10 does not divide evenly over three groups, so one group ends up with one extra green car while the other two get 3 each.

In this example, group A has 4 green cars and B and C have 3 each. In a subsequent sample, taken after a period in which green cars sold in abundance, more attention is paid to group A, because it already had more. With that weighting, 4 samples are taken in A and 3 each in B and C. The chance of finding more in A is therefore higher, so next time A again gets a bigger share of the samples. This is the age-old self-fulfilling prophecy, and it should be watched carefully.
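That feedback loop can be shown with a small deterministic simulation (an expected-value sketch under the assumptions above: identical true behaviour in all groups, sampling effort weighted by past findings):

```python
# Group A starts with one extra observed green car purely by chance (4 vs 3),
# even though the true rate is identical everywhere.
observed = {"A": 4.0, "B": 3.0, "C": 3.0}
TRUE_RATE = 1 / 3   # the same for every group
BUDGET = 12.0       # samples available per round

for _ in range(10):
    total = sum(observed.values())
    for group in observed:
        # Sampling effort is weighted by past findings: groups that were
        # looked at more (and so produced more findings) get looked at more.
        effort = BUDGET * observed[group] / total
        observed[group] += effort * TRUE_RATE  # expected new findings

share_A = observed["A"] / sum(observed.values())
print(round(share_A, 3))  # 0.4 — A keeps 40% of the attention forever
```

Because every group "earns" new findings in proportion to the attention it gets, the initial chance fluctuation never washes out: group A permanently receives 40% of the samples, while its fair share would be 1/3. Nothing about group A's behaviour justifies this.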

Relying too much on older data

We mainly work with known data: people who were caught and investigated. People who are clever enough to escape scrutiny, or who operate right on the edge of the law, are not investigated and thus never come into view. The focus therefore shifts toward the variables that are already "certain".

Social situation / necessity

Not a technical point, but an important one: an algorithm does not look at why, only at where. Someone who lives away from home but still has to go home regularly, for mental-health support for example, is flagged sooner than someone who can keep playing the game. Add to that: people who know less about the rules, because of language difficulties or lack of help, are more likely to get caught.

Possible wrong conclusions

The people flagged may well be guilty, but a self-learning algorithm can reach the right conclusion with the wrong arguments. Example statements: Conclusion: the sun moves. Argument: the sun revolves around the earth. Conclusion: King's Day is a public holiday. Argument: there are plenty of parties being celebrated.
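The "known data" problem above is classic selection bias, and a toy example makes it concrete (the numbers are made up for illustration):

```python
# Hypothetical population: some people were investigated, some never were.
# A model trained only on investigated cases estimates the fraud rate of
# "people we already look at", not of the population as a whole.
population = (
    [{"investigated": True,  "fraud": True}]  * 30   # caught
  + [{"investigated": True,  "fraud": False}] * 70   # checked, clean
  + [{"investigated": False, "fraud": True}]  * 20   # evaded scrutiny
  + [{"investigated": False, "fraud": False}] * 80   # never looked at
)

known = [p for p in population if p["investigated"]]
naive_rate = sum(p["fraud"] for p in known) / len(known)
true_rate = sum(p["fraud"] for p in population) / len(population)

print(naive_rate)  # 0.3  — rate among the people already under scrutiny
print(true_rate)   # 0.25 — actual rate in the whole population
```

The 20 people who evaded scrutiny never enter the training data, so whatever made them invisible to investigators also makes them invisible to the model, and the focus keeps narrowing onto the groups that were already in view.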

The reverse is also possible: reaching wrong conclusions from correct arguments.

So, let the AP investigate DUO, and let us hope that people have not unjustly been made the scapegoat.

2023-07-05 19:23:29
#investigating #DUO #discrimination #fraud #control
