A.I. has Opinions
We can balance the need for more accurate algorithms with the need for public transparency by checking and confirming the methods A.I. uses. By auditing whether the methods these algorithms rely on are ethical, we can catch problems at the source. A policy should be developed that requires regulation and confirmation of an algorithm's sources before it begins producing information for the public. Once those methods are deemed ethical and unbiased, they should be opened to the public for review and explained. That policy should prioritize transparency first, even if it means sacrificing some of the algorithm's possible accuracy.
Algorithms can only generate information and answers based on the information they have received. Depending on the people behind these algorithms, that information may or may not be biased, which is why there needs to be a policy that strives for transparency to expose it.
“One recent study by Harvard computer scientist Latanya Sweeney found that searches for names typically associated with Black people were more likely to bring up ads for criminal records” (Yachot). Algorithms will discriminate against others if the information they receive discriminates against others. In the case of searches for names typically associated with Black people, the results are generated by the algorithm behind them. The algorithm directly discriminates against Black people because of the association it has learned between Black people and criminal activity. This demonstrates the bias in how algorithms generate information and the lack of regulation these methods receive. Even if the output is accurate to the algorithm, it is only accurate relative to the information it was given. There must be a policy creating more transparency around these methods so that the public can see the discrimination for themselves and the algorithm can be held accountable. Accuracy is still important for an algorithm, and pushing racial stereotypes is the opposite of accuracy.
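The mechanism Sweeney describes can be illustrated with a toy sketch. The data below is entirely hypothetical (it is not from her study), and the "algorithm" is deliberately simple: an ad selector that learns the most frequently clicked ad for each name group will faithfully reproduce whatever skew exists in its training log.

```python
from collections import Counter

# Hypothetical, skewed click log: each record pairs a searched name group
# with the ad that was clicked. The skew is in the data, not the code.
click_log = [
    ("group_a", "criminal_records_ad"), ("group_a", "criminal_records_ad"),
    ("group_a", "background_check_ad"),
    ("group_b", "job_listing_ad"), ("group_b", "job_listing_ad"),
    ("group_b", "criminal_records_ad"),
]

def train_ad_selector(log):
    """Learn, per name group, the single most frequently clicked ad."""
    counts = {}
    for group, ad in log:
        counts.setdefault(group, Counter())[ad] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train_ad_selector(click_log)
print(model["group_a"])  # criminal_records_ad: the log's skew resurfaces
print(model["group_b"])  # job_listing_ad
```

Nothing in the selector mentions race, yet its output discriminates, because the training data does; this is why auditing an algorithm's sources matters as much as auditing its code.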
Transparency in algorithms’ methods is needed so there is an explanation for why they generate the answers they do, and that transparency will build users’ trust in these platforms. These algorithms influence users’ decisions and opinions every day, so users must feel confident that they are not being swayed in a particular direction.
“Twitter, Facebook, Youtube and other social companies continue to face increasing scrutiny from academics and Congress alike for how their algorithms may reflect or reinforce people's unintended racial, gender-based, or political biases, as evidence continues to accumulate that most widely-used algorithms often reflect and then enhance the biases present in the data used to create them” (Kramer). These everyday platforms were built on opinionated and biased information that is eventually fed back to millions of users, heavily influencing how they think about issues. Transparency in these algorithms will help users understand how and why they are being given the information they see, so they can judge whether they are being manipulated. These algorithms hold too much power over the public not to be transparent; that transparency is their responsibility.
We should demand that machines be better at generating information, even if that means they do so better than humans.
According to a report created by the Automated Decision Systems Task Force, "’The use of ADS is increasing both within New York City government and in municipalities across the United States,’ the report notes, ‘as cities continue to see value in delivering services more quickly and effectively to residents who depend on them, streamlining decision-making processes, expanding their abilities to help their residents, and attempting to identify and remove any human bias from their work’” (Cox). The use of algorithms is increasing every day, not just by the average user but by businesses, services, and governments. Nowadays, the public depends on these algorithms. It is important to confirm that these algorithms perform better than humans, now that fewer humans are involved in generating this information.
These algorithms must avoid human mistakes, biases, and opinions so that they remain beneficial for everyone. They must generate information better than we do, because they are more capable of giving impartial and accurate information. By doing so, they can improve and expand the uses we have for them.
Bibliography
Cox, Kate. “NYC Wants a Chief Algorithm Officer to Counter Bias, Build Transparency.” Ars Technica, 25 Nov. 2019, arstechnica.com/tech-policy/2019/11/nyc-wants-a-chief-algorithm-officer-to-counter-bias-build-transparency/.
Kramer, Anna. “Twitter Will Share How Race and Politics Shape Its Algorithms.” Protocol, 14 Apr. 2021, www.protocol.com/bulletins/twitter-race-politics-ai.
Yachot, Noa. “Your Favorite Website Might Be Discriminating against You: ACLU.” American Civil Liberties Union, 27 Feb. 2023, www.aclu.org/news/privacy-technology/your-favorite-website-might-be-discriminating-against-you.