Artificial intelligence and criminal proceedings: The use of the technique in the violation of rights

Authors

Catiane Steffen

Keywords:

algorithms, artificial intelligence, criminal procedure, discrimination, violation of rights

Abstract

This article presents an introductory study on how artificial intelligence may reproduce discrimination and other rights violations in criminal prosecution. The methodology is based on the dialectical approach, and the research technique is indirect documentation, chiefly a bibliographical review. The main conclusion is that, contrary to the claim of some authors that automated decision-making would be more objective, consistent, and neutral, decision-making guided by mechanisms that emulate human behavior may in fact intensify rights violations. The main contribution of this work is to demonstrate the need to recognize that the development, exploration, and use of artificial intelligence, or of any algorithm, must take place within a commitment to respecting and promoting human rights.

Author Biography

Catiane Steffen, Pontifícia Universidade Católica do Rio Grande do Sul. Rio Grande do Sul. Brasil

Doctoral candidate in Law at the Pontifícia Universidade Católica do Rio Grande do Sul. Bachelor of Laws. Researcher and scientist.

References

AÏMEUR, Esma. Human versus artificial intelligence. In: FRONTIERS IN ARTIFICIAL INTELLIGENCE, Soesterberg, 25 mar. 2021. Disponível em: https://www.frontiersin.org/articles/10.3389/frai.2021.622364/full. Acesso em: 25 abr. 2022.

ANDRÉS, Núria. La verdad y la ficción de la inteligencia artificial en el proceso penal. In: FUENTES, Jesús; HOYO, Gregorio (coords.). La justicia digital en España y la Unión Europea. Barcelona: Atelier, 2019. p. 31-39.

ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine Bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. In: PROPUBLICA. Chicago, 23 mai. 2016. Disponível em: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Acesso em: 25 abr. 2022.

BABUTA, Alexander; OSWALD, Marion. Data analytics and algorithmic bias in policing. In: ROYAL UNITED SERVICES INSTITUTE FOR DEFENCE AND SECURITY STUDIES. Londres, 15 jul. 2019. Disponível em: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/831750/RUSI_Report_-_Algorithms_and_Bias_in_Policing.pdf. Acesso em: 25 abr. 2022.

BARRETT, Lindsey. Ban facial recognition technologies for children and for everyone else. Boston University Journal of Science and Technology Law. Boston, v. 26, n. 2, p. 223-285, jul. 2020. Disponível em: https://www.bu.edu/jostl/files/2020/08/1-Barrett.pdf. Acesso em: 25 abr. 2022.

BARTNECK, Christoph; LÜTGE, Christoph; WAGNER, Alan; WELSH, Sean. What is AI? In: BARTNECK, Christoph; LÜTGE, Christoph; WAGNER, Alan; WELSH, Sean (coords.). An Introduction to Ethics in Robotics and AI. Cham: Springer, 2020. p. 5-16. Disponível em: https://link.springer.com/chapter/10.1007/978-3-030-51110-4_2. Acesso em: 25 abr. 2022.

CORBETT-DAVIES, Sam; PIERSON, Emma; FELLER, Avi; GOEL, Sharad. A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post, Washington, 17 out. 2016. Disponível em: https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/. Acesso em: 25 abr. 2022.

CORBETT-DAVIES, Sam; GOEL, Sharad; GONZÁLEZ-BAILÓN, Sandra. Even imperfect algorithms can improve the criminal justice system. New York Times, Nova York, 20 dez. 2017. Disponível em: https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html#:~:text=the%20main%20story-Even%20Imperfect%20Algorithms%20Can%20Improve%20the%20Criminal%20Justice%20System,biased%20nature%20of%20human%20decisions. Acesso em: 25 abr. 2022.

CRAWFORD, Kate. Artificial intelligence’s white guy problem. New York Times, Nova York, 25 jun. 2016. Disponível em: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html. Acesso em: 25 abr. 2022.

ESCOBEDO, Fernanda; MEZA, Iván; TREJO, Sofía. Hacia los comités de ética en inteligencia artificial. In: ARXIV - CORNELL UNIVERSITY. Ithaca, 11 fev. 2020. Disponível em: https://arxiv.org/ftp/arxiv/papers/2002/2002.05673.pdf. Acesso em: 25 abr. 2022.

EUBANKS, Virginia. Automating inequality: how high-tech tools profile, police, and punish the poor. Nova York: St. Martin’s Press, 2018. p. 12.

EUROPA. Parlamento Europeu. Propuesta del Parlamento Europeo sobre la inteligencia artificial en el derecho penal y su utilización por las autoridades policiales y judiciales en asuntos penales. Bruxelas, BE: Parlamento Europeu, 2020. Disponível em: https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_ES.html. Acesso em: 25 abr. 2022.

EUROPA. Comissão Europeia. Livro Branco sobre a inteligência artificial - uma abordagem europeia virada para a excelência e a confiança. Bruxelas, BE: Comissão Europeia, 2020. Disponível em: https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_pt.pdf. Acesso em: 25 abr. 2022.

FENOLL, Jordi. Inteligencia artificial y proceso judicial. Madri: Marcial Pons Ediciones Jurídicas y Sociales, 2018.

GUTIERREZ, Miren. Algorithmic gender bias and audiovisual data: a research agenda. International Journal of Communication, [s. l.], v. 15, n. 1, p. 439-461, jan. 2021. Disponível em: https://ijoc.org/index.php/ijoc/article/viewFile/14906/3333. Acesso em: 25 abr. 2022.

JORDAN, Michael. Artificial intelligence - the revolution hasn’t happened yet. In: HARVARD DATA SCIENCE REVIEW – MIT PRESS, Cambridge, 01 jul. 2019. Disponível em: https://hdsr.mitpress.mit.edu/pub/wot7mkc1/release/9. Acesso em: 25 abr. 2022.

KAPLAN, Andreas; HAENLEIN, Michael. A brief history of artificial intelligence: on the past, present, and future of artificial intelligence, California Management Review, Califórnia, v. 61, n. 4, p. 5-14, ago. 2019. Disponível em: https://journals.sagepub.com/doi/10.1177/0008125619864925. Acesso em: 25 abr. 2022.

LOHR, Steve. Facial recognition is accurate, If you’re a white guy. New York Times, Nova York, 09 fev. 2018. Disponível em: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html. Acesso em: 25 abr. 2022.

MÁNTARAS, Ramon. Towards artificial intelligence: advances, challenges, and risks. Metode Science Studies Journal, [s. l.], v. 1, n. 9, p. 119-125, mar. 2019. Disponível em: https://ojs.uv.es/index.php/Metode/article/view/11145. Acesso em: 25 abr. 2022.

O’NEIL, Cathy. Weapons of math destruction: how big data increases inequality and threatens democracy. Nova York: Broadway Books, 2017. p. 10.

POOLE, David; MACKWORTH, Alan. Artificial intelligence foundations of computational agents. Cambridge: Cambridge University Press, 2017.

RODRÍGUEZ, Ana. The impact of artificial intelligence on penal procedure. Anuario de la Facultad de Derecho de la Universidad de Extremadura, Cáceres, v. 1, n. 36, p. 695-728, dez. 2020. Disponível em: https://publicaciones.unex.es/index.php/AFD/article/view/489. Acesso em: 25 abr. 2022.

RUSSELL, Stuart; NORVIG, Peter. Artificial intelligence: a modern approach. New Jersey: Pearson, 2020.

SCHWAB, Klaus. A quarta revolução industrial. São Paulo: Edipro, 2016.

SHAPIRO, Aaron. Predictive policing for reform? Indeterminacy and intervention in big data policing. Surveillance and Society, [s. l.], v. 17, n. 3-4, p. 456-472, set. 2019. Disponível em: https://doi.org/10.24908/ss.v17i3/4.10410. Acesso em: 25 abr. 2022.

SIEGEL, Eric. How to fight bias with predictive policing. In: SCIENTIFIC AMERICAN. Nova York, 19 fev. 2018. Disponível em: https://blogs.scientificamerican.com/voices/how-to-fight-bias-with-predictive-policing. Acesso em: 25 abr. 2022.

SMITH, Aaron. Attitudes toward algorithmic decision making. In: PEW RESEARCH CENTER. Washington, 16 nov. 2018. Disponível em: https://www.pewresearch.org/internet/2018/11/16/attitudes-toward-algorithmic-decision-making/. Acesso em: 25 abr. 2022.

STRAUß, Stefan. From big data to deep learning: a leap towards strong AI or ‘intelligentia obscura’?. Big Data and Cognitive Computing, [s. l.], v. 2, n. 3, p. 1-19, jul. 2018. Disponível em: https://www.mdpi.com/2504-2289/2/3/16/pdf. Acesso em: 25 abr. 2022.

WACHTER-BOETTCHER, Sara. How Silicon Valley’s blind spots and biases are ruining tech for the rest of us. Washington Post, Washington, 13 dez. 2017. Disponível em: https://www.washingtonpost.com/news/posteverything/wp/2017/12/13/how-silicon-valleys-blind-spots-and-biases-are-ruining-tech-for-the-rest-of-us/. Acesso em: 25 abr. 2022.

YAROVENKO, Vasily; SHAPOVALOVA, Galina; ISMAGILOV, Rinat. Some problems of using the facial recognition system in law enforcement activities. Правовое государство теория и практика, [s. l.], v. 17, n. 1, p. 189-200, mar. 2021. Disponível em: https://pravgos.ru/index.php/journal/article/view/160. Acesso em: 25 abr. 2022.

Published

2022-11-15

How to Cite

Steffen, C. (2022). Artificial intelligence and criminal proceedings: The use of the technique in the violation of rights. Revista Da EMERJ, 25(1), 105–129. Retrieved from https://ojs.emerj.com.br/index.php/revistadaemerj/article/view/454
