DEEP LEARNING AND THE RIGHT TO EXPLANATION: TECHNOLOGICAL CHALLENGES TO LEGALITY AND DUE PROCESS OF LAW

Main Article Content

Mateus de Oliveira Fornasier
http://orcid.org/0000-0002-1617-4270

Abstract

This article studies the right to explanation, which is extremely important in times of fast technological evolution and of deep learning being used in the most varied decision-making procedures based on personal data. Its main hypothesis is that the right to explanation is intrinsically linked to due process of law and to legality, serving as a safeguard for those who need to contest automated decisions taken by algorithms, whether in judicial contexts, in Public Administration generally, or even in private entrepreneurial contexts. Using the hypothetical-deductive method, a qualitative and transdisciplinary approach, and the technique of bibliographic review, it concludes that the opacity characteristic of the most complex deep learning systems can impair access to justice, due process of law, and the adversarial principle. In addition, it is important to develop strategies to overcome opacity, chiefly (but not only) through the work of experts. Finally, the Brazilian LGPD provides for a right to explanation, but the lack of clarity in its text demands that the Judiciary and researchers also make efforts to better construct its regulation.


Article Details

How to Cite
FORNASIER, Mateus de Oliveira. DEEP LEARNING AND THE RIGHT TO EXPLANATION: TECHNOLOGICAL CHALLENGES TO LEGALITY AND DUE PROCESS OF LAW. Revista de Direito Brasileira, Florianópolis, Brasil, v. 32, n. 12, p. 218–235, 2023. DOI: 10.26668/IndexLawJournals/2358-1352/2022.v32i12.7547. Available at: https://indexlaw.org/index.php/rdb/article/view/7547. Access in: 5 nov. 2024.
Section
PARTE GERAL
Author Biography

Mateus de Oliveira Fornasier, Universidade Regional do Noroeste do Estado do Rio Grande do Sul (UNIJUI)

Professor in the Stricto Sensu Graduate Program (Master's and Doctorate) in Law at the Universidade Regional do Noroeste do Estado do Rio Grande do Sul (UNIJUI). Doctor of Law from the Universidade do Vale do Rio dos Sinos (UNISINOS), with a Post-Doctorate from the University of Westminster (United Kingdom).

References

ADADI, Amina; BERRADA, Mohammed. Peeking inside the Black-Box: a survey on Explainable Artificial Intelligence (XAI). IEEE Access, v. 6, p. 52138-52160, 2018. DOI: https://doi.org/10.1109/ACCESS.2018.2870052.

ALMEIDA, Daniel Evangelista Vasconcelos. Direito à Explicação em Decisões Automatizadas. In: ALVES, Isabella Fonseca (org.). Inteligência Artificial e Processo. Belo Horizonte; São Paulo: D’ Plácido, 2020, p. 95-114.

ARRIETA, Alejandro Barredo et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, v. 58, p. 82-115, 2020. DOI: https://doi.org/10.1016/j.inffus.2019.12.012.

BATHAEE, Yavar. The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law and Technology, v. 31, n. 2, p. 889-938, 2018. Available at: https://jolt.law.harvard.edu/volumes/volume-31. Access in: 15 jul. 2020.

BIONI, Bruno; LUCIANO, Maria. O princípio da precaução na regulação da inteligência artificial: seriam as leis de proteção de dados o seu portal de entrada? In: FRAZÃO, Ana; MULHOLLAND, Caitlin (coord.). Inteligência Artificial e Direito. São Paulo: Thomson Reuters Brasil, 2019, p. 207-231.

BRASIL. Câmara dos deputados. Projeto de Lei 21/2020. Available at: https://www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2236340. Access in: 15 jul. 2020.

BRASIL. Lei nº 13.709, de 14 de agosto de 2018. Lei Geral de Proteção de Dados (LGPD). Available at: http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709.htm. Access in: 15 jul. 2020.

BRASIL. Senado Federal. Projeto de Lei 5091/2019. Available at: https://www25.senado.leg.br/web/atividade/materias/-/materia/138790. Access in: 15 jul. 2020.

BRASIL. Senado Federal. Projeto de Lei 5691/2019. Available at: https://www25.senado.leg.br/web/atividade/materias/-/materia/139586. Access in: 15 jul. 2020.

CASEY, Bryan; FARHANGI, Ashkon; VOGL, Roland. Rethinking explainable machines: the GDPR's "right to explanation" debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal, v. 34, n. 1, p. 143-188, 2019. DOI: https://doi.org/10.15779/Z38M32N986.

CIATTO, Giovanni et al. Agent-Based Explanations in AI: Towards an Abstract Framework, 2020. Available at: https://www.researchgate.net/profile/Davide_Calvaresi/publication/341509975_Agent-Based_Explanations_in_AI_Towards_an_Abstract_Framework/links/5ec5020b299bf1c09acc036d/Agent-Based-Explanations-in-AI-Towards-an-Abstract-Framework. Access in: 15 jul. 2020.

DEEKS, Ashley. The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, v. 119, n. 7, p. 1829-1850, 2019. Available at: https://columbialawreview.org/content/the-judicial-demand-for-explainable-artificial-intelligence/. Access in: 15 jul. 2020.

DE STREEL, Alexandre et al. Explaining the Black Box: when Law controls AI. Brussels: Centre on Regulation in Europe (CERRE), 2020. Available at: http://www.crid.be/pdf/public/8578.pdf. Access in: 15 jul. 2020.

EUROPEAN COMMISSION. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. Brussels: European Commission, 2019. Available at: https://ec.europa.eu/futurium/en/ai-alliance-consultation. Access in: 15 jul. 2020.

EUROPEAN UNION. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016. On the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=PT#d1e1797-1-1. Access in: 15 jul. 2020.

FENWICK, Mark; VERMEULEN, Erik P. M. It Is Time for Regulators to Open the ‘Black Box’ of Technology. Lex Research Topics in Corporate Law & Economics Working Paper, n. 2019-2, p. 1-17, 2019. Available at: https://ssrn.com/abstract=3379205. Access in: 15 jul. 2020.

KAMINSKI, Margot E. The Right to Explanation, Explained. Berkeley Technology Law Journal, v. 34, n. 1, p. 189-218, 2019. DOI: https://doi.org/10.15779/Z38TD9N83H.

MALGIERI, Gianclaudio; COMANDÉ, Giovanni. Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, v. 7, n. 4, p. 243-265, 2017. DOI: 10.1093/idpl/ipx019.

MULHOLLAND, Caitlin; FRAJHOF, Isabella Z. Inteligência Artificial e a Lei Geral de Proteção de Dados Pessoais: breves anotações sobre o direito à explicação perante a tomada de decisões por meio de machine learning. In: FRAZÃO, Ana; MULHOLLAND, Caitlin (coord.). Inteligência Artificial e Direito. São Paulo: Thomson Reuters Brasil, 2019, p. 265-291.

NICHOLAS, Gabriel. Explaining Algorithmic Decisions: A Technical Primer. Georgetown Law Technology Review, Forthcoming, 2020. Available at: https://ssrn.com/abstract=3523456. Access in: 15 jul. 2020.

OBAR, Jonathan A. Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society, v. 7, n. 1, p. 1–5, 2020. DOI: https://doi.org/10.1177/2053951720935615.

OLSEN, Henrik Palmer et al. What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration. Legal Studies Research Paper Series, n. 2019-84, p. 1-27, 2019. Available at: http://jura.ku.dk/icourts/working-papers/. Access in: 15 jul. 2020.

PASQUALE, Frank. The Black Box Society: the secret algorithms that control money and information. Cambridge; London: Harvard University Press, 2015.

RAI, Arun. Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, v. 48, n. 1, p. 137–141, 2020. DOI: https://doi.org/10.1007/s11747-019-00710-5.

ROBBINS, Scott. A Misdirected Principle with a Catch: Explicability for AI. Minds and Machines, v. 29, n. 4, p. 495-514, 2019. DOI: https://doi.org/10.1007/s11023-019-09509-3.

SAMEK, Wojciech; WIEGAND, Thomas; MÜLLER, Klaus-Robert. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296, 2017. Available at: https://arxiv.org/abs/1708.08296. Access in: 15 jul. 2020.

SELBST, Andrew D.; BAROCAS, Solon. The Intuitive Appeal of Explainable Machines. Fordham Law Review, v. 87, n. 3, p. 1085-1139, 2018. Available at: http://fordhamlawreview.org/issues/the-intuitive-appeal-of-explainable-machines/. Access in: 15 jul. 2020.

SELBST, Andrew D.; POWLES, Julia. Meaningful information and the right to explanation. International Data Privacy Law, v. 7, n. 4, p. 233-242, 2017. DOI: 10.1093/idpl/ipx022.

STRANDBURG, Katherine J. Rulemaking and Inscrutable Automated Decision Tools. Columbia Law Review, v. 119, n. 7, p. 1851-1886, 2019. Available at: https://columbialawreview.org/content/rulemaking-and-inscrutable-automated-decision-tools/. Access in: 15 jul. 2020.

TEIXEIRA, Tarcisio; ARMELIN, Ruth Maria Guerreiro da Fonseca. Lei Geral de Proteção de Dados Pessoais: comentada artigo por artigo. Salvador: Editora JusPodivm, 2019.

TUBELLA, Andrea Aler et al. Contestable Black-Boxes. arXiv preprint arXiv:2006.05133, 2020. Available at: https://arxiv.org/abs/2006.05133. Access in: 15 jul. 2020.

WACHTER, Sandra; MITTELSTADT, Brent; FLORIDI, Luciano. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, v. 7, n. 2, p. 76-99, 2017. DOI: 10.1093/idpl/ipx005.

WISCHMEYER, Thomas. Artificial Intelligence and Transparency: Opening the Black Box In: WISCHMEYER, Thomas; RADEMACHER, Timo (eds.). Regulating Artificial Intelligence. Cham: Springer, 2020, p. 75-102. DOI: https://doi.org/10.1007/978-3-030-32361-5.