
Generative AI and responsible digital transformation in the judicial public sector

The massive deployment of generative AI tools based on Large Language Models (LLMs), such as ChatGPT, has created a scenario of widespread algorithmic exploitation without sufficient exploration, by either the designers of these systems or their users. If responsible use of LLMs cannot be guaranteed without proper guidance, what happens when the users are public officials? In the context of digital transformation processes in the public sphere in particular, the deployment of LLMs forces attention onto the behavior of agents as institutional decision makers, specifically in the judicial sector.

As exploratory coordinates to frame this urgent reflection, I want to retrieve a couple of questions that I outlined at the workshop on AI and Law organized by the University of San Andrés (UdeSA) in May 2023. Strictly speaking, at that workshop I took up elements of the collective research project Readiness of the judicial sector for artificial intelligence in Latin America, directed from CETyS and carried out between 2020 and 2021 with the support of the Tinker Foundation. The project, which I had the opportunity to co-coordinate, consisted of creating an original exploratory analytical framework to assess readiness to incorporate AI into the justice systems of the region, and applying it to the study of five countries: Argentina, Colombia, Mexico, Uruguay and Chile (Aguerre et al., 2021).

In particular, the Argentine case study (Bustos Frati and Gorgone, 2021), which I directed (still under a different first name), outlined two dynamics that I would now like to delve into: (i) the role of the networks and alliances (judicial and multisectoral) in which the civil servants who had incorporated machine-learning-based AI were socially inscribed; and (ii) the gravitation of the justice narratives in which the incorporation of AI was conceptually inscribed (such as «open justice», «augmented justice» and «federal justice»).

The first issue reflected the fact that, even though there were very few cases of effective use of AI in the justice system, all the actors had drawn not only on their own human, technical and material resources, but also on the formation of alliances and/or networks (formal and informal, strictly judicial or rather multisectoral). This was visible in different forms of cooperation: with another node of the judicial institutional complex, such as between Court 13 and the Buenos Aires Statistics Office, or between the AI system Prometea and nodes of the Federal Court Board; or through collaborations with actors in the cooperative sector and civil society, such as Court 10, the Empatía initiative and the CAMBA cooperative, as well as with organizational structures generated within the framework of the International Open Justice Network. This logic was verified even in the case of Colombia, where the initiatives had in fact arisen from some kind of link with Prometea (Aguerre and Bustos, 2021; Castagno, 2021).

The second aspect of the Argentine case (Bustos and Gorgone, 2021) focused on justice narratives, on the assumption that the incorporation of AI does not occur in a conceptual vacuum. «It matters what matters we use to think other matters,» as Isabelle Stengers and Donna Haraway (2016) warn us. So how do we narrate the incorporation of AI into legal practice? When AI-based initiatives were inscribed into the broader conceptions of justice held by the key informants interviewed, two characterizations emerged frequently and with a certain conceptual centrality: augmented justice and open justice. A third notion was that of federal justice, albeit with contrasts or nuances in the underlying idea of federalism, as in the case of national data governance.

Some actors, members of the Federal Board of Courts and Superior Tribunals of Justice (JUFEJU) involved in the Prometea experience, expressed interest in the idea of augmented justice, associating it with notions of efficiency and productivity in the administration of justice. The three nodes involved in «intelligent anonymization» systems, on the other hand, expressed interest in the idea of open justice more explicitly: they participate in the International Open Justice Network (although augmented justice was also mentioned in one of these testimonies). The report concluded by asking how this distinction might be deepened, suggesting a revisit of March and Olsen's (1998) distinction between the logic of consequences and the logic of appropriateness.

Let us return to the present and to the critical juncture opened by LLMs. The question is what role networks of actors and narratives about justice play in this new scenario. While maintaining the exploratory approach, the proposal is to incorporate elements from three sources. First, to complement the approach theoretically with conceptual elements introduced by Michael Kearns and Aaron Roth in their book on the ethical and socially conscious design of AI algorithms (2020). Second, to focus specifically on the challenges that the growing complexity of AI poses for the governance of transformation processes in the public sphere, for which I refer to the article co-written with Carolina Aguerre, «Artificial Intelligence and digital transformation of the judicial sector in Argentina», published as part of a collective publication of NIC.ar (2022). Third, to consider recent empirical evidence on the use of ChatGPT, for which we turn to recent articles by Maia Levy Daniel and Juan David Gutiérrez on its uses in courts in Colombia, Peru and Mexico.

For starters, peer and stakeholder networks now seem less necessary for gaining access to AI tools. Previously, such networks were not sufficient to signal readiness for responsible use; now they no longer seem even necessary for material access. The technical «acceleration» here is provided not by a peer or multi-stakeholder network but by a complex of private digital platforms, and one learns rather individually how to engage with AI systems, which is likely to encourage a «free-rider» logic rather than alignment around common standards or a shared baseline.

In terms of the narratives about justice that serve as frameworks for incorporating AI, the two stylized narratives mentioned above, augmented justice and open justice, offer certain conceptual tools as starting points. But beyond these, what other narratives might be at play in the use of LLMs? Let us outline some: the ideas of justice as a problem to be solved technologically, as a commoditized and sub-contractable service, and as a solitary practice. As Levy Daniel (2023) points out, the overarching narrative is ultimately «technosolutionist». Very schematically, the calculation at stake seems to weigh the decreasing marginal cost of generative AI tools and the simplicity of their interfaces against the accumulated delays in case resolution and the increasing cost of human talent dedicated to (and trained to) create public value in the judicial sphere.

In short, judicial and multisectoral networks and stylized notions of justice (generally cultivated within such networks) may not be a sufficient guarantee of responsible digital transformation, but they are likely to reduce the risk of fueling dynamics of improvisation, fragmentation and even opacity in the use of LLMs.

On this point, I find it interesting to consider certain theoretical elements from the book by Kearns and Roth (2020), in particular their distinction between «reasonable» models and «bad» models in the use of machine learning (ML) algorithms. «There will always be trade-offs that we need to manage,» they warn. The key is the way in which various objectives are defined and combined: some more aligned with efficiency and reducing the statistical error rate, others more aligned with ethical constraints and reducing unfair outcomes for vulnerable populations or for whatever group we want to protect. Reasonable models are thus those that lie on the Pareto frontier; bad ones are those that fall outside it (Kearns and Roth, 2020).
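To make this distinction concrete, here is a minimal sketch in Python (my own illustration, not code from Kearns and Roth): each candidate model is scored on two objectives, a statistical error rate and an unfairness measure, and the «reasonable» models are those not dominated on both dimensions at once. The model names and figures are hypothetical.

```python
from typing import NamedTuple

class Model(NamedTuple):
    name: str
    error: float       # statistical error rate (lower is better)
    unfairness: float  # e.g. disparity between groups (lower is better)

def pareto_front(models: list[Model]) -> list[Model]:
    # A model is "reasonable" (non-dominated) if no other model is at
    # least as good on both objectives and strictly better on one.
    return [
        m for m in models
        if not any(
            o.error <= m.error and o.unfairness <= m.unfairness
            and (o.error < m.error or o.unfairness < m.unfairness)
            for o in models
        )
    ]

# Hypothetical candidates illustrating the trade-off.
candidates = [
    Model("A", error=0.10, unfairness=0.30),  # accurate but unfair
    Model("B", error=0.15, unfairness=0.10),  # a balanced compromise
    Model("C", error=0.25, unfairness=0.05),  # fairer but less accurate
    Model("D", error=0.20, unfairness=0.25),  # dominated by B: a "bad" model
]

for m in pareto_front(candidates):
    print(m.name, m.error, m.unfairness)  # prints A, B, C; D is excluded
```

Note that the sketch only rules out dominated models; choosing among A, B and C, that is, deciding how much accuracy to trade for how much fairness, remains a value judgment that no algorithm settles on its own.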

When there are no networks playing the role of technological acceleration or of socialization in axiological terms, and when the notions of justice at stake (whether «open», «augmented», «federal» or other) are not made explicit, identified, specified or managed against certain common baselines, fragmented and opportunistic behaviors become more likely, perhaps framed in poorer notions of justice such as a commoditized or technologically determined justice. In turn, ceteris paribus, incentives for widespread exploitation are likely to grow with little or no exploration of what constitutes a justified and reasonable algorithmic model, so that the tendency is not toward standardizing practices of conscious, controlled and traceable use of generative AI but rather toward a «race to the bottom» in terms of standards.

One response – as distinct from a reaction – is to focus on developing practical tools that facilitate and encourage responsible use by judicial officers, such as ethical guidelines or integrated and situated analytical and axiological frameworks, rather than merely techno-centric and generic maturity models. In addition to the project with the Tinker Foundation, I refer to other CETyS antecedents, such as the GuIA Project (2020 and 2021) and the I Am Not a Robot discussion paper (Mantegna, 2022). Another reference is the work around the UNESCO ROAM-X principles and their applicability to ethical assessments of digital transformations in the public sphere, as outlined by Elsa Estévez (CONICET-Universidad Nacional del Sur-UNESCO) during the last ICEGOV conference in Guimarães, Portugal (2022). Along the same lines, CETyS has been exploring, together with multiple stakeholders, the application of the ROAM-X framework to the analysis of Internet Universality in Argentina, in a report soon to be published.

While the global and systemic impact of LLMs has exposed the illegitimacy of narratives of self-regulation by private corporations – to the point that emerging AI platforms like OpenAI are now the ones calling for tighter regulation – it has also increased the risk of normalizing a different kind of narrative about self-regulating behavior by the users of these tools. The increased availability of generative AI tools could, in principle, encourage a practice of conscious exploration of digital transformation in the public sphere in general and in the legal sphere in particular; in fact, however (as Levy Daniel and Gutiérrez respectively warn), it is already generating fragmented practices in which the inherent goals, limits and biases of these systems are not identified, measured, justified or managed. Faced with the risk of skills development being reduced to tutorials on «how to write the best prompts» or «how to know as many AIs as possible, especially the ones no one else knows about», we need to think about the governance of AI and digital transformation processes as multiple stakeholders in a broader ecosystem, and commit to designing responsible uses of generative AI by weighing the balances between conflicting objectives.

– – –

Alexis Bustos Frati is a specialist in Internet governance and digital political economy, technological diplomacy, public policy and regional cooperation. She holds a degree in Political Science and a Master's in Regional Integration Processes from the University of Buenos Aires, where she is currently pursuing a PhD in Social Sciences. She is an Associate Researcher at the Center for the Study of Technology and Society at the University of San Andrés (CETyS-UdeSA) and teaches at the same university, as well as in the area of international relations at FLACSO Argentina and at the Parliamentary Training Institute of the National Chamber of Deputies, where she also works as a legislative advisor.