How the use of AI in judicial systems impacts their legitimacy
Artificial intelligence. Judicial system. Judiciary. Judicial legitimacy. Procedural justice. Institutional trust.
Several ongoing artificial intelligence (AI) projects in the Brazilian Judiciary are being promoted as a panacea for the current budgetary crisis and as a means to enhance the effectiveness and consistency of judicial services. However, there has been limited debate about how the adoption of AI in judicial systems might affect their legitimacy. To address this gap, this research examines the following questions: Does the use of AI in judicial systems impact their legitimacy? If so, is this impact influenced by factors related to the perceived justice of judicial procedures?
To explore these issues, the study begins with a review of specialized literature on the effects of AI, focusing particularly on judicial legitimacy. It also investigates the Judiciary’s role in responding to structural changes brought about by emerging technologies. Notably, in emblematic and still pending cases — such as Elon Musk v. OpenAI, Sam Altman, and Greg Brockman — courts are being called upon to fill regulatory voids. The judicial response to such disputes may itself influence public acceptance of the courts and the legitimacy of their decisions.
At the same time, courts are becoming AI users themselves, which raises concerns about their ability to maintain public trust, especially if the use of AI distances the courts from their core functions of resolving disputes and protecting fundamental rights. To analyze these dynamics, we conducted an empirical study in two stages: (i) three focus groups with graduate law students from the University of Brasília and the Université de Montréal (conducted in English and French); and (ii) a survey, including an open-ended question, directed at legal professionals in Brazil.
Two hypotheses guided the empirical analysis: (i) the use of AI in non-decision-making functions would positively affect judicial legitimacy; and (ii) the use of AI in decision-making would negatively affect it. Hypothesis (ii) was explored through two scenarios: the use of AI to summarize cases and evidence, and its use to draft decisions for judges to review. We hypothesized that AI used for summarization would be more acceptable than AI used for drafting decisions.
To assess judicial legitimacy, we applied procedural justice theory. Factors influencing perceptions of procedural justice were translated into proxy variables, including whether AI allows: (1) parties to feel heard — the voice criterion; (2) increased trust in the Judiciary — the trust criterion; (3) clarity in decisions — the explainability criterion; (4) timely resolution — the timeliness criterion; (5) impartiality across social identities — the impartiality criterion; (6) equal treatment regardless of socioeconomic status — the substantive equality criterion; and (7) respectful treatment — the respectful treatment criterion.
Our findings show that legitimacy is tied not solely to efficiency, but to the Judiciary’s ability to ensure participation, dignity, and transparency. Participants voiced concerns about dehumanized decisions, lack of human oversight, algorithmic bias, and excessive precedent stability. Broader skepticism about AI, including opacity regarding its use in legal proceedings, was also observed.