USING AI FOR JUSTICE
- Ajai Tandon
- Oct 1, 2022
- 5 min read
Updated: Sep 11, 2023
INTRODUCTION
Outcome prediction in a legal case has always been crucial to the practice of law. A reasonable assessment of the potential legal consequences is important for legal counselling and judgment delivery. Compared to the traditional tools used by lawyers and judges for prediction (professional experience, empirical information, etc.), artificial intelligence (AI) offers a far more sophisticated approach. Predictive analytics using AI tools such as natural language processing (NLP) and machine learning has made predictive justice possible. Predictive justice refers to the analysis of large amounts of data by AI-enabled technologies to predict the outcomes of legal disputes. These technologies have become popular in the context of policing and public administration in countries like the USA and the UK.
Predictive analytics in judicial systems has been gaining popularity over the last two decades, as is evident from the wide research available in this area. AI tools offer greater precision because they can process a vast set of information that no human could. Humans working in tandem with AI can “outperform either working in isolation” (Katz 2013). AI models enabling predictive analytics can offer solutions to a number of problems faced by our judicial systems, such as long pendency of cases, inconsistency in the application of law, and poor triage of disputes. However, to realise the full potential of this technology, different forms of AI must be responsive to varied legal settings.
In this regard, there is a compelling case for the Indian judicial system to begin adopting predictive justice. Indigenous AI systems will have the benefit of proper legal contextualisation and would give an opportunity to develop principles for their fair use. An adequate gestation period is required to develop predictive AI tools, in terms of building datasets and better categorising the aspects of legislation and cases that will later feed into the AI model. This has a direct impact on achieving reliable accuracy in the functioning of predictive AI, and it calls for starting the development process at the earliest so that there is sufficient time to address the different aspects of AI development.
Towards legal certainty
The inconsistent exercise of judicial discretion (in sentencing, bail, remand, injunctions, etc.) has drawn criticism for causing legal uncertainty and allowing judicial overreach. Predictive models can exclude factors that are unrelated to the merits of a case, thereby cancelling out arbitrary factors in judicial decision-making. They can thus allow standardization and help reduce arbitrariness, bias and inconsistency in the system.
Increasing efficiency
Increased accuracy of outcome prediction can reduce information asymmetry between parties and nudge them towards settlement instead of a lengthy court proceeding. It can also allow for early identification, and consequently prioritization, of cases where a violation seems likely. Cases involving simpler application of rules can be automated, allowing judges to focus on cases requiring more of their expertise. This would be particularly beneficial given the backlog of pending cases in Indian courts, which stood at 3.65 crore as of November 2019 and has risen since then.
Strengthening judicial institutions
Low predictability, as assessed by an AI with regard to a particular judge, could indicate the incorporation of legally irrelevant factors. It could thus keep a check on poor and unfair application of laws in the courts. Furthermore, the adoption of a fair, explainable and transparent AI is expected to instill public trust both in the judicial system (as it is expected to increase its efficiency and reduce arbitrariness) and in AI technologies, facilitating their adoption in other areas.
Reducing dependence on the private sector
Most initiatives involving systematic data-based decision-making are currently undertaken by commercial private players in the legal market. Unequal access to the technology dilutes equality of terms between parties. Deploying principle-based algorithms can reduce this dependence on the private sector.
IMPLICATIONS & RECOMMENDATIONS
1. Operationalizing AI in courts
A. Technical process
Creating predictive models using machine learning involves the following steps (a minimal sketch follows the list):
• Gathering and preparing data to train the machine learning model; judgments of the court form the raw data.
• Marking the legally relevant decision factors in the raw data to create processed data.
• Preparing a database from the processed data.
• Selecting a predictive AI model to train.
• Training the AI model on the processed data.
• Adjusting the model’s parameters until a pre-determined level of accuracy is reached.
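To make these steps concrete, here is a minimal, illustrative sketch in Python using scikit-learn. The file name (judgments.csv), its column names, and the 80% accuracy threshold are hypothetical placeholders, not a description of any deployed system.

```python
# Minimal sketch of the pipeline above, assuming scikit-learn and a
# hypothetical CSV of judgments annotated with outcomes ("judgments.csv"
# with columns "judgment_text" and "outcome" are placeholders).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Steps 1-3: raw judgments, marked with legally relevant factors, form the database.
data = pd.read_csv("judgments.csv")
train_x, test_x, train_y, test_y = train_test_split(
    data["judgment_text"], data["outcome"], test_size=0.2, random_state=42
)

# Step 4: select a predictive model (here, TF-IDF features + logistic regression).
model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Steps 5-6: train, then check against a pre-determined accuracy level.
model.fit(train_x, train_y)
accuracy = accuracy_score(test_y, model.predict(test_x))
print(f"Held-out accuracy: {accuracy:.2%}")
if accuracy < 0.80:  # the threshold itself is a policy choice, not fixed here
    print("Below the pre-determined level; revisit the data or the model.")
```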
2. Administrative procedure for regular functioning
• Outcome prediction must be deployed only after obtaining the written consent of all the parties to a case in a prescribed format.
• Consent may be obtained:
i. in civil cases, at the time of presentation of the plaint and filing of the written statement;
ii. in criminal cases, at the time of framing of the charge.
• Parties must declare non-usage of predictive analytics using AI. Where a party has used AI, courts, acting through the Oversight Committee (mentioned below), must share the outcome prediction with the other party or parties to ensure equality of terms.
3. Key aspects in designing the policy framework for predictive AI
A. Human review & oversight
Issue: There are concerns that AI creates automation bias, i.e., the tendency to unduly accept a machine’s recommendation. Judges might be inclined to conform to the machine’s conclusion, thereby compromising the independence of the judicial system and stifling legal inventiveness.
Recommendation: Predictive AI must not be treated as conclusive; it is meant to create fairer contexts. AI must assist judges, not replace them. Judges must remain free to depart from the AI’s conclusion after giving cogent reasons.
B. Transparency & accountability
Issue: It is difficult to identify how and why algorithms reach a particular outcome. This makes it hard to flag algorithmic errors and biases, rendering them immune to challenge (appeals, re-hearing, review, etc.) by litigants and lawyers. This erodes people’s sense of fairness and trust.
Recommendation: Explainable AI (XAI) refers to measures that help humans understand how a machine learning model reached its output. XAI can help judges question algorithmic conclusions, preventing automation bias. Scholarly literature is moving away from privately enforced rights as the way to enforce accountability, towards “accountability by design”. Various design principles must be incorporated to achieve XAI; approaches like ‘surrogate models’ (sketched below) may be used to understand AI systems.
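As an illustration of the surrogate-model approach, the sketch below continues the hypothetical pipeline from earlier: a shallow decision tree is trained to imitate the black-box model’s predictions so that its rules can be read and questioned by a human. The variables `model` and `train_x` are assumed from that earlier sketch.

```python
# Surrogate-model sketch: fit a small, human-readable decision tree to the
# black-box model's own predictions, then inspect its rules. Assumes `model`
# and `train_x` from the earlier (hypothetical) training sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

vectorizer = TfidfVectorizer(max_features=200)
features = vectorizer.fit_transform(train_x)
black_box_predictions = model.predict(train_x)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=42)
surrogate.fit(features, black_box_predictions)

# Fidelity: how often the surrogate agrees with the model it explains.
fidelity = (surrogate.predict(features) == black_box_predictions).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")

# The tree's rules are printable, so a judge or auditor can question them.
print(export_text(surrogate, feature_names=list(vectorizer.get_feature_names_out())))
```

A high-fidelity surrogate does not make the black box itself transparent, but it gives reviewers concrete, contestable rules to interrogate.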
C. Algorithmic bias
Issue: Algorithmic bias refers to errors in AI systems that create unfair outcomes because of erroneous assumptions made by those systems. The use of predictive AIs in countries like the USA (e.g., COMPAS) and the UK has raised concerns about the compounding of social biases, in the form of machine-assisted discrimination, where the training data itself is biased.
Recommendation: Disclosures to the Oversight Authority regarding the data used to train algorithmic models, the assumptions incorporated in their creation, and a risk-assessment plan for mitigation can check biases. This will allow meaningful examination by external researchers, auditors and other stakeholders; a simple example of such an audit is sketched below.
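As an illustration of the audits such disclosures would enable, the sketch below compares false positive rates across two groups on a small, entirely hypothetical dataset; the column names, groups and values are placeholders.

```python
# Hypothetical bias audit: compare false positive rates across groups.
# All data here is made up for illustration.
import pandas as pd

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],  # hypothetical sensitive attribute
    "actual":    [0, 0, 1, 0, 0, 1],              # 1 = adverse outcome actually occurred
    "predicted": [0, 1, 1, 1, 1, 1],              # model's prediction
})

def false_positive_rate(df: pd.DataFrame) -> float:
    # Among cases with no actual adverse outcome, how often did the
    # model nonetheless predict one?
    negatives = df[df["actual"] == 0]
    return float((negatives["predicted"] == 1).mean())

rates = {group: false_positive_rate(df) for group, df in audit.groupby("group")}
print(rates)  # here {'A': 0.5, 'B': 1.0}: a gap worth flagging for review
```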
D. Cost and financial implications
Issue: Putting predictive AI technology in place involves costs not only for development and management but also for ensuring continuous transparency, accountability and updating. It also carries potential societal costs: incomprehensible systems may exclude and alienate potential litigants, creating an aversion to accessing the justice system.
Recommendation: Public universities can form research partnerships with private players. Vehicles under corporate social responsibility (CSR) may be leveraged. The government can retain proprietary rights over the data and technology in order to prevent undesirable influence from profit-seeking actors.
By
AJAI TANDON
ADVOCATE SUPREME COURT
CEO-ATC ARTIFICIAL INTELLIGENCE RESEARCH CENTRE
(NATIONAL PRESIDENT - IMANDAR BHARTIYA PARTY)