Artificial intelligence apps like ChatGPT and DALL-E, which can generate strikingly coherent text and images in response to short prompts, began taking the world by storm late last year. Known as generative AI, these apps open up new business opportunities while raising ethical questions about property rights, privacy, misinformation and more. Flying under the radar is a growing group of social entrepreneurs who are leveraging the new technology to tackle pressing social problems, with AI ethics at the center. Among them are Bangalore-based social entrepreneurs Sachin Malhan and Supriya Sankaran, who co-founded Agami in 2018. Ashoka’s Hanae Baruchel caught up with Sachin to glean insights about the role generative AI might play in democratizing access to justice in India and beyond.
Baruchel: There’s so much buzz around generative AI right now that it’s hard not to feel skeptical about some of its applications. Why are you so interested in its potential in the context of access to justice in India?
Malhan: There are more than 1.4 billion people living in India, and only about 10 percent of the population can access justice because it’s much too costly for the average person. AI has the potential to absolutely crush transaction costs and level the playing field by helping people understand things like what their rights are; what to look for if and when they need a lawyer; or what legal questions to ask. AI could also help lawyers and individuals identify whether a property deed is up to standard. It can cut down research time and help unclog court dockets. If we can drop some of those costs to next to zero, it can lead to a massive explosion in access to justice in countries where the system is hugely underfunded, whether in Southeast Asia or Africa.
But for that, we need publicly minded innovators to build the middle layer of AI for Justice, and then a bunch of entrepreneurs to build solutions that serve people from all walks of life. Most people in our space will create AI to help large companies navigate litigation, handle documents, and generally serve the well-paying class. There is no doubt we’re about to see an incredible wave of innovation, but is it going to be affordable? Is it going to be directed towards public ends?
Baruchel: What has this rapid evolution in generative AI meant for organizations like yours?
Malhan: For our own work developing an ecosystem of AI for Justice solutions in India, the potential is revolutionary. We used to spend hundreds of hours teaching the computer how to recognize and structure different types of data. For example, with one of our OpenNyAI apps (in Hindi, “nyay” means justice), we wanted the computer to recognize what a court judgment looks like and highlight the key facts to create judgment summaries. This meant we had to annotate 700 to 750 court records ourselves before it could start understanding the patterns. This is lengthy, painstaking and expensive work. With the sophistication of GPT, LaMDA and other large language models, you could now dump 500,000 judgments, or even a million, all at once and it would do the annotating practically on its own, “unsupervised.”
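For a concrete sense of the shift Malhan describes, here is a minimal sketch of judgment summarization with an off-the-shelf large language model. This is not Agami’s actual pipeline; it assumes the openai Python package (v1+) and an API key in the environment, and the model name and prompt wording are illustrative placeholders.

```python
# Minimal sketch: summarize a court judgment with an LLM instead of
# hand-annotating records to train a custom model.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_judgment(judgment_text: str) -> str:
    """Return a plain-language summary of a court judgment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a legal assistant. Extract the key facts, the issues, "
                    "and the court's holding from the judgment below, then write a "
                    "short summary a non-lawyer can understand."
                ),
            },
            {"role": "user", "content": judgment_text},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# with open("judgment.txt") as f:
#     print(summarize_judgment(f.read()))
```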
Baruchel: You have already started incorporating generative AI into your work. Can you give an example?
Malhan: Yes. We are in the middle of a small pilot called Jugalbandi, where we are training ChatGPT to answer any question pertaining to government entitlements in India, like eligibility for an affordable housing scheme. We’re feeding in the government scheme information – the clauses, the eligibility criteria, etc. – to ensure accuracy and explainability, and ChatGPT adds an interactive layer on top of it.
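The pattern Malhan describes is what practitioners often call grounding: the official scheme text is handed to the model as context, so answers stay tied to the source and can point back to it. Below is a minimal sketch, not Jugalbandi itself, assuming the openai Python package (v1+); the scheme excerpt, model name and prompt wording are illustrative placeholders.

```python
# Minimal sketch: answer entitlement questions grounded in official scheme text.
# The scheme excerpt below is invented for illustration.
from openai import OpenAI

client = OpenAI()

SCHEME_TEXT = """
Affordable housing scheme (illustrative excerpt):
1. Eligibility: annual household income below the notified ceiling.
2. The applicant's family must not already own a permanent house.
"""

def answer_entitlement_question(question: str) -> str:
    """Answer a citizen's question using only the supplied scheme text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from the scheme text below. Quote the clause you "
                    "relied on, and say you do not know if the text does not cover "
                    "the question.\n\n" + SCHEME_TEXT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# print(answer_entitlement_question("I rent a flat in Mumbai. Am I eligible?"))
```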
Baruchel: You mean I could go into your app and say: “I’m in Bombay. Can you help me?”
Malhan: Exactly, and the system would answer: “What kind of support are you looking for? Would housing be of interest?” And you might say “Oh, yeah, housing would be great.” It will start asking things like “How old are you? Do you have an existing house? Do you have dependents?” It will interact with you at your own level of conversational comfort.
The key here is that it will work in your own local language, even if you are semi-literate or illiterate, because we’re integrating Bhashini ULCA, an open-source data project that enables voice recognition and translation between a dozen or so Indian languages. So I could ask ChatGPT a question in Hindi or Bengali and it would respond to me both by text and through a voice message in my own language. For the first time ever, someone in a remote village in India will be able to ask questions and get answers immediately about what government entitlements they might be eligible for. This is a potential gamechanger, because most of the research shows that last-mile access to essential services fails because people don’t know what is available to them or how to use existing systems.
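The voice flow Malhan outlines can be pictured as a short pipeline: speech in a local language is transcribed, translated into English, answered by the scheme-grounded function from the previous sketch, then translated back and read aloud. In the sketch below, the bhashini_* functions are hypothetical stand-ins for Bhashini/ULCA speech and translation services, not the real API; only the flow is the point.

```python
# Minimal sketch of a local-language voice pipeline.
# The bhashini_* helpers are hypothetical placeholders, not the real ULCA API.

def bhashini_speech_to_text(audio: bytes, lang: str) -> str:
    raise NotImplementedError("placeholder for a Bhashini/ULCA speech-to-text call")

def bhashini_translate(text: str, src: str, dst: str) -> str:
    raise NotImplementedError("placeholder for a Bhashini/ULCA translation call")

def bhashini_text_to_speech(text: str, lang: str) -> bytes:
    raise NotImplementedError("placeholder for a Bhashini/ULCA text-to-speech call")

def answer_voice_query(audio: bytes, language: str) -> tuple[str, bytes]:
    """Return (answer_text, answer_audio) in the user's own language."""
    # 1. Transcribe the spoken question (e.g. language="hi" for Hindi, "bn" for Bengali).
    question_local = bhashini_speech_to_text(audio, language)
    # 2. Translate the question into English for the language model.
    question_en = bhashini_translate(question_local, language, "en")
    # 3. Get a grounded answer (answer_entitlement_question from the previous sketch).
    answer_en = answer_entitlement_question(question_en)
    # 4. Translate the answer back and synthesize speech in the same language.
    answer_local = bhashini_translate(answer_en, "en", language)
    return answer_local, bhashini_text_to_speech(answer_local, language)
```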
Baruchel: How do you factor in the risks of applying AI in such high-stakes situations? When you talk about government entitlements and social welfare, we’re basically talking about the most vulnerable segments of society.
Malhan: Things are moving so fast right now that this is a real and legitimate concern. Most people aren’t taking the time to consider questions of fair use, or even privacy. This is why it has been so important for us to build this middle layer of AI applications as a collaborative, open-source effort. Someone is going to build these tools whether we do it or not, but if we manage to build them as part of a community effort, with a truly diverse group of people who are impact-oriented and can offer perspectives on the things to watch out for, we’ll be much better equipped to mitigate unintended consequences.
Baruchel: What is missing for more people to build out technology in this way?
Malhan: We need to create the spaces where entrepreneurs, innovators and academics who are interested in building better AI and better AI applications can think about the hard questions together. In India we’re working with a wide range of technologists, grassroots organizations and lawyers to catch issues as they arise and design this middle layer of AI for Justice in a way that works for everyone. We need to build a global Justice AI entrepreneur ecosystem to develop the parameters for conversational AI privacy rules, conversational AI bias, and more. Things are moving so fast that we don’t even have time to anticipate the problems. That is why, when Sam Altman, CEO of OpenAI, was asked “What do you think we’re not talking about?”, he surprised a lot of people by answering, “Universal Basic Income.”
For more on Agami’s work, follow them on Twitter.
This conversation is part of a series about what works and what’s next for Tech & Humanity and Law for All.