How should we think about the risks of legal expert systems?

When I speak at conferences about the expanded use of legal tech, a lawyer or judge in the audience is invariably eager to highlight the risks of these new tools. Legal expert systems are no exception.

Alarmists often dwell on the very existence of a risk, with little concern for its scale, frequency, or impact. They tend to measure technology against an idealized conception of the status quo that doesn’t exist in reality.

Let’s face it: legal expert systems aren’t perfect. Using them will involve some risk. But measuring legal tech against an ideal standard is irrational, especially in a cost-benefit analysis.

Technology is particularly susceptible to unattainably high expectations. Think about the discussion that will happen the first time a self-driving car causes a death. Some people will go wild, calling for a temporary – or even permanent – ban. Yet human drivers kill more than a million people worldwide every year. Still, human-driven cars are everywhere.

Cost-benefit analyses need to take the real world into account. In the case of self-driving cars, we can’t measure against an ideal in which human drivers never make mistakes. We must leave open the possibility that a cost-benefit analysis will show computer-operated cars to be safer than their human equivalents.

The same accounting will have to occur with legal expert systems. Instead of self-driving cars, expert systems will give us self-driving litigants and self-driving transactional legal consumers. This combination is going to result in some accidents. Some people may suffer harm. But any cost-benefit analysis will also have to account for the many non-experts who currently cannot get the help they need.

When it comes to legal services, we face an extreme resource shortage. It’s not that we have too few lawyers (we may have too many). It’s that almost nobody can afford to purchase the legal services they need. In a monopoly-based system, there are too few alternatives to human-based legal services.

Legal expert systems can help to mitigate this resource shortage by letting technology carry some of the load, even if it’s not a perfect solution. But if we accept that people need access to justice and legal services, we can’t afford to let the perfect be the enemy of the good.

Lastly, of course, we need to be careful about assuming that human lawyers don’t make mistakes. It’s true that lawyers working outside their area of expertise can (and do) occasionally provide sub-optimal or even wrong advice to legal consumers. This risk exists too – even with a lawyer who is complying with all of his or her ethical duties and professional responsibilities. But we’re not calling for a ban on lawyers. We know that, overall, they provide significant benefits to the people who use them. The same will be said for legal expert systems.
