Book Review | Artificial Intelligence | Run the marathon to the very last mile.

By Jason Moyse and Lisa Culbert

Those who run regularly, or who have experienced the endorphin euphoria known as “runner’s high”, can experience the same heady feeling reading Joanna Goodman’s “Robots in Law: How Artificial Intelligence is Transforming Legal Services” (Ark, 2016). The book takes the reader on a comprehensive journey through the artificial intelligence (AI) legal landscape, explaining key concepts for the uninitiated and highlighting the most visible vendors and makers, among other industry players.

The running analogy has a special significance for those of us who are current or lapsed runners. Lisa started reading at the same time as she began marathon training on her global travels. The hope was that, like the Nike+ running apps, “Robots” would provide her with the tools and insights she needed to understand the AI legal tech hype and to speak intelligently about the topic with colleagues in legal innovation.

Jason falls into the lapsed runner category, but is far along on the AI and legal innovation journey. With three marathons in his vaguely remembered history, he’s pounded out the miles on the road – but has spent more time in recent years focussed on shaping his legal innovation career rather than his calves. He swears his running career is simply in taper mode.

For insights, knowledge and literacy, Goodman’s book lays a clear framework for the novice to understand AI

Goodman starts with definitions of AI and the breadth of programs it encompasses, from rules-based decision trees to machine simulation of human cognitive reasoning, and brings the reader up to speed on AI in our 21st-century context of big data, self-learning software and cloud computing, among other technology advances.

Since the book’s publication in 2016, many of the predictions it cites have been realized. For example, the acquisition of AI provider RAVN by document management provider iManage demonstrates the ongoing disruption of the law firm value chain. Clients continue to demand better value from, and more efficient uses of, technology – similar to what has been realized with e-discovery and due diligence solutions relying on technology-assisted review, or “TAR”, and other weird-sounding acronyms.

Increasingly, it is obligatory for professional services firms doing deal work to have something like Kira Systems in the mix. Kira applies practice-area expertise and language extraction to compare clauses and streamline discovery and diligence processes. This work is now commoditized in large measure.

The last mile is often the toughest

Jason is an unrelenting pace bunny for Lisa’s AI and legal innovation education, and part of the track included Law Made’s attendance at Stanford’s CodeX FutureLaw Conference while she was still reading Goodman’s laudable text. All the while, Lisa has continued to read, write, practice and explore further on her own AI journey. While her understanding of AI can be largely attributed to Goodman, she has come to realize that “Robots” is really the beginning of the last mile.

What does that mean? There is a growing mass of critical analysis, and of issues raised by legal AI, that naturally follows such a comprehensive training plan and is expected to continue for some time (in running terms, what feels like that never-ending last mile).

These issues include:

  • Human or Machine versus Human and Machine?
    • Every lawyer knows the importance of the distinction between those two little words, “or” and “and”, and their significance continues in the AI context as we recognize that both human and machine are fallible, apart and paired together. Augmented AI is a start but not a solution.
  • Culturally deficient legal landscape for the adoption of AI solutions.
  • Unrealistic or “silly” economics for implementation given the lack of imminent or clear business risk and little basis in current business models for making the change.

Cruising into that last mile, let’s go a little deeper.

Human OR Machine (AI) versus Human AND Machine (Augmented AI)

At CodeX, one of the more memorable sessions related to chatbots. Running in the anchor position, Joshua Lenon’s “Tech Terror” pulled the rug out from under the day’s earlier discussion and excitement around some of the current buzzworthy chatbot uses in legal.

The Rise of Legal Chatbots

Joshua’s talk starts at the 27:47 mark – but the entire 60 minutes of presentations is worth watching.

Lenon reminded the audience that chatbots have their shortcomings, especially when it comes to going the full (and final) mile. This is particularly the case where the tech fails to account for the context in which legal advice and solutions are applied: varying geographies, diverse legal rights, the limited awareness or predictive ability of machines and humans as to the choices and consequences of their decisions, not to mention the opportunity to consult a human lawyer when needed.

As Joshua has outlined elsewhere, “If the weak AI of LawBot does not recognize your crime or locale, a person may think they have no legal recourse. If the DoNotPay bot does not recognize a future complaint against a landlord, a tenant may be denied both their day in court and a place to live. While the capabilities of chatbots continues to expand, they cannot be allowed to replace the discretion of courts in dispensing justice.”

In his words, chatbots are cheap and courts are expensive, so the public defaults away from institutions (the courts) for the resolution of legal issues. However, just because more people can access inexpensive chatbots does not mean access to justice has been improved. The public can’t tell good legal advice from bad legal advice… and it would be hard to argue that access is improved when the result of that access is bad legal advice.

A recent example of this “and” versus “or” debate, and the difficulty of reconciling the two in the form of a harmonized “Robot Lawyer”, played out among the Twitterati. The showdown continued at GeekLawBlog (3 Geeks and a Law Blog), where Casey Flaherty challenged “Robot Lawyer LISA” as inaptly named. He outlines his reservations and his attempts to use the tool in a post called More Robot Magic Silliness.

Casey makes a point of highlighting any and all uses of the term Robot Lawyer to Jason, personally and usually with a certain amount of cheek. Jason is not fond of robots, lawyers or any combinations thereof – and shares Casey’s skepticism of using a term which is more about entrepreneurial marketing than reality.

So imagine our cognitive dissonance here at Law Made as we tend to be pretty interested in entrepreneurs and all things Lisa!

However, we do think that a deeper level of analysis reveals that Casey may have a point (as is occasionally the case), and that Joanna might have challenged more forcefully whether LISA is worthy of being highlighted among examples characterized by depth and complexity, such as IBM Watson.

Document automation is (one would think) obviously not AI. Expert systems, as we’ve come to understand them, are at least in the realm, even if somewhat tenuously. Even if true definitions of AI are irrelevant – it’s more about the holes than the drill – it is disingenuous to suggest that LISA is any or all of the following: <ROBOT> + <LAWYER>.

While the Twitterati debate gets granular rather quickly, with varying definitions of the terms artificial intelligence, lawyer and robot, the takeaway is consistent with the “and” versus “or” conundrum: none of robot (traditional AI), lawyer (human intelligence) or robot+lawyer (augmented AI) provides a perfect solution or path forward. Instead, each needs to be considered carefully and attuned to the context in which it is deployed.

Goodman showcases LISA as an example of AI augmentation, since the tool leaves more complex issues to human lawyers, but she misses the opportunity to explore further with insights on the critical limitations of the app.

Indeed, if there is but one lapse in an otherwise excellent AI presentation, it’s that in many instances Goodman appears to showcase the best public relations and canned value-proposition talk track of her interviewees. We’re evangelists ourselves, and understand the need to explain the features and use cases of various vendors – but one knows from experience that there are many misses, even among the best technologies.

Throughout the book, the treatment is more documentary than critique.

Further, the question of AI versus Augmented AI, and even Augmented AI on its own, warrants more discussion and more recognition that while Augmented AI may facilitate quicker solutions, given the fallibility of both humans and machines, a faster solution is useless if it turns out to be wrong.

Culture eats strategy for breakfast.

 The context for legal advice and solutions is not only significant from the client perspective but also from the provider (lawyer) perspective.


Goodman highlights some of the challenges for implementing AI, including (referencing Bas Boris Visser of Clifford Chance LLP) the need to:

  1. train the human

  2. train the machine and

  3. change the lawyer mindset.


This third piece warrants more consideration, as it continues to be (especially when paired with the complex economics described below) part of the ongoing resistance to turning AI ideas and conversation into actual value-adding implementation and use.

There is a groundswell of hype inside the echo chamber, and if you are a Twitter fiend like Jason, you would almost think that AI in legal is already here – obviously prevalent and ubiquitous. But that is simply not the case.

At one of the more popular panels at Legaltech West, the audience of legal tech consumers (law firms, lawyers and corporates) and a panel of vendors and lawyers were asked whether anyone was using AI in their workflow. A single hand was raised in response.

So why the slow traction despite some pretty obvious advancements?

There is still a cultural deficiency, but it can’t be pinned on lawyers being risk averse and resistant to change or technology (a common battle cry of vendors and change agents).

The legacy technologies in legal are decidedly poor. Therefore, the suggestion that lawyers won’t adopt new technology is a bit of a misconception. They won’t adopt bad technology – and that’s what they’ve been subjected to for years.

However, if you look around a negotiating table or a courtroom, you will see smartphones and tablets, among other tech – so it would be unfair to say lawyers are technophobes.

In fact, lawyers are some of the biggest users of tech tools in daily practice. As a simple example, anyone who has dined with a lawyer colleague or family member in the midst of an M&A deal or a trial in session knows it is common to wedge conversation in between phone calls and, especially, email breaks!

Lawyers will definitely use technology, perhaps more than necessary in some instances.

To better understand the cultural deficiency around AI implementation, then, we need to get more specific and look at the actual tech. Members of the same Legalweek audience discussed how, in the case of AI, implementation is halted by the human-versus-machine debate and a lack of clarity on where, exactly, responsibility is to be assigned when a machine makes an (inevitable?) mistake.

Cultural change does not happen quickly – no matter how good the technology, process or workflow may be – as many an evangelist entering an entrenched environment has found. If you want to move swiftly, walk alone. If you want to go far, walk together.


Last Mile Economics

Collaboration among marquee AI companies like RAVN, HighQ and Neota is exciting news from the cheering stands of our AI-enthusiastic LegalTech ecosystem, but collaborations like these create a billing-structure struggle for the law firms seeking to implement them.


Bill Henderson puts it best in his article “The Legal Profession’s ‘Last Mile Problem’” (from which this review takes its name), where he explains that the business model for lawyers is disadvantaged by a shift to client centricity.

Henderson compares this to the telecom industry’s last mile: the challenge of connecting copper wire from the box on the street to the end user. Notwithstanding all of the industry’s other incredible technological advances, it was this last mile of connectivity that proved the most difficult and expensive to complete. It’s the pesky copper lines that can’t handle the load.

Better, faster and less expensive is always the goal, and it is only of late that this push has reached the legal sector. In large measure, the apparent and potential gains from technologies like artificial intelligence are too difficult to ignore.

So what are the pesky last mile copper wires for legal?

The answer lies in the economics of the underlying business models, which presently do not appropriately reward the desired efficiency gains that would be captured as a return on investment in technology.

Partnership models do not put much back into the business at the end of the year in the form of retained earnings, which could otherwise be used to invest in infrastructure that would ultimately become a means of production and lower unit costs once the investment is implemented.

Funding that investment in infrastructure therefore requires a rate increase or a larger volume of work from the client. Just as in our marathon experiences, if we could just gut out that last mile, the short-term pain of the investment would result in everybody’s gain.


Really? Honestly? Truly?

Should a client guarantee a financial benefit to the firm without overwhelming evidence, and trust, that the benefit is:

  1.  really attainable,

  2.  sustainable, and

  3.  actually going to be shared with the client, in the form of a lower unit cost or greater value through productivity gains?


The best example is due diligence, which becomes much too expensive to do manually given the low complexity of the work and the otherwise highly manual touch required of lawyers charging per hour to do a simple task. Today, technology rules due diligence. However, the truth is that clients won’t pay firms for this service to be completed through billable hours, and therefore firms have had to figure out how to deliver it whilst maintaining margin. Alternatively, it becomes a loss-leading phase of broader work across the entire transaction.

It’s not like clients and firms got together and said “let’s do this a better way”.

Clients simply announced by action and words “We’re not paying for that work at the rate you’re charging”.

Henderson explains that a reconciliation of these economics is likely found in a mix of shared productivity gains between lawyer/firm and client (i.e. incremental rate increase and more volume of work from the client). Finding the right mix is what makes the sale and implementation of AI an especially technical sales issue and adds to the uncertainty that surrounds it.


To simplify one of Henderson’s examples and illustrate the disadvantage (a worked numerical sketch follows the list):

  • If a law firm adopts a tool that reduces the time spent on a traditional billable assignment from 20 hours to 10, the client would likely expect to pay 50% less.
  • However, building and scaling the tool requires a substantial investment by the firm.
  • If the client continues to pay the same hourly rate, it receives all of the benefit and the firm takes the loss (earning half the fee while still being out of pocket for the amount invested in the new AI solution).
  • If the client pays double the hourly rate, it receives no benefit from the improved efficiency, while the firm earns the same fee as before and still nets a loss, since no extra fees cover its investment.
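To make the arithmetic concrete, here is a minimal sketch in Python. The 20 and 10 hours come from the example above; the hourly rate and the firm’s tool cost are our own hypothetical numbers, not Henderson’s or Goodman’s. The point is simply how the billable-hour model splits the gains.

# Minimal sketch of the last-mile arithmetic; rate and tool_cost are assumed, illustrative figures.
hours_before = 20        # hours billed on the traditional assignment
hours_after = 10         # hours billed once the tool cuts the work in half
rate = 500               # assumed hourly rate (dollars)
tool_cost = 4000         # assumed per-matter share of the firm's investment in the tool

fee_before = hours_before * rate                      # 10,000: the status quo fee

# Scenario 1: same rate, fewer hours. The client keeps all of the gain;
# the firm bills half as much and is still out of pocket for the tool.
fee_same_rate = hours_after * rate                    # 5,000
firm_net_same_rate = fee_same_rate - tool_cost        # 1,000, versus 10,000 before

# Scenario 2: doubled rate, fewer hours. The client saves nothing;
# the firm bills the same as before, minus the tool investment.
fee_double_rate = hours_after * rate * 2              # 10,000
firm_net_double_rate = fee_double_rate - tool_cost    # 6,000, versus 10,000 before

print(fee_before, fee_same_rate, firm_net_same_rate, fee_double_rate, firm_net_double_rate)

Changing the assumed rate or tool cost shifts the numbers but not the conclusion: absent a negotiated mix of rate and volume, either the client or the firm absorbs the cost of the efficiency gain.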

Goodman touches on the challenging economics with examples of firms’ strategic approaches to, and productization of, AI (such as Clifford Chance’s partnership with Kira Systems and white labelling under its own brand), which illustrate the creativity and sophistication involved in economical AI implementation. Ultimately, legal AI’s complex economics, together with the culturally deficient landscape described above, further slow its rate of adoption and uptake.

CONCLUSION: This is the start of your AI education

“Robots in Law” provides the perfect training ground for those looking to lace up their running shoes and begin what most runners come to realize is a long journey into the world of understanding AI’s implementation in a complex 21st-century legal context. By the end of the last chapter, though, Goodman’s skillful writing and thorough canvassing of the AI marketplace will have carried the reader smoothly into the “last mile” where, as the critical considerations above highlight, it’s up to the reader (and runner) to keep pushing forward.