There are two deep and abiding truths in the legal industry: no one knows what AI even means, and, yes, you need it. Or at least, you need solutions that incorporate artificial intelligence to resolve discrete problems you face. But that sounds less exciting.
Here at the Association of Corporate Counsel annual meeting, a packed conference room watched Rise of the Machines: Can Compliance and Litigation Keep Up?, a panel moderated by Mark Huller, Senior Counsel from The Cincinnati Insurance Company, and featuring Khalid Al-Kofahi, R&D Vice President at Thomson Reuters; Cynthia Boeh, General Counsel at Other World Computing; and Martin Tully from Akerman LLP. And once again we learned that no one knows what AI really is.
Indeed, Martin Tully kicked off the discussion by invoking the ineffable Linda Richman, “I’ll give you a topic, artificial intelligence, neither artificial nor intelligent, discuss.”
And that’s where we seem to sit in 2017: 50 percent of the AI evangelists describe it as liquid magic, the other 50 percent are willing to admit it’s just a tool, and 100 percent of its customers are simply confused about the whole concept. Even the panel couldn’t come up with a consistent definition of artificial intelligence, though — like all good dystopian machines — they were self-aware.
Tully spoke of three categories: assisted intelligence (tools that do what lawyers are already doing), augmented intelligence (tools doing what lawyers are incapable of doing on their own), and autonomous intelligence (tools doing what lawyers aren’t even doing). Meanwhile, Huller walked the audience through “strong vs. weak AI,” with strong AI being a machine with cognitive abilities developed to approximate a human being, while weak AI merely mimics human behavior. Personally, I thought strong AI was the first 136 minutes and weak AI was the 10 minutes Steven Spielberg tacked on to answer every meaningful question in an entirely trite and conclusory manner. But no matter how you slice it, everyone has a different rubric for understanding AI.
Rubric creep is just part of the AI narrative. Should programs as basic as Dragon NaturallySpeaking or Siri count as AI? In a sense, sure. They are smart programs that learn how you talk and convert that into text. In another sense, no. It’s not like Google Home is going to close the pod bay doors on you. We hope.
Regardless, people seem unwilling to recognize these “weak” AI programs as true artificial intelligence. Around 29 percent of the audience said they don’t use artificial intelligence on a routine basis, meaning either 29 percent of the in-house lawyers in America are proud Luddites or they don’t respect weak AI. Maybe if we fully expunged the people peddling unicorn AI we’d get better numbers.
Still, it’s hard to let go of the idea that we’re dealing with magic on some level. The sobering statistic to remember was raised by Thomson Reuters’s Khalid Al-Kofahi: by 2023 a basic laptop will do 63 trillion operations per second, a magic number because that’s the speed of human pattern recognition. And on that same trajectory, by 2050, a basic laptop will perform the human pattern recognition power of ALL HUMANS COMBINED every second. On the other hand, most humans are stupid and think CBS makes good sitcoms, so maybe that’s not as impressive as it sounds.
Whatever people think of it, folks seem to understand that artificial intelligence is the way of the future. Most everyone — in a room that probably carried some self-selection bias — knew that they needed AI-based solutions. And when the panel polled the audience about which applications they were considering for artificial intelligence solutions, by far the most popular were contracts and discovery, with M&A due diligence lagging behind (which is odd, because products like KIRA seem so perfectly suited to those tasks).
Perhaps discovery and contracts have just crossed the acceptability threshold first. Compliance was the big “other” application. Al-Kofahi said he couldn’t get into details, but that being an expert in multiple legal environments is simply tough and compliance is a target Thomson Reuters is working on. Could we see a new research product soon?
What about dispute resolution? Al-Kofahi phrased it as an access to justice issue when three times more issues are resolved on eBay than in US adversarial proceedings. Tully mused that the contracts of the future will be filled with clauses agreeing that all disputes will be resolved by Watson. He was only half-kidding. Could people get behind a decision maker without human judgment? Boeh argued that we’ll be there in the transactional world soon enough, and we’ll be the better for it, observing that removing the natural human bias of both sides wanting to make a given deal go through, no matter the obstacles, will make the deal better for both sides. And that’s a substantive legal decision that’s already here.
For anyone still skeptical, Tully put the future to the assembled in-house lawyers this way: would you feel comfortable today knowing that your lawyer did research without consulting Lexis or Westlaw? No. As AI tools get out there, clients have to start thinking about AI in this way — how can you trust a lawyer who reviewed documents without, say, Everlaw?
But it’s not going to be painless. Fears of robot lawyers may be cute, but it’s not going to end up like that. The AI narrative is coalescing around the idea that AI is going to kill off the boring work and leave every attorney pondering big-ticket brain tasks every day. Tully cited Richard Susskind’s prediction that there will be “90 percent fewer lawyers and only specialists will remain.” In Tully’s words, AI is more like Jarvis — helping Tony Stark process information and make better decisions — than Skynet out to kill us. Al-Kofahi used the mantra “what business are we in?” noting that if lawyers are in the advice business then they should embrace this. Boeh simply called AI “miraculous” for taking the menial tasks out of the law.
But one counsel raised the 64-million-dollar question that some of us have been harping on: what if menial tasks are good? Perhaps digging through documents for 20 hours a day makes you a better lawyer later, and there’s no effective substitute that allows a lawyer to “skip to the smart part.” If we scoffed at the “practice-ready” law school model before, we should choke with laughter over the idea that law school grads are going to roll out able to manipulate a factual record without ever digging through the context to learn the hard lessons of what is and isn’t a hot document. Who will be the next generation of “specialists” in Susskind’s world? Because if we gut the groundwork that junior lawyers have done for a century or more, it’s hard to imagine who earns those stripes.
Yet it was an odd question in a room full of clients. This is the room that constantly pressures outside counsel to write off junior billables. This is the room that’s “done paying for on-the-job training.” When the AI revolution begins — as Jacques Mallet du Pan observed of all revolutions — to eat its children and leave the cupboard bare for the next generation, will the clients recognize that they created this world? Or will ROSS just replace those lawyers too? Maybe. If we hold out until 2050, when computers are as powerful as Al-Kofahi predicts, maybe we won’t need to worry. Still, this is the dark underbelly of AI’s rosy narrative of giving lawyers “more time to do the smart stuff.”
Not to add more rubrics to this discussion, but Tully encapsulated the AI conversation when he said there are three ways everyone reacts to AI — disbelief, fear, or irrational exuberance — and all of them are wrong. Artificial intelligence, warts and all, is coming. There will be straightforward applications that will ease a lawyer’s pain and complex applications that will overturn the nature of the profession.
It’s time to pull up to the table, because this is getting sorted out with or without you, and you may as well have a seat.
Article source: https://abovethelaw.com/2017/10/no-one-knows-what-it-is-but-in-house-counsel-desperately-need-it/?rf=1