AI May Limit Key Learning Opportunities For Young Attorneys

The use of AI tools may limit key learning opportunities for young attorneys. AI can recognize patterns, but it has no social skills and no judgment about how it is best used, and it may enable young attorneys to skip the steps needed to master the complex pattern recognition abilities that underlie creative insight and lawyering.

Published On: 25 Oct 2024

Sarah Murray, M.A., Vice President of Litigation Consulting

By Sarah Murray
Originally published on Law360, September 30, 2024


The modern era has seen the development of many wondrous inventions, including the watch and chronometer, the printing press and movable type, the steam engine, the train, the automobile, the telegraph, the radio, satellites, GPS, and computers.

It has also seen, with great predictability, moral and social panics that these technologies will put people out of work and do more harm than good to the economy.

That's where we are with artificial intelligence right now, and this time — for perhaps the first time — lawyers are feeling the heat.

Messages of doom are being broadcast for white-collar professions like law and medicine. Critics of AI argue that the technology will disrupt current economic arrangements and job requirements. Advocates of AI argue that, while that may be true in the short term, AI will ultimately lead to a boom of new jobs, new possibilities and more wealth.

This debate between the "doomers" and the "boomers" is like an argument between an old married couple that repeats and repeats without resolution. Each side can point to facts that support their position, but the facts mustered by the technology "boomers" don't undo the unease felt by the "doomers" at what they fear AI will bring in its wake.

But by focusing primarily on the economic impact of new technologies like AI, we overlook the most profound impact of revolutionary technologies, which is more subtle, more pervasive and more enduring than simply putting people out of work or changing the nature of work — although they do both of those things, and AI likely will as well.

Technologies externalize human capabilities. When a new technology is invented, we focus on perfecting and exploiting that new technology. Correspondingly, we shift our attention away from cultivating the comparable skills and capacities inside ourselves.

This is where the greatest danger of AI lies, including when it comes to the practice of law. History shows us that when a powerful new technology comes along, resources are poured into developing and learning how to use that new technology — just as we see happening now. People also look for ways that the technology can take over what are seen to be routine human jobs with lower value — like writing complaints and motions.

That doesn't just take work away from human beings. It takes important learning experiences away from developing young legal professionals.

We are aware of this dynamic with some technologies more than others. In my childhood, when the pocket calculator was invented, school teachers were outraged to see students in math class whipping out calculators to do basic math. Battles were fought over whether to allow students to use calculators on tests. Then, when GPS devices and navigation programs came out, observers lamented that people would become geographically illiterate.

The responses were: Well, isn't it important for students to learn to work with calculators since that's what they will be doing out in the world anyway? And does it matter for people to know how to navigate with a map when everyone has access to a navigation system in their pocket?

Well, yes, it turns out that it does matter. It matters because, if we do not cultivate our ability to relate to the world through our body-mind with regard to numbers or spatial relations, we become dependent on machines. The body-mind refers to tacit knowledge — knowledge that becomes what we call "gut knowledge" or "intuition" after much practice and experience. When the calculator breaks, the GPS doesn't work or the internet goes down, we are left helpless. And if young attorneys do not receive adequate practice with lower-level work, they'll never develop this legal intuition.

While the brain is plastic and open to learning throughout our lifetime, it is not infinitely open. It is hard to learn foundational skills as one gets older, and the ability to learn new foundational skills is highly contingent on one's social and economic circumstances, because it requires time and exposure to the right experiences.

What's powerful and useful about AI is also what is scary about it. It can simulate one of the most meaningful human abilities — the ability to detect the deeper underlying patterns at work in the world around us and inside of us.

AI is, at its heart, a tool for pattern recognition. It harnesses the capacity of machines to review data endlessly and repetitively, adding some clever human programming that allows the machine to digest that data in ways that simulate learning.

Pattern recognition allows us to learn and use language; to engage in scientific discovery, artistic creation and spiritual insight; and, most relevant here, to practice law and ensure justice.

English common law is based on the idea that the rule of law and justice emerges organically as a pattern from the life of communities and their work to keep the peace and resolve conflict. The work of judges and jurists relies on pattern recognition of the deep principles at work in the rulings and findings of earlier courts.

How do young lawyers become well versed in legal pattern recognition? A lot of it happens through what has traditionally been called practice, which, for associates, has entailed engaging in lower-level work like researching and writing memos on case law, reviewing and writing basic motions and briefs, and reviewing and categorizing documents in discovery.

For litigators, it has also entailed going to court to try smaller cases, and to hearings to address lower-level issues like discovery disputes and motions.

The work of social scientist Gary Klein on how expertise develops, and how professionals become able to make good decisions under time pressure in high-risk situations, is instructive here.

In his classic 1998 work, "Sources of Power," Klein elucidates what years of field work taught him and his colleagues. The way that people become experts, able to rapidly make sound decisions under time constraints and stress, is to be exposed to enough instances of a situation — say, a burning high-rise in the middle of a city — to start to understand viscerally which factors are important to pay attention to.

He and his colleagues found that people who become experts revisit what has happened mentally, and tell themselves and others stories about what happened that extract lessons and help focus attention on what made a difference at the time.

While he and his colleagues were studying experienced professionals — the equivalent of trial lawyers — these lessons also apply to those doing lower-level, lower-risk work. Being exposed to enough instances of, for example, a complaint or a brief is what allows a young attorney to begin to see what makes a good complaint, what matters and what doesn't, and what makes for a good case.

What it takes to train a machine and to learn to talk with a machine is radically different from what it takes to train a human being and to learn to talk with other human beings, so learning to interact with AI is not a substitute for working with documents themselves.

Those of us who have already developed communication and pattern recognition skills may find it challenging, uncomfortable or exhilarating to learn these new ways. But most of us will ultimately be able to do it, because we already have a lifetime of reading, education, professional training and experience that has developed our pattern recognition and communication abilities. We can adapt.

But when the machines take over and young people lose opportunities to be exposed to the lower-level work of managing cases, they may never cultivate the complex pattern recognition abilities that undergird creative insight and lawyering.

Is this inevitable? Does this mean that lawyers and law firms should not use AI where it can be useful?

No — not if you see the pattern and understand how to create the right kinds of learning opportunities for young legal professionals.

Young attorneys need to read case law. They need to do research on legal issues. They need to review evidence and figure out how to present that evidence in a way that persuades decision-makers to see things their way. They can do all of these things assisted by AI, but they must read themselves and not leave that work to machines.

Most importantly, they need to shadow senior attorneys on calls, in meetings and in court so they can see how that background work translates into effective advocacy.

AI itself cannot tell you how to use AI well, even though it will give you an answer if you ask it. AI has no social skills or understanding — it only knows language and its patterns. It's up to us, the human beings, to think carefully about how to best use AI, not only to get the work done, but to create the kinds of people, workplaces and communities that we want and need.

Ideally, used well, AI tools could help firms free up young associates to attend mediation and settlement conferences and court hearings so that they can spend more time in practice, and less time in waiting.

Sarah E. Murray is a Vice President of Litigation Consulting at TrialQuest and a member of the American Society of Trial Consultants AI Taskforce. Sarah invites your thoughts and questions about AI and the legal and legal services professions.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer or its clients. This article is for general information purposes and is not intended to be and should not be taken as legal advice.