
Generative AI - What Now (for Legal Professionals)?

Recently, I had the privilege of moderating a panel on Generative AI through the University of Chicago’s Master of Applied Data Science program. It was a lively, thought-provoking discussion with experts from academia and business. Although the panel was originally intended for an audience of data scientists, it offered several useful takeaways for legal professionals and consumers of legal services.

The AI concepts discussed are mostly accessible to legal professionals. One note: “LLM” below refers to “large language model,” probably best known in the form of ChatGPT. The significance of this technology is that it allows anyone to interact with computers using natural language rather than a coding language.

Below, I’ve featured a few insights from the program that I found of particular interest. Note: The panelists’ opinions are not necessarily mine or those of Katten, nor is this legal advice.

Insight #1 - Scaling into the enterprise

LLM technology is intuitive and accessible in a way no prior AI tool has been. That is why business leaders are more eager to engage with it than with earlier AI tools. The technology is so compelling that people will push for it even when it might not be ready for a particular use case. Furthermore, turning a piece of technology into a scalable enterprise solution is hard, very hard.

But eagerness should be met with caution. Mistakes will be made, as they always are in these situations. Much of the misuse will be unwitting and unintentional, such as treating LLMs as information systems when they are no such thing. At its core, the defense against this is understanding what these models do and how to control them.

Insight #2 - What these models really do, and how to get the most out of them

What do LLMs do? As one panelist put it, they predict the next word. That’s it. They are not repositories of knowledge; if you ask them a question, they don’t look up an answer. They can be trained to do certain things, but the simplest of reasoning problems can defeat them. To a computer, language is an input with a pattern, just like a CT scan or a piece of music. Computers can analyze patterns that humans cannot, and those patterns can be put to use, in this case to generate language.
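To make “predict the next word” concrete, here is a minimal sketch using the small, open GPT-2 model via the Hugging Face transformers library. The model and library are my illustrative choices, not something the panel discussed. All the model produces is a probability for every possible next token:

```python
# A minimal sketch of next-word prediction, using the small open GPT-2 model.
# The model and library here are illustrative choices, not the panel's.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The party of the first part shall"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token at every position

# The model's entire output: a probability for each possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Notice that there is no lookup step anywhere in this code. Generation is just this prediction repeated, with each chosen token appended to the prompt.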

How do you best put LLMs to use? You understand them on a functional level. Knowledge at the technical level is probably unnecessary for most people. What data scientists really need to do is walk through the use case with the people who are asking for it, serving as the interface between what the business needs and the technological solution that can be provided.

Coding ability is nice, but it is a teachable skill. Besides, English is, in effect, the programming language for LLMs. Really, it comes down to logical reasoning or computational thinking: breaking down a problem to effectuate a solution. Then you embed that solution into a workflow, which is where the ultimate value lies.
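As a rough illustration of that idea, in the sketch below the “program” is an English instruction wrapped in an ordinary function so it can sit inside a larger workflow. The call_llm function is a hypothetical stand-in for whichever approved model endpoint an organization actually uses:

```python
# Illustrative only: the business logic is the English instruction itself.
# `call_llm` is a hypothetical stand-in for an approved model endpoint.
def call_llm(prompt: str) -> str:
    """Placeholder for a call to your organization's LLM provider."""
    raise NotImplementedError("wire this to an approved model API")

def summarize_clause(clause_text: str) -> str:
    # The "code" is a plain-language instruction: English as the
    # programming language.
    prompt = (
        "In two sentences, summarize the obligations created by the "
        "following contract clause:\n\n" + clause_text
    )
    return call_llm(prompt)

def summarize_document(clauses: list[str]) -> list[str]:
    # Embedding the solution in a workflow: apply it clause by clause.
    return [summarize_clause(c) for c in clauses]
```

The value, as the panelist noted, is not the model call itself but the workflow built around it.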

Insight #3 - Use in health care

Health care is an industry ripe for efficiency gains. Doctors and nurses spend a large proportion of their time on administrative tasks, something LLMs could help with. Providers are far more receptive to using technology to address this than they were in years past. Now they are asking for their own ChatGPT: one that protects patient information and is customized, in-house, modular and focused on medical usage.

The question will be how the technology maps onto any individual use case. Where LLMs augment human reasoning, they will find an audience. For example, a doctor might use an LLM if it provides a better way to search PubMed to stay current on the latest research. Likewise, where the benefits outweigh the dangers, LLMs will get a hearing: think routine administrative tasks rather than critical clinical decisions.
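To sketch what that PubMed example might look like in practice (this is my illustration, not something the panel built): an LLM could translate a clinician’s plain-language question into a structured query, which is then run against NCBI’s public E-utilities search endpoint. The query-translation step is stubbed out below:

```python
# Sketch of LLM-assisted PubMed search. The E-utilities endpoint is NCBI's
# real public API; `question_to_query` is the hypothetical LLM step.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def question_to_query(question: str) -> str:
    """Hypothetical: an LLM would rewrite the question in PubMed query syntax."""
    return question  # placeholder; a real system would call a model here

def search_pubmed(question: str, max_results: int = 5) -> list[str]:
    params = {
        "db": "pubmed",
        "term": question_to_query(question),
        "retmode": "json",
        "retmax": max_results,
    }
    resp = requests.get(ESEARCH_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]  # PubMed article IDs

print(search_pubmed("statin use and cognitive decline"))
```

Note that the LLM here only augments the search step; the doctor still reads and judges the underlying literature.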

Insight #4 - Regulation

There are calls for regulation of AI. That is understandable enough, but perhaps some perspective is in order. First, controls always follow the technology. They do not precede it. Second, much of the regulation can be self-regulation. Companies should recognize the tools for what they are and think about whether analyzing the past is a good way of comprehending the future. Third, in the opinion of one panelist, regulation should not just entrench incumbents, and one might rightly ask for specifics when incumbents call for new rules, especially about their own businesses.

Fourth, and perhaps most important, regulation should focus first on the immediate problems instead of the flashy ones. The potentially harmful effects of social media (which definitely uses AI) are demonstrable, widespread and immediate. The threat of a drone unleashing a missile to target the wrong person is a bit more … niche. It is harder to administer a process where the problems are small and accrete over time into big ones, but that is where the focus needs to be.

If you want to learn more

This summary only scratches the surface of what the panelists discussed. Listen to the full recording here.

You can find out why fine-tuning works differently for LLMs than for other models, how Auto-GPT fits into this, and why an LLM could tell you (incorrectly) that Eisenhower was president when George Clooney was born.

About me

I am a licensed attorney with a Master’s in Data Science. At Katten, I work as the Senior Data Science and Innovation Manager. In this role, I evaluate and implement AI tools into our legal practice and serve as a technical advisor to attorneys on data science and AI. I also lead the Katten DataLAB team, which provides “legal data science” to clients.

Tags

data science, artificial intelligence, katten datalab