Would an Artificial Intelligence Asset Search Help?

Could someone getting divorced use artificial intelligence to conduct an asset search? Sure, it just wouldn’t be a very good one.

It’s hard to get very far into your day right now without hearing about ChatGPT, the artificial intelligence program that is making professors change their exam questions and prompting fabulous cartoons about its lack of a moral compass.

As an investigator, I had two thoughts:

  • Could this thing replace me one day?
  • Could I have a look at it?

The more I have looked at artificial intelligence, the more bullish I have become about the future of investigation. AI and greater computing power will generate volumes of data we can only dream about right now. Automatic transcripts of every YouTube video, for example, would mark an explosive change in the amount of material you would have to work with in researching someone. So would the ability to do media searches in every language, not just the small number offered by LexisNexis. I wrote about this in a law review article a few years ago, Legal Jobs in the Age of Artificial Intelligence.

More data will mean more things to research and interpret.

I haven’t yet tried ChatGPT, but after getting only as far as its own disclaimers, I wondered why none of the stories I had read about the program mentioned them. Here’s what it says at ChatGPT.pro:

Like any other machine learning model, ChatGPT has certain limitations that users should be aware of. Some of the potential limitations of ChatGPT include:

  • Dependence on data: ChatGPT is a machine learning model that has been trained on a large corpus of text data. As a result, the quality and accuracy of the model’s responses will depend on the quality and diversity of the data that it has been trained on. If the model is not trained on a diverse and comprehensive dataset, it may generate responses that are not relevant or accurate.
  • Limited understanding: While ChatGPT is able to generate highly accurate and fluent responses to prompts, it does not have a deep understanding of the world or the ability to reason like a human. As a result, the model may not be able to generate responses to complex or abstract questions, or to understand the context and implications of a given prompt.

Those are just the first two, but they would present a big problem for anyone who wanted a machine to do an investigation.

Dependence on data: Lots of the people and companies we look at have little or no data to their names: companies whose lawsuits are not online, people with multiple spellings of their names (whether because of fraudulent intent or different traditions of transliteration from another writing system). A person who doesn’t go to court and doesn’t get written about will not leave a lot of diverse data for the program to parse.

Limited understanding: I would think it self-evident that if you can’t solve a mystery by using Google, you would want to assign someone with a deep understanding of the world, which ChatGPT’s creators admit it does not possess. That doesn’t mean your investigator should be able to recite odes in Latin and argue the finer points of the 20th-century gold standard debates. But you want someone who knows to look a little more closely when a company changes its auditor, when a CEO resigns unexpectedly to “spend more time with family,” or when the manager of a “$600 million hedge fund” has an office above a bakery where the rent is $400 a month.

And when Husband takes stock payments through a company he told you was defunct, that’s another alarm that starts to ring. What else did he tell you that’s not true?

Complex questions and context. These are what we deal in, and these are ChatGPT’s weak points.

Never say never, but I think I can confidently sign another office lease or two before the bots come for my job.