“Too Much at Stake”?

The legal industry has been slow to embrace artificial intelligence (AI)*. But other industries and professions are not waiting; they are already delivering AI’s enhanced precision and cost efficiencies.

How high do the stakes have to be before law begins to catch up with medicine?

On April 11, 2018, the U.S. Food and Drug Administration announced that it would “permit marketing of the first medical device to use artificial intelligence to detect greater than a mild level of the eye disease diabetic retinopathy in adults who have diabetes”.

According to the U.S. FDA, the device: “Provides a screening decision without the need for a physician to also interpret the image or results, which makes it usable by health care providers who may not normally be involved in eye care.”

The device consists of software that uses an AI algorithm to analyze digital images of the eye taken by a retinal camera. A doctor uploads the digital images to a cloud server, where the software analyzes them and returns one of two results:

“(1) more than mild diabetic retinopathy detected: refer to an eye care professional”, or

“(2) negative for more than mild diabetic retinopathy: re-screen in 12 months.”

If the software detects a positive result, the patient should see an eye care provider for further diagnostic evaluation and possible treatment as soon as possible.
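For readers curious about the mechanics, the workflow the FDA describes boils down to a single binary decision. Here is a minimal sketch in Python; the function names, the cloud-upload call, and the threshold are illustrative assumptions on my part, not the vendor’s actual software or algorithm.

```python
# Illustrative sketch only: the names, the cloud call, and the threshold are
# assumptions for explanation, not the actual device's API or algorithm.

def screen_retinal_images(image_paths, analyze_in_cloud, threshold=0.5):
    """Return one of the two screening results described by the FDA.

    `analyze_in_cloud` stands in for the proprietary step: it is assumed to
    upload the retinal-camera images and return a score indicating how likely
    the patient has more than mild diabetic retinopathy.
    """
    score = analyze_in_cloud(image_paths)  # hypothetical cloud AI call
    if score >= threshold:
        return "more than mild diabetic retinopathy detected: refer to an eye care professional"
    return "negative for more than mild diabetic retinopathy: re-screen in 12 months"


# Example use with a stand-in scoring function:
print(screen_retinal_images(["left_eye.jpg", "right_eye.jpg"],
                            analyze_in_cloud=lambda images: 0.72))
```

The point is not the code itself but the shape of the decision: the doctor uploads images and receives one of two dispositions, with no eye specialist in the loop at the screening stage.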

In case non-ophthalmologists like me missed the point of that last statement from the U.S. FDA, the University of Michigan Medical School’s Marschall S. Runge, M.D. puts it this way:

“[This software] can identify diabetic retinopathy, a common eye disease, without the need for an eye specialist.”

First, this software will save time and money. It will free up eye specialists to focus on the patients who truly need them, because patients whose results come back “negative for more than mild diabetic retinopathy” do not require a referral.

Second, the software’s accuracy has been empirically validated. In the clinical data the FDA evaluated, the software correctly identified patients with more than mild diabetic retinopathy 87.4 percent of the time (correct positives) and correctly identified patients without it 89.5 percent of the time (correct negatives).
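Those two figures correspond to what statisticians call sensitivity (the share of patients with the disease whom the software correctly flags) and specificity (the share of patients without the disease whom it correctly clears). The short Python example below shows how such figures are computed; the patient counts are hypothetical numbers chosen only to reproduce the percentages, not the FDA’s underlying trial data.

```python
# Hypothetical patient counts, chosen only to illustrate how the two accuracy
# figures are computed; these are not the FDA's trial data.
true_positives = 174    # patients with more-than-mild retinopathy, correctly flagged
false_negatives = 25    # patients with the disease whom the software missed
true_negatives = 556    # patients without the disease, correctly cleared
false_positives = 65    # patients without the disease, incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # "correct positive" rate
specificity = true_negatives / (true_negatives + false_positives)   # "correct negative" rate

print(f"Sensitivity: {sensitivity:.1%}")  # 87.4%
print(f"Specificity: {specificity:.1%}")  # 89.5%
```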

Third, what this software does is very consequential. As the U.S. FDA puts it:

“Diabetic retinopathy [the disease which this software detects] is the most common cause of vision loss among the more than 30 million Americans living with diabetes and the leading cause of vision impairment and blindness among working-age adults.”

In earlier posts I’ve argued that lawyers are not yet adopting (in material percentages, at least) the AI-based tools that can supplement attorneys’ traditional, people-intensive, “manual” approaches to reviewing documents for litigation discovery, performing due diligence in deals, managing thousands of a company’s contracts, and conducting legal research (see here and here).

The consequences:

  1. Reduced accuracy in legal work: Unnecessary reliance on “manual” attorney effort rather than AI and its algorithms (a problem exacerbated when the work falls to inexperienced new lawyers or contract attorneys) can yield higher error rates. See my post, “Artificial Intelligence vs. Human Lawyers: Artificial Intelligence — 94% Accurate, Humans — 85% Accurate”; and
  2. Needlessly high legal costs: Unnecessary reliance on lawyers’ “manual” labor increases the hours billed by outside lawyers and inflates the headcount needed for in-house legal staff.

Many of my fellow lawyers offer the excuse that there’s just too much at stake in legal work to incorporate AI into it, and that the work is too complex.

That objection seems ironic in light of the greater precision and lower costs that eye doctors will experience thanks to this AI innovation.

 

* “AI involves machines that can perform tasks that are characteristic of human intelligence”.
