There Really Is No Contest When It Comes To Objectivity, Speed And Effectiveness

The evidence continues to pile up: relying on key terms to find relevant documents at the outset of an e-discovery matter is going the way of the dinosaur.

That is, rapid extinction.

The reason is the increasingly widespread deployment of Technology Assisted Review (TAR) 2.0 platforms such as Catalyst’s Insight Predict. TAR 2.0 pairs a learning algorithm with smart training by the attorneys who understand the case to find relevant documents in an objective, fact-driven manner far sooner than traditional, key-term-driven methods. With a traditional key-term-and-linear-review approach, nothing prioritizes the relevant documents, so the user still has to review everything.

That is, key-term and linear-review users are just as likely to find a relevant document in the first 100 documents they review as in the last 100. The results they start with are the best they are ever going to get.

                               Key Terms/Linear   TAR 2.0
Defensibility                  ***                ***
Objectivity/Process            *                  ***
Can Measure Results            *                  ***
Reduces Attorney Time          *                  ***
Identifies All Relevant Docs                      ***

But utilizing TAR 2.0/Predict as the first step – before keyword searches – objectively prioritizes relevant documents, allowing users to get their most important documents within the first 10-40% of the review. Predict continues to learn and improve as relevant documents are identified.
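For the technically curious, the pattern behind TAR 2.0 prioritization is commonly called continuous active learning: score the unreviewed documents, review the top-ranked batch, retrain on those decisions, and repeat. The Python sketch below shows the loop in miniature; the TF-IDF features, logistic regression model, and batch size are illustrative assumptions, not a description of Insight Predict’s internals.

```python
# A minimal continuous-active-learning (CAL) loop -- the general pattern
# behind TAR 2.0 prioritization. Feature choice, model, and batch size
# are illustrative assumptions, not Insight Predict's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(documents, review_batch, batch_size=50):
    """Rank unreviewed documents by predicted relevance, send the top
    batch to attorneys, retrain on their decisions, and repeat.
    `review_batch` is a callback returning attorney decisions
    (1 = relevant, 0 = not relevant) for the documents it is given."""
    X = TfidfVectorizer(max_features=50_000).fit_transform(documents)
    labels, unreviewed = {}, set(range(len(documents)))

    # Seed the model with one small starter batch. (A real tool keeps
    # sampling until at least one relevant document has been found.)
    for i in sorted(unreviewed)[:batch_size]:
        labels[i] = review_batch([documents[i]])[0]
        unreviewed.discard(i)

    while unreviewed and len(set(labels.values())) == 2:
        idx = list(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[idx], [labels[i] for i in idx])
        # Prioritize: review the likeliest-relevant documents first.
        ranked = sorted(unreviewed, reverse=True,
                        key=lambda i: model.predict_proba(X[i])[0, 1])
        for i in ranked[:batch_size]:
            labels[i] = review_batch([documents[i]])[0]
            unreviewed.discard(i)
    return labels
```

Each pass retrains on every attorney decision made so far, which is why prioritization keeps improving as the review proceeds.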

These results have hit home in many prominent cases. Legal-industry bellwether The Sedona Conference’s Working Group 1 (WG1) created a detailed “TAR Case Law Primer” that surveys how courts have used and viewed TAR, offers guidelines on when it is best applied, and outlines the benefits of relying on the technology, its metrics and its process as laid out by Judge Andrew J. Peck, a preeminent voice on TAR and e-discovery matters.

Even more recently, in FCA US, LLC v. Cummins, Inc., Judge Avern Cohn ruled that “… applying TAR to the universe of electronic material before any keyword search reduces the universe of electronic material is the preferred method.”

It All Makes Sense

Our studies show that even after the best terms are applied, responsiveness usually averages just 5%-20%. That means $0.80 to $0.95 of every $1.00 spent on review goes to reviewing non-responsive documents, even after key-term culling.
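A quick back-of-the-envelope calculation makes the point concrete. The Python sketch below uses hypothetical volumes and per-document costs; only the 5%-20% responsiveness range comes from the paragraph above:

```python
# If only 5%-20% of the documents hitting search terms are responsive,
# 80%-95% of review spend goes to non-responsive material.
# The document count and per-document cost here are hypothetical.
def wasted_review_spend(doc_count, cost_per_doc, responsive_rate):
    total = doc_count * cost_per_doc
    wasted = total * (1 - responsive_rate)
    return total, wasted

for rate in (0.05, 0.20):
    total, wasted = wasted_review_spend(100_000, 1.50, rate)
    print(f"richness {rate:.0%}: ${wasted:,.0f} of ${total:,.0f} "
          f"spent on non-responsive documents")
# richness 5%: $142,500 of $150,000 spent on non-responsive documents
# richness 20%: $120,000 of $150,000 spent on non-responsive documents
```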

That’s just not money well spent.

In a perfect world, key terms would draw a clean line between responsive and non-responsive. But they don’t, and relying on key terms means the user never knows what is being left behind among the documents that did not hit on a term. What if the damning or exonerating documents are sitting in the set that did not hit? TAR 2.0 surfaces relevant documents that key terms miss.

Insight Predict adapts to the data, constantly learns what is of utmost importance, and brings those items to the forefront. Who wouldn’t want to see the most critical information in the first 10% of the review, rather than waiting until 100% of the document population has been reviewed? On typical matters with Insight Predict, we regularly find 90%+ of the responsive information in the first 30% of the review.
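Claims like “90% in the first 30%” come from measuring recall at a given review depth. Here is a small Python sketch of that measurement; the labels are made up to illustrate a well-prioritized review, not drawn from any actual matter:

```python
# Measure cumulative recall at a given depth of a prioritized review.
def recall_at_depth(review_order_labels, depth_fraction):
    """`review_order_labels`: 1/0 relevance decisions in the order the
    prioritized review actually proceeded."""
    total_relevant = sum(review_order_labels)
    cutoff = int(len(review_order_labels) * depth_fraction)
    found = sum(review_order_labels[:cutoff])
    return found / total_relevant if total_relevant else 0.0

# Hypothetical 1,000-doc review with 100 relevant docs, front-loaded
# the way a well-prioritized review tends to be:
labels = [1] * 90 + [0] * 210 + [1] * 10 + [0] * 690
print(f"recall at 30% depth: {recall_at_depth(labels, 0.30):.0%}")  # 90%
```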

Actual Results

While search terms can be a great way to reduce data volume, we know of no case that has been decided based on who achieved the larger reduction in documents through terms. Our results, studies and research confirm that, more often than not, terms still let through a high volume of non-responsive documents.

To this point, below is a graphic from a recent project in which outside counsel expected the data set to have a high responsive rate (60-70%) and decided that linear review would be the best approach. When DSi’s team sampled for richness and found it was actually low (11-17%), it recommended using Predict. Outside counsel nonetheless proceeded with linear review.

After 2.5 days of review, richness levels were not meeting expectations, so on DSi’s recommendation the decision was made to ‘turn on’ Predict (represented by ‘Day 1’ on the horizontal blue bar). Richness immediately spiked as the relevant documents jumped to the front of the review line. As it turned out, the custodians covered during those 2.5 days of linear review simply didn’t have many relevant documents. Other vendors might have just kept reviewing in the same fashion, but the DSi team identified the issue and set a new course, saving hundreds of hours and significant dollars on the review in a highly defensible, quality-driven manner.
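For those curious about the sampling step, here is roughly how a richness estimate like the one above is computed: review a random sample, then put a confidence interval around the observed responsive rate. The sample size and hit count below are hypothetical, chosen to land near the 11-17% range:

```python
# Estimate collection richness from a random sample of reviewed docs.
import math

n, hits = 400, 56        # hypothetical sample: 400 docs, 56 responsive
p = hits / n
margin = 1.96 * math.sqrt(p * (1 - p) / n)   # normal-approx. 95% CI
print(f"estimated richness: {p - margin:.1%} to {p + margin:.1%}")
# -> estimated richness: 10.6% to 17.4%
```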

These results are not uncommon. We would welcome the opportunity to learn about your results and challenges, share best practices, and assist you in implementing modern processes and platforms that will dramatically increase the defensibility, efficiency and cost-effectiveness of your e-discovery matters.
