Active Learning

Machine learning to deliver results faster

Getting to the most important documents doesn’t have to take a lot of effort. Active learning handles the brunt of the work, with minimal setup and human input. There’s no need for training sets and no manual batching of documents. You can take a 100,000-document project from setup to review in under 10 minutes.

Company:

Relativity

Released:

February 2018 (Relativity 9.5)

Timeframe:

February 2017 – Present

Team:

PM: Andrea Beckman, Jim Witte
TPM: Elise Tropiano, Trish Gleason

Role:

User Experience Designer

Deliverables:

User Research
Low Fidelity Wireframes
High Fidelity Mockups
Interactive Prototypes
Usability Testing
Style Guidelines & Final Assets

TL;DR - The Project in a Nutshell

The Active Learning product has been in various stages of definition and development for the better part of two years. I have had the good fortune of being the lead designer on the project for that entire time.

Active Learning is a technology-assisted review (TAR) process in which the system serves up a continuously updated queue of documents to review, ensuring that the highest-ranked (most likely responsive) documents are presented first.
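
To make that concrete, here is a minimal sketch of the general relevance-feedback loop behind this kind of prioritized review. It is illustrative only, not Relativity’s implementation: the scikit-learn model, the TF-IDF features, and the get_reviewer_coding callback are all stand-in assumptions.

```python
# A minimal sketch of a prioritized-review loop, NOT Relativity's
# implementation. Model, features, and get_reviewer_coding are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritized_review(documents, get_reviewer_coding, batch_size=10):
    """Serve the highest-ranked uncoded documents first, retraining as codes arrive."""
    X = TfidfVectorizer(max_features=50_000).fit_transform(documents)
    coded = {}  # doc index -> 1 (responsive) or 0 (not responsive)

    while len(coded) < len(documents):
        if len(set(coded.values())) == 2:  # need examples of both classes to train
            idx = list(coded)
            model = LogisticRegression(max_iter=1000).fit(X[idx], [coded[i] for i in idx])
            scores = model.predict_proba(X)[:, 1]  # predicted responsiveness
        else:
            scores = [0.5] * len(documents)  # no model yet: order is arbitrary

        # The queue is continuously re-ranked: highest scores come up first.
        queue = sorted((i for i in range(len(documents)) if i not in coded),
                       key=lambda i: scores[i], reverse=True)[:batch_size]
        for i in queue:
            coded[i] = get_reviewer_coding(documents[i])  # 1 or 0 from the human reviewer
    return coded
```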

From the outset, we set out to solve three problems:

  • Case sizes are growing at a nearly unmanageable rate
  • Previous TAR solutions require onerous manual setup and administration
  • Relativity’s analytics suite is viewed as inferior to some of its competitors

Over the course of the past two years, we have built and released a product that is already having a profoundly positive impact on our users and our industry.

When you look at how this small team reviewed hundreds of thousands of documents in that short timespan—it’s exceptional. Analytics is transforming e-discovery.

Lynne Brigant

Director of Client Services, H&A eDiscovery

Powerful Tools; Simple Setup

One of the primary pieces of feedback we heard while interviewing users—and we heard it a LOT—was that our previous generation TAR product was too difficult to set up and maintain. “It’s too clicky!” they would say, and they were right.

With that in mind, we set out to streamline the setup process and reduce the overall number of clicks required anywhere we could. We achieved this in a variety of ways:

  • intelligent defaults (where appropriate)
  • behind-the-scenes automation
  • direct links across touch points

Up and running in just a few clicks

We automated as much as we could to reduce the friction of the setup process: one name, four selections, and you’re on your way. As a point of comparison, the previous TAR product had 19 fields to fill out.

Starting a review is a breeze

Similar to project setup, we eliminated superfluous inputs and actions from starting a review queue while still keeping the product feature rich.

Our team had little experience with Analytics and was facing an unrealistic deadline. We could have never met the deadline nor achieved the quality of results without active learning.

Lit Support Manager

Canadian Law Firm

Project stats: in-depth or at a glance

Whether you need a quick update on project progress or are doing a deep dive into review statistics, both tasks are easily accomplished. Project Home provides high-level stats and visualizations to quickly inform a user of the state of the project. Review Statistics captures and logs all useful historical data pertaining to each review and reviewer.
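
As a rough illustration of the kind of roll-up behind those views, here is a small sketch that aggregates a coding event log into project-level numbers. The CodingEvent shape is a hypothetical assumption for the example, not Relativity’s actual schema.

```python
# A minimal sketch of dashboard-style aggregation. The event-log shape
# below is a hypothetical stand-in, not Relativity's schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodingEvent:
    reviewer: str
    doc_id: str
    responsive: bool  # the reviewer's coding decision

def project_stats(events: list[CodingEvent]) -> dict:
    """Roll an event log up into the high-level numbers a dashboard shows."""
    docs_per_reviewer = Counter(e.reviewer for e in events)
    responsive = sum(e.responsive for e in events)
    return {
        "documents_reviewed": len(events),
        "responsive_rate": responsive / len(events) if events else 0.0,
        "docs_per_reviewer": dict(docs_per_reviewer),
    }
```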

Finished, for now

An Active Learning project can end for any number of reasons and in any number of ways, but the most defensible way is by running an Elusion Test. Upon completion, a user is given the option to accept the results—effectively putting the project on hiatus—or resume the project for more desirable results. Regardless of the selection, nothing is permanent; all results are recorded in Review Statistics and a project can be restarted at any time.
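
For readers unfamiliar with the test, the idea is statistical: sample randomly from the documents the review would leave behind, have a human code the sample, and estimate how many responsive documents are eluding the review. A minimal sketch follows; the sample size, the normal-approximation interval, and the code_document callback are illustrative assumptions, not Relativity’s implementation.

```python
# A minimal sketch of the statistics behind an elusion test, NOT
# Relativity's implementation. code_document (a human coding decision,
# 1 = responsive) and the 500-document sample are stand-in assumptions.
import random
from math import sqrt

def elusion_test(unreviewed_docs, code_document, sample_size=500):
    """Estimate the rate of responsive documents the review would leave behind."""
    sample = random.sample(unreviewed_docs, min(sample_size, len(unreviewed_docs)))
    hits = sum(code_document(doc) for doc in sample)
    rate = hits / len(sample)  # point estimate of the elusion rate
    # 95% normal-approximation confidence interval
    margin = 1.96 * sqrt(rate * (1 - rate) / len(sample))
    return rate, max(0.0, rate - margin), min(1.0, rate + margin)
```

A low elusion rate, with a suitably tight interval, is what makes accepting the results defensible.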

From First Steps to Walking

Like all good projects, Active Learning has grown, changed, and matured as we've tested, learned more, and received feedback. Thanks to an eager user base, we were fortunate enough to have a client advisory board provide feedback early and often. As you can see below, that feedback drastically changed the form Active Learning has taken.

Wireframe Explorations

The main focus of the initial concepts was to provide guidance throughout the project (thus a wizard), differentiation between queue types, and immediate access to starting a review.

Mid-fi Explorations

Expanding on the wireframes, the mid-fi mockups started the discussion of how we represent our data over the life of a project, and how the project itself evolves over time.

Hifi Mockups and Prototypes

After numerous explorations, countless user interviews, and tons of feedback, the final direction of the product began to take shape. I switched to almost exclusively high fidelity prototyping at this stage.

The Final Product

A tremendous amount of work has gone into this project from every discipline involved. Our hard work continues to pay off as more and more user success stories roll in each week. It's been a thrill to have worked on this project from start to finish and to see it go from scribbles on a whiteboard to a tangible thing people use to do their jobs.

So... What Comes Next?

The official MVP release of Active Learning was in February 2018. Since then we've continued to design and roll out features that have been in the works for a while, notably two additional queue types: Coverage Review and QC Review. Elusion Tests are just around the bend, and further down the road is concurrent queue review. Between those features and continuing to gather feedback and iterate, we have our hands pretty full.