Continuous Active Learning for Technology Assisted Review
How It Works and Why It Matters for E-Discovery

John Tredennick, Esq.
Founder and CEO, Catalyst Repository Systems
Peer-Reviewed Study Compares TAR Protocols

Two of the leading experts on e-discovery, Maura R. Grossman and Gordon V. Cormack, presented a 2014 peer-reviewed study on continuous active learning, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, to the annual conference of the Special Interest Group on Information Retrieval, a part of the Association for Computing Machinery (ACM). In the study, they compared three TAR protocols, testing them across eight different cases. Two of the three protocols, Simple Passive Learning (SPL) and Simple Active Learning (SAL), are typically associated with early approaches to predictive coding, which we call TAR 1.0. The third, continuous active learning (CAL), is a central part of a newer approach to predictive coding, which we call TAR 2.0.

Based on their testing, Grossman and Cormack concluded that CAL demonstrated superior performance over SPL and SAL, while avoiding certain other problems associated with these traditional TAR 1.0 protocols. Specifically, in each of the eight case studies, CAL reached higher levels of recall (finding relevant documents) more quickly and with less effort than the TAR 1.0 protocols.

Not surprisingly, their research caused quite a stir in the TAR community. Supporters heralded its common-sense findings, particularly the conclusion that random training was the least efficient method for selecting training seeds. (See, e.g., Latest Grossman and Cormack Study Proves Folly of Using Random Search for Machine Training, by Ralph Losey.) Detractors challenged their results, arguing that using random seeds for training worked fine with their TAR 1.0 software and eliminated bias. (See, e.g., Random Sampling as an Effective Predictive Coding Training Strategy, by Herbert L. Roitblat.) We were pleased that the study confirmed our earlier research and legitimized what for many is still a novel approach to TAR review.

So why does this matter? The answer is simple. CAL matters because saving time and money on review is important to our clients. The more the savings, the more it matters.
TAR 1.0: Predictive Coding Protocols

To better understand how CAL works and why it produces better results, let's start by taking a look at TAR 1.0 protocols and their limitations.

[Figure: TAR 1.0 workflow - Collect/Receive, Seed Set, SME trains and tests against a Control Set, Rank All Documents and Establish Cutoff, Transfer to Review Platform]

Most are built around the following steps:

1. A subject matter expert (SME), often a senior lawyer, reviews and tags a random sample (500+ documents) to use as a control set for training.
2. The SME then begins a training process using Simple Passive Learning or Simple Active Learning. In either case, the SME reviews documents and tags them relevant or non-relevant.
3. The TAR engine uses these judgments to build a classification/ranking algorithm that will find other relevant documents. It tests the algorithm against the already-tagged control set to gauge its accuracy in identifying relevant documents.
4. Depending on the testing results, the SME may be asked to do more training to help improve the classification/ranking algorithm.
5. This training and testing process continues until the classifier is stable. That means its search algorithm is no longer getting better at identifying relevant documents in the control set, so there is no point in further training relative to the control set.

The next step is for the TAR engine to run its classification/ranking algorithm against the entire document population. The SME can then review a random sample of ranked documents to determine how well the algorithm did in pushing relevant documents to the top of the ranking. The sample helps tell the review administrator how many documents will need to be reviewed to reach different recall rates. The review team can then be directed to look at documents with relevance scores higher than the cutoff point; documents below the cutoff point can be discarded.

Even though training is initially iterative, it is a finite process. Once your classifier has learned all it can from the 500+ documents in the control set, that's it. You simply turn it loose to rank the larger population (which can take hours to complete) and then divide the documents into categories to review or not review.
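To make that workflow concrete, here is a minimal sketch of the one-time training loop described above, written in Python with a generic scikit-learn classifier. The function names, the F1-based stability test and the fixed cutoff are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative TAR 1.0 loop (not any product's implementation): train against
# a fixed control set until the classifier stabilizes, then rank the whole
# collection once and cut off the review population.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def tar_1_0(all_docs, control_texts, control_labels, get_sme_batch,
            stability_delta=0.01, cutoff=0.5):
    vec = TfidfVectorizer(max_features=50000)
    X_all = vec.fit_transform(all_docs)
    X_ctrl = vec.transform(control_texts)

    train_texts, train_labels = [], []
    prev_f1 = 0.0
    while True:
        # SME reviews and tags another training batch (SPL or SAL selection).
        texts, labels = get_sme_batch()
        train_texts += texts
        train_labels += labels

        clf = LogisticRegression(max_iter=1000)
        clf.fit(vec.transform(train_texts), train_labels)

        # Test against the control set to gauge accuracy.
        f1 = f1_score(control_labels, clf.predict(X_ctrl))
        if f1 - prev_f1 < stability_delta:
            break            # classifier is "stable"; training stops here
        prev_f1 = f1

    # One-time ranking of the whole population, then a fixed cutoff.
    scores = clf.predict_proba(X_all)[:, 1]
    return [d for d, s in zip(all_docs, scores) if s >= cutoff]
```

Note how nothing after the break feeds back into the classifier; that one-pass structure is what the rest of this paper contrasts with continuous learning.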
The goal, to be sure, is for the review population to be smaller than the remainder. Savings come from not having to review all of the documents.

SPL and SAL: Simple TAR 1.0 Training Protocols

Grossman and Cormack tested two training protocols used in the TAR 1.0 methodology: Simple Passive Learning and Simple Active Learning.

Simple Passive Learning uses random documents for training. Grossman and Cormack did not find this approach to be particularly effective:

The results show that entirely non-random training methods, in which the initial training documents are selected using a simple keyword search, and subsequent training documents are selected by active learning, require substantially and significantly less human review effort to achieve any given level of recall, than passive learning, in which the machine-learning algorithm plays no role in the selection of training documents.

Common sense supports their conclusion. The quicker you can present relevant documents to the system, the faster it should learn about your documents. We have also written about this issue and made similar arguments about the efficacy of random training; see Is Random the Best Road for Your Car? Or Is There a Better Route to Your Destination? and Comparing Active Learning to Random Sampling Using Zipf's Law to Evaluate Which Is More Effective for TAR.

Simple Active Learning does not rely on random documents. Instead, it starts with whatever relevant documents you can find, often through keyword search, to initiate the training. From there, the computer presents additional documents designed to help train the algorithm. Typically the system selects documents it is least sure about, often from the boundary between relevance and non-relevance. In effect, the machine-learning algorithm is trying to figure out where to draw the line based on the documents in the control set you created to start the process.
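The difference between the two selection strategies is easy to show in code. The sketch below is an assumption about how such selection is typically implemented with generic scikit-learn-style scoring: SPL draws training documents at random, while SAL picks the documents whose predicted relevance sits closest to the decision boundary.

```python
# Illustrative selection strategies (assumed, generic scikit-learn style):
# SPL draws training documents at random; SAL picks the documents whose
# predicted relevance is closest to the decision boundary (most uncertain).
import numpy as np

def select_spl(unreviewed_indices, batch_size, rng=np.random.default_rng()):
    """Simple Passive Learning: random selection."""
    return rng.choice(unreviewed_indices, size=batch_size, replace=False)

def select_sal(clf, X_unreviewed, unreviewed_indices, batch_size):
    """Simple Active Learning: uncertainty sampling near the 0.5 boundary."""
    probs = clf.predict_proba(X_unreviewed)[:, 1]
    uncertainty = np.abs(probs - 0.5)      # 0 means most uncertain
    order = np.argsort(uncertainty)        # ascending: most uncertain first
    return np.asarray(unreviewed_indices)[order[:batch_size]]
```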
As Grossman and Cormack point out, this means that the SME spends a lot of time looking at marginal documents in order to train the classifier. And keep in mind that the classifier is training against a relatively small number of documents chosen by your initial random sample. There is no statistical reason to think these are in fact representative of the larger population, and they likely are not. We have written recently about the issue of topical coverage of random samples here: Comparing Active Learning to Random Sampling Using Zipf's Law to Evaluate Which Is More Effective for TAR.

Grossman and Cormack concluded that Simple Active Learning performed better than Simple Passive Learning. However, Simple Active Learning was found to be less effective than continuous active learning:

Among active-learning methods, continuous active learning with relevance feedback yields generally superior results to simple active learning with uncertainty sampling, while avoiding the vexing issue of stabilization: determining when training is adequate, and therefore may stop.

Thus, both of the TAR 1.0 protocols, SPL and SAL, were found to be less effective at finding relevant documents than CAL.

Practical Problems with TAR 1.0 Protocols

Whether you use the SPL or the SAL protocol, the TAR 1.0 process comes with a number of practical problems when applied to real-world discovery.

One Bite at the Apple: The first, and the most relevant to a discussion of continuous active learning, is that you get only one bite at the apple. (See TAR 2.0: Continuous Ranking Is One Bite at the Apple Really Enough?) Once the team gets going on the review set, there is no opportunity to feed their judgments on review documents back into the classification/ranking algorithm. Improving the algorithm would mean the review team has to review fewer documents to reach any desired recall level.

SMEs Required: A second problem is that TAR 1.0 generally requires a senior lawyer or subject-matter expert (SME) for training. Expert training requires the lawyer to review thousands of documents to build a control set, to train, and then to test the results. Not only is this expensive, but it delays the review until you can convince your busy senior attorney to sit still and get through the training. I wrote about these problems in this post.

Rolling Uploads: Going further, the TAR 1.0 approach does not handle rolling uploads well, and it does not work well for low-richness collections, both of which are common in e-discovery. New documents render the control set invalid because they were not part of the random selection process. That typically means going through new training rounds.

Low Richness: The problem with low-richness collections is that it can be hard to find good training examples through random sampling. If richness is below 1%, you may have to review several thousand documents just to find enough relevant ones to train the system. Indeed, this issue is sufficiently difficult that some TAR 1.0 vendors suggest their products shouldn't be used for low-richness collections.
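The low-richness problem is largely arithmetic. This short, illustrative calculation (the seed counts and richness figures are hypothetical, not from the study) shows how quickly the random-sampling burden grows as richness falls.

```python
# Rough expected review effort to find enough relevant seeds by random
# sampling at a given richness (illustrative numbers only).
def docs_to_review_for_seeds(richness, seeds_needed):
    # Expected documents reviewed = seeds needed / prevalence of relevant docs.
    return int(round(seeds_needed / richness))

print(docs_to_review_for_seeds(0.01, 50))    # ~5,000 docs at 1% richness
print(docs_to_review_for_seeds(0.005, 50))   # ~10,000 docs at 0.5% richness
```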
TAR 2.0: Predictive Coding Protocols

With TAR 2.0, these real-world problems go away, partly due to the nature of continuous learning and partly due to the continuous ranking process required to support continuous learning. Taken together, continuous learning and continuous ranking form the basis of the TAR 2.0 approach, not only saving on review time and costs but also making the process more fluid and flexible in the bargain.

Continuous Ranking

Our TAR 2.0 engine is designed to rank millions of documents in minutes. As a result, we rank every document in the collection each time we run a ranking. That means we can continuously integrate new judgments from the review team into the algorithm as their work progresses.

Because our engine can rank all of the documents, there is no need to use a control set for training. Training success is based on ranking fluctuations across the entire set, rather than a limited set of randomly selected documents. When document rankings stop changing, the classification/ranking algorithm has settled, at least until new documents arrive.

This solves the problem of rolling uploads. Because we don't use a control set for training, we can integrate rolling document uploads into the review process. When you add new documents to the mix, they simply join in the ranking process and become part of the review. Depending on whether the new documents are different from or similar to documents already in the population, they may integrate into the rankings immediately or instead fall to the bottom. In the latter case, we pull samples from the new documents through our contextual diversity algorithm for review. As the new documents are reviewed, they integrate further into the ranking.

You can see an illustration of the initial fluctuation of new documents in this example from Insight Predict. The initial review moved forward until the classification/ranking algorithm was pretty well trained. New documents were added to the collection midway through the review process. Initially the population rankings fluctuated to accommodate the newcomers. Then, as representative samples were identified and reviewed, the population settled down to stability.
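As an illustration of judging stability by how much the full-population ranking moves between rounds, rather than by a control set, here is a small Python sketch. The mean-absolute-rank-shift metric and the tolerance value are assumptions chosen for clarity; they are not Catalyst's actual stability measure.

```python
# Illustrative stability check: compare two full-collection rankings and
# measure how far documents moved, on average, between ranking runs.
import numpy as np

def rank_fluctuation(prev_scores, new_scores):
    """Mean absolute change in rank position across the whole collection."""
    prev_rank = np.argsort(np.argsort(-np.asarray(prev_scores)))
    new_rank = np.argsort(np.argsort(-np.asarray(new_scores)))
    return np.abs(prev_rank - new_rank).mean()

def is_stable(prev_scores, new_scores, tolerance=10.0):
    # Rankings that barely move between rounds suggest the algorithm has
    # settled, at least until new documents arrive.
    return rank_fluctuation(prev_scores, new_scores) < tolerance
```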
For more on contextual diversity, see below or our recent article comparing contextual diversity with random sampling: Comparing Active Learning to Random Sampling Using Zipf's Law to Evaluate Which Is More Effective for TAR.

Continuous Active Learning

There are two aspects to continuous active learning. The first is that the process is continuous: training doesn't stop until the review finishes. The second is that the training is active: the computer feeds documents to the review team with the goal of making the review as efficient as possible (minimizing the total cost of review). Although our software will support a TAR 1.0 process, we have long advocated continuous learning as the better alternative.

Simply put, as the reviewers progress through documents in our system, we feed their judgments back to the system to be used as seeds in the next ranking process. Then, when the reviewers ask for a new batch, the documents are presented based on the latest completed ranking. To the extent the ranking has improved by virtue of the additional review judgments, they receive better documents than they otherwise would have, had the learning stopped after one bite at the apple.

[Figure: Catalyst continuous learning workflow - Collect/Receive, ECA/Analysis, Train/Review and Review/Train with continuous ranking in Insight, Test, Pre-Production, Output, Post-Production and more]

In effect, the reviewers become the trainers and the trainers become reviewers. Training is review, we say, and review is training. Indeed, review team training is all but required for a continuous learning process. It makes little sense to expect a senior attorney to do the entire review, which may involve hundreds of thousands of documents. Rather, SMEs should focus on finding (through search or otherwise) relevant documents to help move the training forward as quickly as possible. They can also monitor the review team, using our QC algorithm, which is designed to surface documents likely to have been improperly tagged. We have shown that this process is as effective as using senior lawyers to do the training and can be done at a lower cost. And, like CAL itself, our QC algorithm also continues to learn as the review progresses.
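Here is a hedged sketch of the continuous active learning loop just described: reviewer judgments flow back as training seeds, the entire collection is re-ranked, and the next batch is drawn from the top of the latest ranking. The classifier, batch size and stopping rule are placeholders, not the Insight Predict implementation.

```python
# Illustrative CAL loop (placeholder names): every completed batch becomes
# training data, the whole collection is re-ranked, and the next batch comes
# from the top of the new ranking (relevance feedback).
from sklearn.linear_model import LogisticRegression

def cal_review(X_all, seed_idx, seed_labels, review_batch, batch_size=200,
               target_relevant=None):
    labeled_idx, labels = list(seed_idx), list(seed_labels)
    found_relevant = sum(labels)
    while True:
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_all[labeled_idx], labels)

        # Rank every document, every time; take the highest-ranked
        # unreviewed documents as the next batch.
        scores = clf.predict_proba(X_all)[:, 1]
        reviewed = set(labeled_idx)
        unreviewed = [i for i in range(X_all.shape[0]) if i not in reviewed]
        unreviewed.sort(key=lambda i: scores[i], reverse=True)
        batch = unreviewed[:batch_size]
        if not batch:
            break

        # Reviewers tag the batch; their judgments become new seeds.
        batch_labels = review_batch(batch)
        labeled_idx += batch
        labels += list(batch_labels)
        found_relevant += sum(batch_labels)

        if target_relevant and found_relevant >= target_relevant:
            break   # desired number of relevant documents reached
    return labeled_idx, labels
```

The key structural difference from the TAR 1.0 sketch earlier is that training never "closes": the fit-rank-batch cycle repeats until the review itself ends.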
What Are the Savings?

Grossman and Cormack quantified the differences between the TAR 1.0 and 2.0 protocols by measuring the number of documents a team would need to review to reach a specific recall rate. Here, for example, is a chart showing the difference in the number of documents a team would have to review to achieve a 75% level of recall, comparing continuous active learning and simple passive learning. (From Grossman, M. and Cormack, G., Comments on The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 7 Federal Courts Law Review 286, 297 (2014).)

The test results showed that the review team would have to look at substantially more documents using the SPL (random seeds) protocol than CAL. For matter 201, the difference would be 50,000 documents. At $2 a document for review and QC, that would be a savings of $100,000. For matter 203, which is the extreme case here, the difference would be 93,000 documents. The savings from using CAL, based on $2 a document, would be $186,000.

Here is another chart that compares all three protocols over the same test set. In this case Grossman and Cormack varied the size of the training sets for SAL and SPL to see what impact it might have on the review numbers. You can see that the results for both of the TAR 1.0 protocols improve with additional training, but at the cost of requiring the SME to look at as many as 8,000 documents before beginning training. And, even against what Grossman and Cormack call an ideal training set for SAL and SPL (which cannot be identified in advance), CAL beat or matched the results in every case, often by a substantial margin.
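The dollar figures above are straightforward multiplication; this tiny example simply reproduces the arithmetic (the $2-per-document rate comes from the text).

```python
# Reproducing the savings arithmetic above: documents avoided x cost per doc.
def review_savings(docs_avoided, cost_per_doc=2.0):
    return docs_avoided * cost_per_doc

print(review_savings(50_000))   # matter 201: $100,000
print(review_savings(93_000))   # matter 203: $186,000
```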
We presented our research on the benefits of continuous active learning as well. Like Grossman and Cormack, we found there were substantial savings to be had by continuing the training through the entire review. You can see it in this example:

To read about our research on this issue and the savings that can be achieved by a continuous learning process, see:

- Predictive Ranking (TAR) for Smart People
- The Five Myths of Technology Assisted Review, Revisited
- TAR 2.0: Continuous Ranking Is One Bite at the Apple Really Enough?
- In TAR, Wrong Decisions Can Lead to the Right Documents
- 5 Myths About Technology-Assisted Review (Law Technology News)

What About Review Bias?

Grossman and Cormack constructed their CAL protocol by starting with seeds found through keyword search. They then presented documents to reviewers based on relevance feedback. Relevance feedback simply means that the system feeds the highest-ranked documents to the reviewers for their judgment. Of course, what is highly ranked depends on what you tagged before.

Some argue that this approach opens the door to bias. If your ranking is based on documents you found through keyword search, what about other relevant documents you didn't find? You don't know what you don't know, they say. Random selection of training seeds raises the chance of finding relevant documents that are different from the ones you have already found. Right?
Actually, everyone seems to agree on this point. Grossman and Cormack point out that they used relevance feedback because they wanted to keep their testing methods simple and reproducible. As they note in their conclusion:

There is no reason to presume that the CAL results described here represent the best that can be achieved. Any number of feature engineering methods, learning algorithms, training protocols, and search strategies might yield substantive improvements in the future.

In an excellent four-part series (which starts here), Ralph Losey suggests using a multi-modal approach to combat fears of bias in the training process. From private discussions with the authors, we know that Grossman and Cormack also use added techniques to improve the learning process for their system.

Contextual Diversity

We combat bias in our active learning process by including contextual diversity samples as part of our active training protocol. Contextual diversity uses an algorithm we developed to present the reviewer with documents that are very different from what the review team has already seen. We wrote about it extensively in a recent blog post.

Our ability to do contextual diversity sampling comes from the fact that our DRE engine ranks all of the documents every time. Because we rank all the documents, we know something about the nature of the documents already seen by the reviewers and the documents not yet reviewed. The contextual diversity algorithm essentially clusters the unseen documents and then presents a representative sample from each group as the review progresses. And, like our relevance and QC algorithms, contextual diversity also keeps learning and improving as the review progresses.

The picture above, from our earlier blog post on this subject, illustrates our approach. Each yellow circle indicates a contextual cluster, and the red dot in each circle indicates the most representative sample document the algorithm can find. The next image shows a side-by-side comparison of the coverage achieved using random sampling and contextual diversity sampling. You can see that contextual diversity sampling achieved much broader coverage.

Ultimately, the review teams get a mix of documents selected through relevance feedback (the method Grossman and Cormack tested) and those selected for their contextual diversity. Doing so helps better train our algorithm and combats the possibility of unwanted bias.
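A rough sketch of the clustering idea behind contextual diversity follows. KMeans and nearest-to-centroid selection are stand-ins chosen for illustration; Catalyst's actual contextual diversity algorithm is not published here.

```python
# Illustrative contextual-diversity sampling (KMeans is a stand-in): cluster
# the documents not yet reviewed and surface the document closest to each
# cluster center as its representative sample.
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def contextual_diversity_sample(X_unseen, unseen_indices, n_clusters=10):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(X_unseen)
    # For each cluster center, pick the nearest (most representative) document.
    rep_rows, _ = pairwise_distances_argmin_min(km.cluster_centers_, X_unseen)
    return [unseen_indices[r] for r in rep_rows]
```

Mixing these representatives into the relevance-feedback batches is what gives reviewers documents "very different from what the review team has already seen."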
The Continuous Learning Process

Backed by our continuous ranking engine and contextual diversity, we can support a simple and flexible TAR 2.0 process for training and review. Here are the basic steps:

- Start by finding as many relevant documents as possible. Feed them to the system for initial ranking. (Actually, you could start with no relevant documents and build off of the review team's work. Or, start with contextual diversity sampling to get a feel for the different types of documents in the population.)
- Let the review team begin review. They get an automated mix including highly relevant documents and others selected by the computer based on contextual diversity and randomness to avoid bias. Our mix is a trade secret, but most are highly ranked documents, to maximize review-team efficiency over the course of the entire review.
- As the review progresses, QC a small percentage of the documents at the senior attorney's leisure. Our QC algorithm will present the documents that are most likely mistagged.
- Continue until you reach the desired recall rate. Track your progress through our progress chart (shown above) and an occasional systematic sample, which will generate a yield curve. (A rough recall-estimation sketch appears after the comparison table below.)

The process is flexible and can progress in almost any way you desire. You can start with tens of thousands of tagged documents if you have them, or start with just a few or none at all. Just let the review team get going either way, and let the system balance the mix of documents included in the dynamic, continuously iterative review queue. As they finish batches, the ranking engine keeps getting smarter. If you later find relevant documents through whatever means, simply add them. It just doesn't matter when your goal is to find relevant documents for review rather than to train a classifier.

This TAR 2.0 process works well with low-richness collections because you are encouraged to start the training with any relevant documents you can find. As review progresses, more relevant documents rise to the top of the rankings, which means your trial team can get up to speed more quickly. It also works well for ECA and third-party productions, where you need to get up to speed quickly.

Key Differences Between TAR 1.0 and 2.0

TAR 1.0:
1. One-Time Training before assigning documents for review. Does not allow training or learning past the initial training.
2. Trains Against a Small Reference Set, limiting the ability to handle rolling uploads; assumes all documents are received before ranking. Stability is based on training against the reference set.
3. Subject Matter Expert Handles All Training. Review team judgments are not used to further train the system.

TAR 2.0:
1. Continuous Active Learning allows the algorithm to keep improving over the course of the review, improving savings and speed.
2. Ranks Every Document Every Time, which allows rolling uploads. Does not use a reference set but rather measures fluctuations across all documents to determine stability.
3. Review Teams Train as They Review, working alongside the expert for maximum effectiveness. The SME focuses on finding relevant documents and QCing review team judgments.
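As promised above, here is a rough sketch of tracking progress toward a recall target from a systematic sample. The formula (recall is roughly relevant documents found divided by the estimated total relevant, where the total is the sample's richness times the collection size) and the numbers are illustrative, not the product's actual validation math.

```python
# Illustrative recall tracking: estimate total relevant documents from a
# reviewed sample's richness, then compare against relevant documents found.
def estimated_recall(relevant_found, sample_size, sample_relevant, collection_size):
    richness = sample_relevant / sample_size            # estimated prevalence
    est_total_relevant = richness * collection_size     # estimated relevant docs
    return relevant_found / est_total_relevant

# e.g. 12,000 relevant found; a 1,000-doc sample with 150 relevant in a
# 100,000-doc collection implies ~15,000 relevant, so recall is about 0.80.
print(round(estimated_recall(12_000, 1_000, 150, 100_000), 2))
```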
Conclusion

As Grossman and Cormack point out: This study highlights an alternative approach, continuous active learning with relevance feedback, that demonstrates superior performance, while avoiding certain problems associated with uncertainty sampling and passive learning. CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

If your TAR product is integrated into your review engine and supports continuous ranking, there is little doubt they are right. Keep learning, get smarter and save more. That is a winning combination.

About the Author

John Tredennick is a nationally known trial lawyer and longtime litigation partner at Holland & Hart. He founded Catalyst in 2000 and is responsible for its overall direction, voice and vision. Well before founding Catalyst, John was a pioneer in the field of legal technology. He was editor-in-chief of the multi-author, two-book series Winning With Computers: Trial Practice in the Twenty-First Century (ABA Press 1990, 1991). Both were ABA best sellers focusing on using computers in litigation. At the same time, he wrote How to Prepare for, Take and Use a Deposition at Trial (James Publishing 1990), which he and his co-author continued to supplement for several years. He also wrote Lawyer's Guide to Spreadsheets (Glasser Publishing 2000) and Lawyer's Guide to Microsoft Excel 2007 (ABA Press 2009).

John is the former chair of the ABA's Law Practice Management Section. For many years, he was editor-in-chief of the ABA's Law Practice Management magazine, a monthly publication focusing on legal technology and law office management. More recently, he founded and edited Law Practice Today, a monthly ABA webzine that focuses on legal technology and management. Over two decades, John has written scores of articles on legal technology and spoken on legal technology to audiences on four of the five continents.

Learn more at catalystsecure.com/resources