JURYLAB

Simple Process, Powerful Results

1. Case Consultation

Share your case materials with us. We'll discuss key questions, liability issues, and damages considerations.

2. Survey Development

We create a comprehensive case presentation with plaintiff and defense perspectives, key evidence, and strategic questions.

3. Data Collection

Your case is presented to 300+ qualified mock jurors matched to your venue's demographics. Results are collected in 24-48 hours (see the sample-size sketch after these steps).

4. Analysis & Report

Receive detailed insights on win probability, optimal damages requests, juror demographics, and persuasive arguments.
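
For context on the sample size in step 3: with 300 respondents, a yes/no result (say, the share of jurors finding liability) carries roughly a plus-or-minus 5.7-point margin of error at 95% confidence. Here is a minimal Python sketch of that textbook calculation, not JuryLab tooling:

```python
import math

def moe_95(n: int, p: float = 0.5) -> float:
    """95% margin of error for a proportion estimated from n respondents.
    Uses the normal approximation; p = 0.5 is the worst case."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(f"n=300: +/-{moe_95(300):.1%}")  # n=300: +/-5.7%
```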


The Amazon MTurk Problem: Are Gig Workers Really Your Jurors?
Several jury research firms recruit their “jurors” from Amazon Mechanical Turk (MTurk) and similar gig-economy platforms. These are the same people who label photos for AI training, transcribe audio clips, and complete surveys for $3-5 per task. They are professional survey-takers, not representative jurors.

"You Get What You Pay For: An Empirical Examination of the Use of MTurk in Legal Scholarship" 
.https://cdn.vanderbilt.edu/vu-wp0/wp-content/uploads/sites/278/2019/10/11144219/You-Get-What-You-Pay-For-An-Empirical-Examination-of-the-Use-of-MTurk-in-Legal-Scholarship.pdf
The data quality crisis is well-documented:

Bot and automation fraud: A peer-reviewed study in Perspectives on Psychological Science found that one researcher’s MTurk sample was only 2.6% valid — 97.4% of responses came from bots or fraudulent accounts. Separate research estimates 33-46% of MTurk tasks are now completed by automated scripts, not humans.

Source: Webb, M.A. & Tangney, J.P. (2024). "Too Good to Be True: Bots and Bad Data From Mechanical Turk." Perspectives on Psychological Science, 19(6), 887-890. The researchers found that only 14 of 529 responses (2.6%) were valid human data. https://journals.sagepub.com/doi/10.1177/17456916221120027

Source: Journal of Marketing Analytics (Sept. 2025). A climate-change survey fielded on MTurk found that 50% of workers failed to complete the survey, and that once bot contamination was accounted for, the true cost per valid human response was $6.27 against the $1 incentive.
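
The paper's exact cost accounting isn't reproduced here, but the arithmetic shape is simple: if only a fraction of paid completions survive bot screening, the effective price of a usable response is the incentive divided by that fraction. A hedged sketch (the ~16% valid fraction below is back-solved to match the reported figure, not taken from the study):

```python
def cost_per_valid_response(incentive: float, valid_fraction: float) -> float:
    """Effective cost of one usable human response when only a fraction
    of paid completions survive bot/fraud screening."""
    return incentive / valid_fraction

# Illustrative only: a $1 incentive with ~16% of completions surviving
# screening works out to about $6.27 per valid response.
print(f"${cost_per_valid_response(1.00, 0.1595):.2f}")  # $6.27
```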

Location fraud: MTurk workers routinely use VPNs and server farms to fake their geographic location. When a firm claims to be recruiting "jurors from your trial venue," it is relying on self-reported location with no verification. CloudResearch documented rampant server-farm activity on MTurk, with workers masking their true locations.

Source: CloudResearch (formerly TurkPrime) published two major blog investigations: "After the Bot Scare" and "Concerns About Bots on Mechanical Turk" — documenting that workers use server farms and VPNs to fake US-based locations, and that their tools had to be built specifically to block these fraudulent geolocations.
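
To make the verification gap concrete, here is a minimal sketch of the kind of check a careful platform can run and MTurk does not: comparing the self-reported venue against IP-derived location and flagging datacenter/VPN egress. The Respondent fields and the upstream geolocation/ASN lookup are hypothetical placeholders, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    claimed_state: str      # self-reported trial venue, e.g. "TX"
    ip_state: str           # state resolved from the IP (hypothetical lookup)
    ip_is_datacenter: bool  # ASN classified as a hosting/VPN provider

def location_flags(r: Respondent) -> list[str]:
    """Flag the two fraud signals described above: datacenter/VPN
    egress and a claimed-venue/IP-geolocation mismatch."""
    flags = []
    if r.ip_is_datacenter:
        flags.append("datacenter/VPN exit node")
    if r.claimed_state != r.ip_state:
        flags.append("claimed venue does not match IP geolocation")
    return flags

print(location_flags(Respondent("TX", "VA", ip_is_datacenter=True)))
```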

Repeat participants: The active MTurk worker pool has shrunk to approximately 100,000 workers. The same people take jury research surveys for multiple firms, across multiple cases, month after month. These are not fresh, representative community members — they’re professional respondents who know how to game attention checks.
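
A simple cross-study roster check makes that overlap visible. The worker IDs and rosters below are hypothetical; the point is only that repeat respondents are trivially detectable when anyone bothers to look:

```python
from collections import Counter

def repeat_workers(study_rosters: dict[str, set[str]]) -> set[str]:
    """Worker IDs appearing in more than one study's roster --
    the 'professional respondent' overlap described above."""
    counts = Counter(w for roster in study_rosters.values() for w in roster)
    return {w for w, c in counts.items() if c > 1}

rosters = {  # hypothetical anonymized rosters from three mock-jury studies
    "case_A": {"w1", "w2", "w3"},
    "case_B": {"w2", "w4"},
    "case_C": {"w2", "w3", "w5"},
}
print(repeat_workers(rosters))  # {'w2', 'w3'} (order may vary)
```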

No demographic verification: MTurk workers self-report their age, race, income, education, and location. There is zero verification. Premium demographic filters have been shown to be unreliable — 22% of workers in one study reported ages outside the filtered range, including impossible ages like 0, 5, and 100.

Source: Webb, M.A. & Tangney, J.P. (2024). "Too Good to Be True: Bots and Bad Data From Mechanical Turk." Perspectives on Psychological Science, 19(6), 887-890. Despite paying for premium MTurk age filters (18-25), 22% of respondents reported ages outside that range, including impossible values like 0, 5, and 100.
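
The out-of-range check that surfaced the 22% figure is a one-liner. A sketch with made-up responses (the ages below are hypothetical, chosen to mirror the impossible values the study reported):

```python
def out_of_range_rate(ages: list[int], lo: int = 18, hi: int = 25) -> float:
    """Share of self-reported ages falling outside a requested
    demographic filter."""
    return sum(1 for a in ages if not lo <= a <= hi) / len(ages)

ages = [19, 22, 0, 24, 5, 21, 100, 23, 18, 25]  # hypothetical panel data
print(f"{out_of_range_rate(ages):.0%} outside the paid-for 18-25 filter")  # 30%
```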

Declining platform: Independent estimates peg the active MTurk worker pool at just 100,000 and dropping. Response rates are declining, worker quality is deteriorating, and the platform offers no meaningful protections for research integrity.

Source: Shimoni, H. & Axelrod, V. (2025). "Assessing the quality and reliability of the Amazon Mechanical Turk (MTurk) data in 2024." Royal Society Open Science, 12(7). Found that non-Master workers showed low reliability, heavy straightlining (identical answers across items), and high attention-check failure rates.

When a firm charges you $15,000 for a "big data jury study" but sources its respondents from the same pool that completes penny-per-click tasks for AI training companies, you're not getting jury research. You're getting a survey of gig workers.
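
A footnote on "straightlining": it is one of the easiest low-effort patterns to measure, which makes its prevalence on MTurk all the more telling. A minimal sketch with invented Likert data:

```python
def straightlining_rate(responses: list[list[int]]) -> float:
    """Fraction of respondents giving an identical answer to every
    item in a multi-item battery."""
    return sum(1 for r in responses if len(set(r)) == 1) / len(responses)

# Hypothetical 5-item Likert batteries from four respondents:
panel = [[3, 3, 3, 3, 3], [1, 4, 2, 5, 3], [5, 5, 5, 5, 5], [2, 3, 2, 4, 1]]
print(f"{straightlining_rate(panel):.0%} straightliners")  # 50%
```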