LAWTECH OPEN
INNOVATION LAB
Benchmarking Legal AI: Building the Evidence Base for Safe, Confident Adoption
A groundbreaking sector-led Legal AI testbed giving vendors and law firms unmatched access to real-world evaluation.
LawTech Open Innovation Lab (LOIL) is a collaborative, sector-led programme run by SuperTech WM. This year's programme brings together law firms, academia, regulators and technology providers to benchmark Legal AI tools against real-world legal tasks.
Apply to join ‘LOIL 2.0 Benchmarking AI’, a structured, use-case-driven environment that enables law firms to safely test, compare, interrogate and understand LawTech tools. Explore what works and what doesn’t in practice, while generating clear signals to the market about what really matters, in order to drive adoption, trust and tool use.
Where Legal AI Proves Itself - Real Firms. Real Workflows. Real Impact.
This is not a procurement exercise or a vendor showcase. It’s an 8‑week, open‑innovation experiment where ProfTech vendors and legal teams work side‑by‑side to evaluate Legal AI in real workflows, cut through noise and duplication, and move the sector towards doing innovation right. Read the press release here.
See below for use case themes and application requirements.
APPLICATIONS CLOSE: 27th February
Find out more and meet the programme leads with our PRE-APPLICATION WEBINAR sessions. Register below:
Selected ProfTech Vendors will have the opportunity to engage with Use Case benchmarking teams from:
BENEFITS FOR SME PARTICIPANTS
Work Directly with at least 3 Law Firms; Engage with leading legal professionals. More to be announced!
Test & Validate your Technology against specially designed frameworks to give confidence; Demonstrate how your solution helps law firms.
Shape the Future of Lawtech; Influence the way law firms adopt new technologies.
Gain Market Credibility; Strengthen your position as a trusted lawtech provider.
Accelerate your market fit in the mid-tier legal services market.
Enrich your Network; Join a thriving West Midlands Professional Services community of legal innovators, leading law firms, academics and business support.
Collaborate & Learn; Peer to peer knowledge exchange.
SHAPE THE FUTURE OF LEGAL AI ADOPTION
INNOVATION THROUGH COLLABORATION TO SUPPORT THE FUTURE OF LEGAL TECH
Legal AI is full of bold claims, but the sector needs proof, not promises.
LOIL: Benchmarking Legal AI is designed to move beyond demos and marketing to repeatable testing in real workflows, so firms can adopt faster and more safely, and vendors can build what the market will actually trust and use.
We will benchmark tools across scenarios and measure:
Time saved vs effort required
Output quality / accuracy
Supervision burden (junior vs senior)
Hallucination and risk controls
Integration friction
Scalability and cost practicality
WHO SHOULD APPLY
We invite applications from vendors that:
Have a product that is ready to be used in a law firm context
Can provide product access to each law firm for the agreed programme period
Can provide support to firms during the benchmarking period (onboarding + handling queries from benchmarking teams)
Law firms are adopting AI tools fast, but without shared benchmarks or consistent evaluation methods across the many available tools and products. Pilots are costly in time and attention, and many firms are duplicating effort without a shared evidence base. This SuperTech LOIL programme creates a rare opportunity for cross-firm, real-task evaluation with academic input and regulatory awareness.
USE CASE THEMES
The challenges listed below have been workshopped and chosen by our trusted legal partners and innovation team. If you have an innovative solution that fits one or more of them, we want to hear from you. SMEs wishing to apply can select one or more use case challenges when applying.
APPLICATIONS CLOSE: 27th February
-
How might we enable law firms to review large volumes of documents more efficiently by identifying key clauses, red flags, and deviations at scale, while maintaining legal accuracy, explainability, and effective senior oversight?
Background:
Law firms regularly review large sets of similar documents such as leases, warranties, NDAs, litigation bundles, and due diligence materials. This work is time-intensive, highly repetitive, and often delegated to junior lawyers under tight supervision. We want to assess where automation genuinely adds value, where risks remain, and how reliably outputs can be supervised and trusted.
We are interested in:
Solutions that use AI to support first-pass review, clause extraction, and red-flag identification, producing structured outputs that can be efficiently checked and validated by lawyers.
Why this matters to firms:
Because document review consumes significant junior and senior capacity, and firms need evidence-based clarity on where automation reduces cost and risk rather than introducing new ones.
-
How might we support faster and more reliable legal research by using AI to structure sources, surface relevant authorities, and reduce time spent on repetitive searching, without introducing hallucination or trust risks?
Background:
Legal research remains a core but resource-intensive activity. Lawyers often spend significant time searching, filtering, and cross-checking sources. Concerns around accuracy, traceability, and hallucinations continue to limit adoption.
We are interested in:
AI-enabled research approaches that prioritise source transparency, reliability, and relevance, while keeping lawyers in control of judgment and conclusions.
Why this matters to firms:
Because unreliable research creates professional risk, and firms need trusted ways to increase speed without compromising quality.
LOIL 2.0 Benchmark Output:
Source traceability scores, hallucination incidence rates, and research-time savings by task type.
-
How might we enable senior lawyers to rely on AI-generated summaries of long documents and bundles for risk assessment and decision-making, rather than generic or superficial abstracts?
Background:
Senior lawyers need fast, accurate overviews of complex documents. Existing summarisation tools often fail to surface legally relevant risks or omit critical issues.
We are interested in:
Solutions that generate structured, decision-ready summaries with clear links to source material and transparent confidence signals.
Why this matters to firms:
Because senior time is scarce and expensive, and poor summaries can lead to missed risks or unnecessary rework.
LOIL 2.0 Benchmark Output:
Decision-usefulness ratings, omission rates for key risks, and senior validation time per summary.
-
How might we assist junior lawyers in drafting structured legal briefs and internal memos that improve quality, consistency, and learning outcomes, while reducing senior rewrite time and supervision burden?
Background:
Juniors often struggle with structure and relevance, leading to heavy senior rewrites. Firms are increasingly concerned about sustainable supervision and training quality.
We are interested in:
AI tools that support structured drafting and produce outputs that are easier to review, validate, and teach from.
Why this matters to firms:
Because drafting inefficiencies directly affect leverage, training outcomes, and supervision capacity.
LOIL 2.0 Benchmark Output:
Senior rewrite reduction rates, structural quality scores, and junior learning impact indicators.
-
How might we enable contract drafting that adheres to client-specific playbooks and constraints, while clearly flagging deviations and preserving lawyer control and accountability?
Background:
Institutional clients require strict adherence to agreed standards. Automation is attractive, but only where deviations, provenance, and accountability are fully transparent.
We are interested in:
AI-supported drafting that operates within defined constraints and makes all deviations explicit and reviewable.
Why this matters to firms:
Because client trust, regulatory compliance, and professional liability depend on knowing exactly where and why documents depart from playbooks.
LOIL 2.0 Benchmark Output:
Playbook compliance rates, deviation transparency scores, and senior validation effort per draft.
-
How might we help lawyers draft routine legal correspondence more efficiently, ensuring appropriate tone (i.e. firm style), accuracy, and risk awareness, while keeping lawyers firmly in control of final communications?
Background:
Routine correspondence consumes significant time despite being relatively low risk when structured correctly. Tone or clarity failures can still create reputational issues.
We are interested in:
AI solutions that assist with first-draft correspondence, adapt tone to context, and surface risk areas clearly.
Why this matters to firms:
Because communication quality directly affects client trust, while inefficient drafting quietly erodes margins.
LOIL 2.0 Benchmark Output:
Time-to-first-draft metrics, tone appropriateness ratings, and lawyer edit distance scores.
-
How might we increase pro bono and legal clinic capacity by using AI to support form completion, triage, attendance notes, and draft advice, without increasing risk and while improving accessibility and supervision?
Background:
Pro bono clinics face growing demand but limited capacity. Supervising lawyers spend disproportionate time reviewing junior or student work.
We are interested in:
AI-enabled workflows that allow juniors or students to contribute effectively, while supervisors retain control and risk is reduced rather than increased.
Why this matters to firms:
Because pro bono delivery, regulatory expectations, ESG commitments, and talent development increasingly intersect.
LOIL 2.0 Benchmark Output:
Capacity uplift metrics, supervision time savings, accessibility scores, and error rates.
PROGRAMME KEY DATES
19th or 26th February - Pre-Application Webinars
27th February - Application Deadline
W/C 9th March - Virtual Participant Onboarding Session (approximately 90 mins)
20th March - In Person Programme Launch Day Event, Birmingham City Centre
7th - 17th April - Virtual Deep Dive Sessions with Each Participating Law Firm
6th May - In Person Programme Demo Day, Birmingham City Centre
IN COLLABORATION WITH