
The Methodology of Excellence

For nearly three decades we have refined the science of editorial validation. Every item, every score, every percentile ranking is the product of a process — not a guess.

130,000+
Candidate attempts in benchmark dataset
21,348+
Satisfied client organisations
10,000+
Unique items in the question bank
1998
Continuously refined since

The Item Bank

Our question bank contains over 10,000 unique items, developed and validated over 28 years of continuous use. Each item is tagged by difficulty, skill domain, and industry relevance. Items are rotated to prevent pattern recognition and updated on a rolling basis as language evolves.

"Every question in our bank has been answered by at least 500 real candidates before it enters live rotation. We retire items that don't discriminate."
10,000+
Unique items across all test products
500+
Minimum candidate attempts before live rotation
28 yrs
Continuous refinement since 1998

Test Integrity

Test security is a prerequisite for valid results. Our anti-cheat system operates at the session level, not the question level — monitoring the environment, not guessing at intent.

Fullscreen enforcement
The test must remain in fullscreen. Exiting fullscreen triggers a 60-second grace period; if fullscreen is not restored in time, the session is terminated.
Tab-switch detection
Browser tab changes are logged as session events. Repeated switches are flagged in the client report.
Clipboard block
Copy/paste is disabled during the test session. All typing is direct.
Server-side timer
Time is tracked server-side, not client-side. Browser manipulation cannot extend time limits.
Candidate watermarking
Every session screen carries an invisible watermark tied to the candidate's session ID.
Immutable session log
All events are written to an append-only log. Neither client nor candidate can alter the record.
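The grace-period and append-only logging rules above can be sketched in a few lines. This is an illustrative sketch, not the production implementation; the class name, event names, and method signatures are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

GRACE_PERIOD_SECONDS = 60  # per the fullscreen policy above


@dataclass
class SessionMonitor:
    """Hypothetical session-level monitor: events are appended, never edited."""
    session_id: str
    _log: list = field(default_factory=list)
    _fullscreen_exit_at: Optional[float] = None
    terminated: bool = False

    def record(self, event: str, at: float) -> None:
        # Append-only: the log only grows; there is no update or delete path.
        self._log.append((at, event))

    def on_fullscreen_exit(self, at: float) -> None:
        self.record("fullscreen_exit", at)
        self._fullscreen_exit_at = at

    def on_fullscreen_restore(self, at: float) -> None:
        self.record("fullscreen_restore", at)
        self._fullscreen_exit_at = None

    def tick(self, now: float) -> None:
        # Terminate if fullscreen was not restored within the grace period.
        if (self._fullscreen_exit_at is not None
                and now - self._fullscreen_exit_at > GRACE_PERIOD_SECONDS):
            self.record("session_terminated", now)
            self.terminated = True

    def events(self) -> tuple:
        # Read-only view of the immutable log.
        return tuple(self._log)
```

Keeping the log append-only means neither client nor candidate code paths ever need (or get) a mutation API, which is what makes the session record defensible after the fact.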

Percentile Analytics

Raw scores are useful. Percentile rankings are actionable. Every result is benchmarked against our full dataset of 130,000+ candidate attempts, segmented by product, difficulty level, and industry where applicable.

Percentile rankings are recomputed nightly via a scheduled batch job and cached in Redis for sub-millisecond lookup. Demo results are excluded from the benchmark dataset.
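A minimal sketch of that nightly recompute, assuming attempts are segmented by product and difficulty as described; the function names and the tuple shape of an attempt are assumptions for the example, and a plain dictionary stands in for the Redis cache.

```python
from bisect import bisect_left
from collections import defaultdict


def build_percentile_tables(attempts):
    """Nightly batch (sketch): group benchmark scores by (product, difficulty)
    and keep each segment's scores sorted for fast percentile lookup.
    `attempts` is an iterable of (product, difficulty, score, is_demo)."""
    tables = defaultdict(list)
    for product, difficulty, score, is_demo in attempts:
        if is_demo:  # demo results are excluded from the benchmark dataset
            continue
        tables[(product, difficulty)].append(score)
    # The sorted tables would be serialised and written to the cache here.
    return {segment: sorted(scores) for segment, scores in tables.items()}


def percentile(tables, product, difficulty, score):
    """Percent of benchmark attempts in the segment scoring below `score`."""
    scores = tables[(product, difficulty)]
    return 100.0 * bisect_left(scores, score) / len(scores)
```

Because each segment's scores are pre-sorted at batch time, a lookup is a single binary search over cached data, which is what makes sub-millisecond reads realistic.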

"A 75% score means different things at different difficulty levels. Our percentile data tells you exactly where that score sits relative to everyone else who has ever taken the same test."

The Intelligence Engine

The IVT's industry-specific question bank is maintained and expanded using AI-assisted generation, with every term and question reviewed by the EditingTests.com editorial team before entering live rotation. No AI-generated content is published to the live platform without explicit human review and approval.

The editing and proofreading tests use real-world passages drawn from published professional texts — legal reports, academic journals, corporate communications, and trade publications. Passages are selected to reflect the authentic difficulty of professional editorial work, not artificial test conditions.

Human Audit Layer

Automated scoring handles the Grammar Test, IVT, and MWT. The Writing Test is always reviewed by a human assessor. The Editing Test triggers human review when a candidate's score falls within ±10% of the pass threshold.
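The routing rules above can be expressed as a single decision function. This is an illustrative sketch: the test identifiers are assumptions, and the stated "±10%" is read here as ten points on the score scale, which may differ from the production definition.

```python
def needs_human_review(test: str, score: float, pass_threshold: float) -> bool:
    """Hypothetical routing rule mirroring the audit policy described above."""
    if test == "writing":
        return True  # the Writing Test is always human-reviewed
    if test == "editing":
        # Human review when the score falls within 10 points of the
        # pass threshold (assumed reading of the stated "±10%").
        return abs(score - pass_threshold) <= 10
    return False  # Grammar Test, IVT, MWT: fully automated scoring
```

Centralising the rule in one function keeps the automated and human paths auditable from a single place.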

All human-reviewed results are covered by a 12-hour SLA from submission, and clients are notified as soon as review is complete.

Start assessing your editorial talent today.

Create Free Account
"Exactly the benchmark we needed — defensible, fast, and trusted by our legal team."

— HR Director, International Law Firm