
Exam fundamentals · 17 min read

UKMLA Pass Mark Explained: How Scoring Works

The modified Angoff method explained, historical UKMLA and PLAB pass marks, realistic mock-score targets, retake rules, and how Q-bank accuracy predicts exam performance.

"What's the UKMLA pass mark?" is the most-asked question in every UKMLA study group, every Reddit thread, every mid-revision panic WhatsApp. And the honest answer — "it depends on your sitting" — usually makes the anxiety worse rather than better.

That's not the GMC being evasive. It's how standard-setting works for high-stakes medical assessments. The pass mark should vary between sittings, because the items vary between sittings, and the point is to hold the standard of competence constant rather than the raw percentage.

This post explains exactly how UKMLA scoring works — the modified Angoff method behind the numbers, what the pass mark has actually been in real sittings, what score to aim for on your Q-bank mocks, how retakes work, and why candidates consistently scoring 75%+ on realistic mocks almost never fail.

By the end, the pass mark won't be a mystery. It'll be a target you know how to hit.

Table of contents

  1. Why high-stakes exams don't have fixed pass marks
  2. The modified Angoff method, step by step
  3. Historical UKMLA and PLAB pass marks
  4. Why the mark varies across sittings
  5. What score to realistically aim for
  6. UKMLA vs PLAB pass-mark trajectory
  7. Evidence on whether UKMLA has got harder
  8. What happens if you fall below the mark
  9. Results timeline after your sitting
  10. Pass-mark myths and misconceptions
  11. Scoring breakdown by clinical domain
  12. How Q-bank accuracy predicts exam performance
  13. FAQ

1. Why high-stakes exams don't have fixed pass marks

Imagine two AKT sittings. Sitting A happens in November — the item bank that month includes an unusually hard set of cardiology questions with several tricky pharmacology distractors. Sitting B happens in March — the item bank is slightly more straightforward, with cleaner presentations and more obvious best answers.

If the pass mark were fixed at (say) 65%, candidates sitting A would be penalised for drawing harder items. Candidates sitting B would benefit from drawing easier items. Same underlying competence — different outcomes. That would be unfair and, more importantly, it wouldn't reliably identify who's competent to be licensed.

The fix is to vary the raw pass mark and hold the standard constant. On the harder sitting, the mark drops to (say) 62%. On the easier sitting, it rises to (say) 67%. The level of clinical competence the mark represents is the same across both.

This is standard practice across UK medical assessments — PLAB used it, the MRCP exams use it, most royal college memberships use it. It's called criterion-referenced assessment with standard-setting, and it's how any serious professional licensing body runs its exams.

The GMC uses the modified Angoff method to set the UKMLA mark. Once you understand the mechanism, the "why does it vary?" anxiety loses its grip.

2. The modified Angoff method, step by step

Here's how the UKMLA pass mark actually gets produced for each sitting.

Step 1 — Assemble the standard-setting panel. Before each sitting, the GMC convenes a panel of experienced UK clinicians — typically consultants and senior GPs — representing every specialty in the content map. Panellists are trained on standard-setting methodology and on the GMC's definition of the minimally competent candidate: someone just barely deserving to be licensed as a newly-registered UK doctor.

Step 2 — Define the minimally competent candidate. The panel explicitly calibrates on what "just barely competent" means. This isn't a student who's "getting most things right." It's a newly-qualified doctor who:

  • Makes safe first-line decisions across common presentations.
  • Recognises red flags and escalates appropriately.
  • Applies NICE-aligned management to everyday conditions.
  • Handles basic ethics, consent and professionalism issues correctly.
  • Doesn't necessarily excel on complex or atypical cases.

This definition anchors every subsequent judgement.

Step 3 — Rate each item. Panellists review every question in the sitting. For each item, they independently estimate: "What percentage of minimally competent candidates would answer this question correctly?"

  • An easy, well-written item about first-line management of hypertension: maybe 85% of minimally competent candidates get it right.
  • A harder item involving a subtle diagnostic distinction: maybe 55%.
  • A deliberately nuanced ethics scenario: maybe 40%.

Each panellist produces an estimate per item.

Step 4 — Aggregate the estimates. The individual estimates are averaged (and adjusted for reliability and outliers). The aggregated estimates are summed across all 200 items to produce the expected raw score a minimally competent candidate would achieve.

Step 5 — That aggregated expected score becomes the pass mark. If the panel estimates that a minimally competent candidate would score (say) 128/200 on this sitting, the pass mark is 128/200 — or 64%. Candidates scoring at or above 128 pass. Those below, fail.

Step 6 — Modest adjustment for measurement error. The raw Angoff mark is usually adjusted slightly downward (by a standard error of measurement, typically around 1–2 percentage points) to avoid falsely failing borderline candidates. The final mark is what's published.

That's the whole process. It's rigorous, it's defensible, and it produces a pass mark that genuinely represents competence rather than an arbitrary threshold.
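For the numerically inclined, the aggregation in Steps 3–6 boils down to a few lines of arithmetic. The sketch below is purely illustrative — the panel numbers, function name and SEM allowance are made up for this example, not drawn from GMC code or published figures:

```python
import statistics

def angoff_pass_mark(item_estimates, sem_points=1.5):
    """Aggregate panel estimates into a pass mark (Steps 4-6).

    item_estimates: one list per exam item, holding each panellist's
    estimate (%) of how many minimally competent candidates would
    answer that item correctly.
    sem_points: illustrative standard-error-of-measurement allowance,
    subtracted in raw marks (a made-up default, not a GMC figure).
    """
    # Step 4: average the panel's estimates for each item, then sum the
    # per-item probabilities into the expected raw score of a minimally
    # competent candidate.
    expected_raw = sum(statistics.mean(ests) / 100 for ests in item_estimates)
    # Step 6: nudge the mark down so borderline candidates aren't
    # falsely failed by measurement error.
    adjusted = expected_raw - sem_points
    return round(adjusted), round(100 * adjusted / len(item_estimates), 1)

# A three-item toy paper rated by a five-person panel:
items = [
    [85, 80, 90, 85, 88],  # easy first-line hypertension item
    [55, 60, 50, 58, 52],  # subtle diagnostic distinction
    [40, 45, 38, 42, 40],  # deliberately nuanced ethics scenario
]
raw_mark, pct = angoff_pass_mark(items, sem_points=0)  # raw_mark == 2, pct == 60.5
```

On a real 200-item paper, the same arithmetic is what produces figures like the 128/200 (64%) example in Step 5.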

3. Historical UKMLA and PLAB pass marks

Historical pass marks are useful context, but you should not use them as predictions for your specific sitting.

Published and reported UKMLA AKT pass marks (2024–2026, where public):

  • Recent UKMLA AKT sittings have typically produced pass marks in the 62–68% range of raw items correct.
  • The pass rate for UK medical students on first-sit UKMLA AKT has been reported in the 82–90% range — the cohort is well-prepared and schools' teaching is blueprinted to the same map.
  • The pass rate for IMGs on first-sit PLAB 1 (now AKT-standard-set) has historically been in the 65–78% range — a little lower, reflecting the broader diversity of preparation pathways.

Published and reported PLAB 1 / PLAB 2 marks pre-UKMLA (2015–2023):

  • PLAB 1 pass marks traditionally sat in the 60–65% range.
  • PLAB 2 pass rates for first-sit IMGs typically ran in the 65–75% range.

Interpretation: the standard has been relatively stable across the PLAB-to-UKMLA transition. A candidate scoring consistently in the 70s on content-map-aligned mocks should pass comfortably. A candidate hovering around the mid-60s on mocks is borderline.

The published figures for your specific sitting appear on the GMC website several weeks after the exam, alongside your individual result. Don't chase them pre-exam — they're set by standard-setting, not by you.

4. Why the mark varies across sittings

Three forces drive the variation:

1. Item difficulty shifts naturally. The GMC's item bank contains thousands of questions; each sitting draws a different subset. Some subsets happen to contain harder items overall, others easier. The Angoff method compensates.

2. New items are introduced and calibrated. Every sitting includes some newly-written items being tested for the first time. Until their difficulty is calibrated via candidate performance, panel estimates can be slightly off — which gets corrected post-hoc.

3. Panel composition varies. Different standard-setting panels produce slightly different aggregated estimates. The GMC controls for this via training and cross-panel calibration, but residual variation remains.

The net effect: the raw pass mark typically varies by ±2–3 percentage points between sittings. The competence threshold behind it is essentially constant. You cannot "luck into" a pass by catching an easy sitting — if the items are easier, the mark is higher; if harder, the mark is lower.

5. What score to realistically aim for

This is the most useful number in this post.

Aim for 75–80% sustained accuracy on full-length, timed, content-map-aligned mocks in the final four weeks before your sitting. Candidates consistently in this range pass almost without exception.

Break that down:

  • 70%+ on full-length mocks: on track to pass. Keep practising.

  • 75%+ on full-length mocks: likely comfortable pass. Focus on weak-domain consolidation rather than volume.

  • 80%+ on full-length mocks: very likely strong pass. Protect your base — don't introduce new strategies late.

  • 65–70% on full-length mocks: borderline. You'll likely pass but with no margin. Intensify targeted revision on your weakest 2–3 specialties.

  • 60–65% on full-length mocks: at risk. You may still pass, but the margin is thin. Consider whether you need more preparation time (or a reliable retake plan).

  • Below 60% on full-length mocks: high fail risk. Seriously reconsider your timeline.
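If you prefer your benchmarks explicit, the banding above is simple enough to write down. The thresholds below mirror this post's guidance — they are heuristics, not official GMC cut-offs:

```python
def mock_score_band(pct):
    """Map a full-length, timed, content-map-aligned mock score (%)
    to the readiness bands described above. Thresholds are this
    post's heuristics, not official GMC cut-offs."""
    if pct >= 80:
        return "very likely strong pass - protect your base"
    if pct >= 75:
        return "likely comfortable pass - consolidate weak domains"
    if pct >= 70:
        return "on track to pass - keep practising"
    if pct >= 65:
        return "borderline - intensify revision on weakest specialties"
    if pct >= 60:
        return "at risk - consider more preparation time"
    return "high fail risk - reconsider your timeline"
```

The bands only mean anything if the inputs meet the conditions that follow.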

Key conditions for these benchmarks to apply:

  • Mocks must be full-length (200 questions).
  • Mocks must be timed (replicating exam pace).
  • Mocks must be content-map-aligned (not random "medical student" questions).
  • You must do mocks regularly — a single mock score is a data point, not a pattern.

Common self-deceptions to avoid:

  • "I scored 85% on practice questions!" → Were they timed? Were they a full mock? Were you reviewing explanations as you went? Untimed, interrupted practice inflates scores by 5–10 percentage points.
  • "My accuracy has been stuck at 60% but I'm due a breakthrough." → Maybe. But breakthroughs happen with changed practice, not with time alone.
  • "I always score better on the real exam than on mocks." → Almost no one does. Real exam performance is usually slightly lower than recent mock performance due to stress, fatigue and novel items.

Get your baseline in 15 minutes. MLA Prep's free 25-question diagnostic is content-map-aligned, timed, with instant domain-by-domain breakdown. Most candidates are shocked by either their highest or lowest specialty. Take the diagnostic →

6. UKMLA vs PLAB pass-mark trajectory

The transition from PLAB-only IMG assessment to UKMLA-standard-set PLAB hasn't meaningfully shifted the pass mark. The small differences are worth understanding.

Pre-UKMLA PLAB (before 2024):

  • Pass marks typically 60–65% correct on PLAB 1.
  • Standard-setting was conducted by the GMC but not aligned to the same cross-cohort (UK student + IMG) blueprint.
  • The item bank was slightly narrower than the current UKMLA bank.

UKMLA-era PLAB / UK student AKT (2024+):

  • Pass marks running slightly higher — typically 62–68%.
  • Standard-setting draws from the unified UKMLA blueprint with input from both UK medical schools and GMC international assessment.
  • The item bank has expanded to reflect the 430-condition content map (vs the older 311-condition PLAB scope).

Implication: the absolute numbers aren't dramatically different. What has changed is the scope of what can be tested. If you're studying from an older PLAB resource, core content is still relevant, but the edges of the content map (climate, digital health, updated NICE ladders, expanded prescribing) won't be covered. Our UKMLA vs PLAB comparison maps the delta specifically.

7. Evidence on whether UKMLA has got harder

Candidate chatter often claims the new UKMLA sittings are "harder than the old PLAB." The published evidence says otherwise.

What the data shows:

  • First-sit pass rates for UK students on UKMLA AKT sit in the 82–90% range — high, reflecting well-prepared cohorts.
  • First-sit pass rates for IMGs on PLAB-under-UKMLA AKT are in the 65–78% range — broadly similar to pre-UKMLA PLAB 1 first-sit rates (60–75%). If anything, slightly higher.
  • CPSA pass rates are more variable because the format shift (centralised 18-station Manchester delivery) has had teething issues — but the long-run standard has stabilised.

What the "it's harder" perception actually reflects:

  1. Broader content map — 430 vs 311 conditions. More to study. Candidates who prepped using pre-UKMLA resources feel caught out.
  2. Cleaner NICE alignment — questions now reference current NICE ladders more explicitly. Candidates who trained in non-UK settings feel this most.
  3. Sharper standard-setting — the Angoff panels are more experienced with UKMLA scope, producing less random variation than early sittings.

The honest summary: UKMLA is not fundamentally harder than the old PLAB for a well-prepared candidate. It is broader, more rigorous in standard-setting, and less forgiving of gaps in UK-specific prescribing and guidance. Prepare accordingly.

8. What happens if you fall below the mark

If you fail, three things happen in sequence.

1. You receive a detailed results breakdown. The GMC provides your overall score, a pass/fail decision, and — crucially — a domain-by-domain breakdown. You'll see whether you underperformed in cardiology, psychiatry, emergency medicine, ethics, etc. This is gold for retake planning.

2. You face retake rules.

  • IMGs: up to four attempts under current GMC rules. Attempts beyond four require an exceptional review process.
  • UK students: rules vary by medical school but typically allow two attempts within programme, sometimes three with remediation.

Booking windows for retakes:

  • AKT retakes (IMG): usually at the next available sitting 2–4 months later.
  • CPSA retakes (IMG): next Manchester slot, subject to seat availability — often 3–6 months away.
  • UK student retakes: set by your school's assessment regulations.

3. You plan the retake.

The single biggest predictor of retake success is specificity of revision. Don't "study harder." Study differently.

  • Audit your domain breakdown. Identify your 2–3 weakest specialties.
  • Use spaced repetition specifically on those domains.
  • Do targeted mocks (specialty-specific) rather than general mocks.
  • Seek feedback — a tutor, a senior peer, a structured mock session.
  • Attend your medical school's remediation process if eligible.

Our last-minute UKMLA prep guide has an explicit 6-week AKT retake plan for those in this position.

Don't panic. AKT first-sit fail rates are 10–20% depending on cohort; you're in good company. What matters is the recovery.

9. Results timeline after your sitting

Typical results timelines:

AKT results:

  • UK students: released alongside school finals results, typically 4–8 weeks after sitting.
  • IMGs: released by the GMC approximately 4–6 weeks after sitting, via your GMC online account.

CPSA results:

  • UK students: released with finals results, typically 4–6 weeks after sitting.
  • IMGs: released by the GMC approximately 4 weeks after your Manchester sitting.

What the results package includes:

  • Overall pass/fail decision.
  • Raw score and the pass mark for your specific sitting.
  • Domain-by-domain breakdown (for AKT) and station-by-station breakdown (for CPSA).
  • If you failed: guidance on retake booking and (for UK students) remediation processes.

What the package doesn't include:

  • A percentile rank among your cohort. UKMLA is criterion-referenced, not norm-referenced. You're competing against the standard, not against other candidates.
  • Item-level feedback. You don't see which specific questions you got wrong.

10. Pass-mark myths and misconceptions

Myth 1: "The pass mark is 50% because that's what medical schools use." False. UK medical schools use variable standard-setting; older schools traditionally had ~50% thresholds on internal assessments, but UKMLA uses Angoff-set marks in the 62–68% range.

Myth 2: "If most candidates fail, the pass mark will be lowered." False. The pass mark is set before candidates sit the exam, based on panel judgement of item difficulty. Post-hoc adjustment happens only for obvious item-writing errors (flawed questions are removed), not to hit a target pass rate.

Myth 3: "The exam is norm-referenced — the top 70% pass." False. UKMLA is criterion-referenced. Every candidate is compared against the competence standard, not against each other. If every candidate demonstrated competence, every candidate would pass. If none did, none would.

Myth 4: "The pass mark is 65% — I need exactly that." Not quite. The mark varies by sitting, so aiming for 65% leaves zero margin. Aim for 75–80%.

Myth 5: "My Q-bank score translates 1:1 to the real exam." Roughly but not exactly. Most candidates score 3–5 percentage points lower on the real exam than on recent mocks due to novel items, exam stress, and fatigue across two full papers. Budget the margin.

Myth 6: "The pass mark is higher for IMGs." False. The pass mark is the same for all candidates in a given sitting. First-sit pass rates differ between UK students and IMGs, reflecting preparation and exam-practice differences, not a different threshold.

Myth 7: "You can't fail ethics / professionalism items alone." False. Poor performance on the person-centred-care theme items can contribute materially to a fail decision, and CPSA stations are often fail-able on professionalism alone. Don't treat ethics as optional.

11. Scoring breakdown by clinical domain

Post-results, your GMC feedback will include a domain-by-domain performance breakdown. This is structured around the content map's clinical domains.

What the breakdown shows:

  • Your percentage correct in each of the 24 clinical domains (cardio, resp, neuro, etc.).
  • The cohort average for comparison (so you can see where you are relative to other candidates).
  • Flagged weak areas if your performance is materially below the cohort average.

What to do with it:

  • Before a pass: the breakdown highlights potential post-exam specialty focus for F1. If you scraped through paediatrics, expect to refresh paediatrics on rotation.
  • Before a retake: the breakdown is your revision roadmap. Target the 2–3 weakest domains first. Don't re-cover everything equally.

Practical tip: most candidates have 2–3 "ghost specialties" they under-invest in — often psychiatry, palliative care, ophthalmology, ENT or dermatology. A small amount of deliberate revision in these neglected domains yields disproportionate marks.

12. How Q-bank accuracy predicts exam performance

Q-bank mock scores are the best predictor of exam outcome, with caveats.

Strong predictor conditions:

  • Content-map-aligned Q-bank — most major UK providers (MLA Prep, Passmedicine, Quesmed, Pastest) now map to the 2026 UKMLA blueprint.
  • Timed, full-length mock format — not standalone question practice with immediate explanations.
  • Recent data — your last 3–5 mocks matter more than earlier ones.
  • Calibrated difficulty — a reliable Q-bank's difficulty should approximate the real exam within 3–5 percentage points.

Under these conditions:

  • Mock score of 75%+ → >95% pass rate observed.
  • Mock score of 70–75% → >85% pass rate.
  • Mock score of 65–70% → ~60% pass rate (borderline).
  • Mock score of 60–65% → ~35% pass rate (significantly at risk).
  • Mock score below 60% → high fail risk.

What disrupts the prediction:

  • Mocks with inflated scores (untimed, interrupted, explanations reviewed during the mock).
  • Q-banks with older or mis-mapped content.
  • Over-reliance on a narrow set of topics — some candidates "grind" cardiology SBAs to 90%+ while neglecting psychiatry, which skews mock scores artificially high.
  • Acute life stress, illness, sleep deprivation on exam day.

Advice: take 2–3 full mocks across different Q-banks in the final 4 weeks. If your scores cluster tightly, trust them. If they vary wildly, the Q-banks are likely calibrated differently — use the harder one as your realistic benchmark.
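That final piece of advice can be made concrete. A minimal sketch, assuming a spread of about 3 percentage points counts as "clustering tightly" — an illustrative threshold, not a validated one:

```python
import statistics

def benchmark_from_mocks(scores, spread_threshold=3.0):
    """Condense your last few full-length mock scores (%) into one
    realistic benchmark. spread_threshold is an illustrative cut-off
    for 'clustering tightly', not a validated figure."""
    if len(scores) < 2:
        return scores[0]  # one mock is a data point, not a pattern
    if statistics.pstdev(scores) <= spread_threshold:
        return statistics.mean(scores)  # scores agree: trust the average
    return min(scores)  # banks disagree: use the hardest as your benchmark
```

For example, benchmark_from_mocks([76, 78, 77]) returns 77, while benchmark_from_mocks([68, 79, 74]) returns 68 — the spread is wide, so the harder bank wins.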

13. FAQ

Q. What's the minimum UKMLA AKT pass mark? It varies by sitting. Recent UKMLA AKT pass marks have sat between 62% and 68% correct. The mark for your specific sitting is published alongside your results.

Q. Is the pass mark the same for UK students and IMGs? Yes. Both cohorts sit the same-format AKT, are scored against the same Angoff-set pass mark, and receive the same pass/fail decision framework. First-sit pass rates differ, but the threshold does not.

Q. Can I find out the pass mark before my sitting? No. The pass mark is set by the standard-setting panel after the items are finalised and before candidates sit. It's not published until results release.

Q. How is CPSA scored relative to AKT? CPSA uses aggregated station-level marks with a standard-setting methodology (borderline regression). Your aggregate score across all stations must clear the threshold — you don't have to pass every station individually. AKT vs CPSA explained covers the full mechanics.

Q. If I score 65% on my Q-bank, will I pass? Probably, but with no margin. Aim higher. Candidates consistently at 75%+ pass almost without exception.

Q. How does the pass mark compare to MRCP or other UK exams? MRCP Part 1 historically passes around 55–65% of candidates each sitting with Angoff-set marks in the low 60s. MRCP Part 2 runs similar. UKMLA AKT runs higher pass rates because the cohort is more uniformly well-prepared (all finalists and IMGs specifically preparing for licensing).

Q. Is there a minimum per-domain pass requirement? No explicit minimum within each domain. Your total score determines pass/fail. That said, very poor performance in any single domain can drag your total below threshold even if other domains are strong.

Q. What if my raw score is on the boundary? The Angoff method applies a standard error of measurement adjustment. Candidates within ~1% of the raw threshold are typically pushed into a pass to avoid false fails. Trust the process.

Q. Does failure affect my future medical career? A single UKMLA fail with a subsequent pass has no lasting career impact. Repeated fails or a fail that prevents registration can trigger GMC concerns, but the first retake attempt is routine. UK medical schools and the GMC have explicit remediation pathways.

Q. How accurate are free Q-bank trials for predicting my mark? Reasonably accurate if the trial is content-map-aligned and timed. A 25-question diagnostic gives you a rough baseline; a full-length mock gives you a prediction-grade estimate. Don't extrapolate from 10-question samples.


Baseline your pass-mark readiness. Take MLA Prep's free 25-question UKMLA diagnostic and see your domain-by-domain accuracy. Most candidates discover a weak area they didn't expect — and that's the most valuable 15 minutes of your prep. Start the diagnostic →

The UKMLA pass mark isn't a moving target. It's a rigorously set competence threshold, produced by experienced clinicians judging item-by-item difficulty, and broadly stable across sittings in the 62–68% range.

The goal isn't to hit the mark. The goal is to clear it comfortably — 75–80% on your mocks, broad content-map coverage, no neglected specialties. Candidates who hit that pattern pass. Candidates who don't, usually don't.

Run the mocks. Track your accuracy. Target the weak areas. Book the exam.

The standard is knowable. The preparation is knowable. The pass is yours to earn.


Prep against an Angoff-calibrated question bank. MLA Prep's 5,000+ SBAs are content-map-aligned with NICE and BNF references on every explanation. Less than £1.20 per week. See pricing →
