Methodology

The Maloney Review clinical-standards framework

This page sets out, in full, the five-category framework the publication applies to every clinic review, so a reader can predict the score. Each category scores Pass, Concern, or Fail. A single Fail in Category 1 or Category 2 is sufficient for an overall Fail. The framework is owned and operated by the publication, not licensed to or co-branded with any clinic, marketplace, manufacturer, or industry body.

The framework is published in full so a reader can study it before any review and form an independent expectation of how a given clinic should score against it. Patients use it to evaluate the recommendation they have received. Journalists use it to assess whether the publication applies the same severity to every reviewed clinic, regardless of size, market, tier, or country. The framework is the integrity instrument. It is published before the reviews because that is the only order in which it is testable.

The framework has been applied in full to one clinic review at the time of writing — the Metal Dental Clinic, Da Nang review — and will be applied in full to every clinic reviewed under this byline going forward. It is owned and operated by the publication. It is not licensed to any clinic, marketplace, manufacturer, certification scheme, or industry body. There are no certification fees, no inclusion fees, no paid placement, no co-branding, and no licensing arrangements. There is no version of this framework that can be purchased, applied for, or displayed as a credential. A clinic that scores well on it has not earned a badge from the publication; the publication will not provide one. A clinic that scores badly cannot pay to have the score revised; revisions are evidence-driven and are documented.

The five categories

The framework evaluates a clinic across five categories. Each category has a defined scope, a stated evidence basis, and a fixed scoring band of Pass, Concern, or Fail. The categories are weighted unevenly: Categories 1 and 2 are clinical-acuity categories, and a Fail in either is sufficient to produce an overall Fail finding regardless of performance in the other three. This asymmetry exists because no amount of marketing polish, paperwork hygiene, or post-treatment communication compensates for a clinical decision that was the wrong decision, or for a procedure performed without the time and skill the procedure requires.

Category 1 — Clinical decision-making

The question: Is the treatment recommended for each patient the treatment the patient’s clinical condition supports, or is it the treatment the clinic’s business model is structured to deliver?

Evidence basis: Pre-treatment imaging (panoramic radiographs at minimum, CBCT for implant cases), pre-treatment clinical records describing each patient’s chief complaint and existing dental status, and the documented treatment plan. Where the publication is reviewing without internal records access, evidence is drawn from the clinic’s own publicly published before-and-after content, patient testimony, and direct on-site observation where on-site visits are conducted.

What this category assesses:

  • Whether each treatment recommended is supported by a documented clinical indication. Veneers for veneer-indicated teeth, full-coverage crowns for crown-indicated teeth, implants for implant-indicated teeth. The recommendation has to fit the case, not the case-mix the clinic happens to be set up to deliver.
  • Whether minimally invasive alternatives were considered and documented. A treatment plan that goes straight to full-arch crowns or extraction-and-implant without any record of having considered the conservative alternatives — orthograde endodontic retreatment, vital pulp therapy on indicated cases, short or tilted implants instead of bone grafting — has not been worked up.
  • Whether imaging is interpreted by the clinician who will perform the procedure, before the treatment plan is agreed, in a way that is visible to the patient.
  • Whether the financial-incentive shape of the recommendation pattern matches the clinical-evidence shape, or whether it diverges.

What this category does not assess: the cosmetic outcome at two weeks, the patient’s stated satisfaction at the chair, or the marketing of the case. None of those is a substitute for whether the recommendation was clinically right.

Scoring:

  • Pass. Treatment recommendations align with documented clinical indication for every reviewed case. Minimally invasive alternatives are considered and documented. Imaging is interpreted by the treating clinician before the plan is agreed.
  • Concern. The pattern is mixed across reviewed cases — some recommendations are clearly indicated, others are inadequately documented or inconsistent with the conservative alternative. The publication does not have enough evidence to call it a Fail but has enough evidence to call it a Concern.
  • Fail. A documented pattern of recommendations that diverge from clinical indication in a direction consistent with the clinic’s business model. One clearly miscalled case in a sample of one is a clinical event. A pattern across multiple documented cases — the same wrong call, same direction, same financial advantage to the clinic — is a Fail.

A Fail in Category 1 produces an overall Fail finding, regardless of performance in Categories 3, 4, or 5. This is the load-bearing rule of the framework.

Category 2 — Procedure execution

The question: Is the procedure being performed to standard, in conditions and at a fee that support the time and skill the procedure requires?

Evidence basis: Direct on-site observation where on-site visits are conducted, video documentation of procedures in progress (where the clinic publishes such content), patient post-treatment radiographs reviewed against the procedure performed, and the time and fee economics of the procedure as advertised.

What this category assesses:

  • Whether the visible technique matches the standard for the procedure being performed. Working length determination on root canal treatment, rubber dam isolation where it is the standard of care, magnification on cases that require it, irrigation protocols at appropriate concentration and volume, obturation to the working length with a verified seal.
  • Whether the time per case visible in published footage is consistent with the time the procedure requires. A molar root canal performed competently in 45–90 minutes is a different procedure from one performed in 10–15 minutes; the difference is not skill, it is omission.
  • Whether the fee per procedure is economically consistent with the time required. A fee that does not support the time a trained clinician needs to perform the procedure to standard is itself diagnostic. The fee is not evidence of corruption or carelessness — it is evidence that the procedure is being delivered at a tempo that omits steps.
  • Whether the laboratory work supporting the case is consistent with the prosthetic outcome demanded. A two-day full-arch zirconia turnaround supported by an in-house lab is a different prosthetic process from a six-week prosthodontist-supervised lab process; the published five-year survival data on these two processes is not the same.

What this category does not assess: whether the patient was happy at the chair, whether the marketing is effective, or whether the clinic is busy. Throughput is not a quality signal.

Scoring:

  • Pass. Procedures are performed with the time, technique, and protocol the procedure requires. Fee economics support the time. Laboratory and prosthetic processes match the prosthetic claims.
  • Concern. Some procedures meet standard; others show economic or technique signals consistent with omitted steps; the publication does not have enough evidence to call it a Fail.
  • Fail. A documented pattern of procedures performed at a fee or time that does not support the steps the procedure requires, or technique signals consistent with omitted steps, or laboratory turnaround inconsistent with the prosthetic outcome. A Fail in Category 2, like a Fail in Category 1, produces an overall Fail finding regardless of the other three categories.

Category 3 — Sterilisation and infection control

The question: Are the cross-infection protocols required by every credible clinical-standards body in fact in operation in this clinic?

Evidence basis: Direct on-site observation, video documentation of procedures in progress, and the clinic’s own published clinical environment imagery. Internal logs (autoclave logs, instrument tracking, water-line testing records) are considered where access is granted.

What this category assesses:

  • Glove protocol — single-use, changed between patients, and not contacting non-clinical surfaces (personal mobile phones, door handles, paperwork) during a procedure.
  • Aerosol management — physical isolation between treatment chairs where simultaneous aerosol-generating procedures are performed, or time separation with documented surface decontamination between cases.
  • Instrument sterilisation — autoclave use, verified through logs, with cycle confirmation per cycle.
  • Water-line management — periodic line testing, documented results, and protocols for line treatment.
  • Personal protective equipment — eye protection, mask protocol, and gown protocol consistent with local clinical-standards body guidance.

What this category does not assess: the visual cleanliness of reception areas, the aesthetic of the treatment rooms, or the presence or absence of branded scrubs. These are not infection control. They are interior design.

Scoring:

  • Pass. No documented breach. Logs available where applicable. Visible protocols match the standard.
  • Concern. A protocol gap is visible but the publication cannot confirm the gap is systemic from the available evidence. Often resolves to a Pass or a Fail on follow-up evidence.
  • Fail. One or more documented breaches in published footage or on-site observation. Glove contamination during a procedure, simultaneous aerosol-generating procedures in an unisolated open bay, an autoclave log that does not match the visible patient throughput, or any combination of the above.

A Fail in Category 3 does not on its own produce an overall Fail finding — but a Fail here combined with a Concern or Fail in any other category compounds.

Category 4 — Documentation and records

The question: Is informed consent meaningful in this clinic, and are the records that would support a future remediation in the event of complication actually maintained?

Evidence basis: Patient interviews where conducted, internal records review where access is granted, and the visible consent and consultation process as documented in published clinic content.

What this category assesses:

  • Pre-treatment imaging stored, named, dated, and accessible to the patient on request.
  • Treatment planning conversation documented — alternatives considered, alternatives ruled out, and reasons stated.
  • Informed consent that names the specific risks of the specific procedure, in writing, in a language the patient reads, and signed before the procedure.
  • Post-treatment radiographs and clinical notes stored against the patient’s record and accessible to a domestic practitioner if the patient requires remediation in their home country.
  • A handover document the patient leaves with — including post-operative instructions, what to do if specific complications arise, and named domestic referral pathways.

What this category does not assess: the production quality of marketing materials or the polish of the brand. Those are not records.

Scoring:

  • Pass. Records are maintained, consent is meaningful, handover document is provided.
  • Concern. Records are partial, consent is verbal-only, handover is informal. Often the publication cannot confirm the absence of records and the score reflects insufficient evidence rather than confirmed failure.
  • Fail. Records are not maintained, consent is not meaningful (no documented alternatives, no documented risks, no signed form), or the patient leaves with no handover document.

Category 5 — Post-treatment support and continuity of care

The question: When a patient returns to her home country with a complication, who is responsible, and is that responsibility actually exercised?

Evidence basis: Patient interviews, the clinic’s published post-treatment communication protocols, and the publication’s own returned-patient caseload.

What this category assesses:

  • A named clinical contact at the clinic, accessible to the patient post-treatment, with a defined response time.
  • A protocol for documenting and addressing complications that arise after the patient has returned home — at minimum, sharing the patient’s records with a domestic practitioner who has agreed to manage the complication.
  • A named domestic referral pathway in countries the clinic markets to. This is not the same as “we have had Australian patients before, please contact your local dentist.” A genuine pathway names a clinician.
  • A reasonable warranty or remediation policy whose terms are written down before treatment, not negotiated after a complication.

What this category does not assess: whether the patient remained satisfied. Satisfaction is a leading indicator of clinical outcome, not a substitute for one.

Scoring:

  • Pass. Named clinical contact, defined response time, documented complication protocol, named domestic referral pathway, written warranty. All five are in operation.
  • Concern. Some elements are present, others are informal or undocumented. The publication can find evidence the clinic intends to provide post-treatment support but cannot confirm the support is operationalised in a way a patient can rely on.
  • Fail. No named clinical contact, no documented complication protocol, no domestic referral pathway, and no written warranty. The patient is on her own from the moment she boards the return flight.

How an overall finding is reached

Each clinic review reports a category-by-category score and an overall finding. The overall finding is determined by the following deterministic rules:

  • Fail in Category 1 OR Category 2 → Overall Fail. No combination of Passes in 3, 4, and 5 redeems a clinical-decision-making Fail or a procedure-execution Fail. This is the load-bearing rule.
  • Fail in Category 3 → Overall Concern at minimum, Overall Fail when combined with a Concern or Fail anywhere else. Infection control failures compound; they do not stand alone.
  • Fail in Category 4 OR Category 5 only, with Pass in 1, 2, and 3 → Overall Concern. Documentation gaps and continuity-of-care gaps are real failures the patient experiences, but in isolation they do not invalidate clinically sound treatment.
  • Concern in any category, no Fail anywhere → Overall Concern. A Concern is a finding in its own right; it is not a soft Pass.
  • Pass in all five categories → Overall Pass.
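Because the rules above are deterministic, the aggregation can be sketched in a few lines. The `Score` enum and `overall_finding` function below are illustrative only, not part of the publication's tooling; they exist to show that the rules compose without ambiguity.

```python
from enum import Enum

class Score(Enum):
    PASS = "Pass"
    CONCERN = "Concern"
    FAIL = "Fail"

def overall_finding(scores: dict[int, Score]) -> Score:
    """Aggregate five category scores (keys 1-5) into an overall finding,
    following the deterministic rules stated above."""
    assert set(scores) == {1, 2, 3, 4, 5}
    # Load-bearing rule: a Fail in Category 1 or 2 is an overall Fail.
    if scores[1] is Score.FAIL or scores[2] is Score.FAIL:
        return Score.FAIL
    # A Category 3 Fail compounds with a Concern or Fail anywhere else.
    if scores[3] is Score.FAIL:
        others = [s for cat, s in scores.items() if cat != 3]
        if any(s is not Score.PASS for s in others):
            return Score.FAIL
        return Score.CONCERN  # Overall Concern at minimum
    # A remaining Fail (Category 4 or 5) or any Concern is an overall Concern.
    if any(s is not Score.PASS for s in scores.values()):
        return Score.CONCERN
    return Score.PASS
```

Note that the function never returns Pass unless all five categories pass, and a Concern is never promoted to Pass: a Concern is a finding in its own right.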

A clinic with an Overall Pass is not endorsed by the publication. It has scored, in the publication’s structured judgement against the framework above, in a way consistent with operating to standard at the time of the review. Re-review cadence applies (see below).

Re-review cadence

  • Pass. Re-reviewed every 24 months. A Pass clinic that materially changes its case mix, ownership, lead clinical staff, or facility footprint is re-reviewed within 6 months of the change.
  • Concern. Re-reviewed every 12 months. A clinic that addresses the named Concern is eligible for an early re-review on submission of evidence to the publication.
  • Fail. Re-reviewed every 12 months by default. A clinic that addresses the named Fail can submit evidence at any time and request an off-cycle review; if the evidence is sufficient on its face to justify a re-score, an off-cycle re-review is conducted. The clinic does not pay for the re-review. Nobody pays for any review.

Re-review outcomes that change the score are published with a dated correction block at the head of the original review, linking to the new finding. Reviews are never silently revised.
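The cadence above reduces to simple date arithmetic. The sketch below is illustrative of the policy, not the publication's tooling; the constant names and the `material_change` flag are assumptions introduced here for clarity.

```python
import calendar
from datetime import date

# Months until the next scheduled re-review, per the cadence above.
CADENCE_MONTHS = {"Pass": 24, "Concern": 12, "Fail": 12}
MATERIAL_CHANGE_MONTHS = 6  # Pass clinic with a material change

def next_review_due(finding: str, reviewed: date,
                    material_change: bool = False) -> date:
    """Return the due date of the next re-review.

    A material change to a Pass clinic (case mix, ownership, lead clinical
    staff, facility footprint) pulls the re-review forward to 6 months.
    Off-cycle, evidence-driven re-reviews can happen earlier at any time.
    """
    months = (MATERIAL_CHANGE_MONTHS
              if finding == "Pass" and material_change
              else CADENCE_MONTHS[finding])
    total = reviewed.month - 1 + months
    year, month = reviewed.year + total // 12, total % 12 + 1
    # Clamp the day so e.g. 31 August + 6 months lands on 28/29 February.
    day = min(reviewed.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

A scheduled date is a ceiling, not a floor: as the prose notes, evidence submitted by a clinic can trigger an earlier off-cycle re-review, and nobody pays for any review.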

What this framework is not

  • It is not a certification scheme. The publication does not issue badges, plaques, or certificates. It does not authorise clinics to display a Maloney Review score in marketing materials. A Pass cannot be purchased, displayed, or advertised. A Fail cannot be remediated by paying for a re-review.
  • It is not a marketplace ranking. Clinics are not ranked against each other. The framework produces category scores and an overall finding; it does not produce a leaderboard.
  • It is not a substitute for a domestic specialist second opinion, a check of the destination clinic’s certification with the relevant local registration body, or a review of the cost-by-country reference. The framework is one input the patient can stack against the recommendation she has received. The case for stacking imperfect remedies, and the named limits of each, is set out in the structural account of why no single remedy closes the trust gap.
  • It is not an algorithm. The category scores require clinical judgement applied to evidence the publication has reviewed. Two reviewers operating on the same evidence will, in practice, agree on the overall finding in almost every case where the evidence is sufficient; they may disagree on whether a Concern in Category 3 is borderline-Pass or borderline-Fail. That disagreement is documented in the review where it matters.

Conflict of interest policy

The framework, the reviews it produces, and the editorial structure that supports both operate under the following commercial-relationship rules:

  • The publication holds no commercial relationship with any reviewed clinic, marketplace, manufacturer, or industry body. The default state is documented at /disclosures/.
  • No certification fees, no inclusion fees, no paid placement, no co-branding, and no licensing arrangements with any external party.
  • On-site visits are paid for by the publication. Where, in future, a visit would be impossible without clinic-provided travel or accommodation, that arrangement is disclosed in the opening 100 words of the affected review and the review is read by an editor other than the reviewing clinician before publication, who attests that the framework was applied with the same severity used on every other clinic.
  • Affiliated-clinic reviews — were the publication ever to hold a commercial relationship with a clinic — would be paired with a non-affiliated clinic review of equal severity within 7 days, deliberately staggered across markets and tiers. The integrity ratchet is enforced by the editorial calendar, not by reviewer goodwill.
  • The framework is the publication’s. It is not licensed. A clinic, marketplace, certification scheme, or manufacturer that wishes to apply a similar framework to its own operations is free to do so, on the strict condition that they do not represent themselves as carrying a Maloney Review score, certification, or affiliation. The framework on this page is descriptive of what the publication does. It is not a license to be issued.

How the framework was developed

The categories above are not novel. They draw on the standard scope of clinical assessment used by registered specialty bodies in restorative dentistry, prosthodontics, and endodontics across Australia, New Zealand, the UK, and the US. The novelty, where there is any, is in publishing the framework before the reviews and in committing to apply it across markets, tiers, and ownership types with the same severity.

The framework will be revised as the reviewed-clinic caseload accumulates. Revisions are dated, documented, and visible — the version of the framework in force when a given review was published is recoverable from the review’s metadata. A revision that materially changes how a previously reviewed clinic would score triggers a re-review of that clinic against the revised framework, and the prior review is updated with a correction block linking forward to the new finding.

What would change my view about the framework itself

  • A category that the publication has missed — a clinical-acuity domain that a reasonable specialist would expect the framework to cover and that is not covered above. The current category set is comprehensive against the procedures the publication reviews. It is not a static set; if a domain emerges (digital workflow validation, biocompatibility assessment of materials sourced through unverified supply chains, AI-assisted diagnostic workflows), the framework will be revised and the revision will be dated and visible.
  • Evidence that the framework, as applied, produces inconsistent severity across markets or clinic tiers. The publication’s own internal audit of cross-clinic application, conducted alongside every published review, looks for this. If a third-party audit finds it, the framework is revised.
  • Evidence that a Pass under the framework does not, in practice, predict patient outcomes at the cohort level. The framework will be calibrated against the publication’s returned-patient caseload over time. A Pass that does not predict outcomes at scale is a framework that does not work, and it will be revised or replaced. The work is not done because the framework is published; the framework is the start of the work.

How this framework relates to the rest of the publication

The framework is the structural anchor of the publication’s clinic reviews. It connects upstream and downstream to the rest of the editorial work. The methodology for treatment option reviews is at the treatment-option-reviews methodology page. The first-published clinic review applying this framework in full is Metal Dental Clinic, Da Nang — five categories, four Fails, one Concern, overall Fail finding, and an explicit not-recommended-for-any-patient-profile conclusion. The structural argument behind why an independent specialist review framework is one (not the only) response to the conditions that produce dental tourism is in the dental tourism trust gap long read. The clinical decision frameworks the framework’s Category 1 evaluates against are documented in pieces like when to save a tooth and when to replace it, why most implants do not need bone grafting, and the Trial of the Week review of the Asgary multicenter VPT/CEM trial, which sets the evidence anchor for the pulp-vitality decision the framework expects a competent clinic to be able to discuss. The weekly read of what the regulators and the peer-reviewed record have published — and what they do not, on their own, settle — is at This Week in Dental Tourism.

The framework is published. The reviews are published. The methodology is published. The disclosures are published. That is the work. It is not enough, on its own, to close the trust gap. It is, when stacked with the other imperfect remedies, the most honest available response.

How to cite this article

Permalink: https://ritamaloney.com/methodology/clinical-standards-framework/

Maloney R. The Maloney Review clinical-standards framework. The Maloney Review. 6 May 2026. https://ritamaloney.com/methodology/clinical-standards-framework/