#### Law

This post arose from a recent meeting at the Royal Society. It was organised by Julie Maxton to discuss the application of statistical methods to legal problems. I found myself sitting next to an Appeal Court Judge who wanted more explanation of the ideas. Here it is.

**Some preliminaries**

The papers that I wrote recently were about the problems associated with the interpretation of screening tests and tests of significance. They don’t allude to legal problems explicitly, though the problems are the same in principle. They are all open access. The first appeared in 2014:

http://rsos.royalsocietypublishing.org/content/1/3/140216

Since the first version of this post, March 2016, I’ve written two more papers and some popular pieces on the same topic. There’s a list of them at http://www.onemol.org.uk/?page_id=456.

I also made a video for YouTube of a recent talk.

In these papers I was interested in the false positive risk (also known as the false discovery rate) in tests of significance. It turned out to be alarmingly large. That has serious consequences for the credibility of the scientific literature. In legal terms, the false positive risk means the proportion of cases in which, on the basis of the evidence, a suspect is found guilty when in fact they are innocent. That has even more serious consequences.

Although most of what I want to say can be said without much algebra, it would perhaps be worth getting two things clear before we start.

**The rules of probability**.

(1) To get any understanding, it’s essential to understand the rules of probabilities, and, in particular, the idea of conditional probabilities. One source would be my old book, *Lectures on Biostatistics* (now free). The account on pages 19 to 24 gives a pretty simple (I hope) description of what’s needed. Briefly, a vertical line is read as “given”, so Prob(evidence | not guilty) means the probability that the evidence would be observed *given* that the suspect was not guilty.

(2) Another potential confusion in this area is the relationship between odds and probability. The relationship between the probability of an event occurring, and the odds on the event, can be illustrated by an example. If the probability of being right-handed is 0.9, then the probability of not being right-handed is 0.1. That means that 9 people out of 10 are right-handed, and one person in 10 is not. In other words, for every person who is not right-handed there are 9 who are right-handed. Thus the odds that a randomly-selected person is right-handed are 9 to 1. In symbols this can be written

\[ \mathrm{probability=\frac{odds}{1 + odds}} \]

In the example, the odds on being right-handed are 9 to 1, so the probability of being right-handed is 9 / (1+9) = 0.9.

Conversely,

\[ \mathrm{odds = \frac{probability}{1 - probability}} \]

In the example, the probability of being right-handed is 0.9, so the odds of being right-handed are 0.9 / (1 – 0.9) = 0.9 / 0.1 = 9 (to 1).
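The two conversions above are easy to check mechanically. Here is a minimal Python sketch of the rules just stated, using the right-handedness example (the function names are mine, chosen for illustration).

```python
def odds_from_probability(p):
    """Odds on an event, given its probability: odds = p / (1 - p)."""
    return p / (1 - p)

def probability_from_odds(odds):
    """Probability of an event, given its odds: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# The right-handedness example from the text:
# probability 0.9 corresponds to odds of 9 (to 1), and vice versa.
print(round(odds_from_probability(0.9), 6))   # 9.0
print(round(probability_from_odds(9), 6))     # 0.9
```

The two functions are inverses of each other, which is all the worked example above is demonstrating.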

With these preliminaries out of the way, we can proceed to the problem.

### The legal problem

The first problem lies in the fact that the answer depends on Bayes’ theorem. Although that was published in 1763, statisticians are still arguing about how it should be used to this day. In fact whenever it’s mentioned, statisticians tend to revert to internecine warfare, and forget about the user.

Bayes’ theorem can be stated in words as follows

\[ \mathrm{\text{posterior odds ratio} = \text{prior odds ratio} \times \text{likelihood ratio}} \]

“Posterior odds ratio” means the odds that the person is guilty, relative to the odds that they are innocent, in the light of the evidence, and that’s clearly what one wants to know. The “prior odds” are the odds that the person was guilty before any evidence was produced, and that is the really contentious bit.

Sometimes the need to specify the prior odds has been circumvented by using the likelihood ratio alone, but, as shown below, that isn’t a good solution.

The analogy with the use of screening tests to detect disease is illuminating.

**Screening tests**

A particularly straightforward application of Bayes’ theorem is in screening people to see whether or not they have a disease. It turns out, in many cases, that screening gives a lot more wrong results (false positives) than right ones. That’s especially true when the condition is rare (the prior odds that an individual suffers from the condition is small). The process of screening for disease has a lot in common with the screening of suspects for guilt. It matters because false positives in court are disastrous.
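A toy calculation makes the point concrete. The prevalence, sensitivity and specificity below are illustrative assumptions chosen for this sketch, not figures for any real screening test.

```python
# Hypothetical screening test (all three numbers are assumed for illustration).
prevalence = 0.01    # 1% of those screened actually have the condition
sensitivity = 0.80   # Prob(test positive | condition present)
specificity = 0.95   # Prob(test negative | condition absent)

n = 10_000  # number of people screened

# Expected counts among the n people screened.
true_positives = n * prevalence * sensitivity                # about 80
false_positives = n * (1 - prevalence) * (1 - specificity)   # about 495

# Fraction of positive tests that are wrong.
fraction_false = false_positives / (true_positives + false_positives)
print(f"{fraction_false:.0%} of positive tests are false positives")
```

With these numbers the false positives outnumber the true ones roughly six to one, even though the test itself looks respectable: that is the effect of the low prior (prevalence) described above.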

The screening problem is dealt with in sections 1 and 2 of my paper, or on this blog (and here). A bit of animation helps the slides, so you may prefer the YouTube version (it deals with screening tests up to 8’45”).

The rest of my paper applies similar ideas to tests of significance. In that case the prior probability is the probability that there is in fact a real effect, or, in the legal case, the probability that the suspect is guilty before any evidence has been presented. This is the slippery bit of the problem both conceptually, and because it’s hard to put a number on it.

But the examples below show that to ignore it, and to use the likelihood ratio alone, could result in many miscarriages of justice.

In the discussion of tests of significance, I took the view that it is not legitimate (in the absence of good data to the contrary) to assume any prior probability greater than 0.5. To do so would be to presume you knew the answer before any evidence was presented. In the legal case a prior probability of 0.5 would mean assuming that there was a 50:50 chance that the suspect was guilty before any evidence was presented. A 50:50 probability of guilt before the evidence is known corresponds to a prior odds ratio of 1 (to 1). If that were true, the likelihood ratio would be a good way to represent the evidence, because the posterior odds ratio would be equal to the likelihood ratio.

It could be argued that 50:50 represents some sort of equipoise, but in the example below it is clearly too high, and if the prior odds are less than 50:50, use of the likelihood ratio alone runs a real risk of convicting an innocent person.

The following example is modified slightly from section 3 of a book chapter by Mortera and Dawid (2008). Philip Dawid is an eminent statistician who has written a lot about probability and the law, and he’s a member of the legal group of the Royal Statistical Society.

My version of the example removes most of the algebra, and uses different numbers.

**Example: The island problem**

The “island problem” (Eggleston 1983, Appendix 3) is an imaginary example that provides a good illustration of the uses and misuses of statistical logic in forensic identification.

A murder has been committed on an island, cut off from the outside world, on which 1001 (= *N* + 1) inhabitants remain. The forensic evidence at the scene consists of a measurement, *x*, on a “crime trace” characteristic, which can be assumed to come from the criminal. It might, for example, be a bit of the DNA sequence from the crime scene.

Say, for the sake of example, that the probability of a random member of the population having characteristic *x* is *P* = 0.004 (i.e. 0.4%), so the probability that a random member of the population does *not* have the characteristic is 1 – *P* = 0.996. The mainland police arrive and arrest a random islander, Jack. It is found that Jack matches the crime trace. There is no other relevant evidence.

How should this match evidence be used to assess the claim that Jack is the murderer? We shall consider three arguments that have been used to address this question. The first is wrong. The second and third are right. (For illustration, we have taken *N* = 1000, *P* = 0.004.)

**(1) Prosecutor’s fallacy**

Prosecuting counsel, arguing according to his favourite fallacy, asserts that the probability that Jack is guilty is 1 – *P* , or 0.996, and that this proves guilt “beyond a reasonable doubt”.

The probability that Jack would show characteristic *x* if he were not guilty is 0.4%, i.e. Prob(Jack has *x* | not guilty) = 0.004. The prosecutor transposes this, in effect claiming that Prob(not guilty | Jack has *x*) = 0.004, and hence that the probability of guilt is 1 – 0.004 = 0.996.

But 0.004 is Prob(evidence | not guilty), which is not what we want. What we need is the probability that Jack is guilty, given the evidence, Prob(Jack is guilty | Jack has characteristic *x*).

To mistake one of these conditional probabilities for the other is the prosecutor’s fallacy, or the error of the transposed conditional.

Dawid gives an example that makes the distinction clear.

“As an analogy to help clarify and escape this common and seductive confusion, consider the difference between “the probability of having spots, if you have measles”, which is close to 1, and “the probability of having measles, if you have spots”, which, in the light of the many alternative possible explanations for spots, is much smaller.”

**(2) Defence counter-argument**

Counsel for the defence points out that, while the guilty party must have characteristic *x*, he isn’t the only person on the island to have it. Among the remaining *N* = 1000 innocent islanders, 0.4% have characteristic *x*, so the number who have it will be *NP* = 1000 × 0.004 = 4. Hence the total number of islanders with the characteristic must be 1 + *NP* = 5. The match evidence means that Jack must be one of these 5 people, but does not otherwise distinguish him from the other members of the group. Since just one of the 5 is guilty, the probability that it is Jack is thus 1/5, or 0.2, very far from being “beyond all reasonable doubt”.

**(3) Bayesian argument**

The probability of Jack having characteristic *x* (the evidence) would be Prob(evidence | guilty) = 1 if Jack were guilty, but if Jack were not guilty it would be 0.4%, i.e. Prob(evidence | not guilty) = *P*. Hence the likelihood ratio in favour of guilt, on the basis of the evidence, is

\[ LR=\frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence }|\text{ not guilty})} = \frac{1}{P}=250 \]

In words, the evidence would be 250 times more probable if Jack were guilty than if he were innocent. While this seems strong evidence in favour of guilt, it still does not tell us what we want to know, namely the probability that Jack is guilty in the light of the evidence: Prob(guilty | evidence), or, equivalently, the posterior odds of guilt relative to the odds of innocence, given the evidence.

To get that we must multiply the likelihood ratio by the prior odds on guilt, i.e. the odds on guilt *before* any evidence is presented. It’s often hard to get a numerical value for this. But in our artificial example, it is possible. We can argue that, in the absence of any other evidence, Jack is no more nor less likely to be the culprit than any other islander, so that the prior probability of guilt is 1/(*N* + 1), corresponding to prior odds on guilt of 1/*N*.

We can now apply Bayes’ theorem to obtain the posterior odds on guilt:

\[ \text {posterior odds} = \text{prior odds} \times LR = \left ( \frac{1}{N}\right ) \times \left ( \frac{1}{P} \right )= 0.25 \]

Thus the odds of guilt in the light of the evidence are 4 to 1 *against*. The corresponding posterior probability of guilt is

\[ Prob( \text{guilty } | \text{ evidence})= \frac{1}{1+NP}= \frac{1}{1+4}=0.2 \]

This is quite small –certainly no basis for a conviction.
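For anyone who wants to check the arithmetic, the whole island calculation can be reproduced in a few lines of Python, using the numbers from the text.

```python
N = 1000      # innocent islanders, besides the suspect
P = 0.004     # frequency of characteristic x in the population

likelihood_ratio = 1 / P                          # 250
prior_odds = 1 / N                                # Jack no likelier than anyone else
posterior_odds = prior_odds * likelihood_ratio    # 0.25, i.e. 4 to 1 against guilt

# Convert the posterior odds back to a probability.
posterior_prob = posterior_odds / (1 + posterior_odds)

# The defence counter-argument counts directly: 1 / (1 + N*P) gives the same answer.
print(round(posterior_odds, 6), round(posterior_prob, 6))   # 0.25 0.2
```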

This result is exactly the same as that given by the defence counter-argument (see above). That argument was simpler: it didn’t use Bayes’ theorem explicitly, though the theorem was implicit in it. The advantage of the defence argument is that it looks simpler; the advantage of the explicitly Bayesian argument is that it makes the assumptions clearer.

**In summary** The prosecutor’s fallacy suggested, quite wrongly, that the probability that Jack was guilty was 0.996. The likelihood ratio was 250, which also seems to suggest guilt, but it doesn’t give us the probability that we need. In stark contrast, the defence counsel’s argument, and, equivalently, the Bayesian argument, suggested that the probability of Jack’s guilt was 0.2, or odds of 4 to 1 *against* guilt. The potential for wrong conviction is obvious.

**Conclusions**.

Although this argument uses an artificial example that is simpler than most real cases, it illustrates some important principles.

(1) The likelihood ratio is not a good way to evaluate evidence, unless there is good reason to believe that there is a 50:50 chance that the suspect is guilty *before* any evidence is presented.

(2) In order to calculate what we need, Prob(guilty | evidence), we need a numerical value for how common possession of characteristic *x* (the evidence) is in the whole population of possible suspects (a reasonable value might be estimated in the case of DNA evidence). We also need to know the size of that population. In the island example this was 1000, but in general it would be hard to estimate, and any answer might well be contested by an advocate who understood the problem.

These arguments lead to four conclusions.

(1) If a lawyer uses the prosecutor’s fallacy, (s)he should be told that it’s nonsense.

(2) If a lawyer advocates conviction on the basis of the likelihood ratio alone, (s)he should be asked to justify the implicit assumption that there was a 50:50 chance that the suspect was guilty before any evidence was presented.

(3) If a lawyer uses the defence counter-argument, or, equivalently, the version of the Bayesian argument given here, (s)he should be asked to justify the estimates of the numerical value given to the prevalence of *x* in the population (*P*) and the numerical value of the size of this population (*N*). A range of values of *P* and *N* could be used, to provide a range of possible values of the final result, the probability that the suspect is guilty in the light of the evidence.

(4) The example that was used is the simplest possible case. For more complex cases it would be advisable to ask a professional statistician. Some reliable people can be found at the Royal Statistical Society’s section on Statistics and the Law.

If you do ask a professional statistician, and they present you with a lot of mathematics, you should still ask these questions about precisely what assumptions were made, and ask for an estimate of the range of uncertainty in the value of Prob(guilty | evidence) which they produce.
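The kind of sensitivity analysis suggested in conclusion (3) is easy to tabulate. This sketch uses the island formula, Prob(guilty | evidence) = 1/(1 + *NP*); the particular values of *P* and *N* tried are arbitrary illustrations, not estimates for any real case.

```python
def prob_guilty(P, N):
    """Prob(guilty | evidence) = 1 / (1 + N*P), as in the island example."""
    return 1 / (1 + N * P)

# Tabulate the posterior probability over a range of assumed values.
for N in (100, 1000, 10_000):
    for P in (0.001, 0.004, 0.01):
        print(f"N = {N:>6}, P = {P:<6}: Prob(guilty | evidence) = {prob_guilty(P, N):.3f}")
```

Even a rough table like this makes plain how strongly the conclusion depends on the contested inputs: with *N* = 10,000 and *P* = 0.01 the posterior probability of guilt falls to about 0.01.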

**Postscript: real cases**

Another paper by Philip Dawid, *Statistics and the Law*, is interesting because it discusses some recent real cases: for example the wrongful conviction of Sally Clark because of the wrong calculation of the statistics for Sudden Infant Death Syndrome.

On Monday 21 March, 2016, Dr Waney Squier was struck off the medical register by the General Medical Council because they claimed that she misrepresented the evidence in cases of Shaken Baby Syndrome (SBS).

This verdict was questioned by many lawyers, including Michael Mansfield QC and Clive Stafford Smith, in a letter. “*General Medical Council behaving like a modern inquisition*”

The latter has already written “*This shaken baby syndrome case is a dark day for science – and for justice*”.

The evidence for SBS is based on the existence of a triad of signs (retinal bleeding, subdural bleeding and encephalopathy). It seems likely that these signs will be present if a baby has been shaken, i.e. Prob(triad | shaken) is high. But this is irrelevant to the question of guilt. For that we need Prob(shaken | triad). As far as I know, the data needed to calculate what matters are just not available.

It seems that the GMC may have fallen for the prosecutor’s fallacy. Or perhaps the establishment won’t tolerate arguments. One is reminded, once again, of the definition of clinical experience: “Making the same mistakes with increasing confidence over an impressive number of years.” (from Michael O’Donnell, *A Sceptic’s Medical Dictionary*, BMJ Publishing, 1997).

**Appendix (for nerds). Two forms of Bayes’ theorem**

The form of Bayes’ theorem given at the start is expressed in terms of odds ratios. The same rule can be written in terms of probabilities. (This was the form used in the appendix of my paper.) For those interested in the details, it may help to define explicitly these two forms.

In terms of probabilities, the probability of guilt in the light of the evidence (what we want) is

\[ \text{Prob(guilty } | \text{ evidence}) = \text{Prob(evidence } | \text{ guilty}) \times \frac{\text{Prob(guilty)}}{\text{Prob(evidence)}} \]

In terms of odds ratios, the odds ratio on guilt, given the evidence (which is what we want) is

\[ \frac{\text{Prob(guilty } | \text{ evidence})}{\text{Prob(not guilty } | \text{ evidence})} = \left( \frac{\text{Prob(guilty)}}{\text{Prob(not guilty)}} \right) \left( \frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence } | \text{ not guilty})} \right) \]

or, in words,

\[ \text{posterior odds of guilt } =\text{prior odds of guilt} \times \text{likelihood ratio} \]

This is the precise form of the equation that was given in words at the beginning.
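As a sanity check, the equivalence can be verified numerically. The probabilities below are arbitrary illustrative values; the point is only that the probability form and the odds form give the same posterior.

```python
# Arbitrary illustrative inputs.
p_guilty = 0.3          # prior Prob(guilty)
p_e_given_g = 0.9       # Prob(evidence | guilty)
p_e_given_ng = 0.1      # Prob(evidence | not guilty)

# Probability form: Prob(evidence) comes from the law of total probability.
p_evidence = p_e_given_g * p_guilty + p_e_given_ng * (1 - p_guilty)
posterior_prob = p_e_given_g * p_guilty / p_evidence

# Odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = p_guilty / (1 - p_guilty)
posterior_odds = prior_odds * (p_e_given_g / p_e_given_ng)

# Converting the posterior odds back to a probability gives the same answer.
assert abs(posterior_prob - posterior_odds / (1 + posterior_odds)) < 1e-12
print(round(posterior_prob, 4))   # 0.7941
```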

A derivation of the equivalence of these two forms is sketched in a document which you can download.

### Follow-up

**23 March 2016**

It’s worth pointing out the following connection between the legal argument (above) and tests of significance.

(1) The likelihood ratio works only when there is a 50:50 chance that the suspect is guilty before any evidence is presented (so the prior probability of guilt is 0.5, or, equivalently, the prior odds ratio is 1).

(2) The false positive rate in significance testing is close to the *P* value only when the prior probability of a real effect is 0.5, as shown in section 6 of the *P* value paper.

However there is another twist in the significance testing argument. The statement above is right if we take as a positive result any *P* < 0.05. If we want to interpret a value of *P* = 0.047 in a single test, then, as explained in section 10 of the *P* value paper, we should restrict attention to only those tests that give *P* close to 0.047. When that is done the false positive rate is 26% even when the prior is 0.5 (and much bigger than 30% if the prior is smaller – see extra Figure). That justifies the assertion that if you claim to have discovered something because you have observed *P* = 0.047 in a single test, then there is a chance of at least 26% that you’ll be wrong. Is there, I wonder, any legal equivalent of this argument?
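The relationship in point (2) can be sketched for the simpler case in which any *P* < 0.05 counts as a positive result. The significance level and the power assumed below are conventional illustrative values chosen for this sketch, not figures taken from the text.

```python
alpha = 0.05   # significance threshold: Prob(P < alpha | no real effect)
power = 0.80   # Prob(P < alpha | real effect) -- an assumed, conventional value

def false_positive_risk(prior):
    """Fraction of 'significant' results that are false positives,
    given the prior probability that a real effect exists."""
    false_pos = alpha * (1 - prior)   # significant results with no real effect
    true_pos = power * prior          # significant results with a real effect
    return false_pos / (false_pos + true_pos)

print(f"{false_positive_risk(0.5):.1%}")   # 5.9%: close to alpha when prior = 0.5
print(f"{false_positive_risk(0.1):.1%}")   # 36.0%: much worse when real effects are rare
```

The calculation mirrors the screening example earlier in the post: the rarer real effects are, the larger the fraction of “discoveries” that are false.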

I’m perfectly happy to think of alternative medicine as being a voluntary, self-imposed tax on the gullible (to paraphrase Goldacre again). But only as long as its practitioners do no harm and only as long as they obey the law of the land. Only too often, though, they do neither.

When I talk about law, I don’t mean lawsuits for defamation. Defamation suits are what homeopaths and chiropractors like to use to silence critics. Heaven knows, I’ve become accustomed to being defamed by people who are, in my view, fraudsters, but lawsuits are not the way to deal with it.

I’m talking about the Trading Standards laws. Everyone has to obey them, and in May 2008 the law changed in a way that puts the whole health fraud industry in jeopardy.

The gist of the matter is that it is now illegal to claim that a product will benefit your health if you can’t produce evidence to justify the claim.

I’m not a lawyer, but with the help of two lawyers and a trading standards officer I’ve attempted a summary. The machinery for enforcing the law does not yet work well, but when it does, there should be some very interesting cases.

The obvious targets are homeopaths who claim to cure malaria and AIDS, and traditional Chinese Medicine people who claim to cure cancer.

But there are some less obvious targets for prosecution too. Here is a selection of possibilities to savour.

- Universities such as Westminster, Central Lancashire and the rest, which promote the spreading of false health claims
- Hospitals, like the Royal London Homeopathic Hospital, that treat patients with mistletoe and marigold paste. Can they produce any real evidence that they work?
- Edexcel, which sets examinations in alternative medicine (and charges for them)
- Ofsted and the QCA which validate these exams
- Skills for Health and a whole maze of other unelected and unaccountable quangos which offer “national occupational standards” in everything from distant healing to hot stone therapy, thereby giving official sanction to all manner of treatments for which no plausible evidence can be offered.
- The Prince of Wales Foundation for Integrated Health, which notoriously offers health advice for which it cannot produce good evidence
- Perhaps even the Department of Health itself, which notoriously referred to “psychic surgery” as a profession, and which has consistently refused to refer dubious therapies to NICE for assessment.

The law, insofar as I’ve understood it, is probably such that only the first three or four of these have sufficient commercial elements for there to be any chance of a successful prosecution. That is something that will eventually have to be argued in court.

But lecanardnoir points out in his comment below that The Prince of Wales is intending to sell herbal concoctions, so perhaps he could end up in court too.

### The laws

We are talking about **The Consumer Protection from Unfair Trading Regulations 2008**. The regulations came into force on 26 May 2008. The full regulations can be seen here, or download pdf file. They can be seen also on the UK Statute Law Database.

The Office of Fair Trading, and Department for Business, Enterprise & Regulatory Reform (BERR) published Guidance on the Consumer Protection from Unfair Trading Regulations 2008 (pdf file),

Statement of consumer protection enforcement principles (pdf file), and

The Consumer Protection from Unfair Trading Regulations: a basic guide for business (pdf file).

**Has The UK Quietly Outlawed “Alternative” Medicine?**

On 26 September 2008, Mondaq Business Briefing published this article by a Glasgow lawyer, Douglas McLachlan. (Oddly enough, this article was reproduced on the National Center for Homeopathy web site.)

“Proponents of the myriad of forms of alternative medicine argue that it is in some way “outside science” or that “science doesn’t understand why it works”. Critical thinking scientists disagree. The best available scientific data shows that alternative medicine simply doesn’t work, they say: studies repeatedly show that the effect of some of these alternative medical therapies is indistinguishable from the well documented, but very strange “placebo effect” ”

“Enter The Consumer Protection from Unfair Trading Regulations 2008(the “Regulations”). The Regulations came into force on 26 May 2008 to surprisingly little fanfare, despite the fact they represent the most extensive modernisation and simplification of the consumer protection framework for 20 years.”

The Regulations prohibit unfair commercial practices between traders and consumers through five prohibitions:-

- General Prohibition on Unfair Commercial Practices (Regulation 3)
- Prohibition on Misleading Actions (Regulation 5)
- Prohibition on Misleading Omissions (Regulation 6)
- Prohibition on Aggressive Commercial Practices (Regulation 7)
- Prohibition on 31 Specific Commercial Practices that are in all Circumstances Unfair (Schedule 1). One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations”. The definition of “product” in the Regulations includes services, so it does appear that all forms of medical products and treatments will be covered.

Just look at that!

One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations”

**Regulation 5** is equally powerful, and also does not contain the contentious word “cure” (see note below).

Misleading actions

5.—(1) A commercial practice is a misleading action if it satisfies the conditions in either paragraph (2) or paragraph (3).

(2) A commercial practice satisfies the conditions of this paragraph—

(a) if it contains false information and is therefore untruthful in relation to any of the matters in paragraph (4) or if it or its overall presentation in any way deceives or is likely to deceive the average consumer in relation to any of the matters in that paragraph, even if the information is factually correct; and

(b) it causes or is likely to cause the average consumer to take a transactional decision he would not have taken otherwise.

These laws are very powerful in principle, but there are two complications in practice.

One complication concerns the extent to which the onus has been moved on to the seller to prove the claims are true, rather than the accuser having to prove they are false. That is a lot more favourable to the accuser than before, but it’s complicated.

The other complication concerns enforcement of the new laws, and at the moment that is bad.

### Who has to prove what?

That is still not entirely clear. McLachlan says

“If we accept that mainstream evidence based medicine is in some way accepted by mainstream science, and alternative medicine bears the “alternative” qualifier simply because it is not supported by mainstream science, then where does that leave a trader who seeks to refute any allegation that his claim is false?

Of course it is always open to the trader to show that his alternative therapy actually works, but the weight of scientific evidence is likely to be against him.”

On the other hand, I’m advised by a Trading Standards Officer that “He doesn’t have to refute anything! The prosecution have to prove the claims are false”. This has been confirmed by another Trading Standards Officer who said

“It is not clear (though it seems to be) what difference is implied between “cure” and “treat”, or what evidence is required to demonstrate that such a cure is false “beyond reasonable doubt” in court. The regulations do not provide that the maker of claims must show that the claims are true, or set a standard indicating how such a proof may be shown.”

The main defence against prosecution seems to be the “Due diligence defence”, in paragraph 17.

Due diligence defence

17.—(1) In any proceedings against a person for an offence under regulation 9, 10, 11 or 12 it is a defence for that person to prove—

(a) that the commission of the offence was due to—

(i) a mistake;

(ii) reliance on information supplied to him by another person;

(iii) the act or default of another person;

(iv) an accident; or

(v) another cause beyond his control; and

(b) that he took all reasonable precautions and exercised all due diligence to avoid the commission of such an offence by himself or any person under his control.

If “taking all reasonable precautions” includes being aware of the lack of any good evidence that what you are selling is effective, then this defence should not be much use for most quacks.

Douglas McLachlan has clarified this difficult question below.

### False claims for health benefits of foods

A separate bit of legislation, European regulation on nutrition and health claims made on food, ref 1924/2006, in Article 6, seems clearer in specifying that the seller has to prove any claims they make.

Article 6

Scientific substantiation for claims

1. Nutrition and health claims shall be based on and substantiated by generally accepted scientific evidence.

2. A food business operator making a nutrition or health claim shall justify the use of the claim.

3. The competent authorities of the Member States may request a food business operator or a person placing a product on the market to produce all relevant elements and data establishing compliance with this Regulation.

That clearly places the onus on the seller to provide evidence for claims that are made, rather than the complainant having to ‘prove’ that the claims are false.

On the problem of “health foods” the two bits of legislation seem to overlap. Both have been discussed in “Trading regulations and health foods“, an editorial in the BMJ by M. E. J. Lean (Professor of Human Nutrition in Glasgow).

“It is already illegal under food labelling regulations (1996) to claim that food products can treat or prevent disease. However, huge numbers of such claims are still made, particularly for obesity ”

“The new regulations provide good legislation to protect vulnerable consumers from misleading “health food” claims. They now need to be enforced proactively to help direct doctors and consumers towards safe, cost effective, and evidence based management of diseases.”

In fact the European Food Safety Authority (EFSA) seems to be doing a rather good job at imposing the rules. This, predictably, provoked howls of anguish from the food industry. There is a synopsis here.

“Of eight assessed claims, EFSA’s Panel on Dietetic Products, Nutrition and Allergies (NDA) rejected seven for failing to demonstrate causality between consumption of specific nutrients or foods and intended health benefits. EFSA has subsequently issued opinions on about 30 claims with seven drawing positive opinions.”

“. . . EFSA in disgust threw out 120 dossiers supposedly in support of nutrients seeking addition to the FSD’s positive list.

If EFSA was bewildered by the lack of data in the dossiers, it needn’t have been, as industry freely admitted it had in many cases submitted such hollow documents to temporarily keep nutrients on-market.”

Or, on another industry site, “EFSA’s harsh health claim regime”

“By setting an unworkably high standard for claims substantiation, EFSA is threatening R&D not to mention health claims that have long been officially approved in many jurisdictions.”

Here, of course, “unworkably high standard” just means real, genuine evidence. How dare they ask for that!

### Enforcement of the law

Article 19 of the Unfair Trading regulations says

19. —(1) It shall be the duty of every enforcement authority to enforce these Regulations.

(2) Where the enforcement authority is a local weights and measures authority the duty referred to in paragraph (1) shall apply to the enforcement of these Regulations within the authority’s area.

Nevertheless, enforcement is undoubtedly a weak point at the moment. The UK is obliged to enforce these laws, but at the moment it is not doing so effectively.

**A letter in the BMJ** from Rose & Garrow describes two complaints under the legislation in which it appears that a Trading Standards office failed to enforce the law. They comment

“. . . member states are obliged not only to enact it as national legislation but to enforce it. The evidence that the government has provided adequate resources for enforcement, in the form of staff and their proper training, is not convincing. The media, and especially the internet, are replete with false claims about health care, and sick people need protection. All EU citizens have the right to complain to the EU Commission if their government fails to provide that protection.”

This is not a good start. A lawyer has pointed out to me

“that it can sometimes be very difficult to get Trading Standards or the OFT to take an interest in something that they don’t fully understand. I think that if it doesn’t immediately leap out at them as being false (e.g “these pills cure all forms of cancer”) then it’s going to be extremely difficult. To be fair, neither Trading Standards nor the OFT were ever intended to be medical regulators and they have limited resources available to them. The new Regulations are a useful new weapon in the fight against quackery, but they are no substitute for proper regulation.”

Trading Standards originated in Weights and Measures. It was their job to check that your pint of beer was really a pint. Now they are being expected to judge medical controversies. Either they will need more people and more training, or responsibility for enforcement of the law should be transferred to some more appropriate agency (though one hesitates to suggest the MHRA after their recent pathetic performance in this area).

### Who can be prosecuted?

Any “trader”, a person or a company. There is no need to have actually bought anything, and no need to have suffered actual harm. In fact there is no need for there to be a complainant at all. Trading standards officers can act on their own. But there must be a commercial element. It’s unlikely that simply preaching nonsense would be sufficient to get you prosecuted, so the Prince of Wales is, sadly, probably safe.

Universities that teach that “Amethysts emit high Yin energy” make an interesting case. They charge fees and in return they are “falsely claiming that a product is able to cure illnesses”.

In my view they are behaving illegally, but we shan’t know until a university is taken to court. Watch this space.

**The fact remains that the UK is obliged to enforce the law and presumably it will do so eventually. When it does, alternative medicine will have to change very radically. If it were prevented from making false claims, there would be very little of it left apart from tea and sympathy.**

### Follow-up

**New Zealand** must have similar laws.

Just as I was about to post this, I found that in New Zealand a

“couple who sold homeopathic remedies claiming to cure bird flu, herpes and Sars (severe acute respiratory syndrome) have been convicted of breaching the Fair Trading Act.”

They were ordered to pay fines and court costs totalling $23,400.

**A clarification from Douglas McLachlan**

On the difficult question of who must prove what, Douglas McLachlan, who wrote *Has The UK Quietly Outlawed “Alternative” Medicine?*, has kindly sent the following clarification.

“I would agree that it is still for the prosecution to prove that the trader committed the offence beyond a reasonable doubt, and that burden of proof is always on the prosecution at the outset, but I think if a trader makes a claim regarding his product and best scientific evidence available indicates that that claim is false, then it will be on the trader to substantiate the claim in order to defend himself. How will the trader do so? Perhaps the trader might call witness after witness in court to provide anecdotal evidence of their experiences, or “experts” that support their claim – in which case it will be for the prosecution to explain the scientific method to the Judge and to convince the Judge that its Study evidence is to be preferred.

Unfortunately, once human personalities get involved things could get clouded – I could imagine a small time seller of snake oil having serious difficulty, but a well funded homeopathy company engaging smart lawyers to quote flawed studies and lead anecdotal evidence to muddy the waters just enough for a Judge to give the trader the benefit of the doubt. That seems to be what happens in the wider public debate, so it’s easy to envisage it happening in a courtroom.”

**The “average consumer”.**

(3) A commercial practice is unfair if—

(a) it contravenes the requirements of professional diligence; and

(b) it materially distorts or is likely to materially distort the economic behaviour of the average consumer with regard to the product.

It seems, therefore, that what matters is whether the “average consumer” would infer from what is said that a claim was being made to cure a disease. The legal view cited by Mojo (comment #2, below) is that expressions such as “can be used to treat” or “can help with” would be considered by the average consumer as implying *successful* treatment or cure.

**The drugstore detox delusion**. A nice analysis of “detox” at Science-based Pharmacy.