posted by [personal profile] purplecthulhu at 09:40am on 14/03/2009
According to the BBC, Hyperbaric Oxygen Treatment can Help Children with Autism.

This sounded a bit odd to me and, to their credit, the BBC actually provides some hard numbers in the article, which is useful since I can't access academic journals from home to check the real paper...

62 children took part in the study, randomly assigned to receive either oxygen-enriched high-pressure treatment or slightly pressurised normal atmosphere. So they did have a control, which is good, but the article doesn't say whether the trial was double blind (which would be bad if it wasn't, but we'll give them the benefit of the doubt for the moment).

30% of treated children were much improved compared to 8% of controls, and 80% of the treatment group improved compared to 38% of controls. Sounds convincing?

Let's look at the numbers. If the control and test groups were equal in size, there were 31 in each. That means 9.3 children were much improved in the test group versus 2.5 in the control, and 24.8 improved in the test group versus 11.78 in the control (we'll ignore how you get fractional children and put it down to BBC rounding errors).

How significant is this result? 31 in each group, assuming Poisson errors, gives a 1 sigma of sqrt(31) = 5.57. The difference in 'much improved' is 6.8 children, i.e. 1.2 sigma - something that happens at random roughly 25% of the time. For 'improved' the difference is 13.02 children, i.e. 2.3 sigma - something that happens at random roughly 3% of the time.
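For anyone who wants to check the arithmetic, here's a minimal Python sketch of that back-of-the-envelope calculation (the even 31/31 split and the sqrt(31) Poisson error bar are my assumptions, and the error bar gets revisited in the comments):

    # Back-of-the-envelope check, assuming an even 31/31 split and a
    # Poisson error of sqrt(31) on each group's count.
    from math import sqrt

    n = 31                                       # children per group
    much_test, much_ctrl = 0.30 * n, 0.08 * n    # ~9.3 vs ~2.5
    impr_test, impr_ctrl = 0.80 * n, 0.38 * n    # ~24.8 vs ~11.78

    sigma = sqrt(n)                              # ~5.57
    print((much_test - much_ctrl) / sigma)       # ~1.2 sigma
    print((impr_test - impr_ctrl) / sigma)       # ~2.3 sigma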

This analysis assumes everything in the experiment was perfect, that there are no hidden biases or systematics (e.g. if the trial isn't truly double blind, which, given the need for oxygen cylinders etc., might be difficult to arrange), and it still comes up with a result whose significance is so low that I wouldn't even consider it a positive in my own science.

But the BBC thinks it's good enough to put out a worldwide report on it.

You can see why I worry about medical statistics!
There are 10 comments on this entry.
 
posted by [identity profile] green-knight.livejournal.com at 11:29am on 14/03/2009
I think the other points to consider here are ease of access and side-effects - if a relatively simple and cheap intervention can make a difference, it is worth pursuing. If the same results were produced by a $$$ drug with serious potential side-effects, it might be reported much more cautiously.

(And as I see it, all of them probably got hooked up to oxygen cylinders; only the pressure and the composition varied.)
 
posted by [identity profile] purplecthulhu.livejournal.com at 08:06pm on 14/03/2009
Hyperbaric treatment is neither cheap nor simple, as I understand it. Nor, according to the evidence presented in this report, does it make a difference. It might be worth more research, but it's not worth a headline (or even a refereed journal publication!).
 
posted by [identity profile] davegullen.livejournal.com at 11:49am on 14/03/2009
Don't tell us - tell the Beeb! I think journalists have great problems understanding the language of science - the difference between 'can help' and 'may help', for example.
 
posted by [identity profile] narkil.livejournal.com at 01:01pm on 14/03/2009
After racking my brains: isn't the variance in a Poisson distribution the expected number of results rather than the total sample size? So if we take the combined results as the basis for calculating a likelihood, that gives us roughly 6 per group for 'much improved' and 18 for 'improved', and so respective standard deviations of 2.5 and 4.2, which pushes the sigma values up to something rather more respectable (even better if you treat the control as the basis, but that seems pretty dubious).
However, we are looking at the difference between two results rather than the chance of one result, and I can't remember how the sigma values translate to that scenario.
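(A rough sketch of this suggestion in Python, using the post's numbers; whether the pooled counts are the right expected value, and how to handle the difference of two results, is exactly the open question:)

    # Sketch of the pooled-count idea: average the two groups' results
    # and treat that as the expected Poisson count per group.
    from math import sqrt

    pooled_much = (9.3 + 2.5) / 2       # ~5.9, roughly 6 per group
    pooled_impr = (24.8 + 11.78) / 2    # ~18.3, roughly 18 per group

    print(sqrt(pooled_much))            # ~2.4 (the "2.5" above)
    print(sqrt(pooled_impr))            # ~4.3 (the "4.2" above)

    # Naively dividing the observed differences by one group's sigma:
    print(6.8 / sqrt(pooled_much))      # ~2.8 sigma for 'much improved'
    print(13.02 / sqrt(pooled_impr))    # ~3.0 sigma for 'improved'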
 
posted by [identity profile] purplecthulhu.livejournal.com at 11:25am on 15/03/2009
I'm not quite sure I understand you here. We don't know what the expected number of results is since we don't know the parent distribution - it's what we're trying to determine - so the best estimate we have for that is the results obtained. Hence the sigma on the parent mean is just the sqrt of the number of such events.

To be more rigorous I should be adding the errors in quadrature, which gives this:

9.3 children in test (sd = 3) vs. 2.5 (sd = 1.6) in control were much improved. Difference is 6.8. Adding the errors in quadrature gives sqrt(3^2 + 1.6^2) = 3.4, for a final significance of 2 sigma (happens one time in 20 at random).

24.8 (sd = 4.98) vs. 11.78 (sd = 3.4) improved. Difference is 13.02, error added in quadrature is 6.04, so the final result is 2.15 sigma - still insignificant.

Numbers change a bit (and I was being a bit sloppy I admit) but the conclusion remains the same: no result.
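(As a sketch: for Poisson counts the quadrature sum sqrt(sd1^2 + sd2^2) collapses to sqrt(n1 + n2), so the whole calculation is a couple of lines:)

    # Error on the difference of two independent Poisson counts:
    # sd = sqrt(n1 + n2), the quadrature sum of sqrt(n1) and sqrt(n2).
    from math import sqrt

    def diff_sigma(n_test, n_ctrl):
        return (n_test - n_ctrl) / sqrt(n_test + n_ctrl)

    print(diff_sigma(9.3, 2.5))       # ~2.0 sigma ('much improved')
    print(diff_sigma(24.8, 11.78))    # ~2.15 sigma ('improved')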

 
posted by (anonymous) at 12:25pm on 15/03/2009
It was the use of a sigma of 5.57 (sqrt 31) that leapt out.

Don't you need to average rather than add the errors in quadrature?
By just summing the combined results you are looking at the expected number out of 62 rather than 31. I thought the highest possible sigma for a Poisson distribution on 31 subjects is 5.57 (sqrt(lambda) with lambda <= 31, so 6.04 seems odd).
The difference between 1.2 sigma and 2.7 sigma for 'much improved' moves it from the purely random to the interesting.

Having said that, the results still fall into the 'deserves further study' bucket rather than the 'definitive' one.
 
posted by [identity profile] purplecthulhu.livejournal.com at 12:39pm on 15/03/2009
If you're combining the results of two essentially random processes, as you are when looking at the difference between two sets of data, then it's like adding two different random walks together. That has to be adding in quadrature rather than averaging since the random drifts might not be in opposite directions.
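A quick simulation makes the same point (a sketch assuming numpy; the lambdas are the 'much improved' counts from the post):

    # Draw two independent Poisson counts many times and compare the
    # spread of their difference with quadrature vs. averaging.
    import numpy as np

    rng = np.random.default_rng(0)
    lam_test, lam_ctrl = 9.3, 2.5
    diff = rng.poisson(lam_test, 100_000) - rng.poisson(lam_ctrl, 100_000)

    print(diff.std())                                   # ~3.4 empirically
    print(np.sqrt(lam_test + lam_ctrl))                 # ~3.4, quadrature sum
    print((np.sqrt(lam_test) + np.sqrt(lam_ctrl)) / 2)  # ~2.3, averaging is too small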

The thing is, you're not looking at 31 subjects; you're looking at the numbers that improved when taking the differences. Smaller counts, larger relative errors.

And I wouldn't even believe a 2.7 sigma effect in a system where there are potentially many undetected systematics. I've seen 3 sigma and greater results go away in much less complex systems than a medical trial.

As with most of the medical trials I've seen any details of, they need much larger sample sizes to get anything I'd believe. Maybe this study can justify going to samples 10 times bigger, which would begin to get real results, but it certainly doesn't justify a news report (or even, as I said above, a paper in a learned journal).
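(To put a rough number on 'samples 10 times bigger': keeping the same proportions, the counts scale with the sample size but the Poisson errors only with its square root, so the significance grows as sqrt(scale). A sketch:)

    # Significance vs. sample size under the same quadrature estimate:
    # counts scale with n, errors with sqrt(n), so sigma ~ sqrt(scale).
    from math import sqrt

    def diff_sigma(n_test, n_ctrl):
        return (n_test - n_ctrl) / sqrt(n_test + n_ctrl)

    for scale in (1, 10):
        print(scale, diff_sigma(9.3 * scale, 2.5 * scale))
        # scale 1: ~2.0 sigma; scale 10: ~6.3 sigma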
 
posted by [identity profile] narkil.livejournal.com at 11:48pm on 15/03/2009
Ah yes, forgot the variance sum rule.
I'd agree with the quoted 'deserves further study' but nothing stronger. I think the media interest derives from the fact that these results mirror the earlier uncontrolled tests, and hence, in journalist science, confirm them.
posted by [personal profile] cdave at 02:09pm on 14/03/2009
Very well spotted. Have you posted this on the Bad Science forum?
 
posted by [identity profile] scotiva.livejournal.com at 10:28pm on 17/03/2009
Hyperbaric oxygen therapy? Stuff and nonsense. What you need is heavy metal chelation therapy!
