New Year's Resolution. Animal Experiments Are Not Reported Well Enough. Time to Change

#MSResearch #MSBlog Poor Quality Reporting is Bad #Science. Time to #ARRIVE!

Baker D, Lidster K, Sottomayor A, Amor S (2014) Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies. PLoS Biol 12(1): e1001756. doi:10.1371/journal.pbio.1001756

There is growing concern that poor experimental design and lack of transparent reporting contribute to the frequent failure of pre-clinical animal studies to translate into treatments for human disease. In 2010, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were introduced to help improve reporting standards. They were published in PLOS Biology and endorsed by funding agencies and publishers, including PLOS and the Nature Publishing Group, their journals, and other top-tier journals. Yet our analysis of papers published in PLOS and Nature journals indicates that there has been very little improvement in reporting standards since then. This suggests that authors, referees, and editors generally are ignoring guidelines, and the editorial endorsement is yet to be effectively implemented.

The ARRIVE guidelines are a 20-point checklist of things to report when writing a paper, designed to improve transparency. Over 300 journals endorse them, and adherence is a condition of grant for many funders. It is one thing to endorse the guidelines, quite another to enforce the endorsement. Without implementation, the endorsements are just hot air.

So how many journals are just windbags?

Whilst reading a few papers on MS models, we noticed that a fair number were analysing the data in a statistically inappropriate way, one more likely to produce a positive result.

So as part of a student project we collected all EAE papers published in the first six months of 2012 and looked to see how the data were analysed. Only in 40% of cases did we consider the analysis appropriate. We then analysed the top journals, and they did even worse, with a shoddy 4% deemed appropriate. This was despite the presence of guidelines suggesting how to do it.

So do people not read or stick to guidelines? 
And do journals that endorse guidelines enforce them? 

So we looked at papers in Nature and PLoS journals for the two years before and the two years after the journals had endorsed the guidelines on reporting of animal experiments. The result was... there was no difference. So hollow words were met with hollow gestures.

It is about as much use as signing up to an EU treaty and then not enforcing it :-). 

We have been saying for some time that reporting standards in animal studies relating to MS are poor. This study indicates that this is indeed the case!

We believe this is a common problem that is not peculiar to MS research, because the figures we found for EAE studies were in the same ballpark as those reported for stroke and other neuroscience studies. This study indicates we all have bad habits.

Because of this, reporting guidelines have been proposed for animal studies, just as they were previously for clinical trials. These help one understand what was actually done, interpret the data, and judge how biased the results may be. This, in turn, makes it possible to repeat and validate the work. 

Reading papers day in and day out for the blog makes one realise that many papers get published that really tell us very, very little. 

I know you know this, because I frequently post papers on fluffy clinical stuff that infuriates you: it is so obvious..., irrelevant to you..., or simply repeats the same stuff over and over again. Such studies can be done because most cost very little. Animal work, however, is expensive, yet I see the same obviousness, irrelevance and repetition in the animal literature too; I just usually do not post on it. 

We are making comments now, not to be policemen of the literature as some of you may think, but to try and get EAEers/EAEologists to think and change their ways. 

This is because the hatchet is going to fall and restrict their funding opportunities. Actually, it has already fallen, and bad practice gives the naysayers ammunition to limit funding further. The question is how quickly this will happen. 

Good quality science is more likely to survive than poor quality guff. However, it is also important that poor quality guff dressed up as good quality is exposed for what it is, which is just guff, before it creates dogma that takes years to dismantle. Unfortunately this guff occurs in some of the best journals and the literature is full of stuff that is unreproducible.

There is a growing tide in the clinical fraternity who do not care what animal studies show. Bad science panders to that view. The apathy towards animal studies is filtering through to the granting agencies, and if scientists do not buck up and improve, then the money flowing their way will dry up! However, these types of studies underpin the treatments of tomorrow, so we do need them!

The funding stream targeting immunology is drying up as more and more anti-immunological agents reach the MS pharmacopoeia. This will make it ethically harder to justify the study of immunology in such severe models, because if scientists want to study immune mechanisms, there are less severe models that can be used. If they want to help MS... maybe it is time to change.

What was shocking in this study was that the so-called "high impact" journals were setting such a bad example. The statistical analysis was, in our opinion, wrongly used in over 95% of cases in top-tier journals and over 60% in the total literature... truly shocking. 
Why does this happen? Well, the so-called opinion leaders were often using the wrong statistics, ones that make it easier to see differences. Those outside the field, doing EAE experiments with no thought of MS but chasing a disease mechanism, follow in their footsteps. 

But two wrongs do not make a right. 

We published some guidelines (look on Google Scholar to find a PDF) to help elucidate and eliminate this, and it sadly reflects badly not only on the authors but also very badly on the referees that the journals use. So whilst they are making authors spend years doing extra experiments to pander to their whims and dot a few i's and cross a few t's, they are ignoring the basics, which might help one take a step back and realise that the results are unlikely to be of any relevance to human biology. 

We called out the journal Nature on this, and a few months later it introduced a checklist during submission. Now the PLOS journals have been called out, and they are likewise responding with a requirement to document reporting guidelines. This has already happened at PLOS Medicine, and it seems to be coming to PLOS ONE in 2014, so be ready for this. 

I suspect there will be lip service paid to this, but it is easy to check, because the papers are there for all to read. These are easy student projects that cost essentially nothing to do and can be done in a few weeks. 

The question is which journal(s) will be next? A follow-on study has already been done and is being written up!

If you are a Journal Editor it could be yours.....maybe time to pull your socks up! 

Adopting ARRIVE is a condition of grant for some funders. All they have to do is look back at the papers you publish, which you give them in your grant reports. That would take a few minutes, and you hang yourself... maybe you won't get that grant next time. 
That's if funders like the Wellcome Trust, the Medical Research Council and the Multiple Sclerosis Society (UK) put their money where their mouths are. 

If grant agencies do not do this, then they will be just as bad as the journals: full of hollow words. 
Full of Hot Air
If you are a scientist, then maybe for your New Year's resolution you should think about your experimental design and how you report it; most aspects of the guidelines are easy to implement. 

If you are a reader, ask whether there is any quality control in the system. If there isn't, are you going to pin your hopes on the story being meaningful? 

CoI: This is work from TeamG.