Science under attack

This page is a response to issues raised by the former President of the Royal Society in a television documentary.
(“Science under attack”, BBC2, 24 Jan. 2012; hyperlinked below.)


The threat

"Science" in its broadest sense, from sociology to medicine and engineering, is under attack.

The attacks are being made on two fronts: by a powerful anti-science lobby from outside, and by a growing number of dishonest researchers within.

The tiny number of corrupt researchers poses the most serious threat, because their bad behaviour casts doubt on the whole of science.

On this page we suggest defences against both forms of attack.


Some evidence of the attacks

The dishonest scientists
Dr. Ferric C. Fang, editor in chief of the journal Infection and Immunity, is credited with exposing the increase in scientific fraud.
He did this by plotting the number of retractions of unreliable research papers against time.

His findings are discussed at http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?pagewanted=all&_r=0

Famous cases of bad research that have damaged the reputation of science include

(i) The MMR vaccine controversy http://en.wikipedia.org/wiki/MMR_vaccine_controversy

(ii)  Two research papers on statins that overestimated their harmful side effects by twenty times. http://retractionwatch.com/2014/05/14/bmj-authors-take-back-inaccurate-statin-safety-statements/


The anti-science lobby

"Climategate" is the most famous example of cynics casting false doubt on science. http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy


Finance issues

Research fraud can waste taxpayers' money, but solving the problem will also require money. National Audit Offices should carry out regular assessments of the cost of fraud to the public purse, and then make recommendations about what fraction of the next round of public research funding should be allocated to fighting fraud.

Hopefully, this fraction will fall with time as a new generation of ethical researchers emerges.

The following contents have recently been fleshed out and published as a journal paper:
W. Courtney, ‘A private researcher’s struggles against research fraud. II. Suggestions for reducing the fraud problem’, 17 (2017) 81–88.



1  Reinforcing our commitment to research ethics

    1.1 People
    1.2 Failing with dignity
    1.3 Women in science
    1.4 Institutions
    1.5 Learning from others
    1.6 Competitive research integrity
    1.7 Assertiveness stress tests for research funding bodies
    1.8 Removing cheating skills from the school curriculum

2  Rebuilding trust in the quality of scientific information

    2.1 Armour plating the peer review system
    2.2 Anonymity of peer reviewers
    2.3 Overcoming peer review bias
    2.4 Accreditation of science research journals
    2.5 Research papers for the wider public
    2.6 Tweaking the Science Prize system


1 Reinforcing our commitment to research ethics


1.1 People

Good behaviour is best learned before temptation arises.

In the long term a cohort of students with a strong commitment to research ethics will deliver a multiplier effect by their example. Many will become parents of the next generation of students, some will go into research management and others will go into teaching science at school level.


Fake news and the improvement of research ethics standards

In general, fake news is a lot harder to pin down than research fraud. This is because science is validated by experiment, but the honesty of news depends on the honesty of the reporter.

By promoting a culture of honest scientific inquiry in young student minds we are also helping to strengthen this honesty culture in other aspects of public life.


1.2 Failing with dignity

One of the most productive aspects of being human is our ability to learn from our mistakes. But the aggressive, stereotypically male culture that contaminates modern science stifles this trait. Worse still, it increases the temptation to act fraudulently to hide failure and avoid humiliation within the peer group.

Here are some suggestions for changing this culture:


1.3 Women in science

Professor Dame Sally Davies, UK Chief Medical Officer, is quoted as saying, “I think I suffer from imposter syndrome. I worry whether I’m good enough and if I can do the things that are being asked of me – which is typical of women.” (Sunday Times magazine, 27 Sept. 2015, p 86.)

Imposter syndrome needs to be eliminated. But meanwhile, it should be seen as an asset for science.


1.4 Institutions
Universities and other academia-related institutions find themselves in an invidious position when they are called upon to investigate internal fraud: the more rigorous and honest they are, the more damage they can cause to their own reputations.

This disincentive could be avoided by setting up an Independent Research Complaints Commission and making it compulsory for all written complaints, no matter how trivial, to be logged with the Commission.
To motivate prompt logging, complainants should have the option of sending a blind carbon copy direct to the Commission.

A two stage investigation could be employed:
(i) A preliminary investigation into the plausibility of a complaint would be carried out by the university.
(ii) If necessary, a deeper investigation would follow, chaired by a member of the Commission, with all other members of the investigation panel coming from outside the institution raising the alarm.

Ideally, the Commission would include non-academics whose first loyalty was to justice, not academia.

This proposal is bureaucratic, but the problems raised by the flawed enquiry into the PedSALi project suggest that, in the broader interests of science, independent enquiries are essential.

The Complaints Commission will need funding. This could be raised by adding a small "Integrity Tax" to the costing of all publicly funded research. The tax will pay for itself if it leads to improved research quality and greater public trust in science.

Individual names should be removed from the written complaints and the logged data published on the Complaints Commission web site. This would allow the honest members of the academic community to play a more effective role in policing research and higher education quality. For example, it would provide the raw data that could be crunched by software to identify weaknesses in the research quality control system and predict new trends in research fraud.


1.5 Learning from others
British universities should follow the example of James Cook University in Australia. It openly declares that it has

"zero tolerance to fraud and corruption and actively discourages such activity."

Two personal examples of how zero tolerance would have focused minds:
(i) The Research Integrity Office at Manchester University, UK, refused to examine evidence that an enquiry into research fraud was itself fraudulent, because the complaint had not been received within a 10 day time limit.
(ii) Five years later, a UK research funding body, the EPSRC, called upon Manchester University to sort out this fraud cover-up mess. But again, the University wriggled out of holding an enquiry by using a lame excuse.

A clearly written policy of zero tolerance would have discouraged staff from exploiting bureaucratic loopholes in these cases.

1.6 Competitive research integrity
Competition between universities is healthy, and the various league tables comparing teaching and research performance have a role to play. In order to boost public confidence in the rankings, each institution's score should be weighted to take into account its policies, honesty and transparency in tackling research fraud and other integrity-related issues.
Policies should include zero fraud tolerance and a whistleblower system that is effective, but protects against malicious complaints.

It may also be worth creating an international award for institutions that have shown outstanding integrity in tackling fraud, even when this has caused short term public image problems.


1.7 Assertiveness stress tests for research funding bodies

The author of this article, Bill Courtney, is a private researcher who has worked with universities and engineering companies.

His experience is that funding bodies are prepared to accommodate misuse of funds by academics, but carefully monitor funds handed out to the private sector.

For example, as the lead partner for the PedSALi project, Bill alerted the Engineering and Physical Sciences Research Council (EPSRC) on several occasions that the University work was going badly wrong. But nobody responded to his warnings. Also, in the four years that the project ran until it collapsed in failure, the named EPSRC representative never met Bill or visited the University to see what was going wrong. Bill's warnings were not passed on to the EPSRC referees, who, on the basis of self-assessment by the University, paid the University in full. The EPSRC also rubber-stamped the University's self-assessment that the work was “Tending to internationally leading.” [See sections 3 and 4 on this linked page for proof.]

Update July 2015. The EPSRC has reviewed Bill’s evidence and called for Manchester University to take action.


In contrast, Bill’s latest research was funded by Innovate UK and only involved partners from the private sector. Quite rightly, it was rigorously monitored by a visiting Innovate UK officer every quarter. This kept us on our toes.

 Here are three proposals for making funding bodies more assertive when dealing with academia.

 (i) “Trust nobody” should be the guiding principle.
"Academic freedom" is a right to investigate uncomfortable truths, not a right to ignore them.
Academics are just as likely to go astray as industrialists when handling public funds.

(ii) The funding bodies themselves need monitoring.
A guardian of public funds should regularly stress test the funding bodies' procedures to ensure that their financial monitoring systems are not “going soft” and that they remain capable of detecting and combating the latest types of research fraud.

(iii) The guardian body should offer a last resort hotline for whistleblowers such as Bill to submit their evidence. Hotline contact details should be prominently displayed on all contracts. This will help to keep the funding bodies themselves on their toes.


1.8 Removing cheating skills from the school curriculum

(i) Repeating classic experiments

The way in which we teach young people to do science can inadvertently train them in the arts of cheating. This is especially true in physics lessons where practical work places great emphasis on measurement, through the process of determining constants. This is convenient for the teacher because the accuracy of the result provides a quick check on the student’s experimental performance.

But students are not stupid and recognise this.

In most cases the accepted value of these constants is readily accessible to the student: for example, the thermal conductivity of copper or the refractive index of glass. So there is a strong temptation to please the teacher, and obtain high marks, by adjusting experimental data to deliver a "good" value for the constant.

We need to re-examine the practical science experience in order to shift the emphasis away from “accuracy” towards honesty, and from determination to exploration.

Here is an example of how we could do this based on a physics experiment to determine the acceleration due to gravity, g.

 The standard experimental procedure can be summarised as follows.

Use the formula for the periodic time, T = 2π√(l/g), to determine g using five different values of l.
Plot your results in the form of a suitable straight line graph.

The student is likely to know the accepted value of g, and there will be a temptation to ignore any readings that are “too far” from the expected trend line.

 Using the same apparatus a more fruitful scientific experience will be gained if we change the aim of the experiment as follows.

Use the formula for the periodic time, T = 2π√(l/g), to predict the measured values of T for five different lengths of pendulum, l.
Then plot a suitable straight line graph to compare your predicted and experimentally determined results.

 The student would be posed a series of questions that required written answers. The aim of the questions would be to steer the student towards improving their experimental skills, rather than improving on their value of “g”. Instead of being rewarded for ignoring results "too far" from the trend line, the student would be rewarded for explaining them.
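The prediction-first version of the experiment can be sketched in code. The following Python fragment is purely illustrative (the chosen pendulum lengths and the use of the accepted value of g are assumptions): it generates the predicted periods that the student would then compare against their own measurements, rather than "correcting" measurements to fit a known answer.

```python
import math

g = 9.81  # accepted value of the acceleration due to gravity, in m/s^2

def predicted_period(l):
    """Predicted periodic time T = 2*pi*sqrt(l/g) for a pendulum of length l metres."""
    return 2 * math.pi * math.sqrt(l / g)

# Five illustrative pendulum lengths in metres
lengths = [0.2, 0.4, 0.6, 0.8, 1.0]

for l in lengths:
    T_pred = predicted_period(l)
    # The student records their measured T alongside this prediction and
    # explains any discrepancy, instead of discarding awkward readings.
    print(f"l = {l:.2f} m  ->  predicted T = {T_pred:.3f} s")
```

A plot of T² against l should then give a straight line through the origin for both the predicted and measured values, making disagreements visible and worth explaining.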


Figure 1. Are we teaching young people to cheat as scientists
         by rewarding the wrong goals for practical work?

(ii) Doing original research

School pupils should be encouraged to do real research that is of value to their community: for example, monitoring air quality, ash dieback or species populations. This will emphasise the importance to society of the honest recording of experimental results.

Where opportunities are available, PhD students and other academics should employ school pupils as research assistants to help them collect data. This work could be done in the field or in a research laboratory. The academics should give feedback talks to the pupils, to maintain their interest.
The story of the Mpemba effect should be told, and academics encouraged to listen to suggestions from pupils, for improving experimental design.
[Mpemba was a 13 year old Tanzanian schoolboy who honestly reported his scientific observations, even though they sounded ridiculous and his science teacher dismissed them. But Mpemba had the last laugh, because his discovery was eventually taken seriously and he changed science. https://en.wikipedia.org/wiki/Mpemba_effect ]


2 Rebuilding trust in the quality of scientific information

2.1  Armour plating the peer review system
The following proposal offers the additional benefit of acting as a cloud storage system for raw research data.

By common consent, the peer review system remains the gold standard for quality control of published research papers. But the system is not perfect because referees are busy people who cannot check all of the claims made in a paper. As a result the occasional bad paper still slips through the peer review system.

We propose that national or regional data storage centres should be set up that researchers can use in the manner of a read-only research diary. Researchers would submit their ongoing research results, photographs, sketches, jottings and whatever else they wished, preferably on a daily basis. Undergraduates would also be encouraged to use the system for storing their lab work results, so that they get into the habit of archiving their lab work. A short period of grace, of say 7 days, before the data was converted to read-only status would allow genuine inputting errors to be corrected.

To avoid the type of fraud where the research methodology is retrospectively changed to fit the research data, details of the methodology such as hypotheses to be tested, statistical instruments to be used etc. should be recorded in the cloud diary at an early stage in the research project.
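As an illustration only, the grace-period rule described above might behave like this minimal Python sketch (the class and method names are hypothetical, not part of any real archive system):

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=7)  # corrections allowed before an entry locks

class DiaryEntry:
    """A single research-diary entry that becomes read-only after a grace period."""

    def __init__(self, text, submitted_at=None):
        self.text = text
        self.submitted_at = submitted_at or datetime.utcnow()

    def is_locked(self, now=None):
        """True once the grace period has expired and the entry is read-only."""
        now = now or datetime.utcnow()
        return now - self.submitted_at >= GRACE_PERIOD

    def correct(self, new_text, now=None):
        """Genuine inputting errors may be corrected, but only within the grace period."""
        if self.is_locked(now):
            raise PermissionError("Entry is read-only: the grace period has expired")
        self.text = new_text
```

For example, an entry submitted on 1 January could still be corrected on 3 January, but any attempt to alter it on 9 January would be refused, preserving the "read only research diary" property.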


*  This resource would discourage researchers from exaggerating their sample sizes, with claims such as "the test was repeated ten times" or "3,000 patient records were examined" being verifiable.

*  When submitting a paper for referee scrutiny, the authors would add hyperlinks to their “research diaries” so that their claimed results could be cross-checked. Each researcher's archive would also include digital copies of any research papers that they cite, with the relevant sections highlighted.

*  This resource could also be used by referees in a manner analogous to the current anti-plagiarism software.

*  It would not be realistic for referees to rigorously check all papers. But the possibility of their doing so would discourage aspiring fraudsters.

*  Open access to the raw data following publication of papers would encourage informal post-publication peer review.
To encourage this practice, openness needs to be rewarded. For example, all papers and references to papers that offer open access to raw data should be entitled to carry a "bragging rights" indicator such as <O. A. Data> (We offer Open Access to our data.). This should improve the citation ratings of papers that are open to scrutiny.

*  An electronic "tamper-proof research diary" system would create extra tasks for researchers but it would also offer them protection against their own work being plagiarised or misquoted.

*  Occasionally a retrospective review may indicate that data is seriously flawed and could mislead the science community. It should be possible to retract this data, but only with the approval of an external adjudicator.

*  The cloud would also allow researchers to use each other's data for metastudies and computer simulations, with a blockchain system being used to prevent plagiarism.

Blockchains: Research data would change hands in the manner of bitcoins. Generating the original data or adding value by processing would be the equivalent of minting bitcoins. This approach would increase the value of meta-analysis, which is one of the key tools for highlighting anomalous research results. The publication of negative results would generate "bitcoins" of equal worth to positive results, encouraging their publication and funding support. Advanced algorithms could be used to assign an approximate value to the novel information.

As a "Manchester Man", Bill Courtney urges Manchester University to take the lead in developing a "researchcoin" system.

[Blockchains are explained at http://www.investopedia.com/terms/b/blockchain.asp There is also an excellent visual at - http://wiht.link/blockchain-IG]
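As a rough illustration of how a tamper-evident research diary chain might work, here is a minimal Python sketch (the function names and entry fields are hypothetical; a real "researchcoin" system would add timestamps, digital signatures and distributed verification):

```python
import hashlib
import json

def entry_hash(payload: dict) -> str:
    """Deterministic SHA-256 hash of an entry's data plus the previous entry's hash."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, data: str) -> list:
    """Append a diary entry; each entry commits to its predecessor's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"data": data, "prev_hash": prev_hash}
    entry["hash"] = entry_hash({"data": data, "prev_hash": prev_hash})
    chain.append(entry)
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash in order; any retrospective edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != entry_hash({"data": entry["data"], "prev_hash": prev_hash}):
            return False
        prev_hash = entry["hash"]
    return True
```

The point of the sketch is that retrospectively changing any earlier entry, for example editing a hypothesis after the results are in, invalidates every later hash, so the fraud described in section 2.1 becomes detectable.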


*  Artificial Intelligence agents could be used to check the cloud data and provide a supplementary form of peer review. Some of these reviews would be gibberish, but they would provide an original perspective. This would help to keep human reviewers on their toes by presenting criticisms that needed to be addressed. "AI peer review" would be a cost efficient method of correlating raw data stored in the cloud with published data.

*  Peer review fraud. Ingenious methods for hacking into the peer review system are discussed by The New England Journal of Medicine at http://www.nejm.org/doi/full/10.1056/NEJMp1512330?af=R&.
This problem could be reduced if all aspiring peer reviewers had to submit their email addresses, brief biographical details and peer reviewing record to a global directory that was accessible by journal editors.


The combined tactic of AI peer review and scrutiny of cloud stored raw data is made in response to Bill Courtney's own distressing experience of his research being abused by colleagues and a subsequent formal enquiry panel at Manchester University. Please click on the hyperlinks below to see the corruption of research evidence that inspired this proposal.

(i)  Pedestrian safety using SALi Technology

(ii) A research supervisor blocks publication of inconvenient research. See References 18, 19 and 20 on the same PedSALi page.

(iii) Bill Courtney presents evidence that the following journal paper willfully misquotes his research.

(iv) Evidence that Manchester University researchers misled the EPSRC.


*  An emerging market for journals specialising in publishing negative results?
Existing journals want to publish positive research results. But the increasing use of meta-studies as a research tool, combined with creating "bitcoin" value for negative results, would give added impetus to the publication of honest, but less than encouraging, research outcomes. Tactful journal titles would be required, for example, "Meta-studies Supplement".


Undergraduate coursework cheating

This is a different animal to research fraud, but lives in the same stable.

Bespoke essay-writing services would find it far more difficult to operate profitably if students were obliged to store their date-certified draft essays and research notes in the cloud, for possible inspection by their tutors.



2.2  Anonymity of peer reviewers
Peer reviewing is vitally important but time-consuming work. Occasionally reviewers will undertake the work to puff up their CVs rather than to contribute to research quality control. This can result in sloppy reviewing. An example of sloppy reviewing, where reviewers failed to spot violations of the laws of physics, is provided on this linked page. Evidence of the publisher's failure to tackle the problem is presented on this second linked page.

Anonymity favours sloppy reviewers at the expense of the good ones. This suggests that anonymity for reviewers should be abandoned. But there are also strong arguments in favour of anonymity. We need to test this argument scientifically by allowing consenting reviewers to have their names published and evaluating the consequences over a period of years.

Perhaps we need a new journal dedicated to research into scientific communication, journal and peer review standards!


2.3 Overcoming peer review bias
Peers bring a huge amount of specialist expertise to the peer review process. Generally speaking this is useful, both for reviewing research papers and for assessing new research proposals. But it also brings bias, because nobody likes to approve publication of a paper, or support a research proposal, that undermines their own career interests. For example, a flawed paper that supports the reviewer's career interests is more likely to be waved through than a moderately good paper that threatens them. To reduce this bias, review systems need to include skilled outsiders. These could include full-time science teachers who have a career interest in putting review work on their CVs, but no career interest in a particular line of research.


2.4 Accreditation of research journals
There is a growing problem caused by "cowboy" research journals publishing ridiculously bad research. You will find several references to this trend online, for example at https://www.minnpost.com/second-opinion/2015/04/plagiarism-fraud-and-predatory-publishing-are-polluting-science-says-bioethic

Some form of world wide journal accreditation system is required so that journals can be trusted. If this is not done quickly then science will become polluted by citations of nonsense science.

To kick start the process we need an international conference of research ethics experts and other interested parties where methods of accreditation can be discussed.

This proposal is made following Bill's personal experience of a journal refusing to take remedial action when confronted with evidence that sloppy referees had failed to spot numerous errors in a paper including three violations of the laws of physics.


2.5 Research papers for the wider public

A distinguished national body, for example, the Royal Institution (famous for their Christmas lectures) could be given funding to set up a panel of academics and other experts, to write well balanced review papers on science, technology and medical issues of public interest. The papers would be written in clear, everyday language and published on the internet.

These review papers would not be in competition with Wikipedia or science magazines but concentrate on science issues that generate uncertainty or fear; for example GM crops and the MMR vaccine.

They would meet the highest standards of journal paper quality control and include the following features:

·         A comprehensive list of key words, allowing readers to carry out their own online research.

·         The papers would be refereed by named experts.

·         All journal references cited would be hyperlinked to their abstracts, with additional links to The British Library, where full copies of many papers can be purchased.

·         Question and answer and readers comments pages would be added.


2.6 Tweaking the Science Prize system

("Science" in the broadest sense of the word is being used here.)

The Nobel Prizes for science, the Queen Elizabeth Prize for Engineering, the Fields Medal for Mathematics and the like all contribute to the development of science because they add glamour and human interest to it.
But this raises a problem, because the complexity and cost of modern science means that it is rare for one or two people to make a breakthrough alone. Even in the seventeenth century Newton recognised this when he acknowledged that he was "standing on the shoulders of giants."

Here are two suggestions for tweaking the science prize system to attract young people into team science.

(i) The award ceremony citations for the big prizes in science should include a list of the "top three/five" individuals or research groups who helped to make the prize winning breakthroughs possible.
These supporting players are likely to have a far wider international basis than the original prize winners. The list would be drawn up with the help of the winners in the period between their being declared winners and receiving their awards. This would provide each prize with a second round of international media attention and introduce more scientists' faces to the public.
Hopefully, most prize winners will cite inspirational science teachers and science media stars as leading them along the path that ended in their award.

(ii) We should create additional high status prizes that reward outstanding team efforts.
For example, the award of the Nobel Prize for Physics to Peter Higgs for predicting the particle named after him was extremely well deserved. But the award was only possible because an international team of scientists and engineers working at CERN delivered the supporting evidence.
Their efforts also deserve some form of high-status recognition.



1 Bill Courtney wrote to the Royal Society on four occasions warning that the unethical practices at Manchester University are attacking science from within.

 Letters and evidence were sent to the Royal Society, Signed for Delivery on the following dates:

1.      22 March 2005. To Lord May, then president. - Acknowledged by Lord May but no action taken.

2.      5 January 201. To Professor Sir Paul Nurse, President. - No acknowledgement.

3.      15 December 2011 Submission of evidence to Professor Boulton, chair of RS investigation into
Science as a public enterprise: opening up scientific information
- No acknowledgement.

4.      27 January 2012. Duplicate submission of evidence to the investigation. - Acknowledged by Frances Hughes but no action taken.


In principle, science should be trustworthy for three reasons, but the Royal Society was presented with evidence that Manchester University et al. have corrupted all three.


Reason for trusting science: Science is based on verifiable experiments.
Evidence that trust has been corrupted: The Manchester University PedSALi researchers wilfully carried out bad research in a manner that may have cost lives. (Figure 4 on the PedSALi page summarises the case.)

Reason for trusting science: Peer review of research papers ensures that only good research is published.
Evidence that trust has been corrupted: A journal publisher and a formal enquiry panel at Manchester University refused to examine evidence that sloppy peer reviewing had allowed bad research to slip through the system. (See "Overview of the fraud" on this linked page.)

Reason for trusting science: Bad or corrupt research is exposed when other researchers carry out the work correctly.
Evidence that trust has been corrupted:
(i) The formal enquiry panel was presented with evidence of good SALi research at Nanjing [1] and Cardiff [2] Universities, but declined to examine the evidence.
(ii) The PedSALi fraud has been suppressed by the science establishment [Manchester University, a journal publisher, the EPSRC, the UK Research Integrity Office]. Consequently, when Cardiff University applied for EPSRC funding to repeat the PedSALi work, it was denied because Manchester had already done the experiments. This higher level of bad behaviour has prevented the original fraud from being exposed.


1.      H. d. Teng and Q. Chen (Nanjing University), "Study on vibration isolation properties of solid and liquid mixture", Journal of Sound and Vibration (2009).

2.      Huw Davies et al. (Cardiff University School of Engineering), "Pedestrian Protection using a Shock Absorbing Liquid (SALi) based Bumper System", ESV Conference, Stuttgart, June 2009, Paper Number 09-002.


Science under Attack
This programme is available on iPlayer.

It is very good at demonstrating how public trust in science is breaking down and why restoring this trust is vital.

It argues that experimental evidence and peer review of the research makes science a trustworthy human endeavour.

It concludes that scientists must try harder to convince the public that research results can be trusted. References to corrupt research such as that relating to the MMR vaccine are glossed over.


Bill Courtney argues that this is rather like washing your car so that it looks nicer, when the real problem is that the head gasket is starting to blow.



The hidden irony
This programme argues that one of the main reasons for trusting science is that humanity must stand together to fight manmade climate change.

But Courtney's own efforts to help fight global warming (listed on the menu below) have been held back by up to twelve years by the suppression of research fraud at Manchester University.

A warning from history

Are universities in danger of following a similar trajectory to the Christian monasteries?

In their early days from about the sixth century onwards, the monasteries were seen as centres of learning and repositories of the most important relics of Christian culture.

But their sacred status meant that they were above criticism, and giving generously to them was seen as a good deed. As a result, the monks became self-important, rich and fat, and their humble dwellings grew to rival the splendour of royal palaces. In Britain, after a thousand years of expansion, they had become so morally weak that Henry VIII was able to destroy them in five years.

If only those pre-Tudor monks had had the moral courage to reform themselves before it was too late, how much richer our cultural and architectural heritage would have been!