Blackballing sections of the science community
The new US protocol that says scientists with corporate connections are unfit to judge drug safety smacks of modern-day McCarthyism.
In March 2005, the US National Institutes of Health (NIH) announced new rules purporting to eliminate ‘conflict of interest’ among its employees – including banning all consulting (paid or volunteer) for biomedical companies, and prohibiting employees or their families from owning stock in any biotechnology or pharmaceutical company.
The new rules were the latest result of a campaign championed by so-called public interest consumer groups to rid scientific institutions and review boards of ‘pro-industry’ bias. These groups demand full disclosure of funding sources and other industry ties among all scientists researching, writing and publishing on topics related to medicine and public health.
This debate goes back to the 1990s. On the editorial boards of prestigious medical journals, charges and countercharges about pharmaceutical company funding biasing data on drug safety and efficacy abounded – and new, strict guidelines were established to require disclosure of funding prior to publication. In 2001, the Center for Science in the Public Interest (CSPI), inspired by consumer activist Ralph Nader, launched its ‘Integrity in Science’ project, which provided information on the financial links between scientists, organisations and industry. As CSPI put it at the time, it was ‘concerned about the link between industry and science, and how the demands of the former can undermine the public interest mission of the latter’.
Supporters of these ‘full disclosure’ policies (and the more draconian NIH rules, which prohibit even perceived ties to corporations) argue that these rules are in the interest of consumers and have no negative effects for individual scientists or for free and open scientific debate. But this is misguided. Indeed, such policies: a) create a dichotomy between credible scientists (those with no industry ties) and non-credible scientists; b) discourage what would otherwise be successful interactions between America’s top scientists and corporations; and c) chill scientific debate by removing scientists with ‘industry ties’ from advisory and decision-making roles, leaving only those seen as pure and untainted.
These negative effects were evident in the recent CSPI attack on the integrity of scientists evaluating the safety of the painkillers Vioxx, Bextra and Celebrex, a class of drugs known as Cox-2 inhibitors.
Relying on data from its Integrity Project, the CSPI in essence argued that the only reason the Food and Drug Administration (FDA) scientific advisory panel voted in February 2005 to return Vioxx to the market, and to leave its pharmaceutical cousins Bextra and Celebrex on drugstore shelves, was that the panel was packed with pro-drug-industry surrogates who cast their votes not on their professional interpretation of the available data but to please their corporate sponsors. (This claim was the subject of a front page New York Times story, ‘10 voters on panel backing pain pills had industry ties’, on 25 February 2005.)
The implication was that biased scientists (defined as scientists with ‘ties to industry’) put their own financial interests above the health of American consumers and voted, despite the available evidence, to approve the sale of dangerous drugs.
Those who campaign against scientists’ industry ties make several questionable assumptions:
— scientists who had at some time worked for or consulted for a pharmaceutical company by definition could not be trusted to make an unbiased decision based on their interpretation of the available science;
— panelists who boasted of not having industry ties were by definition credible, honest and independent – and thus in a better position to do what was right for the American consumer;
— in the case of Vioxx and its cousin drugs, the FDA had not acted in the public interest, and in the future should be prevailed upon to exclude from scientific panels any scientist who has any tie with companies whose product was under review – whether that tie be previous employment, one or more consulting positions or stock ownership in the company;
— the only truly pro-consumer decision of this FDA panel would have been a ban on the pain-relieving drugs, and only scientists without ties could make this worthy decision.
The result is a protocol separating credible from non-credible scientists by applying a formula for modern-day McCarthyism, asking scientists who want to qualify for FDA committee membership: ‘Are you now or have you ever been the recipient of financial compensation or other revenues from drug companies?’ Indeed, the Integrity Project is actually a list – much like the list of known or suspected communists in the 1950s – of scientists who have been identified as having at some point in their careers worked with ‘industry’. A scientist on the list is apparently lacking in integrity. A scientist not on this list has his or her integrity intact. The implications of such blackballing are grave.
First, self-appointed consumer advocates – especially the Nader groups such as the Center for Science in the Public Interest and the Health Research Group – have a long history of opposing any products of technology, not just pharmaceuticals but food additives, pesticides and tools of modern agriculture and food production. These groups have a clear agenda: they oppose technology and any profits that technology generates, no matter what the benefits to consumers may be. In debating the issues, however, these groups often lack scientific evidence. So to succeed in public debates, they often rely upon ad hominem attacks on those who consider any technology (food, drugs, consumer products) to have benefits that outweigh the risks.
Second, in dismissing any scientist who has consulted with industry as tainted and non-credible, self-appointed consumer advocates overlook the reality that corporations turn to the best and brightest scientists for expert advice. If one wanted to play the ad hominem attack game, one could just as easily argue that those who are not consulted may be less knowledgeable than their consultant counterparts. Populating the FDA advisory committees (and similar committees in other federal agencies) only with people who have never worked with corporations would introduce some real sources of bias into such panels.
Third, in the case of the painkilling drugs, it was a legitimate scientific position to decide that Cox-2 inhibitors offered options to those who suffer greatly from arthritis – and offered enormous potential for reducing risks of cancer. But in the view of consumer advocates, that conclusion should only be allowed to reach the public if uttered by the most unlikely source, those scientists who have a track record of contempt for the pharmaceutical industry.
Campaigning groups are really saying that there is no legitimate position other than their own – all others are bought and paid for by industry, and they will find the ties that delegitimise them. There is, apparently, no such thing as an alternative independent, credible position.
But what matters in science is not funding, but the accuracy and legitimacy of the data generated and the conclusions drawn. The now-defunct Tobacco Institute, funded by the industry, used to claim that it had never been proven that cigarette smoking causes human disease and death. Was this outfit credible? No, but not because it got its money from tobacco companies. If the Tobacco Institute had been funded by the Easter Bunny, its pronouncements would still have been scientifically outrageous, because the controversy had long since ended over whether cigarettes are the primary cause of premature, preventable death.
If scientists who have consulted industry are viewed with suspicion, what about potential biases associated with other forms of funding – for example, funding from government, private foundations and even consumer contributions? Government regulatory agencies need a steady stream of reasons to justify their existence. If the US Environmental Protection Agency gives a group a grant to evaluate the environmental toxin du jour, might there be a bias towards finding data to justify the regulation of this chemical, thus pleasing the regulatory agency? Government agencies are not neutral; they are their own special interest groups.
Private foundations may not be neutral either. The Tides Foundation, a generous funder of most major US environmental advocacy groups, has a commitment to ridding the environment of ‘toxic’ chemicals – whether they are in farmed salmon, household dust, cosmetics or children’s toys. Why is an ideologically fueled foundation any less suspect as a funding source and a source of bias than a corporation?
Then there is potential bias that is outside the financial arena. If a researcher reports on AIDS data and policy, should we require that his sexual orientation be identified? If a scientist is evaluating policies related to affirmative action programs, should her race be identified? If a physician is reporting on the promising results of a new heart drug, should consumers be made aware that he has spent decades of his career trying to prove this drug safe and effective, and has a professional and emotional commitment to having the results come up positive so that his years of work have not been in vain?
The reality is that all scientists have personal ideologies, motivations and goals – all of which can potentially introduce bias into research. Scientific work should be evaluated on its merits (that is what peer review and replication of results are all about), not on ‘conflicts of interest’ that may or may not exist.
Dr Elizabeth M Whelan is president of the American Council on Science and Health.