Pubdate: Tue, 09 Nov 1999
Source: New York Times (NY)
Copyright: 1999 The New York Times Company
Contact:  http://www.nytimes.com/
Forum: http://www10.nytimes.com/comment/
Author: Denise Grady

DRUG RESEARCH REPORTS SEEN AS OFTEN MISLEADING

Reports of research on drugs tend to exaggerate the drugs' benefits,
making them sound better than they really are, according to an article
and editorial being published on Wednesday in The Journal of the
American Medical Association.

The exaggeration occurs for several reasons: positive results tend to
be published more often than negative ones, researchers sometimes
publish the same study more than once, and some poorly designed
studies slip through the safety net of journal editors and expert
reviewers who should screen them out.

The misleading information harms patients because doctors rely on it
to make decisions about treatment, said Dr. Drummond Rennie, a deputy
editor of the journal and author of the editorial.

Decisions based on misinformation may result in patients' being given
an inferior drug or a new, expensive one that looked good in a study
but that is really no better than an older, cheaper medicine.

"Ultimately, the patient is shortchanged," Dr. Rennie said in a
telephone interview, adding that although there were no precise
figures on the amount of misleading research, he suspected it was widespread.

His editorial and the journal article take researchers to task for
studies on drugs used to treat people with rheumatoid arthritis,
postsurgical vomiting, depression, schizophrenia and immune deficiency
resulting from cancer. Much of the research, like drug research
generally, was financed by pharmaceutical companies, which often stand
to benefit from the false impressions.

But Dr. Rennie attributed the problem not only to drug companies, but
also to researchers and the institutions that allow shoddy research,
and to journal editors and scientific reviewers who fail to detect
flawed or deceptive studies or to blow the whistle on them.

"Peer review does its best, but it's only as good as the people doing
it, and the honesty of the people doing it," Dr. Rennie said,
referring to the system in which journals ask experts to review papers
being considered for publication.

Dr. Rennie and the other authors, Dr. Helle Krogh Johansen and Dr.
Peter C. Gotzsche, of the Nordic Cochrane Center in Copenhagen,
described several sources of distortion in medical research. One is
"publication bias," meaning that studies showing positive results from
drugs are published faster and more often than studies showing neutral
or negative results, which may never be published. The net result is
that the medical literature is skewed toward studies that show drugs
in a favorable light.

Dr. Kay Dickersin, an associate professor of community health at Brown
University who has extensively studied publication bias, said that
many scientists had blamed journal editors for refusing to publish
negative results, but that she and her colleagues had found that the
scientists themselves held back the findings.

"Hardly any were submitted to journals, so they couldn't blame the
editors," she said. "When we asked why, the major reason they gave is
that the results just weren't interesting."

A second problem is that researchers sometimes publish the same data
more than once, without letting on that it has been in print before.
That may mislead doctors into thinking that there are more positive
studies of a given drug, involving more patients, than there
really are.

"It's good for everybody -- except patients and readers," Dr. Rennie
said, noting that the extra publications made ambitious researchers
look more productive and provided more studies for drug companies to
hand out to doctors.

In his editorial, Dr. Rennie described ondansetron, a drug that was
being studied to prevent vomiting after surgery. Researchers analyzing
the literature found 84 studies involving 11,980 patients -- or so
they thought. Some of the data had been published twice, and when the
researchers sorted it out, they realized that there were really only
70 studies, in 8,645 patients.

The duplicated data, they concluded, would lead to a 23 percent
overestimate of the drug's effectiveness.
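The arithmetic behind such an overestimate is straightforward: if the
most favorable trials are the ones most likely to be republished,
counting them twice pulls the pooled average toward them. The sketch
below, in Python with entirely hypothetical numbers (not the actual
ondansetron data), illustrates the mechanism.

```python
# A minimal sketch, with hypothetical numbers, of how duplicate
# publication inflates a pooled estimate of a drug's effectiveness.
# Each (successes, patients) pair stands for one published report;
# here the most favorable trial appears twice in the literature,
# as if it were two independent studies.

def pooled_success_rate(trials):
    """Simple pooled proportion: total successes / total patients."""
    successes = sum(s for s, n in trials)
    patients = sum(n for s, n in trials)
    return successes / patients

# Reports as they appear in the literature, with the favorable
# first trial counted twice under different authorship.
as_published = [(90, 100), (90, 100), (55, 100), (50, 100)]

# The same evidence after the duplicate report is removed.
deduplicated = [(90, 100), (55, 100), (50, 100)]

print(pooled_success_rate(as_published))   # 0.7125 -- looks better
print(pooled_success_rate(deduplicated))   # 0.65   -- the real rate
```

How large the inflation is depends on how favorable the duplicated
trials are and how many patients they carry; in the ondansetron
analysis it worked out to 23 percent.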

Studies of another drug, risperidone, used to treat schizophrenia, had
been published multiple times in different journals, under different
authors' names. The same thing had been done with studies of drugs to
treat rheumatoid arthritis, with some having been published two or
three times, and one even published five times.

Dr. Michael O'Connell, deputy director of the Mayo Clinic Cancer
Center in Rochester, Minn., an expert on clinical trials, said: "To
publish the same data again with entirely different authorship, as if
it were an entirely different data set, is reprehensible. Readers
would conclude there were two different studies that strengthened the
conclusions."

In their paper, Dr. Gotzsche and Dr. Johansen described still another
problem: a study design that seemed to stack the deck against one of
the drugs being tested, in essence guaranteeing that the other would
look superior.

The two drugs were amphotericin B (made by Bristol-Myers Squibb) and
fluconazole (made by Pfizer), both being tested to prevent fungal
infections in cancer patients with weakened immune systems. The
researchers looked at 15 studies done during the 1990's, including 12
in which Pfizer had participated by providing grants, statistical
analyses or other help.

At first, the studies appeared to show that fluconazole, a newer drug,
worked better. But when the researchers analyzed the studies more
closely, they discovered that the majority of the patients had been
given amphotericin B orally; that drug is supposed to be given
intravenously and is not effective when taken by mouth.

In addition, some of the trials included a third drug, called
nystatin, and the results for nystatin and amphotericin had been
lumped together. But nystatin was known to be ineffective, and so
combining the results for the two drugs made amphotericin B look bad.

When Dr. Gotzsche and Dr. Johansen sorted out the studies and the
contributions made by the various drugs, they concluded that
fluconazole was actually no more effective than amphotericin B.

When they asked the authors about the design of the studies, some
ignored the requests, and others said they no longer had the data.
Pfizer, contacted by The Journal of the American Medical Association,
declined to comment.

In a telephone interview, a spokeswoman for Pfizer, Mariann Caprino,
said she did not have the data on the individual studies and could not
explain the reasoning behind them.

Dr. Bert Spilker, senior vice president for scientific and regulatory
affairs at PhRMA, a trade group for drug manufacturers, said: "We
don't have a perfect situation. It probably can be improved."

He suggested that medical journals require authors to disclose
formally whether their papers had been published elsewhere in any
form, and to include that declaration in the published report. In
addition, he said, the institutional review boards that approve
studies at the hospitals where they are conducted should evaluate them
more closely to make sure that they are designed properly.

The best solution to publication bias, many researchers and journal
editors say, is to require that all studies be logged into a central
registry when they begin. That way, scientists can track them.

Dr. Rennie said, "You can call the investigators up and say, 'Whatever
happened to that study you began?' and they might say, 'It was a
disaster.' And then you can ask, 'Why didn't you publish it?'"

Some drug companies have begun to register their trials, but others
resist, partly for fear of revealing information that their
competitors might use.

"They have proprietary interests, which I respect," Dr. Dickersin
said. "But there is a larger interest here of society as a whole."