Health reporting needs critical thinking

An article in the Columbia Journalism Review tries to deflect responsibility from science journalists by arguing that the nature of the material makes their reporting flawed and partial.  The author, David H. Freedman, maintains that health science is multi-faceted and subject to error in ways that reporters cannot overcome with due diligence:

Science reporters—along with most everyone else—tend to confuse the findings of published science research with the closest thing we have to the truth. But as is widely acknowledged among scientists themselves, and especially within medical science, the findings of published studies are beset by a number of problems that tend to make them untrustworthy, or at least render them exaggerated or oversimplified.

Reporters are responsible for understanding their beat. Science reporters are responsible for addressing the context and contingencies of the material they cover, and applying the appropriate criticism. If they don’t understand these factors, they shouldn’t be reporting on science.

Freedman notes the impact of publication bias, the pressure to report new and exciting findings:

Typically, something is exciting specifically because it’s unexpected, and it’s unexpected typically because it’s less likely to occur. Thus, exciting findings are often unlikely findings, and unlikely findings are often unlikely for the simple reason that they’re wrong.

Journalists are well aware of this distorting pressure, and accounting for it is the responsibility of journalists and venues (magazines, papers, etc.).  The subject matter does not defy, even if it does frustrate, critical thinking and accurate reporting.

Responsible reporting demands more than repeating findings.  While perhaps only one in three findings in medical journals ends up being true, a journal's publication standards suit its audience of specialists, who bring an appropriate scepticism.  Such informed doubt cannot be assumed of the general public whom science reporters address and to whom they are responsible.  The public will not generally share the understanding, laid out in Freedman's article, of the distorting factors in medical and health science; science journalists do, and it's incumbent on them to use that understanding to reflect critically on new studies.

Eventually, Freedman concedes this:

Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

But he begins his analysis by denying the fault of reporters:

The problem is not, as many would reflexively assume, the sloppiness of poorly trained science writers looking for sensational headlines, and ignoring scientific evidence in the process. Many of these articles were written by celebrated health-science journalists and published in respected magazines and newspapers; their arguments were backed up with what appears to be solid, balanced reporting and the careful citing of published scientific findings.

If the existing standard is one of celebrated journalists producing misleading coverage, that suggests a deep problem in the standards for science reporting, and in what counts as solid, balanced health journalism.