Recursive Fury Backfires

Retraction Watch informs us that Lewandowsky et al’s unusual paper “Recursive fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation”, which was published in Frontiers in Personality Science and Individual Differences, has been quietly removed (without formally being retracted) from the journal’s website.

I’m sorry to report this, as I consider two of the authors to be friends. But there’s no point in pretending this isn’t happening.

The discussion at Retraction Watch is interesting and explains some substantive grounds for complaint. The usual denialist foaming is absent; the complaints on their face are not easily dismissed.

Shollenberger:

The first time the paper was removed was in response to complaints by Jeff Condon. It was taken down and modified to address his concerns. However, another individual (who goes by the handle Foxgoose) had also taken issue with the paper as it had misrepresented him.

When the paper was reposted, it was reposted in two versions. One, a web version; the other a .pdf file. These two versions were not the same. The .pdf file had been modified to address Foxgoose’s concerns. The web version had not been. This meant there were two different versions of the paper on the journal’s website. Combined with the original version of the paper, that makes three different versions of this paper.

There have been three different versions of the paper published. There is no public record of them or of the changes made between them. Two of them were available from the same page at the same time.

I don’t think this is the way to go.

Omnologos:

it’s unethical for psychiatrists and psychologists to diagnose people they meet casually. it’s defamatory for them to publish “professional” diagnoses of the mental status of named persons without their consent. it’s against all privacy laws for a professional to suggest any particular individual suffers from any condition the professional is a specialist in. etc etc

there’s a reason why psychology journals aren’t chock-full of articles having as topic the telediagnosis of past referees, previous editors, and responses to old articles.

Toby White:

This may be one of a tiny handful of cases in which a journal might be justified in erasing, rather than withdrawing, an article. The circumstances here are almost unique.

The lead author is a psychologist. He reports that he has recorded and analyzed the responses of a number of people to a particular event. On the basis of that analysis, he draws certain professional conclusions about the psychological and cognitive status of his subjects. He writes up his data, analysis, and conclusions and submits them for publication. Whether he did so well or badly, this is simply the paradigm of academic psychology. Forget climate politics. Forget “provocative” titles. Don’t even worry about whether this is good science or not. Measure it only against the professional obligations implied by the paradigm.

First, the senior author has an extraordinary conflict of interest. The behavior under study is precisely public criticism of the author’s professional competence. Psychology in particular has a deep concern with the distortions caused by even relatively trivial conflicts of interest.

Second, it is probably safe to assume that Prof. Lewandowsky did not write his Psych. Sci. paper simply to create the experimental conditions for the Frontiers paper. Still, negative reactions to the Psych. Sci. paper were entirely predictable. This was not a “natural” event. On the contrary, the experimental set-up (the contents and release of the then-unpublished Psych. Sci. paper) was completely under the author’s control. Thus Prof. Lewandowsky created, controlled, conducted, analyzed, and published a psychological experiment without any disclosures to, or consent from, the subjects.

Third, regardless of whether consent was required for the experiment, the authors published individually identifiable information about, and analysis of, the mental health and cognitive status of their subjects. This is not simply bloggish, lay opinion. This is, mind you, presented as objectively determined, scientifically verified analysis by professional psychologists, published in a professional journal — concerning named individuals who were not willing subjects and did not consent to participation in a study, or to the release of personal mental status information.

Fourth, some of the information then turned out to be wrong.

Perhaps, despite appearances, this is all ethically acceptable in psychology. But, if not, Frontiers has a hard choice. They really shouldn’t proceed to publication. It’s an ethical minefield. But retraction or withdrawal, with detailed explanations, would look like an attempt to cast blame on the authors or others — and might make things worse. Having gotten this far into the process, duck and cover may be the best, and perhaps even the most ethical, choice among rotten alternatives.

Not much appears in defense of the article as yet at Retraction Watch. I hope something does but I have to say the conversation so far is not reassuring.

The attacks on Marcott et al have been ludicrous and laughable. The attention given them by the media has been overwrought and easily dismissed. This is a very different story, and it probably isn’t going away. All that said, “everybody knows”. It is obvious that ideation at denial sites is in fact paranoid. The question is whether and how to report it in an academic journal. And why.

Lewandowsky et al, like Marcott et al, knew they were marching into the bullseye.

Marcott made his point about lack of robustness in the “uptick” in a somewhat unclear way, and the press release confused the research with the context. These are avoidable and unnecessary mistakes given the reasonable expectation of the Full McIntyre treatment. But in the end, the critique is vapid and the result stands unscathed. The whole episode just makes the denier world seem every bit as ridiculous as it ought to look.

Stephan’s papers are a very different matter. They take the ridiculous aspect of the denier world and expose that ridiculousness in an academic context. To those of us familiar with the debate they are simply a realistic exposition of what is actually happening. If the academic literature is intended as an exploration of truth, saying something that is true seems more than defensible. I think it’s important that the main point – that the ideation of climate denialism is fundamentally paranoid – is in the public discourse. It is past time that it was. And withdrawn or not, this paper will have achieved that. Indeed the deniers are somewhat trapped by this tar pit.

On the other hand, law, journalism and academic publishing are bound by a duty of objectivity and neutrality. This makes for a very sluggish process of social decision-making under controversy. The rules of objectivity chafe when confronted with urgency, and indeed they are often abused, but they exist for a reason.

It remains to be seen whether the paper is judged transgressive, and whether any such transgression is judged worth the costs. The one thing we can be sure of is that we haven’t heard the last of this.


Comments:

  1. "These are avoidable and unnecessary mistakes given the reasonable expectation of the Full McIntyre treatment."

    Outside the blogosphere is there much knowledge of the "Full McIntyre treatment"? Would a newly emerged PhD or the NSF press release dept. have any knowledge of McI, McK, Watts et al?

    Should they?

    • Whether they should have or not, it's clear that Marcott in fact was well aware of what was coming.

      "Marcott admitted he was apprehensive about charging into the fully-mobilized troll army, but said he was grateful scientists like Mann had "gone through hell" before him to build a support network for harassed climate scientists.

      "When Michael came along there was a lot more skepticism about global warming, but the public has come a long way," he said. "I'm curious to see how the skeptics are going to take this paper."

      http://www.nationaljournal.com/energy/climate-change-even-worse-than-we-feared-20130311

      h/t David Appell

  2. Curious. Why would the reviewers not pick up on any of this? Psychology is a field quite different from the "hard" sciences. Could the reaction to this paper be something akin to a carpenter evaluating a plumber's work?

    I'll be interested to see how this plays out.

  3. I don't understand why this is being spun as an assessment of the sanity of fake skeptics. Having a paranoid bent doesn't automatically mean you're nuts. And they are definitely paranoid; just read McIntyre's stuff. Isn't there a story about how he couldn't access some ftp sites while at an airport and immediately thought that They were out to get the Grand Auditor?

  4. Some day you'll get something right. I'm convinced. But it's not today. What Lewandowsky did was as unethical as what Anderegg, Prall et al did. It has as much to do with science as bear-baiting in London 300 years ago. But it was never meant to serve as science--just propaganda.

    You're free to choose your friends. Having chosen them, remember that they become one of the ways people view you.

  5. I've been wondering whether this was more a piece of science or performance art/comedy, and whether it would be possible to be both simultaneously. Toby White's comments quoted above seem to suggest more the latter, and perhaps that 'both' is not possible. I wonder whether, had someone else, unconnected, carried out this research, it would still have stood. Points 3 and 4 would remain as issues, but would they be enough to disqualify the paper? Perhaps part of the fault is that others aren't stepping up to do this research.

  6. This is, I think, a reasonable approach to the Lewandowsky issue. The paper has done some damage to the climate change cause - acknowledging openly that there are problems with what was done helps to limit and confine the damage.

    Regarding the comment "Stephan’s papers are a very different matter. They take the ridiculous aspect of the denier world and expose that ridiculousness in an academic context. To those of us familiar with the debate they are simply a realistic exposition of what is actually happening." - if this is so, then it needs to be demonstrated properly. I think that, being partisans and participants in that debate, Lewandowsky et al. assumed that what seemed obvious to them would be obvious universally, and didn't take as much care as would be needed to demonstrate this to rigorous scientific standards.

    For a start, they would need to sample actual confirmed sceptics. The degree of scepticism would need to be quantified and classified. They would need to understand the sampling distribution, its correlations with the features under study, and what biases it might introduce. They would need impartial and general tests of conspiracy ideation that, for example, picked up the "oil-industry-funded conspiracy" in the same terms as the "UN Agenda 21" conspiracy theory. Ideally it ought to test the tendency using politically neutral scenarios and storylines. They would need to determine the reasons for holding a particular belief: do they have actual evidence, or is the lack of evidence itself seen as support for a cover-up? What reasoning do they use? Can you analyse that reasoning for formal correctness? Can you distinguish whether it is the person, or the degree of motivation, or the political context that matters?

    And you would need to deal with the ethics. You need to ask a representative sample to contribute justifications for their beliefs, having told them in advance that you were going to analyse them for reasoning methods and publish the results. You need to obtain informed consent, and protect anonymity if required. You need to have authors who are not partisans in the war, with critical review in advance of the design/methods/interpretation from people on both sides. You need controls, where people's reasons for belief in non-controversial proposals are examined using the same methods. If possible, you may need to find ways to blind the observations - for example, by mapping the concepts of the climate debate to an analogous one. And of course, as a matter of scientific enquiry, you would naturally be interested in doing matching surveys on both sides - to identify similarities and differences.

    It would certainly be an interesting and possibly important study. But trying to do it in the unscientific and partial way it was done here is counterproductive. Not only does it not produce any usable results, it reflects badly on the scientific competence of the side that produced it, and of the journal and reviewers and ethics committees that passed it. I am sure that as scientists you can do much better. I don't see any reason why a properly designed experiment with this aim shouldn't be done.

    • I am not sure it needs to be demonstrated at all. That's the question I was asking.

      Assuming it is a question worth asking, as an earth scientist, I am tolerant of extracting information from less-than-ideal datasets.

      To suggest that a better funded study might have produced a more robust result is obvious and not especially helpful. Sometimes the shoestring effort (cf. Anderegg & Prall) is the best you can do. In any case, the shoestring effort is generally needed to get enough evidence to fund a more formal research strategy.

      More to the point, no perfect study is possible, leaving the usual opportunity for up-in-arms reaction from the subjects of the study.

      There is no doubt that certain beliefs and behaviors correlate. There is a very practical issue in understanding the nature of denial, so as to discourage the cognitive style that is doing so much damage. But in cases where the publication itself is inevitably going to be part of the behavior under study, a new class of ethical question is implied. I don't think your reply really gets at the heart of the matter.

      • The paper was useful to the extent that it applied existing criteria of conspiracist thinking ("nefarious intent" etc) to the evolution of recursive conspiracy theories using data from internet blogs. (As stated in the introduction "empirical evidence to date has been sparse".)

        The biggest opposition to it came from the same people who opposed the earlier paper, including one person who was upset not so much because two words of his longer comment were quoted, but apparently because the paper got his conspiracy theory wrong (he held a different conspiracy theory).

        The issue that you seem to be raising here is an ethical one: is it ethical to analyse and report comments made in public, and to publish them in an academic journal?

        I too will be interested to see how this plays out from an ethical standpoint. The ethics of quoting and analysing comments made publicly on internet blogs - an interesting question. This question could itself be (if it is not already) the subject of another academic paper in an ethics journal. The world is changing so quickly - ethics are also evolving.

        However I hope the paper is not lost. If we are to improve communication on matters of such critical importance to policy direction (eg climate science), then an understanding of the 'how', if not the 'why', of people dismissing evidence and replacing it with a manufactured alternative 'reality' should help.

        As you've said, we see some people replacing facts with a manufactured alternative 'reality' all the time - eg the reaction to the Marcott paper provides a myriad examples (even from so-called 'scientists').

        Whether better understanding of the phenomenon is something that's important for society or not - well, sometimes we just don't know how important new research is until long after the 'answers' are found.

  7. Pingback: North Korea ‘may not be performance art’, say experts – Stoat

  8. I also take issue with the idea expressed in a couple of the comments that the authors diagnosed "the mental health and cognitive status of their subjects" and similar.

    I believe this is inaccurate. I see nothing in the paper to support that statement.

    The paper was about the evolution of conspiracy theories, not a diagnosis of the "mental health or cognitive status" of any individual. To the extent it covers cognition, it reads to me as describing group behaviour ("epistemically closed system") more than individual behaviour. (That could be just my own 'mental models' filtering.)

    In addition, the paper has a number of caveats including a warning not to 'overextend' conclusions.

  9. BTW I'm not suggesting (nor buying into the suggestion) that there are any ethical issues with the paper itself and I've not seen anything to suggest there is. Various ideas floating about the blogosphere though (more conspiracy ideation?)

  10. Pingback: Another Week in the Planetary Crisis, April 7, 2013 – A Few Things Ill Considered

    • > I am not sure it needs to be demonstrated at all.

      For what it would be worth, analyzing Jeff Id's op-ed and the ensuing conversation should be enough to demonstrate what needs to be demonstrated:

      http://noconsensus.wordpress.com/2013/02/06/lewandowsky-strike-two/

      In short, Id can't resist (his nickname making him do it) going on a rant about one lousy quantifier. When confronted with this fact, he does not have the discipline of an Auditor to cut his losses. Then Carrick tries to bully me away, which makes matters worse. Every trick gets played.

      ***

      "Reading the blog(s)", as bender would say, might be enough to prevent most of the blunders like Lew's.

      Openness is a double-edged sword: there's no need to read any mails to see what the auditors are doing. By the way, when should we expect to read the last batch of emails? Bitcoins are so low it might be time to give some to the Miracle Worker.

