Is Denying Climate Change Science a Mental Health Problem?

The elevation of science to a central theme in American politics is an extraordinary development in the co-evolution of science and society. Three months after Donald Trump’s inauguration, 40,000 or so people turned out in the rain in Washington, DC for the March for Science, with similar numbers in other cities. Given Trump’s all-out attack on the role and size of government—his proposed 2018 budget slashes almost all programs other than national defence—there could just as easily have been a March for Education or a March for Affordable Housing.
But the high profile of science in national politics has been building since the turn of the millennium, fuelled by controversies around embryonic stem cell research and, of course, climate change. Starting with the 2000 presidential campaign between George W. Bush and Al Gore, Democrats began explicitly positioning themselves as the party of science. During the 2004 campaign, the Democratic candidate John Kerry pledged: “I will listen to the advice of our scientists, so I can make the best decisions. . . . This is your future, and I will let science guide us, not ideology.”
A year later, journalist Chris Mooney published a book whose catchy title, The Republican War on Science, was later picked up by the Democratic party, which declared on its 2008 campaign website that “We will end the Bush administration’s war on science, restore scientific integrity, and return to evidence-based decision-making.” Indeed, Barack Obama’s 2009 inaugural address included the memorable promise that he would “restore science to its rightful place.”

So by the time of Trump’s election, science was already a strong issue for Democrats. But everyone wants science on their side, and even Donald Trump insisted, on the day of the science march, that “Rigorous science is critical to my administration’s efforts to achieve the twin goals of economic growth and environmental protection.”
Having science on your side, however, requires a strong voice for expertise in political discussions. And as we all know, one of the more common diagnoses of political pathologies leading to Trump, as well as to the Brexit vote in the UK, is that the voice of experts has been rejected by the citizenry.
So the rhetorical stakes around science and politics are pushed even further. “We live in an age that denigrates knowledge, dislikes expertise and demonizes experts,” wrote Anne Applebaum in the Washington Post last May. Tom Nichols, who teaches at the US Naval War College and wrote The Death of Expertise, fleshes out the diagnosis: “Americans have reached a point where ignorance—at least regarding what is generally considered established knowledge in public policy—is seen as an actual virtue.”
What makes this so discomfiting is that it cuts close to the bone of our identity as rational humans struggling to make sense of a complex world. Everyone, even Trump, says they want science on their side because being modern and rational is all about basing decisions on reliable knowledge—all the more so in the face of challenges such as climate change, pandemics and cyberwar, to name just a few apocalyptic horsemen. So you can’t make any claim to authority without implying that you have some rational, empirical basis for preferring one course of action over another.

How, then, have we at the same time come to live in a world of post-truth politics, fake news, alternative facts, and counter-narrative?
Amidst the bruising debates over issues like climate change, GM crops, stem cell research and vaccines, a number of social and behavioural scientists have begun to investigate why people come to the beliefs they hold about science. The larger agenda is to understand how our cognition limits our capacity to act in the way that the Enlightenment model of rationality tells us we should. Particular attention is focused on why people don’t more readily accept the findings of scientific experts on politically controversial issues with scientific elements.
For example, John Cook, a cognitive scientist at George Mason University, writes: “Science denial, as a behaviour rather than a label, is a consequential and not-to-be ignored part of society… When people ignore important messages from science, the consequences can be dire.” This idea – that “science denial” is a “behaviour rather than a label” – turns the act of people not accepting what experts tell them from an act of individual (and perhaps ill-informed) judgment into a coherent phenomenon that experts themselves can do research on.
Efforts to pathologise “science denial” link to a growing body of work about human cognitive limits that can be traced, in part, to the wonderful set of studies carried out by Daniel Kahneman and Amos Tversky, starting in the 1970s, on judgment under uncertainty. These established that the heuristics most humans readily use to make sense of the world on a daily basis also introduce significant biases into our understanding of it. Kahneman of course eventually won a Nobel prize for this line of research.
If people are naturally limited and biased in their abilities to see and assess the probabilistic constitution of many of the decisions that they face, it is only a short step to ask if they are, as a matter of evolutionary cognitive development, similarly limited in their more general capacity to think scientifically. And if people naturally look at the world in systematically biased ways, and if certain classes of people – say political conservatives – consistently reject the findings of science, then one might begin to explore the question of whether these two observations could be causally related.
And so, experts have begun studying why experts don’t get more respect. Scienceblind and The Knowledge Illusion are two such books by cognitive scientists published this year. As the titles suggest, they take up the question of why people understand so little about the world around them. The first of these, by Andrew Shtulman, focuses on why we don’t intuit scientific truths about the world. It looks in particular at how children’s misunderstandings of the world can help us see how difficult it is even for adults to acquire a correct understanding of how things work.
Shtulman’s central premise is that we need to leave our childish intuitions behind and accept the findings of science in order to act effectively in the world. “Intuitive theories,” Scienceblind tells us, “are about coping with the present circumstances, the here and the now. Scientific theories are about the full causal story—from past to future, from the observable to the unobservable, from the minuscule to the immense.” And the book concludes, “While science denial is problematic from a sociological point of view, it’s unavoidable from a psychological point of view. There is a fundamental disconnect between the cognitive abilities of individual humans and the cognitive demands of modern society.”
The second book, The Knowledge Illusion, by Steven Sloman and Philip Fernbach, looks not only at how little we know, but also at how we know a lot less than we think we do. “Because we confuse the knowledge in our heads with the knowledge we have access to, we are largely unaware of how little we understand.” While the authors recognise that teaching people more facts about science might not change their beliefs about the world, they also believe that if people realised how little they actually do know, they would moderate their positions on key issues, and be open to a wider range of possibilities. “Getting people to think beyond their own interests and experiences may be necessary for reducing their hubris and thereby reducing polarization.” The book attributes “antiscientific thinking” to false causal models that individuals hold in their heads, often in common with their social groups.

Both of these books share the perspective that we’re all dumb but it’s not our fault; we’re born that way. The first step is to recognise how little we each understand of the world, rather like accepting original sin.
It’s hard not to sympathise with this perspective: a little more humility in a lot more people could be good for the world. But we didn’t need cognitive science to tell us that. If recognising our ignorance is the first step, the second must be accepting what scientific experts tell us. Otherwise, what would be the point of admitting our ignorance?
I find this emerging intellectual programme around science denial problematic on so many levels that it’s hard to know where to start. Certainly one part of the problem with the idea of an innate cognitive stance toward science, and with discussions about science in the political world more generally, is the undisciplined way in which the word “science” gets used – as if particle physics, climate modelling, epidemiology and cultural anthropology have so much in common that they are substitutable for “science” in any sentence. Which science does “science denial” pertain to?
Moreover, the entire programme fetishises individual cognition and understanding by positioning the innate ignorance of the individual as the bottleneck at the intersection of knowledge, uncertainty, expertise, and political disagreement. The idea that these books implicitly endorse is that progress in tackling the complex problems of modernity is being blocked by individuals who do not accept new causal knowledge generated by science.
The effort to provide a behavioural explanation for why people might not accept the opinions of experts strikes me as not entirely dissimilar in its implications from the early ambitions for eugenics, in that it seeks in the biology of the individual an explanation for complex social phenomena. It makes one wonder: what would the appropriate treatment for science denial actually be?

Meanwhile, the situation in the science enterprise itself is hardly reassuring. There is a reasonable case to be made—and I have tried elsewhere to make it—that much of science is on the verge of a crisis that threatens its viability, integrity, legitimacy and utility. This crisis stems from a growing awareness that much of the science being produced today is, by the norms of science itself, of poor quality; that significant areas of research are driven by self-reinforcing fads and opportunities to game the funding system, or to advance particular agendas; that publication rates continue to grow exponentially with little evidence that much of what is published actually gets read; and that the promises of social benefit made on behalf of many avenues of science are looking increasingly implausible, if not ridiculous.
Maybe a little science denial is actually in order these days? The emergence of science denial as a pathology designed to explain why science is not leading to improved political decision-making seems, if nothing else, completely overwhelmed by precisely the opposite condition.
The vast scale of the knowledge-production enterprise, combined with the likelihood that much of what’s produced is not much good, makes it possible for anyone to get whatever science they need to support whatever beliefs they might have about how best to address any problem they are concerned about – with little, if any, capacity to assess the quality of the science being deployed.
Twenty-five years ago, Silvio Funtowicz and Jerry Ravetz developed their concept of “post-normal science” to help understand the role of knowledge and expertise when facts are uncertain, values in dispute, stakes are high, and decisions are urgent. Under such conditions—which are common to many of today’s societal problems—Funtowicz and Ravetz describe how the “traditional distinction between ‘hard’, objective scientific facts and ‘soft’, subjective value-judgements is now inverted.” That is to say, facts become soft, and values hard.

Under such conditions, our expectations for Enlightenment ideals of applied rationality are themselves irrational. We are asking science to do the impossible: to arrive at scientifically coherent and politically unifying understandings of problems that are inherently open, indeterminate and contested – to provide, as Scienceblind promises us, “the full causal story.”
Meanwhile, the reliability of the very types of science that underlie books like Scienceblind and The Knowledge Illusion is increasingly called into question as evidence of irreproducibility continues to mount, including across many fields of research that make strong generalisations about human behaviour.
Our biggest problem is not science denial; it’s post-normal science denial.

Source: The Guardian
