Values are not isolated from scientific research

By WANG YOURAN / 12-14-2017 / (Chinese Social Sciences Today)

Kevin Elliott is an associate professor in Michigan State University’s Lyman Briggs College, with joint appointments in the Department of Fisheries and Wildlife and the Department of Philosophy. His research lies at the intersection of the philosophy of science and practical ethics. His current projects touch upon the roles of ethical and social values in environmental research, responses to financial conflicts of interest in research, and the process of scientific discovery. His works include A Tapestry of Values: An Introduction to Values in Science, Current Controversies in Values and Science, and Exploring Inductive Risk: Case Studies of Values in Science.



 

The relationship between science and values is complicated and controversial. On the one hand, many people believe that science is value-free because natural phenomena and principles are objective facts, and that scientific research should therefore eliminate the influence of values. On the other hand, it seems difficult to deny the human role in exploring the natural world. A CSST reporter recently sat down with Kevin Elliott to hear his views on the role of values in scientific research.

 

CSST: How do you interpret the commonsense dictum that “science is value free”? Is it an absolute truth or more of an ideal? Some people may say that science can never be totally free of value judgments because what science we do and how we do it are matters of choice, and our value systems influence the choices we make. Also, I assume the word “science” used in this discussion refers to the research practices and other activities people take part in to discover general truths of the universe, rather than objective facts and laws.

 

Kevin Elliott: I definitely think that the notion that “science is value free” is an ideal and not a description of how science actually operates. I think that even those who defend the value-free ideal would acknowledge that scientists are constantly influenced by their values in subtle ways. But they would insist that scientists should, as much as possible, prevent those values from influencing their reasoning. In contrast, I am arguing that the value-free ideal is not even good as an ideal. And yes, by “science” I primarily mean a set of practices and not so much the outcomes. However, insofar as values influence our choice of topics, the questions we ask about those topics, the ways we describe our results, and what we count as adequate explanations or descriptions of the phenomena, I actually think the outcomes of science are often value-laden as well.

 

CSST: What roles should values play in science? How can scientists engage with values in ways that improve the quality of science and safeguard research integrity instead of jeopardizing it?

 

Kevin Elliott:  I think it is relatively uncontroversial that ethical and social values have legitimate roles to play in choosing research topics. For example, it makes sense to consider social priorities when deciding how much money to spend on medical research or national defense research or the social sciences. And even within these broad research areas, people’s values are relevant to deciding whether, say, research on cancer, diabetes, AIDS or malaria is more important.


I think that values are also relevant to deciding what questions or approaches to use for investigating particular topic areas. In my book, for example, I discuss different strategies for pursuing agricultural research. Some strategies focus on highly technical approaches to increasing yield, such as genetically engineering seeds and optimizing the input of synthetic fertilizers and pesticides. Other approaches, which are sometimes described under the label of “agroecology,” focus more on ways of mimicking natural ecosystems by combining different plant and animal species in creative ways. In choosing between these strategies, one needs to consider the social context in which they will be used, including the amount of money available to farmers for purchasing expensive inputs and the extent to which adverse environmental effects need to be minimized.


Values can also play a legitimate role in deciding what counts as a good explanation or a good model in a particular research context. This is especially important when research is being done for the purposes of assisting policymakers with their decision-making. For example, a great deal of research is performed in order to assess the risks posed by industrial chemicals for humans and the environment. Different methods and models can be used for assessing these risks. Traditional approaches tend to be slow and labor intensive. Alternative methods can sometimes yield results more quickly, but with somewhat less accuracy. Social and ethical values are relevant to deciding how to balance the tradeoffs between speed and accuracy in cases like these.


A particularly common but often unrecognized role for values in science is in deciding what standards of evidence to demand before drawing conclusions. For example, statistical significance tests are typically set so that they demand a 95 percent or 99 percent level of statistical significance in order to reject the null hypothesis. But this is a conventional choice that we could set differently, depending on how important we think it is to avoid false positive or false negative errors. A recent book that I co-edited, Exploring Inductive Risk, looked at cases in a number of different fields where researchers had to make these sorts of decisions. Even in a seemingly value-free discipline like high-energy particle physics, scientists disclosed that they thought it was important to demand particularly high levels of evidence before declaring that they had found the Higgs boson, because they were worried about the social consequences of making a false positive error.
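To make this trade-off concrete, the following is a minimal sketch, not taken from the interview: the sample size, effect size, and number of simulated studies are illustrative assumptions. It shows how tightening the significance threshold in a simple one-sample test lowers the false positive rate while letting more genuine effects go undetected.

```python
# Illustrative sketch: how the choice of a significance threshold trades
# false positives against false negatives. Sample size, effect size, and
# number of simulated studies are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_samples, effect = 2000, 30, 0.5  # illustrative assumptions

def error_rates(alpha):
    false_pos = false_neg = 0
    for _ in range(n_trials):
        # Case 1: the null hypothesis is true, so any rejection is a false positive.
        null_sample = rng.normal(0.0, 1.0, n_samples)
        if stats.ttest_1samp(null_sample, 0.0).pvalue < alpha:
            false_pos += 1
        # Case 2: a real effect exists, so failing to reject is a false negative.
        alt_sample = rng.normal(effect, 1.0, n_samples)
        if stats.ttest_1samp(alt_sample, 0.0).pvalue >= alpha:
            false_neg += 1
    return false_pos / n_trials, false_neg / n_trials

for alpha in (0.05, 0.01):  # the 95 percent and 99 percent significance levels
    fp, fn = error_rates(alpha)
    print(f"alpha={alpha}: false positive rate ~{fp:.3f}, false negative rate ~{fn:.3f}")
```

Under these assumptions, the stricter threshold produces fewer false positives but misses a larger share of real effects, which is the kind of value-laden balancing described above.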


Finally, in many cases, it is nearly impossible to communicate scientific information without serving some values over others. Thus, I would argue that it is better to examine these choices and reflect on how best to make them. Consider, for example, that science is full of terms borrowed from other contexts, which often involve metaphorical connotations that can serve some values while acting against others. Environmental scientists have debated whether it is appropriate to use terms like “invasive,” “exotic,” “alien,” or “non-native” species, given the connotations that these terms have in everyday discourse. Similarly, biologists have to consider whether it is appropriate to use terms like “rape” or “monogamy,” given that they often have different connotations in biological contexts than they do in human social contexts. The World Health Organization even argued recently that it would be better not to use disease names like swine flu, Marburg disease, or athlete’s foot because of the potential to stigmatize people, places or animals.


While I would argue that social and ethical values are at least sometimes relevant to making all these decisions, I should emphasize that we still need to consider on a case-by-case basis whether particular value influences are actually appropriate. For example, values should typically be incorporated in a transparent manner so that others can understand how they might have influenced the research. In addition, scientists should strive to make value judgments in ways that accord with ethical principles and social priorities. In many cases, it is also important to create adequate venues so that scientists and other stakeholders can discuss value judgments and reflect on how best to make them. One of the chapters in my book A Tapestry of Values is dedicated to exploring strategies for bringing together a diverse array of scientists, scholars, and members of the public to address value judgments. Many philosophers of science are arguing that this sort of engagement is the key to promoting scientific objectivity. Individuals can strive to be objective, but what we really need is to create social contexts in which we can scrutinize each other’s value judgments in a collective manner.

 

CSST: According to a recent study reported in MSU Today, scientists’ credibility tends to suffer when their personal beliefs are disclosed, so they may avoid being open and honest about these matters. But reticence about values can be harmful to science and society, and being aware of scientists’ value commitments may help people to better judge the research they have conducted. How can citizens and the scientific community encourage transparency about values in science?

 

Kevin Elliott:  In general, there is some evidence that people trust scientists less when they hear about their values, but there actually isn’t a lot of empirical evidence available on this issue. I think it would be really helpful to collect more evidence on this topic. I also think it’s really important to find out more about whether there are ways to discuss values that will tend to enhance trust rather than detract from it.


I also think it’s important to note that scientists might not always have to disclose all their personal value commitments in order to achieve helpful transparency. For example, in some cases they might just need to acknowledge that they faced several crucial judgment calls and explain why they made them the way they did.


An additional point I’d like to make is that I think it’s especially important for social scientists to consider how values play a role in their research. Values are at least somewhat relevant in all areas of research, but they become especially important in the social sciences. For example, the categories that we use for describing social phenomena are especially likely to be value-laden; philosophers have argued that “rape” is not just a descriptive concept but also has a normative element to it. Also, it’s often possible to study social phenomena in different ways, and our values may be particularly important for how we study them, shaping the methods researchers use and the questions they ask. Finally, when deciding what conclusions are reasonable to draw in the face of uncertainty, it may be especially important to consider the consequences when dealing with the social sciences. For example, a book chapter I have in mind discusses research on discrimination, in which researchers had to decide, for legal and policy purposes, what standards of proof to demand before concluding that discrimination had occurred.


I would also like to emphasize that, in my view, allowing values into science need not destroy its objectivity. Many philosophers of science argue that the key to objectivity is to acknowledge and critically examine value judgments rather than ignoring them. Allowing values in science also doesn’t mean that we accept conclusions just because we would like them to be true.