Information Literacy: Critically Evaluating Evidence
1. Evidence is the most important part of any inductive argument. You need to be able to prove your claims with evidence. But it is important to recognize that not all evidence is reliable. Some types of evidence can't be trusted at all. And even when good evidence is used, it might not be appropriate for certain types of claims. Thus, to prove a claim true or false, evidence has to be both reliable and logically appropriate; in a word, evidence needs to be valid.
2. To oversimplify for instructional purposes, I want to explore nine major types of evidence that are most commonly used in arguments, starting with the weakest form of evidence and moving forward toward the strongest form. I will briefly explain each type of evidence, focusing on how reliable it tends to be (strong to weak) and how it could be appropriately used.
3. The first type of evidence is personal experience, sometimes called testimony or an "eye-witness account." This evidence is based on what an individual experiences directly or indirectly. Psychologically, we naturally consider this type of evidence to be the strongest because it is the most immediate and intimately tied to our local world, which we know the best. However, from a sociological perspective, personal experience is actually the weakest type of evidence. Critics have noted the unreliability of personal experience since writing was invented. About 2,500 years ago, the ancient Greek historian Thucydides warned: "Those who were eyewitnesses of the several events did not give the same reports about the same things, but reports varying according to their championship of one side or the other, or according to their recollection" (as cited in Schiff, 2010, p. 10).
4. Even when our eyes see clearly and our brain works well, what we directly see is only an infinitesimally small part of the objective world. What we can directly see and experience is not statistically significant. Our perception is unique. Our experience of a situation does not represent the experience of the average person. We are also not able to fully understand what we see. When a scientist tries to make a generalization about the complex objective world, he or she needs to gather a lot of evidence to filter out the idiosyncratic or random data, called outliers. Scientists use large bodies of data in order to statistically determine a valid generalization that matches the larger population or phenomena being studied. Statistical analysis is hard for us to grasp because our brains cannot easily comprehend statistical averages, and such knowledge is difficult to acquire (Kahneman, 2011).
5. Our personal experience is an awful basis for knowledge because our brains never work perfectly. Our brains are fundamentally flawed due to the evolutionary biology of our species. Some of these flaws are based on older evolutionary adaptations suited for a simpler way of life. Many automatic thinking reflexes bias our judgment in numerous ways. Psychologist Daniel Kahneman (2011) calls these biases "system 1" or "fast thinking" (p. 28). While automatic thinking reflexes may be very useful when avoiding predators or natural disasters, these natural biases interfere with our ability to really understand our situation. They usually make our conclusions unreliable and false (Thaler & Sunstein, 2008).
6. In order to control these automatic thinking reflexes, we have to deliberately slow our thinking down. We have to engage in what Kahneman (2011) calls "system 2" thinking, or what philosophers call critical thinking. We need to examine our thought process to become aware of the errors caused by the automatic reflexes of system 1 so we can correct them (p. 28). But even when we engage in critical thinking, Kahneman’s research demonstrates that "biases cannot always be avoided" (p. 28). Our minds are naturally "gullible," "biased to believe" almost anything, and "lazy" when it comes to critical thinking (Popkin, 1994, p. 218; Kahneman, 2011, pp. 44-49, 81). Finance professor Burton G. Malkiel (2012) explained, “Understanding…how vulnerable we are to our own psychology can help us avoid [our own] stupid[ity]” (p. 258).
7. We take shortcuts whenever we can. This can sometimes be useful because it saves us time and energy. According to Kahneman (2011), our brains are naturally biased to "jump to conclusions on the basis of limited evidence," leading us to see the world with a WYSIATI framework: "What you see is all there is" (p. 86). Our brains often behave like a drunk looking for car keys in the dark: Instead of looking in the most logical places, which are dark, a drunk starts looking in the most improbable places because there is better light (Popkin, 1994, p. 218). While thinking shortcuts save us time and energy, they often produce unreliable information.
8. But automatic thinking reflexes are not the only problem with subjective experience. Our perception can also be clouded by emotions, such as fear or happiness. We literally cannot see clearly when we are overcome by strong emotion. Information presented in a highly emotional way will almost always take precedence over information presented neutrally (Popkin, 1994, p. 16; Kahneman, 2011). Our brains can also become damaged or chemically unbalanced in many different ways. Such distorted perception can cause us to hear voices, see blurred images, or smell non-existent scents. Because modern psychology has shown that personal experience is so unreliable, most modern courts of law would not convict a person on the basis of only a single eye-witness.
9. While we live our entire lives on the basis of our own personal experience, we rarely explore the flaws of our personal knowledge. For the most part, we don't need to because our brain is naturally tuned to the frequency of the objective world, and we generally don't need much knowledge on a daily basis to live our lives. And luckily, we maintain many social relationships while living in large, complex societies. Our personal experience and knowledge is moderated daily by the experience and opinions of others. The input from our community helps correct any errors we might make.
10. But our social networks can also help compound our thinking errors into larger cultural fictions that everyone in our culture accepts (i.e. the common sense of conventional wisdom). Most people have a limited grasp on the objective world. But we don't realize how little we know. We are either unaware of our own subjective thinking errors, or we are trapped in the collective deception of cultural common sense. We suffer from the "pernicious illusion" that we understand the world we live in (Kahneman, 2011, p. 201). As Daniel Kahneman (2011) has pointed out, "Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance" (p. 201).
11. The second type of evidence consists of documents, images, or other cultural artifacts. Anthropologists call these things material culture. These forms of evidence should also be considered unreliable because they are produced by individual human beings, largely on the basis of that individual's personal and cultural experience. However, documents and artifacts become more reliable when large numbers are analyzed and compared with each other. The triangulation of multiple artifacts helps filter out subjective or cultural bias.
12. Historians, for example, will pore over the major newspapers and books of a time period in order to compare the common elements of all the stories, which most likely (but not always) get down to the basic facts of an event. But in order to get a fuller picture of what actually happened, these basic facts would also need to be correlated with the diaries, letters, and other personal documents of people who were part of the event. The historian might also examine the actual geography of the place where the event took place. They might try to find historical artifacts from this geography, often cataloged in museums. Or that historian might be an archeologist who would dig beneath the ground to find new artifacts that might still be buried. The materials and technology used to make the artifacts can tell us much about the society in which they were created.
13. The third type of evidence is the survey. Thousands of years ago, various governments developed the census survey to keep track of the population for military, commercial, and tax purposes. The modern survey was developed in the late 19th century as a way to gauge the opinions and conditions of people in democratic countries, ostensibly to find out what social problems existed so the government or private charity organizations could fix them (Igo, 2007, p. 29; Ewen, 1996, p. 186). Historian Stuart Ewen (1996) explained, "The polling system and a burgeoning market research establishment would provide the channels through which the public would be known and then responded to" (p. 186).
14. The survey is simply a list of questions asked to a large sample group. A sample group is a random collection of data meant to represent the larger population of which it is a part. The survey was designed as a tool to accurately describe a large population according to specific categories, which were represented by specific questions on the survey. However, it became apparent to early social scientists who developed the survey that it was an "admixture of fact and opinion such that it cannot be classified as a thoroughly scientific study" (as cited in Igo, 2007, p. 34). This is why many scientists look down on survey data as a less than legitimate type of evidence. Behavioral economist Richard H. Thaler (2015) noted tongue in cheek, "To this day, the phrase 'survey evidence' is rarely heard in economics circles without the necessary adjective 'mere,' which rhymes with 'sneer,'" although Thaler pointed out that survey data can be very useful if appropriately filtered through statistical analysis, which we will discuss below (p. 47).
15. One problem with surveys is the unreliability of respondents. There is no way to verify the truthfulness of the answers people give. Sometimes people lie for various reasons. Therefore, because surveys rely on the subjective opinions of ordinary people, they suffer from the same basic flaws of personal experience as a form of evidence. But surveys are also flawed by the common sense or theoretical assumptions of the authors who design them (Igo, 2007, pp. 54, 59; Ewen, 1996, p. 188). These assumptions not only influence what questions are asked (or not asked), but also the definitions of important words separating one category from another. These assumptions can often affect how questions are asked. Leading questions are loaded with particular assumptions, and they push people to answer in a particular way in order to manipulate public opinion. Ewen (1996) has pointed out, "Carefully worded questions could be crafted in order to elicit any desired response" (p. 188). Thus, the validity of surveys must be measured against the questions asked, the theoretical or cultural assumptions behind those questions, and the sample size and composition of the population polled. Generally, surveys give an interesting but highly unreliable indication of public opinion, especially if the sample is biased; thus, surveys should never be taken too seriously. A survey should also be statistically analyzed to check the quality of the data.
16. The fourth type of evidence is the interview. Like the survey, this type of evidence is highly flawed because it rests on subjective experience and opinions. However, structured interviews are always better than surveys because the interviewer can ask the interviewee follow-up questions to get more elaborate and nuanced answers. An interviewer can also read the body language of the interviewee to capture clues about the reliability of the answers given. Also, conducting an interview in an embedded setting, like interviewing teachers at the school where they work, can lead the researcher to reevaluate questions based on the environment. Direct knowledge of a local environment can open up new lines of inquiry that the investigator may have overlooked as an outsider.
17. One of the biggest logistical problems with interviews is that they take a lot of time and effort to conduct and transcribe. It is difficult to get more than a handful of interviews for a research project, a circumstance which greatly limits the researcher's ability to generalize the data. However, the in-depth answers of interviews compensate for this disadvantage. To get the fullest picture of public opinion, whenever possible, it is best to combine large scale surveys that are statistically analyzed with in-depth interviews. This way you can combine statistically valid generalizations of the whole population with rich specific details from the unique experience of individuals.
18. The fifth type of evidence is field research, which comes in many different forms depending on the population or phenomenon being studied. To name a few: the study of living human cultures is called ethnography, the study of extinct human cultures is called archeology, the study of animal culture is called biology or sociobiology, the study of extinct animals is called paleontology, the study of inanimate ecosystems is called geology, and the study of living ecosystems is called ecology. Field research is similar to laboratory research (evidence types 8 and 9), except that field research is messy because it is conducted in uncontrollable "natural" settings (Hammersley & Atkinson, 2003, p. 6; Diamond & Robinson, 2010). In a natural setting, the researcher can see how variables interact with each other within the dynamic environment where they naturally occur.
19. There are trade-offs between natural and laboratory settings. In laboratory settings, scientists study isolated variables in the controlled environment of the laboratory; doing so can be very insightful because scientists concentrate on only a few interactions between a small number of variables. But in the real world, myriad variables interact with each other in dynamically changing environments. Also, when dealing with conscious organisms that can modify their behavior (like people, chimpanzees, or dogs), a laboratory setting can often induce highly unusual behavior in the test subject, which could bias the results. In a natural setting, organisms are likely to behave as they normally would under routine circumstances; as a result, researchers may get a more accurate and complex understanding of such behavior. However, if the research becomes too conspicuous in the natural setting, this situation too could change the behavior of the subjects and bias the data.
20. A researcher in the field will describe, measure, and categorize various phenomena. For example, a biologist might watch monkeys all day, recording their size, movements, relationships, and habits. Anthropologists study people, but they would observe the same factors. But this process of collecting data is limited by the fact that researchers are "outsiders" to the natural setting (Geertz, 1973/2000; Geertz, 1983/2000; Hammersley & Atkinson, 2003). Thus, scientists do not understand the setting like an "insider" would; hence, the need to conduct interviews with insiders whenever possible to get the insider point of view. Of course, this technique requires knowledge of insider language, and it obviously doesn’t work with animals, which greatly limits our understanding of animal behavior and culture. Field research is very valuable because it combines structured observation of the subject with the analysis of the environment, artifacts, and, where appropriate, surveys, interviews, and statistical analysis of the data.
21. The sixth type of evidence is statistical research, often called quantitative data. Statistics is a branch of mathematics that has become one of the main scientific methods for collecting and analyzing data. There are two different forms of statistics: descriptive and inferential. Descriptive statistics take a body of data and organize it into defined categories. This form of statistics simply describes a large amount of data with a few relatively simple categories, and expresses the data with numerical values, like whole amounts, percentages, or fractions.
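To make the idea of whole amounts and percentages concrete, here is a minimal sketch in Python. The survey responses are invented purely for illustration, and only the standard library is used.

```python
from collections import Counter

# Hypothetical survey data: each respondent's answer to
# "How do you commute to work?"
responses = ["car", "bus", "car", "bike", "car", "bus", "walk", "car"]

# Descriptive statistics: organize the raw data into categories
counts = Counter(responses)  # whole amounts per category
total = len(responses)
percentages = {mode: 100 * n / total for mode, n in counts.items()}

print(counts["car"])       # 4 respondents drive
print(percentages["car"])  # 50.0 percent of the sample
```

Nothing here is inferred beyond the data itself: the code only counts and summarizes, which is exactly the scope of descriptive statistics.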
22. The second form of statistics is much more complex. Inferential statistics take two or more variables and try to determine two basic conclusions: whether these variables are connected and how strong that connection is. The connection between variables is called "correlation." If there is a strong correlation between two variables, then where you find X you would also find Y. For example, when you find people who smoke two packs of cigarettes a day, you also expect to find lung cancer because there is a high correlation between these two variables. The strength of a correlation is somewhere between random and perfect. Random implies there is no connection at all, while a perfect correlation means that when you find X you always find Y. Most correlations are somewhere between these two poles. If the relationship between two variables is strong, then a scientist would say that this relationship is "statistically significant," which means that by using statistical mathematics, the connection can be shown to be highly probable.
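A correlation coefficient can be computed directly from paired data. The sketch below defines Pearson's correlation from scratch in Python; the smoking and lung-damage numbers are invented for illustration, not real medical data.

```python
import math

# Hypothetical paired data: packs smoked per day (x) versus a
# 0-100 lung-damage score (y). Invented numbers, for illustration only.
x = [0, 0.5, 1, 1.5, 2, 2.5, 3]
y = [5, 12, 30, 41, 58, 70, 85]

def pearson_r(xs, ys):
    """Pearson correlation: 0 means no connection, 1 means perfect."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

print(round(pearson_r(x, y), 2))  # close to 1: a strong correlation
```

With invented data this tidy, the coefficient lands near the "perfect" end of the scale; real data almost never does, which is the point of the limits discussed below.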
23. Statistics is a strong form of evidence; however, it has its limits. First of all, in order to prove a statistically significant relationship, you need to have a lot of good data. A small study would need at least a couple hundred data points, while a large study would have at least 1,000 data points. The larger the study, the better. You need a lot of data because the world is a complex place with much diversity, so you need a large set of data to prove that the connection you found is not a random fluke. The mathematics behind this point is very complex, but basically statistical analysis done with fewer than a hundred data points is unreliable and should not be taken seriously.
24. Second, how you get your data is also important. Good data will produce reliable statistical conclusions, but bad data produces statistics that are either meaningless or misleading. Economist Charles Wheelan (2013) explains, “If the data are poor, or if the statistical techniques are used improperly, the conclusions can be wildly misleading and even potentially dangerous” (p. xiv). For statistics to work properly, you need a "random sample" of the population that you are studying. This creates two common problems, both of which can reduce the reliability of statistical results. The first problem is the definition used to label and characterize a population (Wheelan, 2013, p. 38). Who are "smokers," or "Asians," or "youth," or "middle-class housewives," or "non-traditional students"? What are the characteristics that all members of this supposed group share, and what happens if a test subject has only some of those characteristics – in or out?
25. Answering these simple questions is problematic. If I want to study the voting preferences of Asian Americans, who do I study? Asian immigrants? Only those immigrants who have passed a citizenship test or all of them? Documented and undocumented? Adults and children? What about the daughter of a third-generation Asian man who married an Irish woman? What about a second-generation son of immigrants from India? Even though India is part of South Asia, are Indians considered Asians? In order to study a phenomenon or group, you need strict definitions in order to make sure you are studying only the phenomenon you have identified. This can be a very difficult process.
26. But even more difficult is finding out how to select a "random sample" of this phenomenon or group. Finding such a sample of your defined population is the second most common problem. If the test group were finite and small, you could collect data on every member. But most of the important issues that need to be statistically analyzed are very large, complex phenomena that have a large number of potential members. It would be impossible and extremely expensive to interview every American, or every resident of New York, or every student at your college, or every 24-32 year old mother in the United States with a 4 year old son. How would you even start to look for this population of 24-32 year old mothers? And what makes you think they all want to participate in your study?
27. So because we cannot feasibly measure all the members of the group we want to understand, we must choose to measure only some of those members. But which ones? If we want to understand the whole group then we need a "random sample" that represents this whole group. Instead of interviewing all residents of New York, we randomly call every 200th person in the phone book until we reach the end. Instead of interviewing all students at our college, we randomly pick 10 students from every academic discipline. We get an official list of students from the registrar’s office broken down by majors, and we interview every 25th student on the list.
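The every-25th-student procedure described above is a systematic sample; a simple random sample instead draws members by chance alone. Both can be sketched in a few lines of Python, with the registrar's list simulated here as a plain range of invented student ID numbers.

```python
import random

# Hypothetical registrar's list: 5,000 student IDs, sorted by major
students = list(range(1, 5001))

# Systematic sample: every 25th student, from a random starting point
start = random.randrange(25)
systematic_sample = students[start::25]

# Simple random sample: 200 students drawn without replacement
random_sample = random.sample(students, 200)

print(len(systematic_sample), len(random_sample))  # 200 200
```

Either approach avoids the self-selection of, say, interviewing whoever happens to walk by; the key is that the researcher's convenience plays no role in who gets picked.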
28. If the sample is not "random" then the data is "biased" and this will lead to faulty conclusions. If we talk only to New Yorkers who are walking down Madison Avenue, then we will most likely get a skewed sense of the people of New York. Just like if we only talked to students in the Art department, when we really wanted to know about all the students at the university. When you are constructing a statistical research project, as Darrell Huff (1993) has pointed out, you are always "running a battle against sources of bias" in your data (p. 24).
29. Once you have a specific set of phenomena to study, and you have gathered a random sample of data, then you crunch the numbers to see how connected your variables are. But there is a third major problem with statistics. There is never a perfect connection between any two variables, and there are always outliers in every set of data. If there is a correlation between two variables (when people smoke there is also a higher likelihood of lung cancer), this connection is never perfect. Thus, a statistical correlation is always expressed as a fractional number where 0 equals no connection and 1 equals perfect connection. Some people who don't smoke also get lung cancer, and some people who smoke two packs a day never get lung cancer. These types of people are called outliers because they don't fit the general pattern of connection. In general, the more smoking a person does, the more likely they are to get lung cancer. Let’s just say, for argument's sake, that around 9 out of every 10 heavy smokers develop lung cancer. This would be a 0.9 correlation, which is a very strong connection. But there are still approximately 10 percent of heavy smokers who don't develop lung cancer, so our correlation is not perfect.
30. There are special statistical tests to determine the strength of a correlation. A correlation with a lot of outliers is very weak, while a correlation with only a few is very strong. Fewer outliers means that most of the data fits the general pattern. More outliers means that a lot of data does not fit the general pattern, which means there may or may not actually be a pattern. The strength of a statistical correlation is usually expressed in terms of probability, which can be quantified using a measurement called standard error. Because no correlation is ever perfect, there will always be some outlying data. These outliers are considered a type of "error" because they are evidence against the pattern we are trying to prove.
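The effect of outliers on correlation strength can be demonstrated with a small simulation. Both data sets below are invented: the first fits a perfect line, while the second is the same pattern with two outliers that defy it.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: 0 means no connection, 1 means perfect."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y_tight = [2, 4, 6, 8, 10, 12, 14, 16]  # every point fits y = 2x
y_loose = [2, 4, 6, 8, 10, 12, 1, 3]    # two outliers break the pattern

print(round(pearson_r(x, y_tight), 2))  # 1.0: a perfect correlation
print(round(pearson_r(x, y_loose), 2))  # far weaker, close to random
```

Just two outliers out of eight points are enough to drag the coefficient from perfect down toward zero, which is why outlier-heavy data may not represent a real pattern at all.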
31. Because there are always outliers, our statistical conclusions will always be expressed as a probable range of outcomes. As Charles Wheelan (2013) explains, “Statistics cannot prove anything with certainty” (p. 144). If there are a lot of outliers, then there will be a larger standard error, and if there is a larger standard error, then we cannot predict with much accuracy whether new data will support our pattern. Here are two examples. Taking what we assumed about smoking and lung cancer, if I were a doctor and I had a patient who smoked, I could say that this person has approximately a 90 percent chance of getting lung cancer. It is a very probable, but not completely certain, outcome. Now what about the link between heavy smoking and heart disease? This correlation is only around 0.3, or 30 percent, which means that the majority of smokers are outliers, disproving a general pattern. So, if I were a doctor, I could not say to my patient that it is probable he or she would develop heart disease, but there is a moderate, 30 percent chance, which is significant. Probable knowledge is the best that statistics can provide.
32. Finally, there is a fourth major problem with statistics. Just because you have established that two things are highly correlated does not mean that one causes the other. For example, Steven Levitt (2009) found an inverse correlation between abortions and crime: More abortions are connected to less crime; fewer abortions are connected to more crime. But this finding does not mean that the presence or absence of abortions causes crime, or vice versa. It is much more complicated than that.
33. Also, just because two things are highly correlated does not mean that there aren't many other correlated variables, and it doesn't mean that the correlation explains anything because the connection may just be random. For example, while abortions and crime are correlated, there are many other factors that are also correlated to crime. Levitt (2009) found that increased numbers of police and innovative policing strategies were also correlated to decreased crime, but the correlations were not strong. Some variables that people thought were connected to reduced crime (like increased use of capital punishment or increased carrying of concealed weapons), actually had no connection at all. And sometimes two variables that have nothing to do with each other can be randomly connected. For example, did you know that the spending of the United States government on science is almost perfectly correlated with suicides by hanging, strangulation, and suffocation? What does this mean? Absolutely nothing! The connection between these variables is completely random and meaningless. Just because two or more variables are correlated does not mean they are actually connected by a cause and effect process in the objective world. We call these random flukes "spurious correlations." Statistics is a limited, but powerful form of knowledge. It can tell us with great precision how connected variables are, but it cannot tell us how those variables fit together, nor which variables cause other variables to act in certain ways.
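How easily does pure chance produce "strong" correlations? The simulation below draws pairs of completely unrelated random sequences and counts how many look strongly correlated anyway. The data are random numbers, and the seed is fixed only so the sketch is reproducible.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation: 0 means no connection, 1 means perfect."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

random.seed(42)  # fixed seed so the run is reproducible
trials = 1000
spurious = 0
for _ in range(trials):
    # Two completely unrelated sets of five random measurements
    x = [random.random() for _ in range(5)]
    y = [random.random() for _ in range(5)]
    if abs(pearson_r(x, y)) > 0.8:
        spurious += 1

# Even pure noise produces "strong" correlations surprisingly often
print(f"{spurious} of {trials} random five-point pairs look strongly correlated")
```

This is also a second look at the sample-size rule above: with only five data points per pair, chance alone regularly manufactures correlations above 0.8, which is why tiny studies should not be taken seriously.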
34. If prepared correctly, statistics are a strong form of evidence. However, statistics are often not done correctly. Bad statistics can be the result of human error, or they could be due to deliberate manipulation by the unscrupulous. There are many ways to lie with statistics (Huff, 1993). Just because you see statistics does not mean that you have good evidence. Statistical charts and graphs often have serious errors, the most common being a small sample size or a biased sample. If you go looking for a connection between car crashes and alcohol and only conduct research in bars, then you have a very biased sample that does not reflect the population at large. And even when the statistical mathematics are done correctly, bad data will create absurdities. Charles Wheelan (2013) warned, “Some of the most egregious statistical mistakes involve lying with data; the statistical analysis is fine, but the data on which the calculations are performed are bogus or inappropriate” (pp. 117-118).
35. Statistics are also dangerous because people are naturally bad with numbers and statistical reasoning (Popkin, 1994, pp. 72-73; Kahneman, 2011, p. 77); thus, audiences can be easily manipulated by the magic of numbers they don't understand. This also leads to a natural bias against mathematics and statistical reasoning. Most people will not easily understand the true value of statistics. They will often discount statistics if given a countervailing, emotionally powerful narrative that connects with their personal experience (Kahneman, 2011, p. 174). To compensate for this natural bias against statistical reasoning, statistics have to be fully explained with words to a general audience in order to be effective. Statistics are most effective, and more easily remembered, if they are explained using a cause and effect story.
36. The seventh type of evidence is the reasoned sequence of ideas. In the 21st century, this type of evidence is usually expressed in mathematics, a highly specialized form of knowledge that relies primarily on deductive reasoning. We all know that 2 + 2 = 4 and that 4 + 4 = 8, and further, that 4 is 1/2 or 50% of 8, while 2 is 1/4 or 25% of 8. These are logical truths that are independent of the objective world, although many mathematicians and natural scientists, like Galileo Galilei, have claimed that mathematics is the language of nature. Galileo was one of the first astronomers to use a telescope to collect data about the universe, which he used to modify existing theories about how the universe was organized. As a reward for his painstaking contribution to science, the Catholic Church denounced Galileo's work, banned his books, and forced him to deny his findings on pain of death. Galileo (1623/1957) claimed that the laws of the universe were written in the "language of mathematics" and the shapes of "geometric figures, without which it is humanly impossible to understand" the natural world (p. 238).
37. Mathematics is a language based on quantification (measuring phenomena using numbers), computation (adding, subtracting, dividing or multiplying), and logical relationships, like geometry and calculus. Most scientists use mathematics because it is the most precise and unbiased language we have to describe the objective world. However, most people do not understand even simple mathematics; therefore, many think that math is magic. When they see numbers or equations, most people make the dangerous assumption that these numbers or equations are automatically true. When performed correctly, mathematics is one of the most reliable types of evidence we have; however, math always needs to be explained in common words and/or with common examples so that general audiences can understand it.
38. The eighth and ninth types of evidence are different forms of the controlled scientific experiment. There is the standard controlled experiment and there is the random and blind controlled experiment. A scientific experiment uses a theory to predict how particular variables interact to produce an outcome. In order to test the theory, a scientist will isolate the variables in a controlled laboratory setting so as to limit any interference by other variables that could produce unforeseen consequences. Consider, for example, an experiment about how refined carbohydrates affect weight gain. To truly isolate only the effect of refined carbohydrates, the test subjects would have to be kept in a hospital for weeks or months at a time and fed a strict diet. If you only surveyed people's eating habits, you would have to trust that the test subjects aren’t lying and that they know the exact quantities and types of food they eat, which most people don't know. The only way to study the specific variable of refined carbohydrates in relation to human weight would be to completely control the diet of a test subject, a difficult and costly process.
39. The random and blind scientific experiment adds quality control mechanisms to make the data much more reliable than data produced in a laboratory without these controls. Random means the test subjects are chosen indiscriminately so as to represent a general population. There can be no sampling bias in a random study. Blind means that the test subjects do not actually know the purpose of the study; therefore, they cannot consciously interfere with the study in any way. A double-blind experiment means that neither the test subjects nor the researchers collecting data know what the experiment is about. Researchers who are looking for particular types of data are more likely to find it because of a concentration bias (i.e. they are concentrating really hard to find something, so they are more likely to see it, or think they see it), or because they deliberately falsify data to confirm their theory. The deliberate falsification of data happens more than scientists like to admit. A double-blind experiment greatly reduces this risk.
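The mechanics of random assignment and blinding can be sketched as follows. This is an illustrative sketch only: the subject pool, group sizes, and coding scheme are all invented.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical subject pool for the refined-carbohydrate experiment
subjects = [f"subject_{i:03d}" for i in range(1, 101)]

# Random assignment: shuffle the pool, then split it in half
random.shuffle(subjects)
treatment = subjects[:50]  # fed the controlled experimental diet
control = subjects[50:]    # fed the controlled baseline diet

# Blinding: data collectors see only anonymous codes, never group
# labels, so they cannot (even unconsciously) favor either group
coded = list(range(len(subjects)))
random.shuffle(coded)
codes = {s: f"code_{c:03d}" for s, c in zip(subjects, coded)}

print(len(treatment), len(control))  # 50 50
```

The shuffle plays the role of random selection (no sampling bias in who lands in which group), and the code table plays the role of blinding: only someone outside the data-collection team would hold the key linking codes to groups.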
40. It is important to understand the advantages and limitations of various types of data. You need to analyze the arguments of others to find their strengths and weaknesses, and then construct better arguments of your own. When you critically analyze a source, you need to do more than just understand the conclusions an author makes. You need to understand how the author arrived at those conclusions and whether those conclusions are logically justified by the data. Further, you need to investigate how the data were created and analyzed to make sure the research was done correctly and to see whether there are any flaws or limitations in the data.
41. Remember, no argument and no scientific study is perfect. There are always flaws and limitations. Relying on flawed sources of data means making decisions based on unreliable or false information, which could lead you or others into danger, so be careful and critical. Never trust a source unless you fully investigate all of its claims, evidence, methods for gathering and analyzing the data, and conclusions. A good rule of thumb: assume every argument is false until it can be proven true with substantial and valid evidence.
References & Further Reading
Abrams, M. H. (1953). The mirror and the lamp: Romantic theory and the critical tradition. Oxford, UK: Oxford University Press.
And man made life. (2010, May 22). The Economist. Retrieved Dec. 3, 2012, from www.economist.com
Akerlof, G. A., & Shiller, R. J. (2015). Phishing for phools: The economics of manipulation and deception. Princeton: Princeton University Press.
Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. New York: Harper Perennial.
Aristotle. (1995). Rhetoric. In Jonathan Barnes (Ed.), The complete works of Aristotle: The revised Oxford translation, Vol 2. (pp. 2152-2269). Princeton, NJ: Princeton University Press.
Baskin, P. (2012, Oct 1). Misconduct, not error, found behind most journal retractions. The Chronicle of Higher Education. Retrieved Dec. 3, 2012, from www.chronicle.com
Beach, J. M. (2012). Kenneth Burke: A sociology of knowledge: Dramatism, ideology and rhetoric. Austin, TX: West by Southwest Press.
Berlin, I. (2000). Historical inevitability. In The proper study of mankind. New York: Farrar, Straus and Giroux.
Bernays, E. L. (2011). Crystallizing public opinion. Brooklyn, NY: Ig Publishing. (Original work published 1923)
Bernays, E. L. (2005). Propaganda. Brooklyn, NY: Ig Publishing. (Original work published 1928)
Bloor, D. (1991). Knowledge and social imagery. 2nd ed. Chicago: University of Chicago Press.
Bly, R. W. (2005). The copywriter’s handbook: A step-by-step guide to writing copy that sells. 3rd ed. New York: Owl Books.
Burke, K. (1969). A rhetoric of motives. Berkeley, CA: University of California Press. (Original work published 1950)
Burke, K. (1973). The philosophy of literary form. Berkeley, CA: University of California Press. (Original work published 1941)
Chang, K. (2012, Sept 24). Bias persists for women of science, a study finds. The New York Times. Retrieved Dec. 3, 2012, from www.nytimes.com
Cole, J. R. (2009). The great American university: Its rise to preeminence, its indispensable national role, and why it must be protected. New York: Public Affairs.
Crawford, M. B. (2009). Shop class as soulcraft: An inquiry into the value of work. New York: Penguin.
D'Andrade, R. (2002). Cultural Darwinism and language. American Anthropologist 104(1): 223-232.
Deeds, not words. (2012, Sept 15). The Economist. Retrieved Dec. 3, 2012, from www.economist.com
Demos, J. (2008). The enemy within: 2,000 years of witch-hunting in the western world. New York: Viking.
Dennett, D. C. (2003). Freedom evolves. New York: Viking.
Deutsch, D. (1997). The fabric of reality: The science of parallel universes - and its implications. New York: Allen Lane.
Diamond, J., & Robinson, J. A. (Eds.). (2010). Natural experiments of history. Cambridge, MA: Harvard University Press.
Dunning, D. (2014, Oct 27). We are all confident idiots. Pacific Standard: The Science of Society. Retrieved Nov. 2, 2014, from www.psmag.com
Eagleton, T. (1991). Ideology: An introduction. London: Verso.
Emerson, R. W. (1957). Experience. In Selections from Ralph Waldo Emerson. Boston: Houghton Mifflin. (Original work published 1844)
Ewen, S. (1996). PR! A social history of spin. New York: Basic Books.
Experimental psychology: The roar of the crowd. (2012, May 26). The Economist. Retrieved Dec. 3, 2012, from www.economist.com
Feyerabend, P. (2010). Against method. 4th ed. London: Verso. (Original work published 1975)
Finkbeiner, A. (2006). The Jasons: The secret history of science's postwar elite. New York: Viking.
Flanagan, O. (2007). The really hard problem: Meaning in a material world. Cambridge, MA: MIT Press.
Flanagan, O. (2011). The Bodhisattva's brain: Buddhism naturalized. Cambridge, MA: MIT Press.
Frank, T. (2000). One market under God: Extreme capitalism, market populism, and the end of economic democracy. New York: Anchor Books.
Frankfurt, H. G. (2005). On bullshit. Princeton: Princeton University Press.
Freedman, D. H. (2010, Nov). Lies, damned lies, and medical science. The Atlantic. Retrieved Dec. 3, 2012, from www.theatlantic.com
Galbraith, J. K. (2001). The concept of the conventional wisdom. In The essential Galbraith. New York: Mariner Books.
Galileo. (1957). The assayer. In Discoveries and Opinions of Galileo (pp. 217-280). New York: Anchor Books. (Original work published 1623)
Gaukroger, S. (2001). Francis Bacon and the transformation of early-modern philosophy. Cambridge, UK: Cambridge University Press.
Gay, P. (1995). The enlightenment: The rise of modern paganism. New York: W. W. Norton.
Geertz, C. (2000). Ideology as a cultural system. In The interpretation of cultures. New York: Basic Books. (Original work published 1973)
Geertz, C. (2000). Common sense as a cultural system. In Local Knowledge. New York: Basic Books. (Original work published 1983)
Gray, J. (1995). Enlightenment's Wake. London: Routledge.
Greene, J. D. (2002). The terrible, horrible, no good, very bad truth about morality and what to do about it. Unpublished doctoral dissertation, Princeton University, Princeton.
Gross, C. (2012, Jan 9/16). Disgrace. The Nation. Retrieved Dec. 3, 2012, from www.thenation.com
Hammersley, M., & Atkinson, P. (2003). Ethnography: Principles in practice. 2nd ed. London: Routledge.
Huff, D. (1993). How to lie with statistics. New York: WW Norton & Company.
Hume, D. (1888). A treatise of human nature. Oxford, UK: Clarendon Press. (Original work published 1739)
Igo, S. E. (2007). The averaged American: Surveys, citizens, and the making of a mass public. Cambridge, MA: Harvard University Press.
Isaacson, W. (2007). Einstein: His life and universe. New York: Simon & Schuster.
Jacoby, S. (2009). The age of American unreason. Revised Edition. New York: Vintage.
Journalistic deficit disorder. (2012, Sept 22). The Economist. Retrieved Dec. 3, 2012, from www.economist.com
Judson, H. F. (2004). The great betrayal: Fraud in science. New York: Harcourt.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kant, I. (1994). Critique of pure reason. London: Everyman's Library. (Original work published 1781)
Kihlstrom, J. F. (2013, Spring). Threats to reason in moral judgment. The Hedgehog Review, 15(1), 8-18.
Kirsch, I. (2010). The emperor's new drugs: Exploding the antidepressant myth. New York: Basic Books.
Klee, R. (1999). Introduction. In R. Klee (Ed.), Scientific inquiry: Readings in the philosophy of science (pp. 1-4). Oxford: Oxford University Press.
Klein, J. (2003). Francis Bacon. Stanford Encyclopedia of Philosophy. Retrieved Dec. 3, 2012, from www.plato.stanford.edu/entries/francis-bacon/
Kuhn, T. (1996). The structure of scientific revolutions. 3rd ed. Chicago: University of Chicago Press.
Levitt, S. D., & Dubner, S. J. (2009). Freakonomics: A rogue economist explores the hidden side of everything. New York: Harper Perennial.
Lindblom, C. E. (1990). Inquiry and change: The troubled attempt to understand and shape society. New Haven: Yale University Press.
Lindblom, C. E., & Cohen, D. K. (1979). Usable knowledge: Social science and social problem solving. New Haven: Yale University Press.
Lindley, D. (2008). Uncertainty: Einstein, Heisenberg, Bohr, and the struggle for the soul of science. New York: Anchor Books.
Lindstrom, M. (2011). Brandwashed: Tricks companies use to manipulate our minds and persuade us to buy. New York: Crown.
Lindstrom, M. (2010). Buyology: Truth and lies about why we buy. New York: Crown.
Lippmann, W. (1997). Public opinion. New York: Free Press. (Original work published 1922)
Malkiel, B. G. (2012). A random walk down wall street: The time-tested strategy for successful investing. New York: W. W. Norton & Company.
Mayr, E. (1997). This is biology: The science of the living world. Cambridge, MA: Harvard University Press.
Mearsheimer, J. J. (2011). Why leaders lie: The truth about lying in international politics. Oxford, UK: Oxford University Press.
Morgan, E. S. (1988). Inventing the people: The rise of popular sovereignty in England and America. New York: W. W. Norton.
Moss, M. (2013, Feb 20). The extraordinary science of addictive junk food. The New York Times. Retrieved Feb 21 from www.nytimes.com
Norman Borlaug. (2009, Sept 19). The Economist. Retrieved Dec. 3, 2012, from www.economist.com
Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal education. Cambridge, MA: Harvard University Press.
Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. New York: Bloomsbury Press.
Packard, V. (2007). The hidden persuaders. Brooklyn, NY: Ig Publishing. (Original work published 1957)
Pinker, S. (1997). How the mind works. New York: W. W. Norton.
Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Penguin.
Plato. (1997). Republic. In J. M. Cooper (Ed.), Plato: Complete works (pp. 971-1223). Indianapolis, IN: Hackett Publishing.
Polanyi, M. (1962). Personal knowledge: Towards a post-critical philosophy. Chicago, IL: University of Chicago Press.
Polanyi, M. (1964). Science, faith and society: A searching examination of the meaning and nature of scientific inquiry. Chicago, IL: University of Chicago Press.
Popkin, S. L. (1994). The reasoning voter: Communication and persuasion in presidential campaigns. 2nd ed. Chicago, IL: University of Chicago Press.
Popper, K. (2002). The logic of scientific discovery. London: Routledge. (Original work published 1959)
Popper, K. (1979). Objective knowledge: An evolutionary approach. Revised edition. Oxford: Oxford University Press.
Reitman, J. (Director & Writer). (2005). Thank you for smoking. United States: Fox Searchlight Pictures.
Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
Sachs, J. D. (2011). The price of civilization: Reawakening American virtue and prosperity. New York: Random House.
Sagan, C. (1996). The demon-haunted world: Science as a candle in the dark. New York: Random House.
Schiff, S. (2010). Cleopatra: A life. New York: Back Bay Books.
Sen, A. (2009). The idea of justice. Cambridge, MA: Harvard University Press.
Shapin, S. (2010). Never pure: Historical studies of science as if it was produced by people with bodies, situated in time, space, culture, and society, and struggling for credibility and authority. Baltimore, MD: Johns Hopkins University Press.
Shenkman, R. (2008). Just how stupid are we? Facing the truth about the American voter. New York: Basic Books.
Solow, R. M. (1997). How did economics get that way and what way did it get? In T. Bender & C. E. Schorske (Eds.), American academic culture in transformation (pp. 57-76). Princeton, NJ: Princeton University Press.
Taubes, G. (2007). Good calories, bad calories. New York: Anchor.
Thaler, R. H. (2015). Misbehaving: The making of behavioral economics. New York: W. W. Norton.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale University Press.
The death of facts in an age of truthiness. (2012, April 29). National Public Radio. Retrieved Dec. 3, 2012, from www.npr.org
Toulmin, S. (1958). The uses of argument. Cambridge, UK: Cambridge University Press.
Toulmin, S. (1961). Foresight and understanding: An inquiry into the aims of science. New York: Harper.
Toulmin, S. (2001). Return to reason. Cambridge, MA: Harvard University Press.
Toulmin, S., Rieke, R., & Janik, A. (1979). An introduction to reasoning. New York: Macmillan.
Tye, L. (1998). The father of spin: Edward L. Bernays & the birth of public relations. New York: Crown.
Watters, E. (2013, March/April). We aren’t the world. Pacific Standard, 46-53.
Wheelan, C. (2013). Naked statistics: Stripping the dread from the data. New York: W. W. Norton.
Zimmer, C. (2012, April 16). A sharp rise in retractions prompts calls for reform. The New York Times. Retrieved Dec. 3, 2012, from www.nytimes.com
To cite this chapter in a reference page using APA:
Beach, J. M. (2013). Title of chapter. In 21st century literacy: Constructing & debating knowledge. Retrieved date from www.21centurylit.org
To cite this chapter in an in-text citation using APA:
(Beach, 2013, ch 6, para. #).
© J. M. Beach 2013, Revised 2016