What defines moral beliefs

Moral failure. Psychological causes and their implications for moral practice

Table of Contents

1 Introduction

2. Moral thinking
2.1 The cognitive developmental approach
2.2 Moral Failure and Moral Practice

3. Moral intuitions
3.1 Social Intuitionist Model
3.2 How moral intuitions work
3.2.1 Heuristics
3.2.2 Priming
3.2.3 Discussion
3.3 Moral Failure
3.4 Implications for Moral Practice

4. Moral motivation
4.1 Studies on VMPFC patients
4.2 How moral emotions work
4.2.1 Moral disgust
4.2.2 Cultural dependence of moral emotions
4.2.3 Discussion
4.3 Moral Failure
4.4 Implications for Moral Practice

5. Moral situations
5.1 Psychological situationism
5.2 Moral Failure and Moral Practice

6. Summary

7. Bibliography

1 Introduction

The place was completely quiet. The men of Reserve Police Battalion 101 climbed off their trucks and gathered in a semicircle around Major Wilhelm Trapp, a fifty-three-year-old career police officer whom his subordinates affectionately called "Papa Trapp." [...] Trapp was pale and nervous, had tears in his eyes, and visibly struggled to keep his emotions under control as he spoke. The battalion was facing a terribly unpleasant task, he explained in a tear-choked voice. [...] The Jews had instigated the American boycott that harmed Germany, Trapp is said to have explained, according to the recollection of one of the policemen involved. [...] The battalion now had orders to round up these Jews. The men of working age were to be separated from the others and taken to a labor camp, while the remaining Jews - women, children and older men - were to be shot on the spot by the police battalion. After Trapp had explained to his men in this way what lay in store for them, he made an extraordinary offer: those among the older men who did not feel up to the task could step aside.[1]

After a brief period of reflection, only about a dozen of the roughly 500 reserve policemen came forward and surrendered their rifles. In the course of July 13, 1942, at least 1,500 Jews were shot in the Polish village of Józefów.[2]

In his carefully researched case study of the men of Reserve Police Battalion 101, the historian Christopher Browning concludes that the men who turned down their visibly desperate major's offer on July 13 were neither staunch anti-Semites nor hardened killing machines. The battalion consisted of men around 40 years of age who had been found unfit for the Wehrmacht and had not yet experienced any military conflict.[3] Because of their age, the formative years of their socialization lay before the Nazi era. They also came from Hamburg, of all major German cities the one least influenced by National Socialism, and from a social class with an anti-National Socialist culture.[4] After the initial shootings, many of the men began to cry, vomit, or show other signs of severe psychological distress. Many asked to be replaced or tried in some other way to avoid participating in further executions.[5]

If one assumes, on the basis of Browning's characterization, that these completely normal men also held completely normal moral beliefs that disapproved of the arbitrary killing of civilians, the behavior of the reserve policemen can be described as moral failure. How is the moral failure of almost all the men of Reserve Police Battalion 101 to be explained? Can such moral failure be prevented?

In order to deal with this phenomenon in a structured manner, the concept of moral failure must first be narrowed down. In the following, moral failure is defined as the failure of an individual to act in accordance with his or her consciously held moral beliefs. Moral beliefs are understood here as general beliefs about whether a specific action should be approved or rejected from a moral point of view, regardless of how these beliefs were acquired. They are consciously held when the individual assumes that he or she has them and that they are valid for his or her own actions.

In order to investigate moral failure as outlined here, the present work draws on the results of a research field that has attracted the interest of various disciplines over the past few decades: moral psychology. Evolutionary biologists, cognitive scientists, neuroscientists, social psychologists and, increasingly, philosophers are concerned here with the origins and mechanisms of moral thought and action.

In the following, an attempt is made to give a systematic overview of the moral-psychological research relevant to the investigation of moral behavior. For this purpose, the complex construct of 'moral behavior' is divided into four thematic sections according to the focal points of moral-psychological research.

As a starting point, the importance of mental processes for moral action is discussed. To this end, Section 2 begins with Lawrence Kohlberg's developmental-psychological approach and its focus on conscious moral thinking. Kohlberg's work is then confronted in Section 3 with the work of Jonathan Haidt, which places an opposing emphasis on unconscious moral intuitions. Building on this, Section 4 examines the connection between moral beliefs and moral behavior more closely: on the basis of the neuroscientific work of Antonio Damasio and his colleagues, the nature of moral motivation is examined, with particular emphasis on the role of moral emotions. Drawing on classical social-psychological studies, Section 5 concludes by examining the influence of moral situations on moral behavior and discussing the connection to the previous results. In each of these sections, causes of moral failure are identified separately, and suggestions are then discussed as to how the likelihood of moral failure can be reduced.

2. Moral thinking

As part of the cognitive turn,[6] Lawrence Kohlberg started a research program in 1958 that would establish morality as an independent research field in psychology in the years to come. From a developmental perspective, Kohlberg was primarily interested in the phenomenon of moral thinking, which he understood as the conscious process of using ordinary moral language.[7]

2.1 The cognitive developmental approach

In a longitudinal study lasting almost 30 years, he developed his theory of the stages of moral development on the basis of interviews about recommendations for action in hypothetical moral dilemmas. A higher stage represents a more adequate approach to solving moral dilemmas according to moral-philosophical criteria.[8] The three main levels of moral development in Kohlberg's theory are: (1) the pre-conventional level, at which moral evaluation is based on the material and hedonistic consequences of actions (punishment, reward, exchange of favors); (2) the conventional level, at which moral evaluation is based on conformity with the expectations of others (reference groups, family, nation, etc.) and the maintenance of the social order; and (3) the post-conventional level, at which moral evaluation is based on an effort to determine moral principles and norms that are independent of existing authorities and their principles.[9]

Although Kohlberg focused on moral development, he also made statements about moral behavior that are of interest for our study of moral failure. Kohlberg assumes that a moral act must be based on conscious moral considerations in order to count as such. A conscious moral judgment about whether an action is morally right or wrong thus becomes a prerequisite for the occurrence of a moral act.[10] On the basis of a literature review by Augusto Blasi, which concludes that moral thinking and moral action are positively correlated, Kohlberg infers that the higher the moral stage, i.e. the more developed the ability for moral thinking, the more likely individuals are to act on what they consider right.[11] For Kohlberg, the motivation to act morally arises inherently from an understanding of moral principles.[12] This reflects the view that, in the course of their moral development, people become increasingly convinced of the need to stand up for their moral convictions in action.[13]

2.2 Moral Failure and Moral Practice

According to Kohlberg's considerations, how can moral failure occur? That someone does not act according to what he considers right would, for Kohlberg, be based on an insufficiently developed understanding of moral principles. As a result, moral beliefs lack the inherent motivation: an individual may be convinced that the arbitrary killing of civilians is morally wrong, but this belief is based (assuming, for example, the conventional level) on the expectations of the social environment. The universal validity of the principle is not recognized. If the reference groups, institutions or authorities to which a moral conviction is tied lose their relevance for the individual, the motivation to act accordingly also disappears. In deciding not to accept their major's offer, the men of the reserve battalion, as evidenced by the minutes of the court hearings, did indeed appear concerned not with moral principles that might still have played a role in their civilian lives, but primarily with their own reputation among their comrades.[14] If one follows Kohlberg, however, it becomes problematic to accuse the reserve policemen of moral failure in the sense defined here. This is due to the normative orientation of Kohlberg's theory (see above), which has the consequence that moral failure cannot be determined within an individual, but only by means of an external moral standard (the post-conventional level). Trapp's men would not have acted against their moral convictions; rather, they would never genuinely have held the supposed conviction that killing the Jews was morally wrong. On the contrary, they even acted according to what they believed to be right according to their understanding of morality: conformity with the expectations of their social environment.

How can the likelihood of moral failure be reduced if it is viewed as a consequence of an inadequate understanding of moral principles? In Kohlberg's approach, the remedy would certainly be seen in stimulating higher moral development, and with it the ability to think morally. Since Kohlberg believes that moral thinking requires role-taking, moral development would be stimulated by environments that offer role-taking opportunities, i.e. in which one has to put oneself in the shoes of the various people involved in a moral conflict. Kohlberg therefore regards group discussions of moral dilemmas as an important educational tool.[15]

The question arises, however, why Kohlberg makes a conscious judgment part of the very definition of how a moral act comes about. Major Trapp's statement to his driver may be a first indication that the reserve policemen may have known intuitively that they were acting immorally in Józefów: "If this Jewish thing takes its revenge on earth, then have mercy on us Germans."[16]

3. Moral intuitions

Kohlberg's developmental approach dominated the psychological investigation of moral thought and action until the end of the 1980s. Then, at the beginning of the 1990s, along with the affective turn,[17] a new synthesis began in moral psychology: findings from social psychology, the neurosciences and evolutionary theory, among others, gave new impulses for answering the questions of moral-psychological research.[18] One of the most significant works here is the Social Intuitionist Model (SIM) of the psychologist Jonathan Haidt, who, contrary to Kohlberg's theory, assumes that unconscious intuitions play a much greater role in our moral life than conscious moral thinking.[19] In the following, based on the SIM, it will first be examined whether it is plausible to assume moral thinking in Kohlberg's sense as the starting point for moral action (and thus also for moral failure).

3.1 Social Intuitionist Model

When questioning test subjects about their moral judgments, Haidt and his colleagues observed an interesting phenomenon in some cases: most test subjects gave their judgment almost immediately, but then had problems giving reasons for it as the questioning continued. For example, test subjects stated that it was morally wrong to eat one's own dog because one could get sick. When the interviewer then pointed out that there was no risk of disease because the meat was cooked, most of the test subjects did not change their moral judgment, but tried to find further reasons.[20] However, because the cases were so carefully constructed that most of the reasons given did not apply, the test subjects ultimately fell back on the characteristic statement: "I don't know. I can't explain it. I just know it's wrong."[21] Haidt and his colleagues refer to this state of being unable to justify one's own judgment as moral dumbfounding.[22]

In the context of his SIM, Haidt explains this phenomenon by stating that moral judgments are generally not based on conscious moral thinking, but on unconscious moral intuitions. In order to avoid conceptual confusion, Haidt's underlying definitions are presented here and will also be used in the following. Haidt defines moral intuitions as follows:

[...] moral intuition can be defined as the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion.[23]

He does regard moral intuitions as cognition,[24] but sharply distinguishes them from moral thinking, which he defines as follows:

[…] Moral reasoning can […] be defined as conscious mental activity that consists of transforming given information about people in order to reach a moral judgment. To say that moral reasoning is a conscious process means that the process is intentional, effortful, and controllable and that the reasoner is aware that it is going on.[25]

In the context of cognitive-scientific dual-process theories, Haidt assumes that moral intuitions, as a default process, handle everyday moral judgments automatically, i.e. quickly and without conscious effort. Moral thinking as a controlled process, which is slow and conscious and depends heavily on verbal thinking, is only switched on when intuitions contradict each other or the situation requires a more thorough examination. For Haidt, the fact that Kohlberg perceived moral thinking as the dominant process is due to the fact that the Moral Judgment Interviews used by Kohlberg evoke an unnatural and unrepresentative form of moral judgment: Kohlberg's subjects were faced with an unknown researcher who constantly questioned their judgments and thus set the process of thorough moral thinking in motion.[26]

Under normal circumstances, however, moral thinking is not involved in the formation of a moral judgment, but occurs only post hoc, when the social environment demands from an individual a justification for his judgment.[27] This post-hoc process of moral thinking is then, however, not impartial, but aimed at searching for arguments that justify the initial intuition: "The reasoning process is more like a lawyer defending a client than a judge or scientist seeking truth."[28]

According to Haidt, the peculiarity of the post-hoc process of being able to provide justifications for intuitive moral judgments immediately creates an illusion of objective moral decision-making. When people are asked to explain their decisions or behavior, it feels to them like a kind of introspection. However, they cannot be retrieving a memory of the actual cognitive processes that influenced their behavior, since these processes are not accessible to consciousness. Rather, they look for plausible theories that explain why someone might have done what they did.[29] Among other things, Haidt refers to neuroscientific studies with so-called split-brain patients, which suggest that specific brain areas associated with conscious language function as an interpretation module and provide convincing explanations post hoc:

When a [split-brain] patient performs an action caused by a stimulus presented to the right cerebral hemisphere (e.g., getting up and walking away), the left hemisphere, which controls language, does not say 'Hey, I wonder why I'm doing this!' Rather, it makes up a reason, such as 'I'm going to get a soda.'[30]

The SIM does not, however, claim that moral thinking, and in this sense objective decision-making, does not exist, but rather that its causal importance for moral judgment is significantly overestimated.[31] Haidt explicitly admits the possibility that people can, in Kohlberg's sense, reason their way through logic and careful consideration to a judgment that contradicts their initial intuition. However, he regards this as extremely rare.[32]

The SIM thus also calls Kohlberg's considerations on moral action into question: if moral thinking in Kohlberg's sense only rarely plays a role in the emergence of moral judgments, then, given a purely descriptive interest in the mechanisms of moral action, it is not plausible to tie moral action by definition to such judgments. It follows that Kohlberg's theory can only capture a small subset of possible moral actions. The same applies to the corresponding cause of moral failure: only in cases in which a moral judgment was made on the basis of moral thinking, and action was taken against this judgment, can an insufficiently developed moral understanding be held responsible for it. It therefore seems more plausible to define moral action as action in morally relevant situations, i.e. situations that people perceive as moral.[33] This definition leaves open the question of which processes were involved in the emergence of a moral act and thus also includes processes outside the sphere of conscious moral thought.

3.2 How moral intuitions work

If, on the basis of the foregoing considerations, moral thinking is excluded as the starting point for moral action, the question arises as to what the relationship between moral intuition and moral action looks like. In order to clarify this, the functioning of moral intuitions must first be examined in more detail. Although Haidt explains the general functioning of moral intuitions in the SIM, he is not precise in naming the underlying psychological mechanisms. For example, as evidence for the importance of automatic processes, Haidt refers to mechanisms as different as the halo effect, the automatic application of stereotypes and the 'I agree with people I like' heuristic.[34]

Two of these mechanisms - heuristics and the priming of stereotypes - are considered below.[35]

3.2.1 Heuristics

Haidt's remarks leave it unclear to what extent moral intuitions are affective and how they would then differ from emotional processes.[36] The psychologist Gerd Gigerenzer proposes to explain moral intuitions through heuristics, as this allows, among other things, the difficult distinction between emotional and rational processes to be replaced by the simpler distinction between conscious and unconscious causes of moral judgments.[37] Given that Haidt seems to regard heuristics at least as a subclass of moral intuitions (see above), heuristics and their relationship to moral action are examined in more detail below.

Psychological heuristics are mental shortcuts or rules of thumb that usually operate unconsciously and enable quick decisions on the basis of very little information.[38] The psychologists Amos Tversky and Daniel Kahneman played a pioneering role in researching psychological heuristics. They were able to show that people rely on a limited number of heuristic principles in order to reduce complex tasks, for example the assessment of probabilities, to simpler judgment operations.[39] The research program of Tversky and Kahneman is generally understood as the opposite of Gigerenzer's: Tversky and Kahneman investigate how heuristics lead to incorrect decisions, whereas Gigerenzer emphasizes that simple heuristics can lead to better results than decision rules that use far more information and complicated calculations.[40] There is, however, broad consensus that heuristics in typical environments lead to judgments that are accurate by rational criteria; otherwise it would be difficult to explain why heuristics have become established evolutionarily. Nor is it disputed that heuristics can lead to serious errors in unusual environments.[41] Whether there are specifically moral heuristics is controversial in the literature; it is undisputed, however, that certain heuristics that are not per se tied to moral contexts also operate in morally relevant situations.[42] Following Gigerenzer, it is therefore assumed here that there are no specifically moral heuristics:

Heuristics that underlie moral actions are largely the same as those underlying behavior that is not morally tinged. They are constructed from the same building blocks in the adaptive toolbox. That is, one and the same heuristic can solve both problems that we call moral and those we do not.[43]

What, then, is the relationship between heuristics and moral behavior? As the quotation already suggests, Gigerenzer apparently assumes that morally relevant heuristics are by definition relevant to behavior.[44] In this context, Gigerenzer refers, for example, to the 'Do what the majority do' heuristic, which states that people behave in a certain way when they see that the majority of their reference group behaves that way.[45] According to Gigerenzer, however, behavior cannot be viewed as the result of a specific heuristic alone, but can only be explained in conjunction with the social environment. One and the same heuristic can lead to an action that is considered moral in one situation and immoral in another.[46]

Gigerenzer assumes that the majority of our moral behavior is based on heuristics, but does not provide any further evidence for this.[47] Given that heuristics constitute only a subclass of moral intuitions, at least for Haidt, and that research so far has identified only a few heuristics whose moral relevance is undisputed, the question arises whether there are other mechanisms that fall under the aforementioned definition of moral intuitions and exert an influence on moral behavior.

3.2.2 Priming

The psychologist John Bargh, on whose work Haidt bases his distinction between moral thinking and moral intuition, was able to show that so-called priming can have an impact on social behavior.[48] Priming refers to the incidental activation of knowledge structures by the current situational context.[49] In the experiment carried out by Bargh et al., knowledge structures of moral relevance were activated: racial stereotypes. Bargh and his colleagues instructed American students to perform a computerized task that was supposedly designed to reveal how judgments are made on the basis of various visual aspects. The test subjects were asked to indicate, for a series of images that each appeared on the screen for a few seconds, whether an even or odd number of circles could be seen. Before the series of trials began, however, either the face of a young African-American man or that of a Caucasian man flashed briefly on each computer.[50] After the 130th trial, an error message appeared stating that the task had to be repeated completely due to a data error. The test subjects were then asked to fill out two standardized questionnaires measuring racist attitudes, on the grounds that this was part of the preparation for a future experiment unrelated to the computer task they had completed. The results showed that the subjects who had incidentally been shown an African-American face at the beginning of the experiment reacted with far more hostility to the error message than those who had been presented with a Caucasian face.[51] Bargh and colleagues also found that subjects were equally likely to express hostility regardless of whether they scored high or low on the questionnaires.[52]

Priming thus appears to be another mechanism that falls under Haidt's definition of intuition. Assuming that the reaction Bargh and his colleagues observed can occur not only in response to an error message but also in morally relevant situations, priming would also exert an influence on moral behavior.

3.2.3 Discussion

To what extent is the strong thesis plausible that moral intuitions guide the majority of our moral actions and that we are retrospectively subject to the illusion of having justified our behavior objectively and consciously? The preceding remarks on heuristics and priming initially suggest that moral intuitions do direct moral action.

However, there is also criticism of the SIM that casts doubt on the generalizability of this direct connection between moral intuition and action. The psychologist Darcia Narvaez argues that the SIM is limited to a small selection of the processes relevant to moral psychology, since it is a model of moral judgments and not of moral decisions. Although Narvaez admits that moral intuitions can play an important role in moral action, she argues that there are also cases in which people consciously make moral decisions and engage in moral thinking in order to weigh options and consequences of action. For Narvaez, moral decisions include, among other things, setting moral goals and plans, weighing various alternative courses of action and determining one's own responsibilities.[53] The SIM makes no room for these considerations. A related criticism is provided by Monin and his colleagues. They point out that the hypothetical dilemmas in Kohlberg's theory trigger moral thinking because test subjects have to put themselves in the perspective of the actor in order to reach a decision about how to act, whereas Haidt presents his test subjects with cases in which the behavior of others can be judged simply and intuitively:

[...] when judging the behavior of others, we often use knee-jerk reactions and gut feelings, whereas when deciding what the right course of action should be for our own life, we are more circumspect and mobilize our cognitive resources (if the stakes are high enough) to bring to bear the heavy machinery of moral reasoning.[54]

In response to Narvaez's criticism, Haidt and Bjorklund concede that the SIM primarily maps the causal processes of moral judgment and not of moral decision-making. These processes are in themselves very different because, in contrast to a decision about an action, there is little at stake for someone passing judgment: his judgment has no real consequences for himself or others.[55] Haidt and Bjorklund then suggest that the SIM can easily be turned into a model of choosing morally relevant actions by highlighting the possibility of private reflection and moral thinking and adding a simple statement to the model: "[…] when making real behavioral choices, people often do deliberate."[56] Thus, even though Haidt and Bjorklund agree that moral intuitions are of far less importance in moral decision-making than in judging, they still seem to believe that most moral actions are based not on moral decisions but on intuitions:

[...] for most morally relevant actions, there is no deliberation; we all do the right thing most of the time without thinking about it. Even heroes who jump into rushing rivers to save people’s lives generally state, when interviewed afterwards, that they didn’t think about it; they just acted. The importance of automaticity in moral judgment can therefore be brought into moral action as well.[57]

The distinction between moral decisions and moral judgments proposed by Narvaez thus does not seem particularly relevant to moral action for Haidt and Bjorklund. From their remarks, however, it does not become clear why they admit, on the one hand, that when people have to choose an action this rests more on processes of moral thinking, and on the other hand deny that processes of moral thinking play a major role in moral action.

As the previous section shows, moral intuitions thus seem to be able to influence moral action directly, without an intervening process of moral decision-making. What determines whether a decision-making process is initiated at all, or whether one simply relies on intuition? In their discussion of the SIM, the philosopher Jeanette Kennett and the psychologist Cordelia Fine take a closer look at the underlying dual-process theories. They argue that the SIM overestimates the importance of automatic processes and, in turn, downplays the importance of controlled processes.[58] The purpose of controlled processes such as moral thinking is precisely to regulate the effects of automatically activated processes when these contradict consciously held beliefs or goals. Furthermore, they refer to studies suggesting that no behavior or judgment is the result of purely automatic or purely controlled processes, but to some extent of both. Only the exercise of controlled processes allows judgments and behaviors that accord with consciously held moral convictions, but it depends on available cognitive capacities.[59] In this context, they refer to studies on the control of automatic stereotypes, in which these had a greater influence when judgments or behavior were required spontaneously, under time pressure or in cognitively demanding situations. By contrast, they cite studies in which people had to react less spontaneously or in less cognitively demanding situations, and in which the behavior of the test subjects could be predicted more easily from their own statements.[60] That Haidt and Bjorklund invoke the archetypal hero who jumps into a river to save people and afterwards claims not to have thought about it thus does not seem to show that moral action is generally governed by moral intuitions, but rather that people act on their gut instinct in situations that leave them no time to prepare and think.


[1] Browning 1999: p. 22.

[2] Cf. ibid.: p. 105.

[3] Cf. ibid.: p. 66.

[4] Cf. ibid.: p. 69ff.

[5] Cf. ibid.: p. 99ff.

[6] The transition from psychological behaviorism to cognitivism in the 1950s is commonly referred to as the cognitive turn.

[7] Cf. Kohlberg et al. 1984a: p. 313.

[8] Cf. Kohlberg 1976: p. 159.

[9] Cf. ibid.: p. 128ff.

[10] Cf. Kohlberg et al. 1984a: p. 290f.

[11] Cf. Blasi 1980: p. 37 and Kohlberg et al. 1984a: p. 284.

[12] Cf. Hardy / Carlo 2005: p. 233.

[13] Cf. Kohlberg / Candee 1984b: p. 425.

[14] Cf. Browning 1999: p. 106.

[15] Cf. Kohlberg 1976: p. 165f.

[16] Browning 1999: p. 90.

[17] Designation for the research focus on emotional processes that followed the cognitive turn.

[18] Cf. Haidt 2007: p. 998.

[19] Cf. Haidt 2001: p. 818.

[20] Cf. Haidt / Bjorklund 2007a: p. 197.

[21] Haidt 2001: p. 814.

[22] Cf. Haidt / Bjorklund 2007a: p. 197.

[23] Haidt 2001: p. 818.

[24] Haidt / Bjorklund 2007a: p. 200: "[...] the SIM is not about 'cognition' and 'emotion'; it is about two kinds of cognition: fast intuition (which is sometimes but not always a part of an emotional response) and slow reasoning."

[25] Haidt 2001: p. 818.

[26] Cf. ibid.: p. 818.

[27] Cf. ibid.: p. 814.

[28] Ibid.: p. 820.

[29] Cf. ibid.: p. 822.

[30] Haidt / Bjorklund 2007a: p. 190.

[31] Cf. ibid.: p. 200.

[32] Cf. ibid.: p. 193.

[33] On the discrepancy between the domain of morality as defined in Kohlberg's theory and the domains that people from different cultures perceive as moral, cf. Haidt / Joseph 2007: p. 370ff.

[34] Cf. Haidt 2001: p. 820.

[35] Note: the halo effect is not considered separately due to its low relevance for moral action.

[36] Cf. Monin et al. 2007: p. 226f.

[37] Cf. Gigerenzer 2007: p. 10.

[38] Cf. Sunstein 2005: p. 531, Sinnott-Armstrong et al. 2010: p. 249 and Gigerenzer 2007: p. 4.

[39] Cf. Tversky / Kahneman 1974: p. 1124.

[40] Cf. Sinnott-Armstrong et al. 2010: p. 248 and Gigerenzer 2010: p. 536.

[41] Cf. Sinnott-Armstrong et al. 2010: p. 249.

[42] Cf. ibid.: p. 253ff.

[43] Gigerenzer 2007: p. 9.

[44] Cf. Gigerenzer 2010: p. 529.

[45] Cf. Gigerenzer 2007: p. 9f.

[46] Cf. Gigerenzer 2010: p. 528f.

[47] Cf. ibid.

[48] Cf. Haidt 2007: p. 998.

[49] Cf. Bargh et al. 1996: p. 230.

[50] Cf. ibid.: p. 238.

[51] Cf. ibid.: the subjects' hostility was determined on the one hand by video recording of their facial expressions and on the other hand on the basis of their reaction to the experimenter.

[52] Cf. ibid.: p. 239.

[53] Cf. Narvaez 2007: p. 233f.

[54] Monin et al. 2007: p. 228.

[55] Cf. Haidt / Bjorklund 2007b: p. 242.

[56] Ibid.: p. 243.

[57] Ibid.: p. 244.

[58] Cf. Kennett / Fine 2009: p. 88.

[59] Cf. ibid.: p. 78f.

[60] Cf. ibid.: p. 79.
