LOGICAL COHERENCE AND HONESTY IN PUBLIC DELIBERATION

Mario Graziano

Table of contents

  1. The logical principle of coherence
  2. The value of honesty
  3. Internal and external considerations
  4. Conclusion

Abstract

One of the central challenges for modern democracies is enabling citizens with divergent preferences to reach shared decisions. Unlike voting or negotiation, deliberation encourages participants to revise their initial positions through rational dialogue aimed at identifying the most just or well-founded choice. However, deliberation should not be understood merely as an expression of personal honesty or coherence, but as a collective search for the best possible decision through reasoned exchange. While coherence and honesty are valuable, treating them as ultimate goals distorts deliberative democracy, shifting it from a decision-oriented process toward a person-centered one focused on interpersonal respect.

1. The logical principle of coherence

According to most mainstream decision theorists (as well as much of economics and related work in other social sciences), rational agents are those with consistent preferences who maximize their utility (or expected utility). In a widely accepted interpretation of decision theory[1], the maximization of expected utility follows from the internal consistency of preferences, provided that the agent chooses what they prefer. Here, expected utility maximization is viewed as an optimization process: rational agents are expected to make the best possible choices—those that yield the greatest returns. However, this requirement to optimize does not need to be explicitly stated as an axiom of rationality. Instead, it naturally arises from the condition that an agent’s preferences exhibit a specific form of consistency. As Mele and Rawling note, «On certain decision-theoretic approaches… rationality requires only that one’s preferences meet certain ordering criteria»[2]. Preferences that satisfy these criteria inherently lead to maximization, as long as the agent acts in accordance with them—choosing what they prefer.
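The link between ordered preferences and maximization can be made concrete with a small numerical sketch. This is purely illustrative; the lotteries, outcomes, and utility values below are invented for the example. An agent whose preferences admit a utility representation, and who chooses what they prefer, thereby picks the option with the highest expected utility.

```python
# Illustrative sketch of expected-utility maximization.
# The lotteries and utility values are invented for the example.

def expected_utility(lottery, utility):
    """Sum of probability-weighted utilities over a lottery's outcomes."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

utility = {"win_large": 10, "win_small": 4, "nothing": 0}

lotteries = {
    "safe":  {"win_small": 1.0},                  # guaranteed small win
    "risky": {"win_large": 0.5, "nothing": 0.5},  # coin flip on a large win
}

# A classically rational agent selects the option with the highest expected utility.
best = max(lotteries, key=lambda name: expected_utility(lotteries[name], utility))
print(best)  # "risky": 0.5 * 10 = 5.0 > 4.0
```

The choice here is nothing but the argmax over the agent’s own utility-weighted ranking, which is the sense in which maximization falls out of consistent preferences rather than being imposed as a separate axiom.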

The classical conception of rationality emphasizes both consistency and optimization. Consistency is a property of a system, whether it be an agent’s belief system, their system of preferences, or the combination of their beliefs and intentions as a whole. The ideal of consistency is independent of both context and content, applying universally across domains. A classically rational agent satisfies specific consistency conditions in their beliefs and preferences, such as: not holding contradictory beliefs (i.e., not believing both p and not-p simultaneously); not exhibiting cyclical preferences (e.g., not preferring outcome a over b, b over c, and c over a at the same time); and not judging the conjunction of two events (x and y) to be more probable than either event occurring individually.
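The consistency conditions just listed are mechanically checkable. A minimal sketch, with preference pairs and probability judgments invented for illustration, that detects the two violations mentioned above—cyclical preferences and the conjunction fallacy:

```python
# Illustrative consistency checks on an agent's reported preferences
# and probability judgments. All data below are invented for the example.

def has_cycle(prefs):
    """Detect a cycle in strict pairwise preferences, given as (better, worse) pairs."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)

    def reachable(start, target, seen=None):
        seen = seen if seen is not None else set()
        for nxt in graph.get(start, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    # A cycle exists if some "worse" option is itself preferred to its "better".
    return any(reachable(worse, better) for better, worse in prefs)

# a over b, b over c, c over a: the cyclical pattern the text rules out.
print(has_cycle([("a", "b"), ("b", "c"), ("c", "a")]))  # True
print(has_cycle([("a", "b"), ("b", "c")]))              # False

def violates_conjunction_rule(p_x, p_y, p_x_and_y):
    """P(x and y) must not exceed P(x) or P(y) taken individually."""
    return p_x_and_y > min(p_x, p_y)

print(violates_conjunction_rule(0.3, 0.4, 0.5))  # True: the conjunction fallacy
```

Note that these checks are context- and content-free, exactly as the classical ideal of consistency demands: they inspect only the formal structure of the preferences and judgments, never what the options actually are.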

Optimization, on the other hand, is a constraint on goals or outcomes. To optimize is to find the best possible solution to a problem, make the best choices, and generally achieve the highest possible level of success. Unlike consistency, optimization is not entirely independent of context and content, as the best decision always depends on available options. However, it is generally indifferent to specific contextual details. Ensuring the best possible outcome typically requires evaluating all potential alternatives in light of all relevant information. As a result, a classically rational optimizer would, in principle, conduct an exhaustive search for the optimal choice, regardless of context.

These two pillars of classical rationality—consistency and optimization—are closely interconnected. One key link is that the requirement for an agent’s beliefs to be internally consistent can itself be seen as a form of optimization. This view faces difficult theoretical and empirical questions. It is not immediately clear whether this theory is descriptive or normative.

Does it describe how people actually behave, or does it prescribe a standard that people ought to follow?

The statement that “a rational agent” behaves in a certain way allows for both interpretations—or a combination of the two. In philosophy, the prevailing tendency has been to view classical rationality as primarily normative. According to this perspective, rationality is largely about the extent to which one’s beliefs, desires, intentions, and other mental states conform to certain standards. Beliefs should be justified and internally consistent; intentions should align with one’s beliefs and desires; and individuals should strive to maximize the fulfillment of their goals. Agents are rational to the extent that they meet these standards. Over the past few decades, extensive research—beginning with the pioneering work of Kahneman and Tversky—has demonstrated that participants in a wide range of tasks systematically deviate from logical and probabilistic norms of rationality[3]. Their responses often appear to contradict fundamental principles of logic and probability theory, leading to conclusions that do not follow from the available information and failing to account for all relevant evidence. These findings suggest the presence of pervasive cognitive biases, best explained by a strong tendency to rely on inappropriate non-logical rules or heuristics.

Gerd Gigerenzer and colleagues have shown that computationally simple heuristics can effectively address complex problems of choice and judgment. The strengths and limitations of these heuristics align closely with human performance: they are fast and efficient, disregard much of the available information, often rely on heuristically-driven but formally invalid shortcuts, violate classical consistency and transitivity constraints, and favor satisficing over maximizing strategies[4].

These cognitive processes are often fast and simple, and their outcomes cannot be reliably predicted by assuming that reasoning will yield logically normative answers or perfectly align with the environment. In short, reasoning is bounded. Rather than focusing on optimization, proponents of bounded rationality emphasize efficient processes that produce solutions that are good enough, where what qualifies as “good enough” depends on the task and context.
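The contrast between optimizing and satisficing can be sketched in a few lines. The option names, values, and aspiration level below are invented for illustration: an optimizer must examine every alternative before choosing, whereas a satisficer stops at the first alternative that is good enough relative to an aspiration level.

```python
# Illustrative contrast between exhaustive optimization and satisficing search.
# Option values and the aspiration level are invented for the example.

def optimize(options, value):
    """Exhaustive search: examine every option and return the best one."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Bounded search: return the first option that meets the aspiration level."""
    for option in options:
        if value(option) >= aspiration:
            return option
    return None  # no option was good enough

options = ["apartment_a", "apartment_b", "apartment_c"]
value = {"apartment_a": 6, "apartment_b": 9, "apartment_c": 7}.get

print(optimize(options, value))      # "apartment_b": best overall, but needed a full scan
print(satisfice(options, value, 5))  # "apartment_a": first good-enough option, search stops
```

The point of the sketch is that the satisficer’s answer depends on the aspiration level and the order in which options arrive—precisely the kind of task- and context-dependence that “good enough” implies—while the optimizer’s answer does not.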

The emphasis is on how the properties of a given process enable it to generate effective solutions for a particular task, rather than on the formal properties of the system or its outcomes. Instead of treating consistency as the defining characteristic of belief or preference systems, they regard it as just one among many relevant factors. For instance, Gigerenzer argues that consistency is, at most, a secondary criterion for good decision-making, ranking well below “accuracy, speed, frugality, cost, transparency, and justifiability”[5].

As philosopher Karsten Stueber explains, insufficient attention has been given to the fact that, as is often the case in real-life situations, choices should be seen as the outcome of highly contextualized evaluations[6]. In these contexts, various normative standards and specific aspects of the situation are considered. Sometimes, it can be entirely rational to act in ways that might initially seem irrational. For instance, in certain romantic relationships, a suitor might feign indifference to pique the partner’s curiosity, or a gambler might make seemingly irrational moves to keep an opponent uncertain and off balance with their unpredictability. What appears “irrational” at first glance may prove to be rational over time. Ultimately, there are legitimate reasons for choosing actions that, in specific circumstances, might not seem like the most rational option. From this perspective, norms of rationality are akin to moral norms. Both types of norms guide what is appropriate in certain situations without necessarily requiring constant adherence.

2. The value of honesty

As Michael Smith discusses in his book The Moral Problem[7], assessing moral norms solely based on the frequency of moral actions performed in accordance with a particular norm risks confusing morality with “moral fetishism”. To illustrate this further, consider one of the key normative standards for evaluating whether social behavior is morally appropriate: the value of honesty. Much like rationality norms, there are circumstances where violating this important social norm does not automatically indicate immoral behavior. For example, if my girlfriend asks whether I like the color of the dress she has just bought and, despite finding the dress unattractive, I respond positively to preserve harmony in our relationship, this does not make me an immoral person. Nonetheless, it is crucial to recognize that the justification for deviating from the norm of honesty does not mean the norm is irrelevant in other contexts. Indeed, the norm might still be significant in the context of the dress, as acknowledging my adherence to the norm might cause me to regret violating it, even with good intentions. Therefore, when assessing the coherence or morality of an action, it is essential to consider the other relevant norms for the specific situation and determine whether the agent has applied them correctly, considering their beliefs, desires, resources, and abilities. Often, a violation of a particular norm can be justified (and thus acceptable) by higher-order norms, especially when there is a conflict between these norms.

An anecdote involving President Harry Truman will help illustrate this point. It is reported that President Truman was an enthusiastic poker player, a hobby he kept secret for a long time due to its perceived inappropriateness for a prominent public figure like him.
In March 1946, during a train journey to Missouri with Winston Churchill—who was to deliver his renowned “Iron Curtain” speech—Truman and Churchill, along with some staff members and journalists, played poker until 2:30 in the morning. From the start, it was evident to everyone that Churchill was not very skilled at the game. When Churchill left to use the bathroom, Truman confided to the other players that he intended to spend the rest of the evening deliberately losing every hand, even those with strong cards. He justified this choice as a gesture of respect for the man who had played a significant role in liberating the world. The staff and journalists then followed his example[8]. Like many social practices, poker is governed by specific norms and moral rules. However, poker is unique in that its rules and norms are somewhat inverted, exhibiting “antisocial” characteristics. For instance, in poker, it is deemed imprudent and even unethical to assist other players; revealing the truth about one’s cards can be penalized, while deception is encouraged. Thus, if Churchill had suspected that the other players were trying to help him rather than compete against him or deceive him with bluffs, he might have felt embarrassed or affronted in his role as a player. When the foundational rules of poker and the players’ goals of outmaneuvering each other are disregarded, determining what is considered “good,” “ethically correct,” or a “rational player” becomes a complex issue.

The example of the poker game and the specific case of President Truman is particularly illuminating, as it highlights how, in order to overcome the rigidity and numerous constraints imposed by the principles of honesty and consistency in deliberative processes, some authors have proposed the introduction of a genuine “poker game principle” even in the context of public deliberation[9].

The justification for the Poker Game Principle of Public Deliberation (PGP) is based on an analogy between public deliberation and games like poker, in which certain forms of deception are permitted by the rules. In poker, if you lose a hand because your opponent successfully bluffed, you have not been treated unfairly; whereas if you lose because your opponent had an ace hidden up their sleeve, you have been treated unfairly, for that constitutes illegitimate deception. Similarly, citizens engaged in public deliberation might attempt to deceive or manipulate their co-deliberators, provided that everyone is aware that this possibility exists.

In poker, bluffing is an integral part of the rules — rules that players are expected to know in advance — whereas hiding a card is not. Likewise, according to this analogy, if everyone knows from the outset that in public deliberation it is possible to attempt to manipulate or deceive others, then such behavior would not be considered unfair, because all participants are aware of the “rules of the game.” As in poker, those who fail to call an opponent’s bluff cannot complain: they knew the rules and chose to participate in the game.

However, as Micah Schwartzman rightly observes[10], there are at least four objections to the analogy between public deliberation and the game of poker.

The first concerns the necessity of public deliberation to ensure that we can have sufficient confidence in the beliefs we sincerely hold. Participating in a deliberative process where individuals are permitted — and sometimes even encouraged — to misrepresent the strength of their arguments would make it more difficult to be certain of the correctness of our own views and would increase the risk of holding false beliefs.

The second objection relates to the impact on the cohesion of public debate: the expectation of potential deception would undermine deliberation, fostering polarization among participants, who would come to distrust one another, dismiss opposing arguments, and pay more attention only to those who confirm their preexisting beliefs.

The third critique concerns the idea that allowing insincerity, deception, and manipulation could facilitate the formation of strategic coalitions useful for achieving significant political goals. Schwartzman notes that such behavior still requires a certain level of mutual trust — trust that, for the reasons already outlined, would inevitably be undermined by the very possibility of deception or manipulation.

Finally, Schwartzman emphasizes that the mere fact that deception can be anticipated does not suffice to legitimize it in the context of public deliberation. In poker, bluffing is an accepted feature because it contributes to the enjoyment of the game. In public deliberation, however, it is not enough for participants to be aware of the possibility of deception: there must be a valid reason to allow it, or at least no compelling reason to prohibit it. In the absence of such justification, the poker analogy does not provide a convincing argument in favor of a more permissive principle like the PGP.

It is worth highlighting another important difference between poker and public deliberation. In poker, the costs of not playing or not being a skilled player are generally negligible. In public deliberation, however, failing to participate or participating ineffectively can have significant consequences.

If the rules of poker exclude some players or make them less competitive, this is not a moral or political issue: the game simply may not be for everyone. In public deliberation, on the other hand, every citizen has a strong interest in evaluating the reasons offered in support of others’ proposals and in presenting their own reasons for public scrutiny.

Effective participation depends on the skills required by the deliberative context. Adopting a principle like the PGP would make deliberation far more difficult for those who are unable or unwilling to manipulate or deceive others.

Given these costs and benefits, we therefore have a strong reason to make public deliberation as accessible as possible, and an additional reason to reject a principle like the PGP.

As Brian Carey notes[11], there is another important difference between poker and public deliberation that deserves attention. In real-world poker, players generally follow the rules of the game. In public deliberation, however, this is rarely the case: it is often a messy process in which participants adhere to different standards — public or private — to advance their own agendas, and where no shared consensus exists, except in the broadest terms, on how deliberators ought to behave.

This observation raises an objection both to the poker analogy and to the honesty principle. One might argue that such principles are morally binding only under ideal conditions, when deliberators mutually commit to respecting them. In the real world, however, we might be justified in disregarding them in favor of behaviors that are potentially insincere or dishonest, provided these are more effective in securing agreements or achieving socially desirable outcomes.

While it is unlikely that one could deny the existence of circumstances in which these principles might be violated, it is equally implausible to claim that the real world is so far from ideal that the reasons for adhering to a principle of honesty would almost always be outweighed by reasons for abandoning it. More generally, it is difficult to imagine a society in which meaningful public deliberation is possible if most participants were forced to deceive others regularly. In other words, the occurrence of events that are not always honest or fair does not imply that the real-world context cannot still allow for genuine deliberation.

3. Internal and external considerations

To summarize the discussion in the previous sections, it can be argued that both the principle of consistency and the principle of honesty are excessively restrictive. However, less binding principles, such as the Poker Game Principle of Public Deliberation, appear to undermine the quality of public debate, as they foster feelings of hostility or disrespect among co-deliberators, thereby turning the process into a conflictual confrontation rather than the cooperative engagement it ought to be. Therefore, it is advisable to avoid any practice that makes it more difficult to assess the value of our arguments and those of other participants, as this diminishes the quality of the debate and hinders the ability to determine whether certain solutions or arguments can be justified through public reasons.

This aspect is crucial because it highlights the assumption that not all issues can, nor should, be resolved through deliberation. In summary, a deliberative democracy will resort to the deliberative process only for those matters in which it proves to be appropriate and relevant. From the perspective proposed here, public deliberation represents the most suitable activity for addressing conflicts of judgment[12].

As Michael Neblo observes, once again drawing on the analogy of the poker game, “just as poker ceases to be poker if we are not trying to win, rational discussion ceases to be rational if we are not seeking the right answer”[13]. In essence, the deliberative process, as such, presupposes the existence of a correct (or at least better and more appropriate) answer to the decision at hand; the purpose of exchanging reasons is therefore to reach, insofar as possible, a shared judgment about what that answer is.

Despite the doubts that may arise in a context of social and political pluralism, the notion of a right or better answer nevertheless remains implicit in the very concept of deliberation, regardless of whatever else it may entail. As Michael Fuerstein notes, “the very act of deliberation seems by nature to aim at epistemic value . . . deliberation seems to presuppose some respect in which answers to questions of political morality can be gotten right or wrong”[14].

Of course, participants may or may not succeed in agreeing on what that answer is, and they may reach different or even conflicting judgments about the decision to be made. However, conceptually speaking, the activity of deliberation rests on the assumption that such an answer exists—an assumption that constitutes the very goal of the deliberative process itself.

If this were not the primary goal of the deliberative process, it would be difficult to understand what the purpose of deliberation could be, or how one party in disagreement might persuade the other to accept its point of view.

From this perspective, the issues typically at the core of public deliberation take the form of questions such as: “Which among the various solutions is likely to be the most effective in relation to a given goal?”.

This presupposes the existence of a correct answer, and it is precisely for this reason that we can deliberate meaningfully about what that answer might be. If no correct answers existed, in fact, it would make no sense to decide the matter one way or the other, nor to exchange reasons for doing so.

A particularly effective analogy to further clarify this perspective is that of a jury in a murder trial, tasked with deliberating on the defendant’s guilt or innocence[15]. The very purpose of their activity lies in the fact that the defendant is either guilty or innocent, and it is the jury’s responsibility to determine which of the two is true. Were this not the case—if no correct answer existed—it would be difficult to understand what task is actually being assigned to the jury.

Of course, asserting that jurors are seeking the right answer does not imply that anyone has immediate access to it. The jury may be divided in its final judgment, and individual jurors may have doubts about the most appropriate verdict. Nevertheless, it remains true that their deliberations are guided by the awareness that a correct answer exists and by the commitment to identify it as accurately as possible.

The jury is a rather straightforward example, and not all issues present such clear and well-defined boundaries. Nonetheless, despite its limitations, this analogy serves as a useful reminder that many political decisions share a similar logical structure: those involved in the decision-making process are called upon to seek the right answer or, at the very least, to formulate the best possible judgment given the circumstances.

Naturally, it would be a mistake to reduce all decisions to mere matters of judgment. Sometimes, people simply have different preferences, which need not necessarily be regarded as clearly right or wrong. For example, preferences regarding specific public policies are rarely so straightforward or simple. They are often based on assessments of the object of the preference, such as the expected benefits or pleasures, the costs required to achieve them, the way the object relates to other desired goals, and so on. It is precisely this type of preference that more frequently generates conflicts and utilitarian positions.

Therefore, although preferences themselves are not directly subject to deliberation, it is still possible to deliberate on the judgments upon which they are based. In this sense, one could say that the aim of deliberation is to “refine” or “purify” preferences[16].

In light of these considerations, it becomes clear why the principles of consistency and honesty should be regarded as excessively restrictive criteria. Indeed, they refer in an overly binding way to the individual presenting the arguments in public debate; however, if what we have argued so far is correct, the emphasis should shift to the objectivity of the reasons—that is, to their adherence to criteria of truth, regardless of who formulates them.

Dan Sperber and Hugo Mercier[17] argue that it is highly debatable whether the true purpose of reasoning is to formulate judgments based on individual reflection, as this process tends to be inherently partial and limited. Personal reflection often reduces reasoning to a tool for reinforcing arguments that align with one’s preexisting attitudes and impressions. In contrast, these cognitive philosophers suggest that reflection becomes genuinely productive only when it fosters an exchange of information with others. Their theory suggests that reasoning inherently serves a social and argumentative purpose, functioning as a cognitive mechanism that allows individuals either to seek out and assess information from others or to persuade them of the validity of their own ideas[18]. Arguments, therefore, empower humans, both as creators and evaluators of arguments, to transition from ‘intuitive’ beliefs to ‘reflective’ ones. The key difference is that the former are beliefs held without mentally represented reasons (such as believing it rained yesterday simply because we remember it), while the latter are beliefs accepted because they are backed by solid reasoning (such as believing it will rain tomorrow because the weather forecast predicts it).

Reasoning will effectively enhance the epistemic value of communication only if it occurs in suitable contexts that meet three distinct conditions. The first condition is that reasoning must take place within a dialogue involving conflicting opinions on the subject at hand[19]. The second condition is that the arguments of all participants need to be critically evaluated. The third condition relates to the participants and their motivations, which should be focused on discovering the truth rather than defending rigid, pre-established positions. When these three conditions are fulfilled, discussions can evolve into genuine deliberation, where participants’ beliefs are shaped by argumentative exchanges, leading to gradual consensus.

From this perspective, deliberative beliefs arise from argumentative exchanges and can become factual beliefs if individuals continue to fully endorse the collectively developed understanding.

To assume that reasoning is solely about an individual’s internal states, without considering their interactions with the external context, fundamentally misunderstands the nature of knowledge[20].

As John Greco asserts:

We are social, highly interdependent, information-using, information-sharing beings. As such, it is essential to our form of life that we are able to identify good information and good sources of information. In this context, it is not surprising that we make evaluations concerning how beliefs are formed, their history in relation to other beliefs, why they are believed, etc. In other words, it is not surprising that we make evaluations concerning whether beliefs are reliably and responsibly formed. But evaluations of these sorts involve considerations about accuracy and etiology. And, therefore, evaluations of these sorts are externalist evaluations[21].

It is natural that the etiology, which Greco emphasizes as important for a belief, depends on external factors, as it pertains to the history of that belief and the reasons for holding it—elements that are outside the subject’s perspective.

According to Greco, this also applies to moral evaluations. He states:

We care about which people are good and which actions are right. That is, we care whether, in general, a person is a reliable and responsible moral agent. And we care about whether, in a particular instance, a person acted in a responsible and reliable way. What we don’t care about is artificial, time-slice evaluations such as that S is not more blameworthy at the moment for bringing about some state of affairs than she was the moment before. Neither do we care whether some action A is right relative to S’s own moral norms, in abstraction from questions about how S did A, or why S did A, or whether S’s norms are themselves any good. Of course, we often want to abstract away from some external considerations – we want to abstract away from some or others. The point is that we never want to abstract away from all of them at once. In other words, we have no interest in moral evaluations that are (entirely) internalist[22].

Ultimately, much like epistemic evaluations, moral evaluations are closely linked to the external world.

Therefore, from an externalist perspective, the factors on which the justification of one or more beliefs depends lie outside our individual viewpoint. Consequently, internal notions such as the coherence of preferences or honesty, while not irrelevant, play a secondary role when it comes to determining which of the positions presented in a public deliberative context appears to be the most correct or appropriate for a given issue.

4. Conclusion

In summary, in public deliberation the parties engage in an exchange of reasons with the aim of reaching a shared judgment. Consequently, the very purpose of deliberation does not lie in displaying mutual respect or demonstrating personal coherence and honesty, but rather in arriving at the best possible decision through a rational exchange of arguments.

To regard its main goal as the expression of the participants’ honesty or moral integrity would be to distort the deliberative process itself, transforming deliberative democracy from a decision-oriented project—aimed at identifying the most just or well-founded answer—into a person-oriented project, focused primarily on how individuals relate to one another.

Ultimately, democracy is about decision-making, and citizens deliberate in order to make—or to attempt to make—the best decision possible. They may, of course, present their arguments with coherence and honesty, but they do not deliberate solely or primarily to be honest and consistent; rather, they deliberate to contribute, through mutual reasoning, to identifying the most just choice.


[1] M. Graziano, Epistemology of Decision, Springer, Dordrecht 2012.

[2] A. R. Mele, P. Rawling, Introduction: Aspects of Rationality, in A. R. Mele, P. Rawling (eds.), The Oxford Handbook of Rationality, Oxford University Press, New York 2004, p. 4.

[3] D. Kahneman, A. Tversky, Prospect Theory: An Analysis of Decision under Risk, in «Econometrica», 47, 2, 1979, pp. 263–292; D. Kahneman, A. Tversky, On the Reality of Cognitive Illusions, in «Psychological Review», 103, 3, 1996, pp. 582–591 (discussion, pp. 592–596); J. S. B. T. Evans, In Two Minds: Dual-Process Accounts of Reasoning, in «Trends in Cognitive Sciences», 7, 10, 2003, pp. 454–459; J. S. B. T. Evans, The Heuristic-Analytic Theory of Reasoning: Extension and Evaluation, in «Psychonomic Bulletin & Review», 13, 3, 2006, pp. 378–395.

[4] G. Gigerenzer, On Narrow Norms and Vague Heuristics: A Reply to Kahneman and Tversky, in «Psychological Review», 103, 3, 1996, pp. 592–596; R. Hertwig, G. Gigerenzer, The “Conjunction Fallacy” Revisited: How Intelligent Inferences Look Like Reasoning Errors, in «Journal of Behavioral Decision Making», 12, 1999, pp. 275–305.

[5] G. Gigerenzer, Decision Making: Non-Rational Theories, in N. J. Smelser, P.B. Baltes (eds.), International Encyclopedia of the Social and Behavioral Sciences, Vol. V, Elsevier, Amsterdam 2001, pp. 3304-3309.

[6] K. Stueber, Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences, MIT Press, Cambridge 2006.

[7] M.A. Smith, The Moral Problem, Blackwell, Oxford 1994.

[8] D. A. Brinkley, David Brinkley – a Memoir, Alfred A. Knopf, New York 1996.

[9] B. Carey, Public Reason — Honesty, not Sincerity, in «The Journal of Political Philosophy», 26, 2018, pp. 47–64.

[10] M. Schwartzman, The Sincerity of Public Reason, in «Journal of Political Philosophy», 19, 2011, pp. 375–398.

[11] B. Carey, op. cit., pp. 47–64.

[12] Conversely, it would be difficult to grasp the very meaning of the exchange of reasons if we were not driven by the pursuit of the most just answer or the best possible decision under the given circumstances.

[13] M. Neblo, Deliberative Democracy between Theory and Practice, Cambridge University Press, Cambridge 2015, p. 106.

[14] M. Fuerstein, Democratic Consensus as an Essential By-Product, in «Journal of Political Philosophy», 22, 3, 2014, p. 288.

[15] I. O’Flynn, M. Setälä, Deliberative Disagreement and Compromise, in «Critical Review of International Social and Political Philosophy», 25, 2020, pp. 1–2.

[16] R. Goodin, Laundering Preferences, in J. Elster, A. Hylland (eds.), Foundations of Social Choice Theory, Cambridge University Press, Cambridge 1986, pp. 75–102.

[17] H. Mercier, D. Sperber, Why Do Humans Reason? Arguments for an Argumentative Theory, in «Behavioral and Brain Sciences», 34, 2011, pp. 57–111.

[18] H. Landemore, H. Mercier, Talking it Out with Others vs. Deliberation within and the Law of Group Polarization: Some Implications of the Argumentative Theory of Reasoning for Deliberative Democracy, in «Análise Social», 205, 2012, pp. 910–934.

[19] D. Thompson, Deliberative Democratic Theory and Practical Political Science, in «Annual Review of Political Science», 11, 2008, pp. 497–520.

[20] P. Engel, Va savoir! De la connaissance en général, Hermann, Paris 2007.

[21] J. Greco, Justification is not Internal, in M. Steup, E. Sosa (eds.), Contemporary Debates in Epistemology, Blackwell, Oxford 2005, pp. 266-267.

[22] Ibid., p. 267.