Aumann's agreement theorem states that Bayesian reasoners with common priors and common knowledge of each other's opinions cannot agree to disagree. This has striking implications for the human practice of rationality. Consider: if I'm an honest seeker of truth, and you're an honest seeker of truth, and we believe each other to be honest, then we can update on each other's opinions and quickly reach agreement. Unless you think I'm so irredeemably irrational that my opinions anticorrelate with truth, the very fact that I believe something is Bayesian evidence that that something is true, and you should take that into account when forming your belief. Likewise, fellow rationalists should update their beliefs on your beliefs, not as a social custom or personal courtesy, but simply because your rational belief really is Bayesian evidence about the state of the world, in the same way that a photograph or a reference book is evidence about the state of the world.
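To make "your belief is evidence" concrete, here is a minimal sketch of treating a peer's assertion as an ordinary observation, using the odds form of Bayes' theorem. The function name and the two reliability parameters are illustrative assumptions, not anything from the theorem itself: the peer is modeled simply as a noisy channel that asserts H with some probability depending on whether H is true.

```python
from fractions import Fraction

def update_on_peer(prior, p_assert_given_true, p_assert_given_false):
    """Posterior P(H) after hearing a peer assert H, treating the
    assertion as ordinary evidence via the odds form of Bayes' theorem.

    Hypothetical toy model: the peer asserts H with probability
    p_assert_given_true if H is true, p_assert_given_false if false.
    """
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_assert_given_true / p_assert_given_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A peer whose beliefs correlate with truth shifts you toward H:
p = update_on_peer(Fraction(1, 2), Fraction(4, 5), Fraction(1, 5))
# → 4/5: the assertion carries a likelihood ratio of 4

# A peer whose beliefs anticorrelate with truth (ratio < 1) shifts
# you *away* from H, matching the "irredeemably irrational" caveat:
q = update_on_peer(Fraction(1, 2), Fraction(1, 5), Fraction(4, 5))
# → 1/5
```

The point of the exact-fraction arithmetic is only that the update is the same mechanical operation you would apply to a photograph or a reference book; nothing about it is specific to social deference.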
The process of true Bayesians coming to agreement bears precious little resemblance to a typical human argument. One agent states her estimate, the other agent states her estimate, conditioned on the first agent's estimate, and from there the agents' opinions follow an unbiased random walk: at no point in a conversation can Bayesians have common knowledge that they will disagree.
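The back-and-forth described above can be simulated directly. The sketch below, under the assumptions of a uniform common prior over a finite state space and partitional information, implements the alternating-announcement dialogue studied by Geanakoplos and Polemarchakis: each agent announces her posterior for the event, the other refines her information by learning which states are consistent with that announcement, and the process repeats until the announcements coincide. The function names (`dialogue`, `refine`) and the particular four-state example are my own illustrative choices.

```python
from fractions import Fraction

def cell_of(partition, s):
    """The block of `partition` containing state s."""
    return next(block for block in partition if s in block)

def posteriors(partition, event, states):
    """Map each state to P(event | its block) under a uniform prior."""
    out = {}
    for s in states:
        block = cell_of(partition, s)
        out[s] = Fraction(len(block & event), len(block))
    return out

def refine(partition, announcement):
    """Split each block along the announcement's level sets: hearing
    the announced posterior tells you which states could have produced it."""
    levels = {}
    for s, q in announcement.items():
        levels.setdefault(q, set()).add(s)
    return [frozenset(block & level)
            for block in partition
            for level in levels.values()
            if block & level]

def dialogue(states, event, part1, part2, omega, rounds=20):
    """Alternate announcements until both posteriors at the true state
    `omega` coincide; return the list of (p1, p2) pairs per round."""
    history = []
    for _ in range(rounds):
        q1 = posteriors(part1, event, states)
        part2 = refine(part2, q1)            # agent 2 hears agent 1
        q2 = posteriors(part2, event, states)
        part1 = refine(part1, q2)            # agent 1 hears agent 2
        history.append((q1[omega], q2[omega]))
        if q1[omega] == q2[omega]:
            break
    return history

# Four equiprobable states; the question is whether the true state is in {1, 2}.
states = frozenset({1, 2, 3, 4})
event = frozenset({1, 2})
part1 = [frozenset({1, 2, 3}), frozenset({4})]   # agent 1's information
part2 = [frozenset({1}), frozenset({2, 3, 4})]   # agent 2's information
history = dialogue(states, event, part1, part2, omega=2)
# round 1: (2/3, 1/2) — they disagree; round 2: both announce 1/2
```

Note what the trace shows: agent 1 starts above agent 2, and agreement lands at a value neither would have named alone, because each announcement transmits information rather than social pressure. With a finite state space the process must terminate, and at termination the announcements are common knowledge, so by Aumann's theorem they are equal.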
Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse; whether deliberation tends to resolve disagreements in practice is an empirical question.
The fact that disagreements on questions of simple fact are so common amongst humans, and that people seem to think this is normal, is an observation that should strike fear into the heart of every aspiring rationalist.