When Truths Collide — How to Deal With Doubt, Think Through Contradictions, and Resolve Mental Gridlock
An essay on dealing with dilemmas and thinking straight when nothing adds up—featuring SIX PRACTICAL WAYS to handle conflicting truths.
TRUTH is a deep and contentious topic. Philosophers have debated it for centuries without reaching a consensus. Some say truth reflects reality. Others argue it’s about coherence, usefulness, or social agreement. Some even doubt it means anything at all—just a label we stick on ideas we favor.1
Still, truth matters. A lot. We search for it in courtrooms, labs, and newsrooms. It shapes laws, policies, and personal choices. We care who’s right in an argument, which route is faster, or whether a job suits our skills. And in an age of fake news and deepfakes, knowing what’s real is more urgent than ever.
We’re now living in a world where misinformation is polished, personalized, and persuasive. The ability to discern truth may become a core survival skill in the age of artificial generation and “intelligence”.
Given that many important truths in our lives are essential yet hard to pin down, a practical challenge arises:
What do you do when two or more equally plausible claims disagree or contradict each other? What do you do when truths collide?
That’s what this essay is about. We’ll look at six practical ways to handle colliding truths:
❶ Reject one side (choose a side, either quickly or after vetting).
❷ Weigh the trade-offs (pros and cons, context-dependent choice).
❸ Reframe the question (zoom out or shift perspective to dissolve the conflict).
❹ Dissect the distinction (find a key difference or deeper principle that unites the truths).
❺ Hold both as true (accept the paradox, both/and thinking).
❻ Defer the decision (wait for more insight or better timing).
Some of these offer immediate clarity. Some offer peace. Each one has its place. And knowing when and how to use each tool is key to navigating truth in a complex world.
❶ Reject One Side
The simplest way to resolve a truth conflict is to pick one side and discard the others. This feels intuitive—our minds crave consistency. Confirmation bias (our tendency to favor information that confirms our pre-existing beliefs) and cognitive dissonance avoidance (our discomfort with holding contradictory ideas) prompt us to quickly reject one side of any conflict and adhere to what aligns with our worldview. Nobel laureate Herbert Simon coined a fitting term — satisficing — referring to “settling for good enough” answers to avoid endless deliberation.
When the stakes are low or clarity is high, outright rejection can work well. But this strategy comes with a cost: if both sides hold some insight, you are guaranteed to neglect part of the truth. To mitigate this risk somewhat, there are two common variants: rejecting outright and rejecting after deeper reflection.
🅰 Reject Outright
Sometimes, a conflict isn’t real—it’s just noise. One side is clearly false or irrelevant, so you drop it and move on. For example, if someone insists “the Earth is flat,” you don’t need to agonize over the claim; centuries of science establish that Earth is an oblate spheroid, so you can reject the flat-earth position without hesitation. Likewise, in domains like mathematics or formal logic (man-made, closed systems with strict rules), conflicting answers usually mean one is simply wrong by definition; e.g., 2+2=4 is true, and 2+2=5 is false, with no middle ground.
Other times, the stakes are too low to overthink:
Tom says the event is at 1 PM, Julia says 1:15 PM—show up at 1.
A friend says the café is cash-only, another says card is fine—bring cash just in case.
One review calls the movie amazing, another says it’s trash—trust your gut.
A book gets mixed reviews—buy it or skip it, no big deal.
In such low-stakes situations, the cost of being wrong is small. The cost of endless debate is higher. So we often reject one side quickly and move on. That’s satisficing in action: choosing something that’s “good enough” without chasing perfection. And that’s perfectly fine. Most decisions don’t need a deep dive—just a clean call.
🅱 Reject After Reflection
When the stakes are higher—or something feels off—it’s smart to pause. You still commit fully to one side, but only after digging deeper. Consider these situations:
As a heavy coffee drinker, you read two articles on coffee: one says it’s healthy, the other says it’s harmful.
In a heated political debate, one friend calls a news claim fake; another says it’s true.
A job opportunity you are looking into gets mixed reviews—some praise, some warnings.
In such cases, rather than rejecting outright, you engage in reflection, contemplation, and research. You verify sources, dates, and the science before concluding whether you believe coffee is healthy. You trace political claims back to their sources to see who’s closer to the facts. You research job sites, read news reports, and ask insiders about the opportunity. In each case, you don’t merge the views—you rule out one entirely, but based on stronger evidence. You’re still rejecting, but with more care.
This is the deliberate version of ❶🅰. It works best when the decision affects you directly, you suspect the truth is findable with a bit of effort, or you’re willing to invest time to avoid a poor choice. Of course, this isn’t feasible for every minor conflict (which is where ❶🅰 shines). The key here is learning which questions are investigation-worthy and picking your battles wisely.
Summary of Approach ❶: Whether done instinctively or after some checking, this approach resolves conflict by declaring a winner. It’s quick and clear-cut. Use it for trivial matters (out-of-hand rejection) or when you can investigate enough to be confident in one side (informed rejection).
❷ Weigh The Trade-Offs (Pros & Cons)
Not all truth conflicts are black and white. Often, both sides quite obviously have some valid points—just with different trade-offs. In such cases, the best approach is to weigh the pros and cons and choose the option that best fits your needs or values.
Benjamin Franklin famously advocated this method. In a 1772 letter to Joseph Priestley, Franklin described his decision-making process of drawing a line down a sheet of paper, listing the “Pros” on one side and “Cons” on the other, and then weighing them against each other. By this “prudential algebra,” as he called it, one could rationally assess a dilemma by its trade-offs.2
So, unlike approach ❶ (complete rejection), Franklin’s approach recognizes that in many real-world situations, both options have merit, and the task is to figure out which option’s mix of upsides/downsides is more acceptable for you.
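Franklin’s prudential algebra can be sketched as a tiny weighted tally. The following is a minimal illustration, assuming we assign each pro and con a subjective numeric weight; the items and numbers are invented for the example, not taken from Franklin’s letter.

```python
# A minimal sketch of Franklin's "prudential algebra":
# weigh the pros against the cons and see which side the balance tips.
# All names and weights below are illustrative assumptions.

def prudential_algebra(pros: dict[str, float], cons: dict[str, float]) -> float:
    """Sum of weighted pros minus sum of weighted cons; positive favors acting."""
    return sum(pros.values()) - sum(cons.values())

# Example: buying a car vs. sticking with public transport.
pros = {"freedom to travel anytime": 3.0, "comfort": 2.0}
cons = {"purchase and running costs": 4.0, "environmental impact": 1.5}

balance = prudential_algebra(pros, cons)           # 5.0 - 5.5 = -0.5
decision = "buy the car" if balance > 0 else "keep using transit"
```

The weights are the whole game here: two people with identical lists but different values will reach different, equally rational conclusions, which is exactly the point of approach ❷.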
Consider a scenario adapted from an essay called Metrics: Useful or Evil? by writer Scott H. Young, who compared two books with opposing views. One book (John Doerr’s Measure What Matters) argues that setting clear goals with numeric metrics is tremendously effective for organizations. The other (Jerry Muller’s The Tyranny of Metrics) cautions that an overemphasis on metrics can backfire, leading people to game the numbers and undermine genuine goals. So which is true—“metrics are vital” or “metrics are harmful”? After considering both, Young notes that both statements hold truth, but in different ways. Metrics do focus effort and can drive progress – and metrics do create perverse incentives and can be misused. In his analysis, Young suggests looking at this as a trade-off:
Metrics both inspire effort and encourage corruption. They allow for progress on easily quantified goals and may be damaging to more qualitative ones. Like a potent medicine that cures disease and also creates side-effects, each metric needs to be carefully considered in light of the trade-offs… Trade-offs are a less satisfying answer because they require weighing many possible effects for each individual case. Still, sometimes this assessment is unavoidable. Life is complicated. Why should we expect convenient solutions?
This trade-off perspective leads to a nuanced decision: you’d use metrics when their benefits are likely to outweigh the side effects, and be cautious or avoid them when the situation is ripe for metrics to do more harm than good. No one side “wins” outright for all time; instead, you integrate both truths by applying them where appropriate. As Young puts it, life is complicated, and “sometimes this assessment is unavoidable” – we shouldn’t expect one-size-fits-all solutions.
Let’s look at some casual examples of trade-off decisions:
Buying a car vs. continuing to use public transportation: Owning a car offers freedom and convenience (the ability to go anywhere, anytime – a pro), but it’s expensive and less eco-friendly (cost and environmental cons). Public transport is cheaper and greener (pro), but less flexible or comfortable (con). The “truth” of car vs. transit depends on your context – city or rural? Tight budget or not? Value convenience or savings more? Weighing these factors can guide you: perhaps in your case, the freedom outweighs the cost, or vice versa.
Remote work vs. office work: Is remote work better? It can increase personal flexibility and eliminate the commute (pro), but might dilute team cohesion or blur work-life boundaries (con). Office work fosters in-person collaboration and routine (pro), but adds commuting stress (con). Here again, both models have truth.
Following a strict diet vs. eating intuitively: A strict diet (for fitness or medical reasons) provides structure and clear goals (pro), but can be challenging to sustain or mentally stressful (con). Eating “intuitively” (no diet rules) is more relaxed and adaptable (pro), but might lead to less optimal nutrition or overindulgence (con). The better choice depends on the person’s needs and psychology. The key is recognizing that there's not one universally “true” way to eat; it’s about what trade-off yields health and happiness for you.
Generally, approach ❷ is most suitable for moderately complex, rational decisions where each option has multiple dimensions to consider. These are often decisions where you can make a list of pros and cons, and they won’t all line up on one side. Many personal and professional choices fall into this category, including selecting a job, choosing a school, making policy decisions, and determining a business strategy, where consequences can be anticipated and weighed.
Sometimes, weighing trade-offs even reveals a third alternative: the golden mean, which Aristotle described for morals and which we can adapt here. Between car and public transport, there is car sharing on some days of the week. Between remote and office work, there’s hybrid work, giving us the best of both worlds.
However, trade-offs are not as helpful in other cases. They break down in more emotional or existential dilemmas—whether to have kids, believe in God, or define moral character. You can’t resolve those with checklists. They need different tools—like patience, inner alignment, or more profound reflection.
In summary, weighing trade-offs is a powerful approach when you recognize that each conflicting “truth” holds part of the answer. Instead of asking “Which side is correct outright?”, you ask “What are the relative pros/cons here, and which side better fits the situation or my values?” The outcome is often a compromise or a context-dependent choice. This approach accepts that life often doesn’t give us perfect solutions – only choices that are better in some ways and worse in others. By thoughtfully weighing those, you make an informed decision that acknowledges reality’s complexity.
❸ Reframe The Question (Zoom Out)

Sometimes, the conflict isn’t between truths but between perspectives. The problem might be with the framing of the question, not the conflicting answers. Side A and Side B might both be valid, just in different contexts, timeframes, or levels of analysis. By zooming out to shift perspective, you can dissolve the tension and integrate both.
This move is familiar in science and philosophy. Hegel called it “synthesis”—resolving a contradiction by finding a higher-order view. His dialectic of “thesis + antithesis = synthesis” is an example of seeking a new framing that transcends the original contradiction. A term used in theology and philosophy for one version of this is “non-overlapping magisteria.” This concept, popularized by Stephen Jay Gould, holds that two seemingly opposed domains (like science and religion) may not actually contradict because they deal with different realms. They each have authority (magisterium) in different spheres. In Gould’s view, science covers the empirical “how” questions of the natural world, while religion (or ethics) covers meaning, morals, and “why” questions – thus, if properly understood, they do not directly clash.3
A classic example of reframing within science: Newtonian vs. Quantum physics. Early 20th-century physicists were troubled by how the well-established Newtonian laws failed at subatomic scales, where quantum mechanics introduced seemingly contradictory principles. Rather than declaring one true and the other false, scientists realized that both are true in their respective domains. Newtonian physics is an excellent approximation for the macro-scale (our everyday world of apples and planets), whereas quantum physics governs the micro-scale (particles, atoms) with different rules. They’re not enemies – they belong to different levels of reality. The question “Which physics is correct?” was reframed into “At what scale or context is each physics applicable?” This reframing can often dissolve or reduce the apparent contradiction: you don’t use quantum mechanics to calculate the trajectory of a baseball, and you don’t insist on Newton’s laws inside an atom. Each theory is true within its domain. (Unifying these into one grand theory is still an ongoing scientific quest, but meanwhile the two coexist by division of territory.)
Reframing questions can take various forms:
Different Time Frames: What’s true now vs. true in the long run might differ. For example, a business strategy could be unprofitable in the short term (so one person says “This strategy is bad”), but highly profitable in five years (another says “This strategy is good”). Both are right about different time horizons. Reframe: it’s a short-term loss for long-term gain scenario.
Different Domains or Scales: As discussed, science vs. religion (different domains of inquiry), or classical vs. quantum physics (different scales). Another case: health advice often seems contradictory until you consider populations or conditions – e.g., one study says “Diet X reduces heart disease” (maybe true for the general population), another says “Diet X might be dangerous” (perhaps true for a specific subgroup or when taken to an extreme). They can both be true if framed by population vs individual condition.
Level of Abstraction: Two economic theories might conflict, until you realize one operates at a micro-economic level and the other at a macro level. For instance, individuals saving money is good (micro level truth), but if everyone saves and no one spends, the economy can stall (macro level truth). Reframing here means acknowledging a complexity: personal thrift vs. aggregate demand. Both truths apply but at different systemic levels.
This approach is especially common in academic and intellectual conflicts. Instead of picking sides, scholars sometimes find a theory C that says: “Theory A and Theory B are each valid in these respective conditions; here’s a bigger framework that explains why they appeared contradictory.” It’s a very integrative approach – akin to solving a puzzle by fitting pieces into a broader picture.
However, a word of caution: reframing can sometimes feel like a cop-out if misapplied. Not every conflict is just due to perspective – some truths do exclude each other in the same context. If two employees give directly opposing accounts of an event and the overall context is the same, one might simply be lying or mistaken. Reframing works when context differences are genuinely relevant. It can also be a slippery slope right into hardcore relativism (“everyone has their own truth”) which, taken too far, might prevent us from ever resolving anything. Use this tool when there’s evidence that the question might be ill-posed or too narrow.
Approach ❸ says: “Maybe the reason A and B can’t both be true is that we’re asking the wrong question or looking at the wrong scale.” Change the question, enlarge the context, and often the contradiction fades. It’s a powerful move that can lead to creative solutions (and fewer needless fights over who’s “absolutely” right).
❹ Dissect The Distinction (Zoom In)
Ever been in a debate that suddenly cleared up when someone said, “Wait—we’re using the same word to mean different things”? That’s what this approach is about. Sometimes, two “truths” clash only because we’re glossing over a subtle but crucial distinction. Language is an imperfect medium of communication, and often conflicts of truths can be resolved simply by making finer distinctions. This is my preferred approach and arguably one of the most elegant solutions to colliding truths. This isn’t just reframing—it’s precision work. You zoom in, break the issue apart, and expose the underlying assumptions, definitions, or categories. And once those are made explicit, the contradiction often disappears.
Let’s illustrate with examples:
Metrics Example (revisited): Recall the conflict about using metrics in organizations (Doerr’s pro-metrics vs. Muller’s anti-metrics). Scott Young’s analysis didn’t stop at trade-offs; he went further to propose a uniting principle. He observed that all of Muller’s negative examples of metrics came from dysfunctional, bureaucratic organizations, whereas Doerr’s positive examples assumed healthy, well-managed organizations. Aha! The deeper factor might be organizational health. Young suggested that “metrics aren’t inherently good or bad – their effects depend on context: in a healthy culture, metrics amplify success; in a toxic culture, metrics amplify dysfunction.” In other words, the missing distinction was between healthy vs. sick organizations. Once you add that dimension, both authors’ positions become (partially) true: If an organization has strong trust and judgment, metrics help (Doerr’s view); if an organization is plagued by distrust or bad incentives, metrics hinder (Muller’s view). By dissecting the problem, we found a way that each “truth” holds under different conditions. This fully resolves the conflict by subsuming it into a clearer, conditional statement: “Metrics work when X; metrics fail when Y.” Both sides were focusing on different subcases without explicitly saying so.
Science vs. Religion (revisited): This was partly a reframing (Approach ❸) example, but you can also view it as making a crucial distinction: what if science and religion answer different types of questions? Science addresses “How does the universe work?” while religion addresses “Why are we here, and how should we live?” When people treat them as if they’re answering the same question, conflict arises (e.g., creationism vs. evolution debates). However, if you distinguish between the domains – science explains natural mechanisms, while religion/philosophy explore meaning and moral values – then the seeming conflict between, say, evolutionary theory and belief in a divine purpose might be resolved for some as compatible truths on different levels. (Of course, not everyone will agree they’re entirely separate, but it illustrates the idea of resolving conflict via categorical distinction.)
Definitional Confusion: Many arguments boil down to different definitions. Two people could argue fiercely whether “X is immoral”, only to realize they have different definitions of X, or of immorality itself. For example, a classic debate: “Is art objectively good or just subjective?” Person A says “This painting is objectively a masterpiece,” Person B says “Art is subjective, no painting is objectively better.” A hidden distinction here might be between technical skill vs. personal taste. Perhaps Person A really means the painting has masterful technique and historical importance (which can be discussed with some objectivity), whereas Person B means that whether someone likes the painting is entirely subjective. Once you clarify that “good” in art can refer to technical excellence or to personal preference, A and B might find they agree: Yes, it’s masterfully done (objectively), but of course, whether you enjoy it is subjective. They were using “good” in different senses.
Missing Category or Option: Sometimes two positions conflict because we assume only two categories exist, when in fact a third category would solve it. For example, consider a legal debate: “Is this person a contractor or an employee?” If laws only recognize those two, there might be arguments because the situation doesn’t fit neatly. The conflict might resolve by creating a third category (say, “dependent contractor”) that acknowledges the partial truth of each side. This is more of a systematic fix, but it’s about making a finer distinction to better reflect reality.
In all these cases, the approach is to dissect the problem to find a nuance that we naturally overlooked. Once that nuance is highlighted, it becomes clear that both are (fully) true, not because of different perspectives adopted subjectively, but because of missing information uncovered objectively. In ❸ we can resolve the conflict by saying, “Oh, in that sense you were right, and in this sense I was right.” In ❹ we say “Oh, we are both right, now we have uncovered that missing piece of the equation.” The conflict then evaporates, because we realize they weren’t truly contradicting each other once correctly specified.
Approach ❹ is applicable when a debate feels like it’s spinning in circles or like people are “talking past each other.” It’s beneficial in complex systems (e.g., organizations, ecosystems), ideological or interdisciplinary debates, or any area where terms are loaded, vague, or assumed.
This approach requires humility and intellectual honesty – being willing to admit that our initial framing was too simplistic or that our understanding lacked depth. It’s very rewarding, though, when you hit upon that hidden key. Discussions go from deadlock to “click, everything makes sense now.”
Finding a uniting or underlying principle or a crucial missing distinction can completely resolve a conflict, not by compromise, but by showing the conflict was based on a misunderstanding. It’s like solving an equation by introducing a new variable that simplifies it. Both conflicting truths become parts of a larger, more detailed truth. With ❹, instead of forcing a winner, we ask: What are we missing that could make both sides true and resolve the conflict elegantly? Once the missing distinction becomes clear, things often click into place. This method takes humility but is widely applicable, especially in more philosophical debates and in matters of definitions. One of my favorite examples is the multitude of definitions used in personal productivity, as I have written about elsewhere.
❺ Hold Both As True (Embrace Paradox)

The Yin-Yang symbol from Chinese philosophy is a powerful image: a black and a white teardrop shape entwined in a circle, each containing a small dot of the other’s color. Yin-Yang symbolizes that opposites are not absolute enemies, but complementary forces, each carrying a seed of the other. Night and day, masculine and feminine, expansion and contraction – they interdepend and cycle into each other. Applied within the context of truth collisions, it suggests that sometimes two contradictory views can both hold truth simultaneously, and the full reality is a dynamic whole that includes both.
Approach ❺ is about accepting the tension without forcing a resolution. It’s the art of saying “Both sides of this contradiction are true, in their own way, at the same time.” This doesn’t necessarily require reframing or distinguishing (as in ❸ or ❹); it may simply be recognizing that reality is complex and ambiguous, and our binary categories (true/false, good/bad) sometimes fail to fully capture it. Instead of trying to eliminate the contradiction, we hold it and live with it.
How could this play out? Some examples and rationale:
“The glass is half full/empty.” This old saying captures it well. Is the glass half full or half empty? Both descriptions are accurate – one focuses on presence, the other on absence. A dialectical thinker would shrug: It’s both; why not appreciate the water that’s there, and also acknowledge there’s room for more? Not every argument has to declare one description the winner.
Marriage is both wise and foolish. Depending on the day, a married person might feel it’s the best decision they ever made, and on other days, feel it’s the craziest thing they’ve done. Both can be true in the sense that marriage (or any significant life choice) has aspects that are profoundly enriching and aspects that are limiting or challenging. A simplistic approach would be to debate the question, “Marriage: Good or Bad?” A more nuanced person might say: “Marriage, like life, is often wonderful and often difficult – it’s both a wise choice and a foolish one at the same time, and that’s what makes it human.”
I am good as I am, but I also must change. In psychology, there’s a therapeutic approach called dialectical behavior therapy (DBT), which teaches holding two seemingly opposite things as true at once (e.g., “I accept myself as I am and I need to change”). The tension in that statement is intentional – acknowledging reality (I have traits as they are) while also acknowledging the need for growth. This kind of both/and thinking is considered a hallmark of mature, flexible cognition.
Maybe there is a God, and maybe, at the same time, there isn’t. To a strictly logical debater, that statement is nonsense – either God exists or not. However, psychologically or existentially, one can hold belief and doubt together—though logically, they remain mutually exclusive. For example, one can feel 90% convinced there’s a higher power but also 10% wondering if that’s just wishful thinking. In everyday life, many people tolerate a state of uncertainty or ambivalence about big questions. Some things are meant to be lived with, not settled. In this case, the “truth” you hold is probabilistic or fluid – not a yes/no, but a maybe.
Fuzzy Truths. This approach can also be likened to fuzzy logic. Classical logic says a statement is either true (1) or false (0). Fuzzy logic suggests that truth can be a spectrum – any value between 0 and 1. You might not have complete truth or falsehood, but degrees of truth. For instance, you might say “My confidence in this hypothesis is 0.7 (70%).” In the realm of conflicting truths, fuzzy logic would mean you allow that each side has a certain degree of truth. Perhaps Side A is 65% true in your estimation, and Side B is 35% true. They overlap; neither is wholly correct or wholly wrong. This is essentially how probabilistic thinking or confidence levels work. It’s not emotional relativism, but a measured way to say: “Reality isn’t giving me a 100% vs 0% here, so I’ll assign partial truths.” In our earlier example of marriage, one might say “the concept of marriage has, say, a 0.6 truth value as a good idea for most people.” That’s a fuzzy way to hold both positive and negative views in balance. A book I recommend on this topic is Bart Kosko’s Fuzzy Thinking (pop-science/philosophy crossover).
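The fuzzy-truth idea above can be made concrete with a few lines of code. This is a minimal sketch using Zadeh’s classic min/max operators (one common convention in fuzzy logic; other operator families exist), and the 0.65/0.35 numbers are the illustrative estimates from the text, not measurements.

```python
# Degrees of truth instead of binary true/false.
# Uses Zadeh's min/max operators, the most common fuzzy-logic convention.

def fuzzy_and(a: float, b: float) -> float:
    """Degree to which both statements hold (min operator)."""
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    """Degree to which at least one statement holds (max operator)."""
    return max(a, b)

def fuzzy_not(a: float) -> float:
    """Degree to which a statement fails to hold."""
    return 1.0 - a

# Two colliding claims, each partially true in our estimation:
metrics_help = 0.65   # "metrics drive progress"
metrics_harm = 0.35   # "metrics corrupt incentives"

# Classical logic would force one value to 1 and the other to 0.
# Fuzzy logic lets both coexist as degrees:
both_hold = fuzzy_and(metrics_help, metrics_harm)    # 0.35
either_holds = fuzzy_or(metrics_help, metrics_harm)  # 0.65
```

Note that nothing forces the two degrees to sum to 1: unlike probabilities of exclusive events, fuzzy truth values of overlapping claims are independent estimates, which is precisely what lets “both sides are partly true” be stated without contradiction.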
This approach can sound like indecision or fence-sitting, but it’s not about being wishy-washy. It’s an affirmation of complexity. Some truths are not binary; they are dialectical, or gradient, or contextual in such a tangled way that trying to force a single evaluative conclusion would do violence to the reality. This can be very helpful in complex life dilemmas and deeply personal questions where objective resolution is unlikely. It’s also useful as a mindset when dealing with ongoing tensions that are not immediately resolvable (e.g., balancing freedom and security in society – you will always need some of both, never purely one). Philosophical and spiritual paradoxes often fall here too. Rather than picking a dogmatic side, one might embrace the mystery that two opposites both have merit.
It’s worth noting that holding both as true is often a temporary state on the way to a later resolution. You may currently see truth in both sides and live with that tension, but later, new evidence or experiences may tip you more toward one side. That’s fine – you haven’t failed; you were being honest about your knowledge state at each point. On the other hand, some contradictions are enduring (the fundamental tensions of human existence, such as individual versus community needs, likely persist in some form forever).
In summary, Approach ❺ encourages you to breathe and accept complexity. Not every truth battle needs an immediate victor. Sometimes wisdom is in recognizing the truth of both sides and letting them inform your perspective in a balanced way. This approach can bring peace of mind because you’re no longer contorting reality to fit a neat box; you’re allowing reality to be as rich and paradoxical as it actually is.
❻ Defer The Decision (Wait-and-See)
Finally, when two truths clash and nothing seems to resolve it, there’s one option we often forget: wait and see. Usually, we don’t have to make a decision immediately. If the stakes are unclear, the conflict unresolved, and time allows it, you can pause. Let the moment pass. Let more information emerge. Let yourself grow into the clarity. Defer until you are ready. This essentially means choosing neither side for now and giving yourself time. It’s a valid approach, though one that's easy to overlook in our action-oriented culture.
We can call this the “Null Alternative” – the alternative of not committing to any alternative. It means not choosing until you feel better equipped or until circumstances evolve. Far from being laziness or indecision, strategic deferral can be the smartest move in certain situations: when a conflict feels unresolvable, you defer – not out of avoidance, but as a tactical pause. You wait because it’s too early, the stakes are unclear, or you lack enough information.
When might you defer?
True Dilemmas: If you’re faced with multiple options that are equally good (or bad), and you’ve analyzed them to exhaustion without a clear winner, forcing a decision may be arbitrary. Provided you have the luxury of time, you might wait. In life, there are moments like choosing between two job offers that both sound decent, or deciding among colleges, or even making a significant purchase where none of the choices is clearly superior. If the decision isn’t time-sensitive, why not hold off and gather more info or see if one starts to feel right?
Evolving Situations: Many conflicts or truths become clearer with time and additional context. If two business partners disagree on a direction for the company, but the decision doesn’t have to be made this quarter, they might defer – perhaps market trends in the next six months will make the better path obvious. In science, when experiments give conflicting results, sometimes the answer is to do more research (defer conclusion) until more data resolves the discrepancy. As one lean management principle states: “Decide as late as possible, especially for irreversible decisions, so you have the maximum amount of information.” Keeping options open as long as you safely can often leads to better decisions because by the deadline, you “will know more about which option is best… [it] gives you time to explore different options in depth” .
Unclear Stakes: If you don’t yet understand how important the conflict is, deferring is prudent. Sometimes we get very worked up about a decision only to realize later it wasn’t such a big deal. Waiting can lend perspective. It might turn out the decision gets made by default or by external events (e.g., you defer on an investment opportunity, and then that opportunity closes – decision made for you, which in hindsight might be a relief if it was truly too hard to call).
Aiming for Optionality: Notably, optionality is a concept in strategy that aligns with deferral. If you can keep multiple options open without committing, you maintain flexibility. Investors, for instance, talk about “real options” – waiting to see how an uncertain situation unfolds rather than locking in too early. The idea is to avoid premature closure, especially when the cost of waiting is low and the value of the information gained is high.
What does deferring look like in practice? It can range from actively gathering more information (continuing to research, monitoring the situation), to setting a review date (“I’ll revisit this in six months”), to using a coin toss not to make the decision itself but to decide when to decide, or simply letting the status quo persist. Sometimes, not making a decision is, in fact, making a decision – the decision to keep things as they are, at least for now.
One example: Suppose you’re torn between moving to a new city for a job or staying put. You might defer by asking for an extension to decide, or by taking a short-term assignment in the new city first, effectively delaying the big permanent decision until you try it out. In relationships, if someone is unsure about a big commitment, they might defer by continuing the relationship as is, rather than breaking up or proposing immediately – essentially saying “I need more time to see where this goes.”
The benefits of deferring are that you often gain clarity naturally. New factors can emerge that tip the balance. Time itself can be a clarifier; our emotional reactions to options often stabilize with time. You might find after a month of not deciding, you’re leaning towards one option emotionally – that’s valuable insight.
But there’s also risk in deferring: Indefinite procrastination can become paralysis. Some decisions do have deadlines, and missing them is worse than picking a not-perfect option. Also, some conflicts, if left unresolved, can fester or cause tension (think of a team disagreeing on strategy – doing nothing could lead to confusion or drift). So, deferral should be done purposefully, with awareness of any time limits or consequences of not deciding. Ideally, set a checkpoint: “If I haven’t gotten new insight by date X, then I’ll do Y.” This prevents endless limbo.
And if worst comes to worst – you face a dilemma you cannot resolve and must decide – you can always invoke a bit of randomness as a final resort (flip a coin, draw straws). This might feel unsatisfying, but when the stakes are high and the options truly indistinguishable, Mother Luck can cut the knot. In a sense, that’s deferring the decision to fate or probability: you admit that, from your perspective, the truths were equally balanced, so you let chance decide and move on. It’s a last resort, but it’s there.
Approach ❻: “no decision” is often a valid choice. When facing clashing truths that are hard to reconcile, you can sometimes step back and let time work for you. Clarity can emerge naturally; problems can resolve themselves or new solutions can arise. As long as you’re mindful about it, deferring can save you from forcing a false resolution under duress. Life isn’t a one-round game; you can pause the game when needed.
Using the Right Tool at the Right Time
We’ve explored six approaches to handling conflicting truths:
❶ Reject | ❷ Weigh | ❸ Reframe | ❹ Dissect | ❺ Hold | ❻ Defer
These aren’t silver bullets. And they are not mutually exclusive; they are tools in a toolkit. The art of navigating truth lies in knowing which approach best fits the situation. Simple factual disputes may require Approach ❶🅰 (outright rejection of nonsense). Personal life decisions may require a combination of trade-off analysis (❷) and patient deferral (❻). Ideological or philosophical disagreements might be eased by reframing (❸) or making distinctions (❹). And some enduring questions might require the humility of holding two sides in tension (❺) until perhaps a larger synthesis appears in the future.
It’s also worth noting that some conflicts evolve through multiple approaches. For instance, you might initially hold both sides as possibly true (❺) while gathering evidence. As you learn more, you start weighing pros/cons (❷). Eventually, you feel one side is clearly stronger and you reject the other (❶🅱). And maybe you still leave room for the possibility that in another context, the other side had a point (❸ or ❹). This shows these approaches aren’t rigid stages but flexible habits of mind.
The Golden Mean concept we touched on is a nice metaphor: in moral life, it’s about finding balance between extremes. In handling truth, similarly, the best path is often a balanced application of these tools. Not every problem needs deep philosophical analysis (don’t overthink trivialities – sometimes satisfice and move on). Yet not every disagreement should be settled by knee-jerk rejection; some deserve patience and openness.
Ultimately, the goal isn’t to resolve every contradiction in a tidy manner. The world is messy and so are our minds. The goal is to equip ourselves with ways to think more clearly and flexibly when faced with conflicting information or viewpoints. Each of these approaches can help avoid common pitfalls:
❶ Reject guards against paralyzing over-analysis when a quick decision will do.
❷ Weigh prevents one-dimensional thinking by forcing you to see merits on each side.
❸ Reframe broadens your perspective to see the bigger picture.
❹ Dissect deepens your understanding by pinpointing nuances.
❺ Hold encourages intellectual humility and comfort with complexity.
❻ Defer reminds you that timing and information are part of truth-seeking.
By applying these six strategies, you can handle disagreements and decisions with greater confidence. You’ll know when to be decisive and when to be deliberative, when to integrate viewpoints and when to set them aside, when to insist on an answer and when to embrace uncertainty.
In a future where misinformation and polarized opinions are rampant, such skills will become increasingly crucial. The ability to discern, integrate, and navigate truths – to find your way to clarity without being misled – is indeed, as we said, a deep topic. Hopefully, these tools help map that depth a little so that you can approach truth’s challenges with both discernment and openness.
Truth isn’t always a one-dimensional target; sometimes it’s an evolving landscape that we traverse. With the right approach at the right time, we can travel farther into understanding without losing our way.
I found an interesting overview on Medium: No Philosopher Has Ever Been Able to Know the Truth. Also, see The Four Theories of Truth As a Method for Critical Thinking.
See How to Make Decisions like Benjamin Franklin for more on this.
I have taken these examples from Scott H Young’s Metrics: Useful or Evil?. Note that Gould’s “non-overlapping magisteria” is controversial. Many scientists and theologians disagree, especially in cases where religious claims touch empirical reality (e.g., creationism). Some argue religious claims often make empirical statements, while others accept the separation as helpful.
An immensely useful essay on what I would call (not without reason) our epistemologically nihilistic times. Much to consider. This is actually what Substack is all about. 👏🏻