Growing up in North India, I saw mustard oil everywhere. In the kitchen. On the scalp. On aching joints during winter. On newborns before their first bath.

It was never exotic, controversial, or debated. It was simply… normal.

So years later, encountering the Western classification of mustard oil as “non-edible” felt less like a scientific revelation and more like cultural whiplash. The oil hadn’t changed. The people hadn’t changed. The outcomes certainly hadn’t changed.

Only the label had.

That moment stayed with me. Not because it was about food, but because it revealed something deeper about how institutions think about risk, familiarity, and trust. And once you see that pattern, you start seeing it everywhere.

The Illusion of Objectivity

In regulatory language, mustard oil is rarely described as “unsafe.” It is described as “not approved for edible use.”

That distinction matters.

The classification largely traces back to animal studies conducted decades ago, in which extremely high doses of isolated erucic acid showed potential cardiac effects in lab rats. From this, a broad conclusion was drawn (without strong population-level human evidence), leading to a precautionary regulatory stance in parts of the West.

This is often presented as neutral, evidence-driven decision-making.

But science does not operate in a vacuum. It operates inside culture, history, and institutional comfort zones.

Mustard oil is pungent. Strong-smelling. Heating. Hard to standardize into a mild, odorless, highly refined product. It does not blend in. And because it does not blend in, it becomes suspect.

What followed was not a ban rooted in demonstrated harm, but a quiet exclusion rooted in unfamiliarity.

What Gets Lost When Context Is Ignored

In India, mustard oil has been consumed for centuries: heated, tempered, balanced with spices, and used in small quantities as part of a diverse diet. No credible epidemiological data links it to population-level heart disease. No generational health crisis emerged from its use.

Yet this lived evidence rarely entered the regulatory conversation.

Because institutions are far more comfortable trusting:

  • Controlled laboratory studies over lived experience
  • Isolated variables over complex systems
  • Models that resemble what they already understand over those that do not

When context is inconvenient, it is quietly sidelined.

The Irony of “Accepted Risk”

What makes this especially interesting is what is accepted without hesitation.

Highly refined seed oils. Ultra-processed foods. Long ingredient lists with unpronounceable names.

Many of these now carry stronger evidence of long-term harm than mustard oil ever did. Yet they passed regulatory filters because they fit neatly into existing industrial and compliance frameworks.

This is where the hypocrisy appears. Not loud or malicious, but structural.

Risk is not rejected. It is simply rebranded.

If a risk comes wrapped in familiar supply chains, corporate accountability, and standardized documentation, it becomes “manageable.” If it does not, it becomes “unsafe.”

From Kitchens to Codebases

At some point, I realized this pattern felt uncomfortably familiar.

Because the same logic plays out every day in enterprise technology, especially around open source software.

Open source, much like mustard oil:

  • Has a long and proven track record
  • Powers vast portions of modern infrastructure
  • Is transparent, opinionated, and not owned by a single entity
  • Requires understanding rather than blind trust

And yet, in many organizations, it is labeled “risky,” “unsupported,” or “not enterprise-ready.”

Until, of course, it is repackaged by a vendor.

The Open Source Paradox

Here is the quiet irony.

Organizations that distrust open source often run their entire digital backbone on it.

Linux. Containers. Kubernetes. OpenSSL. Databases. Networking stacks.

The same organizations will happily sign multi-million-dollar contracts for proprietary layers built on top of open source, because now the risk feels “accepted.”

Not reduced. Not eliminated. Just psychologically outsourced.

The presence of a vendor does not remove technical risk. It removes institutional discomfort.

Familiarity Masquerading as Safety

This is the common thread connecting mustard oil and open source:

We do not assess risk objectively. We assess how familiar a risk feels.

Institutions are not optimized for truth. They are optimized for predictability.

What fits existing mental models is accepted. What requires new thinking is resisted.

And so:

  • Ancient foods become “unsafe”
  • Transparent software becomes “risky”
  • Proven systems become “non-standard”

Until time, necessity, or market pressure forces a reversal.

The Real Cost of This Bias

The cost is not merely mislabeling.

It is slower innovation, missed efficiencies, overdependence on intermediaries, and the quiet erosion of institutional understanding.

When organizations outsource thinking in the name of safety, they do not eliminate risk. They delay accountability.

A More Honest Question

This is not an argument against regulation. Or against vendors. Or against caution.

It is an argument for intellectual honesty.

We should be asking:

  • What evidence are we prioritizing?
  • What context are we ignoring?
  • And whose comfort are we optimizing for?

Because very often, what we call “best practice” is simply the path of least cognitive resistance.

Closing Reflection

Mustard oil did not suddenly become safe once low-erucic variants were branded and standardized. Open source does not suddenly become reliable once a logo and an invoice appear.

What changes is not the substance.

What changes is who we trust to take responsibility for it.

And perhaps the most uncomfortable realization is this:

Risk is not the absence of a brand. Risk is the absence of understanding.

Until institutions learn to tell the difference, we will keep repeating the same mistake: from kitchens to codebases, from food to software, from tradition to innovation.

Quietly. Consistently. And with great confidence.