Provenance Manifesto

We Are Not Arguing About AI Text - We Are Arguing About How Humans Think



Author: Yauheni Kurbayeu
Published: Apr 7, 2026

I recently found myself in yet another discussion about AI-generated texts.

Some people described them as empty, generic, even irritating to read.
Others said the exact opposite: that they are clear, structured, and much easier to understand.

At first glance, it looks like a debate about AI.

It isn’t.

What we are actually seeing is a collision of different ways humans process information.


Some people naturally read in narratives.

They follow the flow of the text, pick up tone and nuance, and reconstruct meaning through context. For them, a good text feels “alive.” It may not be perfectly structured, but it carries intention, personality, and a sense of authorship.

Others read in structures.

They look for signals, hierarchy, and clarity. They want the point quickly, with minimal ambiguity. For them, a good text is one that can be parsed efficiently, where meaning is explicit rather than implied.

Neither approach is better; they simply optimize for different cognitive costs.


AI-generated text strongly aligns with the second model.

It tends to be explicit, evenly structured, and predictable in how it unfolds. If you are used to reading technical specifications, product requirements, or decision logs, this feels natural. It reduces the effort needed to understand what is being said.

But if you are used to narrative writing, the same qualities can feel artificial. The text may appear flat, repetitive, or lacking depth, even when the content itself is correct.

So when someone says, “AI text is bad,” they are often reacting not to the quality of the ideas, but to the mismatch between the format and their preferred way of thinking.


This becomes more visible when we look at concrete examples.

Take a product requirement.

One version might read like a short story. It describes the user, the context, the problem, and gradually leads you to the solution. You understand it by following the narrative.

Another version presents the same information as a structured breakdown. It defines the problem, lists constraints, specifies inputs and outputs, and states the decision directly.

Both can describe the same reality, but they require different cognitive strategies to process.

AI tends to produce the second type.


Now comes the uncomfortable part.

Whether we like it or not, AI is becoming an interface between humans and systems.

We are already starting to read machine-generated requirements, review AI-produced specifications, and validate decisions that were partially or fully synthesized by models. The volume of this will grow faster than our ability to debate stylistic preferences.

At some point, the question will shift.

Not “Do I like how this is written?” but “Can I work with this efficiently?”


And this is where the real issue starts to emerge.

Both human-written and AI-generated texts share the same fundamental weakness. They present outcomes, but they often hide the reasoning behind them.

A beautifully written paragraph can still obscure assumptions.
A perfectly structured AI response can still miss critical constraints.

In both cases, we are left asking the same questions:

  • Why was this decision made?
  • What alternatives were considered?
  • Which assumptions shaped the result?

If we cannot answer these, the quality of the writing becomes secondary.


This is why the discussion about “AI vs human text” feels somewhat misplaced.

The more important shift is happening elsewhere.

We are moving from a world where output was scarce and effortful, to one where output is abundant and cheap. In that world, the bottleneck is no longer producing text.

The bottleneck is understanding and trusting it.


And that brings us to something more fundamental.

The real divide is not between human and AI writing; it is between opaque outputs and traceable reasoning.

If we cannot reconstruct how a conclusion was formed, it doesn’t matter how natural or how structured the text looks.

It remains a well-presented guess.


So perhaps the better question is not whether AI texts feel “alive” or “artificial.”

It is whether we are building systems and habits that preserve the decisions, context, and assumptions behind what we read.

Because that is what will ultimately determine whether we can rely on it.


I’m curious how others experience this.

Do you find yourself understanding narrative text more easily, or structured text?
