Cognitive risks of using AI and how to mitigate them

Executive Summary

When humanity started to write, it triggered an evolution. Literally: new regions of the brain were formed and shaped. If learning to write made us evolve, then when we stop writing by hand and move to keyboards, will we regress? Apparently, yes. Research indicates that people who lack handwriting practice begin to lose some cognitive abilities compared to those who keep writing by hand.

With this in mind, let's ask ourselves: what happened to our cognitive abilities when research in libraries was replaced by search engines and sources like Wikipedia? And what will happen to our brains as we come to rely more and more on AI for intellectual tasks?

Our research on this topic with ChatGPT surfaced this list of risks:

  • A decline in problem-solving skills "from scratch"
  • A decline in "mental models of the system"
  • A decline in the accuracy of thinking

and these forecasts:
  • Decreased ability for abstract construction
  • "Flatness" of engineering thinking
  • Potential degradation of invention ability

and, as the big picture, these trends:
  • Loss of the ability to think deeply and independently.
  • Deterioration of engineering and analytical thinking.
  • Weakening of internal speech and logical discipline.
  • Increased dependence on external cognitive tools.
  • Weakening of memory, concentration, and the structure of thoughts.
  • Decreased ability to connect large conceptual spaces.

All of this looks scary enough to tempt one to avoid AI altogether. Unfortunately, that would not be a wise decision. AI is already in early market adoption, and it genuinely increases productivity. The performance boost is significant enough that AI applied to our work tasks can be considered a driver of the next industrial revolution. Not using it could mean losing competitiveness in the labour and capital markets.

If we cannot avoid adopting AI, can we at least minimize its negative impact on our brains?

Together with ChatGPT, we arrived at the following set of practices to cope with these risks:

  • "Architectural Framework": 5-10 minutes of design before connecting AI
  • “Manual development zone”: keep 10–30% of the code written by hand
  • “Self-checking questions” is a cognitive protocol
  • "AI as a simulator for complex tasks": 2-3 times a week, do something with a developmental perspective, not a result-oriented one.
  • “Living Writing”: 10–20 minutes by hand per day

Discovery

Risks

Below is a sober, evidence-based analysis of what is already confirmed, what can be reasonably inferred, and what risks and compensations exist. We consider two major vectors of cognitive change:

  1. Reduction of handwriting
  2. Replacement of independent programming with generative AI

I. Reduction of Handwriting

(confirmed data and grounded projections)

1. What is confirmed by research

1.1. Weakening of the graphomotor trace → poorer long-term memory.
Handwriting creates a strong memory trace through motor encoding: the physical movement of the hand reinforces letters, words, and concepts.

Studies show that:

  • children who learn to write by hand develop the Visual Word Form Area (VWFA) faster;
  • adults remember handwritten text better than typed text;
  • handwriting activates broader neural networks: motor cortex, prefrontal cortex, and visuospatial regions.

Reducing handwriting therefore leads to a decline in deep, long-term retention, especially in people engaged in intensive learning or abstract reasoning.

1.2. Reduced concentration.
Handwriting is inherently slower, requiring sustained focus and stronger prefrontal control of attention. Typing, by contrast, encourages speed, multitasking, and frequent context switching, resulting in more superficial information processing.

1.3. Decline in slow, analytical thinking.
Writing by hand forces the brain to compress, structure, select words, and hold ideas in working memory. This produces conceptual clarity similar to “slow reading”. Typed text encourages more stream-like, less structured thought.

2. Reasonable projections

2.1. Weakening of internal speech.
Handwriting strengthens the coupling between language and motor circuits, reinforcing verbal working memory and inner speech. As this weakens, thinking becomes more image-based and fragmented.

2.2. Decline in deep elaboration of ideas.
The ability to hold and gradually unfold large conceptual structures is best trained through writing. Reduced writing increases speed, but at the cost of depth.

3. Personality-level risks
  • lower quality of independent analysis;
  • reduced cognitive endurance;
  • more superficial reasoning;
  • difficulty formulating complex ideas;
  • weaker ability to maintain long conceptual chains.

II. Replacement of Programming with Generative AI

(observed effects vs. likely outcomes)

This is not merely about coding, but about outsourcing thinking itself to an external system.

1. What is already observed

1.1. Decline in problem-solving “from scratch”.
When AI generates code, developers spend less time on architecture, logical problem-solving, and deep requirement analysis. This is a form of cognitive offloading, well documented in studies on GPS, calculators, and search engines.

The result is local skill degradation and reduced ability to solve problems without external assistance.

1.2. Weaker internal system models.
Strong programmers develop rich mental models of stacks, operating systems, memory, data types, and execution flow. When code is outsourced, understanding stagnates even if the code works.

1.3. Reduced precision of thought.
Programming demands strict, unambiguous reasoning. Reliance on AI habituates the brain to fuzzy natural-language formulations, eroding logical rigor.

2. Likely future effects

2.1. Decline in abstract reasoning.
Programming is one of the strongest trainers of abstraction. Reduced manual practice weakens compositional thinking, recursion, invariant reasoning, and system modeling.

2.2. Flattening of engineering thinking.
When AI writes code, humans mainly edit, prompt, and verify. This limits the development of synthetic thinking, where architecture, constraints, and implementation are held simultaneously.

2.3. Risk to inventive capacity.
AI excels at standard solutions. Novel paradigms and genuine inventions emerge only from deep, hands-on immersion in the domain.

III. Overall cognitive trajectory
  • Reduction of structured, disciplined thinking.
  • Growth of fragmentation and reactive attention.
  • Lower cognitive “tone”, similar to muscles without load.
  • A shift toward a more oral, associative, and less rigorous cognitive culture.

IV. High-level risks
  1. Loss of deep independent thinking.
  2. Degradation of engineering and analytical reasoning.
  3. Weakening of internal speech and logical discipline.
  4. Growing dependence on external cognitive tools.
  5. Decline in memory, concentration, and structural clarity of thought.
  6. Reduced ability to connect large conceptual spaces.

This is not a catastrophe, but the trend is clear.

Solution

Below is a practical, economical, and realistic system that allows you to use AI to its full potential without cognitive degradation — and, in fact, to strengthen your thinking over time.

The core idea is simple:

🔥 The danger is not AI usage itself, but passive AI usage.

The solution is to build an active, architecture-driven mode of interaction, where AI acts as an accelerator of thought rather than a replacement for it.

I. A principled framework: three modes of working with AI

There are three fundamentally different ways to use generative models:

  1. “Executor” mode.
    You define the task → AI produces the result → you apply minor edits.
    Minimal cognitive load, maximal risk of skill degradation.
  2. “Co-author” mode.
    You and AI work together, but you control the architecture.
    Moderate cognitive load → skills are preserved.
  3. “Trainer” mode.
    AI acts as a sparring partner, critic, or verifier that amplifies your own thinking instead of replacing it.
    High cognitive load → skills actively develop.

The goal is to remain primarily in modes 2 and 3, using mode 1 only for routine tasks where little cognitive value is at stake.

II. Five practices that protect the brain and remain economically efficient

Each practice is designed to:

  • avoid slowing down work,
  • often even accelerate it,
  • while loading the neural systems that would otherwise atrophy.

1. “Architectural frame”: 5–10 minutes of design before engaging AI

This is not a return to old-style programming. It is a distinct skill: thinking in categories, relations, and structures.

A minimal ritual:

  • identify key entities,
  • define constraints,
  • state desired invariants,
  • sketch a structure of 5–7 blocks.

Only then involve AI.

Why this matters:
Architectural thinking is the most valuable and most trainable layer of expertise. AI cannot replace it — and will not for a long time.

Time: 5–10 minutes.
Return: better code quality, structured thinking, and more effective AI output.
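
To make the ritual concrete, here is a minimal sketch of what such a frame can look like when captured as code stubs. Everything in it is an illustrative assumption (a hypothetical job-scheduler feature, invented names), not a prescribed design:

    from dataclasses import dataclass, field
    from typing import Any

    # Architectural frame, sketched by hand before any AI is involved:
    # entities, constraints, and invariants first; implementation later.

    @dataclass(order=True)
    class Job:
        """Unit of work. Invariant: never executed by two workers at once."""
        priority: int              # lower value runs earlier
        seq: int                   # FIFO tie-breaker for equal priorities
        payload: Any = field(compare=False)
        retries_left: int = field(default=3, compare=False)

    class Scheduler:
        """Assigns Jobs to at most max_workers concurrent workers.

        Constraints written down up front:
          - at most max_workers Jobs run concurrently;
          - a failed Job is retried while retries_left > 0;
          - equal-priority Jobs run in FIFO order (via seq).
        """
        def __init__(self, max_workers: int = 4) -> None:
            self.max_workers = max_workers
            self.pending: list[Job] = []  # to become a heap; left for AI

        def submit(self, job: Job) -> None:
            raise NotImplementedError     # scaffolding deliberately left to AI

The value is not in the stubs themselves but in the few minutes of thinking in entities, constraints, and invariants that they force before the first prompt is written.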

2. “Manual refinement zone”: keep 10–30% of the code handwritten

A simple but powerful rule:

Write 10–30% of the code yourself — not for volume, but for depth.

What to write manually:

  • interfaces, contracts, and signatures;
  • critical error handling and edge cases;
  • core algorithms.

Let AI handle boilerplate and scaffolding.

Why it works:
You interact with the system at the level of internal models — and those models remain intact and growing.
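
As an illustration of the split, here is a minimal sketch in which the contract, the error type, and the core rule are written by hand, while the repetitive scaffolding is delegated. The config-parsing scenario and every name in it are assumptions made up for the example:

    from typing import Protocol

    # Hand-written zone: the contract, the edge cases, and the core rule.

    class ConfigSource(Protocol):
        """Contract: anything that can yield raw configuration text."""
        def read(self) -> str: ...

    class ConfigError(ValueError):
        """Raised for malformed or contradictory configuration."""

    def parse_port(raw: str) -> int:
        """Core rule, kept manual: a port is an integer in 1..65535."""
        try:
            port = int(raw.strip())
        except ValueError as exc:
            raise ConfigError(f"port is not an integer: {raw!r}") from exc
        if not 1 <= port <= 65535:
            raise ConfigError(f"port out of range: {port}")
        return port

    # Delegated to AI: file loading, env-var overrides, defaults tables,
    # and other scaffolding that exercises no architectural judgment.

The hand-written fraction is exactly the part where mistakes are expensive and where the internal model of the system is built.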

3. “Self-check questions”: a cognitive protocol

Before engaging AI, briefly ask yourself:

  1. What is the hardest part here?
  2. What would be wrong if I made a mistake?
  3. Which invariants must be preserved?
  4. What must a future reviewer clearly understand?

This activates the prefrontal cortex, working memory, and analytical reasoning.

Time: ~1 minute.
Return: disproportionate improvement in overall work quality.

4. “AI as a trainer for complex tasks”: 2–3 times per week

Use AI not for results, but for development:

  • explain an algorithm to AI as if you were teaching;
  • ask AI to challenge your architecture;
  • ask AI to probe for weaknesses in your reasoning.

You provide the arguments — not the model. This is cognitive work with resistance.

5. “Living writing”: 10–20 minutes of handwriting per day

Not journaling or creative prose, but conceptual work by hand:

  • sketch an idea,
  • explain a problem to yourself,
  • structure a complex question,
  • externalize and refine thought.

Handwriting strongly activates the motor–prefrontal learning loop and improves:

  • long-term memory,
  • conceptual depth,
  • attention control,
  • structural thinking.

This is the key antidote.

III. Integration into real life (economically optimal)

The 15–60 model (roughly 15 to 60 minutes of deliberate practice per day) is optimal under heavy workload:

Daily
  • 10 minutes — architectural framing before a key task;
  • 10–20 minutes — handwriting;
  • AI as executor for routine work.

2–3 times per week
  • 20 minutes — thinking training with AI as opponent.

Outcome
  • higher work speed;
  • preserved strategic competence;
  • maintained depth of thinking;
  • increased creativity;
  • near-zero risk of cognitive degradation.
IV. A personalized note

Based on your questions and thinking style, you already demonstrate strong internal organization, systems orientation, and sensitivity to subtle cognitive shifts.

For people like this, AI is dangerous not because of degradation, but because of impoverishment of internal style if automation becomes total.

When these practices are maintained, however, a rare effect emerges:

A multilayer cognitive model: human architectural reasoning + machine productivity + high-level reflection.

This combination is exceptionally powerful.
