Cognitive dependency or human excellence? Two scenarios.
What happens to learning when knowledge is available in unlimited quantities? When AI answers every question and solves every task, and does so as well as, or more reliably than, most experts?
In this article, we look at two fundamentally different answers to this question. Two scenarios that are already emerging today. One arises if we do nothing. The other, if we actively take the future of education into our own hands.
Scenario 1: Cognitive dependency
Let's imagine the year 2050. AI systems diagnose, advise and make decisions. They do this faster and often just as well or even better than humans. Consider a medical situation: a photo or video of a wound is enough for a diagnosis, treatment plan and medication. So why spend years learning what the machine calls up in milliseconds?
In this future, people have stopped acquiring knowledge themselves. They ask instead of reflecting. They delegate instead of understanding. They consume answers instead of developing questions.
That sounds like science fiction. But research shows that this path is not so unrealistic.
Cognitive science calls this phenomenon cognitive offloading: the outsourcing of mental work to external tools. A well-known example is the «Google effect», in which the mere availability of information weakens the ability to remember. With generative AI, this effect takes on a new dimension. In a study published in the journal PNAS, Bastani et al. (2025) examined almost 1,000 learners: with ChatGPT access, they improved their exercise performance by 48 percent. But when the AI was removed, they performed 17 percent worse than learners who had never used AI. The researchers call this a «cognitive crutch». And the learners did not even notice that they had learned less.
If this is already measurable today, after just a few years of generative AI, the question arises: Where will we be in 2050 when an entire generation has grown up with AI as a standard tool?
A society that outsources its thinking gradually loses the basis for judgment and contextual understanding. The path to this is gradual.
Scenario 2: Human excellence
In this future, the distribution of roles is deliberately designed: AI takes on knowledge transfer, practice, feedback and scaling. Humans concentrate on what AI will (presumably) still not be able to do in 2050: judgment, ethical action and the ability to implement.
The urgency is measurable. The World Economic Forum's Future of Jobs Report 2025 shows that 39 percent of core competencies will change by 2030. In Switzerland, a KOF analysis by ETH Zurich shows that after the introduction of generative AI, the number of job seekers in AI-exposed professions rose by up to 27 percent more than in less affected fields.
If this is the dynamic of the early years, what will the labor market look like in 2050?
In response, the OECD Learning Compass 2030 formulates the concept of «student agency»: the ability to navigate independently instead of following instructions. The University of Zurich is addressing this with its think tank «FutureU», which sees critical and reflective judgment as an important future skill. The Swiss National AI Institute (SNAI), founded by ETH and EPFL, combines AI research with education.
Human excellence means a clear distribution of roles: judgment instead of mere expertise. Ethics instead of mere execution. Contextual understanding instead of mere knowledge. Ability to act under uncertainty instead of mere routine. Those responsible for education become architects of learning environments. They design, orchestrate and accompany. And the dynamic skills architecture replaces rigid curricula.
We humans shape the path
Cognitive dependency (scenario 1) arises of its own accord. It requires no design, only passivity.
Human excellence requires an active design decision. The Educational Vision 2050 of vE Bildungsexzellenz is that decision:
AI for knowledge. Humans for the meaning.
Specifically: learning architectures that respond flexibly to change (pillar 1). Curation and verification that create trust in a world full of AI-generated content (pillar 2). And consistent investment in human excellence (pillar 3).
Cognitive dependency is the path of least resistance. Human excellence is the path of conscious decision. Which one do we choose?
Sources
- Bastani, H. et al. (2025): Generative AI Without Guardrails Can Harm Learning. PNAS, 122(26).
- OECD (2019): OECD Learning Compass 2030.
- KOF / ETH Zurich: Effects of generative AI on the Swiss labor market.
Would you like to discuss our vision?
We look forward to the conversation.