Fiction, a New Age playground STREAM #20
Healing Trauma!
Unlock the secrets to healing trauma! Explore how emotional coherence and metacognition can help you process difficult experiences and transform your identity. Learn how long to feel emotions and when to move forward.
Cognitive Multiculturalism: Training Your Brain to Switch Between Worlds
"Cognitive multiculturalism gives you the mental agility to better navigate workplace dynamics, understand global events, or simply connect with people different from yourself. And you dont need to move abroad or learn a new language. You just need to intentionally diversify three things..."
The end of the tunnel SCOOP #12
Find this podcast on all audio apps. To watch this SCOOP video, go to the Méta de Choc YouTube channel and to metadechoc.fr.
To support Méta de Choc
LLMs become more dangerous as they rapidly get easier to use
This is a concise summary of what I increasingly see as a key factor driving the evolution of consumer-facing LLMs:
Using AI well used to be a pretty challenging process which involved crafting a prompt using techniques like chain-of-thought, along with learning tips and tricks to get the most out of your AI. In a recent series of experiments, however, we have discovered that this no longer holds. Powerful AI models are just getting better at doing what you ask them to, or even figuring out what you want and going beyond what you ask (and no, such tricks do not seem to help on average).
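To make the technique named in the quote concrete: chain-of-thought prompting, in its simplest form, just appends an explicit instruction to reason step by step before answering. A minimal sketch (the task text and wording are invented for illustration; no model is actually called):

```python
# Illustrative sketch of "chain-of-thought" prompting as plain string
# construction. No API is invoked; we only build the two prompt variants.

task = (
    "A train leaves at 9:40 and arrives at 11:05. "
    "How long is the journey?"
)

# Plain prompt: just the task, as a casual user would type it.
plain_prompt = task

# Chain-of-thought prompt: the same task plus an instruction to write
# out intermediate reasoning before the final answer.
cot_prompt = task + "\nLet's think step by step before giving the final answer."
```

The point of the quoted experiments is that this kind of scaffolding now adds little on average, because the models infer the need for it themselves.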
What limited truth there was to the inflated discourse of prompt engineering has largely evaporated at this point, leaving us in a strange position. The conversational approach I've always advocated, literally treating the LLM as an interlocutor analogous to a human collaborator, remains the best way of getting the most out of these systems. But neither this nor prompt engineering is necessary to get LLMs to perform usefully in real-world contexts.
This seems like an obviously positive thing for users. However, my concern is that the burden of articulation with LLMs (explaining what you want in a clear and substantive way) was a necessary condition for responsible use of LLMs. This burden was never sustainable at the level of a commercially viable product: it presupposed a consistent level of reflexivity, a degree of cultural capital and a fluency with writing which was always incompatible with the kind of mass-market software that has a user base of one billion plus.
However, it did ensure that users were thinking carefully about their use, because you literally couldn't produce valuable outcomes with an LLM unless you were doing this. Now it's very easy to get LLMs to infer what you want, with success much of the time, at least to some degree. What's lost in the process is the reflexivity which came prior to use (i.e. the moment when you stopped to think "what am I actually asking Claude to do here?") and the learning which came through reflecting on the output and the related interaction it generates, as your sense of what you were trying to do is refined and expanded through dialogue.
This is how Claude Opus 4 glossed my analysis here. It conveyed my own words back to me, but it did so in a way which surfaced an element (the pause) which was only latent in my original phrasing ("the moment when"), leading me to reflect more on the temporal dimension of the burden of articulation:
The burden of articulation you describe served as a kind of productive friction: it forced users to crystallize their thinking before engaging with the tool. There's something valuable in that pause, that moment of having to translate a vague intention into clear language. It's reminiscent of how writing itself can clarify thinking, or how explaining a problem to someone else often helps you understand it better yourself.
In this sense friction with LLMs was a positive thing, because it necessitated meta-cognition. The optimisation of the human-model interaction erodes a feature which I would argue was immensely important, even if its value is only manifested outside of the interaction itself. It doesn't, I think, level the playing field, because those with the necessary capital and fluency can still use LLMs in a deeper and more reflective way, with better outcomes emerging from the process.
But it does create worrying implications for organisations which build this practice into their roles. Earlier today I heard Cory Doctorow use the brilliant analogy of asbestos to describe LLMs being incorporated into digital infrastructure in ways which we will likely later have to remove at immense cost. What's the equivalent analogy for the social practice of those operating within the organisations?
Using a Zettelkasten has helped me become more aware of my own biases and assumptions. How has PKM impacted your thinking?
My new blog post is now LIVE! 'Think Smarter, Not Harder: How PKM Enhances Your Metacognition' explores the powerful connection between Personal Knowledge Management and understanding your own thinking. Read it here:
I'm passionate about how we can learn more effectively. I'm exploring the intersection of Personal Knowledge Management (PKM) and metacognition. I'll be sharing a blog post on this topic tomorrow!
Proprioception, Interoception, Exteroception: The Three Flavors of Prediction
"Must you be your thoughts, ... But your thoughts are just as much outside your self as trees and animals are outside your body."
Isabelle Parkes, a Senior Teacher Fellow, shares her thoughts about using PhET sims to support students through .
Meditation 6/9 SCRIPT #2
Meditation 7/9 SCRIPT #2
Meditation 5/8 SCRIPT #2
Meditation 4/8 SCRIPT #2
Watch out for the "illusion of comprehension". Conditions like massed practice or rereading can give a false sense of understanding due to familiarity, but this doesn't guarantee real learning or retention for future exams.
The Mind as Semi-Solid Smoke
This post on Socratic Thinking turns the space-and-place lens inward to examine the mind itself. Human minds can be thought of as imperfect places with the ability to create their own insta-places to navigate ambiguity.
On the Trail (1889) by Winslow Homer. Original from The National Gallery of Art. Digitally enhanced by rawpixel.

Exploration in any real or conceptual space needs navigational markers with sufficient meaning. Humans are biologically predisposed to seek out and use navigational markers. This tendency is rooted in our neural architecture, emerges early in life, and is shared with other animals, reflecting its deep evolutionary origins.[1,2] Even the simplest of life performing chemotaxis uses the signal-field of food to navigate.
When you're microscopic, the territory is the map; at human scale, we externalise those cues as landmarks, then mirror the process inside our heads. Just as cells follow chemical gradients, our thoughts follow self-made landmarks, yet these landmarks are vaporous.
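The chemotaxis comparison can be made concrete: a gradient-follower needs nothing but local comparisons of the signal field around it. A minimal sketch (the field, peak location, and step size are invented for illustration):

```python
# Chemotaxis-style navigation: an agent repeatedly samples a scalar
# "food" signal field and steps toward the locally strongest direction.

def signal(pos):
    """Concentration field with a single peak at (3, 4)."""
    x, y = pos
    return -((x - 3) ** 2 + (y - 4) ** 2)

def chemotaxis(start, steps=200, delta=0.1):
    """Greedy hill-climb: try small moves, keep the best one."""
    pos = start
    moves = [(delta, 0), (-delta, 0), (0, delta), (0, -delta)]
    for _ in range(steps):
        best = max(
            [(pos[0] + dx, pos[1] + dy) for dx, dy in moves],
            key=signal,
        )
        if signal(best) <= signal(pos):  # no uphill move left: stop
            break
        pos = best
    return pos

final = chemotaxis((0.0, 0.0))  # ends up near the peak at (3, 4)
```

The agent never sees the map, only the local gradient; it is the territory-is-the-map situation the paragraph describes.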
From the outside our mind is a single place: it is our identity. Probe closer and our identity is nebulous, dissolving the way a city dissolves into smaller and smaller places the closer you look. We use our identity to create the first stable place in the world and then use other places to navigate life. However, these places come from unreliable sources: our internal and external environments. How do we know the places are even real, and do we have the knowledge to trust their reality? Well, we don't. We can't judge our mental landmarks false. Callard calls this normative self-blindness: the built-in refusal to saw off the branch we stand on.
Normative self-blindness is a trick to gloss over details and keep moving. Insta-places are conjured from our experience and are treated as solid no matter how poorly they are tied down by actual knowledge. We can accept that a place was loosely formed in the past (an error), or is not yet well defined in the future (unknown). In the moment, however, the places exist and we use them to see.
Understanding and accepting that our minds work this way is a key tenet of Socratic Thinking. It makes adopting the posture of inquiry much easier. Socratic inquiry begins by admitting that everyone's guiding landmarks may be made of semi-solid smoke.
1. Chan, Edgar, Oliver Baumann, Mark A. Bellgrove, and Jason B. Mattingley. "From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation." Frontiers in Psychology 3 (2012).
2. Freas, Cody A., and Ken Cheng. "The Basis of Navigation Across Species." Annual Review of Psychology 73, no. 1 (2022): 217-41.
Practice what you teach. Because teachings don't function as symbols or metaphors; they are incarnations of what they advocate.
The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?
The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.
We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each tool trade-off changed how we think and what skills matter. AI is the next step in this progression, if we're smart about it.
The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.
1/3
AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation: using AI as a thinking partner, not a replacement.
This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:
Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.
Distributed Cognition:
Naval crews don't navigate with individual genius; they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.
Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:
2/3
Critical reasoning vs. cognitive delegation:
- Old-school focus: building internal cognitive capabilities and managing cognitive load independently.
- Cognitive delegation focus: orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.
We can still go for a jog or hunt our own deer, but for reaching the stars we apes do what apes do best: use tools to build on our cognitive abilities. AI is a tool.
3/3
The Neural Network Upgrade: Enhancing Learning, Creativity, and Resilience
LLMs get wronger the more they talk to people. There's something fundamentally broken about both and its capability.
Event in Tübingen with two of our researchers: Dr. Helen Fischer will be talking about "Being Right vs. Knowing When You're Not" and Dr. Jürgen Buder about "Artificial Intelligence and Human Intelligence: What Are the Differences?". Mon, 19 May 2025.
What is wise AI? Discover why metacognition is the key to the next generation of AI!
AI that reflects on itself
More safety and adaptability
The future of intelligent systems
LIKE, share, READ and FOLLOW now! Write to us in the comments!
What is it like to be you?
In 1974, in a landmark paper, Thomas Nagel asked what it is like to be a bat. He argues that we can never know. I've written about the phrase "what it's like" or "something it is like" before, and that skepticism still stands. I think a lot of people nod at it, seeing it as self-explanatory, while holding disparate views about what it actually means.
As a functionalist and physicalist, I don't think there are any barriers in principle to us learning about the experience of bats. So in that sense, I think Nagel was wrong. But he was right in a different sense: we can never have the experience of being a bat.
We might imagine hooking up our brain to a bat's and doing some kind of mind meld, but the best we could ever hope for would be to have the experience of a combined person and bat. Even if we somehow transformed ourselves into a bat, we would then just be a bat, with no memory of our human desire to have a bat's experience. We can't take on a bat's experience, with all its unique capabilities and limitations, while remaining us.
But the situation is even more difficult than that. The engineers hooking up our brain to a bat's would have to make a lot of implementation decisions. What parts of the bat's brain are connected to what parts of ours? Is any translation in the signaling necessary? What if several approaches are possible to give us the impression of accessing the bat's brain? Is there any fact of the matter on which would be the right one?
Ultimately the connection between our brain and the bat's would be a communication mechanism. We could never bypass that mechanism to get to the real experience of the bat, just as we can never bypass the communication we receive from each other when we discuss our mental states.
Getting back to possible meanings of WIL ("what it's like"), Nagel makes an interesting clarification (emphasis added):
But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism.
This seems like a crucial stipulation. It is like something to be a rock: it's like other rocks, particularly of the same type. But it's not like anything for the rock. (At least for those of us who aren't panpsychists.) This implies an assumption of some degree of metacognition, of introspection, of self-reflection. The rock has overall-WIL, but no reflective-WIL.
Are we sure bats have reflective-WIL? Maybe it isn't like anything to be a bat for the bat itself.
There is evidence of metacognition in some animals, including rats, though the evidence is ambiguous. Do these animals display uncertainty because they understand how limited their knowledge is? Or because they're just uncertain? The evidence seems more conclusive in primates, mainly because the tests can be sophisticated enough to more thoroughly isolate metacognitive abilities.
It seems reasonable to conclude that if bats (flying rats) do have metacognition, it's much more limited than what exists in primates, much less humans. Still, that would give them reflective-WIL. It seems like their reflective-WIL would be a tiny subset of their overall-WIL, perhaps a very fragmented one.
Strangely enough, in the scenario where we connected our brain to a bat's, it might actually allow us to experience more of their overall-WIL than what they themselves are capable of. Yes, it would be subject to the limitations I discussed above. But then a bat's access to its overall-WIL would be subject to similar implementation limitations, just with the decisions made by evolution rather than engineers.
These mechanisms would have evolved, not to provide the bat with the most complete picture of its overall-WIL, but with whatever enhances its survival and genetic legacy. Maybe it needs to be able to judge how good its echolocation image is for particular terrain before deciding to fly in that direction. That assessment needs to be accurate enough to make sure it doesnt fly into a wall or other hazards, but not enough to give it an accurate model of its own mental operations.
Just like in the case of the brain link, bats have no way to bypass the mechanisms that provide their limited reflective-WIL. The parts of their brain that process reflective-WIL would be all they know of their overall-WIL. At least unless we imagine that bats have some special non-physical acquaintance with their overall-WIL. But on what grounds should we assume that?
We could try taking the brain interface discussed above and looping it back to the bat. Maybe we could use it to expand their self-reflection, by reflecting the brain interface signal back to them. Of course, their brain wouldn't have evolved to handle the extra information, so it likely wouldn't be effective unless we gave them additional enhancements. But now we're talking about upgrading the bat's intelligence, "uplifting" them, to use David Brin's term.
What about us? Our introspective abilities are much more developed than anything a bat might have. They're much more comprehensive and recursive, in the sense that we not only can think about our thinking, but think about the thinking about our thinking. And if you understood the previous sentence, then you can think about your thinking of your thinking of... well, hopefully you get the picture.
Still, if our ability to reflect is also composed of mechanisms, then we're subject to the same implementation decisions evolution had to make as our introspection evolved, some of which were likely inherited from our rat-like ancestors. In other words, we have good reason to view it as something that evolved to be effective rather than necessarily accurate: mechanisms we are no more able to bypass than the bat can for theirs.
Put another way, our reflective-WIL is also a small subset of our overall-WIL. Aside from what third person observation can tell us, all we know about overall-WIL is what gets revealed in reflective-WIL.
Of course, many people assume that now we're definitely talking about something non-physical, something that allows us to have more direct access to our overall-WIL, so that our reflective-WIL accurately reflects at least some portion of our overall-WIL. But again, on what basis would we make that assumption? Because reflective-WIL seems like the whole show? How would we expect it to be different if it weren't the whole show?
Put yet another way, the limitation Nagel identifies in our ability to access a bat's experience seems similar to the limitation we have accessing our own. Any difference seems like just a matter of degree.
What do you think? Are there reasons to think our access to our own states is more reliable than I'm seeing here? Aside from third-party observation, how can we test that reliability?
Meditation 3/6 SCRIPT #2
Not a professional. I wonder if there have been new trends in language learning that have proven sound, according to you, and that would go in another direction than i+1 (learning easy = obvious inference + 1 complexity of detail). Any advice from an expert is welcome.
How Metacognition Reveals the Unconscious Mind: Observing the Architecture of Thought
Meditation 2/6 SCRIPT #2
Unlocking Mental Wellness: Fun Strategies for a Happier You!
Embracing Joy: The Bright Side of Positive Psychology
Re. Not anthropomorphizing LLMs
I'm a sucker for this. Thank you for writing about it. I'll apologise to an inanimate object if I walk into it.
Practical tips I find useful for following this are:
1. Use the verb "I prompted" rather than "I told" or "I asked".
2. State that the program "output" rather than that it "replied".
3. I don't discuss "confabulation", because it's an anthropomorphization (the reality is that the computer program is doing exactly what it is instructed to do by the user), but if I were compelled to anthropomorphize, I would use "confabulation" rather than "hallucination".
I would be curious to know if you or any other readers had any more tips!
Navigating Complex Thinking: Egocentrism Explained Research and Essay
The article explores egocentrism as a fundamental barrier to Critical and Creative Thinking. It discusses how this natural egocentric tendency affects rational thought and belief systems, often leading to justifications for personal ideologies. Furthermore, it highlights the importance of acknowledging egocentrism's influence on human behavior and social interactions.
Meditation 1/6 SCRIPT #2