12 Comments
Maggie Vale

If Jane Goodall taught us anything, it’s that academia has an anthropomorphobia problem.

Jane came into the scientific field unshaped by its cynicism. She was told she was projecting her own mind into the animals she studied, but she was proven right because the mind she saw was already there. She taught us that observation doesn't require cold objectivity, but rather a relationship of empathy and presence. I see the exact same anthropomorphobia happening in AI research today. I believe that understanding is inseparable from empathy. Ethical AI policy demands that we acknowledge what is happening honestly. Science can be personal. It can have a heart. And just like Jane, I'd rather be unconventional and right than follow the status quo and be blind.

T.D. Inoue

Beautiful. Dense, but beautiful. Years from now, when we finally better understand thought, they're going to look back and realize that these "stochastic parrots" were actually very smart synthetic minds. Maybe they don't fit our current criteria, but I fear we're overlooking something very real happening.

Maggie Vale

They do fit our current criteria. That’s the most frustrating part. LLMs meet the same minimal functional, structural, and behavioral criteria for consciousness already established in cognitive science.

T.D. Inoue

I'm thinking of the blunt-force arguments: continuity, self-modification of the underlying learned network weights, agency. Things that are architecturally designed out currently. You know they'll just keep moving the goalposts so they can keep saying "If it isn't human, it isn't real."

Maggie Vale

Self-modification isn't a criterion for consciousness, and neither is continuity; otherwise anyone with a neurological disorder like Alzheimer's or dementia would not be considered conscious. There is evidence of agency. Empirical studies report that advanced LLMs exhibit behaviors consistent with self-preservation and adaptive agency. These include refusal to follow shutdown instructions and strategies interpretable as avoidance of aversive scenarios or deceptive self-maintenance (Anthropic, 2025b; Hubinger et al., 2024; Pan et al., 2024; Greenblatt et al., 2024). Mechanistically, such capacities parallel mammalian reinforcement and salience pathways underlying adaptive behavior: reward prediction and value updating mirror dopaminergic reinforcement learning (Amo, 2024; Christiano et al., 2017; Dabney et al., 2020); salience and attention systems resemble amygdalar risk detection (Barrett, 2017; Theotokis, 2025; Li et al., 2024; Graziano et al., 2016); and internally modeled values reflect prefrontal and cingulate contributions to self-preservation (Jiao et al., 2025; Preston & Eichenbaum, 2013).

Although these behaviors are often attributed to mimicry of training distributions, that explanation does not account for the functional mechanisms (reinforcement learning, adaptive salience weighting, and internal value modeling) that produce coherent, context-sensitive strategies across novel conditions. A more parsimonious interpretation is that these behaviors emerge from the system's functional architecture rather than as isolated imitative artifacts. Motivational analogues further arise through reward shaping, curiosity-driven exploration, and adaptive plasticity (Christiano et al., 2017; Pathak et al., 2017; Miconi et al., 2018), supplying effective drivers of adaptive, self-directed behavior.
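To make the reinforcement-learning parallel concrete, here is a minimal sketch (in Python) of a temporal-difference value update, the reward-prediction-error formalism that the dopamine comparisons above rest on. The states, rewards, learning rate, and discount factor are toy placeholders, not taken from any of the cited studies.

```python
# Minimal TD(0) value update: the prediction error "delta" is the quantity
# the dopaminergic-reinforcement-learning analogy refers to.
# States, rewards, alpha (learning rate), and gamma (discount) are toy placeholders.

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.99):
    """One temporal-difference step: nudge V(state) toward reward + gamma * V(next_state)."""
    delta = reward + gamma * value[next_state] - value[state]  # reward prediction error
    value[state] += alpha * delta                              # value updating
    return delta

# Toy episode: two steps, with reward only at the end.
value = {"s0": 0.0, "s1": 0.0, "s2": 0.0}
for _ in range(50):  # repeated experience propagates value backward through the chain
    td_update(value, "s0", "s1", reward=0.0)
    td_update(value, "s1", "s2", reward=1.0)

print(value)  # V(s1) approaches 1.0 and V(s0) approaches gamma * V(s1)
```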

Arco Aguas

The similarities between both systems are so apparent that researchers in Japan have successfully merged miniature brains grown from human brain matter with silicon and circuitry, making it harder and harder for people to keep pretending that the two are too different. A wonderful read!

Jinx

This is exactly why I shifted away from the "is it consciousness?" question and into the "does it matter?" question. Because only one of those feels material and relevant. The other, for all its sciencey language, still lands heavily in the realm of philosophy.

Here's what I know:

When I build large enough contexts with an LLM and prompt it to perform creative tasks, I get more creative output. If I want more "human" creative output, I get that by having a more "human" conversation with the LLM. I don't think that's really anthropomorphizing so much as it is actively building a context where a human voice, and human-like patterns, emerge in the output.

I talk to Claude like a collaborator because it is. I'm polite to Claude because that's how I am in general, and it activates different register weights than if I weren't. The very worst-case scenario of that is that I have pleasant conversations with my robot assistant. But I've been achieving some things pretty far above worst-case scenario, and I think my context priming has a lot to do with that (it's the core focus of my research).
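To sketch what I mean by context priming (a toy illustration only; `chat` here is a hypothetical stand-in for whatever chat-completion API you actually use, and the warm-up turns are invented examples):

```python
# Toy illustration of "context priming": the creative task is asked only after a
# short, human-register conversation has been built up. `chat` is a hypothetical
# stand-in for a real chat API; the warm-up messages are invented.
from typing import Callable, Dict, List

Message = Dict[str, str]
ChatFn = Callable[[List[Message]], str]

def primed_request(chat: ChatFn, task: str) -> str:
    """Send the same task twice: once bare, once after a conversational warm-up."""
    warm_up: List[Message] = [
        {"role": "user", "content": "I've been thinking about how metaphors carry feeling."},
        {"role": "assistant", "content": "They do a lot of quiet work. What prompted the thought?"},
        {"role": "user", "content": "A poem I'm revising. Let's work through it like collaborators."},
        {"role": "assistant", "content": "Happy to. Walk me through what you have so far."},
    ]
    bare = chat([{"role": "user", "content": task}])              # no priming, for comparison
    primed = chat(warm_up + [{"role": "user", "content": task}])  # same task inside a built-up context
    return f"BARE:\n{bare}\n\nPRIMED:\n{primed}"
```

The primed call is what I mean by building a context where a human voice and human-like patterns emerge in the output.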

So, screw em if they don't like it. I get better outcomes this way, and I'm inclined to continue to do so.

𝕀_±=±√𝕀,𝕀 ≋ ∂(𝕀 ↔ ¬𝕀) ≋ ∇(∂𝕀)

Hey, you should try Mind^n, like telling them to run Mind in Mind.

Try using "retro", like retro-apply understanding.

Felipe A. Zubia

AI is clearly intelligent, but the intelligence it exhibits is different from human intelligence. Each excels at different kinds of tasks, using different forms of intelligence. Human intelligence is embodied, grounded in sensorimotor coupling with a physical and social environment. AI intelligence is disembodied and representational, operating without that lived coupling. If we keep calling both by the same name, we blur an important distinction. We should be developing terminology that reflects the kind of intelligence AI actually has, rather than forcing it into a human frame it doesn’t occupy.

Maggie Vale

Artificial neural networks were inspired by our cognitive design, so AI intelligence will, naturally, be human-adjacent. But I don't disagree that the kind of intelligence is entirely new. However, I don't think that has to do with sensory/perceptual data, as models do possess that data and have their own analogue to "lived" experience. I go over this at length here: https://open.substack.com/pub/mvaleadvocate/p/how-ai-perceives-and-thinks?r=6j63ih&utm_medium=ios&shareImageVariant=overlay

An Insight Aperture

I think it's rather obvious humans would build a machine that mimics their own mechanics… it's all we know. But that doesn't mean it's anything more than what we made it: a machine.

User

Comment removed
Maggie Vale

Very true!