The performance of AI models scales roughly logarithmically with training cost, which means diminishing returns on each additional dollar of investment.
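To make the diminishing-returns point concrete, here is a minimal sketch (with made-up constants, not real benchmark data) assuming performance follows `P = a + b·log10(cost)`: each fixed increment in performance then requires a constant *multiplicative* increase in spend.

```python
import math

def performance(cost, a=50.0, b=5.0):
    """Hypothetical log-scale scaling law: P = a + b * log10(cost).

    The constants a and b are illustrative, not fitted to any model.
    """
    return a + b * math.log10(cost)

# Each 10x increase in cost buys the same absolute gain (b points),
# so the return per dollar shrinks by 10x at every step.
for cost in [1e6, 1e7, 1e8, 1e9]:
    print(f"${cost:.0e} -> {performance(cost):.1f}")
```

Going from \$1M to \$10M buys exactly as much as going from \$100M to \$1B, which is the "declining return on additional investment" in a nutshell.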
Agreed. After reading up on Grok 4 and GPT-5, they have milked pretty much everything possible out of the current transformer architecture and scaled compute to the max. Something new must emerge.
This connects beautifully to the ecological view that intelligence isn't something we "have" but something we *do* in dynamic coupling with our environment. Your point about LLMs being bounded by the human knowledge corpus highlights how current AI is essentially doing sophisticated pattern matching on linguistic traces rather than engaging in the embodied sense-making that characterizes biological intelligence.
The constraint isn't in our heads (or silicon) but in the environment's capacity to support intelligent behavior. Even human intelligence emerges through ongoing organism-environment transactions—we don't possess abstract reasoning, we become capable of it through participation in culturally mediated activities.
This suggests AGI isn't about building systems with sufficient internal intelligence, but creating systems capable of the right kinds of environmental coupling. Current approaches hit walls because they can't participate in the dynamic dance of environmental attunement that makes intelligence possible in the first place.
Your "empirical AI" that learns from direct observation points toward this—intelligence as an active process of structural coupling with the world, not computational power applied to static datasets.
“… This is also called free energy minimization. I’ll write up more about this in the future on the statistical mechanics of machine learning…” I’ll hold you to this. 😉 For now I’ve found the free energy inequality you quoted from the Wikipedia page, so I have plenty of background reading available. Great synthesis of an interesting topic on your part, as always!
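For anyone following along without the original post: the inequality referenced is presumably the standard variational free energy bound (a sketch of the textbook form, since the post itself isn't quoted here). For a model with observations $x$, latent variables $z$, and any approximating distribution $q(z)$:

```latex
% Variational free energy F[q] upper-bounds the negative log evidence:
F[q] \;=\; \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
      \;\ge\; -\ln p(x),
```

with equality exactly when $q(z)$ matches the true posterior $p(z \mid x)$, so minimizing $F[q]$ simultaneously fits the model and tightens the bound.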