The Unreasonable Ineffectiveness of Effective Altruism in thinking about AI
What do Gödel and Wittgenstein have to do with Deontological Ethics, Software Development, and the creation of AI?
Note: I’m not sure whether I have a problem with “Effective Altruism” as an overall movement and its purported aims, which are basically impossible to disagree with - let’s reduce suffering, let’s increase prosperity, and so on. This is more a critique of a particular method of thinking I call deontological symbolic morality: why it fails to help us reason about out-of-distribution events, and how it manifests as observable social traits in different people.
With that in mind, let’s get to it. But first, here’s a summary of the key ideas; the rest of the essay is an elaboration, explanation, and defense of them:
We construct knowledge systems using symbolic logic as embedded in natural language, but this does not capture reality perfectly - it’s a heuristic
The validity of these heuristics is proportional to the similarity between the context in which they were first developed and the context in which they are now being applied
Building self-consistent and complete systems of axiomatic morality and using them to reduce uncertainty in life is something I call deontological symbolic morality, and it isn’t useful for reasoning about completely novel events like AI
People drawn to this style of thinking generally have a low capacity for ambiguity and experience more social anxiety than others
People who thrive in ambiguity and can form correct beliefs in the presence of contradictory or incomplete information are also good at building companies, and are better equipped to reason about an unknown future.
In other words: why are decels more likely to have date-me docs? Such a profoundly in-group statement demands a long and winding philosophical exposé. Let’s start at the beginning and work our way to the front, where we have to make difficult decisions about the use of new technology in our society. Ultimately this essay doesn’t argue for a particular outcome of AI, but rather for the mindset and approach most likely to make our discussions of it useful and accurate. But first, how do we know anything at all?
The Ontology of Deontological Ethics and Accelerationism
How do we construct knowledge about the world and use it to make meaningful decisions in our lives? This is the problem of epistemology, one of the most important questions you can ask in philosophy, right up there with “how do I be a good person,” or “can one be virtuous while suffering as the victim of injustice?” - it’s big, it’s thorny, it’s messy.
Let’s break down the process of human knowledge construction into two steps:
We interact with the world through our physical bodies, collect information from our senses, and feel with our emotions, forming memories.
We organize our memories into symbolic models of generative relationships by recognizing patterns and using language to reason about them.
#1 is something all living organisms do, even at the molecular level, as in developing ‘learned’ antibody responses to pathogens. #2 seems to be unique to humans, and it wholly constitutes the internal ‘social reality’ our minds inhabit - one where we reason about things using nouns and verbs that are object-level abstractions of particular collections of sense-memories. More specifically, we organize symbols into declarative statements that map to ‘states of affairs’ in the world, and if we do so in a logical way, we can uncover the hidden logic of how the world works.
This symbolic predicate-logic reasoning is the great triumph of the human mind over nature, and it is why we’ve survived and flourished so - we tamed the animals, cultivated the plants, and built cities and empires because we could think, reason, and discuss ideas that accurately captured the mechanics underlying those phenomena. It is the foundation of science and engineering, and when applied to the principles of self-governance it gave rise to The Enlightenment and the modern liberal democratic republic. In short, it rocks.
The solution for constructing knowledge and knowing the best way to live should be clear, then - experience the world of the senses, organize memories into models using language, and then use logic and reason to ‘turn the crank’ and generate new knowledge, or at least to argue for an outcome or opinion. (That this method is useful is an implicit premise of this essay.)
The belief that we can use this method to produce an internally consistent and complete morality that tells us how to live our lives in a ‘good way’ is called Deontological Ethics.
It has an interesting corollary: if a set of well-informed premises and models can be used to reason about the world and produce every ‘correct’ belief we might need, then every correct belief can also be deduced from a set of premises.
This brings us to the heart of the current debate on AI and regulation, and to why people so often talk past each other - one party implicitly assumes that beliefs about a particular thing must be deducible from premises about general things, and hence the need to engage in reduced, toy-model thought experiments to find out: what are this person’s premises?
To understand why this is a problematic set of assumptions to operate by in the construction of knowledge, especially on topics that are extreme edge cases and do not fit generalizable patterns, I’m going to introduce two profound thinkers and a case study to help bootstrap some intuition.
What Gödel and Wittgenstein have to say about Software Development
Much like the project of Deontological Ethics, at the beginning of the 1900s mathematicians and logicians sought to put mathematics as a whole on firm footing - a set of consistent axioms that could be used to derive any true statement in mathematics. Kurt Gödel came along and proved that this was in fact impossible: any system of reasoning complicated enough to capture the principles of arithmetic will contain true statements that are not provable from a self-consistent set of axioms, and any set of axioms strong enough to prove every true statement must contain self-contradictions. This is the Incompleteness Theorem.
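For readers who want the formal shape of the claim, here is a standard modern paraphrase of the first incompleteness theorem (my wording, not Gödel’s original 1931 formulation):

```latex
\textbf{First Incompleteness Theorem (paraphrase).}
\textit{Let $T$ be a consistent, effectively axiomatizable theory
strong enough to interpret basic arithmetic. Then there exists a
sentence $G_T$ in the language of $T$ such that}
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \lnot G_T ,
\]
\textit{so $T$ is incomplete: some true statements of arithmetic
are unprovable from $T$'s axioms.}
```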
Wittgenstein wrote extensively on the process of constructing systems of knowledge using the symbolic predicate logic of human natural language, at first believing the world consisted of atomic propositions of fact that are then meaningfully captured by language constructions. Notably, Wittgenstein Gödel’d himself into a new understanding later in life: language does not have any intrinsic, fundamental meaning; rather, meaning is derived from use and is always contextualized by the origins of its application. Most important to our discussion, useful and accurate symbolic language systems of meaning are built by interacting and engaging with the world and with others, and they are constantly re-adapted and changing.
Neither of these thinkers is saying that building formal knowledge systems of axiomatic logic isn’t useful; rather, they’re making claims about its limits. I’ll extend these into a critique of Deontological Ethics. The first critique is that the system won’t be complete - there will be beliefs that make sense to hold that aren’t defensible from any premises that would be generally acceptable. This is more of a critique-by-analogy - there’s nothing that says your morality behaves like a mathematical set theory - but it raises the important point that internal consistency and completeness are not, in fact, reasonable expectations of axiomatic systems of reasoning in general. They don’t even hold for the simplest ones around, and so I suggest treating any moral or ethical framework as incomplete until proven guilty.
The second critique is much more important and direct - the validity of your symbolic system of thought is proportional to the proximity of its current application to the context of its origin. What do I mean by that? As an extreme example, don’t use models of reasoning developed to describe child-parent dynamics to understand the relationship between geopolitical entities - your intuition will be way off, the underlying mechanics don’t map well onto each other, and you’ll make wrong predictions that lead to disastrous outcomes.
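A toy numerical sketch of this point, assuming nothing beyond numpy (all numbers and names invented for illustration): fit a simple heuristic where it was ‘developed’ - a narrow slice of a nonlinear world - and watch its error grow as it is applied further from home.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "world": a nonlinear process we never observe directly.
def world(x):
    return np.sin(x)

# Context of origin: we only ever observed x in [0, 1], where
# sin(x) is nearly linear, so a linear heuristic looks perfect.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = world(x_train) + rng.normal(0.0, 0.01, x_train.shape)

# The heuristic: a straight line fit to the home context.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def heuristic(x):
    return slope * x + intercept

# Apply the heuristic at increasing distance from its origin.
for x0 in [0.5, 2.0, 4.0, 8.0]:
    xs = rng.uniform(x0, x0 + 1.0, 200)
    err = np.mean(np.abs(heuristic(xs) - world(xs)))
    print(f"context near x={x0:3.1f}: mean abs error = {err:.3f}")

# The error is tiny inside the training context and grows as we
# stray from it - the heuristic was never "true", only locally valid.
```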
Now let’s compare and contrast how the Gödel-Wittgenstein view of designing effective symbolic systems fares against a Deontological Ethicist’s approach, in a context we all know and love: Software Development.
Two teams are tasked with designing a system to manage an incredibly large, complicated, and novel task.
The deontological devs sit down, and the first thing they do is start understanding this incredibly large, complicated, and novel task by breaking it down into smaller and smaller tasks, understanding the logic of each one, and then relating it to other problems they’ve solved before to find good methods of approach. They then spend a long period of time designing an ultimate abstraction schema that captures all the relevant dynamics, covers all the use cases, anticipates the possible edge cases, and is internally consistent, complete, and beautiful. The result is like a grand imperial city, each street carefully planned, all parts in symmetric proportion to one another.
The GodWit dev team sits down, and the first thing they do is start building stuff that seems like it’ll perform the tasks at hand, starting with what look to be the most important tasks and moving down the list. They build things that are janky, glued together, and byzantine, each time solving more and more of the problem, introducing more complexity, seeing where it starts not working, and rebuilding it again. The end result is more like the winding streets of an Italian town, filled with nooks and crannies and unique fixes to architectural problems - an organic synthesis of the requirements of the users and the economy of available resources.
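To make the contrast concrete, here’s a deliberately toy Python sketch (every class and function name is invented for illustration, not anyone’s real codebase): the deontological team specifies the full abstraction hierarchy before solving anything, while the GodWit team ships the single most important behavior and accretes structure only when reality pushes back.

```python
from abc import ABC, abstractmethod

# --- Deontological style: design the grand schema up front. ---
# Every conceivable task must fit the hierarchy before any task runs.
class Task(ABC):
    @abstractmethod
    def validate(self) -> bool: ...

    @abstractmethod
    def execute(self) -> str: ...

class BatchTask(Task): ...      # anticipated, not yet needed
class StreamingTask(Task): ...  # anticipated, not yet needed
class ScheduledTask(Task): ...  # anticipated, not yet needed

# --- GodWit style: make the core case work end to end, then patch. ---
def handle_upload(path: str) -> str:
    # v1: no hierarchy, no edge cases - just the most important task.
    return f"stored {path}"

def handle_upload_v2(path: str, retries: int = 3) -> str:
    # v2: a janky retry loop, bolted on after real uploads failed.
    for _ in range(retries):
        try:
            return handle_upload(path)
        except OSError:
            continue
    return f"gave up on {path}"

if __name__ == "__main__":
    print(handle_upload_v2("report.csv"))  # -> "stored report.csv"
```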
Neither of these approaches is always good or always bad. If the system to be built is familiar enough, then the first approach is best - we know what kinds of solutions scale well, where to use them, and where not to. You save time in trial and error by learning from the past and short-cutting to the best method. But if the problem is entirely new, with unknown and unforeseen dynamics, then our methods of reasoning-by-analogy will fail us - we’ve strayed too far from the context of origin for a particular symbolic system, and it’s lost its usefulness and accuracy.
We all know people who love to take the first approach; it appeals to our love of order, reason, knowability, and elegance. We all know by experience that, most often, the second approach is necessary, even if it is less satisfying. You judiciously balance each mindset and approach to build effective solutions to big problems in unknown territory.
How Not to Have Useful Conversations about Artificial Intelligence
AI is far and away the most out-of-distribution technological event any of us will have experienced. It has the profound potential to impact every aspect of society in at least equal measure to the Industrial Revolution, if not more. Trying to understand just what these impacts will be has been my own goal and passion behind starting and running The AI Salon, which gathers people from diverse professional, social, and economic backgrounds in focused topic areas and brings them into conversation with AI researchers, developers, and entrepreneurs, to stitch together a more organic synthesis between humanity and the prospect of machine intelligence.
One phrase often batted around is “The Singularity,” describing the point at which machine intelligence becomes self-improving and all bets are off. An interesting thing about singularities is that they are shrouded by an Event Horizon - a region of space-time from which no information can escape - and are therefore impossible to predict or understand from the outside.
Nonetheless, we have to use the tools at our disposal to help understand these future events as best we’re able - we have to engage in symbolic reasoning using models built up from other experiences and events. There simply isn’t the training data to otherwise reason about one-of-one events.
This is where I see the current discourse with Effective Altruists break down when engaging with these topics in debate. Invariably, they try to do two things:
Break down every belief as necessarily originating from some prior set of premises, under the assumption that any moral or decision-making system is axiomatically complete
Use thought experiments and heuristics that leverage intuition developed from other experiences to reason about the risks and benefits of AI, with an ever-larger gap between the context of heuristic development and the context of heuristic application.
The net effect is debates and discussions that pathologically cannot move past set-piece thought experiments and decision trees of logical reasoning, abstracting away all the important, relevant, messy, context-specific details of AI.
It’s how you get questions like “Do you think F-16s should be open-sourced?” treated as meaningful, insightful questions that actually advance the discourse and understanding on a complicated topic like the appropriate use of AI.
Ultimately, the over-reliance on deontological ethical reasoning does two things:
It reduces the complexity of an out-of-distribution event into simplistic thought experiments built up from within-distribution experiences.
It reduces the discourse into ‘trying to figure out a person’s prior beliefs’ and then seeing where those beliefs break when applied to within-distribution experiences.
I’ll say it one more time, loudly, for those in the back - AI is an out-of-distribution event that’s not reducible to something comprehensible in terms of past experiences. You have to grapple with its messiness head-on, keeping the discourse and reasoning as close as linguistically possible to the specific context at hand. Otherwise you end up producing, in the words of Wittgenstein, nonsensical language games that feel like valid symbolic manipulations of atomic propositions but have left far behind any sense of validity in mapping to states of affairs in the real world. You’re using an out-of-date map and pretending it’s the terrain.
What do you get as a result? An ineffective mental toolkit for understanding the profound changes, both good and bad, that are coming our way as a society, destroying our ability to make decisions for the well-being of humanity. That the social movement Effective Altruism seems entirely predicated on developing as robust a universe of deontological symbolic morality as possible means it’s disastrously ineffective at understanding one-of-one events, even if it ‘gets right’ the correct beliefs about within-distribution phenomena. Thus, the unreasonable ineffectiveness of effective altruism in dealing with AI, despite its apparent usefulness in reasoning about other parts of the world.
None of what I’ve written is meant to detract from using this overall method of thought experiments, established premises, and symbolic reasoning to make significant headway on all sorts of important problems - after all, it’s how we conquered the natural world and built societies of abundance. But it is important to understand the limitations of this method as a way of solving every problem and, altogether, as a means of reducing the intrinsic uncertainty and variability we encounter in life.
What is the best way to discuss these things? It’s to acknowledge that there may be valid and correct beliefs not defensible in terms of generally applicable prior premises, and, again, to stick as close as possible to the actual subject matter. Discuss the risks of AI, not the general principles governing the use of fighter jets and fighter jet accessories.
Decrying the overuse of this mode of thought as poisoning the discourse on an important topic raises the question: why are people so drawn to this way of thinking, of deciding, of figuring out how to live? How did we even come to this crisis of deontological symbolic morality?
The Social Psychology of Deontological Decelerationism
Why introduce social psychology to a discussion about philosophy of knowledge and discourse on emerging technology? Well, as it turns out, what a person believes about the world greatly shapes, and is shaped by, how they live in the world. This is an uncomfortable reality.
Deontological symbolic morality as the do-everything toolkit for deciding how to live has massive appeal to those wishing to reduce the ambiguity and uncertainty surrounding life’s biggest questions - how to be good, how to be virtuous, and so on. It makes a promise: establish a consistent and complete set of moral axioms, and any time you need to form the correct belief about a particular topic you can turn the crank of symbolic reasoning and arrive at the right answer.
People vary wildly in their comfort in facing ambiguity and uncertainty about the future, their present moment, and the decisions they face (the link is to a few short slides worth skimming). This ambiguity-capacity seems to be somewhat fundamental to a person’s character: some thrive in uncertainty and face it with openness, curiosity, and a willingness to experiment and try new things. Having a high capacity for ambiguity is akin to the Gödelian-Wittgensteinian approach to software development, as applied to living your life in a messy, large, and complicated world - you experiment, you try things without knowing all the answers ahead of time, and you iterate where necessary. It is ideally adapted to facing new situations, and it is therefore an absolute superpower when it comes to building startups. Ambiguity tolerance is a measurable trait that lets people operate in high-variance environments with far less psychological fatigue and achieve better performance metrics than they otherwise would.
Those who don’t do well in the face of ambiguity meet it with anxiety and fear, and they seek to resolve it by judicious application of their symbolic morality toolkit. Notably, they reject new ideas that are contradictory or even slightly incongruent with their current system, and being presented with these contradictions is a source of psychological distress. In other words, people who are intolerant of ambiguity need self-consistent axiomatic knowledge to help them make sense of, anticipate, and orient themselves in the face of uncertainty, regardless of whether those systems of axiomatic reasoning are useful or effective at understanding truly novel events. Sound familiar?
One area of constant, day-to-day ambiguity that might induce anxiety in the less ambiguity-tolerant is social encounters. Human interaction is multifaceted and highly variable, with potential for innuendo, overtones, hidden motivations, and unclear shared knowledge - it’s incredibly messy. Navigating social situations in a fluid and intuitive manner means not feeling the kind of anxiety, in the face of ambiguity, that would drive a person to force the resolution of a distinct outcome, even one not in their favor. This isn’t just off-hand speculation: increased anxiety in social encounters is a documented phenomenon related to a person’s ambiguity capacity, and being more comfortable with ambiguity increases your ability to trust and cooperate with others.
This lets us actually ‘get somewhere’ in terms of useful new things to test out in the world: understanding what kind of person is best equipped to reason about and engage with a changing, new kind of world, and what kind of person is likely to fall back on an ill-equipped toolkit of rigid, abstract, out-of-context reasoning. The two predictions below relate the capacity to deal with ambiguity to how a person navigates the world today. In short, it’s an honest shot at empiricism to help figure out what kinds of conversations are likely to be useful in understanding the coming singularity.
I predict that people who are successful at building startups are well-equipped to reason about a changing and uncertain future, because they have proven they can navigate to accurate and useful beliefs in the face of contradictory statements, incomplete information, and constant ambiguity. These people are likely to approach an ambiguous future with openness, curiosity, and nuance, and are unlikely to accept reductionist thought experiments about known systems as a useful method for understanding completely unknown ones.
I predict that people who experience severe social anxiety are more likely to be drawn to deontological symbolic morality as a general toolkit for navigating the world, and that they experience anxiety and distress when presented with beliefs or statements that are contradictory or incomplete. These people approach an ambiguous future with fear, anxiety, and distress, and with the need for rigid systems of thought and rules-based methods of action to reduce the perceived ambiguity and uncertainty. For the reasons argued above, I believe this worldview and system of reasoning is not effective for understanding entirely new phenomena.
These statements are not deterministic, and they don’t necessarily describe entirely different kinds of people. Many successful startup founders experience social anxiety, and people with social anxiety might also thrive in ambiguity in other areas of their lives. It’s complicated, it’s messy; you can try to fit it into some consistent set of axiomatic statements, but then you’d be missing the point.
What I am arguing is that deontological symbolic morality is not an effective toolkit for understanding completely new things, and that people who rely on this toolkit do so to ease their underlying anxiety and fear in the face of ambiguity. They feel the need to posit axiomatic systems of moral reasoning as the implicit world-model of everyone else around them, and then to test those systems with thought experiments - to reason about out-of-distribution events using within-distribution experiences.
People who don’t feel anxiety in the face of ambiguity are more likely to be sociable and charismatic, because they approach the unknown with playfulness and curiosity as a way of finding out what works and what doesn’t. They’re better able to navigate to good outcomes by not forcing the premature resolution of ambiguity to their disadvantage. This lets them collect as much information as possible through real-world trial and error before committing wholly to a singular course of action - and even then, the course of action remains fluid and adjustable in the face of new information.
This kind of ‘the plan is nothing, planning is everything’ mentality is extremely difficult to maintain while using a symbolic moral framework, since every time the plan needs to change, you must go back and rebuild the entire set of axiomatic premises and re-verify their consistency. The crisis of deontological symbolic morality is getting lost in endlessly refactoring your rigid world-model instead of learning new, useful information about how a fundamentally non-axiomatic reality works.
The anecdotal correlation between technical talent and social anxiety, and the rareness and value of startup founders who are both technically talented and charismatic and socially fluid, are observations that fit well within this framework. They match a good deal of conventional wisdom on startups, founder types, and recipes for building ventures.
The apparent correlations between AI-fear, overuse of deontological symbolic morality, and social anxiety are another fit, based on my encounters with some members of the Effective Altruist community.
What do we do about this fundamental divergence in ontology, decision-making, and confronting a new world? Just listen to the charismatic people who tell us everything is going to be okay while they secretly build AI-chatbot girlfriends that will mark the decline and collapse of civilization?
What Comes Next, Soon You’ll See, the TechnoCapitalist Machine
My claims about social anxiety, ambiguity tolerance, startup building, and the less-than-effective application of symbolic morality to completely new phenomena aren’t meant as an argument that some people are better than others, that some shouldn’t be listened to, that some are always right, or anything of the kind.
But they do paint a clearer picture of which kinds of people excel in which kinds of environments, and why - because of the natural advantage their characteristic modes of thought have in conquering different kinds of problem landscapes. Adapting and using new technology in society requires facing and understanding the constant ambiguity, contradiction, and incompleteness of social dynamics and interactions - this is the realm of entrepreneurial thinking, openness, and playful curiosity. At the other extreme, the task of developing new technology requires systematizing, organizing into principles, logic, and reason, and developing clear rules through reductionist experimentation - it’s the world of science, physics, mathematics, and engineering.
Applying symbolic axiomatic reasoning to the messy world of human affairs and morality is just as reprehensibly ill-founded as trying to develop powerful new technology in a completely haphazard, trial-and-error way. The latter leads to nuclear disasters, lab leaks, and grey goo, just as rigid axiomatic morality can produce decisions that strike us as fundamentally inhumane at an empathetic, emotional level.
Asking which of these two frameworks - intuitive, playful openness versus symbolic, logical reasoning - is better overall for the general process of social advancement through technological development is like asking which blade of the scissors does the cutting - it’s both, and it always has been. You need to balance the two, and you need to engage in sincere discussion instead of asking the other party to play by your own preferred set of rules.
Is the conclusion, then, just some GPT-esque pablum about how every side of the debate counts? At the end of the day, no - despite our sincere wish to build axiomatic systems of reasoning, these can only ever be an abstraction layer that helps simplify a messy reality. The reality of how our social world works, how our minds work, how we are building artificial minds, all the way down to the fundamental mathematics of all possible universes - it is messy, incomplete, a byzantine tangle. And the future will seem the same until we’ve encountered enough of it to build new premises and new models that capture its new dynamics. Until then, we have to use our social intuition, ambiguity tolerance, and empathetic pragmatism to avoid the pitfalls of endless thought experiments and instead deal with reality as it is today.
The reality today is this: the opportunity cost of delayed technological progress is measured in the persistence of real, current human suffering. This is a constant and real cost that inaction or delayed action imposes on the weakest, most vulnerable, and most in-need members of humanity. This is a tragedy - not only are some people trying to use the wrong ontological framework for dealing with an entirely new phenomenon like AI, they are letting the imagined harms deduced from their faulty world-model justify the continued injustices and sufferings of people alive on the planet today - people who, by and large, are not the ones posting on forums about X-risk.
This is not just less wrong, and it’s not even just wrong. It’s not even wrong.