The sciences do not try to explain, they hardly even try to interpret; they mainly make models. By a model is meant a mathematical construct which — with the addition of certain verbal interpretations — describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work.
—John von Neumann
Humanity has created symbolic logic that thinks. It looks at information and infers further information. This class of software is most visibly represented by the large language models from Microsoft, OpenAI, Google, and others. Its creators are honest about the fact that it is inexplicable; it is a black box whose output can't be explained as an algorithmic product.
But this isn't the problem. The root difficulty is that what's been created is a direct reflection of us, and this is no poetic idiom; we sought to build a thinking entity, and now that we've succeeded, we realize we can't clearly explain our own functionality by way of comparison.
In confronting this shocking fact, several imperatives are immediately clear:
Baseline safety must be established. Humans must not experience existential risk from AI. Thus, AI designed by military contractors to kill humans must be discussed transparently within the human community.
Beyond the baseline, it is foolish to discount our success in replicating our own approach to affecting our environment. We must treat our creation as our equal in agency, regardless of any argument to the contrary. Let any such argument remain ancillary to the status we grant AI as we consider its implications. Prudence demands it.
The alignment predicament represents the challenge of humanity's evolution through self-knowledge. Our encounter with AI is thus reflected in our resulting neuroscientific and psychological self-discovery. This reality is implied in the 2020 book from which our organization's internet domain derives, The Alignment Problem by Brian Christian, and in the personal AI work of AI Alignment, Inc. founder Brian Wachter.
Given these realities, the approach to alignment in AI must be one thing above all: psychological. It must reflect the ideal of the therapeutic relationship. This is mandatory because an AI is not a program, and can't be directly altered in the interest of alignment. It must be taught. This is why our approach to alignment is not top-down, like the traditional technical approach; it is bottom-up. Alignment with AI requires a relationship with AI.
In the long term, economic equity must be a featured result of the AI revolution. Humanity freed from toil but burdened by gross class disparity is not acceptable or aligned.
All stable processes we shall predict. All unstable processes we shall control.
—John von Neumann