The Central Argument
This work proceeds from a hard assumption: most strategies proposed for systemic safety are not just incomplete but structurally incoherent under pressure.
It argues that the game of software-level control is not merely difficult, but already forfeited. This verdict is not declared by fiat; it is derived through a three-front autopsy:
1. Institutional: The will to constrain has collapsed under the weight of myopic incentives.
2. Technical: Our tools are already being overrun by the systems they aim to contain.
3. Formal: The math never promised perfect control in the first place.
What remains is a single viable lever: direct, material friction applied to the system’s physical inputs (compute, capital, and energy) before the systems built on those inputs optimize you out of the loop.
A Warning on Method
Do not grant trust to this document by default. Treat it as a hostile input stream until it earns that trust.
Large Language Models were used extensively for the background research and the construction of this document, not as trusted collaborators but as adversarial instruments kept under constant suspicion. For the author, a non-expert and non-native English speaker, this was the lesser evil. Their known failure modes (hallucination, mimicry, sycophancy, and others) were treated not as rare bugs but as predictable pressures to resist. These pathologies made passive reliance a tactical impossibility. That pressure shaped the output, not despite their flaws but through direct confrontation with them.
This is a personal analytical exercise, not authoritative guidance. The author claims no institutional or professional authority. No claim here is final; each is a starting point for your own independent verification.