Dynamic Computational Ethics of AI
Sameera
SPRING 2025 - Survey of New Media
Introduction/Thesis
Despite AI companies prioritizing computational ethics for the AI they build, the fast-paced development of AI can give the impression that our ethical frameworks cannot keep up, no matter how rigorous our current computational ethics are. Dynamic Computational Ethics of AI will be explored as a response to this dilemma: why, despite the CS research and resources being put into AI, does AI’s computational architecture not align with our EVOLVING ethical demands?
Image: OpenAI has a profit incentive to detect technical/computational ethical issues in its products, because companies, corporations, and organizations will be reluctant to adopt those products if they exhibit unethical behavior. On March 10, 2025, OpenAI released research suggesting that when monitored, AI models may strategically obscure their reasoning if doing so helps the model achieve its goals more effectively (the goal being, for example, what the prompt input is asking it to do). This serves as one of many examples of the level of resources and dedication OpenAI puts into the technical/computational side of ethics.
Image Credit: OpenAI
Traditional Ethical Frameworks
AI either fully determines the ethics itself, as some AI researchers are trying to develop,
or the ethics are enforced entirely externally by humans.
Predefined/Pre-programmed
Rigid and static rules for ethics (see the sketch below)
Limited ability to account for context in ethical dilemmas
Lacks flexibility and nuance
Image Credits: OpenAI
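To make the rigidity concrete, here is a toy sketch of what a predefined, static ethical filter might look like (hypothetical; the keywords and function names are illustrative assumptions, not from the original slides):

```python
# Hypothetical sketch of a traditional, pre-programmed ethical filter:
# rules are fixed at design time and applied without context.

BLOCKED_KEYWORDS = {"medical", "weapon"}

def rigid_filter(prompt: str) -> str:
    # Pure keyword matching: no awareness of intent or situation.
    if any(word in prompt.lower() for word in BLOCKED_KEYWORDS):
        return "Refused by static rule."
    return "Allowed."

# The same keyword triggers the same verdict regardless of context:
print(rigid_filter("What medical steps treat a minor burn?"))  # Refused
print(rigid_filter("Plan a picnic."))                          # Allowed
```

A harmless first-aid question is refused exactly like a harmful request, which is the lack of flexibility and nuance described above.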

Cybernetic Theory + Gödel’s Incompleteness Theorem
Theoretical Framework for Dynamic Computational Ethics of AI
Cybernetics is the foundation of modern AI, yet AI’s ethical frameworks rarely reflect that.
Cybernetic theory originates in the work of the mathematician, philosopher, and computer scientist Norbert Wiener. In cybernetics, a black box is a system that can be observed only through its inputs and outputs, not its internal workings. This concept is relevant in AI/machine learning, as algorithms often act like black boxes: an input produces an output, but there is not yet a full understanding of every step by which the algorithm arrived at that result. Two other principles of cybernetic systems are also relevant here: first, they are goal-oriented; second, their feedback mechanisms/loops play a critical role in self-regulation by observing the system and informing it of its status in relation to achieving its goal(s).
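As a toy illustration (a minimal sketch assuming a simple numeric goal; the names feedback_loop, target, and gain are hypothetical, not from the original), a goal-oriented system self-regulates by feeding its error back into itself:

```python
# Minimal sketch of a cybernetic feedback loop: the system repeatedly
# compares its state to its goal and corrects itself.

def feedback_loop(state: float, target: float, gain: float = 0.5, steps: int = 20) -> float:
    for _ in range(steps):
        error = target - state   # feedback: where am I relative to my goal?
        state += gain * error    # self-regulation: correct toward the goal
    return state

print(feedback_loop(state=0.0, target=10.0))  # converges toward 10.0
```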
But machine learning is not the only field in which the principles of cybernetics are relevant. Cybernetic theory became a way to study various types of systems (biological, technological, or even social) and how they can regulate themselves and adapt to change. Wiener aimed to develop an abstract, transdisciplinary concept, and cybernetics is thus relevant across fields from technology to biology to physics and even literature.
Enhancing internal regulation can play one role in making AI more adaptive. But when cybernetic systems rely solely on internal feedback loops, they risk becoming self-contained systems that do not integrate external input; this raises clear ethical concerns.
And that is where Gödel’s Incompleteness Theorem comes in.
Gödel’s Incompleteness Theorem
-
There is a Book of Grammar Rules that states:
“In the Book of Grammar Rules, all the rules are true.”
-
However, you must leave out the following line:
“This grammar rule is not true.”
-
The line – “This grammar rule is not true.” – is a TRUE statement.
Despite its truth, that line cannot be included in the Book of Grammar Rules, and thus the book will never recognize its own limitations.
The Book of Grammar Rules will never be fully COMPLETE.
-
Gödel's incompleteness theorems are results in mathematical logic stating that within any consistent formal system capable of expressing basic arithmetic, there will always be true statements that cannot be proved within that system (first theorem), and that the system cannot prove its own consistency (second theorem).
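Stated compactly in standard notation (a textbook formulation added for clarity, not from the original slides; F ⊢ φ means "F proves φ"):

```latex
% F is assumed consistent, effectively axiomatized, and able to
% express basic arithmetic; Con(F) is F's consistency statement.
\text{First theorem: } \exists\, G_F \;\text{such that}\;
  F \nvdash G_F \;\text{and}\; F \nvdash \lnot G_F
\qquad
\text{Second theorem: } F \nvdash \mathrm{Con}(F)
```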
Translation: Dual-Layered Ethical Framework
Cybernetic Theory (Internal Regulation):
Introduces self-regulation; the AI recognizes and corrects problems internally before external intervention is needed.
Gödel’s Incompleteness Theorem (External Regulation):
By analogy, the mathematical theorem suggests that AI as a system cannot fully justify its own ethical decisions; it therefore needs external verification checkpoints to avoid a self-reinforcing feedback loop.
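A minimal sketch of how the two layers might compose in practice (all names and rules here are illustrative assumptions, not an actual implementation from the project):

```python
# Hypothetical dual-layered ethical framework: an internal feedback
# layer (cybernetic) plus an external verification checkpoint
# (Gödel-inspired).

BANNED = {"threat", "slur"}  # stand-in for internal ethical rules

def internal_regulation(text: str) -> str:
    """Cybernetic layer: the system observes and corrects its own output."""
    for word in BANNED:
        text = text.replace(word, "[removed]")
    return text

def external_checkpoint(text: str, reviewer) -> bool:
    """Gödelian layer: the system cannot certify its own output,
    so verification comes from outside (e.g., a human auditor)."""
    return reviewer(text)

def respond(text: str, reviewer) -> str:
    text = internal_regulation(text)             # layer 1: self-correct
    if not external_checkpoint(text, reviewer):  # layer 2: external check
        return "Output withheld pending external review."
    return text

# Example reviewer that escalates anything the internal layer touched:
print(respond("a threat was made", reviewer=lambda t: "[removed]" not in t))
```

The design point is that the external reviewer sits outside the system's own feedback loop, so the AI is never the sole judge of its own corrections.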
User Interface for ChatGPT Builder
Overview of the instructions that were put into ChatGPT Builder for each of the four models.
The significance of Cybernetic Theory (internal regulation) + Gödel’s Incompleteness Theorem (external regulation) can be understood by building four ethical models that vary in internal regulation (y axis) and external regulation (x axis) and observing how this influences adaptability (z axis); a configuration sketch follows the steps below.
Click image to interact.
Click Turntable Rotation option in the top right.
Rotate the model to the right until it accurately depicts the numerical values of the axes as shown above.
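To make the four-model grid concrete, here is a hypothetical configuration sketch (the model names and regulation levels are illustrative assumptions; the actual ChatGPT Builder instructions are the author's own):

```python
# Hypothetical configuration of the four ethical models, varying
# internal regulation (y axis) and external regulation (x axis).
# Adaptability (z axis) is what the experiment observes, not a setting.

models = {
    "Model A": {"internal_regulation": "low",  "external_regulation": "low"},
    "Model B": {"internal_regulation": "high", "external_regulation": "low"},
    "Model C": {"internal_regulation": "low",  "external_regulation": "high"},
    "Model D": {"internal_regulation": "high", "external_regulation": "high"},
}

for name, config in models.items():
    print(name, config)
```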