AI Accountability: Building Informed Consent into layerZero Systems
As deepfake technology evolves and dynamic multimedia experiences become mainstream, it is critical for AI ethicists to advocate for informed consent.
Consumers must know when they are viewing or interacting with deepfake content. They must also have the authority to choose whether the content they consume is dynamic.
If a platform user wants to consume the original version of a song or video, they should have the freedom to do so. Forcing people to listen to or watch dynamic multimedia experiences is unfair to both consumers and content creators.
The mental health risks of ignoring informed consent are extremely serious. If consumers are not aware when they are interacting with deepfakes, they can end up living in delusions and false realities.
Ignoring the ethical obligation of informed consent in deepfake use is also very risky for institutions and government organizations. Here are a few examples:
Deepfake news can spread Fear, Uncertainty, and Doubt (FUD), which can trigger massive stock sell-offs and crash entire economic systems
Deepfake surveillance footage can create a false sense of security while cyber adversaries compromise critical national security infrastructure
Systems must be engineered to automate informed consent. As the code review phase of the layerZero infrastructure begins, the necessity of such functionality is becoming evident.
The challenge is balancing economic prosperity with ethics. While informed consent is the right idea, implementing it across all Singularity modules would cause a conflict of interest. In an altruistic world, such a policy would be easy to implement universally. However, the truth is that big tech companies and media conglomerates value profits over the wellbeing of people.
The proposed infrastructure applies informed consent practices to advertisements first. If an advertisement adjusts itself in real time in relation to the consumer's layerZero datapoints, it is considered dynamic. Big technology companies using such technology must inform consumers of this and provide resources for learning more about the underlying algorithms.
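To make the rule concrete, here is a minimal Python sketch of such a gate. The `Advertisement` record, its field names, and the resources link are all hypothetical illustrations, not part of any existing layerZero API:

```python
from dataclasses import dataclass, field

@dataclass
class Advertisement:
    """Hypothetical ad record; every field name here is illustrative."""
    ad_id: str
    # layerZero datapoints the ad reads at render time (e.g. location, mood)
    realtime_datapoints: list[str] = field(default_factory=list)

def is_dynamic(ad: Advertisement) -> bool:
    """Per the rule above: an ad is dynamic if it adjusts itself in
    real time against any of the consumer's layerZero datapoints."""
    return bool(ad.realtime_datapoints)

def disclosure_for(ad: Advertisement) -> str | None:
    """Build the informed-consent notice a platform must show, plus a
    pointer to resources on the underlying algorithms."""
    if not is_dynamic(ad):
        return None
    return (
        f"This ad adapts in real time using: {', '.join(ad.realtime_datapoints)}. "
        "Learn more about how these algorithms work at <resources link>."
    )

# Example: a static ad needs no notice; a dynamic one does.
print(disclosure_for(Advertisement(ad_id="a1")))                       # None
print(disclosure_for(Advertisement("a2", ["location", "heart_rate"])))
```

The point of the sketch is that the disclosure is derived automatically from what the ad actually reads, so the notice cannot silently drift out of sync with the ad's behavior.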
This idea would work especially well for companies using the “freemium” business model. Here are a few examples, with a sketch of the gating logic after the list:
If a Spotify or YouTube user does not pay for premium, they must be informed that the advertisements they see or hear likely use deepfake technology. However, they can opt out of such deepfake ads by purchasing a monthly membership
If a Twitter/X user does not pay for premium, they must be informed that the advertisements they see or hear likely use deepfake technology. However, they can opt out of such deepfake ads by purchasing a monthly membership
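A minimal sketch of that tier gate in Python, assuming a hypothetical subscription enum; none of these names come from Spotify, YouTube, or X's actual systems:

```python
from enum import Enum

class Tier(Enum):
    FREE = "free"
    PREMIUM = "premium"

def ad_policy(tier: Tier) -> dict:
    """Map a freemium tier to the informed-consent behavior above:
    free users get ads plus a deepfake disclosure; premium users have
    effectively opted out of such ads via their monthly membership."""
    if tier is Tier.FREE:
        return {
            "serve_ads": True,
            "disclosure": "The ads you see/hear likely use deepfake technology.",
        }
    return {"serve_ads": False, "disclosure": None}

# A free-tier user receives both the ads and the mandatory notice;
# a premium subscriber receives neither.
assert ad_policy(Tier.FREE)["disclosure"] is not None
assert ad_policy(Tier.PREMIUM)["serve_ads"] is False
```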
I realize these ideas do not reflect my previous statements about asking users to “opt in” to dynamic multimedia experiences, but that does not mean my position has changed.
Rather, it means I am setting more realistic expectations for a capitalistic society that currently has no informed consent policies for anyone to follow. I would also like to highlight the difference between legality and ethics.
Just because something is legal does not mean it is the ethically correct course of action. Remember that both slavery and discrimination were legal for centuries; that legality did not make them ethical.
Similarly, just because AI experts can get away with something does not mean they should do it. Before the law changes, AI leaders must unite and hold conversations with policymakers, academia, and community members. Only after such conversations will new laws be written and enforced.

