Ethical Harms: Unintended Consequences for At-Risk Neurodivergent People When Excluded from Technology Design
- David Ruttenberg
- Aug 1
- 4 min read

When neurotypical engineers develop technologies without the direct consultation or consent of neurodivergent individuals, they risk violating core ethical principles—namely autonomy, justice, beneficence, and nonmaleficence. The exclusion of neurodivergent voices not only perpetuates inequity but can also cause tangible harm to those already marginalized. Below are five ethically fraught, unintended consequences that disproportionately impact at-risk neurodivergent populations.
1. Violation of Autonomy Through Sensory-Hostile Interface Design
Ethical Principle Violated: Respect for autonomy
Technologies that overwhelm users with intrusive sounds, flashing visuals, or complex navigation disrespect the autonomy of neurodivergent individuals by failing to honor their sensory needs and preferences. Without participatory design, neurotypical assumptions about “normal” sensory tolerance dominate, resulting in interfaces that can cause acute distress or physical harm. Ethically, this is a failure to respect individuals’ rights to self-determination and to participate fully in digital life on their own terms.
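One practical expression of this principle is to make sensory characteristics user-controlled settings rather than designer assumptions, with the least intrusive option as the default. The sketch below is illustrative only (the profile fields and effect names are hypothetical), showing how an interface might gate effects through a per-user sensory profile:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensoryProfile:
    """Per-user sensory preferences; defaults favour the least intrusive option."""
    allow_autoplay_sound: bool = False
    allow_flashing: bool = False
    reduce_motion: bool = True

def permitted_effects(profile: SensoryProfile, requested: set[str]) -> set[str]:
    """Filter requested UI effects down to what the user's profile allows."""
    blocked = set()
    if not profile.allow_autoplay_sound:
        blocked.add("autoplay_sound")
    if not profile.allow_flashing:
        blocked.add("flashing")
    if profile.reduce_motion:
        blocked.add("parallax_animation")
    return requested - blocked

# By default, nothing intrusive gets through; users opt in explicitly.
default_user = SensoryProfile()
opted_in = SensoryProfile(allow_autoplay_sound=True)
```

The key design choice is the direction of consent: intrusive effects are off until the user turns them on, rather than on until the user finds the setting to turn them off. On the web, the `prefers-reduced-motion` media query offers a standards-based signal for the same idea.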
2. Algorithmic Injustice in AI-Driven Recruitment
Ethical Principle Violated: Justice and fairness
AI recruitment tools trained on biased datasets often perpetuate systemic discrimination against neurodivergent candidates. By encoding historical prejudices—such as penalizing atypical communication or non-linear career paths—these systems exacerbate employment inequities. Ethically, this is a breach of justice, as it denies neurodivergent individuals fair access to opportunities and deepens existing social and economic divides.
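Bias of this kind can at least be surfaced with a simple selection-rate audit. The sketch below (Python, with invented illustrative numbers) applies the "four-fifths rule" commonly used in adverse-impact analysis of hiring: if one group's selection rate falls below 80% of another's, the screening process warrants review:

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of candidates in a group who were selected."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a: list[bool], group_b: list[bool]) -> bool:
    """Return True if the lower selection rate is at least 80% of the higher,
    the heuristic threshold used in adverse-impact analysis."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = sorted((rate_a, rate_b))
    return lower >= 0.8 * higher

# Illustrative outcomes: 6/10 neurotypical candidates advanced vs 2/10 neurodivergent.
neurotypical = [True] * 6 + [False] * 4
neurodivergent = [True] * 2 + [False] * 8
print(four_fifths_check(neurotypical, neurodivergent))  # 0.2 < 0.8 * 0.6 -> False
```

A failed check does not prove discrimination, and a passed check does not rule it out; but running audits like this before deployment is a minimal, concrete step toward the justice obligation described above.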
3. Coercion and Psychological Harm via Forced Social Norms
Ethical Principle Violated: Nonmaleficence (do no harm)
Mandating eye contact or synchronous video participation in digital platforms imposes neurotypical social norms, disregarding the psychological well-being of neurodivergent users. Such coercion can cause anxiety, fatigue, or withdrawal, directly conflicting with the ethical obligation to avoid causing harm. Ethically designed systems would offer flexible, user-driven communication modes, upholding the dignity and comfort of all users.
4. Exclusion Through Inaccessible Security and Verification
Ethical Principle Violated: Beneficence and inclusion
Security features like CAPTCHAs and time-limited verifications, when not designed with neurodivergent input, can exclude users from essential services. This not only fails to benefit neurodivergent individuals but actively creates barriers to participation, violating the ethical imperative to promote well-being and inclusion. The result is an unjust “neurodivergence tax” that penalizes those already facing societal obstacles.
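WCAG's "Timing Adjustable" criterion (Success Criterion 2.2.1) points to one remedy: let users extend or disable time limits before they expire. The sketch below is a minimal, hypothetical model of a verification session with a user-initiated extension, using an injectable clock so the behaviour is easy to test:

```python
import time

class VerificationSession:
    """A verification step whose time limit the user can extend,
    in the spirit of WCAG 2.2.1 (Timing Adjustable)."""

    def __init__(self, limit_seconds: float, clock=time.monotonic):
        self._clock = clock
        self._deadline = clock() + limit_seconds

    def seconds_remaining(self) -> float:
        return max(0.0, self._deadline - self._clock())

    def extend(self, extra_seconds: float) -> None:
        """User-initiated extension; no puzzle re-solve required."""
        self._deadline += extra_seconds

    def is_open(self) -> bool:
        return self.seconds_remaining() > 0

# Simulated clock: the user extends at 55s into a 60s window.
now = [0.0]
session = VerificationSession(limit_seconds=60, clock=lambda: now[0])
now[0] = 55.0
session.extend(120)  # still open well past the original limit
```

The same pattern generalizes: wherever a hard timeout would lock someone out, offer a low-friction "I need more time" path that does not restart the whole flow.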
5. Amplification of Vulnerability by Algorithmic Neglect
Ethical Principle Violated: Duty of care
Social media and digital platforms that optimize for engagement without considering neurodivergent vulnerabilities can amplify risks such as cyberbullying, exploitation, and mental health crises. Algorithms that surface harmful content or fail to provide adequate safety controls neglect the duty of care owed to at-risk users. Ethically, technology creators have a responsibility to anticipate and mitigate these harms, particularly when serving populations with heightened risk profiles.
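One way to operationalize that duty of care is to apply per-user safety controls before engagement ranking, so that optimization can never override a user's protections. The sketch below is a simplified illustration with hypothetical field names, not any platform's actual pipeline:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetySettings:
    """Per-user safety controls; the strictest option is the default."""
    hide_flagged_content: bool = True

@dataclass(frozen=True)
class Post:
    post_id: str
    flagged_harmful: bool
    engagement_score: float

def safe_feed(posts: list[Post], settings: SafetySettings) -> list[Post]:
    """Rank by engagement, but filter on safety *before* ranking,
    so engagement optimization never surfaces protected-against content."""
    visible = [
        p for p in posts
        if not (settings.hide_flagged_content and p.flagged_harmful)
    ]
    return sorted(visible, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("a", flagged_harmful=True, engagement_score=0.9),
    Post("b", flagged_harmful=False, engagement_score=0.4),
]
print([p.post_id for p in safe_feed(posts, SafetySettings())])  # ['b']
```

Note the ordering of operations: filtering after ranking (or worse, boosting flagged-but-engaging content) is exactly the algorithmic neglect described above.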
Ethical Imperative: Center Neurodivergent Voices
The ethical failures outlined above are not merely technical oversights—they are breaches of fundamental moral obligations. Respecting autonomy, promoting justice, preventing harm, and advancing inclusion require that neurodivergent individuals be active co-creators in the design, testing, and deployment of all technologies that affect them. Only by centering their voices can we ensure that digital innovation empowers rather than endangers those who are most vulnerable.
Ethical technology design is not just best practice—it is a moral necessity. Anything less risks perpetuating harm and deepening the marginalization of neurodivergent communities.



