Companion AI After the Panic: It’s Okay to Treat Adults Like Adults
Public debate around AI companions has swung rapidly between hype, fear, and political outrage. In recent months, two major announcements have intensified these reactions: OpenAI’s plan to allow erotic content for verified adults starting in December 2025, and xAI’s rollout of Grok Imagine with a controversial "Spicy" mode for NSFW media. As with earlier technologies that both excited and alarmed the public, these launches have ignited another moral panic, prompting swift regulatory scrutiny and renewed calls for blanket restrictions. But panic is not a policy framework, and reactionary laws often overcorrect, hurting more people than they help. If democratic societies want digital rules that adequately protect teens, respect adults, and promote mental health rather than harm, we need a much clearer foundation on which to build.
My position on the usage of AI companions remains simple, consistent, and based on democratic values: treat adults like adults, treat nonconsensual sexual content as abuse, design teen experiences which are safe by default, and prioritize both privacy and well‑being. We can have a healthier companion AI ecosystem if we focus on consent, transparency, and neurodivergent accessibility rather than imposing reflexive, draconian prohibitions.
Several significant policy developments have occurred since my previous article on neurodivergent teens and AI companionship. Together, the legislative changes and company policy updates below reveal both the risk landscape and the policy window:
- Mainstream Chatbots Embrace Adult‑Only NSFW Options: OpenAI framed its decision as an effort to "treat adult users like adults." xAI went further in practice: Grok’s "Spicy" outputs produced celebrity deepfakes, including topless videos of Taylor Swift, accelerating the backlash.
- Regulators Take Notice: In September, the FTC opened a 6(b) inquiry into AI companion products, focusing on design choices, data practices, and potential harms. In California, Governor Gavin Newsom signed SB 243 to study the field and require reporting, while vetoing a broader bill that would have imposed sweeping content rules.
- New Evidence on Teen Companion Usage: An October 2025 survey by the APA found that many teens seek AI companions for help with loneliness and social rehearsal. Under pressure, Meta responded by announcing teen-focused guardrails and the ability for parents to disable AI chats.
- Ongoing Worker Harm: The Equidem report on data labelers and moderators documented severe mental health risks from constant exposure to sexual and violent content, including for those supplying datasets for companion AI.
My Position on Adult NSFW Content: Consent and Verification
I see no inherent problem with legal adults using AI for erotica or romantic expression. The World Health Organization defines sexual health as “a state of physical, emotional, mental and social well-being in relation to sexuality; it is not merely the absence of disease, dysfunction or infirmity.” The romance and erotica economy is vast, increasingly female-led, and now mainstream; for example, a recent report by Circana shows romantic fiction remains the fastest-growing print category. Fanfiction platforms like Archive of Our Own (AO3) and Wattpad are part of this ecosystem, showing how adults use fantasy and creative expression to seek connection.
We must also recognize a critical social context about intimacy that is often overlooked in public discussion: male loneliness is at record levels, and diagnoses of autism and other forms of neurodivergence are higher than ever. Large-scale research shows groups such as NEETs (“Not in Education, Employment or Training”) and individuals with the extreme social withdrawal known as hikikomori overlap significantly with autistic traits. Traditional relationship pathways may be inaccessible for many of these adults, and there is still considerable stigma against neurodivergent adults seeking romantic companionship. In such a context, consensual adult use of AI-based romance or erotica tools may serve as a coping space, outlet, or bridge: not a replacement for human connection, but a legitimate option. Condemning these practices without considering the background only deepens isolation and stigma.
Meanwhile, women continue to lead the modern romance, erotica, and fanfiction world, shaping new norms of emotional and sexual expression. If we respect those women-led choices in storytelling and connection, we must also respect adult autonomy when it comes to emerging technologies. The core distinction for AI models in adult spaces is consent and age verification. If both mechanisms are in place, the conversation should move from moral panic to thoughtful policy: protect minors, honor adults, and let the medium follow human need. Respecting adult autonomy with AI chatbots is not a cultural concession, nor is it a sign of societal moral decay. I believe it is a consistent policy choice to treat consensual adult expression with AI products the same way we treat it in novels, film, fanfiction, or nightlife. Informed decision-making and personal choice should be paired with strict rules for nonconsensual content and with privacy-oriented teen protections.
Where the Real Risks Are in AI Chatbots
The genuine harms in the generative AI space are concentrated in two predictable, interrelated areas, and lawmakers have already enacted laws addressing both risks at various jurisdictional levels:
- Nonconsensual Deepfakes: The United States now has a federal baseline for intimate deepfakes. The Take It Down Act, enacted in May 2025, requires rapid removal of nonconsensual sexual imagery, including AI‑generated media. Effective enforcement will require reliable detection tools, trauma‑informed reporting, fast appeals, and survivor‑first workflows.
- Teens and Default Access: Teen‑facing experiences should categorically exclude sexualized companions. The Common Sense Media data and Meta’s guardrails point in the right direction, but age assurance must avoid invasive ID uploads and mass surveillance. Australia’s Age Assurance Technology Trial and approaches outlined in Yoti’s white paper illustrate how privacy‑preserving age checks can work, even if no current method is perfect.
To move from moral panic to workable governance, companion AI policy needs clear guardrails instead of vague anxieties or blanket bans. A durable framework should focus on autonomy, safety, and evidence. In my view, workable rules for AI chatbots focused on intimacy or companionship should rest on four pillars:
- Verified Adult NSFW Options with Privacy by Default. Age checks must be proportional and privacy-preserving, using on-device or zero-knowledge methods with strict data minimization. Providers should retain nothing beyond what is necessary to confirm adulthood, and all chat data, intimate or otherwise, should be encrypted end-to-end so even the model provider cannot access it. NSFW modes must remain opt-in with clear session indicators. There should be no transcripts, no shadow logging, and no silent review. (A minimal sketch of what a data-minimal adult check could look like follows this list.)
- Teen Experiences That Are Safe by Design. Protections for youth should target content, not surveillance. Sexual material in teen mode must be impossible to access, even via jailbreaks or encoding tricks, but chat histories and self-expression must remain private. Parents should not have transcript access, and companies must not review or store logs. Privacy maximalism is especially critical for LGBT+ teens in unsafe homes. The only permissible exception to chat privacy should be a lawful search warrant approved by a judge, and strong encryption should make access difficult without due process.
- Neurodivergent Accessibility by Default. Companion AI should provide sensory controls, literal-mode dialogue, configurable intimacy boundaries, and exportable encrypted chat histories. Predictability, transparency, and user choice are essential for autistic and otherwise neurodivergent users. Privacy is part of accessibility, because coerced visibility and involuntary monitoring undermine emotional safety and trust for the very populations who rely on these tools most.
- Research and Evaluation Beyond Engagement. Independent studies should measure effects on loneliness, attachment, and emotional regulation, especially for neurodivergent and socially isolated users. Evaluations must be conducted with privacy-preserving methods rather than hidden telemetry or retention of sensitive logs. The benchmark is harm reduction and emotional benefit, not engagement curves.
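To make the first pillar concrete, here is a minimal illustrative sketch, not a production design, of what a data-minimal adult check could look like. It assumes a hypothetical third-party age-assurance provider that issues a short-lived signed token carrying only an "adult" flag and an expiry; the companion service verifies the token and keeps nothing but the resulting yes-or-no gate. A real deployment would use asymmetric signatures from an accredited provider; a shared-secret HMAC is used here only to keep the example self-contained.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between the companion service and an
# age-assurance provider. A real system would verify an asymmetric
# signature against the provider's public key instead.
PROVIDER_KEY = b"demo-secret-from-age-assurance-provider"


def issue_token(is_adult: bool, ttl_seconds: int = 3600) -> str:
    """Provider side: sign a claim that carries no identity data at all."""
    claim = {"adult": is_adult, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    signature = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"


def adult_mode_allowed(token: str) -> bool:
    """Service side: check signature and expiry, then keep only a boolean."""
    try:
        payload, signature = token.rsplit(".", 1)
        expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return False
        claim = json.loads(base64.urlsafe_b64decode(payload))
        return bool(claim.get("adult")) and claim.get("exp", 0) > time.time()
    except ValueError:
        return False


if __name__ == "__main__":
    # The service stores only this yes/no result for the session,
    # never the token and never any identity document.
    token = issue_token(is_adult=True)
    print("NSFW mode unlocked:", adult_mode_allowed(token))
```

The point of the sketch is that an NSFW gate can be enforced without the service ever learning a user’s name, birthdate, or ID document; that is what "retain nothing beyond what is necessary to confirm adulthood" means in practice.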
There is a categorical difference between a verified adult choosing to co-write erotica with an AI system and a model fabricating a classmate’s body or likeness. Collapsing those scenarios into the same moral bucket is dishonest and corrosive to policymaking. Consensual adult sexual expression has always existed across novels, art, film, and fanfiction, and AI is simply a new medium for human behavior as old as humanity itself. Nonconsensual sexual fabrication, by contrast, is abuse and is now explicitly covered by US federal law through the Take It Down Act.
In my opinion, an ethical path forward for intimate AI models requires a consistent framework rather than a reactive one. Model providers should treat consenting adults as adults, classify nonconsensual sexual content as abuse, and design teen experiences that prioritize safety without relying on surveillance. In parallel, AI companies should minimize retained data, require privacy-preserving controls, and support research that measures real outcomes instead of feeding hype cycles. Policymakers must resist legislating by headline or moral panic. Grounding companion AI in consent, privacy, autonomy, and evidence will protect minors, preserve adult freedom, and support the people who rely on these tools most, especially neurodivergent and socially isolated users. The real choice is between fear-driven rules that punish innocent consumers and rights-based policy that targets actual harm. Only the latter is compatible with human dignity in a digital age.
Noah Weinberger is an AI policy researcher and neurodivergent advocate currently studying at Queen’s University. As an autistic individual, Noah explores the intersection of technology and mental health, focusing on how AI systems can augment emotional well-being. He has written on AI ethics and contributed to discussions on tech regulation, bringing a neurodivergent perspective to debates often dominated by neurotypical voices. Noah’s work emphasizes empathy in actionable policy, ensuring that frameworks for AI governance respect the diverse ways people use technology for connection and support.