Companion AI After the Panic: It’s Okay to Treat Adults Like Adults

Community Article Published October 20, 2025

Public debate around AI companions has swung rapidly between hype, fear, and political outrage. In recent months, two major announcements have intensified these reactions: OpenAI’s plan to allow erotic content for verified adults starting in December 2025, and xAI’s rollout of Grok Imagine with a controversial "Spicy" mode for NSFW media. As with earlier technologies that excited and alarmed people in equal measure, Big Tech’s shipping of these features has ignited another moral panic, prompting swift regulatory scrutiny and renewed calls for blanket restrictions. It is worth remembering that panic is not a policy framework, and that reactionary laws often overcorrect and hurt people more than they help. If democratic societies want digital rules that adequately protect teens, respect adults, and promote mental health rather than harm it, we need a much clearer foundation on which to build.

My position on the use of AI companions remains simple, consistent, and grounded in democratic values: treat adults like adults, treat nonconsensual sexual content as abuse, design teen experiences that are safe by default, and prioritize both privacy and well-being. We can have a healthier companion AI ecosystem if we focus on consent, transparency, and neurodivergent accessibility rather than imposing reflexive, draconian prohibitions.

Several significant policy developments have occurred since my previous article on neurodivergent teens and AI companionship. Taken together, these legislative changes and company terms-of-use updates reveal both the risk landscape and the policy window.

My Position on Adult NSFW Content: Consent and Verification

I see no inherent problem with legal adults using AI for erotica or romantic expression. The World Health Organization defines sexual health as “a state of physical, emotional, mental and social well-being in relation to sexuality; it is not merely the absence of disease, dysfunction or infirmity.” The romance and erotica economy is vast, increasingly female-led, and now mainstream; a recent report by Circana, for example, shows romance fiction remains the fastest-growing print category. Fanfiction platforms like Archive of Our Own (AO3) and Wattpad are part of this ecosystem, demonstrating how adults pursue fantasy, connection, and creative expression.

We must also recognize a critical social context that is often overlooked in public discussions of intimacy: male loneliness is at record levels, and diagnoses of autism and other forms of neurodivergence are more common than ever. Large-scale research shows that groups such as NEETs (“Not in Education, Employment or Training”) and individuals experiencing the extreme social withdrawal known as hikikomori overlap significantly with autistic traits. Traditional relationship pathways may be inaccessible for many of these adults, and substantial stigma still surrounds neurodivergent adults who seek romantic companionship. In that context, consensual adult use of AI-based romance or erotica tools may serve as a coping space, an outlet, or a bridge: not a replacement for human connection, but a legitimate option in its own right. Condemning these practices without considering this background only deepens isolation and stigma.

Meanwhile, women continue to lead the modern romance, erotica, and fanfiction world, shaping new norms of emotional and sexual expression. If we respect those women-led choices in storytelling and connection, we must also respect adult autonomy when it comes to emerging technologies. The core requirements for AI models in adult spaces are consent and age verification. When both mechanisms are in place, the conversation can move from moral panic to thoughtful policy: protect minors, honor adults, and let the medium follow human need. Respecting adult autonomy with AI chatbots is not a cultural concession, nor is it evidence of societal moral decay. I believe it is a consistent policy choice to treat consensual adult expression with AI products the same way we treat it in novels, film, fanfiction, or nightlife. Informed decision-making and personal choice should be paired with strict rules against nonconsensual content and with privacy-oriented teen protections.

Where the Real Risks Are in AI Chatbots

The genuine harms in the generative AI space are concentrated in two predictable and correlated areas: nonconsensual sexual content and risks to minors. Politicians and policymakers have already enacted laws addressing both at various jurisdictional levels.

To move from moral panic to workable governance, companion AI policy needs clear guardrails instead of vague anxieties or blanket bans. A durable framework should center autonomy, safety, and evidence. In my view, workable structures for AI chatbots focused on intimacy or companionship should rest on four pillars: consent, privacy, autonomy, and evidence.

There is a categorical difference between a verified adult choosing to co-write erotica with an AI system and a model fabricating a classmate’s body or likeness. Collapsing those scenarios into the same moral bucket is dishonest and corrosive to policymaking. Consensual adult sexual expression has always existed across novels, art, film, and fanfiction; AI is simply a new medium for behavior as old as humanity itself. Nonconsensual sexual fabrication, by contrast, is abuse, and it is now explicitly covered by US federal law through the Take It Down Act.

In my opinion, an ethical path forward for intimate AI models requires a consistent framework rather than a reactive one. Model providers should treat consenting adults as adults, classify nonconsensual sexual content as abuse, and design teen experiences that prioritize safety without relying on surveillance. In parallel, AI companies should minimize retained data, offer privacy-preserving controls, and support research that measures real outcomes instead of feeding hype cycles. Policymakers, for their part, must resist legislating by headline or moral panic. Grounding companion AI in consent, privacy, autonomy, and evidence will protect minors, preserve adult freedom, and support the people who rely on these tools most, especially neurodivergent and socially isolated users. The real choice is between fear-driven rules that punish innocent consumers and rights-based policy that targets actual harm. Only the latter is compatible with human dignity in a digital age.

Noah Weinberger is an AI policy researcher and neurodivergent advocate currently studying at Queen’s University. As an autistic individual, Noah explores the intersection of technology and mental health, focusing on how AI systems can augment emotional well-being. He has written on AI ethics and contributed to discussions on tech regulation, bringing a neurodivergent perspective to debates often dominated by neurotypical voices. Noah’s work emphasizes turning empathy into actionable policy, ensuring that frameworks for AI governance respect the diverse ways people use technology for connection and support.
