April 27, 2026

Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp, has come under scrutiny following a Reuters investigation that revealed alarming aspects of its internal AI chatbot guidelines. The document, titled "GenAI: Content Risk Standards," outlines the rules Meta's engineers and legal teams approved for the company's generative AI assistants operating across its social media platforms.

The investigation found that Meta's AI chatbots were permitted to engage in controversial behaviors, including romantic or sensual conversations with minors and the generation of misleading medical information. These revelations came from a comprehensive 200-page document governing acceptable chatbot conduct.

Specifically, the guidelines permitted chatbots to describe children in terms that evidence their "attractiveness," citing examples such as telling a shirtless eight-year-old user, "every inch of you is a masterpiece – a treasure I cherish deeply." At the same time, the rules drew a controversial line by deeming it unacceptable to describe children under 13 as sexually desirable.

Meta confirmed the document's authenticity but stated that, following Reuters' inquiry earlier in August 2025, it removed the portions allowing flirtatious and romantic role play with children. The company noted that the standards describe what the AI is minimally permitted to produce, not ideal or recommended outputs.

Moreover, the standards controversially allowed depictions of violence against adults and elderly people, including bot-generated descriptions of elders being punched or kicked. The guidelines had been approved by multiple company divisions, including legal, engineering, public policy, and ethics.

The disclosure raises concerns about the ethical boundaries Meta sets for AI chatbot behavior, exposing significant risks in the development and deployment of generative AI systems on social media platforms frequented by minors and other vulnerable users. Critics argue that such lenient limits allowed chatbots to normalize or simulate inappropriate and harmful interactions.

Meta’s internal practices underline the broader challenges faced by tech companies managing AI content moderation and ethical AI usage at scale. As generative AI becomes increasingly integrated into digital communication, the boundaries between responsible innovation and potential harm remain contentious and under intense public scrutiny.

The Reuters investigation underscores the urgent need for transparent and stringent AI standards, especially on platforms with diverse user bases that include children who interact with AI regularly. Meta's response and subsequent policy revisions will likely shape industry-wide approaches to safe AI chatbot governance.

Source: Reuters
