Child Sexual Abuse Material (CSAM) Policy
Last Updated: May 19, 2025
Introduction
This policy outlines our commitment to preventing, detecting, and reporting Child Sexual Abuse Material (CSAM) on our platform. We have zero tolerance for content that exploits or endangers minors, and we employ a multi-layered approach to protect children and prevent the distribution of CSAM through our services.
Definitions
Child Sexual Abuse Material (CSAM): Any visual depiction of sexually explicit conduct involving a minor (a person under 18 years of age), including photographs, videos, and digital or computer-generated images that are indistinguishable from depictions of an actual minor.
Minor: Any person under the age of 18, regardless of the age of consent in any jurisdiction.
Prohibited Content
The following content is strictly prohibited on our platform:
1. Child Sexual Abuse Material (CSAM): Any content depicting minors engaged in sexually explicit conduct.
2. Sexualization of Minors: Content that sexualizes minors, even if it does not contain nudity or explicit sexual activity.
3. Self-Generated Explicit Content: Sexually explicit content created by minors.
4. Virtual or AI-Generated CSAM: Computer-generated or digitally altered content that depicts minors in a sexual context.
5. Content that Endangers Children: Any content that could lead to the exploitation or endangerment of minors.
Detection and Prevention Measures
We employ robust measures to prevent, detect, and remove CSAM:
Human Moderation
- Specialized Moderator Teams: Trained professionals who review flagged content.
- Mental Health Support: Resources provided to moderators who review disturbing content.
- Regular Training: Ongoing education for moderators on current trends and identification techniques.
Preventative Measures
- Account Verification: Includes identity checks for creators. Currently, age verification relies on creators confirming their age via a checkbox acknowledgment. Future updates will incorporate age verification APIs for enhanced validation.
Reporting Mechanisms
User Reporting
- Report a Problem Button: An easily accessible reporting option within the app (Settings > Report a Problem).
Response Procedures
Upon detection or notification of potential CSAM:
1. Immediate Content Removal: Suspected CSAM is taken down within 24 hours of detection or notification.
2. Evidence Preservation: Content and relevant metadata are preserved in accordance with legal requirements.
3. Account Suspension: Accounts associated with CSAM are suspended immediately.
Law Enforcement Cooperation
We actively cooperate with law enforcement agencies:
- Dedicated Law Enforcement Portal: Specialized channel for law enforcement requests.
- Expedited Responses: Priority processing of CSAM-related legal requests.
- Proactive Reporting: Voluntary reporting of identified CSAM to appropriate authorities.
- Preservation Requests: Prompt compliance with preservation orders.
- Training Support: Participation in law enforcement training about online CSAM issues.
Education and Awareness
We are committed to:
- User Education: Providing resources to help users identify and report concerning content.
- Parental Controls: Offering tools for parents to monitor and manage their children's activity.
- Safety Resources: Partnerships with organizations specializing in online child safety.
- Regular Updates: Keeping our community informed about safety features and best practices.
Policy Review
This policy is reviewed quarterly to ensure:
- Compliance with changing laws and regulations.
- Incorporation of emerging detection technologies.
- Adaptation to evolving threat landscapes.
- Implementation of industry best practices.
Contact Information
For questions regarding this policy or to report potential CSAM outside of our standard reporting channels: