Section 1
Overview
Safety and moderation standards define what conduct is acceptable, what conduct may be restricted, how reports are handled, and what enforcement may follow when behavior creates real risk. On MyINC Social, these standards can apply to posts, comments, direct messages, usernames, profiles, uploaded media, and broader patterns of account behavior that appear deceptive, abusive, or unsafe.
These standards exist for a practical reason. When rules are vague, enforcement becomes inconsistent. When enforcement becomes inconsistent, bad actors learn how to test boundaries and ordinary users lose confidence that the service is being operated responsibly. Public standards help solve that by creating a visible reference point for users, moderators, reviewers, and anyone evaluating how the platform handles trust and safety.
Safety is not only about removing obviously bad content. It is also about preventing repeat abuse, reducing impersonation, discouraging manipulative conduct, lowering the burden on good users, and helping the platform remain stable as it grows. A serious platform cannot wait until conflict happens before explaining how it will respond.
For users
Clear standards make it easier to know what is expected, what may be reported, and what kinds of behavior can lead to warnings, restrictions, or removal.
For reviewers
Clear standards reduce guesswork and support more consistent decisions based on context, severity, repeat behavior, and actual platform risk.