Content Moderation on Jetpac

Creating a safe and welcoming space for everyone

Our Approach to Content Moderation

At Jetpac, we believe in building a community where everyone feels welcome and safe. To achieve this, starting with Jetpac version 1.1.30 we use a content moderation system that balances automated technology with human judgment to ensure content on our platform follows our community guidelines.

We're committed to being transparent about how we moderate content. This page explains our process, the principles that guide our decisions, and how we handle different situations.

Our Core Principles

Safety

We prioritize creating a safe environment for all users, free from harmful content.

Fairness

We combine automated tools with human review to ensure fair and accurate decisions.

Transparency

We clearly communicate our policies and the reasons behind any moderation decisions.

Education

We focus on helping users understand guidelines rather than just enforcing rules.

Appeal Rights

We provide all users with the ability to appeal moderation decisions they believe are incorrect.

How Our Moderation Works

Our content moderation combines state-of-the-art AI technology with human review to ensure we catch potentially harmful content while minimizing false positives. This balanced approach helps create a safe environment without unnecessarily restricting expression.

1. Content Submitted — user uploads content
2. AI Screening — automated evaluation
3. Decision Point — system determines next steps
4. Outcome — content is published or sent for review
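The four steps above amount to a routing decision. Jetpac has not published its internals, so the function, thresholds, and labels below are illustrative assumptions only, a minimal sketch of how an AI risk score might route content to one of the outcomes:

```python
# Illustrative sketch only: names, thresholds, and labels are hypothetical,
# not Jetpac's actual implementation.
from dataclasses import dataclass

APPROVE_THRESHOLD = 0.10  # assumed: below this, low-risk content publishes immediately
REJECT_THRESHOLD = 0.90   # assumed: above this, high-risk content is blocked


@dataclass
class ModerationResult:
    status: str        # "published", "human_review", or "blocked"
    risk_score: float  # AI screening score in [0.0, 1.0]


def moderate(risk_score: float) -> ModerationResult:
    """Decision point (step 3): route content based on the AI screening score."""
    if risk_score < APPROVE_THRESHOLD:
        return ModerationResult("published", risk_score)
    if risk_score > REJECT_THRESHOLD:
        return ModerationResult("blocked", risk_score)
    # Ambiguous middle band goes to a human reviewer, matching the
    # "balanced approach" of automated screening plus human judgment.
    return ModerationResult("human_review", risk_score)
```

Sending only the ambiguous middle band to human review is one common way to minimize false positives while keeping reviewer workload manageable.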


Our Community Guidelines

For a complete understanding of what content is and isn't allowed on Jetpac, please review our comprehensive Community Guidelines.

Read Our Guidelines