It’s up to you to decide how to use the output within your broader moderation system.

Understanding the Output: For each piece of content submitted, PolicyAI returns a classification label (the determined policy category), the reasoning behind that classification, and a severity score indicating how severe the model judges the violation to be.
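As a rough illustration, a single result might be handled like the sketch below. The field names (label, reasoning, severity) and the 0–1 score range are assumptions for this example, not a documented schema:

```python
# Minimal sketch of handling one PolicyAI result.
# Field names below are illustrative assumptions; check the actual response schema.

example_result = {
    "label": "harassment",  # determined policy category
    "reasoning": "The message contains targeted insults toward a named user.",
    "severity": 0.82,       # how severe the model judges the violation to be (assumed 0-1)
}

def summarize(result: dict) -> str:
    """Produce a one-line summary for logs or a review queue."""
    return f"[{result['label']}] severity={result['severity']:.2f}: {result['reasoning']}"

print(summarize(example_result))
```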

Severity scores can serve as additional signals for triage, human review and oversight, and spot-checking automated decisions.
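For instance, a simple triage rule could auto-action high-severity results and route lower-severity ones to human reviewers. The thresholds below are placeholders for illustration, not recommended values:

```python
# Sketch of severity-based triage. Thresholds are hypothetical;
# tune them against your own review data and policies.

AUTO_ACTION_THRESHOLD = 0.9   # placeholder: act automatically at or above this
REVIEW_THRESHOLD = 0.5        # placeholder: queue for human review at or above this

def triage(result: dict) -> str:
    """Return a routing decision for a single PolicyAI result."""
    severity = result["severity"]
    if severity >= AUTO_ACTION_THRESHOLD:
        return "auto_action"   # e.g., remove or hold the content immediately
    if severity >= REVIEW_THRESHOLD:
        return "human_review"  # send to a reviewer queue
    return "allow"             # log and allow, or sample for spot-checks

print(triage({"label": "harassment", "severity": 0.82}))  # -> "human_review"
```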

Potential Workflow Integrations: