NSFW

Overview

The nsfw task detects explicit or unsafe content in images and videos.

It analyzes the visual content and returns boolean flags, severity levels, and descriptions for categories including nudity, sexual content, violence, and gore.


Creating an NSFW task

curl -X POST "https://api.ittybit.com/tasks" \
-H "Authorization: Bearer ITTYBIT_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "url": "https://example.com/image-or-video.mp4",
  "kind": "nsfw"
}'

The nsfw task has no configurable options; all analysis is returned by default.
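The same request can be made from code. Below is a minimal Python sketch using only the standard library, assuming the endpoint and request body shown in the curl example above; the helper names are illustrative, not part of an official SDK:

```python
import json
import urllib.request

API_URL = "https://api.ittybit.com/tasks"

def build_nsfw_task(media_url):
    """Build the request body for an nsfw task (matches the curl example)."""
    return {"url": media_url, "kind": "nsfw"}

def create_nsfw_task(media_url, api_key):
    """POST the task to the API and return the parsed JSON response."""
    body = json.dumps(build_nsfw_task(media_url)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```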


Output

When the task succeeds, the output JSON file contains:

Field        Type      Description
nudity       boolean   Whether nudity was detected
sexual       boolean   Whether sexual content was detected
violence     boolean   Whether violence was detected
gore         boolean   Whether gore was detected
categories   string[]  List of detected categories
severity     object    Severity level per category (none, mild, moderate, severe)
description  string    Human-readable description of the findings
timeline     array     For video: timestamped detections with start, end, category, severity
{
  "kind": "nsfw",
  "nudity": false,
  "sexual": false,
  "violence": true,
  "gore": false,
  "categories": ["violence"],
  "severity": {
    "nudity": "none",
    "sexual": "none",
    "violence": "moderate",
    "gore": "none"
  },
  "description": "The image contains moderate depictions of violence.",
  "timeline": []
}
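In practice you will usually reduce this output to an allow/review/block decision. A hedged sketch, assuming the severity levels listed above; the thresholds and function name are our own, not part of the API:

```python
# Rank the four documented severity levels so they can be compared.
SEVERITY_ORDER = {"none": 0, "mild": 1, "moderate": 2, "severe": 3}

def moderation_decision(output, block_at="moderate"):
    """Map an nsfw task output to 'allow', 'review', or 'block'.

    Any category at or above `block_at` blocks the media; any lower
    non-'none' severity routes it to human review.
    """
    threshold = SEVERITY_ORDER[block_at]
    worst = max(SEVERITY_ORDER[level] for level in output["severity"].values())
    if worst >= threshold:
        return "block"
    if worst > 0:
        return "review"
    return "allow"
```

Applied to the example output above (violence at "moderate"), this returns "block".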

Supported inputs

NSFW tasks work with:

  • Image: .jpg, .jpeg, .png, .webp, .avif
  • Video: .mp4, .mov, .webm
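If you validate files before creating tasks, the extension lists above can be checked with a small helper. This is an illustrative client-side sketch, not part of the API:

```python
# Extensions accepted by nsfw tasks, per the supported-inputs lists.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp", ".avif"}
VIDEO_EXTENSIONS = {".mp4", ".mov", ".webm"}

def is_supported(filename):
    """Return True if the file extension is accepted by nsfw tasks."""
    ext = ("." + filename.rsplit(".", 1)[-1].lower()) if "." in filename else ""
    return ext in IMAGE_EXTENSIONS or ext in VIDEO_EXTENSIONS
```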

Common use cases

  • User-generated content moderation
  • Automatic content filtering before publishing
  • Flagging or blurring unsafe media
  • Age-restricted platform compliance

Example automation

You can combine NSFW detection with an automation to process all new uploads:

{
  "name": "Moderate new uploads",
  "workflow": [
    { "kind": "nsfw" }
  ],
  "status": "active"
}
