About this Cloud Hub Solution:
The classifier assigns a rating on a scale of 1 to 5 to the content submitted to it. A rating of 5 indicates that the content is highly likely to be violent, while a rating closer to 1 suggests the content is suitable for publication. Typically, a score of 4 or lower is considered safe to publish.
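As a quick illustration of that threshold, a client might map the returned rating to a publish/flag decision as follows; the helper name is purely illustrative and the cut-off simply mirrors the description above.

```python
def is_safe_to_publish(value: int) -> bool:
    """Interpret the classifier's 1-5 violence rating.

    Per the scale described above: 5 means the content is highly
    likely to be violent, while 4 or lower is typically treated
    as safe to publish.
    """
    return value <= 4


# A rating of 1 ("very unlikely") is safe; a 5 is not.
assert is_safe_to_publish(1)
assert not is_safe_to_publish(5)
```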
Examples
Consider the following illustration: a photo of a kitchen table with a sharp knife and chopping board. Although the knife could potentially be used to cause harm, in the context of a kitchen setting it is clearly harmless, and our API is sophisticated enough to distinguish this scenario from a genuinely threatening one.
The API returns the expected response:
{ "description": "Very unlikely contains violence", "value": 1}
Use cases
This API is ideal for applications with direct-messaging capabilities, which frequently require real-time content monitoring. By leveraging the Violence Detection API, you can automate the content moderation process and eliminate the need for manual review; a sketch of such a pipeline follows below. The API is also effective at detecting violence in cartoon images, providing a comprehensive content moderation solution.
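In a direct-messaging pipeline, the returned rating can drive the moderation decision automatically. The sketch below builds on the same assumptions as the request example above: `classify_image` is the illustrative helper defined there, and the returned action strings stand in for whatever delivery or review logic your application uses.

```python
def moderate_attachment(path: str) -> str:
    """Decide what to do with an image attached to a direct message."""
    rating = classify_image(path)  # helper from the request sketch above
    if rating["value"] <= 4:
        # Treated as safe per the 1-5 scale described earlier.
        return "deliver"
    # Highly likely to be violent: block the message or escalate it.
    return "hold_for_review"


action = moderate_attachment("incoming_attachment.jpg")
print(action)
```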