Find out if someone is toxic
Get a report describing the level of toxicity. Powered by AI.
How It Works
Enter comment
Enter a text or comment written by someone.
Data is analyzed
The text is analyzed across several attributes by an AI/ML model powered by Perspective API.
Get report
A report is built based on the results provided by the model.
Share!
Share the results with your friends or use them as you see fit.
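The four steps above boil down to one API round trip: send the comment to Perspective API's comments:analyze endpoint and read the per-attribute scores out of the response. A minimal sketch in Python, assuming the public Perspective API endpoint (YOUR_API_KEY is a placeholder, and the helper names are illustrative, not part of the API):

```python
import json
import urllib.request

# Perspective API analyze endpoint; replace YOUR_API_KEY with a real key.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text: str, attributes: list) -> dict:
    """Build the JSON body for a comments:analyze request."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def extract_scores(response: dict) -> dict:
    """Pull each attribute's summary score (0.0 to 1.0) from the response."""
    return {
        attr: data["summaryScore"]["value"]
        for attr, data in response["attributeScores"].items()
    }

def analyze(text: str, attributes: list) -> dict:
    """Send the comment for analysis and return its attribute scores."""
    body = json.dumps(build_request(text, attributes)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return extract_scores(json.load(resp))
```

Calling `analyze("some comment", ["TOXICITY"])` with a valid key returns a dict like `{"TOXICITY": 0.83}`, which the report is then built from.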
Using ML to detect a range of potentially harmful attributes
Our report will provide a score for several attributes, not just toxicity.
Toxicity
A rude, disrespectful, or unreasonable comment.
Severe Toxicity
An extremely hateful, aggressive, or disrespectful comment.
Identity Attack
Negative or hateful comments targeting someone because of their identity.
Insult
An insulting, inflammatory, or negative comment toward a person or group of people.
Profanity
Swear words, curse words, or other obscene or profane language.
Threat
Describes an intention to inflict pain, injury, or violence against an individual or group.