Automate Telegram interactions by analyzing messages for toxic language using Google Perspective. This workflow triggers manually, evaluates message content, and responds with a warning if profanity is detected, promoting a healthier communication environment.
This workflow is ideal for:
- Community Managers: To monitor and manage toxic language in group chats.
- Moderators: To ensure a safe environment by automatically responding to inappropriate messages.
- Developers: To integrate automated moderation features into their Telegram bots.
- Organizations: To uphold a positive communication standard in their channels.
This workflow addresses toxic language in Telegram chats. Each message is sent to Google Perspective, which returns a toxicity score; when the score crosses the configured threshold, the workflow replies with a warning.
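The Perspective step boils down to one HTTP call: POST the message text to the `comments:analyze` endpoint with the attributes you want scored, then read the summary score from the response. Below is a minimal Python sketch of that call, equivalent to what the workflow's node does; the helper names and the `api_key` placeholder are illustrative, not part of the workflow itself.

```python
import json
import urllib.request

# Perspective's analyze endpoint; requires an API key from Google Cloud.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_request(text, attributes=("PROFANITY",)):
    """Build the JSON body Perspective expects for the requested attributes."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }


def extract_score(response, attribute="PROFANITY"):
    """Pull the 0.0-1.0 summary score for one attribute from the response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]


def analyze(text, api_key):
    """POST the message text to Perspective and return the parsed response."""
    req = urllib.request.Request(
        f"{PERSPECTIVE_URL}?key={api_key}",
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The score is a probability-like value between 0 and 1, so the downstream IF condition is a simple numeric comparison against it.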
Users can customize this workflow by:
- Adjusting Toxicity Thresholds: Modify the profanity score threshold in the IF condition to make the check more or less sensitive to toxic language.
- Changing Response Messages: Edit the text in the Telegram node to provide a different response based on the organization's tone or policy.
- Adding More Attributes: Request additional Google Perspective attributes, such as SEXUALLY_EXPLICIT or INSULT, to broaden the moderation scope.
- Integrating Other Platforms: Connect additional messaging platforms or databases to enhance moderation capabilities across different channels.
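The first two customizations above come down to a threshold comparison and a Telegram `sendMessage` call. Here is a hedged Python sketch of that decision-and-response logic; the threshold value, warning text, and function names are placeholders to adapt, not values fixed by the workflow.

```python
import json
import urllib.request

TOXICITY_THRESHOLD = 0.7  # assumed default; tune to your community's policy
WARNING_TEXT = "Please keep the conversation respectful."  # edit to match your tone


def should_warn(scores, threshold=TOXICITY_THRESHOLD):
    """Mirror the IF node: warn when any requested attribute crosses the bar.

    `scores` maps attribute names (e.g. "PROFANITY") to their summary scores.
    """
    return any(score >= threshold for score in scores.values())


def send_warning(bot_token, chat_id, reply_to_message_id):
    """Sketch of the Telegram Bot API sendMessage call the workflow makes."""
    body = json.dumps({
        "chat_id": chat_id,
        "text": WARNING_TEXT,
        "reply_to_message_id": reply_to_message_id,
    }).encode()
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Lowering `TOXICITY_THRESHOLD` makes moderation stricter; adding more attributes to `scores` widens what triggers a warning, matching the customization points listed above.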