ManualTrigger Automate

ManualTrigger Automate streamlines the process of object detection in images by integrating with EditImage and Sticky Note. Users can easily trigger the workflow to download an image, detect objects, and visualize results with bounding boxes, enhancing analysis and insights without complex coding. This efficient automation saves time and improves accuracy in identifying specific elements within images.

7/8/2025
14 nodes
Medium
Tags: manual, medium, editimage, sticky note, advanced, api, integration
Categories:
Manual Triggered, Medium Workflow, Creative Design Automation
Integrations:
EditImage, Sticky Note

Target Audience

- Developers and Data Scientists: Those who are looking to integrate advanced image processing and object detection capabilities into their applications.
- Marketing and Content Creators: Users who need to analyze images for marketing campaigns or content creation, especially in identifying specific objects within images.
- Educators and Researchers: Individuals in academia who are exploring AI and machine learning applications in computer vision.
- Hobbyists and Makers: Tech enthusiasts interested in experimenting with AI and image processing technologies in their personal projects.

Problem Solved

This workflow addresses the challenge of automatically detecting and drawing bounding boxes around specific objects (like rabbits) in images using AI. Instead of manually identifying objects, users can leverage the Gemini 2.0 API for prompt-based object detection, significantly reducing the time and effort required for image analysis. This is particularly useful in scenarios where quick visual assessments are necessary, such as in wildlife monitoring, inventory management, or visual content analysis.

Workflow Steps

1. Manual Trigger: The workflow starts when the user clicks the ‘Test workflow’ button.
2. Get Test Image: The workflow fetches a test image from a specified URL.
3. Get Image Info: The dimensions of the image are retrieved for further processing.
4. Gemini 2.0 Object Detection: The image is sent to the Gemini 2.0 API, requesting bounding boxes for specific objects (e.g., rabbits).
5. Get Variables: The workflow extracts necessary variables including coordinates and image dimensions from the API response.
6. Scale Normalised Coords: The normalised coordinates returned by the API are scaled up to the original image's pixel dimensions using the width and height retrieved earlier.
7. Draw Bounding Boxes: Finally, bounding boxes are drawn on the original image based on the calculated coordinates, visually indicating the detected objects.
8. Sticky Notes: Multiple sticky notes are added throughout the workflow to provide contextual information and guidance on the steps involved.
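The scaling in step 6 can be sketched as a small Code-node snippet. This is a minimal illustration, assuming the Gemini response returns each box as `[ymin, xmin, ymax, xmax]` normalised to a 0–1000 range (Gemini's documented convention for bounding boxes); the function name and output shape here are illustrative, not taken from the workflow itself:

```javascript
// Convert a normalised Gemini box to pixel coordinates on the original image.
// Assumes box = [yMin, xMin, yMax, xMax], each value in the range 0-1000.
function scaleBox(box, imageWidth, imageHeight) {
  const [yMin, xMin, yMax, xMax] = box;
  return {
    x: Math.round((xMin / 1000) * imageWidth),
    y: Math.round((yMin / 1000) * imageHeight),
    width: Math.round(((xMax - xMin) / 1000) * imageWidth),
    height: Math.round(((yMax - yMin) / 1000) * imageHeight),
  };
}

// Example: one detection on a 1024x768 image
const box = scaleBox([100, 200, 500, 600], 1024, 768);
```

The resulting `x`, `y`, `width`, and `height` are what a drawing step like 'Draw Bounding Boxes' would consume.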

Customization Guide

- Change the Image URL: Users can modify the URL in the ‘Get Test Image’ node to analyze different images.
- Adjust Object Detection Prompts: In the ‘Gemini 2.0 Object Detection’ node, users can customize the prompt to detect different objects by modifying the text in the JSON body.
- Modify Bounding Box Appearance: The ‘Draw Bounding Boxes’ node allows users to change the color and style of the drawn boxes to better suit their visual preferences.
- Add More Steps: Users can expand the workflow by adding additional nodes for further processing, such as saving the edited image or sending notifications based on detection results.
- Integrate with Other APIs: The workflow can be adapted to include other image processing APIs or services for enhanced functionality.
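For the prompt customisation above, the text to change lives in the JSON body sent to Gemini. A minimal sketch of such a body, assuming the standard `generateContent` REST format (field names and the base64 payload here are illustrative; verify against the current Gemini API reference):

```json
{
  "contents": [{
    "parts": [
      { "inline_data": { "mime_type": "image/jpeg", "data": "<base64-encoded image>" } },
      { "text": "Return bounding boxes for every rabbit in this image as [ymin, xmin, ymax, xmax]." }
    ]
  }],
  "generationConfig": { "response_mime_type": "application/json" }
}
```

Swapping "rabbit" for another object name in the `text` part changes what the model is asked to detect.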