Chat with local LLMs using n8n and Ollama

7/8/2025
5 nodes
Simple
Categories:
Manual Triggered, Simple Workflow
Integrations:
LangChain, Sticky Note

Target Audience

  • Developers and Data Scientists: Those who want to integrate local LLMs into their applications or workflows.
  • AI Enthusiasts: Individuals interested in experimenting with AI and natural language processing using self-hosted solutions.
  • Business Analysts: Professionals looking to automate chat responses or data collection through intelligent chat interfaces.
  • Educators and Students: Users who want to create interactive learning tools using conversational AI.
Problem Solved

This workflow addresses the challenge of interacting with local Large Language Models (LLMs) in a seamless manner. Users can send messages and receive responses from their self-hosted AI models without needing extensive programming skills. It simplifies the process of integrating AI chat capabilities into various applications.
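For context on what the workflow abstracts away, here is a minimal sketch of the raw HTTP exchange the nodes perform against a local Ollama server. The model name llama3 is an assumption; substitute any model you have pulled.

```typescript
// Minimal sketch: call a local Ollama server's chat endpoint directly.
// Assumes Ollama is running on the default port 11434 and that a model
// named "llama3" has been pulled (`ollama pull llama3`); adjust as needed.
async function chatWithOllama(userMessage: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",                      // assumed model name
      messages: [{ role: "user", content: userMessage }],
      stream: false,                        // return one complete JSON reply
    }),
  });
  if (!response.ok) {
    throw new Error(`Ollama request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.message.content;             // the assistant's reply text
}

chatWithOllama("Hello, who are you?").then(console.log);
```

The workflow performs this round trip for you on every incoming chat message, so no such code needs to be written by hand.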

Workflow Steps

  1. Message Reception: The workflow begins when a chat message is received through the When chat message received node.
  2. Processing the Input: The message is then sent to the Chat LLM Chain, which processes the input and prepares it for the LLM.
  3. Generating Response: The Ollama Chat Model node interacts with the local Ollama server, sending the processed input and receiving an AI-generated response.
  4. Delivering the Response: Finally, the response from the LLM is delivered back to the chat interface, completing the interaction (a code sketch of this chain follows below).
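
The same pipeline can be expressed in code. Below is a minimal LangChain.js sketch of what the Chat LLM Chain and Ollama Chat Model nodes do together; the @langchain/ollama package and the llama3 model name are assumptions about your local setup, not part of the workflow itself.

```typescript
// Minimal sketch of the n8n chain expressed in LangChain.js.
// Assumes `npm install @langchain/ollama @langchain/core` and a pulled
// "llama3" model; swap in whatever model your Ollama server hosts.
import { ChatOllama } from "@langchain/ollama";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // default Ollama address
  model: "llama3",                   // assumed model name
});

// A simple prompt -> model -> string pipeline, analogous to the
// Chat LLM Chain node feeding the Ollama Chat Model node.
const chain = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
])
  .pipe(model)
  .pipe(new StringOutputParser());

const reply = await chain.invoke({ input: "Hello, who are you?" });
console.log(reply);
```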
Customization Guide

  • Ollama Configuration: Users can customize the Ollama API address in the Ollama Chat Model node if it differs from the default http://localhost:11434 (see the sketch after this list).
  • Node Parameters: Adjust parameters in the Chat LLM Chain and Ollama Chat Model nodes to refine the AI's responses based on specific needs.
  • Styling Sticky Notes: Modify the content and appearance of the Sticky Note nodes to better fit your preferences or branding.
  • Network Settings: If using Docker, ensure the n8n container has access to the host's network for successful communication with Ollama, for example by starting the container with the --net=host option.
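
As an illustration of the first two points, here is a hedged sketch of how an equivalent LangChain.js client would point at a non-default Ollama address and tune generation parameters; the address, model name, and parameter values below are purely hypothetical examples.

```typescript
// Hypothetical customization sketch: point at a remote Ollama server and
// tune generation parameters, mirroring the settings the Ollama Chat Model
// node exposes. All concrete values below are illustrative assumptions.
import { ChatOllama } from "@langchain/ollama";

const customModel = new ChatOllama({
  baseUrl: "http://192.168.1.50:11434", // hypothetical non-default address
  model: "llama3",                      // assumed model name
  temperature: 0.2,                     // lower = more deterministic replies
});

const answer = await customModel.invoke("Summarize n8n in one sentence.");
console.log(answer.content);
```

For the Docker case, running the n8n container with host networking lets http://localhost:11434 inside the container resolve to the Ollama server running on the host.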