Local AI Enhanced: Integrating OpenWebUI with N8N for Streamlined Agent Interaction

2025-01-26
ℹ️Note on the source

This blog post was automatically generated (and translated). It is based on the following original, which I selected for publication on this blog:
Use Open WebUI with Your N8N AI Agents – Voice Chat Included! – YouTube.


The ability to run AI locally has become increasingly accessible, allowing users to manage their Large Language Models (LLMs), vector databases, and agent workflows on personal hardware. One significant enhancement to this local AI ecosystem involves integrating OpenWebUI, a user-friendly interface, with platforms like N8N, which facilitates AI agent workflows.

OpenWebUI: A Local AI Interface

OpenWebUI serves as an open-source chat interface, similar to ChatGPT, designed to interact with LLMs. It supports key features such as functions and pipelines, enabling custom functionalities like chatting with personal agents and API endpoints. By leveraging these capabilities, OpenWebUI can be integrated with N8N to enhance the user experience of local AI setups.

The Local AI Starter Kit

The Local AI Starter Kit, developed by the N8N team, provides a Docker Compose file that packages essential services for local AI development. This includes:

  • N8N: For building workflow automations and AI agents.
  • Ollama: For running large language models.
  • Qdrant: A vector database for Retrieval-Augmented Generation (RAG).
  • Postgres: A SQL database for managing chat memory and other data.

All these components run locally, offering a comprehensive environment for AI experimentation and deployment.

Integrating OpenWebUI with N8N

To integrate OpenWebUI into this setup, a new container is added to the Docker Compose file. This allows users to interact with their N8N workflows directly through the OpenWebUI interface. The integration leverages OpenWebUI's functions to create custom pipelines that communicate with N8N agents.
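The added container can be sketched as follows. This is a minimal, illustrative fragment, not the exact file from the fork: the image name and internal port follow the official OpenWebUI image, while the volume name and the assumption that all services share the compose file's default network are mine and must match the rest of your docker-compose.yml.

```yaml
# Hypothetical addition to the starter kit's docker-compose.yml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"  # OpenWebUI listens on 8080 inside the container
    environment:
      # Reach Ollama via its compose service name, not localhost
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data  # persist accounts and chat history

volumes:
  open-webui:
```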

Setting Up the Integration

  1. Clone the Repository: Start by cloning the forked repository containing the enhanced Local AI Starter Kit.
git clone <repository_url>
  2. Configure the .env File: Modify the env.example file to set your Postgres settings (username, password, and database name), then rename the file to .env.

  3. Run Docker Compose: Execute the Docker Compose command to start all the services, including OpenWebUI.
docker compose --profile gpu-nvidia up
  4. Access N8N: Navigate to localhost:5678 to open the N8N interface and create a local account (no internet connection required).

  5. Access OpenWebUI: Navigate to localhost:3000 and likewise create a local account.

  6. Configure the Ollama API URL: In OpenWebUI, open the admin panel, go to Settings, then Connections, and change the Ollama API URL to http://ollama:11434 so that OpenWebUI pulls models from the Ollama Docker container rather than from localhost.

Creating an N8N Agent Workflow

A basic AI agent workflow in N8N can be integrated with OpenWebUI using webhooks. The workflow should include:

  • Webhook Node: Configured as a POST request with a custom path (e.g., /invoke-n8n-agent). Set the response mode to respond using a "Respond to Webhook" node.
  • AI Agent Node: Generates responses based on the prompt received by the webhook. Use Ollama as the chat model and configure the credentials with the base URL http://ollama:11434.
  • Qdrant Vector Store: Integrates with the agent for RAG. Configure the credentials with the URL http://qdrant:6333; an API key is not required for local setups.
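The contract between OpenWebUI and the webhook workflow boils down to two small JSON shapes. The sketch below illustrates them in Python; the field names ("chatInput", "sessionId", "output") are assumptions for illustration and must match whatever your Webhook and "Respond to Webhook" nodes actually use.

```python
# Hypothetical JSON contract between OpenWebUI and the N8N webhook.
# Field names are assumptions; align them with your workflow's nodes.

def extract_prompt(request_body: dict) -> tuple[str, str]:
    """Pull the user's message and session id out of the webhook payload."""
    return request_body["chatInput"], request_body["sessionId"]

def wrap_reply(agent_text: str) -> dict:
    """Shape the agent's answer the way the Respond to Webhook node returns it."""
    return {"output": agent_text}

# Example round trip:
incoming = {"chatInput": "What is RAG?", "sessionId": "abc-123"}
prompt, session = extract_prompt(incoming)
reply = wrap_reply(f"You asked: {prompt}")
```

Keeping the session id in every request is what lets the Postgres-backed chat memory in N8N tie consecutive messages to the same conversation.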

Integrating with OpenWebUI Functions

OpenWebUI's functions feature makes it possible to add custom behavior to the chat interface. Here, a custom function forwards each chat message to the N8N webhook. Key elements of the function include:

  • Valves (Parameters): Define parameters such as the N8N URL, bearer token, input field, and response field.
  • HTTP Request: The function makes a POST request to the N8N webhook URL, passing the user's message and session ID.
  • Response Handling: Extracts the response from the LLM and returns it to OpenWebUI.

Interacting with the AI Agent

Once the integration is set up, users can interact with their N8N AI agents directly through the OpenWebUI chat interface. This includes both text and voice interactions, providing a seamless and intuitive experience.

Further Thoughts on Local AI

Integrating OpenWebUI with N8N marks a significant step forward in making local AI more accessible and user-friendly. It opens up possibilities for:

  • Customization: Tailoring AI agents to specific needs and workflows.
  • Privacy: Maintaining full control over data and processing.
  • Innovation: Experimenting with new AI models and techniques in a local environment.

As the local AI ecosystem continues to evolve, further enhancements and integrations will undoubtedly emerge. Which path do we want to take to utilize AI in the future?

