Enhancing Obsidian with a Private AI Co-pilot

2025-03-09
ℹ️Note on the source

This blog post was automatically generated (and translated). It is based on the following original, which I selected for publication on this blog:
Private Obsidian AI: Add DeepSeek to your Obsidian with Ollama (GUIDE + SETUP) – YouTube.


Obsidian, favored for its local storage capabilities, allows users to maintain complete privacy over their notes. But is it possible to integrate AI tools without compromising this privacy? Recent developments make it feasible to install a private, local AI co-pilot directly within Obsidian. This article delves into how to achieve this, examining the components involved and the benefits of such a setup.

Setting up Obsidian Co-pilot

The first step involves installing the Obsidian Co-pilot plugin. Once installed and enabled, the plugin typically requires an API key from providers like OpenAI. However, the goal here is to bypass external APIs and create a completely private system.

The Obsidian Co-pilot offers several key features:

  • Chat Mode: Engage in conversation with an AI bot on various topics.
  • Vault QA Mode: Query your notes, allowing the AI to find relevant information and answer questions based on your content.
  • Relevant Notes: While writing, the co-pilot actively suggests relevant notes, enhancing connectivity within your knowledge base.

The Role of Embedding Models

Behind the scenes, the co-pilot uses two primary models: a large language model (LLM) and an embedding model. The embedding model is crucial for understanding the relationships between notes. It translates notes into numerical vectors, enabling the system to calculate the relevance between them. When a new note is created or a question is asked, the system uses these vectors to identify the most similar and relevant notes.
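The relevance calculation can be illustrated with a small sketch. The vectors below are toy three-dimensional examples, not actual BGE-M3 embeddings (which have around a thousand dimensions), but the cosine-similarity math used to rank notes is the same:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for three notes (a real embedding model outputs far more dimensions).
notes = {
    "gardening tips": [0.9, 0.1, 0.2],
    "vegetable garden plan": [0.8, 0.2, 0.3],
    "tax return 2024": [0.1, 0.9, 0.1],
}

query = [0.85, 0.15, 0.25]  # hypothetical embedding of "how do I grow tomatoes?"

# Rank notes by similarity to the query, most relevant first.
ranked = sorted(notes, key=lambda n: cosine_similarity(query, notes[n]), reverse=True)
print(ranked)  # the two gardening notes rank above the unrelated tax note
```

This is the essence of Vault QA and the Relevant Notes feature: every note and every query is mapped to a vector, and nearness in that vector space stands in for topical relevance.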

To fully privatize the co-pilot experience, both models need to be installed locally.

Installing a Private AI Engine: DeepSeek and Ollama

Ollama is a tool that makes it easy to run large language models on your local machine, with support for Linux, Windows, and macOS. DeepSeek R1 is a capable open-source reasoning model. Both the language model and the embedding model can be downloaded and run through Ollama.

  1. Install Ollama: Download and install Ollama for your operating system.
  2. Download DeepSeek: Use the command line to pull the DeepSeek model from Ollama.

    ollama pull deepseek-r1
  3. Install an Embedding Model: BGE M3 is an open-source embedding model reported to outperform OpenAI's embeddings on retrieval benchmarks. Pull it from Ollama.

    ollama pull bge-m3

Optimizing the Context Window

Large language models have a context window that limits the amount of text they can process at once. If the window is too small, the co-pilot may only see part of your notes and answer from incomplete information, which increases the risk of hallucination. With Ollama, the context window of a local model can be enlarged using a Modelfile.

Create a Modelfile:

FROM deepseek-r1
PARAMETER num_ctx 16384

Create a new model with the larger context window:

ollama create deepseek-r1-16k -f Modelfile

Check the parameters of your new model:

ollama show deepseek-r1-16k
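To get a feel for how much of a vault fits into the enlarged window, a rough estimate helps. The sketch below uses the common heuristic of roughly four characters per token for English text (actual tokenizers vary, so treat the numbers as ballpark figures):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters per token heuristic."""
    return max(1, len(text) // 4)

CONTEXT_WINDOW = 16384  # the num_ctx value set in the Modelfile

def fits_in_context(notes: list[str], reserve_for_answer: int = 1024) -> bool:
    """Check whether the combined notes plus an answer budget fit the window."""
    used = sum(estimate_tokens(n) for n in notes)
    return used + reserve_for_answer <= CONTEXT_WINDOW

# Example: ten notes of ~6,000 characters each (~1,500 tokens apiece).
notes = ["x" * 6000] * 10
print(fits_in_context(notes))  # ~15,000 tokens of notes + 1,024 reserve fits in 16,384
```

If a typical Vault QA query pulls in more text than this, the co-pilot will have to truncate, which is exactly the failure mode the larger context window is meant to avoid.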

Integrating with Obsidian

To link Ollama with Obsidian, make sure Ollama is running in the background and configure Obsidian to use the local models. Depending on your OS, you may also need to set the OLLAMA_ORIGINS environment variable (e.g. OLLAMA_ORIGINS=app://obsidian.md*) so that Ollama accepts requests coming from the Obsidian app.

In the Obsidian Co-pilot settings, add custom models, specifying the exact names used in Ollama. Select Ollama as the provider and verify the connection. Designate one model for reasoning and another for embedding.
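A quick way to confirm the connection works outside Obsidian is to talk to Ollama's local REST API directly (it listens on http://localhost:11434 by default). The sketch below only builds and inspects the JSON payload for the /api/generate endpoint; the actual request is left commented out because it requires a running Ollama instance, and the model name assumes the custom model created earlier (substitute whatever name you used with ollama create):

```python
import json

# Payload for Ollama's /api/generate endpoint.
payload = {
    "model": "deepseek-r1-16k",  # the custom model built from the Modelfile
    "prompt": "Summarize my note on embedding models in one sentence.",
    "stream": False,             # return one JSON response instead of chunks
}
body = json.dumps(payload).encode("utf-8")

# With Ollama running in the background, the request would look like this:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])

print(json.loads(body.decode("utf-8"))["model"])  # prints deepseek-r1-16k
```

If the commented-out request returns a response, the Obsidian Co-pilot settings only need the same model name and the Ollama provider to work.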

Conclusion

By running both models locally, users get a fully private Obsidian Co-pilot: they can ask questions about their own writing and notes without any data leaving their machine.

While local co-pilots may not currently match the capabilities of cloud-based solutions in areas like agentic file management and deep research, they offer a compelling alternative for users who prioritize privacy and control over their data. The possibility of having a completely private vault with private AI raises the question: Is this the future of personal knowledge management?

