
Ollama

Ollama runs embedding models locally and exposes an OpenAI-compatible API for embedding generation. seekdb provides an OllamaEmbeddingFunction wrapper that uses this API to generate embeddings from a local or remote Ollama server.
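
Under the hood, the wrapper issues standard OpenAI-style embedding requests against the server's /v1 endpoint. The sketch below shows a roughly equivalent raw call using the openai package; the api_key value is a placeholder (Ollama ignores it, but the client requires one):

    from openai import OpenAI

    # Point the OpenAI client at the local Ollama compatibility endpoint
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.embeddings.create(model="nomic-embed-text", input=["hello world"])
    print(len(resp.data[0].embedding))  # embedding dimension (e.g. 768 for nomic-embed-text)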

tip

Running Ollama locally is free and open source, but Ollama and the models it serves are distributed under their own license terms, and hosted Ollama offerings may have their own pricing. Before proceeding, please visit the official website or refer to the relevant documentation to confirm and accept the applicable terms. If you do not agree, please do not proceed.

Dependencies and environment

In practice, you typically need:

  • Ollama installed and running

  • The embedding model pulled locally, for example:

    ollama pull nomic-embed-text
  • Python packages: pyseekdb and openai (a setup sketch follows this list)
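
Putting these together, a minimal setup might look like the following shell session (it assumes the ollama CLI is on your PATH and that pip installs into the same environment you run seekdb from):

    # Start the Ollama server (skip if it already runs as a system service)
    ollama serve &

    # Pull the embedding model
    ollama pull nomic-embed-text

    # Install the Python dependencies
    pip install pyseekdb openai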

Example: create an Ollama embedding function

  • Basic usage (a usage sketch follows after these examples)

    from pyseekdb.utils.embedding_functions import OllamaEmbeddingFunction

    ef = OllamaEmbeddingFunction(model_name="nomic-embed-text")
  • Remote Ollama server (api_base)

    from pyseekdb.utils.embedding_functions import OllamaEmbeddingFunction

    ef = OllamaEmbeddingFunction(
        model_name="nomic-embed-text",
        api_base="http://remote-server:11434/v1",
        timeout=30,
    )
  • Specify embedding dimensions (if supported by the model)

    from pyseekdb.utils.embedding_functions import OllamaEmbeddingFunction

    ef = OllamaEmbeddingFunction(
        model_name="nomic-embed-text",
        dimensions=512,
    )
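
Once constructed, the embedding function can be called on a list of documents. The snippet below assumes the Chroma-style convention of a callable that maps a list of strings to a list of vectors; treat it as a sketch rather than the definitive pyseekdb interface:

    from pyseekdb.utils.embedding_functions import OllamaEmbeddingFunction

    ef = OllamaEmbeddingFunction(model_name="nomic-embed-text")

    # Assumed callable interface: list[str] -> list of embedding vectors
    embeddings = ef(["seekdb stores vectors", "Ollama generates them locally"])

    print(len(embeddings))     # 2 (one vector per input document)
    print(len(embeddings[0]))  # the model's embedding dimension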

Parameters

  • model_name: the name of a locally available Ollama model (for example, nomic-embed-text).
  • api_base: the base URL for the OpenAI-compatible Ollama API (local default is commonly http://localhost:11434/v1).
  • timeout: request timeout in seconds.
  • dimensions: output embedding dimension (only effective if the selected model/server supports it).
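
Because dimensions is only honored by models that support truncated embeddings, it is worth checking the returned vectors when you rely on it. A minimal sketch, using the same assumed callable interface as above:

    from pyseekdb.utils.embedding_functions import OllamaEmbeddingFunction

    ef = OllamaEmbeddingFunction(model_name="nomic-embed-text", dimensions=512)

    # Verify the server actually applied the requested dimension
    vec = ef(["dimension check"])[0]
    assert len(vec) == 512, f"got {len(vec)}-dim vectors; the model may ignore 'dimensions'"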