Add ollama support#135

Open
rajo69 wants to merge 2 commits into VectifyAI:main from rajo69:add-ollama-support
Conversation


rajo69 commented Mar 3, 2026

Fixes #115

What this adds

  • --base-url CLI flag for pointing PageIndex at any OpenAI-compatible endpoint (Ollama, LM Studio, Azure OpenAI, AWS Bedrock, Mistral, etc.).
  • base_url parameter threaded through all LLM call functions in page_index.py, page_index_md.py, and utils.py.
  • base_url: null added to config.yaml defaults so existing behaviour is completely unchanged when the flag is not provided.
  • Fixes a KeyError crash in count_tokens() where tiktoken could not find an encoding for non-OpenAI model names. Now falls back to cl100k_base instead of crashing.

Usage

```bash
# Ollama
python run_pageindex.py --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1

# Any OpenAI-compatible provider
python run_pageindex.py --pdf_path doc.pdf --model mistral --base-url https://your-custom-endpoint/v1
```

Breaking changes

None. base_url defaults to null and all existing calls work without any changes.

Tested

  • Default usage (no --base-url) - behaviour identical to before
  • count_tokens() with a non-OpenAI model name - no longer crashes
  • --base-url flag accepted by CLI and passed through to all LLM calls
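The CLI wiring in the last item can be sketched with `argparse` (flag names follow the PR description; the parser below is illustrative, not the project's actual `cli.py`):

```python
import argparse

parser = argparse.ArgumentParser(prog="pageindex")
parser.add_argument("--pdf_path", required=True)
parser.add_argument("--model", default="gpt-4o")
# Optional: point at any OpenAI-compatible endpoint. The None default
# preserves current behaviour (the official OpenAI API).
parser.add_argument("--base-url", dest="base_url", default=None)

# Example invocation mirroring the Usage section above.
args = parser.parse_args(
    ["--pdf_path", "doc.pdf", "--model", "llama3.1",
     "--base-url", "http://localhost:11434/v1"]
)
```

Because `--base-url` defaults to `None`, omitting the flag reproduces the pre-PR behaviour without any configuration changes.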

rajo69 added 2 commits March 3, 2026 00:40
- Add pyproject.toml with package metadata, dependencies, and CLI entry point
- Extract CLI logic into pageindex/cli.py with a proper main() function
- Simplify run_pageindex.py to delegate to pageindex.cli:main
- Add build artifacts to .gitignore

Closes VectifyAI#103
- Add --base-url CLI flag for custom API endpoints
- Thread base_url through all LLM call functions in page_index.py,
  page_index_md.py, and utils.py
- Add base_url to config.yaml defaults
- Fix tiktoken fallback to cl100k_base for non-OpenAI model names

Closes VectifyAI#115

Usage with Ollama:
  pageindex --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1
Development

Successfully merging this pull request may close these issues.

[Feature] ollama