- Add pyproject.toml with package metadata, dependencies, and CLI entry point
- Extract CLI logic into pageindex/cli.py with a proper main() function
- Simplify run_pageindex.py to delegate to pageindex.cli:main
- Add build artifacts to .gitignore

Closes VectifyAI#103
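A minimal sketch of what the extracted `pageindex/cli.py` could look like; the argument names beyond `--pdf_path`/`--model` and the print are assumptions for illustration, not the project's actual code. With a `[project.scripts]` entry like `pageindex = "pageindex.cli:main"` in `pyproject.toml`, installing the package exposes the `pageindex` command.

```python
# Hypothetical sketch of pageindex/cli.py; run_pageindex.py would
# simply import and call main(). Details are assumptions.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="pageindex")
    parser.add_argument("--pdf_path", required=True, help="path to the input PDF")
    parser.add_argument("--model", default="gpt-4o", help="LLM model name")
    return parser


def main(argv=None) -> int:
    args = build_parser().parse_args(argv)
    # The real entry point would dispatch to the tree-generation
    # logic here; this sketch only echoes the parsed arguments.
    print(f"indexing {args.pdf_path} with {args.model}")
    return 0
```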
- Add --base-url CLI flag for custom API endpoints
- Thread base_url through all LLM call functions in page_index.py, page_index_md.py, and utils.py
- Add base_url to config.yaml defaults
- Fix tiktoken fallback to cl100k_base for non-OpenAI model names

Closes VectifyAI#115

Usage with Ollama: `pageindex --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1`
Fixes #115
What this adds
- `--base-url` CLI flag for pointing PageIndex at any OpenAI-compatible endpoint (Ollama, LM Studio, Azure OpenAI, AWS Bedrock, Mistral, etc.).
- `base_url` parameter threaded through all LLM call functions in `page_index.py`, `page_index_md.py`, and `utils.py`.
- `base_url: null` added to `config.yaml` defaults so existing behaviour is completely unchanged when the flag is not provided.
- Fixed a `KeyError` crash in `count_tokens()` where tiktoken could not find an encoding for non-OpenAI model names. Now falls back to `cl100k_base` instead of crashing.

Usage

`pageindex --pdf_path doc.pdf --model llama3.1 --base-url http://localhost:11434/v1`
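The `count_tokens()` fix follows a simple try/fallback pattern. To keep this sketch runnable without tiktoken's network-fetched encoding files, the lookup is faked with a small table; the real code would call `tiktoken.encoding_for_model()` (which raises `KeyError` for unknown names) and `tiktoken.get_encoding("cl100k_base")` in the except branch.

```python
# Dependency-free sketch of the count_tokens() fallback. The stand-in
# mimics tiktoken.encoding_for_model(), which raises KeyError for model
# names it does not recognise (e.g. "llama3.1").
KNOWN_ENCODINGS = {"gpt-4o": "o200k_base", "gpt-4": "cl100k_base"}


def encoding_for_model(model: str) -> str:
    if model not in KNOWN_ENCODINGS:
        raise KeyError(model)
    return KNOWN_ENCODINGS[model]


def pick_encoding(model: str) -> str:
    try:
        return encoding_for_model(model)
    except KeyError:
        # Non-OpenAI model name: fall back instead of crashing.
        return "cl100k_base"
```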
Breaking changes
None.
`base_url` defaults to `null` and all existing calls work without any changes.

Tested
- Default path (no `--base-url`): behaviour identical to before
- `count_tokens()` with a non-OpenAI model name: no longer crashes
- `--base-url` flag accepted by CLI and passed through to all LLM calls