
feat: adding llm txt #217

Merged
sriramveeraghanta merged 1 commit into master from feat-add-llm-txt on Feb 24, 2026

Conversation

sriramveeraghanta (Member) commented Feb 24, 2026

Summary by CodeRabbit

  • Documentation
    • Added comprehensive developer documentation covering self-hosting setup (installation methods, configuration, management, troubleshooting), complete API reference across multiple endpoints and categories, and developer tools integration guides.
    • Updated web configuration to enhance accessibility of documentation resources.


vercel bot commented Feb 24, 2026

The latest updates on your projects.

Project: developer-docs | Deployment: Building | Updated (UTC): Feb 24, 2026 9:11pm



coderabbitai bot commented Feb 24, 2026

📝 Walkthrough


Three new documentation files are added to the docs/public/ directory: comprehensive developer documentation (llms-full.txt), a condensed reference guide (llms.txt), and a robots.txt directive linking to these LLM-friendly documentation assets. No executable code changes are introduced.

Changes

Cohort: LLM Documentation Assets
File(s): docs/public/llms-full.txt, docs/public/llms.txt, docs/public/robots.txt
Summary: Adds three new documentation files for LLM accessibility: comprehensive developer documentation with Self-Hosting, API Reference, and Developer Tools sections; a condensed reference guide; and a robots.txt directive to index the documentation assets.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 Hop, hop! A warren of knowledge takes flight,
LLM-friendly docs, shining bright!
From robots.txt to full developer guides,
Plane's wisdom now spreads far and wide! 📚✨

🚥 Pre-merge checks | ✅ 3 passed
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: adding llm txt' is directly related to the changeset, which adds LLM-related documentation files (llms.txt, llms-full.txt) and updates robots.txt with LLM directives.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.




coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
docs/public/llms-full.txt (1)

1-417: Consider auto-generating these files from the docs source to prevent drift.

This file is a comprehensive hand-curated index of roughly 180+ API endpoints. As the API grows or endpoints change, keeping it in sync manually is error-prone. Many documentation frameworks provide plugins to auto-generate llms.txt/llms-full.txt on each build: you can create or update llms.txt with the llms_txt2ctx CLI/Python tool, or wire plugins such as vitepress-plugin-llms or docusaurus-plugin-llms into the build process. Treat llms.txt as living documentation: update it when you add major new pages or remove outdated ones, and if you reorganize the docs, reflect that in llms.txt so an AI doesn't follow outdated links.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/public/llms-full.txt` around lines 1 - 417, The
docs/public/llms-full.txt is being maintained manually and will drift; replace
the manual workflow by auto-generating llms-full.txt (and llms.txt) from the
canonical docs source during the site build: add a build step that runs the
llms_txt2ctx CLI (or equivalent script) or integrates a plugin such as
vitepress-plugin-llms / docusaurus-plugin-llms to emit the file, update the
build/CI configuration to run this step and commit the generated artifact (or
publish it to the site output), and remove manual edits to
docs/public/llms-full.txt so the generated version is the single source of
truth.
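The auto-generation approach suggested above can be sketched as a small build script. This is a minimal illustration under stated assumptions, not the repo's actual tooling: the docs directory layout, the derive-title-from-filename convention, and the URL scheme are all hypothetical.

```python
from pathlib import Path


def build_llms_txt(docs_root: Path, base_url: str, title: str) -> str:
    """Emit a spec-shaped llms.txt: H1 title, blockquote summary,
    then one H2 section per docs subdirectory with a link list."""
    lines = [f"# {title}", "", "> Auto-generated index of the documentation.", ""]
    for section in sorted(p for p in docs_root.iterdir() if p.is_dir()):
        # Section heading derived from the directory name, e.g. "epics" -> "Epics"
        lines.append(f"## {section.name.replace('-', ' ').title()}")
        for page in sorted(section.glob("*.md")):
            # Link text from the filename, URL from the assumed site layout
            slug = f"{section.name}/{page.stem}"
            name = page.stem.replace("-", " ").title()
            lines.append(f"- [{name}]({base_url}/{slug})")
        lines.append("")
    return "\n".join(lines)
```

In a real setup this would run as a pre-build step in CI and write the result to docs/public/llms.txt, so the committed file can never diverge from the pages that actually exist.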
docs/public/llms.txt (1)

1-52: LGTM — spec-compliant structure with all required sections.

The llms.txt spec defines the following structure, in order: an H1 with the project name (the only required element), a blockquote with a short summary, and zero or more Markdown sections with detailed information. This file meets all of those requirements.

One optional enhancement: the spec defines a special ## Optional section — URLs listed there can be skipped by LLMs when a shorter context is needed, and it's intended for secondary information that can often be omitted. Grouping lower-priority entries such as Stickies, Initiatives, or MCP Server for Claude Code under ## Optional could improve context-window efficiency for LLMs with tight limits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/public/llms.txt` around lines 1 - 52, The doc is spec-compliant but
should group lower-priority links into an "## Optional" section to reduce LLM
context when needed; update the "Developer Tools" and/or "API Reference" lists
by moving entries like "Stickies", "Initiatives", and "MCP Server for Claude
Code" (and any other secondary links you deem low-priority) out of the main
lists and place them under a new "## Optional" Markdown section, preserving
their link text and URLs exactly and keeping the H1 and blockquote unchanged.
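As a sketch, moving secondary entries under the spec's ## Optional section might look like the following. The entry names are taken from the review's own examples, but the URLs here are illustrative placeholders, not necessarily the actual file's entries.

```markdown
# Plane Developer Docs

> API reference, self-hosting guides, and developer-tools documentation for Plane.

## API Reference

- [Epics Overview](https://developers.plane.so/api-reference/epics/overview)

## Optional

- [Stickies](https://developers.plane.so/api-reference/stickies/overview)
- [Initiatives](https://developers.plane.so/api-reference/initiatives/overview)
- [MCP Server for Claude Code](https://developers.plane.so/dev-tools/mcp-claude-code)
```

LLMs working against a tight context window can skip everything under ## Optional while still covering the primary sections.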

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a5afe88 and 8d229e0.

📒 Files selected for processing (3)
  • docs/public/llms-full.txt
  • docs/public/llms.txt
  • docs/public/robots.txt

Comment on lines +281 to +285
### Epics

- [Overview](https://developers.plane.so/api-reference/epics/overview)
- [List Epics](https://developers.plane.so/api-reference/epics/list-epics)
- [Get Epic](https://developers.plane.so/api-reference/epics/get-epic-detail)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's search for any Epic API endpoint definitions
rg -i "epic" --type=md --type=mdx --type=json --type=yml --type=yaml -A 2 | head -100

Repository: makeplane/developer-docs

Length of output: 95


🏁 Script executed:

# Search specifically for create/update/delete operations on epics
rg -i "(create|post|put|patch|delete).*epic" -B 2 -A 2 | head -150

Repository: makeplane/developer-docs

Length of output: 3325


🏁 Script executed:

# Look for the API reference structure and how endpoints are organized
find . -type f \( -name "*.md" -o -name "*.mdx" -o -name "*.txt" \) | xargs rg -l "api.*reference\|api.*endpoint" | head -20

Repository: makeplane/developer-docs

Length of output: 50


🏁 Script executed:

# Check what's in the docs/public directory
ls -la docs/public/ 2>/dev/null || echo "Directory listing not available"

Repository: makeplane/developer-docs

Length of output: 541


🏁 Script executed:

# Read the llms-full.txt file around the Epics section to see full context
sed -n '275,290p' docs/public/llms-full.txt 2>/dev/null || echo "File not found at expected location"

Repository: makeplane/developer-docs

Length of output: 934


🏁 Script executed:

# Search for Epic-specific API endpoints (not within initiatives)
find docs/api-reference/epics -type f -name "*.md" 2>/dev/null | sort

Repository: makeplane/developer-docs

Length of output: 187


🏁 Script executed:

# Look for create/update/delete epic endpoints in the docs directory
find docs/api-reference -type f -name "*epic*.md" 2>/dev/null | sort

Repository: makeplane/developer-docs

Length of output: 324


🏁 Script executed:

# Check what Epic endpoints are documented in the API reference
ls -la docs/api-reference/epics/ 2>/dev/null || echo "Directory not found"

Repository: makeplane/developer-docs

Length of output: 376


🏁 Script executed:

# Search for "create epic" or "add epic" specifically in the docs
rg -i "create.*epic|add.*epic" docs/api-reference/epics/ 2>/dev/null || echo "No results in epics dir"

Repository: makeplane/developer-docs

Length of output: 143


🏁 Script executed:

# Check if there are any epic-related endpoints in the project/module/etc. sections
find docs/api-reference -type d -name "*epic*" 2>/dev/null

Repository: makeplane/developer-docs

Length of output: 92


🏁 Script executed:

# Search for any create/update/delete operations on epics across the entire API reference
rg -i "create.*epic|update.*epic|delete.*epic|remove.*epic" docs/api-reference/ 2>/dev/null | grep -v "initiative" | head -20

Repository: makeplane/developer-docs

Length of output: 224


🏁 Script executed:

# Check the structure of a comparable resource (e.g., Cycles) to see CRUD pattern
ls -la docs/api-reference/cycles/ 2>/dev/null || echo "Not found"

Repository: makeplane/developer-docs

Length of output: 77


🏁 Script executed:

# Check if Project or Module sections have epic operations
find docs/api-reference/project* -type f -name "*epic*.md" 2>/dev/null | head -10

Repository: makeplane/developer-docs

Length of output: 50


🏁 Script executed:

# Check if create/update/delete epic endpoints exist but are named differently
find docs/api-reference -type f -name "*.md" | xargs grep -l "epic" | xargs grep -l "POST\|PUT\|PATCH\|DELETE" | head -20

Repository: makeplane/developer-docs

Length of output: 351


🏁 Script executed:

# Look at what endpoints are mentioned in oauth-scopes which indicated create/update for epics
cat docs/dev-tools/build-plane-app/oauth-scopes.md | grep -A 2 -B 2 "epics"

Repository: makeplane/developer-docs

Length of output: 1078


🏁 Script executed:

# Check if there's an API routes file or OpenAPI spec that documents all endpoints
find . -type f \( -name "*routes*" -o -name "*openapi*" -o -name "*swagger*" -o -name "*.yaml" -o -name "*.yml" \) 2>/dev/null | head -20

Repository: makeplane/developer-docs

Length of output: 121


🏁 Script executed:

# Search for any "create epic" or "update epic" endpoints in markdown files
rg -i "POST.*epic|PUT.*epic|PATCH.*epic" docs/ 2>/dev/null | head -20

Repository: makeplane/developer-docs

Length of output: 50


Add missing Epic API endpoints to match oauth-scopes documentation.

The OAuth scopes explicitly document projects.epics:write for "Create and update project epics," but the Epics API reference section in llms-full.txt only lists Overview, List, and Get. Create/Update/Delete endpoints for epics are either missing from the API reference documentation or should be noted if they only exist within the Initiatives context. Verify which project-level epic endpoints exist and add them to maintain consistency with the capability claims in oauth-scopes.md.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/public/llms-full.txt` around lines 281 - 285, The Epics API section in
llms-full.txt is missing project-level create/update/delete endpoints referenced
by the oauth scope projects.epics:write; verify whether the API actually exposes
Project Epic endpoints (Create Epic, Update Epic, Delete Epic) or if epic
mutation is only available via Initiatives, then update the Epics list to
include the correct links/titles (e.g., "Create Epic", "Update Epic", "Delete
Epic" with their respective docs URLs) or add a note clarifying that
create/update/delete are only via Initiatives and not project-level; ensure the
entry names match the oauth-scopes.md wording (projects.epics:write) so the docs
stay consistent.

Comment on lines +19 to +22
# LLMs.txt - AI-friendly site documentation
# https://llmstxt.org/
LLMs-txt: https://developers.plane.so/llms.txt


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

llmstxt.org specification robots.txt LLMs-txt directive

💡 Result:

  • llmstxt.org defines llms.txt as a separate file placed at a site’s root (/llms.txt) to help LLMs find and use a website’s most relevant content at inference time—it is not a robots.txt extension and does not introduce a robots.txt directive like LLMs-txt. (llmstxt.org)
  • robots.txt vs llms.txt: robots.txt is about crawl access rules (allow/disallow). llms.txt is a curated, human+LLM-readable Markdown “map” of important resources to read/cite. (llmstxt.org)
  • Core llms.txt spec format (in order): (llmstxt.org)
    1. Required: an H1 title (project/site name)
    2. Optional: a blockquote summary
    3. Optional: additional non-heading Markdown (paragraphs/lists)
    4. Optional: one or more H2 sections containing “file lists” (Markdown list items with a required [name](url) link, optionally followed by : notes)
  • Related proposal in the spec: provide clean Markdown versions of pages by serving the same URL with .md appended (and for directory URLs, append index.html.md). (llmstxt.org)



🌐 Web query:

robots.txt recognized directives standards User-agent Allow Disallow Sitemap

💡 Result:

robots.txt recognized directives & standards (User-agent / Allow / Disallow / Sitemap)

Standards-based (Robots Exclusion Protocol)

  • IETF RFC 9309 (Robots Exclusion Protocol) standardizes these directives:
    • User-agent: (starts a rule group for a crawler token)
    • Disallow: (paths that must not be accessed)
    • Allow: (paths that may be accessed; used to override a matching Disallow with a more specific match)
      RFC 9309 also defines “most specific match wins” (longest match), and that if an allow and disallow are equally specific, Allow should win. [1]

Sitemap in robots.txt (widely supported, but separate from RFC 9309)

  • Sitemap: is defined by the Sitemaps protocol (sitemaps.org) as a way to declare sitemap URLs in robots.txt, independent of any User-agent group, and you can list multiple sitemap URLs. [2]
  • Google explicitly documents support for exactly these four fields in robots.txt: user-agent, allow, disallow, sitemap; other fields are ignored by Google’s parser. [3]

Sources: RFC 9309 [1]; Sitemaps protocol (sitemaps.org) [2]; Google robots.txt spec/support [3].


🌐 Web query:

LLM crawler robots.txt parsing LLMs-txt LLMS directive

💡 Result:

robots.txt parsing for LLM crawlers (what’s actually standardized)

Robots.txt uses the Robots Exclusion Protocol (IETF RFC 9309). The standardized control surface is essentially:

  • User-agent: …
  • Allow: …
  • Disallow: …

Crawlers pick the applicable User-agent group and then apply the most specific path match across Allow/Disallow rules. RFC 9309 does not define any special “LLM/AI” directive—LLM crawlers are controlled the same way as any other crawler: by matching their user-agent token. [1]

So there is no standardized LLMS directive in robots.txt (anything like that would be non-standard and generally ignored by compliant parsers). [1]

Common LLM-related robots.txt user-agent tokens (examples)

Vendors expose specific user-agent tokens you can target:

  • OpenAI: GPTBot (training crawler) and OAI-SearchBot (search crawler), with guidance that you can allow one and disallow the other independently via robots.txt. [2]
  • Google: Google-Extended is a robots.txt product token that publishers can use to control whether Google content may be used for training future Gemini models and/or grounding, and it “has no effect on Google Search.” [3]
  • Anthropic: ClaudeBot can be blocked via robots.txt (User-agent: ClaudeBot + Disallow: /). [4]

llms.txt / LLMs.txt (what it is, and what it isn’t)

/llms.txt is a separate, community-driven convention: a curated Markdown “map/menu” of the most important LLM-consumable resources on a site—more like a hand-crafted AI-oriented sitemap/briefing than a permissions file. It is not a robots.txt extension and not an access-control mechanism. [5] A common description/spec-style guide recommends placing it at /llms.txt and using a consistent Markdown structure (title, summary, grouped links). [6]

Practical takeaway

  • If your goal is permission/opt-out: use robots.txt (RFC 9309) with vendor user-agent tokens (e.g., GPTBot, ClaudeBot, Google-Extended). There is no official LLMS directive. [1][2][3][4]
  • If your goal is helping LLMs find the right docs: publish /llms.txt as a curated index/brief. [5][6]
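Put together, a robots.txt that controls LLM crawlers using only standard REP syntax might look like the sketch below. The user-agent tokens are the vendor-documented ones cited above; the policy choices and the sitemap URL are placeholders, not a recommendation.

```text
# Opt out of OpenAI training crawls entirely (illustrative policy choice)
User-agent: GPTBot
Disallow: /

# Allow Anthropic's crawler
User-agent: ClaudeBot
Allow: /

# Allow Google-Extended (Gemini training/grounding; no effect on Search)
User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Note that /llms.txt needs no robots.txt entry at all: crawlers that support the convention fetch it directly from the well-known root URL.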

Sources
[1] RFC 9309 (Robots Exclusion Protocol) https://datatracker.ietf.org/doc/html/rfc9309
[2] OpenAI – Overview of OpenAI Crawlers (GPTBot / OAI-SearchBot) https://platform.openai.com/docs/gptbot
[3] Google for Developers – Google-Extended (common crawlers) https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers
[4] Anthropic Help Center – blocking the crawler (ClaudeBot) https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler
[5] Search Engine Land – “llms.txt isn’t robots.txt” https://searchengineland.com/llms-txt-isnt-robots-txt-its-a-treasure-map-for-ai-456586
[6] LLMs.txt documentation (format/proposal overview) https://txt-llms.com/documentation


LLMs-txt: is not a recognized robots.txt directive and will be silently ignored by all parsers.

The llms.txt spec (llmstxt.org) defines /llms.txt as a separate, curated Markdown file served at a well-known URL—not as a robots.txt extension or directive. The Robots Exclusion Protocol (RFC 9309) standardizes only User-agent, Allow, Disallow, and Sitemap directives. Access control for LLM crawlers is handled via standard robots.txt user-agent tokens (e.g., GPTBot, ClaudeBot, Google-Extended). Any other directive, including LLMs-txt:, is silently discarded by compliant parsers.

The line is functionally a dead annotation and could mislead future maintainers into thinking it serves a purpose. Converting it to a plain comment improves clarity:

🛠️ Proposed fix
-# LLMs.txt - AI-friendly site documentation
-# https://llmstxt.org/
-LLMs-txt: https://developers.plane.so/llms.txt
+# LLMs-friendly documentation (https://llmstxt.org/)
+# Available at: https://developers.plane.so/llms.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/public/robots.txt` around lines 19 - 22, Remove or convert the
nonstandard "LLMs-txt: https://developers.plane.so/llms.txt" directive in the
robots.txt to a plain comment (or delete it) because it's not a valid robots.txt
directive; locate the exact string "LLMs-txt:" in the file and replace it with a
commented line (e.g., prefix with #) or remove it so only standard directives
(User-agent, Allow, Disallow, Sitemap) remain, and ensure the intended llms.txt
URL is documented elsewhere (e.g., in a README) if needed.

sriramveeraghanta merged commit 97f12cc into master on Feb 24, 2026. 5 checks passed.
sriramveeraghanta deleted the feat-add-llm-txt branch on February 24, 2026 at 21:25.


2 participants