📝 Walkthrough

Three new documentation files are added.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
🧹 Nitpick comments (2)
docs/public/llms-full.txt (1)
1-417: Consider auto-generating these files from the docs source to prevent drift.

This file is a comprehensive hand-curated index of ~180+ API endpoints. As the API grows or endpoints change, keeping it in sync manually is error-prone. Many documentation frameworks provide plugins to auto-generate `llms.txt`/`llms-full.txt` on each build. You can create or update llms.txt automatically with the `llms_txt2ctx` CLI/Python tool, or use plugins like `vitepress-plugin-llms` and `docusaurus-plugin-llms` in your build process. Treat llms.txt as living documentation: update it when you add major new pages or remove outdated ones, and if you reorganize the docs, reflect that in llms.txt so the AI doesn't follow outdated links.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/public/llms-full.txt` around lines 1 - 417, The docs/public/llms-full.txt is being maintained manually and will drift; replace the manual workflow by auto-generating llms-full.txt (and llms.txt) from the canonical docs source during the site build: add a build step that runs the llms_txt2ctx CLI (or equivalent script) or integrates a plugin such as vitepress-plugin-llms / docusaurus-plugin-llms to emit the file, update the build/CI configuration to run this step and commit the generated artifact (or publish it to the site output), and remove manual edits to docs/public/llms-full.txt so the generated version is the single source of truth.

docs/public/llms.txt (1)
1-52: LGTM — spec-compliant structure with all required sections.

The llms.txt spec requires the following in order: an H1 with the project name (the only required section), a blockquote with a short summary, and zero or more Markdown sections with detailed information. This file meets all of those requirements.
One optional enhancement: the spec defines a special `## Optional` section; URLs listed there can be skipped by LLMs when a shorter context is needed, and it's intended for secondary information that can often be omitted. Grouping lower-priority entries such as Stickies, Initiatives, or MCP Server for Claude Code under `## Optional` could improve context-window efficiency for LLMs with tight limits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/public/llms.txt` around lines 1 - 52, The doc is spec-compliant but should group lower-priority links into an "## Optional" section to reduce LLM context when needed; update the "Developer Tools" and/or "API Reference" lists by moving entries like "Stickies", "Initiatives", and "MCP Server for Claude Code" (and any other secondary links you deem low-priority) out of the main lists and place them under a new "## Optional" Markdown section, preserving their link text and URLs exactly and keeping the H1 and blockquote unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/public/llms-full.txt`:
- Around line 281-285: The Epics API section in llms-full.txt is missing
project-level create/update/delete endpoints referenced by the oauth scope
projects.epics:write; verify whether the API actually exposes Project Epic
endpoints (Create Epic, Update Epic, Delete Epic) or if epic mutation is only
available via Initiatives, then update the Epics list to include the correct
links/titles (e.g., "Create Epic", "Update Epic", "Delete Epic" with their
respective docs URLs) or add a note clarifying that create/update/delete are
only via Initiatives and not project-level; ensure the entry names match the
oauth-scopes.md wording (projects.epics:write) so the docs stay consistent.
In `@docs/public/robots.txt`:
- Around line 19-22: Remove or convert the nonstandard "LLMs-txt:
https://developers.plane.so/llms.txt" directive in the robots.txt to a plain
comment (or delete it) because it's not a valid robots.txt directive; locate the
exact string "LLMs-txt:" in the file and replace it with a commented line (e.g.,
prefix with #) or remove it so only standard directives (User-agent, Allow,
Disallow, Sitemap) remain, and ensure the intended llms.txt URL is documented
elsewhere (e.g., in a README) if needed.
---
Nitpick comments:
In `@docs/public/llms-full.txt`:
- Around line 1-417: The docs/public/llms-full.txt is being maintained manually
and will drift; replace the manual workflow by auto-generating llms-full.txt
(and llms.txt) from the canonical docs source during the site build: add a build
step that runs the llms_txt2ctx CLI (or equivalent script) or integrates a
plugin such as vitepress-plugin-llms / docusaurus-plugin-llms to emit the file,
update the build/CI configuration to run this step and commit the generated
artifact (or publish it to the site output), and remove manual edits to
docs/public/llms-full.txt so the generated version is the single source of
truth.
In `@docs/public/llms.txt`:
- Around line 1-52: The doc is spec-compliant but should group lower-priority
links into an "## Optional" section to reduce LLM context when needed; update
the "Developer Tools" and/or "API Reference" lists by moving entries like
"Stickies", "Initiatives", and "MCP Server for Claude Code" (and any other
secondary links you deem low-priority) out of the main lists and place them
under a new "## Optional" Markdown section, preserving their link text and URLs
exactly and keeping the H1 and blockquote unchanged.
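Applied to this file, the suggestion would look roughly like the fragment below. The entry titles come from the review comment, but the URLs are placeholders; the real file's link text and URLs should be copied verbatim, as the prompt says.

```markdown
# Plane Developers

> API reference and developer tools for Plane.

## API Reference

- [Work Items](https://developers.plane.so/api-reference/work-items/overview)
- [Cycles](https://developers.plane.so/api-reference/cycles/overview)

## Optional

- [Stickies](https://developers.plane.so/api-reference/stickies/overview)
- [Initiatives](https://developers.plane.so/api-reference/initiatives/overview)
- [MCP Server for Claude Code](https://developers.plane.so/dev-tools/mcp-claude-code)
```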
```
### Epics

- [Overview](https://developers.plane.so/api-reference/epics/overview)
- [List Epics](https://developers.plane.so/api-reference/epics/list-epics)
- [Get Epic](https://developers.plane.so/api-reference/epics/get-epic-detail)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's search for any Epic API endpoint definitions
rg -i "epic" --type=md --type=mdx --type=json --type=yml --type=yaml -A 2 | head -100
```

Repository: makeplane/developer-docs
Length of output: 95

🏁 Script executed:

```shell
# Search specifically for create/update/delete operations on epics
rg -i "(create|post|put|patch|delete).*epic" -B 2 -A 2 | head -150
```

Repository: makeplane/developer-docs
Length of output: 3325

🏁 Script executed:

```shell
# Look for the API reference structure and how endpoints are organized
find . -type f \( -name "*.md" -o -name "*.mdx" -o -name "*.txt" \) | xargs rg -l "api.*reference\|api.*endpoint" | head -20
```

Repository: makeplane/developer-docs
Length of output: 50

🏁 Script executed:

```shell
# Check what's in the docs/public directory
ls -la docs/public/ 2>/dev/null || echo "Directory listing not available"
```

Repository: makeplane/developer-docs
Length of output: 541

🏁 Script executed:

```shell
# Read the llms-full.txt file around the Epics section to see full context
sed -n '275,290p' docs/public/llms-full.txt 2>/dev/null || echo "File not found at expected location"
```

Repository: makeplane/developer-docs
Length of output: 934

🏁 Script executed:

```shell
# Search for Epic-specific API endpoints (not within initiatives)
find docs/api-reference/epics -type f -name "*.md" 2>/dev/null | sort
```

Repository: makeplane/developer-docs
Length of output: 187

🏁 Script executed:

```shell
# Look for create/update/delete epic endpoints in the docs directory
find docs/api-reference -type f -name "*epic*.md" 2>/dev/null | sort
```

Repository: makeplane/developer-docs
Length of output: 324

🏁 Script executed:

```shell
# Check what Epic endpoints are documented in the API reference
ls -la docs/api-reference/epics/ 2>/dev/null || echo "Directory not found"
```

Repository: makeplane/developer-docs
Length of output: 376

🏁 Script executed:

```shell
# Search for "create epic" or "add epic" specifically in the docs
rg -i "create.*epic|add.*epic" docs/api-reference/epics/ 2>/dev/null || echo "No results in epics dir"
```

Repository: makeplane/developer-docs
Length of output: 143

🏁 Script executed:

```shell
# Check if there are any epic-related endpoints in the project/module/etc. sections
find docs/api-reference -type d -name "*epic*" 2>/dev/null
```

Repository: makeplane/developer-docs
Length of output: 92

🏁 Script executed:

```shell
# Search for any create/update/delete operations on epics across the entire API reference
rg -i "create.*epic|update.*epic|delete.*epic|remove.*epic" docs/api-reference/ 2>/dev/null | grep -v "initiative" | head -20
```

Repository: makeplane/developer-docs
Length of output: 224

🏁 Script executed:

```shell
# Check the structure of a comparable resource (e.g., Cycles) to see CRUD pattern
ls -la docs/api-reference/cycles/ 2>/dev/null || echo "Not found"
```

Repository: makeplane/developer-docs
Length of output: 77

🏁 Script executed:

```shell
# Check if Project or Module sections have epic operations
find docs/api-reference/project* -type f -name "*epic*.md" 2>/dev/null | head -10
```

Repository: makeplane/developer-docs
Length of output: 50

🏁 Script executed:

```shell
# Check if create/update/delete epic endpoints exist but are named differently
find docs/api-reference -type f -name "*.md" | xargs grep -l "epic" | xargs grep -l "POST\|PUT\|PATCH\|DELETE" | head -20
```

Repository: makeplane/developer-docs
Length of output: 351

🏁 Script executed:

```shell
# Look at what endpoints are mentioned in oauth-scopes which indicated create/update for epics
cat docs/dev-tools/build-plane-app/oauth-scopes.md | grep -A 2 -B 2 "epics"
```

Repository: makeplane/developer-docs
Length of output: 1078

🏁 Script executed:

```shell
# Check if there's an API routes file or OpenAPI spec that documents all endpoints
find . -type f \( -name "*routes*" -o -name "*openapi*" -o -name "*swagger*" -o -name "*.yaml" -o -name "*.yml" \) 2>/dev/null | head -20
```

Repository: makeplane/developer-docs
Length of output: 121

🏁 Script executed:

```shell
# Search for any "create epic" or "update epic" endpoints in markdown files
rg -i "POST.*epic|PUT.*epic|PATCH.*epic" docs/ 2>/dev/null | head -20
```

Repository: makeplane/developer-docs
Length of output: 50
Add missing Epic API endpoints to match oauth-scopes documentation.
The OAuth scopes explicitly document projects.epics:write for "Create and update project epics," but the Epics API reference section in llms-full.txt only lists Overview, List, and Get. Create/Update/Delete endpoints for epics are either missing from the API reference documentation or should be noted if they only exist within the Initiatives context. Verify which project-level epic endpoints exist and add them to maintain consistency with the capability claims in oauth-scopes.md.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/public/llms-full.txt` around lines 281 - 285, The Epics API section in
llms-full.txt is missing project-level create/update/delete endpoints referenced
by the oauth scope projects.epics:write; verify whether the API actually exposes
Project Epic endpoints (Create Epic, Update Epic, Delete Epic) or if epic
mutation is only available via Initiatives, then update the Epics list to
include the correct links/titles (e.g., "Create Epic", "Update Epic", "Delete
Epic" with their respective docs URLs) or add a note clarifying that
create/update/delete are only via Initiatives and not project-level; ensure the
entry names match the oauth-scopes.md wording (projects.epics:write) so the docs
stay consistent.
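One way to catch this class of drift automatically is a small consistency check that extracts links from llms-full.txt and verifies each one has a backing doc file. The URL-to-path convention below (`/a/b` maps to `docs/a/b.md` or `docs/a/b/index.md`) is an assumption, not the repo's confirmed layout:

```python
import re
from pathlib import Path

LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")
SITE_PREFIX = "https://developers.plane.so/"  # assumed base URL

def missing_docs(llms_text: str, docs_root: Path) -> list[str]:
    """Return indexed URLs that have no backing Markdown source file."""
    missing = []
    for _title, url in LINK_RE.findall(llms_text):
        if not url.startswith(SITE_PREFIX):
            continue  # skip external links
        rel = url[len(SITE_PREFIX):]
        # Assumed convention: /a/b is served from docs/a/b.md or docs/a/b/index.md
        if not ((docs_root / f"{rel}.md").is_file()
                or (docs_root / rel / "index.md").is_file()):
            missing.append(url)
    return missing
```

Run as a CI check, this would have surfaced an entry like a hypothetical "Create Epic" link pointing at a page that does not exist, or conversely could be inverted to list doc pages absent from the index.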
```
# LLMs.txt - AI-friendly site documentation
# https://llmstxt.org/
LLMs-txt: https://developers.plane.so/llms.txt
```
🧩 Analysis chain
🌐 Web query:
llmstxt.org specification robots.txt LLMs-txt directive
💡 Result:
- llmstxt.org defines `llms.txt` as a separate file placed at a site's root (`/llms.txt`) to help LLMs find and use a website's most relevant content at inference time; it is not a `robots.txt` extension and does not introduce a `robots.txt` directive like `LLMs-txt`. (llmstxt.org)
- `robots.txt` vs `llms.txt`: `robots.txt` is about crawl access rules (allow/disallow); `llms.txt` is a curated, human- and LLM-readable Markdown "map" of important resources to read/cite. (llmstxt.org)
- Core `llms.txt` spec format (in order): (llmstxt.org)
  - Required: an H1 title (project/site name)
  - Optional: a blockquote summary
  - Optional: additional non-heading Markdown (paragraphs/lists)
  - Optional: one or more H2 sections containing "file lists" (Markdown list items with a required `[name](url)` link, optionally followed by `:` notes)
- Related proposal in the spec: provide clean Markdown versions of pages by serving the same URL with `.md` appended (and for directory URLs, append `index.html.md`). (llmstxt.org)
Citations:
- [1]–[4]: https://llmstxt.org/
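The ordering summarized above (H1 required first, then optional blockquote and H2 file lists) can be checked mechanically. A minimal sketch follows; the function name is hypothetical and the rules encode only this summary, not the full spec:

```python
import re

# A file-list entry: "- [name](url)" optionally followed by ": notes"
LINK_ITEM = re.compile(r"^- \[[^\]]+\]\([^)]+\)")

def check_llms_txt(text: str) -> list[str]:
    """Flag basic structural problems per the llms.txt format summary above."""
    problems = []
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("# "):
        problems.append("first non-blank line must be an H1 title")
    in_file_list = False
    for ln in lines[1:]:
        if ln.startswith("## "):
            in_file_list = True  # everything after an H2 should be link items
        elif in_file_list and not LINK_ITEM.match(ln):
            problems.append(f"non-link line inside an H2 file list: {ln!r}")
    return problems
```

A check like this could run in CI against docs/public/llms.txt so structural regressions are caught before deploy.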
🌐 Web query:
robots.txt recognized directives standards User-agent Allow Disallow Sitemap
💡 Result:
robots.txt recognized directives & standards (User-agent / Allow / Disallow / Sitemap)
Standards-based (Robots Exclusion Protocol)
- IETF RFC 9309 (Robots Exclusion Protocol) standardizes these directives:
  - `User-agent:` starts a rule group for a crawler token
  - `Disallow:` paths that must not be accessed
  - `Allow:` paths that may be accessed; used to override a matching `Disallow` with a more specific match
- RFC 9309 also defines "most specific match wins" (longest match), and that if an allow and disallow are equally specific, Allow should win. [1]

Sitemap in robots.txt (widely supported, but separate from RFC 9309)
- `Sitemap:` is defined by the Sitemaps protocol (sitemaps.org) as a way to declare sitemap URLs in robots.txt, independent of any `User-agent` group, and you can list multiple sitemap URLs. [2]
- Google explicitly documents support for exactly these four fields in robots.txt: `user-agent`, `allow`, `disallow`, `sitemap`; other fields are ignored by Google's parser. [3]
Sources: RFC 9309 [1]; Sitemaps protocol (sitemaps.org) [2]; Google robots.txt spec/support [3].
🌐 Web query:
LLM crawler robots.txt parsing LLMs-txt LLMS directive
💡 Result:
robots.txt parsing for LLM crawlers (what’s actually standardized)
Robots.txt uses the Robots Exclusion Protocol (IETF RFC 9309). The standardized control surface is essentially:
- `User-agent: …`
- `Allow: …`
- `Disallow: …`
Crawlers pick the applicable User-agent group and then apply the most specific path match across Allow/Disallow rules. RFC 9309 does not define any special “LLM/AI” directive—LLM crawlers are controlled the same way as any other crawler: by matching their user-agent token. [1]
So there is no standardized LLMS directive in robots.txt (anything like that would be non-standard and generally ignored by compliant parsers). [1]
Common LLM-related robots.txt user-agent tokens (examples)
Vendors expose specific user-agent tokens you can target:
- OpenAI: `GPTBot` (training crawler) and `OAI-SearchBot` (search crawler), with guidance that you can allow one and disallow the other independently via robots.txt. [2]
- Google: `Google-Extended` is a robots.txt product token that publishers can use to control whether Google content may be used for training future Gemini models and/or grounding, and it "has no effect on Google Search." [3]
- Anthropic: `ClaudeBot` can be blocked via robots.txt (`User-agent: ClaudeBot` + `Disallow: /`). [4]
llms.txt / LLMs.txt (what it is, and what it isn’t)
/llms.txt is a separate, community-driven convention: a curated Markdown “map/menu” of the most important LLM-consumable resources on a site—more like a hand-crafted AI-oriented sitemap/briefing than a permissions file. It is not a robots.txt extension and not an access-control mechanism. [5] A common description/spec-style guide recommends placing it at /llms.txt and using a consistent Markdown structure (title, summary, grouped links). [6]
Practical takeaway
- If your goal is permission/opt-out: use robots.txt (RFC 9309) with vendor user-agent tokens (e.g., `GPTBot`, `ClaudeBot`, `Google-Extended`). There is no official `LLMS` directive. [1][2][3][4]
- If your goal is helping LLMs find the right docs: publish `/llms.txt` as a curated index/brief. [5][6]
Sources
[1] RFC 9309 (Robots Exclusion Protocol) https://datatracker.ietf.org/doc/html/rfc9309
[2] OpenAI – Overview of OpenAI Crawlers (GPTBot / OAI-SearchBot) https://platform.openai.com/docs/gptbot
[3] Google for Developers – Google-Extended (common crawlers) https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers
[4] Anthropic Help Center – blocking the crawler (ClaudeBot) https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler
[5] Search Engine Land – “llms.txt isn’t robots.txt” https://searchengineland.com/llms-txt-isnt-robots-txt-its-a-treasure-map-for-ai-456586
[6] LLMs.txt documentation (format/proposal overview) https://txt-llms.com/documentation
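That "silently ignored" behavior is easy to observe with Python's standard-library robots.txt parser; the hostnames below are just the ones under discussion, and any host would behave the same:

```python
from urllib import robotparser

ROBOTS = """\
User-agent: *
Disallow: /private/
Sitemap: https://developers.plane.so/sitemap.xml
LLMs-txt: https://developers.plane.so/llms.txt
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Standard directives are honored...
print(rp.can_fetch("GPTBot", "https://developers.plane.so/docs"))       # True
print(rp.can_fetch("GPTBot", "https://developers.plane.so/private/x"))  # False
print(rp.site_maps())  # ['https://developers.plane.so/sitemap.xml']

# ...while the nonstandard "LLMs-txt:" line is dropped entirely: nothing
# in the parsed state records it, so it cannot affect any crawler decision.
```

This is one stdlib parser, not proof for every crawler, but it matches the RFC 9309 behavior described above: unrecognized fields carry no meaning.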
`LLMs-txt:` is not a recognized robots.txt directive and will be silently ignored by compliant parsers.

The llms.txt spec (llmstxt.org) defines /llms.txt as a separate, curated Markdown file served at a well-known URL, not as a robots.txt extension or directive. The Robots Exclusion Protocol (RFC 9309) standardizes only the User-agent, Allow, and Disallow directives; Sitemap comes from the separate Sitemaps protocol but is widely supported. Access control for LLM crawlers is handled via standard robots.txt user-agent tokens (e.g., GPTBot, ClaudeBot, Google-Extended). Any other directive, including `LLMs-txt:`, is silently discarded by compliant parsers.
The line is functionally a dead annotation and could mislead future maintainers into thinking it serves a purpose. Converting it to a plain comment improves clarity:
🛠️ Proposed fix

```diff
-# LLMs.txt - AI-friendly site documentation
-# https://llmstxt.org/
-LLMs-txt: https://developers.plane.so/llms.txt
+# LLMs-friendly documentation (https://llmstxt.org/)
+# Available at: https://developers.plane.so/llms.txt
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/public/robots.txt` around lines 19 - 22, Remove or convert the
nonstandard "LLMs-txt: https://developers.plane.so/llms.txt" directive in the
robots.txt to a plain comment (or delete it) because it's not a valid robots.txt
directive; locate the exact string "LLMs-txt:" in the file and replace it with a
commented line (e.g., prefix with #) or remove it so only standard directives
(User-agent, Allow, Disallow, Sitemap) remain, and ensure the intended llms.txt
URL is documented elsewhere (e.g., in a README) if needed.