feat: Implement initial GitHub Action for generating issues from pred… #1
kpj2006 wants to merge 8 commits into AOSSIE-Org:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Walkthrough: Adds a new GitHub Action (manifest and src/) plus a build workflow; provides multiple issue-bank JSON manifests, an AI agent to generate issues, parser and issue-creation modules, packaging, tests, and README updates for preset and AI-driven initial-issue creation.
Sequence Diagram(s)

sequenceDiagram
participant GA as GitHub Action
participant Parser as Parser Module
participant AI as AI Service
participant Issues as Issues Module
participant GH as GitHub API
GA->>Parser: load base issues
Parser-->>GA: baseIssues
alt preset mode
GA->>Parser: get preset issues
Parser-->>GA: presetIssues
else prompt/advanced mode
GA->>AI: request issues (description, template, categories, skills, baseIssues)
AI-->>GA: generatedIssues
end
GA->>GA: merge issues, trim to max
GA->>Issues: create issues with labelPrefix
loop per issue
Issues->>GH: POST /repos/:owner/:repo/issues (title, body, labels)
GH-->>Issues: 201 Created
end
Issues-->>GA: reports created count
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ Passed checks (2 passed)
Actionable comments posted: 21
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-dist.yml:
- Around line 26-31: The workflow's run block currently performs git push
without handling failures; update the sequence around the git push step (the git
add/dist, git commit, git push commands) to detect push errors, log a clear
error message and fail the job or implement a safe fallback (e.g., retry with
git push --force-with-lease semantics or open/update a pull request using gh
CLI) when git push exits non‑zero; ensure the push invocation is wrapped to
check its exit status and emit an actionable error (and exit non‑zero) or
perform the PR fallback so failures aren't swallowed.
- Line 22: The inline YAML mapping for the GitHub Actions step uses extra spaces
inside braces ("with: { node-version: '20' }"); remove the interior spacing to
follow style conventions or convert to the multi-line mapping form under the
same key (i.e., change the inline mapping to use no extra spaces inside the
braces or replace it with a multi-line "with:" block listing "node-version:
'20'") so the workflow conforms to YAML style and linter expectations.
- Around line 23-24: Add a "build" npm script to package.json that uses
`@vercel/ncc` to bundle the entry file into the dist output; specifically, add a
"build" script that runs ncc build src/index.js -o dist (or equivalent ncc CLI)
so CI's npm run build succeeds and produces dist/index.js from src/index.js.
Ensure package.json's "scripts" contains the "build" key and that dist is
git-ignored or produced as expected by the workflow.
In @.gitignore:
- Line 128: In .gitignore the node_modules/ entry currently sits under the
"glossaries" LaTeX section; remove the node_modules/ line from that glossaries
block and append a new "Node.js" section header near the end of the file (e.g.,
a comment like "# Node.js" or similar) and add node_modules/ under it so Node.js
artifacts are clearly grouped and the glossaries section remains semantically
correct.
In `@action.yml`:
- Around line 24-36: Remove the unused inputs by deleting the custom_issues and
issue_complexity input blocks from action.yml (the keys "custom_issues" and
"issue_complexity"), and if you prefer to keep them, implement reading and using
those inputs where action inputs are parsed (e.g., add handling for
process.env.INPUT_CUSTOM_ISSUES / INPUT_ISSUE_COMPLEXITY or the equivalent input
parsing logic in the action entrypoint) and wire their values into the
issue-generation flow; also update any README or docs to reflect the change and
remove any stale references to these inputs.
In `@package.json`:
- Around line 5-8: The package.json is missing a "build" npm script required by
CI and its "main": "index.js" points to a non-existent file; add a "build"
script under "scripts" that runs your project's build step (e.g., the bundler/TS
compile command used in this repo) and update the "main" field to point to the
actual built entry (for example "dist/index.js" or the compiled output path),
ensuring the script name is "build" and the main value matches the built
artifact produced by that script.
- Line 4: The package.json "description" field currently contains HTML/markup
artifacts ("<!-- Don't delete it --> <div name=\"readme-top\"></div>"); replace
that value with a concise plain-text package description (e.g., "Short
description of the project") or an empty string, ensuring the JSON key
"description" holds only plain text without HTML/markdown so npm tooling
displays correctly.
- Around line 25-28: package.json currently lists two bundlers in
devDependencies: "@vercel/ncc" and "esbuild"; remove the unused bundler by
deleting the "esbuild" entry from devDependencies (or conversely remove
"@vercel/ncc" if you intend to use esbuild) and update any build scripts that
reference the removed tool (check npm scripts in package.json and CI workflow
files) so only the chosen bundler (e.g., "@vercel/ncc") remains.
In `@src/agent.js`:
- Around line 67-71: The code currently calls core.setFailed when the Models API
responds with !response.ok but then returns [] (in the block with response.ok
check), causing the action to be marked failed while execution continues;
replace core.setFailed(...) with core.warning(...) in that response.ok failure
branch (keep the await response.text() handling and the return [] behavior) so
the action degrades gracefully, and also add a defensive check in the caller
(the code that calls createIssues in src/index.js) to skip calling createIssues
when the returned models array is empty; reference the response.ok check, the
core.setFailed -> core.warning change, the function that returns [] on failure,
and the createIssues call in src/index.js.
- Around line 73-75: The code is accessing nested response fields and parsing
without validation; wrap the parsing in guards and validation: check response.ok
and that data.choices is an array and data.choices[0]?.message?.content exists
before JSON.parse, use try/catch around JSON.parse to handle invalid JSON,
ensure resultObj.issues is an array, and validate each issue object has the
required fields (title:string, body:string, labels:array) before returning;
filter out or normalize invalid entries (or return [] on failure) and log or
throw a clear error so downstream createIssues only receives well-formed issue
objects.
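The validation chain described in this comment can be sketched as one helper. `parseModelResponse` is a hypothetical name, not an existing function in the repo:

```javascript
// Hypothetical helper: validates the Models API response shape end to end
// and returns only well-formed issue objects, or [] on any failure.
function parseModelResponse(data) {
  // Guard nested access: choices must be an array with a string content field.
  const content = Array.isArray(data?.choices)
    ? data.choices[0]?.message?.content
    : undefined;
  if (typeof content !== 'string') return [];

  let resultObj;
  try {
    resultObj = JSON.parse(content);
  } catch {
    return []; // model emitted invalid JSON
  }

  if (!Array.isArray(resultObj?.issues)) return [];

  // Keep only entries with the required fields: title/body strings, labels array.
  return resultObj.issues.filter(
    (issue) =>
      issue !== null &&
      typeof issue === 'object' &&
      typeof issue.title === 'string' &&
      typeof issue.body === 'string' &&
      Array.isArray(issue.labels)
  );
}
```

With this in place, downstream `createIssues` only ever sees a (possibly empty) array of valid issues.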
- Around line 51-55: Update the fetch call that posts chat completions (the
const response = await fetch(...) that currently targets
"https://models.inference.ai.azure.com/chat/completions") to use the new GitHub
Models API endpoint "https://models.github.ai/inference/chat/completions" and
keep it as a POST to /inference/chat/completions; ensure the Authorization
header continues to use the Bearer token (the token variable) and add the
required headers "Accept: application/vnd.github+json" and
"X-GitHub-Api-Version: 2026-03-10"; verify callers supply a GitHub token with
models:read permission and update the headers object in the same fetch call to
include these two new headers.
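A minimal sketch of the updated request, with the fetch implementation injectable so it can be exercised without the network. The endpoint and header values are copied from this comment and should be verified against the current GitHub Models documentation; `requestChatCompletion` is a hypothetical name:

```javascript
// Endpoint per the comment above; confirm against GitHub Models docs.
const MODELS_ENDPOINT = 'https://models.github.ai/inference/chat/completions';

async function requestChatCompletion(token, payload, fetchImpl = fetch) {
  return fetchImpl(MODELS_ENDPOINT, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,          // token needs models:read
      Accept: 'application/vnd.github+json',
      'X-GitHub-Api-Version': '2026-03-10',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
}
```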
In `@src/index.js`:
- Around line 36-44: The code is passing the GitHub token variable (token from
github_token) into getAIIssues which authenticates against Azure/OpenAI—replace
this by adding a separate AI API key input (e.g., ai_token or openai_token),
parse it where inputs are read to create a new aiToken variable, and pass
aiToken into getAIIssues instead of token; also update the getAIIssues
signature/implementation to accept and use this aiToken for Azure/OpenAI
authentication rather than the GitHub token.
- Line 14: The parseInt call that sets the maxIssues constant should specify the
radix to avoid ambiguous parsing; update the assignment of maxIssues (where you
call parseInt(core.getInput('max_issues'))) to pass 10 as the second argument
(or replace with Number(...) if you prefer) so the input is always parsed as
decimal.
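For illustration (the input value here is made up), the radix-explicit form looks like:

```javascript
// Explicit radix avoids ambiguity with leading zeros or exotic prefixes.
const raw = '012';                    // e.g. from core.getInput('max_issues')
const maxIssues = parseInt(raw, 10);  // always parsed as decimal
// Number(raw) is an equivalent alternative for plain decimal strings.
```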
- Line 16: Validate the github_token immediately after const token =
core.getInput('github_token') and fail fast if it's missing or empty: check
token === '' (or falsy), call core.setFailed with a clear message like
"github_token is required" (or throw an Error) and return to prevent subsequent
calls to createIssues and getAIIssues from running without credentials; update
any callers that assume token exists to rely on this early guard.
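A sketch of that fail-fast guard, with `core` stubbed so the snippet runs standalone (the real code would use `@actions/core`):

```javascript
// Stubbed stand-in for @actions/core, just for this sketch.
const core = {
  inputs: { github_token: '' },   // simulate a missing token
  getInput(name) { return this.inputs[name] || ''; },
  setFailed(msg) { this.failedWith = msg; },
};

function run() {
  const token = core.getInput('github_token');
  if (!token) {
    core.setFailed('github_token is required');
    return false;                 // stop before createIssues / getAIIssues run
  }
  return true;
}
```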
In `@src/issues.js`:
- Around line 17-19: The code currently mutates the original issue.labels array
by calling push(labelPrefix) on labels (derived from issue.labels); instead make
a copy before modifying so issue.labels is not changed—e.g., assign labels to a
shallow copy of issue.labels (using slice() or spread: labels = (issue.labels ||
[]).slice() or labels = [...(issue.labels || [])]) and then add labelPrefix (or
use concat to create a new array) so only the local labels variable is modified;
update the logic around the labels variable (where labels is defined/used) to
use this copied array instead of mutating issue.labels directly.
- Line 27: The use of Object.values(labels) is redundant because labels is
expected to be an array; update the cleanup to handle both shapes explicitly: if
labels is an array use labels.filter(l => typeof l === 'string'), otherwise if
labels is a plain object use Object.values(labels).filter(...); modify the code
around the labels variable in src/issues.js (the line that currently reads
labels: Object.values(labels).filter(...)) to perform an Array.isArray check or
a small normalization helper so you only call Object.values when labels is an
object.
- Around line 22-28: Validate that issue.title and issue.body are non-empty
strings and that labels resolves to an array of strings before calling
octokit.rest.issues.create; if any required field is missing or invalid,
log/throw a clear error and skip the API call. In practice, check the issue
object (issue.title, issue.body) and the computed labels variable
(Object.values(labels).filter(...)) inside the function where
octokit.rest.issues.create is invoked, coerce or map alternate keys if needed,
and only call octokit.rest.issues.create when title/body are present and labels
is a valid string array.
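The three src/issues.js comments above (avoid mutation, normalize the labels shape, validate before the API call) can be handled together. `prepareIssue` is a hypothetical helper, not the module's actual API:

```javascript
// Returns a safe payload for octokit.rest.issues.create, or null to skip.
function prepareIssue(issue, labelPrefix) {
  if (
    !issue ||
    typeof issue.title !== 'string' || issue.title.trim() === '' ||
    typeof issue.body !== 'string' || issue.body.trim() === ''
  ) {
    return null; // caller skips the API call for this entry
  }

  // Copy rather than mutate issue.labels; accept array, object, or missing.
  const raw = issue.labels;
  const labels = Array.isArray(raw)
    ? raw.slice()
    : raw && typeof raw === 'object'
      ? Object.values(raw)
      : [];
  if (labelPrefix) labels.push(labelPrefix);

  return {
    title: issue.title,
    body: issue.body,
    labels: labels.filter((l) => typeof l === 'string'),
  };
}
```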
In `@src/parser.js`:
- Line 23: Replace the core.info call that logs missing preset files with
core.warning so the message appears as a GitHub Actions warning: change the call
site using core.info(`Warning: Preset file ${issuePath} not found. Falling back
to default.`) to core.warning and update the string to remove the redundant
"Warning:" prefix (e.g. core.warning(`Preset file ${issuePath} not found.
Falling back to default.`)); this change touches the call referencing core.info
and the issuePath interpolation.
- Line 8: The JSON.parse calls that read files (notably in getBaseIssues and the
two other file-reading functions that use JSON.parse on lines 32 and 52) are
unprotected and will throw on malformed JSON; wrap each fs.readFileSync(... ) +
JSON.parse(...) sequence in a try/catch, and on error log the file path and the
parse error (e.g., console.error or the module logger) and return an empty array
(or appropriate default) instead of letting the exception propagate. Ensure you
update the return paths in getBaseIssues and the two other parsing functions to
return [] on parse failure and keep successful behavior unchanged.
- Around line 50-55: The loop that reads and parses JSON files (the for (const
file of files) block that calls fs.readFileSync and JSON.parse and pushes into
banks) lacks error handling; wrap the file read/parse in a try-catch so a single
malformed JSON file doesn’t throw the whole function, log or warn about the
filename and error (using the existing logger or console.warn), and skip pushing
that file into banks (continue) when an exception occurs.
In `@test.js`:
- Around line 1-9: The current test directly requires ./dist/index.js causing
actual runtime behavior (including octokit.rest.issues.create and calls to
models.inference.ai.azure.com) to execute; rewrite this as a unit test using a
test framework (e.g., Jest) that sets the same env vars (INPUT_MODE,
INPUT_PRESET, INPUT_GITHUB_TOKEN, GITHUB_REPOSITORY, GITHUB_WORKSPACE) but
imports or requires the specific exported functions/entrypoint from your module
instead of executing the whole file, and mock external calls: stub
`@actions/github/octokit` methods (e.g., octokit.rest.issues.create) and HTTP
calls (global.fetch or node-fetch/axios or use nock) to return controlled
responses so no real GitHub or AI API requests are made; ensure assertions
verify expected behavior and side effects rather than relying on manual
inspection.
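A sketch of such a mock-based test, using plain Node assertions rather than Jest so it stays self-contained. `createIssues` here is a stand-in for the function the real test would require from src/issues.js:

```javascript
// Stand-in for the module's exported function; the real test would
// require it from src/issues.js and inject the mock octokit.
async function createIssues(octokit, issues) {
  let created = 0;
  for (const issue of issues) {
    const res = await octokit.rest.issues.create({ owner: 'o', repo: 'r', ...issue });
    if (res.status === 201) created += 1;
  }
  return created;
}

// Mock octokit records calls instead of hitting api.github.com.
function makeFakeOctokit(calls) {
  return {
    rest: {
      issues: {
        create: async (params) => { calls.push(params); return { status: 201 }; },
      },
    },
  };
}
```

The same injection idea applies to the AI call: stub `global.fetch` (or use nock) so no request ever reaches a real model endpoint.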
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: bb3b5491-94c6-42ec-a5e8-691c2420d663
⛔ Files ignored due to path filters (9)
- dist/index.js is excluded by !**/dist/**
- dist/issue-banks/_base/issues.json is excluded by !**/dist/**
- dist/issue-banks/backend/default.json is excluded by !**/dist/**
- dist/issue-banks/backend/express.json is excluded by !**/dist/**
- dist/issue-banks/frontend/default.json is excluded by !**/dist/**
- dist/issue-banks/frontend/nextjs.json is excluded by !**/dist/**
- dist/issues.json is excluded by !**/dist/**
- dist/licenses.txt is excluded by !**/dist/**
- package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
- .github/workflows/build-dist.yml
- .gitignore
- action.yml
- issue-banks/_base/issues.json
- issue-banks/backend/default.json
- issue-banks/backend/express.json
- issue-banks/frontend/default.json
- issue-banks/frontend/nextjs.json
- package.json
- src/agent.js
- src/index.js
- src/issues.js
- src/parser.js
- test.js
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
.github/workflows/build-dist.yml (1)
23-24: ⚠️ Potential issue | 🔴 Critical

`npm run build` is currently a guaranteed CI failure. Line 24 runs `npm run build`, but `package.json` (scripts, Lines 5-7 in the provided snippet) has no `build` entry, so this workflow will fail before the `dist/` commit logic executes.

Proposed fix (`package.json`):

```diff
 "scripts": {
-  "test": "echo \"Error: no test specified\" && exit 1"
+  "test": "echo \"Error: no test specified\" && exit 1",
+  "build": "ncc build src/index.js -o dist"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-dist.yml around lines 23 - 24, The workflow currently runs "npm run build" but package.json has no "build" script, causing CI failures; either add a proper "build" script to package.json's "scripts" section (e.g., a command that produces dist/ used by the later commit logic) or remove/replace the "npm run build" step in the workflow with the correct script name (or a direct build command) so the CI no longer fails before the dist/ commit logic; update either the workflow step "npm run build" or the package.json "scripts" to make these names consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-dist.yml:
- Around line 12-20: The workflow's build job performs a git push but doesn't
declare permissions, so add explicit write permission for repository contents by
adding a permissions block (permissions: contents: write) for the build job (or
at top-level) to ensure the GITHUB_TOKEN can push; update the job named "build"
in .github/workflows/build-dist.yml to include this permissions entry so the
push step succeeds on repos with restricted default GITHUB_TOKEN permissions.
- Around line 7-10: The workflow's paths filter currently lists 'src/**',
'issue-banks/**', and 'package.json' but omits the lockfile, so changes to
package-lock.json won't trigger a rebuild; update the paths array in
.github/workflows/build-dist.yml to include 'package-lock.json' alongside
'package.json' to ensure npm ci-triggered dependency changes rebuild dist/ and
prevent stale artifacts.
---
Duplicate comments:
In @.github/workflows/build-dist.yml:
- Around line 23-24: The workflow currently runs "npm run build" but
package.json has no "build" script, causing CI failures; either add a proper
"build" script to package.json's "scripts" section (e.g., a command that
produces dist/ used by the later commit logic) or remove/replace the "npm run
build" step in the workflow with the correct script name (or a direct build
command) so the CI no longer fails before the dist/ commit logic; update either
the workflow step "npm run build" or the package.json "scripts" to make these
names consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f637405f-c9b9-4640-9c34-ce155d5a4637
📒 Files selected for processing (1)
.github/workflows/build-dist.yml
…itHub issue creation with CI/CD and testing infrastructure.
Actionable comments posted: 7
♻️ Duplicate comments (3)
package.json (1)
5-8: ⚠️ Potential issue | 🟠 Major

Fix package entrypoint to match the built artifact. Line 5 points to `index.js`, but the build on Line 8 outputs `dist/index.js`. This can break consumers resolving the package entry.

Proposed fix:

```diff
- "main": "index.js",
+ "main": "dist/index.js",
```

Verification script:

```bash
#!/bin/bash
# Verify entrypoint consistency for package main vs build output
jq -r '.main, .scripts.build' package.json
fd -t f '^index\.js$'
fd -t f '^dist/index\.js$'
# Expected:
# - main should reference dist/index.js if root index.js is absent
# - build script should produce dist/index.js
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 5 - 8, Update the package.json "main" field to point to the built artifact produced by the build script: change "main": "index.js" to "main": "dist/index.js" (so package consumers resolve the compiled output), and verify the build script in "scripts.build" (ncc build src/index.js -o dist) indeed generates dist/index.js; if your package intentionally ships a root index.js, ensure that file exists or adjust the build/script accordingly so "main" and the build output remain consistent.

src/parser.js (1)
37-43: ⚠️ Potential issue | 🟠 Major

Protect preset JSON parsing and schema access. `JSON.parse` and `content.issues` access are unguarded; malformed files will throw and break flow unpredictably.

Proposed fix:

```diff
-  const content = JSON.parse(fs.readFileSync(issuePath, 'utf8'));
-  const issues = [];
-
-  // Extract all categories of issues in the bank
-  for (const key in content.issues) {
-    issues.push(...content.issues[key]);
-  }
+  let content;
+  try {
+    content = JSON.parse(fs.readFileSync(issuePath, 'utf8'));
+  } catch (error) {
+    throw new Error(`Failed to parse preset file ${issuePath}: ${error.message}`);
+  }
+
+  const issues = [];
+  const issueGroups = content?.issues;
+  if (!issueGroups || typeof issueGroups !== 'object') {
+    throw new Error(`Invalid preset schema in ${issuePath}: missing "issues" object`);
+  }
+
+  for (const key in issueGroups) {
+    if (Array.isArray(issueGroups[key])) {
+      issues.push(...issueGroups[key]);
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parser.js` around lines 37 - 43, Wrap the file read + JSON.parse of issuePath in a try/catch to handle malformed JSON and avoid throwing; on parse error log the issuePath and error and return an empty issues array (or propagate a clear error) instead of letting the process crash. After successful parse, validate that content.issues exists and is an object (e.g., typeof content.issues === 'object' && content.issues !== null) before iterating; if it's missing or invalid, log a warning referencing issuePath and skip the for-in loop so issues remains empty. Use the existing symbols JSON.parse, fs.readFileSync, issuePath, content, and issues to locate and apply the change.

action.yml (1)
24-36: ⚠️ Potential issue | 🟠 Major

Remove or implement incomplete action inputs before release. `custom_issues` and `issue_complexity` are exposed but not implemented (also noted on Line 46). This creates a misleading public contract for action users.

Proposed fix (remove until implemented):

```diff
-  custom_issues:
-    description: 'Path to repo-specific custom issues json'
-    required: false
...
-  issue_complexity:
-    description: 'Comma-separated complexities to filter (e.g. beginner, intermediate, advanced)'
-    required: false
...
-# in this some of the inputs are not fully implemented like custom_issues and issue_complexity(we will work on it later)
```

Also applies to: 46-46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@action.yml` around lines 24 - 36, The action exposes inputs custom_issues and issue_complexity in action.yml but they are not implemented; either remove these inputs from action.yml to avoid a misleading public contract, or implement their handling in the action runtime where inputs are read (e.g., the code that calls core.getInput / process.env parsing) so custom_issues and issue_complexity are actually used (validate values, parse comma-separated complexities, and wire into the logic that filters/generates issues). Ensure any default/required metadata (like default for max_issues or description strings) stays consistent with the implemented behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent.js`:
- Around line 5-6: The parsed lists for categories and skills can contain empty
strings; update the parsing for categoriesStr and skillsStr so that after
splitting and trimming you filter out empty entries (e.g., use .filter(Boolean)
or .filter(s => s.length > 0)) when assigning to the categories and skills
variables to ensure no empty values propagate downstream.
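A sketch of the suggested parsing (the sample input is made up):

```javascript
// Trim each entry, then drop empties so "a, , b," yields no blank values.
const categoriesStr = 'frontend, , backend,';
const categories = categoriesStr
  .split(',')
  .map((s) => s.trim())
  .filter(Boolean);
```

The same pattern applies unchanged to `skillsStr`/`skills`.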
- Around line 51-67: The fetch call that posts to
"https://models.github.ai/inference/chat/completions" has no timeout and can
hang; wrap the request with an AbortController, pass its signal into fetch,
start a timer (e.g., setTimeout) to abort after a reasonable timeout value, and
clear the timer once the response arrives; update handling around the existing
response/try-catch to catch and treat AbortError (or aborted requests) as a
timeout failure. Locate the fetch invocation (the block that creates response
using token, systemPrompt and userPrompt) and add the AbortController, timeout
setup, signal: controller.signal in fetch options, and proper cleanup/error
handling.
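A sketch of that timeout wrapper; `fetchWithTimeout` is a hypothetical name, the 30-second default is an arbitrary choice, and `fetchImpl` is injectable so the sketch needs no network:

```javascript
// Abort the request after timeoutMs; always clear the timer afterwards.
async function fetchWithTimeout(url, options, timeoutMs = 30000, fetchImpl = fetch) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchImpl(url, { ...options, signal: controller.signal });
  } catch (error) {
    if (error.name === 'AbortError') {
      // Surface aborts as an explicit timeout failure.
      throw new Error(`Request to ${url} timed out after ${timeoutMs}ms`);
    }
    throw error;
  } finally {
    clearTimeout(timer); // cleanup so the process can exit promptly
  }
}
```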
In `@src/index.js`:
- Line 14: The current parsing of max_issues makes 0 impossible and allows
negatives; replace the simple parseInt fallback with explicit validation: read
the raw input, parse with parseInt, then if the parsed value is a finite integer
and >= 0 use it, otherwise fall back to the default (15); ensure you preserve 0
(do not treat it as falsy) and clamp or reject negative values so the later
slice(0, maxIssues) call behaves correctly (update the maxIssues binding in
src/index.js accordingly).
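A sketch of validation that preserves 0 and rejects negatives; `parseMaxIssues` is a hypothetical name and the default of 15 follows this comment:

```javascript
// Accept any finite integer >= 0 (including 0); otherwise fall back.
function parseMaxIssues(raw, fallback = 15) {
  const n = parseInt(raw, 10);
  return Number.isInteger(n) && n >= 0 ? n : fallback;
}
```

Because 0 passes through, a later `issues.slice(0, maxIssues)` correctly yields an empty list rather than silently using the default.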
In `@src/issues.js`:
- Around line 21-31: The current label normalization uses the spread operator on
issue.labels (let labels = [...(issue.labels || [])]) which throws if
issue.labels is an object; change it to explicitly normalize issue.labels to an
array before spreading: if Array.isArray(issue.labels) use a shallow copy, else
if issue.labels is an object use Object.values(issue.labels), otherwise use an
empty array; then push labelPrefix (if present) and keep the existing filter
step before calling octokit.rest.issues.create so labels is always an array of
strings; update references to labels, issue.labels and labelPrefix in
src/issues.js accordingly.
In `@src/parser.js`:
- Around line 20-25: The code constructs issuePath using user-controlled
preset/category/framework which allows path traversal; fix by validating and
normalizing inputs (preset/category/framework) and by resolving the computed
path against the known base directory before using it: restrict allowed
characters (e.g., /^[A-Za-z0-9_-]+$/) or map presets to a whitelist, then build
the path with path.join and path.resolve and assert
resolvedPath.startsWith(baseDir) and that path.isAbsolute(resolvedPath) is false
before reading; apply the same validation/resolution logic to the other
file-resolution usage mentioned (lines around the other path.join at 48-50) and
use issuePath only after these checks succeed.
- Around line 32-35: Replace the current early-exit that calls core.setFailed
and returns an empty array when the preset file is missing with a thrown error
so the run aborts immediately: in the block that checks fs.existsSync(issuePath)
(referencing issuePath and core.setFailed in src/parser.js), remove the return
[] and instead throw a new Error with the same message (e.g., `throw new
Error(\`Preset category file not found: ${issuePath}\`)`) so downstream code
(e.g., whatever calls this parser/generate function) cannot continue and partial
issues won't be produced.
In `@test.js`:
- Around line 228-233: Current test uses a fixed 200ms sleep after requiring
'./src/index' which makes it flaky; instead export the async entrypoint function
(e.g., run) from src/index.js and update the test to require that module inside
jest.isolateModules and await the exported run() directly (replace the
setTimeout Promise). Ensure src/index.js exposes run (module.exports.run or
export async function run) and the test calls and awaits that function so the
test deterministically waits for completion.
---
Duplicate comments:
In `@action.yml`:
- Around line 24-36: The action exposes inputs custom_issues and
issue_complexity in action.yml but they are not implemented; either remove these
inputs from action.yml to avoid a misleading public contract, or implement their
handling in the action runtime where inputs are read (e.g., the code that calls
core.getInput / process.env parsing) so custom_issues and issue_complexity are
actually used (validate values, parse comma-separated complexities, and wire
into the logic that filters/generates issues). Ensure any default/required
metadata (like default for max_issues or description strings) stays consistent
with the implemented behavior.
In `@package.json`:
- Around line 5-8: Update the package.json "main" field to point to the built
artifact produced by the build script: change "main": "index.js" to "main":
"dist/index.js" (so package consumers resolve the compiled output), and verify
the build script in "scripts.build" (ncc build src/index.js -o dist) indeed
generates dist/index.js; if your package intentionally ships a root index.js,
ensure that file exists or adjust the build/script accordingly so "main" and the
build output remain consistent.
In `@src/parser.js`:
- Around line 37-43: Wrap the file read + JSON.parse of issuePath in a try/catch
to handle malformed JSON and avoid throwing; on parse error log the issuePath
and error and return an empty issues array (or propagate a clear error) instead
of letting the process crash. After successful parse, validate that
content.issues exists and is an object (e.g., typeof content.issues === 'object'
&& content.issues !== null) before iterating; if it's missing or invalid, log a
warning referencing issuePath and skip the for-in loop so issues remains empty.
Use the existing symbols JSON.parse, fs.readFileSync, issuePath, content, and
issues to locate and apply the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ad784c2c-4ff4-4eb7-8094-83d84d943b15
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (10)
- .github/workflows/build-dist.yml
- .gitignore
- README.md
- action.yml
- package.json
- src/agent.js
- src/index.js
- src/issues.js
- src/parser.js
- test.js
…sues using AI or predefined presets.
Actionable comments posted: 3
♻️ Duplicate comments (2)
src/parser.js (1)
42-48: ⚠️ Potential issue | 🟠 Major

Guard preset file parsing and `content.issues` shape. This path still assumes valid JSON and expected schema; malformed file content can crash execution.

Proposed fix:

```diff
-  const content = JSON.parse(fs.readFileSync(issuePath, 'utf8'));
+  let content;
+  try {
+    content = JSON.parse(fs.readFileSync(issuePath, 'utf8'));
+  } catch (error) {
+    core.warning(`Failed to parse preset issues file ${issuePath}: ${error.message}`);
+    return [];
+  }
+  if (!content || typeof content !== 'object' || !content.issues || typeof content.issues !== 'object') {
+    core.warning(`Preset issues file has invalid schema: ${issuePath}`);
+    return [];
+  }
   const issues = [];

   // Extract all categories of issues in the bank
-  for (const key in content.issues) {
-    issues.push(...content.issues[key]);
+  for (const key in content.issues) {
+    if (Array.isArray(content.issues[key])) {
+      issues.push(...content.issues[key]);
+    }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/parser.js` around lines 42 - 48, Wrap the file read/parse and iteration in a safe guard: when reading issuePath use a try/catch around JSON.parse(fs.readFileSync(issuePath, 'utf8')) to catch malformed JSON and log/handle the error, and validate that the parsed content has the expected shape (e.g., content is an object and content.issues is an object or array) before iterating; in the code that currently references content and content.issues (the variables content, issues, and issuePath in parser.js) bail out or use a safe fallback (empty issues array) if the structure is missing or unexpected so the for (const key in content.issues) block cannot throw.

src/agent.js (1)
134-136: ⚠️ Potential issue | 🟠 Major

Avoid `setFailed()` plus `return []` in this helper. This function marks the action failed but still returns a normal value, so callers can continue and emit success logs. Prefer throwing and letting the entrypoint own failure state.

Proposed fix:

```diff
 } catch (error) {
-  core.setFailed(`Error during AI issue generation: ${error.message}`);
-  return [];
+  throw new Error(`Error during AI issue generation: ${error.message}`);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent.js` around lines 134 - 136, The catch block that logs "Error during AI issue generation: ${error.message}" should not call core.setFailed and return a normal value; instead rethrow the error so the entrypoint controls action failure. Replace the current catch body (the one that does core.setFailed(...) and return []) with a rethrow (e.g., throw error) or throw a wrapped Error to preserve context, ensuring callers cannot continue as if the helper succeeded.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent.js`:
- Around line 118-131: The filter assumes each entry in resultObj.issues is an
object and accesses issue.title/issue.body which will throw for null/non-object
entries; update the filter used to create validIssues to first test that typeof
issue === 'object' && issue !== null (or Array.isArray check if relevant) and if
not, emit core.warning(`Issue at index ${i} is not an object, skipping.`) and
return false; keep the existing checks for title/body and the labels fallback
(issue.labels = [] when not an array) after that guard so non-object entries are
safely skipped.
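The guard described in this prompt can be sketched as a standalone filter; here `warn` stands in for `core.warning`:

```javascript
// Filters AI-returned entries: drops non-objects and entries missing
// title/body, and normalizes a missing labels array to [].
function filterValidIssues(issues, warn = console.warn) {
  return issues.filter((issue, i) => {
    if (typeof issue !== 'object' || issue === null || Array.isArray(issue)) {
      warn(`Issue at index ${i} is not an object, skipping.`);
      return false;
    }
    if (typeof issue.title !== 'string' || typeof issue.body !== 'string') {
      warn(`Issue at index ${i} is missing title or body, skipping.`);
      return false;
    }
    if (!Array.isArray(issue.labels)) {
      issue.labels = []; // fall back to an empty label list
    }
    return true;
  });
}
```

The non-object check runs first, so `issue.title` is never dereferenced on `null` or a string.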
In `@src/issues.js`:
- Around line 15-39: The loop in src/issues.js can throw again in the catch
because it references issue.title and error.message unsafely; before the try set
a safeTitle variable (e.g., const safeTitle = issue && typeof issue.title ===
'string' ? issue.title : '<unknown>') and use that for logging and for the
create call fallback (body/labels can use defaults). In the catch block avoid
direct property access on error and issue—use a safe error string like (error &&
error.message) || String(error) and the safeTitle variable when emitting
core.warning so the catch cannot itself throw; also keep the existing label
normalization but ensure labels is always an array of strings before calling
octokit.rest.issues.create.
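Putting the prompt's pieces together, a sketch of a creation loop whose catch path cannot itself throw; `octokit.rest.issues.create` matches the real Octokit method, while `warn` and the function's overall shape are illustrative:

```javascript
// Creates issues one by one; any per-issue failure is logged with safe
// fallbacks instead of crashing the loop.
async function createIssuesSafely(octokit, owner, repo, issues, warn = console.warn) {
  let created = 0;
  for (const issue of issues) {
    // Computed before the try so the catch can log it safely.
    const safeTitle =
      issue && typeof issue.title === 'string' ? issue.title : '<unknown>';
    try {
      const labels = Array.isArray(issue.labels)
        ? issue.labels.filter((l) => typeof l === 'string')
        : [];
      await octokit.rest.issues.create({
        owner,
        repo,
        title: safeTitle,
        body: typeof issue.body === 'string' ? issue.body : '',
        labels,
      });
      created++;
    } catch (error) {
      // Catch values may be non-Error, so never dereference .message blindly.
      const safeMessage = (error && error.message) || String(error);
      warn(`Failed to create issue "${safeTitle}": ${safeMessage}`);
    }
  }
  return created;
}
```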
In `@test.js`:
- Around line 189-213: Update the unit test for getAIIssues to include
non-object entries in the AI-returned issues array (e.g., null and a string like
"text") so the function is exercised for skipping non-objects; modify the mocked
fetch json payload inside the existing test 'filters out issues missing required
fields' (or add a new test) to have issues: [ { title: 'Good', body: 'ok',
labels: [] }, null, "text", { body: 'no title', labels: [] } ] and assert the
result still contains only the valid issue (expect length 1 and title 'Good'),
ensuring getAIIssues safely ignores non-object entries.
---
Duplicate comments:
In `@src/agent.js`:
- Around line 134-136: The catch block that logs "Error during AI issue
generation: ${error.message}" should not call core.setFailed and return a normal
value; instead rethrow the error so the entrypoint controls action failure.
Replace the current catch body (the one that does core.setFailed(...) and return
[]) with a rethrow (e.g., throw error) or throw a wrapped Error to preserve
context, ensuring callers cannot continue as if the helper succeeded.
In `@src/parser.js`:
- Around line 42-48: Wrap the file read/parse and iteration in a safe guard:
when reading issuePath use a try/catch around
JSON.parse(fs.readFileSync(issuePath, 'utf8')) to catch malformed JSON and
log/handle the error, and validate that the parsed content has the expected
shape (e.g., content is an object and content.issues is an object or array)
before iterating; in the code that currently references content and
content.issues (the variables content, issues, and issuePath in parser.js) bail
out or use a safe fallback (empty issues array) if the structure is missing or
unexpected so the for (const key in content.issues) block cannot throw.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 38fc35b4-8f92-42a0-9cb2-3ddb61d8856d
📒 Files selected for processing (5)
- src/agent.js
- src/index.js
- src/issues.js
- src/parser.js
- test.js
… a utility for creating issues on GitHub.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/issues.js (1)
**37-42:** ⚠️ Potential issue | 🟡 Minor

**Make catch logging non-throwing for non-Error values.**

Line 41 assumes `error.message` exists. In JS, catch values can be non-Error, so this can throw again inside the catch path.

Suggested fix
```diff
-      core.warning(`Failed to create issue "${safeTitle}": ${error.message}`);
+      const safeMessage =
+        error && typeof error === 'object' && 'message' in error
+          ? String(error.message)
+          : String(error);
+      core.warning(`Failed to create issue "${safeTitle}": ${safeMessage}`);
```

```shell
#!/bin/bash
# Verify current catch-path logging dereferences `error.message` directly.
cat -n src/issues.js | sed -n '35,43p'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/issues.js` around lines 37 - 42, The catch block in the create-issue flow dereferences error.message directly which can throw if the thrown value isn't an Error; update the catch to build a safeError string (e.g., use (error && error.message) || String(error) or JSON.stringify fallback) and pass that into core.warning instead of error.message; modify the same catch that computes safeTitle (refer to safeTitle and the core.warning call) so the logged message becomes `Failed to create issue "<safeTitle>": <safeError>` ensuring the catch path itself cannot throw.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent.js`:
- Around line 17-21: The prompt requires base issues be mandatory but the code
only returns the model's `validIssues`; fix by loading and parsing the
`_base/issues.json` file at runtime and programmatically merging its entries
into the final issue list before returning (use the same dedupe logic applied to
`validIssues`), ensuring every entry from the base file is present in the
returned array; if `maxIssues` must be enforced, trim only the non-base
(model-selected) issues so base issues are never dropped. Reference the
variables/values `systemPrompt`, `validIssues`, and `maxIssues` when making this
change so the merge occurs after model output is parsed and before the function
returns.
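The merge this prompt describes can be sketched as a pure function. Deduping by `title` is an assumption for illustration; the real code may key on something else:

```javascript
// Merges mandatory base issues with model output: base issues are always
// kept, duplicates (by title) are dropped, and only the model-selected
// extras are trimmed to respect maxIssues.
function mergeWithBaseIssues(baseIssues, validIssues, maxIssues) {
  const seen = new Set(baseIssues.map((i) => i.title));
  const extras = validIssues.filter((i) => !seen.has(i.title));
  const room = Math.max(0, maxIssues - baseIssues.length);
  return [...baseIssues, ...extras.slice(0, room)];
}
```

Because trimming applies only to `extras`, enforcing `maxIssues` can never drop an entry from the base file.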
In `@test.js`:
- Around line 172-187: Add a new Jest test that exercises the timeout/AbortError
branch of getAIIssues by mocking global.fetch to reject with an object whose
name is 'AbortError', then call getAIIssues (same signature as other tests) and
assert it returns [] and that core.warning was called with a message indicating
a timeout (e.g., expect.stringContaining('timed out') or 'AbortError');
reference the existing test pattern for getAIIssues, use the same setup/teardown
for global.fetch and core.warning, and name the test clearly like "returns []
and warns on AbortError timeout from AI".
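Outside Jest, the branch under test can be sketched with a stand-in `getAIIssues` and an injected fetch; the real suite would mock `global.fetch` exactly as the prompt describes, and this stand-in is not the action's implementation:

```javascript
// Minimal stand-in for the timeout branch: an AbortError from fetch is
// turned into a warning plus an empty result; other errors propagate.
async function getAIIssues(fetchFn, warn = console.warn) {
  try {
    const res = await fetchFn();
    return await res.json();
  } catch (error) {
    if (error && error.name === 'AbortError') {
      warn('AI request timed out (AbortError)');
      return [];
    }
    throw error;
  }
}
```

A plain-assert check of the branch then rejects the fetch with an object whose `name` is `'AbortError'` and verifies the `[]` result and the warning text.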
---
Duplicate comments:
In `@src/issues.js`:
- Around line 37-42: The catch block in the create-issue flow dereferences
error.message directly which can throw if the thrown value isn't an Error;
update the catch to build a safeError string (e.g., use (error && error.message)
|| String(error) or JSON.stringify fallback) and pass that into core.warning
instead of error.message; modify the same catch that computes safeTitle (refer
to safeTitle and the core.warning call) so the logged message becomes `Failed to
create issue "<safeTitle>": <safeError>` ensuring the catch path itself cannot
throw.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 651934b0-99a8-4cb2-aeea-be05290ad121
📒 Files selected for processing (3)
- src/agent.js
- src/issues.js
- test.js
…d add functionality to create GitHub issues.
…chitecture diagram, user journeys, and setup instructions.
…efined banks.
Addressed Issues:
Fixes #(issue number)
Screenshots/Recordings:
Additional Notes:
Checklist
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit