AI: Adds a new /generate-tests Claude command that automates the generation of integration tests #2706
Conversation
Pipeline controller notification: This repository is configured in LGTM mode.
Review Summary by Qodo: Add `/generate-tests` Claude command for API integration test generation
Walkthrough

Description:
- Adds `/generate-tests` Claude command for automated integration test generation
- Analyzes Go types, validation markers, CRD manifests, and CEL rules
- Generates comprehensive `.testsuite.yaml` test suites for API variants
- Covers minimal valid creates, field validation, updates, immutability, and feature gates
- Preserves existing tests and only adds tests for new or modified fields

Diagram:

```mermaid
flowchart LR
    A["API Types & CRD Manifests"] -->|Extract validation rules| B["Analyze Types"]
    B -->|Read existing tests| C["Check Coverage"]
    C -->|Generate test suites| D["Create .testsuite.yaml"]
    D -->|Validate YAML| E["Output Test Files"]
```
File Changes: 1. `.claude/commands/generate-tests.md`
📝 Walkthrough: Added a new documentation file. 🚥 Pre-merge checks: ✅ 3 passed.
Hello @chiragkyal! Some important instructions when contributing to openshift/api:
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@AGENTS.md`:
- Around line 129-140: The fenced code blocks in the AGENTS.md snippet (the
usage line "/generate-tests <path-to-types-file-or-api-directory>" and the
example lines "/generate-tests config/v1/types_infrastructure.go" and
"/generate-tests operator/v1") are missing language identifiers and trigger
MD040; update each triple-backtick fence around these blocks to include a
language token like "text" (i.e., change the opening ``` to ```text for the
three code fences that surround the usage line and the example block) so the
markdown linter no longer flags them.
> ```
> /generate-tests <path-to-types-file-or-api-directory>
> ```
>
> Generates comprehensive `.testsuite.yaml` integration test files for OpenShift API type definitions:
> - Reads Go types, validation markers, CRD manifests, and CEL rules
> - Generates test suites for each CRD variant in `zz_generated.featuregated-crd-manifests/`
>
> Examples:
> ```
> /generate-tests config/v1/types_infrastructure.go
> /generate-tests operator/v1
> ```
Add language identifiers to fenced code blocks.
Markdownlint MD040 is triggered because these fences omit a language; add one (e.g., text) to keep lint clean.
🛠️ Proposed fix

````diff
-```
+```text
 /generate-tests <path-to-types-file-or-api-directory>
-```
+```
 ...
-```
+```text
 /generate-tests config/v1/types_infrastructure.go
 /generate-tests operator/v1
-```
+```
````

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 129-129: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 137-137: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@AGENTS.md` around lines 129 - 140, The fenced code blocks in the AGENTS.md
snippet (the usage line "/generate-tests <path-to-types-file-or-api-directory>"
and the example lines "/generate-tests config/v1/types_infrastructure.go" and
"/generate-tests operator/v1") are missing language identifiers and trigger
MD040; update each triple-backtick fence around these blocks to include a
language token like "text" (i.e., change the opening ``` to ```text for the
three code fences that surround the usage line and the example block) so the
markdown linter no longer flags them.
Code Review by Qodo
1. TARGET_PATH lacks validation
> ```bash
> REPO_ROOT=$(git rev-parse --show-toplevel)
> TARGET_PATH="$ARGUMENTS"
> if [ -z "$TARGET_PATH" ]; then
>   echo "PRECHECK FAILED: No target path provided."
>   echo "Usage: /generate-tests <path-to-types-file-or-api-directory>"
>   echo "Examples:"
>   echo "  /generate-tests config/v1/types_infrastructure.go"
>   echo "  /generate-tests operator/v1"
>   echo "  /generate-tests route/v1/types_route.go"
>   exit 1
> fi
>
> echo "Repository root: $REPO_ROOT"
> echo "Target: $TARGET_PATH"
> ```
>
> ```bash
> if [ -d "$TARGET_PATH" ]; then
>   API_DIR="$TARGET_PATH"
> else
>   API_DIR=$(dirname "$TARGET_PATH")
> fi
> ```
1. `TARGET_PATH` lacks validation 📘 Rule violation ⛨ Security
The command uses user-provided TARGET_PATH to derive API_DIR without verifying the path exists or is within the repo root, which can lead to unintended file reads/scans outside the repository. This violates the requirement to validate and sanitize external inputs.
Agent Prompt
## Issue description
The `/generate-tests` command consumes user-provided `TARGET_PATH` without validating existence or constraining it to the git repository root, enabling accidental or malicious reads/scans outside the repo.
## Issue Context
`TARGET_PATH` is derived from `ARGUMENTS` and then used to compute `API_DIR` with `dirname`, but there are no checks like `test -e`, `realpath`, or prefix checks against `$REPO_ROOT`.
## Fix Focus Areas
- .claude/commands/generate-tests.md[18-40]
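One way to address this finding is a small precheck that refuses missing or out-of-tree paths before any scanning happens. The sketch below is an assumption about a possible remediation, not code from the PR: `REPO_ROOT` and `TARGET_PATH` mirror the command doc's variables, while the `check_target` helper name and the `realpath`-based containment check are ours.

```shell
#!/usr/bin/env bash
# Hypothetical precheck: validate that a user-supplied target exists and
# stays inside the repository root.
check_target() {
  local repo_root=$1 target=$2 resolved
  if [ -z "$target" ]; then
    echo "PRECHECK FAILED: No target path provided." >&2
    return 1
  fi
  if [ ! -e "$target" ]; then
    echo "PRECHECK FAILED: '$target' does not exist." >&2
    return 1
  fi
  # Resolve symlinks and ".." on both sides before comparing, so paths like
  # "config/../../etc" cannot escape the repository root.
  repo_root=$(realpath "$repo_root") || return 1
  resolved=$(realpath "$target") || return 1
  case "$resolved" in
    "$repo_root"|"$repo_root"/*) return 0 ;;
    *)
      echo "PRECHECK FAILED: '$target' resolves outside the repository." >&2
      return 1
      ;;
  esac
}
```

The command could call this with `check_target "$REPO_ROOT" "$TARGET_PATH"` before deriving `API_DIR`.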
> For feature-gated variants, add the gate(s):
>
> ```yaml
> # Single feature gate:
> featureGate: <FeatureGateName>
>
> # Multiple feature gates:
> featureGates:
> - <FeatureGate1>
> - <FeatureGate2>
>
> # Negative gates (enabled in ungated but not when this gate is active):
> featureGates:
> - -<FeatureGateName>
> ```
2. `featureGate` key ignored 🐞 Bug ✓ Correctness
The command instructs using featureGate: for single-gate variants, but the test suite loader only reads featureGates (slice). Generated gated suites will be treated as ungated and run against all CRD variants, causing incorrect coverage and potential false failures.
Agent Prompt
### Issue description
`/generate-tests` currently instructs generating `featureGate: <name>` for single-gated variants, but the test runner’s `SuiteSpec` only consumes `featureGates` (slice). As a result, single-gated suites generated from these instructions will be treated as ungated.
### Issue Context
The YAML loader unmarshals directly into `SuiteSpec` and filtering logic checks `len(featureGates)==0` to decide whether a suite is ungated.
### Fix Focus Areas
- .claude/commands/generate-tests.md[115-129]
- .claude/commands/generate-tests.md[392-394]
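Per this finding, the corrected instruction would emit the slice form even for a single gate, since the loader only consumes `featureGates`. A minimal sketch of the corrected suite header (the gate name is a placeholder):

```yaml
# Instead of `featureGate: <FeatureGateName>` (ignored by the loader),
# use the `featureGates` slice form even for a single gate:
featureGates:
- <FeatureGateName>
```

With this shape, the filtering logic's `len(featureGates) == 0` check correctly treats the suite as gated.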
> Read the types file(s) and extract all information. Also read the corresponding CRD manifests
> from `zz_generated.featuregated-crd-manifests/<plural>.<group>/` to get:
> - The list of CRD variants (ungated, per-featureGate, compound)
> - The full CRD name
> - Default values applied by the schema
> - The full OpenAPI v3 validation tree
>
> ## Step 3: Understand Existing Tests
>
> Check the `tests/` directory for existing test suites:
>
> ```bash
> # List existing test files for this CRD
> TEST_DIR="$API_DIR/tests"
> if [ -d "$TEST_DIR" ]; then
>   echo "=== Existing test directories ==="
>   ls -la "$TEST_DIR/"
> fi
> ```
>
> If test files already exist for this CRD, read them to understand existing coverage.
> Do NOT duplicate tests that already exist. Only add tests for NEW or MODIFIED fields/types.
>
> ## Step 4: Generate Test Suites
>
> For each CRD variant found in `zz_generated.featuregated-crd-manifests/<plural>.<group>/`,
> generate a corresponding test file at:
>
> ```text
> <group>/<version>/tests/<plural>.<group>/<VariantName>.yaml
> ```
>
> Where `<VariantName>` matches the CRD manifest filename (e.g., `AAA_ungated.yaml`,
> `OnPremDNSRecords.yaml`, `FeatureA+FeatureB.yaml`).
3. Incorrect manifest/output paths 🐞 Bug ✓ Correctness
The command references zz_generated.featuregated-crd-manifests/<plural>.<group>/ and outputs to <group>/<version>/tests/..., but in-repo featuregated manifests and suites are located under the API directory (e.g., security/v1/zz_generated.featuregated-crd-manifests/... and security/v1/tests/...). Following the doc literally can cause “not found” variant discovery and tests written to paths the runner won’t load.
Agent Prompt
### Issue description
The command’s documented paths are ambiguous/wrong relative to this repo’s layout: featuregated CRD manifests and generated suites live under the specific API directory (like `security/v1/...`), not at repo root, and `<group>` is overloaded (API group domain vs directory name).
### Issue Context
Existing repo tooling (`tests/hack/gen-minimal-test.sh`) demonstrates the expected layout and will be what developers mirror.
### Fix Focus Areas
- .claude/commands/generate-tests.md[70-76]
- .claude/commands/generate-tests.md[93-104]
- .claude/commands/generate-tests.md[301-309]
Testing

Route v1 test generation (exported on 13/02/2026 at 0:03:44 GMT+5:30 from Cursor (2.4.28))

User: /generate-tests routes/v1

Cursor: I'll generate tests for the … Let me look at example test files from other APIs to understand the full test pattern better, and then I'll generate comprehensive tests. Now I have a comprehensive understanding of the test format. Let me create comprehensive test files for the Route API. I'll update both the … Now let me update the RouteExternalCertificate.yaml with more comprehensive tests for that feature gate. Now let me run the tests to verify they work correctly. The tests require network access to start a local Kubernetes control plane. Let me verify the YAML syntax of the test files instead. Let me verify the full content of the test files. I have successfully generated comprehensive integration tests for the …

Summary of Generated Tests
|
everettraven left a comment:
Aside from a couple very minor tweaks, this LGTM.
.claude/commands/generate-tests.md
Outdated
> #### 3. Invalid Field Values
>
> For each field with validation rules, test that invalid values are rejected:
> - Enum fields: test a value not in the allowed set → `expectedError`
It doesn't hurt to have this, but this is probably unnecessary. Trusting that the OpenAPI schema correctly enforces this seems reasonable to me.
Thanks for the suggestion. I've removed the generation of common OpenAPI validations in the latest commit. Please have a look.
.claude/commands/generate-tests.md
Outdated
> For each field with validation rules, test that invalid values are rejected:
> - Enum fields: test a value not in the allowed set → `expectedError`
> - Pattern fields: test a value violating the regex → `expectedError`
> - Min/max constraints: test boundary violations → `expectedError`
Similar to my comment above, this seems a bit unnecessary. The only case where I think this might make sense is when the min/max constraints are a part of a CEL / format validation but those cases should fall under the appropriate cases you have below.
Thanks! This section has also been removed/updated in the latest commit. The focus is now more on custom CEL validations and other related validations.
Signed-off-by: chiragkyal <ckyal@redhat.com>
3d04680 to 91c9cba (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (1)
.claude/commands/generate-tests.md (1)
323-327: Use null-delimited iteration for file-safe YAML checks.
`for f in $(find ...)` is fragile and will mis-handle filenames containing spaces or newlines. Use `-print0` with `read -d ''` to safely handle any filename.

Suggested diff:

```diff
-# Quick syntax check — ensure test files are valid YAML
-for f in $(find "$API_DIR/tests" -name '*.yaml' -type f); do
-  if ! python3 -c "import yaml; yaml.safe_load(open('$f'))" 2>/dev/null; then
-    echo "WARNING: Invalid YAML in $f"
-  fi
-done
+# Quick syntax check — ensure test files are valid YAML
+find "$API_DIR/tests" -name '*.yaml' -type f -print0 | while IFS= read -r -d '' f; do
+  if ! python3 -c "import yaml, pathlib; yaml.safe_load(pathlib.Path('$f').read_text())" 2>/dev/null; then
+    echo "WARNING: Invalid YAML in $f"
+  fi
+done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/commands/generate-tests.md around lines 323 - 327, The loop using for f in $(find "$API_DIR/tests" -name '*.yaml' -type f); do is unsafe for filenames with spaces or newlines; change the iteration to use find ... -print0 and a null-delimited read (read -d '' -r f) so each filename from find is handled atomically, then feed each file into the same python3 -c "import yaml; yaml.safe_load(open('$f'))" check and preserve the existing warning path (echo "WARNING: Invalid YAML in $f") when the python check fails; locate and update the shell loop around the variables/API_DIR and the test YAML validation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/commands/generate-tests.md:
- Around line 324-326: Add an explicit PyYAML availability check before the YAML
validation loop: run a quick python3 -c "import yaml" test (the same module
referenced in the existing python3 -c "import yaml; yaml.safe_load(open('$f'))"
check) and, if it fails, print a clear error about missing PyYAML and exit
non‑zero so the loop does not misreport missing module as "Invalid YAML"; place
this check immediately before the validation loop that iterates over $f.
---
Nitpick comments:
In @.claude/commands/generate-tests.md:
- Around line 323-327: The loop using for f in $(find "$API_DIR/tests" -name
'*.yaml' -type f); do is unsafe for filenames with spaces or newlines; change
the iteration to use find ... -print0 and a null-delimited read (read -d '' -r
f) so each filename from find is handled atomically, then feed each file into
the same python3 -c "import yaml; yaml.safe_load(open('$f'))" check and preserve
the existing warning path (echo "WARNING: Invalid YAML in $f") when the python
check fails; locate and update the shell loop around the variables/API_DIR and
the test YAML validation.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- `.claude/commands/generate-tests.md`
- `AGENTS.md`
> ```bash
> if ! python3 -c "import yaml; yaml.safe_load(open('$f'))" 2>/dev/null; then
>   echo "WARNING: Invalid YAML in $f"
> fi
> ```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -path "*/.claude/commands/generate-tests.md" -type f
```

Repository: openshift/api
Length of output: 93

🏁 Script executed:

```shell
cat -n ./.claude/commands/generate-tests.md | sed -n '310,340p'
```

Repository: openshift/api
Length of output: 1187
Add upfront PyYAML dependency check to avoid conflating missing module with invalid YAML.
The validation loop currently masks missing PyYAML as "Invalid YAML" since both produce a failed python3 command. Add an explicit check for the yaml module before the loop to fail fast with a clear error message if the dependency is unavailable.
Suggested diff
```diff
+if ! python3 -c "import yaml" >/dev/null 2>&1; then
+  echo "PRECHECK FAILED: python3 package 'PyYAML' is required for YAML validation."
+  exit 1
+fi
+
 for f in $(find "$API_DIR/tests" -name '*.yaml' -type f); do
   if ! python3 -c "import yaml; yaml.safe_load(open('$f'))" 2>/dev/null; then
     echo "WARNING: Invalid YAML in $f"
   fi
 done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/commands/generate-tests.md around lines 324 - 326, Add an explicit
PyYAML availability check before the YAML validation loop: run a quick python3
-c "import yaml" test (the same module referenced in the existing python3 -c
"import yaml; yaml.safe_load(open('$f'))" check) and, if it fails, print a clear
error about missing PyYAML and exit non‑zero so the loop does not misreport
missing module as "Invalid YAML"; place this check immediately before the
validation loop that iterates over $f.
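Merging both review suggestions (the PyYAML precheck and the null-delimited loop) could look like the sketch below. This is an illustrative assumption: the `validate_tests_yaml` wrapper name is ours, while `API_DIR` and the messages mirror the command doc.

```shell
#!/usr/bin/env bash
# Hypothetical merged fix: fail fast when PyYAML is missing, then iterate
# find results null-delimited so filenames with spaces survive.
validate_tests_yaml() {
  local api_dir=$1
  if ! python3 -c "import yaml" >/dev/null 2>&1; then
    echo "PRECHECK FAILED: python3 package 'PyYAML' is required for YAML validation."
    return 1
  fi
  # -print0 with `read -d ''` keeps each filename intact, even with spaces.
  find "$api_dir/tests" -name '*.yaml' -type f -print0 |
  while IFS= read -r -d '' f; do
    # Pass the path as argv, not via string interpolation, so quotes in
    # filenames cannot break the embedded Python snippet.
    if ! python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' "$f" 2>/dev/null; then
      echo "WARNING: Invalid YAML in $f"
    fi
  done
}
```

A caller would run `validate_tests_yaml "$API_DIR"` after generating the suites.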
everettraven left a comment:
Looks good to me as a baseline. As with all of the AI assisted workflows, I'm sure we will find things to tweak/improve over time.
/lgtm
Pipeline controller notification: No second-stage tests were triggered for this PR.

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: everettraven. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files.
/retest-required

1 similar comment:

/retest-required

Looks like we're hitting this issue for the images job.

Thanks for the reference @chiragkyal - this change isn't something critical IMO, so let's just keep an eye on that issue getting resolved and then kick the tests again when it has been resolved.

/retest-required

Dev tooling and no impact to product, skipping verification. /verified skip
@everettraven: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

/verified bypass

@everettraven: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@chiragkyal: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary

Adds a `/generate-tests` Claude command that automates the generation of comprehensive `.testsuite.yaml` integration test files for OpenShift API type definitions.

Description

This PR introduces a Claude command for generating integration tests for any API group/version in the repository. The command:
- Reads `zz_generated.featuregated-crd-manifests/` to understand CRD variants, defaults, and OpenAPI v3 schemas

Usage
Examples:
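The example invocations, repeated from the command's usage documentation quoted earlier in this thread, are:

```text
/generate-tests config/v1/types_infrastructure.go
/generate-tests operator/v1
```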