Merge bosh-stemcells-ci to ci/ with history #503

aramprice wants to merge into ubuntu-jammy

Conversation
Note: the automated review failed because the pull request was closed or merged during review.

📝 Walkthrough

This pull request introduces comprehensive CI/CD infrastructure for BOSH stemcell and OS image building pipelines. It adds Concourse pipeline configurations, task definitions, shell/Ruby scripts, Terraform modules, BOSH ops files, AWS IAM policies, and documentation to orchestrate building, testing, and publishing Ubuntu Jammy and Noble stemcells across multiple cloud providers with USN/CVE-driven automation and light stemcell support.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

The PR introduces extensive new CI/CD infrastructure across heterogeneous file types (Bash/Ruby scripts, YAML pipelines, Terraform, JSON policies, BOSH manifests), requiring careful review of orchestration logic, credential handling, cloud provider integrations, and USN/CVE processing workflows. While many task definitions follow repetitive patterns, the diversity of cloud providers (AWS, GCP), stemcell variants (light, heavy, FIPS), and automation flows (build, test, publish, cleanup) necessitates contextual reasoning for each cohort to ensure correctness of cloud API interactions, parameter flows, and error handling.
Force-pushed: 08d0a64 → 73dd663 → ddde62c
Rebase appears to have broken the CLA robot. Might recreate if untangling this is too problematic.
Actionable comments posted: 84
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ci/bin/format-usn-log`:
- Line 3: The jq invocation in ci/bin/format-usn-log uses an unquoted $1 which
can break on filenames with spaces or special characters; update the command
that reads the input (the line invoking jq -r --slurp < $1 '.[] |') to quote the
file variable so the filename is passed as a single argument (e.g. use "<
\"$1\"" or otherwise quote "$1"); ensure any other uses of $1 in this script are
similarly quoted to prevent word splitting.
- Around line 1-12: The script format-usn-log currently assumes $1 exists and
that jq will succeed; add input validation and error handling by first checking
that the positional parameter $1 is provided and points to a readable file, emit
a usage message and exit non‑zero if not, then run jq with flags that enable
failure detection (e.g., using --exit-status) and capture jq's exit code; on jq
failure print a clear error including jq's stderr output and exit with that code
so malformed JSON or jq errors don't produce silent bad output.
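A minimal sketch addressing both findings, assuming the script reads a single JSON file; the `format_usn_log` wrapper and the `.[].id` jq filter are illustrative stand-ins, not the script's real program:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: validate $1, quote it everywhere, and surface jq
# failures instead of producing silent bad output.
format_usn_log() {
  if [ "$#" -lt 1 ] || [ ! -r "$1" ]; then
    echo "usage: format-usn-log <usn-log.json>" >&2
    return 2
  fi
  # --exit-status makes jq exit non-zero when output is null/false, and
  # quoting "$1" keeps filenames with spaces as a single argument.
  if ! jq -r --exit-status --slurp '.[].id' < "$1"; then
    echo "ERROR: jq failed to parse '$1'" >&2
    return 1
  fi
}
```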
In `@ci/bosh-core-stemcells-iam.json`:
- Around line 4-13: The policy statement that grants "Effect": "Allow" for
"Action": ["s3:PutObject","s3:PutObjectAcl"] against "Resource":
["arn:aws:s3:::bosh-core-stemcells/*"] should be hardened by adding a Condition
block that enforces server-side encryption (e.g., require
s3:x-amz-server-side-encryption or aws:SecureTransport), and optionally add IP
or MFA constraints (aws:SourceIp or aws:MultiFactorAuthPresent) as needed;
update the IAM statement containing those Actions/Resource values to include
those Condition keys so uploads must use SSE and, if desired, originate from
allowed IPs or have MFA present.
- Around line 6-8: Review the IAM statement that grants "s3:PutObject" and
"s3:PutObjectAcl" and determine if ACL changes are required by your CI; if not,
remove "s3:PutObjectAcl" from the Action list to eliminate ability to modify
object ACLs, and instead enforce access via bucket-level controls (e.g., bucket
policy and S3 Object Ownership set to BucketOwnerEnforced) or ensure uploads use
the correct canned ACLs server-side; update the policy where the Actions include
"s3:PutObjectAcl" to remove that permission and document the replacement
control.
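Taken together, the two findings point toward a statement shaped roughly like the following (bucket ARN from the policy under review; the specific condition keys are one common hardening choice, and dropping `s3:PutObjectAcl` assumes bucket-level ownership controls take over):

```json
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": ["arn:aws:s3:::bosh-core-stemcells/*"],
  "Condition": {
    "StringEquals": { "s3:x-amz-server-side-encryption": "AES256" },
    "Bool": { "aws:SecureTransport": "true" }
  }
}
```

With `s3:PutObjectAcl` removed, pairing this with S3 Object Ownership set to BucketOwnerEnforced keeps object access governed by the bucket policy alone.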
In `@ci/docs/fips.md`:
- Around line 3-45: The document has grammar, capitalization, and markdownlint
issues: fix headings and sentence case (change "setup access" to "Setup access",
"working group actions" to "Working group actions", "bucket owner actions" to
"Bucket owner actions"), correct spelling ("accunt" -> "account") and wording
("set up" vs "setup"), add a language tag to the fenced code block (use ```yaml
for the resource example and ```bash for gsutil commands), normalize inline code
spacing and quoting for PLACEHOLDER and `gcloud auth login`, and apply the
proposed text edits around the "bosh-core-stemcells-fips" bucket, the HMAC keys
line, the working-group service account paragraph, and the gsutil command block
(references: the YAML resource snippet, the phrase "For this you need a service
account", bucket name bosh-core-stemcells-fips, gsutil defacl ch -u PLACEHOLDER,
gsutil acl ch -u PLACEHOLDER:READER, gsutil -m acl ch -r -u PLACEHOLDER:READER).
- Around line 41-43: The gsutil defacl command on the line using "gsutil defacl
ch -u PLACEHOLDER gs://bosh-core-stemcells-fips" is missing the required
permission suffix; update that invocation (the "gsutil defacl ch" command) to
include the permission in the "-u" argument (e.g., change "-u PLACEHOLDER" to
"-u PLACEHOLDER:READER") so it matches the format used on the subsequent "gsutil
acl ch" and "gsutil -m acl ch -r" commands.
In `@ci/docs/publish.md`:
- Around line 1-15: Fix the typos and markdownlint heading/final-newline issues
in the publishing guide: correct "automaticly" to "automatically" and
"availeble" to "available" (occurring in the paragraphs referencing the
community-stemcell board and publish-ubuntu-jammy-1), ensure there is a blank
line after headings like "# Publish a stemcell." and "## Verify stemcell builds"
to satisfy heading spacing rules, and add a final newline at end of file; verify
references to aggregate-candidate-stemcells and publish-ubuntu-jammy-1 remain
unchanged other than the typo fixes.
In `@ci/ops-files/ipv6-cc.yml`:
- Around line 13-14: Replace the hardcoded IPv4 DNS entry under the dns key
(currently "8.8.8.8") so the ops-file works in IPv6-only environments: change
the literal to a configurable placeholder (e.g., ((dns_servers))) or provide
IPv6 resolver addresses (e.g., 2001:4860:4860::8888) and document the parameter;
update the dns entry in ipv6-cc.yml to use the placeholder or include IPv6
addresses instead of the fixed "8.8.8.8".
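For example, the dns entry could carry an IPv6 resolver alongside (or instead of) the IPv4 one; the surrounding keys depend on the actual ops-file structure, so this fragment is only illustrative:

```yaml
dns:
  - 2001:4860:4860::8888   # Google Public DNS over IPv6
  - 8.8.8.8                # optional: keep IPv4 for dual-stack environments
```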
In `@ci/ops-files/ipv6-director.yml`:
- Around line 2-3: The mbus URL value for the /cloud_provider/mbus entry uses
((internal_ip)) without IPv6 bracket notation, which breaks parsing for IPv6;
update the value string
"https://mbus:((mbus_bootstrap_password))@((internal_ip)):6868" to wrap the
((internal_ip)) placeholder in square brackets (same pattern used for
second_internal_ip elsewhere) so the host portion becomes [((internal_ip))]
while keeping the rest of the URL intact.
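Applied to a standard BOSH ops-file entry, the bracketed form would look like this (path and URL taken from the comment; only the host portion changes):

```yaml
- type: replace
  path: /cloud_provider/mbus
  value: "https://mbus:((mbus_bootstrap_password))@[((internal_ip))]:6868"
```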
In `@ci/ops-files/resource-pool-cc.yml`:
- Line 3: The ops-file path uses a variable interpolation token
"((vcenter_cluster))" inside the path key which BOSH does not support; update
the ops-file(s) so paths are static strings by either (a) creating separate
ops-files with concrete cluster names replacing "((vcenter_cluster))" for each
target cluster, or (b) changing the deployment manifest shape so the
resource_pool path no longer needs a dynamic segment (e.g., move
cluster-specific keys under a map keyed by a static name and reference variables
inside values rather than path keys); look for the path entries containing
"/resource_pool?" and the token "((vcenter_cluster))" to make the fix.
In `@ci/pipelines/ubuntu-jammy-publisher/configure.sh`:
- Around line 7-10: The current until loop using "$FLY" -t
"${CONCOURSE_TARGET:-stemcell}" status/login can hang forever if auth is broken;
replace it with a bounded retry loop: introduce a max_attempts variable and
attempt counter, perform the login/status retry with a sleep that increases
(simple backoff or fixed delay) between attempts, log each failure with attempt
number and when max_attempts is reached call process exit non-zero (exit 1) and
print a clear error message; update the block that currently uses until "$FLY"
... status / login to use this retry logic and ensure references to "$FLY" and
"${CONCOURSE_TARGET:-stemcell}" remain unchanged.
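A bounded version of that loop might look like this sketch; the `FLY`, `CONCOURSE_TARGET`, and `MAX_ATTEMPTS` names mirror configure.sh, and the defaults are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch of a bounded login loop with simple linear backoff.
fly_login_with_retry() {
  local max_attempts="${MAX_ATTEMPTS:-5}"
  local attempt=1
  until "${FLY:-fly}" -t "${CONCOURSE_TARGET:-stemcell}" status >/dev/null 2>&1; do
    if (( attempt >= max_attempts )); then
      echo "ERROR: fly login failed after ${max_attempts} attempts" >&2
      return 1
    fi
    echo "login attempt ${attempt}/${max_attempts} failed; retrying" >&2
    "${FLY:-fly}" -t "${CONCOURSE_TARGET:-stemcell}" login >/dev/null 2>&1 || true
    sleep "${attempt}"   # back off a little longer each round
    attempt=$(( attempt + 1 ))
  done
}
```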
In `@ci/pipelines/ubuntu-jammy-publisher/pipeline.yml`:
- Around line 24-35: The wrapper is missing the ami_keep_latest param required
by ci/tasks/light-aws/cleanup-ami.sh; update the params block for the
cleanup-amis-in-(@= name @) task in cleanup_old_published_light_stemcells to
include ami_keep_latest and pass the templated value (e.g. ami_keep_latest:
((aws_publish_(@= prefix @)_keep_latest))) so the cleanup script receives the
required setting on startup.
- Around line 642-649: The bosh-stemcells-ci git resource is referencing the
loop variable stemcell.branch even though the resource is declared outside the
for stemcell loop, causing implicit dependency on the last loop value; fix by
either moving the resource declaration into the for stemcell loop so
stemcell.branch is properly scoped, or replace stemcell.branch with an explicit
top-level parameter (e.g., stemcell_branch) and reference that instead; update
any usages of bosh-stemcells-ci and the data.values.oss loop accordingly to
ensure the branch value is deterministically passed rather than leaked from the
loop.
In `@ci/pipelines/ubuntu-jammy/configure.sh`:
- Around line 1-10: Update the script to use stricter shell options and limit
login retries: replace the simple "set -e" with "set -euo pipefail" (so unset
variables like FLY_CLI/CONCOURSE_TARGET cause failures and pipelines propagate
errors), and modify the until loop that calls "${FLY}" -t
"${CONCOURSE_TARGET:-stemcell}" status / login to use a bounded retry counter
and a MAX_RETRIES constant (exit non‑zero after the max attempts with a clear
error via stderr) so the login loop for FLY does not run indefinitely when
authentication continuously fails.
- Around line 12-14: The temp file created by mktemp (stored in pipeline_config)
isn't removed; add a trap to remove pipeline_config on exit/failure: after
creating pipeline_config, register a trap that runs rm -f "$pipeline_config" on
EXIT (and optionally on INT TERM), so the temporary file is always cleaned up
even if the ytt invocation fails; ensure the trap references the same
pipeline_config variable and stays active for the lifetime of the script
surrounding the ytt invocation.
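The trap placement can be sketched as follows; the `echo` stands in for the ytt render so the example stays self-contained:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: register cleanup immediately after mktemp so the
# temp file is removed even when the render step fails.
pipeline_config="$(mktemp)"
trap 'rm -f "${pipeline_config}"' EXIT

echo "jobs: []" > "${pipeline_config}"   # stand-in for the ytt invocation
cat "${pipeline_config}"
# On exit (success or failure), the trap removes "${pipeline_config}".
```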
In `@ci/pipelines/ubuntu-jammy/pipeline.yml`:
- Around line 415-431: The deploy-director-ipv6 task invocation (task:
deploy-director-ipv6) omits GCP_PREEMPTIBLE but deploy-director-ipv6.sh
references ${GCP_PREEMPTIBLE} under set -u; add a GCP_PREEMPTIBLE entry to the
params block for the deploy-director-ipv6 task in pipeline.yml (the same param
key used by other GCP director tasks) so the script always receives a defined
value when interpolating; ensure the value is wired from the pipeline variable
((gcp_preemptible)) or an explicit default string consistent with other
deploy-director-* task invocations.
- Around line 561-575: The deploy-director task is missing the DEFAULT_VM_TYPE
parameter required by ci/tasks/gcp/deploy-director.sh (it writes
default_vm_type: ${DEFAULT_VM_TYPE} into director-vars.yml); update the
deploy-director task's params block (the task named deploy-director in the
pipeline) to include DEFAULT_VM_TYPE and pass the appropriate pipeline var
(e.g., DEFAULT_VM_TYPE: ((default_vm_type))) so bosh interpolate has the value.
In `@ci/pipelines/ubuntu-jammy/stemcells.yml`:
- Line 29: Remove the extra trailing blank line at the end of the stemcells.yml
file (the empty line introduced around line 29) so the file ends immediately
after the last YAML entry; ensure there are no trailing empty lines and re-run
yamllint to verify the `empty-lines` rule passes.
In `@ci/pipelines/ubuntu-noble-publisher/configure.sh`:
- Around line 12-13: The temp file created in configure.sh
(pipeline_config=$(mktemp)) is never removed; update the script to register a
cleanup trap that removes "${pipeline_config}" on exit or error and ensure the
trap is set immediately after creating pipeline_config so the temporary file is
always deleted even if ytt or later commands fail; reference the pipeline_config
variable and the mktemp creation and call rm -f "${pipeline_config}" from the
trap to perform the cleanup.
- Around line 7-10: The login loop using "$FLY" -t
"${CONCOURSE_TARGET:-stemcell}" status/login can run forever if login keeps
failing; add a retry limit and fail fast behavior: introduce a counter (e.g.,
attempts) and a max_attempts constant, increment attempts each loop,
sleep/backoff as needed, and if attempts exceeds max_attempts call process exit
with non-zero status (or log error and exit) instead of looping forever; update
the until loop around the "$FLY" status/login calls (the two occurrences of
"$FLY" -t "${CONCOURSE_TARGET:-stemcell}" status and "$FLY" -t
"${CONCOURSE_TARGET:-stemcell}" login) to check the counter and abort after the
threshold.
In `@ci/pipelines/ubuntu-noble-publisher/pipeline.yml`:
- Around line 449-481: The publish job references usn-log/usn-log.json but never
declares a usn-log input or produces it; add a get: usn-log input to the job's
inputs (the in_parallel list) or ensure a prior task in this job emits the
usn-log directory before the upload step. Concretely, add a "get: usn-log" entry
(with any required resource name, passed constraints, or params consistent with
other gets) to the same in_parallel block that contains candidate-... resources
so the upload step that reads usn-log/usn-log.json (the upload task around the
later upload lines) has the resource available. Ensure the resource name matches
the upload step path exactly.
In `@ci/pipelines/ubuntu-noble/configure.sh`:
- Around line 7-10: The unbounded until loop using "${FLY}" and the Concourse
target can hang indefinitely; add a retry limit (e.g., MAX_RETRIES) and a
counter variable, attempt login/status up to that limit, and on exceeding the
max emit an error and exit non-zero; update the loop controlling "${FLY}" -t
"${CONCOURSE_TARGET:-stemcell}" status/login to increment the counter each
attempt, optionally add a short backoff, and call exit 1 with a clear error log
if retries are exhausted.
- Around line 12-20: The temporary file created by mktemp and stored in
pipeline_config is not removed; add a cleanup trap to delete
"${pipeline_config}" on EXIT (and ERR if desired) so repeated runs don't leak
temp files: after pipeline_config is created, register trap 'rm -f
"${pipeline_config}"' (or a small cleanup function) and ensure it runs for
normal exit and failures around the ytt render and "${FLY}" -t ... set-pipeline
invocation so the temp file is always removed; reference the pipeline_config
variable, the mktemp invocation, the ytt render block, and the "${FLY}"
set-pipeline call when placing the trap.
In `@ci/pipelines/ubuntu-noble/pipeline.yml`:
- Around line 706-723: The Slack payload currently builds "attachments" as a
single object in the template.json creation step; change the template so
"attachments" is an array containing that object (i.e. wrap the existing
attachments object in [ ... ]). Locate the heredoc that writes template.json
(the block that creates "attachments": { ... }) and modify it to produce
"attachments": [ { ... } ] so that the subsequent jq/cat pipeline and the file
produced by the template (used by the jq invocation and the
slack-message/attachments output) conform to Slack's expected array format.
In `@ci/tasks/build-release-metadata.sh`:
- Around line 49-51: The script is calling the formatter at
"${REPO_ROOT}/bin/format-usn-log" but the helper was moved under ci/bin; update
the invocation in build-release-metadata.sh to point to the correct location
(use "${REPO_ROOT}/ci/bin/format-usn-log" or equivalent) so the call to
format-usn-log uses the ci/bin copy; change the path where format-usn-log is
executed and verify REPO_ROOT is still the repo root variable used to construct
the new path.
- Around line 21-23: The checksum verification uses `shasum -c jq.txt` which
defaults to SHA-1 but the echoed digest is SHA-256; update the verification
command to use SHA-256 by changing the `shasum` invocation to include the
algorithm flag (e.g. `shasum -a 256 -c jq.txt`) so the `jq-linux64` download is
validated against the provided SHA-256 in `jq.txt`.
In `@ci/tasks/build.sh`:
- Around line 142-145: The commit step fails when there are no changes to
commit; modify the sequence around git add -A and git commit -m "dev:
$OS_NAME-$OS_VERSION/$CANDIDATE_BUILD_NUMBER ($IAAS-$HYPERVISOR)" so it
tolerates no-op rebuilds—either by checking for staged changes (e.g., using git
diff-index or git status) and only running git commit when there are changes, or
by making the git commit command non-fatal on "nothing to commit" (e.g., append
a safe fallback); update the block containing git add -A and git commit to
perform that conditional commit logic to avoid failing on identical metadata
re-runs.
- Around line 22-28: The script currently validates IAAS, HYPERVISOR, OS_NAME
and OS_VERSION via check_param but never validates STEMCELL_BUCKET, so the
placeholder value "replace-me" can slip through; update the script to call
check_param for STEMCELL_BUCKET (alongside the existing check_param calls) and
then explicitly reject the sentinel value by adding a guard that exits non‑zero
if STEMCELL_BUCKET == "replace-me" (or empty). Reference the existing
check_param usage to place the new validation near IAAS/HYPERVISOR/OS_* checks
and ensure the script fails fast before proceeding to use AGENT_SUFFIX or
exporting CANDIDATE_BUILD_NUMBER.
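The conditional-commit suggestion can be sketched like this; the throwaway repo setup and the variable values exist only to make the example self-contained:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch of a commit step that tolerates no-op rebuilds.
repo="$(mktemp -d)"
cd "${repo}"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
OS_NAME=ubuntu OS_VERSION=jammy CANDIDATE_BUILD_NUMBER=1 IAAS=aws HYPERVISOR=xen
echo "v1" > metadata

git add -A
if git diff --cached --quiet; then
  echo "nothing to commit; skipping"    # identical metadata: don't fail
else
  git commit -qm "dev: ${OS_NAME}-${OS_VERSION}/${CANDIDATE_BUILD_NUMBER} (${IAAS}-${HYPERVISOR})"
fi
```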
In `@ci/tasks/build.yml`:
- Around line 23-26: The YAML block with keys IAAS, HYPERVISOR, OS_NAME, and
OS_VERSION uses aligned colons (multiple spaces) which violates YAMLlint's
colons rule; update the block in ci/tasks/build.yml so each mapping uses a
single space after the colon (e.g., "IAAS: replace-me", "HYPERVISOR:
replace-me", "OS_NAME: replace-me", "OS_VERSION: replace-me") to normalize
spacing and satisfy the linter.
In `@ci/tasks/bump-bosh-blobstore-cli.sh`:
- Around line 15-24: Validate that BLOBSTORE_TYPE is set and non-empty, and
ensure the cli glob matches exactly one file before writing outputs: check the
BLOBSTORE_TYPE variable at the top and exit with an error if unset, expand the
glob used with sha256sum into an array (the pattern
"${REPO_PARENT}/bosh-blobstore-cli"/*cli*), verify the array length is 1 and
fail with a clear message if zero or multiple matches, then compute sha256sum
for that single file (using sha256sum -b on the resolved filename) and write
url, version and that single sha256sum to the echo target files referenced in
the script (the bosh-blobstore-${BLOBSTORE_TYPE}.url/.version/.sha256sum paths).
In `@ci/tasks/bump-bosh-blobstore-cli.yml`:
- Around line 7-15: The task is missing a params declaration for the
BLOBSTORE_TYPE environment variable used by bump-bosh-blobstore-cli.sh; update
the task definition to add a params section that declares BLOBSTORE_TYPE (e.g.,
make it required or provide a sensible default) so the script always receives a
value; locate the task that runs
bosh-stemcells-ci/ci/tasks/bump-bosh-blobstore-cli.sh and add the BLOBSTORE_TYPE
param entry to its params block.
In `@ci/tasks/check-if-usn-is-in-github.sh`:
- Around line 24-34: The loop uses unquoted expansions of the JSON payload
variable usn which can trigger globbing/word-splitting; update every occurrence
of $usn to be quoted (e.g., use echo "$usn" | jq -r '.url' ... when computing id
and use echo "$usn" >> "${REPO_PARENT}/found-usns/usns.json" and echo "$usn" >>
"${REPO_PARENT}/updated-unfound-usns/usns.json") so the id extraction and the
file append operations handle spaces/special characters safely.
In `@ci/tasks/check-if-usn-is-in-github.yml`:
- Line 16: The YAML file ci/tasks/check-if-usn-is-in-github.yml contains an
extra trailing blank line flagged by yamllint; open that file and remove the
final empty line so the file ends immediately after the last YAML content
(ensure no extra newline-only line remains at EOF).
In `@ci/tasks/check-if-usns-are-applicable.sh`:
- Around line 36-42: The branch logic currently writes "true" to the success
file in both the if and else, making "${REPO_PARENT}/updated-usn-log/success"
meaningless; update the else branch handling in the
ci/tasks/check-if-usns-are-applicable.sh script so that when
packages_included_in_stemcell is false it either writes a distinct value (e.g.,
"false" or "no-applicable-usns") to "${REPO_PARENT}/updated-usn-log/success" or
omits writing entirely and only exits with status 1, keeping the if branch
writing "true" and preserving the existing exit codes.
- Around line 17-21: The loop is writing and parsing the JSON stored in the
variable usn without quoting it, which permits word-splitting and globbing;
update uses of $usn to be quoted (e.g., change echo $usn to echo "$usn" and the
jq pipeline id=$(echo $usn | ...) to id=$(echo "$usn" | ...)) so the JSON is
treated as a single string when passed to jq and when written to
"${REPO_PARENT}/usn.json"; keep existing process_usns invocation as-is.
In `@ci/tasks/check-usn-packages.sh`:
- Around line 18-19: The if-statement uses unguarded expansion of
ALL_PACKAGE_VERSIONS_AVAILABLE which will fail under set -u; update the
condition in the if block that checks ALL_PACKAGE_VERSIONS_AVAILABLE to use
shell parameter expansion with a safe default (e.g., treat unset as false) so
the script won’t hard-fail when the variable is not defined; adjust the
conditional around the existing if/then that references
ALL_PACKAGE_VERSIONS_AVAILABLE accordingly.
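The safe-default expansion is a one-line change; a minimal sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: treat an unset ALL_PACKAGE_VERSIONS_AVAILABLE as "false" so the
# check doesn't abort under set -u.
if [ "${ALL_PACKAGE_VERSIONS_AVAILABLE:-false}" = "true" ]; then
  echo "all package versions available"
else
  echo "not all package versions available"
fi
```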
In `@ci/tasks/commit-build-time.sh`:
- Around line 18-21: The script currently stages all files with git add -A and
will fail the whole task under set -e when there is nothing to commit; change
git add -A to only stage build_time.txt, then check for staged changes before
committing (e.g., run git add build_time.txt and use git diff --cached --quiet
--exit-code to detect no staged changes and skip git commit if there are none),
ensuring the git commit -m "Commit Build Time" only runs when build_time.txt has
staged changes.
- Line 14: The timestamp is being formatted into formatted_build_time from
build_time but date() is missing the UTC flag, so add the -u option to the date
invocation that produces formatted_build_time (i.e., the command that currently
reads date --date "${build_time%.*}" +%Y%m%dT%H%M%SZ) so it runs as date -u
--date ...; make the identical change in the other occurrence mentioned (the
similar date invocation in the os-images build script) so the appended Z truly
reflects UTC.
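The `-u` fix can be shown with a sample timestamp (the value below is illustrative; without `-u`, the trailing Z is wrong on any non-UTC runner):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: strip fractional seconds as the script does, then format in UTC.
build_time="2024-05-01 12:34:56.789"
formatted_build_time="$(date -u --date "${build_time%.*}" +%Y%m%dT%H%M%SZ)"
echo "${formatted_build_time}"
```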
In `@ci/tasks/gcp/deploy-director-ipv6.sh`:
- Around line 64-67: The current backtick-style inline exports
(BOSH_ENVIRONMENT, BOSH_CA_CERT, BOSH_CLIENT_SECRET) hide failures from `bosh
int`; change each to run `bosh int ...` into a plain assignment first (e.g.
tmp="$(bosh int ...)" ), check that the assignment is non-empty or let `set -e`
catch the command failure, then run a separate `export BOSH_ENVIRONMENT="$tmp"`
(same for BOSH_CA_CERT and BOSH_CLIENT_SECRET); keep BOSH_CLIENT=admin as a
simple export. This ensures `bosh int` failures are not masked and variables are
exported only after successful extraction.
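The assign-then-export pattern generalizes beyond `bosh int`; in this sketch `extract_cred` is a stub standing in for `bosh int director-creds.yml --path ...`, so the example runs without a director:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: with a plain assignment, set -e aborts the script if the
# extraction command fails, instead of exporting an empty variable.
extract_cred() { printf 'https://10.0.0.6:25555'; }   # stub for bosh int

environment="$(extract_cred)"        # failure here stops the script
export BOSH_ENVIRONMENT="${environment}"
export BOSH_CLIENT=admin
echo "${BOSH_ENVIRONMENT}"
```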
In `@ci/tasks/gcp/deploy-director-ipv6.yml`:
- Around line 13-14: Remove the empty YAML key `params:` from the CI deployment
playbook (`deploy-director-ipv6.yml`) since it provides no value; either delete
the `params:` line entirely or replace it with a short comment like `# params
placeholder` if you intend to reserve it for future parameters to make the
intent explicit.
In `@ci/tasks/gcp/deploy-director.sh`:
- Around line 71-74: The export lines for BOSH_ENVIRONMENT, BOSH_CA_CERT, and
BOSH_CLIENT_SECRET should be split into two steps so command-substitution
failures are not masked: first assign each variable from `bosh int
"${REPO_PARENT}/director-creds.yml" --path ...` into a plain shell variable
(e.g., BOSH_ENVIRONMENT=...), check or let set -e trap failures surface, then
export the variable (export BOSH_ENVIRONMENT); do the same for BOSH_CA_CERT and
BOSH_CLIENT_SECRET while keeping BOSH_CLIENT=admin as-is so missing paths cause
immediate script failure instead of producing empty exported vars.
In `@ci/tasks/light-aws/build.sh`:
- Around line 7-10: The script enables xtrace early via the DEBUG check (DEBUG
and set -x) which causes secret-bearing commands (the publish access key/secret
echo commands) to be logged; to fix, remove or disable set -x from the top DEBUG
block and instead perform the DEBUG conditional set -x after the AWS credential
staging section where the publish access key and secret are created/echoed,
keeping the BOSH_LOG_LEVEL and BOSH_LOG_PATH exports as needed; ensure the DEBUG
guard (using the DEBUG variable and set -x) is applied only after the credential
echoing is complete so those sensitive lines are not traced.
- Around line 57-65: The script treats an existing AWS light stemcell as a
failure (exit 1) when wget --spider on bosh_io_light_stemcell_url returns 0;
change this to treat it as a cache hit by exiting successfully (exit 0) and
skipping further upload steps, e.g., when wget succeeds print the messages about
existing light_stemcell_name and download URL and then exit 0 instead of 1 so
retries are recoverable; ensure the surrounding set +e / set -e behavior remains
unchanged.
In `@ci/tasks/light-aws/build.yml`:
- Around line 14-29: The YAML params block (keys like bosh_io_bucket_name,
ami_description, ami_region, ami_encrypted, efi, and S3_API_ENDPOINT) uses
aligned columns with excessive spaces after colons which triggers yamllint;
update the block to use standard YAML spacing (single space after each colon, no
padded alignment), ensure boolean values remain unquoted (true/false) and
strings are quoted only when necessary, and run yamllint/fixed YAML formatter to
match the project's other task files.
In `@ci/tasks/light-aws/cleanup-ami.yml`:
- Around line 10-19: The YAML has excessive alignment spaces after colons for
keys like ami_region, ami_access_key, ami_secret_key, ami_role_arn,
ami_older_than_days, ami_keep_latest, os_name, snapshot_id, and
remove_public_images; fix it by replacing the aligned multiple spaces with a
single space after each colon (e.g. change `ami_region:      "eu-central-1"` to
`ami_region: "eu-central-1"`), ensure keys are left-aligned
with consistent two-space indentation where needed and preserve existing values
and quotes so yamllint no longer flags spacing issues.
In `@ci/tasks/light-aws/merge-builds`:
- Around line 22-30: The loop over Dir['*-stemcell'] currently uses matches[0]
which is non-deterministic; change the tarball selection to use a deterministic
sort and explicit handling: replace matches[0] with matches.sort.first (or abort
with a clear error if matches.length > 1) so the chosen tarball_path and
stemcell_name are stable across runs, and make the same change in the
corresponding block that mirrors lines 44-45; reference the loop using
Dir['*-stemcell'], the matches array, tarball_path and stemcell_name when making
the edits.
- Line 34: Replace the unsafe YAML.load_file call with Psych.safe_load by
reading the file contents first and passing them to Psych.safe_load with no
permitted classes or symbols; e.g., replace
YAML.load_file("#{temp_dir}/stemcell.MF") with
Psych.safe_load(File.read("#{temp_dir}/stemcell.MF"), permitted_classes: [],
permitted_symbols: []) (ensure Psych is required or available in the runtime).
- Around line 15-19: The run_command helper currently executes interpolated
shell strings using backticks (e.g., `#{cmd}`), which is unsafe for filenames
like tarball_path or stemcell_name; change run_command and its call sites (see
invocations around the other uses) to use a safe exec pattern: accept the
command and its args as an Array and invoke Open3.capture3 or system with
argument arrays (e.g., Open3.capture3(*cmd_args) or system(*cmd_args)) so
arguments are passed without shell interpolation, update callers that currently
build interpolated strings to pass arrays instead (replace `"tar -czf
#{tarball_path} #{dir}"` style strings with ["tar", "-czf", tarball_path, dir]
and similarly for stemcell_name).
In `@ci/tasks/light-aws/run-upload-test.yml`:
- Line 17: The YAML key BOSH_DEBUG_LEVEL currently has extra spaces after the
colon ("BOSH_DEBUG_LEVEL:      info"); update this line to use standard
single-space key/value formatting (e.g., "BOSH_DEBUG_LEVEL: info") so it
conforms to YAML style conventions and avoids CI/YAML lint warnings.
In `@ci/tasks/light-aws/tag-aws-ami-light.sh`:
- Line 24: The tar invocation in tag-aws-ami-light.sh uses a bare glob "tar -xzf
*.tgz stemcell.MF" which can treat filenames beginning with "-" as options;
update that command to use a path-prefixed glob (replace "*.tgz" with "./*.tgz")
so the shell expands to ./filename and tar will not misinterpret leading dashes.
- Around line 29-40: The loop currently uses unquoted expansions which can cause
word-splitting; change usages of AMI_LINE, AMI_REGION and AMI_ID to quoted
expansions: use echo "$AMI_LINE" when parsing to derive AMI_REGION and AMI_ID
(e.g. AMI_REGION=$(echo "$AMI_LINE" | cut -f1 -d:) and AMI_ID=$(echo "$AMI_LINE"
| cut -f2 -d:)), and quote the variables when calling aws (aws ec2 create-tags
--resources "$AMI_ID" --region "$AMI_REGION" --tags "$TAGS"). This preserves
values with spaces and prevents accidental splitting while keeping existing
logic in the for-loop intact.
- Line 27: The current line uses eval which is unsafe; instead change the CI
variable GREP_PATTERN to contain only the grep flag+pattern (e.g.
GREP_PATTERN="-v 'gov-\\|cn-'") in the task YAML and update the script line that
sets AMI_LIST to call grep directly without eval (use the pipeline with grep
ami- on stemcell.MF, pipe to tr -d ' ', then pipe to grep $GREP_PATTERN);
reference AMI_LIST, GREP_PATTERN, and stemcell.MF to locate and update the code,
and ensure GREP_PATTERN is quoted appropriately in the YAML so
word-splitting/quoting is preserved.
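The three fixes for this script can be sketched together. The stemcell.MF contents are illustrative, `aws` is stubbed so the sketch stays runnable, and keeping only the pattern in `GREP_PATTERN` (flags stay in the script) is one way to drop `eval` without losing configurability:

```shell
#!/usr/bin/env bash
set -euo pipefail

aws() { echo "aws $*"; }   # stub so the sketch runs without credentials

workdir="$(mktemp -d)"
cd "${workdir}"
printf 'ami:\n  us-east-1: ami-111\n  cn-north-1: ami-222\n' > stemcell.MF
tar -czf light-stemcell.tgz stemcell.MF
rm stemcell.MF

tar -xzf ./*.tgz stemcell.MF   # ./ keeps a leading-dash filename from becoming options
GREP_PATTERN='gov-\|cn-'       # pattern only; no flags, no eval
AMI_LIST="$(grep 'ami-' stemcell.MF | tr -d ' ' | grep -v "${GREP_PATTERN}")"

for AMI_LINE in ${AMI_LIST}; do
  AMI_REGION="$(echo "${AMI_LINE}" | cut -f1 -d:)"
  AMI_ID="$(echo "${AMI_LINE}" | cut -f2 -d:)"
  aws ec2 create-tags --resources "${AMI_ID}" --region "${AMI_REGION}"
done
```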
In `@ci/tasks/light-aws/test-drivers.sh`:
- Around line 59-63: The spec-count detection is fragile and the unquoted
variable is unsafe: replace the manual spec_count computation and usage (the
spec_count="$(grep "It(" -r driver | wc -l)" line and the go run ... -nodes
${spec_count} invocation) by either letting Ginkgo auto-parallelize with the -p
flag (remove spec_count entirely) or, if you must keep a count, compute it
robustly (e.g., grep with -o/regex limited to .go files and --exclude/--include
to avoid comments/strings) and always quote the variable when used
("$spec_count") in the go run github.com/onsi/ginkgo/v2/ginkgo -nodes call;
update the spec_count reference and the ginkgo invocation (remove or quote)
accordingly.
- Around line 52-55: The wget command uses an unquoted expansion of
MACHINE_IMAGE_PATH which can break when the path contains spaces or special
characters; update the wget invocation to quote the output path variable (use
"${MACHINE_IMAGE_PATH}") so the -O argument is passed as a single token, and
ensure any other uses of MACHINE_IMAGE_PATH in this script are similarly quoted
to avoid word-splitting.
- Around line 16-31: The parameter-expansion validations in test-drivers.sh
currently use unquoted variables (e.g., : ${aws_account_id:?must be set}, :
${access_key:?must be set}, and the default assignment :
${uploaded_machine_image_format:=RAW}); update each to use quoted expansions
such as : "${aws_account_id:?must be set}" (and similarly for access_key,
secret_key, bucket_name, region, copy_region, ami_fixture_id,
private_ami_fixture_id, existing_volume_id, existing_snapshot_id,
uploaded_machine_image_url, kms_key_id, kms_multi_region_key,
kms_multi_region_key_replication_test) and quote the default assignment as
"${uploaded_machine_image_format:=RAW}" to satisfy ShellCheck SC2086/SC2223 and
avoid problems with special characters.
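A minimal sketch of the quoted expansions (the variable names mirror the task params; the sample account ID is made up):

```shell
set -u

# With a value present, the quoted `:?` check is a no-op; when the variable
# is unset or empty it aborts with the message after `:?`.
aws_account_id='123456789012'
: "${aws_account_id:?must be set}"

# Quoted default assignment: only assigns when unset or empty (SC2223-clean).
: "${uploaded_machine_image_format:=RAW}"
echo "${uploaded_machine_image_format}"
```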
In `@ci/tasks/light-aws/test-drivers.yml`:
- Around line 10-23: The YAML mapping for the params block (keys like
aws_account_id, access_key, secret_key, bucket_name, region, copy_region,
ami_fixture_id, private_ami_fixture_id, kms_key_id, kms_multi_region_key,
kms_multi_region_key_replication_test, existing_volume_id, existing_snapshot_id,
uploaded_machine_image_url) currently uses aligned colons with extra spaces;
update each mapping so there is exactly one space after the colon and the empty
string value is written as "" (e.g., aws_account_id: ""), collapsing the aligned
spacing to a single space after each colon to satisfy the `colons` YAMLlint
rule.
In `@ci/tasks/light-aws/test-integration.sh`:
- Around line 13-14: The trap cleanup currently uses unquoted ${tmp_dir} which
can break if the temporary path contains spaces; update the trap command that
references tmp_dir (the trap '{ rm -rf ${tmp_dir}; }' EXIT) to quote the
variable (e.g. use "${tmp_dir}") so the rm -rf invocation is safe with spaces or
special characters.
- Around line 39-42: The script currently downloads TinyCore ISO via HTTP in the
test-integration.sh block (setting MACHINE_IMAGE_PATH and MACHINE_IMAGE_FORMAT
and calling wget); replace the plain HTTP URL with a secure HTTPS mirror or
configurable variable to avoid insecure downloads — update the wget target to a
trusted HTTPS mirror (or make MACHINE_IMAGE_URL an environment-configurable
variable used by the existing MACHINE_IMAGE_PATH assignment and wget) so the
download uses HTTPS while preserving the MACHINE_IMAGE_PATH and
MACHINE_IMAGE_FORMAT behavior.
In `@ci/tasks/light-aws/test-integration.yml`:
- Around line 10-14: The YAML keys access_key, secret_key, bucket_name, region,
and copy_region currently have extra spaces after the colon for visual
alignment; update these lines to follow standard YAML style by using a single
space after each colon (e.g., "access_key: \"\"") so the entries are
consistently formatted and pass strict YAML linters.
In `@ci/tasks/light-aws/us-gov-merge-builds.yml`:
- Around line 5-18: Remove the obsolete input "bosh-stemcells-ci" from the
inputs list and update the run path to point at the merged script location in
this repo; specifically delete the inputs entry with name: bosh-stemcells-ci and
change the run.path value from
"bosh-stemcells-ci/ci/tasks/light-aws/merge-builds" to
"ci/tasks/light-aws/merge-builds" so the task uses the local merged merge-builds
script.
In `@ci/tasks/light-google/create-public-image.sh`:
- Line 28: The script currently builds a deterministic image_name from
raw_stemcell_filename and then unconditionally attempts to create the GCE image,
which can fail with ALREADY_EXISTS and wedge the job; modify the flow to first
check for an existing image with that image_name (e.g., use gcloud compute
images describe or list) and if present skip creation and reuse it, and also
make the create step resilient by catching an ALREADY_EXISTS error and treating
it as success (log and continue) rather than failing the job; update the logic
that invokes image creation to reference the image_name variable and branch to
reuse vs create accordingly.
- Around line 7-10: The script currently enables xtrace with set -x while
GCP_SERVICE_ACCOUNT_KEY is still in scope, causing the private JSON to be
printed; disable tracing until after the authentication step (move or postpone
set -x/DEBUG handling) and when writing the key use printf '%s'
"$GCP_SERVICE_ACCOUNT_KEY" | gcloud auth activate-service-account --key-file=-
(or equivalent) instead of echo "$GCP_SERVICE_ACCOUNT_KEY" so the key is not
expanded into the trace or logs; ensure BOSH_LOG_* exports remain unaffected and
re-enable set -x only after gcloud authentication completes if needed.
In `@ci/tasks/light-google/deploy-skeletal.sh`:
- Line 29: The script writes the SSH private key to bosh.pem (via echo
"${SSH_PRIVATE_KEY}" > bosh.pem) and never removes it; update deploy-skeletal.sh
to create a secure temporary file (mktemp), chmod 600, write SSH_PRIVATE_KEY
into that temp file, use the temp file for SSH/bosh operations, and register a
trap (trap 'rm -f "$TMPKEY"' EXIT INT TERM) to securely delete the temp file on
exit/failure (or use shred before rm if available) to ensure the private key is
cleaned up.
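The mktemp-plus-trap pattern could look roughly like this (the key material is a fake placeholder; in the task it arrives via the environment):

```shell
set -euo pipefail

# Fake placeholder standing in for the real credential.
SSH_PRIVATE_KEY='-----BEGIN FAKE KEY-----'

# Create the file first, lock down permissions, and guarantee cleanup on any exit.
TMPKEY="$(mktemp)"
chmod 600 "${TMPKEY}"
trap 'rm -f "${TMPKEY}"' EXIT INT TERM
printf '%s\n' "${SSH_PRIVATE_KEY}" > "${TMPKEY}"

# ... ssh/bosh operations would reference "${TMPKEY}" here ...
echo "key staged at a temp path with 600 permissions"
```

Registering the trap before writing the key means even a failure mid-write still removes the file.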
- Line 22: In the mbus_password assignment, the tr invocation uses an
unnecessary bracketed character class ('[/+]') which also removes literal
brackets; update the command used to set mbus_password (the openssl ... | tr -d
... pipeline) to remove only '/' and '+' by replacing the bracketed class with
the simpler '/+' pattern, and scan the script for any other occurrences of tr -d
'[/+]' to change similarly.
In `@ci/tasks/light-google/deploy-skeletal.yml`:
- Line 18: The YAML key declaration for SSH_PRIVATE_KEY has excessive spaces
after the colon which violates the yamllint `colons` rule; update the line
containing the SSH_PRIVATE_KEY key so there is exactly one space after the colon
(e.g., change SSH_PRIVATE_KEY: "" to SSH_PRIVATE_KEY: "") and ensure other
key lines use the same single-space colon spacing for consistency.
In `@ci/tasks/light-google/destroy-skeletal.sh`:
- Around line 16-25: The script currently captures the exit code of "bosh -n
delete-env ./skeletal-deployment.yml" into exit_code but always continues with a
success exit; update the teardown block so that after checking exit_code you
exit with that value on failure instead of returning success. Specifically, keep
the delete-env and exit_code capture, and change the conditional around
exit_code to call "exit $exit_code" (or otherwise propagate the non-zero status)
when exit_code is not zero; reference the existing exit_code variable and the
"bosh -n delete-env ./skeletal-deployment.yml" invocation to locate where to
make the change.
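Capturing and propagating the status might look like this sketch (a failing stub stands in for `bosh -n delete-env`; the real script would run the commented-out `exit` to propagate the code instead of only reporting it):

```shell
set -u

# Stub standing in for `bosh -n delete-env ./skeletal-deployment.yml`.
bosh_delete_env() { false; }

exit_code=0
bosh_delete_env || exit_code=$?

if [ "${exit_code}" -ne 0 ]; then
  echo "delete-env failed with status ${exit_code}" >&2
  # exit "${exit_code}"   # propagate the failure instead of exiting 0
fi
```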
In `@ci/tasks/light-google/make-raw-from-heavy-stemcell.sh`:
- Around line 13-18: The existence probe builds the object path without the
bucket name—fix the stemcell_url function by prepending BUCKET_NAME into the
resource path so it constructs
"/${BUCKET_NAME}/${STEMCELL_BUCKET_PATH}/${light_stemcell_name}" instead of
"/${STEMCELL_BUCKET_PATH}/…"; update any other similar occurrences (the other
resource/url constructions later in the script that reference
STEMCELL_BUCKET_PATH and light_stemcell_name) to include BUCKET_NAME as well so
the “already exists” check targets the correct GCS object path.
- Around line 19-25: The script incorrectly references possibly unset AWS vars
and calls Python 2; change the AWS checks and uses to safe parameter expansion
(e.g. replace [ ! -z "$AWS_ACCESS_KEY_ID" ] and direct uses of
$AWS_SECRET_ACCESS_KEY with tests and expansions using ${AWS_ACCESS_KEY_ID:-}
and ${AWS_SECRET_ACCESS_KEY:-} so the script won't exit under set -u), and
replace the Python 2 URL-encoding call that uses urllib.quote_plus with a Python
3 compatible invocation (use urllib.parse.quote_plus) or a POSIX-safe
alternative, ensuring the variable name signature (produced from string_to_sign
and openssl HMAC) is passed and URL-encoded correctly.
In `@ci/tasks/light-google/make-raw-from-heavy-stemcell.yml`:
- Around line 15-20: The YAML has inconsistent spacing after colons for
parameter keys (e.g., BUCKET_NAME, STEMCELL_BUCKET_PATH, AWS_ACCESS_KEY_ID,
AWS_SECRET_ACCESS_KEY) which YAMLlint flags; fix by normalizing to standard YAML
key: value formatting (single space after the colon and no extra alignment
spaces) for those keys and S3_API_ENDPOINT so the file uses consistent "KEY:
value" style throughout.
In `@ci/tasks/light-google/terraform/input.tf`:
- Around line 5-7: The variable block for gce_credentials_json is not marked
sensitive, so its value can leak into state and logs; update the variable
"gce_credentials_json" block to include sensitive = true (while keeping type =
string) so Terraform treats the credential value as sensitive and omits it from
CLI output and logs.
In `@ci/tasks/light-google/terraform/skeletal.tf`:
- Around line 20-35: The firewall resource
google_compute_firewall.allow-ssh-mbus currently uses source_ranges =
["0.0.0.0/0"]; change this to a parameterized variable (e.g., var.allowed_cidrs
or var.concourse_egress_ranges) and update the resource to use that variable
(source_ranges = var.allowed_cidrs) so only the explicit allowlist is permitted;
add a new terraform variable definition for allowed_cidrs (type = list(string))
and document or set it in the environment/terraform.tfvars to include the
Concourse egress CIDRs rather than allowing 0.0.0.0/0.
In `@ci/tasks/os-images/build.sh`:
- Around line 15-21: The check_param function currently uses eval to read the
environment variable (security risk and allows execution); change it to use
indirect expansion to fetch the variable (use the variable name $name via
${!name}) and validate both that it's not empty and not equal to 'replace-me',
returning an error and exit if either check fails; update the error message to
reference the variable name and ensure this logic is implemented in the
check_param function.
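One way to write the eval-free check (the parameter names and values below are examples, not the task's real inputs):

```shell
set -u

# Indirect expansion `${!name}` looks up the variable named by $name
# without eval'ing anything.
check_param() {
  local name="$1"
  local value="${!name:-}"
  if [ -z "${value}" ] || [ "${value}" = "replace-me" ]; then
    echo "ERROR: ${name} must be set (and not 'replace-me')" >&2
    return 1
  fi
}

OPERATING_SYSTEM_NAME='ubuntu'
OPERATING_SYSTEM_VERSION='replace-me'
check_param OPERATING_SYSTEM_NAME && echo "name ok"
check_param OPERATING_SYSTEM_VERSION 2>/dev/null || echo "version rejected"
```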
In `@ci/tasks/os-images/build.yml`:
- Around line 18-19: The two YAML keys OPERATING_SYSTEM_NAME and
OPERATING_SYSTEM_VERSION have two spaces after the colon; change them to use a
single space after the colon to match the rest of the file (make
"OPERATING_SYSTEM_NAME: replace-me" and "OPERATING_SYSTEM_VERSION: replace-me"
use one space after each colon).
In `@ci/tasks/publish.sh`:
- Around line 31-34: The git commit step currently fails the task when there are
no changes (e.g., identical regenerated metalink); modify the publish.sh logic
around the git add -A / git commit -m "$COMMIT_PREFIX:
$OS_NAME-$OS_VERSION/$VERSION" sequence to detect whether there are staged
changes first (e.g., check git status --porcelain or git diff --staged --quiet)
and only run git commit if changes exist, otherwise skip the commit and continue
with the copy phase; alternatively, make the commit non-fatal by allowing its
non-zero exit (e.g., fall back to a no-op) so reruns that produce no changes do
not abort the task.
- Around line 73-76: The retry loop using "for i in {1..4}; do aws
--endpoint-url=${AWS_ENDPOINT} s3 cp \"s3://$FROM_BUCKET_NAME/$file\"
\"s3://$TO_BUCKET_NAME/$file\" && break || sleep 5" can falsely return success
because the final sleep returns 0; modify the loop to set a boolean flag (e.g.,
copy_succeeded=false; set to true only when the aws s3 cp command returns 0) and
break, and after the loop explicitly check that flag (copy_succeeded) and handle
failure (exit non-zero or skip writing the version tag) before proceeding to
write the version tag; reference variables AWS_ENDPOINT, FROM_BUCKET_NAME,
TO_BUCKET_NAME and the loop around "file" to locate where to add the flag and
check.
In `@ci/tasks/publish.yml`:
- Around line 22-33: The params block currently uses extra padding for alignment
(e.g., lines starting with AWS_ACCESS_KEY_ID:, AWS_SECRET_ACCESS_KEY:,
AWS_ROLE_ARN:, etc.), which triggers a YAMLlint `colons` violation; update each
mapping in the params block so there is exactly one space after the colon (e.g.,
"AWS_ACCESS_KEY_ID: required" rather than padded columns) throughout the
publish.yml params section to satisfy the linter.
In `@ci/tasks/teardown.sh`:
- Around line 16-19: Replace legacy backtick command substitutions with $(...)
and separate variable declaration from export so command failures are not
masked: run bosh int commands using BOSH_ENVIRONMENT=$(bosh int
"${REPO_PARENT}/director-creds.yml" --path /internal_ip) (and similarly for
BOSH_CA_CERT and BOSH_CLIENT_SECRET using their respective --path values) and
then export each variable with export BOSH_ENVIRONMENT BOSH_CA_CERT
BOSH_CLIENT_SECRET; keep BOSH_CLIENT=admin exported as-is. Ensure you reference
the exact variable names BOSH_ENVIRONMENT, BOSH_CA_CERT, BOSH_CLIENT and
BOSH_CLIENT_SECRET and the bosh int invocations when making the changes.
In `@ci/tasks/test-stemcell.sh`:
- Around line 34-41: The temp file created by mktemp and stored in
jumpbox_private_key is never removed; add a cleanup trap so the file is deleted
on script exit or error and the exported BOSH_GW_PRIVATE_KEY is unset if needed.
After creating jumpbox_private_key in the block that uses mktemp and before
exporting BOSH_GW_PRIVATE_KEY/BOSH_GW_USER/BOSH_GW_HOST, register a trap (e.g.,
trap 'rm -f "${jumpbox_private_key}"; unset BOSH_GW_PRIVATE_KEY' EXIT) so the
private key is removed when the script finishes or is terminated; ensure the
trap only runs if jumpbox_private_key is set to avoid removing unrelated files.
In `@ci/tasks/test-unit.sh`:
- Line 14: The script runs a non-interactive CI step that calls "apt install
sudo" which will prompt and hang; change the invocation referenced in the
ci/tasks/test-unit.sh line containing "apt install sudo" to a non-interactive
form (e.g., use apt-get and add the -y flag and noninteractive environment) so
the installer does not wait for confirmation; ensure you update the command to
something like using apt-get install -y sudo (and consider prefixing with
DEBIAN_FRONTEND=noninteractive and/or running apt-get update first).
- Line 21: The assignment to OS_IMAGE uses readlink -f on a glob
("../../os-image-tarball/*.tgz") which can expand to multiple files; detect and
handle that case explicitly in the test setup: resolve the matching .tgz entries
(use read or find to list matches), if more than one match exit with a clear
error message, if zero matches exit with a missing-file error, otherwise set
OS_IMAGE to the single resolved path; update the OS_IMAGE assignment logic (the
line using readlink -f and the surrounding shell logic) to perform this
defensive check so the script fails fast or selects the single intended tarball.
In `@ci/tasks/usn-processing/usn-shared-functions.sh`:
- Around line 24-26: The split on package_version currently uses cut which
breaks when the version contains Debian epochs (extra ':'), so change to shell
parameter expansion: set package using the prefix before the first colon (use
${package_version%%:*}) and set version to the remainder after the first colon
(use ${package_version#*:}) so the epoch and any additional colons remain in
version; update the assignments that currently set package and version (the
package_version, package, and version variables) accordingly.
- Around line 96-103: The script enables set -u which causes unguarded variable
expansions to error; update conditional checks and uses of ESM_TOKEN and OS to
use safe parameter-expansion (e.g. [ -n "${ESM_TOKEN-}" ] and [ -z "${OS-}" ] or
${ESM_TOKEN:-}) so they don't fail when variables are unset, and update
enable_esm() to reference the token as "${ESM_TOKEN-}" (or "${ESM_TOKEN:-}")
wherever it currently uses "${ESM_TOKEN}" so optional checks and the error path
print the intended message instead of triggering an unbound variable error.
- Around line 90-93: The is_package_installed function uses grep -q "$package$"
which treats the package as a regex and matches suffixes (e.g., "barfoo"),
causing false positives; update the grep call in is_package_installed to use
literal whole-line matching (grep -F -x) so the package name is matched exactly
(treat package as a fixed string and require the whole line to equal it) and
return the grep exit status as before.
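The difference between a regex suffix match and a fixed whole-line match (the package names here are invented):

```shell
set -euo pipefail

list="$(mktemp)"
printf '%s\n' 'barfoo' 'foo' > "${list}"

# 'foo$' also matches "barfoo"; -F -x requires the whole line to equal "foo".
suffix_hits="$(grep -c 'foo$' "${list}")"
exact_hits="$(grep -F -x -c 'foo' "${list}")"
echo "${suffix_hits} ${exact_hits}"

rm -f "${list}"
```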
In `@ci/tasks/write-bump-message.yml`:
- Line 8: The YAML sets MESSAGE_PREFIX: ~ which YAML parses as null; change the
value to a quoted tilde string so the prefix remains a literal "~" (e.g.,
replace MESSAGE_PREFIX: ~ with MESSAGE_PREFIX: "~") in the
ci/tasks/write-bump-message.yml to preserve the intended default; ensure
MESSAGE_PREFIX is treated as a string not null.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 376b4acc-5b0e-47ec-b8ab-7a76d3e8a029
📒 Files selected for processing (93)
.gitignore, ci/README.md, ci/bin/format-usn-log, ci/bosh-core-stemcells-iam.json, ci/bosh-os-images-iam.json, ci/docs/fips.md, ci/docs/publish.md, ci/ops-files/disable-ephemeral-ip.yml, ci/ops-files/ipv6-cc.yml, ci/ops-files/ipv6-director.yml, ci/ops-files/reserve-ips.yml, ci/ops-files/resource-pool-cc.yml, ci/pipelines/ubuntu-jammy-publisher/configure.sh, ci/pipelines/ubuntu-jammy-publisher/pipeline.yml, ci/pipelines/ubuntu-jammy-publisher/stemcells.yml, ci/pipelines/ubuntu-jammy/configure.sh, ci/pipelines/ubuntu-jammy/pipeline.yml, ci/pipelines/ubuntu-jammy/stemcells.yml, ci/pipelines/ubuntu-noble-publisher/configure.sh, ci/pipelines/ubuntu-noble-publisher/pipeline.yml, ci/pipelines/ubuntu-noble-publisher/stemcells.yml, ci/pipelines/ubuntu-noble/configure.sh, ci/pipelines/ubuntu-noble/pipeline.yml, ci/pipelines/ubuntu-noble/stemcells.yml, ci/tasks/bats/iaas/gcp/prepare-bats-config.sh, ci/tasks/bats/iaas/gcp/prepare-bats-config.yml, ci/tasks/build-docker-args.sh, ci/tasks/build-docker-args.yml, ci/tasks/build-release-metadata.sh, ci/tasks/build-release-metadata.yml, ci/tasks/build.sh, ci/tasks/build.yml, ci/tasks/bump-bosh-agent.sh, ci/tasks/bump-bosh-agent.yml, ci/tasks/bump-bosh-blobstore-cli.sh, ci/tasks/bump-bosh-blobstore-cli.yml, ci/tasks/check-if-usn-is-in-github.sh, ci/tasks/check-if-usn-is-in-github.yml, ci/tasks/check-if-usns-are-applicable.sh, ci/tasks/check-if-usns-are-applicable.yml, ci/tasks/check-usn-packages.sh, ci/tasks/check-usn-packages.yml, ci/tasks/commit-build-time.sh, ci/tasks/commit-build-time.yml, ci/tasks/gcp/cleanup-bats-vms.sh, ci/tasks/gcp/cleanup-bats-vms.yml, ci/tasks/gcp/deploy-director-ipv6.sh, ci/tasks/gcp/deploy-director-ipv6.yml, ci/tasks/gcp/deploy-director.sh, ci/tasks/gcp/deploy-director.yml, ci/tasks/light-aws/build.sh, ci/tasks/light-aws/build.yml, ci/tasks/light-aws/cleanup-ami.sh, ci/tasks/light-aws/cleanup-ami.yml, ci/tasks/light-aws/merge-builds, ci/tasks/light-aws/run-upload-test.sh, ci/tasks/light-aws/run-upload-test.yml, ci/tasks/light-aws/tag-aws-ami-light.sh, ci/tasks/light-aws/tag-aws-ami-light.yml, ci/tasks/light-aws/test-drivers.sh, ci/tasks/light-aws/test-drivers.yml, ci/tasks/light-aws/test-integration.sh, ci/tasks/light-aws/test-integration.yml, ci/tasks/light-aws/test-unit.sh, ci/tasks/light-aws/test-unit.yml, ci/tasks/light-aws/us-gov-merge-builds.yml, ci/tasks/light-google/assets/public-image-stemcell-ops.yml, ci/tasks/light-google/create-public-image.sh, ci/tasks/light-google/create-public-image.yml, ci/tasks/light-google/deploy-skeletal.sh, ci/tasks/light-google/deploy-skeletal.yml, ci/tasks/light-google/destroy-skeletal.sh, ci/tasks/light-google/destroy-skeletal.yml, ci/tasks/light-google/make-raw-from-heavy-stemcell.sh, ci/tasks/light-google/make-raw-from-heavy-stemcell.yml, ci/tasks/light-google/skeletal-deployment.yml, ci/tasks/light-google/terraform/input.tf, ci/tasks/light-google/terraform/output.tf, ci/tasks/light-google/terraform/skeletal.tf, ci/tasks/os-images/build.sh, ci/tasks/os-images/build.yml, ci/tasks/publish.sh, ci/tasks/publish.yml, ci/tasks/teardown.sh, ci/tasks/teardown.yml, ci/tasks/test-stemcell.sh, ci/tasks/test-stemcell.yml, ci/tasks/test-unit.sh, ci/tasks/test-unit.yml, ci/tasks/usn-processing/usn-shared-functions.sh, ci/tasks/write-bump-message.yml, docs/develop.md, docs/new_stemcell_line.md
💤 Files with no reviewable changes (3)
- .gitignore
- docs/new_stemcell_line.md
- docs/develop.md
ci/bin/format-usn-log
Outdated
#!/bin/bash

jq -r --slurp < $1 '.[] |
"
**Title**: " + .title +"
**URL**: " + .url +"
**Priorities**: " + (.priorities | join(",")) +"
**Description**:
" + .description +"
**CVEs**:
" + (.cves | map(" - " + .) | join("\n"))
'
🧹 Nitpick | 🔵 Trivial
Recommended: Add input validation and error handling.
The script lacks validation that an argument is provided and has no error handling for malformed JSON or jq failures. While this may not cause issues in the controlled CI environment, adding basic checks improves robustness.
🛡️ Proposed improvements
#!/bin/bash
+
+set -euo pipefail
+
+if [[ $# -lt 1 ]]; then
+ echo "Usage: $0 <usn-log-file.json>" >&2
+ exit 1
+fi
+
+if [[ ! -f "$1" ]]; then
+ echo "Error: File '$1' not found" >&2
+ exit 1
+fi
 jq -r --slurp < "$1" '.[] |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/bin/format-usn-log` around lines 1 - 12, The script format-usn-log
currently assumes $1 exists and that jq will succeed; add input validation and
error handling by first checking that the positional parameter $1 is provided
and points to a readable file, emit a usage message and exit non‑zero if not,
then run jq with flags that enable failure detection (e.g., using --exit-status)
and capture jq's exit code; on jq failure print a clear error including jq's
stderr output and exit with that code so malformed JSON or jq errors don't
produce silent bad output.
ci/bin/format-usn-log
Outdated
@@ -0,0 +1,12 @@
#!/bin/bash

jq -r --slurp < $1 '.[] |
Quote the filename variable to prevent word splitting.
The unquoted $1 at line 3 can cause failures if the filename contains spaces or special characters. Always quote variable expansions in shell scripts unless word splitting is explicitly intended.
🛡️ Proposed fix
-jq -r --slurp < $1 '.[] |
+jq -r --slurp < "$1" '.[] |

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-jq -r --slurp < $1 '.[] |
+jq -r --slurp < "$1" '.[] |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/bin/format-usn-log` at line 3, The jq invocation in ci/bin/format-usn-log
uses an unquoted $1 which can break on filenames with spaces or special
characters; update the command that reads the input (the line invoking jq -r
--slurp < $1 '.[] |') to quote the file variable so the filename is passed as a
single argument (e.g. use "< \"$1\"" or otherwise quote "$1"); ensure any other
uses of $1 in this script are similarly quoted to prevent word splitting.
ci/bosh-core-stemcells-iam.json
Outdated
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": [
    "arn:aws:s3:::bosh-core-stemcells/*"
  ]
}
🧹 Nitpick | 🔵 Trivial
Consider adding encryption requirements and access conditions.
The policy does not enforce encryption or restrict access by IP, MFA, or other conditions. While not strictly required, adding these constraints strengthens the security posture.
🛡️ Example: Enforce server-side encryption
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::bosh-core-stemcells/*"
+ ],
+ "Condition": {
+ "StringEquals": {
+ "s3:x-amz-server-side-encryption": "AES256"
+ }
+ }
+ }
+ ]
+}

Optionally, you can also add IP restrictions or require MFA in the Condition block.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/bosh-core-stemcells-iam.json` around lines 4 - 13, The policy statement
that grants "Effect": "Allow" for "Action": ["s3:PutObject","s3:PutObjectAcl"]
against "Resource": ["arn:aws:s3:::bosh-core-stemcells/*"] should be hardened by
adding a Condition block that enforces server-side encryption (e.g., require
s3:x-amz-server-side-encryption or aws:SecureTransport), and optionally add IP
or MFA constraints (aws:SourceIp or aws:MultiFactorAuthPresent) as needed;
update the IAM statement containing those Actions/Resource values to include
those Condition keys so uploads must use SSE and, if desired, originate from
allowed IPs or have MFA present.
ci/bosh-core-stemcells-iam.json
Outdated
"Action": [
  "s3:PutObject",
  "s3:PutObjectAcl"
Evaluate whether s3:PutObjectAcl is necessary.
Granting s3:PutObjectAcl allows the principal to modify object ACLs, including setting them to public-read. If CI credentials are compromised, an attacker could expose stemcells publicly. Consider whether ACL modification is required for your CI workflow or if bucket-level policies can handle access control instead.
🔒 Proposed fix to remove ACL permission if not required
"Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"
+ "s3:PutObject"
 ],

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/bosh-core-stemcells-iam.json` around lines 6 - 8, Review the IAM statement
that grants "s3:PutObject" and "s3:PutObjectAcl" and determine if ACL changes
are required by your CI; if not, remove "s3:PutObjectAcl" from the Action list
to eliminate ability to modify object ACLs, and instead enforce access via
bucket-level controls (e.g., bucket policy and S3 Object Ownership set to
BucketOwnerEnforced) or ensure uploads use the correct canned ACLs server-side;
update the policy where the Actions include "s3:PutObjectAcl" to remove that
permission and document the replacement control.
ci/ops-files/ipv6-cc.yml
Outdated
dns:
- 8.8.8.8
Avoid hardcoding IPv4 DNS in the IPv6 ops-file.
On Line 13–14, forcing 8.8.8.8 can fail in IPv6-only or restricted networks. Make DNS configurable (or provide IPv6 resolvers) to avoid compilation-time resolution failures.
Proposed fix
dns:
- - 8.8.8.8
+ - ((second_network_dns))

📝 Committable suggestion
dns:
- ((second_network_dns))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/ops-files/ipv6-cc.yml` around lines 13 - 14, Replace the hardcoded IPv4
DNS entry under the dns key (currently "8.8.8.8") so the ops-file works in
IPv6-only environments: change the literal to a configurable placeholder (e.g.,
((dns_servers))) or provide IPv6 resolver addresses (e.g., 2001:4860:4860::8888)
and document the parameter; update the dns entry in ipv6-cc.yml to use the
placeholder or include IPv6 addresses instead of the fixed "8.8.8.8".
ci/tasks/os-images/build.yml
Outdated
OPERATING_SYSTEM_NAME:  replace-me
OPERATING_SYSTEM_VERSION:  replace-me
Fix inconsistent spacing after colons.
Lines 18-19 have extra spaces after the colon, inconsistent with the rest of the file (lines 20-21 use single space).
🧹 Proposed fix
params:
- OPERATING_SYSTEM_NAME: replace-me
- OPERATING_SYSTEM_VERSION: replace-me
+ OPERATING_SYSTEM_NAME: replace-me
+ OPERATING_SYSTEM_VERSION: replace-me
ESM_TOKEN:
UBUNTU_ADVANTAGE_TOKEN:

📝 Committable suggestion
params:
  OPERATING_SYSTEM_NAME: replace-me
  OPERATING_SYSTEM_VERSION: replace-me
  ESM_TOKEN:
  UBUNTU_ADVANTAGE_TOKEN:
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 18-18: too many spaces after colon
(colons)
[error] 19-19: too many spaces after colon
(colons)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/tasks/os-images/build.yml` around lines 18 - 19, The two YAML keys
OPERATING_SYSTEM_NAME and OPERATING_SYSTEM_VERSION have two spaces after the
colon; change them to use a single space after the colon to match the rest of
the file (make "OPERATING_SYSTEM_NAME: replace-me" and
"OPERATING_SYSTEM_VERSION: replace-me" use one space after each colon).
ci/tasks/publish.sh
Outdated
git add -A
git config --global user.email "ci@localhost"
git config --global user.name "CI Bot"
git commit -m "$COMMIT_PREFIX: $OS_NAME-$OS_VERSION/$VERSION"
Guard the git commit so reruns can keep publishing.
If the regenerated metalink is identical to what is already in stemcells-index, git commit exits 1 and the task stops before the copy phase. Reflows/retries should treat that as a no-op.
Suggested fix
git add -A
git config --global user.email "ci@localhost"
git config --global user.name "CI Bot"
-git commit -m "$COMMIT_PREFIX: $OS_NAME-$OS_VERSION/$VERSION"
+if ! git diff --cached --quiet; then
+ git commit -m "$COMMIT_PREFIX: $OS_NAME-$OS_VERSION/$VERSION"
+else
+ echo "No stemcells-index changes to commit"
+fi

📝 Committable suggestion
git add -A
git config --global user.email "ci@localhost"
git config --global user.name "CI Bot"
if ! git diff --cached --quiet; then
  git commit -m "$COMMIT_PREFIX: $OS_NAME-$OS_VERSION/$VERSION"
else
  echo "No stemcells-index changes to commit"
fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/tasks/publish.sh` around lines 31 - 34, The git commit step currently
fails the task when there are no changes (e.g., identical regenerated metalink);
modify the publish.sh logic around the git add -A / git commit -m
"$COMMIT_PREFIX: $OS_NAME-$OS_VERSION/$VERSION" sequence to detect whether there
are staged changes first (e.g., check git status --porcelain or git diff
--staged --quiet) and only run git commit if changes exist, otherwise skip the
commit and continue with the copy phase; alternatively, make the commit
non-fatal by allowing its non-zero exit (e.g., fall back to a no-op) so reruns
that produce no changes do not abort the task.
ci/tasks/publish.sh
Outdated
for i in {1..4}; do
  aws --endpoint-url=${AWS_ENDPOINT} s3 cp "s3://$FROM_BUCKET_NAME/$file" "s3://$TO_BUCKET_NAME/$file" \
    && break \
    || sleep 5
🧩 Analysis chain
🏁 Script executed:
cat ci/tasks/publish.sh | head -100 | tail -40

Repository: cloudfoundry/bosh-linux-stemcell-builder
Length of output: 1315
Fix the retry loop to explicitly verify success before continuing.
The loop can exit with status 0 after four failed copy attempts. When aws s3 cp fails on the final iteration, sleep 5 executes and returns 0, causing the loop to exit successfully. The script then immediately writes the version tag without checking if any files were actually copied, potentially creating version tags for missing objects in TO_BUCKET_NAME.
Use a flag to track success and explicitly verify it after the loop:
Suggested fix
- for i in {1..4}; do
- aws --endpoint-url=${AWS_ENDPOINT} s3 cp "s3://$FROM_BUCKET_NAME/$file" "s3://$TO_BUCKET_NAME/$file" \
- && break \
- || sleep 5
- done
+ copied=false
+ for _ in {1..4}; do
+ if aws --endpoint-url="${AWS_ENDPOINT}" s3 cp "s3://$FROM_BUCKET_NAME/$file" "s3://$TO_BUCKET_NAME/$file"; then
+ copied=true
+ break
+ fi
+ sleep 5
+ done
+ if [ "$copied" != true ]; then
+ echo "Failed to copy $file after 4 attempts" >&2
+ exit 1
+ fi

🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 73-73: `i` appears unused. Verify use (or export if used externally). (SC2034)
[info] 74-74: Double quote to prevent globbing and word splitting. (SC2086)
[info] 75-75: Note that `A && B || C` is not if-then-else. `C` may run when `A` is true. (SC2015)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/tasks/publish.sh` (around lines 73-76): the retry loop `for i in {1..4};
do aws --endpoint-url=${AWS_ENDPOINT} s3 cp "s3://$FROM_BUCKET_NAME/$file"
"s3://$TO_BUCKET_NAME/$file" && break || sleep 5; done` can falsely return
success because the final `sleep` exits 0. Set a flag (e.g.,
`copy_succeeded=false`, flipped to `true` only when the `aws s3 cp` command
returns 0) before the `break`, then check that flag explicitly after the loop and
handle failure (exit non-zero, or skip writing the version tag) before
proceeding. Reference the variables `AWS_ENDPOINT`, `FROM_BUCKET_NAME`,
`TO_BUCKET_NAME`, and the loop over `$file` to locate where to add the flag and
check.
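The flag-based pattern can be exercised with a stand-in command (a sketch: `flaky_copy` is a hypothetical replacement for the real `aws s3 cp` invocation, succeeding only on its third call, and the 5-second wait is shortened to zero):

```shell
#!/bin/sh
# Sketch of the flag-based retry: `flaky_copy` stands in for `aws s3 cp`
# and fails twice before succeeding.
attempts=0
flaky_copy() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

copied=false
for _ in 1 2 3 4; do
  if flaky_copy; then
    copied=true
    break
  fi
  sleep 0   # the real task waits 5 seconds between attempts
done

if [ "$copied" != true ]; then
  echo "copy failed after 4 attempts" >&2
  exit 1
fi
echo "copied on attempt $attempts"
```

Unlike `cmd && break || sleep 5`, the post-loop flag check makes four consecutive failures surface as a non-zero exit instead of silently falling through to the version-tag write.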
ci/tasks/publish.yml
Outdated
```yaml
  AWS_ACCESS_KEY_ID:     required
  AWS_SECRET_ACCESS_KEY: required
  AWS_ROLE_ARN:
  AWS_DEFAULT_REGION:    us-east-1
  FROM_BUCKET_NAME:      required
  TO_BUCKET_NAME:        required
  FROM_INDEX:            required
  TO_INDEX:              required
  COMMIT_PREFIX:         publish
  COPY_KEYS:             required
  OS_NAME:               required
  OS_VERSION:            required
```
Fix the params block spacing so the task passes YAMLlint.
This block is padded for alignment, but the repo's linter treats that as a colons violation. Use a single space after each : here.
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 22-22: too many spaces after colon (colons)
[error] 25-25 through 33-33: too many spaces after colon (colons)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/tasks/publish.yml` (lines 22-33): the params block uses extra padding for
alignment (e.g., the lines starting with `AWS_ACCESS_KEY_ID:`,
`AWS_SECRET_ACCESS_KEY:`, `AWS_ROLE_ARN:`, etc.), which triggers YAMLlint's
`colons` rule. Update each mapping in the params block so there is exactly one
space after the colon (e.g., `AWS_ACCESS_KEY_ID: required` rather than padded
columns) to satisfy the linter.
ci/tasks/write-bump-message.yml
Outdated
```yaml
- name: message

params:
  MESSAGE_PREFIX: ~
```
Quote the default MESSAGE_PREFIX value.
On Line 8, MESSAGE_PREFIX: ~ is parsed as YAML null, so the prefix becomes empty. Use a quoted string to keep the intended literal tilde.
Proposed fix
```diff
 params:
-  MESSAGE_PREFIX: ~
+  MESSAGE_PREFIX: "~"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-  MESSAGE_PREFIX: ~
+  MESSAGE_PREFIX: "~"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ci/tasks/write-bump-message.yml` (line 8): `MESSAGE_PREFIX: ~` is parsed by
YAML as null. Change the value to a quoted tilde string so the default remains
the literal `"~"` (i.e., replace `MESSAGE_PREFIX: ~` with `MESSAGE_PREFIX: "~"`),
ensuring MESSAGE_PREFIX is treated as a string rather than null.
ddde62c to 606a371 (Compare)
606a371 to 96c7f29 (Compare)
NOTE: pipelines will need to be re-flown
Next steps
- `ubuntu-jammy` forward to `ubuntu-noble`
- `ubuntu-jammy`: remove `jammy` from pipeline directory names
- `ubuntu-noble`: remove `noble` from pipeline directory names
- `ubuntu-jammy` forward to `ubuntu-noble`