feat: add dashd integration tests for SPV sync #464
xdustinface wants to merge 4 commits into v0.42-dev
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews, or the checkboxes below for quick actions.
📝 Walkthrough

Adds CI caching and setup for Dash Core and test-data, a contrib/setup-dashd.py installer, extensive SPV and FFI test harnesses (node, contexts, callback tracker), and many end-to-end integration tests covering sync, transactions, disconnects, and restarts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Tester as Tester / CI
    participant Setup as contrib/setup-dashd.py
    participant Dashd as Dash Core (regtest)
    participant Client as SPV / FFI Client
    participant RPC as Dashd RPC

    Tester->>Setup: request DASHD_PATH and DASHD_DATADIR (cache_dir)
    Setup-->>Tester: export DASHD_PATH, DASHD_DATADIR
    Tester->>Dashd: start node using DASHD_PATH/DATADIR
    Dashd->>RPC: initialize wallets / mining (RPC ready)
    Tester->>Client: spawn FFI/SPV client configured with Dashd P2P addr
    Client->>Dashd: P2P sync (headers, filters, blocks)
    Dashd-->>Client: emit blocks/headers/events
    Client->>Tester: callbacks/events (network, sync, wallet)
    Tester->>RPC: drive actions (generate blocks, send txs, disconnect peers)
    RPC->>Dashd: apply actions (mine/send/disconnect)
    Dashd-->>Client: subsequent events, which Tester validates via tracker/assertions
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
Force-pushed from 10e52ea to 41ba821 (compare)
Force-pushed from 41ba821 to f94c739 (compare)
Force-pushed from f94c739 to 6af3ade (compare)
Actionable comments posted: 8
🧹 Nitpick comments (3)
.github/workflows/build-and-test.yml (1)
46-46: Include `TEST_DATA_REPO` in cache key to avoid stale cross-repo cache hits.

If `TEST_DATA_REPO` is changed while version tags remain the same, the cache key can still collide.

♻️ Suggested key tweak

```diff
-          key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }}
+          key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_REPO }}-${{ env.TEST_DATA_VERSION }}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-and-test.yml at line 46, The cache key for the Rust dashcore test job currently uses rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }} which can collide when TEST_DATA_REPO changes; update the cache key to include the TEST_DATA_REPO environment variable (e.g. add -${{ env.TEST_DATA_REPO }} into the key string) so the key uniquely encodes which test-data repo is used (modify the 'key:' entry in the workflow accordingly).

dash-spv/src/test_utils/node.rs (1)
221-227: Don't treat every `load_wallet` failure as "wallet missing."

Line 223 currently catches all failures and immediately attempts `create_wallet`, which can hide real RPC/auth/connectivity issues.

Proposed refactor

```diff
     pub fn ensure_wallet(&self, wallet_name: &str) {
         let client = self.rpc_client();
         match client.load_wallet(wallet_name) {
             Ok(_) => tracing::info!("Loaded wallet: {}", wallet_name),
-            Err(_) => {
+            Err(e)
+                if matches!(
+                    e,
+                    dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::Error::Rpc(ref rpc_err))
+                        if rpc_err.message.contains("not found")
+                ) =>
+            {
                 client
                     .create_wallet(wallet_name, None, None, None, None)
                     .unwrap_or_else(|e| panic!("failed to create wallet '{}': {}", wallet_name, e));
                 tracing::info!("Created wallet: {}", wallet_name);
             }
+            Err(e) => panic!("failed to load wallet '{}': {}", wallet_name, e),
         }
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 221 - 227, The code currently treats any error from client.load_wallet(wallet_name) as "wallet missing" and always calls client.create_wallet, which can mask RPC/auth/connectivity failures; change the match on client.load_wallet to inspect the Err value and only call client.create_wallet when the error clearly indicates a missing/nonexistent wallet (e.g., match on the specific error variant or string like "not found"/WalletNotFound), otherwise propagate or log and return the error so real failures aren't swallowed; keep the success branch tracing::info!("Loaded wallet: {}", wallet_name) and on the create branch call client.create_wallet(wallet_name, ...) only for the missing-wallet case and handle its Result (panic or return) as currently done.

dash-spv/tests/dashd_sync/tests_basic.rs (1)
36-37: Extract the empty-wallet mnemonic into a shared test constant.

The same mnemonic literal is duplicated in multiple tests. Centralizing it in a setup/module-level const reduces drift risk and simplifies future changes.

♻️ Suggested refactor

```diff
+const EMPTY_WALLET_MNEMONIC: &str =
+    "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about";
 ...
-    let empty_mnemonic = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about";
-    let (empty_wallet, empty_wallet_id) = create_test_wallet(empty_mnemonic, Network::Regtest);
+    let (empty_wallet, empty_wallet_id) =
+        create_test_wallet(EMPTY_WALLET_MNEMONIC, Network::Regtest);
 ...
-    let empty_mnemonic = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about";
+    let empty_mnemonic = EMPTY_WALLET_MNEMONIC;
```

Also applies to: 81-82

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_basic.rs` around lines 36 - 37, Replace duplicated literal used to create test wallets by introducing a shared test constant (e.g., EMPTY_WALLET_MNEMONIC) at module scope or inside the test setup and use it where create_test_wallet is called (currently using the local variable empty_mnemonic in tests like the one that calls create_test_wallet(empty_mnemonic, Network::Regtest)); remove the local literal definitions and update other occurrences (lines around 81-82) to reference the new constant so all tests use the single centralized mnemonic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contrib/setup-dashd.py`:
- Around line 98-103: The download block (log, archive_path, url, download,
extract, os.remove) currently trusts the archive without verification; update it
to fetch the official checksum (or signed checksum) for the asset (e.g., from
the release assets or a .sha256/.asc URL), compute the local SHA256 (use
hashlib) for archive_path, compare against the expected value and abort/log an
error if they differ, and only call extract(archive_path, cache_dir) and
os.remove(archive_path) after successful verification; apply the same
verification logic to the similar block referenced around the other occurrence
(lines 123-131) so all downloaded archives are integrity-checked before use.
- Around line 82-83: The call to TarFile.extractall() uses the Python-3.12-only
filter parameter and must be replaced with a manual, safe extraction loop; open
the archive with tarfile.open(archive_path, "r:gz") (same as current code) and
instead of tf.extractall(dest_dir, filter="data") iterate tf.getmembers(),
select only members matching the intended filter (e.g., member.name == "data" or
member.name.startswith("data/")), sanitize each target path to prevent path
traversal (use os.path.join + os.path.realpath and verify it is within
dest_dir), and extract each allowed member with tf.extract(member, dest_dir) (or
tf.extractfile for files) so the behavior is equivalent without relying on the
unsupported filter parameter.
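As a standalone illustration of the extraction hardening described above, here is a hedged Python sketch (the helper name and the optional `prefix` parameter are illustrative, not taken from the PR) that rejects path traversal before extracting any member:

```python
import os
import tarfile

def safe_extract_tar(archive_path, dest_dir, prefix=None):
    """Extract only members that resolve inside dest_dir (optionally only those
    under a name prefix such as "data/"), refusing path-traversal entries."""
    dest_root = os.path.realpath(dest_dir)
    with tarfile.open(archive_path, "r:*") as tf:
        for member in tf.getmembers():
            if prefix and not (member.name == prefix.rstrip("/")
                               or member.name.startswith(prefix)):
                continue
            # Resolve the target and verify it stays under the destination root.
            target = os.path.realpath(os.path.join(dest_root, member.name))
            if target != dest_root and not target.startswith(dest_root + os.sep):
                raise ValueError(f"refusing to extract outside destination: {member.name}")
            tf.extract(member, dest_root)
```

The same realpath boundary check applies to the zip branch by iterating `ZipFile.infolist()` instead of `getmembers()`.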
In `@dash-spv-ffi/src/test_utils/callbacks.rs`:
- Around line 89-95: The tracker_from function currently unsafely casts
user_data to &'static CallbackTracker; change its signature to unsafe fn
tracker_from(user_data: *mut c_void) -> Option<NonNull<CallbackTracker>> and
return a NonNull pointer (None if null) instead of a &'static ref, then update
every callback site that calls tracker_from (uses at lines referencing soft
symbols in this file such as the callbacks that currently call tracker_from and
the Arc::as_ptr casts) to safely dereference the NonNull when the Arc is
guaranteed live (e.g., let tracker = tracker_from(user_data).map(|p| unsafe {
p.as_ref() }) or access fields via unsafe { p.as_ref() } only within the
callback), and replace Arc::as_ptr casts/comments to reflect that the Arc must
outlive callbacks; ensure all usages listed (callbacks and Arc casts) are
updated to accept Option<NonNull<CallbackTracker>> and perform safe, localized
unsafe derefs rather than assuming a &'static lifetime.
In `@dash-spv-ffi/src/test_utils/context.rs`:
- Around line 205-217: The get_wallet_balance function currently passes
wallet_id.as_ptr() into wallet_manager_get_wallet_balance without validating the
slice length; add a precondition check in get_wallet_balance that
wallet_id.len() is at least the expected number of bytes (or > 0 if no fixed
length is defined) and return an FFI-safe error via FFIError or early return
when the check fails before calling wallet_manager_get_wallet_balance; mirror
the same validation in the other functions flagged (around the blocks
referencing wallet_manager_get_wallet_balance at the other locations) so no raw
pointer from a short slice is passed across the FFI boundary.
- Around line 266-277: In wait_for_sync, snapshotting sync_complete_count before
checking the current tips can miss a SyncComplete that already happened; change
the logic to first load self.session.tracker.last_header_tip and last_filter_tip
and return immediately if both are >= expected_height, and only then take
count_before = self.session.tracker.sync_complete_count.load(...); keep the
existing loop that waits for either an increment of sync_complete_count or the
tips to reach expected_height (checking sync_complete_count, last_header_tip,
last_filter_tip inside the loop), so the call won't falsely time out when
SyncComplete fired just prior to entry.
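The fix above is purely an ordering change; this minimal Python model (a dict stands in for the atomics on the Rust tracker, and `max_polls` replaces the timeout loop, so all names here are hypothetical) shows why the tip check must precede the counter snapshot:

```python
def wait_for_sync(tracker, expected_height, max_polls=100):
    """Model of the race-free ordering: check already-reached tips first,
    then snapshot sync_complete_count and wait for either signal."""
    # 1) A SyncComplete that fired before entry already advanced the tips,
    #    so returning here avoids waiting for an increment that never comes.
    if (tracker["last_header_tip"] >= expected_height
            and tracker["last_filter_tip"] >= expected_height):
        return True
    # 2) Only now snapshot the completion counter.
    count_before = tracker["sync_complete_count"]
    for _ in range(max_polls):
        if tracker["sync_complete_count"] > count_before:
            return True
        if (tracker["last_header_tip"] >= expected_height
                and tracker["last_filter_tip"] >= expected_height):
            return True
    return False
```

Snapshotting first would make the call wait for a counter increment that already happened; checking the tips first makes that window unobservable.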
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 109-114: Update the recv() error handling to treat
RecvError::Lagged as recoverable: change the match on receiver.recv() so that
Err(RecvError::Lagged) continues the wait loop (like Ok(_) does) and only
Err(RecvError::Closed) returns false; apply this same change to the other
identical match site at lines ~185–187. Reference the receiver.recv() call and
the RecvError::Lagged / RecvError::Closed variants when making the edit.
- Around line 185-187: Replace the blanket Err(_) panic in the sync-event
receive loop with a match on broadcast::error::RecvError: if
Err(broadcast::error::RecvError::Lagged) occurs, treat it as recoverable and
continue the loop (do not panic); if Err(broadcast::error::RecvError::Closed)
occurs, panic with the existing message that references disconnect_count.
Update the match arm that currently panics using the string "Sync event channel
error after {} disconnections" to only be used for the Closed variant and handle
Lagged by skipping/continue.
In `@dash-spv/tests/dashd_sync/tests_transaction.rs`:
- Around line 136-142: The subtraction fees_paid = baseline_balance -
final_balance can underflow; move the assert!(final_balance < baseline_balance,
...) to run before computing fees_paid so the test fails with the intended
message rather than panicking, i.e., check final_balance < baseline_balance
first and only then compute fees_paid and assert fees_paid < 1_000_000; apply
the same reorder to the other occurrence that uses baseline_balance and
final_balance (the second assertion block in the test).
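The reorder can be modeled minimally (names are hypothetical; in the Rust test the subtraction is on unsigned integers, so it can panic on underflow rather than merely going negative):

```python
def fees_paid_checked(baseline_balance, final_balance):
    """Model of the fixed ordering: assert the spend actually reduced the
    balance BEFORE computing the (conceptually unsigned) difference."""
    assert final_balance < baseline_balance, (
        f"balance did not decrease: {final_balance} >= {baseline_balance}"
    )
    fees_paid = baseline_balance - final_balance  # now guaranteed non-negative
    assert fees_paid < 1_000_000, f"unexpectedly high fees: {fees_paid}"
    return fees_paid
```

With this ordering, a non-decreasing balance fails with the intended assertion message instead of an underflow panic.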
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
Force-pushed from 6af3ade to 9bda906 (compare)
Actionable comments posted: 2
♻️ Duplicate comments (3)
contrib/setup-dashd.py (2)
98-103: ⚠️ Potential issue | 🟠 Major — Verify archive integrity before extraction.

Both download paths trust remote archives without checksum/signature verification before extraction. This is a supply-chain gap for CI.

Suggested patch (SHA-256 verification)

```diff
@@
 import os
 import platform
+import hashlib
 import sys
@@
 def download(url, dest, timeout=300, retries=3):
@@
         time.sleep(5 * attempt)
+
+def sha256_file(path):
+    h = hashlib.sha256()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(1024 * 1024), b""):
+            h.update(chunk)
+    return h.hexdigest()
+
+def verify_sha256(path, expected, label):
+    actual = sha256_file(path)
+    if actual.lower() != expected.lower():
+        sys.exit(f"Checksum mismatch for {label}: expected {expected}, got {actual}")
@@
     download(url, archive_path)
+    expected_dashd_sha256 = os.environ.get("DASHD_ARCHIVE_SHA256")
+    if expected_dashd_sha256:
+        verify_sha256(archive_path, expected_dashd_sha256, asset)
     extract(archive_path, cache_dir)
@@
     download(url, archive_path)
+    expected_testdata_sha256 = os.environ.get("TEST_DATA_ARCHIVE_SHA256")
+    if expected_testdata_sha256:
+        verify_sha256(archive_path, expected_testdata_sha256, "regtest-40000.tar.gz")
     extract(archive_path, parent_dir)
```

Also applies to: 123-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 98 - 103, Add SHA-256 checksum verification before calling extract: after download(url, archive_path) compute the archive's SHA-256 and compare it against a trusted checksum source (e.g., a checksums file fetched from the GitHub release or a hardcoded known-good digest for DASHVERSION/asset); if the digests don't match, log the mismatch via log/process logger and abort (do not call extract or os.remove), otherwise proceed to extract(archive_path, cache_dir) and cleanup. Update the same pattern for the duplicate block that handles DASHVERSION asset downloads (the block referencing download, archive_path, extract, os.remove around lines 123-131).
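For reference, the two helpers in the suggested patch are plain-stdlib code and can be exercised on their own (the environment-variable plumbing from the patch is omitted here):

```python
import hashlib
import sys

def sha256_file(path):
    # Stream in 1 MiB chunks so large release archives never load fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_sha256(path, expected, label):
    # Abort the setup script before extraction on any digest mismatch.
    actual = sha256_file(path)
    if actual.lower() != expected.lower():
        sys.exit(f"Checksum mismatch for {label}: expected {expected}, got {actual}")
```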
78-83: ⚠️ Potential issue | 🟠 Major — Harden archive extraction and drop Python-3.12-only tar filter usage.

Line 80 and Line 83 rely on `extractall(...)`; this keeps Zip Slip risk, and Line 83 uses `filter="data"`, which is not portable across older Python runtimes. Please switch to explicit member-by-member extraction with realpath boundary checks for both zip and tar.

```bash
#!/bin/bash
set -euo pipefail

echo "Check current extraction calls:"
rg -n 'extractall\(' contrib/setup-dashd.py

echo
echo "Check Python extractall signature in runner environment:"
python - <<'PY'
import inspect, tarfile, sys
print("python:", sys.version)
print("tarfile.extractall signature:", inspect.signature(tarfile.TarFile.extractall))
PY
```

Suggested patch (safe extraction helpers)

```diff
@@
 import os
 import platform
+from pathlib import Path
 import sys
@@
 def extract(archive_path, dest_dir):
-    if archive_path.endswith(".zip"):
-        with zipfile.ZipFile(archive_path, "r") as zf:
-            zf.extractall(dest_dir)
-    else:
-        with tarfile.open(archive_path, "r:gz") as tf:
-            tf.extractall(dest_dir, filter="data")
+    dest_root = os.path.realpath(dest_dir)
+
+    def _safe_target_path(root, name):
+        target = os.path.realpath(os.path.join(root, name))
+        if not (target == root or target.startswith(root + os.sep)):
+            sys.exit(f"Refusing to extract path outside destination: {name}")
+        return target
+
+    if archive_path.endswith(".zip"):
+        with zipfile.ZipFile(archive_path, "r") as zf:
+            for member in zf.infolist():
+                _safe_target_path(dest_root, member.filename)
+                zf.extract(member, dest_root)
+    else:
+        with tarfile.open(archive_path, "r:gz") as tf:
+            for member in tf.getmembers():
+                _safe_target_path(dest_root, member.name)
+                tf.extract(member, dest_root)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 78 - 83, Replace the unsafe extractall usage for both zip and tar paths by iterating members and performing realpath boundary checks: implement safe_extract_zip(archive_path, dest_dir) and safe_extract_tar(archive_path, dest_dir) and call them instead of zipfile.ZipFile(...).extractall and tarfile.open(...).extractall(filter="data"); for ZIP use zipfile.ZipFile.namelist()/open() and for TAR iterate TarFile.getmembers(), skip absolute/.. entries, validate that os.path.realpath(target).startswith(os.path.realpath(dest_dir)) before writing, and preserve file modes and symlink handling correctly; remove the Python-3.12-only filter="data" usage and ensure you raise/abort on any path traversal attempts when handling archive_path and dest_dir.

dash-spv/tests/dashd_sync/helpers.rs (1)
109-114: ⚠️ Potential issue | 🟠 Major — Handle `RecvError::Lagged` as recoverable in both recv loops (still unresolved).

At Line 113 and Lines 185-187, blanket `Err(_)` handling treats lag as terminal. `broadcast` receivers can lag transiently; only `Closed` should terminate/fail this wait logic. This was already raised previously and still appears in the current diff.

🔧 Proposed fix

```diff
             result = receiver.recv() => {
                 match result {
                     Ok(ref event) if predicate(event) => return true,
                     Ok(_) => continue,
-                    Err(_) => return false,
+                    Err(broadcast::error::RecvError::Lagged(_)) => continue,
+                    Err(broadcast::error::RecvError::Closed) => return false,
                 }
             }
```

```diff
             Ok(SyncEvent::SyncComplete { .. }) => {
                 tracing::info!(
                     "Sync completed after {} peer disconnections",
                     disconnect_count
                 );
                 break;
             }
             Ok(_) => continue,
-            Err(_) => {
-                panic!("Sync event channel error after {} disconnections", disconnect_count);
-            }
+            Err(broadcast::error::RecvError::Lagged(_)) => continue,
+            Err(broadcast::error::RecvError::Closed) => {
+                panic!("Sync event channel closed after {} disconnections", disconnect_count);
+            }
         }
     }
```

```bash
#!/bin/bash
# Verify current handling in both recv loops.
rg -n -C2 'receiver\.recv\(\)|sync_event_receiver\.recv\(\)|Err\(_\)|RecvError::Lagged|RecvError::Closed' dash-spv/tests/dashd_sync/helpers.rs
```

Expected: current output shows blanket `Err(_)` branches at the two recv sites and no explicit `Lagged`/`Closed` split.

Also applies to: 185-187
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 109 - 114, The recv loops currently treat any Err(_) from receiver.recv() (and sync_event_receiver.recv()) as terminal; change the match arms to distinguish broadcast::error::RecvError variants: match Err(broadcast::error::RecvError::Closed) => return false, Err(broadcast::error::RecvError::Lagged(_)) => continue (or log and continue) so lagging is treated as recoverable; update both occurrences (the match around receiver.recv() and the match around sync_event_receiver.recv()) and ensure broadcast::error::RecvError is in scope for pattern matching.
🧹 Nitpick comments (5)
.github/workflows/build-and-test.yml (1)
46-46: Include `TEST_DATA_REPO` in the cache key to avoid cross-repo cache collisions.

If `TEST_DATA_REPO` is changed, the current key can still hit an old cache with incompatible test data.

Suggested patch

```diff
-          key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }}
+          key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_REPO }}-${{ env.TEST_DATA_VERSION }}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-and-test.yml at line 46, The cache key string rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }} can collide across different test-data repositories; update that key to include the TEST_DATA_REPO identifier (e.g., add ${{ env.TEST_DATA_REPO }} or the appropriate input) so the workflow cache key becomes unique per test-data repo; modify the cache step that defines the key (the line containing key: rust-dashcore-test-...) to append the TEST_DATA_REPO token.

dash-spv/src/test_utils/node.rs (2)
103-123: Move hardcoded dashd network/bind params into `DashCoreConfig`.

At Lines 104-123, regtest/bind/rpc parameters are embedded inline. Exposing these through `DashCoreConfig` (with current defaults) keeps this harness configurable and guideline-compliant.

As per coding guidelines, "Never hardcode network parameters, addresses, or keys."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 103 - 123, The args_vec currently embeds hardcoded network and bind flags; refactor by adding fields to DashCoreConfig (e.g., network_mode with default "regtest", rpc_bind, rpc_allow_ip, bind_addr, listen, debug_level, fallback_fee, and any port defaults) and replace the inline literals in args_vec with formatted values from self.config (e.g., use self.config.network_mode, self.config.rpc_bind, self.config.rpc_allow_ip, self.config.bind_addr, self.config.debug_level, self.config.fallback_fee, etc.); ensure existing defaults match current hardcoded values so behavior is unchanged and update any code constructing Node to accept or populate DashCoreConfig accordingly.
53-75: Prefer fallible config construction over panics in shared test utils.

At Line 58, Line 66, and Lines 60-64, setup failures hard-panic via `expect`/`assert!`. Returning a typed `Result` here will make test setup failures easier to surface and compose from callers.

As per coding guidelines, "Use proper error types (thiserror) and propagate errors appropriately" and "Avoid `unwrap()` and `expect()` in library code; use explicit error types (e.g., `thiserror`)."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 53 - 75, Change Node::from_env to return a Result<Self, E> instead of panicking: replace the expect/assert calls in from_env with error construction and returns using a concrete error type (create a NodeError enum using thiserror with variants for MissingEnv, PathNotFound, and PortAllocationFailure). Specifically, in from_env handle std::env::var("DASHD_PATH") and std::env::var("DASHD_DATADIR") by returning Err(NodeError::MissingEnv(...)) when absent, check dashd_path.exists() and return Err(NodeError::PathNotFound(dashd_path)) instead of assert!, and propagate errors from find_available_port() as Err(NodeError::PortAllocationFailure(...)) when necessary; update callers/tests to handle the Result. Ensure signatures and uses reference from_env, dashd_path, datadir, find_available_port, wallet, p2p_port, and rpc_port.

dash-spv/tests/dashd_sync/tests_basic.rs (1)
36-37: Consider extracting the test mnemonic to a constant.

The "abandon..." mnemonic (standard BIP39 test vector) is duplicated at line 81. Extracting it to a module-level constant would improve maintainability.

♻️ Suggested refactor

```diff
+/// BIP39 test mnemonic with no regtest activity
+const EMPTY_WALLET_MNEMONIC: &str = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about";
+
 #[tokio::test]
 async fn test_sync_empty_wallet() {
     ...
-    let empty_mnemonic = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about";
-    let (empty_wallet, empty_wallet_id) = create_test_wallet(empty_mnemonic, Network::Regtest);
+    let (empty_wallet, empty_wallet_id) = create_test_wallet(EMPTY_WALLET_MNEMONIC, Network::Regtest);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_basic.rs` around lines 36 - 37, Extract the repeated BIP39 test mnemonic string into a module-level constant (e.g., const TEST_MNEMONIC: &str = "abandon abandon ... about") and replace inline occurrences (such as the local variable empty_mnemonic and the duplicated string at the other test) with that constant; update uses around create_test_wallet(empty_mnemonic, Network::Regtest) to call create_test_wallet(TEST_MNEMONIC, Network::Regtest) and ensure the constant is declared near the top of the test module so both tests reference the same value.

dash-spv-ffi/tests/dashd_ffi_sync_test.rs (1)
17-19: Consider adding a skip message when the dashd context is unavailable.

Other skip conditions in this file (e.g., the `supports_mining` check) print an informative message. Adding a similar message here would improve test output consistency and CI debugging.

Suggested improvement

```diff
     let Some(dashd) = rt.block_on(DashdTestContext::new()) else {
+        eprintln!("Skipping test (dashd not available)");
         return;
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs` around lines 17 - 19, When DashdTestContext::new() returns None, the test silently returns; change the early return to print or log a skip message before returning so test output matches others (e.g., the supports_mining check). Concretely, update the block that calls rt.block_on(DashdTestContext::new()) and the variable dashd to detect the None case and emit an informative message like "skipping test: dashd context unavailable" (or similar) prior to returning so CI/test logs show why the test was skipped.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contrib/setup-dashd.py`:
- Around line 43-45: The code currently defaults unknown Linux architectures to
"x86_64" (variables system, machine, arch, asset, DASHVERSION), which can
download the wrong binary; change the logic in the Linux branch so that you
explicitly accept known archs (e.g., "aarch64"/"arm64" -> "aarch64",
"x86_64"/"amd64" -> "x86_64") and otherwise fail fast by raising an error or
exiting with a clear message about unsupported architecture instead of silently
assigning "x86_64" and constructing asset using asset =
f"dashcore-{DASHVERSION}-{arch}-linux-gnu.tar.gz".
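The fail-fast mapping this prompt asks for might look like the following hedged sketch (the function and table names are illustrative, not from the PR):

```python
import platform
import sys

# Known-good mappings from platform.machine() values to release-asset arch names.
_LINUX_ARCHES = {
    "x86_64": "x86_64",
    "amd64": "x86_64",
    "aarch64": "aarch64",
    "arm64": "aarch64",
}

def linux_asset_arch(machine=None):
    """Return the asset arch for a Linux machine string, failing fast on unknowns
    instead of silently defaulting to x86_64."""
    machine = (machine or platform.machine()).lower()
    arch = _LINUX_ARCHES.get(machine)
    if arch is None:
        sys.exit(f"Unsupported Linux architecture: {machine}")
    return arch
```

The asset name would then be built as in the script, e.g. `f"dashcore-{DASHVERSION}-{arch}-linux-gnu.tar.gz"`, but only after the arch is known to be valid.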
In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs`:
- Around line 47-56: The test converts dashd.wallet.balance using floating-point
multiplication which can lose precision; change the test to derive
expected_balance without f64 arithmetic — either read satoshis directly from the
test fixture/source if available, or parse the wallet balance as a precise
decimal and convert to satoshis (e.g., use a decimal library or parse the string
representation) before comparing to ctx.get_wallet_balance(&wallet_id). Ensure
you update the expected_balance calculation (the variable and its usage) to use
the integer satoshi value rather than (dashd.wallet.balance * 100_000_000.0).
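The precision hazard behind this comment is easy to demonstrate; here is a hedged Python sketch of the decimal-string route the prompt suggests (the Rust test would use an equivalent integer path, and the helper name here is hypothetical):

```python
from decimal import Decimal

SATS_PER_DASH = 100_000_000

def dash_to_satoshis(amount_str):
    """Convert a decimal DASH amount (as a string) to integer satoshis exactly,
    rejecting sub-satoshi precision instead of silently rounding."""
    sats = Decimal(amount_str) * SATS_PER_DASH
    if sats != sats.to_integral_value():
        raise ValueError(f"sub-satoshi precision in amount: {amount_str}")
    return int(sats)

# Float multiplication can land just below the true value and then truncate
# to the wrong satoshi count, while the decimal-string route stays exact.
```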
---
Duplicate comments:
In `@contrib/setup-dashd.py`:
- Around line 98-103: Add SHA-256 checksum verification before calling extract:
after download(url, archive_path) compute the archive's SHA-256 and compare it
against a trusted checksum source (e.g., a checksums file fetched from the
GitHub release or a hardcoded known-good digest for DASHVERSION/asset); if the
digests don't match, log the mismatch via log/process logger and abort (do not
call extract or os.remove), otherwise proceed to extract(archive_path,
cache_dir) and cleanup. Update the same pattern for the duplicate block that
handles DASHVERSION asset downloads (the block referencing download,
archive_path, extract, os.remove around lines 123-131).
- Around line 78-83: Replace the unsafe extractall usage for both zip and tar
paths by iterating members and performing realpath boundary checks: implement
safe_extract_zip(archive_path, dest_dir) and safe_extract_tar(archive_path,
dest_dir) and call them instead of zipfile.ZipFile(...).extractall and
tarfile.open(...).extractall(filter="data"); for ZIP use
zipfile.ZipFile.namelist()/open() and for TAR iterate TarFile.getmembers(), skip
absolute/.. entries, validate that
os.path.realpath(target).startswith(os.path.realpath(dest_dir)) before writing,
and preserve file modes and symlink handling correctly; remove the
Python-3.12-only filter="data" usage and ensure you raise/abort on any path
traversal attempts when handling archive_path and dest_dir.
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 109-114: The recv loops currently treat any Err(_) from
receiver.recv() (and sync_event_receiver.recv()) as terminal; change the match
arms to distinguish broadcast::error::RecvError variants: match
Err(broadcast::error::RecvError::Closed) => return false,
Err(broadcast::error::RecvError::Lagged(_)) => continue (or log and continue) so
lagging is treated as recoverable; update both occurrences (the match around
receiver.recv() and the match around sync_event_receiver.recv()) and ensure
broadcast::error::RecvError is in scope for pattern matching.
---
Nitpick comments:
In @.github/workflows/build-and-test.yml:
- Line 46: The cache key string rust-dashcore-test-${{ inputs.os }}-${{
env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }} can collide across different
test-data repositories; update that key to include the TEST_DATA_REPO identifier
(e.g., add ${{ env.TEST_DATA_REPO }} or the appropriate input) so the workflow
cache key becomes unique per test-data repo; modify the cache step that defines
the key (the line containing key: rust-dashcore-test-...) to append the
TEST_DATA_REPO token.
In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs`:
- Around line 17-19: When DashdTestContext::new() returns None, the test
silently returns; change the early return to print or log a skip message before
returning so test output matches others (e.g., the supports_mining check).
Concretely, update the block that calls rt.block_on(DashdTestContext::new()) and
the variable dashd to detect the None case and emit an informative message like
"skipping test: dashd context unavailable" (or similar) prior to returning so
CI/test logs show why the test was skipped.
In `@dash-spv/src/test_utils/node.rs`:
- Around line 103-123: The args_vec currently embeds hardcoded network and bind
flags; refactor by adding fields to DashCoreConfig (e.g., network_mode with
default "regtest", rpc_bind, rpc_allow_ip, bind_addr, listen, debug_level,
fallback_fee, and any port defaults) and replace the inline literals in args_vec
with formatted values from self.config (e.g., use self.config.network_mode,
self.config.rpc_bind, self.config.rpc_allow_ip, self.config.bind_addr,
self.config.debug_level, self.config.fallback_fee, etc.); ensure existing
defaults match current hardcoded values so behavior is unchanged and update any
code constructing Node to accept or populate DashCoreConfig accordingly.
- Around line 53-75: Change Node::from_env to return a Result<Self, E> instead
of panicking: replace the expect/assert calls in from_env with error
construction and returns using a concrete error type (create a NodeError enum
using thiserror with variants for MissingEnv, PathNotFound, and
PortAllocationFailure). Specifically, in from_env handle
std::env::var("DASHD_PATH") and std::env::var("DASHD_DATADIR") by returning
Err(NodeError::MissingEnv(...)) when absent, check dashd_path.exists() and
return Err(NodeError::PathNotFound(dashd_path)) instead of assert!, and
propagate errors from find_available_port() as
Err(NodeError::PortAllocationFailure(...)) when necessary; update callers/tests
to handle the Result. Ensure signatures and uses reference from_env, dashd_path,
datadir, find_available_port, wallet, p2p_port, and rpc_port.
In `@dash-spv/tests/dashd_sync/tests_basic.rs`:
- Around line 36-37: Extract the repeated BIP39 test mnemonic string into a
module-level constant (e.g., const TEST_MNEMONIC: &str = "abandon abandon ...
about") and replace inline occurrences (such as the local variable
empty_mnemonic and the duplicated string at the other test) with that constant;
update uses around create_test_wallet(empty_mnemonic, Network::Regtest) to call
create_test_wallet(TEST_MNEMONIC, Network::Regtest) and ensure the constant is
declared near the top of the test module so both tests reference the same value.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
Force-pushed 9bda906 to 0881f5f
♻️ Duplicate comments (8)
contrib/setup-dashd.py (2)
104-109: ⚠️ Potential issue | 🟠 Major

Add archive integrity verification before extraction.

Both download flows still extract unverified archives. Since these artifacts are executed/consumed in CI, checksum (or signed checksum) validation should happen before extract().

Minimal verification sketch

```diff
+import hashlib
+
+def sha256_file(path):
+    h = hashlib.sha256()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(1024 * 1024), b""):
+            h.update(chunk)
+    return h.hexdigest()
+
+def verify_sha256(path, expected, label):
+    actual = sha256_file(path)
+    if actual.lower() != expected.lower():
+        sys.exit(f"Checksum mismatch for {label}: expected {expected}, got {actual}")
+
 def setup_dashd(cache_dir):
@@
     download(url, archive_path)
+    if expected := os.environ.get("DASHD_ARCHIVE_SHA256"):
+        verify_sha256(archive_path, expected, asset)
     extract(archive_path, cache_dir)
@@
 def setup_test_data(cache_dir):
@@
     download(url, archive_path)
+    if expected := os.environ.get("TEST_DATA_ARCHIVE_SHA256"):
+        verify_sha256(archive_path, expected, "regtest-40000.tar.gz")
     extract(archive_path, parent_dir)
```

Also applies to: 133-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 104 - 109, The download+extract flow currently extracts unverified archives (see variables asset, archive_path, url and calls download(url, archive_path) followed by extract(archive_path, cache_dir)); add integrity verification by fetching the corresponding checksum (or signed checksum) for the asset, computing the downloaded file's hash (e.g., SHA256), and comparing them (or verifying the signature) before calling extract(); if verification fails, log an error and abort/remove the archive. Apply the same verification step to the other download/extract block referenced (lines 133-137), and ensure only verified archives are passed to extract() and then cleaned up via os.remove(archive_path).
83-90: ⚠️ Potential issue | 🟠 Major

Replace broad extractall() usage with safe member-by-member extraction.

Line 89 depends on extractall(..., filter="data") (version-sensitive) and both zip/tar paths currently trust archive paths. This is still a security/portability footgun.

Suggested hardening patch

```diff
+def _safe_target(base_dir, member_name):
+    base = os.path.realpath(base_dir)
+    target = os.path.realpath(os.path.join(base, member_name))
+    if os.path.commonpath([base, target]) != base:
+        raise RuntimeError(f"Blocked path traversal entry: {member_name}")
+    return target
+
 def extract(archive_path, dest_dir):
     if archive_path.endswith(".zip"):
         with zipfile.ZipFile(archive_path, "r") as zf:
-            zf.extractall(dest_dir)
+            for member in zf.infolist():
+                _safe_target(dest_dir, member.filename)
+                zf.extract(member, dest_dir)
     else:
         with tarfile.open(archive_path, "r:gz") as tf:
-            tf.extractall(dest_dir, filter="data")
+            for member in tf.getmembers():
+                if member.issym() or member.islnk():
+                    raise RuntimeError(f"Blocked link entry in tar: {member.name}")
+                _safe_target(dest_dir, member.name)
+                tf.extract(member, dest_dir)
```

```bash
#!/bin/bash
# Verify Python API compatibility + current extract usage.
python - <<'PY'
import inspect, tarfile, sys
print("python_version:", sys.version)
print("extractall_supports_filter:", "filter" in inspect.signature(tarfile.TarFile.extractall).parameters)
PY
rg -n "extractall\(" contrib/setup-dashd.py -C2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 83 - 90, The extract function currently calls zipfile.ZipFile.extractall and tarfile.TarFile.extractall (with a non-portable filter arg) which trusts archive member paths; change extract to iterate members instead and perform safe, member-by-member extraction: for zip path use ZipFile.infolist() and for tar use TarFile.getmembers(), compute the target path per member using os.path.join(dest_dir, member_name), normalize with os.path.normpath and reject any member whose normalized path is outside dest_dir (compare with os.path.commonpath) or is absolute, then create any needed directories and write files (preserving permissions for tar if desired). Update references within extract to archive_path, dest_dir, zf and tf to implement this safe extraction and remove use of extractall and the filter argument.

dash-spv/tests/dashd_sync/tests_transaction.rs (1)

127-132: ⚠️ Potential issue | 🟡 Minor

Avoid underflow in the assert_eq! message expression.

Line 131 computes final_tx_count - baseline_tx_count eagerly; if the count regresses, this can panic before the assertion reports the mismatch.

Safe message computation

```diff
 assert_eq!(
     final_tx_count,
     baseline_tx_count + 3,
     "Expected 3 new transactions, got {}",
-    final_tx_count - baseline_tx_count
+    final_tx_count.saturating_sub(baseline_tx_count)
 );
```

```bash
#!/bin/bash
# Confirm counter type/source and locate risky subtraction.
rg -nP 'fn transaction_count\s*\(' dash-spv/tests/dashd_sync/setup.rs -C3
rg -nP 'final_tx_count\s*-\s*baseline_tx_count' dash-spv/tests/dashd_sync/tests_transaction.rs -C2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_transaction.rs` around lines 127 - 132, The assertion message eagerly subtracts baseline_tx_count from final_tx_count which can underflow and panic; change the message expression in the assert_eq! that references final_tx_count and baseline_tx_count to compute the difference safely (e.g. use final_tx_count.checked_sub(baseline_tx_count).map(|d| d.to_string()).unwrap_or_else(|| format!("regressed: {} -> {}", baseline_tx_count, final_tx_count))) and pass that string into the assert_eq! so no subtraction occurs when final_tx_count < baseline_tx_count; update the code around the assert in tests_transaction.rs (referencing final_tx_count, baseline_tx_count, and the assert_eq! invocation) accordingly.

dash-spv/tests/dashd_sync/tests_restart.rs (1)

124-139: ⚠️ Potential issue | 🟠 Major

Treat lagged broadcast reads as recoverable in the restart loop.

At Line 137, panicking on all recv() errors can fail this test intermittently when the receiver lags behind. Only Closed should be terminal.

Proposed fix

```diff
+use tokio::sync::broadcast;
 ...
-            Err(_) => {
-                panic!("Sync event channel error after {} restarts", restart_count);
-            }
+            Err(broadcast::error::RecvError::Lagged(_)) => continue,
+            Err(broadcast::error::RecvError::Closed) => {
+                panic!("Sync event channel closed after {} restarts", restart_count);
+            }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_restart.rs` around lines 124 - 139, The test currently panics on any error from client_handle.sync_event_receiver.recv(), causing flaky failures when the broadcast receiver lags; change the recv() Err handling in the restart loop to only treat broadcast::error::RecvError::Closed as terminal (panic with the existing message), and treat broadcast::error::RecvError::Lagged (or any non-Closed recv errors) as recoverable by logging a warning and continuing the loop; update the match arm handling recv() errors near client_handle.sync_event_receiver.recv() so that is_progress_event, SyncEvent::SyncComplete handling remains unchanged while lagged reads do not abort the test.

dash-spv/src/test_utils/node.rs (1)

307-309: ⚠️ Potential issue | 🟠 Major

Avoid f64 for wallet amounts in test fixtures/state.

Line 308 stores balance as f64; downstream satoshi conversions can drift by 1 satoshi due to floating-point rounding. Prefer integer satoshis (or exact decimal parsing) end-to-end for deterministic assertions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 307 - 309, The test fixture's Wallet/Node struct field named balance is currently an f64 which can cause 1-satoshi rounding drift; change the balance type to an integer representing duffs (e.g., u64 or i64) and update all places that construct or read this field (tests, builders, any functions like new/create/set_balance or conversions to satoshis) to use integer arithmetic or exact decimal parsing instead of floating-point math; ensure any helper/conversion functions that previously accepted f64 now accept the integer duff value (or parse strings to integer duffs) so assertions remain deterministic.

dash-spv/tests/dashd_sync/helpers.rs (1)
109-114: ⚠️ Potential issue | 🟠 Major

Handle RecvError::Lagged explicitly in both broadcast receive loops.

Lines 113 and 185 currently treat all receive errors as terminal. Lagged(_) should continue, while Closed should terminate/fail.

Proposed fix

```diff
 match result {
     Ok(ref event) if predicate(event) => return true,
     Ok(_) => continue,
-    Err(_) => return false,
+    Err(broadcast::error::RecvError::Lagged(_)) => continue,
+    Err(broadcast::error::RecvError::Closed) => return false,
 }
```

```diff
     Ok(_) => continue,
-    Err(_) => {
-        panic!("Sync event channel error after {} disconnections", disconnect_count);
-    }
+    Err(broadcast::error::RecvError::Lagged(_)) => continue,
+    Err(broadcast::error::RecvError::Closed) => {
+        panic!("Sync event channel closed after {} disconnections", disconnect_count);
+    }
 }
```

Also applies to: 185-187
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 109 - 114, The receive-match blocks that currently treat all Err(_) as terminal need to distinguish broadcast::RecvError variants: change the match on result (the match handling result = receiver.recv()) to explicitly handle Err(broadcast::RecvError::Lagged(_)) => continue and Err(broadcast::RecvError::Closed) => return false (or the existing failure path); apply this same change to the second broadcast receive loop elsewhere in the file so Lagged errors are ignored and only Closed ends the loop.

dash-spv-ffi/src/test_utils/context.rs (1)

205-217: ⚠️ Potential issue | 🟠 Major

Validate wallet_id length before passing raw pointers into FFI calls.

These paths pass wallet_id.as_ptr() without a length guard. A short slice can trigger invalid reads across the FFI boundary.

Proposed fix

```diff
 impl FFITestContext {
+    const WALLET_ID_LEN: usize = 32;
+
+    #[inline]
+    fn assert_wallet_id_len(wallet_id: &[u8]) {
+        assert_eq!(wallet_id.len(), Self::WALLET_ID_LEN, "wallet_id must be 32 bytes");
+    }
+
     pub unsafe fn get_wallet_balance(&self, wallet_id: &[u8]) -> (u64, u64) {
+        Self::assert_wallet_id_len(wallet_id);
         let mut confirmed: u64 = 0;
         let mut unconfirmed: u64 = 0;
         let mut error = FFIError::success();
         let wm = self.session.wallet_manager as *mut FFIWalletManager;
         ...
     }

     pub unsafe fn get_receive_address(&self, wallet_id: &[u8]) -> Address {
+        Self::assert_wallet_id_len(wallet_id);
         let mut error = FFIError::success();
         let wm = self.session.wallet_manager as *mut FFIWalletManager;
         ...
     }

     unsafe fn with_bip44_account<T>(
         &self,
         wallet_id: &[u8],
         f: impl FnOnce(*const FFIManagedCoreAccount) -> T,
     ) -> T {
+        Self::assert_wallet_id_len(wallet_id);
         let wm = self.session.wallet_manager as *const FFIWalletManager;
         ...
     }
 }
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Exercise careful memory safety handling at FFI boundaries.

Also applies to: 307-315, 336-344
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/context.rs` around lines 205 - 217, Validate the wallet_id slice length before taking its raw pointer and calling the FFI: in get_wallet_balance (and the similar call sites around 307-315 and 336-344) check wallet_id.len() meets the expected length (or is non-zero if variable-length is allowed) and return/raise an FFIError if it's too short; only then call wallet_manager_get_wallet_balance with wallet_id.as_ptr(), ensuring you never pass a pointer for an empty/undersized slice across the FFI boundary.

dash-spv-ffi/src/test_utils/callbacks.rs (1)

94-99: ⚠️ Potential issue | 🔴 Critical

Do not fabricate &'static from the callback user_data pointer.

Line 94 currently casts to &'static CallbackTracker, but the pointer is only valid while the originating Arc remains alive. This lifetime inflation is unsound at the FFI boundary.

Proposed fix

```diff
+use std::ptr::NonNull;
 ...
-unsafe fn tracker_from(user_data: *mut c_void) -> Option<&'static CallbackTracker> {
+unsafe fn tracker_from(user_data: *mut c_void) -> Option<NonNull<CallbackTracker>> {
     if user_data.is_null() {
         None
     } else {
-        Some(&*(user_data as *const CallbackTracker))
+        NonNull::new(user_data as *mut CallbackTracker)
     }
 }
```

```diff
-    let Some(tracker) = (unsafe { tracker_from(user_data) }) else {
+    let Some(tracker_ptr) = (unsafe { tracker_from(user_data) }) else {
         return;
     };
+    let tracker = unsafe { tracker_ptr.as_ref() };
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Be careful with FFI memory management.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/callbacks.rs` around lines 94 - 99, The helper tracker_from currently fabricates a &'static lifetime from the FFI user_data pointer which is unsound; change tracker_from to stop returning &'static CallbackTracker — instead return a raw pointer or NonNull (e.g., Option<*const CallbackTracker> or Option<NonNull<CallbackTracker>>) or a borrowed reference with a proper caller-controlled lifetime (e.g., unsafe fn tracker_from<'a>(user_data: *mut c_void) -> Option<&'a CallbackTracker>), and update all call sites that assumed &'static to safely dereference the pointer within the correct scope (or reconstruct an Arc from a raw pointer only where ownership is certain). Ensure you reference the symbols tracker_from, user_data, and CallbackTracker when making the change so the FFI boundary no longer inflates lifetimes.
🧹 Nitpick comments (1)
.github/workflows/build-and-test.yml (1)
46-47: Include TEST_DATA_REPO in the cache key to avoid stale cross-repo reuse.

If TEST_DATA_REPO changes while the version string stays the same, this key can restore incompatible test data.

Small key fix

```diff
-          key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }}
+          key: rust-dashcore-test-${{ inputs.os }}-${{ env.TEST_DATA_REPO }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-and-test.yml around lines 46 - 47, The cache key currently defined as key: rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{ env.TEST_DATA_VERSION }} can cause stale cross-repo reuse when TEST_DATA_REPO changes; update that key to include the TEST_DATA_REPO identifier (env.TEST_DATA_REPO) so the key becomes aware of the source repo, e.g., add -${{ env.TEST_DATA_REPO }} into the key string for the cache entry to prevent restoring incompatible test data.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@contrib/setup-dashd.py`:
- Around line 104-109: The download+extract flow currently extracts unverified
archives (see variables asset, archive_path, url and calls download(url,
archive_path) followed by extract(archive_path, cache_dir)); add integrity
verification by fetching the corresponding checksum (or signed checksum) for the
asset, computing the downloaded file's hash (e.g., SHA256), and comparing them
(or verifying the signature) before calling extract(); if verification fails,
log an error and abort/remove the archive. Apply the same verification step to
the other download/extract block referenced (lines 133-137), and ensure only
verified archives are passed to extract() and then cleaned up via
os.remove(archive_path).
- Around line 83-90: The extract function currently calls
zipfile.ZipFile.extractall and tarfile.TarFile.extractall (with a non-portable
filter arg) which trusts archive member paths; change extract to iterate members
instead and perform safe, member-by-member extraction: for zip path use
ZipFile.infolist() and for tar use TarFile.getmembers(), compute the target path
per member using os.path.join(dest_dir, member_name), normalize with
os.path.normpath and reject any member whose normalized path is outside dest_dir
(compare with os.path.commonpath) or is absolute, then create any needed
directories and write files (preserving permissions for tar if desired). Update
references within extract to archive_path, dest_dir, zf and tf to implement this
safe extraction and remove use of extractall and the filter argument.
In `@dash-spv-ffi/src/test_utils/callbacks.rs`:
- Around line 94-99: The helper tracker_from currently fabricates a &'static
lifetime from the FFI user_data pointer which is unsound; change tracker_from to
stop returning &'static CallbackTracker — instead return a raw pointer or
NonNull (e.g., Option<*const CallbackTracker> or
Option<NonNull<CallbackTracker>>) or a borrowed reference with a proper
caller-controlled lifetime (e.g., unsafe fn tracker_from<'a>(user_data: *mut
c_void) -> Option<&'a CallbackTracker>), and update all call sites that assumed
&'static to safely dereference the pointer within the correct scope (or
reconstruct an Arc from a raw pointer only where ownership is certain). Ensure
you reference the symbols tracker_from, user_data, and CallbackTracker when
making the change so the FFI boundary no longer inflates lifetimes.
In `@dash-spv-ffi/src/test_utils/context.rs`:
- Around line 205-217: Validate the wallet_id slice length before taking its raw
pointer and calling the FFI: in get_wallet_balance (and the similar call sites
around 307-315 and 336-344) check wallet_id.len() meets the expected length (or
is non-zero if variable-length is allowed) and return/raise an FFIError if it’s
too short; only then call wallet_manager_get_wallet_balance with
wallet_id.as_ptr(), ensuring you never pass a pointer for an empty/undersized
slice across the FFI boundary.
In `@dash-spv/src/test_utils/node.rs`:
- Around line 307-309: The test fixture’s Wallet/Node struct field named balance
is currently an f64 which can cause 1-satoshi rounding drift; change the balance
type to an integer representing duffs (e.g., u64 or i64) and update all places
that construct or read this field (tests, builders, any functions like
new/create/set_balance or conversions to satoshis) to use integer arithmetic or
exact decimal parsing instead of floating-point math; ensure any
helper/conversion functions that previously accepted f64 now accept the integer
duff value (or parse strings to integer duffs) so assertions remain
deterministic.
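The drift this prompt warns about comes from representing amounts as IEEE-754 doubles. A minimal Python sketch of the exact-parse alternative, assuming 1 DASH = 100,000,000 duffs (`dash_to_duffs` is an illustrative name, not an existing helper in this PR):

```python
from decimal import Decimal

DUFFS_PER_DASH = 100_000_000

def dash_to_duffs(amount_str):
    # Parse the decimal string exactly instead of going through a double;
    # float paths such as int(1.115 * 1e8) can land one duff low because
    # 1.115 has no exact binary representation.
    duffs = Decimal(amount_str) * DUFFS_PER_DASH
    if duffs != duffs.to_integral_value():
        raise ValueError(f"Sub-duff precision in amount: {amount_str}")
    return int(duffs)
```

Storing the resulting integer on the test fixture keeps balance assertions deterministic end-to-end; the Rust side can hold the same value as a `u64`.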
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 109-114: The receive-match blocks that currently treat all Err(_)
as terminal need to distinguish broadcast::RecvError variants: change the match
on result (the match handling result = receiver.recv()) to explicitly handle
Err(broadcast::RecvError::Lagged(_)) => continue and
Err(broadcast::RecvError::Closed) => return false (or the existing failure
path); apply this same change to the second broadcast receive loop elsewhere in
the file so Lagged errors are ignored and only Closed ends the loop.
In `@dash-spv/tests/dashd_sync/tests_restart.rs`:
- Around line 124-139: The test currently panics on any error from
client_handle.sync_event_receiver.recv(), causing flaky failures when the
broadcast receiver lags; change the recv() Err handling in the restart loop to
only treat broadcast::error::RecvError::Closed as terminal (panic with the
existing message), and treat broadcast::error::RecvError::Lagged (or any
non-Closed recv errors) as recoverable by logging a warning and continuing the
loop; update the match arm handling recv() errors near
client_handle.sync_event_receiver.recv() so that is_progress_event,
SyncEvent::SyncComplete handling remains unchanged while lagged reads do not
abort the test.
In `@dash-spv/tests/dashd_sync/tests_transaction.rs`:
- Around line 127-132: The assertion message eagerly subtracts baseline_tx_count
from final_tx_count which can underflow and panic; change the message expression
in the assert_eq! that references final_tx_count and baseline_tx_count to
compute the difference safely (e.g. use
final_tx_count.checked_sub(baseline_tx_count).map(|d|
d.to_string()).unwrap_or_else(|| format!("regressed: {} -> {}",
baseline_tx_count, final_tx_count))) and pass that string into the assert_eq! so
no subtraction occurs when final_tx_count < baseline_tx_count; update the code
around the assert in tests_transaction.rs (referencing final_tx_count,
baseline_tx_count, and the assert_eq! invocation) accordingly.
---
Nitpick comments:
In @.github/workflows/build-and-test.yml:
- Around line 46-47: The cache key currently defined as key:
rust-dashcore-test-${{ inputs.os }}-${{ env.DASHVERSION }}-${{
env.TEST_DATA_VERSION }} can cause stale cross-repo reuse when TEST_DATA_REPO
changes; update that key to include the TEST_DATA_REPO identifier
(env.TEST_DATA_REPO) so the key becomes aware of the source repo, e.g., add -${{
env.TEST_DATA_REPO }} into the key string for the cache entry to prevent
restoring incompatible test data.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- .github/workflows/sanitizer.yml
- dash-spv-ffi/Cargo.toml
- dash-spv/tests/dashd_sync.rs
Force-pushed 0881f5f to e14f0ee
Actionable comments posted: 4
♻️ Duplicate comments (6)
dash-spv-ffi/src/test_utils/context.rs (1)
205-217: ⚠️ Potential issue | 🟠 Major

Validate wallet_id length before passing raw pointers into FFI.

These paths pass wallet_id.as_ptr() without enforcing the expected byte length, which can cause out-of-bounds reads in downstream FFI code.

🔧 Proposed fix

```diff
 impl FFITestContext {
+    const WALLET_ID_LEN: usize = 32;
+
+    fn assert_wallet_id_len(wallet_id: &[u8]) {
+        assert_eq!(
+            wallet_id.len(),
+            Self::WALLET_ID_LEN,
+            "wallet_id must be 32 bytes"
+        );
+    }
+
     pub unsafe fn get_wallet_balance(&self, wallet_id: &[u8]) -> (u64, u64) {
+        Self::assert_wallet_id_len(wallet_id);
         let mut confirmed: u64 = 0;
         let mut unconfirmed: u64 = 0;
         ...
     }

     pub unsafe fn get_receive_address(&self, wallet_id: &[u8]) -> Address {
+        Self::assert_wallet_id_len(wallet_id);
         let mut error = FFIError::success();
         ...
     }

     unsafe fn with_bip44_account<T>(
         &self,
         wallet_id: &[u8],
         f: impl FnOnce(*const FFIManagedCoreAccount) -> T,
     ) -> T {
+        Self::assert_wallet_id_len(wallet_id);
         let wm = self.session.wallet_manager as *const FFIWalletManager;
         ...
     }
 }
```

```bash
#!/bin/bash
set -euo pipefail
# Locate all wallet_id raw-pointer crossings and check for a local precondition helper.
rg -n "wallet_id\\.as_ptr\\(\\)" dash-spv-ffi/src/test_utils/context.rs -C2
rg -n "WALLET_ID_LEN|assert_wallet_id_len" dash-spv-ffi/src/test_utils/context.rs -C2
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Be careful with FFI memory management.

Also applies to: 307-315, 336-344
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/context.rs` around lines 205 - 217, The get_wallet_balance function calls wallet_manager_get_wallet_balance with wallet_id.as_ptr() without validating wallet_id length; add a precondition check that wallet_id.len() equals the expected wallet id byte length (use the existing WALLET_ID_LEN constant or introduce one) and return or set an FFIError if it does not match before calling wallet_manager_get_wallet_balance, so no out-of-bounds pointer is passed; apply the same validation to other FFI crossings that call wallet_id.as_ptr() (e.g., the other functions using wallet_id.as_ptr()) to centralize safety.

dash-spv-ffi/src/test_utils/callbacks.rs (1)

94-99: ⚠️ Potential issue | 🔴 Critical

Avoid fabricating a 'static reference from user_data.

Casting user_data to &'static CallbackTracker is unsound. The pointer is only valid while the owning Arc<CallbackTracker> is alive.

🔧 Proposed fix

```diff
+use std::ptr::NonNull;
 ...
-unsafe fn tracker_from(user_data: *mut c_void) -> Option<&'static CallbackTracker> {
+unsafe fn tracker_from(user_data: *mut c_void) -> Option<NonNull<CallbackTracker>> {
     if user_data.is_null() {
         None
     } else {
-        Some(&*(user_data as *const CallbackTracker))
+        NonNull::new(user_data as *mut CallbackTracker)
     }
 }
 ...
-    let Some(tracker) = (unsafe { tracker_from(user_data) }) else {
+    let Some(tracker_ptr) = (unsafe { tracker_from(user_data) }) else {
         return;
     };
+    let tracker = unsafe { tracker_ptr.as_ref() };
```

```bash
#!/bin/bash
set -euo pipefail
# Verify current lifetime cast and raw-pointer origin.
sed -n '87,105p' dash-spv-ffi/src/test_utils/callbacks.rs
rg -n "unsafe fn tracker_from|Arc::as_ptr\\(tracker\\)|tracker_from\\(" dash-spv-ffi/src/test_utils/callbacks.rs -C2
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Exercise careful memory safety handling at FFI boundaries.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/callbacks.rs` around lines 94 - 99, The current unsafe fn tracker_from(user_data: *mut c_void) creates a &'static CallbackTracker from a raw pointer, which is unsound; instead change tracker_from to return an owned Arc<CallbackTracker> (e.g. unsafe fn tracker_from(user_data: *mut c_void) -> Option<Arc<CallbackTracker>>) and reconstruct the Arc with Arc::from_raw(user_data as *const CallbackTracker), immediately clone it to get a new owning Arc, then call Arc::into_raw on the original to restore the original ownership (so you don't drop the original Arc). Also ensure wherever the pointer is stored you use Arc::into_raw(tracker) (not Arc::as_ptr) so reconstructing with Arc::from_raw is valid.

dash-spv/tests/dashd_sync/tests_restart.rs (1)

124-139: ⚠️ Potential issue | 🟠 Major

Handle lagged sync-event receiver reads as recoverable.

Line 137 currently panics on any recv() error. For tokio::sync::broadcast, RecvError::Lagged(_) is recoverable and should continue; only Closed should be terminal.

Proposed fix

```diff
 result = client_handle.sync_event_receiver.recv() => {
     match result {
         @@
-        Err(_) => {
-            panic!("Sync event channel error after {} restarts", restart_count);
-        }
+        Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
+        Err(tokio::sync::broadcast::error::RecvError::Closed) => {
+            panic!("Sync event channel closed after {} restarts", restart_count);
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_restart.rs` around lines 124 - 139, The recv() error handling on client_handle.sync_event_receiver currently panics on any Err; change it to treat tokio::sync::broadcast::error::RecvError::Lagged(_) as recoverable (log or continue the loop) and only treat RecvError::Closed as terminal (panic with restart_count). Locate the match arm that handles Err(_) in the block reading from client_handle.sync_event_receiver.recv() and replace the blanket panic with a match on the RecvError variants so Lagged(_) continues and Closed triggers the existing panic.dash-spv-ffi/tests/dashd_ffi_sync_test.rs (2)
49-57: ⚠️ Potential issue | 🟡 Minor

Avoid floating-point conversion for exact satoshi assertions.

Line 50 derives expected satoshis via `f64`, which can introduce precision noise for exact `u64` equality checks. Prefer an integer satoshi source from fixture data (or precise decimal parsing before conversion).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs` around lines 49 - 57, The test computes expected_balance by multiplying dashd.wallet.balance (f64) which can lose precision; replace this with an exact integer/satoshi source or a precise conversion: use an integer satoshi field from the fixture (e.g., a dashd.wallet.balance_satoshis or similar) or parse the fixture balance as a decimal string and convert to u64 safely, then compare that integer to confirmed; update the reference to expected_balance and ensure ctx.get_wallet_balance / assert_eq use the exact u64 value (avoid f64 -> round() -> as u64 conversions).
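The drift behind this comment is easy to reproduce outside the test suite. A minimal standalone Python sketch (`duffs_from_str` and `DUFFS_PER_DASH` are illustrative names, not part of the crate or its fixtures) contrasting float scaling with exact decimal parsing:

```python
from decimal import Decimal

DUFFS_PER_DASH = 10**8  # Dash's satoshi-equivalent integer unit

def duffs_from_str(amount: str) -> int:
    """Parse a decimal DASH amount exactly, then scale to integer duffs."""
    scaled = Decimal(amount) * DUFFS_PER_DASH
    if scaled != scaled.to_integral_value():
        raise ValueError(f"sub-duff precision in {amount!r}")
    return int(scaled)

# Binary floats cannot represent 0.29 exactly, so scaling and truncating
# lands one duff short...
assert int(0.29 * DUFFS_PER_DASH) == 28_999_999
# ...while exact decimal parsing does not.
assert duffs_from_str("0.29") == 29_000_000
```

The same reasoning carries over to the Rust test: compare against an integer satoshi value from the fixture, or parse the decimal string exactly, instead of multiplying an `f64` and rounding.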
311-318: ⚠️ Potential issue | 🟡 Minor

Reorder fee subtraction after the balance ordering assert.

Line 313 and Line 401 compute `baseline_balance - final_balance` before verifying `final_balance < baseline_balance`. If that condition is violated, the subtraction can underflow and panic before the intended assertion message.

Proposed fix

```diff
-    let fees_paid = baseline_balance - final_balance;
     assert!(
         final_balance < baseline_balance,
         "Balance should decrease by fees for internal transfers"
     );
+    let fees_paid = baseline_balance - final_balance;
     assert!(fees_paid < 1_000_000, "Total fees ({}) should be reasonable", fees_paid);
@@
-    let fees_paid = baseline_balance - final_balance;
     assert!(
         final_balance < baseline_balance,
         "Balance should decrease by fees for internal transfers"
     );
+    let fees_paid = baseline_balance - final_balance;
     assert!(fees_paid < 1_000_000, "Total fees ({}) should be reasonable", fees_paid);
```

Also applies to: 400-406
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs` around lines 311 - 318, The test computes fees_paid = baseline_balance - final_balance before asserting final_balance < baseline_balance, which can underflow; move the subtraction so you first assert final_balance < baseline_balance (using baseline_balance and final_balance) and only then compute fees_paid, and apply the same change for the second occurrence where fees_paid is calculated in the same test; ensure the assertion message remains and the fees_paid < 1_000_000 check stays after the computation.

dash-spv/tests/dashd_sync/helpers.rs (1)
109-114: ⚠️ Potential issue | 🟠 Major

Treat `broadcast` lag as recoverable in both wait loops.

Line 113 and Line 185 currently treat all `recv()` errors as terminal. `RecvError::Lagged(_)` should continue waiting; only `RecvError::Closed` should terminate/fail.

Proposed fix

```diff
 match result {
     Ok(ref event) if predicate(event) => return true,
     Ok(_) => continue,
-    Err(_) => return false,
+    Err(broadcast::error::RecvError::Lagged(_)) => continue,
+    Err(broadcast::error::RecvError::Closed) => return false,
 }
@@
 Ok(SyncEvent::SyncComplete { .. }) => {
@@
 Ok(_) => continue,
-Err(_) => {
-    panic!("Sync event channel error after {} disconnections", disconnect_count);
-}
+Err(broadcast::error::RecvError::Lagged(_)) => continue,
+Err(broadcast::error::RecvError::Closed) => {
+    panic!("Sync event channel closed after {} disconnections", disconnect_count);
+}
 }
 }
```

Also applies to: 185-187
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 109 - 114, The recv() error handling treats all errors as terminal; update the match on receiver.recv() so that RecvError::Lagged(_) is treated as recoverable (continue waiting) and only RecvError::Closed returns/fails. Concretely, replace the Err(_) arm with a pattern match that continues on tokio::sync::broadcast::error::RecvError::Lagged(_) and returns false (or terminates) only on RecvError::Closed; apply the same change to the second wait loop that uses receiver.recv() and predicate(event).
🧹 Nitpick comments (1)
contrib/setup-dashd.py (1)
67-80: Consider validating URL scheme for defense-in-depth.

The static analysis tool flags `urllib.request.urlopen` for accepting arbitrary schemes (S310). While the URLs are constructed from controlled inputs, validating that the URL uses `https://` would prevent accidental misuse if environment variables are misconfigured.

🛡️ Optional hardening

```diff
 def download(url, dest, timeout=300, retries=3):
+    if not url.startswith("https://"):
+        sys.exit(f"Only HTTPS URLs are supported: {url}")
     for attempt in range(1, retries + 1):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 67 - 80, The download function currently calls urllib.request.urlopen with arbitrary URLs; add a defensive check at the start of download to parse the URL (e.g. with urllib.parse.urlparse) and verify the scheme is exactly "https" before attempting urllib.request.urlopen; if the scheme is missing or not "https", exit or raise a clear error (same strategy as current sys.exit usage) so urllib.request.urlopen is never invoked with non-HTTPS schemes. Ensure this validation occurs in the download function before any network activity and preserves the existing retry/timeout behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/test_utils/node.rs`:
- Around line 245-255: The supports_mining logic misclassifies unsupported RPCs
as available; update the error match for client.generate_to_address (inside
supports_mining) to treat JSON-RPC Rpc errors whose e.message indicates the
method is missing as "not available" (return false). Specifically, extend the
condition that currently checks for "not available" to also check for other
RPC-not-supported messages such as "Method not found" (and similar variants), so
that Err(dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::Error::Rpc(ref e)))
where e.message contains any of those texts returns false; keep all other Err(_)
cases returning true.
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 52-59: count_wallet_transactions currently sums transaction counts
per account and double-counts transactions present in multiple accounts; change
it to aggregate unique transaction IDs across all accounts using a HashSet to
deduplicate before returning the count. In the body of count_wallet_transactions
(and keep the same parameters and WalletManager/ManagedWalletInfo usage),
iterate all_accounts(), collect each transaction's unique identifier (e.g.,
t.txid or t.txid.clone()) into a std::collections::HashSet, and return
hashset.len(); add the necessary HashSet import. Ensure you reference
wallet_info.accounts().all_accounts() and the transaction struct field/method
that provides the txid.
In `@dash-spv/tests/dashd_sync/tests_basic.rs`:
- Around line 36-37: Replace the hardcoded mnemonic string used to create test
wallets with a deterministic source from test fixtures or the test setup: remove
the literal assigned to empty_mnemonic and instead read the mnemonic from the
test harness (env/config/fixture) or generate it deterministically in setup,
then pass that value into create_test_wallet (used to produce empty_wallet and
empty_wallet_id); apply the same change for the other hardcoded mnemonic usage
around the create_test_wallet call at the later occurrence (lines referenced by
the reviewer) so no network parameters, addresses, or keys are embedded directly
in the test code.
In `@dash-spv/tests/dashd_sync/tests_transaction.rs`:
- Around line 85-155: The spawned ClientHandle created via
ctx.spawn_new_client().await (variable client_handle) is never stopped; call
client_handle.stop().await to gracefully shut down the background run task
before each test returns or finishes (including before early returns and at the
end of the test). Update both tests that call spawn_new_client().await (the one
with multiple transactions and the other in the 159-231 range) to await
client_handle.stop().await in all exit paths so the run task is not orphaned.
---
Duplicate comments:
In `@dash-spv-ffi/src/test_utils/callbacks.rs`:
- Around line 94-99: The current unsafe fn tracker_from(user_data: *mut c_void)
creates a &'static CallbackTracker from a raw pointer, which is unsound; instead
change tracker_from to return an owned Arc<CallbackTracker> (e.g. unsafe fn
tracker_from(user_data: *mut c_void) -> Option<Arc<CallbackTracker>>) and
reconstruct the Arc with Arc::from_raw(user_data as *const CallbackTracker),
immediately clone it to get a new owning Arc, then call Arc::into_raw on the
original to restore the original ownership (so you don't drop the original Arc).
Also ensure wherever the pointer is stored you use Arc::into_raw(tracker) (not
Arc::as_ptr) so reconstructing with Arc::from_raw is valid.
In `@dash-spv-ffi/src/test_utils/context.rs`:
- Around line 205-217: The get_wallet_balance function calls
wallet_manager_get_wallet_balance with wallet_id.as_ptr() without validating
wallet_id length; add a precondition check that wallet_id.len() equals the
expected wallet id byte length (use the existing WALLET_ID_LEN constant or
introduce one) and return or set an FFIError if it does not match before calling
wallet_manager_get_wallet_balance, so no out-of-bounds pointer is passed; apply
the same validation to other FFI crossings that call wallet_id.as_ptr() (e.g.,
the other functions using wallet_id.as_ptr()) to centralize safety.
In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs`:
- Around line 49-57: The test computes expected_balance by multiplying
dashd.wallet.balance (f64) which can lose precision; replace this with an exact
integer/satoshi source or a precise conversion: use an integer satoshi field
from the fixture (e.g., a dashd.wallet.balance_satoshis or similar) or parse the
fixture balance as a decimal string and convert to u64 safely, then compare that
integer to confirmed; update the reference to expected_balance and ensure
ctx.get_wallet_balance / assert_eq use the exact u64 value (avoid f64 -> round()
-> as u64 conversions).
- Around line 311-318: The test computes fees_paid = baseline_balance -
final_balance before asserting final_balance < baseline_balance, which can
underflow; move the subtraction so you first assert final_balance <
baseline_balance (using baseline_balance and final_balance) and only then
compute fees_paid, and apply the same change for the second occurrence where
fees_paid is calculated in the same test; ensure the assertion message remains
and the fees_paid < 1_000_000 check stays after the computation.
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 109-114: The recv() error handling treats all errors as terminal;
update the match on receiver.recv() so that RecvError::Lagged(_) is treated as
recoverable (continue waiting) and only RecvError::Closed returns/fails.
Concretely, replace the Err(_) arm with a pattern match that continues on
tokio::sync::broadcast::error::RecvError::Lagged(_) and returns false (or
terminates) only on RecvError::Closed; apply the same change to the second wait
loop that uses receiver.recv() and predicate(event).
In `@dash-spv/tests/dashd_sync/tests_restart.rs`:
- Around line 124-139: The recv() error handling on
client_handle.sync_event_receiver currently panics on any Err; change it to
treat tokio::sync::broadcast::error::RecvError::Lagged(_) as recoverable (log or
continue the loop) and only treat RecvError::Closed as terminal (panic with
restart_count). Locate the match arm that handles Err(_) in the block reading
from client_handle.sync_event_receiver.recv() and replace the blanket panic with
a match on the RecvError variants so Lagged(_) continues and Closed triggers the
existing panic.
---
Nitpick comments:
In `@contrib/setup-dashd.py`:
- Around line 67-80: The download function currently calls
urllib.request.urlopen with arbitrary URLs; add a defensive check at the start
of download to parse the URL (e.g. with urllib.parse.urlparse) and verify the
scheme is exactly "https" before attempting urllib.request.urlopen; if the
scheme is missing or not "https", exit or raise a clear error (same strategy as
current sys.exit usage) so urllib.request.urlopen is never invoked with
non-HTTPS schemes. Ensure this validation occurs in the download function before
any network activity and preserves the existing retry/timeout behavior.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
🚧 Files skipped from review as they are similar to previous changes (5)
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- .github/workflows/sanitizer.yml
- .github/workflows/build-and-test.yml
- dash-spv/Cargo.toml
- dash-spv-ffi/src/lib.rs
Force-pushed from e14f0ee to 45b4f3f
Actionable comments posted: 1
♻️ Duplicate comments (10)
dash-spv/tests/dashd_sync/tests_basic.rs (1)
36-37: ⚠️ Potential issue | 🟠 Major

Avoid hardcoding mnemonic/key material in tests.
Both occurrences embed mnemonic phrases directly in source. Load from fixtures/env or generate deterministically in test setup instead.
As per coding guidelines `**/*.rs`: "Never hardcode network parameters, addresses, or keys."

Also applies to: 81-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_basic.rs` around lines 36 - 37, The test currently hardcodes mnemonic phrases in the variable empty_mnemonic (and the duplicate at lines ~81-82) and passes them to create_test_wallet; replace this by either loading the mnemonic from a test fixture or environment variable or by generating a deterministic test mnemonic inside the test setup (e.g., a helper like create_deterministic_mnemonic or using a fixed entropy seed) and then call create_test_wallet with that value; update both occurrences (empty_mnemonic and the duplicate) and add a small comment in the test helper to explain the deterministic source so keys are not embedded in source.

contrib/setup-dashd.py (2)
104-109: ⚠️ Potential issue | 🟠 Major

Verify artifact integrity before extraction/use.

The script downloads executable/test-data archives and extracts immediately. Add SHA256 (or signed checksum) verification before calling `extract(...)`; otherwise CI may run untrusted content if assets are tampered with in transit/supply chain.

Also applies to: 133-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 104 - 109, Add SHA256 verification before extracting downloaded Dash artifacts: after download(...) and before extract(...), compute the SHA256 of archive_path (use hashlib.sha256()) and compare it to a known expected checksum (either fetched from a trusted URL or looked up from a mapping keyed by DASHVERSION/asset); if the checksum does not match, log an error via log(...) and abort (do not call extract(...) or os.remove(...)). Apply the same verification flow to the other download/extract sequence referenced around extract(...) at lines 133-137, ensuring both places validate integrity and fail-safe without extraction on mismatch.
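A hedged sketch of that checksum step in standalone Python (stdlib only; the source of the expected digest, whether a pinned mapping keyed by version or the release's checksum file, is left open just as it is in the review):

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_or_abort(path: str, expected_hex: str) -> None:
    """Abort before extraction if the archive doesn't match its pinned digest."""
    actual = sha256_file(path)
    if actual != expected_hex.lower():
        raise SystemExit(f"checksum mismatch for {path}: {actual} != {expected_hex}")
```

The key property is ordering: `verify_or_abort` runs after `download(...)` and before `extract(...)`, so a tampered asset never gets unpacked.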
83-90: ⚠️ Potential issue | 🟠 Major

Harden archive extraction to prevent path traversal writes.

`extractall()` (both zip and tar paths) trusts member paths. A crafted archive can escape `dest_dir` via `../` or absolute paths. Validate each member's resolved target path is inside `dest_dir` before extracting.

Proposed hardening sketch

```diff
 def extract(archive_path, dest_dir):
+    def _safe_target(base, name):
+        target = os.path.realpath(os.path.join(base, name))
+        base_real = os.path.realpath(base)
+        if not (target == base_real or target.startswith(base_real + os.sep)):
+            sys.exit(f"Unsafe archive path: {name}")
+        return target
+
     if archive_path.endswith(".zip"):
         with zipfile.ZipFile(archive_path, "r") as zf:
-            zf.extractall(dest_dir)
+            for member in zf.infolist():
+                _safe_target(dest_dir, member.filename)
+                zf.extract(member, dest_dir)
     else:
         with tarfile.open(archive_path, "r:gz") as tf:
-            tf.extractall(dest_dir, filter="data")
+            for member in tf.getmembers():
+                _safe_target(dest_dir, member.name)
+                tf.extract(member, dest_dir)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 83 - 90, The extract function currently calls zipfile.ZipFile.extractall and tarfile.open(...).extractall which are vulnerable to path traversal; update extract to iterate members (for zip: zf.infolist()/namelist, for tar: tf.getmembers()), compute each member's target path by joining dest_dir and the member name and resolving/normalizing it, reject any member whose resolved path is outside dest_dir or is absolute, and only then extract that single member (using safe write or the archive's extract method that targets that member). Also ensure symlinks are handled safely (do not follow or extract symlinks that point outside dest_dir) and preserve existing behavior for normal files and directories in the extract function (function name: extract, parameters: archive_path, dest_dir; local variables: zf, tf, members).

dash-spv-ffi/src/test_utils/callbacks.rs (1)
94-99: ⚠️ Potential issue | 🔴 Critical

Remove fake `&'static` lifetime from `tracker_from()`.

Line 94 returns `&'static CallbackTracker`, but `user_data` originates from `Arc::as_ptr` and is only valid while that `Arc` is alive. Returning `'static` is unsound at this FFI boundary.

Suggested direction

```diff
+use std::ptr::NonNull;
 ...
-unsafe fn tracker_from(user_data: *mut c_void) -> Option<&'static CallbackTracker> {
+unsafe fn tracker_from(user_data: *mut c_void) -> Option<NonNull<CallbackTracker>> {
     if user_data.is_null() {
         None
     } else {
-        Some(&*(user_data as *const CallbackTracker))
+        NonNull::new(user_data as *mut CallbackTracker)
     }
 }
```

Then at call sites:

```diff
-    let Some(tracker) = (unsafe { tracker_from(user_data) }) else { return; };
+    let Some(tracker_ptr) = (unsafe { tracker_from(user_data) }) else { return; };
+    let tracker = unsafe { tracker_ptr.as_ref() };
```

```shell
#!/bin/bash
# Verify the current unsound lifetime contract and affected usage points.
rg -n "unsafe fn tracker_from|&'static CallbackTracker|tracker_from\(user_data\)|Arc::as_ptr\(tracker\) as \*mut c_void" dash-spv-ffi/src/test_utils/callbacks.rs -C2
```

As per coding guidelines `**/*-ffi/**/*.rs`: "Exercise careful memory safety handling at FFI boundaries" and "Be careful with FFI memory management."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/callbacks.rs` around lines 94 - 99, The function tracker_from currently claims to return &'static CallbackTracker which is unsound because user_data comes from Arc::as_ptr and only lives while the Arc is alive; change tracker_from to stop promising a 'static lifetime—either make it unsafe fn tracker_from<'a>(user_data: *mut c_void) -> Option<&'a CallbackTracker> so the lifetime is caller-controlled, or (preferably for FFI) return Option<*const CallbackTracker> (or Option<NonNull<CallbackTracker>>) and let callers convert to a reference within the valid Arc scope; update all call sites that use tracker_from and any uses of Arc::as_ptr(tracker) as *mut c_void to ensure the Arc is kept alive for the duration of any derived reference.

dash-spv/src/test_utils/node.rs (1)
247-249: ⚠️ Potential issue | 🟠 Major

Handle additional "RPC method unavailable" error variants in `supports_mining`.

Line 248 only checks `"not available"`. Unsupported builds can return `"Method not found"` (and similar wording), which currently gets misclassified as supported.

Dash Core JSON-RPC: what exact error messages are returned when calling `generatetoaddress` on builds where mining RPCs are not compiled/enabled (e.g., "Method not found" vs "not available")?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 247 - 249, The pattern in supports_mining currently only matches RPC errors whose message contains "not available", causing other unavailable-method variants to be misclassified; update the match arm for Err(dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::Error::Rpc(ref e))) in supports_mining to also detect variants like "Method not found" and similar wording (e.g., by doing a case-insensitive contains check for "method not" or adding checks for "Method not found"/"not implemented"), or better yet normalize e.message.to_lowercase() and match on substrings ("not available", "method not", "not implemented") so unsupported builds are correctly treated as not supporting mining.

dash-spv-ffi/src/test_utils/context.rs (1)
205-217: ⚠️ Potential issue | 🟠 Major

Validate `wallet_id` length before passing raw pointers into FFI.

These paths pass `wallet_id.as_ptr()` with no length guard. Add a precondition (expected 32 bytes) before any FFI call using that pointer.

As per coding guidelines `**/*-ffi/**/*.rs`: "Exercise careful memory safety handling at FFI boundaries" and "Be careful with FFI memory management."

Suggested fix

```diff
 impl FFITestContext {
+    const WALLET_ID_LEN: usize = 32;
+
+    fn assert_wallet_id_len(wallet_id: &[u8]) {
+        assert_eq!(wallet_id.len(), Self::WALLET_ID_LEN, "wallet_id must be 32 bytes");
+    }
+
     pub unsafe fn get_wallet_balance(&self, wallet_id: &[u8]) -> (u64, u64) {
+        Self::assert_wallet_id_len(wallet_id);
         let mut confirmed: u64 = 0;
         ...
     }

     pub unsafe fn get_receive_address(&self, wallet_id: &[u8]) -> Address {
+        Self::assert_wallet_id_len(wallet_id);
         ...
     }

     unsafe fn with_bip44_account<T>(
         &self,
         wallet_id: &[u8],
         f: impl FnOnce(*const FFIManagedCoreAccount) -> T,
     ) -> T {
+        Self::assert_wallet_id_len(wallet_id);
         ...
     }
 }
```

Also applies to: 307-315, 336-344
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/context.rs` around lines 205 - 217, The function get_wallet_balance (and the other FFI callers at the noted locations) currently passes wallet_id.as_ptr() to wallet_manager_get_wallet_balance with no length check; add a precondition that wallet_id.len() == 32 before any FFI call, and if the length is wrong avoid calling the FFI function (e.g., return an appropriate error/zero balance or set the FFIError to an invalid-argument state) so we never pass an out-of-range pointer; update get_wallet_balance and the similar call sites (the calls to wallet_manager_get_wallet_balance and any other FFI functions using wallet_id) to perform this guard.

dash-spv/tests/dashd_sync/tests_transaction.rs (2)
85-155: ⚠️ Potential issue | 🟡 Minor

Stop spawned client before test exits.

The `client_handle` created at line 97 is never stopped. This can cause the background task to race with test teardown and artifact retention. Add `client_handle.stop().await` before the test completes.

Proposed fix

```diff
     tracing::info!(
         "All 3 transactions found: tx_count {} -> {}, balance {} -> {} (fees={})",
         baseline_tx_count, final_tx_count, baseline_balance, final_balance, fees_paid
     );
+    client_handle.stop().await;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_transaction.rs` around lines 85 - 155, The spawned client_handle from TestContext::spawn_new_client() is never stopped; call client_handle.stop().await before the test returns (e.g., after the final assertions and before the test function exits) to stop the background task cleanly and avoid races with teardown — ensure you await the stop() Future on the client_handle variable created by spawn_new_client().
159-231: ⚠️ Potential issue | 🟡 Minor

Stop spawned client before test exits.

Similar to the previous test, the `client_handle` created at line 171 is never stopped. Add `client_handle.stop().await` at the end of the test.

Proposed fix

```diff
     tracing::info!(
         "All iterations complete: tx_count {} -> {}, balance {} -> {} (fees={})",
         baseline_tx_count, baseline_tx_count + amounts.len(), baseline_balance, final_balance, fees_paid
     );
+    client_handle.stop().await;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_transaction.rs` around lines 159 - 231, The spawned client_handle in test_multiple_transactions_across_blocks is never stopped; to fix, call client_handle.stop().await before the test returns (e.g., after the final verification/logging and before exiting) so the client is cleanly shut down; locate the client_handle created by ctx.spawn_new_client().await in the test and add the awaitable stop() call on that handle.

dash-spv/tests/dashd_sync/helpers.rs (2)
186-191: ⚠️ Potential issue | 🟡 Minor

Handle `RecvError::Lagged` as recoverable in disconnect loop.

Similar to `wait_for_network_event`, the sync event receiver can lag during rapid peer reconnections. `RecvError::Lagged` is recoverable; only panic on `RecvError::Closed`.

Proposed fix

```diff
 Ok(_) => continue,
-Err(_) => {
-    panic!("Sync event channel error after {} disconnections", disconnect_count);
-}
+Err(broadcast::error::RecvError::Lagged(_)) => continue,
+Err(broadcast::error::RecvError::Closed) => {
+    panic!("Sync event channel closed after {} disconnections", disconnect_count);
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 186 - 191, The disconnect loop currently panics on any receiver error; change the Err(_) arm in the sync event receiver match to distinguish RecvError variants so RecvError::Lagged is treated as recoverable (continue) and only RecvError::Closed causes panic. Update the match in the disconnect loop in helpers.rs (the sync event receiver handling) to pattern match Err(e) => match e { RecvError::Lagged => continue, RecvError::Closed => panic!(...) } — mirroring the behavior used in wait_for_network_event.
109-121: ⚠️ Potential issue | 🟡 Minor

Handle `RecvError::Lagged` as recoverable.

Line 116 treats all `recv()` errors as failures. However, `broadcast::error::RecvError::Lagged` is recoverable: it means the receiver fell behind but Tokio automatically advances the cursor to the oldest retained message. Only `RecvError::Closed` indicates a genuine failure. Treating lag as terminal can cause flaky test failures under bursty network traffic.

Proposed fix

```diff
 result = receiver.recv() => {
     match result {
         Ok(ref event) if predicate(event) => return true,
         Ok(_) => continue,
-        Err(_) => return false,
+        Err(broadcast::error::RecvError::Lagged(_)) => continue,
+        Err(broadcast::error::RecvError::Closed) => return false,
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 109 - 121, The recv() error handling treats all errors as terminal; change the match on result in the loop that uses receiver.recv(), deadline, and predicate so that Err(e) distinguishes broadcast::error::RecvError::Lagged from RecvError::Closed—on Lagged continue (treat as recoverable and loop) while on Closed return false; keep the existing Ok branches and the tokio::select structure intact.
🧹 Nitpick comments (1)
dash-spv/tests/dashd_sync/setup.rs (1)
168-169: Consider using `expect()` with descriptive messages instead of `unwrap()`.

When these assertions fail, `unwrap()` provides minimal context. Using `expect()` with a message would improve test failure diagnostics:

```diff
-let header_height = progress.headers().unwrap().current_height();
-let filter_header_height = progress.filter_headers().unwrap().current_height();
+let header_height = progress.headers().expect("Headers progress missing").current_height();
+let filter_header_height = progress.filter_headers().expect("Filter headers progress missing").current_height();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/setup.rs` around lines 168 - 169, Replace the two unwrap() calls on progress.headers() and progress.filter_headers() with expect() calls that include descriptive messages; specifically, change progress.headers().unwrap().current_height() and progress.filter_headers().unwrap().current_height() to use progress.headers().expect("expected headers to be present in progress").current_height() and progress.filter_headers().expect("expected filter headers to be present in progress").current_height() so test failures report clear context mentioning header_height and filter_header_height.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/tests/dashd_sync/setup.rs`:
- Around line 332-338: The create_test_config function currently enables
exclusive peer mode implicitly by adding a peer but doesn't set the
restrict_to_configured_peers flag explicitly; update create_test_config (the
function that builds ClientConfig) to call the setter to set
restrict_to_configured_peers(true) on the resulting config so DNS discovery is
disabled and the test intent matches other tests (e.g.,
wallet_integration_test.rs) and manager.rs checks that flag consistently.
---
Duplicate comments:
In `@contrib/setup-dashd.py`:
- Around line 104-109: Add SHA256 verification before extracting downloaded Dash
artifacts: after download(...) and before extract(...), compute the SHA256 of
archive_path (use hashlib.sha256()) and compare it to a known expected checksum
(either fetched from a trusted URL or looked up from a mapping keyed by
DASHVERSION/asset); if the checksum does not match, log an error via log(...)
and abort (do not call extract(...) or os.remove(...)). Apply the same
verification flow to the other download/extract sequence referenced around
extract(...) at lines 133-137, ensuring both places validate integrity and
fail-safe without extraction on mismatch.
- Around line 83-90: The extract function currently calls
zipfile.ZipFile.extractall and tarfile.open(...).extractall which are vulnerable
to path traversal; update extract to iterate members (for zip:
zf.infolist()/namelist, for tar: tf.getmembers()), compute each member's target
path by joining dest_dir and the member name and resolving/normalizing it,
reject any member whose resolved path is outside dest_dir or is absolute, and
only then extract that single member (using safe write or the archive's extract
method that targets that member). Also ensure symlinks are handled safely (do
not follow or extract symlinks that point outside dest_dir) and preserve
existing behavior for normal files and directories in the extract function
(function name: extract, parameters: archive_path, dest_dir; local variables:
zf, tf, members).
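The containment check at the core of that fix can be isolated into a pure function and unit-tested on its own. A standalone Python sketch (`is_within` is my name for it; note it validates member names lexically and does not by itself defuse symlink members, which the prompt also calls out):

```python
import os

def is_within(base_dir: str, member_name: str) -> bool:
    """True only if member_name resolves to a path inside base_dir."""
    base = os.path.realpath(base_dir)
    # os.path.join discards base when member_name is absolute, and realpath
    # collapses any ../ components, so both escape vectors surface here.
    target = os.path.realpath(os.path.join(base, member_name))
    return target == base or target.startswith(base + os.sep)

assert is_within("/opt/dash", "bin/dashd")
assert not is_within("/opt/dash", "../../etc/cron.d/evil")
assert not is_within("/opt/dash", "/etc/passwd")
```

Each archive member would be checked with this before the per-member `zf.extract(...)` / `tf.extract(...)` call, so nothing is ever written outside the destination directory.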
In `@dash-spv-ffi/src/test_utils/callbacks.rs`:
- Around line 94-99: The function tracker_from currently claims to return
&'static CallbackTracker which is unsound because user_data comes from
Arc::as_ptr and only lives while the Arc is alive; change tracker_from to stop
promising a 'static lifetime—either make it unsafe fn
tracker_from<'a>(user_data: *mut c_void) -> Option<&'a CallbackTracker> so the
lifetime is caller-controlled, or (preferably for FFI) return Option<*const
CallbackTracker> (or Option<NonNull<CallbackTracker>>) and let callers convert
to a reference within the valid Arc scope; update all call sites that use
tracker_from and any uses of Arc::as_ptr(tracker) as *mut c_void to ensure the
Arc is kept alive for the duration of any derived reference.
In `@dash-spv-ffi/src/test_utils/context.rs`:
- Around line 205-217: The function get_wallet_balance (and the other FFI
callers at the noted locations) currently passes wallet_id.as_ptr() to
wallet_manager_get_wallet_balance with no length check; add a precondition that
wallet_id.len() == 32 before any FFI call, and if the length is wrong avoid
calling the FFI function (e.g., return an appropriate error/zero balance or set
the FFIError to an invalid-argument state) so we never pass an out-of-range
pointer; update get_wallet_balance and the similar call sites (the calls to
wallet_manager_get_wallet_balance and any other FFI functions using wallet_id)
to perform this guard.
In `@dash-spv/src/test_utils/node.rs`:
- Around line 247-249: The pattern in supports_mining currently only matches RPC
errors whose message contains "not available", causing other unavailable-method
variants to be misclassified; update the match arm for
Err(dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::Error::Rpc(ref e))) in
supports_mining to also detect variants like "Method not found" and similar
wording (e.g., by doing a case-insensitive contains check for "method not" or
adding checks for "Method not found"/"not implemented"), or better yet normalize
e.message.to_lowercase() and match on substrings ("not available", "method not",
"not implemented") so unsupported builds are correctly treated as not supporting
mining.
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 186-191: The disconnect loop currently panics on any receiver
error; change the Err(_) arm in the sync event receiver match to distinguish
RecvError variants so RecvError::Lagged is treated as recoverable (continue) and
only RecvError::Closed causes panic. Update the match in the disconnect loop in
helpers.rs (the sync event receiver handling) to pattern match Err(e) => match e
{ RecvError::Lagged => continue, RecvError::Closed => panic!(...) } — mirroring
the behavior used in wait_for_network_event.
- Around line 109-121: The recv() error handling treats all errors as terminal;
change the match on result in the loop that uses receiver.recv(), deadline, and
predicate so that Err(e) distinguishes broadcast::error::RecvError::Lagged from
RecvError::Closed—on Lagged continue (treat as recoverable and loop) while on
Closed return false; keep the existing Ok branches and the tokio::select
structure intact.
In `@dash-spv/tests/dashd_sync/tests_basic.rs`:
- Around line 36-37: The test currently hardcodes mnemonic phrases in the
variable empty_mnemonic (and the duplicate at lines ~81-82) and passes them to
create_test_wallet; replace this by either loading the mnemonic from a test
fixture or environment variable or by generating a deterministic test mnemonic
inside the test setup (e.g., a helper like create_deterministic_mnemonic or
using a fixed entropy seed) and then call create_test_wallet with that value;
update both occurrences (empty_mnemonic and the duplicate) and add a small
comment in the test helper to explain the deterministic source so keys are not
embedded in source.
In `@dash-spv/tests/dashd_sync/tests_transaction.rs`:
- Around line 85-155: The spawned client_handle from
TestContext::spawn_new_client() is never stopped; call
client_handle.stop().await before the test returns (e.g., after the final
assertions and before the test function exits) to stop the background task
cleanly and avoid races with teardown — ensure you await the stop() Future on
the client_handle variable created by spawn_new_client().
- Around line 159-231: The spawned client_handle in
test_multiple_transactions_across_blocks is never stopped; to fix, call
client_handle.stop().await before the test returns (e.g., after the final
verification/logging and before exiting) so the client is cleanly shut down;
locate the client_handle created by ctx.spawn_new_client().await in the test and
add the awaitable stop() call on that handle.
---
Nitpick comments:
In `@dash-spv/tests/dashd_sync/setup.rs`:
- Around line 168-169: Replace the two unwrap() calls on progress.headers() and
progress.filter_headers() with expect() calls that include descriptive messages;
specifically, change progress.headers().unwrap().current_height() and
progress.filter_headers().unwrap().current_height() to use
progress.headers().expect("expected headers to be present in
progress").current_height() and progress.filter_headers().expect("expected
filter headers to be present in progress").current_height() so test failures
report clear context mentioning header_height and filter_header_height.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 211cdf4e-9b93-4ce5-8ef4-e162f6bbc48b
📒 Files selected for processing (23)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- CLAUDE.md
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
✅ Files skipped from review due to trivial changes (1)
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (5)
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/mod.rs
- .github/workflows/build-and-test.yml
- dash-spv/tests/dashd_sync/tests_restart.rs
- .github/workflows/sanitizer.yml
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

Force-pushed from 45b4f3f to 445a2da.
**Codecov Report** ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           v0.42-dev     #464      +/-   ##
=============================================
+ Coverage      63.05%   66.58%   +3.53%
=============================================
  Files            313      313
  Lines          64878    64878
=============================================
+ Hits           40907    43202    +2295
+ Misses         23971    21676    -2295
```

*This pull request uses carry forward flags.*
Actionable comments posted: 1
♻️ Duplicate comments (9)
contrib/setup-dashd.py (2)
104-109: ⚠️ Potential issue | 🟠 Major

**Archive integrity is still not verified before extraction/use.**

Lines 104-109 and 133-137 trust downloaded archives immediately. Add SHA-256 (or signed-checksum) verification before `extract(...)`.

Minimal hardening sketch:

```diff
+import hashlib
+
+def sha256_file(path):
+    h = hashlib.sha256()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(1024 * 1024), b""):
+            h.update(chunk)
+    return h.hexdigest()
+
+def verify_sha256(path, expected, label):
+    actual = sha256_file(path)
+    if actual.lower() != expected.lower():
+        sys.exit(f"Checksum mismatch for {label}: expected {expected}, got {actual}")
+
 ...
 download(url, archive_path)
+expected_dashd_sha256 = os.environ.get("DASHD_ARCHIVE_SHA256")
+if expected_dashd_sha256:
+    verify_sha256(archive_path, expected_dashd_sha256, asset)
 extract(archive_path, cache_dir)
 os.remove(archive_path)
 ...
 download(url, archive_path)
+expected_test_data_sha256 = os.environ.get("TEST_DATA_ARCHIVE_SHA256")
+if expected_test_data_sha256:
+    verify_sha256(archive_path, expected_test_data_sha256, "regtest-40000.tar.gz")
 extract(archive_path, parent_dir)
 os.remove(archive_path)
```

Also applies to: 133-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 104 - 109, The downloaded archive is used without verifying integrity; before calling extract(archive_path) (in the block using log/download/extract with variables archive_path, url, asset, DASHVERSION, cache_dir) compute the archive's SHA-256 and compare it against a trusted checksum (or verify a signed checksum file fetched from the release) and abort/log an error if the checksums don't match; apply the same checksum verification step to the other identical download/extract block (lines around the second archive use) so extraction only proceeds on a validated archive.
83-90: ⚠️ Potential issue | 🟠 Major

**`extractall(filter="data")` is Python-version-sensitive and still brittle here.**

Line 89 relies on a Python API that is not available on older interpreters; this can fail at runtime outside fully controlled environments. Prefer a manual safe-extraction path that is version-agnostic.

Proposed compatibility-safe extraction refactor:

```diff
+def _safe_extract_tar(tf, dest_dir):
+    root = os.path.realpath(dest_dir)
+    for member in tf.getmembers():
+        target = os.path.realpath(os.path.join(dest_dir, member.name))
+        if not (target == root or target.startswith(root + os.sep)):
+            sys.exit(f"Unsafe tar member path: {member.name}")
+        if member.issym() or member.islnk():
+            sys.exit(f"Refusing tar symlink/hardlink member: {member.name}")
+        tf.extract(member, dest_dir)
+
 def extract(archive_path, dest_dir):
     if archive_path.endswith(".zip"):
         with zipfile.ZipFile(archive_path, "r") as zf:
             zf.extractall(dest_dir)
     else:
         with tarfile.open(archive_path, "r:gz") as tf:
-            tf.extractall(dest_dir, filter="data")
+            _safe_extract_tar(tf, dest_dir)
```

```bash
#!/bin/bash
# Verify Python support for tarfile.extractall(filter=...) in the executing environment
python - <<'PY'
import inspect, sys, tarfile
print("python_version:", sys.version)
print("extractall_signature:", inspect.signature(tarfile.TarFile.extractall))
print("supports_filter_param:", "filter" in inspect.signature(tarfile.TarFile.extractall).parameters)
PY

# Check whether CI/workflows pin a specific Python version
rg -n "setup-python|python-version|run: python|run: python3" .github/workflows -g '*.yml'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/setup-dashd.py` around lines 83 - 90, The extract(archive_path, dest_dir) function currently calls tarfile.extractall(..., filter="data") which is not available on older Python versions; replace that branch with a version-agnostic manual extraction: open the tar with tarfile.open(archive_path, "r:gz"), iterate over tf.getmembers() (or tf) and for each TarInfo perform a safe path join check to prevent path traversal, then use tf.extract(member, dest_dir) or write member via tf.extractfile for files; keep the existing zip branch (zipfile.ZipFile(...).extractall) unchanged and ensure exceptions are handled similarly to the zip path.

dash-spv-ffi/tests/dashd_ffi_sync_test.rs (1)
49-51: ⚠️ Potential issue | 🟡 Minor

**Avoid float math for satoshi expectations in assertions.**

Line 50 uses `f64` multiplication for currency conversion. That can introduce off-by-one-sat rounding noise in tests; prefer an integer satoshi value from the fixture/model (e.g., expose `balance_sat` in `WalletFile`) and compare integers directly.

🤖 Prompt for AI Agents
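The rounding hazard behind this comment is easy to reproduce in any IEEE-754 language; a short Python illustration (0.29 is just an example amount, not a fixture value):

```python
# Binary doubles cannot represent 0.29 exactly, so scaling to satoshis by
# float multiplication lands just below the true integer value.
amount_dash = 0.29                       # example amount, not from the test fixture
sats_float = amount_dash * 100_000_000   # 28999999.999999996...

assert int(sats_float) == 28_999_999     # truncation silently drops a satoshi
assert round(sats_float) == 29_000_000   # round() recovers it here, but not in general

# Safer: carry the fixture balance as an integer satoshi value end to end.
expected_sats = 29_000_000
assert round(sats_float) == expected_sats
```

Keeping the balance as a `u64` satoshi count in the fixture removes the conversion entirely, so the assertion compares exact integers.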
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/dashd_ffi_sync_test.rs` around lines 49 - 51, The test uses float math to compute expected_balance via (dashd.wallet.balance * 100_000_000.0).round() which can introduce rounding errors; change the test to compare integer satoshis directly by reading an integer balance field from the fixture/model (e.g., expose dashd.wallet.balance_sat or a similar u64 on WalletFile) and use that value as expected_balance when calling ctx.get_wallet_balance in dashd_ffi_sync_test.rs; if WalletFile currently lacks balance_sat, add it to the WalletFile model and its deserialization so tests can assert equality without any f64 multiplication or rounding.

dash-spv/src/test_utils/node.rs (1)
245-249: ⚠️ Potential issue | 🟠 Major

**`supports_mining()` still misses common “RPC not supported” responses.**

Line 248 only checks `"not available"`. Builds that return `"Method not found"` will still be marked as supported, causing later mining calls to fail unexpectedly.

Proposed fix:

```diff
 match client.generate_to_address(0, &addr) {
     Ok(_) => true,
     Err(dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::Error::Rpc(ref e)))
-        if e.message.contains("not available") =>
+        if e.message.contains("not available")
+            || e.message.contains("Method not found")
+            || e.message.contains("method not found") =>
     {
         false
     }
     // Any other error (auth, network) still counts as "available" —
     // a real generate call will surface the actual error.
     Err(_) => true,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 245 - 249, The supports_mining() check only treats RPC errors containing "not available" as unsupported; update the match arm that inspects the JsonRpc::Error::Rpc message (the branch handling client.generate_to_address(0, &addr)) to also detect other common "RPC not supported" responses such as "Method not found" (and optionally "Method not implemented"/"Unknown method") and return false in those cases; modify the conditional from e.message.contains("not available") to check e.message.contains("not available") || e.message.contains("Method not found") (and add other message checks as needed) so supports_mining() correctly reports unsupported RPCs.

dash-spv-ffi/src/test_utils/context.rs (1)
205-217: ⚠️ Potential issue | 🟠 Major

**Add a wallet ID length precondition before all FFI pointer calls.**

At Lines 213/311/314/343, `wallet_id.as_ptr()` is forwarded without validating buffer length. A short slice can trigger OOB reads across the FFI boundary.

Proposed fix:

```diff
 impl FFITestContext {
+    const WALLET_ID_LEN: usize = 32;
+
+    fn assert_wallet_id_len(wallet_id: &[u8]) {
+        assert_eq!(
+            wallet_id.len(),
+            Self::WALLET_ID_LEN,
+            "wallet_id must be {} bytes",
+            Self::WALLET_ID_LEN
+        );
+    }
+
     pub unsafe fn get_wallet_balance(&self, wallet_id: &[u8]) -> (u64, u64) {
+        Self::assert_wallet_id_len(wallet_id);
         let mut confirmed: u64 = 0;
         let mut unconfirmed: u64 = 0;
         ...
     }

     pub unsafe fn get_receive_address(&self, wallet_id: &[u8]) -> Address {
+        Self::assert_wallet_id_len(wallet_id);
         let mut error = FFIError::success();
         ...
     }

     unsafe fn with_bip44_account<T>(
         &self,
         wallet_id: &[u8],
         f: impl FnOnce(*const FFIManagedCoreAccount) -> T,
     ) -> T {
+        Self::assert_wallet_id_len(wallet_id);
         let wm = self.session.wallet_manager as *const FFIWalletManager;
         ...
     }
 }
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Exercise careful memory safety handling at FFI boundaries; and `**/*.rs`: Always validate inputs from untrusted sources.

Also applies to: 311-315, 342-344
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/context.rs` around lines 205 - 217, Add a precondition that validates wallet_id length before passing wallet_id.as_ptr() into any FFI call (e.g., in get_wallet_balance where wallet_manager_get_wallet_balance is invoked): check wallet_id.len() is greater than the expected minimum (at least > 0 or the specific wallet ID byte-length your FFI expects) and return/propagate an FFI-friendly error (or set error = FFIError::invalid_argument()) instead of calling the FFI with a potentially zero/short slice; apply the same guard to all other call sites that forward wallet_id.as_ptr() (the other occurrences noted around the wallet-related FFI invocations) so no unchecked empty/short slice is passed across the FFI boundary.

dash-spv-ffi/src/test_utils/callbacks.rs (1)
94-99: ⚠️ Potential issue | 🔴 Critical

**Remove the fake `'static` lifetime in `tracker_from()`.**

Line 94 creates `&'static CallbackTracker` from a raw pointer that only lives as long as the originating `Arc` (Lines 394/407/419). That lifetime inflation is unsound and can become a use-after-free if callback timing changes.

Proposed fix:

```diff
+use std::ptr::NonNull;
 ...
-unsafe fn tracker_from(user_data: *mut c_void) -> Option<&'static CallbackTracker> {
+unsafe fn tracker_from(user_data: *mut c_void) -> Option<NonNull<CallbackTracker>> {
     if user_data.is_null() {
         None
     } else {
-        Some(&*(user_data as *const CallbackTracker))
+        NonNull::new(user_data as *mut CallbackTracker)
     }
 }
```

Then at each callback callsite:

```diff
-    let Some(tracker) = (unsafe { tracker_from(user_data) }) else {
+    let Some(tracker_ptr) = (unsafe { tracker_from(user_data) }) else {
         return;
     };
+    let tracker = unsafe { tracker_ptr.as_ref() };
```

As per coding guidelines: `**/*-ffi/**/*.rs`: Exercise careful memory safety handling at FFI boundaries; and `**/*-ffi/**/*.rs`: Be careful with FFI memory management.
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/test_utils/callbacks.rs` around lines 94 - 99, Change tracker_from to stop fabricating a 'static reference: replace its return type with a generic lifetime (e.g. unsafe fn tracker_from<'a>(user_data: *mut c_void) -> Option<&'a CallbackTracker>) and when non-null cast user_data to *const CallbackTracker and convert to a reference with that lifetime (Some(&*(user_data as *const CallbackTracker))). Remove the &'static annotation in the body. Then update each callback callsite that uses tracker_from (the FFI callback wrappers that receive the Arc-backed user_data) so the returned reference does not escape the callback scope and is used only while the originating Arc is still guaranteed alive. Ensure all uses are confined to the callback stack frame to avoid UAF.

dash-spv/tests/dashd_sync/tests_restart.rs (1)
124-139: ⚠️ Potential issue | 🟠 Major

**Don’t panic on recoverable `broadcast::RecvError::Lagged` during restart loop.**

Line 137 currently panics on any recv error. `Lagged` should continue the loop; only `Closed` should be terminal.

Proposed fix:

```diff
 result = client_handle.sync_event_receiver.recv() => {
     match result {
         Ok(ref event) if is_progress_event(event) => {
             events_seen += 1;
             if events_seen % 2 == 0 {
                 tracing::info!("Restarting on: {}", event.description());
                 should_restart = true;
                 break;
             }
             tracing::info!("Skipped: {}", event.description());
         }
         Ok(SyncEvent::SyncComplete { .. }) => break,
         Ok(_) => continue,
-        Err(_) => {
-            panic!("Sync event channel error after {} restarts", restart_count);
-        }
+        Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
+        Err(tokio::sync::broadcast::error::RecvError::Closed) => {
+            panic!("Sync event channel closed after {} restarts", restart_count);
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_restart.rs` around lines 124 - 139, The panic on any recv error inside the restart loop should be changed to handle broadcast::RecvError::Lagged as recoverable: update the match arm for Err(_) from the current panic to match the error (from client_handle.sync_event_receiver.recv()) and continue the loop when Err(broadcast::RecvError::Lagged) is returned, only treating Err(broadcast::RecvError::Closed) as terminal and then panic (or break) with the restart_count; this change affects the recv error handling in the loop that processes SyncEvent (types named SyncEvent and client_handle.sync_event_receiver) and the restart_count usage in the error message.

dash-spv/tests/dashd_sync/helpers.rs (1)
112-117: ⚠️ Potential issue | 🟠 Major

**Handle `RecvError::Lagged` as recoverable in broadcast receive loops.**

Line 116 and Line 188 currently treat all recv errors as terminal. With `tokio::sync::broadcast`, `Lagged` is recoverable and should continue; only `Closed` should fail. This can cause intermittent failures under bursty event flow.

```bash
#!/bin/bash
# Verify recv() error handling sites and whether Lagged/Closed are distinguished.
rg -n -C3 'receiver\.recv\(\)|sync_event_receiver\.recv\(\)|Err\(_\)|RecvError::Lagged|RecvError::Closed' \
  dash-spv/tests/dashd_sync/helpers.rs \
  dash-spv/tests/dashd_sync/tests_restart.rs
```

Proposed fix:

```diff
 result = receiver.recv() => {
     match result {
         Ok(ref event) if predicate(event) => return true,
         Ok(_) => continue,
-        Err(_) => return false,
+        Err(broadcast::error::RecvError::Lagged(_)) => continue,
+        Err(broadcast::error::RecvError::Closed) => return false,
     }
 }
@@
 Ok(SyncEvent::SyncComplete { .. }) => {
     tracing::info!(
         "Sync completed after {} peer disconnections",
         disconnect_count
     );
     break;
 }
 Ok(_) => continue,
-Err(_) => {
-    panic!("Sync event channel error after {} disconnections", disconnect_count);
-}
+Err(broadcast::error::RecvError::Lagged(_)) => continue,
+Err(broadcast::error::RecvError::Closed) => {
+    panic!("Sync event channel closed after {} disconnections", disconnect_count);
+}
```

Also applies to: 188-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/helpers.rs` around lines 112 - 117, The broadcast receive loop currently treats all Err(_) from receiver.recv() as terminal; change the match to distinguish RecvError::Lagged (recoverable) vs RecvError::Closed (terminal): when matching receiver.recv() (the block with Ok(ref event) if predicate(event) and Ok(_) => continue), replace Err(_) with pattern matching on tokio::sync::broadcast::error::RecvError so that Err(RecvError::Lagged(_)) continues the loop and Err(RecvError::Closed) returns false; apply the same change to the other occurrence using sync_event_receiver.recv().

dash-spv/tests/dashd_sync/tests_transaction.rs (1)
97-98: ⚠️ Potential issue | 🟠 Major

**Stop spawned clients before exiting both transaction tests.**
Line 97 and Line 171 start long-running client tasks, but neither test shuts them down before return. This can leave background tasks active and race test teardown.
Proposed fix
```diff
 #[tokio::test]
 async fn test_multiple_transactions_in_single_block() {
 @@
     tracing::info!(
         "All 3 transactions found: tx_count {} -> {}, balance {} -> {} (fees={})",
         baseline_tx_count, final_tx_count, baseline_balance, final_balance, fees_paid
     );
+    client_handle.stop().await;
 }
 @@
 #[tokio::test]
 async fn test_multiple_transactions_across_blocks() {
 @@
     tracing::info!(
         "All iterations complete: tx_count {} -> {}, balance {} -> {} (fees={})",
         baseline_tx_count, baseline_tx_count + amounts.len(), baseline_balance, final_balance, fees_paid
     );
+    client_handle.stop().await;
 }
```

Also applies to: 147-155, 171-172, 223-231
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_transaction.rs` around lines 97 - 98, The spawned client tasks created via ctx.spawn_new_client() (returned as client_handle used with client_handle.progress_receiver and wait_for_sync) are not stopped before the test returns; update each test that calls spawn_new_client() to explicitly shut down the client_handle at the end (e.g., await a shutdown/stop method or join the task such as client_handle.shutdown().await or client_handle.stop().await, or otherwise await/abort the background task) so the long-running client task is terminated before test exit; apply the same change for every occurrence where client_handle is created (the blocks around spawn_new_client()/progress_receiver and wait_for_sync).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/tests/dashd_sync/helpers.rs`:
- Around line 24-44: In wait_for_sync, avoid relying solely on
progress_receiver.changed() and remove the headers().unwrap() panic: before
awaiting timeout or changed(), read the current state from progress_receiver
(using borrow() or borrow_and_update()) and if update.is_synced() and both
headers().map(|h| h.current_height()).unwrap_or(0) and filters().ok().map(|f|
f.committed_height()).unwrap_or(0) are >= target_height, return success
immediately; otherwise continue to await progress_receiver.changed() inside the
tokio::select!; also replace headers().unwrap() with a safe headers().map(...)
fallback to handle early startup states and keep the existing error handling for
a closed channel (result.is_err()).
ℹ️ Review info
Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5f6ef048-e902-4b04-8517-153219e8e6c8
📒 Files selected for processing (23)
- .github/workflows/build-and-test.yml
- .github/workflows/sanitizer.yml
- CLAUDE.md
- contrib/setup-dashd.py
- dash-spv-ffi/Cargo.toml
- dash-spv-ffi/src/lib.rs
- dash-spv-ffi/src/test_utils/callbacks.rs
- dash-spv-ffi/src/test_utils/context.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv-ffi/tests/callback_integration_test.rs
- dash-spv-ffi/tests/dashd_ffi_sync_test.rs
- dash-spv/Cargo.toml
- dash-spv/src/test_utils/context.rs
- dash-spv/src/test_utils/fs_helpers.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/tests/dashd_sync/tests_restart.rs
- dash-spv/tests/dashd_sync/tests_transaction.rs
✅ Files skipped from review due to trivial changes (1)
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (7)
- .github/workflows/build-and-test.yml
- dash-spv/tests/dashd_sync/tests_disconnect.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv-ffi/src/test_utils/mod.rs
- dash-spv/tests/dashd_sync/tests_basic.rs
- dash-spv-ffi/Cargo.toml
- dash-spv/src/test_utils/fs_helpers.rs
Adds comprehensive integration tests that verify SPV sync against a real `dashd` instance with pre-generated regtest blockchain: [dashpay/regtest-blockchain](https://github.com/dashpay/regtest-blockchain)

- CI infrastructure: `contrib/setup-dashd.py` for cross-platform dashd/test-data setup, GitHub Actions caching and log retention on test failure
- Shared test utilities in `dash-spv/src/test_utils/`: `DashCoreNode` (`dashd` process management, RPC), `DashdTestContext` (common setup), filesystem helpers
- FFI test utilities in `dash-spv-ffi/src/test_utils/`: `CallbackTracker` (callback verification), `FFITestContext` (FFI client wrapper lifecycle management)
- SPV tests: basic sync, empty wallet, multi-wallet, restart consistency, restart with fresh wallet, multiple restarts, random restarts, peer disconnection (exclusive and non-exclusive mode), incremental transactions (single block, across blocks)
- FFI tests: wallet sync, incremental sync, restart consistency, all-callbacks verification, post-sync transaction and disconnect callbacks
Force-pushed from 445a2da to 543aefc (Compare)
ZocoLini left a comment:

Looks good, only a few comments about code organization; let me know what you think about them.
I think these utilities should be inside the integration tests folder; they are not meant to be used in both unit tests and integration tests. Maybe something like dash-spv-ffi/tests/callbacks/callback_utils.rs.
Restructured the test a bit, see a66bd50
same, dash-spv-ffi/tests/dashd_sync/context_utils.rs would make more sense
Restructured the test a bit, see a66bd50
mv to dash-spv-ffi/tests/callbacks/name_that_makes_sense_here.rs
Restructured the test a bit, see a66bd50
mv to dash-spv-ffi/tests/dashd_sync/name_that_makes_sense_here.rs, following the same naming convention we have in dash-spv. At the end of the day these tests should be the same (or similar) to the ones in dash-spv, since FFI is a wrapper around dash-spv logic, right??
same, weird to expose this under a feature when it is only meant for integration tests
Nope. It's used in FFI tests too.
```rust
/// Recursively copy a directory and all its contents.
pub fn copy_dir(src: &Path, dst: &Path) -> io::Result<()> {
    fs::create_dir_all(dst)?;
```
I don't know about this function; dash-spv exposing a test utility to copy a directory feels wrong. If this is only used in one file or module, I would move it there.
Ah yeah, it can be `pub(super)`.
```rust
/// By default only retains on panic. Set `DASHD_TEST_RETAIN_ALWAYS=1`
/// to also retain directories from passing tests.
pub fn retain_test_dir(src: &Path, label: &str) {
    let retain_always = std::env::var("DASHD_TEST_RETAIN_ALWAYS")
```
move this into the integration tests directory or wherever it's needed, but I don't like exposing it through a feature for everybody
Nope. It's used in FFI tests too. It's just exposed as a test utility.
```rust
/// Default timeout for sync operations in integration tests.
pub const SYNC_TIMEOUT: Duration = Duration::from_secs(180);
```
same, this is specific for integration tests
Nope. It's used in FFI tests too. I mean, I can redefine it there, but why not keep it here in one place?
same, move it into the dashd_sync integration tests module
Nope. It's used in FFI tests too.
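A shared constant like `SYNC_TIMEOUT` is typically consumed by a polling helper that bounds how long a test waits for a condition. A stdlib sketch of that idea (`wait_until` is a hypothetical helper, not the crate's `wait_for_sync`):

```rust
use std::time::{Duration, Instant};

/// Shared timeout constant, as quoted from the PR.
pub const SYNC_TIMEOUT: Duration = Duration::from_secs(180);

/// Hypothetical polling helper: repeatedly check a condition
/// until it holds or the deadline passes; returns whether it held.
fn wait_until(timeout: Duration, mut condition: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if condition() {
            return true;
        }
        std::thread::sleep(Duration::from_millis(1));
    }
    false
}

fn main() {
    // A condition that becomes true after a few polls succeeds well
    // within the deadline.
    let mut polls = 0;
    assert!(wait_until(Duration::from_secs(1), || {
        polls += 1;
        polls >= 3
    }));

    // A condition that never holds times out and reports failure.
    assert!(!wait_until(Duration::from_millis(10), || false));
    println!("SYNC_TIMEOUT = {SYNC_TIMEOUT:?}");
}
```

In the real tests, the condition would be "tip height reached" or "sync event received", with `SYNC_TIMEOUT` as the bound, which is why keeping the constant in one shared place (as the author argues) is convenient.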
dash-spv/tests/dashd_sync.rs
Outdated
If you rename this to mod.rs and move it into dashd_sync/, Rust doesn't recognize the tests?? We declare modules with mod.rs in this codebase; it would be weird to change that for this case if not needed.
Ah nice, I thought integration tests only get picked up this way for some reason.
Okay yeah, and I was right :) It needs to stay like this, otherwise the tests won't be picked up.
So it works with a main.rs though! Added 03fd60e.
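This matches Cargo's target auto-discovery: each top-level file in `tests/` is built as its own integration-test crate, and a subdirectory under `tests/` is only built as a test crate when it contains a `main.rs`; a `mod.rs` directory there is ignored, which is why the tests vanished. A sketch of the resulting layout, with module names taken from this PR's file list:

```rust
// dash-spv/tests/dashd_sync/main.rs
// Cargo builds this directory as a single integration-test crate;
// the sibling files become plain modules of that crate.
mod helpers;
mod setup;
mod tests_basic;
mod tests_disconnect;
mod tests_restart;
mod tests_transaction;
```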
Adds comprehensive integration tests that verify SPV sync against a real `dashd` instance with pre-generated regtest blockchain: dashpay/regtest-blockchain

Includes:
- `contrib/setup-dashd.py` for cross-platform dashd/test-data setup, GitHub Actions caching and log retention on test failure
- `dash-spv/src/test_utils/`: `DashCoreNode` (`dashd` process management, RPC), `DashdTestContext` (common setup), filesystem helpers
- `dash-spv-ffi/src/test_utils/`: `CallbackTracker` (callback verification), `FFITestContext` (FFI client wrapper lifecycle management)

Based on:
- … cycle to `SyncEvent::SyncComplete` #459
- thread-local logging support for parallel test isolation #485

Summary by CodeRabbit