
Conversation


@nsingla nsingla commented Nov 24, 2025

Description of your changes:
Following the instructions here: https://github.com/opendatahub-io/opendatahub-operator, this change overrides the DSPO with a provided branch (passed as a runtime arg)
e.g. docker run command:

docker run -it -v /home/$username/.kube/config:/dspa/backend/test/.kube/config --rm --workdir="" --env AWS_ACCESS_KEY_ID="" --env AWS_SECRET_ACCESS_KEY="" --env BUCKET="" --env REGION="" --env NAMESPACE=nsingla7 --env DSPA_NAME=ds-pipelines-tests:latest --label-filter=smoke --dspo-branch=stable --dsp-tag=stable

Checklist:

Summary by CodeRabbit

  • Tests

    • Updated end-to-end test suite infrastructure and configuration management
    • Enhanced test deployment workflows with improved operator integration
  • Chores

    • Improved test environment setup and dependency management

✏️ Tip: You can customize this high-level summary in your review settings.


coderabbitai bot commented Nov 24, 2025

Walkthrough

Backend test infrastructure is enhanced to support DSPO (Data Science Pipelines Operator) deployment and CSV patching in E2E tests. Changes include Kustomize tooling in Docker, Kubernetes manifest overlays for deployments and persistent storage, and automated bash scripts for CSV patching, manifest preparation, and pod configuration injection during test execution.

Changes

Cohort / File(s) Summary
E2E Test Suite Naming
backend/test/end2end/e2e_suite_test.go
Updated test suite registration name from "E2E Tests Suite" to "E2ETestsSuite" in TestAPIs.
Docker Tooling
backend/test/images/Dockerfile.test
Added installation step for Kustomize v5.8.0: downloads tarball, extracts, copies binary to /usr/local/bin, and cleans up.
Kubernetes Overlays
backend/test/images/overlays/aipipelines-csv-patch.json
backend/test/images/overlays/dspo-pvc.yaml
JSON Patch applied to AIPipelines CSV deployment: sets replicas to 1, strategy to Recreate, adds securityContext with fsGroup, mounts /opt/manifests/datasciencepipelines, and references PVC. New PVC manifest defines aipipelines-component-manifests with 1Gi storage and ReadWriteOnce access.
CSV Patching Workflow
backend/test/images/patch-csv.sh
New script with functions for orchestrating CSV patching: patch_csv() coordinates end-to-end patching including PVC creation and operator pod manifest injection; apply_csv_patch_and_wait_for_operator() applies patches and waits for operator readiness; download_and_prepare_dspo_manifests() fetches DSPO repo with retry logic and prepares overlay structure; copy_dspo_manifests_to_pods() distributes manifests to operator pods; wait_for_dspo_controller_manager() polls for controller manager readiness; find_csv_and_update() locates and patches appropriate CSV namespace.
Test Execution
backend/test/images/test-run.sh
Adds CLI argument parsing for TEST_LABEL, DSP_TAG, DSPO_OWNER, DSPO_REPO, DSPO_BRANCH; sources patch-csv.sh; injects CSV patching prior to DSPA deployment; expands manifests with DSP_TAG image references; implements guarded deployment wait with timeout handling; refactors test execution with label-based filtering and preserves parallelism.

Sequence Diagram

sequenceDiagram
    participant Test as Test Runner
    participant CSV as CSV Patch Workflow
    participant Repo as DSPO Repository
    participant K8s as Kubernetes
    participant Pods as Operator Pods

    Test->>CSV: find_csv_and_update()
    CSV->>K8s: Locate CSV namespace
    K8s-->>CSV: rhods-operator or opendatahub-operator
    
    CSV->>K8s: Create aipipelines PVC
    K8s-->>CSV: PVC created
    
    CSV->>Repo: download_and_prepare_dspo_manifests()
    Repo-->>CSV: DSPO tarball + extracted overlays
    
    CSV->>K8s: apply_csv_patch_and_wait_for_operator()
    K8s-->>CSV: Patch applied
    
    CSV->>Pods: copy_dspo_manifests_to_pods()
    Pods-->>CSV: Manifests distributed
    
    CSV->>K8s: Restart operator deployment
    K8s-->>CSV: Deployment restarted
    
    CSV->>K8s: wait_for_dspo_controller_manager()
    K8s-->>CSV: Controller manager ready
    
    CSV-->>Test: Patching complete
    Test->>K8s: Deploy DSPA
    K8s-->>Test: Deployment available

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Bash script complexity: patch-csv.sh and test-run.sh contain intricate orchestration logic including retry mechanisms, polling with timeouts, error handling, and conditional workflows across multiple Kubernetes operations
  • File heterogeneity: Changes span Go, Docker, YAML manifests, and Bash—each requiring different review perspectives
  • Integration points: patch-csv.sh introduces six new functions with interdependencies; test-run.sh integrates this workflow into test execution flow with guarded deployments and label filtering
  • Error handling density: Multiple timeout handlers, per-pod success/failure reporting, and rollout status checks warrant careful scrutiny
  • Manifest configuration: JSON Patch operations and PVC specifications require validation against deployment context

Poem

🐰 A kustomize tale in the CI sand,
Where patches and manifests hand-in-hand,
Dance through the pods with retries so keen,
CSV magic—the finest I've seen!
Test infrastructure blooms, grand and vast,
This script-based springtime will surely outlast! 🌱

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 71.43%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (❓ Inconclusive): The description explains the purpose and provides a concrete docker run example, but omits the required PR title convention format (missing 'test:' prefix structure), and the final checklist item about DSPO testing is incomplete. Resolution: ensure the PR title follows the exact convention with scope prefix (e.g., 'test(backend): ...'), and complete the third checklist item by verifying tests work with DSPO and documenting results.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly and specifically describes the main change: adding the capability to deploy a custom DSPO before starting tests, which is the central focus of all file modifications.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@nsingla nsingla force-pushed the patch_csv_before_running_tests branch from 5c021e2 to 8ad5484 on November 25, 2025 20:02
@nsingla nsingla changed the title from "adding capability to deploy custom DSPO before starting tests" to "test: adding capability to deploy custom DSPO before starting tests" on Nov 25, 2025
@nsingla nsingla marked this pull request as ready for review November 25, 2025 20:06
@openshift-ci openshift-ci bot requested review from HumairAK and hbelmiro November 25, 2025 20:06

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
backend/test/images/overlays/dspo-pvc.yaml (1)

1-11: Consider specifying a storageClassName.

The PVC manifest is valid, but it relies on the cluster's default storage class. For better portability and explicit control, consider adding a storageClassName field to the spec.

 spec:
   accessModes:
     - ReadWriteOnce
+  storageClassName: <appropriate-storage-class>
   resources:
     requests:
       storage: 1Gi
   volumeMode: Filesystem

This ensures the PVC uses the intended storage class and avoids issues when the cluster default is unavailable or inappropriate for testing.

backend/test/images/test-run.sh (1)

10-39: Consider adding argument validation.

The argument parsing logic is functional and supports the new DSPO configuration options. However, invalid arguments are silently ignored, which could lead to confusion if users mistype an argument.

You could add validation and help text:

 while [ "$#" -gt 0 ]; do
   case "$1" in
     --label-filter=*)
       TEST_LABEL="${1#*=}"
       shift
       ;;
     --dsp-tag=*)
       DSP_TAG="${1#*=}"
       shift
       ;;
     --dspo-owner=*)
       DSPO_OWNER="${1#*=}"
       shift
       ;;
     --dspo-repo=*)
       DSPO_REPO="${1#*=}"
       shift
       ;;
     --dspo-branch=*)
       DSPO_BRANCH="${1#*=}"
       shift
       ;;
+    *)
+      echo "Unknown argument: $1"
+      echo "Usage: $0 [--label-filter=VALUE] [--dsp-tag=VALUE] [--dspo-owner=VALUE] [--dspo-repo=VALUE] [--dspo-branch=VALUE]"
+      exit 1
+      ;;
   esac
 done
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f851ceb and 8ad5484.

📒 Files selected for processing (6)
  • backend/test/end2end/e2e_suite_test.go (1 hunks)
  • backend/test/images/Dockerfile.test (1 hunks)
  • backend/test/images/overlays/aipipelines-csv-patch.json (1 hunks)
  • backend/test/images/overlays/dspo-pvc.yaml (1 hunks)
  • backend/test/images/patch-csv.sh (1 hunks)
  • backend/test/images/test-run.sh (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/test/images/test-run.sh (1)
backend/test/images/patch-csv.sh (1)
  • find_csv_and_update (285-330)
🪛 Shellcheck (0.11.0)
backend/test/images/patch-csv.sh

[warning] 18-18: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 263-263: Declare and assign separately to avoid masking return values.

(SC2155)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (31)
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion= proxy= storage= pod_to_pod_tls_enabled=true
  • GitHub Check: End to End E2EProxy Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy=true storage= pod_to_pod_tls_enabled=
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.5.15 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2EEssential Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy= storage= pod_to_pod_tls_enabled=
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.5.15 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2EFailure Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy= storage= pod_to_pod_tls_enabled=
  • GitHub Check: End to End E2ECritical Tests - K8s v1.29.2 cacheEnabled=false argoVersion=v3.5.15 proxy= storage= pod_to_pod_tls_enabled=
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.6.10 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.7.1 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.7.1 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.6.10 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.5.15 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.6.10 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.7.1 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.6.10 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.5.15 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
  • GitHub Check: End to End E2ECritical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.7.1 proxy=false storage=minio pod_to_pod_tls_enabled=false
  • GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=false multiUser=true argoVersion= proxy= storage=seaweedfs artifactProxy=
  • GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=true multiUser=true argoVersion= proxy= storage=minio artifactProxy=
  • GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=true multiUser=true argoVersion= proxy= storage=seaweedfs artifactProxy=true
  • GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=false multiUser=true argoVersion= proxy= storage=minio artifactProxy=
  • GitHub Check: KFP SDK Client Tests - K8s v1.31.0
  • GitHub Check: KFP SDK Client Tests - K8s v1.29.2
  • GitHub Check: API integration tests v2 - K8s with kubernetes v1.31.0 pod_to_pod_tls_enabled=false
  • GitHub Check: API integration tests v2 - K8s with database v1.31.0 pod_to_pod_tls_enabled=true
  • GitHub Check: API integration tests v2 - K8s with database v1.31.0 pod_to_pod_tls_enabled=false
  • GitHub Check: KFP Webhooks - K8s v1.29.2
  • GitHub Check: KFP Webhooks - K8s v1.31.0
  • GitHub Check: Initialization & Integration tests v1 - K8s v1.31.0
  • GitHub Check: Initialization & Integration tests v1 - K8s v1.29.2
  • GitHub Check: check_ci_status
🔇 Additional comments (12)
backend/test/end2end/e2e_suite_test.go (1)

165-165: LGTM - Test suite name standardized.

The test suite name has been updated to remove spaces, which is a good practice for consistency.

backend/test/images/overlays/aipipelines-csv-patch.json (1)

1-39: LGTM - CSV patch structure is appropriate for testing.

The JSON Patch operations are well-structured:

  • Recreate strategy is appropriate when using a ReadWriteOnce PVC (avoids conflicts during updates)
  • fsGroup: 1001 ensures proper permissions for the mounted volume
  • volumeMount and volume configuration correctly references the PVC defined in dspo-pvc.yaml

Note: The patch assumes the target CSV has a deployment at index 0 with a specific structure. This is acceptable for a controlled test environment but would fail if the CSV structure changes.
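
For illustration only, a JSON Patch of this shape could be applied to a CSV roughly as follows; the exact paths and values live in overlays/aipipelines-csv-patch.json and may differ from this sketch:

oc patch csv "$CSV_NAME" -n "$NAMESPACE_NAME" --type=json -p '[
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": 1},
  {"op": "add", "path": "/spec/install/spec/deployments/0/spec/strategy", "value": {"type": "Recreate"}},
  {"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/securityContext", "value": {"fsGroup": 1001}}
]'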

backend/test/images/test-run.sh (5)

1-6: LGTM - Script initialization is well-structured.

The use of set -o allexport around sourcing files ensures environment variables are properly exported, and the integration of patch-csv.sh enables the new CSV patching workflow.
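
As a rough illustration of that pattern (file names are hypothetical, not taken from the script):

set -o allexport
source "${SCRIPT_DIR}/.env"          # every variable defined here is exported automatically
source "${SCRIPT_DIR}/patch-csv.sh"  # sourced functions such as find_csv_and_update become available
set +o allexport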


55-63: LGTM - DSPA template correctly parameterized.

The DSPA configuration now uses DSP_TAG for all component images, enabling flexible version selection at runtime. The image references and structure are correct.
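
One common way to expand such a template is envsubst; whether test-run.sh uses envsubst or another mechanism is not shown in this thread, so treat this as a sketch with a placeholder file name:

export DSP_TAG="${DSP_TAG:-latest}"
envsubst < dspa-template.yaml > /tmp/dspa.yaml   # dspa-template.yaml is a hypothetical name for the DSPA template
oc apply -n "$NAMESPACE" -f /tmp/dspa.yaml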


93-95: LGTM - CSV patching correctly integrated.

The find_csv_and_update call is properly positioned before DSPA deployment, ensuring the operator is patched with custom manifests before creating the DSPA instance.


111-116: LGTM - Deployment readiness check with appropriate error handling.

The explicit readiness check with timeout ensures tests don't proceed with an unavailable deployment. The exit 1 on timeout failure is correct and prevents cascading issues.
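
A guarded wait of this kind typically reduces to something like the following minimal sketch (the deployment name and timeout are assumptions):

if ! oc wait deployment "ds-pipeline-${DSPA_NAME}" -n "$NAMESPACE" \
    --for=condition=Available --timeout=300s; then
    echo "ERROR: DSPA deployment did not become available within the timeout" >&2
    exit 1
fi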


135-147: LGTM - Test execution properly configured.

The test execution command is well-structured with:

  • Appropriate parallelism (--nodes=10)
  • Label-based test filtering
  • Correct parameter quoting
  • The --keep-going flag ensures all tests run even if some fail
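
Put together, the invocation described above would look roughly like this sketch (the package path is an assumption, and other parameters are expected to come in via environment variables):

ginkgo --nodes=10 --keep-going --label-filter="${TEST_LABEL}" ./backend/test/end2end/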
backend/test/images/patch-csv.sh (4)

79-120: LGTM - Well-designed function with proper stdout/stderr handling.

This function correctly:

  • Checks for existing patches to avoid redundant operations
  • Separates logging (stderr) from return values (stdout)
  • Handles multiple namespace scenarios with appropriate operator labels
  • Waits for pod readiness with a reasonable timeout
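
The stdout/stderr split mentioned above follows a standard bash pattern, simplified here with a hypothetical function name and label:

get_operator_label() {
    echo "Looking up operator label..." >&2          # log lines go to stderr
    echo "name=data-science-pipelines-operator"      # only the return value goes to stdout
}

operator_label=$(get_operator_label)                 # captures the label, not the log output
echo "Using operator label: ${operator_label}"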

124-192: LGTM - Robust download with retry logic and proper error handling.

The function implements excellent reliability features:

  • Retry logic (3 attempts) handles transient network failures
  • Directory structure preparation ensures missing overlays are created
  • Proper stdout/stderr separation maintains clean return values
  • Comprehensive error checking at each step
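
A retry loop of the kind described might look like this minimal sketch; the tarball URL layout and attempt count mirror the summary above but are not copied from the script:

DSPO_TARBALL_URL="https://github.com/${DSPO_OWNER}/${DSPO_REPO}/archive/refs/heads/${DSPO_BRANCH}.tar.gz"
downloaded=false
for attempt in 1 2 3; do
    if curl -sSfL "$DSPO_TARBALL_URL" -o /tmp/dspo.tar.gz; then
        downloaded=true
        break
    fi
    echo "Download attempt ${attempt} failed, retrying..." >&2
    sleep 5
done
if [[ "$downloaded" != true ]]; then
    echo "ERROR: could not download DSPO sources after 3 attempts" >&2
    exit 1
fi
tar -xzf /tmp/dspo.tar.gz -C /tmp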

195-249: LGTM - Comprehensive manifest distribution with detailed reporting.

The function provides excellent visibility and error handling:

  • Success/failure tracking for each pod
  • Differentiated return codes enable appropriate upstream handling
  • Clear user feedback with symbols and summary counts
  • Graceful handling of edge cases (no pods, no manifests)
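
The per-pod copy and reporting could be sketched as follows (the label selector, source directory, and container path are assumptions based on the patch summary, not the actual script):

success=0
failure=0
for pod in $(oc get pods -n "$NAMESPACE_NAME" -l "$operator_label" -o name); do
    pod_name="${pod#pod/}"
    if oc cp "$manifest_dir" "${NAMESPACE_NAME}/${pod_name}:/opt/manifests/datasciencepipelines"; then
        echo "✓ copied manifests to ${pod_name}"
        success=$((success + 1))
    else
        echo "✗ failed to copy manifests to ${pod_name}" >&2
        failure=$((failure + 1))
    fi
done
echo "Manifest copy summary: ${success} succeeded, ${failure} failed"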

285-330: LGTM - CSV discovery logic handles expected deployment scenarios.

The function correctly:

  • Searches both common namespaces (RHOAI and ODH operators)
  • Validates CSV existence before proceeding
  • Provides clear error messages when operators aren't found
  • Fails fast with appropriate exit codes
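
The namespace probing described above is essentially a loop over known operator namespaces; the namespace names and CSV name patterns below are illustrative only:

find_csv_and_update_sketch() {
    for ns in redhat-ods-operator openshift-operators; do    # assumed candidate namespaces
        local csv_name
        csv_name=$(oc get csv -n "$ns" --no-headers 2>/dev/null | awk '/rhods-operator|opendatahub-operator/ {print $1; exit}')
        if [[ -n "$csv_name" ]]; then
            patch_csv "$csv_name" "$ns" "$DSPO_OWNER" "$DSPO_REPO" "$DSPO_BRANCH"
            return 0
        fi
    done
    echo "ERROR: no RHOAI/ODH operator CSV found in the candidate namespaces" >&2
    return 1
}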
backend/test/images/Dockerfile.test (1)

31-35: Kustomize v5.8.0 is valid and secure with no reported vulnerabilities for the CLI.

The version v5.8.0 was released on November 9, 2025 and is a current release from kubernetes-sigs. Web search confirms no public CVEs are reported against the kustomize CLI for this version. (CVEs found online are for the separate Flux kustomize-controller component, not the kustomize CLI.)

The installation pattern is correct, the version is appropriately pinned, and the approach mirrors the oc client installation above it.
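
For reference, an install step of the described shape would be along these lines; the exact download URL and layering in Dockerfile.test may differ slightly from this sketch:

KUSTOMIZE_VERSION=v5.8.0
curl -sSfL "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz" \
    -o /tmp/kustomize.tar.gz                          # URL shape is an assumption based on the kustomize release naming
tar -xzf /tmp/kustomize.tar.gz -C /tmp
cp /tmp/kustomize /usr/local/bin/kustomize
rm -f /tmp/kustomize.tar.gz /tmp/kustomize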

Comment on lines +8 to +75
patch_csv() {
    local CSV_NAME="$1"
    local NAMESPACE_NAME="$2"
    local DSPO_OWNER="$3"
    local DSPO_REPO="$4"
    local DSPO_BRANCH="$5"

    echo "Starting aipipelines component patching for CSV: $CSV_NAME in namespace: $NAMESPACE_NAME"

    # Get the directory where this script is located
    local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    local overlays_dir="$script_dir/overlays"

    # Step 1: Create PersistentVolumeClaim
    echo "Creating PVC for aipipelines component manifests..."
    if oc apply -f "$overlays_dir/dspo-pvc.yaml" -n "$NAMESPACE_NAME"; then
        echo "PVC created successfully"
    else
        echo "Failed to create PVC, continuing anyway..."
    fi

    # Step 2: Download and build DSPO manifest
    local base_config_path
    base_config_path=$(download_and_prepare_dspo_manifests "$DSPO_OWNER" "$DSPO_REPO" "$DSPO_BRANCH")
    if [[ $? -ne 0 || -z "$base_config_path" ]]; then
        echo "ERROR: Failed to download and prepare DSPO manifests"
        exit 1
    fi

    # Step 3: Apply CSV patch and wait for operator pod readiness
    local operator_label
    operator_label=$(apply_csv_patch_and_wait_for_operator "$CSV_NAME" "$NAMESPACE_NAME" "$overlays_dir")
    if [[ $? -ne 0 || -z "$operator_label" ]]; then
        echo "ERROR: Failed to apply CSV patch or wait for operator pod readiness"
        exit 1
    fi

    # Step 4: Copy DSPO manifests to the operator pods
    copy_dspo_manifests_to_pods "$base_config_path" "$operator_label" "$NAMESPACE_NAME"
    local copy_result=$?
    if [[ $copy_result -eq 1 ]]; then
        echo "ERROR: Failed to copy custom manifests to any pods"
        exit 1
    fi

    # Step 5: Restart operator deployment
    echo "Restarting operator deployment to pick up changes..."
    if oc rollout restart deploy -n "$NAMESPACE_NAME" -l "$operator_label"; then
        echo "Operator deployment restart initiated"

        # Wait for rollout to complete
        echo "Waiting for deployment rollout to complete..."
        if oc rollout status deploy -n "$NAMESPACE_NAME" -l "$operator_label" --timeout=300s; then
            echo "Operator deployment rollout completed successfully"
        else
            echo "Warning: Deployment rollout did not complete within timeout"
            exit 1
        fi
    else
        echo "Failed to restart operator deployment"
        return 1
    fi

    # Step 6: Wait for DSPO controller manager pod to be running
    wait_for_dspo_controller_manager "$NAMESPACE_NAME"

    echo "Finished patching aipipelines component for CSV: $CSV_NAME"
}

🛠️ Refactor suggestion | 🟠 Major

Fix shellcheck warning and ensure consistent error handling.

The patch_csv function orchestrates the patching workflow well, but has two issues:

  1. Line 18: Shellcheck SC2155 warning - declaring and assigning in one line can mask command failures.
  2. Line 68: Uses return 1 instead of exit 1, which is inconsistent with error handling elsewhere in the function.

Apply this diff to address both issues:

-    local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+    local script_dir
+    script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
     local overlays_dir="$script_dir/overlays"
     else
         echo "Failed to restart operator deployment"
-        return 1
+        exit 1
     fi

Based on static analysis hints.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
patch_csv() {
    local CSV_NAME="$1"
    local NAMESPACE_NAME="$2"
    local DSPO_OWNER="$3"
    local DSPO_REPO="$4"
    local DSPO_BRANCH="$5"

    echo "Starting aipipelines component patching for CSV: $CSV_NAME in namespace: $NAMESPACE_NAME"

    # Get the directory where this script is located
    local script_dir
    script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    local overlays_dir="$script_dir/overlays"

    # Step 1: Create PersistentVolumeClaim
    echo "Creating PVC for aipipelines component manifests..."
    if oc apply -f "$overlays_dir/dspo-pvc.yaml" -n "$NAMESPACE_NAME"; then
        echo "PVC created successfully"
    else
        echo "Failed to create PVC, continuing anyway..."
    fi

    # Step 2: Download and build DSPO manifest
    local base_config_path
    base_config_path=$(download_and_prepare_dspo_manifests "$DSPO_OWNER" "$DSPO_REPO" "$DSPO_BRANCH")
    if [[ $? -ne 0 || -z "$base_config_path" ]]; then
        echo "ERROR: Failed to download and prepare DSPO manifests"
        exit 1
    fi

    # Step 3: Apply CSV patch and wait for operator pod readiness
    local operator_label
    operator_label=$(apply_csv_patch_and_wait_for_operator "$CSV_NAME" "$NAMESPACE_NAME" "$overlays_dir")
    if [[ $? -ne 0 || -z "$operator_label" ]]; then
        echo "ERROR: Failed to apply CSV patch or wait for operator pod readiness"
        exit 1
    fi

    # Step 4: Copy DSPO manifests to the operator pods
    copy_dspo_manifests_to_pods "$base_config_path" "$operator_label" "$NAMESPACE_NAME"
    local copy_result=$?
    if [[ $copy_result -eq 1 ]]; then
        echo "ERROR: Failed to copy custom manifests to any pods"
        exit 1
    fi

    # Step 5: Restart operator deployment
    echo "Restarting operator deployment to pick up changes..."
    if oc rollout restart deploy -n "$NAMESPACE_NAME" -l "$operator_label"; then
        echo "Operator deployment restart initiated"

        # Wait for rollout to complete
        echo "Waiting for deployment rollout to complete..."
        if oc rollout status deploy -n "$NAMESPACE_NAME" -l "$operator_label" --timeout=300s; then
            echo "Operator deployment rollout completed successfully"
        else
            echo "Warning: Deployment rollout did not complete within timeout"
            exit 1
        fi
    else
        echo "Failed to restart operator deployment"
        exit 1
    fi

    # Step 6: Wait for DSPO controller manager pod to be running
    wait_for_dspo_controller_manager "$NAMESPACE_NAME"

    echo "Finished patching aipipelines component for CSV: $CSV_NAME"
}
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 18-18: Declare and assign separately to avoid masking return values.

(SC2155)

🤖 Prompt for AI Agents
In backend/test/images/patch-csv.sh around lines 8 to 75, fix two issues: avoid
SC2155 by declaring script_dir first and then assigning it with a command
substitution that checks for failure (e.g., declare local script_dir;
script_dir="$(cd ...)" and if that command fails, print an error and exit), and
replace the inconsistent return 1 on line ~68 with exit 1 to match the rest of
the function's error handling so failures terminate the script consistently.

Comment on lines +252 to +283
wait_for_dspo_controller_manager() {
    local NAMESPACE_NAME="$1"

    echo "Waiting for data-science-pipelines-operator-controller-manager pod to be ready..."
    local dspo_pod_ready=false
    local max_wait_time=300 # 5 minutes
    local wait_interval=10  # Check every 10 seconds
    local elapsed_time=0

    while [[ $elapsed_time -lt $max_wait_time ]]; do
        # Check if the DSPO controller manager pod exists and is running
        local dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)

        if [[ $dspo_pod_status -gt 0 ]]; then
            echo "✓ data-science-pipelines-operator-controller-manager pod is running"
            dspo_pod_ready=true
            break
        else
            echo "Waiting for data-science-pipelines-operator-controller-manager pod... (${elapsed_time}s/${max_wait_time}s)"
            sleep $wait_interval
            elapsed_time=$((elapsed_time + wait_interval))
        fi
    done

    if [[ $dspo_pod_ready == false ]]; then
        echo "WARNING: data-science-pipelines-operator-controller-manager pod did not become ready within ${max_wait_time} seconds"
        echo "This may indicate an issue with the DSPO deployment, but continuing..."
        return 1
    fi

    return 0
}

🛠️ Refactor suggestion | 🟠 Major

Fix shellcheck warning for better error detection.

The wait logic is sound, but Line 263 has a shellcheck SC2155 warning that should be addressed.

Apply this diff:

-        local dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)
+        local dspo_pod_status
+        dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)

This ensures that if the command substitution fails, the error isn't masked by the local declaration.

Based on static analysis hints.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
wait_for_dspo_controller_manager() {
    local NAMESPACE_NAME="$1"

    echo "Waiting for data-science-pipelines-operator-controller-manager pod to be ready..."
    local dspo_pod_ready=false
    local max_wait_time=300 # 5 minutes
    local wait_interval=10  # Check every 10 seconds
    local elapsed_time=0

    while [[ $elapsed_time -lt $max_wait_time ]]; do
        # Check if the DSPO controller manager pod exists and is running
        local dspo_pod_status
        dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)

        if [[ $dspo_pod_status -gt 0 ]]; then
            echo "✓ data-science-pipelines-operator-controller-manager pod is running"
            dspo_pod_ready=true
            break
        else
            echo "Waiting for data-science-pipelines-operator-controller-manager pod... (${elapsed_time}s/${max_wait_time}s)"
            sleep $wait_interval
            elapsed_time=$((elapsed_time + wait_interval))
        fi
    done

    if [[ $dspo_pod_ready == false ]]; then
        echo "WARNING: data-science-pipelines-operator-controller-manager pod did not become ready within ${max_wait_time} seconds"
        echo "This may indicate an issue with the DSPO deployment, but continuing..."
        return 1
    fi

    return 0
}
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 263-263: Declare and assign separately to avoid masking return values.

(SC2155)

🤖 Prompt for AI Agents
In backend/test/images/patch-csv.sh around lines 252 to 283, change the
single-line combined local + command substitution to a separate declaration and
assignment so shellcheck SC2155 is fixed and errors from the command
substitution aren't masked: declare the variable with local (e.g., local
dspo_pod_status) then perform the command substitution on the next line
(dspo_pod_status=$(...)) and optionally check the command's exit status and
handle failures before using the value.


openshift-ci bot commented Nov 25, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: HumairAK

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit 0f5f315 into opendatahub-io:master Nov 25, 2025
75 checks passed
