test: adding capability to deploy custom DSPO before starting tests #235
Conversation
Walkthrough

Backend test infrastructure is enhanced to support DSPO (Data Science Pipelines Operator) deployment and CSV patching in E2E tests. Changes include Kustomize tooling in Docker, Kubernetes manifest overlays for deployments and persistent storage, and automated bash scripts for CSV patching, manifest preparation, and pod configuration injection during test execution.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test as Test Runner
    participant CSV as CSV Patch Workflow
    participant Repo as DSPO Repository
    participant K8s as Kubernetes
    participant Pods as Operator Pods
    Test->>CSV: find_csv_and_update()
    CSV->>K8s: Locate CSV namespace
    K8s-->>CSV: rhods-operator or opendatahub-operator
    CSV->>K8s: Create aipipelines PVC
    K8s-->>CSV: PVC created
    CSV->>Repo: download_and_prepare_dspo_manifests()
    Repo-->>CSV: DSPO tarball + extracted overlays
    CSV->>K8s: apply_csv_patch_and_wait_for_operator()
    K8s-->>CSV: Patch applied
    CSV->>Pods: copy_dspo_manifests_to_pods()
    Pods-->>CSV: Manifests distributed
    CSV->>K8s: Restart operator deployment
    K8s-->>CSV: Deployment restarted
    CSV->>K8s: wait_for_dspo_controller_manager()
    K8s-->>CSV: Controller manager ready
    CSV-->>Test: Patching complete
    Test->>K8s: Deploy DSPA
    K8s-->>Test: Deployment available
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Signed-off-by: Nelesh Singla <[email protected]>
Force-pushed from 5c021e2 to 8ad5484.
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/test/images/overlays/dspo-pvc.yaml (1)
1-11: Consider specifying a storageClassName.

The PVC manifest is valid, but it relies on the cluster's default storage class. For better portability and explicit control, consider adding a `storageClassName` field to the spec:

```diff
 spec:
   accessModes:
     - ReadWriteOnce
+  storageClassName: <appropriate-storage-class>
   resources:
     requests:
       storage: 1Gi
   volumeMode: Filesystem
```

This ensures the PVC uses the intended storage class and avoids issues when the cluster default is unavailable or inappropriate for testing.
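For illustration, a minimal sketch of applying such a PVC inline via a heredoc; the PVC name `aipipelines` matches the PVC named elsewhere in this PR, while the storage class value is a placeholder assumption:

```bash
# Sketch only: apply the PVC with an explicit storageClassName.
# "standard" is a placeholder; use a class that exists in the test cluster.
oc apply -n "$NAMESPACE_NAME" -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aipipelines
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
EOF
```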
backend/test/images/test-run.sh (1)
10-39: Consider adding argument validation.

The argument parsing logic is functional and supports the new DSPO configuration options. However, invalid arguments are silently ignored, which could lead to confusion if users mistype an argument.
You could add validation and help text:
while [ "$#" -gt 0 ]; do case "$1" in --label-filter=*) TEST_LABEL="${1#*=}" shift ;; --dsp-tag=*) DSP_TAG="${1#*=}" shift ;; --dspo-owner=*) DSPO_OWNER="${1#*=}" shift ;; --dspo-repo=*) DSPO_REPO="${1#*=}" shift ;; --dspo-branch=*) DSPO_BRANCH="${1#*=}" shift ;; + *) + echo "Unknown argument: $1" + echo "Usage: $0 [--label-filter=VALUE] [--dsp-tag=VALUE] [--dspo-owner=VALUE] [--dspo-repo=VALUE] [--dspo-branch=VALUE]" + exit 1 + ;; esac done
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/test/end2end/e2e_suite_test.go (1 hunks)
- backend/test/images/Dockerfile.test (1 hunks)
- backend/test/images/overlays/aipipelines-csv-patch.json (1 hunks)
- backend/test/images/overlays/dspo-pvc.yaml (1 hunks)
- backend/test/images/patch-csv.sh (1 hunks)
- backend/test/images/test-run.sh (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/test/images/test-run.sh (1)
backend/test/images/patch-csv.sh (1)
find_csv_and_update(285-330)
🪛 Shellcheck (0.11.0)
backend/test/images/patch-csv.sh
[warning] 18-18: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 263-263: Declare and assign separately to avoid masking return values.
(SC2155)
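For context, a small self-contained demonstration of why SC2155 matters: `local` returns its own (successful) exit status, hiding the failure of the command substitution:

```bash
#!/usr/bin/env bash
demo() {
  local combined=$(false)   # $? reflects `local` (0), masking the failure
  echo "combined: $?"       # prints 0
  local separate
  separate=$(false)         # $? now reflects the substitution itself
  echo "separate: $?"       # prints 1
}
demo
```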
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (31)
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion= proxy= storage= pod_to_pod_tls_enabled=true
- GitHub Check: End to End E2E Proxy Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy=true storage= pod_to_pod_tls_enabled=
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.5.15 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Essential Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy= storage= pod_to_pod_tls_enabled=
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.5.15 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Failure Tests - K8s v1.31.0 cacheEnabled=false argoVersion= proxy= storage= pod_to_pod_tls_enabled=
- GitHub Check: End to End E2E Critical Tests - K8s v1.29.2 cacheEnabled=false argoVersion=v3.5.15 proxy= storage= pod_to_pod_tls_enabled=
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.6.10 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.7.1 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.7.1 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.6.10 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.5.15 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.6.10 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.7.1 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=false argoVersion=v3.6.10 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.5.15 proxy=false storage=seaweedfs pod_to_pod_tls_enabled=false
- GitHub Check: End to End E2E Critical Tests - K8s v1.31.0 cacheEnabled=true argoVersion=v3.7.1 proxy=false storage=minio pod_to_pod_tls_enabled=false
- GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=false multiUser=true argoVersion= proxy= storage=seaweedfs artifactProxy=
- GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=true multiUser=true argoVersion= proxy= storage=minio artifactProxy=
- GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=true multiUser=true argoVersion= proxy= storage=seaweedfs artifactProxy=true
- GitHub Check: End to End Critical Scenario Multi User Tests - K8s v1.31.0 cacheEnabled=false multiUser=true argoVersion= proxy= storage=minio artifactProxy=
- GitHub Check: KFP SDK Client Tests - K8s v1.31.0
- GitHub Check: KFP SDK Client Tests - K8s v1.29.2
- GitHub Check: API integration tests v2 - K8s with kubernetes v1.31.0 pod_to_pod_tls_enabled=false
- GitHub Check: API integration tests v2 - K8s with database v1.31.0 pod_to_pod_tls_enabled=true
- GitHub Check: API integration tests v2 - K8s with database v1.31.0 pod_to_pod_tls_enabled=false
- GitHub Check: KFP Webhooks - K8s v1.29.2
- GitHub Check: KFP Webhooks - K8s v1.31.0
- GitHub Check: Initialization & Integration tests v1 - K8s v1.31.0
- GitHub Check: Initialization & Integration tests v1 - K8s v1.29.2
- GitHub Check: check_ci_status
🔇 Additional comments (12)
backend/test/end2end/e2e_suite_test.go (1)
165-165: LGTM - Test suite name standardized. The test suite name has been updated to remove spaces, which is a good practice for consistency.
backend/test/images/overlays/aipipelines-csv-patch.json (1)
1-39: LGTM - CSV patch structure is appropriate for testing. The JSON Patch operations are well-structured:
- Recreate strategy is appropriate when using a ReadWriteOnce PVC (avoids conflicts during updates)
- `fsGroup: 1001` ensures proper permissions for the mounted volume
- The `volumeMount` and `volume` configuration correctly references the PVC defined in `dspo-pvc.yaml`
Note: The patch assumes the target CSV has a deployment at index 0 with a specific structure. This is acceptable for a controlled test environment but would fail if the CSV structure changes.
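For reference, a hedged sketch of how a JSON Patch file like this is typically applied to a CSV (the exact invocation in patch-csv.sh may differ):

```bash
# Apply the JSON Patch operations from the overlay file to the CSV.
oc patch csv "$CSV_NAME" -n "$NAMESPACE_NAME" \
  --type=json \
  --patch-file "$overlays_dir/aipipelines-csv-patch.json"
```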
backend/test/images/test-run.sh (5)
1-6: LGTM - Script initialization is well-structured. The use of `set -o allexport` around sourcing files ensures environment variables are properly exported, and the integration of `patch-csv.sh` enables the new CSV patching workflow.
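As a quick illustration of the allexport pattern (the sourced filename here is illustrative):

```bash
set -o allexport   # every variable assigned while this is on is exported
source ./test.env  # all assignments in test.env become environment variables
set +o allexport   # restore normal assignment behavior
```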
55-63: LGTM - DSPA template correctly parameterized. The DSPA configuration now uses `DSP_TAG` for all component images, enabling flexible version selection at runtime. The image references and structure are correct.
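One common way to achieve this kind of tag parameterization is envsubst over a template. This is a sketch of the general shape, not necessarily the mechanism test-run.sh uses, and the template filename is an assumption:

```bash
# Substitute ${DSP_TAG} placeholders in a DSPA template, then apply it.
export DSP_TAG="${DSP_TAG:-latest}"
envsubst < dspa.template.yaml | oc apply -f -
```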
93-95: LGTM - CSV patching correctly integrated. The `find_csv_and_update` call is properly positioned before DSPA deployment, ensuring the operator is patched with custom manifests before creating the DSPA instance.
111-116: LGTM - Deployment readiness check with appropriate error handling. The explicit readiness check with timeout ensures tests don't proceed with an unavailable deployment. The `exit 1` on timeout failure is correct and prevents cascading issues.
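A readiness gate of this shape can be expressed with `oc wait`; the deployment name here is assumed from the DSPO naming used elsewhere in this PR:

```bash
# Block until the deployment reports Available, or fail the run.
if ! oc wait deployment/data-science-pipelines-operator-controller-manager \
     -n "$NAMESPACE_NAME" --for=condition=Available --timeout=300s; then
  echo "ERROR: deployment did not become available in time" >&2
  exit 1
fi
```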
135-147: LGTM - Test execution properly configured. The test execution command is well-structured with:
- Appropriate parallelism (`--nodes=10`)
- Label-based test filtering
- Correct parameter quoting
- The `--keep-going` flag, which ensures all tests run even if some fail (see the sketch below)
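In sketch form, such an invocation might look like this (the test package path is an assumption):

```bash
# Run the suite in 10 parallel nodes, filtered by label, without stopping on failure.
ginkgo --nodes=10 --keep-going --label-filter="$TEST_LABEL" ./backend/test/end2end/...
```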
backend/test/images/patch-csv.sh (4)
79-120: LGTM - Well-designed function with proper stdout/stderr handling. This function correctly:
- Checks for existing patches to avoid redundant operations
- Separates logging (stderr) from return values (stdout)
- Handles multiple namespace scenarios with appropriate operator labels
- Waits for pod readiness with a reasonable timeout
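The pattern in question, sketched with a hypothetical helper (the label value is illustrative):

```bash
resolve_operator_label() {
  echo "looking up operator label..." >&2   # logging goes to stderr
  echo "name=opendatahub-operator"          # return value goes to stdout
}
operator_label=$(resolve_operator_label)    # captures only the label; logs still print
```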
124-192: LGTM - Robust download with retry logic and proper error handling. The function implements excellent reliability features:
- Retry logic (3 attempts) handles transient network failures
- Directory structure preparation ensures missing overlays are created
- Proper stdout/stderr separation maintains clean return values
- Comprehensive error checking at each step
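A minimal sketch of that retry shape (the GitHub archive URL pattern is standard; the local path is illustrative, and real error handling would also cover exhausting all attempts):

```bash
# Try the DSPO tarball download up to 3 times before giving up.
for attempt in 1 2 3; do
  if curl -fsSL -o /tmp/dspo.tar.gz \
       "https://github.com/${DSPO_OWNER}/${DSPO_REPO}/archive/refs/heads/${DSPO_BRANCH}.tar.gz"; then
    break
  fi
  echo "Download failed (attempt ${attempt}/3), retrying..." >&2
  sleep 5
done
```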
195-249: LGTM - Comprehensive manifest distribution with detailed reporting. The function provides excellent visibility and error handling:
- Success/failure tracking for each pod
- Differentiated return codes enable appropriate upstream handling
- Clear user feedback with symbols and summary counts
- Graceful handling of edge cases (no pods, no manifests)
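In sketch form, the per-pod bookkeeping looks roughly like this (the destination path inside the pod is an assumption):

```bash
ok=0; fail=0
for pod in $(oc get pods -n "$NAMESPACE_NAME" -l "$operator_label" -o name); do
  # oc get ... -o name yields "pod/<name>"; strip the prefix for oc cp.
  if oc cp "$base_config_path" "$NAMESPACE_NAME/${pod#pod/}:/opt/manifests"; then
    echo "✓ ${pod}"; ok=$((ok + 1))
  else
    echo "✗ ${pod}"; fail=$((fail + 1))
  fi
done
echo "Manifests copied to ${ok} pod(s); ${fail} failure(s)"
```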
285-330: LGTM - CSV discovery logic handles expected deployment scenarios. The function correctly:
- Searches both common namespaces (RHOAI and ODH operators)
- Validates CSV existence before proceeding
- Provides clear error messages when operators aren't found
- Fails fast with appropriate exit codes
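Sketched, with the namespace names as assumptions (the CSV names come from the walkthrough above):

```bash
# Look for the RHOAI or ODH operator CSV in the candidate namespaces.
for ns in redhat-ods-operator openshift-operators; do
  csv=$(oc get csv -n "$ns" -o name 2>/dev/null \
        | grep -E 'rhods-operator|opendatahub-operator' | head -n1)
  if [[ -n "$csv" ]]; then
    echo "Found ${csv#*/} in ${ns}" >&2
    break
  fi
done
```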
backend/test/images/Dockerfile.test (1)
31-35: Kustomize v5.8.0 is valid and secure, with no reported vulnerabilities for the CLI. The version v5.8.0 was released on November 9, 2025 and is a current release from kubernetes-sigs. Web search confirms no public CVEs are reported against the kustomize CLI for this version. (CVEs found online are for the separate Flux kustomize-controller component, not the kustomize CLI.)
The installation pattern is correct, the version is appropriately pinned, and the approach mirrors the oc client installation above it.
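For reference, a plausible pinned-install step of the kind described (the exact RUN line in Dockerfile.test may differ; the release URL pattern is kustomize's standard one, where the tag `kustomize/v5.8.0` is URL-encoded):

```bash
# Download the pinned kustomize release and extract the binary onto PATH.
curl -fsSL "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.8.0/kustomize_v5.8.0_linux_amd64.tar.gz" \
  | tar -xz -C /usr/local/bin kustomize
```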
Code under review (backend/test/images/patch-csv.sh):

```bash
patch_csv() {
    local CSV_NAME="$1"
    local NAMESPACE_NAME="$2"
    local DSPO_OWNER="$3"
    local DSPO_REPO="$4"
    local DSPO_BRANCH="$5"

    echo "Starting aipipelines component patching for CSV: $CSV_NAME in namespace: $NAMESPACE_NAME"

    # Get the directory where this script is located
    local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    local overlays_dir="$script_dir/overlays"

    # Step 1: Create PersistentVolumeClaim
    echo "Creating PVC for aipipelines component manifests..."
    if oc apply -f "$overlays_dir/dspo-pvc.yaml" -n "$NAMESPACE_NAME"; then
        echo "PVC created successfully"
    else
        echo "Failed to create PVC, continuing anyway..."
    fi

    # Step 2: Download and build DSPO manifest
    local base_config_path
    base_config_path=$(download_and_prepare_dspo_manifests "$DSPO_OWNER" "$DSPO_REPO" "$DSPO_BRANCH")
    if [[ $? -ne 0 || -z "$base_config_path" ]]; then
        echo "ERROR: Failed to download and prepare DSPO manifests"
        exit 1
    fi

    # Step 3: Apply CSV patch and wait for operator pod readiness
    local operator_label
    operator_label=$(apply_csv_patch_and_wait_for_operator "$CSV_NAME" "$NAMESPACE_NAME" "$overlays_dir")
    if [[ $? -ne 0 || -z "$operator_label" ]]; then
        echo "ERROR: Failed to apply CSV patch or wait for operator pod readiness"
        exit 1
    fi

    # Step 4: Copy DSPO manifests to the operator pods
    copy_dspo_manifests_to_pods "$base_config_path" "$operator_label" "$NAMESPACE_NAME"
    local copy_result=$?
    if [[ $copy_result -eq 1 ]]; then
        echo "ERROR: Failed to copy custom manifests to any pods"
        exit 1
    fi

    # Step 5: Restart operator deployment
    echo "Restarting operator deployment to pick up changes..."
    if oc rollout restart deploy -n "$NAMESPACE_NAME" -l "$operator_label"; then
        echo "Operator deployment restart initiated"

        # Wait for rollout to complete
        echo "Waiting for deployment rollout to complete..."
        if oc rollout status deploy -n "$NAMESPACE_NAME" -l "$operator_label" --timeout=300s; then
            echo "Operator deployment rollout completed successfully"
        else
            echo "Warning: Deployment rollout did not complete within timeout"
            exit 1
        fi
    else
        echo "Failed to restart operator deployment"
        return 1
    fi

    # Step 6: Wait for DSPO controller manager pod to be running
    wait_for_dspo_controller_manager "$NAMESPACE_NAME"

    echo "Finished patching aipipelines component for CSV: $CSV_NAME"
}
```
🛠️ Refactor suggestion | 🟠 Major
Fix shellcheck warning and ensure consistent error handling.
The patch_csv function orchestrates the patching workflow well, but has two issues:
- Line 18: Shellcheck SC2155 warning - declaring and assigning in one line can mask command failures.
- Line 68: Uses `return 1` instead of `exit 1`, which is inconsistent with error handling elsewhere in the function.
Apply this diff to address both issues:
```diff
-    local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+    local script_dir
+    script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
     local overlays_dir="$script_dir/overlays"
```

```diff
     else
         echo "Failed to restart operator deployment"
-        return 1
+        exit 1
     fi
```

Based on static analysis hints.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested version of patch_csv with both fixes applied:

```bash
patch_csv() {
    local CSV_NAME="$1"
    local NAMESPACE_NAME="$2"
    local DSPO_OWNER="$3"
    local DSPO_REPO="$4"
    local DSPO_BRANCH="$5"

    echo "Starting aipipelines component patching for CSV: $CSV_NAME in namespace: $NAMESPACE_NAME"

    # Get the directory where this script is located
    local script_dir
    script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    local overlays_dir="$script_dir/overlays"

    # Step 1: Create PersistentVolumeClaim
    echo "Creating PVC for aipipelines component manifests..."
    if oc apply -f "$overlays_dir/dspo-pvc.yaml" -n "$NAMESPACE_NAME"; then
        echo "PVC created successfully"
    else
        echo "Failed to create PVC, continuing anyway..."
    fi

    # Step 2: Download and build DSPO manifest
    local base_config_path
    base_config_path=$(download_and_prepare_dspo_manifests "$DSPO_OWNER" "$DSPO_REPO" "$DSPO_BRANCH")
    if [[ $? -ne 0 || -z "$base_config_path" ]]; then
        echo "ERROR: Failed to download and prepare DSPO manifests"
        exit 1
    fi

    # Step 3: Apply CSV patch and wait for operator pod readiness
    local operator_label
    operator_label=$(apply_csv_patch_and_wait_for_operator "$CSV_NAME" "$NAMESPACE_NAME" "$overlays_dir")
    if [[ $? -ne 0 || -z "$operator_label" ]]; then
        echo "ERROR: Failed to apply CSV patch or wait for operator pod readiness"
        exit 1
    fi

    # Step 4: Copy DSPO manifests to the operator pods
    copy_dspo_manifests_to_pods "$base_config_path" "$operator_label" "$NAMESPACE_NAME"
    local copy_result=$?
    if [[ $copy_result -eq 1 ]]; then
        echo "ERROR: Failed to copy custom manifests to any pods"
        exit 1
    fi

    # Step 5: Restart operator deployment
    echo "Restarting operator deployment to pick up changes..."
    if oc rollout restart deploy -n "$NAMESPACE_NAME" -l "$operator_label"; then
        echo "Operator deployment restart initiated"

        # Wait for rollout to complete
        echo "Waiting for deployment rollout to complete..."
        if oc rollout status deploy -n "$NAMESPACE_NAME" -l "$operator_label" --timeout=300s; then
            echo "Operator deployment rollout completed successfully"
        else
            echo "Warning: Deployment rollout did not complete within timeout"
            exit 1
        fi
    else
        echo "Failed to restart operator deployment"
        exit 1
    fi

    # Step 6: Wait for DSPO controller manager pod to be running
    wait_for_dspo_controller_manager "$NAMESPACE_NAME"

    echo "Finished patching aipipelines component for CSV: $CSV_NAME"
}
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 18-18: Declare and assign separately to avoid masking return values.
(SC2155)
🤖 Prompt for AI Agents
In backend/test/images/patch-csv.sh around lines 8 to 75, fix two issues: avoid
SC2155 by declaring script_dir first and then assigning it with a command
substitution that checks for failure (e.g., declare local script_dir;
script_dir="$(cd ...)" and if that command fails, print an error and exit), and
replace the inconsistent return 1 on line ~68 with exit 1 to match the rest of
the function's error handling so failures terminate the script consistently.
Code under review (backend/test/images/patch-csv.sh):

```bash
wait_for_dspo_controller_manager() {
    local NAMESPACE_NAME="$1"

    echo "Waiting for data-science-pipelines-operator-controller-manager pod to be ready..."
    local dspo_pod_ready=false
    local max_wait_time=300  # 5 minutes
    local wait_interval=10   # Check every 10 seconds
    local elapsed_time=0

    while [[ $elapsed_time -lt $max_wait_time ]]; do
        # Check if the DSPO controller manager pod exists and is running
        local dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)

        if [[ $dspo_pod_status -gt 0 ]]; then
            echo "✓ data-science-pipelines-operator-controller-manager pod is running"
            dspo_pod_ready=true
            break
        else
            echo "Waiting for data-science-pipelines-operator-controller-manager pod... (${elapsed_time}s/${max_wait_time}s)"
            sleep $wait_interval
            elapsed_time=$((elapsed_time + wait_interval))
        fi
    done

    if [[ $dspo_pod_ready == false ]]; then
        echo "WARNING: data-science-pipelines-operator-controller-manager pod did not become ready within ${max_wait_time} seconds"
        echo "This may indicate an issue with the DSPO deployment, but continuing..."
        return 1
    fi

    return 0
}
```
🛠️ Refactor suggestion | 🟠 Major
Fix shellcheck warning for better error detection.
The wait logic is sound, but Line 263 has a shellcheck SC2155 warning that should be addressed.
Apply this diff:
```diff
-        local dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)
+        local dspo_pod_status
+        dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)
```

This ensures that if the command substitution fails, the error isn't masked by the `local` declaration.
Based on static analysis hints.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested version with the declaration separated:

```bash
wait_for_dspo_controller_manager() {
    local NAMESPACE_NAME="$1"

    echo "Waiting for data-science-pipelines-operator-controller-manager pod to be ready..."
    local dspo_pod_ready=false
    local max_wait_time=300  # 5 minutes
    local wait_interval=10   # Check every 10 seconds
    local elapsed_time=0

    while [[ $elapsed_time -lt $max_wait_time ]]; do
        # Check if the DSPO controller manager pod exists and is running
        local dspo_pod_status
        dspo_pod_status=$(oc get pods -n "$NAMESPACE_NAME" --field-selector=status.phase=Running --no-headers 2>/dev/null | grep "data-science-pipelines-operator-controller-manager" | wc -l)

        if [[ $dspo_pod_status -gt 0 ]]; then
            echo "✓ data-science-pipelines-operator-controller-manager pod is running"
            dspo_pod_ready=true
            break
        else
            echo "Waiting for data-science-pipelines-operator-controller-manager pod... (${elapsed_time}s/${max_wait_time}s)"
            sleep $wait_interval
            elapsed_time=$((elapsed_time + wait_interval))
        fi
    done

    if [[ $dspo_pod_ready == false ]]; then
        echo "WARNING: data-science-pipelines-operator-controller-manager pod did not become ready within ${max_wait_time} seconds"
        echo "This may indicate an issue with the DSPO deployment, but continuing..."
        return 1
    fi

    return 0
}
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 263-263: Declare and assign separately to avoid masking return values.
(SC2155)
🤖 Prompt for AI Agents
In backend/test/images/patch-csv.sh around lines 252 to 283, change the
single-line combined local + command substitution to a separate declaration and
assignment so shellcheck SC2155 is fixed and errors from the command
substitution aren't masked: declare the variable with local (e.g., local
dspo_pod_status) then perform the command substitution on the next line
(dspo_pod_status=$(...)) and optionally check the command's exit status and
handle failures before using the value.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: HumairAK. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Merged 0f5f315 into opendatahub-io:master.
Description of your changes:
Following the instructions here: https://github.com/opendatahub-io/opendatahub-operator, this change overrides the DSPO with the provided branch (supplied as a runtime argument).
e.g. docker run command:
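A hypothetical invocation (the image name and all values are placeholders; the flags match those parsed in test-run.sh):

```bash
docker run --rm \
  quay.io/example/dsp-test-runner:latest \
  --label-filter=Smoke \
  --dsp-tag=latest \
  --dspo-owner=opendatahub-io \
  --dspo-repo=data-science-pipelines-operator \
  --dspo-branch=main
```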
Checklist:
Summary by CodeRabbit
Tests
Chores