
Conversation

@m-misiura (Contributor) commented Nov 27, 2025

This PR deals with the changes first introduced by llamastack/llama-stack#3895 and also removes the Telemetry API.

Summary by Sourcery

Introduce a compatibility layer for llama_stack vs. llama_stack_api and update provider wiring to drop the Telemetry API.

Enhancements:

  • Add a compat module that abstracts imports between the legacy llama_stack package and the new llama_stack_api layout, covering core APIs, provider types, inference, eval, scoring, and schema utilities.
  • Refactor inline and remote provider code to import shared types and sampling utilities via the new compat module for smoother migration between stack versions.
  • Update provider specifications to no longer expose or depend on the Telemetry API.

sourcery-ai bot commented Nov 27, 2025

Reviewer's Guide

Introduce a compatibility layer that allows this provider to work with both the legacy llama_stack package and the newer llama_stack_api module, refactoring imports to go through the compat module and removing Telemetry API exposure from the provider specs.

File-Level Changes

Change 1: Add a compat module that abstracts over the legacy llama_stack and the new llama_stack_api imports, re-exporting shared types and utilities.
  • Create compat.py with try/except import blocks that first attempt to import APIs, provider types, datatypes, and utilities from llama_stack, falling back to llama_stack_api on ImportError or ModuleNotFoundError.
  • Re-export key provider, benchmark, job, dataset, eval, inference, scoring, and schema_utils symbols via __all__ for consistent usage across the package.
  Files:
  • src/llama_stack_provider_ragas/compat.py

Change 2: Refactor existing modules to import provider/inference/schema types through the new compat layer instead of directly from llama_stack; a sketch of the pattern follows this list.
  • Update the inline wrappers to import OpenAI request types and sampling-related classes from the local compat module while keeping ragas imports unchanged.
  • Change the config to import json_schema_type from the local compat module rather than llama_stack.schema_utils, to support both old and new stacks.
  • Adjust the Kubeflow remote component to import SamplingParams from the compat layer instead of llama_stack.apis.inference.
  • Update the remote wrappers to import SamplingParams and TopPSamplingStrategy via the compat module to decouple from llama_stack.apis.inference.
  Files:
  • src/llama_stack_provider_ragas/inline/wrappers_inline.py
  • src/llama_stack_provider_ragas/config.py
  • src/llama_stack_provider_ragas/remote/kubeflow/components.py
  • src/llama_stack_provider_ragas/remote/wrappers_remote.py
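
A minimal sketch of that refactor, using SamplingParams and TopPSamplingStrategy as named in the change list above; the build_sampling_params helper and its default values are hypothetical, not code from the PR:

# Before: a hard dependency on the legacy layout.
# from llama_stack.apis.inference import SamplingParams, TopPSamplingStrategy

# After: import through the local compat module, which resolves to whichever
# package (llama_stack or llama_stack_api) is installed.
from llama_stack_provider_ragas.compat import SamplingParams, TopPSamplingStrategy


def build_sampling_params(temperature: float = 0.7, top_p: float = 0.9) -> SamplingParams:
    """Hypothetical helper showing the compat-imported types in use."""
    return SamplingParams(
        strategy=TopPSamplingStrategy(temperature=temperature, top_p=top_p)
    )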

Change 3: Remove the Telemetry API from the provider spec capabilities for both the inline and remote providers; a sketch follows this list.
  • Update the inline provider spec to no longer include Api.telemetry in the list of supported APIs.
  • Update the remote provider spec similarly, dropping Api.telemetry from the supported API list.
  Files:
  • src/llama_stack_provider_ragas/inline/provider.py
  • src/llama_stack_provider_ragas/remote/provider.py
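
For illustration, the inline spec after the change might look roughly like this; the concrete values (provider_type, module, config_class, and the remaining dependency list) are assumptions, not the PR's actual spec:

from llama_stack_provider_ragas.compat import Api, InlineProviderSpec

ragas_inline_spec = InlineProviderSpec(
    api=Api.eval,
    provider_type="inline::trustyai_ragas",  # assumed name
    pip_packages=["ragas"],
    module="llama_stack_provider_ragas.inline",  # assumed module path
    config_class="llama_stack_provider_ragas.config.RagasProviderInlineConfig",  # assumed
    api_dependencies=[Api.inference, Api.datasetio],  # Api.telemetry removed
)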

Possibly linked issues

  • #Remove Telemetry API from API dependency list: the PR removes Api.telemetry from the provider specs, resolving the Telemetry API dependency issue.
  • #(unknown): the PR migrates to llama-stack-api and adds a compat layer, implementing the RAGAS provider changes requested for LLS Core 0.2.23.


sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • The new compat.py repeats similar try/except patterns for each group of imports; consider detecting the available package (llama_stack vs llama_stack_api) once and branching on a single flag to avoid repetition and reduce import-time overhead.
  • In compat.py, the fallback imports from llama_stack_api pull many names from the top-level package; if possible, importing from specific submodules (e.g., llama_stack_api.inference) would make the dependency surface clearer and reduce the chance of name collisions. A sketch combining both suggestions follows below.
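
A sketch of one way compat.py could apply both suggestions, probing for the legacy package once and importing from specific submodules in the fallback; the llama_stack_api submodule paths here (datatypes, inference, schema_utils) are assumptions taken from the review comment, not verified module names:

import importlib.util

# Probe once at import time instead of repeating try/except per symbol group.
_HAS_LEGACY_STACK = importlib.util.find_spec("llama_stack") is not None

if _HAS_LEGACY_STACK:
    from llama_stack.apis.datatypes import Api
    from llama_stack.apis.inference import SamplingParams, TopPSamplingStrategy
    from llama_stack.schema_utils import json_schema_type
else:
    # Submodule imports keep the dependency surface explicit, per the
    # second suggestion above.
    from llama_stack_api.datatypes import Api
    from llama_stack_api.inference import SamplingParams, TopPSamplingStrategy
    from llama_stack_api.schema_utils import json_schema_type

__all__ = ["Api", "SamplingParams", "TopPSamplingStrategy", "json_schema_type"]
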
Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `src/llama_stack_provider_ragas/compat.py:10` </location>
<code_context>
"""
Compatibility layer for llama_stack imports.

This module provides backward compatibility by attempting to import from
the legacy llama_stack package first, then falling back to the newer
llama_stack_api package structure.
"""

# Provider datatypes and API definitions
try:  # Legacy llama_stack layout
    from llama_stack.apis.datatypes import Api
    from llama_stack.providers.datatypes import (
        BenchmarksProtocolPrivate,
        InlineProviderSpec,
        ProviderSpec,
        RemoteProviderSpec,
    )
except (ImportError, ModuleNotFoundError):
    # Newer llama_stack_api layout
    from llama_stack_api import (
        Api,
        BenchmarksProtocolPrivate,
        InlineProviderSpec,
        ProviderSpec,
        RemoteProviderSpec,
    )

# Benchmarks
try:
    from llama_stack.apis.benchmarks import Benchmark
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import Benchmark

# Common job types
try:
    from llama_stack.apis.common.job_types import Job, JobStatus
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import Job, JobStatus

# DatasetIO
try:
    from llama_stack.apis.datasetio import DatasetIO
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import DatasetIO

# Eval
try:
    from llama_stack.apis.eval import BenchmarkConfig, Eval, EvaluateResponse
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import BenchmarkConfig, Eval, EvaluateResponse

# Inference
try:
    from llama_stack.apis.inference import (
        Inference,
        OpenAICompletionRequestWithExtraBody,
        OpenAIEmbeddingsRequestWithExtraBody,
        SamplingParams,
        TopPSamplingStrategy,
    )
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import (
        Inference,
        OpenAICompletionRequestWithExtraBody,
        OpenAIEmbeddingsRequestWithExtraBody,
        SamplingParams,
        TopPSamplingStrategy,
    )

# Scoring
try:
    from llama_stack.apis.scoring import ScoringResult
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import ScoringResult

# Schema utils
try:
    from llama_stack.schema_utils import json_schema_type
except (ImportError, ModuleNotFoundError):
    from llama_stack_api import json_schema_type

__all__ = [
    # API and Provider types
    "Api",
    "BenchmarksProtocolPrivate",
    "InlineProviderSpec",
    "ProviderSpec",
    "RemoteProviderSpec",
    # Benchmarks
    "Benchmark",
    # Job types
    "Job",
    "JobStatus",
    # DatasetIO
    "DatasetIO",
    # Eval
    "BenchmarkConfig",
    "Eval",
    "EvaluateResponse",
    # Inference
    "Inference",
    "OpenAICompletionRequestWithExtraBody",
    "OpenAIEmbeddingsRequestWithExtraBody",
    "SamplingParams",
    "TopPSamplingStrategy",
    # Scoring
    "ScoringResult",
    # Schema utils
    "json_schema_type",
]

</code_context>

<issue_to_address>
**issue (code-quality):** Remove redundant exceptions from an except clause [×5] ([`remove-redundant-exception`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/remove-redundant-exception/))
</issue_to_address>
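
The rule fires because ModuleNotFoundError is a subclass of ImportError, so listing both in the except tuple is redundant; catching ImportError alone covers both cases. A minimal fix for one of the five flagged blocks:

try:
    from llama_stack.apis.benchmarks import Benchmark
except ImportError:  # also catches ModuleNotFoundError, its subclass
    from llama_stack_api import Benchmark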


@dmaniloff (Collaborator) left a comment

Thank you @m-misiura for this!! Can we also remove these lines from distribution/run.yaml?

telemetry:
  enabled: true

@ruivieira merged commit 70c0272 into trustyai-explainability:main on Nov 27, 2025. 2 checks passed.