Move provider to new llama-stack-api module #46
Conversation
Reviewer's Guide

Introduce a compatibility layer that allows this provider to work with both the legacy llama_stack package and the newer llama_stack_api module, refactoring imports to use the compat module and removing telemetry API exposure from the provider specs.
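For illustration, a minimal sketch of what a call site looks like once imports go through the compat module rather than either package directly (the import path follows the file location cited below; the specific names are illustrative, not an exhaustive list):

```python
# Illustrative only: provider code imports from the compat layer, which
# resolves to whichever of llama_stack / llama_stack_api is installed.
from llama_stack_provider_ragas.compat import Api, Eval, EvaluateResponse
```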
Hey there - I've reviewed your changes - here's some feedback:
- The new `compat.py` repeats similar try/except patterns for each group of imports; consider detecting the available package (`llama_stack` vs `llama_stack_api`) once and branching on a single flag to avoid repetition and reduce import-time overhead.
- In `compat.py`, the fallback imports from `llama_stack_api` pull many names from the top-level package; if possible, importing from specific submodules (e.g., `llama_stack_api.inference`) would make the dependency surface clearer and reduce the chance of name collisions.
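For illustration, a minimal sketch (not code from this PR) of the single-flag approach suggested in the first comment; the `llama_stack_api.inference` submodule in the fallback branch is the reviewer's own example and is assumed to exist:

```python
import importlib.util

# Probe for the legacy package once at import time, instead of running a
# try/except block for every import group.
_HAS_LEGACY_LLAMA_STACK = importlib.util.find_spec("llama_stack") is not None

if _HAS_LEGACY_LLAMA_STACK:
    # Legacy llama_stack layout
    from llama_stack.apis.datatypes import Api
    from llama_stack.apis.inference import Inference
else:
    # Newer layout; importing from a specific submodule (hypothetical,
    # per the second comment) keeps the dependency surface explicit.
    from llama_stack_api import Api
    from llama_stack_api.inference import Inference  # hypothetical submodule
```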
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The new `compat.py` repeats similar try/except patterns for each group of imports; consider detecting the available package (`llama_stack` vs `llama_stack_api`) once and branching on a single flag to avoid repetition and reduce import-time overhead.
- In `compat.py`, the fallback imports from `llama_stack_api` pull many names from the top-level package; if possible, importing from specific submodules (e.g., `llama_stack_api.inference`) would make the dependency surface clearer and reduce the chance of name collisions.
## Individual Comments
### Comment 1
<location> `src/llama_stack_provider_ragas/compat.py:10` </location>
<code_context>
"""
Compatibility layer for llama_stack imports.
This module provides backward compatibility by attempting to import from
the legacy llama_stack package first, then falling back to the newer
llama_stack_api package structure.
"""
# Provider datatypes and API definitions
try: # Legacy llama_stack layout
from llama_stack.apis.datatypes import Api
from llama_stack.providers.datatypes import (
BenchmarksProtocolPrivate,
InlineProviderSpec,
ProviderSpec,
RemoteProviderSpec,
)
except (ImportError, ModuleNotFoundError):
# Newer llama_stack_api layout
from llama_stack_api import (
Api,
BenchmarksProtocolPrivate,
InlineProviderSpec,
ProviderSpec,
RemoteProviderSpec,
)
# Benchmarks
try:
from llama_stack.apis.benchmarks import Benchmark
except (ImportError, ModuleNotFoundError):
from llama_stack_api import Benchmark
# Common job types
try:
from llama_stack.apis.common.job_types import Job, JobStatus
except (ImportError, ModuleNotFoundError):
from llama_stack_api import Job, JobStatus
# DatasetIO
try:
from llama_stack.apis.datasetio import DatasetIO
except (ImportError, ModuleNotFoundError):
from llama_stack_api import DatasetIO
# Eval
try:
from llama_stack.apis.eval import BenchmarkConfig, Eval, EvaluateResponse
except (ImportError, ModuleNotFoundError):
from llama_stack_api import BenchmarkConfig, Eval, EvaluateResponse
# Inference
try:
from llama_stack.apis.inference import (
Inference,
OpenAICompletionRequestWithExtraBody,
OpenAIEmbeddingsRequestWithExtraBody,
SamplingParams,
TopPSamplingStrategy,
)
except (ImportError, ModuleNotFoundError):
from llama_stack_api import (
Inference,
OpenAICompletionRequestWithExtraBody,
OpenAIEmbeddingsRequestWithExtraBody,
SamplingParams,
TopPSamplingStrategy,
)
# Scoring
try:
from llama_stack.apis.scoring import ScoringResult
except (ImportError, ModuleNotFoundError):
from llama_stack_api import ScoringResult
# Schema utils
try:
from llama_stack.schema_utils import json_schema_type
except (ImportError, ModuleNotFoundError):
from llama_stack_api import json_schema_type
__all__ = [
# API and Provider types
"Api",
"BenchmarksProtocolPrivate",
"InlineProviderSpec",
"ProviderSpec",
"RemoteProviderSpec",
# Benchmarks
"Benchmark",
# Job types
"Job",
"JobStatus",
# DatasetIO
"DatasetIO",
# Eval
"BenchmarkConfig",
"Eval",
"EvaluateResponse",
# Inference
"Inference",
"OpenAICompletionRequestWithExtraBody",
"OpenAIEmbeddingsRequestWithExtraBody",
"SamplingParams",
"TopPSamplingStrategy",
# Scoring
"ScoringResult",
# Schema utils
"json_schema_type",
]
</code_context>
<issue_to_address>
**issue (code-quality):** Remove redundant exceptions from an except clause [×5] ([`remove-redundant-exception`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/remove-redundant-exception/))
</issue_to_address>
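For context on this finding: `ModuleNotFoundError` is a subclass of `ImportError`, so listing both in an except clause is redundant. A minimal sketch of the fix for one import group:

```python
try:
    from llama_stack.apis.benchmarks import Benchmark
except ImportError:  # also covers ModuleNotFoundError, its subclass
    from llama_stack_api import Benchmark
```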
| """ | ||
|
|
||
| # Provider datatypes and API definitions | ||
| try: # Legacy llama_stack layout |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
issue (code-quality): Remove redundant exceptions from an except clause [×5] (remove-redundant-exception)
dmaniloff left a comment
Thank you @m-misiura for this!! Can we also remove these lines from `distribution/run.yaml`?
```yaml
telemetry:
  enabled: true
```
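For reference, a sketch of the requested edit as a diff (the fragment and its placement in the file are assumed, not taken from the PR):

```diff
 # distribution/run.yaml (fragment)
-telemetry:
-  enabled: true
```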
This PR deals with changes first introduced by llamastack/llama-stack#3895, but also removes the Telemetry API.
Summary by Sourcery
Introduce a compatibility layer for llama_stack vs. llama_stack_api and update provider wiring to drop the Telemetry API.
Enhancements: