Add support for passing token to evaluations directly #9

@ruivieira

Description

At the moment, tokens are passed using Kubernetes-style secret references, i.e.

client.benchmarks.register(
    benchmark_id="trustyai_lmeval::arc_easy",
    dataset_id="trustyai_lmeval::arc_easy",
    scoring_functions=["string"],
    provider_benchmark_id="string",
    provider_id="trustyai_lmeval",
    metadata={
        "env": {
            "OPENAI_API_KEY": {
                "valueFrom": {
                    "secretKeyRef": {
                        "name": "user-one-token",
                        "key": "token"
                    }
                }
            }
        }
    }
)

On top of this, add the ability to pass them directly from Python:

client.benchmarks.register(
    benchmark_id="trustyai_lmeval::arc_easy",
    dataset_id="trustyai_lmeval::arc_easy",
    scoring_functions=["string"],
    provider_benchmark_id="string",
    provider_id="trustyai_lmeval",
    metadata={
        "env": {
            "OPENAI_API_KEY": "1234567abcdef"
        }
    }
)
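To support both forms, the provider would need to distinguish a plain string token from a Kubernetes `secretKeyRef` dict when reading the `env` metadata. A minimal sketch of that dispatch logic follows; the function name `resolve_env_value` and the returned dict shapes are hypothetical, not part of any existing API:

```python
def resolve_env_value(value):
    """Classify an env entry as a literal token or a Kubernetes secret reference.

    Hypothetical helper: the actual provider implementation may differ.
    """
    if isinstance(value, str):
        # Token passed directly from Python (the new behaviour requested here)
        return {"kind": "literal", "token": value}
    if isinstance(value, dict) and "valueFrom" in value:
        # Kubernetes-style reference, to be resolved in-cluster
        ref = value["valueFrom"].get("secretKeyRef", {})
        return {"kind": "secretKeyRef", "name": ref.get("name"), "key": ref.get("key")}
    raise ValueError(f"Unsupported env value: {value!r}")
```

With this shape, both registration calls above would be accepted: the first resolves to a `secretKeyRef` entry, the second to a `literal` token.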

Labels: enhancement (New feature or request)