
Commit d8cd81e

Merge branch 'main' into enable-data-files
2 parents: de830f6 + 99ef5b9

File tree

3 files changed: +3 −3 lines


README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -124,7 +124,7 @@ Lighteval offers the following entry points for model evaluation:
 Did not find what you need ? You can always make your custom model API by following [this guide](https://huggingface.co/docs/lighteval/main/en/evaluating-a-custom-model)
 - `lighteval custom`: Evaluate custom models (can be anything)
 
-Here's a **quick command** to evaluate using the *Accelerate backend*:
+Here's a **quick command** to evaluate using a remote inference service:
 
 ```shell
 lighteval eval "hf-inference-providers/openai/gpt-oss-20b" gpqa:diamond
````

docs/source/index.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@ and see how your models stack up.
 
 ### 🚀 **Multi-Backend Support**
 Evaluate your models using the most popular and efficient inference backends:
-- `eval`: Use [inspect-ai](https://inspect.aisi.org.uk/) as backend to evaluate and inspect your models ! (prefered way)
+- `eval`: Use [inspect-ai](https://inspect.aisi.org.uk/) as backend to evaluate and inspect your models! (prefered way)
 - `transformers`: Evaluate models on CPU or one or more GPUs using [🤗
 Accelerate](https://github.com/huggingface/transformers)
 - `nanotron`: Evaluate models in distributed settings using [⚡️
```

src/lighteval/main_inspect.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -565,4 +565,4 @@ def bundle(log_dir: str, output_dir: str, overwrite: bool = True, repo_id: str |
     "tiny_benchmarks",
 ]
 model = "hf-inference-providers/meta-llama/Llama-3.1-8B-Instruct:nebius"
-eval(models=[model], tasks=task)
+eval(models=[model], tasks=tasks[0])
```
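The one-line change in `main_inspect.py` swaps an undefined name (`task`, singular) for an index into the `tasks` list defined a few lines earlier. A minimal sketch of the failure mode and the fix; `run_eval` here is a hypothetical stand-in, not lighteval's actual `eval()` API:

```python
# Stand-ins mirroring the snippet in main_inspect.py (hypothetical, not the real API).
tasks = ["tiny_benchmarks"]
model = "hf-inference-providers/meta-llama/Llama-3.1-8B-Instruct:nebius"


def run_eval(models, tasks):
    """Hypothetical stand-in for the eval() call being patched."""
    return f"evaluating {models[0]} on {tasks}"


# Before the fix: `task` (singular) was never defined, so the call raises NameError.
try:
    run_eval(models=[model], tasks=task)  # noqa: F821
except NameError:
    before_fix_failed = True

# After the fix: index into the existing `tasks` list instead.
result = run_eval(models=[model], tasks=tasks[0])
```

The fix also preserves the call's apparent intent of evaluating a single task, since `tasks[0]` passes one task name rather than the whole list.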
