diff --git a/docs/source/async.mdx b/docs/source/async.mdx
index e3a11609c0..9dd87472c8 100644
--- a/docs/source/async.mdx
+++ b/docs/source/async.mdx
@@ -278,7 +278,7 @@ We found the default values of `actions_per_chunk` and `chunk_size_threshold` to
 2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle and is stepping through its current action queue. If the two processes happen at fundamentally different speeds, the client might end up with an empty queue. As such, you should reduce your fps if you consistently run out of actions in queue.
 3. **Adjust `chunk_size_threshold`**.
    - Values closer to `0.0` result in almost sequential behavior. Values closer to `1.0` → send observation every step (more bandwidth, relies on good world-model).
-   - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` setting the `--debug-visualize-queue-size` to `True`. This will plot the action queue size evolution at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
+   - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` setting the `--debug_visualize_queue_size` to `True`. This will plot the action queue size evolution at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
 The action queue size is plotted at runtime when the
-`--debug-visualize-queue-size` flag is passed, for various levels of
+`--debug_visualize_queue_size` flag is passed, for various levels of
 `chunk_size_threshold` (`g` in the SmolVLA paper).
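
For quick verification of the renamed flag, a minimal sketch of how the client might be launched with it. The `--debug_visualize_queue_size` flag and the `chunk_size_threshold` option are taken from the documentation text in the diff; the entry-point module path and the example threshold value are assumptions and may differ between lerobot versions.

```bash
# Sketch only: the module path below is an assumption and may vary across
# lerobot versions; --debug_visualize_queue_size and --chunk_size_threshold
# are the options discussed in the docs above.
python -m lerobot.scripts.server.robot_client \
    --chunk_size_threshold=0.5 \
    --debug_visualize_queue_size=True
```

Watching the resulting queue-size plot shows whether the queue drains faster than new chunks arrive, which is the signal to lower `fps` or adjust `chunk_size_threshold` as described in the updated docs.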