I tried running mlx-community/Qwen3-Next-80B-A3B-Instruct-8bit in LM Studio, but the Qwen3-Next architecture isn't supported yet in the latest mlx-engine. This is most likely because the latest mlx-lm release had a bug.
It does work with the most recent version from GitHub, though. You can run the model successfully with:
```
uv run --with git+https://github.com/ml-explore/mlx-lm.git mlx_lm.chat --model mlx-community/Qwen3-Next-80B-A3B-Instruct-8bit --max-tokens 10000
```
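For reference, the same GitHub install can also be used from Python instead of the CLI. Here is a minimal sketch assuming mlx-lm was installed from source (e.g. `pip install git+https://github.com/ml-explore/mlx-lm.git`); the prompt text and `max_tokens` value are just placeholders:

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Instruct-8bit")

# Build a chat-formatted prompt (placeholder message) and generate a response.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=500))
```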
Would it be appropriate to add some docs on how to build all modules, including mlx-engine, from source and use them in LM Studio?