Native speech-to-text for Arch / Omarchy - Fast, accurate and easy system-wide dictation
local & secure | waybar integration | audio feedback | whisper / parakeet / any | cpu / gpu
- Optimized for Arch Linux / Omarchy - Seamless integration with Omarchy / Hyprland & Waybar
- Hyper fast defaults - State-of-the-art, fast speech recognition via in-memory Whisper
- Cross-platform GPU support - Automatic detection and acceleration for NVIDIA (CUDA) / AMD (ROCm)
- Supports >any< ASR backend - Parakeet-v3? New-thing? Use the remote API and templates
- Word overrides - Customize transcriptions, prompt and corrections
- Multi-lingual - Use a multi-language model and speak your own language
- Run as user - Runs in user space, just sudo once for the installer
🔐 PRIVATE: hyprwhspr is local and never reads any clipboard / audio content
- Omarchy or Arch Linux
- NVIDIA GPU (optional, for CUDA acceleration)
- AMD GPU (optional, for ROCm acceleration)
"Just works" with Arch and Omarchy.
# Clone the repository
git clone https://github.com/goodroot/hyprwhspr.git
cd hyprwhspr
# Run the automated installer
./scripts/install-omarchy.sh
The installer will:
- ✅ Install system dependencies (ydotool, etc.)
- ✅ Copy application files to system directory (/usr/lib/hyprwhspr)
- ✅ Set up Python virtual environment in user space (~/.local/share/hyprwhspr/venv)
- ✅ Install default pywhispercpp backend
- ✅ Download base model to user space (~/.local/share/pywhispercpp/models/ggml-base.en.bin)
- ✅ Set up systemd services for hyprwhspr & ydotool
- ✅ Configure Waybar integration
- ✅ Test everything works
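Once the installer finishes (and after the log-out/log-in below), you can confirm both services came up:
systemctl --user status hyprwhspr.service ydotool.service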
Ensure your microphone of choice is available in audio settings!
- Log out and back in (for group permissions)
- Press Super+Alt+D to start dictation - beep!
- Speak naturally
- Press Super+Alt+D again to stop dictation - boop!
- Bam! Text appears in the active buffer!
Any snags? Please create an issue or visit the Omarchy Discord.
Super+Alt+D - Toggle dictation on/off
Edit ~/.config/hyprwhspr/config.json:
Minimal config - only 2 essential options:
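For example, with the defaults:
{
    "primary_shortcut": "SUPER+ALT+D",
    "model": "base.en"
}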
Remote backends - use any ASR backend via HTTP API:
See hyprwhspr-backends for backend examples like Parakeet-tdt-0.6b-v3.
{
"transcription_backend": "remote",
"rest_endpoint_url": "https://your-server.example.com/transcribe",
"rest_api_key": "your-api-key-here", // optional
"rest_timeout": 30 // optional, default: 30
}
Kudos to @cd-slash for the contribution!
Custom hotkey - extensive key support:
{
"primary_shortcut": "CTRL+SHIFT+SPACE"
}
Supported key types:
- Modifiers: ctrl, alt, shift, super (left) or rctrl, ralt, rshift, rsuper (right)
- Function keys: f1 through f24
- Letters: a through z
- Numbers: 1 through 9, 0
- Arrow keys: up, down, left, right
- Special keys: enter, space, tab, esc, backspace, delete, home, end, pageup, pagedown
- Lock keys: capslock, numlock, scrolllock
- Media keys: mute, volumeup, volumedown, play, nextsong, previoussong
- Numpad: kp0 through kp9, kpenter, kpplus, kpminus
Or use direct evdev key names for any key not in the alias list:
{
"primary_shortcut": "SUPER+KEY_COMMA"
}
Examples:
- "SUPER+SHIFT+M" - Super + Shift + M
- "CTRL+ALT+F1" - Ctrl + Alt + F1
- "F12" - Just F12 (no modifier)
- "RCTRL+RSHIFT+ENTER" - Right Ctrl + Right Shift + Enter
Word overrides - customize transcriptions:
{
"word_overrides": {
"hyperwhisper": "hyprwhspr",
"omarchie": "Omarchy"
}
}
Whisper prompt - customize transcription behavior:
{
"whisper_prompt": "Transcribe with proper capitalization, including sentence beginnings, proper nouns, titles, and standard English capitalization rules."
}
The prompt influences how Whisper interprets and transcribes your audio, e.g.:
- "Transcribe as technical documentation with proper capitalization, acronyms and technical terminology."
- "Transcribe as casual conversation with natural speech patterns."
- "Transcribe as an ornery pirate on the cusp of scurvy."
Audio feedback - optional sound notifications:
{
"audio_feedback": true, // Enable audio feedback (default: false)
"start_sound_volume": 0.3, // Start recording sound volume (0.1 to 1.0)
"stop_sound_volume": 0.3, // Stop recording sound volume (0.1 to 1.0)
"start_sound_path": "custom-start.ogg", // Custom start sound (relative to assets)
"stop_sound_path": "custom-stop.ogg" // Custom stop sound (relative to assets)
}
Default sounds included:
- Start recording: ping-up.ogg (ascending tone)
- Stop recording: ping-down.ogg (descending tone)
Custom sounds:
- Supported formats: .ogg, .wav, .mp3
- Fallback: Uses defaults if custom files don't exist
Thanks for the sounds, @akx!
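Want to hear the defaults before swapping in custom files? Assuming ffplay is installed (aplay/paplay also work) and the bundled sounds live in the assets directory referenced in the troubleshooting section:
# Preview the bundled start/stop sounds
ffplay -nodisp -autoexit /usr/lib/hyprwhspr/share/assets/ping-up.ogg
ffplay -nodisp -autoexit /usr/lib/hyprwhspr/share/assets/ping-down.ogg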
Text replacement: Automatically converts spoken words to symbols / punctuation:
Punctuation:
- "period" → "."
- "comma" → ","
- "question mark" → "?"
- "exclamation mark" → "!"
- "colon" → ":"
- "semicolon" → ";"
Symbols:
- "at symbol" → "@"
- "hash" → "#"
- "plus" → "+"
- "equals" → "="
- "dash" → "-"
- "underscore" → "_"
Brackets:
- "open paren" → "("
- "close paren" → ")"
- "open bracket" → "["
- "close bracket" → "]"
- "open brace" → "{"
- "close brace" → "}"
Special commands:
- "new line" → new line
- "tab" → tab character
Speech-to-text replacement list via WhisperTux, thanks @cjams!
Clipboard behavior - control what happens to clipboard after text injection:
{
"clipboard_behavior": false, // Boolean: true = clear after delay, false = keep (default: false)
"clipboard_clear_delay": 5.0 // Float: seconds to wait before clearing (default: 5.0, only used if clipboard_behavior is true)
}
- clipboard_behavior: true - Clipboard is automatically cleared after the specified delay
- clipboard_clear_delay - How long to wait before clearing (only matters when clipboard_behavior is true)
Paste behavior - control how text is pasted into applications:
{
"paste_mode": "ctrl_shift" // "super" | "ctrl_shift" | "ctrl" (default: "ctrl_shift")
}
Paste behavior options:
- "ctrl_shift" (default) - Sends Ctrl+Shift+V. Works in most terminals.
- "super" - Sends Super+V. Omarchy default. May be finicky.
- "ctrl" - Sends Ctrl+V. Standard GUI paste.
Add dynamic tray icon to your ~/.config/waybar/config:
{
"custom/hyprwhspr": {
"exec": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh status",
"interval": 2,
"return-type": "json",
"exec-on-event": true,
"format": "{}",
"on-click": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh toggle",
"on-click-right": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh start",
"on-click-middle": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh restart",
"tooltip": true
}
}
Add CSS styling to your ~/.config/waybar/style.css:
@import "/usr/lib/hyprwhspr/config/waybar/hyprwhspr-style.css";
Waybar icon click interactions:
- Left-click: Toggle Hyprwhspr on/off
- Right-click: Start Hyprwhspr (if not running)
- Middle-click: Restart Hyprwhspr
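To sanity-check the module before reloading Waybar, run the same command Waybar polls; given the return-type: json setting above, it should print a single JSON payload:
/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh status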
Increase the threads setting (see CPU performance options below) for more CPU parallelism when transcribing on CPU; on GPU, modest values are fine.
Default model installed: ggml-base.en.bin (~148MB) to ~/.local/share/pywhispercpp/models/
GPU Acceleration (NVIDIA & AMD):
- NVIDIA (CUDA) and AMD (ROCm) are detected automatically; pywhispercpp will use GPU when available
- No manual build steps required.
- If toolchains are present, the installer can build pywhispercpp with GPU support; otherwise the CPU wheel is used.
CPU performance options - improve cpu transcription speed:
{
"threads": 4 // thread count for whisper cpu processing
}
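A reasonable starting point (a rule of thumb, not a hyprwhspr requirement) is to match threads to your core count and tune from there:
nproc   # e.g. prints 8 -> try "threads": 8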
Available models to download:
- tiny - Fastest, good for real-time dictation
- base - Best balance of speed/accuracy (recommended)
- small - Better accuracy, still fast
- medium - High accuracy, slower processing
- large - Best accuracy, requires GPU acceleration for reasonable speed
- large-v3 - Latest large model, requires GPU acceleration for reasonable speed
The large and large-v3 models require GPU acceleration to perform well.
cd ~/.local/share/pywhispercpp/models/
# Tiny models (fastest, least accurate)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin
# Base models (good balance)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin
# Small models (better accuracy)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin
# Medium models (high accuracy)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin
# Large models (best accuracy, requires GPU)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3.bin
Update config after downloading:
{
"model": "small.en" // Or just small if multi-lingual model. If both available, general model is chosen.
}Language detection - control transcription language:
English-only speakers can use the .en models, which are smaller.
For multi-language detection, choose a model whose name does not include .en:
{
"language": null // null = auto-detect (default), or specify language code
}
Language options:
- null (default) - Auto-detect language from audio
- "en" - English transcription
- "nl" - Dutch transcription
- "fr" - French transcription
- "de" - German transcription
- "es" - Spanish transcription
- etc. - Any supported language code
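As a sketch, a minimal multi-lingual setup forcing Dutch could look like the following; note the heredoc replaces the whole config file, so fold in any other options you rely on:
cat > ~/.config/hyprwhspr/config.json <<'EOF'
{
    "model": "base",
    "language": "nl"
}
EOF
systemctl --user restart hyprwhspr.service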
Whisper is the default, but any model works via API.
See hyprwhspr-backends for the Parakeet-tdt-0.6b-v3 example.
After that, set up the following to match your backend, then restart hyprwhspr:
{
"transcription_backend": "remote",
"rest_endpoint_url": "https://127.0.0.1:8080/transcribe",
"rest_api_key": "your-api-key-here", // optional
"rest_timeout": 60 // optional, default: 30
}
Uses local Python and optionally systemd. Works great with GPU, or set the CPU flag.
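The request/response shape is defined by your backend and the hyprwhspr-backends templates, not by hyprwhspr itself. Purely as an illustration, a smoke test against a hypothetical endpoint that accepts a WAV upload might look like:
# Illustration only - the "file" field name and response format depend on your backend
curl -sk https://127.0.0.1:8080/transcribe \
  -H "Authorization: Bearer your-api-key-here" \
  -F "file=@test.wav"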
If you're having persistent issues, you can completely reset hyprwhspr:
# Stop services
systemctl --user stop hyprwhspr ydotool
# Remove runtime data
rm -rf ~/.local/share/hyprwhspr/
# Remove user config
rm -rf ~/.config/hyprwhspr/
# Remove system files
sudo rm -rf /usr/lib/hyprwhspr/
And then...
# Then reinstall fresh
./scripts/install-omarchy.sh
I heard the sound, but don't see text!
On Arch and other distros, it's fairly common for the microphone to need to be plugged in and re-selected each time you log in and out of your session, including after a restart. Within sound options, ensure that the microphone is indeed set; the sound utility will show feedback from the microphone if it is.
Hotkey not working:
# Check service status for hyprwhspr
systemctl --user status hyprwhspr.service
# Check logs
journalctl --user -u hyprwhspr.service -f
# Check service status for ydotool
systemctl --user status ydotool.service
# Check logs
journalctl --user -u ydotool.service -f
Permission denied:
# Fix uinput permissions
/usr/lib/hyprwhspr/scripts/fix-uinput-permissions.sh
# Log out and back in
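After logging back in, you can check that the fix took effect; this assumes the script grants /dev/uinput access via a udev rule or group membership:
ls -l /dev/uinput   # device permissions / owning group
groups              # confirm your user picked up the relevant group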
No audio input:
Is your mic actually available?
# Check audio devices
pactl list short sources
# Restart PipeWire
systemctl --user restart pipewire
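If the mic shows up in the list but isn't the active source, set it as the default (replace the placeholder name with one from pactl list short sources):
pactl set-default-source alsa_input.example-device.analog-stereo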
Audio feedback not working:
# Check if audio feedback is enabled in config
cat ~/.config/hyprwhspr/config.json | grep audio_feedback
# Verify sound files exist
ls -la /usr/lib/hyprwhspr/share/assets/
# Check if ffplay/aplay/paplay is available
which ffplay aplay paplay
Model not found:
# Check if model exists
ls -la ~/.local/share/pywhispercpp/models/
# Download a different model
cd ~/.local/share/pywhispercpp/models/
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
# Verify model path in config
cat ~/.config/hyprwhspr/config.json | grep model
Stuck recording state:
# Check service health and auto-recover
/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh health
# Manual restart if needed
systemctl --user restart hyprwhspr.service
# Check service status
systemctl --user status hyprwhspr.service
hyprwhspr is designed as a system package:
- /usr/lib/hyprwhspr/ - Main installation directory
- /usr/lib/hyprwhspr/lib/ - Python application
- ~/.local/share/pywhispercpp/models/ - Whisper models (user space)
- ~/.config/hyprwhspr/ - User configuration
- ~/.config/systemd/user/ - Systemd services
hyprwhspr uses systemd for reliable service management:
- hyprwhspr.service - Main application service with auto-restart
- ydotool.service - Input injection daemon service
- Tray integration - All tray operations use systemd commands
- Process management - No manual process killing or starting
- Service dependencies - Proper startup/shutdown ordering
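The installer sets these units up, but since they are ordinary user units you can manage them by hand with the usual commands:
systemctl --user enable --now hyprwhspr.service ydotool.service
systemctl --user status hyprwhspr.service ydotool.service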
- Check logs: journalctl --user -u hyprwhspr.service and journalctl --user -u ydotool.service
- Verify permissions: Run the permissions fix script
- Test components: Check ydotool, audio devices, whisper.cpp
- Report issues: Create an issue or visit Omarchy Discord - logging info helpful!
MIT License - see LICENSE file.
Create an issue, happy to help!
For pull requests, also best to start with an issue.
Built with ❤️ in 🇨🇦 for the Omarchy community
Integrated and natural speech-to-text.
{ "primary_shortcut": "SUPER+ALT+D", "model": "base.en" }