Talk to the AI assistant with a keyboard shortcut — ask questions, check your tasks, and update your work by speaking instead of typing.
This feature may need to be enabled for your workspace.
Voice lets you interact with Hamster Studio's AI assistant by speaking rather than typing. Hold a keyboard shortcut to talk, release to send your audio, and the assistant responds out loud. The session stays open so you can keep the conversation going across multiple turns. Voice is built for situations where typing is inconvenient: stepping through a brief while on a call, reviewing tasks away from your desk, or running through a checklist hands-free.
The assistant in voice mode is conversational. It greets you, answers questions naturally, and responds in short spoken sentences. It only takes action on your work — like looking up tasks or triggering a plan — when you explicitly ask for it.
Hold the shortcut to start talking — Press and hold Cmd+Shift+V (Mac) or Ctrl+Shift+V (Windows/Linux). The microphone activates and a small indicator appears at the bottom of the screen showing that Hamster is listening. The session connects automatically on your first press.
Speak your request — While holding the shortcut, say what you want: ask a question, request a task list, or check the status of your briefs. Keep your request focused — one sentence or short phrase works best.
Release to send — Let go of the shortcut. The audio is sent and the assistant processes your request. A "Speaking…" indicator appears while the response plays back through your speakers or headphones.
Continue the conversation — After the assistant finishes, hold the shortcut again to speak your follow-up. The session stays active for 30 seconds of idle time before automatically disconnecting.
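The hold-release flow above amounts to a small state machine: connect on first press, listen while held, play back after release, and disconnect after 30 seconds of idle time. The sketch below illustrates that lifecycle in Python; the class and method names are hypothetical, not Hamster Studio's actual API.

```python
import time

IDLE_TIMEOUT_S = 30  # session auto-disconnects after 30s of inactivity

class VoiceSession:
    """Illustrative push-to-talk lifecycle (not the product's real API)."""

    def __init__(self):
        self.state = "disconnected"
        self.last_activity = None

    def press(self):
        # The session connects automatically on the first press.
        if self.state == "disconnected":
            self.state = "connected"
        self.state = "listening"
        self.last_activity = time.monotonic()

    def release(self):
        # Releasing the shortcut sends the audio; the response plays back.
        if self.state == "listening":
            self.state = "speaking"
            self.last_activity = time.monotonic()

    def response_finished(self):
        # Response done: session stays open, idle clock starts.
        self.state = "idle"
        self.last_activity = time.monotonic()

    def tick(self, now=None):
        # Disconnect once the idle timeout elapses.
        now = time.monotonic() if now is None else now
        if self.state == "idle" and now - self.last_activity >= IDLE_TIMEOUT_S:
            self.state = "disconnected"

s = VoiceSession()
s.press()                          # connect + start listening
s.release()                        # send audio; response plays
s.response_finished()              # idle; timeout clock running
s.tick(now=s.last_activity + 31)   # 31s later: auto-disconnect
print(s.state)                     # disconnected
```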
Voice supports two modes:
Push-to-talk (default): Hold the shortcut while you speak, release when done. This gives you precise control over when the assistant is listening. Recommended for use in noisy environments or shared spaces.
Continuous: Press the shortcut once to activate, speak freely, and press again to stop. The assistant uses automatic speech detection to determine when you've finished a sentence. Better for longer, more conversational interactions in a quiet environment.
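Automatic speech detection in continuous mode typically works by watching for a run of trailing silence after speech. The toy detector below shows the idea; the energy threshold and frame counts are invented for illustration and are not the values the product uses.

```python
def detect_utterance_end(frame_energies, silence_threshold=0.01, silence_frames=15):
    """Return the frame index where the utterance is considered finished,
    or None if speech is still ongoing. Purely illustrative thresholds."""
    silent_run = 0
    seen_speech = False
    for i, energy in enumerate(frame_energies):
        if energy >= silence_threshold:
            seen_speech = True   # speaker is (still) talking
            silent_run = 0
        elif seen_speech:
            silent_run += 1
            if silent_run >= silence_frames:
                return i         # enough trailing silence: sentence finished
    return None

# Ten frames of speech followed by a long pause ends the utterance:
frames = [0.2] * 10 + [0.0] * 20
print(detect_utterance_end(frames))  # 24
```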
The voice assistant understands natural language and can take action on your workspace:
List your briefs: "Show me my briefs" or "What projects do I have?" The assistant reads back your most recent briefs and their status.
Connect to a brief: "Connect to the onboarding redesign brief." This sets the voice session's context so that follow-up questions about tasks reference the right project.
List and check tasks: "What are my tasks?" or "Show me the in-progress tasks." Once connected to a brief, the assistant lists tasks by status.
Get task details: "Tell me about task proj-12" gives you the title, status, priority, and description of a specific task.
Update tasks: "Mark proj-12 as done" or "Change the priority of proj-5 to high." The assistant confirms the change after making it.
Search your knowledge base: "Search for our authentication approach" searches your team's documents, blueprints, and methods and reads back the most relevant findings.
Generate a plan: "Generate a plan for this brief." Starts task generation for the connected brief. The tasks appear in the brief while the conversation continues.
Look up recent conversation: "What did we discuss earlier in this thread?" retrieves recent chat history from the current thread.
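A voice command layer like the one above usually maps spoken phrases to intents before calling any tool. This sketch routes a few of the documented phrasings with simple patterns; the regexes and intent names are hypothetical, not the product's actual grammar.

```python
import re

# Illustrative phrase-to-intent routing; patterns are made up for this sketch.
INTENT_PATTERNS = [
    (r"\b(show|list)\b.*\b(briefs?|projects?)\b", "list_briefs"),
    (r"\bconnect to\b.*\bbrief\b", "connect_brief"),
    (r"\bmy tasks\b|\btasks\b", "list_tasks"),
    (r"\btell me about task\b", "get_task"),
    (r"\bmark .+ as\b|\bchange the priority\b", "update_task"),
    (r"\bsearch for\b", "search_knowledge"),
    (r"\bgenerate a plan\b", "generate_plan"),
    (r"\bdiscuss earlier\b", "get_history"),
]

def route(utterance):
    text = utterance.lower()
    for pattern, intent in INTENT_PATTERNS:
        if re.search(pattern, text):
            return intent
    return "chat"  # no explicit action requested: stay conversational

print(route("Show me my briefs"))       # list_briefs
print(route("Mark proj-12 as done"))    # update_task
print(route("How's your day going?"))   # chat
```

The fallback intent matters: anything that doesn't match an action pattern stays a plain conversational turn, which mirrors the assistant's only-act-when-asked behavior.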
The voice assistant is designed to stay conversational. It responds in one or two short sentences — it doesn't read out long lists or markdown-formatted content. When you ask for a task list with many items, it summarizes and highlights the most relevant ones.
The assistant only calls tools when you explicitly ask for something. If there's a pause in the audio or your request is unclear, it responds with a brief acknowledgment ("I'm here") rather than guessing what you wanted.
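Keeping spoken responses short means condensing list-shaped results into a sentence rather than reading every item. A minimal sketch of that summarization step, assuming a hypothetical task shape with a `title` field:

```python
def speak_task_list(tasks, max_items=3):
    """Illustrative: condense a task list into one short spoken sentence
    instead of reading out every item. Not the product's real logic."""
    if not tasks:
        return "You have no tasks right now."
    word = "task" if len(tasks) == 1 else "tasks"
    named = ", ".join(t["title"] for t in tasks[:max_items])
    if len(tasks) <= max_items:
        return f"You have {len(tasks)} {word}: {named}."
    # Long lists get summarized, highlighting only the top few.
    return f"You have {len(tasks)} tasks. The top ones are {named}."

tasks = [{"title": f"Task {i}"} for i in range(1, 6)]
print(speak_task_list(tasks))
# You have 5 tasks. The top ones are Task 1, Task 2, Task 3.
```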
A floating indicator at the bottom center of the screen shows the current state: