Copilot Accessibility & Signals
Category: GitHub Copilot
Overview
Make GitHub Copilot more accessible with audio cues, announcements, and enhanced verbosity settings. This configuration is designed for developers who benefit from auditory feedback and screen reader compatibility.
What’s Included
Audio Signals
- Chat request sent: Audio cue and announcement when you send a chat message
- Response received: Notification when Copilot responds
- File modified: Alert when chat edits modify a file
- Action required: Audio cue when you need to take action (approve, etc.)
- Inline suggestions: Sound when the cursor is on a line with an inline suggestion
- Next edit available: Notification for Next Edit Suggestions (NES)
Verbosity Settings
- Inline chat help: Information about accessing the inline chat help menu
- Inline completions: How to access the completion hover and the Accessible View
- Panel chat help: Guidance on accessing the chat help menu
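Each of these is a plain boolean, so they can be toggled independently. For instance, if the completion hints feel redundant once you know the shortcuts, a partial override might look like this (illustrative values, not part of the shipped configuration):
{
  "accessibility.verbosity.inlineChat": true,
  "accessibility.verbosity.inlineCompletions": false,
  "accessibility.verbosity.panelChat": true
}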
Voice Features
- Keyword activation disabled: “Hey Code” voice activation is off by default (enable if needed)
- Auto-synthesize disabled: Responses aren’t automatically read aloud (enable for hands-free mode; see the example after this list)
- Speech timeout: 1200ms silence before voice recognition stops
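Both voice features ship disabled in this configuration. To try hands-free use, you can flip them in your settings.json; a minimal sketch is below. VS Code’s settings.json accepts // comments, and the exact set of non-"off" values for keywordActivation depends on your VS Code version, so treat "chatInView" as a placeholder and check the setting’s description before copying it:
{
  // Read Copilot chat responses aloud as they arrive
  "accessibility.voice.autoSynthesize": "on",
  // Listen for "Hey Code" (placeholder value; verify the options available in your build)
  "accessibility.voice.keywordActivation": "chatInView",
  // Allow a longer pause, in milliseconds, before voice recognition stops
  "accessibility.voice.speechTimeout": 2000
}
With these overrides you can dictate a prompt and hear the reply without touching the keyboard.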
Accessible Views
- Diff viewer: Automatically shows accessible diff viewer for inline chat changes
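With "auto", the accessible diff viewer appears for inline chat changes when a screen reader is detected. If you prefer to see it every time, the setting can be pinned on (assuming it accepts the same on/off/auto values as the audio signals; verify in your VS Code version):
{
  "inlineChat.accessibleDiffView": "on"
}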
Usage Tips
- Enable screen reader: Make sure your screen reader (NVDA, JAWS, VoiceOver) is running
- Customize sounds: All audio signals use “auto” mode, so they play when a screen reader is detected
- Voice input: Enable keywordActivation and autoSynthesize for hands-free coding
- Accessible View: Press Ctrl+K Ctrl+H to open the Accessible View for detailed change review
Customization
You can adjust audio signals individually:
"on"- Always play sound/announcement"off"- Never play sound/announcement"auto"- Play when screen reader is detected (recommended)
Best For
- Developers using screen readers
- Users who benefit from audio feedback
- Teams committed to inclusive development practices
- Developers with visual impairments
- Anyone who prefers multi-modal feedback while coding
Settings JSON
{
"accessibility.signals.chatRequestSent": {
"sound": "auto",
"announcement": "auto"
},
"accessibility.signals.chatResponseReceived": {
"sound": "auto"
},
"accessibility.signals.chatEditModifiedFile": {
"sound": "auto"
},
"accessibility.signals.chatUserActionRequired": {
"sound": "auto",
"announcement": "auto"
},
"accessibility.signals.lineHasInlineSuggestion": {
"sound": "auto"
},
"accessibility.signals.nextEditSuggestion": {
"sound": "auto",
"announcement": "auto"
},
"accessibility.verbosity.inlineChat": true,
"accessibility.verbosity.inlineCompletions": true,
"accessibility.verbosity.panelChat": true,
"accessibility.voice.keywordActivation": "off",
"accessibility.voice.autoSynthesize": "off",
"accessibility.voice.speechTimeout": 1200,
"inlineChat.accessibleDiffView": "auto"
}