Ventura - Speech Recognition script not working

Just updated to Ventura 13.2.1 from Monterey 12.6. Now scripts that use Speech Recognition in Voice Control no longer work. For instance, I have a simple GUI script to press the Down Arrow a specified number of times.

When I open the script for editing, the line that called Apple’s “SpeechRecognitionServer” has been changed to “AppleScript Utility,” which apparently doesn’t have a “listen for” command; the “listen for” command itself is changed to “«event sprcsrls»”, and AppleScript Utility throws an error:

{<LIST OF #s 1-99>} doesn’t understand the “«event sprcsrls»” message.

SpeechRecognitionServer is no longer listed in the Dictionary window.

Apple still pays attention to Voice Control; in fact, they’ve upgraded it in Ventura, adding a Spelling Mode, so clearly some part of the system is still interpreting speech. Built-in commands with number input, like “Move forward < count > words”, still work. My commands require a separate input to specify the number after activating, because I don’t know a way to incorporate it with the other words in a command, but until Ventura they were all working fine.

Does anyone know how to continue accessing SpeechRecognitionServer, or an alternate/replacement system utility that serves the same function?

Here’s the code for one of my custom commands:

global numList
set frontApp to (path to frontmost application as Unicode text)

set numList to {}
set numList to CreateNumList(numList)

tell application "SpeechRecognitionServer"
-- say a number from 1-99
	set howMany to listen for numList
end tell

-- After compiling, the block above becomes:
tell application "AppleScript Utility"
	set howMany to «event sprcsrls» numList
end tell

tell application "System Events"
	tell application process frontApp
		repeat with i from 1 to howMany
			key code 125 -- Down Arrow key
			delay 0.02
		end repeat
	end tell
end tell

on CreateNumList(theNumbers)
	repeat with i from 1 to 99
		set end of theNumbers to i
	end repeat
	return theNumbers
end CreateNumList

On my macOS 13.3 environment (Mac mini M1), it seems to work well.

set numList to {"1", "2", "3"}

tell application "SpeechRecognitionServer"
	-- say a # from 1-99
	set howMany to listen for numList
end tell

The “listen for” command exists.

My mic is Irium Webcam Audio from an iPhone.

Irium Webcam Audio works at 48 kHz sampling (per Audio MIDI Setup).

SpeechRecognitionServer’s voice recognition engine is the assistive one. It works well at 48 kHz audio sampling.

Now, SpeechRecognitionServer works with AirPods, but it does not work with AirPods Pro, due to the digital audio sampling rate.


Really, what is the difference in sampling rate? Is that something that the user of AirPods and/or AirPods Pro can easily find?

Apple’s native speech recognition function is based on 48 kHz sampled voice.
I suspect this spec comes from PlainTalk on the Quadra 840AV/660AV, with its DSP system.

On the other hand, Siri’s speech recognition is a server-side system designed for mobile phones, so its voice sampling rate is lower, to match Bluetooth headsets (for a long time there were no high-quality wireless headsets for speech recognition).

The AirPods’ sampling rate is 16 kHz. At their debut, I confirmed that AirPods work with Apple’s native speech recognition. (48/16 kHz dual mode? I have no idea.)

AirPods Pro are an upgraded version of AirPods; their sampling rate is 24 kHz.
Apple’s native speech recognition engine does not work with them.

It’s strange behavior, but it’s real.

I was just looking into the voice recognition function in order to write an ebook.


Thank you for your response. For some reason I didn’t get a notification from Late Night (not in Spam either), so I didn’t see it for a few days.

I’m also using OS 13.3, on a Mac Pro (2019), but I don’t see why that would make a difference in available Dictionaries. Your sample script is what I’d expect to work. However, when I paste it into SD, as soon as it compiles, “SpeechRecognitionServer” is changed to “AppleScript Utility.” Apparently this isn’t happening on your system. Since AppleScript Utility doesn’t have a “listen for” command, I get an error. Here’s what it looks like:

Pasted text before compiling:

set numList to {"1", "2", "3"}

tell application "SpeechRecognitionServer"
	-- say a # from 1-99
	set howMany to listen for numList
end tell

After compiling:

set numList to {"1", "2", "3"}

tell application "AppleScript Utility"
	-- say a # from 1-99
	set howMany to listen for numList
end tell

I then get an error:

AppleScript Utility got an error: can’t continue listen.

The SpeechRecognitionServer dictionary is no longer listed in SD’s Dictionary list, & a Finder search for it on my entire system drive gives no results.

I think you’re right to consider all possibilities, but I’m sure my mic has nothing to do with it. I don’t see how a mic would cause code to be changed by AppleScript. Either the mic’s compatible with the computer, or it isn’t. I’ve been using the same analog pro audio headset for years with no issues, & the Mac continues to recognize it after the Ventura upgrade; my custom Voice Control commands that don’t call SpeechRecognitionServer work fine, & obviously the built-in commands are calling something that recognizes speech. I just need to know how to access it, if it’s no longer “SpeechRecognitionServer”.

In the meantime, I have found a workaround using Shortcuts that’s less convenient, but usable. Shortcuts has an action called “Ask for Input” (listed under Scripting/Notification) that lets you dictate something via Voice Control to be passed to the next action, & you can specify that the input be treated as a number. The number you dictate displays as text in the dialog, but is passed on as a number. I then pass it to the “Run AppleScript” action (Scripting/Script Editor), which contains the “System Events” tell block from my original script. The drawback is that you have to manually click Done, or press Tab & Return, to get it to move on. I haven’t figured out how to include those in the automation.
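For reference, here’s a minimal sketch of what that “Run AppleScript” action body could look like. The `on run {input, parameters}` handler is Shortcuts’ standard entry point for this action; treating the first item of `input` as the dictated number is my assumption about how the value arrives:

```applescript
-- Sketch only: Shortcuts passes the "Ask for Input" result via the input list.
on run {input, parameters}
	set howMany to (item 1 of input) as integer
	tell application "System Events"
		repeat with i from 1 to howMany
			key code 125 -- Down Arrow
			delay 0.02
		end repeat
	end tell
	return input
end run
```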

I’d still prefer to find a way to access SpeechRecognitionServer or whatever now has the “listen for” command. I don’t know why it’s available on your Mac, but not mine. Has anyone else had this issue?

Are you still able to use SpeechRecognitionServer in your scripts on Ventura, or are you using another system component to recognize the speech? SpeechRecognitionServer disappeared on my Mac when I switched to Ventura.

On Ventura 13.4 beta 1, SpeechRecognitionServer exists at this path in my environment.


If it exists there, you should reboot your machine first.
Then run Disk Utility to repair your environment, I think.


How about running this script to check your SpeechRecognitionServer?

tell application id ""
	delay 3
	version of it
	--> "9.0.65"
	name of it
	--> "SpeechRecognitionServer"
	path to it
	--> alias "Macintosh"
end tell
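If compiling against the app’s dictionary is the thing that fails, you can also check for the process by name without touching its dictionary at all. A sketch, assuming the process name matches the dictionary name:

```applescript
-- Sketch: check whether a SpeechRecognitionServer process is running,
-- without compiling against its scripting dictionary.
tell application "System Events"
	set procNames to name of every process
end tell
if procNames contains "SpeechRecognitionServer" then
	display dialog "SpeechRecognitionServer is running"
else
	display dialog "SpeechRecognitionServer is not running"
end if
```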

Hmm…how about this?

-- Created 2017-04-20 by Takaaki Naganoya
-- 2017 Piyomaru Software
use AppleScript version "2.5"
use scripting additions
use framework "Foundation"
use framework "AppKit"

set aCmdList to {"Alpha", "Bravo", "Charlie", "quit"}
--set aCmdList to {"Hello", "goodnight", "goodevening", "quit"}

set aRecognizer to current application's NSSpeechRecognizer's alloc()'s init()
aRecognizer's setCommands:aCmdList
aRecognizer's setDelegate:me
aRecognizer's setDisplayedCommandsTitle:"Commands"
aRecognizer's setListensInForegroundOnly:true
aRecognizer's startListening()
aRecognizer's setBlocksOtherRecognizers:true

on speechRecognizer:aSender didRecognizeCommand:aCmd
	set recogCmd to aCmd as string
	say recogCmd
	if recogCmd = "quit" then
		aSender's stopListening()
	end if
end speechRecognizer:didRecognizeCommand:

You’d better use a headset; otherwise the Mac will recognize the output sound of the “say” command.

The first trial script for NSSpeechRecognizer recognized my voice twice.
At first I didn’t understand the reason.

Finally, I realized that NSSpeechRecognizer was recognizing the sound of the say command’s speech output.
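One way around that feedback loop, sketched as an adaptation of the delegate handler above, is to acknowledge the recognized command visually instead of speaking it, so the recognizer never hears its own output:

```applescript
-- Sketch: replace the spoken echo with a notification so the recognizer
-- doesn't pick up the "say" output as a new command.
on speechRecognizer:aSender didRecognizeCommand:aCmd
	set recogCmd to aCmd as string
	display notification recogCmd with title "Recognized command"
	if recogCmd = "quit" then
		aSender's stopListening()
	end if
end speechRecognizer:didRecognizeCommand:
```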