Speech Module Guide
A guide to understanding and using ACT-R/PM's Speech Module.
The Speech Module gives ACT-R a rudimentary ability to speak. This system is not designed to provide a sophisticated simulation of human speech production, but to allow ACT-R to speak words and short phrases for simulating verbal responses in experiments.
There are two commands to which the Speech Module responds: speak and subvocalize. Both take one parameter, the string to be spoken. As with the Motor Module, issuing a command starts a process of feature generation and, finally, execution. For command syntax, see the Command Reference.
Speech is output in up to three ways:
First, speech is "heard" by ACT-R itself: an entry representing the text being spoken is placed into RPM's audicon. This is the only output produced for subvocalize.
Second, on a Macintosh, ACT-R/PM can generate actual synthesized speech. For this to work, Apple's Speech Manager software (MacInTalk) must be installed on the machine on which ACT-R/PM is running; it can be found on Apple's FTP server. Basic speech synthesis does not produce particularly good speech, but it's something.
The third and most useful output channel is a call to a user-defined Lisp function. There are two ways to do this. The recommended way is to define a method on device-output-speech (see the docs for the Device Interface). An alternate method is to provide a pointer to a user-defined function via the
:speech-hook-fct parameter. This should be a function that takes two arguments: time and text. When the function is called by RPM, it will provide the simulated time (as a floating-point number, in seconds) at which speech began and a string containing the text that is being spoken.
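As a hedged sketch of the recommended approach, a device-output-speech method might look like the following. The class name MY-EXPERIMENT-WINDOW is hypothetical, and the argument list assumed here (the device, then the spoken string) should be verified against the Device Interface documentation.

```lisp
;; Hedged sketch: specializing DEVICE-OUTPUT-SPEECH on your own
;; device class.  MY-EXPERIMENT-WINDOW is a hypothetical class name;
;; check the Device Interface docs for the actual argument list.
(defclass my-experiment-window () ())

(defmethod device-output-speech ((device my-experiment-window) text)
  ;; Here we just log the utterance; a real experiment might instead
  ;; score TEXT as the participant's verbal response.
  (format t "~&Model said: ~S~%" text))
```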
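The hook-function channel can be sketched similarly. The name MY-SPEECH-HOOK is made up for illustration; the two-argument signature follows the description above, but the exact call used to install a function as :speech-hook-fct should be taken from the Command Reference.

```lisp
;; Hedged sketch of a :speech-hook-fct function.  Per the text above,
;; it receives the simulated onset time (a float, in seconds) and the
;; string being spoken.  MY-SPEECH-HOOK is an illustrative name.
(defun my-speech-hook (time text)
  (format t "~&~,3F s: model speaking ~S~%" time text))

;; Installing it depends on RPM's parameter-setting interface; see
;; the Command Reference for the call that accepts :speech-hook-fct.
```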
Last modified 2002.11.24