Visual Salience in ACT-R
When deploying the eyes, how does the human visual system decide where to look next?
Most theorists in visual attention agree that the answer involves a mixture of top-down and bottom-up processing. Bottom-up processing computes which regions of the visual scene are "salient," irrespective of the goals or targets of the search. For instance, in a field of all-black objects, a single red object stands out.
The other consideration is top-down processing. Clearly, when you're searching for something you know to be red, you tend to look more at red objects. Similarly, people also use spatial guidance: if you expect the thing you're looking for to be on the left, you generally look on the left first.
Since its inception, the ACT-R visual system hasn't really addressed these issues. Herein is a first attempt to address such concerns. There are a couple of talks, and the source code for a salience-computing version of the visual system. Documentation is currently scarce, but hopefully there'll be something soon.
- Talk given at the February 2006 BICA Meeting
- Talk given at the July 2008 ACT-R Workshop
- Lisp source code for the new vision module (load this after you've loaded ACT-R 6)
- Lisp source code for the old vision module (load this after you've loaded ACT-R 6)
- Drawing code for the new vision module (MCL only; draws numbers on the installed window with salience information)
- Drawing code for the old vision module (MCL only; draws numbers on the installed window with salience information)
Usage is minimal. First load ACT-R 6.0, then the "visual-salience.lisp" file. (If you're on MCL, you can optionally load the "draw-salience.lisp" file as well.)
+visual-location> requests will now be governed by the salience system described in the talks above. That's it--wasn't that easy?
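For example, a production along these lines (standard ACT-R 6 syntax; the chunk type and slot names for the goal are just illustrative) will have its +visual-location> request resolved through the salience computation rather than the stock mechanism:

```lisp
(P find-red-thing
   =goal>
      isa    search-goal
      state  find
==>
   +visual-location>
      isa    visual-location
      color  red          ; this constraint feeds the value-guidance term
   =goal>
      state  attending)
```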
Oh, OK, there are some ways you can tweak it, by changing the values of the parameters that control the system. Remember that the salience of feature i in the visual icon, Li, is a sum of four terms: the first is bottom-up salience (a sum over the feature's slots, each weighted by a gamma), the second is top-down salience based on spatial guidance, the third is top-down salience based on value guidance (W divided among the value constraints j, each with an associated strength Sji), and the last is, of course, noise.
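To make the combination of terms concrete, here is a sketch in Lisp with made-up numbers (illustrative arithmetic only, not code from the module): three slot-wise bottom-up components weighted by the default gammas, no spatial guidance, one matched value constraint contributing W times Sji, and a single noise sample.

```lisp
;; Illustrative arithmetic only -- not code from the module.
(let* ((bottom-up (+ (* 0.30 0.2)     ; size component, gamma = 0.30
                     (* 0.40 0.8)     ; color component, gamma = 0.40
                     (* 0.30 0.1)))   ; val component, gamma = 0.30
       (spatial 0.0)                  ; no spatial guidance in this example
       (value (* 1.0 1.0))            ; W = 1.0 times Sji = 1.0, one matched constraint
       (noise 0.05))                  ; one (made-up) sample of logistic noise
  (+ bottom-up spatial value noise))  ; approximately 1.46
```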
The easier, more obvious parameters are:
This controls the s-value of the logistic noise distribution, analogous to (transient) chunk activation noise. (Larger numbers mean noisier.)
This sets the minimum salience value a feature must have to be returned by a +visual-location> request. Analogous to the retrieval threshold.
Controls the total W divided up between the different value constraints. Analogous to goal activation for declarative memory.
What's the Sji between a specified value in the +visual-location> request (e.g., color red) and that value on the feature? Defaults to 1.0.
Now for the trickier ones:
By default, the system assigns an Sji of *max-salience-sji* when the value in the feature matches the value in the +visual-location> request, and 0 otherwise. So, for example, if you ask for color red and the feature in question has a color of "orange," it gets no salience boost. If you'd like something more graded (more like partial matching), then supply a hook function. The hook function needs to take three arguments:
- slotname: the name of the slot being used right now (e.g., color, val)
- criterion: what is specified in the +visual-location> request
- value: the value of that slot for the feature

This function should return a number (the Sji, which should be less than or equal to *max-salience-sji*). So, if your function is called with (color red orange), you might want to return 0.5 instead of 0.0.
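A graded color-matching hook might look like this (the function name and the red/orange similarity value of 0.5 are made up for illustration; see the module source for how the hook is installed):

```lisp
(defun graded-color-sji (slotname criterion value)
  "Return a graded Sji instead of the default all-or-none match."
  (cond ((equal criterion value) *max-salience-sji*)  ; exact match: full strength
        ((and (eq slotname 'color)
              (or (and (eq criterion 'red)    (eq value 'orange))
                  (and (eq criterion 'orange) (eq value 'red))))
         0.5)                                         ; near-miss colors: partial credit
        (t 0.0)))                                     ; everything else: no boost
```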
This controls a great deal of the behavior of the system, at least for bottom-up salience. It is the list of slots that participate in the bottom-up salience computation, along with their gamma weights (that's γk in the salience equation). It should be a list of dotted pairs, where the first item in each pair is the name of the slot and the second is the gamma value to use for that slot. The default is ((size . 0.30) (color . 0.40) (val . 0.30)), which says size gets 30% of the weight, color gets 40%, and so on. The numbers should sum to 1.0.
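For example, to weight color even more heavily you might set the list like this (the variable name *gamma-weights* is a placeholder; check the module source for the actual name):

```lisp
;; Hypothetical variable name -- use whatever the module actually exports.
(setf *gamma-weights*
      '((size . 0.20) (color . 0.60) (val . 0.20)))  ; must still sum to 1.0
```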
Last modified 2006.07.19