Annual Interdisciplinary Conference, Jackson, WY, February 3-8, 2002. ABSTRACTS
================================================================================
Greg Appelbaum
University of California, Irvine
TBA
================================================================================
Harry P. Bahrick
Ohio Wesleyan University
A Metacognitive Theory of the Spacing Effect
Current theories of the spacing effect are based upon encoding variability
or diminished processing. These theories explain why massed presentation of
memory content is less effective than presentations separated by short time
intervals. They do not explain why presentations spaced at 60 days are more
effective than presentations spaced at 30 days. We present data that support
an explanation based upon conscious monitoring and control of encoding
strategies.
================================================================================
Tom Busey
Indiana University
Set-Size Effects in Identification and Localization: Theory and Data
Authors: Tom Busey and John Palmer
The effect of divided attention is different for identification and
localization. We ask whether this difference is due to perceptual processing
capacity or to the decision process. Using visual search, we measured set-size
effects for finding a target grating (left-leaning) among distractor gratings
(right-leaning). The identification task was yes-no detection and the
localization task was to specify the target location. The observed set-size
effects were larger for localization than for identification. This difference
was shown for several spatial and temporal frequencies and controls ruled out
explanations based on task difficulty, sensory factors, and response measures.
The different decision requirements for the two tasks were modeled using signal
detection theory, assuming unlimited capacity for both tasks. This model
predicted much of the observed difference between tasks. Thus, the observed
difference may be due to the differences in the decision process.
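As an illustration of this style of model (a minimal sketch under stated
assumptions, not the authors' implementation): with unlimited capacity, each
display location can be treated as an independent Gaussian evidence sample, and
the two tasks differ only in the decision rule applied to the same samples.
The d' value and criterion below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    dprime = 1.5           # target-distractor discriminability (illustrative)
    n_trials = 100_000

    for set_size in (2, 4, 8):
        # Each location yields a noisy "left-leaning" evidence value; the
        # target location has mean d', distractor locations have mean 0.
        noise = rng.normal(0.0, 1.0, (n_trials, set_size))
        signal = noise.copy()
        signal[:, 0] += dprime          # target placed in location 0

        # Yes-no identification: respond "present" if the maximum sample
        # exceeds a fixed criterion.
        crit = dprime / 2
        hits = (signal.max(axis=1) > crit).mean()
        fas = (noise.max(axis=1) > crit).mean()

        # Localization: report the location of the maximum sample; correct
        # only when the target location wins.
        loc = (signal.argmax(axis=1) == 0).mean()
        print(f"set size {set_size}: hits {hits:.2f}, "
              f"false alarms {fas:.2f}, localization {loc:.2f}")

Running such a simulation shows how the same unlimited-capacity samples can
produce different set-size costs under the two decision rules.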
================================================================================
Lawrence K. Cormack
University of Texas at Austin
Relating Image Properties and Eye Fixation
Author(s): L. K. Cormack, U. Rajashekar, A. C. Bovik and W. S. Geisler
Efficient selection of fixation points must ultimately be based on image data,
yet the image properties that attract gaze are largely unknown. It is thus
difficult to, e.g., implement good fixation strategies in foveated artificial
vision systems. We therefore sought to elucidate the image properties that
attract gaze by combining accurate eye tracking with modern image analysis
techniques.
In one paradigm, subjects searched for targets embedded in 1/f noise. The
noise in a region of interest (ROI) around each fixation was averaged over many
trials, yielding gaze attraction images analogous to the discrimination images
of [1]. In a second paradigm, subjects studied several hundred natural images,
and ROIs around each fixation were accumulated. These ROIs were then subjected
to Principal Components Analysis (PCA) to reveal commonalities (other techniques
including Independent Component Analysis are currently being explored).
Gaze attraction images indicate that subjects fixate likely targets (as
opposed to randomly sampling the image), and they often search for a
characteristic feature instead of the entire target. Results from the second
paradigm indicate that statistics of ROIs in natural scenes are often different
from those of randomly selected regions from the same images.
The pixel-averaging technique of [1] can be successfully combined with
accurate eye tracking to reveal image structure that attracts gaze. This
technique can potentially reveal image structure that draws fixation in a wide
variety of search tasks. Accurate eye-tracking can also be combined with image
analysis techniques such as PCA to reveal statistics of natural images at the
point of fixation. This promises to complement recent work on natural image
statistics and their relationship to the neurophysiological properties of the
visual system.
[1] Beard and Ahumada (1998) SPIE Proc. Human Vis. and Elec. Im. III, v3299.
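A minimal sketch of the two analysis steps described above, assuming fixations
are available as pixel coordinates; the function names, ROI size, and number of
components are illustrative choices, not the authors':

    import numpy as np

    def gaze_attraction_image(stimuli, fixations, half=32):
        """Average the noise patch around each fixation over many trials.

        stimuli   : list of 2-D arrays (one noise image per trial)
        fixations : list of (row, col) fixation coordinates, one per trial
        half      : ROI half-width in pixels (illustrative value)
        """
        rois = []
        for img, (r, c) in zip(stimuli, fixations):
            patch = img[r - half:r + half, c - half:c + half]
            if patch.shape == (2 * half, 2 * half):  # skip edge fixations
                rois.append(patch)
        return np.mean(rois, axis=0)

    def roi_principal_components(rois, k=8):
        """PCA (via SVD) of mean-subtracted ROIs; first k components."""
        X = np.array([r.ravel() for r in rois], dtype=float)
        X = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        side = int(np.sqrt(Vt.shape[1]))
        return Vt[:k].reshape(k, side, side)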
================================================================================
Denis Cousineau
Université de Montréal
Blocking the Search and Other Illusory Conjunctions
I will present two sets of intriguing results. The experiment involved visual
search for one of four possible artificial targets among 1 to 4 artificial
distractors. The subjects were well trained (75 hours). Unbeknownst to them, we
manipulated the moment at which diagnostic information was presented, using
asynchronous presentation at very rapid rates. In one condition the
participants apparently "froze," failing to initiate the search as soon as the
first object was presented, which calls the pop-out assumption into question.
In addition, in the final session we modified the presentation so that after
half of the information had been presented, it started disappearing while the
second half was appearing. Whereas errors had been almost non-existent before
(near 2.5%), the error rate now jumped to 25%, suggesting massive illusory
conjunctions. This complicates our earlier conclusion about pop-out and calls
for a better definition of the relevant level of information in the stimuli.
================================================================================
Simon Dennis
University of Queensland
Category Effects in Episodic Recognition: Are Item Noise Accounts Sufficient?
Authors: Angela Maguire, Michael Humphreys and Simon Dennis
Item noise accounts assert that interference in episodic recognition is
generated by overlap between the representation of a test item and the items
that appeared in the study list. If a study list contains items from
categories, such accounts predict that discriminability between targets and
distractors from a presented category should degrade as the number of items in
the category increases. In addition, item noise models have no mechanism for
distinguishing between taxonomic categories, such as Brisbane, Sydney and
Melbourne, and associative categories such as bed, snooze and night. Furthermore,
item noise models suggest that within category discrimination should be easier
than between category discrimination where both categories were presented at
study.
In a series of eight experiments, the nature of the category (taxonomic versus
associative), the number of items from the category (one versus five),
presentation type (blocked versus distributed) and presentation location within
the category (first item versus last item) were manipulated. Recognition memory
was assessed using both yes/no and two-alternative forced choice procedures,
where the distractor items were either the most representative exemplar from the
taxonomic categories, or the prototype from the associative categories.
For taxonomic categories, while there were strong bias effects as a function
of number of items in the category in yes/no recognition, the only effect on
discriminability was a lower A' for items in the distributed, first
presentation, five item category condition. Note this is the one condition for
which participants may be unaware that the study list contains categories during
encoding. Furthermore, in the forced choice paradigm, there was no effect of
the between/within category manipulation.
For associative categories, again there were strong bias effects as a function
of number of items in yes/no recognition. However, there was also a decrease in
both yes/no and forced choice discriminability as the number of items increased.
In addition, there was a decrease in yes/no discriminability for the last item
in the five item categories for blocked presentation. Such a pattern is
inconsistent with item noise accounts, but would occur if participants were
activating the prototypical item during study (i.e. an implicit associative
response), storing an item to context association, and then using the item to
retrieve contexts at test (i.e. a context noise model).
===============================================================================
Nick Donnelly
University of Southampton
Searching for Targets in Configurations:
Reaffirmation of the Role of Collinearity in Determining Search Efficiency
Donnelly, Humphreys and Riddoch (1991; see also Donnelly, Weekes, Humphreys
and Albon, 1998, and Humphreys and Donnelly, 2000) reported that search for
vertex targets was more efficient when the configuration of distractor vertices
formed a shape on target-absent trials than when they grouped but did not form
a shape. The
authors attributed the difference between conditions to the rapid computation of
collinearity between distractors, via edge-interpolation, although other
interpretations were possible. In the present study, an alternative account
based on shape-template matching is examined in a series of experiments using
Kanizsa-type inducers rather than the line vertices used in earlier studies.
The results show that Kanizsa-type inducers do not support the efficient
detection of targets. The data are consistent with edge-interpolation, but not
with shape-template matching, as the mechanism enabling efficient search.
Donnelly, N., Humphreys, G. W. and Riddoch, M. J. (1991). Parallel computation
of primitive shape descriptions. Journal of Experimental Psychology: Human
Perception and Performance, 17, 561-570.
Donnelly, N., Weekes, B., Humphreys, G. W. and Albon, A. (1998). Processes
involved in the computation of a shape description. Journal of Experimental
Psychology: Human Perception and Performance, 24, 1119-1130.
Humphreys, G. W. and Donnelly, N. (2000). 3D constraints on spatially parallel
shape perception. Perception and Psychophysics, 62, 1060-1085.
================================================================================
Barbara Dosher
University of California, Irvine
TBA
================================================================================
Mario Fific
Indiana University
Parallel vs Serial Processing and Individual Differences in Visual and Memory
Search Tasks Revealed by Systems Factorial Technology
Author(s): Mario Fific and James Townsend
Methodology involving factorial variation in order to determine mental
architecture and to assess processing capacity has been greatly expanded over
the past several decades. However, it has never been adapted to study the
realm where much of the interest began in the 1960s: short-term memory search.
We present a new method of manipulating probe-to-memory item processing speed
and our initial results for loads n=2 for both memory and visual search tasks.
Three variables were manipulated in this experiment: number of processing
elements (2), phonemic (memory search) and graphemic (visual search)
dissimilarity of a target to the particular memorized element (high, low) and
temporal characteristics of the tasks (short and long ISI in memory search and
duration of target exposure in visual search). We employ recent results
involving distribution functions rather than means alone. Our results in
memory search suggest that some observers really are serial whereas others are
strongly parallel. Thus, these fine-grained analyses portend quite striking
individual differences in this basic cognitive task. However, results from
visual search suggested an architecture that is compatible with neither
independent-channels parallel nor serial processing. Further, fewer individual
differences between observers were observed.
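For reference, the distributional analysis alluded to above is usually carried
out with the survivor interaction contrast of systems factorial technology
(Townsend & Nozawa, 1995):

    \mathrm{SIC}(t) = [S_{LL}(t) - S_{LH}(t)] - [S_{HL}(t) - S_{HH}(t)],
    \qquad S_{xy}(t) = P(\mathrm{RT} > t)

where the subscripts give the processing speed (high or low) induced for the
two elements. Serial exhaustive processing predicts an SIC that is negative
early and positive late, independent parallel first-terminating processing
predicts an entirely non-negative SIC, and parallel exhaustive processing an
entirely non-positive one, which is why distribution-level data can separate
architectures that mean RTs cannot.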
================================================================================
Greg Francis
Purdue University
Quantitative Models of Visual Backward Masking:
They are all Correct...No wait...They are all Incorrect
Visual backward masking refers to a class of phenomena where the percept of a
briefly presented target stimulus is greatly reduced by the subsequent
presentation of a mask stimulus. Backward masking has been studied for over a
century, so it was significant when Di Lollo et al. (2000) reported experimental
findings on backward masking that they concluded were inconsistent with all
current theories of masking. I show that their conclusion is not valid.
Analysis and simulations of four current quantitative models of backward masking
show that three of the four models can account for the experimental data that
Di Lollo et al. used to reject existing models. The fourth model could be
easily altered to account for the data. Thus all of the models are correct.
When the stimulus onset asynchrony (SOA) between the target and mask stimuli
is experimentally varied, one can measure reports of the target percept as a
function of SOA to produce what is called a masking function. Depending on the
properties of the target and the mask, the masking function may be monotonic
(with the maximum masking occurring at an SOA of zero) or u-shaped (with the
mask having its strongest effect for a positive SOA). Analysis of the models
demonstrates that they predict monotonic masking functions when the mask is
strong and u-shaped masking functions when the mask is weak. Thus, for
a fixed target and task the masking functions should never intersect: monotonic
masking functions must always lie below u-shaped masking functions, regardless
of the properties of the mask. A new experiment rejects this prediction,
thereby indicating that all current quantitative models of backward masking are
incorrect.
===============================================================================
Bill Geisler
University of Texas
Bayesian Natural Selection and the Evolution of Perceptual Systems
In recent years, there has been much interest in characterizing the
statistical properties of natural stimuli in order to better understand the
design of perceptual systems. A fruitful approach has been to compare the
processing of natural stimuli in real perceptual systems with that of ideal
observers derived within the framework of Bayesian statistical decision theory.
While this form of optimization theory has provided a deeper understanding of
the information contained in natural stimuli as well as of the computational
principles employed in perceptual systems, it does not directly consider the
process of natural selection, which is ultimately responsible for design. Here
we propose a formal framework for analyzing how the statistics of natural
stimuli and the process of natural selection interact to determine the design of
perceptual systems. The framework consists of two complementary components.
The first is a maximum-fitness ideal observer, a standard Bayesian ideal
observer with a utility function appropriate for natural selection. The second
component is a formal version of natural selection based upon Bayesian
statistical decision theory. Maximum-fitness ideal observers and Bayesian
natural selection are demonstrated in several examples. We suggest that the
Bayesian approach is appropriate not only for the study of perceptual systems
but for the study of many other systems in biology.
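In the generic notation of Bayesian decision theory (the symbols here are not
taken from the talk), the maximum-fitness ideal observer selects the response
that maximizes expected fitness rather than, say, accuracy:

    r^{*}(s) = \arg\max_{r} \sum_{\omega} u(r, \omega)\, p(\omega \mid s)

where s is the received stimulus, \omega ranges over states of the world,
p(\omega \mid s) is the Bayesian posterior, and u(r, \omega) is a utility
function measuring the fitness consequences (e.g., expected surviving
offspring) of making response r when the true state is \omega.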
===============================================================================
Joetta Gobell
University of California, Irvine
TBA
================================================================================
Jason M. Gold
Indiana University
Characterizing Visual Memory Decay with External Noise
Authors: J. M. Gold, R. Sekuler, R. F. Murray, A. B. Sekuler and P. J. Bennett
The ability to recall visual patterns from short-term memory often declines
with the passage of time. There are two possible sources for this decline. One
possibility is that internal variability or "noise" introduced during the
process of storage and retrieval grows over time. A second possibility is that
internal noise remains constant, but the non-stochastic parts of the internal
operations performed during storage and retrieval become less optimal over time.
One way to distinguish between these possibilities is to measure changes in
performance as varying amounts of externally added noise are introduced into a
task. We applied this "external noise masking" technique to a visual pattern
discrimination task that involved the use of short-term visual memory. The task
required observers to perform same/different discriminations with pairs of
randomly generated noisy textures, separated by one of three time delays (100,
500, or 2000 ms). Increasing the delay between stimuli had little or no effect
on internal noise, but reduced the efficiency of the non-stochastic parts of the
internal operations by about 200%. In a subsequent experiment, externally added
noise was used to determine which spatial frequencies observers relied upon to
perform the pattern matching task at the long and short delays. The results
showed that at least part of the reduction in efficiency with longer delays was
due to observers' greater reliance on uninformative high spatial frequencies
outside of the stimulus band. Possible explanations for the shift to higher
frequencies will be discussed.
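The logic of the method can be summarized with the standard
equivalent-input-noise decomposition (a textbook form, not a formula quoted
from the talk):

    E_{\theta} = k\,(N_{\mathrm{ext}} + N_{\mathrm{eq}}), \qquad k \propto 1/\eta

where E_{\theta} is threshold contrast energy, N_{\mathrm{ext}} the external
noise spectral density, N_{\mathrm{eq}} the observer's equivalent internal
noise, and \eta the calculation efficiency. A manipulation that raises
N_{\mathrm{eq}} shifts the threshold-versus-noise function horizontally,
whereas one that lowers \eta scales it vertically; the delay effects reported
above show the second signature.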
================================================================================
Todd Handy
Dartmouth College
Visual Sensory Gain in Action-related Processing
Authors: Todd C. Handy, Sarah Ketay, Scott T. Grafton, & Michael S. Gazzaniga
Research on sensory gain in early visual cortex is typically predicated on the
assumption that sensory gain serves to enhance the perception of stimuli in
attended visual field locations. In a recent series of event-related potential
(ERP) experiments we have been investigating whether sensory gain may be
involved as well in visuomotor processing. In particular, evidence suggests
that there is a tendency to orient one's visual spatial attention towards
objects that afford manual interactions (i.e., "graspability"), relative to
objects that don't. The effect -- measured at the level of the lateral
occipital P1 ERP component -- is non-volitional and is stronger in the right
visual field. The results have direct implications for attention-related models
of sensory gain, object competition, and visuomotor processing.
===============================================================================
Erin Harley
University of Washington
Is Hindsight 20/20?: Evidence for Hindsight Bias in Visual Perception Tasks
Authors: Erin Harley and Geoffrey Loftus
This research addresses whether or not a hindsight bias exists for visual
perception tasks, and if so, under what conditions. In two experiments
participants were asked to search for digits hidden in visual noise.
Experiment-1 participants were given information to aid them in the search
process (digit, location, or both), and then estimated what their
performance would have been without the aid. Estimates were compared to
performance on trials in which no aid was given. Experiment-2 participants
were shown the hidden digits (i.e., were provided outcome information) and
predicted the performance of other observers who received no outcome
information. A traditional hindsight bias was found in Experiment 1 and in
Experiment 2 when outcome information was presented last, but a reverse bias
was found in Experiment 2 when outcome information was presented first.
===============================================================================
David Heeger
Stanford University
Neural Basis of the Motion Aftereffect
Authors: David Heeger and Alex Huk
Several recent fMRI studies have reported response increases in human MT+
correlated with perception of the motion aftereffect (MAE). However, MT+
responses can be strongly affected by attention, and subjects may naturally
attend more strongly during the MAE than during controls without MAE. We found
that requiring subjects to attend to the motion of the stimulus on both MAE and
control trials produced equal levels of MT+ response, suggesting that attention
may be a major confound in the interpretation of previous fMRI MAE experiments;
in our data, attention appears to account for the entire effect. After
eliminating this confound, we sought to measure direction-selective motion
adaptation in human visual cortex. We observed that adaptation produced a
direction-selective imbalance in MT+ responses (as well as earlier visual areas
including V1), and yielded a corresponding psychophysical asymmetry in speed
discrimination thresholds. These findings provide physiological evidence of a
population-level response imbalance related to the MAE, and quantify the
relative proportions of direction-selective neurons in human cortical visual
areas.
================================================================================
David E. Huber
University of Colorado, Boulder
How is the Brain Able to Identify Items with Minimal Interference from Prior
Presentations?
Author(s): David E. Huber and Randall C. O'Reilly
In perceptual areas of the brain, rapid neural adaptation is ubiquitously
observed. We propose that this rapid adaptation serves a crucial function,
reducing interference between items presented in succession. The biological
mechanism of synaptic depression, which causes a short-term, transient decrease
in synaptic efficacy as a function of neural activity, is likely a major
contributor to this adaptation. Thus, we specifically propose that synaptic
depression reduces item-specific interference by diminishing the activation of
already-identified items. We derive a rate-coded version of synaptic depression
and implement this activation function within a neural network processing
hierarchy. In the hierarchy, early levels integrate and identify more rapidly
than later levels. Mutual inhibition at each level of the hierarchy explains a
variety of u-shaped general interference results in which, following the
presentation of a first item, identification for a dissimilar second item is
spared for short and long delays but not for intermediate delays; because
synaptic depression results in an n-shaped identification response, this
produces u-shaped interference. We account for general and item-specific
interference effects in the domain of brief, near-threshold word identification
and discuss broad applications of the theory to other perceptual interference
phenomena.
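A minimal rate-coded sketch of the kind of depression dynamics described above
(Tsodyks-Markram-style resource depletion; parameter values are illustrative,
not those of the authors' model):

    import numpy as np

    def depressing_synapse(rate, dt=0.001, tau_rec=0.5, use=0.4):
        """Rate-coded synaptic depression.

        rate    : presynaptic firing rate over time (1-D array, Hz)
        tau_rec : recovery time constant of synaptic resources (s)
        use     : fraction of resources consumed per unit of activity
        Returns the effective postsynaptic drive, rate * resources.
        """
        x = 1.0                          # available synaptic resources
        drive = np.empty_like(rate)
        for i, r in enumerate(rate):
            drive[i] = r * x
            # resources recover toward 1 and are depleted by activity
            x += dt * ((1.0 - x) / tau_rec - use * x * r)
            x = min(max(x, 0.0), 1.0)
        return drive

    # A sustained input produces a transient envelope: the drive peaks at
    # stimulus onset and then sags as resources deplete.
    t = np.arange(0.0, 2.0, 0.001)
    inp = np.where((t > 0.2) & (t < 1.5), 20.0, 0.0)   # 20 Hz step input
    out = depressing_synapse(inp)

Because the drive rises, sags, and then releases, the identification response
to a sustained item follows the n-shaped profile that, with mutual inhibition,
yields the u-shaped interference described above.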
===============================================================================
Petr Janata
Dartmouth College
Cognitive Binding of Auditory and Visual Objects into Semantic Concepts
Authors: Petr Janata and Reginald B. Adams
This paper presents a brief history and introduction to the idea of
environmental sounds representing "auditory objects" in semantic memory, along
with new behavioral and functional neuroimaging data from a study in which
subjects performed a common name verification task. This task requires that
subjects verify the match between a presented written word and either a
corresponding sound or picture. It therefore requires integration of semantic
concepts that are accessed via the lexicon and semantic concepts accessed via
sensory-specific object representations. While the mechanisms underlying this
integrative process have received significant attention in the visual domain,
the auditory parallel, i.e. the manner in which environmental sounds are bound
with associated semantic concepts, has not. Consistent with a theory of the
inferior frontal gyrus that emphasizes semantic, phonological, and working
memory functions, our results support a complementary hypothesis that the IFG
region of the prefrontal cortex in both hemispheres is also involved in binding
sensory-specific object representations into polymodal aggregates, i.e. more
general concepts in semantic memory.
===============================================================================
Geoffrey R. Loftus
University of Washington
Perceptual Interference in Face Recognition
Bruner and Potter (Science, 1964) measured observers' ability to recognize
pictures of objects, initially seen blurred, then gradually focused. Pictures
initially seen very blurred were harder to eventually recognize than pictures
initially seen moderately blurred. I report research that replicates this
"perceptual interference effect" using recognition of celebrity faces. This
work is an offshoot of the idea that a distant face can be represented by
blurring: Combining face size, face distance, and the human modulation transfer
function allows construction of a theoretically equivalent filtered (blurred)
face. Therefore one can visually represent a face seen at any given distance
either by shrinking it to simulate the visual angle or blurring it to simulate
the spatial-frequency composition corresponding to the distance. Representing
distance by either blurring or shrinking produced the perceptual-interference
effect: Faces beginning at 500 feet away needed to move closer for eventual
recognition than faces beginning at 250 feet away.
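A hypothetical sketch of the blurring-for-distance construction, with a
Gaussian low-pass standing in for the human modulation transfer function used
in the actual work; the cutoff value and sigma rule are invented for
illustration:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_for_distance(face, face_width_ft, distance_ft, cutoff_cpd=8.0):
        """Blur a face image to approximate viewing at a given distance.

        cutoff_cpd : cycles/degree assumed to survive transmission
        """
        # visual angle subtended by the face, in degrees
        angle = np.degrees(2 * np.arctan(face_width_ft / (2 * distance_ft)))
        cycles_per_face = cutoff_cpd * angle
        # Gaussian low-pass whose cutoff (~1 / (2*pi*sigma) cycles/pixel)
        # preserves roughly cycles_per_face cycles across the image width
        sigma = face.shape[1] / (2 * np.pi * max(cycles_per_face, 0.1))
        return gaussian_filter(face, sigma)

Doubling distance_ft roughly halves the retained spatial-frequency band, which
is the shrinking/blurring equivalence described above.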
===============================================================================
Zhong-Lin Lu
University of Southern California
TBA
===============================================================================
Kenneth J. Malmberg
Indiana University
Effects of Similarity, Repetitions, and Normative Frequency on Memory:
Registration With Learning?
Author(s): Kenneth J. Malmberg, Jocelyn E. Holden, Richard M. Shiffrin
Explicit memory improves as the number of times an item is studied increases.
Given this, the situation that Hintzman, Curran, and Oppy (1992) described --
in which repetitions of an item apparently do not much improve recognition
performance -- is noteworthy. In this paradigm, subjects study items up to 25
times at spaced intervals. At test, old items (targets; e.g., "TOAD"), new
items (foils; e.g., "ROPE"), and items that are perceptually very similar to
studied items (similar foils; e.g., "TOADS") are presented, and the task is to
determine how many times the test item was studied (i.e., a judgment of
frequency or JOF). JOFs for targets and similar foils increase steadily as the
number of repetitions increases, but the ability to discriminate between targets
and similar foils does not improve after the first 2 or 3 presentations.
Hintzman et al. (1992) termed this finding "registration without learning"
because the increase in JOFs indicates that each presentation is "registered" in
memory, but the features critical for discriminating targets and similar foils
are not "learned" after the first 2 or 3 presentations. In this study, we test
two hypotheses: 1.) The registration without learning observed by Hintzman et
al. was due to ceiling effects, and 2.) The word-frequency effect for
recognition is due to differences in the way high- and low-frequency words are
represented in memory. A "dual-process" REM model (cf. Malmberg, Steyvers,
Stephen, & Shiffrin, in press; Shiffrin & Steyvers, 1997, 1998) is described
that accounts for our findings.
================================================================================
Richard Murray
University of Toronto
Snapshots of Perceptual Organization
Authors: Richard F. Murray, Jason M. Gold, Patrick J. Bennett, and Allison B.
Sekuler
Visual patterns can differ greatly in their perceived simplicity and their
perceptual organization, even if they are very similar in their local properties
(e.g., a Kanizsa square vs. a random placement of Kanizsa inducers). These
differences in perceptual organization have an enormous influence on how well
observers perform visual tasks. Why is this? How do differences in perceptual
organization affect performance of visual tasks, and what does this tell us
about visual processing?
To answer this question, we carried out psychophysical experiments using the
response classification method, which reveals how an observer uses various parts
of a visual stimulus to perform a task. We found that observers used only one
or two simple features of a stimulus to perform visual tasks, even if the
stimulus had many features that gave information relevant to the task. In
stimuli with simple perceptual organizations, observers used large parts of the
stimuli that were perceptually grouped into a single edge or contour. In
stimuli that appeared as a collection of unorganized fragments, observers used
only smaller, local edges or contours.
These results indicate that the features observers use to perform visual tasks
are strongly influenced by perceptual organization: observers use only one or
two stimulus features, and the features that are available depend on the
perceptual organization of the stimulus.
I will also discuss some surprising inefficiencies that these response
classification experiments revealed about observers' judgements of even simple
stimulus properties, such as the orientation of edges.
===============================================================================
Thomas Nelson
University of Maryland
Analysis in Metacognition and Memory: Data Collection vs. Data Analysis
===============================================================================
Miguel Nussbaum
Catholic University of Chile
Developing Technical Support for Establishing Face-to-Face Relations Between
People Who Do Not Know Each Other
Author(s): Miguel Nussbaum and Roberto Aldunate
Our question is how to support social encounters between people who don't know
each other, are near each other, and share similar mental models and matching
needs. For example, suppose there are several people in a bus, each with a PDA
(personal digital assistant, e.g., a Palm Pilot, iPAQ, etc.), and that each PDA
is equipped with a short-range data communicator (e.g., Bluetooth, Wi-Fi).
Through the wireless network, the PDAs could communicate (i.e., send messages
to each other) and form an ad hoc network within the bus. That is, each PDA
runs a piece of software called an Agent; each of these Agents communicates and
exchanges data. In this way a distributed network of Agents runs inside the
bus. Such a scheme preserves people's mobility while the distributed network
of Agents constantly enables people in their vicinity to monitor and
communicate with each other.
Mental models are built using principles of social psychology. Agents
constantly monitor other agents' mental models and, in the models' intersection,
heuristically verify if their current necessities are shared. The network is
built using portable devices (Compaq iPAQ) with a Wi-Fi (IEEE 802.11b) wireless
network card. Since Wi-Fi is a short-range network (around 300 feet), it
restricts interactions to Agents that are physically close.
In this way technology becomes a facilitator of casual encounters and not only
a communication medium. The current work of this project is to identify the
constituents of the mental models on one side, and to construct the distributed
Agents in a wireless network on the other.
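A hypothetical sketch of the Agents' matching step, following the description
above (mental-model intersection checked against current needs); the field
names, weights, and threshold are invented for illustration:

    def match_score(my_model: dict, other_model: dict) -> float:
        # intersection of the two mental models, plus matched needs/offers
        shared = set(my_model["interests"]) & set(other_model["interests"])
        needed = set(my_model["needs"]) & set(other_model["offers"])
        return len(shared) + 2 * len(needed)  # weight matched needs higher

    def should_notify(my_model: dict, other_model: dict, threshold=3) -> bool:
        """Alert both users to a possible face-to-face encounter."""
        return match_score(my_model, other_model) >= threshold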
================================================================================
Tatiana Pasternak
University of Rochester
Activity of MT Neurons is Affected by Remote Visual Stimuli Used in a Working
Memory Task
Area MT neurons have relatively small receptive fields representing the
contralateral hemifield and have been shown to play an important role in
processing of visual motion. Recent lesion, microstimulation and psychophysical
studies have implicated MT in temporary storage of visual motion (Bisley and
Pasternak, 2000; Bisley et al., 2001; Zaksas et al., 2001). We recorded the
activity of MT neurons during the performance of a working memory task in which
the remembered (sample) and the comparison (test) stimuli were placed in
opposite hemifields. This manipulation allowed us to determine whether the
activity of MT neurons during the performance of the memory task is strictly
retinotopic and generated locally or reflects the top-down influences of
cortical areas that have access to the information from the entire visual field.
During the performance of the task, many MT neurons altered their activity
when stimuli were presented in the portion of the visual field contralateral to
their receptive fields. After the sample was presented in the receptive field,
MT neurons showed transient elevated activity during the early portion of the
delay, followed by decreased activity in the middle portion of the delay.
Furthermore, many neurons showed activation to the test stimuli presented in the
remote location. In contrast, many MT neurons showed inhibition during the
presentation of the sample in a remote location. Under these conditions,
activity early in the delay was reduced or absent but increased during the
middle portion of the delay.
Since area MT is strongly retinotopic, changes in the neural activity produced
by stimuli presented in the contralateral hemifield suggest that this activation
is unlikely to be generated locally through connections within MT. Rather, this
activity may be indicative of the top-down influences of cortical areas with
access to information from the entire visual field. These areas, along with
area MT, may form the circuitry underlying the ability to remember visual
motion.
Bisley JW, Pasternak T (2000) The multiple roles of visual cortical areas MT/MST
in remembering the direction of visual motion. Cerebral Cortex 10:1053-1065.
Bisley JW, Zaksas D, Pasternak T (2001) Microstimulation of Cortical Area MT
Affects Performance on a Visual Working Memory Task. J Neurophysiol 85:187-196.
Zaksas D, Bisley JW, Pasternak T (2001) Motion information is spatially
localized in a visual working-memory task. Journal of Neurophysiology 86:912-921.
===============================================================================
Misha Pavel
Oregon Health & Science University
Augmented Cognition: New Design Paradigm
We describe a novel approach to the design of future systems and devices that
is based on the idea of augmenting human cognitive abilities. The
resulting systems are expected to significantly improve human performance on a
number of demanding, high-level cognitive tasks. In this presentation we will
first compare our approach to the traditional design of human-computer
interfaces that has been focused on specific tasks such as editing, searching,
etc. In contrast, our starting point is based on the characterization of human
cognitive limitations, and on the quantitative modeling of cognitive bottlenecks
and limitations. These quantitative models are then used to design interfaces
that could alleviate these limitations. We will argue that the notion of an
ideal observer (e.g., as used in signal detection theory), which has been used
to characterize human limitations, can be extended to support the design of
cognitive amplification devices. We will discuss several examples that are
related to the human limitations of resources including attention, fusion of
abstract information, and incorporation of uncertainty in complex decisions. An
additional key idea of this approach is that the human operator is typically not
aware of his sub-optimal strategies until his performance deteriorates and he
makes errors. To augment an operator's cognition, we continuously monitor his
behavior, help him allocate his resources, and make abstract or missing
information available to him in a convenient perceptual form prior to his own
awareness of the problem.
================================================================================
Adina Roskies
MIT
Are Ethical Judgments Intrinsically Motivational? Lessons from "Acquired
Sociopathy"
Metaethical questions are typically held to be a priori, and therefore
impervious to empirical evidence. Here I examine the metaethical claim of
belief-internalism, the position that moral beliefs are intrinsically
motivating. I argue that belief-internalists are faced with a dilemma. Either
their formulation of internalism is so weak that it fails to be philosophically
interesting, or it is a substantive claim, but can be shown to be empirically
false. I then provide evidence for the falsity of substantive belief-
internalism. I describe a group of brain-damaged patients who sustain an
impairment in their moral sensibility: although they have normal moral beliefs
and make moral judgments, they are not inclined to act in accordance with those
beliefs and judgments. Thus, they are walking counterexamples to the substantive
internalist claim. In addition to constraining our conception of moral
reasoning, this argument stands as an example of how empirical evidence can be
relevantly brought to bear on a philosophical question typically viewed to be a
priori.
================================================================================
Richard M. Shiffrin
Indiana University
Relating Recognition Memory to Categorization with Bayesian Modeling
The REM model for recognition memory was developed on the assumption that
memory storage is 'noisy' (incomplete and error prone), but that retrieval is
'optimal' in the sense that the best decision is made in the face of poor data
in memory. The theory is instantiated by assuming memory traces are separate
vectors of feature values, and that retrieval operates by comparing the test
vector to the traces, feature value by feature value. The result of such
matching is the assignment to each trace in memory of a likelihood ratio
(lambda), giving the probability that the trace had been stored as a copy of the
test item, divided by the probability that the trace had been stored due to
something else. The optimal Bayesian decision is the average of the lambdas,
called the odds (with the default criterion being an odds of 1.0). A similar
approach can be used for categorization studies, with the same lambdas playing
a key role. The relations between such models for the two paradigms, and some
relevant data from a study of categorization based on animacy, are presented.
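A compact sketch of the lambda and odds computation, following the REM
equations of Shiffrin and Steyvers (1997); the parameter values are
illustrative:

    import numpy as np

    def rem_lambda(probe, trace, c=0.7, g=0.4):
        """Likelihood ratio that `trace` was stored from `probe`.

        probe, trace : integer feature vectors; 0 in `trace` = not stored
        c : probability a stored feature was copied correctly
        g : geometric parameter of the feature-value distribution
        """
        lam = 1.0
        for p, t in zip(probe, trace):
            if t == 0:
                continue                 # nothing stored: uninformative
            if t == p:                   # matching feature value t
                base = g * (1 - g) ** (t - 1)
                lam *= (c + (1 - c) * base) / base
            else:                        # mismatching stored feature
                lam *= 1 - c
        return lam

    def recognize(probe, traces, criterion=1.0):
        """Optimal decision: average the lambdas (the odds), compare to 1."""
        odds = np.mean([rem_lambda(probe, t) for t in traces])
        return odds > criterion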
================================================================================
George Sperling
University of California, Irvine
TBA
================================================================================
Mark Steyvers
Stanford University
Bayesian Networks and Causal Reasoning
Authors: Mark Steyvers, Josh Tenenbaum and Eric-Jan Wagenmakers
The ability to infer causal relationships is crucial for scientific reasoning
and, more generally, for our understanding of the world. Recently, Bayesian
networks (e.g., Pearl, 2000; Glymour & Cooper, 1999) have been proposed as a
theoretical framework for how causal knowledge can be represented and learned
on the basis of observational and/or experimental data. In this research, we
provide a detailed computational account of human learning of causal networks.
We will show that people can learn complex causal networks on the basis of both
observational trials and intervention (experimental) trials where subjects are
allowed to manipulate a variable.
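A minimal numerical illustration (invented numbers, not the study's stimuli)
of why intervention trials are more informative than observational trials in
this framework:

    # Two-node network A -> B.
    p_a = 0.5                          # P(A=1)
    p_b_given_a = {1: 0.9, 0: 0.2}     # P(B=1 | A)

    # Observation: seeing B=1 is evidence about its cause A (Bayes' rule).
    p_b = p_a * p_b_given_a[1] + (1 - p_a) * p_b_given_a[0]
    p_a_obs = p_a * p_b_given_a[1] / p_b          # ~0.82

    # Intervention: setting B=1 severs the A -> B link ("graph surgery"),
    # so the manipulated value carries no information about A.
    p_a_do = p_a                                  # 0.5

    print(p_a_obs, p_a_do)

Because observation and intervention license different inferences, a learner's
responses on the two trial types can reveal which causal structure they have
induced.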
===============================================================================
Bosco Tjan
University of Southern California
Limitation of Ideal-Observer Analysis in Understanding Perceptual Learning
Authors: Bosco Tjan, S. T. L. Chung and D. M. Levi
Performance for a variety of visual tasks improves with practice. Various
attempts have been made to determine the functional nature of perceptual
learning. Previously, using an ideal-observer analysis (similar to Gold et
al., 1999, Nature), we reported that the improvements in identifying letters
in peripheral vision following 6 days of training were due to an increase in
sampling efficiency, not to a reduction in intrinsic noise (Chung et al.,
2001, pre-OSA). Here we reexamine the functional nature of perceptual
learning using the perceptual template model (PTM, Lu & Dosher, 1999,
JOSA-A). We used the data set collected by Chung et al (2001) in which the
change in contrast thresholds for identifying single letters embedded in
external noise were tracked over a period of 6 days. Six levels of external
noise were tested each day. Data were collected using the Method of Constant
Stimuli. Thresholds at d'=0.8, 1.7, 2.7 were determined per noise level by
fitting a psychometric function to the raw data. This resulted in three
threshold-vs-noise-contrast (TvC) functions for each day and each observer.
Fitting PTM to the TvC functions revealed that the performance improvements
observed in 4 of 5 observers were due to a reduction in internal additive
noise. Three of these 4 observers also showed a template retuning. No
significant change in the internal multiplicative noise was observed. (The
fifth observer did not show any significant change in performance.) Despite
individual differences, both internal noise reduction and template retuning
appear to be the common mechanisms of perceptual learning in peripheral
vision. This interpretation of the results contrasts sharply with the one
obtained with an ideal-observer analysis. The discrepancy is due to the
omission of a non-linearity and a stimulus-driven stochastic component
(multiplicative noise or uncertainty) from the linear amplifier model used to
interpret the human data. Philosophical implications of the current result
for the use of ideal-observer analysis will be explored.
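For reference, one standard statement of the PTM's signal-to-noise relation
(Lu & Dosher, 1999) is

    d' = \frac{(\beta c)^{\gamma}}
              {\sqrt{N_{\mathrm{ext}}^{2\gamma}
                     + N_{\mathrm{mul}}^{2}[(\beta c)^{2\gamma}
                     + N_{\mathrm{ext}}^{2\gamma}] + N_{\mathrm{add}}^{2}}}

where c is signal contrast, \beta the template gain, \gamma the transducer
non-linearity, and N_{\mathrm{add}} and N_{\mathrm{mul}} the additive and
multiplicative internal noises. Fitting this expression to the three TvC
functions per day identifies which of \beta (template retuning),
N_{\mathrm{add}}, or N_{\mathrm{mul}} changed with practice.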
===============================================================================
Chia-huei Tseng
University of California, Irvine
TBA
===============================================================================
Eric-Jan Wagenmakers
Northwestern University
Estimation and Interpretation of 1/f Noise in Human Cognition
===============================================================================
Heather A. Wild
Indiana University
Poster: Density in the Face Space: What Do We Really Mean?
Authors: Heather A. Wild and Thomas A. Busey
Valentine (1991) proposed the face-space framework to account for differences
in other- versus same-race face perception. He claims that these deficits
result from differential representation of such faces relative to faces of one's
own race, such that other-race faces are more densely clustered, i.e., closer
together, in the space. The face-space framework, however, was proposed as a
metaphorical model and was not formalized. Thus the differential density
hypothesis also was not formalized and, while widely cited in the face
perception literature, has never been directly tested. In the present study, I
collected pairwise similarity ratings of Asian and Caucasian faces from
Caucasian and Asian observers. Analyses of these data yielded face-space
representations. I developed six different mathematical formalizations of
density measures, partially based on previous work (e.g., Krumhansl, 1979; Zaki
& Nosofsky, 2001). I directly examined whether there is actually increased
density for other-race faces in the face-space representations. The results
show that whether other-race faces are actually more densely clustered depends
crucially on the definition of density that is employed, and moreover that these
differences are not consistently present for all groups of subjects. This casts
doubt on Valentine's (1991) initial hypothesis, and challenges researchers to
precisely quantify hypotheses regarding differences in the representations of
same- and other-race faces.
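Two plausible formalizations of local density in such a face space (sketches
in the spirit of Krumhansl, 1979, and not necessarily among the six measures
used in the study):

    import numpy as np
    from scipy.spatial.distance import cdist

    def krumhansl_density(points):
        """Density as summed similarity to all other faces, where
        exp(-distance) converts MDS distances into similarities."""
        d = cdist(points, points)
        return np.exp(-d).sum(axis=1) - 1.0    # drop self-similarity

    def knn_density(points, k=3):
        """Density as inverse mean distance to the k nearest neighbors."""
        d = cdist(points, points)
        d.sort(axis=1)
        return 1.0 / d[:, 1:k + 1].mean(axis=1)  # column 0 is self

Whether other-race faces come out denser can, as the study shows, depend on
which of these formalizations is adopted.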
================================================================================