MOVEMENTS AND MOMENTS IN VISION RESEARCH

The eighth Applied Vision Association Christmas Meeting was held in the Vision Sciences building at Aston University on Wednesday 17th December 2003.

Invited talks were given by:

1) Josh Solomon
(City University)

2) Tom Freeman
(Cardiff University)

3) Linda Bowns
(University of Nottingham)

 

Meeting Abstracts

Multilevel time discretization in human vision: origin of the sense of depth in an irradiation process during gaze fixation

S Artemenkov (Moscow State University of Psychology and Education, 29
Sretenka str., Moscow, Russia, 127051; E-mail: it-edu@mgppu.ru)


According to the general principles of Transcendental Psychology Methodology (TPM) (A Mirakyan, 1999, Outlines of Transcendental Psychology, Moscow, IP RAS) and previous research, it is proposed that time quantization within a gaze fixation period (GFP) in human observers is critical for visual form creation and could play a role in the sensation of depth. Using TPM, we simulated the structural process dynamics of human vision as a multilevel time discretization process with embedded and repeated GFPs and a form-creation process. This appears as an internal irradiation process (IP) with discrete time-sampling normalization, and provides for the possible emergence of 3D metrics in the perceptual representation. To study perceptual phenomena caused by IP anisotropy, we introduced an experimental method of short-duration presentation in which objects either increased (A) or decreased (B) in size. Theoretically, when A and B are shown simultaneously, then prior to GFP termination they can be represented internally as separate perceptual processes with different depth interpretations. It is therefore possible to test the model experimentally by looking for perceived depth effects with simultaneous presentations of A and B stimuli that merge into a single stimulus at the end of a GFP. High-contrast outline drawings of polygons were presented on a tachistoscope and changed size at 10-30 deg/s. Tasks included type and number identification and size comparison. Results showed that merged objects were perceived as visually distinct, as if they terminated at different depth planes. This is consistent with a qualitative model based on general TPM principles.
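To make the presentation geometry concrete, here is a minimal sketch of the A/B timing described above, assuming a linear size change that brings both objects to a common size exactly at the end of the GFP; all parameter values are illustrative, not the author's.

```python
import numpy as np

# Hypothetical sketch of the A/B presentation described above: object A
# grows and object B shrinks at a constant rate, chosen so that both
# reach a common size exactly when the gaze fixation period (GFP) ends.
# All parameter values are illustrative.

GFP = 0.25        # assumed gaze fixation period, s
RATE = 10.0       # size-change rate, deg/s (within the 10-30 deg/s range)
MERGE_SIZE = 5.0  # common size at which A and B merge, deg

t = np.linspace(0.0, GFP, 100)
size_A = MERGE_SIZE - RATE * (GFP - t)   # A increases toward MERGE_SIZE
size_B = MERGE_SIZE + RATE * (GFP - t)   # B decreases toward MERGE_SIZE

# At t = GFP the two traces coincide, i.e. the stimuli merge into one:
assert np.isclose(size_A[-1], size_B[-1])
```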

 

Why Component-Level Zero-Crossings Might Be Useful

L. Bowns (School of Psychology, University of Nottingham. E-mail: lbowns@psychology.nottingham.ac.uk)

The literature supports two hypothesised rules for combining 1D moving Fourier components, the Intersection of Constraints (IOC) and the Vector Average (VA). Bowns (Vision Research 42 (2002) 1671-1681) described a novel implementation of the IOC. The model uses zero-crossings from the outputs of oriented Gabor filters that extract component-level spatial information at successive temporal intervals. If any portion of the zero-crossings from the two components occupies the same spatial position, their motion is tracked. This essentially uses the same constraint information as the IOC. The talk reviews some of the published data used to support this implementation, including predicting results that were originally presented as supporting the VA rule and predicting reversed motion attributed to a squaring non-linearity. In addition, I shall describe new work (with David Alais) in which we tested direct predictions from this implementation. We used short-duration plaid stimuli that were perceived in either the IOC or VA direction prior to adaptation. Adapting out the initial perceived direction led to a dramatic bias in favour of the alternative direction. In an attempt to understand this large bias we varied the duration of the adaptor, and the spatio-temporal characteristics of the adaptor. The results showed that the shift required only 1 second of adaptation, and was unaffected by the spatio-temporal properties of the adaptor over a range of spatial and temporal frequencies. We conclude that both solutions are encoded and that the mechanism underlying the two solutions is not dependent on spatial or temporal tuning characteristics. No other existing model of motion is consistent with all of the above results.
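As background to the IOC/VA distinction, the following sketch computes the two rules' predicted pattern directions for a two-component plaid; it is not the zero-crossing implementation itself, and all numerical values are illustrative.

```python
import numpy as np

# Minimal sketch contrasting the two combination rules named above.
# Each 1-D component with unit normal n_i moving at normal speed s_i
# constrains the 2-D pattern velocity v through v . n_i = s_i.

def ioc(n1, s1, n2, s2):
    """Intersection of Constraints: solve the two constraint equations."""
    return np.linalg.solve(np.array([n1, n2]), np.array([s1, s2]))

def vector_average(n1, s1, n2, s2):
    """Vector Average of the components' normal velocities."""
    return 0.5 * (s1 * np.asarray(n1) + s2 * np.asarray(n2))

def unit(angle_deg):
    a = np.radians(angle_deg)
    return np.array([np.cos(a), np.sin(a)])

# Two non-orthogonal components with unequal speeds: the predicted
# plaid directions differ (about 73 vs 58 deg here), which is what
# makes the two rules experimentally separable.
n1, s1, n2, s2 = unit(20.0), 1.2, unit(80.0), 2.0
for name, v in [("IOC", ioc(n1, s1, n2, s2)),
                ("VA", vector_average(n1, s1, n2, s2))]:
    print(f"{name} direction: {np.degrees(np.arctan2(v[1], v[0])):.1f} deg")
```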

 

An equivalent noise analysis of direction integration in spatially band-pass stimuli

Steven C. Dakin, Isabelle Mareschal, and Peter J. Bex (Institute of Ophthalmology, University College London, Bath Street, London EC1V 9EL, UK. E-mail: s.dakin@ucl.ac.uk)

We examined the pooling of motion signals across space by applying an equivalent noise paradigm to the discrimination of direction in stochastic, two-dimensional moving stimuli. Specifically, we had observers estimate the overall direction (clockwise or anticlockwise of vertical-upwards motion) of a field of moving band-pass elements whose directions were drawn from a wrapped normal distribution. By estimating the smallest discriminable change in mean direction as a function of local direction variability, we were able to infer the precision of observers' representation of local direction (additive internal noise) and their efficiency at combining local motions (multiplicative internal noise). By co-varying the number of moving elements and the size of the region they occupy, we show that internal noise is determined wholly by the number of features present, irrespective of their spatial arrangement. Crucially, however, observers' performance deteriorates faster than equivalent noise predictions at high levels of directional variability. We explain this result by asserting that direction integration is achieved by 'second-stage' channels that pool motion energy across a range of directions (a process that has been linked to neurons in cortical area MT) and that are limited by multiplicative noise (as are MT neurons). Our psychophysical judgment is then modelled by comparing the maximum response of the clockwise versus anticlockwise tuned channels. This model explains the surprising breakdown in observer performance at high levels of directional variance and can account for data from all conditions using a single channel-bandwidth and multiplicative-noise setting. We discuss the implications of our findings for electrophysiological studies of MT and computational models of human motion perception.
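For readers unfamiliar with the paradigm, here is a minimal sketch of an equivalent noise fit, assuming the standard two-parameter form in which the observed direction threshold reflects local (additive internal) noise and the effective number of samples pooled; the data values are illustrative, not the authors' results.

```python
import numpy as np
from scipy.optimize import curve_fit

# One common form of the equivalent noise model:
#   sigma_obs^2 = (sigma_int^2 + sigma_ext^2) / n_samp
# where sigma_ext is the experimenter-set directional s.d. of the
# stimulus, sigma_int the additive internal noise, and n_samp the
# effective number of local samples pooled.

def en_model(sigma_ext, sigma_int, n_samp):
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # deg
thresholds = np.array([1.1, 1.3, 1.9, 3.1, 5.8, 11.4])   # deg (illustrative)

(sigma_int, n_samp), _ = curve_fit(en_model, sigma_ext, thresholds,
                                   p0=[2.0, 4.0])
print(f"internal noise ~ {sigma_int:.2f} deg, "
      f"effective samples ~ {n_samp:.1f}")
```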

 

Eye movement and the motion aftereffect

Tom CA Freeman (School of Psychology, Cardiff University, PO Box 301, Cardiff, CF10 3AG, Wales. E-mail: freemant@cardiff.ac.uk)

The motion aftereffect (MAE) is known to follow adaptation to smooth pursuit eye movements. The effect is often thought to stem from retinal motion created by the eye movement, either because the retinal positions of the adapted area and the subsequent test overlap, or because adaptation of peripheral areas induces motion at the test location. Little has been made of Chaudhuri’s alternative, namely that prolonged eye pursuit creates an extraretinal component to the MAE (Chaudhuri, 1991, Vision Research, 31, 1639-1645). Chaudhuri suggested that extraretinal MAE arises from a process of nystagmus-suppression, a mechanism that inhibits potential afternystagmus and so allows observers to maintain fixation on the stationary test. Here I review more recent experiments that test the nystagmus-suppression hypothesis and extend our understanding of extraretinal MAE. First, we show that reflexive, nystagmus-like eye movements also produce extraretinal MAE. The effect does not store, supporting the nystagmus-suppression hypothesis. We then show that adaptation to oblique eye movement gives rise to an aftereffect that, for some observers, moves obliquely at first but ends up moving vertically. The same observers report briefer MAEs following horizontal eye movement than vertical, suggesting that the oblique extraretinal MAE arises from asymmetric decay of afternystagmus in horizontal and vertical eye movement mechanisms. In comparison, observers who only report oblique movement show no difference in the duration of horizontal and vertical MAEs. Finally, we show that extraretinal MAE alters motion perception during eye movement, by demonstrating changes in the size of the Filehne illusion following eye-movement adaptation. Possible consequences for our understanding of the MAE following adaptation to simultaneous eye movement and retinal motion will be discussed.

 

Contrast discrimination and pattern masking: contrast gain control with fixed additive noise

Mark A. Georgeson, Tim S. Meese (Neurosciences Research Institute, Aston University, Birmingham B4 7ET, UK. E-mail: m.a.georgeson@aston.ac.uk)

We studied the visual mechanisms that serve to encode spatial contrast at threshold and suprathreshold levels. In a 2AFC contrast discrimination task, observers had to detect the presence of a vertical 1 c/deg test grating (of contrast Δc) that was superimposed on a similar vertical 1 c/deg pedestal grating, while in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 c/deg, or oblique 3 c/deg). When expressed as threshold contrast (Δc at 75% correct) versus mask contrast (c), our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (Δc, c) - the performance surface. In contrast discrimination, psychometric functions (p vs log Δc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope did not occur. We explored a standard gain control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (Δc) is linearized by the presence of pedestal contrasts but remains nonlinear in pattern masking.
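For readers unfamiliar with this model class, the following is a hedged sketch of a Foley-style cross-channel gain control model of the general kind described above; the abstract does not give the authors' exact equation, so the functional form and all parameter values except the cross-channel weight (about 0.20) are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Sketch of a cross-channel contrast gain control model. The lapse-rate
# parameter mentioned in the abstract is omitted here for brevity.

p, q, z = 2.4, 2.0, 5.0      # contrast-response parameters (assumed)
w = 0.20                     # cross-channel suppressive weight (abstract)
sigma = 0.2                  # additive noise of constant variance (assumed)

def resp(c, m=0.0):
    """Detecting mechanism's response to test contrast c with mask contrast m."""
    return c**p / (z**q + c**q + w * m**q)

def d_prime(c_ped, dc, m=0.0):
    """Signal-detection performance for increment dc on pedestal c_ped."""
    return (resp(c_ped + dc, m) - resp(c_ped, m)) / sigma

# Percent correct in 2AFC: P = Phi(d' / sqrt(2))
pc = norm.cdf(d_prime(c_ped=10.0, dc=2.0) / np.sqrt(2))
print(f"predicted percent correct: {100 * pc:.1f}%")
```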

 

Wherefore the Basic Colour Terms?

Lewis D Griffin (Imaging Sciences, Medical School, King’s College London. E-mail: lewis.griffin@kcl.ac.uk)

Carving up the colour solid into the eleven basic categories is a human universal [Berlin & Kay, 1969, Univ. Calif. Press; Kay & Regier, 2003, PNAS 100(15):9085-9089]. Two types of explanations have been advanced for this: physiological explanations have suggested that the categories arise naturally “from bumps on the colour solid” [e.g. Jameson & D’Andrade, 1997, In: Hardin & Maffi, Cambridge University Press, pp295-319] (which are determined by the cone absorption spectra); ecological explanations claim that convergent cultural evolution has delivered the categories as optimal for describing the world [e.g. Yendrikhovskij, 2001, J. Imag. Sci. Tech. 45(5):409-417].

Using a database of imagery compiled using Google Image, I have tested the optimality of the Basic Colour Terms as effective descriptors of the world.

Eighty thumbnails for each of 760 search terms taken from the “First Thousand Words Sticker Book” (e.g. acrobats, baby, cabbage, dance) were downloaded. Partitions of the RGB cube into different numbers and shapes of category were evaluated by expressing the colours of each image in terms of the categories, and computing how often picking the odd-one-out from three on the basis of the colour descriptors was correct when two of the three were images from the same search term and the third was not (e.g. two images of cabbages and one image of a piglet).
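The evaluation step lends itself to a small sketch. The following is a minimal, hypothetical rendering of the odd-one-out scoring; the colour-category histogram descriptor and the distance measure are assumptions, since the abstract does not specify them.

```python
import numpy as np

# Hypothetical sketch of the odd-one-out evaluation described above.

def colour_histogram(rgb_pixels, assign_category, n_categories):
    """Fraction of an image's pixels falling in each colour category.
    assign_category maps an (N, 3) array of RGB values to category indices."""
    labels = assign_category(rgb_pixels)
    return np.bincount(labels, minlength=n_categories) / len(labels)

def odd_one_out(h1, h2, h3):
    """Return the index of the descriptor furthest from the other two."""
    hs = [h1, h2, h3]
    dist = lambda a, b: np.abs(a - b).sum()
    scores = [dist(hs[i], hs[(i + 1) % 3]) + dist(hs[i], hs[(i + 2) % 3])
              for i in range(3)]
    return int(np.argmax(scores))

# A trial counts as correct when images 0 and 1 share a search term,
# image 2 comes from a different term, and odd_one_out returns 2.
```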

It was found that a partition of the RGB cube into the basic colour terms performed statistically better than chance, and no other partition could be found that performed significantly better. Control experiments showed that this was not due to the presence of manmade objects in the imagery, nor simply to the categories being about the right number and shape.

The results support the explanation of the universality of the basic colour terms as being due to convergent evolution towards an optimal solution for describing the world. This does not debar physiology from also playing an explanatory role, since it defines the colour solid that is being carved up.

 

The interaction of luminance and texture amplitude in depth perception

Gillian Hesse*, Andrew Schofield* and Mark Georgeson# (* School of Psychology, University of Birmingham, Birmingham, B15 2TT. E-mail: G.BarbieriHesse@bham.ac.uk. # Neurosciences Research Institute, Aston University, Birmingham, B4 7ET)

Previous studies have suggested separate channels for the detection of 1st-order luminance modulations (LM) and 2nd-order modulations of the local amplitude (AM) of a texture (Schofield and Georgeson, 1999, Vis Res, 39, 2697-2716; Georgeson and Schofield, 2002, Spatial Vision, 16, 59). It has also been shown that LM and AM mixtures with different phase relationships are easily separated in identification tasks, and (informally) appear very different, with the in-phase compound (LM+AM) producing the most realistic depth percept.

We investigated the role of these LM and AM components in depth perception. Stimuli consisted of a noise-texture background with thin bars formed as local increments or decrements in luminance and/or noise amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. When the luminance and amplitude changes have the same sign and magnitude (LM+AM), the overall modulation is consistent with multiplicative shading, but this is not so when the two modulations have opposite sign (LM-AM). Keeping the AM depth fixed at a supra-threshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of the raised regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM but, unexpectedly, performance for LM-AM was even better than for LM+AM. Further tests suggested that this improvement in performance is not due to an increase in the detectability of luminance in the compound stimuli. Thus, contrary to previous findings, these results suggest the possibility of interaction between 1st- and 2nd-order mechanisms in depth perception.
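The stimulus construction can be sketched briefly. The recipe below follows the general LM/AM scheme of Schofield and Georgeson (1999): a signal profile modulates the mean luminance (LM) and/or the local amplitude of a zero-mean noise carrier (AM); the carrier type, signal profile and modulation depths are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of LM+AM and LM-AM stimulus construction.
rng = np.random.default_rng(0)
L0 = 50.0                                        # mean luminance, cd/m^2
size = 256
n = rng.choice([-1.0, 1.0], size=(size, size))   # binary noise carrier
a = 0.2                                          # carrier amplitude

x = np.linspace(0, 1, size)
s = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)   # thin-bar signal profile
s = np.tile(s, (size, 1))

m_lm, m_am = 0.1, 0.1                            # modulation depths
lm_plus_am  = L0 * (1 + m_lm * s + a * n * (1 + m_am * s))
lm_minus_am = L0 * (1 + m_lm * s + a * n * (1 - m_am * s))
# LM+AM is consistent with multiplicative shading of a textured
# surface; LM-AM is not, as noted in the abstract.
```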

 

Movement aftereffects (MAEs) with varied segregation of test field and surround: spatial offsets matter but colour differences do not

John Harris, Sarah Coates (School of Psychology, University of Reading, Whiteknights, Reading RG6 6AL, UK. E-mail: j.p.harris@reading.ac.uk)

Previous work (Harris and Sullivan, 1996, Perception 25 32) showed that, after adapting to vertical stripes, drifting horizontally within a rectangular window in a surround of stationary vertical stripes, subsequent MAEs were stronger if the stationary test stripes were offset from the surrounding stripes than if they were aligned.  The test field/surround relationship might be important at two perceptual levels. At an early stage of processing, spatial offsets might prevent the propagation of local motion signals into the neural representation of the surround, and so enhance motion contrast at the edge of the test field. Alternatively (or in addition), motion signals might be gated at a higher level of analysis, depending on the perceptual segregation of test field and surround.  We measured MAE durations after a constant adaptation period of 15 seconds for 3 test/surround stripe spatial offsets (0, 90 and 180 deg phase shift). In addition, the window and surround stripes could be the same colour (grey/black or red/black) during adaptation and testing, or different (surround always grey/black, window switching from grey/black to red/black at the end of adaptation). We also presented the 9 possible test fields individually, and obtained ratings of how segregated from its surround each appeared.  Offsets increased MAE durations, but making test stripes a different colour from those of the surround did not, even though this markedly increased segregation ratings. We therefore conclude that spatial offsets do not affect MAE strength by varying perceived segregation of test field and surround.

 

Dynamic properties of suprathreshold vision in the presence of static and dynamic visual noise

V. Manahilov, G.J. McCarron, M. Freeman (Department of Vision Sciences, Glasgow Caledonian University, Cowcaddens Road, Glasgow G4 0BA, UK. E-mail: V.Manahilov@gcal.ac.uk)

It is widely believed that fast transient mechanisms operate at low spatial frequencies, while slower sustained mechanisms are activated by stimuli of higher spatial frequencies. A recent study found that static visual noise masks the sustained mechanisms and transforms the temporal responses to near-threshold finer gratings from sustained to transient (Manahilov et al., 2003, Vision Research, 43, 1855-1867). Here we sought to determine whether the responses to finer stimuli of suprathreshold contrast embedded in static noise are also transient. Using a go/no-go paradigm, we measured reaction times (RTs) to Gabor patches of suprathreshold contrast levels and spatial frequencies of 0.5 and 7 c/deg in the absence and presence of dynamic and static Gaussian noise. Dynamic noise increased the mean RTs to both low and higher spatial frequencies. Static noise delayed the mean RTs only to 7 c/deg Gabor patches at low contrast levels. When the stimuli of higher spatial frequency were presented on a pedestal of the same spatial frequency, dynamic noise increased the RTs, while static noise reduced the RTs. This result suggests that the responses to finer suprathreshold patterns embedded in static noise are transient. Recently we showed that the RT variance can be used as a measure of the efficiency of suprathreshold vision (Simpson et al., 2003, Vision Research, 43, 1103-1109). The comparison of RT variances to finer patterns with and without a pedestal revealed higher efficiency in the presence of static noise than when the stimuli were embedded in dynamic noise.

 

Role of internal noise and directional bandwidth in the oblique effect for motion

Isabelle Mareschal, Steven Dakin & Peter Bex (Institute of Ophthalmology, University College London, 11-43 Bath Street, EC1V 9EL, London, U.K. Email: i.mareschal@ucl.ac.uk)

The oblique effect is a well-documented phenomenon whereby subjects are better at judging orientation, or direction of motion, along the cardinal (horizontal or vertical) axes than along the oblique axes. We have examined whether the motion version of this effect results from differences in the directional bandwidth of detectors tuned to different directions, or from differences in the levels of internal multiplicative noise on those detectors. We had subjects judge the overall direction of a field of moving band-pass elements as a function of the directional variability of the motion signal. Subjects had to report whether the average direction was clockwise or anticlockwise of a static reference, which was presented either at 90° (vertical) or at 45° (anticlockwise from vertical). We report that direction discrimination was poorer for judgements around the oblique reference, but only at low levels of directional variability. In terms of standard equivalent noise this equates to a reduction in efficiency. We show that a channel-based averaging model produces a better account of these data than standard equivalent noise, and that this pattern of results is consistent with elevated multiplicative noise on obliquely tuned direction channels but not with changes in bandwidth. We present a further experiment bearing on this hypothesis, in which subjects judged the direction of a set of Laplacian-of-Gaussian elements (with directional s.d. = 8°) moving slightly clockwise or anticlockwise of a reference direction (either 90° or 45°) in the presence of a mask (mean direction fixed at the reference direction, directional s.d. = 8°). By varying the number of elements in the target and mask we were able to show changes in directional sensitivity that can be largely accounted for by changes in multiplicative internal noise across direction.
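One way to render the channel-based account in code is sketched below: direction channels with Gaussian tuning pool the stimulus direction distribution, are corrupted by multiplicative noise, and the decision compares the most responsive clockwise and anticlockwise channels. The tuning bandwidth, noise level and decision rule are all illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
prefs = np.arange(0, 360, 15.0)          # channel preferred directions (deg)
bw = 30.0                                # channel bandwidth (s.d., deg), assumed

def channel_responses(sample_dirs, noise_mult):
    diff = (sample_dirs[None, :] - prefs[:, None] + 180) % 360 - 180
    r = np.exp(-0.5 * (diff / bw) ** 2).mean(axis=1)   # pooled response
    # multiplicative noise: variability grows with mean response
    return r + noise_mult * np.sqrt(r) * rng.standard_normal(r.shape)

def judge(mean_dir, dir_sd, reference, noise_mult, n_elements=64):
    dirs = rng.normal(mean_dir, dir_sd, n_elements)
    r = channel_responses(dirs, noise_mult)
    cw = prefs < reference               # crude split about the reference
    return "clockwise" if r[cw].max() > r[~cw].max() else "anticlockwise"

# Elevating noise_mult for an oblique (45 deg) reference degrades
# discrimination at low directional variability, as reported above.
print(judge(mean_dir=92.0, dir_sd=2.0, reference=90.0, noise_mult=0.05))
```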

 

Perceiving edge contrast

Keith A. May, Mark A. Georgeson (Neurosciences Research Institute, Aston University, Birmingham B4 7ET, UK. E-mail: mayka@aston.ac.uk, m.a.georgeson@aston.ac.uk)

We have shown previously that a template model for edge perception successfully predicts perceived blur for a variety of edge profiles (Georgeson, 2001 Journal of Vision 1 438a; Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54). This study concerns the perceived contrast of edges. Our model spatially differentiates the luminance profile, half-wave rectifies this first derivative, and then differentiates again to create the edge's 'signature'. The spatial scale of the signature is evaluated by filtering it with a set of Gaussian derivative operators. This process finds the correlation between the signature and each operator kernel at each position. These kernels therefore act as templates, and the position and scale of the best-fitting template indicate the position and blur of the edge. Our previous finding, that reducing edge contrast reduces perceived blur, can be explained by replacing the half-wave rectifier with a smooth, biased rectifier function (May and Georgeson, 2003 Perception 32 388; May and Georgeson, 2003 Perception 32 Supplement, 46). With the half-wave rectifier, the peak template response, R, to a Gaussian edge with contrast C and scale s is given by R = Cπ^(-1/4)s^(-3/2). Hence, edge contrast can be estimated from response magnitude and blur: C = Rπ^(1/4)s^(3/2). Using this equation with the modified rectifier predicts that perceived contrast will decrease with increasing blur, particularly at low contrasts. Contrast-matching experiments supported this prediction. In addition, the model correctly predicts the perceived contrast of Gaussian edges modified either by spatial truncation or by the addition of a ramp.
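As an illustration of the pipeline (differentiate, half-wave rectify, differentiate again, then match against Gaussian-derivative templates), here is a minimal 1-D sketch; the discretisation and template normalisation are assumptions, not the authors' code.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
C, s = 0.5, 1.0                                        # edge contrast, scale
luminance = 0.5 * (1 + C * erf(x / (s * np.sqrt(2))))  # Gaussian edge

d1 = np.maximum(np.gradient(luminance, dx), 0.0)  # differentiate, half-wave rectify
signature = np.gradient(d1, dx)                   # differentiate again

def template(x, scale):
    """Gaussian first-derivative operator kernel."""
    g = np.exp(-x**2 / (2 * scale**2)) / (scale * np.sqrt(2 * np.pi))
    return -x / scale**2 * g

# Correlate the signature with templates over a range of scales; the
# scale of the best-fitting template indicates the blur of the edge.
scales = np.arange(0.5, 2.05, 0.1)
corr = [np.sum(signature * template(x, sc))
        / (np.linalg.norm(signature) * np.linalg.norm(template(x, sc)))
        for sc in scales]
best = scales[int(np.argmax(corr))]
print(f"estimated blur: {best:.1f} (true scale: {s})")
# In the full model the calibrated peak response R then yields the
# contrast estimate via C = R * pi**(1/4) * s**(3/2).
```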

 

Pupil size, visual search and memory

Gillian Porter, Tom Troscianko & Iain Gilchrist

(Dept of Experimental Psychology, University of Bristol, 8 Woodland Rd, Bristol BS8 1TN. E-mail: Gillian.Porter@bristol.ac.uk)

The extent of dilation of the pupil of the eye is a reliable measure of task-induced load during performance. Most previous studies have focused upon auditory input because the existence of the pupillary light reflex has made it difficult to isolate the "load" component of the papillary response for visual stimuli.  However, when appropriate measures to control display luminance are performed (Porter & Troscianko, 2003 [Perception, 32 supp, 156]), the pupillary response can shed light on the processes that accompany task performance for simple visual stimuli.

Pupil measures were collected while participants conducted serial visual search tasks, searching for a "c" among rotated "c"s. In keeping with Duncan & Humphreys' (1989) model [Psychological Review, 96(3), 433-458], dilation was greater when searching for a target amongst heterogeneous distractors than homogeneous distractors, indicating that more resources were required in the heterogeneous case. However, search difficulty does not appear to be determined just by the difficulty of the discrimination task: when responding to displays with many distractors, pupil size was significantly greater  than for equivalent displays with few distractors. The contribution of set-size to task difficulty suggests that these search tasks involve memory for locations visited. Subsequent experiments involved manipulating the memory component of the task, either in terms of target location (using a "one or two targets?" task, rather than "target absent or present?"), or target identity (by changing this each trial). The results suggest that both spatial and recognition memory may be involved in different visual search tasks. Such effects may be more readily investigated by pupil measures than reaction times because the former provide information on the processing that occurs during a trial.

 

Alternatives to taudot in the control of braking

Paul Rock, Tim Yates & Mike Harris (School of Psychology, The University of Birmingham. E-mail: t.yates@bham.ac.uk or harris@bham.ac.uk)

Models of the control of braking are conventionally based on taudot – the rate at which time-to-contact with the target declines over time. Taudot is an appealing perceptual variable because it is easy to extract from optic flow and bypasses the need to extract estimates of current speed and target distance. However, it is a less appealing control variable because deviations from the required value do not map simply onto the required correction in braking. Here we investigate an alternative control strategy based explicitly on the notion of “ideal deceleration”, which can readily be calculated from, for example, estimates of current speed and target distance. Although this strategy may seem less attractive from the perceptual viewpoint, it has substantial advantages from the control viewpoint, since deviations are more easily and consistently related to the required braking correction. We show that this alternative strategy accurately describes human performance in a simple braking task. We also demonstrate that, given assumptions that seem reasonable in the normal braking context, the required estimates of speed and distance can, in practice, easily be extracted from optic flow.
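The relationship between the two control variables can be made explicit. In the sketch below (illustrative values only), tau is the time-to-contact available from optic flow, and a constant taudot of -0.5 coincides with the ideal-deceleration solution.

```python
# Sketch contrasting the two control variables for an approach at
# speed v (m/s) toward a target at distance d (m).

def tau(d, v):
    """Time to contact; optically available as theta / theta_dot."""
    return d / v

def tau_dot(d, v, a):
    """Rate of change of tau under current deceleration a.
    tau = d/v, so d(tau)/dt = (-v*v + a*d) / v**2 = -1 + a*d/v**2."""
    return -1.0 + a * d / v**2

def ideal_deceleration(d, v):
    """Deceleration that brings speed to zero exactly at the target."""
    return v**2 / (2.0 * d)

d, v = 50.0, 20.0
a_ideal = ideal_deceleration(d, v)
print(f"ideal deceleration: {a_ideal:.2f} m/s^2")
print(f"taudot at ideal deceleration: {tau_dot(d, v, a_ideal):.2f}")
# -> -0.50: a constant taudot of -0.5 corresponds to ideal braking,
#    but a deviation of taudot from -0.5 does not map simply onto the
#    size of the required correction, as noted above.
```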

 

Crowding and the tilt illusion: toward a unified account

Joshua A. Solomon, Fatima Felisberti and Michael J. Morgan (Applied Vision Research Centre, Department of Optometry and Visual Science, City University, London EC1V 0HB, UK. E-mail: j.a.solomon@city.ac.uk)

Crowding, the impaired identification of peripherally viewed targets amidst similar distractors, has been explained as a compulsory pooling of target and distractor features. The tilt illusion, in which the difference between two adjacent gratings’ orientations is exaggerated, has also been explained by feature pooling, where the features correspond to maxima in Mexican-hat shaped population responses. In an attempt to establish both phenomena with the same stimuli—and account for them with the same model—we asked observers to identify (as clockwise or anticlockwise of vertical) slightly tilted targets surrounded by tilted distractors. Our results are inconsistent with the feature-pooling model: the ratio of assimilation (the tendency to perceive vertical targets as tilted in the same direction as slightly tilted distractors) to repulsion (the tendency to perceive vertical targets as tilted away from more oblique distractors) was too small. Instead, our results are better fit by a general model of modulatory lateral interaction.
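To make the pooling account concrete, here is a minimal, hypothetical sketch of feature pooling with a Mexican-hat interaction profile, in which the reported tilt is the location of the maximum of the pooled population response; all tuning parameters are illustrative.

```python
import numpy as np

thetas = np.linspace(-45, 45, 901)      # candidate orientations (deg)

def mexican_hat(d, s_exc=5.0, s_inh=15.0, w_inh=0.6):
    """Difference-of-Gaussians (Mexican-hat) response profile."""
    return (np.exp(-0.5 * (d / s_exc) ** 2)
            - w_inh * np.exp(-0.5 * (d / s_inh) ** 2))

def perceived(target, distractor, w_dist=0.5):
    """Location of the maximum of the pooled population response."""
    pooled = (mexican_hat(thetas - target)
              + w_dist * mexican_hat(thetas - distractor))
    return thetas[np.argmax(pooled)]

# A slightly tilted distractor pulls the peak toward it (assimilation);
# a more oblique distractor pushes it away (repulsion).
print(perceived(0.0, 3.0))    # positive shift: assimilation
print(perceived(0.0, 20.0))   # negative shift: repulsion
```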

 

Synaptic energy efficiency in functional colour vision

B. Vincent, R. Baddeley (Department of Experimental Psychology, 8 Woodland Road, University of Bristol, Bristol BS8 1TN, UK. E-mail: ben.vincent@bristol.ac.uk)

Physiological measurement of chromatically sensitive neurons in the early visual system reveals a functional split into three pathways: luminance, red/green and blue/yellow. The work of Buchsbaum (Buchsbaum and Gottschalk 1983 Proc. R. Soc. Lond. B 220 89-113), refined later by Ruderman (Ruderman et al. 1998 J. Opt. Soc. Am. A 15(8) 2036-2045), explains the functional split very convincingly as the optimal cone combinations to maximise information transmission by redundancy reduction.

This explanation of chromatic redundancy reduction was extended to include the spatial domain by Atick (Atick et al. 1992 Neural Comp. 4 559-572). This predicts simple type 1 receptive fields with chromatically pure red excitatory centres and green inhibitory surrounds, but fails to explain the full diversity of spatiochromatic properties of visual neurons.

Here, work is reported that examines two approaches. The first is inspired by the chromatic-opponency approach of redundancy reduction; the second is based on a functional isoluminant representation. Within these two approaches, the optimal spatiochromatic receptive fields are calculated in order to (a) encode natural images and (b) do so under a metabolically inspired constraint on synaptic activity. This metabolic approach embodies the view that neural organisation is jointly optimised to fulfil a functional task within the realistic biological constraints of energy expenditure (Vincent & Baddeley, 2003, Vision Research, 43, 1283-1290). The similarities of the model predictions and physiological measures are detailed.
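As a loose illustration of the optimisation described above, the sketch below defines a toy objective that trades reconstruction error against a synaptic-energy penalty; the abstract does not give the authors' formulation, so the linear code, the L1 penalty and all values are assumptions.

```python
import numpy as np

def objective(W, X, lam=0.1):
    """Reconstruction error plus a synaptic-energy penalty on |W|."""
    Y = X @ W.T              # responses of model neurons
    X_hat = Y @ W            # linear reconstruction from responses
    coding_error = np.mean((X_hat - X) ** 2)
    energy_cost = lam * np.abs(W).sum()   # metabolic stand-in
    return coding_error + energy_cost

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 48))      # stand-in for natural image patches
W = 0.1 * rng.standard_normal((16, 48))  # 16 candidate receptive fields
print(objective(W, X))
# Optimising W under this trade-off (e.g. by gradient descent) yields
# receptive fields shaped jointly by the coding task and the energy budget.
```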

 

Detection of changes of objects and shadows in colour and greyscale images

Michael J Wright, Athina Inneh (Department of Human Sciences, Brunel University, Uxbridge, Middlesex, UB8 3PH, U.K. E-mail: michael.wright@brunel.ac.uk)

In “change blindness” experiments, we have shown that object-based changes in full-colour scenes are more detectable than changes in shadows (Wright, Alston and Shah, AVA Xmas meeting, 2002). In the present study we confirm the superiority of detecting object changes over shadow changes for a range of natural images, controlling for the local and global contrast of the image difference. Moreover, colour may assist the differentiation of image variations due to surface reflectance from those due to shadows. We therefore asked whether the superiority for detection of object over shadow changes is altered if chromatic information is removed. 40 different image pairs were presented, either in colour or in greyscale. Observers located the quadrant of the image in which a change occurred, or else responded “no change”. Each image pair was presented once only, and contained an object change, a shadow change or no change. Each image was presented for 8 sec and the ISI was 2 sec. Changes could be additions or deletions. There were no differences between changes to different types of scene (traffic, crowd or room scenes). Both integrated contrast and a spatial filtering model failed to predict change detection when applied to the difference between images. Detection of object changes was superior to detection of shadow changes for greyscale as well as coloured images. It is proposed that a visual short-term memory representation of the first image is built up and compared with the second image. This appears to be a high-level representation, since object changes are better detected than shadow changes, and detection is unrelated (over the range measured) to the local or global contrast of the image difference or to its chromatic content.
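The contrast controls mentioned above can be sketched as follows; the definitions of integrated and local contrast here are hypothetical stand-ins, since the abstract does not specify the authors' measures.

```python
import numpy as np

def integrated_contrast(img1, img2):
    """Global contrast energy of the change between two greyscale images,
    expressed relative to mean luminance (a hypothetical definition)."""
    diff = img1.astype(float) - img2.astype(float)
    return np.abs(diff).sum() / img1.mean() / img1.size

def local_contrast(img1, img2):
    """Peak local contrast within the changed region (hypothetical)."""
    diff = np.abs(img1.astype(float) - img2.astype(float))
    changed = diff > 0
    return diff[changed].max() / img1.mean() if changed.any() else 0.0

# Matching object and shadow changes on measures like these lets the
# advantage for object changes be attributed to higher-level
# representations rather than to raw contrast of the image difference.
```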

 

Viperlib.com: the next 1500 images

Peter Thompson, Rob Stone & Elaine Pollard (Department of Psychology, University of York, York YO10 5DD, UK)

Viperlib is an image library designed for teachers and researchers in the area of perception and visual processing. In 6 months it has accrued 1500 images, all donated freely and available free for non-profit purposes. How has this been achieved, and how do we build on the success of the site? We shall describe where we have gone right and where we still need help. The reception following this talk will be, in part, sponsored by the Viperlib project, and participants' feedback on the project will be sought. It will help this endeavour if participants could visit viperlib.com in advance of the meeting to formulate suggestions for improvements and extensions to the site. Any really valuable contribution will be rewarded with the much sought-after Barry-the-snake T-shirt.

 
