-
New paper by Hesse's lab
Perspective image comprehension depends on both visual and proprioceptive information
-
New paper by Phillips's lab
Following stroke, individuals often experience reduced social participation, regardless of physical limitations. Impairments may also occur in a range of cognitive and emotional functions. Successful emotion regulation, which has been identified as important in psychological adaptation to chronic illness, is associated with better perceived psychological well-being and social functioning. However,...
-
New paper by Phillips's lab
Older adults often have problems with prospective memory - remembering to carry out future intentions such as taking medication on time. This study examined age-by-mood interactions in prospective memory. Happy, sad or neutral mood was induced in young and older adults before measuring prospective memory performance. For younger adults, both...
-
New paper by Timmerman's lab
We show that people can implicitly learn short sequences whose constituent elements they cannot see. This demonstrates both that sequence learning can be implicit (it cannot be explicit if people do not even see the sequence material) and that subliminal stimuli can be processed more extensively than previously thought.
-
New paper by Timmerman's lab
We show that people with High Functioning Autism spontaneously/implicitly take the level 1 visual perspective of others (i.e. the perspective of others interferes with reporting their own if it differs), but have difficulties doing so intentionally/explicitly (reporting the other's perspective). We suggest people with HFA's problematic attention shifts to...
-
New paper by Timmerman's lab
We show that people with High Functioning Autism do not interpret joint gaze in others as a cue to start following their gaze themselves, something healthy controls usually do because they see joint gaze as a social signal indicating upcoming interaction.
-
New paper by Timmerman's lab
This paper compares a number of different subjective awareness scales, and shows that behavioural performance influences subsequent awareness ratings in a visual identification task.
-
New paper by Timmerman's lab
Using virtual avatars (human faces) whose eye gaze responds to participants' gaze, we show that the degree to which the brain's reward system (striatum) is recruited when experiencing self-initiated joint attention predicts how natural/human-like participants experience social interaction to be, both in open-ended interaction and in cooperation. In an open-ended context...
-
New paper by Timmermans et al.
In this brief opinion paper we argue that disturbances in social interaction that characterise many mental disorders ideally lend themselves to investigations through interactive paradigms with virtual faces, which may elucidate abnormal dynamics between patients and others.
-
New paper by Hesse's lab
Previous studies have frequently applied a combination of line-bisection tasks (in which participants indicate the middle of a line) and obstacle avoidance tasks (in which participants move their hand between two obstacles) with the aim of revealing perception-action dissociations in certain neurological disorders, such as visual form agnosia and optic...
-
New paper by Papadopoulos and Clarke et al.
Training an object class detector typically requires a large set of images annotated with bounding boxes, which is expensive and time-consuming to create.
-
New paper by Tatler's lab
Humans display image-independent viewing biases when inspecting complex scenes.