Method Workshop Topics

Please check carefully that the courses you sign up for are not scheduled to run concurrently!

Analyses of Eye Movements with Linear Mixed Models using R
Reinhold Kliegl
August 9, 9.00-16.00 (or morning lecture only, 9.00-12.00).

This course is offered both as a full-day course (morning lecture and afternoon exercises) and as a half-day course (morning lecture only). The morning will be more basic in emphasis, the afternoon more applied. Sign up for one of them. The course also includes a pre-workshop online course beginning in June; if you want to participate in the online tuition, you must register before the end of May.

Linear Mixed Models (LMMs) are increasingly used for the analysis of eye movements. They allow the seamless integration of factors and covariates, as well as the simultaneous estimation of variance components and correlation parameters (i.e., individual and item differences) associated with fixed effects. In reading research, they also allow the simultaneous estimation of such model parameters for subjects, sentences, and words. The workshop uses the free R environment for statistical computing and graphics (http://www.r-project.org/). Of particular relevance are the following packages: lme4, ggplot2, reshape2, plyr, and data.table. The workshop is roughly structured into a morning and an afternoon session. In the morning, we cover contrast coding, LMM specification, and visualization of the fixed and random effects of LMMs. The analyses will be demonstrated with peer-reviewed articles, data, and R scripts available at the Potsdam Mind Research Repository (PMR2; http://read.psych.uni-potsdam.de/pmr2/). In the afternoon, participants are expected to analyze their own data. There are two preconditions for participation. First, participants are expected to have a working knowledge of R. In other words, this workshop does not offer an introduction to R; rather, participants must already have a laptop with an R installation and feel comfortable with basic R operations (e.g., setting up one's data in data frames; carrying out ANOVAs and multiple regression analyses). Second, there will be some Moodle-based distance teaching starting four weeks prior to the workshop, and participants are expected to spend at least five hours per week on assignments in preparation for this course. We assume that those interested in this topic who do not yet meet the preconditions will look for opportunities between now and the beginning of the pre-workshop phase to acquire the necessary skills.
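
For orientation, a model specification of the kind the workshop covers looks roughly like the sketch below. The workshop itself uses R's lme4; the sketch is a rough Python analogue using statsmodels, with hypothetical variable names (rt, freq, subj) and a hypothetical data file. Note that statsmodels handles a single grouping factor directly, whereas lme4 also accommodates the crossed subject and item effects the description mentions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical reading-time data: one row per trial, with a
    # continuous covariate (freq) and a subject identifier (subj).
    data = pd.read_csv("fixations.csv")  # assumed file layout

    # LMM with a fixed effect of word frequency and a by-subject
    # random intercept and random slope for frequency.
    model = smf.mixedlm("rt ~ freq", data, groups=data["subj"],
                        re_formula="~freq")
    result = model.fit()
    print(result.summary())  # fixed effects, variance components, correlations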

Python Open Source Tools in Eye Movement Research
Sol Simpson with Jonathan Peirce
August 7th, 9.00-16.00.

This one-day course will focus on using open-source Python tools and libraries for eye movement research. The focus will be on the typical stages of an eye-tracking experiment life-cycle, from experiment design and creation, to data collection and storage, followed by data filtering, visualization, and some simple statistical analysis. (For a full course on statistical analysis methods, which can also be applied within Python, we suggest you attend Reinhold Kliegl's class on R; R can be used from within Python via the rpy2 binding.) The main tools covered in the session are PsychoPy, using the ioHub and pyEyeTrackerInterface, for experiment script creation and an eye-tracker-independent API; PyTables and the ioHub for the storage of large eye-tracking data sets; and SciPy and NumPy for efficient retrieval, visualization, and analysis of those data. Matplotlib will be covered for high-quality data visualization and plotting, as well as the IPython notebook for interactive scientific computing in general. The course will be in a hands-on format, and we will make all the necessary software and example scripts available for download prior to the class, so that attendees can have their computers set up ahead of time with the necessary Python modules and dependencies.
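
To give a flavor of the tool chain, here is a minimal sketch of loading and visualizing raw gaze samples with NumPy and Matplotlib. The file name and column layout are assumptions, not part of the course materials.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical sample log: columns of time (s), gaze x and y (pixels).
    t, x, y = np.loadtxt("gaze_samples.txt", unpack=True)

    # Simple visualization of the horizontal and vertical gaze traces.
    fig, ax = plt.subplots(2, 1, sharex=True)
    ax[0].plot(t, x); ax[0].set_ylabel("gaze x (px)")
    ax[1].plot(t, y); ax[1].set_ylabel("gaze y (px)")
    ax[1].set_xlabel("time (s)")
    plt.show()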

EFRPs: principles, methods and main issues
Thierry Baccino
August 7th, 13.00-16.00.

The course aims to present the eye-fixation related potential (EFRP) technique, which combines eye-tracking systems and EEG. The technique is based on electroencephalogram (EEG) measurements of electrical brain activity in response to eye fixations. EFRPs are extracted from the EEG by means of signal averaging time-locked to the onset and offset of eye fixations. Principles, experimental setups, data analyses, and major challenges will be presented and discussed in relation to the major cognitive issues.

Reference: Baccino, T. (2011). Eye movements and concurrent ERPs: EFRP investigations in reading. In S. Liversedge, I. D. Gilchrist, & S. Everling (Eds.), The Oxford Handbook of Eye Movements (pp. 857-870). Oxford University Press.
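
The averaging step described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the sampling rate, onset times, epoch window, and single placeholder EEG channel are all assumptions.

    import numpy as np

    fs = 500                               # assumed EEG sampling rate (Hz)
    eeg = np.random.randn(60 * fs)         # placeholder single-channel EEG
    onsets = np.array([2.0, 5.4, 9.1])     # hypothetical fixation onsets (s)

    pre, post = int(0.2 * fs), int(0.6 * fs)   # epoch: -200 ms to +600 ms
    epochs = []
    for t0 in (onsets * fs).astype(int):
        seg = eeg[t0 - pre : t0 + post]
        seg = seg - seg[:pre].mean()       # baseline-correct on the pre-fixation window
        epochs.append(seg)

    efrp = np.mean(epochs, axis=0)         # the eye-fixation related potential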

Analysing eye movement patterns in space and time
Alan Kingstone, Walter Bischof, Tom Foulsham
August 10, 9.00-12.00.

A common problem in eye movement research concerns how best to characterise the patterns of fixations and saccades elicited by a particular stimulus and task. An ideal representation of gaze patterns (or "scanpaths") needs to take into account changes over space (where people are looking) and time (when they look there and how often they go back). This can then be used to describe viewing patterns in particular conditions or to compare between individuals, stimuli and situations. In this workshop we will demonstrate several different methods for analysing viewing patterns in complex stimuli. These include scanpath comparison methods, which quantify scanpath similarity across multiple dimensions such as shape and fixation duration, and recurrence quantification analysis (RQA). RQA has been used for describing dynamical systems and for analysing the coordination of gaze patterns between cooperating individuals. Moreover, global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation, making it a powerful new tool for the analysis of the temporal patterns of eye movement behaviour.
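
One of the RQA measures with a clear interpretation is the recurrence rate: the proportion of fixation pairs that land within some radius of each other. The sketch below is a minimal NumPy illustration, with made-up fixation coordinates and an assumed radius, not the presenters' implementation.

    import numpy as np

    # Hypothetical fixation sequence: one (x, y) position per fixation.
    fix = np.array([[100, 200], [105, 198], [400, 300], [102, 205]], float)
    radius = 30                      # assumed recurrence threshold (pixels)

    # Pairwise distances between all fixations.
    d = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    rec = d < radius                 # recurrence matrix

    n = len(fix)
    # Recurrence rate: proportion of recurrent pairs, excluding the diagonal.
    rr = (rec.sum() - n) / (n * (n - 1))
    print(f"recurrence rate: {rr:.2f}")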

Eye Tracking in HCI
Päivi Majaranta, Aulikki Hyrskykari, and Oleg Spakov
August 10, 9.00-16.00.

The goal of the course is to give deep insight into exploiting eye tracking in human-technology interaction. Advances in eye-tracking technology make it an increasingly interesting option to add to the conventional modalities. However, there are numerous pitfalls to avoid if one wishes to use gaze as an active input method. We have used eye trackers in HCI for more than a decade; our experiences should help other researchers and developers to understand the reasons behind common problems and to design solutions that avoid the traps. The course provides examples, experiences, and design guidelines for using gaze as an input method, covering both voluntary, gaze-controlled applications and attentive, gaze-aware applications. Previous knowledge of gaze interaction is not required. Various aspects of gaze interaction are covered both theoretically and in practice, using live demonstrations with a state-of-the-art eye-tracking device. After the course, participants should understand the pros and cons of using gaze as a real-time input device.
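
As a flavor of what "gaze as an active input method" involves, here is a minimal sketch of dwell-time selection, a common activation scheme for gaze-controlled applications. The threshold, sample format, and hit test are all assumptions for illustration, not the presenters' design.

    # A target is "clicked" once gaze stays on it continuously for DWELL_MS.
    DWELL_MS = 800  # assumed dwell threshold (milliseconds)

    def inside(sample, target):
        # Hypothetical hit test: is the gaze sample inside the target rectangle?
        x, y = sample
        tx, ty, tw, th = target
        return tx <= x <= tx + tw and ty <= y <= ty + th

    def detect_dwell(samples, times_ms, target):
        start = None
        for sample, t in zip(samples, times_ms):
            if inside(sample, target):
                if start is None:
                    start = t
                elif t - start >= DWELL_MS:
                    return t          # selection triggered at time t
            else:
                start = None          # gaze left the target: reset the clock
        return None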

Viewing Commercials and Ads
Ignace Hooge
August 8th, 13.00-16.00.

One of the goals of designers is to optimize the message transfer of their creations. Most outdoor and magazine ads are viewed for only a few seconds. Low-level visual factors and reflexive eye movements play an important role in the uptake of information. “Not fixated” often means “not seen” and, of course, “not remembered”. Eye tracking is becoming an increasingly popular way to investigate viewing behavior on ads and commercials.

Topics in this workshop will include:
1) Analysis methods to improve the layout of ads
2) Dos and don'ts of Areas of Interest (AOIs; see the sketch after this list)
3) Good reasons to learn a real programming language
4) Moving AOIs: a good or a bad idea?
5) Talking with marketing people
6) Is an expensive eye tracker necessary for marketing research?
7) And more
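
To make item 2 concrete, here is a minimal sketch of assigning fixations to rectangular AOIs, including a moving AOI given as a function of time. All names, coordinates, and formats are illustrative assumptions.

    # Static AOIs are rectangles (x, y, w, h); a moving AOI is a function
    # of time returning the current rectangle.

    def in_rect(fx, fy, rect):
        x, y, w, h = rect
        return x <= fx <= x + w and y <= fy <= y + h

    static_aois = {"logo": (50, 50, 120, 60), "slogan": (300, 400, 200, 40)}

    def moving_aoi(t):
        # Hypothetical product shot drifting rightwards at 20 px/s.
        return (100 + 20 * t, 250, 80, 80)

    def label_fixation(fx, fy, t):
        for name, rect in static_aois.items():
            if in_rect(fx, fy, rect):
                return name
        if in_rect(fx, fy, moving_aoi(t)):
            return "product"
        return None                  # fixation outside every AOI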

Pupillometry and Cognitive Processing
Bruno Laeng
August 7th, 9.00-12.00.

The measurement of the eye's pupil diameter (in short, “pupillometry”) has played a significant role within the field of psychology for at least half a century. This line of research has shown that pupillary responses can provide an estimate of the ‘intensity’ of mental activity and of ‘changes’ in mental states, particularly of changes in the ‘allocation’ of attention and the ‘consolidation’ of perceptions and memories. Pupillary changes can provide a continuous measure of cognitive processing regardless of whether the participant is aware that such processing is taking place. Recent research in neuroscience has also thrown light on a relation between the activity of the locus coeruleus (i.e., the brain’s “hub” of the noradrenergic system) and pupillary diameter. These neurophysiological findings can provide important new insights into the meaning of pupillary responses for mental activity, in particular attention and consciousness.
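
In practice, the continuous measure mentioned above is usually expressed as event-locked dilation relative to a pre-stimulus baseline. The sketch below illustrates that step only; the sampling rate, windows, and placeholder trace are all assumptions.

    import numpy as np

    fs = 60                                  # assumed sampling rate (Hz)
    pupil = np.random.rand(10 * fs) + 4.0    # placeholder pupil trace (mm)
    event = 5.0                              # hypothetical stimulus onset (s)

    i0 = int(event * fs)
    baseline = pupil[i0 - fs : i0].mean()    # mean diameter in the 1 s before onset

    # Event-locked dilation relative to baseline: a continuous
    # task-evoked pupillary response.
    response = pupil[i0 : i0 + 3 * fs] - baseline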

Infancy Research and Eye Movements
Natasha Kirkham
August 8th, 9.00-12.00.

Over the past two decades there have been incredible advances in the technology supporting developmental science. Instead of relying on gross looking-time measures, developmental scientists can now access the real-time eye movements of infants, measuring not only where the infant looks within a scene, but also the patterns of these fixations and saccades. The use of eye trackers by developmental psychologists has dramatically changed observational capacities, making it possible to address questions that could not be examined previously and thus opening up entirely new approaches and areas of inquiry. The purpose of this course is to provide a background to infant cognitive developmental research and the theoretical implications of oculomotor anticipations, inhibition of return, spatial negative priming, and scanpaths within rich, dynamic scenes.

Eye tracking in the 'real world'
Ben Tatler
August 10, 13.00-16.00.

The primary role of vision is to provide the information needed to carry out the tasks of everyday life. In particular, vision has a crucial role to play in the guidance of action. The intimate link between vision and action means that if we are to understand how vision is allocated and how the information gathered by the eyes is utilised to serve behaviour, then it is important to consider vision in the context of natural behavioural settings. This motivation underlies the natural-task approach to studying eye movement behaviour. However, this approach presents considerable challenges for research, due to the technological limitations of eye trackers, difficulties in experimental design and control, and practical limitations on analysing the collected data.

In this workshop we will consider the theoretical importance of, and practical approaches to, conducting real-world research. We will discuss the advantages and limits of current mobile eye trackers (and analysis software), together with practical considerations for designing and conducting research in real environments. A key limitation of real-world research is the range of analyses that are possible on the collected data. Here we will consider the types of questions that can be asked of real-world data and how we can approach analyses of such datasets. Many of the approaches taken for eye-tracking data in other contexts (which will be covered in other workshops) are not appropriate for real-world datasets. We will illustrate these design and analysis problems using a range of solutions found in prior real-world research. We will consider the utility of virtual reality as a surrogate for the real world, but also its limitations. Finally, we will discuss possible ways in which technological developments may aid analytical approaches for real-world eye movement data in the near future.

Social Attention
Daniel Richardson
August 9th, 9.00-12.00.

Movements of the eyes are determined by an interaction of low-level properties of the stimulus and high-level cognitive factors. Typically in eye movement research, the cognitive factors that are investigated are memories, expectations, or schemas for particular types of scene. I will present a series of studies demonstrating that social factors can also make a substantial contribution to eye movements. In the first, participants watched a video of people giving their views on a sensitive political issue. One speaker made a potentially offensive remark. If participants believed these remarks could be heard by others, they fixated individuals who were likely to be offended. In a second study, two participants in adjacent cubicles had a discussion over an intercom while they were eye tracked. We found that their gaze coordination was modulated by what each believed the other could see on the computer screen. In another study, we simply showed sets of four stimuli to pairs of participants. We found that individuals looked at photographs differently if they believed that the other person was looking at the same images rather than at a set of random symbols.

It has long been recognised that gaze contact plays a key role in social interaction. Recently, eye tracking experiments have extended this basic observation in two new directions. Foulsham and colleagues have shown that the way individuals deploy their gaze while watching a conversation between people is closely linked to the timing of the speech and to aspects of the social identity of the speakers. When looking at a picture of a face, work from our own lab and others has revealed that scanpaths are determined by the mood, personality, culture, status and sex of both the viewer and the face they are viewing. Together these experiments demonstrate that social forces have a strong effect on perceptual mechanisms. Gaze patterns are determined by what we think others will feel, what we think our conversation partners can see, and simply whether or not we think we are looking alone or with other people. Even in the 'non-social' context of viewing a photograph of a face alone, social and affective goals and processes influence gaze.

Recording and analyzing movements of the two eyes: what for?
Zoi Kapoula
August 10, 13.00-16.00.

Real visual space is three-dimensional, and whatever activity we perform is deployed in that three-dimensional space; even when we work with a computer screen or a book, the space is three-dimensional and the object sits within it. The brain always has to control both the depth and the directional components of the movements of the two eyes. Most eye trackers are binocular and deliver signals from the two eyes. In cognitive psychology and applied research, integrating the signals from the two eyes could help us to understand better where the subject is looking at every instant, where the focus of attention really is, and how the subject analyzes information progressively. The focus of attention could be cyclopean (the average of the two eyes) but could also, at any instant, be disjunctive. This seminar will thus aim to promote this aspect: integrating the binocular dimension of the eye tracker for more comprehensive psycho-physiological, cognitive, ergonomic and applied research.
I will also mention the importance of binocular eye tracking for pathological aspects, as the quality of binocular coordination of eye movements relies on complex neuroplasticity that is fragile in many populations. Many pathologies affect the quality of binocular motor control, e.g. strabismus, amblyopia (3-4% of the population), and vergence insufficiency and heterophoria (latent misalignment of the eyes), leading to visual stress and fatigue (>18% of the population). Recording and analyzing movements of the two eyes in direction and depth gives information about neurological aspects and about the complexity of binocular sensorimotor control in 3D space. I will present some of our research indicating problems with binocularity in many populations (dyslexia, strabismus, people with auditory problems, the elderly, etc.). The issue is also important for applied research, as careful screening is necessary to exclude subjects with such dysfunctions and avoid confounded effects.
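
The cyclopean versus disjunctive distinction corresponds to a simple decomposition of the two eyes' signals into conjugate (version) and disjunctive (vergence) components. A minimal sketch for horizontal positions follows; the sample values and sign conventions are assumptions.

    import numpy as np

    # Hypothetical horizontal eye positions (degrees), one sample per element.
    left = np.array([1.0, 1.2, 2.5])
    right = np.array([0.8, 1.0, 1.5])

    version = (left + right) / 2.0    # conjugate ("cyclopean") component
    vergence = left - right           # disjunctive (depth) component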

Efficient Videooculography with OpenCV
Thomas Haslwanter and Erich Schneider
August 9, 13.00-16.00.

OpenCV is the leading open-source software library for video and image processing. The library contains more than 2500 optimized algorithms, and applications range from interactive art and mine inspection to stitching maps on the web and advanced robotics. In this workshop we will show how to use OpenCV for the processing of eye movements in two commonly used programming languages: C# and Python. In the first part of the workshop we will introduce OpenCV, as well as the two IDEs (integrated development environments) ".Net" and "Spyder": .Net will be used for the C# applications, and Spyder for the Python applications. The second part of the workshop will be "hands-on", and participants will use OpenCV for the processing of eye movements. First, features will be detected in images. Then, the resulting algorithms will be applied to the analysis of videos. For this workshop participants need to have at least some basic programming experience. All required software (.Net, Python, Spyder) is openly available.
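
As a taste of the feature-detection step, here is a minimal Python/OpenCV sketch of dark-pupil detection via thresholding and contour extraction. The file name and threshold value are assumptions, and this is not the presenters' implementation.

    import cv2

    img = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    img = cv2.GaussianBlur(img, (7, 7), 0)

    # The pupil is the darkest large region: threshold, then find its contour.
    _, dark = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY_INV)
    contours = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # OpenCV 3/4 compatible

    pupil = max(contours, key=cv2.contourArea)       # largest dark blob
    (cx, cy), r = cv2.minEnclosingCircle(pupil)      # pupil centre and radius
    print(f"pupil centre: ({cx:.1f}, {cy:.1f}), radius: {r:.1f} px")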

How an Eyetracker works – looking inside the black box
Jeff Pelz, Dixon Cleveland and Arantxa Villanueva
August 8, 9.00-16.00.

Eyetrackers have been used for more than 100 years, and although the parts of the system have changed and vary considerably, conceptually the process is the same: we monitor the position of the eye over time either by physically attaching a probe to the orbit or by monitoring some feature correlated with eye position. Understanding how eyetrackers work is essential to understanding the limitations of the various systems: their spatial and temporal characteristics, the kind of noise they add to the data, and the kinds of scenarios they work best in. This course gives a comprehensive overview of how eyetrackers work, from early analog systems, through the development of video-oculography (VOG) in all its forms, right up to making your own eyetracker from webcams and off-the-shelf parts. We will look at data from several kinds of eyetrackers and discuss their limitations and analysis issues. This course is for anyone using eye data, and although some background in technical aspects is beneficial, it is not necessary.
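
In most VOG systems, the "feature correlated with eye position" (e.g. the pupil-to-corneal-reflection vector) is mapped to screen coordinates through a calibration. A common choice is a second-order polynomial fit; the sketch below illustrates that idea with entirely made-up calibration data.

    import numpy as np

    # Hypothetical calibration data: pupil-CR vectors (px in the eye image)
    # recorded while the subject fixated known screen positions (px).
    eye = np.array([[0, 0], [10, 1], [-9, 2], [1, 11], [2, -10],
                    [11, 12], [-10, 11], [12, -9], [-11, -10]], float)
    scr = np.array([[640, 360], [1100, 340], [180, 380], [660, 700], [620, 40],
                    [1120, 720], [160, 700], [1140, 60], [140, 20]], float)

    x, y = eye[:, 0], eye[:, 1]
    # Second-order polynomial design matrix: 1, x, y, xy, x^2, y^2.
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, scr, rcond=None)   # one column per screen axis

    def gaze(ex, ey):
        feat = np.array([1, ex, ey, ex * ey, ex**2, ey**2])
        return feat @ coef                           # estimated screen position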