Record, analyze and annotate gestures and sign languages with motion-capture technologies
Dominique Boutet, Jean-François Jego, Vincent Meyrueis
During the masterclass, we will record a mini-corpus of dialogues using motion-capture equipment (Perception Neuron inertial measurement units).
The MoCap data will be synchronized with the video of the mini-corpus and visualized in the ELAN software. To that end, we will present a general framework
for qualitative analysis in ELAN informed by quantitative descriptors (kinematic timelines, relative location, velocity, and acceleration).
Besides the recording and analysis framework, we will explore some MoCap uses for annotation: namely, a Leap Motion controller for annotating the handshapes of sign languages, and a module for visualizing gestural descriptors on an avatar. These latest developments are prototypes, which means your participation could help shape the future of multimodal studies in linguistics!
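As a small illustration of the kind of quantitative descriptors mentioned above (this is our own sketch, not part of the masterclass materials), velocity and acceleration can be derived from sampled MoCap positions by finite differences; the 120 Hz sample rate and the toy trajectory below are assumptions:

```python
# Sketch: deriving kinematic descriptors (velocity, acceleration)
# from sampled MoCap positions via finite differences.
# Sample rate and data are illustrative assumptions.

def derivative(samples, dt):
    """Central differences in the interior; one-sided at the ends."""
    n = len(samples)
    out = []
    for i in range(n):
        if i == 0:
            d = (samples[1] - samples[0]) / dt          # forward difference
        elif i == n - 1:
            d = (samples[-1] - samples[-2]) / dt        # backward difference
        else:
            d = (samples[i + 1] - samples[i - 1]) / (2 * dt)
        out.append(d)
    return out

dt = 1.0 / 120.0                           # assumed 120 Hz capture rate
positions = [0.0, 0.1, 0.4, 0.9, 1.6]      # toy 1-D wrist trajectory (metres)
velocity = derivative(positions, dt)       # m/s per frame
acceleration = derivative(velocity, dt)    # m/s^2 per frame
```

In practice such per-frame descriptors would be computed per joint from the exported MoCap channels and then aligned with the ELAN annotation tiers.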
Gestures as ‘as-if actions’: Hands-on analysis of public discourse
Cornelia Müller, Lena Hotze
When the hands are used in communication, they are transformed from instruments of action into communicative as-if actions. Employed as gestures, the hands are used mimetically and cease to perform instrumental actions, such as grabbing fruit, carrying, giving, receiving, or showing objects, or picking, pushing, pulling, and throwing them. The instrumental actions serve as a base, a touchstone, for the mimed performance. This process has been described for signed languages as well as for gestures; however, the focus has mostly been on iconic signs and iconic gestures. In the class, we will use Müller’s systematics of gestural modes of representation to reconstruct the mimetic base of pragmatic gestures, such as presenting an argument on the open palm of the hand, claiming the precision of a statement with the ‘precision grip’ (index and thumb act as-if holding a tiny object), or rejecting an intervention by acting as-if pushing an object away with an open palm oriented vertically.
After an introduction to the gestural modes of representation as techniques of gesture creation, we will apply those modes to the analysis of pragmatic gestures in public discourse. The gestural modes of representation serve as a methodological and theoretical point of departure for a linguistic reconstruction of ‘how gestures mean’.
Eye tracking basics in linguistic and multimodal research
Maria Kiose, Olga Prokofieva
What do our eyes do when we read a text or look at a picture? Do their movements betray what attracts our attention or causes misunderstanding? What are the indicators of successful interpretation, and how can we detect them? These are the questions we are going to discuss at the masterclass. Already know what eye events, areas of interest, and trajectory lines are, but have never tried them out? Here we will also simulate an eye-tracking experiment with YOU as an active participant and a member of the research group. You will take part in adjusting the eye-tracking equipment, running the experiment, and analyzing and discussing the data.
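To make the notion of areas of interest (AOIs) concrete, here is a minimal sketch (our own illustration, not the masterclass software) that counts which AOI each gaze fixation falls into; the rectangle coordinates and fixation points are assumptions:

```python
# Sketch: assigning gaze fixations to areas of interest (AOIs).
# AOI rectangles and fixation coordinates are illustrative assumptions.

from collections import Counter

def aoi_hits(fixations, aois):
    """Count fixations per AOI; rectangles are (x0, y0, x1, y1) in pixels."""
    hits = Counter()
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits[name] += 1
                break  # assumes non-overlapping AOIs
    return hits

aois = {
    "text": (0, 0, 400, 300),       # left half of a hypothetical stimulus
    "picture": (400, 0, 800, 300),  # right half
}
fixations = [(120, 80), (430, 150), (500, 40), (390, 290)]
hits = aoi_hits(fixations, aois)
```

Real eye-tracking software additionally segments the raw gaze stream into fixations and saccades (the "eye events") before any AOI analysis.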
The Force of Multimodal Metaphors
Olga Iriskhanova, Alexandra Gulenkova, Alexandra Galkina
Fasten your seat belts! We are starting an exciting journey to investigate how our minds work when we build metaphoric meanings. Our experienced team (pilot Olga Iriskhanova, co-pilots Alexandra Gulenkova and Alexandra Galkina) will show you the beautiful landscapes of words, pictures, and gestures that blend into a Metaphor. You will learn how to navigate diverse multimodal environments: to recognize, analyze, and improvise metaphors. The most delicious food for thought will be served on board, boosting your analytical skills and creative thinking! Bon voyage!