First EnTimeMent Workshop
23 September 2019
Casa Paganini-InfoMus, DIBRIS,
University of Genoa
Automated recognition, measurement and prediction of qualities of human movement for industry,
sport, rehabilitation, and active experience of cultural heritage content
On the occasion of the H2020 EU FET Future Tech Week,
the EU FET PROACTIVE project EnTimeMent
presents its activities and objectives to industry and institutions.
Coordinating actions in time (“A tempo”, in music) is a fundamental experience in music, dance,
sport, rehabilitation, games, and work environments: the ability of humans to move together,
to coordinate their activities in harmony.
To behave a tempo means to know the plurality of
times involved in every action: from the short, microscopic times of the body that breathes and reacts,
to the long times of the body that adapts itself and acquires knowledge in mutual non-linear interaction with others.
A tempo means learning to understand the qualities of the gestures of others to predict possible outcomes,
to discover in the present the echoes of the past, the seeds of the future.
The mission of EnTimeMent is to design innovative sensitive and interactive technologies capable of being a tempo with people,
to help improve coordination, interaction and empathy, and to imagine and develop applications in the fields of
health, work, entertainment and the arts.
18 − 18.45
Presentation of the project, brief live demonstrations and videos
Nadia Berthouze, University College London
Antonio Camurri and Andrea Cera, University of Genoa
Luciano Fadiga, Italian Institute of Technology and University of Ferrara
Cora Gasparotti, Accademia Nazionale di Danza, Rome
18.45 − 19.30
Chair: Federico Smanio, Wylab, Chiavari
Speakers: Serena Bertolucci, Palazzo Ducale, Genoa;
Giulia Barbareschi, Global Disability Innovation Hub, London;
Ottavio Crivaro, CEO, Math & Sport, Milan;
Vittorio Podestà, Paralympic Champion and Member of the Paralympic Committee
Casa Paganini-InfoMus research staff:
Antonio Camurri, Corrado Canepa, Eleonora Ceccaldi, Paolo Coletta, Simone Ghisio,
Nicola Ferrari, Roberto Sagoleo, Erica Volta, Gualtiero Volpe, Vincenzo D’Amato.
Demo 1 Context
This first video shows the idea of multiple temporal scales (two temporal scales in this example):
Cora dances with a clear and constant movement Context (jumping / elastic / joyful) and she
interrupts this context three times with three qualities (“aggressive”, “rigid”, and “fluid”).
Context is perceived by an observer at a higher-level temporal scale (10-15 s may be necessary to
consolidate the context).
The three qualities are perceived at a mid-level temporal scale (0.5-2 s),
but the context does not disappear and does not need a long time to be confirmed after the interruptions.
The sonification is created from the movement with the aim of emphasizing the context and the three qualities.
Demo 2 Chronic Pain
An example based on “sit-to-stand”:
1. Cora simulates a sit-to-stand of a healthy participant (sonification: fluidity of movement)
2. Cora simulates a sit-to-stand of a participant with back pain (the sonification is modified by the
frequent interruptions, falterings, and hesitations in the movement).
3. Prediction of wrong movement “at home”. Now the cause of the chronic back pain has been resolved, but
the patient still moves with “fear of pain”: the patient has to re-adapt to move correctly in
everyday movements, to avoid the risk of injury. In this example there are two wrong and one
correct sit-to-stand: the sonification now does not mirror the quality of the movement, but is
predictive. A short sonic event anticipates the possible consequences of a movement
started in the wrong way.
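The hesitation-sensitive behaviour in step 2 can be illustrated with a toy detector that keys off dips in the measured speed profile. This is a minimal sketch, not EnTimeMent's actual pipeline: the threshold value, the synthetic velocity profiles, and the definition of a "hesitation" (speed falling back below threshold and then resuming) are all assumptions for illustration.

```python
import numpy as np

def count_hesitations(velocity, threshold=0.05):
    """Count hesitations in a 1-D velocity profile, defined here
    (an illustrative assumption) as moments where the speed drops
    below a threshold and movement later resumes."""
    moving = np.abs(velocity) > threshold
    # +1 marks a restart (still -> moving), -1 marks a stop.
    transitions = np.diff(moving.astype(int))
    stops = np.where(transitions == -1)[0]
    restarts = np.where(transitions == 1)[0]
    # Only count stops that are followed by a restart, i.e. the
    # movement falters and then continues.
    return sum(1 for s in stops if np.any(restarts > s))

t = np.linspace(0, 1, 200)
# Smooth sit-to-stand: one bell-shaped velocity peak -> no hesitations.
smooth = np.sin(np.pi * t)
# Faltering sit-to-stand: the speed repeatedly falls back to zero.
faltering = np.abs(np.sin(3 * np.pi * t))
print(count_hesitations(smooth))      # -> 0
print(count_hesitations(faltering))   # -> 2
```

A real system would of course work on motion-capture or accelerometer data and could drive the sonification directly from the detected falters.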
Demo 3 Avatar
This is an example of Scenario 3 (dance), joint action, showing a preview of technologies to
measure movement qualities and the individual and group motor signature. Cora dances with an
avatar (displayed as a cloud of 3D points): i.e., two dancers, one real and one virtual. They start to
move with very different movement qualities (different individual motor signatures) and slowly they
converge to become a single organism, characterized by a common group motor signature: from
two different individual motor signatures to a unified group motor signature.
This is the final demo, just before the start of the panel / round table.
Cora's dance - Individual Motor Signature (IMS)
Cora is dancing with four different emotions:
Inset on the left side of the video is a series of plots showing:
- upper part: time series of x-y-z position of the left hand
- middle part: zoom on the position time series for each specific emotion
- lower part: Cora’s IMS for each emotion, plotted in the similarity space
Each data point represents the “distance”
between velocity distributions of her hand
calculated at successive moments in time,
and each ellipse represents the 95% confidence ellipse
around all data contained in that specific emotion period:
red = context
yellow = rigid
green = fluid
blue = aggressive
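The similarity-space construction described above can be sketched end to end. This is a hedged reconstruction under stated assumptions: synthetic hand-speed data stand in for motion capture, the 1-D Wasserstein distance stands in for whatever distribution distance the project actually uses, classical MDS provides the 2-D embedding, and the 95% ellipse is derived from each quality's point covariance. Window length and the data parameters are arbitrary.

```python
import numpy as np
from scipy.stats import wasserstein_distance, chi2

rng = np.random.default_rng(0)

# Hypothetical hand-speed recordings, one per movement quality
# (stand-ins for real motion-capture data; values are assumptions).
recordings = {
    "context":    rng.normal(1.0, 0.30, 600),
    "rigid":      rng.normal(0.4, 0.10, 600),
    "fluid":      rng.normal(0.9, 0.45, 600),
    "aggressive": rng.normal(1.5, 0.60, 600),
}
WINDOW = 100  # samples per window (assumed)

# 1. Slice each recording into successive windows; each window
#    yields one empirical velocity distribution.
windows, labels = [], []
for name, speeds in recordings.items():
    for i in range(0, len(speeds) - WINDOW + 1, WINDOW):
        windows.append(speeds[i:i + WINDOW])
        labels.append(name)

# 2. Pairwise "distances" between velocity distributions
#    (1-D Wasserstein distance, one plausible choice among several).
n = len(windows)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(windows[i], windows[j])

# 3. Classical MDS: embed the distance matrix in a 2-D similarity space.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
points = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# 4. 95% confidence ellipse per quality: semi-axis lengths from the
#    covariance eigenvalues, scaled by the chi-squared quantile (2 dof).
for name in recordings:
    pts = points[np.array(labels) == name]
    centre = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))
    semi_axes = np.sqrt(chi2.ppf(0.95, df=2)) * np.sqrt(np.maximum(evals, 0.0))
    print(f"{name:10s} centre={centre.round(3)} semi-axes={semi_axes.round(3)}")
```

Each printed centre/ellipse pair corresponds to one coloured cluster in the plot; qualities whose velocity distributions differ strongly (e.g. "rigid" vs "aggressive") end up in well-separated regions of the similarity space.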
EnTimeMent, EU Horizon 2020 FET Proactive 4-year project (2019-2022), GA 824160