0:00:00.000,0:00:13.418
rC3 preroll music
0:00:13.418,0:00:19.450
Herald: Now, imagine a stage with an[br]artist performing in front of a crowd.
0:00:19.450,0:00:25.260
Is there a way to measure and even quantify[br]the show's impact on the spectators?
0:00:25.260,0:00:30.200
Kai Kunze is going to address this[br]question in his talk, Boiling Mind, now.
0:00:30.200,0:00:33.200
Kai, up to you.
0:00:33.200,0:00:38.720
Kai: Thanks a lot for the introduction,[br]but we have a short video. I hope
0:00:38.720,0:00:47.535
that can be played right now.
0:00:47.535,0:01:09.089
intense electronic staccato music
0:01:09.089,0:01:28.105
music shifts to include softer piano tones
0:01:28.105,0:02:04.861
music shifts again to include harp-like tones
0:02:04.861,0:02:10.725
music keeps gently shifting
0:02:10.725,0:02:24.894
longer drawn out, slowly decreasing pitch
0:02:24.894,0:02:40.095
shift towards slow, guitar-like sounds
0:02:40.095,0:03:03.653
with light crackling noises
0:03:03.694,0:03:25.015
music getting quieter, softer
0:03:25.015,0:03:36.225
and fades away
0:03:58.155,0:04:15.296
inaudible talking
0:04:15.296,0:04:22.080
Kai: So thanks a lot for the intro. This[br]is the Boiling Mind talk on linking
0:04:22.080,0:04:30.480
physiology and choreography. I just started[br]off with this short video, that could
0:04:30.480,0:04:36.990
give you an overview of the experience[br]of this dance performance that we
0:04:36.990,0:04:44.520
staged in Tokyo at the beginning of the[br]year, just before the lockdown, actually.
0:04:44.520,0:04:52.640
And the idea behind this was: we wanted to[br]put the audience on a stage. So breaking the
0:04:52.640,0:04:59.920
fourth wall. We tried to use physiological[br]sensing in the audience, and the changes
0:04:59.920,0:05:08.640
are then reflected on stage through the[br]projection, lights and sound to
0:05:08.640,0:05:14.960
influence the dancers and performers, and[br]are then, of course, fed back again to
0:05:14.960,0:05:22.880
the audience, creating an augmented[br]feedback loop. In this talk today, I just
0:05:22.880,0:05:28.000
want to give you a small overview, a[br]little bit about the motivation, why I
0:05:28.000,0:05:36.160
thought it's a nice topic for the remote[br]experience from the Chaos Computer Club
0:05:36.160,0:05:41.120
and also a little bit more about the[br]concept, the set up and the design
0:05:41.120,0:05:48.720
iterations, as well as the lessons[br]learned. So for me to give this talk,
0:05:48.720,0:05:55.920
I thought it's a good way to exchange[br]expertise and get a couple of people that
0:05:55.920,0:06:01.600
might be interested in the next[br]iterations, because I think we are still
0:06:01.600,0:06:06.960
not done with this work. So it's still[br]kind of work in progress. And also a way
0:06:06.960,0:06:12.160
to share data, so to do some exploratory[br]data analysis on the recorded performances
0:06:12.160,0:06:19.360
that we have. And then most important: I[br]wanted to create a more creative way to
0:06:19.360,0:06:25.760
use physiological data and explore it,[br]because also for me as a researcher
0:06:25.760,0:06:31.920
working on wearable computing or activity[br]recognition, often we just look into
0:06:31.920,0:06:39.326
recognizing or predicting certain motions[br]or certain mental states.
0:06:39.326,0:06:47.519
And that, at least for simple things,[br]often feeds back into these, I think,
0:06:47.519,0:06:55.190
idiotic or stupid ideas of surveillance[br]and application cases and the like.
0:06:55.190,0:07:01.330
So can we create more intuitive ways [br]to use physiological data?
0:07:01.330,0:07:04.400
So from a concept perspective, I think the
0:07:04.400,0:07:10.520
video gave a good overview of what we[br]tried to create. However,
0:07:10.520,0:07:17.600
what we did in three performances was: we used[br]physiological sensors on all audience
0:07:17.600,0:07:22.960
members. So for us, it was important that[br]we are not singling out individual people
0:07:22.960,0:07:29.760
to just get feedback from them, but have[br]the whole response, the whole physiological
0:07:29.760,0:07:37.440
state of the audience as an input to the[br]performance. In that case, we actually
0:07:37.440,0:07:45.520
used heart rate variability and also[br]galvanic skin response as inputs.
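A whole-audience input like this usually boils down to a robust aggregate over per-member features. Here is a minimal sketch of the idea, assuming per-person feature dicts; the function and field names are my own invention, not from the project:

```python
from statistics import mean, median

def audience_state(members):
    """Aggregate per-member physiology into one audience-level state.

    `members` is a list of dicts such as {"bpm": 72.0, "eda_change": 0.1}.
    The median is used for heart rate so that a single noisy wristband
    cannot dominate the collective signal.
    """
    return {
        "bpm": median(m["bpm"] for m in members),
        "eda_change": mean(m["eda_change"] for m in members),
    }

state = audience_state([
    {"bpm": 70.0, "eda_change": 0.25},
    {"bpm": 74.0, "eda_change": 0.0},
    {"bpm": 120.0, "eda_change": 0.5},  # one outlier sensor
])
```

The choice of median vs. mean per feature is one of the design decisions such a system has to make when the "whole physiological state of the audience" drives the stage.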
0:07:45.520,0:07:51.760
And these inputs then changed the projection[br]that you could see. The lights, especially
0:07:51.760,0:07:58.160
the intensity of the lights and also [br]the sound. And that, again, then led to
0:07:58.160,0:08:05.222
changes in the dancing behavior of the[br]performers.
0:08:05.222,0:08:10.805
For the sensing, we went with a wearable[br]setup,
0:08:10.805,0:08:18.983
so in this case a fully wireless wristband,[br]because we wanted to do something that is
0:08:18.983,0:08:25.162
easy to wear and easy to put on and take[br]off. We had a couple of iterations on that
0:08:25.172,0:08:32.971
and we then decided on electrodermal[br]activity and also heart activity
0:08:32.971,0:08:39.520
to sense, because there's some[br]related work that links these signals to
0:08:39.520,0:08:45.920
engagement, stress and also excitement[br]measures. And the question then was also
0:08:45.920,0:08:53.200
where to sense it. First, we went with a[br]couple of wristbands and also kind of
0:08:53.200,0:08:58.240
commercial approaches or half-commercial[br]approaches. However, the sensing quality
0:08:58.240,0:09:03.753
was just not good enough, especially from[br]the wrist. You cannot really get a good
0:09:03.753,0:09:09.040
electrodermal activity, so galvanic skin[br]response. It's more or less a sweat
0:09:09.040,0:09:18.640
sensor. So that means that you can detect[br]if somebody is sweating and some of the
0:09:18.640,0:09:25.680
sweat is actually then related to a stress[br]response. And in that case, there are a
0:09:25.680,0:09:30.080
couple of ways to measure that. So it[br]could be on the lower part of your hand or
0:09:30.080,0:09:35.280
also on the fingers. These are usually the[br]best positions. So we used the fingers.
0:09:35.280,0:09:42.560
From the fingers we can also get heart rate[br]activity. And in addition to that, there's
0:09:42.560,0:09:48.480
also a small motion sensor, so a gyro and an[br]accelerometer in the wristband. We haven't
0:09:48.480,0:09:54.160
used that for the performance right now, but[br]we still have the recordings also from the
0:09:54.160,0:10:00.400
audience for that. When I say we, I mean[br]George especially and also Dingding,
0:10:00.400,0:10:05.160
two researchers who work with me, who[br]actually took care of the designs.
0:10:05.160,0:10:12.087
So then the question was also how to[br]map it to the environment or the staging.
0:10:12.087,0:10:17.180
In this case, actually, this was done [br]by a different team,
0:10:17.180,0:10:20.880
this was done by the Embodied Media team,[br]also at KMD.
0:10:20.880,0:10:24.740
So I know a little bit about it,[br]but I'm definitely not an expert.
0:10:24.740,0:10:33.360
And for the initial design we[br]thought we'd use the EDA for the movement
0:10:33.360,0:10:40.960
speed of the projection. So the EDA rate[br]of change is mapped to the movement of these
0:10:40.960,0:10:47.120
blobs that you can see, or also the meshes,[br]and the colour represents
0:10:47.120,0:10:52.640
the heart rate. We went for the LF/HF[br]feature, that's the low frequency to high
0:10:52.640,0:10:57.600
frequency ratio and should give you,[br]according to related work, some indication
0:10:57.600,0:11:04.000
about excitement. For the lights: the[br]lights were also bound to the heart rate,
0:11:04.000,0:11:08.720
in this case, the beats per minute, and[br]they were matched to intensity. So if the
0:11:08.720,0:11:13.520
beats per minute of the audience go[br]collectively up, the light gets brighter,
0:11:13.520,0:11:19.440
otherwise, it's dimmer. For the audio: we[br]had an audio designer who took care of the
0:11:19.440,0:11:27.760
sounds and faded specific sounds in and[br]out, related to the EDA, to the
0:11:27.760,0:11:36.320
relative rate of change of the electro-[br]dermal activity. All this happened while
0:11:36.320,0:11:43.760
the sensors were connected over a sensing[br]server, in Qt, to the TouchDesigner software
0:11:43.760,0:11:53.280
that generated these projections.[br]The music was also fed in, and that
0:11:53.280,0:11:59.200
was then controlling the feedback[br]to the dancers. If you want to
0:11:59.200,0:12:09.280
have a bit more detail: I uploaded the[br]work-in-progress preprint, a draft
0:12:09.280,0:12:15.840
of an accepted TEI paper. So in case you are[br]interested in the mappings and the design
0:12:15.840,0:12:20.320
decisions for the projections, there is[br]a little bit more information there.
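The mappings described above (EDA rate of change driving movement speed, the LF/HF ratio driving colour, collective BPM driving light intensity) could be sketched roughly as below. All scaling constants here are invented placeholders; the real values are part of the production setup:

```python
def clamp01(x):
    """Clamp a value into the normalized 0..1 parameter range."""
    return max(0.0, min(1.0, x))

def stage_params(eda_rate, lf_hf, bpm, baseline_bpm=70.0):
    """Map audience-level features onto stage parameters.

    eda_rate : relative EDA change per second -> blob movement speed
    lf_hf    : low/high frequency HRV ratio  -> colour (0 calm .. 1 excited)
    bpm      : collective beats per minute   -> light brightness
    All constants are illustrative, not the production values.
    """
    return {
        "motion_speed": clamp01(abs(eda_rate) / 0.5),
        "colour_heat": clamp01(lf_hf / 4.0),
        "brightness": clamp01(0.5 + (bpm - baseline_bpm) / 60.0),
    }

calm = stage_params(eda_rate=0.0, lf_hf=1.0, bpm=65.0)
excited = stage_params(eda_rate=0.4, lf_hf=3.0, bpm=95.0)
```

Normalized 0..1 parameters like these are a convenient interchange format, since projection, light and sound engines can each rescale them to their own units.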
0:12:20.320,0:12:26.560
I'm also happy later on to answer those[br]questions. However, I will probably just
0:12:26.560,0:12:31.520
forward them to the designers who worked[br]on them. And then, for the overall
0:12:31.520,0:12:38.640
performance, what happened was, we started[br]out with an explanation of the experience.
0:12:38.640,0:12:45.576
So it was already advertised as a performance[br]that would take in electrodermal
0:12:45.576,0:12:52.080
activity and heartbeat activity.[br]So, people that bought tickets or came to
0:12:52.080,0:12:56.000
the event already had a little bit of[br]background information. We, of course,
0:12:56.000,0:13:00.720
also made sure that we explained at the[br]beginning what type of sensing we would be
0:13:00.720,0:13:09.360
using, and also what the risks and problems[br]with this type of sensors and data
0:13:09.360,0:13:16.000
collection are. The audience could then decide,[br]with informed consent, if they just want to
0:13:16.000,0:13:20.240
stream the data, don't want to do[br]anything, or want to stream and also
0:13:20.240,0:13:26.320
contribute the data anonymously to our[br]research. And then when the performance
0:13:26.320,0:13:31.970
started, we had a couple of pieces and[br]parts. That is something you can see in
0:13:31.970,0:13:38.800
B, where we showed the live data feed from[br]all of the audience members in individual tiles. We
0:13:38.800,0:13:45.680
had that in there before just for debugging,[br]but actually the audience liked it. And so
0:13:45.680,0:13:52.080
we made it a part of the performance, also[br]deciding with the choreographers to
0:13:52.080,0:13:57.840
include that. And then for the rest, as[br]you see in C, we have the individual
0:13:57.840,0:14:07.360
objects, these blob objects that move[br]according to the EDA data and change colour
0:14:07.360,0:14:16.160
based on the heart rate information, so[br]the low to high frequency ratio. In B, you see
0:14:16.160,0:14:24.720
also these clouds. And there, similarly, the[br]size is related to the heart rate data.
0:14:24.720,0:14:33.120
And the movement again is EDA. And there's[br]also one scene in E where the dancers pick
0:14:33.120,0:14:39.760
one person in the audience and ask them to[br]come on stage. And then we will display
0:14:39.760,0:14:47.840
that audience member's data at large in the[br]back of the projection. And for the rest,
0:14:47.840,0:14:54.560
again, we're using this excitement data[br]from the heart rate and from the
0:14:54.570,0:15:07.200
electrodermal activity to change sizes and[br]colours. So, to come up with this design, we
0:15:07.200,0:15:14.000
went the co-design route, discussing with[br]the researchers, dancers, visual
0:15:14.000,0:15:20.320
designers, audio designers a couple of[br]times. And actually that's also how I got
0:15:20.320,0:15:27.840
involved first, because the initial ideas,[br]also from Moe, the primary designer of this
0:15:27.840,0:15:36.160
piece, were to somehow combine perception[br]and motion. And I had worked a bit in research
0:15:36.160,0:15:41.760
with eye tracking. So you see on the[br]screen the Pupil Labs eye tracker, which is
0:15:41.760,0:15:50.320
an open source eye tracking solution, and[br]also EOG (electro-oculography) glasses that
0:15:50.320,0:15:58.240
use the electric potential of your eyeballs[br]to detect, roughly, the eye motion.
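Blinks in an EOG signal appear as sharp spikes in the vertical channel, so a toy detector can simply threshold the sample-to-sample rise. The 80 µV threshold below is made up for illustration; real devices calibrate this per wearer:

```python
def detect_blinks(veog, rise=80.0):
    """Return sample indices where the vertical EOG jumps by at least
    `rise` microvolts from one sample to the next - a crude blink marker.
    Consecutive above-threshold samples count as one blink."""
    blinks = []
    in_blink = False
    for i in range(1, len(veog)):
        if veog[i] - veog[i - 1] >= rise:
            if not in_blink:
                blinks.append(i)
                in_blink = True
        else:
            in_blink = False
    return blinks

trace = [0, 2, 1, 150, 160, 5, 0, 1, 140, 3]   # two blink spikes
# detect_blinks(trace) == [3, 8]
```

Event series like these are also what the interpersonal-synchrony work mentioned next operates on: two people's blink or nod event trains are compared for temporal alignment.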
0:15:58.240,0:16:05.520
And we thought at the beginning, we want[br]to combine this, a person seeing the play
0:16:05.520,0:16:10.080
with the motions of the dancers and[br]understand that better. So that's kind of
0:16:10.080,0:16:21.760
how it started. The second inspiration for[br]this idea in the theatre came from a
0:16:21.760,0:16:29.200
visiting scholar, Jamie Ward, who came[br]over, and his work with the Flute Theatre
0:16:29.200,0:16:34.320
in London. That's an inclusive theatre[br]that also does Shakespeare
0:16:34.320,0:16:40.880
workshops. And he did some sensing just[br]with accelerometers and gyroscopes, or
0:16:40.880,0:16:47.018
inertial motion wristbands to detect[br]interpersonal synchrony between
0:16:47.018,0:16:53.360
participants in these workshops. And then[br]we thought, when he came over, we had a
0:16:53.360,0:16:59.552
small piece where we looked into this[br]interpersonal synchrony again in face to
0:16:59.552,0:17:04.160
face communications. I mean, now we are[br]remote and I'm just talking into a camera
0:17:04.160,0:17:08.960
and I cannot see anybody. But usually, in[br]a face to face conversation, which
0:17:08.960,0:17:15.040
doesn't happen too often anymore,[br]unfortunately, we would show some type of
0:17:15.040,0:17:20.560
synchronies or, you know, kind of eyeblink,[br]head nod and so on would synchronize with
0:17:20.560,0:17:24.880
the other person, if you're talking to[br]them. And we also showed, in small
0:17:24.880,0:17:30.240
recordings, that we[br]can recognize this in a wearable sensing
0:17:30.240,0:17:36.560
setup. So again, using some glasses and we[br]thought, why don't we try to scale that
0:17:36.560,0:17:42.400
up? Why don't we try and see what happens[br]in a theatre performance or in another
0:17:42.400,0:17:49.810
dance performance and see if we can[br]recognize also some type of synchrony. And
0:17:49.810,0:17:57.520
with a couple of ideation sessions and a[br]couple of test performances, also
0:17:57.520,0:18:04.880
including dancers trying out glasses and[br]other headwear (which turned out
0:18:04.880,0:18:10.480
not really usable for the dancers[br]during the performance), we came up with an
0:18:10.480,0:18:18.640
initial prototype that we tried out[br]in, I think, November 2018 or so, where
0:18:18.640,0:18:24.320
we used a couple of Pupil Labs and also[br]Pupil Invisible devices. These are nicer eye tracking
0:18:24.320,0:18:27.840
glasses, they are optical eye tracking[br]glasses, so they have small cameras in
0:18:27.840,0:18:34.080
them, distributed in the audience. A couple[br]of those Yoji glasses, which have
0:18:34.080,0:18:38.720
inertial motion sensors in them, so[br]accelerometer and gyroscope. And we had at
0:18:38.720,0:18:46.800
the time heart rate sensors. However, they[br]were fixed and wired to the system. And
0:18:46.800,0:18:53.360
also the dancers wore some wristbands[br]where we could record the motion data. And
0:18:53.360,0:18:59.920
then what we did in these cases: we[br]had projections on three frames on top
0:18:59.920,0:19:05.840
of the dancers. One was showing the blink[br]and the headnod synchronization of the
0:19:05.840,0:19:10.880
audience. The other one showed heart rate[br]and variability. And the third one just
0:19:10.880,0:19:17.280
showed raw feed from one of the eye[br]trackers. And it looked more or less like
0:19:17.280,0:19:23.200
this. And from a technical perspective, we[br]were surprised because it actually worked.
0:19:23.200,0:19:32.720
So we could stream around 10 glasses,[br]three eye trackers and four or five, I think,
0:19:32.720,0:19:40.000
heart rate sensors at the same time and the server[br]worked. However, from an audience
0:19:40.000,0:19:45.360
perspective, a lot of the feedback was the[br]audience didn't like that just some people
0:19:45.360,0:19:50.240
got singled out and got the device by[br]themselves and others could not really
0:19:50.240,0:19:54.560
contribute and could not also see the[br]data. And then also from a performance
0:19:54.560,0:19:59.120
perspective, the dancers didn't really[br]like that they couldn't interact with the
0:19:59.120,0:20:05.600
data. The dance piece also in this case[br]was pre-choreographed. So there was no
0:20:05.600,0:20:11.280
possibility for the dancers to really[br]interact with the data. And then also,
0:20:11.280,0:20:17.120
again, from an aesthetic perspective, we[br]really didn't like that the screens were
0:20:17.120,0:20:22.320
on top because either you would[br]concentrate on the screens or you would
0:20:22.320,0:20:28.160
concentrate on the dance performance. And[br]you had to kind of make a decision also on
0:20:28.160,0:20:33.360
what type of visualization you would focus[br]on. So overall, you know, kind of partly
0:20:33.360,0:20:40.000
okay, but still there were some troubles.[br]So one was definitely we wanted to include
0:20:40.000,0:20:48.560
all of the audience. Meaning we wanted to[br]have everybody participate. Then the
0:20:48.560,0:20:53.600
problem with that part was then also[br]having enough eye trackers or having
0:20:53.600,0:21:00.960
enough head worn devices was an issue. In[br]addition to that, you know, kind of, if
0:21:00.960,0:21:05.840
it's head worn some people might not like[br]it. The pandemic hadn't started yet. When
0:21:05.840,0:21:12.160
we did the recordings, however, there was[br]already some information
0:21:12.160,0:21:19.120
about the virus going around. So we didn't[br]really want to be
0:21:19.120,0:21:25.840
giving everybody some eyeglasses. So then[br]we moved to the heart rate and galvanic
0:21:25.840,0:21:33.440
skin response solution and the set up[br]where the projection is now part of the
0:21:33.440,0:21:38.320
stage. So we used the two walls, but we[br]also used, it's a little bit hard to see
0:21:38.320,0:21:45.360
in the images, but we also used the floor[br]as another projection surface for the
0:21:45.360,0:21:50.240
dancers to interact with and the main[br]interaction, actually came then over the
0:21:50.240,0:22:02.000
sound. So then moving over to the lessons[br]learned. So what did we take away from
0:22:02.000,0:22:15.280
that experience? The first part[br]was talking with the dancers and talking
0:22:15.280,0:22:21.280
with the audience: often, especially with[br]the more intricate, the more
0:22:21.280,0:22:27.920
abstract visualizations, it was sometimes[br]hard for them to interpret how their own data
0:22:27.920,0:22:33.760
would feed into that visualization. So,[br]you know, kind of some audience members
0:22:33.760,0:22:38.000
mentioned that at some point in time they were[br]not sure if they were influencing anything
0:22:38.000,0:22:44.801
or if it had an effect on other parts,[br]especially if they saw the live data. It
0:22:44.801,0:22:50.165
was kind of obvious. But for future work,[br]we really want to play more with the
0:22:50.165,0:22:57.197
agency and also perceived agency of the[br]audiences and the performers. And we also
0:22:57.197,0:23:02.714
really wonder: how can we measure these types[br]of feedback loops? Because now we have
0:23:02.714,0:23:07.331
these recordings. We looked also a little[br]bit more into the data, but it's hard to
0:23:07.331,0:23:16.384
understand. Were we successful? I think to[br]some extent maybe yes, because the
0:23:16.384,0:23:24.295
experience was fun. It was enjoyable. But[br]on this level of, did we really create
0:23:24.295,0:23:28.867
feedback loops and how do you evaluate[br]feedback loops, that's something that we
0:23:28.867,0:23:35.112
want to address in future work. On the[br]other hand, what was surprising, as I
0:23:35.112,0:23:42.054
mentioned before: the raw data was[br]something that the dancers as well as the
0:23:42.054,0:23:48.690
audience really liked. And that was[br]surprising for me because I thought we had
0:23:48.690,0:23:54.273
to hide that, more or less. But we had it[br]on, as I said, kind of as a debug view at
0:23:54.273,0:24:00.023
the beginning of some test screenings, and[br]audience members were interested in it and
0:24:00.023,0:24:05.927
could see and were talking about: "Oh, see[br]your heart rate is going up or your EDA is
0:24:05.927,0:24:11.327
going up." And the dancers also liked that.[br]We then used that in the
0:24:11.327,0:24:19.663
three performances that we made,[br]especially for scenes
0:24:19.663,0:24:25.468
where the dancers would interact directly[br]with parts of the audience. At the
0:24:25.468,0:24:32.984
beginning of the play there is a scene where the[br]dancers give out business cards to some
0:24:32.984,0:24:38.947
audience members. And it was fun to see[br]that some audience members could identify
0:24:38.947,0:24:44.520
themselves, other audience members would[br]identify somebody else that was sitting
0:24:44.520,0:24:50.264
next to them. And then this member had a[br]spike in EDA because of the surprise. So
0:24:50.264,0:24:55.286
there was really, you know, kind of some[br]interaction going on. So maybe, if
0:25:00.875,0:25:07.404
you're planning to do a similar event,[br]stay close to the raw data and keep the
0:25:07.404,0:25:13.534
latency low. I think that's quite important[br]for some types of these interactions. From
0:25:07.404,0:25:13.534
the dancers' side there was big interest: on[br]the one hand, they wanted to use the data
0:25:13.534,0:25:20.025
for reflection. So they really liked that[br]they had the printouts of the effects of
0:25:20.025,0:25:27.783
the audience later on. However, they also[br]wanted to dance more with biometric data
0:25:27.783,0:25:33.528
and also use that for their rehearsals[br]more. So, of course, you know, we had to
0:25:33.528,0:25:39.275
co-design, so we worked directly. We[br]showed the dancers the sensors and the
0:25:39.275,0:25:43.976
possibilities and then worked with them to[br]figure out what can work and what cannot
0:25:43.976,0:25:49.418
work and what might have an effect, what[br]might not have an effect. And then we did
0:25:49.418,0:25:55.479
some, as you saw, also some prototype[br]screenings and also some internal
0:25:55.479,0:26:02.361
rehearsals where we used some recorded[br]data. Also, a couple of
0:26:02.361,0:26:06.618
us were sitting in the audience. We got a[br]couple of other researchers and also
0:26:06.618,0:26:12.499
students involved to sit in the audience[br]to stream data. And we also worked a
0:26:12.499,0:26:19.970
little bit with prerecorded experiences[br]and also synthetic experiences, how we
0:26:19.970,0:26:25.631
envisioned that the data would move. But[br]still, it was not enough in terms of
0:26:25.631,0:26:32.169
providing an intuitive way to understand[br]what is going on, especially also for the
0:26:32.169,0:26:39.045
visualizations and the projections. They[br]were harder to interpret than the sound in
0:26:39.045,0:26:50.039
the sound sphere. And then the next, and[br]maybe the biggest point as well, is
0:26:50.039,0:26:55.704
sensor and feature best practices. So[br]we're still wondering, you know, what to
0:26:55.704,0:27:02.739
use. We're still searching: what kind of[br]sensing equipment can we use to relay
0:27:02.739,0:27:08.760
this invisible link between[br]audience and performers? How can we
0:27:08.760,0:27:15.032
augment that? We started out with the[br]perception and eye tracking part, we then
0:27:15.032,0:27:22.367
went to a wrist-worn device because it's[br]easier to maintain and it's also wireless.
0:27:22.367,0:27:30.089
And it worked quite well to stream 50 to[br]60 audience members for one of those
0:27:30.089,0:27:38.710
events to a wireless router and do the[br]recording, as well as the live
0:27:38.710,0:27:43.043
visualization with it. However, the[br]features might not have been.
0:27:43.043,0:30:42.963
audio failure
0:30:42.963,0:30:55.535
Okay. Sorry for the short part where it was[br]offline. So, we were talking about sensors,
0:30:55.535,0:31:01.853
features and best practices. So in this[br]case, we are still searching for the right
0:31:01.853,0:31:13.099
type of sensors and features to use for[br]this type of audience-performer
0:31:13.099,0:31:23.659
interaction. And we were using the[br]low frequency to high frequency ratio of
0:31:23.659,0:31:28.789
the heart rate values and also the[br]relative changes of the EDA. And that was
0:31:28.789,0:31:35.381
working, I would say not that well,[br]compared to other features that we now
0:31:35.381,0:31:41.974
found while looking into the performances[br]and the recorded data of the around 98
0:31:41.974,0:31:49.431
participants that agreed to share the data[br]with us, for these performances. And from
0:31:49.431,0:31:56.280
the preliminary analysis that Karen Han,[br]one of our researchers, is working on,
0:31:56.280,0:32:03.774
looking into what types of features are[br]indicative of changes in the performance.
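The pNN50 feature that comes up next is the fraction of successive R-R interval differences exceeding 50 ms; a minimal version, assuming R-R intervals in milliseconds:

```python
def pnn50(rr_ms):
    """pNN50: share of successive R-R interval differences above 50 ms.
    Higher values indicate more heart rate variability, i.e. a more
    relaxed state; lower values indicate stress."""
    if len(rr_ms) < 2:
        raise ValueError("need at least two R-R intervals")
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return sum(d > 50 for d in diffs) / len(diffs)

relaxed = [800, 870, 790, 860, 780]   # large beat-to-beat swings
tense = [700, 705, 702, 698, 703]     # metronome-like heartbeat
# pnn50(relaxed) == 1.0, pnn50(tense) == 0.0
```

Unlike the LF/HF ratio, this needs no spectral estimation, which may be part of why it behaved more robustly on the recorded data.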
0:32:03.774,0:32:10.899
It seems that a feature called pNN50, a[br]heart rate variability feature based on the
0:32:10.899,0:32:19.252
R-R intervals, is quite good. And[br]also the peak detection per minute using
0:32:19.252,0:32:25.349
the EDA data. So we're just counting the[br]relative changes, the relative up and
0:32:25.349,0:32:32.346
down, for the EDA. If you're interested[br]I'm happy to share the data with you. So
0:32:32.346,0:32:37.564
we have three performances, each[br]around an hour, and 98 participants in
0:32:37.564,0:32:45.658
total. And we have the heart rate data,[br]the EDA data, from the two fingers as well
0:32:45.658,0:32:54.257
as the motion data. We haven't[br]used the motion data at all, except for
0:32:54.257,0:33:00.384
filtering the EDA and[br]heart rate data a little bit, because if you're moving a
0:33:00.384,0:33:06.606
lot, you will have some errors and some[br]problems, some motion artifacts in it. But
0:33:06.606,0:33:15.370
what do I mean? Why is the pNN50 or why[br]is the EDA peak detection so nice? Let's
0:33:15.370,0:33:20.679
look a little bit closer into the data.[br]And here you see I just highlighted
0:33:20.679,0:33:31.073
performance three from the previous plots.[br]You see the pNN50 scale on the left side; the
0:33:31.073,0:33:40.491
blue line gives you the average of the[br]pNN50 value. So this is the R-R interval
0:33:40.491,0:33:47.813
related heart rate variability feature and[br]that feature is especially related to
0:33:47.813,0:33:54.658
relaxation and also to stress. So usually[br]a higher pNN50 value means you're more
0:33:54.658,0:34:00.991
relaxed, and a lower value means
0:34:00.991,0:34:07.528
that you are more stressed out. So what happens
0:34:07.528,0:34:12.777
now in the performance is something that[br]fits very, very well and correlates with
0:34:12.777,0:34:18.863
the intention of the choreographer.[br]Because the first half of the performance,
0:34:18.863,0:34:27.264
you see section one, two, three, four,[br]five and six on the bottom. The first half
0:34:27.264,0:34:32.062
of the performance is to create a conflict[br]in the audience and to stir them up a
0:34:32.062,0:34:39.955
little. So, for example, the business[br]card scene is part of that, or also
0:34:39.955,0:34:47.823
the scene where somebody gets brought from[br]the audience to the stage and joins the
0:34:47.823,0:34:53.640
performance is also part of that, whereas[br]the latter part is more about reflection
0:34:53.640,0:34:59.347
and also relaxation, taking in what you[br]experienced in the first part. And that's
0:34:59.347,0:35:03.623
something that you actually see quite nicely[br]in the pNN50. So at the beginning it's
0:35:03.623,0:35:10.410
rather low. That means the audience is[br]slightly tense versus in the latter part
0:35:10.410,0:35:17.999
they are more relaxed. Similarly, the EDA at[br]the bottom as a bar chart gives you an
0:35:17.999,0:35:23.559
indication of a lot of peaks happening at[br]specific points. And these points
0:35:23.559,0:35:31.316
correlate very well to memorable scenes in[br]the performance. For example,
0:35:31.316,0:35:36.329
section four, the red one,[br]is the one where somebody from the
0:35:36.329,0:35:41.592
audience gets brought onto the stage.[br]Where is this? I think around minute
0:35:41.592,0:35:52.512
twelve, there is a scene where the dancers[br]hand out business cards. And that's
0:35:52.512,0:35:56.159
also something, I think. So it's[br]promising, we're definitely not there yet
0:35:56.159,0:36:01.840
from the data analysis part, but there are[br]some interesting things to see. And that
0:36:01.840,0:36:11.232
kind of brings me back to the starting[br]point. So I think, it was an amazing
0:36:11.232,0:36:16.420
experience actually, working with a lot of[br]talented people on that and the
0:36:16.420,0:36:22.296
performance was a lot of fun, but we are[br]slowly moving towards putting the audience
0:36:22.296,0:36:28.499
on stage and trying to break the fourth[br]wall, I think, with these type of setups.
0:36:28.499,0:36:36.073
And that leads me then also to the end of[br]the talk, where I just have to give a
0:36:36.073,0:36:42.500
shout-out to the people who did the actual[br]work. So all of the talented performers
0:36:42.500,0:36:49.629
and the project lead, especially Moe who[br]organized and was also the link between
0:36:49.629,0:36:56.360
the artistic side and the dancers with[br]Mademoiselle Cinema and us, as well as the
0:36:56.360,0:37:04.953
choreographer Ito-san. And yeah, I hope I[br]didn't miss anybody. So that's it. So
0:37:04.953,0:37:13.965
thanks a lot for this opportunity to[br]introduce this work to you. And now I'm
0:37:13.965,0:37:21.338
open for a couple of questions, remarks. I[br]wanted to also host a self organized
0:37:21.338,0:37:25.715
session sometime. I haven't really gotten[br]the link or anything, but I'll probably
0:37:25.715,0:37:32.630
just post something on Twitter or in one[br]of the chats if you want to stay in
0:37:32.630,0:37:38.661
contact. I'll try to get two or three[br]researchers also to join. I know George,
0:37:38.661,0:37:44.260
who was working on the hardware, and[br]Karen, who worked on the visualizations,
0:37:44.260,0:37:53.072
the data analysis, might be available. And[br]if you're interested in that, just send me an
0:37:53.072,0:37:59.970
email, or check; maybe I'll also add it[br]to the blog post or so if I get the link
0:37:59.970,0:38:05.339
later. So, yeah. Thanks a[br]lot for the attention.
0:38:08.548,0:38:16.686
Herald: Thanks, Kai, for this nice talk.[br]For the audience, please excuse us for the
0:38:16.686,0:38:22.028
small disruption of service we had here.[br]We're a little bit late already, but I
0:38:22.028,0:38:26.560
think we still have time for a question or[br]so. Unfortunately, I don't see anything
0:38:26.560,0:38:31.610
here online at the moment. So if[br]somebody tried to pose a question and
0:38:31.610,0:38:36.714
there was also disruption of service, I[br]apologize beforehand for that. On the
0:38:36.714,0:38:43.192
other hand now, Kai, you talked about data[br]sharing. So how can the data be accessed?
0:38:43.192,0:38:47.836
Do people need to contact you or drop[br]you a mail or personal message?
0:38:47.836,0:38:54.307
Kai: Yeah, so right now, the publication is
0:38:54.307,0:38:59.600
just accepted, and there are also some[br]issues actually, a little bit of some
0:38:59.600,0:39:03.307
rights issues or so on. So the[br]easiest part is just to send me a mail.
0:39:03.307,0:39:14.380
It will be posted sometime next year[br]on a more public website. But the easiest
0:39:14.380,0:39:20.360
is just to send me a mail. There are already[br]a couple of people working on it and we
0:39:20.360,0:39:25.560
have the rights to share it. It's just a little[br]bit of a question of setting it up.
0:39:25.560,0:39:31.540
I wanted to have the website also online[br]before the talk, but yeah, as with the
0:39:31.540,0:39:35.320
technical difficulties and so on, everything[br]is a little bit harder this year.
0:39:35.320,0:39:43.060
Herald: Indeed. Indeed. Thanks,[br]guys. Yes, I'd say that's it for this
0:39:43.060,0:39:49.460
session. Thank you very much again for[br]your presentation. And I'll switch back to
0:39:49.460,0:39:53.087
the others.
0:39:53.087,0:39:58.301
postroll music
0:39:58.301,0:40:33.000
Subtitles created by c3subtitles.de[br]in the year 2020. Join, and help us!