rC3 preroll music
Herald: Now, imagine a stage with an
artist performing in front of a crowd.
Is there a way to measure and even quantify
the show's impact on the spectators?
Kai Kunze is going to address this
question in his talk Boiling Mind now.
Kai, up to you.
Kai: Thanks a lot for the introduction,
but first we have a short video. I hope
it can be played right now.
intense electronic staccato music
music shifts to include softer piano tones
music shifts again to include harp-like tones
music keeps gently shifting
longer drawn out, slowly decreasing pitch
shift towards slow, guitar-like sounds
with light crackling noises
music getting quieter, softer
and fades away
inaudible talking
Kai: So thanks a lot for the intro. This
is the Boiling Mind talk, on linking
physiology and choreography. I just started
off with this short video, which should
give you an overview of the experience
of this dance performance that we
staged in Tokyo at the beginning of the
year, just before the lockdown, actually.
And the idea behind this was: we wanted to
put the audience on stage, breaking the
fourth wall. We tried to use physiological
sensing in the audience, and that change
is then reflected on stage through the
projection, lights and also audio to
influence the dancers and performers, and
then, of course, fed back again to
the audience, creating an augmented
feedback loop. In this talk today, I just
want to give you a small overview, a
little bit about the motivation, why I
thought it's a nice topic for the remote
experience from the Chaos Computer Club
and also a little bit more about the
concept, the set up and the design
iterations, as well as the lessons
learned. For me, giving this talk
seemed a good way to exchange
expertise and to find a couple of people
who might be interested in the next
iterations, because I think we are still
not done with this work; it's still
kind of work in progress. It's also a way
to share data, to do some explorative
data analysis on the recorded performances
that we have. And then, most important: I
wanted to find a more creative way to
use and explore physiological data,
because for me as a researcher
working on wearable computing and activity
recognition, often we just look into
recognizing or predicting certain motions
or certain mental states.
And that, at least for simple things,
feeds back into these very - I think -
idiotic or stupid ideas of surveillance
applications and use cases.
So can we create more intuitive ways
to use physiological data?
So from a concept perspective, I think the
video gave a good overview of what we
tried to create. However,
what we did in three performances was: we used
physiological sensors on all audience
members. So for us, it was important that
we were not singling out individual people
to get feedback from just them, but had
the whole response, the whole physiological
state of the audience as an input to the
performance. In that case, we actually
used heart rate variability and also
galvanic skin response as inputs.
And these inputs then changed the projection
that you could see, the lights - especially
the intensity of the lights - and also
the sound. And that, again, then led to
changes in the dancing behavior of the
performers.
For the sensing, we went with a wearable
setup, in this case a fully wireless
wristband, because we wanted something
that is easy to wear and easy to put on
and take off. We went through a couple of
iterations and then decided to sense
electrodermal activity and also heart
activity, because there's related work
that links these sensors to engagement,
stress and also excitement
measures.
measures. And the question then was also
where to sense it first. We went with a
couple of wrist bands and also kind of
commercial approaches or half-commercial
approaches. However, the sensing quality
was just not good enough, especially from
the wrist. You cannot really get a good
electrodermal activity, so galvanic skin
response. It's more or less a sweat
sensor. So that means that you can detect
if somebody is sweating and some of the
sweat is actually then related to a stress
response. And in that case, there are a
couple of ways to measure that. So it
could be on the lower part of your hand or
also on the fingers. These are usually the
best positions. So we used the fingers.
Over the fingers we can also get heart rate
activity. In addition to that, there's
also a small motion sensor, a gyro and an
accelerometer, in the wristband. We haven't
used that for the performance right now, but
we still have those recordings from the
audience as well. When I say we, I mean
especially George and Dingding,
two researchers who work with me, who
actually took care of the designs.
So then the question was also how to
map it to the environment or the staging.
In this case, actually, this was done
by a different team,
the Embodied Media team
at KMD.
So I know a little bit about it,
but I'm definitely not an expert.
And for the initial design we
thought we use the EDA for the movement
speed of the projection. So the EDA rate
of change is matched to movement of these
blobs that you could see or also the meshs
that you can see and the color represents
the heart rate. We went for the LFHF
feature that's low frequency, high
frequency ratio and should give you,
according to related work, some indication
about excitement. For the lights: the
lights were also bound to the heart rate,
in this case, the beats per minute, and
they were matched to intensity. So if the
beats per minute of the audience go
collectively up, the light gets brighter,
otherwise, it's dimmer. For the audio: we
had an audio designer that cared about
sounds and faded in and faded out specific
sounds also related to the EDA to the
relative rate of change of the electro-
dermal activity. All this happened while
the sensors were connected over sensing
server in QT to touch designer software
that generated this type of projections.
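To make that pipeline concrete: below is a minimal sketch of what such a mapping layer could look like. It is an illustration, not the production code; the actual system used the Qt sensing server described above, while this sketch assumes aggregated audience features arrive as plain Python values and are forwarded as OSC messages, a protocol TouchDesigner can receive. All OSC addresses and thresholds here are hypothetical.

```python
# Illustrative sketch only - not the actual Boiling Mind pipeline code.
# Assumes: pip install numpy scipy python-osc
import numpy as np
from scipy.signal import welch
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # hypothetical TouchDesigner OSC input

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over [lo, hi) Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[band], freqs[band])

def lf_hf_ratio(rr_seconds):
    """LF/HF ratio of the R-R interval series: power in 0.04-0.15 Hz
    over power in 0.15-0.4 Hz, a common excitement/arousal proxy."""
    t = np.cumsum(rr_seconds)
    grid = np.arange(t[0], t[-1], 0.25)   # resample to a uniform 4 Hz grid
    rr = np.interp(grid, t, rr_seconds)
    freqs, psd = welch(rr, fs=4.0, nperseg=min(256, len(rr)))
    hf = band_power(freqs, psd, 0.15, 0.40)
    return band_power(freqs, psd, 0.04, 0.15) / hf if hf > 0 else 0.0

def send_frame(eda_rate_of_change, mean_bpm, rr_seconds):
    """Forward one frame of aggregated audience features to the visuals."""
    # EDA rate of change drives how fast the blobs/meshes move.
    client.send_message("/blobs/speed", float(eda_rate_of_change))
    # The LF/HF feature of the heart data drives the colour.
    client.send_message("/blobs/colour", float(lf_hf_ratio(rr_seconds)))
    # Collective beats per minute drives the light intensity:
    # BPM going up collectively means brighter light, otherwise dimmer.
    client.send_message("/lights/intensity", float(mean_bpm))
```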
The music also got fed in, and that
was then controlling the feedback
to the dancers. If you want
a bit more detail, I uploaded the
work-in-progress preprint, a draft
of an accepted TEI paper. So in case you are
interested in the mappings and the design
decisions for the projections, there is
a little bit more information there.
I'm also happy later on to answer those
questions. However, I will probably just
forward them to the designers who worked
on them. And then, for the overall
performance, what happened was: we started
out with an explanation of the experience.
It was already advertised as a performance
that would take in electrodermal
activity and heartbeat activity,
so people who bought tickets or came to
the event already had a little bit of
background information. We, of course,
also made sure that we explained at the
beginning what type of sensing we would be
using, and also what the risks and problems
with these types of sensors and data
collection are. Then the audience could
decide, with informed consent, if they
just wanted to stream the data, didn't
want to do anything, or wanted to stream
and also contribute the data anonymously
to our research. And then when the performance
started, we had a couple of pieces and
parts. One is something that you can see in
B, where we showed the live data feed from
all of the audience members in individual
tiles. We had that in there before just for
debugging, but the audience actually liked
it. And so we made it a part of the
performance, deciding together with the
choreographers to include it. And then for the rest, as
you see in C, we have the individual
objects, these blob objects that move
according to the EDA data and change colour
based on the heart rate information, so
the low- to high-frequency ratio. In B, you
also see these clouds, and similarly, the
size is related to the heart rate data
and the movement again to the EDA. There's
also one scene in E where the dancers pick
one person in the audience and ask them to
come on stage, and then we display
that audience member's data at large in the
back of the projection. And for the rest,
again, we're using this excitement data
from the heart rate and from the
electrodermal activity to change sizes and
colours. So, to come up with this design, we
went the co-design route, discussing with
the researchers, dancers, visual
designers and audio designers a couple of
times. And actually that's also how I got
involved first, because the initial ideas
from Moe, the primary designer of this
piece, were to somehow combine perception
and motion. And I had worked a bit in
research with eye tracking. So you see on
the screen the Pupil Labs eye tracker,
which is an open source eye tracking
solution, and also EOG, electro-oculography,
glasses that use the electric potential of
your eyeballs to roughly detect eye motion.
And we thought at the beginning that we
wanted to combine this, a person seeing the
play, with the motions of the dancers, and
understand that better. So that's kind of
how it started. The second inspiration for
this idea in the theatre came from a
visiting scholar, Jamie Ward, who came
over, and his work with the Flute Theatre
in London. That's an inclusive theatre
that also does Shakespeare
workshops. He had done some sensing just
with accelerometers and gyroscopes,
inertial motion wristbands, to detect
interpersonal synchrony between
participants in these workshops. And then
when he came over, we did a
small piece where we looked into this
interpersonal synchrony again, in face-to-
face communication. I mean, now we are
remote and I'm just talking into a camera
and I cannot see anybody. But usually, if
you had a face-to-face conversation,
which doesn't happen too often anymore,
unfortunately, you would show some types
of synchrony: eyeblinks,
head nods and so on would synchronize with
the other person you're talking to.
And we also showed, in small
recordings, that we
can recognize this in a wearable sensing
setup, so again using some glasses. And we
thought, why don't we try to scale that
up? Why don't we see what happens
in a theatre performance or a
dance performance, and whether we can
recognize some type of synchrony there?
With a couple of ideation sessions and a
couple of test performances, including
dancers trying out glasses and
other headwear - which turned out
not really usable for the dancers
during the performance - we came up with an
initial prototype that we tried out,
I think in November 2018 or so, where
we used a couple of Pupil Labs and also
Pupil Invisible glasses - these are nicer,
optical eye tracking glasses,
with small cameras in
them - distributed in the audience; a couple
of those JINS MEME glasses, which also have
inertial motion sensors in them, so
accelerometer and gyroscope; and, at the
time, heart rate sensors - however, those
were fixed and wired to the system. And
also the dancers wore some wristbands
where we could record the motion data. And
what we did in this case was: we
had projections on three frames on top
of the dancers. One was showing the blink
and head-nod synchronization of the
audience. The second showed heart rate
and heart rate variability. And the third
just showed the raw feed from one of the
eye trackers. And it looked more or less
like this. From a technical perspective,
we were surprised because it actually
worked. We could stream around 10 glasses,
three eye trackers and four or five, I
think, heart rate sensors at the same time, and the server
worked. However, from an audience
perspective, a lot of the feedback was that
the audience didn't like that just some
people got singled out and got a device
while others could not really
contribute and could not see the
data. From a performance
perspective, the dancers didn't really
like that they couldn't interact with the
data; the dance piece in this case
was pre-choreographed, so there was no
possibility for the dancers to really
interact with the data. And then, again,
from an aesthetic perspective, we
really didn't like that the screens were
on top, because either you would
concentrate on the screens or you would
concentrate on the dance performance,
and you had to make a decision about
what type of visualization to focus
on. So overall, you know, partly
okay, but still there were some troubles.
So one was definitely that we wanted to
include all of the audience, meaning we
wanted to have everybody participate. The
problem with that was that
having enough eye trackers,
enough head-worn devices, was an issue. In
addition to that, if
it's head-worn, some people might not like
it. The pandemic hadn't started yet when
we did the recordings; however, there was
already some information
about the virus going around, so we didn't
really want to give everybody
eyeglasses to wear. So then
we moved to the heart rate and galvanic
skin response solution, and the setup
where the projection is now part of the
stage. We used the two walls, but,
although it's a little bit hard to see
in the images, we also used the floor
as another projection surface for the
dancers to interact with, and the main
interaction actually came then over the
sound. So then, moving over to the lessons
learned. So what did we take away from
that experience? The first part
came from talking with the dancers and
with the audience: especially with the
more intricate, the more abstract
visualizations, it was sometimes hard for
them to interpret how their own data fed
into the visualization. Some audience
members mentioned that at some points in
time they were not sure if they were
influencing anything or if it had an
effect on other parts; if they saw the
live data, it was kind of obvious. But for
future work, we really want to play more
with the agency and also perceived agency
of the audience and the performers. And we also
really wonder: how can we measure these
types of feedback loops? Because now we
have these recordings, and we also looked
a little bit more into the data, but it's
hard to understand: were we successful? To
some extent maybe yes, because the
experience was fun, it was enjoyable. But
on the level of: did we really create
feedback loops, and how do you evaluate
feedback loops? That's something that we
want to address in future work. On the
other hand, what was surprising, as I
mentioned before: the raw data was
something that the dancers as well as the
audience really liked. And that was
surprising for me, because I thought we
had to hide that, more or less. But we had
it on, as I said, as kind of a debug view
at the beginning of some test screenings,
and audience members were interested in it
and were talking about it: "Oh, see,
your heart rate is going up, or your EDA is
going up." And the dancers also liked
that. So we used it in the
three performances that we then
made, especially for scenes
where the dancers would interact directly
with parts of the audience. At the
beginning of the play there is a scene
where the dancers give out business cards
to some audience members. And it was fun
to see that some audience members could
identify themselves, and other audience
members would identify somebody else who
was sitting next to them, and then this
member had a spike in EDA because of the
surprise. So there was really some
interaction going on. So maybe: if
you're planning to do a similar event,
staying close to the raw data, and also
low latency, is, I think, quite important
for some of these types of interactions. From
the dancers there was a big interest: on
the one side, they wanted to use the data
for reflection, so they really liked that
they had the printouts of the effects on
the audience later on. However, they also
wanted to dance more with the biometric
data and use it more for their rehearsals.
So, of course, we had to
co-design, so we worked directly with
them. We showed the dancers the sensors
and the possibilities and then worked with
them to figure out what can and cannot
work, and what might or
might not have an effect. And then we did,
as you saw, some prototype
screenings and some internal
rehearsals where we used recorded
data. A couple of us
were sitting in the audience, and we got a
couple of other researchers and
students involved to sit in the audience
and stream data. We also worked a
little bit with prerecorded experiences
and synthetic experiences of how we
envisioned the data would move. But
still, it was not enough in terms of
providing an intuitive way to understand
what is going on, especially for the
visualizations and the projections: they
were harder to interpret than the sound,
the soundscape. And then the next, and
the biggest point maybe as well, is the
sensors and the feature best practices.
We're still wondering what to
use, still searching: what kind of
sensing equipment can we use to relay
this invisible link between
audience and performers? How can we
augment that? We started out with the
perception and eye tracking part; we then
went to a wrist-worn device because it's
easier to maintain and it's also wireless.
And it worked quite well to stream 50 to
60 audience members for one of those
events to a wireless router and do the
recording, as well as the live
visualization, with it. However, the
features might not have been...
Audio Failure
Okay. Sorry for the short part where it
was offline. So, we were talking about
sensors, features and best practices. In
this case, we are still searching for the
right type of sensors and features to use
for this type of audience-performer
interaction. We were using the
low-frequency to high-frequency ratio of
the heart rate values and also the
relative changes of the EDA. And that was
working, I would say, not that well
compared to other features that we have
now found while looking into the
performances and the recorded data of the
around 98 participants who agreed to share
the data with us for these performances.
From the preliminary analysis that Karen
Han, one of our researchers, is working
on, looking into what types of features
are indicative of changes in the performance:
it seems that a feature called pNN50,
which is related to heart rate
variability, to the R-R intervals, is
quite good, and also the peak detection
per minute on the EDA data, where we're
just counting the relative changes, the
relative ups and downs, of the EDA. If
you're interested, I'm happy to share the
data with you. We have three performances,
each around an hour, and 98 participants
in total. And we have the heart rate data
and the EDA data from the two fingers, as
well as the motion data. We haven't
used the motion data at all, except for
filtering the EDA and heart rate data a
little, because if you're moving a
lot, you will have some errors, some
problems, some motion artifacts, in it.
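The talk does not detail that filtering step, but as a rough sketch: one simple approach is to mask out samples wherever the wristband's accelerometer shows strong movement. All thresholds below are made up for illustration, not the values used for the Boiling Mind recordings.

```python
# Illustrative sketch of motion-artifact masking - thresholds are placeholders.
import numpy as np

def mask_motion_artifacts(eda, accel_xyz, fs, threshold_g=0.5, pad_s=1.0):
    """Invalidate EDA samples where the accelerometer magnitude deviates
    strongly from 1 g, i.e. where the wearer is moving a lot."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)   # accel_xyz: shape (n, 3), in g
    moving = np.abs(magnitude - 1.0) > threshold_g  # crude movement flag
    # Widen the mask so artifacts at movement onsets/offsets are covered too.
    pad = int(pad_s * fs)
    kernel = np.ones(2 * pad + 1)
    moving = np.convolve(moving.astype(float), kernel, mode="same") > 0
    cleaned = np.asarray(eda, dtype=float).copy()
    cleaned[moving] = np.nan                        # drop, or interpolate later
    return cleaned
```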
But what do I mean when I say the pNN50 or
the EDA peak detection is so nice? Let's
look a little bit closer into the data.
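For reference: pNN50 is conventionally defined as the percentage of successive R-R interval pairs that differ by more than 50 ms. A minimal sketch of both features follows; scipy's generic peak finder stands in for whatever EDA peak detector was actually used, and the prominence threshold is a placeholder.

```python
# Illustrative sketch of the two features - not the project's analysis code.
import numpy as np
from scipy.signal import find_peaks

def pnn50(rr_ms):
    """pNN50: percentage of successive R-R intervals differing by > 50 ms."""
    diffs = np.abs(np.diff(rr_ms))
    return 100.0 * np.mean(diffs > 50.0)

def eda_peaks_per_minute(eda, fs, prominence=0.05):
    """Count the relative ups and downs of the EDA signal, per minute.
    The prominence threshold is a placeholder, not a validated value."""
    peaks, _ = find_peaks(eda, prominence=prominence)
    minutes = len(eda) / fs / 60.0
    return len(peaks) / minutes
```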
And here you see, I just highlighted
performance three from the previous plots.
You see the pNN50 scale on the left side;
the blue line gives you the average
pNN50 value. So this is the R-R-interval-
related heart rate variability feature,
and that feature is especially related to
relaxation and also to stress: usually
a higher pNN50 value means you're more
relaxed, and a lower value means
that you are more stressed out. What
happens now in the performance is
something that fits very, very well and
correlates with the intention of the
choreographer.
You see sections one, two, three, four,
five and six on the bottom. The first half
of the performance is meant to create a
conflict in the audience and to stir them
up a little - for example, the business
card scene is part of that, and also
the scene where somebody gets brought from
the audience to the stage and joins the
performance - whereas the latter part is
more about reflection and also relaxation,
taking in what you
experienced in the first part. And that's
something that you actually see quite
nicely in the pNN50: at the beginning it's
rather low, which means the audience is
slightly tense, whereas in the latter part
they are more relaxed. Similarly, the EDA at
the bottom as a bar chart gives you an
indication of a lot of peaks happening at
specific points. And these points
correlate very well to memorable scenes in
the performance. Section four, the red
one, is the one where somebody from the
audience gets brought onto the stage.
And around minute
twelve there is a scene where the dancers
hand out business cards. So it's
promising; we're definitely not there yet
with the data analysis part, but there are
some interesting things to see. And that
kind of brings me back to the starting
point. So I think it was an amazing
experience actually, working with a lot of
talented people on that, and the
performance was a lot of fun. And we are
slowly moving towards putting the audience
on stage and trying to break the fourth
wall, I think, with these types of setups.
And that leads me then also to the end of
the talk, where I just have to do a
shout-out to the people who did the actual
work: all of the talented performers
and the project lead, especially Moe, who
organized everything and was also the link
between the artistic side, the dancers of
Mademoiselle Cinema, and us, as well as
the choreographer Ito-san. And yeah, I
hope I didn't miss anybody. So that's it. So
thanks a lot for this opportunity to
introduce this work to you. And now I'm
open for a couple of questions and
remarks. I also wanted to host a
self-organized session sometime. I haven't
really gotten the link or anything, but
I'll probably just post something on
Twitter or in one of the chats if you want
to stay in contact. I'll try to get two or
three researchers to join as well; I know
George, who was working on the hardware,
and Karen, who worked on the
visualizations and the data analysis,
might be available. And if you're
interested in that, just send me an email,
or check - maybe I'll just add it
to the blog post or so, if I get the link
later. So, yeah. Thanks a
lot for your attention.
Herald: Thanks, Kai, for this nice talk.
For the audience, please excuse us for the
small disruption of service we had here.
We're a little bit late already, but I
think we still have time for a question or
so. Unfortunately, I don't see anything
here online at the moment. So if
somebody tried to pose a question and
there was a disruption of service there
too, I apologize for that. Now, Kai, on
the other hand, you talked about data
sharing. So how can the data be accessed?
Do people need to contact you, or drop
you a mail or a personal message?
Kai: Yeah, so right now, no - the
publication is just accepted, and there
are also some issues actually, a little
bit of some rights issues and so on. So
the easiest way is just to send me a mail.
It will be posted sometime next year
on a more public website, but the easiest
is just to send me a mail. There are
already a couple of people working on it,
and we have the rights to share it; it's
just a little bit a question of setting it
up. I wanted to have the website online
before the talk as well, but yeah, with
the technical difficulties and so on,
everything is a little bit harder this
year.
Herald: Indeed. Indeed. Thanks,
guys. Yes, I'd say that's it for this
session. Thank you very much again for
your presentation. And I'll switch back to
the others.
postroll music
Subtitles created by c3subtitles.de
in the year 2020. Join, and help us!