#rC3 - Boiling Mind

  • 0:00 - 0:13
    rC3 postroll music
  • 0:13 - 0:19
    Herald: Now, imagine a stage with an
    artist performing in front of a crowd.
  • 0:19 - 0:25
    Is there a way to measure and even quantify
    the show's impact on the spectators?
  • 0:25 - 0:30
    Kai Kunze is going to address this
    question in his talk Boiling Mind now.
  • 0:30 - 0:33
    Kai, up to you.
  • 0:33 - 0:39
    Kai: Thanks a lot for the introduction,
    but we have a short video. I hope
  • 0:39 - 0:48
    that can be played right now.
  • 0:48 - 1:09
    intense electronic staccato music
  • 1:09 - 1:28
    music shifts to include softer piano tones
  • 1:28 - 2:05
    music shifts again to include harp-like tones
  • 2:05 - 2:11
    music keeps gently shifting
  • 2:11 - 2:25
    longer drawn out, slowly decreasing pitch
  • 2:25 - 2:40
    shift towards slow, guitar-like sounds
  • 2:40 - 3:04
    with light crackling noises
  • 3:04 - 3:25
    music getting quieter, softer
  • 3:25 - 3:36
    and fades away
  • 3:58 - 4:15
    inaudible talking
  • 4:15 - 4:22
    Kai: So thanks a lot for the intro and
    this is the Boiling Mind talk on linking
  • 4:22 - 4:30
    physiology and choreography. I just started
    off with this short video, that could
  • 4:30 - 4:37
    give you an overview of the experience
    of this dance performance that we
  • 4:37 - 4:45
    staged in Tokyo beginning of the year,
    just before the lockdown, actually.
  • 4:45 - 4:53
    And the idea behind this was: we wanted to
    put the audience on a stage. So breaking the
  • 4:53 - 5:00
    fourth wall. Trying to use physiological
    sensing in the audience. And that change
  • 5:00 - 5:09
    then is reflected on stage via the
    projection, lights and also audio to
  • 5:09 - 5:15
    influence the dancers and performers and
    then, of course, feed them back again to
  • 5:15 - 5:23
    the audience. So creating an augmented
    feedback loop. In this talk today, I just
  • 5:23 - 5:28
    want to give you a small overview, a
    little bit about the motivation, why I
  • 5:28 - 5:36
    thought it's a nice topic for the remote
    experience from the Chaos Computer Club
  • 5:36 - 5:41
    and also a little bit more about the
    concept, the set up and the design
  • 5:41 - 5:49
    iterations, as well as the lessons
    learned. So for me to give this talk,
  • 5:49 - 5:56
    I thought it's a good way to exchange
    expertise and get a couple of people that
  • 5:56 - 6:02
    might be interested in the next
    iterations, because I think we are still
  • 6:02 - 6:07
    not done with this work. So it's still
    kind of work in progress. And also a way
  • 6:07 - 6:12
    to share data. So to do some explorative
    data analysis on the recorded performances
  • 6:12 - 6:19
    that we have. And then most important: I
    wanted to create a more creative way to
  • 6:19 - 6:26
    use physiological data and explore it,
    because also for me as a researcher
  • 6:26 - 6:32
    working on wearable computing or activity
    recognition, often we just look into
  • 6:32 - 6:39
    recognizing or predicting certain motions
    or certain mental states.
  • 6:39 - 6:48
    And that kind of, at least for simple things,
    feeds back into these very - I think -
  • 6:48 - 6:55
    idiotic or stupid ideas of surveillance and
    application use cases.
  • 6:55 - 7:01
    So can we create more intuitive ways
    to use physiological data?
  • 7:01 - 7:04
    So from a concept perspective, I think the
  • 7:04 - 7:11
    video gave a good overview of what we
    tried to create. However,
  • 7:11 - 7:18
    what we did in three performances was: We used
    physiological sensors on all audience
  • 7:18 - 7:23
    members. So for us, it was important that
    we are not singling out individual people
  • 7:23 - 7:30
    to just get feedback from them, but have
    the whole response, the whole physiological
  • 7:30 - 7:37
    state of the audience as an input to the
    performance. In that case, we actually
  • 7:37 - 7:46
    used heart rate variability and also
    galvanic skin response as inputs.
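To make this aggregation idea concrete: a minimal sketch, with hypothetical names and not the production code, of how per-member readings could be pooled into one audience-level value so that no individual is singled out.

```python
# Minimal sketch (not the production code): pooling per-member physiological
# readings into one audience-level value, so no individual is singled out.
from statistics import mean

def audience_feature(per_member_values):
    """Average one feature (e.g. heart rate in BPM or an EDA change rate)
    over all audience members that currently deliver a valid sample."""
    valid = [v for v in per_member_values if v is not None]
    return mean(valid) if valid else 0.0

# Example with hypothetical heart rates from a few wristbands:
print(audience_feature([72.0, 88.5, None, 95.2]))  # ~85.2 BPM for the whole audience
```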
  • 7:46 - 7:52
    And these inputs then changed the projection
    that you could see. The lights, especially
  • 7:52 - 7:58
    the intensity of the lights and also
    the sound. And that, again, then led to
  • 7:58 - 8:05
    changes in the dancing behavior of the
    performers.
  • 8:05 - 8:11
    For the sensing, we went with a wearable
    setup,
  • 8:11 - 8:19
    so in this case a fully wireless wristband,
    because we wanted to do something that is
  • 8:19 - 8:25
    easy to wear and easy to put on and take
    off. We had a couple of iterations on that
  • 8:25 - 8:33
    and we then decided to sense electrodermal
    activity and also heart activity,
  • 8:33 - 8:40
    because there's some
    related work that links these signals to
  • 8:40 - 8:46
    engagement, stress and also excitement
    measures. And the question then was also
  • 8:46 - 8:53
    where to sense it. First, we went with a
    couple of wristbands and also kind of
  • 8:53 - 8:58
    commercial approaches or half-commercial
    approaches. However, the sensing quality
  • 8:58 - 9:04
    was just not good enough, especially from
    the wrist. You cannot really get a good
  • 9:04 - 9:09
    electrodermal activity signal, so galvanic skin
    response. It's more or less a sweat
  • 9:09 - 9:19
    sensor. So that means that you can detect
    if somebody is sweating and some of the
  • 9:19 - 9:26
    sweat is actually then related to a stress
    response. And in that case, there are a
  • 9:26 - 9:30
    couple of ways to measure that. So it
    could be on the lower part of your hand or
  • 9:30 - 9:35
    also on the fingers. These are usually the
    best positions. So we used the fingers.
  • 9:35 - 9:43
    Over the fingers we can also get heart rate
    activity. And in addition to that, there's
  • 9:43 - 9:48
    also a small motion sensor, so a gyro and an
    accelerometer in the wristband. We haven't
  • 9:48 - 9:54
    used that for the performance right now, but
    we still have the recordings also from the
  • 9:54 - 10:00
    audience for that. When I say we, I mean
    George especially and also Dingding,
  • 10:00 - 10:05
    two researchers that work with me, who
    actually took care of the designs.
  • 10:05 - 10:12
    So then the question was also how to
    map it to the environment or the staging.
  • 10:12 - 10:17
    In this case, actually, this was done
    by a different team,
  • 10:17 - 10:21
    this was done by the embodied media team
    also at KMD.
  • 10:21 - 10:25
    So I know a little bit about it,
    but I'm definitely not an expert.
  • 10:25 - 10:33
    And for the initial design we
    thought we'd use the EDA for the movement
  • 10:33 - 10:41
    speed of the projection. So the EDA rate
    of change is matched to movement of these
  • 10:41 - 10:47
    blobs that you could see or also the meshes
    that you can see and the color represents
  • 10:47 - 10:53
    the heart rate. We went for the LF/HF
    feature, that's the low frequency to high
  • 10:53 - 10:58
    frequency ratio, and it should give you,
    according to related work, some indication
  • 10:58 - 11:04
    about excitement. For the lights: the
    lights were also bound to the heart rate,
  • 11:04 - 11:09
    in this case, the beats per minute, and
    they were matched to intensity. So if the
  • 11:09 - 11:14
    beats per minute of the audience go
    collectively up, the light gets brighter,
  • 11:14 - 11:19
    otherwise, it's dimmer. For the audio: we
    had an audio designer who took care of the
  • 11:19 - 11:28
    sounds and faded specific sounds in and out,
    also related to the EDA, to the
  • 11:28 - 11:36
    relative rate of change of the electro-
    dermal activity. All this happened while
  • 11:36 - 11:44
    the sensors were connected over a sensing
    server in Qt to the TouchDesigner software
  • 11:44 - 11:53
    that generated these types of projections.
    The music also got fed into it, and that
  • 11:53 - 11:59
    was then controlling the feedback
    to the dancers. If you want to
  • 11:59 - 12:09
    have a bit more detail: I uploaded the
    work-in-progress preprint, a draft
  • 12:09 - 12:16
    of an accepted TEI paper. So in case you are
    interested in the mappings and the design
  • 12:16 - 12:20
    decisions for the projections, there is
    a little bit more information there.
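As a rough illustration of the mappings described above, here is a minimal sketch; the function names, value ranges and the simple colour blend are assumptions, not the scaling used in the actual TouchDesigner setup.

```python
# Minimal sketch of the mappings described in the talk (hypothetical names and
# value ranges; the actual scaling lived in the TouchDesigner patch).

def clamp01(x):
    return max(0.0, min(1.0, x))

def blob_speed(eda_rate_of_change, gain=4.0):
    # Faster EDA changes in the audience -> faster movement of the blobs/meshes.
    return clamp01(abs(eda_rate_of_change) * gain)

def blob_color(lf_hf_ratio, lo=0.5, hi=3.0):
    # LF/HF ratio of heart rate variability -> colour, here a simple
    # blue-to-red blend as a stand-in for the real palette.
    t = clamp01((lf_hf_ratio - lo) / (hi - lo))
    return (t, 0.2, 1.0 - t)  # (r, g, b)

def light_intensity(audience_bpm, rest=60.0, excited=110.0):
    # Collective beats per minute going up -> brighter lights, otherwise dimmer.
    return clamp01((audience_bpm - rest) / (excited - rest))

# Example with made-up audience-level values:
print(blob_speed(0.1), blob_color(1.8), light_intensity(85.0))
```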
  • 12:20 - 12:27
    I'm also happy later on to answer those
    questions. However, I will probably just
  • 12:27 - 12:32
    forward them to the designers who worked
    on them. And then, for the overall
  • 12:32 - 12:39
    performance, what happened was, we started
    out with an explanation of the experience.
  • 12:39 - 12:46
    So it was already advertised as a performance
    that would take in electrodermal
  • 12:46 - 12:52
    activity and heartbeat activity.
    So, people that bought tickets or came to
  • 12:52 - 12:56
    the event already had a little bit of
    background information. We, of course,
  • 12:56 - 13:01
    also made sure that we explained at the
    beginning what type of sensing we will be
  • 13:01 - 13:09
    using. Also what the risks and problems
    with these types of sensors and data
  • 13:09 - 13:16
    collection are, and then the audience could decide,
    with informed consent, if they just want to
  • 13:16 - 13:20
    stream the data, don't want to do
    anything, or they want to stream and also
  • 13:20 - 13:26
    contribute the data anonymously to our
    research. And then when the performance
  • 13:26 - 13:32
    started, we had a couple of pieces and
    parts, that is something that you can see in
  • 13:32 - 13:39
    B, where we showed the live data feed from
    all of the audience members in individual tiles. We
  • 13:39 - 13:46
    had that in before just for debugging, but
    actually the audience liked that. And so
  • 13:46 - 13:52
    we made it a part of the performance, also
    deciding with the choreographers to
  • 13:52 - 13:58
    include that. And then for the rest, as
    you see in C, we have the individual
  • 13:58 - 14:07
    objects, these blob objects that move
    according to the EDA data and change colour
  • 14:07 - 14:16
    based on the heart rate information. So
    the low-to-high frequency ratio. In B, you see
  • 14:16 - 14:25
    also these clouds. And, yeah, similarly, the
    size is related to the heart rate data.
  • 14:25 - 14:33
    And the movement again is EDA. And there's
    also one scene in E where the dancers pick
  • 14:33 - 14:40
    one person in the audience and ask them to
    come on stage. And then we will display
  • 14:40 - 14:48
    that audience member's data at large in the
    back of the projection. And for the rest,
  • 14:48 - 14:55
    again, we're using this excitement data
    from the heart rate and from the
  • 14:55 - 15:07
    electrodermal activity to change sizes and
    colours. So, to come up with this design, we
  • 15:07 - 15:14
    went the co-design route, discussing with
    the researchers, dancers, visual
  • 15:14 - 15:20
    designers, audio designers a couple of
    times. And actually that's also how I got
  • 15:20 - 15:28
    involved first, because the initial ideas
    from Moe, the primary designer of this
  • 15:28 - 15:36
    piece, were to somehow combine perception
    and motion. And I worked a bit in research
  • 15:36 - 15:42
    with eye tracking. So you see on the
    screen the Pupil eye tracker, which is
  • 15:42 - 15:50
    an open source eye tracking solution, and
    also EOG, electro-oculography, glasses that
  • 15:50 - 15:58
    use the capacitance of your eyeballs to
    detect, roughly, eye motion.
  • 15:58 - 16:06
    And we thought at the beginning, we want
    to combine this, a person seeing the play
  • 16:06 - 16:10
    with the motions of the dancers and
    understand that better. So that's kind of
  • 16:10 - 16:22
    how it started. The second inspiration for
    this idea in the theatre came from a
  • 16:22 - 16:29
    visiting scholar, Jamie. Jamie Ward came
    over and his work with the Flute Theatre
  • 16:29 - 16:34
    in London. That's an inclusive theatre
    that also does workshops or Shakespeare
  • 16:34 - 16:41
    workshops. And he did some sensing just
    with the accelerometers and gyroscopes or
  • 16:41 - 16:47
    inertial motion wristbands to detect
    interpersonal synchrony between
  • 16:47 - 16:53
    participants in these workshops. And then
    we thought, when he came over, we had a
  • 16:53 - 17:00
    small piece where we looked into this
    interpersonal synchrony again in face to
  • 17:00 - 17:04
    face communications. I mean, now we are
    remote and I'm just talking into a camera
  • 17:04 - 17:09
    and I cannot see anybody. But usually, if
    you would have a face to face conversation,
  • 17:09 - 17:15
    doesn't happen too often anymore,
    unfortunately. We would show some type of
  • 17:15 - 17:21
    synchronies or, you know, kind of eyeblink,
    head nod and so on would synchronize with
  • 17:21 - 17:25
    the other person, if you're talking to
    them. And we also showed, in small
  • 17:25 - 17:30
    recordings, that we
    can recognize this in a wearable sensing
  • 17:30 - 17:37
    setup. So again, using some glasses and we
    thought, why don't we try to scale that
  • 17:37 - 17:42
    up? Why don't we try and see what happens
    in a theatre performance or in another
  • 17:42 - 17:50
    dance performance and see if we can
    recognize also some type of synchrony. And
  • 17:50 - 17:58
    with a couple of ideation sessions and a
    couple of test performances, also
  • 17:58 - 18:05
    including dancers trying out glasses,
    trying out other headwear - that was
  • 18:05 - 18:10
    not really possible for the dancers to use
    during the performance - we came up with an
  • 18:10 - 18:19
    initial prototype that we tried out
    in, I think, November 2018 or so, where
  • 18:19 - 18:24
    we used a couple of Pupil Labs and also
    Pupil Invisible. These are nicer eye tracking
  • 18:24 - 18:28
    glasses, they are optical eye tracking
    glasses, so they have small cameras in
  • 18:28 - 18:34
    them, distributed in the audience. A couple
    of those EOG glasses, they also have
  • 18:34 - 18:39
    inertial motion sensors in them. So
    accelerometer and gyroscope. And we had at
  • 18:39 - 18:47
    the time heart rate sensors. However, they
    were fixed and wired to the system. And
  • 18:47 - 18:53
    also the dancers wore some wristbands
    where we could record the motion data. And
  • 18:53 - 19:00
    then what we did in these cases, then we
    had projections on three frames on top
  • 19:00 - 19:06
    of the dancers. One was showing the blink
    and the head-nod synchronization of the
  • 19:06 - 19:11
    audience. The other one showed heart rate
    and variability. And the third one just
  • 19:11 - 19:17
    showed raw feed from one of the eye
    trackers. And it looked more or less like
  • 19:17 - 19:23
    this. And from a technical perspective, we
    were surprised because it actually worked.
  • 19:23 - 19:33
    So we could stream around 10 glasses,
    three eye trackers and four, five, I think
  • 19:33 - 19:40
    heart rate sensors at the same time and the server
    worked. However, from an audience
  • 19:40 - 19:45
    perspective, a lot of the feedback was the
    audience didn't like that just some people
  • 19:45 - 19:50
    got singled out and got the device by
    themselves and others could not really
  • 19:50 - 19:55
    contribute and could not also see the
    data. And then also from a performance
  • 19:55 - 19:59
    perspective, the dancers didn't really
    like that they couldn't interact with the
  • 19:59 - 20:06
    data. The dance piece also in this case
    was pre-choreographed. So there was no
  • 20:06 - 20:11
    possibility for the dancers to really
    interact with the data. And then also,
  • 20:11 - 20:17
    again, from an esthetic perspective, we
    really didn't like that the screens were
  • 20:17 - 20:22
    on top because either you would
    concentrate on the screens or you would
  • 20:22 - 20:28
    concentrate on the dance performance. And
    you had to kind of make a decision also on
  • 20:28 - 20:33
    what type of visualization you would focus
    on. So overall, you know, kind of partly
  • 20:33 - 20:40
    okay, but still there were some troubles.
    So one was definitely we wanted to include
  • 20:40 - 20:49
    all of the audience. Meaning we wanted to
    have everybody participate. Then the
  • 20:49 - 20:54
    problem with that part was then also
    having enough eye trackers or having
  • 20:54 - 21:01
    enough head-worn devices. In
    addition to that, you know, kind of, if
  • 21:01 - 21:06
    it's head-worn, some people might not like
    it. The pandemic hadn't started yet when
  • 21:06 - 21:12
    we did the recordings; however, there was
    already some information
  • 21:12 - 21:19
    about the virus going around. So we didn't
    really want to be putting something on everybody,
  • 21:19 - 21:26
    giving everybody some eyeglasses. So then
    we moved to the heart rate and galvanic
  • 21:26 - 21:33
    skin response solution and the set up
    where the projection is now part of the
  • 21:33 - 21:38
    stage. So we used the two walls, but we
    also used - it's a little bit hard to see
  • 21:38 - 21:45
    in the images - the floor
    as another projection surface for the
  • 21:45 - 21:50
    dancers to interact with and the main
    interaction, actually came then over the
  • 21:50 - 22:02
    sound. So then moving over to the lessons
    learned. So what did we take away from
  • 22:02 - 22:15
    that experience? And the first part
    was talking with the dancers and talking
  • 22:15 - 22:21
    with the audience: especially for
    the more intricate, the more
  • 22:21 - 22:28
    abstract visualizations, it was sometimes
    hard to interpret also how their own data
  • 22:28 - 22:34
    would feed into that visualization. So,
    you know, kind of some audience members
  • 22:34 - 22:38
    mentioned that at some point in time they were
    not sure if they're influencing anything
  • 22:38 - 22:45
    or if it had an effect on other parts,
    especially if they saw the live data. It
  • 22:45 - 22:50
    was kind of obvious. But for future work,
    we really want to play more with the
  • 22:50 - 22:57
    agency and also perceived agency of the
    audiences and the performers. And we also
  • 22:57 - 23:03
    really wonder how we can measure these types
    of feedback loops. Because now we have
  • 23:03 - 23:07
    these recordings. We looked also a little
    bit more into the data, but it's hard to
  • 23:07 - 23:16
    understand: were we successful? I think to
  • 23:16 - 23:24
    some extent maybe yes, because the
    experience was fun. It was enjoyable. But
    on this level of, did we really create
  • 23:24 - 23:29
    feedback loops and how do you evaluate
    feedback loops, that's something that we
  • 23:29 - 23:35
    want to address in future work. On the
    other hand, what was surprising, as I
  • 23:35 - 23:42
    mentioned before: the raw data was
    something that the dancers as well as the
  • 23:42 - 23:49
    audience really liked. And that was
    surprising for me because I thought we had
  • 23:49 - 23:54
    to hide that more or less. But we had it
    on, as I said, kind of as a debug view at
  • 23:54 - 24:00
    the beginning of some test screenings and
    audience members were interested in it and
  • 24:00 - 24:06
    could see and were talking about: "Oh, see
    your heart rate is going up or your EDA is
  • 24:06 - 24:11
    going up." And the dancers also like that.
    And we then used that in the performance,
  • 24:11 - 24:20
    in the three performances that we
    staged, especially for scenes
  • 24:20 - 24:25
    where the dancers would interact directly
    with parts of the audience. At the
  • 24:25 - 24:33
    beginning of the play there is a scene where the
    dancers give out business cards to some
  • 24:33 - 24:39
    audience members. And it was fun to see
    that some audience members could identify
  • 24:39 - 24:45
    themselves, other audience members would
    identify somebody else that was sitting
  • 24:45 - 24:50
    next to them. And then this member had a
    spike in EDA because of the surprise. So
  • 24:50 - 24:55
    there was really, you know, kind of some
    interaction going on. So maybe, if
  • 24:55 - 25:01
    you're planning to do a similar event,
    staying close to the raw data and also low
  • 25:01 - 25:07
    latency is, I think, quite important
    for some of these types of interactions. From
  • 25:07 - 25:14
    the dancers there was a big interest, on
    the one side, they wanted to use the data
  • 25:14 - 25:20
    for reflection. So they really liked that
    they had the printouts of the effects of
  • 25:20 - 25:28
    the audience later on. However, they also
    wanted to dance more with biometric data
  • 25:28 - 25:34
    and also use that for their rehearsals
    more. So, of course, you know, we had to
  • 25:34 - 25:39
    co-design, so we worked directly. We
    showed the dancers the sensors and the
  • 25:39 - 25:44
    possibilities and then worked with them to
    figure out what can work and what cannot
  • 25:44 - 25:49
    work and what might have an effect, what
    might not have an effect. And then we did
  • 25:49 - 25:55
    some, as you saw, also some prototype
    screenings and also some internal
  • 25:55 - 26:02
    rehearsals where we used some recorded
    data. A couple of
  • 26:02 - 26:07
    us were sitting in the audience. We got a
    couple of other researchers and also
  • 26:07 - 26:12
    students involved to sit in the audience
    to stream data. And we also worked a
  • 26:12 - 26:20
    little bit with prerecorded experiences
    and also synthetic experiences, how we
  • 26:20 - 26:26
    envisioned that the data would move. But
    still, it was not enough in terms of
  • 26:26 - 26:32
    providing an intuitive way to understand
    what is going on, especially also for the
  • 26:32 - 26:39
    visualizations and the projections. They
    were harder to interpret than the sound in
  • 26:39 - 26:50
    the sound sphere. And then the next, and
    maybe also the biggest, point is the
  • 26:50 - 26:56
    sensors and the feature best practices. So
    we're still wondering, you know, what to
  • 26:56 - 27:03
    use. We're still searching: what kind of
  • 26:56 - 27:03
    sensing equipment can we use to relay
    this invisible link between
    audience and performers? How can we
  • 27:09 - 27:15
    augment that? We started out with the
    perception and eye tracking part, we then
  • 27:15 - 27:22
    went to a wrist-worn device because it's
    easier to maintain and it's also wireless.
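For illustration, a minimal sketch of how one such wireless wristband could push its samples to a collection server over the venue network; the UDP/JSON transport, the server address and the field names are assumptions, not the actual firmware or protocol.

```python
# Minimal sketch (assumed transport, not the actual firmware): one wrist-worn
# device pushing EDA / heart samples over the venue WiFi to a collection server.
import json
import socket
import time

SERVER = ("192.168.0.10", 9000)  # hypothetical address of the sensing server

def stream_samples(member_id, read_sample, hz=20):
    """read_sample() is assumed to return a dict such as {'eda': 0.41, 'ppg': 812}."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        packet = {"id": member_id, "t": time.time(), **read_sample()}
        sock.sendto(json.dumps(packet).encode("utf-8"), SERVER)
        time.sleep(1.0 / hz)
```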
  • 27:22 - 27:30
    And it worked quite well to stream 50 to
    60 audience members for one of those
  • 27:30 - 27:39
    events to a wireless router and do the
    recording, as well as the live
  • 27:39 - 27:43
    visualization with it. However, the
    features might have not been.
  • 27:43 - 30:43
    audio failure
  • 30:43 - 30:56
    Okay. Sorry for the short part where it was
    offline. So, we were talking about sensors,
  • 30:56 - 31:02
    features and best practices. So in this
    case, we are still searching for the right
  • 31:02 - 31:13
    type of sensors and features to use for
    this type of audience, performer
  • 31:13 - 31:24
    interaction. And we were using, yeah,
    the low frequency to high frequency ratio of
  • 31:24 - 31:29
    the heart rate values and also the
    relative changes of the EDA. And that was
  • 31:29 - 31:35
    working, I would say not that well,
    compared to other features that we now
  • 31:35 - 31:42
    found while looking into the performances
    and the recorded data of the around 98
  • 31:42 - 31:49
    participants that agreed to share the data
    with us, for these performances. And from
  • 31:49 - 31:56
    the preliminary analysis that Karen Han,
    one of our researchers, is working on,
  • 31:56 - 32:04
    looking into what types of features are
    indicative of changes in the performance:
  • 32:04 - 32:11
    it seems that a feature called pNN50, that's
    related to heart rate variability, to the
  • 32:11 - 32:19
    R-R intervals, is quite good. And
    also the peak detection per minute using
  • 32:19 - 32:25
    the EDA data. So we're just counting the
    relative changes, the relative up and
  • 32:25 - 32:32
    down, for the EDA. If you're interested
    I'm happy to share the data with you. So
  • 32:32 - 32:38
    we have three performances each
    around an hour and 98 participants in
  • 32:38 - 32:46
    total. And we have the heart rate data,
    the EDA data, from the two fingers as well
  • 32:46 - 32:54
    as the motion data. We haven't
    used the motion data at all except for
  • 32:54 - 33:00
    filtering out a little bit the EDA and
    heart rate data because if you're moving a
  • 33:00 - 33:07
    lot, you will have some errors and some
    problems, some motion artifacts in it.
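A minimal sketch of this kind of motion-based filtering; the 1 g rest assumption and the threshold are made up for illustration, not the values used in the actual pipeline.

```python
# Minimal sketch (thresholds are assumptions): drop EDA / heart rate samples
# recorded while the accelerometer shows strong movement, since those samples
# tend to carry motion artifacts.
def mask_motion_artifacts(signal, accel_magnitude, rest_g=1.0, tolerance_g=0.35):
    """Return the signal with samples set to None wherever the acceleration
    magnitude deviates too much from 1 g (i.e. the hand was moving a lot)."""
    cleaned = []
    for value, acc in zip(signal, accel_magnitude):
        moving = abs(acc - rest_g) > tolerance_g
        cleaned.append(None if moving else value)
    return cleaned
```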
  • 33:07 - 33:15
    But what do I mean when I say the pNN50 or
    the EDA peak detection works so nicely? Let's
  • 33:15 - 33:21
    look a little bit closer into the data.
    And here you see I just highlighted
  • 33:21 - 33:31
    performance three from the previous plots.
    You see the pNN50 scale on the left side; the
  • 33:31 - 33:40
    blue line gives you the average of the
    pNN50 value. So this is the R-R interval
  • 33:40 - 33:48
    related heart rate variability feature and
    that feature is especially related to
  • 33:48 - 33:55
    relaxation and also to stress. So usually
    a higher pNN50 value means you're more
  • 33:55 - 34:01
    relaxed, and a lower value means
  • 34:01 - 34:08
    that you are more stressed out.
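For concreteness, a minimal sketch of the two features mentioned here, pNN50 and EDA peaks per minute; the exact windowing, units and thresholds of the actual analysis may differ.

```python
# Minimal sketch of the two features discussed in the talk (window handling,
# units and thresholds are assumptions, not the exact analysis code).

def pnn50(rr_intervals_ms):
    """pNN50: share of successive R-R interval differences above 50 ms.
    Higher values indicate a more relaxed (parasympathetic) state."""
    diffs = [abs(b - a) for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    if not diffs:
        return 0.0
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

def eda_peaks_per_minute(eda, hz, min_rise=0.05):
    """Count relative 'ups' in the EDA signal: a peak is registered whenever
    the signal rises by more than min_rise after having fallen."""
    peaks, rising_from, falling = 0, eda[0], True
    for prev, cur in zip(eda, eda[1:]):
        if cur < prev:
            falling, rising_from = True, cur
        elif falling and cur - rising_from > min_rise:
            peaks += 1
            falling = False
    minutes = len(eda) / hz / 60.0
    return peaks / minutes if minutes else 0.0
```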
  • 34:08 - 34:13
    So what happens now in the performance is
    something that fits very well and correlates with
  • 34:13 - 34:19
    the intention of the choreographer.
    Because the first half of the performance,
  • 34:19 - 34:27
    you see section one, two, three, four,
    five and six on the bottom. The first half
  • 34:27 - 34:32
    of the performance is to create a conflict
    in the audience and to stir them up a
  • 34:32 - 34:40
    little. So, for example, also the business
    card scene is part of that part, or also
  • 34:40 - 34:48
    the scene where somebody gets brought from
    the audience to the stage and joins the
  • 34:48 - 34:54
    performance is also part of that versus
    the latter part is more about reflection
  • 34:54 - 34:59
    and also relaxation. So taking in what you
    experienced in the first part, and that's
  • 34:59 - 35:04
    something that you see actually quite nice
    in the PNN50. So at the beginning it's
  • 35:04 - 35:10
    rather low. That means the audience is
    slightly tense versus in the latter part
  • 35:10 - 35:18
    they more relaxed. Similarly, the EDA in
    the bottom as a bar chart gives you an
  • 35:18 - 35:24
    indication of a lot of peaks happening at
    specific points. And these points
  • 35:24 - 35:31
    correlate very well to memorable scenes in
    the performance. So seeing the one scene,
  • 35:31 - 35:36
    where, actually section four, the red one,
    is the one where somebody from the
  • 35:36 - 35:42
    audience gets brought onto the stage.
    Where is this? I think around minute
  • 35:42 - 35:53
    twelve, there is a scene where the dancers
    hand out business cards. And that's
  • 35:53 - 35:56
    also something, I think. So it's
    promising - we're definitely not there yet
  • 35:56 - 36:02
    with the data analysis part, but there are
    some interesting things to see. And that
  • 36:02 - 36:11
    kind of brings me back to the starting
    point. So I think, it was an amazing
  • 36:11 - 36:16
    experience actually, working with a lot of
    talented people on that and the
  • 36:16 - 36:22
    performance was a lot of fun, but we are
    slowly moving towards putting the audience
  • 36:22 - 36:28
    on stage and trying to break the fourth
    wall, I think, with these types of setups.
  • 36:28 - 36:36
    And that leads me then also to the end of
    the talk where I just have to do a shout
  • 36:36 - 36:42
    out to the people who did the actual
    work. So all of the talented performers
  • 36:42 - 36:50
    and the project lead, especially Moe who
    organized and was also the link between
  • 36:50 - 36:56
    the artistic side and the dancers with
    Mademoiselle Cinema and us, as well as the
  • 36:56 - 37:05
    choreographer Ito-san. And yeah, I hope I
    didn't miss anybody. So that's it. So
  • 37:05 - 37:14
    thanks a lot for this opportunity to
    introduce this work to you. And now I'm
  • 37:14 - 37:21
    open for a couple of questions, remarks. I
    wanted to also host a self-organized
  • 37:21 - 37:26
    session sometime. I haven't really gotten
    the link or anything, but I'll probably
  • 37:26 - 37:33
    just post something on Twitter or in one
    of the chats if you want to stay in
  • 37:33 - 37:39
    contact. I'll try to get two or three
    researchers also to join. I know George,
  • 37:39 - 37:44
    who was working on the hardware, and
    Karen, who worked on the visualizations,
  • 37:44 - 37:53
    and the data analysis, might be available. And
    if you're interested in that, just send me an
  • 37:53 - 38:00
    email - or check, maybe I'll just also add it
    to the blog post or so, if I get the link
  • 38:00 - 38:05
    later. So, yeah. Thanks a
    lot for your attention.
  • 38:09 - 38:17
    Herald: Thanks, Kai, for this nice talk.
    For the audience, please excuse us for the
  • 38:17 - 38:22
    small disruption of service we had here.
    We're a little bit late already, but I
  • 38:22 - 38:27
    think we still have time for a question or
    so. Unfortunately, I don't see anything
  • 38:27 - 38:32
    here online at the moment. So if
    somebody tried to pose a question and
  • 38:32 - 38:37
    there was also disruption of service, I
    apologize beforehand for that. On the
  • 38:37 - 38:43
    other hand now, Kai, you talked about data
    sharing. So how can the data be accessed?
  • 38:43 - 38:48
    Do people need to contact you or drop
    you a mail or a personal message?
  • 38:48 - 38:54
    Kai: Yeah, so right now,
    the publication is
  • 38:54 - 39:00
    still not out and there's also some
    issues actually, a little bit of some
  • 39:00 - 39:03
    rights issues or so on. So the
    easiest part is just to send me a mail.
  • 39:03 - 39:14
    It will be posted sometime next year
    on a more public website. But the easiest
  • 39:14 - 39:20
    is just to send me a mail. There are already
    a couple of people working on it and we
  • 39:20 - 39:26
    have the rights to share it. It's just a little
    bit of a question of setting it up.
  • 39:26 - 39:32
    I wanted to have the website also online
    before the talk, but yeah, with the
  • 39:32 - 39:35
    technical difficulties and so on, everything
    is a little bit harder this year.
  • 39:35 - 39:43
    Herald: Indeed. Indeed. Thanks,
    guys. Yes, I'd say that's it for this
  • 39:43 - 39:49
    session. Thank you very much again for
    your presentation. And I'll switch back to
  • 39:49 - 39:53
    the others.
  • 39:53 - 39:58
    postroll music
  • 39:58 - 40:33
    Subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!