
Frisson Waves - Augmenting Aesthetic Chills in Music Performances

  • 0:00 - 0:06
    Herald: Hello, and welcome back to the FM
    channel. Our next speaker is Kai Kunze.
  • 0:06 - 0:11
    He's a professor at the Graduate School of
    Media Design at Keio University, Japan.
  • 0:11 - 0:16
    His talk will be about Frisson Waves.
    From what I understood, it's about
  • 0:16 - 0:21
    shivers and goosebumps in music
    performances that are usually spontaneous.
  • 0:21 - 0:27
    But somehow he and his team managed to
    induce them artificially. But I'm
  • 0:27 - 0:34
    looking very much forward to this talk,
    as I don't really understand it. But
  • 0:34 - 0:41
    now, if you want to ask questions at the
    end: on IRC in the channel rc3-fm, in the
  • 0:41 - 0:46
    Rocket Chat in the channel FM, and on
    Twitter with the hashtag
  • 0:46 - 0:51
    rc3fm without the dash, you can ask
    questions that will be answered in the Q&A
  • 0:51 - 0:54
    session afterwards.
    (The announcement is repeated in German.)
  • 0:54 - 1:05
    [German announcement not clearly
    transcribed] And now I'm
    looking forward to a hopefully very
  • 1:05 - 1:10
    interesting talk.
    Kunze: Hello and welcome to my talk
  • 1:10 - 1:17
    "Frisson Waves", augmenting aesthetic
    chills in classical music performances.
  • 1:17 - 1:26
    This is conceptual, early research work
    from a collaboration of a lot of artists,
  • 1:26 - 1:34
    designers and researchers, and I'm just
    the speaker introducing it to you a little
  • 1:34 - 1:40
    bit. My name is Kai. And in the next 20
    minutes, I will talk to you a little bit
  • 1:40 - 1:47
    about what Frisson is, give you a bit of
    motivation and background information on why
  • 1:47 - 1:55
    we are interested in this feeling. And
    then I will talk about how we can
  • 1:55 - 2:03
    recognize, induce and also share
    Frisson. And then at the end, I'll talk
  • 2:03 - 2:12
    about some conclusions and a little bit of
    outlook. So the question is, what is
  • 2:12 - 2:18
    Frisson? You might not have heard the
    term; I actually hadn't heard of Frisson
  • 2:18 - 2:26
    before we started the research two and a
    half, three years ago. But I
  • 2:26 - 2:32
    definitely knew the feeling. So if you're
    listening attentively to a musical piece,
  • 2:32 - 2:39
    sometimes you might get goose bumps or
    some shiver down your spine. And that is
  • 2:39 - 2:47
    usually triggered by the music. Frisson
    comes from the French for shivers. And sorry for my
  • 2:47 - 2:56
    pronunciation with the German accent;
    I hope you'll forgive me. And it's
  • 2:56 - 3:02
    this psychophysiological phenomenon that
    we feel when we get these goose bumps or
  • 3:02 - 3:12
    shivers that are triggered by music, but
    also by other stimuli. And you might
  • 3:12 - 3:21
    wonder: why goosebumps? How can goosebumps
    be related to a positive feeling? There are
  • 3:21 - 3:33
    actually a number of theories. And
  • 3:21 - 3:33
    one that I particularly like is that
    Frisson is often induced by music or
    by some kind of stimulus that is
  • 3:37 - 3:42
    repetitive, that has a certain pattern,
    and that at one point the pattern breaks
  • 3:42 - 3:51
    and that surprises you. So this triggers
    your autonomic nervous system, the
  • 3:51 - 3:56
    fight-or-flight response: you get
    surprised, your alertness goes up, and then
  • 3:56 - 4:03
    you realize that there's no danger and you
    relax and feel these aesthetic chills.
  • 4:03 - 4:13
    So the talk I will give today is an
    exploration of the feeling of Frisson with
  • 4:13 - 4:22
    technology: how could we detect, induce
    or transmit it, using especially wearable
  • 4:22 - 4:26
    sensors and actuators? And to be a little
    bit anticlimactic, I can already tell you
  • 4:26 - 4:33
    that this is still in progress. So this is
    really exploratory work. However, you
  • 4:33 - 4:43
    might also wonder why do you care about
    this? Why do you want to do this? And you
  • 4:43 - 4:49
    know, one reason, of course, is because
    we can and because it's fun, and I think
  • 4:49 - 4:56
    that's definitely, you know, kind of one
    aspect of the research. However, also
  • 4:56 - 5:03
    another reason is: our lab in Yokohama
    works in human factors research, so HCI,
  • 5:03 - 5:09
    and human-computer interaction. And we
    lately revisited a lot of work also from
  • 5:09 - 5:17
    cybernetics, also nonlinear dynamics in
    terms of research and also in terms of art
  • 5:17 - 5:23
    and performance. We are very much inspired
    by Stelarc's work on extending and
  • 5:23 - 5:31
    augmenting our body. And there's this
    realization, if you work in research, that,
  • 5:31 - 5:36
    you know, knowledge is not merely
    functional, there's always some kind of
  • 5:36 - 5:43
    enjoyment in understanding a concept. And
    I think also this community will really
  • 5:43 - 5:48
    understand that type of feeling and this
    sense of wonder and this feeling we also
  • 5:48 - 5:54
    want to explore. We want to understand
    ourselves better in terms of cognition
  • 5:54 - 6:01
    and perception, but also in terms of our
    feeling. And actually, last year, I gave
  • 6:01 - 6:07
    also a talk on Boiling Mind, on a
    feedback loop that we played with and
  • 6:07 - 6:12
    started researching on. And to some
    extent, this Frisson Waves
  • 6:12 - 6:17
    talk is just the continuation of this, and
    overall, we are also looking for more
  • 6:17 - 6:24
    creative ways to use physiological data or
    other wearable sensing that is
  • 6:24 - 6:33
    not related to surveillance. So extending
    this, we also wonder: what does it mean
  • 6:33 - 6:40
    to be live? It's easy if you think about
    transmitting audio video, easy in
  • 6:40 - 6:45
    quotation marks because yeah, there are
    some experts that know a lot about that.
  • 6:45 - 6:53
    And I see also the effort that goes into
    the remote experience now at the
  • 6:53 - 6:59
    congresses or conferences. However, we
    still don't know how to transmit an
  • 6:59 - 7:05
    atmosphere or a feeling that's much more
    difficult. I think the Congress is a very
  • 7:05 - 7:11
    nice example for that because it moved
    from Berlin to Hamburg to Leipzig. But
  • 7:11 - 7:17
    every time I visited, I kind of felt at
    home. I felt, Oh yeah, these are, you
  • 7:17 - 7:23
    know, kind of the people I like. This is
    the culture, the community I belong to,
  • 7:23 - 7:29
    even though it's at different places. And
    we wonder, you know, kind of how can we
  • 7:29 - 7:37
    transmit this type of feeling. And two
    efforts that we draw inspiration from for
  • 7:37 - 7:42
    this work: one is Neurolive, an
    EU project with co-
  • 7:42 - 7:50
    investigator Jamie Ward, and also the
    Cybernetic Being project here in Japan,
  • 7:50 - 7:57
    headed by Kouta Minamizawa. So that deals
    with things like parallel agency and
  • 7:57 - 8:01
    similar. And both of them are actually
    also collaborators in the work that I will
  • 8:01 - 8:08
    present today. So this is the high level
    overview of why we are interested in
  • 8:08 - 8:15
    this. But now getting back to the aesthetic
    chills. And first, the question is how
  • 8:15 - 8:23
    could we go about and try to detect or
    recognize them? Looking into related work,
  • 8:23 - 8:30
    we see that Frisson, that
    chills, of course, affect our physiology.
  • 8:30 - 8:35
    And the first thing that you notice is, of
    course, the piloerection. So the goose
  • 8:35 - 8:41
    bumps that you can get on your arm so the
    hairs go up. So we could try to detect
  • 8:41 - 8:45
    that. However, that might be a little bit
    difficult because some people might not
  • 8:45 - 8:54
    have so much hair on them and so on. So
    then looking into other physiological
  • 8:54 - 9:01
    changes: respiratory rate goes up; for
    the sweat glands, electrodermal activity,
  • 9:01 - 9:06
    you will see more peaks. That's a stress
    and excitement indicator, and heart rate
  • 9:06 - 9:11
    goes up, blood pressure goes up and
    usually heart rate variability related
  • 9:11 - 9:17
    features go down. Also, if you saw
    last year's talk, we already built a
  • 9:17 - 9:25
    system to record electrodermal activity.
    So the sweating on the hand as well as
  • 9:25 - 9:34
    heart rate, so we just thought we'd move along
    and use that. Luckily, we also did a
  • 9:34 - 9:39
    redesign of the wristbands in the
    meantime, so they look a little bit nicer
  • 9:39 - 9:45
    now and you can see a live demo in my
    background right now. So you see EDA and
  • 9:45 - 9:54
    heart rate behind me, and if I press here, you
    should also see some noise on the sensor.
  • 9:54 - 10:03
    The visualization, by the way, is done by
    Kirill Ragozin. So thanks for the work!
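The EDA peaks mentioned above, the skin-conductance responses that serve as a stress and excitement indicator, can be counted with a very simple threshold sketch. The sampling rate, rise threshold and synthetic trace below are illustrative assumptions, not values from the wristband system shown in the talk.

```python
import numpy as np

def eda_peaks(eda, fs=4.0, min_rise=0.02):
    """Count skin-conductance response (SCR) onsets in an EDA trace.

    A deliberately simple sketch: an onset is where the trace starts
    rising faster than `min_rise` microsiemens per second. The
    threshold and sampling rate here are illustrative only.
    """
    diff = np.diff(eda)
    rising = diff > min_rise / fs          # per-sample rise threshold
    # Count False -> True transitions, i.e. onsets of rising runs.
    onsets = np.flatnonzero(rising[1:] & ~rising[:-1]) + 1
    return len(onsets)

# Synthetic 60 s trace at 4 Hz: flat baseline plus two sharp responses.
t = np.arange(0, 60, 0.25)
eda = (2.0
       + 0.3 * np.exp(-((t - 20.0) ** 2) / 2.0)
       + 0.4 * np.exp(-((t - 45.0) ** 2) / 2.0))
print(eda_peaks(eda))  # → 2
```

Real EDA analysis usually separates tonic and phasic components first; this sketch only shows the idea of counting sudden rises.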
  • 10:03 - 10:09
    And then moving forward, so we use these
    wristbands to set up a controlled
  • 10:09 - 10:15
    experiment to detect esthetic chill
    events. We just added a trigger, so to add
  • 10:15 - 10:20
    some self-reporting to it. So in this
    case, we really use the user as a self-
  • 10:20 - 10:27
    report to classify or to label the Frisson
    events, which has, of course, you know,
  • 10:27 - 10:34
    some limitations. So we hope that that's
    good enough to capture it. And we used
  • 10:34 - 10:40
    some music pieces also from related work
    and did some counterbalancing, and ran this
  • 10:40 - 10:49
    lab study in a, you know, kind of controlled
    space with headphones and so on. We
  • 10:49 - 10:55
    finished this, but then we also wondered,
    you know, what does it look like in real
  • 10:55 - 11:03
    life, in in-the-wild experiments? So we also
    organized a concert with 18 audience
  • 11:03 - 11:09
    members for one musical program, and
    the setup was the same, so everybody got
  • 11:09 - 11:16
    a wristband and a trigger. We also added a
    third for the pianist, so using EDA from
  • 11:16 - 11:23
    the foot actually works also relatively
    well. And then we recorded the data and
  • 11:23 - 11:36
    hoped that people would report their
    Frisson, their aesthetic chills. Here's now one
  • 11:36 - 12:32
    video, a short one-minute video that shows you
    the recording. (piano music) How about the
  • 12:32 - 12:39
    analysis, you might ask? I'm sorry,
    this is still ongoing, so we don't really
  • 12:39 - 12:44
    have a lot of results yet. And of course,
    there were a lot of issues with the live
  • 12:44 - 12:49
    recording. If you're interested in doing
    something similar, contact some of the
  • 12:49 - 12:55
    technical staff or also me. We can give
    you hints; I've been doing this now for 15 or 20
  • 12:55 - 13:00
    years and always something is going
    wrong, depending on the setting and so on.
  • 13:00 - 13:05
    Now I also know more about the classical
    music concerts. However, we got some
  • 13:05 - 13:11
    useful data, with which we
    could also train a machine learning model,
  • 13:11 - 13:16
    because we really wanted to detect it in
    real time. And it seemed to work really well.
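As a toy illustration of such a real-time detector: the Q&A later mentions that a support vector machine was tried first; the sketch below uses a simple nearest-centroid classifier on synthetic feature windows as a stand-in, so all feature ranges and numbers are made up for illustration, not taken from the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature windows: [EDA peak count, mean heart rate, HRV].
# Frisson-like windows get more EDA peaks, higher heart rate and lower
# HRV, mirroring the physiological changes described in the talk.
n = 200
calm   = np.column_stack([rng.poisson(1.0, n), rng.normal(70, 5, n), rng.normal(45, 8, n)])
chills = np.column_stack([rng.poisson(4.0, n), rng.normal(80, 5, n), rng.normal(30, 8, n)])
X = np.vstack([calm, chills]).astype(float)
y = np.r_[np.zeros(n), np.ones(n)]

# Standardize, then classify with nearest centroid: a minimal stand-in
# for the support vector machine mentioned in the talk's Q&A.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
train = np.arange(2 * n) % 2 == 0          # every other window trains
c0 = Z[train & (y == 0)].mean(axis=0)      # calm centroid
c1 = Z[train & (y == 1)].mean(axis=0)      # Frisson centroid
pred = (np.linalg.norm(Z[~train] - c1, axis=1)
        < np.linalg.norm(Z[~train] - c0, axis=1))
acc = (pred == y[~train].astype(bool)).mean()
print(f"held-out accuracy: {acc:.2f}")
```

On well-separated synthetic data like this, almost any classifier looks good; as the talk stresses, the real question is whether that holds for the small number of actual users.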
  • 13:16 - 13:23
    We are just still not sure if it really
    works or not, so we want to be very
  • 13:23 - 13:26
    careful about that. We get high
    accuracy on paper, but given the limited
  • 13:26 - 13:32
    number of users we had, we want to
    look into that a little bit more. However,
  • 13:32 - 13:36
    the analysis, as well as the data sets,
    will be publicly available. And if you
  • 13:36 - 13:41
    want to get them a little bit earlier,
    also contact me. So then moving on: this
  • 13:41 - 13:47
    is the progress on detection. What does it
    look like for triggering or inducing
  • 13:47 - 13:56
    Frisson? So there's also a lot of cool
    related work. I just show or highlight two
  • 13:56 - 14:03
    of them. One is work by Shoko Fukushima et
    al. And they're using the electrostatic
  • 14:03 - 14:12
    effect on the arm to control piloerection.
    And they use it to increase the
  • 14:12 - 14:18
    surprise feeling of somebody, so you put
    your arm inside and they can control the
  • 14:18 - 14:24
    piloerection. Other work is from Ha et
    al., where they're using three Peltier elements
  • 14:24 - 14:32
    on your back, on your spine, and activate
    them upwards to also induce Frisson or
  • 14:32 - 14:40
    aesthetic chills. The problem with those two
    setups: it's quite hard to get them into a
  • 14:40 - 14:44
    concert hall. And, you know, some
    people might not really have much hair on
  • 14:44 - 14:50
    their arms or so on, so there might be
    limitations for it. So then, you know, for
  • 14:50 - 14:56
    first iteration, we decided to go for a
    neck prototype, because kind of the neck
  • 14:56 - 15:03
    is also a part of some of the Frisson
    responses. So you get either chills down
  • 15:03 - 15:09
    the spine or up the neck or also your hair
    might stand up. So we thought it's a good
  • 15:09 - 15:13
    start, and we first used Peltier elements,
    or thermal modules, and also
    vibrotactile feedback. In later iterations we
    tactile feedback. In later iterations we
    moved just to a thermal feedback to
  • 15:18 - 15:25
    activate this on the back of the neck
    around on the upper side of the trapezius
  • 15:25 - 15:32
    muscle, and they would activate with
    slight cold feedback. For an initial
  • 15:32 - 15:38
    test, it seemed to work. For this,
    just with 10 participants, around 30
  • 15:38 - 15:44
    minutes per participant, we had two music
    pieces that are based on related work, by
  • 15:44 - 15:51
    Chopin and Gustav Holst. We counterbalanced
    the conditions and music pieces:
  • 15:51 - 15:57
    without neckband, with neckband
    without activation, and with activation.
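The neckband activation described above could be driven by a simple schedule like the following sketch. The number of Peltier elements, the stagger between them and the pulse duration are illustrative assumptions, not the parameters used in the study.

```python
from dataclasses import dataclass

@dataclass
class ColdPulse:
    element: int       # index of a Peltier element on the neckband
    start_s: float     # seconds after the detection trigger
    duration_s: float  # how long the slight-cold pulse lasts

def cold_wave(n_elements=3, stagger_s=0.3, duration_s=1.5):
    """Stagger a slight-cold pulse across the neckband's Peltier
    elements, one after the other. Element count and timings here
    are illustrative assumptions only."""
    return [ColdPulse(i, i * stagger_s, duration_s)
            for i in range(n_elements)]

for pulse in cold_wave():
    print(pulse)
```

The staggering mimics the "chill moving along the skin" idea from the related work on spine-mounted Peltier elements; a real driver would also need temperature limits and safety cutoffs.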
  • 15:57 - 16:04
    And from an initial test, we can see that
    it seems that slight cold feedback really
  • 16:04 - 16:09
    provides more instances of reported
    Frisson. So there is a slight positive
  • 16:09 - 16:12
    effect, but, you know, it was still quite
    few participants, and we'll have to
  • 16:12 - 16:18
    continue and see also with a little bit of
    redesign. So we want to change the order
  • 16:18 - 16:25
    and placement of the Peltier elements for
    the continuation work as well. Now moving
  • 16:25 - 16:31
    to the last part, so we talked about
    detection, induction, and now let's talk
  • 16:31 - 16:38
    about sharing or transmitting Frisson. Here,
    the idea would be, you know: you are
  • 16:38 - 16:45
    listening to a musical piece, a classical
    piece, and one person gets Frisson. We
  • 16:45 - 16:52
    detect it over the wristband, and then
    it's distributed, it ripples through to the
  • 16:52 - 17:01
    neighbors. They get activated over the
    neckband and hopefully also feel Frisson
  • 17:01 - 17:08
    again around the same time, just after
    the red-circled person felt the
  • 17:08 - 17:13
    aesthetic chills. So in this case, then,
    you know, we would have all of the
  • 17:13 - 17:18
    audience members needing to wear sensors and
    actuators, and we would need the Frisson
  • 17:18 - 17:28
    detection and also then the activation
    based on that. And for that, we also
  • 17:28 - 17:34
    organized another concert, in this case 50
    audience members. The program was around
  • 17:34 - 17:43
    1.5 hours. And the setup was as you see
    here. So performers on the top, and then
  • 17:43 - 17:51
    we had two sections. One, where 25 users
    would wear just the wristband as a kind of
  • 17:51 - 17:58
    control group. And the second group, 25
    users would wear wristband and neckband,
  • 17:58 - 18:03
    so they would actually get the detection
    and also the activation. So all in all, you
  • 18:03 - 18:10
    know, 50-plus wristbands needed charging
    and 25 neckbands were manufactured, and
  • 18:10 - 18:18
    this is the picture from the actual
    concert with the neckband section. And
  • 18:18 - 18:23
    here is how this should work, so, you
    know, you have first one person, you
  • 18:23 - 18:30
    detect the Frisson and then you ripple it
    out to the neighbors, then the next person
  • 18:30 - 18:35
    might feel Frisson, which we detect over the
    wristband and then ripple it out to the
  • 18:35 - 18:41
    other people that haven't gotten
    activation yet and so on. So you have then
  • 18:41 - 18:51
    a wave of Frisson hopefully moving through
    the audience members. This is now the setup,
  • 18:51 - 18:55
    with Yan He, who also did a lot of
    the organization part and so on, at the
  • 18:55 - 19:03
    piano. And here is then a small video that
    summarizes the work. And at the end you
  • 19:03 - 19:08
    see also the servers: the recording
    server, the detection server and the
  • 19:08 - 19:38
    activation server. (piano music plays)
  • 19:38 - 20:25
    (one person plays piano, another plays
    cello)
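The ripple idea described above, activation spreading outward from the seat where Frisson was detected, can be sketched as a breadth-first traversal over the seating plan. The grid layout and the one-second hop delay are illustrative assumptions; the talk does not specify the actual propagation timing.

```python
from collections import deque

def ripple_schedule(rows, cols, origin, hop_delay_s=1.0):
    """Breadth-first 'Frisson wave' over a grid of seats: starting from
    the seat where Frisson was detected, each further seat's neckband
    activation is delayed by one extra hop. Layout and delay are
    illustrative assumptions only."""
    seen = {origin: 0.0}
    queue = deque([origin])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen[(nr, nc)] = seen[(r, c)] + hop_delay_s
                queue.append((nr, nc))
    return seen  # seat -> activation delay in seconds

delays = ripple_schedule(rows=5, cols=5, origin=(2, 2))
print(delays[(2, 2)], delays[(0, 0)])  # → 0.0 4.0
```

Restarting the traversal whenever a new seat self-reports Frisson would give the overlapping waves the talk describes.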
    So the question you might have
  • 20:25 - 20:35
    now: did it work? Hmm. Not completely
    sure. Again, this is work in progress, the
  • 20:35 - 20:40
    analysis is ongoing, and we can't really
    say yet because, yeah, we had this
  • 20:40 - 20:45
    control group and we could see more
    Frisson events in the sharing group. But
  • 20:45 - 20:49
    how to interpret that, that's really,
    really difficult. We are also working on
  • 20:49 - 20:54
    the design of the wristbands as well as
    the neckband and especially for the
  • 20:54 - 21:00
    neckband. We got a couple of users, I think
    five or six or seven, that really didn't
  • 21:00 - 21:06
    like the neckband. Not the activations, the
    slight cold activation was OK, but
  • 21:06 - 21:11
    just because it was a little bit too tight
    and a little bit too uncomfortable. So
  • 21:11 - 21:16
    we're working on a redesign that we have for
    the next concert in April. All of the data
  • 21:16 - 21:23
    we'll also make publicly available soon,
    and we'll look a little bit more into what
  • 21:23 - 21:28
    we can find out about what happened. This
    brings me to the end of the presentation.
  • 21:28 - 21:35
    I hope you enjoyed it. Yeah, I just wanted
    to thank a couple of people first and
  • 21:35 - 21:44
    foremost Yan He, who organized this
    Frisson Waves work and who introduced us. And also the
  • 21:44 - 21:49
    teams for the second
    concert. The extended team: thanks a lot
  • 21:49 - 21:55
    for everybody who was involved here, then
    also all of the names. So these are the
  • 21:55 - 21:59
    people that did the actual work, not
    just the presenting like I do right
  • 21:59 - 22:05
    now. I hope I haven't missed anybody. So
    also, thanks to George, Dingding, Denny
  • 22:05 - 22:10
    and so on and all of the other people
    involved in group planning the studio
  • 22:10 - 22:20
    Apollo, and also the pianists and
    interactive performers. So thanks a lot.
  • 22:20 - 22:27
    Yeah, that brings me to the end of the
    presentation. As I said, we have a third
  • 22:27 - 22:31
    concert, probably in April next year in
    Yokohama, Tokyo area. So if you're
  • 22:31 - 22:37
    interested, let me know also if you have a
    general interest in affect or similar
  • 22:37 - 22:43
    phenomena. Also, just write me an email.
    It would be good if you mentioned Frisson
  • 22:43 - 22:50
    remote experience in the subject, so I can
    just filter that out. And now something
  • 22:50 - 22:55
    completely different: we also have a
    conference next year in March, the submission
  • 22:55 - 23:01
    deadline is January 7th: Augmented Humans,
    in Japan and Germany and cyberspace, which
  • 23:01 - 23:11
    deals with maybe similar work. So thanks a
    lot for listening, and I hope, yeah, I told
  • 23:11 - 23:17
    you something interesting in the last 20
    minutes. Bye.
  • 23:17 - 23:22
    Herald: Hello, and welcome back to the FM
    Channel. Thank you, Kai, for the very
  • 23:22 - 23:25
    interesting talk. And Kai should actually
    be with us to answer a few questions.
  • 23:25 - 23:27
    Hello, Kai.
    Kai: Hello.
  • 23:27 - 23:33
    Herald: And we actually do have a few
    questions already. And the first one
  • 23:33 - 23:39
    sounds a bit more like a comment, but I'll
    tell you anyway. So one viewer noted
  • 23:39 - 23:43
    that there is a technical cybernetics and
    systems course here in Ilmenau, and they
  • 23:43 - 23:47
    wanted to know if you were
    aware of this already.
  • 23:47 - 23:54
    Kai: Actually, I wasn't, but that
    sounds quite fun, and I'm getting more and
  • 23:54 - 23:58
    more interested in cybernetics as well.
    And I think it's useful to revisit some of
  • 23:58 - 24:04
    the ideas around feedback loops, as I said
    at the beginning. So that's cool if
  • 24:04 - 24:08
    you're already looking into that. And I
    think especially if people go into HCI
  • 24:08 - 24:12
    fields, I think it's quite useful to get a
    little bit of that background.
  • 24:12 - 24:18
    Herald: Yes, nice. OK, let's go to the
    next question. It's about neural networks,
  • 24:18 - 24:23
    which neural network was used? Have
    you considered neural
  • 24:23 - 24:27
    differential equations such as echo-state
    networks or reservoir computing, which are
  • 24:27 - 24:30
    good when modeling stiff
    time-continuous processes?
  • 24:30 - 24:37
    Kai: That's actually a really good
    question, and actually also a good hint for
  • 24:37 - 24:42
    what to do next. I just tried to look up.
    I saw the question already. Also in the
  • 24:42 - 24:47
    chat, I tried to look up what we used, and I
    think at the beginning we just used
  • 24:47 - 24:52
    support vector machines, so not neural
    networks. And now we are using some
  • 24:52 - 24:58
    neural network, but I don't know the
    configuration and I couldn't check. I'll
  • 24:58 - 25:03
    get back to the person who asked the
    question. What was interesting for me was
  • 25:03 - 25:09
    that the data looks already quite
    good. The sensor data looks actually quite
  • 25:09 - 25:16
    good. And I would assume that in most
    cases, any classifier will do a decent job
  • 25:16 - 25:22
    for the lab experiments for the other
    works. I think, yeah, that sounds quite
  • 25:22 - 25:29
    interesting. I also want to go more
    towards, um yeah, nonlinear dynamics work
  • 25:29 - 25:36
    as well in terms of estimating
    Frisson or different feelings. But that's
  • 25:36 - 25:41
    a really good hint. That would
    be more a question for Jiawen Han,
  • 25:41 - 25:47
    coauthor of the paper that is also linked.
    She is our data analyst and so on, and
  • 25:47 - 25:52
    knows what she used. The first
    classifier was a support vector machine,
    fairly basic, and I think recently we use a
    fairly basic and I think recently we use a
    neural network, but I thought it's just
  • 25:56 - 26:02
    very straightforward, PyTorch, long
    training, but nothing special and
  • 26:02 - 26:06
    nothing fancy so far.
    Herald: OK, nice. I guess she will
  • 26:06 - 26:10
    probably also hear from the questions
    then. And then let's go to the next
  • 26:10 - 26:16
    question. Have you considered or tested
    the effects of adversarial stimuli, such as
  • 26:16 - 26:21
    attempting to cause Frisson Waves in
    boring situations instead of, like,
  • 26:21 - 26:30
    interesting ones?
    Kai: That's also quite a good or
  • 26:30 - 26:35
    interesting question. I mean, there were
    some audience members also that
  • 26:35 - 26:41
    mentioned that the neckband was actually a
    little bit uncomfortable. So I'm not
  • 26:41 - 26:47
    really sure if we caused Frisson with them.
    And I don't know what would happen if you,
  • 26:47 - 26:53
    I think you probably would just make the
    situation uncomfortable anyways. I'm not
  • 26:53 - 27:01
    sure what would happen then, if you
    stimulating cold feedback. Actually, you
  • 27:01 - 27:06
    might get the fear response in these cases
    if you're in a boring situation, I'm not
  • 27:06 - 27:13
    sure. Hmm. Yeah, actually,
    I don't know. It's definitely an
  • 27:13 - 27:21
    interesting idea to use it in boring
    situations. Can you get somebody to change
  • 27:21 - 27:29
    their feeling and get to a more
    excited state? We are playing often, we
  • 27:29 - 27:33
    played a little bit with the thermal
    feedback and it was always interesting if
  • 27:33 - 27:37
    you change the thermal feedback. So
    instead: if you see something hot in VR,
  • 27:37 - 27:42
    you give a cold stimulus or so on. It really
    is a little bit confusing and interesting.
  • 27:42 - 27:49
    I haven't thought about that in a Frisson
    situation, and whether it works for boring
  • 27:49 - 27:53
    situations, but it's definitely cool or
    interesting. So if somebody wants to play
  • 27:53 - 27:59
    with that, I would be up for also giving a
    little bit of help or ideas in that
  • 27:59 - 28:04
    direction.
    Herald: OK, thank you for answering
  • 28:04 - 28:12
    these questions. Unfortunately, there don't
    seem to be any more of them. So that's it.
  • 28:12 - 28:16
    Thank you very much for the very
    interesting talk. That's a topic I haven't
  • 28:16 - 28:22
    really been thinking that much about, but thank
    you very much. It was very interesting.
  • 28:22 - 28:26
    Kai: Thanks a lot also for having me, and
    it's always fun and I always enjoyed the
  • 28:26 - 28:31
    feedback. There's also a kind of live
    demonstration behind me. So you saw
  • 28:31 - 28:35
    my excitement level kind of increasing or
    decreasing with the questions.
  • 28:35 - 28:39
    Herald: Oh wow, that's pretty cool.
    Kai: OK. Yeah, thanks a lot.
  • 28:39 - 28:47
    Herald: Then, here on the FM
    channel... the next thing happening will be at
  • 28:47 - 28:51
    11:00 p.m., the lightning talks, although
    not on the FM channel. One of the
  • 28:51 - 28:56
    next things happening at rC3 will be, at
    11:00 p.m., the lightning talks at the remote
  • 28:56 - 29:02
    experience. And here on our channel,
    at 12:00 a.m. or midnight, there
  • 29:02 - 29:07
    will be the next news show. And
    until then, bye.
  • 29:07 - 29:15
    Subtitles created by many many volunteers and
    the c3subtitles.de team. Join us, and help us!
Video Language:
English
Duration:
29:15
