35C3 - Information Biology - Investigating the information flow in living systems

  • 0:00 - 0:18
    35C3 preroll music
  • 0:18 - 0:24
    Herald Angel: What happens if you mix
    Shannon's information theory and
  • 0:24 - 0:31
    biological systems? A dish better served
    hot. Please welcome our computational
  • 0:31 - 0:38
    systems biology chef, who will guide you
    through investigating the information flow
  • 0:38 - 0:43
    in living systems. Please welcome with a
    very warm round of applause Jürgen Pahle.
  • 0:43 - 0:52
    applause
  • 0:52 - 0:57
    Jürgen Pahle: Thanks a lot and thanks for
    having me. It's great that so many of you
  • 0:57 - 1:01
    are interested in that topic, which is not
    about technical systems but actually
  • 1:01 - 1:07
    biological cells. So, I am leading a
    group in Heidelberg at the university
  • 1:07 - 1:17
    there and we are mostly interested in how
    information is processed, sensed, stored,
  • 1:17 - 1:25
    communicated between biological cells. And
    we are interested in that because it's not
  • 1:25 - 1:31
    obvious that they actually manage to do
    that in a reliable fashion. They don't
  • 1:31 - 1:35
    have transistors. They can only use their
    molecules, mostly proteins: big molecules
  • 1:35 - 1:45
    that are little engines or little motors
    in the cell that allow them to fulfill
  • 1:45 - 1:52
    their biological functions. If information
    processing fails in cells, you get diseases
  • 1:52 - 2:00
    like epilepsy, cancer and of course
    others. Now, cellular signaling pathways
  • 2:00 - 2:07
    have been studied in some detail - mostly
    single pathways. More and more also
  • 2:07 - 2:15
    networks of pathways but surprisingly
    little conceptual work has been
  • 2:15 - 2:19
    done on them. So we know the molecules
    that are involved, we know how they
  • 2:19 - 2:28
    react, how they combine to build these
    pathways. But we don't know how, actually,
  • 2:28 - 2:36
    information is transferred or communicated
    across these pathways and we intend to
  • 2:36 - 2:43
    fill that gap in our group. And, of
    course, first we have to model
  • 2:43 - 2:52
    these networks, we have to model these
    biochemical pathways. And this is how we
  • 2:52 - 2:57
    proceed. So you have a cell -
    you can't see that here - but on the upper
  • 2:57 - 3:02
    left corner you have that scheme of a cell
    with all the different components. You
  • 3:02 - 3:09
    have volumes in the cell where
    chemical reactions happen. So chemical
  • 3:09 - 3:15
    reactions take biochemical species: ions,
    proteins, what have you, and they convert
  • 3:15 - 3:20
    them into other chemical species, and
    these reactions happen in the different
  • 3:20 - 3:27
    compartments. Now it's very important to
    assign speeds or velocities to these
  • 3:27 - 3:33
    reactions because these speeds determine
    how fast the reactions happen and how the
  • 3:33 - 3:39
    dynamic behavior then results. And once
    you have done that, you can translate all
  • 3:39 - 3:45
    of that into a mathematical model like the
    one shown here on the right. This is an
  • 3:45 - 3:49
    ordinary differential equation system, I
    don't want to go into detail. I only have
  • 3:49 - 3:56
    like two or three formulas that might
    be interesting for you. So this is just
  • 3:56 - 4:01
    any mathematical model you have
    of these systems and then you can start
  • 4:01 - 4:06
    analyzing them. You can ask questions
    like: "How does the system change over
  • 4:06 - 4:10
    time?" That's simulation. "Which parts
    influence the behavior most?" "What other
  • 4:10 - 4:17
    stable states? Do you have oscillations,
    do you have a steady state?" and so on.
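
A minimal sketch of what such a simulation looks like in code, using a hypothetical two-reaction toy system (constant production of a species S, its conversion to P with a Michaelis-Menten rate law, and degradation of P) rather than any model from the talk:

    # Hypothetical two-species toy model (not from the talk): constant
    # production of S, conversion S -> P with a Michaelis-Menten rate law
    # (the "speed" assigned to that reaction), and first-order degradation of P.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_in, Vmax, Km, k_deg = 1.0, 2.0, 0.5, 0.3   # made-up rate constants

    def rhs(t, y):
        S, P = y
        v1 = k_in                     # production of S
        v2 = Vmax * S / (Km + S)      # conversion S -> P
        v3 = k_deg * P                # degradation of P
        return [v1 - v2, v2 - v3]     # dS/dt, dP/dt

    sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0],
                    t_eval=np.linspace(0.0, 50.0, 201))
    print(sol.t[-1], sol.y[:, -1])    # concentrations at the end of the run
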
  • 4:17 - 4:22
    Now, you don't have to do that by hand,
    because we are actually also developing
  • 4:22 - 4:28
    software - that's just another thing. I
    guess you know that all models are wrong.
  • 4:28 - 4:34
    We try to build useful ones. So I said you
    don't have to do this by hand because we
  • 4:34 - 4:41
    are also into method development and we
    are building scientific software. One of
  • 4:41 - 4:45
    the software packages we build is called COPASI:
    COmplex PAthway SImulator. It's free and
  • 4:45 - 4:50
    open source, you can all go to that
    website, download it, play around with it
  • 4:50 - 4:59
    if you want. Because we also use more
    demanding computations which we send to
  • 4:59 - 5:03
    compute clusters, we also developed a
    scripting interface for COPASI, which is
  • 5:03 - 5:10
    called CoRC, the COPASI R connector. And
    this allows you to use the COPASI backend
  • 5:10 - 5:16
    with all the different tools that are in
    COPASI from your R programming environment
  • 5:16 - 5:21
    and then you can build workflows and send
    them to a compute cluster. We think it's
  • 5:21 - 5:28
    easy to use. If you play around with it
    and you get stuck, then just let me know.
  • 5:28 - 5:31
    So this is software you can use, you can play
    around with. And where do we get the
  • 5:31 - 5:37
    models? Well, there is a model database
    that is called Biomodels.net, also free to
  • 5:37 - 5:42
    use, you can go there and download models.
    At the moment they have almost 800
  • 5:42 - 5:48
    different manually curated models, and
    almost ten times that number built
  • 5:48 - 5:53
    automatically. You can just download them
    in the so-called SBML format, which is the
  • 5:53 - 6:00
    Systems Biology Markup Language, then
    import it into COPASI or other software and
  • 6:00 - 6:02
    play around with them.
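
A minimal sketch of inspecting such a downloaded SBML file programmatically, using the python-libsbml bindings; the file name is only a placeholder for whatever model you fetch from BioModels:

    # Inspect a downloaded SBML model with the python-libsbml bindings
    # (pip install python-libsbml). "downloaded_model.xml" is a placeholder
    # for whatever file you fetched from the BioModels database.
    import libsbml

    doc = libsbml.readSBML("downloaded_model.xml")
    if doc.getNumErrors() > 0:
        doc.printErrors()

    model = doc.getModel()
    print("species:  ", model.getNumSpecies())
    print("reactions:", model.getNumReactions())
    for species in model.getListOfSpecies():
        print(" ", species.getId(), species.getInitialConcentration())
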
  • 6:02 - 6:08
    OK, so coming back to biology,
    one of our favorite systems is
  • 6:08 - 6:14
    calcium signaling. Calcium signaling
    works roughly like this: You have these
  • 6:14 - 6:21
    little - I mean the oval thing is a cell -
    then you have these red cones, that are
  • 6:21 - 6:26
    hormones, and other substances that you
    have in your bloodstream or somewhere
  • 6:26 - 6:32
    outside the cell. They bind to these black
    things, which are receptors on the cell
  • 6:32 - 6:37
    membrane. And then a cascade of
    processes happens that in the end leads to
  • 6:37 - 6:44
    an influx of calcium ions, these blue
    balls, from the ER - which is not
  • 6:44 - 6:48
    emergency room, but endoplasmic
    reticulum, which is one of the
  • 6:48 - 6:53
    compartments in the cell - into the
    main compartment, the cytosol of the cell.
  • 6:53 - 6:58
    And also calcium streams into the cell
    from outside the cell. And this leads to a
  • 6:58 - 7:05
    sharp increase of the concentration of
    calcium, until it's pumped out again. There
  • 7:05 - 7:10
    are pumps that take calcium ions and
    remove them from the cytosol, and pump
  • 7:10 - 7:16
    them out of the cell and back into the ER.
    This is very important because calcium is
  • 7:16 - 7:21
    a very versatile second messenger. That's
    what they call it. It regulates
  • 7:21 - 7:26
    a number of very important cellular
    processes. If you move your muscles, your
  • 7:26 - 7:31
    muscle contraction is regulated by
    calcium, learning, secretion of
  • 7:31 - 7:37
    neurotransmitters, transmitters in your
    brain, fertilization. A lot of different
  • 7:37 - 7:46
    things are regulated by calcium and, if you
    simulate the dynamic processes, you get
  • 7:46 - 7:51
    behavior like that. Here you can see it
    oscillates, it shows these regular spikes.
  • 7:51 - 7:59
    So this is the calcium concentration over
    time. Now, if you actually measure this in
  • 7:59 - 8:06
    real cells, and this is data measured by
    collaboration partners of mine in England,
  • 8:06 - 8:12
    you see it's not that smooth. You
    get these differences in amplitude of the
  • 8:12 - 8:18
    peaks, you get secondary spikes, you get
    fluctuations around the basal level, and
  • 8:18 - 8:23
    this is because you have random
    fluctuations in your system. Intrinsic
  • 8:23 - 8:28
    random fluctuations that are just due to
    random fluctuations in the timings of
  • 8:28 - 8:33
    single reactive events. Single reactions,
    biochemical reactions that happen. And in
  • 8:33 - 8:37
    in order to capture this behavior,
    because this behavior is
  • 8:37 - 8:42
    important, that can hamper reliable
    information transfer, we have to resort to
  • 8:42 - 8:48
    special simulation algorithms, for example
    the so-called Gillespie algorithm. And if
  • 8:48 - 8:52
    you do that and apply it to the calcium
    system, you can see you can actually
  • 8:52 - 8:57
    capture these secondary peaks and all the
    different other fluctuations you have in there.
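
A minimal sketch of the Gillespie algorithm itself, applied to a deliberately tiny birth-death toy system rather than the calcium model: every single reaction event is drawn explicitly, which is also why the approach gets expensive for realistic systems:

    # Gillespie / stochastic simulation algorithm for a toy birth-death
    # system (production at rate k_prod, degradation at rate k_deg * n).
    # Illustration only; the calcium model in the talk is much larger.
    import random

    k_prod, k_deg = 10.0, 0.1          # made-up rate constants
    n, t, t_end = 0, 0.0, 100.0
    trajectory = [(t, n)]

    while t < t_end:
        a1 = k_prod                    # propensity of the production reaction
        a2 = k_deg * n                 # propensity of the degradation reaction
        a0 = a1 + a2
        t += random.expovariate(a0)    # waiting time until the next event
        if random.random() * a0 < a1:  # choose which reaction fires
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))

    print(trajectory[-1])              # n fluctuates around k_prod / k_deg = 100
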
  • 8:57 - 9:04
    Now, this is just a Monte Carlo
    simulation. I say "just". It's really time
  • 9:04 - 9:07
    consuming and demanding, because you have
    to calculate each and every single
  • 9:07 - 9:12
    reactive event in the cell. And that takes
    a lot of time. That's why we do that on a
  • 9:12 - 9:17
    compute cluster. I told you already, that
    calcium is a very versatile second
  • 9:17 - 9:22
    messenger. So you have very many different
    triggers of a calcium response in the
  • 9:22 - 9:28
    cell, things that lead to a certain calcium
    dynamics. And on the other hand,
  • 9:28 - 9:33
    downstream, calcium regulates many
    different things. And so you have this
  • 9:33 - 9:38
    hourglass or bow tie structure, and that's
    why people have speculated about the
  • 9:38 - 9:46
    calcium code: How can it be, that the
    proteins - I should go back - that actually do
  • 9:46 - 9:54
    all these cellular functions - [Softly]
    sorry - these green cylinders that bind
  • 9:54 - 9:59
    calcium and are then activated or
    inhibited by it, how can it be that they
  • 9:59 - 10:07
    know, which stimulus or which hormone is
    outside of the cell? They don't see them,
  • 10:07 - 10:13
    because there is a cell membrane around
    the cell, around the cytosol. So people
  • 10:13 - 10:20
    have speculated: Is there information
    encoded in the specific calcium waveform?
  • 10:20 - 10:28
    Is there a calcium code? And how can it be
    that the proteins actually decode that code?
  • 10:28 - 10:35
    It's fairly well established that
    calcium shows amplitude modulation. So the
  • 10:35 - 10:41
    higher the amplitude of calcium, the more
    active some proteins get. It also shows
  • 10:41 - 10:46
    frequency modulation, meaning the higher
    the frequency of the calcium oscillations,
  • 10:46 - 10:50
    the more active some proteins get. But, maybe,
    there are other information carrying
  • 10:50 - 10:57
    features in the waveform, like duration,
    waveform timing and so on. Now a doctoral
  • 10:57 - 11:02
    student in my group, Arne Schoch, has
    looked into frequency modulation and he
  • 11:02 - 11:07
    actually showed that there are proteins,
    in that case NFAT, which is the nuclear
  • 11:07 - 11:12
    factor of activated T-cells, which are
    important in your immune system. They only
  • 11:12 - 11:18
    react to calcium oscillations of a certain
    frequency. So they get activated in a
  • 11:18 - 11:25
    very narrow frequency band, and that's why
    we call it band-pass activation.
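
A minimal sketch of the frequency-modulation idea with a hypothetical one-step activation model driven by a pulsed, calcium-like input: with the pulse width held fixed, the time-averaged activation rises with the input frequency. Reproducing genuine band-pass activation, as for NFAT, needs a more detailed model than this toy:

    # Toy frequency sweep (all numbers made up): a one-step activation model
    # da/dt = k_on * Ca(t) * (1 - a) - k_off * a, driven by rectangular
    # calcium pulses of fixed width; higher pulse frequency -> higher mean
    # activation in this simple model.
    import numpy as np
    from scipy.integrate import solve_ivp

    def calcium(t, period, width=2.0, base=0.05, peak=1.0):
        return peak if (t % period) < width else base

    def mean_activation(period, k_on=1.0, k_off=0.1):
        rhs = lambda t, a: [k_on * calcium(t, period) * (1 - a[0]) - k_off * a[0]]
        sol = solve_ivp(rhs, (0.0, 20.0 * period), [0.0], max_step=0.1)
        half = sol.y[0][sol.y[0].size // 2:]      # discard the initial transient
        return float(np.mean(half))

    for period in (5, 10, 20, 50, 100):
        print(period, round(mean_activation(period), 3))
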
  • 11:25 - 11:33
    Okay, so I guess you all know the signaling speeds
    of technical systems; they are fairly fast by
  • 11:33 - 11:37
    now. One of our results, because we
    quantify actually information transfer, is
  • 11:37 - 11:42
    that calcium signalling operates at
    roughly 0.4 bits per second. If you
  • 11:42 - 11:47
    compare that to technical systems, that
    seems very low, but maybe that's enough
  • 11:47 - 11:53
    for all the functions that a cell has to
    fulfill. So how did we arrive at this result?
  • 11:53 - 11:59
    Well, we used information theory,
    classical information theory, pioneered by
  • 11:59 - 12:06
    people like Claude Shannon in the 40s,
    also by Hartley, Tukey and a few other people.
  • 12:06 - 12:09
    So, they looked at technical
    systems, and they have this prototypical
  • 12:09 - 12:14
    communication system, where there is an
    information source on the left side,
  • 12:14 - 12:19
    then this information is somehow encoded.
    It's transmitted over a noisy channel
  • 12:19 - 12:25
    where the message is scrambled. Then it's
    received by a receiver, decoded, and then
  • 12:25 - 12:30
    hopefully you get the same message at the
    destination, that was chosen at the
  • 12:30 - 12:38
    information source. And in our case we
    look at calcium as an information source
  • 12:38 - 12:46
    and we study how much information is
    actually transferred to downstream proteins.
  • 12:46 - 12:53
    How do you do that? Well, information
    theory 101. Information theory primer.
  • 12:53 - 12:59
    In statistical information theory
    of the Shannon type, you look at random
  • 12:59 - 13:04
    variables. You look at events that have a
    certain probability of happening. So let's
  • 13:04 - 13:13
    say you have an event that happens with a
    certain probability, and then Shannon
  • 13:13 - 13:20
    said that the information content of this
    event should be the negative logarithm -
  • 13:20 - 13:25
    which is shown here, the curve on the
    right hand side - should be the
  • 13:25 - 13:30
    negative logarithm of the probability,
    meaning that if an event happens all the
  • 13:30 - 13:35
    time - and I will show you an example
    later - there is no information content.
  • 13:35 - 13:39
    The information content is zero. There is
    no surprise, if that event happens, because
  • 13:39 - 13:45
    it happens all the time, it's like there's
    a sunny day somewhere in the desert.
  • 13:45 - 13:52
    However, if you go to lower probabilities,
    then the surprisal becomes bigger and the
  • 13:52 - 13:59
    information content rises. Now, in a system
    you have several events that are possible.
  • 13:59 - 14:02
    And if you take the average uncertainty of
    all possible events you get something that
  • 14:02 - 14:08
    Shannon called entropy. This is still not
    information, because information is a
  • 14:08 - 14:12
    difference in entropy. So you have to
    calculate the entropy of a system, and
  • 14:12 - 14:18
    then you calculate the entropy that is
    remaining after an observation, say. And
  • 14:18 - 14:24
    this difference is the information gained
    by the observation. Now, coming to a
  • 14:24 - 14:28
    simple example, let's say we have a very
    simple weather system where you can only
  • 14:28 - 14:34
    have rainy and sunny days. And let's say
    they are equally likely. So you have a
  • 14:34 - 14:46
    probability of 50%, the average of the
    negative logarithm is 1. So, when you
  • 14:46 - 14:52
    observe the weather in the system, you gain
    one bit per day. You can also think of
  • 14:52 - 14:57
    bits as the information you need, or a
    cell needs, to answer or decide on one yes
  • 14:57 - 15:08
    or no question. Now, if it's always sunny
    and no rain, then you get zero information
  • 15:08 - 15:13
    content or uncertainty. The average is
    zero. So you don't get any information if
  • 15:13 - 15:20
    you observe the weather in the desert, say.
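
A minimal sketch of these calculations: the surprisal of an outcome is the negative base-2 logarithm of its probability, and the entropy is the probability-weighted average of the surprisals. It reproduces the fair-weather, desert, Leipzig and four-outcome figures discussed here:

    # Surprisal and entropy for simple weather systems (values in bits).
    import math

    def surprisal(p):
        return -math.log2(p)             # information content of one outcome

    def entropy(probs):
        return sum(p * surprisal(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))           # fair weather: 1.0 bit per day
    print(entropy([1.0, 0.0]))           # desert, always sunny: 0.0 bits per day
    p_rain = 99 / 365                    # Leipzig: 99 rainy days per year
    print(entropy([p_rain, 1 - p_rain])) # roughly 0.84 bit per day
    print(entropy([0.25] * 4))           # four equally likely weather types: 2 bits
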
    80/20: You get a certain bit number per
  • 15:20 - 15:30
    day, in that case .64 per day, and you
    can do that for Leipzig. In that case,
  • 15:30 - 15:34
    Leipzig has ninety nine rainy days per
    year, according to the Deutsche
  • 15:34 - 15:40
    Wetterdienst. This gives you an
    information of .84 bit per day. You can do
  • 15:40 - 15:44
    it in a general way. So let's say you have
    one event with a probability of p and
  • 15:44 - 15:50
    another event with a probability of 1
    minus p and then you get this curve, which
  • 15:50 - 15:57
    shows you that the information content is
    actually maximal if you have maximal
  • 15:57 - 16:02
    uncertainty, if you have equally likely
    events. If you have more possible events -
  • 16:02 - 16:08
    in that case four different ones: sunny,
    cloudy, rainy, and thunderstorm - you get
  • 16:08 - 16:12
    two bit and this is because of the
    logarithm. So if you have double the
  • 16:12 - 16:19
    amount of events and they are equally likely
    you get one bit more. Hope I didn't lose
  • 16:19 - 16:26
    anyone? Now we are always looking at
    processes, dynamic things, things that
  • 16:26 - 16:30
    change over time, and if we look at
    processes we have to look at transition
  • 16:30 - 16:35
    probabilities. So we have to change
    probabilities to transition probabilities
  • 16:35 - 16:43
    and you can summarize them in a matrix. So
    let's say, if we have a sunny day today,
  • 16:43 - 16:48
    it's more likely that it's also sunny
    tomorrow and less likely that it's
  • 16:48 - 16:52
    raining, maybe only 25 percent. And, if
    it's rainy today, you can't tell, it's
  • 16:52 - 17:02
    equally likely. These processes are also
    called Markov processes. Markov was a
  • 17:02 - 17:08
    Russian mathematician and you have them
    everywhere. These Markovian processes are
  • 17:08 - 17:13
    used in your cell phones, in your hard
    drives, they're used for error correction,
  • 17:13 - 17:20
    the PageRank algorithm of Google is one
    big Markov process. So, you're using them
  • 17:20 - 17:29
    all the time, nothing technological would
    work nowadays without them. Because we
  • 17:29 - 17:37
    have knowledge about today's weather, the
    uncertainty about tomorrow's weather decreases.
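
A minimal sketch of that calculation for the two-state weather chain above (sunny stays sunny with probability 0.75, a rainy day is followed by sun or rain with equal probability); it reproduces the entropy of about 0.92 bit per day and the entropy rate of about 0.87 bit per day quoted next:

    # Entropy vs. entropy rate for the two-state weather Markov chain.
    import math

    # transition probabilities P[today][tomorrow] from the example above
    P = {"sunny": {"sunny": 0.75, "rainy": 0.25},
         "rainy": {"sunny": 0.50, "rainy": 0.50}}

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # stationary distribution of this chain: 2/3 sunny days, 1/3 rainy days
    pi = {"sunny": 2 / 3, "rainy": 1 / 3}

    plain_entropy = entropy(pi.values())                           # ~0.92 bit/day
    entropy_rate = sum(pi[s] * entropy(P[s].values()) for s in P)  # ~0.87 bit/day
    print(plain_entropy, entropy_rate)
    print(plain_entropy - entropy_rate)  # gain from knowing today's weather
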
  • 17:37 - 17:46
    So now we have an entropy rate,
    instead of an entropy. The difference is,
  • 17:46 - 17:51
    again, the information you gain by today's
    weather. You can do the maths in our
  • 17:51 - 17:59
    example. The entropy would be .92 bit per
    day and the entropy rate, given that you
  • 17:59 - 18:06
    know today's weather, is less. It's .87
    bit per day. Now, to complicate things a
  • 18:06 - 18:11
    bit more, maybe, we also look at a second
    process, in that case air pressure, and you
  • 18:11 - 18:17
    can measure air pressure with these little
    devices, the barometers and maybe, if it's
  • 18:17 - 18:22
    sunny today and the air pressure is high,
    in 90 percent you get a sunny day
  • 18:22 - 18:26
    tomorrow. Normally in 10 percent of the
    cases you get a rainy day and so on you
  • 18:26 - 18:32
    can go through the table. In our case, I
    looked it up yesterday. We had a high air
  • 18:32 - 18:39
    pressure and it was raining. So in our
    little model system it would mean that
  • 18:39 - 18:48
    it's sunny today. Now, I told you
    information is a decrease in uncertainty.
  • 18:48 - 18:52
    How much information do we get by the
    barometer, by knowing the air pressure?
  • 18:52 - 18:56
    This is the difference in uncertainty
    without barometer and with the barometer
  • 18:56 - 19:01
    and in our case we have to assume that the
    probability of high and low air pressure
  • 19:01 - 19:08
    is the same. And we get .39 bit per day,
    that we gain by looking at the air
  • 19:08 - 19:13
    pressure. Now, what does that have to do
    with biological systems? Well we have two
  • 19:13 - 19:17
    processes. We have a calcium process that
    shows some dynamics and we have the
  • 19:17 - 19:23
    process of an activated protein that does
    something in the cell. So we can look at
  • 19:23 - 19:28
    both of these and then calculate how much
    information is actually transferred from
  • 19:28 - 19:33
    calcium to the protein. How much
    uncertainty do we lose about the
  • 19:33 - 19:37
    protein dynamics, if we know the calcium
    dynamics? This is mathematically exactly
  • 19:37 - 19:43
    what we are doing and this is called
    transfer entropy. It's an information-
  • 19:43 - 19:49
    theoretic measure developed by Thomas
    Schreiber in 2000. There are some
  • 19:49 - 19:56
    practical complications, that we are
    working on, and this is what we are using
  • 19:56 - 20:01
    actually for the calculations. So in our
    case we have data from experiments or we
  • 20:01 - 20:07
    use models of calcium oscillations and
    then we couple a model of a protein to
  • 20:07 - 20:14
    these calcium dynamics. This gives us time
    courses, both of calcium and protein,
  • 20:14 - 20:20
    stochastic time courses, including the
    random fluctuations. And then we use the
  • 20:20 - 20:26
    information-theoretic machinery to study
    them. And some of our results I want to show
  • 20:26 - 20:30
    you. For example, if you increase the
    system size, if you increase the particle
  • 20:30 - 20:35
    numbers, if you make the cell bigger, then
    the information that you can transfer is
  • 20:35 - 20:41
    higher. Meaning, if the cell invests more
    energy and produces more proteins, it can
  • 20:41 - 20:45
    actually achieve a more reliable
    information transfer, which comes of course
  • 20:45 - 20:52
    with costs for the cell. Also, it seems,
    that if you use more complicated dynamics
  • 20:52 - 20:56
    - meaning not only spiking, but maybe
    bursting behavior where you have secondary
  • 20:56 - 21:00
    spikes - then you can transmit more
    information because the input signal
  • 21:00 - 21:08
    carries more information or can carry more
    information in its different features.
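
For concreteness, a minimal plug-in estimator of the transfer entropy mentioned above, for two discretized time series and a history length of one. This is only a sketch; estimating such quantities reliably from sparse, noisy data is exactly the difficulty mentioned towards the end of the talk:

    # Plug-in transfer entropy T(X -> Y), history length 1, for two discrete
    # time series:  T = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]
    import math
    import random
    from collections import Counter

    def transfer_entropy(x, y):
        triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
        pairs_yx = Counter(zip(y[:-1], x[:-1]))
        pairs_yy = Counter(zip(y[1:], y[:-1]))
        singles_y = Counter(y[:-1])
        n = len(x) - 1
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_joint = c / n
            p_full = c / pairs_yx[(y0, x0)]                 # p(y1 | y0, x0)
            p_self = pairs_yy[(y1, y0)] / singles_y[y0]     # p(y1 | y0)
            te += p_joint * math.log2(p_full / p_self)
        return te

    # toy check: y copies x with a one-step delay, so x carries ~1 bit per step
    x = [random.randint(0, 1) for _ in range(10000)]
    y = [0] + x[:-1]
    print(transfer_entropy(x, y))
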
  • 21:08 - 21:12
    Another result is that proteins - a very
    interesting result I think - is that
  • 21:12 - 21:18
    proteins can actually be tuned to certain
    characteristics of the calcium input.
  • 21:18 - 21:22
    Meaning, with all the different calcium
    sensitive proteins in the cell they are
  • 21:22 - 21:28
    tuned to a specific signal. So they only
    get activated or these pathways only
  • 21:28 - 21:33
    allow information transmission, if a
    certain signal is observed in the cell by
  • 21:33 - 21:39
    these proteins. So, in a way the 3D
    structure of the protein defines how it
  • 21:39 - 21:47
    behaves dynamically, how quickly it binds
    and so on, how many binding sites it has,
  • 21:47 - 21:55
    and then this dynamic behavior determines
    to what input signals that protein is
  • 21:55 - 22:00
    actually sensitive. On the right hand side
    you can see some calculations we did. The
  • 22:00 - 22:06
    peaks actually show where this specific
    protein, which is a calmodulin-like protein
  • 22:06 - 22:10
    - you don't have to memorize that, it's a
    very important calcium sensitive protein -
  • 22:10 - 22:16
    where these differently parameterized
    models actually get activated and allow
  • 22:16 - 22:20
    information transfer. And this allows
    differential regulation because you have
  • 22:20 - 22:26
    all the different proteins. You have only
    one calcium concentration and only the
  • 22:26 - 22:32
    proteins that are sensitive to a specific
    input get activated or do their things in
  • 22:32 - 22:36
    the cell. Now if you look at more
    complicated proteins - so Calmodulin, the
  • 22:36 - 22:41
    one I just showed you, was only activated
    by calcium - more complicated proteins,
  • 22:41 - 22:47
    like protein kinase C, for example, they are
    both activated and inhibited. So they show
  • 22:47 - 22:52
    biphasic behavior, where in an
    intermediate range of calcium
  • 22:52 - 22:56
    concentration they get activated; at
    very high or very low concentrations they
  • 22:56 - 23:02
    are inactivated. You can actually see that
    these more complicated proteins allow a
  • 23:02 - 23:08
    higher information transfer and again
    producing these more complicated proteins
  • 23:08 - 23:13
    might be more costly for the cell, but it
    can be valuable, because they allow more
  • 23:13 - 23:18
    information to be transferred. And this
    you can see in this plot where we actually
  • 23:18 - 23:23
    scanned over the activation and the
    inhibition constant of these model
  • 23:23 - 23:27
    proteins and you can see that you have
    these sweet spots where you get a very
  • 23:27 - 23:32
    high information transfer. So color coded
    is transfer entropy. Now, coming to a
  • 23:32 - 23:38
    different system: Just quickly, we also
    looked at other systems of course. Calcium
  • 23:38 - 23:43
    signaling is just one of our favorite ones.
    We also looked at bacteria and this is
  • 23:43 - 23:51
    E. coli, a very famous model system for
    biologists. These are cells that can
  • 23:51 - 23:59
    actually move around because they have
    little propellers, flagella, at their end. They want to
  • 23:59 - 24:05
    find sources of nutrients, for example, to
    get food. So they swim into a direction
  • 24:05 - 24:11
    and then they decide whether
    to keep swimming in that direction
  • 24:11 - 24:17
    or whether to tumble, reorient randomly,
    and swim in some other direction. The
  • 24:17 - 24:24
    problem for them is they are too small.
    They can't detect a concentration gradient
  • 24:24 - 24:30
    of nutrients, of food between their front
    and the back of the cell. So they have to
  • 24:30 - 24:35
    swim in one direction and then they have
    to remember some nutrient concentration of
  • 24:35 - 24:41
    some time back and then they have to
    compare: Is the nutrient
  • 24:41 - 24:45
    concentration actually increasing? Then I
    should continue swimming. If it's
  • 24:45 - 24:50
    decreasing, I should reorient and swim in
    some other direction. This allows them to,
  • 24:50 - 24:58
    on average, swim towards sources of food.
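
A minimal sketch of that run-and-tumble decision rule on a one-dimensional nutrient profile, with purely hypothetical numbers and none of the receptor or methylation machinery described next:

    # Run-and-tumble on a 1D nutrient profile (all numbers made up): remember
    # an earlier reading, compare with the current one, keep running while the
    # concentration increases, otherwise tumble (pick a new random direction).
    import random

    def nutrient(pos):
        return max(0.0, 10.0 - abs(pos - 50.0))   # food source at position 50

    pos, direction = 0.0, 1.0
    memory = nutrient(pos)                        # remembered earlier reading

    for step in range(200):
        pos += direction                          # run one step
        current = nutrient(pos) + random.gauss(0.0, 0.2)   # noisy sensing
        if current < memory:                      # concentration dropped: tumble
            direction = random.choice([-1.0, 1.0])
        memory = current                          # update the memory

    print("final position:", pos)                 # typically ends up near the source
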
    In order to compare over time the nutrient
  • 24:58 - 25:05
    concentrations they have to memorize, they
    have to know how much nutrient was
    there some time ago. For that they have a
    there sometime ago. For that they have a
    little memory and the memory is actually
  • 25:12 - 25:17
    in the - you can see on the left hand side
    the receptor that actually senses these
  • 25:17 - 25:22
    nutrients. They can be modified, these
    receptors, we call that methylated. So they
  • 25:22 - 25:27
    get a methyl group attached. They
    have different states of methylation, five
  • 25:27 - 25:34
    different ones in that model we are
    looking at. This builds a memory. And we
  • 25:34 - 25:38
    looked into that, we quantified that with
    information theory. This is a measure,
  • 25:38 - 25:43
    this is called mutual information. It's
    not transfer entropy, it's another measure
  • 25:43 - 25:50
    of, in that case, static information.
    You can see, this is the amount of
  • 25:50 - 25:56
    information that is actually stored about
    the nutrient concentration that is outside
  • 25:56 - 26:02
    of the cell. This is in nats, it's not in
    bits. It's just a different unit for
  • 26:02 - 26:07
    information - you can translate them,
    1 nat being about 1.44 bits. You can also see how the
  • 26:07 - 26:14
    different methylation states - so these
    are the colored curves - how they go
  • 26:14 - 26:22
    through or how they are active with
    different nutrient concentrations. This is
  • 26:22 - 26:26
    ongoing research. So, maybe, next time,
    hopefully, next time, I can show you much
  • 26:26 - 26:32
    more. Just to finish this, we also look at
    timescales, because the timescales have to
  • 26:32 - 26:39
    be right. The system adapts. So if you
    keep that cell in a certain nutrient
  • 26:39 - 26:43
    concentration, it adapts to that nutrient
    concentration and goes back to its normal
  • 26:43 - 26:48
    operating level. Now, if you increase the
    nutrient concentration again, it shows some
  • 26:48 - 26:54
    swimming behavior. So it adapts, but it
    also has to decide, it also has to compare
  • 26:54 - 27:00
    the different nutrients at different
    positions. That's how they have to manage
  • 27:00 - 27:05
    the different timescales of decision
    making and memory or adaptation and we are
  • 27:05 - 27:10
    looking into that as well. Coming to the
    conclusions, I hope I could convince you
  • 27:10 - 27:14
    that information theory can be applied to
    biology, that it's a very interesting
  • 27:14 - 27:23
    topic, it's a fascinating area and we are
    just at the beginning of doing that. I also
  • 27:23 - 27:29
    showed you that in
    signaling pathways the components can be
  • 27:29 - 27:34
    tuned to their input, which allows
    differential regulation. So even though
  • 27:34 - 27:40
    you don't have wires you can still
    specifically activate different proteins
  • 27:40 - 27:50
    with one signal or multiplex, if you want.
    We are of course in the process of
  • 27:50 - 27:56
    studying what features of the input signal
    are actually information-carrying. So we
  • 27:56 - 28:03
    are looking into things like wave form and
    timing. And we want to look into how these
  • 28:03 - 28:09
    things change in the diseased case. So, if
    you have things like cancer where certain
  • 28:09 - 28:16
    signalling pathways are perturbed or fail,
    we want to find out exactly what that
  • 28:16 - 28:21
    does to the information processing
    capabilities of the cell. We also found
  • 28:21 - 28:27
    out that estimating these information
    theoretical quantities can be a very
  • 28:27 - 28:33
    tricky business. Another project we are
    doing at the moment is actually only on
  • 28:33 - 28:40
    how to interpret these in a reliable
    manner, how to estimate these from sparse
  • 28:40 - 28:45
    and noisy data. So that's also ongoing
    work. I would like to thank some of my
  • 28:45 - 28:51
    collaborators, of course, my own group, but
    also some others, in particular the Copasi
  • 28:51 - 28:57
    team, that is spread all over the world.
    And with that I would like to thank you
  • 28:57 - 29:01
    for your attention and I would be happy to
    answer any question you might have.
  • 29:01 - 29:02
    Thank you.
  • 29:02 - 29:05
    applause
  • 29:05 - 29:13
    Herald Angel: ... a very warm applause
    for Jürgen. If you have questions, there
  • 29:13 - 29:16
    are two microphones, microphone number
    one, microphone number two and please
  • 29:16 - 29:23
    speak loudly into the microphone. And, I think
    the first one is microphone number two.
  • 29:23 - 29:25
    Your question please.
    Microphone 2: Has there been any work done
  • 29:25 - 29:30
    on computational modelling of the G-protein
    coupled receptors and the second messenger
  • 29:30 - 29:32
    cascades there?
    Jürgen: Can you repeat that, sorry.
  • 29:32 - 29:36
    Microphone 2: Has any work been done
    on computational modelling of G protein-
  • 29:36 - 29:38
    coupled receptors?
    Jürgen: G protein?
  • 29:38 - 29:40
    Microphone 2: Yeah.
    Jürgen: Oh yes, I mean we are doing that
  • 29:40 - 29:44
    because calcium is actually... I mean the
    calcium signal is actually triggered by a
  • 29:44 - 29:50
    cascade that includes the G protein. Most
    of these receptors are actually G coupled
  • 29:50 - 29:54
    or G protein coupled receptors. So that's
    what we are doing.
  • 29:54 - 29:57
    Angel: Thank you. Microphone number two
    again.
  • 29:57 - 30:01
    Microphone 2: First of all thanks for the
    talk. I want to ask: you talked a
  • 30:01 - 30:08
    little bit about how different proteins
    get activated by different signals and
  • 30:08 - 30:15
    could you go a bit into detail about what
    kind of signal qualities the proteins can
  • 30:15 - 30:22
    detect? So are they triggered by specific
    frequencies or specific decays, like which
  • 30:22 - 30:28
    characteristics of the signals can be
    picked up by the different proteins?
  • 30:28 - 30:32
    Jürgen: Well, that's actually what we
    study. I mean we have another package that
  • 30:32 - 30:37
    is linked here, the last one, the
    oscillator generator. This is a package in
  • 30:37 - 30:43
    R that allows you to create artificial
    inputs, where you have complete control of
  • 30:43 - 30:49
    all the parameters like amplitude and
    duration of the peak, duration of the
  • 30:49 - 30:55
    secondary peak, frequencies of the primary
    peaks and of the secondary peaks, refractory
  • 30:55 - 31:00
    period and so on. You have complete
    control and at the moment we are also
  • 31:00 - 31:05
    running scans and want to find out what
    proteins are actually sensitive to what
  • 31:05 - 31:10
    parameters in the input signal. What we
    know from calcium is that, for example,
  • 31:10 - 31:19
    calcium/calmodulin-dependent kinase II (CaMKII), also a very
    important protein in the nervous system,
  • 31:19 - 31:26
    that shows frequency modulation. It has
    also been shown experimentally where they
  • 31:26 - 31:30
    put that protein on a surface, they
    immobilized it on a surface, and then they
  • 31:30 - 31:34
    superfused it with calcium concentrations
    or with solutions of different calcium
  • 31:34 - 31:39
    concentration in a pulsed manner and they
    measured the activity of that protein and
  • 31:39 - 31:45
    they showed that, with increasing frequency,
    the activation gets bigger. At the same
  • 31:45 - 31:48
    time it also shows amplitude modulation,
    okay? It's also sensitive to the
  • 31:48 - 31:55
    amplitude, meaning the absolute height of
    the concentration of calcium.
  • 31:55 - 31:57
    Microphone 2: Thanks.
    Jürgen: Thank you.
  • 31:57 - 32:01
    Angel: And again number two please.
    Microphone 2: Hey. So you talked about a
  • 32:01 - 32:07
    lot of on and off kinetics and I wonder, if
    you think about neurons, which are not only
  • 32:07 - 32:14
    having on and off, but also many amplitudes
    that play a big role in the development of
  • 32:14 - 32:21
    cells and synapses. How do you measure
    that, so how do you measure like baseline,
  • 32:21 - 32:26
    sporadic activity of calcium?
    Jürgen: Well, in our case there are
  • 32:26 - 32:29
    different ways of measuring calcium.
    That's not what we are doing...
  • 32:29 - 32:32
    Microphone 2: ... not really measuring,
    sorry, but more like how do you integrate it
  • 32:32 - 32:37
    in your system? Because it's not really an
    on/off reaction but it's more like a
  • 32:37 - 32:43
    sporadic miniature.
    Jürgen: Yeah, I mean in the case of
  • 32:43 - 32:49
    calcium you have these time courses, okay?
    And we look at the complete time
  • 32:49 - 32:53
    course. So we have the calcium
    concentration sampled at every second or
  • 32:53 - 32:59
    half second in the cell by different
    methods. So our collaboration partners
  • 32:59 - 33:05
    they use different dyes that show
    fluorescence, say, when they bind calcium.
  • 33:05 - 33:10
    Some others show bioluminescence. And then
    we use these time courses. In the neural
  • 33:10 - 33:18
    system it's a bit different. There you
    also get the analog mode, where neurons are
  • 33:18 - 33:24
    directly connected and they exchange
    substances, but most of the case you have
  • 33:24 - 33:29
    action potentials and I didn't go into
    neural systems at all because things there
  • 33:29 - 33:34
    are totally different. You get these
    action potentials that are uniform mostly,
  • 33:34 - 33:38
    so they all have the same duration,
    they all have the same amplitude. And then
  • 33:38 - 33:44
    people in neuroscience or computational
    neuroscience mostly boil the
  • 33:44 - 33:50
    information down to just the timings of
    these peaks and they use this information
  • 33:50 - 33:54
    and mathematically this is a point process
    and you can use different mathematical
  • 33:54 - 34:00
    tools to study that. We are not really
    looking into neurons. We are mostly
  • 34:00 - 34:07
    interested in non-excitable cells, like
    liver cells, pancreatic cells and so on,
  • 34:07 - 34:12
    cells that are not activated, they don't
    show massive depolarization, like
  • 34:12 - 34:18
    neurons. Thank you.
    Angel: Thank you. And obviously again
  • 34:18 - 34:22
    number two.
    Microphone 2: Hi. So, you mentioned CaM
  • 34:22 - 34:29
    kinase II. I got that you don't work
    on neuroscience specifically, but I'm
  • 34:29 - 34:33
    pretty sure you have a quite extensive
    knowledge in the subject. What do you
  • 34:33 - 34:42
    think about this, I would say, hypotheses
    that were quite popular a few years ago, I
  • 34:42 - 34:49
    think in the US mainly, about the fact
    that the cytoskeleton of neurons can
  • 34:49 - 34:58
    actually encode and decode memories, through kinases
    in the cytoskeleton memories like bits in
  • 34:58 - 35:03
    - you know - in a hard drive. What's your
    feeling?
  • 35:03 - 35:07
    Jürgen: Well, I'm not going to speculate
    on that specific hypothesis because I'm
  • 35:07 - 35:12
    not really into that, but I know that many
    people are also looking into spatial
  • 35:12 - 35:17
    effects which I didn't mention here. I
    mean the model I showed you is a spatially
  • 35:17 - 35:22
    homogeneous model. We don't look at
    concentration gradients within the cell,
  • 35:22 - 35:27
    our cells are homogeneous at the moment,
    but people do that. And of course then you
  • 35:27 - 35:34
    can look into things, for example, like a
    new topic is morphological computation,
  • 35:34 - 35:39
    meaning that spatially you can also
    perform computations. But, if you're
  • 35:39 - 35:41
    interested in that, I mean, we can
    talk offline...
  • 35:41 - 35:44
    Microphone 2: ... do you buy into this
    theory...
  • 35:44 - 35:45
    Jürgen: ... I can give you some pointers
    there..
  • 35:45 - 35:50
    Microphone 2: ... but do you have a good
    feeling about these theories, or do you think
  • 35:50 - 35:52
    they're clueless?
    Jürgen: Well, I think that the spatial
  • 35:52 - 35:56
    aspect is a very important thing. And
    that's also something we should
  • 35:56 - 36:02
    look at. I mean, to me random fluctuations
    are very important, intrinsic fluctuations
  • 36:02 - 36:06
    because you can't separate them from the
    dynamics of the system. They are always
  • 36:06 - 36:12
    there, at least some of the fluctuations.
    And also the spatial effects are very
  • 36:12 - 36:15
    important, because you not only
    have these different compartments,
  • 36:15 - 36:20
    where the reactions happen, but you also
    have concentration gradients across the
  • 36:20 - 36:25
    cell. Especially with calcium, people have
    looked into calcium puffs and calcium
  • 36:25 - 36:30
    waves because, when you have a channel that
    allows calcium to enter, of course directly
  • 36:30 - 36:34
    at that channel you get a much higher
    calcium concentration and then in some
  • 36:34 - 36:40
    cases you get waves that are travelling
    across the cell. And to me it sounds
  • 36:40 - 36:44
    plausible that this also has a major
    impact on the information processing.
  • 36:44 - 36:49
    Yeah. Thank you.
    Angel: Thank you. In this case, Jürgen,
  • 36:49 - 36:55
    thank you for the talk. And please give a
    very warm applause to him.
  • 36:55 - 36:56
    applause
  • 36:56 - 36:59
    Jürgen: Thank you.
  • 36:59 - 37:03
    applause
  • 37:03 - 37:08
    postroll music
  • 37:08 - 37:26
    subtitles created by c3subtitles.de
    in the year 2019. Join, and help us!