
#rC3 - Scientific Literacy 101

  • 0:00 - 0:14
    rC3 preroll music
  • 0:14 - 0:19
    Herald: Welcome with me, with a big round
    of applause in your living room or
  • 0:19 - 0:26
    wherever you are, derJoram. derJoram is a
    science communicator. He got his
  • 0:26 - 0:31
    university education and his first
    scientific experience at the Max Planck
  • 0:31 - 0:39
    Institute. And he will now give you a
    crash course for beginners to have the
  • 0:39 - 0:45
    best insight into the scientific method
    and to distinguish science from rubbish.
  • 0:45 - 1:04
    derJoram, the stage is yours.
  • 1:04 - 1:08
    derJoram: Hi, nice to have you here. My name
    is Joram Schwartzmann and I'm a plant
  • 1:08 - 1:13
    biologist. And today I want to talk about
    science. I have worked in research for
  • 1:13 - 1:19
    many years, first during my diploma thesis
    and then during my doctoral research. I've
  • 1:19 - 1:22
    worked both at universities and at the Max
    Planck Institute. So I got pretty good
  • 1:22 - 1:27
    insights into the way these structures
    work. After my PhD, I left the research
  • 1:27 - 1:32
    career to instead talk about science,
    which is also what I'm about to do today.
  • 1:32 - 1:37
    I am working now in science communication,
    both as a job and in my spare time, when I
  • 1:37 - 1:41
    write about molecular plant research
    online. Today, I will only mention plants
  • 1:41 - 1:45
    a tiny bit because the topic is a
    different one. Today though, we are
  • 1:45 - 1:50
    talking about science literacy. So
    basically, how does the scientific system
  • 1:50 - 1:53
    work? How do you read scientific
    information and which information can you
  • 1:53 - 2:00
    trust? Science. It's kind of a big topic.
    Before we start, it's time for some
  • 2:00 - 2:05
    disclaimers: I am a plant biologist. I
    know stuff about STEM research, that is,
  • 2:05 - 2:09
    science, technology, engineering and
    mathematics. But there's so much more
  • 2:09 - 2:14
    other science out there. Social science
    and humanities share many core concepts
  • 2:14 - 2:19
    with natural sciences, but also have many
    approaches that are unique to them. I
  • 2:19 - 2:22
    don't know a lot about the way these
    work, so please forgive me if I stick
  • 2:22 - 2:27
    close to what I know, which is STEM
    research. Talking about science is also
  • 2:27 - 2:31
    much less precise than doing the science.
    For pretty much everything that I'll bring
  • 2:31 - 2:35
    up today there is an example where it is
    completely different. So if in your
  • 2:35 - 2:40
    country, field of research or experience
    something is different, we're probably
  • 2:40 - 2:44
    both right about whatever we're talking about.
    With that out of the way, let's look at
  • 2:44 - 2:49
    the things that make science science.
    There are three parts of science that are
  • 2:49 - 2:53
    connected. The first one is the scientific
    system. This is the way science is done.
  • 2:53 - 2:57
    Next up, we have people who do the
    science. The scientific term for them is
  • 2:57 - 3:01
    researchers. We want to look at how you
    become a researcher, how researchers
  • 3:01 - 3:07
    introduce biases and how they pick their
    volcanic lair to do evil science.
  • 3:07 - 3:11
    Finally, there are publications and this
    is the front end of science, the stuff we
  • 3:11 - 3:15
    look at most of the time when we look at
    science. There are several different kinds
  • 3:15 - 3:20
    and not all of them are equally
    trustworthy. Let's begin with the
  • 3:20 - 3:26
    scientific system. We don't just do
    science, we do science systematically.
  • 3:26 - 3:30
    Since the first people tried to understand
    the world around them, we have developed a
  • 3:30 - 3:35
    complex system for science. At the core of
    that is the scientific method. The
  • 3:35 - 3:39
    scientific method gives us structure and
    tools to do science. Without it, we end up
  • 3:39 - 3:44
    in the realm of guesswork, anecdotes and
    false conclusions. Here are some of my
  • 3:44 - 3:48
    favorite things that were believed before
    the scientific method became standard.
  • 3:48 - 3:54
    Gentlemen could not transmit disease. Mice
    are created from grain and cloth. Blood is
  • 3:54 - 4:00
    exclusively produced by the liver. Heart
    shaped plants are good for the heart. But
  • 4:00 - 4:03
    thanks to the scientific method, we have a
    system that allows us to make confident
  • 4:03 - 4:08
    judgments about our observations. Let's use an
    example. This year has aged me
  • 4:08 - 4:13
    significantly and so as a newly formed old
    person, I have pansies on my balcony. I
  • 4:13 - 4:17
    have blue ones and yellow ones, and in
    summer I can see bees buzz around the
  • 4:17 - 4:22
    flowers. I have a feeling, though, that
    they like the yellow ones better. That
  • 4:22 - 4:26
    right there is an observation. I now think
    to myself: I wonder if they prefer the
  • 4:26 - 4:32
    yellow flowers over the blue ones based on
    the color. And this is my hypothesis. The
  • 4:32 - 4:37
    point of a hypothesis is to test it so I
    can accept it or reject it later. So I
  • 4:37 - 4:40
    come up with a test. I count all bees that
    land on yellow flowers and on blue flowers
  • 4:40 - 4:46
    within a weekend. That is my experiment.
    So I sit there all weekend with one of
  • 4:46 - 4:50
    these clicky things in each hand and count
    the bees on the flowers. Every time a bee
  • 4:50 - 4:54
    lands on a flower, I click. Click, click,
    click, click, click. It's the most fun I
  • 4:54 - 5:00
    had all summer. In the end, I look at my
    numbers. These are my results. I saw sixty
  • 5:00 - 5:04
    four bees on the yellow flowers and twenty
    seven on the blue flowers. Based on my
  • 5:04 - 5:09
    experiment I conclude that bees prefer
    yellow pansies over blue ones. I can now
  • 5:09 - 5:14
    return to my hypothesis and accept it. Bees do
    prefer yellow flowers over blue ones.
  • 5:14 - 5:18
    Based on that experiment I made a new
    observation and can now make a new
  • 5:18 - 5:23
    hypothesis: do other insects follow the
    same behavior? And so I sit there again
  • 5:23 - 5:29
    the next weekend, counting all hoverflies on
    my pansies. Happy days. The scientists in
  • 5:29 - 5:34
    the audience are probably screaming by
    now. I am, too, but on the inside. My
  • 5:34 - 5:38
    little experiment and the conclusions I
    drew were flawed. First up, I didn't do any
  • 5:38 - 5:44
    controls apart from yellow versus blue.
    What about time? Do the days or seasons
  • 5:44 - 5:48
    matter? Maybe I picked the one time
    period when bees actually do prefer yellow
  • 5:48 - 5:52
    but on most other days they like blue
    better? And then I didn't control for
  • 5:52 - 5:56
    position. Maybe the blue ones get less
    sunlight and are less warm and so a good
  • 5:56 - 6:01
    control would have been to swap the pots
    around. I also said I wanted to test
  • 6:01 - 6:05
    color. Another good control would have
    been to put up a cardboard cutout of a
  • 6:05 - 6:09
    flower in blue and yellow and see whether
    it is the color or maybe another factor
  • 6:09 - 6:14
    that attracts the bees. And then I only
    counted once. I put the two data points
  • 6:14 - 6:18
    into an online statistical calculator and
    when I hit calculate, it told me I had
  • 6:18 - 6:21
    internet connectivity problems. So I
    busted out my old textbook about
  • 6:21 - 6:25
    statistics. And as it turns out, you need
    repetitions of your experiment to do
  • 6:25 - 6:30
    statistics and without statistics, you
    can't be sure of anything. If you want to
  • 6:30 - 6:33
    know whether what you measure is random or
    truly different between your two
  • 6:33 - 6:37
    conditions, you do a statistical test that
    tells you with what probability your
  • 6:37 - 6:42
    result could be random. That is called a
    P-value. You want that number to be low.
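To make the p-value idea concrete, here is a minimal sketch of what such a test could look like for the bee counts from the talk. The choice of a binomial test against a 50/50 "no preference" null hypothesis is my illustrative assumption, not the analysis from the talk:

```python
# Minimal sketch: could 64 landings on yellow vs. 27 on blue plausibly
# come from bees with no color preference at all (a 50/50 null)?
# Assumes scipy >= 1.7 is installed.
from scipy.stats import binomtest

yellow, blue = 64, 27
result = binomtest(yellow, n=yellow + blue, p=0.5)

print(f"p-value: {result.pvalue:.6f}")
# Below 0.05 is the usual threshold in biology; particle physics demands
# roughly 0.0000003, the "five sigma" convention mentioned next.
```

Note that a single weekend of counting is still not a proper experiment - the controls and repetitions discussed below are missing.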
  • 6:42 - 6:47
    In biology, we're happy with a chance of
    one in twenty. So five percent that the
  • 6:47 - 6:51
    difference we observe between two
    measurements happened by chance. In high
  • 6:51 - 6:55
    energy particle physics, that chance of
    seeing a random effect is 1 in 3,500,000
  • 6:55 - 7:01
    or 0.00003%. So without
    statistics, you can never be sure whether
  • 7:01 - 7:06
    you observe something important or just
    two numbers that look different. A good
  • 7:06 - 7:10
    way to do science is to do an experiment a
    couple of times, three at least, and then
  • 7:10 - 7:15
    repeat it with controls again at least
    three times. With a bigger data set, I
  • 7:15 - 7:19
    could actually make an observation that
    holds significance. So why do I tell you
  • 7:19 - 7:23
    all of this? You want to know how to
    understand science, not how to do it
  • 7:23 - 7:27
    yourself? Well, as it turns out, controls
    and repetitions are also a critical point
  • 7:27 - 7:31
    to check when you read about scientific
    results. Often enough cool findings are
  • 7:31 - 7:35
    based on experiments that didn't control
    for certain things or that are based on
  • 7:35 - 7:39
    very low numbers of repetitions. You have
    to be careful with conclusions from these
  • 7:39 - 7:44
    experiments as they might be wrong. So
    when you read about science, look for
  • 7:44 - 7:47
    signs that they followed the scientific
    method like a clearly stated hypothesis,
  • 7:47 - 7:53
    experiments with proper controls and
    enough repetitions to do solid statistics.
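As a small illustration of why those repetitions matter, here is a sketch with three counts per color instead of one - only with several independent repetitions is there any variation to test against. The counts and the choice of a t-test are my invented assumptions for illustration:

```python
# Sketch: repeated, independent weekend counts (invented numbers) make a
# significance test possible; a single count per color would not.
# Assumes scipy is installed.
from scipy.stats import ttest_ind

yellow_counts = [64, 58, 71]  # three repetitions, yellow flowers
blue_counts = [27, 33, 25]    # three repetitions, blue flowers

statistic, pvalue = ttest_ind(yellow_counts, blue_counts)
print(f"p-value: {pvalue:.4f}")  # below 0.05: unlikely to be pure chance
```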
  • 7:53 - 7:57
    It seems like an obvious improvement for
    the scientific system to just do more
  • 7:57 - 8:01
    repetitions. Well, there is a problem with
    that. Often experiments require the
  • 8:01 - 8:05
    researchers to break things. Maybe just
    because you take the things out of their
  • 8:05 - 8:08
    environment and into your lab, maybe
    because you can only study it when it's
  • 8:08 - 8:13
    broken. And as it turns out, not all
    things can be broken easily. Let me
  • 8:13 - 8:18
    introduce you to my scale of how easy it
    is to break the thing you study. All the
  • 8:18 - 8:22
    way to the left, you have things like
    particle physics. It's easy to break
  • 8:22 - 8:26
    particles. All you need is a big ring and
    some spare electrons you put in there
  • 8:26 - 8:30
    really, really fast. Once you have these
    two basic things, you can break millions
  • 8:30 - 8:34
    of particles and measure what happens so
    you can calculate really good statistics
  • 8:34 - 8:38
    on them. Then you have other areas of
    physics. In material science, the only
  • 8:38 - 8:43
    thing that stops you from testing how hard
    a rock is, is the price of your rock.
  • 8:43 - 8:48
    Again, that makes us quite confident in
    the material properties of things. Now we
  • 8:48 - 8:54
    enter the realm of biology. Biology is
    less precise because living things are not
  • 8:54 - 8:59
    all the same. If you take two bacterial
    cells of the same species, they might
  • 8:59 - 9:03
    still be slightly different in their
    genome. But luckily we can break millions
  • 9:03 - 9:08
    of bacteria and other microbes without
    running into ethical dilemmas. We even ask
  • 9:08 - 9:12
    researchers to become better at killing
    microbes. So doing more of the experiment
  • 9:12 - 9:17
    is easier when working with microbes. It
    gets harder, though, with bigger and more
  • 9:17 - 9:22
    complex organisms. Want to break plants in
    a greenhouse or in a field? As long as you
  • 9:22 - 9:26
    have the space, you can break thousands of
    them for science and no one minds. How
  • 9:26 - 9:30
    about animals like fish and mice and
    monkeys? There it gets much more
  • 9:30 - 9:34
    complicated very quickly. While we are
    happy to kill thousands of pigs every day
  • 9:34 - 9:38
    for sausages, we feel much less
    comfortable doing the same for science.
  • 9:38 - 9:42
    And it's not a bad thing when we try to
    reduce harm to animals. So while you
  • 9:42 - 9:46
    absolutely can do repetitions and controls
    in animal testing, you usually are
  • 9:46 - 9:51
    limited by the number of animals you can
    break for science. And then we come to
  • 9:51 - 9:55
    human biology. If you thought it was hard
    doing lots of repetitions and controls in
  • 9:55 - 10:00
    animals, try doing that in humans. You
    can't grow a human on a corn sugar based
  • 10:00 - 10:04
    diet just to see what would happen. You
    can't grow humans in isolation and you
  • 10:04 - 10:09
    can't breed humans to make more cancer as
    a control in your cancer experiment. So
  • 10:09 - 10:12
    with anything that involves science in
    humans, we have to have very clever
  • 10:12 - 10:15
    experiment design to control for all the
    things that we can't control. The other
  • 10:15 - 10:18
    way to do science on humans, of course, is
    to be a Genetic Lifeform and Disk-
  • 10:18 - 10:24
    Operating System. What this scale tells us
    is how careful we have to be with
  • 10:24 - 10:28
    conclusions from any of these research
    areas. We have to apply a much higher
  • 10:28 - 10:33
    skepticism when looking at single studies
    on human food than when we study how hard
  • 10:33 - 10:37
    a rock is. If I'm interested in stuff on
    the right end of the spectrum, I'd rather
  • 10:37 - 10:41
    see a couple of studies pointing at a
    conclusion. Whereas the further I get to
  • 10:41 - 10:45
    the left hand side, the more I trust
    single studies. That still doesn't mean
  • 10:45 - 10:51
    that there can't be mistakes in particle
    physics, but I hope you get the idea. Back
  • 10:51 - 10:55
    to the scientific method. Because it is
    circular, it is never done, and neither is
  • 10:55 - 10:59
    science. We can always uncover more
    details, look at related things and refine
  • 10:59 - 11:04
    our understanding. There's no field where
    we could ever say: Ok, let's pack up. We
  • 11:04 - 11:09
    know now everything. Good job, everyone -
    the science has been completely done.
  • 11:09 - 11:13
    Everything in science can be potentially
    overturned. Nothing is set in stone.
  • 11:13 - 11:18
    However, and it's a big however, it's not
    likely that this happens for most things.
  • 11:18 - 11:22
    Most things have been shown so often that
    the chance that we will find out that
  • 11:22 - 11:25
    water actually boils at 250 degrees
    centigrade at sea level and normal
  • 11:25 - 11:31
    pressure is close to zero. But if
    researchers were able to show that
  • 11:31 - 11:35
    strange behavior of water, it is in the
    nature of science to include that result
  • 11:35 - 11:40
    in our understanding. Even if that breaks
    some other ideas that we have about the
  • 11:40 - 11:45
    world. That is what sets science apart
    from dogma. New evidence is not frowned
  • 11:45 - 11:49
    upon and rejected, but welcomed and
    integrated into our current understanding
  • 11:49 - 11:55
    of the world. Enough about the scientific
    system. Let's talk about scientists. You
  • 11:55 - 11:59
    might be surprised to hear, but most
    researchers are actually people. Other
  • 11:59 - 12:03
    people who are not researchers tend to
    forget that, especially when they talk
  • 12:03 - 12:07
    about the science that the researchers do.
    That goes both ways. There are some that
  • 12:07 - 12:11
    believe in the absolute objective truth of
    science, ignoring all influence
  • 12:11 - 12:16
    researchers have on the data. And there
    are others, who say that science is lying
  • 12:16 - 12:21
    about things like vaccinations, climate
    change or infectious diseases. Both groups
  • 12:21 - 12:26
    are wrong. Researchers are not infallible
    demigods that eat nature and poop wisdom.
  • 12:26 - 12:31
    They're also not conspiring to bring harm
    to society in search for personal gain.
  • 12:31 - 12:35
    Trust me. I know people who work in
    pesticide research; they're as miserable
  • 12:35 - 12:40
    as any other researcher. Researchers are
    people. And so they have thoughts and
  • 12:40 - 12:45
    ideas and wishes and biases and faults and
    good intentions. Most people don't want to
  • 12:45 - 12:50
    do bad things and inflict harm on others
    and neither do researchers. They aim to do good
  • 12:50 - 12:56
    things and make lives of people better.
    The problem with researchers being people
  • 12:56 - 13:00
    is that they are also flawed. We all have
    cognitive biases that shape the way we
  • 13:00 - 13:04
    perceive and think about the world. And in
    science, there's a whole list of biases
  • 13:04 - 13:09
    that affect the way we gather data and
    draw conclusions from it. Luckily, there
  • 13:09 - 13:14
    are ways to deal with most biases. We have
    to be aware of them, address them and
  • 13:14 - 13:21
    change our behavior to avoid them. What we
    can't do is deny their impact on research.
  • 13:21 - 13:25
    Another issue is diversity. Whenever you
    put a group of similar people together,
  • 13:25 - 13:29
    they will only come up with ideas that fit
    within their group. That's why it is a
  • 13:29 - 13:34
    problem when only white men dominate
    research leadership positions. "Hold on,"
  • 13:34 - 13:39
    some of you might shout, "these men are
    men of science. They are objective. They
  • 13:39 - 13:44
    use the scientific method. We don't need
    diversity. We need smart people." To which
  • 13:44 - 13:50
    I answer: ugghhh. Here is a story for
    you. For more than 150 years, researchers
  • 13:50 - 13:54
    believed that only male birds sing.
    It fits the simple idea that male birds do
  • 13:54 - 13:59
    all the mating rituals and stuff, so they
    must be the singers. Just like in humans,
  • 13:59 - 14:03
    female birds were believed to just sit and
    listen while the men shout at each other.
  • 14:03 - 14:08
    In the last 20 years, this idea was
    debunked. New research found that
  • 14:08 - 14:14
    female birds also sing. So how did we miss that
    for so long? Another study on the studies
  • 14:14 - 14:17
    found that during these 20 years that
    overturned the dogma of male singing
  • 14:17 - 14:23
    birds, the researchers changed. Suddenly,
    more women took part in research and
  • 14:23 - 14:27
    research happened in more parts of the
    world. Previously, mostly men in the U.S.,
  • 14:27 - 14:32
    Canada, England and Germany were studying
    singing birds in their countries. As a
  • 14:32 - 14:36
    result, they subconsciously introduced
    their own biases and ideas into the work.
  • 14:36 - 14:41
    And so we believed for a long time that
    female birds keep their beaks shut. Only
  • 14:41 - 14:46
    when the group of researchers diversified,
    did we get new and better results. The male
  • 14:46 - 14:50
    researchers didn't ignore the female
    songbirds out of bad faith. The men were
  • 14:50 - 14:54
    shaped by their environment but they
    didn't want to do bad things. They just
  • 14:54 - 14:57
    happened to overlook something that someone
    with a different background would pick up
  • 14:57 - 15:02
    on. What does this tell us about science?
    It tells us that science is influenced
  • 15:02 - 15:06
    consciously or subconsciously by internal
    biases. When we talk about scientific
  • 15:06 - 15:11
    results we need to take that into account,
    especially in studies regarding human
  • 15:11 - 15:15
    behavior. We have to be very careful about
    experiment design, framing and
  • 15:15 - 15:19
    interpretation of results. If you read
    about science that makes bold claims about
  • 15:19 - 15:23
    the way we should work, interact or
    communicate in society, that science is
  • 15:23 - 15:27
    prone to be shaped by bias and you should
    be very careful when drawing conclusions
  • 15:27 - 15:31
    from it. I personally would rather wait
    for several studies pointing in a similar
  • 15:31 - 15:36
    direction before I draw major conclusions.
    I linked to a story about a publication
  • 15:36 - 15:40
    about the influence of female mentors on
    career success and it was criticized for a
  • 15:40 - 15:47
    couple of these biases. If we want to
    understand science better, we also have to
  • 15:47 - 15:51
    look at how someone becomes a scientist
    and I mean that in the sense of a professional
  • 15:51 - 15:55
    career. Technically, everybody is a
    scientist as soon as they test a
  • 15:55 - 15:59
    hypothesis, observe the outcome and
    repeat. But unfortunately, most of us are
  • 15:59 - 16:03
    not paid for the tiny experiments during
    our day to day life. If you want to become
  • 16:03 - 16:08
    a scientist, you usually start by entering
    academia. Academia is the world of
  • 16:08 - 16:12
    universities, colleges and research
    institutes. There is a lot of science done
  • 16:12 - 16:17
    outside of academia, like in research and
    development in industry or by individuals
  • 16:17 - 16:21
    taking part in DIY science. As these
    groups rarely enter the spotlight of
  • 16:21 - 16:27
    public attention, I will ignore them
    today. Sorry. So this is a typical STEM
  • 16:27 - 16:31
    career path. You begin as a Bachelor's or
    Master's student. You work for something
  • 16:31 - 16:36
    between three months and a year and then
    woohoo, you get a degree. From here you
  • 16:36 - 16:40
    can leave, go into the industry, be a
    scientific researcher at a University or
  • 16:40 - 16:45
    you continue your education. If you
    continue, you're most likely to do a PhD.
  • 16:45 - 16:48
    But before you can select one of the
    exciting options on a form when you order
  • 16:48 - 16:52
    your food, you have to do research. For
    three to six years, depending on where you
  • 16:52 - 16:57
    do your PhD, you work on a project and
    most likely will not have a great time.
  • 16:57 - 17:01
    You finish with your degree and some
    publications. A lot of people leave now
  • 17:01 - 17:06
    but if you stay in research, you'll become
    a postdoc. The word postdoc comes from the
  • 17:06 - 17:10
    word "doc" as in doctorate and "post" as
    in you have to post a lot of application
  • 17:10 - 17:18
    letters to get a job. Postdocs do more
    research, often on broader topics. They
  • 17:18 - 17:22
    supervise PhD students and are usually
    pretty knowledgeable about their research
  • 17:22 - 17:26
    field. They work and write papers until
    one of two things happens. The German
  • 17:26 - 17:30
    Wissenschaftszeitvertragsgesetz bites them
    in the butt and they get no more contract
  • 17:30 - 17:35
    or they move on to become a group leader
    or professor. Being a professor is great.
  • 17:35 - 17:38
    You have a permanent research position,
    you get to supervise and you get to talk
  • 17:38 - 17:42
    to many cool other researchers. You
    probably know a lot by now, not only about
  • 17:42 - 17:47
    your field but also many other fields in
    your part of science as you constantly go
  • 17:47 - 17:51
    to conferences because they have good food
    and also people are talking about science.
  • 17:51 - 17:56
    Downside is, you're probably not doing any
    experiments yourself anymore. You have
  • 17:56 - 18:01
    postdocs and PhD students who do that for
    you. If you want to go into science,
  • 18:01 - 18:05
    please have a look at this. What looks
    like terrible city planning is actually
  • 18:05 - 18:09
    terrible career planning as less than one
    percent of PhDs will ever reach the level
  • 18:09 - 18:14
    of professor, also known as the only
    stable job in science. That's also what
  • 18:14 - 18:20
    happened to me, I left academia after my
    PhD. So what do we learn from all of this?
  • 18:20 - 18:23
    Different stages of a research career
    correlate with different levels of
  • 18:23 - 18:27
    expertise. If you read statements from a
    Master's student or professor, you can get
  • 18:27 - 18:31
    an estimate for how much they know about
    their field and in turn for how solid
  • 18:31 - 18:35
    their science is. Of course, this is just
    a rule of thumb - I have met both very
  • 18:35 - 18:38
    knowledgeable Master's students and
    professors who knew nothing apart from
  • 18:38 - 18:44
    their own small work. So whenever you read
    statements from researchers independent of
  • 18:44 - 18:48
    their career stage, you should also wonder
    whether they represent the scientific
  • 18:48 - 18:52
    consensus. Any individual scientist might
    have a particular hot take about something
  • 18:52 - 18:57
    they care about but in general, they agree
    with their colleagues. When reading about
  • 18:57 - 19:01
    science that relates to policies or public
    debates, it is a good idea to explore
  • 19:01 - 19:05
    whether this particular researcher is
    representing their own opinion or the one
  • 19:05 - 19:09
    of their peers. Don't ask the researcher
    directly though, every single one of them
  • 19:09 - 19:17
    will say that, of course, they represent
    the majority opinion. The difference
  • 19:17 - 19:21
    between science and screwing around is
    writing it down, as Adam Savage once said.
  • 19:21 - 19:25
    Science without publications is pretty
    useless because if you keep all that
  • 19:25 - 19:29
    knowledge to yourself, well, congrats, you
    are very smart now but that doesn't really
  • 19:29 - 19:34
    help anyone but you. Any researcher's
    goal, therefore, is to get their findings
  • 19:34 - 19:38
    publicly known so that others can extend
    the work and create scientific progress.
  • 19:38 - 19:43
    So let's go back to my amazing bee
    research. I did the whole experiment again
  • 19:43 - 19:47
    with proper controls this time and now I
    want to tell people about it. The simplest
  • 19:47 - 19:52
    way to publish my findings would be to
    tweet about it. But then a random guy
  • 19:52 - 19:56
    would probably tell me that I'm wrong and
    stupid and should go f*** myself. So
  • 19:56 - 20:01
    instead I do what most researchers would
    do and go to a scientific conference.
  • 20:01 - 20:04
    That's where researchers hang out, have a
    lot of coffee and sit and listen to talks
  • 20:04 - 20:08
    from other researchers. Conferences are
    usually the first place that new
  • 20:08 - 20:13
    information becomes public. Well, public
    is a bit of a stretch, usually the talks
  • 20:13 - 20:18
    are not really recorded or made accessible
    to anyone who wasn't there at the time.
  • 20:18 - 20:21
    So while the information is pretty
    trustworthy, it remains fairly
  • 20:21 - 20:25
    inaccessible to others. After my
    conference talk, the next step is to write
  • 20:25 - 20:30
    up all the details of my experiment and
    the results in a scientific paper. Before
  • 20:30 - 20:34
    I send this to an editor at a scientific
    journal, I could publish it myself as a
  • 20:34 - 20:39
    pre-print. These pre-prints are drafts of
    finished papers that are available to read
  • 20:39 - 20:43
    for anyone. They are great because they
    provide easy access to information that is
  • 20:43 - 20:47
    otherwise often behind paywalls. They are
    not so great because they have not yet
  • 20:47 - 20:52
    been peer reviewed. If a pre-print hasn't
    also been published with peer review, you
  • 20:52 - 20:56
    have to be careful with what you read as
    it is essentially only the point of view
  • 20:56 - 21:01
    of the authors. Peer review only happens
    when you submit your paper to a journal.
  • 21:01 - 21:05
    Journals are a whole thing and there have
    been some great talks in the past about
  • 21:05 - 21:09
    why many of them are problematic. Let's
    ignore for a second how these massive
  • 21:09 - 21:12
    enterprises collect money from everyone
    they get in contact with and let's focus
  • 21:12 - 21:17
    instead on what they're doing for the
    academic system. I send them my paper, an
  • 21:17 - 21:22
    editor sees if it's any good and then
    sends my paper to two to three reviewers.
  • 21:22 - 21:25
    These are other researchers that then
    critically check everything I did and
  • 21:25 - 21:30
    eventually recommend accepting or
    rejecting my paper. If it is accepted, the
  • 21:30 - 21:35
    paper will be published. I pay a fee and
    the paper will be available online. Often
  • 21:35 - 21:40
    behind a paywall, unless I pay some more
    cash. At this point, I'd like to have a
  • 21:40 - 21:44
    look at how a scientific paper works.
    There are five important parts to any
  • 21:44 - 21:50
    paper: the title, the author list, the
    abstract, the figures and the text. The
  • 21:50 - 21:53
    title is a summary of the main findings
    and unlike in popular media, it is much
  • 21:53 - 21:57
    more descriptive. Where a newspaper leaves
    out the most important information to get
  • 21:57 - 22:01
    people to read the article, the
    information is right there in the title of
  • 22:01 - 22:07
    the study. In my case that could be
    "Honeybees -Apis mellifera- show selective
  • 22:07 - 22:11
    preference for flower color in Viola
    tricolor". You see, everything is right
  • 22:11 - 22:16
    there. The organisms I worked with and the
    main result I found. Below the title
  • 22:16 - 22:20
    stands the author list. As you might have
    guessed, the author list is a list of
  • 22:20 - 22:23
    authors. Depending on the field the paper
    is from, the list can be ordered
  • 22:23 - 22:28
    alphabetically or according to relative
    contribution. If it is by contribution, then
  • 22:28 - 22:32
    you usually find the first author to have
    done all the work, the middle authors to
  • 22:32 - 22:35
    have contributed some smaller parts and
    the last author to have paid for the whole
  • 22:35 - 22:40
    thing. The last author is usually a group
    leader or professor. A good way to learn
  • 22:40 - 22:45
    more about a research group and their work
    is to search for the last author's name. The
  • 22:45 - 22:49
    abstract is a summary of the findings.
    Read this to get a general idea of what
  • 22:49 - 22:53
    the researchers did and what they found.
    It is very dense in information but it is
  • 22:53 - 22:56
    usually written in a way that also
    researchers from other fields can
  • 22:56 - 23:02
    understand at least some of it. The
    figures are pretty to look at and hold the
  • 23:02 - 23:07
    key findings in most papers and the text
    has the full story with all the details,
  • 23:07 - 23:12
    the jargon and all the references that
    the research is built on. You probably
  • 23:12 - 23:16
    won't read the text unless you care a lot,
    so stick to title, abstract and authors to
  • 23:16 - 23:21
    get a quick understanding of what's going
    on. Scientific papers reflect the peer-
  • 23:21 - 23:26
    reviewed opinion of one or a few research
    groups. If you are interested in a broader
  • 23:26 - 23:31
    topic like which insects like to pollinate
    which flowers, you should read review
  • 23:31 - 23:35
    papers. These are peer reviewed summaries
    of a much broader scope, often weighing
  • 23:35 - 23:40
    multiple points of view against each
    other. Review papers are a great resource
  • 23:40 - 23:44
    that avoids some of the biases individual
    research groups might have about their
  • 23:44 - 23:49
    topic. So my research is reviewed and
    published. I can go back now and start
  • 23:49 - 23:52
    counting butterflies, but this is not
    where the publishing of scientific results
  • 23:52 - 23:57
    ends. My institute might think that my bee
    counting is not even bad, it is actually
  • 23:57 - 24:01
    amazing and so they will issue a press
    release. Press releases often emphasize
  • 24:01 - 24:05
    the positive parts of a study while
    putting them into context of something
  • 24:05 - 24:09
    that's relevant to most people. Something
    like "bees remain attracted to yellow
  • 24:09 - 24:13
    flowers despite the climate crisis". The
    facts in a press release are usually
  • 24:13 - 24:17
    correct but shortcomings of a study that I
    mentioned in the paper are often missing
  • 24:17 - 24:23
    from the press release. Because my bee
    study is really cool and because the PR
  • 24:23 - 24:28
    department of my institute did a great
    job, journalists pick up on the story. The
  • 24:28 - 24:32
    first ones are often magazines with a focus
    on science like Scientific American or
  • 24:32 - 24:36
    Spektrum der Wissenschaft. Most of the
    time, science journalists do a great job
  • 24:36 - 24:40
    in finding more sources and putting the
    results into context. They often ask other
  • 24:40 - 24:44
    experts for their opinion and they break
    down the scientific language into simpler
  • 24:44 - 24:48
    words. Science journalism is the source I
    recommend to most people when they want to
  • 24:48 - 24:52
    learn about a field that they are not
    experts in. Because my bee story is
  • 24:52 - 24:57
    freaking good, mainstream journalists are
    also reporting on it. They are often
  • 24:57 - 25:00
    pressed for time and write for a much
    broader audience, so they just report the
  • 25:00 - 25:05
    basic findings, often putting even more
    emphasis on why people should care.
  • 25:05 - 25:11
    Usually climate change, personal health or
    now Covid. Mainstream press coverage is
  • 25:11 - 25:15
    rarely as detailed as the previous
    reporting and has the strongest tendency
  • 25:15 - 25:20
    to accidentally misrepresent facts or add
    framing that researchers wouldn't use. Oh,
  • 25:20 - 25:23
    and then there is the weird uncle who
    posts a link to the article on their
  • 25:23 - 25:26
    Facebook with a blurb of text that says
    the opposite of what the study actually
  • 25:26 - 25:32
    did. As you might imagine, the process of
    getting scientific information out to the
  • 25:32 - 25:36
    public quickly becomes a game of
    telephone. What is clearly written in the
  • 25:36 - 25:39
    paper is framed positively in a press
    release and gets watered down even more
  • 25:39 - 25:44
    once it reaches mainstream press. So for
    you, as someone who wants to understand
  • 25:44 - 25:48
    the science, it is a good idea to be more
    careful the further you get away from the
  • 25:48 - 25:53
    original source material. While dedicated
    science journalism usually does a good
  • 25:53 - 25:57
    job in breaking down the facts without
    distortion, the same can't be said for
  • 25:57 - 26:01
    popular media. If you come across an
    interesting story, try to find another
  • 26:01 - 26:06
    version of it in a different outlet,
    preferably one that is more catered to an
  • 26:06 - 26:09
    audience with scientific interest. Of
    course, you can jump straight to the
  • 26:09 - 26:13
    original paper but understanding the
    scientific jargon can be hard and
  • 26:13 - 26:18
    misunderstanding the message is easy, so
    it can do more harm than good. We see that
  • 26:18 - 26:24
    harm now with hobbyist epidemi...,
    epidemio..., epidemiolo... - people who do not
  • 26:24 - 26:28
    study epidemics - who are
    making up their own pandemic modeling.
  • 26:28 - 26:32
    They are cherry picking bits of
    information from scientific papers without
  • 26:32 - 26:35
    understanding the bigger picture and
    context and then post their own charts on
  • 26:35 - 26:40
    Twitter. It's cool if you want to play
    with data in your free time, and it's a
  • 26:40 - 26:45
    fun way to learn more about a topic but it
    can also be very misleading and harmful
  • 26:45 - 26:48
    while dealing with a pandemic if expert
    studies have to fight for attention with
  • 26:48 - 26:53
    nonexperts' Excel graphs. It pays off to
    think twice about whether you're actually
  • 26:53 - 26:59
    helping by publishing your own take on a
    scientific question. Before we end, I want
  • 26:59 - 27:04
    to give you some practical advice on how
    to assess the credibility of a story and
  • 27:04 - 27:08
    how to understand the science better. This
    is not an in-depth guide to fact checking.
  • 27:08 - 27:13
    I want you to get a sort of gut feeling
    about science. When I read scientific
  • 27:13 - 27:18
    information, these are the questions that
    come to my mind. First up, I want you to ask
  • 27:18 - 27:23
    yourself: is this plausible, and does this
    follow the scientific consensus? If both
  • 27:23 - 27:29
    answers are "no" then you should carefully
    check the sources. More often than not,
  • 27:29 - 27:33
    these results are outliers that somebody
    exaggerated to get news coverage or
  • 27:33 - 27:38
    someone is actively reframing scientific
    information for their own goals. To get a
  • 27:38 - 27:41
    feeling about scientific consensus on
    things, it is a good idea to look for
  • 27:41 - 27:45
    joint statements from research
    communities. Whenever an issue that is
  • 27:45 - 27:50
    linked to current research comes up for
    public debate, there is usually a joint
  • 27:50 - 27:54
    statement laying down the scientific
    opinion signed by dozens or even hundreds
  • 27:54 - 27:59
    of researchers, like, for example, from
    Scientists for Future. And then whenever
  • 27:59 - 28:04
    you see a big number, you should look for
    context. When you read statements like "We
  • 28:04 - 28:09
    grow sugar beet on an area of over 400,000
    hectare", you should immediately ask
  • 28:09 - 28:15
    yourself "Who is we? Is it Germany,
    Europe, the world? What is the time frame?
  • 28:15 - 28:21
    Is that per year? Is that a lot? How much
    is that compared to other crops?". Context
  • 28:21 - 28:27
    matters a lot and often big numbers are
    used to impress you. In this case, 400,000
  • 28:27 - 28:32
    hectares is the yearly area that Germany
    grows sugar beet on. Wheat, for example,
  • 28:32 - 28:38
    is grown on over 3 million hectares per
    year in Germany. Context matters, and so
  • 28:38 - 28:42
    whenever you see a number, look for a
    frame of reference. If the article doesn't
  • 28:42 - 28:46
    give you one, either, go and look for
    yourself or ignore the number for your
  • 28:46 - 28:50
    decision making based on the article.
    Numbers only work with framing, so be
  • 28:50 - 28:55
    aware of it. I want you to think briefly
    about how you felt when I gave you that
  • 28:55 - 29:00
    number of 400,000 hectare. Chances are
    that you felt a sort of feeling of unease
  • 29:00 - 29:05
    because it's really hard to imagine such a
    large number. An interesting exercise is
  • 29:05 - 29:10
    to create your own frame of reference.
    Collect a couple of numbers like total
  • 29:10 - 29:14
    agricultural area of your country, the
    current spending budget of your
  • 29:14 - 29:18
    municipality, the average yearly income,
    or the unemployment rate in relative and
  • 29:18 - 29:22
    absolute numbers. Keep the list somewhere
    accessible and use it whenever you come
  • 29:22 - 29:27
    across a big number that is hard to grasp.
    Are 100,000€ a lot of money in context of
  • 29:27 - 29:32
    public spending? How important are 5,000
    jobs in context of population and
  • 29:32 - 29:36
    unemployment? Such a list can defuse the
    occasional scary big number in news
  • 29:36 - 29:42
    articles, and it can also help you to make
    your point better.
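If you like, that frame of reference can even live in a tiny script. A minimal sketch - the reference values below are rough figures I filled in for illustration, so collect your own from official statistics:

```python
# Sketch of a personal frame of reference for big numbers.
# The reference values are rough, illustrative placeholders.
REFERENCES = {
    "total German agricultural area (hectares)": 16_600_000,
    "German wheat area per year (hectares)": 3_000_000,
}

def put_in_context(value: int) -> None:
    """Print a number as a share of each reference value."""
    for name, ref in REFERENCES.items():
        print(f"{value:,} is {100 * value / ref:.1f}% of the {name}")

put_in_context(400_000)  # the sugar beet area from the example above
```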
  • 29:42 - 29:46
    Speaking of framing, always be aware of who the sender of the
    information is. News outlets rarely have a
  • 29:46 - 29:52
    specific scientific agenda, but NGOs do.
    If Shell, the oil company, provided a
  • 29:52 - 29:56
    leaflet where they cite scary numbers and
    present research that they funded that
  • 29:56 - 30:00
    finds that oil drilling is actually good
    for the environment but they won't
  • 30:00 - 30:04
    disclose who they worked with for the
    study, we would all laugh at that
  • 30:04 - 30:08
    information. But if we read a leaflet from
    an environmental NGO in Munich that is
  • 30:08 - 30:11
    structurally identical but with a
    narrative about glyphosate in beer that
  • 30:11 - 30:15
    fits our own perception of the world, we
    are more likely to accept the information
  • 30:15 - 30:19
    in the leaflet. In my opinion, both
    sources are problematic and I would not
  • 30:19 - 30:25
    use any of them to build my own opinion.
    Good journalists put links to the sources
  • 30:25 - 30:30
    in or under the article, and it is a good
    idea to check them. Often, however, you
  • 30:30 - 30:35
    have to look for the paper yourself based
    on hints in the text like author names,
  • 30:35 - 30:40
    institutions, and general topics. And then
    paywalls often block access to the
  • 30:40 - 30:44
    information that you're looking for. You
    can try pages like ResearchGate for legal
  • 30:44 - 30:50
    access to PDFs. Many researchers also use
    sci-hub but as the site provides illegal
  • 30:50 - 30:55
    access to publicly funded research, I
    won't recommend doing so. When you have
  • 30:55 - 30:59
    the paper in front of you, you can either
    read it completely, which is kind of hard,
  • 30:59 - 31:04
    or just read the abstract, which might be
    easier. The easiest is to look for science
  • 31:04 - 31:09
    journalism articles about the paper.
    Twitter is actually great to find those,
  • 31:09 - 31:12
    as many researchers are on Twitter and
    like to share articles about their own
  • 31:12 - 31:16
    research. They also like to discuss
    research on Twitter. So if the story is
  • 31:16 - 31:20
    controversial, chances are you'll find
    some science accounts calling that out.
  • 31:20 - 31:25
    While Twitter is terrible in many regards,
    it is a great tool to engage with the
  • 31:25 - 31:30
    scientific community. You can also do a
    basic check-up yourself. Where was the
  • 31:30 - 31:34
    paper published and is it a known journal?
    Who are the people doing the research and
  • 31:34 - 31:39
    what are their affiliations? How did they
    do their experiment? Checking for controls
  • 31:39 - 31:43
    and repetitions in the experiment is hard
    if you don't know the topic, but if you do
  • 31:43 - 31:50
    know the topic, go for it. In the end,
    fact checking takes time and energy. It's
  • 31:50 - 31:53
    very likely that you won't do it very
    often but especially when something comes
  • 31:53 - 31:57
    up that really interests you and you want
    to tell people about it, you should do a
  • 31:57 - 32:02
    basic fact-check on the science. The world
    would be a lot better if you'd only share
  • 32:02 - 32:07
    information that you checked yourself for
    plausibility. You can also help to reduce
  • 32:07 - 32:11
    the need for rigorous fact checking.
    Simply do not spread any insane stories that
  • 32:11 - 32:15
    seem too good to be true and that you
    didn't check yourself or find in a
  • 32:15 - 32:19
    credible source. Misinformation and bad
    science reporting spread because we don't
  • 32:19 - 32:24
    care enough and because they are very,
    very attractive. If we break that pattern,
  • 32:24 - 32:27
    we can give reliable scientific
    information the attention that it
  • 32:27 - 32:31
    deserves. But don't worry, most of the
    science reporting you'll find online is
  • 32:31 - 32:35
    actually pretty good. There is no need to
    be extremely careful with every article
  • 32:35 - 32:40
    you find. Still, I think it is better to
    have a natural alertness to badly reported
  • 32:40 - 32:45
    science than to trust just anything that is
    posted under a catchy headline. There is
  • 32:45 - 32:50
    no harm in double checking the facts
    because either you correct a mistake or
  • 32:50 - 32:56
    you reinforce correct information in your
    mind. So how do I assess whether a source
  • 32:56 - 33:01
    that I like is actually good? When I come
    across a new outlet, I try to find some
  • 33:01 - 33:06
    articles in an area that I know stuff
    about. For me, that's plant science. I
  • 33:06 - 33:09
    then read what they are writing about
    plants. If that sounds plausible, I am
  • 33:09 - 33:12
    tempted to also trust when they write
    about things like physics or climate
  • 33:12 - 33:18
    change, where I have much less expertise.
    This way I have my own personal list of
  • 33:18 - 33:23
    good and not so good outlets. If somebody
    on Twitter links to an article from the
  • 33:23 - 33:26
    not so good list, I know that I have to
    take that information with a large
  • 33:26 - 33:30
    quantity of salt. And if I want to learn
    more, I look for a different source to
  • 33:30 - 33:38
    back up any claims I find. It is tedious
    but so is science. With a bit of practice,
  • 33:38 - 33:41
    you can internalize the skepticism and
    navigate science information with much
  • 33:41 - 33:47
    more confidence. I hope I could help you
    with that a little bit. So that was my
  • 33:47 - 33:51
    attempt to help you to understand science
    better. I'd be glad if you'd leave me
  • 33:51 - 33:55
    feedback or direct any of your questions
    towards me on Twitter. That's
  • 33:55 - 33:59
    @sciencejoram. There will be sources for
    the things I talked about available
  • 33:59 - 34:04
    somewhere around this video or on my
    website: joram.schwartzmann.de. Thank you
  • 34:04 - 34:11
    for your attention. Goodbye.
  • 34:11 - 34:15
    Herald: derJoram, thank you for your talk,
    very entertaining and informative as well,
  • 34:15 - 34:23
    if I might say. We have a few questions
    from here at the Congress that would be...
  • 34:23 - 34:27
    where's the signal? I need my questions
    from the internet - all of them are from
  • 34:27 - 34:29
    the Internet.
    Joram: laughs
  • 34:29 - 34:38
    H: So I would go through the questions and
    you can elaborate on some of the points
  • 34:38 - 34:42
    from your talk. So the first question...
    J: yeah, I will.
  • 34:42 - 34:48
    H: very good. The first question is: Is
    there a difference between reviewed
  • 34:48 - 34:56
    articles and meta studies?
    J: To my knowledge, there isn't really a
  • 34:56 - 35:00
    categorical difference in terms of peer
    review. Meta studies, so studies that
  • 35:00 - 35:05
    integrate, especially in the medical field
    you find that often, they integrate a lot
  • 35:05 - 35:10
    of studies and then summarize the findings
    again and try to put them in context of
  • 35:10 - 35:19
    one another, which are incredibly useful
    studies for drawing medical conclusions.
  • 35:19 - 35:24
    Because as I said in the talk, it's often
    very hard to do, for example, dietary
  • 35:24 - 35:29
    studies and you want to have large numbers
    and you get that by combining several
  • 35:29 - 35:34
    studies together. And usually these meta
    studies are also peer reviewed. So instead
  • 35:34 - 35:39
    of actually doing the research and going
    and doing whatever experiments you want to
  • 35:39 - 35:46
    do on humans, you instead collect all of
    the evidence others state, and then you
  • 35:46 - 35:49
    integrate it again, draw new conclusions
    from that and compare them and weigh them
  • 35:49 - 35:55
    and say "OK, this study had these
    shortcomings but we can take this part
  • 35:55 - 36:00
    from this study and put it in context with
    this part from his other study" because
  • 36:00 - 36:05
    you do so much additional conclusion
    making on top of that, you then submit it again
  • 36:05 - 36:09
    to a journal and it's again peer reviewed
    and then other researchers look at it and
  • 36:09 - 36:13
    say, and yeah, pretty much give their
    expertise on it and say whether or not it
  • 36:13 - 36:17
    made sense what you concluded from all of
    these things. So a meta study, when it's
  • 36:17 - 36:22
    published in a scientific journal, is also
    peer reviewed and also a very good,
  • 36:22 - 36:26
    credible source. And I would even say
    often meta studies are the studies that
  • 36:26 - 36:31
    you really want to look for if you have a
    very specific scientific question that you
  • 36:31 - 36:37
    as a sort of non-expert, want to have
    answered because very often the individual
  • 36:37 - 36:41
    studies, they are very focused on a
    specific detail of a bigger research
  • 36:41 - 36:45
    question. But if you want to know: is, I
    don't know, dietary fiber good for
  • 36:45 - 36:49
    me? There's probably not a single study
    that will have the answer but there will
  • 36:49 - 36:54
    be many studies that together point
    towards the answer. And the meta study is
  • 36:54 - 36:59
    a place where you can find that answer.
    H: Very good, sounds like something to
  • 36:59 - 37:06
    reinforce the research. Maybe a follow-up
    question or it is a follow-up question: Is
  • 37:06 - 37:12
    there anything you can say in this regards
    about the reproducibility crisis in many
  • 37:12 - 37:17
    fields such as medicine?
    J: Yeah, that's a very good point. I mean,
  • 37:17 - 37:21
    that's something that I didn't mention at
    all in the talk, pretty much for
  • 37:21 - 37:26
    complexity reasons, because when you
    go into reproducibility, you run into all
  • 37:26 - 37:34
    kinds of, sort of complex additional
    problems because it is true that we often
  • 37:34 - 37:40
    struggle with reproducing. I actually
    don't have the numbers on how often we fail
  • 37:40 - 37:45
    but this reproducibility crisis that's
    often mentioned - that is this idea that
  • 37:45 - 37:50
    when researchers take a paper on
    whatever somebody studied and then other
  • 37:50 - 37:54
    researchers try to recreate the study and
    usually in a paper, there's also a
  • 37:54 - 37:58
    'Material & Method' section that details
    all of the things that they did. It's
  • 37:58 - 38:02
    pretty much the instructions of the
    experiment. And the results of the
  • 38:02 - 38:04
    experiment are both in the same paper
    usually - and when they try to sort of
  • 38:04 - 38:10
    recook the recipe that somebody else did,
    there is a chance that they don't find the
  • 38:10 - 38:13
    same thing. And we see that more and more
    often, especially with like complex
  • 38:13 - 38:18
    research questions. And that brings us to
    the idea that reproduction or
  • 38:18 - 38:24
    reproducibility is an issue and that maybe
    we can't trust science as much or we
  • 38:24 - 38:31
    have to be more careful. It is true that
    we have to be more careful. But I wouldn't
  • 38:31 - 38:36
    go as far as to be, in general, sort
    of distrustful of research. And that's
  • 38:36 - 38:39
    why I'm also saying, like in the medical
    field, you always want to have multiple
  • 38:39 - 38:44
    studies pointing at something. You always
    want to have multiple lines of evidence
  • 38:44 - 38:50
    because if one group finds something and
    another group can't find it, like
  • 38:50 - 38:57
    reproduce it, you end up in a place where
    you can't really say "Did this work now?
  • 38:57 - 39:00
    Like, who made the mistake? The first group
    or the second group?" Because also when
  • 39:00 - 39:03
    you're reproducing a study, you can make
    mistakes or there can be factors that the
  • 39:03 - 39:08
    initial research study didn't document in
    a way that it can be reproduced because
  • 39:08 - 39:13
    they didn't care to write down the supply
    of some chemicals, and the chemicals were
  • 39:13 - 39:17
    very important for the success of the
    experiment. Things like that happen and so
  • 39:17 - 39:21
    you don't know when you just have the
    initial study or the reproduction study and
  • 39:21 - 39:25
    they have a different outcome. But if you
    have then multiple studies that all look
  • 39:25 - 39:32
    in a similar area and out of 10 studies, 8
    or 7 point in a certain direction, you
  • 39:32 - 39:37
    can then be more certain that this
    direction points towards the truth. In
  • 39:37 - 39:42
    science, it's really hard to say, like
    OK, this is now the objective truth. This
  • 39:42 - 39:47
    is now... we found the definitive
    answer to the question that we're looking
  • 39:47 - 39:54
    at, especially in the medical field. So,
    yeah... So that's a very long way of saying
  • 39:54 - 39:59
    it's complicated. Reproduction or
    reproducibility studies are very
  • 39:59 - 40:07
    important but I wouldn't be too worried or
    too - what's the word here? Like, I
  • 40:07 - 40:12
    wouldn't be too worried that the lack of
    reproducibility breaks the entire
  • 40:12 - 40:18
    scientific method, because there are usually
    more complex and more issues at hand than
  • 40:18 - 40:22
    just a simple recooking of another
    person's study.
  • 40:22 - 40:32
    H: Yes, speaking of more publishing, so
    this is a follow-up to the follow-up, the
  • 40:32 - 40:35
    Internet asks, how can we deal with the
    publish or perish culture?
  • 40:35 - 40:42
    J: Oh, yeah. If I knew that, I would write
    a very smart blog post and try to
  • 40:42 - 40:46
    convince people of that. I think
    personally we need to rethink the way we
  • 40:46 - 40:50
    do the funding because that's in the end
    what it comes down to. Another issue I
  • 40:50 - 40:54
    really didn't go into much detail in the
    talk because it's also very complex. So
  • 40:54 - 41:00
    science funding is usually defined by a
    decision making process; at one point
  • 41:00 - 41:05
    somebody decides who gets the money, and
    to get the money they need a qualifier to
  • 41:05 - 41:09
    decide. Like, there are 10 research groups
    or 100 research groups that write a
  • 41:09 - 41:13
    grant and say like "Hey, we need money
    because we want to do research." And they
  • 41:13 - 41:19
    have to figure out or they have to decide
    who gets it, because they can't give money
  • 41:19 - 41:24
    to everyone because we spend money in our
    budgets on different things than just
  • 41:24 - 41:30
    science. So the next best thing that they
    came up with was the idea to use papers
  • 41:30 - 41:36
    - the number of papers that you have - to
    sort of get a measurement - or the quality
  • 41:36 - 41:40
    of the papers that you have - to get a
    measurement of whether you are deserving
  • 41:40 - 41:45
    of the money. And you can see how that's
    problematic and means that people who are
  • 41:45 - 41:49
    early in their research career, who don't
    have a lot of papers, they have a lower
  • 41:49 - 41:53
    chance of getting the money. And that
    leads to the publish-or-perish idea that if
  • 41:53 - 41:57
    you don't publish your results, and if you
    don't publish them in a very well-
  • 41:57 - 42:01
    respected journal, then the funding
    agencies won't give you money. And so you
  • 42:01 - 42:08
    perish: you can't really pursue your
    research career. And it's a really hard
  • 42:08 - 42:12
    problem to solve, because the decision
    about the funding is very much detached
  • 42:12 - 42:19
    from the scientific world, from academia.
    There are multiple levels of abstraction
  • 42:19 - 42:24
    between the people who in the end
    make the budgets and decide who gets the
  • 42:24 - 42:30
    money, and the people who are actually
    using the money. I would wish for funding
  • 42:30 - 42:37
    agencies to look less at papers and maybe
    come up with different qualifiers, maybe
  • 42:37 - 42:45
    also something like general scientific
    practice; maybe they could do some sort
  • 42:45 - 42:51
    of audits of labs. I mean, there's a ton
    of factors that influence good research
  • 42:51 - 42:57
    that are not mentioned in papers, like work
    ethics, work culture, or how much teaching you
  • 42:57 - 43:02
    do, which can be very important but is
    sort of detrimental to getting more funding,
  • 43:02 - 43:06
    because when you do teaching, you don't do
    research and then you don't get papers and
  • 43:06 - 43:11
    then you don't get money. So, yeah, I
    don't have a very good solution to the
  • 43:11 - 43:16
    question of what we can do. I would like to
    see more diverse funding also of smaller
  • 43:16 - 43:21
    research groups. I would like to see more
    funding for negative results, which is
  • 43:21 - 43:28
    another thing that we don't really value.
    So if you do an experiment and it doesn't
  • 43:28 - 43:32
    work, you can't publish it, you don't get
    the paper, you don't get money and so on.
  • 43:32 - 43:35
    So there are many factors that need to
    change, many things that we need to touch
  • 43:35 - 43:39
    to actually get away from publish or
    perish.
  • 43:39 - 43:47
    H: Yeah, another question that is closely
    connected to that is: Why are there so few
  • 43:47 - 43:52
    stable jobs in science?
    J: Yeah, that's the
  • 43:52 - 43:56
    Wissenschaftszeitvertragsgesetz,
    something that - I forgot when we got it -
  • 43:56 - 44:04
    I think in the late 90s or early 2000s.
    That's at least a very German-specific
  • 44:04 - 44:14
    answer: this Gesetz, this law, put into
    law that you have a limited
  • 44:14 - 44:19
    time span in which you can work in research;
    you can only work in research for, I think,
  • 44:19 - 44:24
    12 years, and there are some footnotes and
    stuff around it. But there is a fixed time
  • 44:24 - 44:28
    limit on how long you can work in research
    on limited-term contracts. And your funding -
  • 44:28 - 44:31
    whenever you get research funding, it's
    always for a limited time. You always get
  • 44:31 - 44:36
    research funding for three years, six
    years if you're lucky. So you never have
  • 44:36 - 44:41
    permanent money in the research group.
    Sometimes you have that in universities
  • 44:41 - 44:45
    but overall you don't have permanent
    money. And so if you don't have permanent
  • 44:45 - 44:50
    money, you can't have permanent contracts
    and therefore there aren't really stable
  • 44:50 - 44:53
    jobs. With professorships or some
    group leader positions it changes,
  • 44:53 - 44:59
    because group leaders and professorships
    are more easily planned. And
  • 44:59 - 45:02
    therefore in universities and research
    institutes, they sort of make a long term
  • 45:02 - 45:07
    budget and say "OK, we will have 15
    research groups. So we have money in the
  • 45:07 - 45:13
    long term for 15 group leaders." But for
    whoever is hired underneath these group
  • 45:13 - 45:16
    leaders, there is much more fluctuation,
    and it's based on sort of short-term money.
  • 45:16 - 45:21
    And so there are no stable jobs there. At
    least that's how it is in Germany. I know
  • 45:21 - 45:26
    that, for example, in the UK and in
    France, they have permanent positions
  • 45:26 - 45:30
    earlier. In the UK, for example, they have
    lecturers, where, without being a full
  • 45:30 - 45:35
    professor - which comes with its own
    backpack of stuff that has to be done -
  • 45:35 - 45:40
    you can already work at a university
    long term on a permanent contract. So it's
  • 45:40 - 45:45
    a problem we see across the
    world, but Germany has introduced its own
  • 45:45 - 45:50
    very specific problems here that
    make it very unattractive to stay
  • 45:50 - 45:57
    in research long term in Germany.
    H: It's true. I concur.
  • 45:57 - 46:03
    J: Yes.
    H: laughs Coming to talk to the people
  • 46:03 - 46:13
    who do science mostly for fun and less for
    profit. This question is: Can you write
  • 46:13 - 46:18
    and publish a paper without a formal
    degree in the sciences, assuming the
  • 46:18 - 46:24
    research efforts are sufficiently good?
    J: Yes, I think technically it is
  • 46:24 - 46:27
    possible. It comes with some problems:
    first of all, it's not free.
  • 46:27 - 46:34
    When you submit your paper to a
    journal, you pay money for it. I don't
  • 46:34 - 46:40
    know exactly, but it ranges; I think the
    safe assumption is between $1,000 and
  • 46:40 - 46:44
    $5,000, depending on the journal
    you submit to. Then very often there are
  • 46:44 - 46:50
    some formal problems. I've recently been
    co-authoring a paper, and I'm not
  • 46:50 - 46:57
    actively doing research anymore. I did
    something in my spare time, helped a
  • 46:57 - 47:02
    friend of mine who is still doing
    research with some basic stuff, and he
  • 47:02 - 47:07
    was so nice as to put me on the paper. And
    then there is a form that asks for your
  • 47:07 - 47:12
    institute affiliation, and I don't have an
    institute affiliation in that sense. As
  • 47:12 - 47:16
    I'm just a middle author on this paper, I
    was published - or hopefully, if it gets
  • 47:16 - 47:20
    accepted, I will be - as an
    independent researcher. But it might be
  • 47:20 - 47:24
    that a journal has its own internal
    rules where they say they only accept people
  • 47:24 - 47:28
    from institutions. So it's not really
    inherent in the scientific system that you
  • 47:28 - 47:32
    have to be at an institution, but there are
    these doors, these
  • 47:32 - 47:38
    pathways, that are locked because somebody
    has to put into a form somewhere which
  • 47:38 - 47:43
    institution you are affiliated with. And I
    know that some people who do DIY science,
  • 47:43 - 47:49
    so science outside of academia, need
    to have partners in academia who
  • 47:49 - 47:54
    help them with the publishing and also to
    get access to certain things. I mean, in
  • 47:54 - 47:58
    computer science, you don't need specific
    chemicals, but if you do anything like
  • 47:58 - 48:03
    chemical engineering or biology or
    anything, often you only get access to the
  • 48:03 - 48:08
    supplies when you are at an academic
    institution. So, I know that many people
  • 48:08 - 48:13
    have sort of these partnerships,
    cooperations with academia that allow them
  • 48:13 - 48:19
    to actually do the research and then
    publish it as well because otherwise, if
  • 48:19 - 48:24
    you're just doing it from your own
    bedroom, there might be a lot of barriers
  • 48:24 - 48:27
    in your way that can be very hard to
    overcome. But I think if you're really,
  • 48:27 - 48:35
    really dedicated, you can overcome them.
    H: Coming to the elephants in that
  • 48:35 - 48:41
    bedroom: What can we do against the spread
    of false facts about, say, 5G or corona
  • 48:41 - 48:48
    vaccines? They get a
    lot of likes and are spread like a disease
  • 48:48 - 48:56
    themselves. And these arguments are very
    hard to counter, especially in personal
  • 48:56 - 49:02
    encounters, because apparently a lot of
    people are not that familiar with the
  • 49:02 - 49:05
    scientific method. What's your take on
    that?
  • 49:05 - 49:09
    J: Yeah, it's difficult. And over the
    years I've read many different
  • 49:09 - 49:16
    approaches, ranging from not actually
    talking about facts, because often
  • 49:16 - 49:19
    somebody who has a very predefined
    opinion on something knows a lot of
  • 49:19 - 49:23
    false facts that they have on their mind.
    And you, as somebody talking to them,
  • 49:23 - 49:26
    often don't have all of the correct facts
    in your mind. I mean, who runs around
  • 49:26 - 49:32
    with, like, a bag full of climate facts
    and a bag full of 5G facts and a bag full
  • 49:32 - 49:38
    of vaccine facts in the same
    quantity and quality as the stuff that
  • 49:38 - 49:41
    somebody who reads things on Facebook has
    in their backpack, in their sort
  • 49:41 - 49:47
    of mental image of the world? So just
    arguing on the facts, it's very hard
  • 49:47 - 49:53
    because people who follow these false
    ideas are very quick at making turns,
  • 49:53 - 49:56
    and they throw one thing at you
    after the other. And so it's really hard
  • 49:56 - 50:01
    to just go through, debunking
    fact one and then debunking the next wrong
  • 50:01 - 50:08
    fact. So I've seen a paper where people
    try to do this from an argumentative
  • 50:08 - 50:13
    standpoint. They say: "Look: You're
    drawing false conclusions. You say because
  • 50:13 - 50:21
    A, therefore B, but these two things
    aren't linked in a causal way. So you
  • 50:21 - 50:25
    can't actually draw this conclusion." And
    so they sort of try to take apart the
  • 50:25 - 50:32
    argument on a meta level instead of on a
    fact level. But
    also that is difficult. And usually
  • 50:32 - 50:37
    people who are really devout followers of
    false facts are also not followers
  • 50:37 - 50:43
    of reason, so any reason-based argument
    will just not work for them because they
  • 50:43 - 50:52
    will deny it. I think what really helps is
    a lot of small-scale action in terms of
  • 50:52 - 50:57
    making scientific data, making science,
    more accessible. And I mean, I'm a science
  • 50:57 - 51:00
    communicator, so I'm heavily biased. I'm
    saying we need more science
  • 51:00 - 51:05
    communication, we need more low-level
    science communication. We need to have it
  • 51:05 - 51:09
    freely accessible, because all of the
    stuff with the false facts that you read
  • 51:09 - 51:14
    is freely available on Facebook and so
    on. So we need to have a similarly low
  • 51:14 - 51:22
    entry level for the correct
    facts, for the real facts. And this is
  • 51:22 - 51:26
    also hard to do. I mean, in the science
    communication field, there's also a lot of
  • 51:26 - 51:31
    debate about how we do that. Should we do
    that through more presence on social media?
  • 51:31 - 51:38
    Should we simplify more, or are we then
    actually oversimplifying? Where is the balance?
  • 51:38 - 51:44
    How do we walk this line? So there's a lot
    of discussion and still ongoing learning
  • 51:44 - 51:48
    about that. But I think in the end, that's
    what we need: we need people to be
  • 51:48 - 51:57
    able to find correct facts just as
    easily and understandably as they find the
  • 51:57 - 52:05
    fake news and the false facts. We need
    science to be communicated as clearly as a
  • 52:05 - 52:11
    stupid share that rolls across Facebook, as
    an image that - I don't want to repeat all
  • 52:11 - 52:18
    of the wrong claims - says
    something very wrong, but very persuasively.
  • 52:18 - 52:22
    We need to be as persuasive with the
    correct facts. And I know that many people
  • 52:22 - 52:28
    are doing that by now, especially on
    places like Instagram or TikTok. You find
  • 52:28 - 52:33
    more and more people doing very high
    quality, low level - and I mean that on
  • 52:33 - 52:40
    the jargon level, not on the
    intellectual level - so very low-barrier
  • 52:40 - 52:47
    science communication. And I think this
    helps a lot. This helps more than very
  • 52:47 - 52:53
    complicated sorts of pages debunking false
    facts. I mean, we also
  • 52:53 - 52:57
    need these as references. But if we really
    want to combat the spread of fake news, we
  • 52:57 - 53:02
    need to just be as accessible with the
    truth.
  • 53:02 - 53:11
    H: A thing closely connected to that is:
    "How do we find human error or detect
  • 53:11 - 53:16
    it?", since I guess people who are
    watching this talk have already started
  • 53:16 - 53:23
    the process of fine-tuning their
    bullshit detectors. But when, for example,
  • 53:23 - 53:27
    something very exciting and promising
    comes along, CRISPR/Cas or
  • 53:27 - 53:39
    something: how do we go forward so as not
    to be fooled by our own already-tuned
  • 53:39 - 53:46
    bullshit detectors and fall for false
    conclusions?
    J: I think a main part of this is
  • 53:46 - 53:54
    practice. Just try to look for something
    that would break the story - just not for
  • 53:54 - 53:58
    every story you read; that's a
    lot of work. But from time to time, pick a
  • 53:58 - 54:01
    story where you're like "Oh, this is very
    exciting" and try to learn as much as you
  • 54:01 - 54:06
    can about that one story. And by doing
    that, also learn about the process of how
  • 54:06 - 54:12
    you drew the conclusions, and then compare
    your final picture, after you did all the
  • 54:12 - 54:19
    research, to the thing that you read in the
    beginning and see where there are things
  • 54:19 - 54:23
    that are not coming together and where
    there are things that are the same and
  • 54:23 - 54:30
    then, based on that, practice. And I know
    that that's a lot of work, so that's sort
  • 54:30 - 54:38
    of the high-impact way of doing it:
    just practicing and just actively doing
  • 54:38 - 54:44
    the check-ups. But the other way you can
    do this is find people whose opinion you
  • 54:44 - 54:51
    trust on topics and follow them, follow
    them on podcasts, on social media, on
  • 54:51 - 54:57
    YouTube or wherever. And, especially in
    the beginning when you don't know them
  • 54:57 - 55:01
    well, be very critical about them; it's
    easy to fall into a sort of trap here
  • 55:01 - 55:06
    and follow somebody who actually
    doesn't know their stuff. But there are
  • 55:06 - 55:10
    some people, I mean, in this community
    here - I am not saying anything UFSA -
  • 55:10 - 55:17
    if you follow people like minkorrekt, like
    methodisch inkorrekt, they are great for a
  • 55:17 - 55:19
    very.. I actually can't really pin down
    which scientific area, because in their
  • 55:19 - 55:23
    podcast they touch on so many different
    things, and they have a very high-level
  • 55:23 - 55:29
    understanding of how science works. So
    places like this are a good start to get a
  • 55:29 - 55:35
    healthy dose of skepticism. Another rule
    of thumb that I can give: usually
  • 55:35 - 55:40
    stories are not as exciting when you get
    down to the nitty-gritty details. I'm
  • 55:40 - 55:45
    a big fan of CRISPR, for example, but I
    don't believe that we can cure all
  • 55:45 - 55:49
    diseases right now just because we have
    CRISPR; there are very limited things we
  • 55:49 - 55:55
    can do with it, even though we can do much
    more than we could when we didn't have
  • 55:55 - 56:01
    it. But I'm not going around thinking
    we can now create life at will because we
  • 56:01 - 56:06
    have CRISPR, or fight any disease at
    will because we have CRISPR. So in
  • 56:06 - 56:11
    general a good rule of thumb is: just calm
    down, look at what's really in there,
  • 56:11 - 56:18
    tone the excitement down by like 20%
    and then take that level of excitement
  • 56:18 - 56:22
    with you, instead of going around being
    scared or overly excited about a new
  • 56:22 - 56:29
    technology you think has been found,
    because we rarely make these massive jumps
  • 56:29 - 56:35
    that we need to start to worry about or
    get overexcited about.
  • 56:35 - 56:43
    H: Very good, so very last question: Which
    tools did you use to create these nice
  • 56:43 - 56:48
    drawings?
    J: laughs Oh, a lot of people won't like
  • 56:48 - 56:53
    me for saying this because this will sound
    like a product promo. But I use
  • 56:53 - 56:59
    an iPad with a pencil, and I used an app
    called Affinity
  • 56:59 - 57:04
    Designer to draw the things on there,
    because that works very well, also across
  • 57:04 - 57:09
    devices. So that's how I created all of
    the drawings, and I put them all together
  • 57:09 - 57:15
    in Apple Motion and exported the whole
    thing in Apple FinalCut. So this now
  • 57:15 - 57:17
    sounds like a sales pitch for all
  • 57:15 - 57:17
    of these products. But I can say that for
    me they work very well, and there are
  • 57:17 - 57:24
    alternatives for pretty much everything
    along the way. I mean, I can say - because
  • 57:24 - 57:28
    I'm also doing a lot of science
    communication with drawings for the Plants
  • 57:28 - 57:33
    and Pipettes project that I am part of -
    that an iPad with a pencil and Affinity
  • 57:33 - 57:39
    Designer gets you very far for high-quality
    drawings with very easy access,
  • 57:39 - 57:45
    because I'm in no way an
  • 57:39 - 57:45
    artist. I'm very bad at this stuff. But I
    can hide all my shortcomings because I
  • 57:45 - 57:49
    have an undo function in my iPad and
    because everything's in a vector drawing,
  • 57:49 - 57:54
    I can delete every stroke that I made;
    even if I realize, like, an hour later that
  • 57:54 - 57:59
    something should not be there, I can
    reposition it or delete it. So vector
  • 57:59 - 58:04
    files and a pencil and an undo function
    were my best friends in the creation of
  • 58:04 - 58:09
    this video.
    H: Very good, derJoram. Thank you very
  • 58:09 - 58:14
    much for your talk and your very extensive
    Q&A. I think a lot of people are very
  • 58:14 - 58:16
    happy with your work.
    J: Thank you.
  • 58:16 - 58:22
    H: And they are actually saying in the pad
    that you should continue to communicate science to
  • 58:22 - 58:25
    the public.
    J: That's very good because that's my job.
  • 58:25 - 58:28
    laughs It's good that people like that.
    H: Perfect.
  • 58:28 - 58:32
    J: Thank you very much.
    H: So a round of applause and some very
  • 58:32 - 58:40
    final announcements for this session.
    There will be the Herald news show and the
  • 58:40 - 58:48
    break. So stay tuned for that. And I would
    say if there are no further... no, we
  • 58:48 - 58:53
    don't have any more time, sadly, but I
    guess people know how to connect to you
  • 58:53 - 58:59
    and contact derJoram if they want to know
    anything more.
  • 58:59 - 59:15
    rC3 postroll music
  • 59:15 - 59:40
    Subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!