rC3 preroll music
Herald: Please welcome with a big round of applause, in your living room or wherever you are, derJoram. derJoram is a science communicator. He got his university education and his first scientific experience at the Max Planck Institute. And he will now give you a crash course for beginners on the scientific method and on how to distinguish science from rubbish. derJoram, the stage is yours.
derJoram: Hi, nice to have you here. My name
is Joram Schwartzmann and I'm a plant
biologist. And today I want to talk about
science. I have worked in research for
many years, first during my diploma thesis
and then during my doctoral research. I've
worked both at universities and at the Max Planck Institute, so I got pretty good insights into the way these structures
work. After my PhD, I left the research
career to instead talk about science,
which is also what I'm about to do today.
I am working now in science communication,
both as a job and in my spare time, when I
write about molecular plant research
online. Today, I will only mention plants
a tiny bit because the topic is a
different one. Today though, we are
talking about science literacy. So
basically, how does the scientific system
work? How do you read scientific
information and which information can you
trust? Science. It's kind of a big topic.
Before we start, it's time for some
disclaimers: I am a plant biologist. I
know stuff about STEM research that is
science, technology, engineering and
mathematics. But there's so much more
other science out there. Social science
and humanities share many core concepts
with natural sciences, but have also many
approaches that are unique to them. I
don't know a lot about the way these work, so please forgive me if I stick
close to what I know, which is STEM
research. Talking about science is also
much less precise than doing the science.
For pretty much everything that I'll bring
up today there is an example where it is
completely different. So if in your
country, field of research or experience
something is different, we're probably both right about whatever we're talking about.
With that out of the way, let's look at
the things that make science science.
There are three parts of science that are
connected. The first one is the scientific
system. This is the way science is done.
Next up, we have people who do the science. The scientific term for them is researchers. We want to look at how you become a researcher, how researchers introduce biases and how they pick their volcanic lair to do evil science.
Finally, there are publications and this
is the front end of science, the stuff we
look at most of the time when we look at
science. There are several different kinds
and not all of them are equally
trustworthy. Let's begin with the
scientific system. We just don't do
science, we do science systematically.
Since the first people tried to understand
the world around them, we have developed a
complex system for science. At the core of
that is the scientific method. The
scientific method gives us structure and
tools to do science. Without it, we end up
in the realm of guesswork, anecdotes and
false conclusions. Here are some of my
favorite things that were believed before
the scientific method became standard.
Gentlemen could not transmit disease. Mice
are created from grain and cloth. Blood is
exclusively produced by the liver. Heart
shaped plants are good for the heart. But
thanks to the scientific method, we have a
system that allows us to make confident
judgment on our observations. Let's use an
example. This year has aged me
significantly and so as a newly formed old
person, I have pansies on my balcony. I
have blue ones and yellow ones, and in
summer I can see bees buzz around the
flowers. I have a feeling, though, that
they like the yellow ones better. That
right there is an observation. I now think to myself: I wonder if they prefer the yellow flowers over the blue ones based on the color. And this is my hypothesis. The
point of a hypothesis is to test it so I
can accept it or reject it later. So I
come up with a test. I count all bees that
land on yellow flowers and on blue flowers
within a weekend. That is my experiment.
So I sit there all weekend with one of
these clicky things in each hand and count
the bees on the flowers. Every time a bee
lands on a flower, I click. click, click,
click, click, click. It's the most fun I
had all summer. In the end, I look at my
numbers. These are my results. I saw sixty
four bees on the yellow flowers and twenty
seven on the blue flowers. Based on my
experiment I conclude that bees prefer
yellow pansies over blue ones. I can now
return and accept my hypothesis. Bees do
prefer yellow flowers over blue ones.
Based on that experiment I made a new
observation and can now make a new
hypothesis: do other insects follow the
same behavior? And so I sat there again
next weekend, counting all hoverflies on
my pansies. Happy days. The scientists in
the audience are probably screaming by
now. I am, too, but on the inside. My
little experiment and the conclusions I drew were flawed. First up, I didn't do any
controls apart from yellow versus blue.
What about time? Do the days or seasons
matter? Maybe I picked the one time period when bees actually do prefer yellow
but on most other days they like blue
better? And then I didn't control for
position. Maybe the blue ones get less
sunlight and are less warm and so a good
control would have been to swap the pots
around. I also said I wanted to test
color. Another good control would have
been to put up a cardboard cutout of a
flower in blue and yellow and see whether
it is the color or maybe another factor
that attracts the bees. And then I only
counted once. I put the two data points
into an online statistical calculator and when it had calculated, it told me I had internet connectivity problems. So I
busted out my old textbook about
statistics. And as it turns out, you need
repetitions of your experiment to do
statistics and without statistics, you
can't be sure of anything. If you want to
know whether what you measure is random or
truly different between your two
conditions, you do a statistical test that
tells you with what probability your
result could be random. That is called a
P-value. You want that number to be low.
In biology, we're happy with a chance of
one in twenty. So five percent that the
difference we observe between two
measurements happened by chance. In high energy particle physics, that chance of seeing a random effect is 1 in 3,500,000, or about 0.00003%. So without
statistics, you can never be sure whether
you observe something important or just
two numbers that look different. A good
way to do science is to do an experiment a
couple of times, three at least, and then
repeat it with controls again at least
three times. With a bigger data set, I
could actually make an observation that
holds significance. So why do I tell you
all of this? You want to know how to
understand science, not how to do it yourself? Well, as it turns out, controls
and repetitions are also a critical point
to check when you read about scientific
results. Often enough cool findings are
based on experiments that didn't control
for certain things or that are based on
very low numbers of repetitions. You have
to be careful with conclusions from these
experiments as they might be wrong. So
when you read about science, look for signs that they followed the scientific method, like a clearly stated hypothesis, experiments with proper controls and enough repetitions to do solid statistics.
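To give you an idea of what such a test looks like in practice, here is a minimal sketch in Python, using my bee counts from above. It is only an illustration, and it rests on a big assumption: it treats every landing as an independent yes-or-no trial, which is exactly the kind of thing my missing controls and repetitions would have to justify.

from math import comb

# Bee counts from the balcony experiment: 64 landings on yellow
# flowers, 27 on blue flowers.
yellow, blue = 64, 27
n = yellow + blue

def pmf(k):
    # Probability of exactly k yellow landings if the bees had no
    # color preference at all, i.e. k ~ Binomial(n, 0.5).
    return comb(n, k) * 0.5 ** n

# Two-sided binomial test: add up the probabilities of all
# outcomes that are at least as unlikely as the observed one.
observed = pmf(yellow)
p_value = sum(pmf(k) for k in range(n + 1) if pmf(k) <= observed)
print(f"p-value: {p_value:.6f}")

Running this gives a p-value of about 0.0001: below the one-in-twenty threshold used in biology, nowhere near the one-in-3,500,000 bar of particle physics, and, as we just saw, a small p-value still cannot rescue an experiment without controls and repetitions.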
It seems like an obvious improvement for
the scientific system to just do more
repetitions. Well, there is a problem with
that. Often experiments require the
researchers to break things. Maybe just
because you take the things out of their
environment and into your lab, maybe
because you can only study it when it's
broken. And as it turns out, not all
things can be broken easily. Let me
introduce you to my scale of how easy it
is to break the thing you study. All the
way to the left, you have things like
particle physics. It's easy to break
particles. All you need is a big ring and
some spare electrons you put in there
really, really fast. Once you have these
two basic things, you can break millions
of particles and measure what happens so
you can calculate really good statistics
on them. Then you have other areas of
physics. In material science, the only
thing that stops you from testing how hard
a rock is, is the price of your rock.
Again, that makes us quite confident in
the material properties of things. Now we
enter the realm of biology. Biology is
less precise because living things are not
all the same. If you take two bacterial
cells of the same species, they might
still be slightly different in their
genome. But luckily we can break millions
of bacteria and other microbes without
running into ethical dilemmas. We even ask
researchers to become better at killing
microbes. So doing more of the experiment
is easier when working with microbes. It
gets harder, though, with bigger and more
complex organisms. Want to break plants in
a greenhouse or in a field? As long as you
have the space, you can break thousands of
them for science and no one minds. How
about animals like fish and mice and
monkeys? There it gets much more
complicated very quickly. While we are
happy to kill thousands of pigs every day
for sausages, we feel much less
comfortable doing the same for science.
And it's not a bad thing when we try to
reduce harm to animals. So while you
absolutely can do repetitions and controls
and animal testing, you usually are
limited by the number of animals you can
break for science. And then we come to
human biology. If you thought it was hard
doing lots of repetitions and controls in
animals, try doing that in humans. You
can't grow a human on a corn sugar based
diet just to see what would happen. You
can't grow humans in isolation and you
can't breed humans to make more cancer as
a control in your cancer experiment. So
with anything that involves science in
humans, we have to have very clever
experiment design to control for all the
things that we can't control. The other
way to do science on humans, of course, is
to be a genetic lifeform and disk operating system. What this scale tells us
is how careful we have to be with
conclusions from any of these research
areas. We have to apply a much higher
skepticism when looking at single studies
on human food than when we study how hard
a rock is. If I'm interested in stuff on
the right end of the spectrum, I'd rather
see a couple of studies pointing at a
conclusion. Whereas the further I get to
the left hand side, the more I trust
single studies. That still doesn't mean
that there can't be mistakes in particle
physics, but I hope you get the idea. Back
to the scientific method. Because it is circular, it is never done, and neither is science. We can always uncover more
details, look at related things and refine
our understanding. There's no field where
we could ever say: Ok, let's pack up. We
know now everything. Good job, everyone -
the science has been completely done.
Everything in science can be potentially
overturned. Nothing is set in stone.
However, and it's a big however, it's not
likely that this happens for most things.
Most things have been shown so often that
the chance that we will find out that
water actually boils at 250 degrees
centigrade at sea level and normal
pressure is close to zero. But if researchers were able to show that strange behavior of water, it is in the
nature of science to include that result
in our understanding. Even if that breaks
some other ideas that we have about the
world. That is what sets science apart
from dogma. New evidence is not frowned
upon and rejected, but welcomed and
integrated into our current understanding
of the world. Enough about the scientific system. Let's talk about scientists. You might be surprised to hear this, but most researchers are actually people. Other people who are not researchers tend to
forget that, especially when they talk
about the science that the researchers do.
That goes both ways. There are some that
believe in the absolute objective truth of
science, ignoring all influence researchers have on the data. And there
are others, who say that science is lying
about things like vaccinations, climate
change or infectious diseases. Both groups
are wrong. Researchers are not infallible
demigods that eat nature and poop wisdom.
They're also not conspiring to bring harm
to society in search for personal gain.
Trust me, I know people who work in pesticide research; they're as miserable as any other researcher. Researchers are
people. And so they have thoughts and
ideas and wishes and biases and faults and
good intentions. Most people don't want to do bad things and inflict harm on others, and neither do researchers. They aim to do good
things and make lives of people better.
The problem with researchers being people
is that they are also flawed. We all have
cognitive biases that shape the way we
perceive and think about the world. And in
science, there's a whole list of biases
that affect the way we gather data and
draw conclusions from it. Luckily, there
are ways to deal with most biases. We have
to be aware of them, address them and
change our behavior to avoid them. What we
can't do is deny their impact on research.
Another issue is diversity. Whenever you
put a group of similar people together,
they will only come up with ideas that fit
within their group. That's why it is a
problem when only white men are dominating
research leadership positions. "Hold on", some of you might shout, "these men are men of science. They are objective. They use the scientific method. We don't need diversity, we need smart people." To which I answer: ugghhh. Here is a story for
you. For more than 150 years, researchers
believed that only male birds are singing.
It fits the simple idea that male birds do
all the mating rituals and stuff, so they
must be the singers. Just like in humans,
female birds were believed to just sit and
listen while the men shout at each other.
In the last 20 years, this idea was
debunked. New research found that female birds also sing. So how did we miss that
for so long? Another study on the studies
found that during these 20 years that
overturned the dogma of male singing
birds, the researchers changed. Suddenly,
more women took part in research and
research happened in more parts of the
world. Previously, mostly men in the U.S., Canada, England and Germany were studying
singing birds in their countries. As a
result, they subconsciously introduced
their own biases and ideas into the work.
And so we believed for a long time that female birds keep their beaks shut. Only when the group of researchers diversified did we get new and better results. The male
researchers didn't ignore the female
songbirds out of bad faith. The men were
shaped by their environment but they
didn't want to do bad things. They just happened to overlook something that someone with a different background would pick up
on. What does this tell us about science?
It tells us that science is influenced
consciously or subconsciously by internal
biases. When we talk about scientific
results we need to take that into account.
Especially in studies regarding human
behavior. We have to be very careful about
experiment design, framing and
interpretation of results. If you read about science that makes bold claims about the way we should work, interact or communicate in society, that science is prone to be shaped by bias, and you should be very careful when drawing conclusions
from it. I personally would rather wait
for several studies pointing in a similar
direction before I draw major conclusions.
I linked to a story about a publication
about the influence of female mentors on
career success and it was criticized for a
couple of these biases. If we want to
understand science better, we also have to
look at how someone becomes a scientist
and I mean that in a sense of professional
career. Technically, everybody is a
scientist as soon as they test a
hypothesis, observe the outcome and
repeat. But unfortunately, most of us are
not paid for the tiny experiments during
our day to day life. If you want to become
a scientist, you usually start by entering
academia. Academia is the world of
Universities, Colleges and research
institutes. There is a lot of science done
outside of academia, like in research and
development in industry or by individuals
taking part in DIY science. As these
groups rarely enter the spotlight of
public attention, I will ignore them
today. Sorry. So this is a typical STEM
career path. You begin as a Bachelor's or
Master's student. You work for something
between three months and a year and then, wohoo, you get a degree. From here you
can leave, go into the industry, be a
scientific researcher at a University or
you continue your education. If you
continue, you're most likely to do a PhD.
But before you can select one of the
exciting options on a form when you order
your food, you have to do research. For
three to six years, depending on where you
do your PhD, you work on a project and
most likely will not have a great time.
You finish with your degree and some
publications. A lot of people leave now
but if you stay in research, you'll become
a postdoc. The word postdoc comes from the
word "doc" as in doctorate and "post" as
in you have to post a lot of application
letters to get a job. Postdocs do more
research, often on broader topics. They
supervise PhD students and are usually
pretty knowledgeable about their research
field. They work and write papers until
one of two things happens. The German
Wissenschaftszeitvertragsgesetz bites them
in the butt and they get no more contract
or they move on to become a group leader
or professor. Being a professor is great.
You have a permanent research position,
you get to supervise and you get to talk
to many cool other researchers. You
probably know a lot by now, not only about
your field but also many other fields in
your part of science as you constantly go
to conferences because they have good food
and also people are talking about science.
Downside is, you're probably not doing any
experiments yourself anymore. You have
postdocs and PhD students, who do that for
you. If you want to go into science,
please have a look at this. What looks
like terrible city planning is actually
terrible career planning as less than one
percent of PhDs will ever reach the level
of professor, also known as the only
stable job in science. That's also what
happened to me, I left academia after my
PhD. So what do we learn from all of this?
Different stages of a research career
correlate with different levels of
expertise. If you read statements from a
Master's student or professor, you can get
an estimate for how much they know about
their field and in turn for how solid
their science is. Of course, this is just a rule of thumb - I have met both very
knowledgeable Master's students and
professors, who knew nothing apart from
their own small work. So whenever you read
statements from researchers independent of
their career stage, you should also wonder
whether they represent the scientific
consensus. Any individual scientist might
have a particular hot take about something
they care about but in general, they agree
with their colleagues. When reading about
science that relates to policies or public
debates, it is a good idea to explore
whether this particular researcher is
representing their own opinion or the one
of their peers. Don't ask the researcher
directly though, every single one of them
will say that, of course, they represent
the majority opinion. The difference
between science and screwing around is
writing it down, as Adam Savage once said.
Science without publications is pretty
useless because if you keep all that
knowledge to yourself, well, congrats, you
are very smart now but that doesn't really
help anyone but you. Any researcher's
goal, therefore, is to get their findings
publicly known so that others can extend
the work and create scientific progress.
So let's go back to my amazing bee
research. I did the whole experiment again
with proper controls this time and now I
want to tell people about it. The simplest
way to publish my findings would be to
tweet about it. But then a random guy
would probably tell me that I'm wrong and
stupid and should go f*** myself. So
instead I do what most researchers would
do and go to a scientific conference.
That's where researchers hang out, have a
lot of coffee and sit and listen to talks
from other researchers. Conferences are
usually the first place that new
information becomes public. Well, public
is a bit of a stretch, usually the talks
are not really recorded or made accessible to anyone who wasn't there at the time.
So while the information is pretty
trustworthy, it remains fairly
inaccessible to others. After my
conference talk, the next step is to write
up all the details of my experiment and
the results in a scientific paper. Before
I send this to an editor at a scientific
journal, I could publish it myself as a
pre-print. These pre-prints are drafts of
finished papers that are available to read
for anyone. They are great because they
provide easy access to information that is
otherwise often behind paywalls. They are
not so great because they have not yet
been peer reviewed. If a pre-print hasn't
also been published with peer review, you
have to be careful with what you read as
it is essentially only the point of view
of the authors. Peer review only happens
when you submit your paper to a journal.
Journals are a whole thing and there have
been some great talks in the past about
why many of them are problematic. Let's
ignore for a second how these massive
enterprises collect money from everyone
they get in contact with and let's focus
instead on what they're doing for the
academic system. I send them my paper, an
editor sees if it's any good and then
sends my paper to two to three reviewers.
These are other researchers that then
critically check everything I did and
eventually recommend accepting or
rejecting my paper. If it is accepted, the
paper will be published. I pay a fee and
the paper will be available online. Often
behind a paywall, unless I pay some more
cash. At this point, I'd like to have a
look at how a scientific paper works.
There are five important parts to any paper: the title, the author list, the abstract, the figures and the text. The
title is a summary of the main findings
and unlike in popular media, it is much
more descriptive. Where a newspaper leaves
out the most important information to get
people to read the article, the
information is right there in the title of
the study. In my case that could be
"Honeybees -Apis mellifera- show selective
preference for flower color in viola
tricolor". You see, everything is right
there. The organisms I worked with and the
main result I found. Below the title
stands the author list. As you might have
guessed, the author list is a list of
authors. Depending on the field the paper
is from, the list can be ordered
alphabetically or according to relative
contribution. If it is contribution, then you usually find the first author to have done all the work, the middle authors to have contributed some smaller parts and
the last author to have paid for the whole
thing. The last author is usually a group
leader or professor. A good way to learn
more about a research group and their work
is to search for the last author's name. The
abstract is a summary of the findings.
Read this to get a general idea of what
the researchers did and what they found.
It is very dense in information but it is
usually written in a way that researchers from other fields can also understand at least some of it. The
figures are pretty to look at and hold the
key findings in most papers and the text
has the full story with all the details, the jargon and all the references that the research is built on. You probably
won't read the text unless you care a lot,
so stick to title, abstract and authors to
get a quick understanding of what's going
on. Scientific papers reflect the peer reviewed opinion of one or a few research
groups. If you are interested in a broader
topic like what insects like to pollinate
what flower, you should read review
papers. These are peer reviewed summaries
of a much broader scope, often weighing
multiple points of view against each
other. Review papers are a great resource
that avoids some of the biases individual
research groups might have about their
topic. So my research is reviewed and
published. I can go back now and start
counting butterflies, but this is not
where the publishing of scientific results
ends. My institute might think that my bee
counting is not even bad, it is actually
amazing and so they will issue a press
release. Press releases often emphasize
the positive parts of a study while
putting them into context of something
that's relevant to most people. Something
like "bees remain attracted to yellow
flowers despite the climate crisis". The
facts in a press release are usually
correct, but shortcomings of the study that I mentioned in the paper are often missing from the press release. Because my bee
study is really cool and because the PR
department of my institute did a great
job, journalists pick up on the story. The
first ones are often magazines with a focus
on science like Scientific American or
Spektrum der Wissenschaft. Most of the
time, science journalists do a great job
in finding more sources and putting the
results into context. They often ask other
experts for their opinion and they break
down the scientific language into simpler
words. Science journalism is the source I
recommend to most people when they want to
learn about a field that they are not
experts in. Because my bee story is
freaking good, mainstream journalists are
also reporting on it. They are often
pressed for time and write for a much broader audience, so they just report the
basic findings, often putting even more
emphasis on why people should care.
Usually climate change, personal health or
now Covid. Mainstream press coverage is
rarely as detailed as the previous
reporting and has the strongest tendency
to accidentally misrepresent facts or add
framing that researchers wouldn't use. Oh,
and then there is the weird uncle, who
posts a link to the article on their
Facebook with a blurb of text that says
the opposite of what the study actually
did. As you might imagine, the process of
getting scientific information out to the
public quickly becomes a game of
telephone. What is clearly written in the
paper is framed positively in a press
release and gets watered down even more
once it reaches mainstream press. So for you, as someone who wants to understand the science, it is a good idea to be more careful the further you get away from the original source material. While specialized science journalism usually does a good job in breaking down the facts without
distortion, the same can't be said for
popular media. If you come across an
interesting story, try to find another
version of it in a different outlet,
preferably one that is more catered to an
audience with scientific interest. Of
course, you can jump straight to the
original paper but understanding the
scientific jargon can be hard and
misunderstanding the message is easy, so
it can do more harm than good. We see that
harm now with hobbyist epidemi..., epidemio..., epidemiolo... - with hobbyists who are not people who study epidemics, who are making up their own pandemic modeling.
They are cherry picking bits of
information from scientific papers without
understanding the bigger picture and
context and then post their own charts on
Twitter. It's cool if you want to play
with data in your free time, and it's a
fun way to learn more about a topic but it
can also be very misleading and harmful
while dealing with a pandemic if expert studies have to fight for attention with nonexperts' Excel graphs. It pays off to
think twice about whether you're actually
helping by publishing your own take on a
scientific question. Before we end, I want
to give you some practical advice on how
to assess the credibility of a story and
how to understand the science better. This is not an in-depth guide to fact checking; rather, I want you to get a sort of gut feeling
about science. When I read scientific
information, these are the questions that
come to my mind. First up, I want you to ask yourself: is this plausible and does this
follow the scientific consensus? If both
answers are "no" then you should carefully
check the sources. More often than not,
these results are outliers that somebody
exaggerated to get news coverage or
someone is actively reframing scientific
information for their own goals. To get a
feeling about scientific consensus on
things, it is a good idea to look for
joint statements from research
communities. Whenever an issue that is
linked to current research comes up for
public debate, there is usually a joint
statement laying down the scientific
opinion signed by dozens or even hundreds
of researchers, like, for example, from
Scientists for Future. And then whenever
you see a big number, you should look for
context. When you read statements like "We
grow sugar beet on an area of over 400,000 hectares", you should immediately ask
yourself "Who is we? Is it Germany,
Europe, the world? What is the time frame?
Is that per year? Is that a lot? How much
is that compared to other crops?". Context
matters a lot and often big numbers are
used to impress you. In this case, 400,000 hectares is the yearly area that Germany grows sugar beet on. Wheat, for example, is grown on over 3 million hectares per year in Germany. Context matters, and so
whenever you see a number, look for a
frame of reference. If the article doesn't give you one, either go and look for yourself or ignore the number in your decision making based on the article.
Numbers only work with framing, so be
aware of it. I want you to think briefly
about how you felt when I gave you that number of 400,000 hectares. Chances are
that you felt a sort of feeling of unease
because it's really hard to imagine such a
large number. An interesting exercise is
to create your own frame of reference.
Collect a couple of numbers like total
agricultural area of your country, the
current spending budget of your
municipality, the average yearly income,
or the unemployment rate in relative and
absolute numbers. Keep the list somewhere
accessible and use it whenever you come
across a big number that is hard to grasp.
Are 100,000€ a lot of money in context of
public spending? How important are 5,000
jobs in context of population and
unemployment? Such a list can defuse the occasional scary big number in news articles, and it can also help you to make your point better.
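To show how simple such a list can be, here is a minimal sketch in Python of a personal frame of reference. The two anchor values are the crop areas from this talk; the headline number at the end is made up purely for illustration.

# A tiny personal frame of reference: a few anchor numbers you
# trust, and a helper that puts any new number next to them.
reference = {
    "sugar beet area, Germany (hectares per year)": 400_000,
    "wheat area, Germany (hectares per year)": 3_000_000,
}

def put_in_context(label, value):
    print(f"{label}: {value:,}")
    for name, anchor in reference.items():
        # Express the unfamiliar number as a fraction of each anchor.
        print(f"  = {value / anchor:.2f} x {name}")

# A hypothetical headline number, invented for this example:
put_in_context("hectares in a scary headline", 150_000)

The point is not the code but the habit: once your anchors are written down, any big number can be turned into a small, comparable ratio in seconds.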
Speaking of framing, always be aware of who the sender of the information is. News outlets rarely have a specific scientific agenda, but NGOs do. If Shell, the oil company, provided a leaflet where they cite scary numbers and present research that they funded that finds that oil drilling is actually good for the environment, but they won't disclose who they work with for the study, we would all laugh at that
information. But if we read a leaflet from
an environmental NGO in Munich that is
structurally identical but with a
narrative about glyphosate in beer that
fits our own perception of the world, we
are more likely to accept the information
in the leaflet. In my opinion, both
sources are problematic and I would not
use any of them to build my own opinion.
Good journalists put links to the sources
in or under the article, and it is a good
idea to check them. Often, however, you
have to look for the paper yourself based
on hints in the text like author names,
institutions, and general topics. And then
paywalls often block access to the
information that you're looking for. You
can try pages like ResearchGate for legal
access to PDFs. Many researchers also use
sci-hub but as the site provides illegal
access to publicly funded research, I
won't recommend doing so. When you have
the paper in front of you, you can either
read it completely, which is kind of hard,
or just read the abstract, which might be
easier. The easiest is to look for science
journalism articles about the paper.
Twitter is actually great to find those,
as many researchers are on Twitter and
like to share articles about their own research. They also like to discuss
research on Twitter. So if the story is
controversial, chances are you'll find
some science accounts calling that out.
While Twitter is terrible in many regards,
it is a great tool to engage with the
scientific community. You can also do a
basic check-up yourself. Where was the
paper published and is it a known journal?
Who are the people doing the research and
what are their affiliations? How did they
do their experiment? Checking for controls
and repetitions in the experiment is hard
if you don't know the topic, but if you do
know the topic, go for it. In the end,
fact checking takes time and energy. It's
very likely that you won't do it very
often but especially when something comes
up that really interests you and you want
to tell people about it, you should do a
basic fact-check on the science. The world
would be a lot better if you'd only share
information that you checked yourself for
plausibility. You can also help to reduce
the need for rigorous fact checking.
Simply do not spread any stories that seem too good to be true and that you
didn't check yourself or find in a
credible source. Misinformation and bad
science reporting spread because we don't
care enough and because they are very,
very attractive. If we break that pattern,
we can give reliable scientific
information the attention that it
deserves. But don't worry, most of the
science reporting you'll find online is
actually pretty good. There is no need to
be extremely careful with every article
you find. Still, I think it is better to
have a natural alertness to badly reported science than to trust just anything that is
posted under a catchy headline. There is
no harm in double checking the facts
because either you correct a mistake or
you reinforce correct information in your
mind. So how do I assess whether a source
that I like is actually good? When I come
across a new outlet, I try to find some
articles in an area that I know stuff
about. For me, that's plant science. I
then read what they are writing about
plants. If that sounds plausible, I am
tempted to also trust them when they write about things like physics or climate change, where I have much less expertise.
This way I have my own personal list of
good and not so good outlets. If somebody
on Twitter links to an article from the
not so good list, I know that I have to
take that information with a large
quantity of salt. And if I want to learn
more, I look for a different source to
back up any claims I find. It is tedious
but so is science. With a bit of practice,
you can internalize the skepticism and
navigate science information with much
more confidence. I hope I could help you
with that a little bit. So that was my
attempt to help you to understand science
better. I'd be glad if you'd leave me
feedback or direct any of your questions
towards me on Twitter. That's
@sciencejoram. There will be sources for
the things I talked about available
somewhere around this video or on my
website: joram.schwartzmann.de. Thank you
for your attention. Goodbye.
Herald: derJoram, thank you for your talk, very entertaining and informative as well, if I might say. We have a few questions
from here at the Congress that would be...
where's the signal? I need my questions
from the internet - all of them are from
the Internet.
Joram: laughs
H: So I would go through the questions and
you can elaborate on some of the points
from your talk. So the first question...
J: yeah, I will.
H: very good. The first question is: Is
there a difference between reviewed
articles and meta studies?
J: To my knowledge, there isn't really a
categorical difference in terms of peer
review. Meta studies, so studies that
integrate, especially in the medical field
you find that often, they integrate a lot
of studies and then summarize the findings
again and try to put them in context of
one another, which are incredibly useful
studies for medical conclusion making.
Because as I said in the talk, it's often
very hard to do, for example, dietary
studies and you want to have large numbers
and you get that by combining several
studies together. And usually these meta
studies are also peer reviewed. So instead
of actually doing the research and going
and doing whatever experiments you want to
do on humans, you instead collect all of
the evidence others state, and then you
integrate it again, draw new conclusions
from that and compare them and weigh them
and say "OK, this study had these
shortcomings but we can take this part
from this study and put it in context with
this part from his other study" because
you make so much additional conclusion
making on that, you then submit it again
to a journal and it's again peer reviewed
and then other researchers look at it and
say, and yeah, pretty much give their expertise on it and say whether or not it
made sense what you concluded from all of
these things. So a meta study, when it's
published in a scientific journal, is also
peer reviewed and also a very good,
credible source. And I would even say
often meta studies are the studies that
you really want to look for if you have a
very specific scientific question that you
as a sort of non expert, want to have
answered because very often the individual
studies, they are very focused on a
specific detail of a bigger research
question. But if you want to know: is, I don't know, dietary fiber very good for me? There's probably not a single study
that will have the answer but there will
be many studies that together point
towards the answer. And the meta study is
a place where you can find that answer.
H: Very good, sounds like something to
reinforce the research. Maybe a follow-up
question or it is a follow-up question: Is
there anything you can say in this regard
about the reproducibility crisis in many
fields such as medicine?
J: Yeah, that's a very good point. I mean,
that's something that I didn't mention at all in the talk, pretty much for, like, complexity reasons, because when you
go into reproducibility, you run into all
kinds of, sort of complex additional
problems because it is true that we often
struggle with reproducing. I actually
don't have the numbers on how often we fail, but this reproducibility crisis that's
often mentioned - that is this idea that
when researchers take a paper that has
whatever they studied and then other
researchers try to recreate a study and
usually in a paper, there's also a 'Materials & Methods' section that details
all of the things that they did. It's
pretty much the instructions of the
experiment. And the results of the
experiment are both in the same paper
usually - and when they try to sort of
recook the recipe that somebody else did,
there is a chance that they don't find the
same thing. And we see that more and more
often, especially with like complex
research questions. And that brings us to
the idea that reproduction or
reproducibility is an issue and that maybe we can't trust science as much or we
have to be more careful. It is true that
we have to be more careful. But I wouldn't go as far as to be, in general, sort of distrustful of research. And that's
why I'm also saying, like in the medical
field, you always want to have multiple
studies pointing at something. You always
want to have multiple lines of evidence
because if one group finds something and
another group can't find it, like
reproduce it, you end up in a place where
you can't really say "Did this work now?
Like, who made the mistake? The first group or the second group?" Because also when you are reproducing a study, you can make
mistakes or there can be factors that the
initial research study didn't document in
a way that it can be reproduced because
they didn't care to write down the supplier of some chemicals, and the chemicals were
very important for the success of the
experiment. Things like that happen and so
you don't know when you just have the initial study or the reproduction study and
they have a different outcome. But if you
have then multiple studies that all look
in a similar area and out of 10 studies, 8 or 7 point in a certain direction, you
can then be more certain that this
direction points towards the truth. In
science, it's really hard to say, like
OK, this is now the objective truth. This
is now.. we found now the definitive
answer to the question that we're looking
at, especially in the medical field. So,
yeah.. So that's a very long way of saying it's complicated. Reproduction or reproducibility studies are very important, but I wouldn't be too worried or
too - what's the word here? Like, I
wouldn't be too worried that the lack of
reproducibility breaks the entire
scientific method, because there are usually more complex issues at hand than just a simple recooking of another
person's study.
H: Yes, speaking of more publishing, so
this is a follow-up to the follow-up, the
Internet asks, how can we deal with the
publish or perish culture?
J: Oh, yeah. If I knew that, I would write a very smart blog post and try to convince people about that. I think
personally we need to rethink the way we
do the funding, because that's what it comes down to in the end. Another issue I
really didn't go into much detail in the
talk because it's also very complex. So
science funding is usually defined by a decision making process; at one point somebody decides who gets the money, and to make that decision they need a qualifier. Like there are 10 research groups or 100 research groups that write a grant and say like "Hey, we need money
because we want to do research." And they
have to figure out or they have to decide who gets it, because they can't give money
to everyone because we spend money in our
budgets on different things than just
science. So the next best thing that they came up with was the idea to use papers - the number of papers that you have - to
sort of get a measurement - or the quality
of paper that you have - to get a
measurement of whether you are deserving
of the money. And you can see how that's
problematic and means that people who are early in their research career, who don't
have a lot of papers, they have a lower
chance of getting the money. And that leads to the publish or perish idea that if
you don't publish your results and if you
don't publish them in a very well
respected journal, then the funding
agencies won't give you money. And so you
perish and you can't really pursue your
research career. And it's really a hard
problem to solve because the decision
about the funding is very much detached
from the scientific world, from academia.
There are like multiple levels of abstraction between the people who, in the end, make the budgets and decide who gets the money, and the people who are actually using the money. I would wish for funding agencies to look less at papers and maybe
come up with different qualifiers, maybe
also something like general scientific
practice, maybe they could do some sort of audits of labs. I mean, there's a ton
of factors that influence good research
that are not mentioned in papers like work
ethics, work culture, how much teaching you
do, which can be very important. But it's
sort of detrimental to get more funding
because when you do teaching, you don't do
research and then you don't get papers and
then you don't get money. So, yeah, I
don't have a very good solution to the question of what we can do. I would like to
see more diverse funding also of smaller
research groups. I would like to see more
funding for negative results, which is
another thing that we don't really value.
So if you do an experiment and it doesn't
work, you can't publish it, you don't get
the paper, you don't get money and so on.
So there are many factors that need to
change, many things that we need to touch
to actually get away from publish or
perish.
H: Yeah, another question that is closely
connected to that is: Why are there so few
stable jobs in science?
J: Yeah, that's the
Wissenschaftszeitvertragsgesetz,
something that - I forgot when we got it -
I think in the late 90s or early 2000s.
That's at least a very German specific answer: this Gesetz, this law, put it into law that you have a limited time span that you can work in research,
you can only work in research for, I think, 12 years, and there are some footnotes and stuff around it. But there is a fixed time limit
that you can work in research on limited
term contracts. And your funding, whenever you get research funding, is always for a limited time. You always get
research funding for three years, six
years if you're lucky. So you never have
permanent money in the research group.
Sometimes you have that in universities
but overall you don't have permanent
money. And so if you don't have permanent
money, you can't have permanent contracts
and therefore there aren't really stable
jobs. And then with professorships or some
group leader positions, then it changes
because group leaders and professorships,
they are more easily planned. And
therefore in universities and research
institutes, they sort of make a long term
budget and say "OK, we will have 15
research groups. So we have money in the
long term for 15 group leaders.". But
whoever is hired underneath these group
leaders, this has much more fluctuation
and is based on sort of short term money.
And so there's no stable jobs there. At
least that's in Germany. I know that, for
example, in the UK and in France, they
have permanent positions earlier. They have lecturers, for example, in the UK
where you can, without being a full professor - which has like its own backpack of stuff that has to be done - already work at a university in the long term on a permanent contract. So it's a
very.. it's a problem we see across the
world but Germany has its own very
specific problems introduced here that
make it very unattractive to stay long
term in research in Germany.
H: It's true. I concur.
J: Yes
H: laughs Coming to talk to the people who do science mostly for fun and less for profit. This question is: Can you write
and publish a paper without a formal
degree in the sciences, assuming the
research efforts are sufficiently good?
J: Yes, I think technically it is
possible. It comes with some problems,
like, first of all, it's not free. When you submit your paper to a journal, you pay money for it. I don't
know exactly but it ranges. I think the safe assumption is between $1,000 and $5,000, depending on the journal you submit to. Then very often there are like
some formal problems that... I've been
recently co-authoring a paper and I'm not
actively doing research anymore. I did
something in my spare time, helped a friend of mine who was still doing research with some like basic stuff, and he was so nice to put me on the paper. And
then there is a form where it says like
institute affiliation and I don't have an
institute affiliation in that sense. So as
I'm just a middle author in this paper, when it is published - or hopefully, if it gets accepted - I will be there as an independent researcher. But it might be
that a journal has their own internal
rules where they say we only accept people
from institutions. So it's not really
inherent in the scientific system that you
have to be at an institution but there are
these doors, there are these
pathways that are locked because somebody has to put in a form somewhere which institution you affiliate with. And I know
that some people who do like DIY science, so they do it outside of academia, need to have partners in academia that help them with the publishing and also to
get access to certain things. I mean, in
computer science, you don't need specific chemicals, but if you do anything like chemical engineering or biology or anything, often you only get access to the supplies when you are at an academic institution. So, I know that many people
have sort of these partnerships, cooperations with academia that allow them
to actually do the research and then
publish it as well because otherwise, if
you're just doing it from your own
bedroom, there might be a lot of barriers
in your way that might be very hard to
overcome. But I think if you're really, really dedicated, you can overcome them.
H: Coming to the elephants in that
bedroom: What can we do against the spread of false facts, 5G, corona vaccines? So they are very.. They get a
lot of likes and are spread like a disease
themselves. And it's very hard to counter,
especially in personal encounters, these
arguments because apparently a lot of
people are not that familiar with the
scientific method. What's your take on
that?
J: Yeah, it's difficult. And I've read over the years now many different approaches, starting from not actually talking about facts, because often somebody who has a very predefined opinion on something, they know a lot of false facts that they have on their mind.
And you, as somebody talking to them,
often don't have all of the correct facts
in your mind. I mean, who runs around
with, like, a bag full of climate facts
and a bag full of 5G facts and a bag full
of vaccine facts, or like in the same quantity and quality as the stuff that somebody who read stuff on Facebook has in their backpack and their sort of mental image of the world. So just
arguing on the facts, it's very hard
because people who follow these false ideas, they're very quick at making turns and they like throw a thing at you one
after the other. And so it's really hard to just go "but actually...", debunking fact one and then debunking the next wrong fact. So I've seen a paper where people
try to do this sort of from an argumentative standpoint. They say: "Look: You're
drawing false conclusions. You say because
A, therefore B, but these two things
aren't linked in a causal way. So you
can't actually draw this conclusion." And
so sort of try to destroy that argument on a meta level instead of on a fact level. But
also that is difficult. And usually
people who are really devout followers of false facts, they are also not followers of reason, so any reason based argument will just not work for them because they
will deny it. I think what really helps is
a lot of small scale action in terms of making scientific data, making science, more accessible. And I mean, I'm a science
communicator, so I'm heavily biased. I'm
saying like we need more science
communication, we need more low level
science communication. We need to have it
freely accessible because all of the stuff
that you read with the false facts, this
is all freely available on Facebook and so
on. So we need to have a similar low
level, low entry level for the correct
facts. So for the real facts. And this is
also.. It's hard to do. I mean, in the science communication field, there's also a lot of debate about how we do that. Should we do that
over more presence on social media? Should
we simplify more or are we then actually
oversimplifying like where is the balance?
How do we walk this line? So there's a lot
of discussion and still ongoing learning
about that. But I think in the end, that's what we need: we need people to be able to find correct facts just as easily and understandably as they find the fake news and the false facts. Like we need
science to be communicated as clearly as the stupid shares rolling through Facebook, as an image that - I don't want to repeat all of the wrong claims - but something that says something very wrong, but very persuasive.
We need to be as persuasive with the
correct facts. And I know that many people
are doing that by now, especially on
places like Instagram or TikTok. You find
more and more people doing very high
quality, low level - and I mean that on
sort of jargon level, not on a sort of
intellectual level - so very low barrier
science communication. And I think this
helps a lot. This helps more than very
complicated sort of pages debunking false
facts. I mean, we also need these as references. But if we really
want to combat the spread of fake news, we
need to just be as accessible with the
truth.
H: A thing closely connected to that is: "How do we find human error or detect it?", since I guess people who are watching this talk have already started with the process of fine tuning their bullshit detectors. But when, for example, something very exciting and promising comes along, as an example, CRISPR/Cas or something - how do we go forward to not be fooled by our own already tuned bullshit detectors and fall to false conclusions?
J: I think a main part of this is
practice. Just try to look for something that would break the story - just not for every story that you read, that's a lot of work. But from time to time, pick a
story where you're like "Oh, this is very
exciting" and try to learn as much as you
can about that one story. And by doing
that, also learn about the process, how
you drew the conclusions, and then compare your final image after you did all the research to the thing that you read in the
beginning and see where there are things
that are not coming together and where
there are things that are the same and
then based on that, practice. And I know
that that's a lot of work, so that's sort of the high impact way of doing that, by just practicing and just actively doing
the check-ups. But the other way you can
do this is find people whose opinion you
trust on topics and follow them, follow
them on podcasts, on social media, on
YouTube or wherever. And, especially in the beginning when you don't know them well, be very critical about them; it's easy to fall into like a sort of trap here and follow somebody who actually doesn't know their stuff. But there are
some people, I mean, in this community
here - I am not saying anything UFSA -
if you follow people like minkorrekt, like
methodisch inkorrekt, they are great for a
very.. I actually can't really pin down
which scientific area because in their
podcast they're touching so many different
things and they have a very high level
understanding of how science works. So
places like this are a good start to get a
healthy dose of skepticism. Another rule
of thumb that I can give is like usually
stories are not as exciting when you get
down to the nitty gritty details, like I'm
a big fan of CRISPR, for example, but I
don't believe that we can cure all
diseases just now because we have CRISPR,
like, there's very limited things we can
do with it and we can do much more with it
than what we could do when we didn't have
it. But I'm not going around and thinking
now we can create life at will because we
have CRISPR. We can fight any disease at
will because we have CRISPR. So that's in
general a good rule of thumb is: just calm
down, look what's really in there and see
how much.. or tone it just down like 20%
and then take that level of excitement
with you, instead of going around being scared or overly excited about a new technology that you think has been found, because we rarely do these massive jumps
that we need to start to worry or get over
excited about something.
H: Very good, so very last question: Which
tools did you use to create these nice
drawings?
J: laughs Oh, a lot of people won't like
me for saying this because this will sound
like a product promo. But there is.. I use an iPad with a pencil, and I used an app called Affinity Designer to draw the things on there, because that works very well also across devices. So that's how I
created all of the drawings and I put them
all together in Apple Motion and exported
the whole thing in Apple FinalCut. So this now sounds like a sales pitch for all of these products. But I can say, like for
me, they work very well, but there are pretty much alternatives for everything along the
way. I mean, I can say because I'm also
doing a lot of science communication with
drawings for the Plants and Pipettes project
that I am part of, and I can say an iPad with a pencil and Affinity Designer gets you very far for high quality drawings with very easy access, because I'm in no way an artist. I'm very bad at this stuff. But I
can hide all my shortcomings because I
have an undo function in my iPad and
because everything's in a vector drawing,
I can delete every stroke that I made,
even if I realized like an hour later that
this should not be there, I can, like,
reposition it and delete it. So vector
files and a pencil and an undo function
were my best friends in the creating of
this video.
H: Very good, derJoram. Thank you very
much for your talk and your very extensive
Q&A. I think a lot of people are very
happy with your work.
J: Thank you.
H: And are actually saying in the pad that you should continue to communicate science to the public.
J: That's very good because that's my job.
laughs It's good that people like that.
H: Perfect.
J: Thank you very much.
H: So a round of applause and some very
final announcements for this session.
There will be the Herald news show and the break. So stay tuned for that. And I would
say if there are no further... no, we
don't have any more time, sadly, but I
guess people know how to connect to you
and contact derJoram if they want to know
anything more.
rC3 postroll music
Subtitles created by c3subtitles.de
in the year 2020. Join, and help us!