
34C3 - Algorithmic science evaluation and power structure: the discourse on strategic citation and

  • 0:00 - 0:15
    34C3 preroll music
  • 0:15 - 0:18
    Herald: Please give a warm welcome here.
  • 0:18 - 0:24
    It’s Franziska, Teresa, and Judith.
  • 0:24 - 0:27
    Judith, you have the stage, thank you.
  • 0:27 - 0:28
    Judith Hartstein: Thank you, thanks!
  • 0:28 - 0:34
    applause
  • 0:34 - 0:57
    inaudible
  • 0:57 - 1:00
    Judith: We believe that
    scientific performance indicators
  • 1:00 - 1:03
    are widely applied to inform
    funding decisions and to
  • 1:03 - 1:08
    determine the availability of career
    opportunities. So, those of you who are
  • 1:08 - 1:14
    working in science or have had a look into
    the science system might agree with that.
  • 1:14 - 1:19
    And we want to understand evaluative
    bibliometrics as algorithmic science
  • 1:19 - 1:28
    evaluation instruments to highlight some
    things that also occur with other
  • 1:28 - 1:38
    algorithmic instruments of evaluation. And
    so we’re going to start with a quote from
  • 1:38 - 1:45
    a publication in 2015 which reads “As the
    tyranny of bibliometrics tightens its
  • 1:45 - 1:49
    grip, it is having a disastrous effect on
    the model of science presented to young
  • 1:49 - 1:59
    researchers.” We have heard Hanno’s talk
    already, and he’s basically also
  • 1:59 - 2:07
    talking about problems in the science
    system and how reputation is shaped by the
  • 2:07 - 2:14
    indicators. And the question is, is
    bibliometrics the bad guy here? If you
  • 2:14 - 2:19
    speak of ‘tyranny of bibliometrics’, who
    is the actor doing this? Or are maybe
  • 2:19 - 2:25
    bibliometricians the problem? We want to
    contextualize our talk into the growing
  • 2:25 - 2:30
    movement of Reflexive Metrics. So those
    who are doing science studies, social
  • 2:30 - 2:35
    studies of science, scientometrics and
    bibliometrics. The movement of Reflexive
  • 2:35 - 2:42
    Metrics. So the basic idea is to say:
    “Okay, we have to accept accountability if
  • 2:42 - 2:46
    we do bibliometrics and scientometrics.”
    We have to understand the effects of
  • 2:46 - 2:54
    algorithmic evaluation on science, and we
    will try not to be the bad guy. And the
  • 2:54 - 3:04
    main mediator of the science evaluation
    which is perceived by the researchers is
  • 3:04 - 3:10
    the algorithm. I will hand over the
    microphone to… or I will not hand over the
  • 3:10 - 3:14
    microphone but I will hand over the talk
    to Teresa. She’s going to talk about
  • 3:14 - 3:20
    "Datafication of Scientific Evaluation".
  • 3:20 - 3:24
    Teresa Isigkeit: Okay. I hope you can
    hear me. No? Yes? Okay.
  • 3:24 - 3:26
    Judith: mumbling
  • 3:26 - 3:29
    When we think about the science system
    what do we expect?
  • 3:29 - 3:34
    What can society expect
    from a scientific system?
  • 3:34 - 3:38
    In general, we would say
    reliable and truthful knowledge,
  • 3:38 - 3:42
    that is scrutinized by
    the scientific community.
  • 3:42 - 3:44
    So where can we find this knowledge?
  • 3:44 - 3:47
    Normally in publications.
  • 3:47 - 3:52
    So with these publications,
    can we actually say
  • 3:52 - 3:59
    whether science is good or bad? Or is
    some science better than other science?
  • 3:59 - 4:03
    In the era of
    digital publication databases,
  • 4:03 - 4:07
    there are big datasets of publications.
  • 4:07 - 4:12
    And these are used to
    evaluate and calculate
  • 4:12 - 4:17
    the quality of scientific output.
  • 4:17 - 4:23
    So in general, with this metadata
    we can tell you
  • 4:23 - 4:26
    who is the author of a publication,
  • 4:26 - 4:30
    where is the home institution
    of this author,
  • 4:30 - 4:38
    or which types of citations are in
    the bibliographic information.
  • 4:38 - 4:45
    This is used in the calculation
    of bibliometric indicators.
  • 4:45 - 4:52
    For example if you take the
    journal impact factor,
  • 4:52 - 4:58
    which is a citation based indicator,
    you can compare different journals.
  • 4:58 - 5:04
    And maybe say which journals
    are performing better than others
  • 5:04 - 5:09
    or if a journal’s impact factor has increased or
    decreased over the years.
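
For readers who want to see the mechanics: a minimal sketch of how the two-year journal impact factor is conventionally computed. The journal and all numbers below are invented for illustration.

        # Minimal sketch of the two-year journal impact factor (JIF).
        def impact_factor(citations_in_year, citable_items_prev_two_years):
            # JIF for year Y: citations received in year Y to items published
            # in Y-1 and Y-2, divided by the number of citable items from
            # those two years.
            return citations_in_year / citable_items_prev_two_years

        # Hypothetical journal: 320 citations in 2017 to its papers from
        # 2015 and 2016, which together contained 115 citable items.
        print(round(impact_factor(320, 115), 2))  # -> 2.78
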
  • 5:09 - 5:15
    Another example would be the
    Hirsch-Index for individual scientists,
  • 5:15 - 5:23
    which is also widely used when
    scientists apply for jobs. So they put
  • 5:23 - 5:28
    these numbers in their CVs and supposedly
    this tells you something about the quality
  • 5:28 - 5:36
    of research those scientists are
    conducting. With the availability of the
  • 5:36 - 5:46
    data we can see an increase in its usage.
    And in a scientific environment in which
  • 5:46 - 5:52
    data-driven science is established,
    decisions in science regarding
  • 5:52 - 6:04
    hiring or funding heavily rely on these
    indicators. There’s maybe a naive belief
  • 6:04 - 6:12
    that these indicators that are data-driven
    and rely on data that is collected in the
  • 6:12 - 6:27
    database are more objective metrics that
    we can use. So here's a quote by Rieder
  • 6:27 - 6:32
    and Simon: “In this brave new world trust
    no longer resides in the integrity of
  • 6:32 - 6:39
    individual truth-tellers or the veracity
    of prestigious institutions, but is placed
  • 6:39 - 6:44
    in highly formalized procedures enacted
    through disciplined self-restraint.
  • 6:44 - 6:53
    Numbers cease to be supplements.” So we
    see a change from an evaluation system that
  • 6:53 - 7:00
    is relying on expert knowledge to a system
    of algorithmic science evaluation. In this
  • 7:00 - 7:06
    change there’s a belief in a
    depersonalization of the system and the
  • 7:06 - 7:15
    perception of algorithms as the rule of
    law. So when looking at the interaction
  • 7:15 - 7:26
    between the algorithm and scientists we
    can tell that this relationship is not as
  • 7:26 - 7:35
    easy as it seems. Algorithms are not in
    fact objective. They carry social meaning
  • 7:35 - 7:43
    and human agency. They are used to
    construct a reality and algorithms don’t
  • 7:43 - 7:48
    come naturally. They don’t grow on trees,
    ready to be picked by scientists and people
  • 7:48 - 7:55
    who evaluate the scientific system, so we
    have to be reflective and think about
  • 7:55 - 8:05
    which social meanings the algorithm holds.
    So when there is a code that the algorithm
  • 8:05 - 8:11
    uses, there is a subjective meaning in
    this code, and there is agency in this
  • 8:11 - 8:17
    code, and you can’t just say, oh, this is
    a perfect construction of the reality of
  • 8:17 - 8:22
    the scientific system. So the belief that this
    tells you more about the quality of
  • 8:22 - 8:32
    research is not well-founded. So when
    you think about the example of citation
  • 8:32 - 8:37
    counts, the algorithm reads the
    bibliographic information of a publication
  • 8:37 - 8:47
    from the database. So scientists, they
    cite papers that relate to their studies.
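
A minimal sketch of what such citation counting amounts to, together with the Hirsch index mentioned earlier. The toy "database" and all paper IDs are invented for illustration; the point is that every entry in a reference list counts the same.

        from collections import Counter

        # Toy citation database: each record lists the papers a publication cites.
        records = {
            "paperA": ["paperX", "paperY"],
            "paperB": ["paperX", "paperZ"],
            "paperC": ["paperX"],
            "paperD": ["paperY", "paperX"],
        }

        # The indicator simply tallies appearances in reference lists; a passing
        # mention counts exactly as much as a citation a whole study builds on.
        citation_counts = Counter(ref for refs in records.values() for ref in refs)
        print(citation_counts)  # Counter({'paperX': 4, 'paperY': 2, 'paperZ': 1})

        def h_index(counts):
            # Hirsch index: largest h such that h papers have at least h citations.
            ranked = sorted(counts, reverse=True)
            return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

        # A hypothetical author of paperX and paperY would get h = 2 here.
        print(h_index([citation_counts["paperX"], citation_counts["paperY"]]))  # -> 2
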
  • 8:47 - 8:56
    But we don’t actually know which of these
    citations are more meaningful than others,
  • 8:56 - 9:01
    so they’re not as easily comparable. But
    the algorithms give you the belief they
  • 9:01 - 9:11
    are, so relevance is not as easily put
    into an algorithm, and there are different
  • 9:11 - 9:19
    types of citations. So the scientists
    perceive this use of the algorithms also
  • 9:19 - 9:25
    as a powerful instrument. And so the
    algorithm has some sway over the
  • 9:25 - 9:30
    scientists because they rely so much on
    those indicators to further their careers,
  • 9:30 - 9:38
    to get a promotion, or get funding for
    their next research projects. So we have a
  • 9:38 - 9:43
    reciprocal relationship between the
    algorithm and the scientists, and this
  • 9:43 - 9:52
    creates a new construction of reality. So
    we can conclude that governance by
  • 9:52 - 9:59
    algorithms leads to behavioral adaptation
    in scientists, and one of these examples
  • 9:59 - 10:08
    that uses the Science Citation Index will
    be given by Franziska.
  • 10:08 - 10:13
    Franziska Sörgel: Thanks for the
    handover! Yes, let me start.
  • 10:13 - 10:16
    I’m focusing on reputation
    and authorship as you can see
  • 10:16 - 10:21
    on the slide, and first let me
    start with a quote
  • 10:21 - 10:27
    by Eugene Garfield, which says: “Is it
    reasonable to assume that if I cite a
  • 10:27 - 10:33
    paper that I would probably be interested
    in those papers which subsequently cite it
  • 10:33 - 10:39
    as well as my own paper. Indeed, I have
    observed on several occasions that people
  • 10:39 - 10:45
    preferred to cite the articles I had cited
    rather than cite me! It would seem to me
  • 10:45 - 10:51
    that this is the basis for the building up
    of the ‘logical network’ for the Citation
  • 10:51 - 11:02
    Index service.” So, actually, this Science
    Citation Index which is described here was
  • 11:02 - 11:08
    mainly developed in order to solve the
    problems of information retrieval. Eugene
  • 11:08 - 11:16
    Garfield, the founder of the Science
    Citation Index – SCI for short – noted or
  • 11:16 - 11:22
    began to note a huge interest in
    reciprocal publication behavior. He
  • 11:22 - 11:27
    recognized the increasing interest in it as a
    strategic instrument to exploit
  • 11:27 - 11:33
    intellectual property. And indeed, the
    interest in the SCI – and its data –
  • 11:33 - 11:39
    successively became more relevant within
    the disciplines, and its usage extended.
  • 11:39 - 11:46
    Later, [Derek J.] de Solla Price, another
    social scientist, called for
  • 11:46 - 11:53
    better research on the topic, which at the
    time also pointed to a crisis in science,
  • 11:53 - 11:59
    and stated: “If a paper was cited once,
    it would get cited again and
  • 11:59 - 12:05
    again, so the main problem was that the
    rich would get richer”, which is also
  • 12:05 - 12:12
    known as the “Matthew Effect”. Finally,
    the SCI and its use turned into a system
  • 12:12 - 12:18
    which was and still is used as a
    reciprocal citation system, and became a
  • 12:18 - 12:25
    central and global actor. Once a paper was
    cited, the probability it was cited again
  • 12:25 - 12:31
    was higher, and it would even extend its
    own influence on a certain topic within
  • 12:31 - 12:38
    the scientific field. So it influenced which
    articles people would read
  • 12:38 - 12:49
    and which topics or subjects they would
    do research on. So this phenomenon gave
  • 12:49 - 12:59
    rise to an instrument for disciplining
    science and created power structures.
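
A minimal simulation of this "rich get richer" dynamic in the spirit of de Solla Price's cumulative advantage. The number of papers, the number of citation events, and the random seed are arbitrary choices for illustration.

        import random

        # Each new citation goes to an existing paper with probability
        # proportional to the citations it already has (plus one, so
        # uncited papers still have a chance).
        random.seed(1)
        citations = [0] * 20           # start with 20 uncited papers
        for _ in range(500):           # 500 citation events arrive over time
            weights = [c + 1 for c in citations]
            target = random.choices(range(len(citations)), weights=weights)[0]
            citations[target] += 1

        # The counts come out strongly unequal even though every paper
        # started out identical -- no difference in quality was modelled.
        print(sorted(citations, reverse=True))
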
  • 12:59 - 13:05
    Let me show you one example which is
    closely connected to this phenomenon
  • 13:05 - 13:11
    I just told you about – and I don’t know
    if here in this room there are any
  • 13:11 - 13:19
    astronomers or physicists?
    Yeah, there are few, okay.
  • 13:19 - 13:25
    That’s great, actually.
    So in the next slide, here,
  • 13:25 - 13:33
    we have a table with a time
    window from 2010 to 2016, and social
  • 13:33 - 13:41
    scientists from Berlin found out that the
    co-authorship within the field of physics
  • 13:41 - 13:51
    increased by 58 per year in this
    time window. So this is actually already
  • 13:51 - 13:56
    very high, but they also found another
    very extreme case. They found one paper
  • 13:56 - 14:07
    which had around 7,000 words and listed
    about 5,000 authors. So, on
  • 14:07 - 14:15
    average, the contribution of each
    scientist or researcher of this paper who
  • 14:15 - 14:29
    was mentioned was 1.1 words. Sounds
    strange, yeah. And so of course you have
  • 14:29 - 14:35
    to see this in a certain context, and
    maybe we can talk about this later on,
  • 14:35 - 14:41
    because it has to do with the ATLAS particle
    detector, which requires high maintenance
  • 14:41 - 14:46
    and stuff. But still, the number of
    authors per paper, and you can see this
  • 14:46 - 14:53
    regardless of which scientific field we are
    talking about, has generally increased in
  • 14:53 - 15:05
    recent years. It remains a problem
    especially for the reputation, obviously.
  • 15:05 - 15:12
    It remains a problem that there is such
    high pressure on researchers nowadays.
  • 15:12 - 15:20
    Still, of course, we have ethics and
    research requires standards of
  • 15:20 - 15:26
    responsibility. And for example there are
    several codes, but there’s one
  • 15:26 - 15:31
    here on the slide: the “Australian Code
    for the Responsible Conduct of Research”
  • 15:31 - 15:37
    which says: “The right to authorship is
    not tied to position or profession and
  • 15:37 - 15:41
    does not depend on whether the
    contribution was paid for or voluntary.
  • 15:41 - 15:46
    It is not enough to have provided
    materials or routine technical support,
  • 15:46 - 15:51
    or to have made the measurements
    on which the publication is based.
  • 15:51 - 15:55
    Substantial intellectual involvement
    is required.”
  • 15:55 - 16:03
    So yeah, this could be one rule
    to work by, to follow.
  • 16:03 - 16:08
    And still we have this problem
    of reputation which remains,
  • 16:08 - 16:11
    and here I hand over to Judith again.
  • 16:11 - 16:20
    Judith: Thank you. So we’re going to speak
    about strategic citation now. So if you
  • 16:20 - 16:30
    put this point of reputation like that,
    you may say: So the researcher does find
  • 16:30 - 16:36
    something in his or her research, and
    addresses the publication
  • 16:36 - 16:40
    describing it to the community. And the
    community, the scientific community
  • 16:40 - 16:46
    rewards the researcher with reputation.
    And now the algorithm, which is like
  • 16:46 - 16:55
    perceived to be a new thing, is mediating
    the visibility of the researcher’s results
  • 16:55 - 17:01
    to the community, and is also mediating
    the rewards – the career opportunities or
  • 17:01 - 17:05
    the funding decisions etc. And what
    happens now and what is plausible to
  • 17:05 - 17:10
    happen is that the researcher addresses
    his or her research also to the algorithm
  • 17:10 - 17:21
    in terms of citing those who are evaluated
    by the algorithm, whom he or she wants to support,
  • 17:21 - 17:29
    and also in terms of strategic keywording
    etc. And that this is the only thing which
  • 17:29 - 17:34
    is new might be one perspective on
    that. So the one new thing: the algorithm
  • 17:34 - 17:41
    is addressed as a recipient of scientific
    publications. And it is like far-fetched
  • 17:41 - 17:46
    to discriminate between so-called
    ‘invisible colleges’ and ‘citation cartels’.
  • 17:46 - 17:51
    What do I mean by that? So ‘invisible
    colleges’ is a term to say: “Okay, people
  • 17:51 - 17:56
    are citing each other. They do not work
    together in a co-working space, maybe, but
  • 17:56 - 18:01
    they do research on the same topic.” And
    it’s only plausible that they cite each
  • 18:01 - 18:07
    other. And if we look at citation networks
    and find people citing each other, that
  • 18:07 - 18:13
    does not necessarily have to be something
    bad. And we also have people who are
  • 18:13 - 18:19
    concerned that there might be like
    ‘citation cartels’. So researchers citing
  • 18:19 - 18:27
    each other not because their
    research topics are closely connected, but
  • 18:27 - 18:36
    to support each other in their career
    prospects. And people do try to
  • 18:36 - 18:41
    discriminate those invisible colleges from
    citation cartels ex post by looking at
  • 18:41 - 18:46
    publication metadata networks, and find
    that problematic. And we have a discourse on
  • 18:46 - 18:58
    that in the bibliometrics community. I
    will show you some short quotes of how people
  • 18:58 - 19:05
    talk about those citation cartels. So e.g.
    Davis in 2012 said: “George Franck warned
  • 19:05 - 19:09
    us on the possibility of citation cartels
    – groups of editors and journals working
  • 19:09 - 19:14
    together for mutual benefit.” So we have
    heard about their journal impact factors,
  • 19:14 - 19:23
    so they... it’s believed that editors talk
    to each other: “Hey you cite my journal,
  • 19:23 - 19:27
    I cite your journal, and we both
    will boost our impact factors.”
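
A deliberately naive sketch of how one might screen journal-level citation data for such reciprocal flows. The journal names, the counts, and the 20% threshold are invented, and real screening for citation stacking is far more involved; a pattern flagged this way still cannot tell a legitimate "invisible college" from a cartel.

        # cites[(a, b)] = number of references in journal a pointing to journal b
        cites = {
            ("J.Foo", "J.Bar"): 480, ("J.Bar", "J.Foo"): 510,
            ("J.Foo", "J.Baz"): 30,  ("J.Baz", "J.Foo"): 12,
        }

        outgoing = {}
        for (src, _), n in cites.items():
            outgoing[src] = outgoing.get(src, 0) + n

        THRESHOLD = 0.2  # arbitrary: over 20% of outgoing references to one partner
        for (a, b), n in cites.items():
            if a >= b:                    # look at each pair only once
                continue
            share_ab = n / outgoing[a]
            share_ba = cites.get((b, a), 0) / outgoing.get(b, 1)
            if share_ab > THRESHOLD and share_ba > THRESHOLD:
                print(f"flag: {a} <-> {b} exchange {share_ab:.0%} / {share_ba:.0%} "
                      f"of their outgoing references")
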
  • 19:27 - 19:33
    So we have people trying
    to detect those cartels,
  • 19:33 - 19:37
    and Mongeon et al. wrote that:
    “We have little knowledge
  • 19:37 - 19:41
    about the phenomenon itself and
    about where to draw the line between
  • 19:41 - 19:46
    acceptable and unacceptable behavior.” So
    we are having like moral discussions,
  • 19:46 - 19:54
    about research ethics. And also we find
    discussions about the fairness of the
  • 19:54 - 19:58
    impact factors. So Yang et al. wrote:
    “Disingenuously manipulating impact factor
  • 19:58 - 20:03
    is the significant way to harm the
    fairness of the impact factor.” And that’s
  • 20:03 - 20:10
    a very interesting thing I think, because
    why should an indicator be fair? So the...
  • 20:10 - 20:16
    To believe that we have a fair measurement
    of scientific quality, relevance, and rigor
  • 20:16 - 20:22
    in one single number, like the
    journal impact factor, is not a small
  • 20:22 - 20:30
    thing to say. And also we have a call for
    detection and punishment. So Davis also
  • 20:30 - 20:34
    wrote: “If disciplinary norms and decorum
    cannot keep this kind of behavior at bay,
  • 20:34 - 20:40
    the threat of being delisted from the JCR
    may be necessary.” So we find the moral
  • 20:40 - 20:44
    concerns on right and wrong. We find the
    evocation of the fairness of indicators
  • 20:44 - 20:51
    and we find the call for detection and
    punishment. When I first heard about that
  • 20:51 - 20:57
    phenomenon of citation cartels which is
    believed to exist, I had something in mind
  • 20:57 - 21:04
    which sounded... or it sounded like
    familiar to me. Because we have a similar
  • 21:04 - 21:11
    information retrieval discourse or a
    discourse about ranking and power in a
  • 21:11 - 21:19
    different area of society: in search
    engine optimization. So I found a quote by
  • 21:19 - 21:27
    Page et al., who developed the PageRank
    algorithm – Google’s ranking algorithm –
  • 21:27 - 21:33
    in 1999, which has changed a lot since
    then. But they also wrote a paper about the
  • 21:33 - 21:42
    social implications of the information
    retrieval by the PageRank as an indicator.
  • 21:42 - 21:46
    And wrote that: “These types of
    personalized PageRanks are virtually
  • 21:46 - 21:50
    immune to manipulation by commercial
    interests. ... For example fast updating
  • 21:50 - 21:54
    of documents is a very desirable feature,
    but it is abused by people who want to
  • 21:54 - 22:01
    manipulate the results of the search
    engine.” And that was important to me to
  • 22:01 - 22:09
    read because we also have a narrative
    of abuse, of manipulation, the perception
  • 22:09 - 22:14
    that that might be fair, so we have a fair
    indicator and people try to cheat it.
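
A toy power-iteration sketch of the original PageRank idea, which also shows why planting links (for example in guestbooks) could lift a page's score. The link graph and the damping factor are illustrative only; Google's production ranking has long since moved beyond this.

        def pagerank(links, damping=0.85, iterations=50):
            pages = list(links)
            rank = {p: 1.0 / len(pages) for p in pages}
            for _ in range(iterations):
                new_rank = {}
                for p in pages:
                    # A page inherits rank from every page that links to it,
                    # split evenly over that page's outgoing links.
                    inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
                    new_rank[p] = (1 - damping) / len(pages) + damping * inbound
                rank = new_rank
            return rank

        # "spammer" plants its link in "guestbook" and harvests rank from it.
        links = {
            "news":      ["blog"],
            "blog":      ["news", "spammer"],
            "guestbook": ["spammer"],
            "spammer":   ["guestbook"],
        }
        for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
            print(f"{page:10s} {score:.3f}")  # the spammed-in page ends up on top
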
  • 22:14 - 22:22
    And then, in the early 2000s,
    I recall having a private website
  • 22:22 - 22:25
    with a public guest book and
    getting link spam from people
  • 22:25 - 22:27
    who wanted to boost their
    Google PageRanks,
  • 22:27 - 22:33
    and shortly afterwards Google
    decided to punish link spam in their
  • 22:33 - 22:38
    ranking algorithm. And then I got lots of
    emails of people saying: “Please delete my
  • 22:38 - 22:44
    post from your guestbook because Google’s
    going to punish me for that.” We may say
  • 22:44 - 22:52
    that this search engine optimization
    discussion is now somehow settled and it’s
  • 22:52 - 22:58
    accepted that Google's ranking is useful.
    They have a secret algorithm, but it works
  • 22:58 - 23:05
    and that is why it’s widely used. Although
    the journal impact factor seems to be
  • 23:05 - 23:13
    transparent, it’s basically the same thing:
    it’s accepted as useful and thus
  • 23:13 - 23:17
    it's widely used. So the journal impact
    factor, the SCI and the like. We have
  • 23:17 - 23:25
    another analogy in that Google decides
    which SEO behavior is regarded as acceptable
  • 23:25 - 23:28
    and punishes those who act against the
    rules and thus holds an enormous amount of
  • 23:28 - 23:39
    power, which has lots of implications and
    led to the spread of content management
  • 23:39 - 23:45
    systems, for example, with search engine
    optimization plugins etc. We also have
  • 23:45 - 23:53
    this power concentration in the hands of
    Clarivate (formerly Thomson Reuters), who
  • 23:53 - 23:59
    host the database for the journal impact
    factor. And they decide on who’s going to
  • 23:59 - 24:05
    be indexed in the Journal Citation
    Reports and how the algorithm is, in
  • 24:05 - 24:12
    detail, implemented in their databases. So
    we have this power concentration there
  • 24:12 - 24:22
    too, and I think if we think about this
    analogy we might come to interesting
  • 24:22 - 24:30
    thoughts but our time is running out so we
    are going to give a take-home message.
  • 24:30 - 24:35
    Tl;dr, we find that the scientific
    community reacts with codes of conduct to
  • 24:35 - 24:40
    a problem which is believed to exist. The
    strategic citation – we have database
  • 24:40 - 24:45
    providers which react with sanctions so
    journals are delisted from the Journal
  • 24:45 - 24:50
    Citation Reports to
  • 24:50 - 24:55
    punish them for citation stacking. And we
    have researchers and publishers who adapt
  • 24:55 - 25:05
    their publication strategies in reaction
    to this perceived algorithmic power. But
  • 25:05 - 25:12
    if we want to understand this as a problem
    we don’t have to only react to the
  • 25:12 - 25:19
    algorithm but we have to address the power
    structures. Who holds these instruments in
  • 25:19 - 25:24
    their hands? If we talk about
    bibliometrics as an instrument and we
  • 25:24 - 25:28
    should not only blame the algorithm – so
    #dontblamethealgorithm.
  • 25:28 - 25:33
    Thank you very much!
    applause
  • 25:38 - 25:44
    Herald: Thank you to Franziska, Teresa
    and Judith, or in the reverse order.
  • 25:45 - 25:48
    Thank you for shining a light on
    how science is actually seen
  • 25:48 - 25:51
    in its publications.
  • 25:51 - 25:52
    As I started off as well,
  • 25:52 - 25:56
    it’s more about
    scratching each other a little bit.
  • 25:56 - 25:58
    I have some questions here
    from the audience.
  • 25:58 - 26:00
    This is Microphone 2, please!
  • 26:00 - 26:05
    Mic2: Yes, thank you for this interesting
    talk. I have a question. You may be
  • 26:05 - 26:10
    familiar with the term ‘measurement
    dysfunction’, that if you provide a worker
  • 26:10 - 26:14
    with an incentive to do a good job based
    on some kind of metric then the worker
  • 26:14 - 26:20
    will start optimizing for the metric
    instead of trying to do a good job, and
  • 26:20 - 26:26
    this is kind of inevitable. So, don’t you
    see that maybe it could be treating the
  • 26:26 - 26:33
    symptoms if we just react with codes of
    conduct, tweaking algorithms or addressing
  • 26:33 - 26:37
    power structures. But instead we need to
    remove the incentives that lead to this
  • 26:37 - 26:44
    measurement dysfunction.
    Judith: I would refer to this phenomenon
  • 26:44 - 26:51
    as “perverse learning” – learning for the
    grades you get but not for your intrinsic
  • 26:51 - 27:01
    motivation to learn something. We observe
    that in the science system. But if we only
  • 27:01 - 27:10
    adapt the algorithm, that is, take away the
    incentives, it would be like you wouldn’t
  • 27:10 - 27:20
    want to evaluate research at all, which you
    probably cannot want to do. But to whom would
  • 27:20 - 27:33
    you address this call or this demand, so
    “please do not have indicators” or… I give
  • 27:33 - 27:39
    the question back to you. laughs
    Herald: Okay, questions from the audience
  • 27:39 - 27:46
    out there on the Internet, please. Your
    mic is not working? Okay, then I go to
  • 27:46 - 27:52
    Microphone 1, please Sir.
    Mic1: Yeah, I want to have a provocative
  • 27:52 - 27:57
    thesis. I think the fundamental problem is
    not how these things are gamed but the
  • 27:57 - 28:01
    fundamental problem is that we think
    the impact factor is a useful measurement
  • 28:01 - 28:05
    for the quality of science.
    Because I think it’s just not.
  • 28:05 - 28:07
    applause
  • 28:10 - 28:12
    Judith: Ahm.. I..
    Mic 1: I guess that was obvious...
  • 28:12 - 28:13
    Judith: Yeah, I would not say
  • 28:13 - 28:18
    that the journal impact factor is
    a measurement of scientific quality
  • 28:18 - 28:24
    because no one has like
    a definition of scientific quality.
  • 28:24 - 28:28
    So what I can observe is only
    that people believe this journal impact factor
  • 28:28 - 28:37
    to reflect some quality.
    Maybe they are chasing a ghost but I…
  • 28:37 - 28:42
    whether that’s a valid measure
    is not so important to me,
  • 28:42 - 28:45
    even if it were a relevant
    or a valid measure,
  • 28:45 - 28:52
    it would concern me
    how it affects science.
  • 28:53 - 28:56
    Herald: Okay, question from Microphone 3
    there. Please.
  • 28:56 - 28:59
    Mic3: Thanks for the interesting talk.
    I have a question about
  • 28:59 - 29:04
    the 5,000 authors paper.
    Was that same paper published
  • 29:04 - 29:09
    five thousand times or was it one paper
    with a ten-page title page?
  • 29:10 - 29:15
    Franziska: No, it was one paper ...
  • 29:15 - 29:20
    ... counting more than 7,000 words.
    And the authorship,
  • 29:20 - 29:24
    so authors and co-authors,
    were more than 5,000.
  • 29:24 - 29:31
    Mic3: Isn’t it obvious
    that this is a fake?
  • 29:31 - 29:35
    Franziska: Well that’s
    what I meant earlier
  • 29:35 - 29:44
    when saying, you have to see this within
    its context. So physicists are working
  • 29:44 - 29:52
    with this ATLAS, this detector
    system. As there were some physicists in
  • 29:52 - 30:02
    the audience they probably do know how
    this works. I do not. But as they claim
  • 30:02 - 30:08
    it’s so much work to work with this, and
    it, as I said, requires such high
  • 30:08 - 30:19
    maintenance it’s... They obviously have
    yeah...
  • 30:19 - 30:22
    Mic3: So everybody who contributed was
    listed?
  • 30:22 - 30:29
    Judith: Exactly, that’s it. And if this is
    ethically correct or not, well, this is
  • 30:29 - 30:34
    something which needs to be discussed,
    right? This is why we have this talk, as
  • 30:34 - 30:40
    we want to make this transparent, and
    contribute it to an open discussion.
  • 30:40 - 30:45
    Herald: Okay, I’m sorry guys. I have to
    cut off here because our transmission out
  • 30:45 - 30:49
    there in space is coming to an end.
    I suggest that you guys
  • 30:49 - 30:53
    find each other somewhere,
    maybe in the tea house or...
  • 30:53 - 30:55
    Judith: Sure. We are around, we are here.
  • 30:55 - 30:58
    Herald: You are around. I would love to
    have lots of applause for these ladies,
  • 30:58 - 31:03
    for it really sheds light on
    how these algorithms
  • 31:03 - 31:05
    are or are not working. Thank you very much!
  • 31:05 - 31:07
    Judith: Thank you!
  • 31:07 - 31:22
    postroll music
  • 31:22 - 31:27
    subtitles created by c3subtitles.de
    in the year 2018