[34c3 intro]

Hanno Böck: Yeah, so many of you probably know me from doing things around IT security, but I'm going to surprise you by almost not talking about IT security today. But I'm going to ask the question "Can we trust the scientific method?". I want to start with quite a simple example. When we do science, we start with a theory and then we try to test whether it's true, right? I said I'm not going to talk about IT security, but I chose an example from IT security, or kind of from IT security. There was a post on Reddit a while ago, a picture from some book which claimed that a malachite crystal can protect you from computer viruses. Which, to me, doesn't sound very plausible, right? These are crystals, and if you put them on your computer, this book claims, that protects you from malware. But of course, if we really want to know, we could do a study on this. And if you say people don't do studies on crazy things: that's wrong. People do studies on homeopathy and all kinds of crazy things that are completely implausible.

So we can do a study on this, and what we will do is a randomized controlled trial, which is kind of the gold standard for testing these kinds of things. This is our question: "Do malachite crystals prevent malware infections?" And our study design is: we take a group of maybe 20 computer users and split them randomly into two groups. One group gets one of these crystals, and we tell them: "Put it on your desk or on your computer." The other group is our control group. That's very important, because if we want to know whether the crystals help, we need another group to compare against.
And to rule out any kind of placebo effect, we give the control group a fake malachite crystal, so we can compare the two groups against each other. Then we wait for maybe six months and check how many malware infections they had.

Now, I didn't do that study, but I simulated it with a Python script, and given that I don't believe this theory is true, I simply simulated it as random data. I'm not going to go through the whole script; I'm just generating data, assuming each person can have between 0 and 3 malware infections, completely at random, and then I compare the two groups. Then I calculate something called a p-value, which is a very common thing in science whenever you do statistics. A p-value is, it's a bit technical, but it's the probability that you would get a result like this if there were no effect at all. Put another way: in an idealized world, if you have 20 results, one of them is a false positive, meaning one of them says something happens although it doesn't. In many fields of science a p-value of 0.05 is considered significant, and that corresponds to exactly those twenty studies: one error in twenty studies, but, as I said, under idealized conditions.

And since it's a script and I can run it in less than a second, I just ran it twenty times instead of once. So here are my 20 simulated studies, and most of them don't look very interesting: there are a few random variations, of course, but nothing significant. Except if you look at this one study: it says the people with the malachite crystal had on average 1.8 malware infections and the people with the fake crystal had 0.8. So actually the crystal made it worse. But this result is also significant, because it has a p-value of 0.03. So of course we can publish that, assuming I had really done these studies.

[Applause]

Böck: And the other studies we just forget about. I mean, they were not interesting, right, and who cares about non-significant results...
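A minimal sketch of a simulation along these lines, assuming two groups of 10 users, 0 to 3 infections drawn uniformly at random, and Welch's t-test for the p-value (the speaker's actual script is not shown in the talk):

    # Simulate 20 "studies" with purely random data and see how often one of
    # them comes out looking significant.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def simulated_study(n_per_group=10):
        # 0-3 malware infections per user, uniformly at random, no real effect
        crystal = rng.integers(0, 4, size=n_per_group)
        fake = rng.integers(0, 4, size=n_per_group)
        # p-value: probability of a difference at least this large under no effect
        p = ttest_ind(crystal, fake, equal_var=False).pvalue
        return crystal.mean(), fake.mean(), p

    results = [simulated_study() for _ in range(20)]
    for crystal_mean, fake_mean, p in results:
        print(f"crystal {crystal_mean:.1f}   fake {fake_mean:.1f}   p = {p:.2f}")

    # With a 0.05 threshold, the chance that at least one of 20 null studies
    # comes out "significant" is 1 - 0.95**20, roughly 64%.
    print(sum(p < 0.05 for *_, p in results), "of 20 null studies significant at 0.05")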
Okay, so you have just seen that I created a significant result out of random data. And that's concerning, because in science you can really do that. This phenomenon is called publication bias. What's happening here is that you do studies, and if they get a positive result, meaning you see an effect, you publish them, and if there's no effect, you just forget about them. We learned earlier that a p-value threshold of 0.05 means 1 in 20 studies is a false positive, but you usually don't see the studies that are not significant, because they don't get published. And you may wonder: okay, what's stopping a scientist from doing exactly this? What's stopping a scientist from doing so many experiments that one of them looks like a real result although it's just a random fluke? And the disconcerting answer is: usually nothing.

And this is not just a theoretical example. I want to give you an example that had quite some impact and that has been researched very well, and that is research on antidepressants, so-called SSRIs. In 2008 there was a study, and the interesting situation here was that the US Food and Drug Administration, which is the authority that decides whether a medical drug can be put on the market, had knowledge of all the studies that had been done to register these medications. Some researchers looked at that and compared it with what had been published. They found there were 38 studies that saw that these medications had a real effect, real improvements for patients, and from those 38 studies, 37 got published. But there were also 36 studies that said these medications don't really have any effect, that they are not really better than a placebo, and out of those only 14 got published. And even of those 14 there were 11 where the researchers said: okay, they have spun the result in a way that makes it sound like these medications do something.
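A quick back-of-the-envelope illustration of what those numbers do to the published record, using only the figures quoted above:

    # Registered vs. published SSRI trials, with the numbers from the talk
    positive_total, positive_published = 38, 37
    negative_total, negative_published = 36, 14

    print("positive share of all registered trials:",
          round(positive_total / (positive_total + negative_total), 2))             # 0.51
    print("positive share of published trials:",
          round(positive_published / (positive_published + negative_published), 2)) # 0.73

Roughly half of the registered trials were positive, but almost three quarters of the published ones were.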
But there were also a bunch of studies that were simply not published because they had a negative result. And it's clear that if you look only at the published studies and ignore the unpublished studies with negative results, then these medications look much better than they really are. Unlike the earlier example, there is a real effect from antidepressants, but they are not as good as people believed in the past.

So we've learned that, in theory, publication bias lets you create a result out of nothing. But if you're a researcher with a theory that isn't true and you really want to publish something about it, that's not very efficient, because on average you have to do 20 studies to get one of these random results that looks like a real result. There are more efficient ways to get from nothing to a result. If you're doing a study, there are a lot of micro-decisions you have to make. For example, you may have dropouts from your study, where people move somewhere else or you can no longer reach them, so they are no longer part of your study, and there are different ways you can handle that. Then you may have corner-case results where you're not entirely sure: is this an effect or not, how do you decide, how exactly do you measure? You may also be looking at different things; maybe there are different tests you can do on people, and you may control for certain variables: do you analyze men and women separately, do you separate them by age? So there are many decisions you can make while doing a study, and of course each of these decisions has a small effect on the result. And it may very often be that just by trying all the combinations you will get a p-value that looks statistically significant, although there's no real effect.
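A small sketch of that "try all the combinations" effect, under assumed details (three measured outcomes, an optional split by one covariate, Welch's t-test): the data is pure noise, yet reporting only the best-looking of nine analyses makes a "significant" finding far more likely than the nominal 5%.

    # Purely random data, but many analysis choices to pick from
    import itertools
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n = 40
    group = rng.integers(0, 2, n)        # treatment vs. control, no real effect
    sex = rng.integers(0, 2, n)          # a covariate we might "control for"
    outcomes = rng.normal(size=(3, n))   # three different things we could measure

    p_values = []
    for outcome, subgroup in itertools.product(range(3), (None, 0, 1)):
        mask = np.ones(n, bool) if subgroup is None else (sex == subgroup)
        a = outcomes[outcome][mask & (group == 1)]
        b = outcomes[outcome][mask & (group == 0)]
        p_values.append(ttest_ind(a, b, equal_var=False).pvalue)

    print("smallest p-value out of 9 analyses:", round(min(p_values), 3))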
So there's a term for this, called p-hacking, which means you keep adjusting your methods until you get a significant result. And I'd like to point out that this is usually not a scientist saying: "Okay, today I'm going to p-hack my result, because I know my theory is wrong but I want to show it's true." It's a subconscious process, because usually scientists believe in their theories. Honestly. They honestly think that their theory is true and that their research will show that. So they may subconsciously say: "Okay, if I analyze my data like this it looks a bit better, so I will do this." Subconsciously, they may p-hack themselves into getting a result that's not really there. And again we can ask: what is stopping scientists from p-hacking? And the concerning answer is the same: usually nothing.

So I came to the conclusion that the scientific method is a way to create evidence for whatever theory you like, no matter whether it's true or not. You may say that's a pretty bold thing to say, and I'm saying it even though I'm not even a scientist; I'm just some hacker who, whatever... But I'm not alone in this. There's a paper by a famous researcher, John Ioannidis, called "Why most published research findings are false". He published it in 2005, and if you look at the title, he doesn't really question that most research findings are false; he only wants to give reasons why this is the case. He makes some very plausible assumptions, namely that many negative results don't get published and that there is some bias, and he comes to the very plausible conclusion that this is indeed the case. And this is not even very controversial. If you ask people who do what you could call science on science, or meta-science, who look at scientific methodology, they will tell you: "Yeah, of course that's the case." Some will even say: "Yeah, that's how science works, that's what we expect." But I find it concerning.
And if you take this seriously, it means: if you read about a study, say in a newspaper, the default assumption should be "that's not true", while we usually tend to assume the opposite. And if science is a method to create evidence for whatever you like, you can think about something really crazy, like: can people see into the future? Does our mind have some kind of extra perception with which we can sense things that will happen in an hour? There was a psychologist called Daryl Bem who thought that this is the case, and he published a study on it, titled "Feeling the Future". He did a lot of experiments where he did something, then something happened later, and he thought he had statistical evidence that what happened later influenced what happened earlier. I don't think that's very plausible, based on what we know about the universe, but yeah... and it was published in a real psychology journal.

A lot of things were wrong with this study. Basically, it's a very nice example of p-hacking, and there's even a book by Daryl Bem where he describes something that basically looks like p-hacking and says that's how you do psychology. But the study was absolutely in line with the existing standards in experimental psychology. And that is what a lot of people found concerning. If you can show that precognition is real, that you can see into the future, then what else can you show, and how can we trust our results? Psychology has debated this a lot in the past couple of years, so there's a lot of talk about the replication crisis in psychology. For many effects that psychologists simply thought were true, it turned out that when they tried to repeat the experiments, they couldn't get the same results, even though entire subfields were built on them.

I want to show you an example, one of the ones that is not discussed so much. There's a theory called moral licensing.
The idea is that if you do something good, or something you think is good, then afterwards you basically behave like an asshole, because you think: I already did something good, now I don't have to be so nice anymore. And there were some famous studies with the theory that when people consume organic food, they later become more judgmental, or less social, less nice to their peers. But just last week someone tried to replicate the original experiments. They tried it three times, with more subjects and better research methodology, and they totally couldn't find that effect. But what you've seen here is lots of media articles; I have not found a single article reporting that this could not be replicated. Maybe those will come, but yeah, this is just a very recent example.

But now I have a small warning for you, because you may think: "Yeah, these psychologists, that all sounds very fishy, and they even believe in precognition and whatever." But maybe your field is not much better; maybe you just don't know about it yet, because nobody has started replicating studies in your field. And there are other fields that have replication problems, some much worse. For example, the pharma company Amgen published something in 2012 where they said: we have tried to replicate cancer research, preclinical research, that is, stuff in a petri dish or animal experiments, so not drugs on humans but what happens before you develop a drug, and they were unable to replicate 47 out of 53 studies. And these were, they said, landmark studies, studies that had been published in the best journals. Now, there are a few problems with this publication, because they did not publish their replications; they did not tell us which studies it was that they could not replicate.
In the meantime, I think they have published three of these replications, but most of it remains a bit in the dark, which points to another problem: they said they did this in collaboration with the original researchers, and they only got that collaboration by agreeing that they would not publish the results. But it still sounds very concerning. And some fields don't have a replication problem simply because nobody is trying to replicate previous results; then you will just never know whether your results hold up.

So what can be done about all this? Fundamentally, I think the core issue is that the scientific process is tied together with its results: we do a study, and only afterwards do we decide whether it gets published, or we do a study, and only after we have the data do we decide how to analyze it. Essentially, we need to decouple the scientific process from its results, and one way of doing that is pre-registration. What you do there is: before you start a study, you register it in a public registry and say, "I'm going to do a study on this medication, or on this psychological effect, and this is how I'm going to do it", and later on people can check whether you really did that. And yeah, that's what I said. This is more or less standard practice in medical drug trials, and the summary is: it does not work very well, but it's better than nothing. The problem is mostly enforcement: people register a study and then don't publish it, and nothing happens to them, even though they are legally required to publish it. There are two campaigns I'd like to point out. There's the AllTrials campaign, started by Ben Goldacre, a doctor from the UK; they demand that every trial that is done on a medication should be published.
And there's also a project by the same guy, the COMPare project, where they check, for medical trials that were registered and later published, whether the researchers did what they had registered or whether they changed something in their protocol, and whether there was a reason for that or whether they just changed it to get a result they otherwise wouldn't have gotten.

Then again, these issues in medicine often get a lot of attention, and for good reasons, because if we have bad science in medicine, then people die; that's pretty immediate and pretty massive. But if you read about this, you always have to keep in mind that drug trials at least have pre-registration; most scientific fields don't bother doing anything like that. So whenever you hear something about, say, publication bias in medicine, you should always remember that the same thing happens in many fields of science, and usually nobody is doing anything about it. And particularly to this audience I'd like to say: there's currently a big trend of people from computer science wanting to revolutionize medicine, big data and machine learning, these things, which in principle is okay. But I know a lot of people in medicine are very worried about this, and the reason is that these computer science people don't have the scientific standards that people in medicine expect; they might say, "Yeah, we don't really need to do a study on this, it's obvious that this helps." That is worrying. I come from computer science, and I understand very well that people in medicine are worried about this.

So there's an idea that goes even further than pre-registration, and it's called registered reports. A couple of years ago some scientists wrote an open letter to the Guardian,
which was published there, and the idea is that you turn the scientific publication process upside down. If you want to do a study, the first thing you do with a registered report is submit your study design protocol to the journal, and the journal decides whether they will publish it before they see any result. That way you can prevent publication bias, and you prevent journals from publishing only the nice findings and ignoring the negative findings. Then you do the study and it gets published, but it gets published independent of what the result was.

And there are of course other things you can do to improve science. There's a lot of talk about sharing data, sharing code, sharing methods, because if you want to replicate a study it's of course easier if you have access to all the details of how the original study was done. Then you could say: okay, we could do large collaborations, because many studies are just too small; with a study of twenty people you just don't get a very reliable outcome. So maybe in many situations it would be better to get ten teams of scientists together and let them all do one big study, and then you can reliably answer a question. And some people also propose stricter statistical thresholds, because that p-value of 0.05 means practically nothing. There was recently a paper that argued for just moving the dot one place to the left, to 0.005, and that would already solve a lot of problems. And in physics, for example, they have something called five sigma, which is, I think, zero point, then five zeroes and a three, or something like that; so in physics they have much stricter statistical thresholds.
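As a point of reference for those thresholds, a minimal sketch (assuming SciPy is available) that converts a sigma level into the corresponding one-sided p-value; five sigma works out to roughly 0.0000003:

    # Convert a z / sigma threshold into a one-sided tail probability
    from scipy.stats import norm

    for sigma in (1.64, 2.58, 5):
        print(f"{sigma} sigma  ->  one-sided p = {norm.sf(sigma):.2e}")

    # 1.64 sigma -> ~5.0e-02  (the usual 0.05)
    # 2.58 sigma -> ~4.9e-03  (roughly the proposed 0.005)
    # 5 sigma    -> ~2.9e-07  (about 0.0000003, the particle-physics convention)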
Now, whatever scientific field you're working in, you might ask yourself: if we have statistical results, are they pre-registered in any way? Do we publish negative results, that is, we tested an effect and got nothing? And are there replications of all relevant results? I would say, if you answer all these questions with "no", which I think many people will, then you're not really doing science; what you're doing is the alchemy of our time.

[Applause]

Thanks.

Herald: Thank you very much.

Hanno: No, I have more, sorry, I have three more slides; that was not the finishing line. A big issue is also that there are bad incentives in science. A very standard way to evaluate the impact of science is citation counts: you say, if your scientific study is cited a lot then that is a good thing, and if your journal is cited a lot that is a good thing. That is, for example, the impact factor, but there are also other measurements. And universities like publicity, so if your study gets a lot of media reports, your press department likes you. These incentives tend to favor interesting results, but they don't favor correct results, and that is bad, because if we are realistic, most results are not that interesting; most results will be "Yeah, we had this interesting and counterintuitive theory, and it's totally wrong."

Then there's this idea that science is self-correcting. So if you confront scientists with these issues, with publication bias and p-hacking, surely they will immediately change; that's what scientists do, right? I want to quote something here, sorry, it's a bit long: "There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published." That sounds like publication bias. And it also says: "Significant results published in these fields are seldom verified by independent replication." So it seems there's a replication problem.
These wise words were said in 1959 by a statistician called Theodore Sterling. And because science is so self-correcting, in 1995 he complained again: his new article presented "evidence that published results of scientific investigations are not a representative sample of all scientific studies", and "these results also indicate that practice leading to publication bias has not changed over a period of 30 years". And here we are in 2018, and publication bias is still a problem. So if science is self-correcting, then it's pretty damn slow at correcting itself, right?

Finally, I would like to ask whether you're prepared for boring science, because ultimately, I think, we have a choice between what I would like to call TED-talk science and boring science.

[Applause]

With TED-talk science we get mostly positive, surprising and interesting results; we have large effects, many citations, lots of media attention, and you may get a TED talk about it. Unfortunately, it's usually not true. I would like to propose boring science as the alternative: mostly negative results, pretty boring, small effects, but it may be closer to the truth. And I would like to have boring science, but I know it's a pretty tough sell. Sorry, I didn't hear that. Yeah, thanks for listening.

[Applause]

Herald: Thank you.

Hanno: Two questions, or?

Herald: We don't have that much time for questions, three minutes, three minutes guys. Question one, shoot.

Mic: This isn't a question, but I just wanted to comment: Hanno, you missed out a very critical topic here, which is the use of Bayesian probability. You did conflate p-values with the scientific method, which gave the rest of your talk, I felt, a slightly unnecessary anti-science slant.
P-values aren't the be-all and end-all of the scientific method. A p-value is sort of calculating the probability that your data would occur given that the null hypothesis is true, whereas Bayesian probability would be calculating the probability that your hypothesis is true given the data, and more and more scientists are slowly starting to realize that this sort of method is probably a better way of doing science than p-values. So this is probably a third alternative to your proposal of boring science: doing Bayesian probability.

Hanno: Sorry, yeah, I agree with you; unfortunately I only had half an hour here.

Herald: Where are you going after this, like, where are we going after this lecture, can they find you somewhere at the bar?

Hanno: I know him...

Herald: You know, science is broken, but then scientists, it's a little bit like the next lecture that's actually waiting there, it's like: "you scratch my back and I scratch yours for publication".

Hanno: Maybe two more minutes?

Herald: One minute. Please go ahead.

Mic: Yeah, hi, thank you for your talk. I'm curious: you've raised ways we can address this assuming good actors, assuming people who want to do better science, where this happens out of ignorance or willful ignorance. What do we do about bad actors? For example, the medical community, drug companies: maybe they really like the idea of being profitably incentivized by these randomized controlled trials to make essentially a placebo appear to do something. How do we begin to address people trying to maliciously p-hack or maliciously abuse the pre-registration system or something like that?

Hanno: I mean, it's a big question, right? But I think if the standards confine you so much that there's not much room to cheat, that's a way out. And also, I don't think deliberate cheating is that much of a problem; I actually think the bigger problem is people who honestly believe that what they do is true.
Herald: Okay, one last, you sir, please?

Mic: So the value in science is often a count of publications, right? A count of citations and so on. So is it true that, to improve the situation you've described, the journals whose publications are available, which are, like, prospective, should impose higher standards? So the journals are the ones who must raise the bar; they should enforce publication of protocols before accepting papers, et cetera. Is it the journals who should do the work on that, or can we regular scientists also do something?

Hanno: I mean, you can publish in the journals that have better standards, right? There are journals that have these registered reports. But of course, as a single scientist it's always difficult, because you're playing in a system that has all these wrong incentives.

Herald: Okay guys, that's it, we have to shut down. Please. There is a reference, better science dot org, go there, and one last request: give a really warm applause!

[Applause]

[34c3 outro]

Subtitles created by c3subtitles.de in the year 2018. Join, and help us!