In recent years, the public has gradually discovered that there is a crisis in science. But what is the problem? And how bad is it, really? Today on The Corbett Report we shine a spotlight on the series of interrelated crises that are exposing the way institutional science is practiced today, and what it means for an increasingly science-dependent society.
In 2015, a study from the Institute of Diet and Health with some surprising results launched a slew of clickbait articles with explosive headlines:
“Chocolate accelerates weight loss” insisted one such headline.
“Scientists say eating chocolate can help you lose weight” declared another.
“Lose 10% More Weight By Eating A Chocolate Bar Every Day…No Joke!” promised yet another.
There was just one problem: This was a joke.
The head researcher of the study, “Johannes Bohannon,” took to io9 in May of that year to reveal that his name was actually John Bohannon, the “Institute of Diet and Health” was in fact nothing more than a website, and the study showing the magical weight loss effects of chocolate consumption was bogus. The hoax was the brainchild of a German television reporter who wanted to “demonstrate just how easy it is to turn bad science into the big headlines behind diet fads.”
Given how widely the study’s surprising conclusion was publicized—from the pages of Bild, Europe’s largest daily newspaper, to the TV sets of viewers in Texas and Australia—that demonstration was remarkably successful. But although it’s tempting to write this off as a story about gullible journalists and the scientific illiteracy of the press, the hoax serves as a window into a much larger, much more troubling story.
What makes the chocolate weight loss study so revealing isn’t that it was completely fake; it’s that in an important sense it wasn’t fake. Bohannon really did conduct a weight loss study, and the data really does support the conclusion that subjects who ate chocolate on a low-carb diet lost weight faster than those on a non-chocolate diet. In fact, the chocolate dieters even had better cholesterol readings. The trick was all in how the data was interpreted and reported.
As Bohannon explained in his post-hoax confession:
“Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a ‘statistically significant’ result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.”
You see, finding a “statistically significant result” sounds impressive and helps scientists get their paper published in high-impact journals, but “statistical significance” is in fact easy to fake. If, like Bohannon, you use a small sample size and measure 18 different variables, it’s almost impossible not to find some “statistically significant” result. Scientists know this, and the process of sifting through data to find “statistically significant” (but ultimately meaningless) results is so common that it has its own name: “p-hacking” or “data dredging.”
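Just how easy is it to manufacture significance this way? A quick simulation makes the point. The sketch below is not Bohannon’s actual analysis; it simply borrows his numbers (18 measurements on 15 subjects, split here into two diet groups for simplicity), fills them with pure random noise, and counts how often at least one comparison comes out “statistically significant” at the usual p < 0.05 threshold:

```python
# A minimal sketch of p-hacking by multiple measurement: with 18
# outcome variables and only 15 subjects of pure noise, most
# "studies" will produce at least one p < 0.05 result by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 10_000      # number of simulated noise-only studies
n_measures = 18        # weight, cholesterol, sleep quality, etc.

hits = 0
for _ in range(n_trials):
    chocolate_group = rng.normal(size=(8, n_measures))  # 8 subjects
    control_group = rng.normal(size=(7, n_measures))    # 7 subjects
    # One t-test per measurement, no correction for multiple comparisons
    p_values = stats.ttest_ind(chocolate_group, control_group, axis=0).pvalue
    if (p_values < 0.05).any():
        hits += 1

print(f"Noise-only studies with a 'significant' finding: {hits / n_trials:.0%}")
```

Run it and roughly 60% of these noise-only “studies” turn up something significant, which is just what the arithmetic predicts for 18 independent tests (1 − 0.95^18 ≈ 0.60), and exactly why a single surprising result from a design like this means very little on its own.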
But p-hacking only scrapes the surface of the problem. From confounding factors to normalcy bias to publication pressures to outright fraud, the once-pristine image of science and scientists as an impartial font of knowledge about the world has been seriously undermined over the past decade.
Although these types of problems are by no means new, they were thrust into the spotlight in 2005, when John Ioannidis, a physician, researcher and writer at the Stanford Prevention Research Center, rocked the scientific community with his landmark paper “Why Most Published Research Findings Are False.” The paper addresses head-on the concern that “most current published research findings are false,” asserting that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” It has since achieved iconic status, becoming the most downloaded paper in the Public Library of Science and launching a conversation about false results, fake data, bias, manipulation and fraud in science that continues to this day.
JOHN IOANNIDIS: This is a paper that is practically presenting a mathematical modeling of what are the chances that a research finding that is published in the literature would be true. And it uses different parameters, different aspects, in terms of: What we know before; how likely it is for something to be true in a field; how much bias are maybe in the field; what kind of results we get; and what are the statistics that are presented for the specific result.
I have been humbled that this work has drawn so much attention and people from very different scientific fields—ranging not just bio-medicine, but also psychological science, social science, even astrophysics and the other more remote disciplines—have been attracted to what that paper was trying to do.
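The calculation at the heart of that paper can be condensed into a few lines. The basic positive-predictive-value formula below is Ioannidis’s (before he layers on bias and multiple competing teams); the specific numbers plugged into it here, such as how many probed hypotheses are actually true and how well-powered the studies are, are illustrative assumptions rather than figures from the paper:

```python
# A minimal sketch of the model behind "Why Most Published Research
# Findings Are False": the probability that a statistically
# significant finding is actually true (its positive predictive value).
def ppv(prior_odds, power=0.80, alpha=0.05):
    """PPV = (1 - beta) * R / (R - beta * R + alpha), where R is the
    pre-study odds that the probed relationship is real."""
    beta = 1 - power
    return (power * prior_odds) / (prior_odds - beta * prior_odds + alpha)

# Illustrative assumption: in an exploratory field only 1 in 10
# hypotheses tested is actually true, i.e. odds R = 1/9.
print(f"Well-powered study, R = 1/9:    {ppv(1/9):.0%}")               # about 64%
print(f"Underpowered study (20% power): {ppv(1/9, power=0.2):.0%}")    # about 31%
print(f"Long-shot field, R = 1/50:      {ppv(1/50, power=0.2):.0%}")   # about 7%
```

Even before any bias or outright fraud enters the picture, a long-shot hypothesis tested with an underpowered study produces “significant” findings that are more likely to be false than true; the paper goes on to show how bias and many teams chasing the same question push those odds lower still.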
So what is the problem? And how bad is it, really? And what does it mean for an increasingly tech-dependent society that something is rotten in the state of science?
To get a handle on the scope of this dilemma, we have to realize that the “crisis” of science isn’t a crisis at all, but a series of interrelated crises that get to the heart of the way institutional science is practiced today.
First, there is the Replication Crisis.
This is the canary in the coalmine of the scientific crisis in general because it tells us that a surprising percentage of scientific studies, even ones published in top-tier academic journals that are often thought of as the gold standard for experimental research, cannot be reliably reproduced. This is a symptom of a larger crisis because reproducibility is considered to be a bedrock of the scientific process.
In a nutshell, an experiment is reproducible if independent researchers can run the same experiment and get the same results at a later date. It doesn’t take a rocket scientist to understand why this is important. If an experiment is truly revealing some fundamental truth about the world then that experiment should yield the same results under the same conditions anywhere and at any time (all other things being equal).
Well, not all things are equal.
In the opening years of this decade, the Center for Open Science led a team of 240 volunteer researchers in a quest to reproduce the results of 100 psychological experiments. These experiments had all been published in three of the most prestigious psychology journals. The results of this attempt to replicate these experiments, published in 2015 in a paper on “Estimating the Reproducibility of Psychological Science,” were abysmal. Only 39 of the experimental results could be reproduced.
Worse yet for those who would defend institutional science from its critics, these results are not confined to the realm of psychology. In 2011, Nature published a paper showing that researchers were only able to reproduce between 20 and 25 per cent of 67 published preclinical drug studies. They published another paper the next year with an even worse result: researchers could only reproduce six of a total of 53 “landmark” cancer studies. That’s a reproducibility rate of 11%.
These studies alone are persuasive, but the cherry on top came in May 2016 when Nature published the results of a survey of over 1,500 scientists finding fully 70% of them had tried and failed to reproduce published experimental results at some point. The poll covered researchers from a range of disciplines, from physicists and chemists to earth and environmental scientists to medical researchers and assorted others.
So why is there such a widespread inability to reproduce experimental results? There are a number of reasons, each of which gives us another window into the greater crisis of science.
The simplest answer is the one that most fundamentally shakes the widespread belief that scientists are disinterested truth-seekers who would never dream of publishing a false result or deliberately misleading others.
JAMES EVAN PILATO: Survey sheds light on the ‘crisis’ rocking research.
More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature’s survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.
The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.
Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively.
SOURCE: Scientists Say Fraud Causing Crisis of Science – #NewWorldNextWeek
In fact, the data shows that the Crisis of Fraud in scientific circles is even worse than scientists will admit. A study published in 2012 found that fraud or suspected fraud was responsible for 43% of scientific paper retractions, by far the single leading cause of retraction. The study demonstrated a 1000% increase in (reported) scientific fraud since 1975. Together with “duplicate publication” and “plagiarism,” misconduct of one form or another accounted for two-thirds of all retractions.
So much for scientists as disinterested truth-tellers.
Indeed, instances of scientific fraud are cropping up more and more in the headlines these days.
Last year, Kohei Yamamizu of the Center for iPS Cell Research and Application was found to have completely fabricated the data for his 2017 paper in the journal Stem Cell Reports, and earlier this year it was found that Yamamizu’s data fabrication was more extensive than previously thought, with a paper from 2012 also being retracted due to doubtful data.
Another Japanese researcher, Haruko Obokata, was found to have manipulated images to get her landmark study on stem cell creation published in Nature. The study was retracted and one of Obokata’s co-authors committed suicide when the fraud was discovered.
Similar stories of fraud behind retracted stem cell papers, molecular-scale transistor breakthroughs, psychological studies and a host of other research call into question the very foundations of the modern system of peer-reviewed, reproducible science, which is supposed to mitigate fraudulent activity by carefully checking and, where appropriate, repeating important research.
There are a number of reasons why fraud and misconduct are on the rise, and they relate to deeper structural problems that reveal yet more crises in science.
Like the Crisis of Publication.
We’ve all heard of “publish or perish” by now. It means that only researchers who have a steady flow of published papers to their name are considered for the plush positions in modern-day academia.
This pressure isn’t some abstract or unstated force; it is direct and explicit. Until recently the medical department at London’s Imperial College told researchers that their target was to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” Similar guidelines and quotas are enacted in departments throughout academia.
And so, like any quota-based system, people will find a way to cheat their way to the goal. Some attach their names to work they have little to do with. Others publish in pay-to-play journals that will publish anything for a small fee. And others simply fudge their data until they get a result that will grab headlines and earn a spot in a high-profile journal.
It’s easy to see how fraudulent or irreproducible data results from this pressure. The pressure to publish in turn puts pressure on researchers to produce data that will be “new” and “unexpected.” A study finding that drinking 5 cups of coffee a day increases your chance of urinary tract cancer (or decreases your chance of stroke) is infinitely more interesting (and thus publishable) than a study finding mixed results, or no discernible effect. So studies finding a surprising result (or ones that can be manipulated into showing surprising results) will be published and those with negative results will not. This makes it much harder for future scientists to get an accurate assessment of the state of research in any given field, since untold numbers of experiments with negative results never get published, and thus never see the light of day.
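The distortion this creates can be made concrete with one more small simulation. The scenario below is an assumption for illustration, not any real body of research: 1,000 studies of an effect that does not exist (the true difference between groups is zero), with only the “significant” ones making it into print:

```python
# A minimal sketch of publication bias: when only "significant"
# results get published, a field studying a non-existent effect still
# accumulates a literature full of inflated positive findings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies = 1_000
n_per_group = 20            # subjects per arm; the true effect is zero

published = []
for _ in range(n_studies):
    treated = rng.normal(size=n_per_group)   # e.g. the 5-cups-of-coffee group
    control = rng.normal(size=n_per_group)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published.append(abs(treated.mean() - control.mean()))

print(f"Studies run:       {n_studies}")
print(f"Studies published: {len(published)} (every one a false positive)")
print(f"Average 'effect' in the published record: {np.mean(published):.2f} "
      f"(true effect: 0.00)")
```

Anyone surveying only the published slice of that literature would see dozens of confident, sizeable “effects” of something that in reality does nothing, which is precisely the distortion that a drawer full of unpublished negative results leaves behind.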
But the pressure to publish in high-impact, peer-reviewed journals itself raises the specter of another crisis: The Crisis of Peer Review.
The peer review process is designed as a check against fraud, sloppy research and other problems that arise when journal editors are determining whether to publish a paper. In theory, the editor of the journal passes the paper to another researcher in the same field who can then check that the research is factual, relevant, novel and sufficient for publication.
In practice, the process is never quite so straightforward.
The peer review system is in fact rife with abuse, but few cases are as flagrant as that of Hyung-In Moon. Moon was a medicinal-plant researcher at Dongguk University in Gyeongju, South Korea, who aroused suspicion by the ease with which his papers were reviewed. Peer reviews usually take weeks or months to come back, but the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry noticed that the reviewers for Moon’s papers were not only always available, but that they usually submitted their review notes within 24 hours. When confronted by the editor about this suspiciously quick work, Moon admitted that he had written most of the reviews himself. He had simply gamed a system in which most journals ask researchers to suggest potential reviewers for their own papers: he invented fake names and email addresses and then submitted “reviews” of his own work.
Beyond the incentivization of fraud and opportunities for gaming the system, however, the peer review process has other, more structural problems. In certain specialized fields there are only a handful of scientists qualified to review new research in the discipline, meaning that this clique effectively forms a team of gatekeepers over an entire branch of science. They often know each other personally, meaning any new research they conduct is certain to be reviewed by one of their close associates (or their direct rivals). This “pal review” system also helps to solidify dogma in echo chambers where the same few people who go to the same conferences and pursue research along the same lines can prevent outsiders with novel approaches from entering the field of study.
In the most egregious cases, as with researchers in the orbit of the Climate Research Unit at the University of East Anglia, groups of scientists have been caught conspiring to oust an editor from a journal that published papers that challenged their own research and even conspiring to “redefine what the peer-review literature is” in order to stop rival researchers from being published at all.
So, in short: Yes, there is a Replication Crisis in science. And yes, it is caused by a Crisis of Fraud. And yes, the fraud is motivated by a Crisis of Publication. And yes, those crises are further compounded by a Crisis of Peer Review.
But what creates this environment in the first place? What is the driving factor that keeps this whole system going in the face of all these crises? The answer isn’t difficult to understand. It’s the same thing that puts pressure on every other aspect of the economy: funding.
Modern laboratories investigating cutting edge questions involve expensive technology and large teams of researchers. The types of labs producing truly breakthrough results in today’s environment are the ones that are well funded. And there are only two ways for scientists to get big grants in our current system: big business or big government. So it should be no surprise that “scientific” results, so susceptible to the biases, frauds and manipulations that constitute the crises of science, are up for sale by scientists who are willing to provide dodgy data for dirty dollars to large corporations and politically-motivated government agencies.
First published in 1943, Eim HaBanim Semeichah remains the most comprehensive treatise on Eretz Yisrael, redemption, and Jewish unity. Much of this remarkable work has been proven prophetic by the passage of time. It is truly a priceless treasure.
The saintly author, R. Yisachar Shlomo Teichtal, originally shared the prevalent, Orthodox view which discouraged the active return to Zion. The Holocaust, however, profoundly changed his perspective. The annihilation of unprecedented numbers of his fellow Jews forced him to seek explanations. Thus, relying almost exclusively on his phenomenal memory and keen insight, he investigated the matter exhaustively. His conclusions are eye-opening! The Jewish people will find refuge from their troubles, he argues, only if they unite to rebuild the Land. This will bring about the ultimate redemption.
Although more than 65 years have passed since its original publication, the message of this book is as crucial today as it was then. We therefore take great pride in presenting this masterpiece to the English-speaking public. We only hope that Jews the world over will absorb its message and apply it in practice.
APPROBATIONS:
HaRav HaGaon R. Zalman Nechemyah Goldberg shlita:
I was happy to hear from my dear friend, R. Chayim Menachem Teichtal shlita, that the wonderful book written by his brilliant, righteous, and saintly father, R. Yisachar Shlomo Teichtal ztvk”l… author of Responsa Mishneh Sachir, [was being published in English]. This book, which is completely holy, arouses the hearts of Israel to their Father in Heaven and inspires them to cherish the great mitzvah of settling the Land of Israel.
For some time now, this book, entitled Eim HaBanim Semeichah, has been renowned throughout the Jewish world. Recently, R. Moshe Lichtman shlita took the initiative to translate this book into English, so that the Jewish masses who do not understand the Holy Tongue (Hebrew) can benefit [from it]. The translator has expertise in this field and, undoubtedly, will produce a proper work for the benefit of Klal Yisrael.
Written in honor of the Torah and in honor of the brilliant tzaddik zt”l, Zalman Nechemyah Goldberg
Many years ago, I read several sections of the beautiful work, Eim HaBanim Semeichah, and I enjoyed it tremendously. The saintly author, may HaShem avenge his blood, certainly does not need my approbation, God forbid.
Today, unfortunately, there is much confusion, even among Torah-Jews, on the issue of Eretz Yisrael, which is a vast discipline in the Torah. [Therefore], I commend our dear colleague, R. Moshe Lichtman (may he live), who translated this important book into the common vernacular. For, due to our numerous sins, many Torah-Jews cannot read this book in the original [Hebrew]. I reviewed several pages of the translation and enjoyed them, as well.
Written and signed in honor of the Torah and in honor of our Holy Land, Tzvi Schachter
TESTIMONIALS:
Thank you for translating Eim HaBanim Semeicha. It’s an amazing sefer — one of the two or three most significant sefarim I’ve learned.
So far, I have only got through the introductions (90 pages) and find it fascinating and full of sources for why one should make Aliyah. If anyone is looking for religious sources for making Aliyah I strongly recommend you get your hands on this book which recently came out in English. The author is Rabbi Yisachar Shlomo Teichtal who passed away in the last days of the Holocaust. It’s actually an English translation of a book that has been available for about twenty years in Hebrew. This is not a “how to” book. R. Teichtal zt”l was a Munkatcher Chassid who decided that Munkatch (probably as vehemently anti-Zionist as Satmar) had it all wrong, at least when it comes to making aliya.
The book was actually originally published during the Shoah. Yes, during the Shoah! The English translation was just published by a young scholar/Rov who had been working on it for a few years. Yes, it is excellent. No home should be without it. As a matter of fact, Rav Teichtal, a”h, himself said in all humility that as a Rov and Posek there was so much he didn’t know concerning the mitzvah of yishuv Eretz Yisrael until he began writing the sefer.
I second the recommendation. The sefer is probably the best and most influential sefer I ever read. Besides the excellent Torah content, the chizuk with respect to Aliyah, and the fascinating history covered, the sefer teaches a tremendous amount of Ahavat Yisrael in discussing an area that has generated a lot of Sinat Chinam. I was often brought to tears reading the sefer. In short, it is highly recommended reading.
Your translation of “Eim Habanim Semeichah” was the best book I’ve ever read.
I just made Aliyah, and I now live in Ramat Beit Shemesh. The decision to make Aliyah was a 10 year process. Actually, the decision was immediate; the courage to do it took 10 years. I have many strong feelings about the status of the Jewish people and about the issue of Aliyah. Some friends who know how I feel suggested that I read the book that you translated. I have purchased a copy and have read 100 pages so far. Though I am not finished reading it yet, I am extremely moved by it. It explains exactly how I feel about all these issues but adds a Torah source. I believe, without trying to be overly dramatic, that this may be one of the most important books out today…
Yesterday my youngest daughter, a pedestrian in a crosswalk, was hit head-on by a bus – which then left the scene.
Here’s how the experience went, in Israel, with socialised medicine (for Americans – this is the equivalent of the Democratic cry of “Healthcare for All!”).
– After being struck, Hatzalah was called. Hatzalah is the (now Israel nationwide) volunteer emergency response service – meaning people who come running or by ambu-cycle with first response medical equipment. The idea here being that dealing with emergency stabilisation in the first 10 minutes significantly increases survival rates. Hatzalah is a volunteer service and a charity service – donations make it happen (no personal or government cost).
– A Hatzalah responder arrived within a few minutes, evaluated her and called an ambulance for transport.
– It took a while for the ambulance to arrive through city traffic. The Magen Dovid Odom (Red Star of David national ambulance service) medics re-evaluated her (unfortunately Hatzalah and MDA don’t play well together, so there is no integration between their data and no cross-trust in each other’s evaluations, which costs extra time) and transported her to Ichilov hospital.
– There was a not insignificant delay in the ambulance entering the hospital because the hit Israeli TV show Fauda was filming there at the time.
– The triage at Ichilov ER evaluated her, noted she could stand up, and sent her (with broken bones and possible internal injuries) to walk down to Ambulatory ER and wait in an extremely crowded inner city ER waiting room with 50-100 other people waiting for their coughs to be seen or wounds to be stitched.
– 2 1/2 hours later, sitting with a likely broken limb and in partial shock, she was seen by an ER “surgeon” who, without doing more than asking a few questions and taking notes (no examination, neuro check, or even taking blood pressure or temp – but I guess those had been done in triage so no reason to check later if she wasn’t passing out or seizing, right?), sent her off to x-ray.
– X-ray was great, had her in and out in 5 minutes.
– She waited another 2 1/2 hours to be seen by an orthopaedist, at which point I had a yelling match with the overloaded nursing staff about the wait time and about signing her out and taking her to Hadassah Ein Kerem in Jerusalem (an hour away), figuring we’d be seen faster traveling an hour and waiting there than at Ichilov. I demanded to sign her out and was told to suck it up, look at the crowd, or fine – here’s the sign-out paper, but be prepared for the national system to demand you pay (where normally it doesn’t) because you left without medical permission. After going through all that, she was magically called 3 minutes later.
– The orthopedist cast one limb and evaluated her other areas as bruised not broken.
– Back to the ER “surgeon”, who this time we didn’t wait for but barged into his room after his previous patient walked out. Asked about a neuro check, “she’s conscious and CT’s are loads of radiation for a young person – so no” (and socialised medicine means no MRI because they are too expensive unless the person is showing obvious symptoms), about pain control for her severe head pain (she struck her head), “take a Tylenol or Advil”, he signed her out and told her to follow up with an orthopaedist in a week through her (national) HMO.
Israel has a modern first world medical system – the equipment the orthopedist was using was world class – and he did some live action x-rays on the spot with equipment I’ve never seen before. But healthcare for all cost controls mean facilities are run AT CAPACITY, and where in the U.S. she would have been in an MRI in 30 minutes, in Israel an MRI is not an option due to the cost except in severe situations – and a CT (readily available) has reasons to avoid. Pain control is a bit of “suck it up buttercup”, both because of cost control and national attitude.
ERs are ERs, and I’ve been in good ones and bad ones in the US and, with this experience of a not-so-good one, in Israel. But the obvious difference is that in the US they throw lots of tests at the problem to make sure a variety of bad things aren’t happening, while in Israel they use basic diagnostic techniques – which will miss the 1:100 or 1:1000 situation – to avoid test costs.
Thank G-d, my daughter is banged up moderately and has a fractured or broken limb, but doesn’t appear to have suffered a life-threatening injury. In the U.S. I would be comfortable saying “yep, she’s ok” and then waiting for the $5,000 in deductible bills to arrive on a $15,000 hospital bill. In Israel I’m not comfortable saying that all is ok – but no hospital bill will arrive, only an ambulance bill (for $150) which can then be forwarded to the HMO to take care of… and she will be covered by Social Security Disability (Bituach Leumi) for her lost work time and by the national vehicle accident coverage for any medical bills not covered by the HMO (in Israel they combined all the insurance companies’ medical coverage into a national plan that covers medical costs for anyone hit or injured in a vehicle accident).
So which is better? Better care, or better coverage? It’s iffy – for basic care (stitches, broken bones, normal illness) I find better coverage, with its reduced cost concerns, to be much better. But when things get more serious you run into wait times, the effort of proving need, and difficulty getting more costly services, treatments or medicine – extra approvals, extra waits. Sometimes that is merely annoying; in this emergency it was… distressing.
I don’t know much about the hospital facilities in Tel Aviv, but for ER services I didn’t think much of Ichilov. In Jerusalem I’ve had good experiences in Hadassah Ein Kerem, and adequate experiences in Shaare Tzedek (though Shaare Tzedek is usually overcrowded).