Recently, Cathy O’Neil shared a guest post on her blog www.mathbabe.org responding to the recent study on race and police shootings. In the post, Brian D’Alessandro, Head of Data Science at Zocdoc and Adjunct Professor with NYU’s Center for Data Science, tears apart Roland Fryer’s finding that “On the most extreme use of force – officer-involved shootings – we find no racial differences in either the raw data or when contextual factors are taken into account.” The major flaw: Fryer’s method of data sampling.
He notes some of the study’s limitations and flaws:
- The data came from a single precinct – a red flag that doesn’t necessarily mean the data were cherry-picked, but the study should have qualified its claims accordingly: “In Houston, using self-reported data…”
- The study didn’t apply the same analysis to other police precincts to see whether the results held elsewhere.
- The data sampling method separated the “all shootings” population from the “use of justified force” population, which means they didn’t actually test their claim.
D’Alessandro doesn’t suspect Fryer’s team of wrong-doing, like manipulating data, but he does question why they released their findings before the peer-review process.
Peer review, he claims, would have caught these weaknesses and would have given the authors a chance to “temper their findings.” He adds, “I’d love for him to take his responsibility to the next level and make his data, both in raw and encoded forms, public.”
This push to open up data and methods of analysis reminded me (again) of the On the Media story “Editing The Culture of Science With CRISPR,” which is more about Kevin Esvelt’s appeal to the scientific community to “open all experiments to public scrutiny” than it is about gene-drive technology.
Both stories argue for increased openness in order to invite closer inspection.
This matters because Fryer’s un-peer-reviewed study was quickly picked up by the New York Times, because the media will remember the original study more than any retraction, and because journalists frequently fail to question results or simply misreport them.
Which brought to mind a third story by John Bohannon, “I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here’s How.”
Read the full post for the science; here’s the setup:
I got a call in December last year from a German television reporter named Peter Onneken. He and his collaborator Diana Löbl were working on a documentary film about the junk-science diet industry. They wanted me to help demonstrate just how easy it is to turn bad science into the big headlines behind diet fads. And Onneken wanted to do it gonzo style: Reveal the corruption of the diet research-media complex by taking part.
The result was a published paper on the weight-loss benefits of eating chocolate. The researchers purposefully used a small sample (16 participants) and measured a large number of things (18) to ensure something would be “statistically significant.” After rushing the paper to a pay-to-publish scientific journal, they relied on a press release to alert the media.
The key is to exploit journalists’ incredible laziness. If you lay out the information just right, you can shape the story that emerges in the media almost like you were writing those stories yourself. In fact, that’s literally what you’re doing, since many reporters just copied and pasted our text.
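The trick Bohannon describes is a textbook case of multiple comparisons. The arithmetic below is my own illustration, not from the post: if a study measures many independent outcomes and tests each at the conventional 5% significance threshold, the chance that at least one comes up “significant” by pure luck grows rapidly.

```python
# Illustrative sketch (my numbers, not Bohannon's analysis): why measuring
# many outcomes in one small study almost guarantees a "significant" finding.
# If k independent outcomes are each tested at significance level alpha,
# the chance of at least one false positive under the null hypothesis is
#   1 - (1 - alpha)**k.

alpha = 0.05  # conventional significance threshold
k = 18        # number of outcomes measured in the chocolate study

p_any_false_positive = 1 - (1 - alpha) ** k
print(f"P(at least one spurious 'significant' result) = {p_any_false_positive:.2f}")
```

With 18 measured outcomes, that probability is about 0.60 – better-than-even odds of a headline-ready result even if chocolate does nothing at all. This is exactly the kind of issue peer reviewers are trained to catch, for instance by asking whether the authors corrected for multiple comparisons.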
My point in triangulating these three stories is not that we need better ethics training or that the general population needs to be more skeptical of science journalism (although, maybe, and most definitely).
Rather, what teachers should take from these three stories is the need to help students practice peer review.
Peer Review: A Primer
Briefly, peer review is the process that precedes publication in an academic journal. When authors submit a paper to a journal, the editor reviews it and then selects two to three experts in the field to review it as well. More often than not, these experts (the peers) will raise criticisms, and the paper will be rejected or accepted pending revisions. The authors take the feedback, make changes, and resubmit until final approval. Peer review benefits the community by
- Providing analysis to catch mistakes
- Filtering out studies that may not be relevant or helpful for additional research
- Ensuring that reported studies include enough information to replicate the researchers’ results
A Systemic Problem
Unfortunately, research universities value academics’ publications over their contributions to the peer review process. Researchers are also incentivized to pursue new research rather than the dull task of replicating studies, and few journals want to publish papers reporting that an earlier study’s results also hold in a different population.
In the Classroom
Although teachers have a limited (or no) ability to affect what happens in universities or in editors’ offices, they can help students learn to expect and value this process of checking and replicating findings. They can do this by expanding Project-Based Learning (PBL).
Project-Based Learning, or PBL, regularly appears in educators’ blogs, on Twitter, and in professional development offerings. So this is not a post about what it is. For that, see here. Instead, I want to discuss how trends in PBL implementation should be changed to include more evaluation and analysis of the resulting projects.
What’s in a Name?
PBL’s name emphasizes the end result of problem-solving, and much of the process focuses on identifying a problem (in the community, around the world, etc.) that can be solved with a project. Once a problem has been found, students see developing a solution as the natural end of the project. There are two problems here. First, students may understand the goal as developing a solution, not the best solution. Second, the presentation and assessment of these projects mostly focus on what students learned and how well they worked together. PBL doesn’t incorporate time for students to authentically assess the value of the solution or to discuss ways the solution might lead to new developments.
Instead, students need to learn to assess solutions in terms of worth: which solution is the cheapest, the most feasible to develop, mass produce, and implement? Which solution offers the most benefits with the fewest risks? Students should also be invited to discuss what impact a solution might have within the community in five years. Finally, the teacher should invite students to individually ponder how they might expand on that solution.
Ideate Together; Critique Alone
Too often, PBL guides help teachers create collaborative classrooms that encourage “out-of-the-box thinking,” creativity, risk-taking, etc.: team-building and team-consensus over an individual’s autonomy. And during the problem-solving, research, and development stages of PBL, this environment is necessary. However, when the projects are complete, students need time to personally reflect. Often teachers will give students prompts to reflect on how well the group worked together to complete the project, but less frequently are students asked how satisfied they are with their own result. What might they change? What are some of the benefits of their own work?
Once students begin to analyze their own work, they should have the chance to deconstruct the solutions of other students. This goes beyond completing a peer-assessment rubric of presentations. Students should be able to read the papers and peruse the presentations alone, without peer pressure or popularity weighing in (a problem that confronts academic peer review as well). The focus should be on finding flaws in the research and in the project in order to help classmates improve their designs, and perhaps student groups should present twice: once within the safety of their own classroom, and again to a larger group after revising their work. We see this happening in student clubs and organizations, like robotics competitions, Odyssey of the Mind, and debate clubs.
What’s a Teacher to Do?
Technology and Open Educational Resources (OER) can help teachers design lessons that develop these analytical skills. First, there are many, many papers available to review online. Yes, academic writing is difficult to read, but not impossible. Teachers can make a single paper the subject of a focused reading, and students can discuss the research during a Socratic Seminar. These discussions would help students analyze and form definitions of what valuable research and writing is and would help students evaluate evidence.
Then, when students prepare their presentations, they could be encouraged to post their work for review. Students could ask questions and leave feedback. A host of cloud-based sharing tools could be used. Taking this a step further, communications technology could connect the class to a professional in the field to provide feedback or guidance. Following that, the class could develop a peer-review checklist or rubric to guide their personal reviews.
I admit, taking PBL one step further is ambitious, but it is necessary. More and more schools are shifting toward student-based learning and PBL. Classrooms already provide spaces for students to collaborate and create. Now, let’s make sure that they know how to evaluate their work and then how to take it to the next level.