Vavreck is a professor of political science and communication studies at UCLA, and, most notoriously, she was the dissertation adviser to disgraced UCLA graduate student Michael LaCour, the homosexual marriage researcher who fabricated data to claim that face-to-face communications ("canvassing") by gay activists can change the opinions of opponents of homosexual marriage. LaCour and Donald Philip Green's Science article based on that research has been retracted. My earlier post is here, "The Journal Science Retracts Homosexual Marriage Paper After Lead Author Accused of Falsifying Data."
Yes, this is getting to be a massive scandal, and I'm interested to see what shakes out at UCLA. Word is that LaCour is the ultimate con man; he'd have to be, since even a major scholar like Donald Philip Green at Columbia was so easily hoodwinked by this bogus research. And despite the harsh slams against Vavreck at Political Science Rumors, she's written letters of recommendation for applicants to positions in my department at LBCC, and I can't remember anyone who's written more comprehensive and effective recommendations. She's not lazy, that's for sure.
In any case, I think she should face discipline for her lapses as an adviser, particularly given the enormous shame this episode has brought to her department and to UCLA as a whole.
Meanwhile, LaCour was supposed to be out with a major "defense" of his research today, but there's been no word yet. No doubt it's taking him a lot longer than he expected to "gather" the evidence needed to make the case.
More from Jesse Singal, at New York Magazine (where I found the reference to Political Science Rumors), "The Case of the Amazing Gay-Marriage Data: How a Graduate Student Reluctantly Uncovered a Huge Scientific Fraud":
How to get published: Fake your research to make people look better than they are. http://t.co/uPPymJLPZ2
— Tim Beidel (@tjb1013) May 29, 2015
Don't worry Michael LaCour - there's always a place for you at @RollingStone ! http://t.co/WPs9edxnHi pic.twitter.com/Fuq2c9lQ2N
— Jim Hughes (@TheJimHughes) May 28, 2015
How David Broockman reluctantly uncovered the Michael Lacour fraud. Great story by @jessesingal http://t.co/gB09F4Mija
— Kevin Davies (@KevinADavies) May 29, 2015
Last week, [David] Broockman, along with his friend and fellow UC Berkeley graduate student Josh Kalla and Yale University political scientist Peter Aronow, released an explosive 27-page report recounting many “irregularities” in LaCour and Green’s paper. “Irregularities” is diplomatic phrasing; what the trio found was that there’s no evidence LaCour ever actually collaborated with uSamp, the survey firm he claimed to have worked with to produce his data, and that he most likely didn’t commission any surveys whatsoever. Instead, he took a pre-existing dataset, pawned it off as his own, and faked the persuasion “effects” of the canvassing. It’s the sort of brazen data fraud you just don’t see that often, especially in a journal like Science. Green quickly fired off an email to the journal asking for a retraction; Science granted that wish yesterday, albeit without LaCour’s consent. And while there’s no word out of central New Jersey just yet, there’s a good chance, once the legal dust settles, that Princeton University will figure out a way to rescind the job offer it extended to LaCour, who was supposed to start in July. (Princeton offered no comment other than an emailed statement: “We will review all available information and determine the next steps.”) LaCour, for his part, has lawyered up and isn’t talking to the media, although he was caught attempting to cover up faked elements of his curriculum vitae earlier this week. His website claims that he will “supply a definitive response” by the end of the day today.
It's a long piece, as you can tell, loaded with all kinds of interesting hyperlinks, so read the whole thing.
But even before Broockman, Kalla, and Aronow published their report, LaCour’s results were so impressive that, on their face, they didn’t make sense. Jon Krosnick, a Stanford social psychologist who focuses on attitude change and also works on issues of scientific transparency, says that he hadn’t heard about the study until he was contacted by a This American Life producer who described the results to him over the phone. “Gee,” he replied, “that's very surprising and doesn't fit with a huge literature of evidence. It doesn't sound plausible to me.” A few clicks later, Krosnick had pulled up the paper on his computer. “Ah,” he told the producer, “I see Don Green is an author. I trust him completely, so I'm no longer doubtful.” (Some people I spoke to about this case argued that Green, whose name is, after all, on the paper, had failed in his supervisory role. I emailed him to ask whether he thought this was a fair assessment. “Entirely fair,” he responded. “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.” He declined to comment further for this story.)
*****
The first thing Broockman did, back in December of 2013, was get frustrated at his inability to run a survey like LaCour’s. On the 9th, Broockman decided to call uSamp (since renamed). This was when the first of those near-misses occurred — a year and a half later, a similar conversation would help bust the entire scandal wide open. But the first time he called uSamp, Broockman trod carefully, because he thought LaCour was still working with the company. “As far as I’m concerned, he still has an ongoing relationship with this company, and is still gathering data with them,” he explains, “and I didn’t want to upset the apple cart of whatever, and so I don’t recall in any way mentioning him. I just said, ‘I’ve heard you can do this kind of work. Can you do this kind of work for me?’”
The salesperson he spoke with, Broockman explains, said that they weren’t sure the firm could complete this kind of survey, but seemed under-informed and slightly incompetent. “And so I just kind of gave up, because I wasn’t on a witch hunt,” says Broockman. “I was just trying to get a study done.” Had Broockman mentioned LaCour by name or pressed for more details, he would realize later, LaCour’s lack of any real connection to the company might have revealed itself right away.
Things didn’t get any easier when Broockman sent his RFP out to dozens of other survey companies the next day. “We are seeking quotes for a large study we are planning that will necessitate enrolling approximately 10,000 new individuals in a custom internet or phone panel,” it started. The responses indicated that these companies had very little ability to pull off a study on the scale of LaCour’s. Broockman says he “found that pretty weird, because apparently uSamp had managed this in like a day.” “Some small part of my head thought, ‘I wonder if it was fake,’” says Broockman. “But most of me thought, ‘I guess I'll have to wait until Mike is willing to reveal what the magic was.’”
Broockman would also have to wait since, like most academics, he was constantly juggling a thousand different projects. During most of 2014, he was working on the question of how constituents react to communication from their lawmakers, a critique of a prominent statistical method, and research into how polarization affects political representation. As an undergrad, Broockman had done some work with Joshua Kalla, a Pittsburgh native who was a couple years below him — Kalla had been a research assistant on a study about housing discrimination Broockman worked on with Green. As Kalla started looking at grad schools, Broockman aggressively lobbied him to come out West.
These efforts were successful, and once Kalla arrived on campus in the fall of 2014, Broockman’s approach to the LaCour research changed: Now, he thought, he had the teammate he needed to finally build on LaCour’s promising canvassing work. Broockman and Kalla have a strong natural chemistry as research partners. In one class at Berkeley, Kalla, who is straight, highlighted the many similarities between himself and Broockman, who is gay, with a nerdy statistics joke about “matching” — the idea of finding two very similar people in a data set to test what effects emerge when you apply a treatment to one but not the other. “If you exact match,” Kalla said, “you could use me and David to figure out the causal effects of being gay.” Canvassing was a natural subject for two young researchers interested in the dynamics of persuasion. “It turns out that even if you’re not interested in canvassing per se, canvassing is a great medium through which to test other theories of how to persuade people,” Broockman says. Whereas traditional experiments involving opinion change tend to entail certain methodological difficulties — are the anonymous survey-takers really paying attention to the questions? Is the sleepy undergrad actually listening to the prompt you’re reading them? — with canvassing, you can say with relative confidence that the subject of the experiment is actively engaging with whatever argument you’re testing.
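Kalla's "matching" joke refers to a real causal-inference technique: exact matching pairs units that are identical on observed covariates, so any outcome difference within a pair can more plausibly be attributed to the treatment. Here's a minimal illustrative sketch; the data, function name, and covariates are all made up for illustration and come from neither the article nor the researchers.

```python
# Illustrative sketch of exact matching (hypothetical data and names).
from collections import defaultdict

def exact_match(records, covariates, treatment_key):
    """Group records by their covariate values, then keep only groups
    that contain both a treated and a control unit."""
    groups = defaultdict(lambda: {"treated": [], "control": []})
    for r in records:
        key = tuple(r[c] for c in covariates)
        arm = "treated" if r[treatment_key] else "control"
        groups[key][arm].append(r)
    return {k: v for k, v in groups.items() if v["treated"] and v["control"]}

people = [
    {"age": 29, "field": "polisci", "treated": True,  "outcome": 1},
    {"age": 29, "field": "polisci", "treated": False, "outcome": 0},
    {"age": 45, "field": "econ",    "treated": True,  "outcome": 1},
]
matched = exact_match(people, ["age", "field"], "treated")
# Only the two 29-year-old political scientists match exactly; the
# unmatched 45-year-old economist is dropped from the comparison.
```

The joke, of course, is that two people this similar on everything observable would isolate the one remaining difference as the "treatment."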
The 2014 election also helped focus Broockman and Kalla’s research agenda. They were convinced canvassing worked — at least to a point — and that politicians weren’t capitalizing on this fact nearly enough. After the election, they co-authored a November 2014 piece in Vox arguing as much. The pair wrote that “research has consistently found that authentic interpersonal exchanges usually have sizable impacts,” linking to a positive pre-election Bloomberg Politics cover story about LaCour and Green’s research.
“When we wrote that piece, all of a sudden we received a ton of inbound interest in doing more studies, both because people were persuaded by our point and because we kind of planted a flag in this,” says Broockman. “And so practitioners who were interested in this decided to come talk to us about it.” It was clearly a good time to home in on canvassing. “That was the perfect storm for ‘Now the time for this idea has come.’”
This was also around the time Broockman first got hold of LaCour’s raw data (he’d read the Science paper when it was under review in late 2014). Certain irregularities quickly jumped out at him: The data was, in short, a bit too orderly given that it came from a big survey sample. In itself this didn’t constitute definitive proof that anything was amiss, but it definitely warranted further investigation. Whatever the excitement-suspicion ratio regarding LaCour’s findings had been in Broockman’s mind previously — maybe 90/10 when he first heard about the experiment — it was now closer to 50/50.
Broockman wasn’t sure what to do. He started to bring up his concerns with other friends and advisers (about a dozen of them), and they mostly told him one of two things: Either there was a reasonable explanation for the anomalies, in which case bringing attention to them would risk harming Green and especially the less established LaCour unnecessarily; or something really was fishy, in which case it still wouldn’t be in Broockman’s interest to risk being seen as challenging LaCour’s work. There was almost no encouragement for him to probe the hints of weirdness he’d uncovered. In fact, he quickly found himself nervous about openly discussing his reservations at all. “How much I said depended on how much I trust the person I was talking to and how inebriated I was at the time I had the conversation,” he explains.
On December 17, 2014, Broockman found himself a bit tipsy with someone he trusted: Neil Malhotra, a professor at Stanford’s business school. Broockman had just been offered a job there, and the two were dining at Oak and Rye, a pizza place in Los Gatos, partly so that Broockman could ask Malhotra for advice about the transition from grad school to the professional academic world. A few drinks in, Broockman shared his concerns about LaCour’s data. Malhotra recalled his response: “As someone in your early career stage, you don’t want to do this,” he told Broockman. “You don’t want to go public with this. Even if you have uncontroversial proof, you still shouldn’t do it. Because there’s just not much reward to someone doing this.” If Broockman thought there was wrongdoing behind the irregularities he’d discovered, Malhotra said, it would be a better bet for him to pass his concerns on to someone like Uri Simonsohn, a University of Pennsylvania researcher who already had established an identity as a debunker (eventually, Simonsohn gave Broockman some feedback on the data, but the exchange didn’t lead to any definitive findings).
This might seem like a strange, mafia-ish argument to a non-academic, but within the small world of political science — particularly within the even smaller world of younger, less job-secure political scientists — it makes sense for at least two reasons. The first is that the moment your name is associated with the questioning of someone else’s work, you could be in trouble. If the target is someone above you, like Green, you’re seen as envious, as shamelessly trying to take down a big name. If the target is someone at your level, you’re throwing elbows in an unseemly manner. In either case, you may end up having one of your papers reviewed by the target of your inquiries (or one of their friends) at some point — in theory, peer reviewers are “blinded” to the identity of the author or authors of a paper they’re reviewing, but between earlier versions of papers floating around the internet and the fact that everyone knows what everyone else is working on, the reality is quite different. Moreover, the very few plum jobs and big grants don’t go to people who investigate other researchers’ work — they go to those who stake out their own research areas.
So Broockman decided he needed a way to get feedback on his suspicions without leaving a trace. He’d recently learned about a strange anonymous message board called poliscirumors.com, or PSR. “I believe I first learned about the board when I received a Google Alert with a page that had my last name on it, which proposed marriage to me,” he says. “So naturally that was a link that I clicked.”
Three different people independently described PSR to me as a “cesspool.” No one knows exactly who the site’s primary denizens are, because hardly anyone will admit to perusing it, but it seems to skew young — mostly political-science grad students and untenured professors. While the ostensible purpose of PSR is to provide information about job openings, posts on it have a tendency to devolve into attacks, rumor-mongering, and bitterness fueled by an apocalyptic academic job market. “It is essentially the 4chan of political science,” a political-science researcher told me via email.
It’s not, in short, necessarily the place where one goes for levelheaded debate about the results of statistical analysis...
Still more at the New York Times:
Funding for Retracted Science Study Was Misrepresented: In his published study about canvassers changing peopl... http://t.co/BI6X2LlrSY
— Maxy Sharma (@MaxySharma9) May 29, 2015