The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
Razib Khan
Gene Expression Blog / Epistemology


A few separate pieces that I read today came together thematically for me in an odd confluence. First, an article in The Straits Times, Prof, no one is reading you, repeats some shocking statistics about the nature of modern academic intellectual production, statistics you may already be aware of. Here's the important data:

Even debates among scholars do not seem to function properly. Up to 1.5 million peer-reviewed articles are published annually. However, many are ignored even within scientific communities – 82 per cent of articles published in humanities are not even cited once. No one ever refers to 32 per cent of the peer-reviewed articles in the social and 27 per cent in the natural sciences.

If a paper is cited, this does not imply it has actually been read. According to one estimate, only 20 per cent of papers cited have actually been read. We estimate that an average paper in a peer-reviewed journal is read completely by no more than 10 people. Hence, impacts of most peer-reviewed publications even within the scientific community are minuscule.

Whatever happened to the "republic of letters"? Are humanists reading, but not citing, each other? Or is it that humanistic production has basically become a matter of adding a line to one's c.v.? So the scholar writes the monograph, which is read by their editor and then published to collect dust somewhere in the back recesses of an academic library.

Second, an article in The New York Times, Philosophy Returns to the Real World, declares the bright new world in the wake of the Dark Ages of post-modernism. Through a personal intellectual biography the piece charts the turn away from the hyper-solipsistic tendencies in philosophy exemplified by Stanley Fish in the 1980s, down to the modern post-post-modern age. Operationally I believe that Fish is a human who reconstitutes the characteristics of the tyrannical pig Napoleon in Animal Farm. Despite all the grand talk about subjectivism and a skepticism about reality which would make Pyrrho blanch, Fish did very well for himself personally in terms of power, status, and fame by promoting his de facto nihilism. Money and fame are not social constructs for him; they are concrete realities. Like a eunuch in the Forbidden City ignoring the exigencies of the outside world, all Fish and his fellow travelers truly care about are clever turns of phrase, verbal gymnastics, and social influence and power. As the walls of the city collapse all around them they sit atop their golden thrones, declaring that they are the Emperors of the World, but like Jean-Bédel Bokassa they are clearly only addled fools to all the world outside the circle of their sycophants. After all, in their world if they say it is, is it not so? Their empire is but one of naked illusions.

Finally, via Rod Dreher, a profile of David Brooks in The Guardian. He has a new book out, The Road To Character. I doubt I'll read it, because from what I can tell and have seen, in the domain of personal self-cultivation of the contemplative sort our species basically hit upon its core innovations in the centuries around 500 B.C., and has been repackaging those insights through progressively more exotic marketing ploys ever since. Xunzi and Marcus Aurelius have said what needs to be said. No more needed for me.

But this section jumped out:

“I started out as a writer, fresh out of college, thinking that if I could make my living at it – write for an airline magazine – I’d be happy,” says Brooks over coffee in downtown Washington, DC; at 53, he is ageing into the amiably fogeyish appearance he has cultivated since his youth. “I’ve far exceeded my expectations. But then you learn the elemental truth that every college student should know: career success doesn’t make you happy.” In midlife, it struck him that he’d spent too much time cultivating what he calls “the résumé virtues” – racking up impressive accomplishments – and too little on “the eulogy virtues”, the character strengths for which we’d like to be remembered. Brooks builds a convincing case that this isn’t just his personal problem but a societal one: that our market-driven meritocracy, even when functioning at its fairest, rewards outer success while discouraging the development of the soul. Though this is inevitably a conservative argument – we have lost a “moral vocabulary” we once possessed, he says – many of the exemplary figures around whom Brooks builds the book were leftists: labour activists, civil rights leaders, anti-poverty campaigners. (St Augustine and George Eliot feature prominently, too.) What unites them, in his telling, is the inner confrontation they had to endure, setting aside whatever plans they had for life when it became clear that life had other plans for them.

Many of the ancients argued for the importance of inner reflection and mindful introspection. Arguably, the strand of Indian philosophical thought represented by the Bhagavad Gita was swallowed up by this cognitive involution, as one folds in upon one’s own mind.

But let me tell a different story, one of the outer world, though not one of social engagement, but of sensory experience of the material domain in an analytic sense. Science. A friend of mine happens to be the first to use next-generation sequencing technologies to study a particularly charismatic mammal. I reflected to her recently that she was the first person in the history of the world to gaze upon this particular sequence, to analyze it, to reflect upon the natural historical insights that were yielded up for her by the intersection of biology and computation. It is highly unlikely that my friend will ever become a person of such eminence, such prominence, as David Brooks or Stanley Fish. Feted by her fellow man. But my friend will know truth in a manner innocent of aspirational esteem, a manner totally alien to the meritocratic professionals David Brooks references. On the day that you expire, would you rather be remembered for a law review article, or for discovering something real, shedding light on some deep truth (as opposed to "truth")?

This perhaps offers up a possibility for why humanists don't cite each other. Too many have been poisoned by the nihilism of the likes of Stanley Fish. They do not see any purpose in the scholarship of their peers, because humanistic scholarship of the solipsistic sort is primarily an interior monologue. The experiments of English professors always support their hypotheses. Their struggle is to feed their egos; they wrestle with themselves, Jacob's own angel as a distillation of their self-essence. The limits of their minds are the limits of their world.

Finally, this filament threaded through, of a reality out there, the possibility of being made aware of it, even if through a mirror darkly, is why I continue to do what I do, and aspire to what I aspire to. The truth is out there. It does not give consideration to our preferences. But it is, and we can grasp it in our comprehension. Over the past ten years in the domain of my personal interest, and now professional focus, genomics, we've seen a sea change. That which we did not even imagine has become naked to us. Before the next ten years are out, who knows what else we'll discover?

 
• Category: Ideology • Tags: Epistemology, Philosophy 

Again, Chagnon, Sahlins, and science:

When we allow personal ideological bias to rule our scholarly work, we limit the value of our research to answer real questions and to contribute to broader social and scientific debates. If you have an ideological axe to grind, either leave scholarship and go into politics, or else find ways to achieve a level of scholarly objectivity in your research and writing. (Yeah, I know, the postmodernists are going to smirk about how naive I am to even use the word "objectivity." Check out my past posts on epistemology; one can employ objective methods and maintain an overall level of objectivity while admitting that the world is messy and researchers are never free of preconceptions or bias.)

To paraphrase John Hawks, "I think it's time to reclaim the name 'archaeology' from past generations." We have lots of data and ideas to contribute to major scholarly and public debates today, but too often our writing and epistemological stance work against any wider relevance.

For various reasons cool detachment is harder to come by in anthropology, nor should it always be the aim. But the pretense of, and striving for, detachment is an essential part of science (coupled with curiosity and passion about the subject of interest). A counterpoint can be found in the comments below:

Again, your discussion of anthropology is undermined by not having any significant familiarity with the subject. I understand you don’t have the time to do so, but if that’s the case why take the time to write about something in the lack of anything to base it on? What you describe as politics is a reflection of ethical concerns which are fundamental to anyone doing research on human subjects.

Anyone doing research on human subjects has an absolute ethical obligation to avoid harming those subjects in the course of their research. Anthropology is different in that we work with communities, and not individuals – so our ethical obligation is to the communities we study. As I understand it, medical researchers are focused on avoiding harm while gathering data from their research subjects, not when they publish their findings. For anthropologists, we need to be aware of what we publish as well. So, for example, if I've gathered information on people committing crimes, I can't publish it – it doesn't matter that I didn't harm them while observing those crimes, exposing a group as involved in criminal activities can bring negative consequences on them.

How and what we write about people can matter sometimes – although most of the time it doesn't, because most people are content to ignore us. So, for example, descriptions of Arab culture in Patai's The Arab Mind were used to rationalize certain kinds of torture that the US army and intelligence agencies practiced on Muslim detainees. Anthropological studies of indigenous groups in Vietnam, Laos and Cambodia were used by the US military and intelligence in pursuing their war against Vietnam.

The Yanomamo are a marginalized community that had a history of displacement and whose territory was being violently encroached on. When Chagnon described them as primitive and fierce, he was characterizing a marginalized community in negative terms in a political context where that could be damaging to their interests. How we talk about marginalized communities is always political. The idea that scientists should just do empirical research on marginalized communities and not worry about the political effects of that research on those communities is not "apolitical", it is elevating the interests of scientists as a group over the communities they study. That's a political commitment which is antithetical to any human science.

Chagnon makes a bad case study to discuss a war between detached empiricists and politicizing post-modernists because his description of the Yanomamo as “fierce” is not itself empirical, and neither is his assumption that they are primitive – and your description of the reasons why are pretty dead on. His descriptions of Yanomamo violence are filled with methodological and ethical problems, and his analysis is compromised by taking them as a discrete community without considering the influence of their community’s history of displacement, or his research tactics, which consisted of deliberately violating taboos in order to get information, on their actions.

Yes, there was a mixture of personal animosity, passionately held theoretical commitments and understanding of the role of power in scholarship which led the AAA to subject Chagnon to an unfair tribunal. The charges against him needed to be answered, but the AAA was not the proper venue to do so, and the review of Chagnon’s work was deeply flawed – they did, however, reject the charges of human experimentation which were the basis of the Nazi invective. That said, the problem many anthropologists have against Chagnon’s work has to do with ethics and methodology. Dismissing them as mere politics ignores issues which are key concerns in any human science.

I also find it odd that you mention economics as an ideal in social science that anthropologists should live up to. Is there any other academic field where it is so routine for people to cycle between the academy and partisan political positions; advocate for political programs based on their research; or create large scale political projects based on their research?

My response was not particularly polite. I don't feel I have to be polite to people who I feel misrepresent my views (in short, after accusing me of not knowing anthropology, they proceed to assume they know my own take on assorted subtle issues, likely by simply inserting their "naive positivist" straw-man). The major takeaway is that objectivity may be hard, and it may be impossible in the absolute sense, but it is something we should aim for. Additionally, just because scientific study entails ethical choices, it does not mean that those who disagree with your ethical choices necessarily reject the idea that ethics should inform and shape science. Some anthropologists seem to find it impossible to comprehend that those who don't agree with their particular vision and implementation of social justice don't necessarily then support the proposition that the study of humans can be analogized to impersonal billiard balls. Scholars who study cultural diversity have no familiarity with sincere intellectual diversity of perspective. Perhaps more anthropologists should do research among natural scientists, and see the reality that somehow progress in understanding occurs despite human frailties of bias, self-interest, and lack of just deserts.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Anthropology, Epistemology 

As many of you know, right before the election I made a $50 bet with Hank Campbell that Nate Silver would get at least 48 out of 50 states correct for the 2012 presidential election. I also got one of Hank's readers to sign on to the same bet. Additionally, a few readers and Twitter followers got in on the wager; they were bullish on Romney's prospects, and I was not (more honestly, I was moderately sure they were self-delusional, and willing to take their money to make them more cautious about their self-delusional biases in the future). But there's a major precondition that needs to be stated here: I hedged.

Last February a friend told me he was 100% confident that Barack Hussein Obama would be reelected. This prompted me to ask for favorable terms on a bet. The logic was simple: if he was 100% confident, then it shouldn't be a major issue for him, because he expected to collect anyhow. As it happens he gave me 5 to 1 odds, so that I would collect $5 for every $1 he might collect. I told him beforehand that I actually thought that Obama had a 60-70% chance of winning, so I went into the wager assuming I'd be out a modest amount of money. But that was no concern. My goal was now to convince those who were irrationally supportive of Romney to take the other side of the bet. For whatever reason people have an inordinate bias toward their hoped-for candidate in terms of who they think will win, as opposed to who they wish to win. The future ought gets confused with the future is.* I got people to take the other side, which means that I was going to make money no matter who won.
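
To make the hedge concrete, here is a minimal sketch of the arithmetic with made-up stake sizes (the post does not give the actual amounts): pair the favorable 5-to-1 bet taken against the overconfident Obama supporter with an ordinary even-odds bet on Obama taken against a Romney optimist, and the combined position pays out whichever way the election goes.

```python
# Hypothetical stakes for illustration only; the post doesn't give the real amounts.
# Bet 1: the overconfident friend gives 5-to-1 odds, so I collect $50 if Obama
#        loses and pay $10 if Obama wins.
# Bet 2: the other side, taken with a Romney optimist at even odds: I collect
#        $30 if Obama wins and pay $30 if he loses.

def net_payout(obama_wins: bool) -> int:
    bet_with_friend = -10 if obama_wins else 50
    bet_with_romney_fan = 30 if obama_wins else -30
    return bet_with_friend + bet_with_romney_fan

for obama_wins in (True, False):
    label = "Obama wins" if obama_wins else "Romney wins"
    print(f"{label}: net {net_payout(obama_wins):+d}")
# Obama wins:  net +20
# Romney wins: net +20
```

With the stakes sized this way the outcome no longer matters; the profit comes from the two counterparties' miscalibrated confidence, not from predicting the winner.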

At this point one might wonder about my comment that I suspected that those who were bullish on Romney were delusional. It’s rather strong, and my reasoning is actually rather strange. Overall I accepted the polling averages. A few years back I was an economic determinist in election outcomes, but Nate Silver had convinced me that the sample size was too small to get a good sense of the real proportion of variation being predicted here. In short, the economy matters, but I stepped back from the supposition that it was determinative (as it happens, purely economic models that were excellent at predicting past elections face-planted this time). So that’s why I relied on the polls. Though I leaned on Nate Silver, I didn’t think he was particularly oracular, and I’d say that I’m mildly skeptical of the excessive faith some put in his particular person. When I put a link up to Colby Cosh’s mild take-down of Silvermania I received a few moderately belligerent comments. This despite the fact that I was willing to put money on Silver’s prediction.


But after soft-pedaling my confidence in polling averages, why did I think the pro-Romney people were delusional? The simple answer is 2004 and 2008. When the polling runs against you consistently and persistently, motivated reasoning comes out of the woodwork. There's a particularly desperate stink to it, and I smelled it with the "polls-are-skewed" promoters of 2012. In 2004 there were many plausible arguments for why the polls underestimated John F. Kerry's final tally. And in 2008 there were even weirder arguments for why McCain might win. In 2012 it went up to a whole new level, with a lot of the politically conservative pundit class signing on board out of desperation.

The reality is that out of the space of plausible models you can find something congenial to your own proposition. I very studiously avoided reading much about the debates about skewed polls, even in the comments of this weblog. For example, Dwight E. Howell left this note on September 29th:

You might want to go back and look at how accurate polls have been at predicting elections in the past. The track record isn’t great. Even the exit polls in WI were wrong. It appears the Democrats who wanted the governor out stopped and chatted and the people who voted to keep him largely walked on by including a significant number of Democrats who had to have voted for him.

There is also the non trivial question of how many of the various sub groups are actually going to show up on election day. If you assume that blacks will turn out in the same numbers as his last election you get one result. If you note that the black community has not fared well during his tenure in office and he has deeply offended many black Christians you have to wonder if some of these people are going to bother to show up and vote for him. The Jews have to know his position relating to the Jewish state, etc. He pretty much had a solid Catholic vote last time but he’s at war with the Catholic church. What does this all mean? You’ll find out after the votes are counted.

The votes have been counted, and Dwight E. Howell was full of shit. In fact I badgered Howell on Twitter and on these comment boards to put a wager down on the election, and he finally begged off after he couldn’t evade me, claiming he wasn’t a betting man. I have a hard time dismissing the possibility that Howell himself knew he was a delusional crank full of bullshit on some level! And yet what he said wasn’t crazy.

The reality is that I didn’t read most of Howell’s comment until after the election. The same with the very similar comments that came through in Howell’s wake on that thread (I did not post them, I simply skimmed the first few sentences). A similarity of content across the comments suggested that these individuals were just regurgitating plausible nuggets and feeding their motivating reasoning bugs. And that’s why I avoided detailed inquiry into the issues: I didn’t want to bias my own perspective! This was part of the source of Hank Campbell’s confusion as to my somewhat erratic response on Twitter as I frantically tried to make a bet with him on the election: I didn’t really care about Hank’s theories about the polls, I suspected that the polls were right because I strongly scented a lot of bullshit on the Republican side. I wanted to get Hank down on some bet, and I wasn’t too concerned with the details. In contrast to the odor wafting up from the Republicans, the Democrats seemed sincerely and guilelessly accepting of the polls which favored them. My intuition here could have been wrong, or the perceptions of the parties of interest may have been wrong. But that was really the situation and context which motivated my behavior at the time.

After the election was over I actually started reading some of the arguments about why the polls were skewed, and I find that they are extremely plausible to me. And not just me; John Hawks owes me a drink because he simply didn't believe the turnout models which suggested a demographic more like 2008. The reality is that my instinct was to go with John. I too was very skeptical of the proposition that Obama could turn out the same voters as he did in 2008. And yet he did turn out those voters!

What does this tell me after the fact? The plausibility of any given datum can't outweigh the aggregate. Dwight E. Howell et al. have a lot of plausible historical data. Granted, you have some obvious bullshit "SHOCK POLL" headlines, but only idiots believe those outliers (there are plenty). Rather, if you have a model, there are plenty of data points you can select to get the appropriate outcome. That was my suspicion and worry, and I find that I'm highly susceptible to some of the more cogent and eloquent arguments about turnout models (not Dwight's comment specifically; most of the non-specialists signal that they are just echoing the specialists by transparently garbling and muddling the arguments). My initial instinct not to load up on information and then filter it down to the subset which confirmed my model seems to have been wise.

And importantly, I relied on the expertise of others. I'm just not that motivated or interested in horse-race politics (though I am interested in political history, philosophy, and economy). I assumed that "political junkies" of partisan sentiment would keep track of the likely outcomes, and when Right- and Republican-leaning individuals started making desperate-sounding arguments with the intent of converting themselves, I believed that that signaled that Obama was on the rise. Similarly, I also defer to the collective wisdom of the polls. This does not mean that these two are infallible (my judgment of people bullshitting, or the wisdom of the polls). But it's better than nothing, and I ended up the richer.
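
A minimal sketch of why deferring to the aggregate of polls is reasonable, using a made-up true vote share: any single poll of a few hundred respondents is noisy, but the sampling errors mostly cancel when many polls are averaged.

```python
# Toy simulation: 20 independent polls of 800 respondents each, with a
# hypothetical true two-party vote share of 51%.
import random
import statistics

random.seed(0)
TRUE_SHARE = 0.51
POLL_SIZE = 800

def one_poll() -> float:
    """Share of respondents backing the candidate in one random sample."""
    return sum(random.random() < TRUE_SHARE for _ in range(POLL_SIZE)) / POLL_SIZE

polls = [one_poll() for _ in range(20)]

typical_error = statistics.mean(abs(p - TRUE_SHARE) for p in polls)
average_error = abs(statistics.mean(polls) - TRUE_SHARE)

print(f"typical single-poll error:    {typical_error:.2%}")  # on the order of a point or two
print(f"error of the 20-poll average: {average_error:.2%}")  # several times smaller
```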

All this brings me to Nate Silver's The Signal and the Noise: Why So Many Predictions Fail-but Some Don't and Jim Manzi's Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. These two authors are class acts, and I follow both of their pronouncements closely. I have long appreciated Silver's contribution to the broader discourse, and though Jim Manzi may not be as prominent, he is an important voice for empiricism on the modern political Right, which too often seems to simply be a reiteration of old hopeful ideals. But ultimately, if you are a long-time reader I'd have to say that you should go with Uncontrolled and not The Signal and the Noise.

First, I will touch upon an issue that may seem superficial to many: style. Jim Manzi may not have the most limpid of prose. He is of course spending a great deal of time on epistemology, history of science, and quantitative business strategy. But Silver is a far better blogger than he is a narrative nonfiction writer. Many of the chapters in The Signal and the Noise have a formulaic quality, insofar as the focus is clearly on the ideas, but there are often pro forma biographical introductions of important thinkers. There are writers who do this well. I doubt I would; I'm more interested in ideas than the people generating them. And I suspect so is Silver. He almost certainly finds Futarchy more fascinating than Robin Hanson. The main exceptions tend to be in areas where Nate Silver has some personal connection. The chapters on the quantitative revolution in professional sports scouting and gambling are more lively, with more loving attention to the dramatis personae. And that makes sense if you have some priors in hand: Silver comes out of a quantitative sports analyst background, and he was a professional poker player at one point.

But more importantly, as a work of popularized statistical inference The Signal and the Noise probably would not add much novel data or many cognitive tools for the typical core reader of GNXP. Most of you are presumably aware of Bayesian probability, and of the abuses of modern frequentism. If this is Greek to you, then I would recommend The Signal and the Noise! And perhaps check out the Less Wrong Wiki. If you don't know that economists are notoriously bad at predicting recessions, or that political prediction models based on a few economic or social indices are notoriously good at predicting the past but bad at predicting the future, then The Signal and the Noise may also be for you. And reading this book reiterated to me that Nate Silver is a great blogger whose Weltanschauung is broadly similar to mine. But The Signal and the Noise did not present to me any grand revelations. It was an exploration of topics in which I developed interests in striking parallel with Nate Silver over the aughts. I suspect this is a function of the change in our relationship to data due to the power of computers in terms of both storage and analysis. Silver is a reflection of the age, a herald, not a prophet. We are part of the same army.

Jim Manzi’s Uncontrolled is a somewhat different work, insofar as within the author explicitly outlines the relatively constrained scope of his ambition. The core of Manzi’s argument is that public policy would benefit from more randomized controlled trials (also known as randomized field trials). This seems a plainly sensible project, but Manzi’s assertion is that too often enormous public policy ideas are proposed, and then implemented on a massive and indiscriminate scale. Whether the policy was ever effective or not can often be litigated, because there was never a “control.” In the end Uncontrolled is a plea for experimentation, epistemological humility, and incremental gains on the margin. Obviously Manzi is not presenting himself as Prometheus. Rather, this is a small vision executed on a massive scale. Manzi has seen this work in the business work, and he wants to translate these private sector successes to the public domain.

But perhaps what Uncontrolled does even better than convince you of the efficacy of randomized controlled trials in public policy is show that there are limits to the power of elements of the scientific method in particular domains. This is the old hierarchy-of-knowledge idea, so Manzi is treading over ground familiar to many. In short, physics is easy, and economics is hard. Grand general theories with a few variables have generally failed in economics where they have succeeded in physics. That is not due to a lack of ingenuity among economists (many of whom come from a physics background!), but simply due to the fact that economic phenomena are much more complex than many physical phenomena. In Manzian terms they have "high causal density." There are so many possibilities that simple models and obvious large correlations are not going to be robust, or even existent.

This goes back to why I was very cautious about reading too much about the skepticism of polls before the election. There are so many possibilities it is incredibly easy to conjure up a plausible skepticism of the received wisdom, and present an alternative. True aficionados who wallow in the data can filter the good from the bad, but we civilians rarely can. Importantly I would like to add that this is something Silver acknowledges in The Signal and the Noise. Formal quantitative analysis supplemented by qualitative knowledge trumps quantitative analysis alone. If the pundits who criticize the quants have true knowledge, they will only benefit. If they don’t have true knowledge, as Philip Tetlock has reported, then they have much to fear.

All of this leads us to the common-sense conclusion that the process of attaining knowledge is hard. Powerful math and statistics can give us only so much. Experiment without theory is not illuminating. A theory devoid of empirical data is not persuasive. Randomized experiments without any guiding model or hypothesis may be lacking in insight. These are the outward aspects. But what about the personal strategies for attaining knowledge? If one is focused on one's own domain, one need not over-think this. Presumably, you know your shit. But when you move out of domain you need to be very careful, because you are on alien topography. One suggestion I might make is to be careful of looking too hard for data confirming your prejudices; it is all too easy if you are clever. Rather, look only modestly, and withdraw quickly if you don't find what you are looking for. If it was all that clear and obvious, it would have been clear and obvious to you initially. The bold and plain truth does not hide.

* For those inquiring about Intrade, it is not that easy to deposit money into that system if you are American. Try it.

 

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Epistemology 

Matt Yglesias on the enthusiasm for data mining in economics:

Betsey Stevenson and Justin Wolfers hail the way increases in computing power are opening vast new horizons of empirical economics.

I have no doubt that this is, on the whole, change for the better. But I do worry sometimes that social sciences are becoming an arena in which number crunching sometimes trumps sound analysis. Given a nice big dataset and a good computer, you can come up with any number of correlations that hold up at a 95 percent confidence interval, about 1 in 20 of which will be completely spurious. But those spurious ones might be the most interesting findings in the batch, so you end up publishing them!

Those in genomics won’t be surprised at this caution. I think in some ways social psychology and areas of medicine suffered a related problem, where a massive number of studies were “mined” for confirming results. And we see this more informally all the time. In domains where I’m rather familiar with the literature and distribution of ideas it is often easy to infer exactly which Google query the individual entered to fetch back the result they wanted. More worryingly I’ve noticed the same trend whenever people find the historian or economist who is willing to buttress their own perspective. Sometimes I know enough to see exactly how the scholars are shading their responses to satisfy their audience.
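
The "1 in 20" point is easy to make concrete with a short simulation: generate pairs of variables that are unrelated by construction, test each correlation at p < 0.05, and about five percent clear the bar anyway. Every one of those "findings" is spurious, and in a mined dataset they are exactly the ones that look publishable. (A quick sketch; the critical value below is the standard two-sided cutoff for a Pearson correlation with n = 100.)

```python
import random

random.seed(0)

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

N, TESTS = 100, 1000
CRITICAL_R = 0.197  # |r| above this is "significant" at p < 0.05 for n = 100

false_positives = 0
for _ in range(TESTS):
    x = [random.gauss(0, 1) for _ in range(N)]
    y = [random.gauss(0, 1) for _ in range(N)]  # independent of x by construction
    if abs(pearson_r(x, y)) > CRITICAL_R:
        false_positives += 1

print(f"{false_positives} of {TESTS} truly null correlations passed p < 0.05")
# roughly 50, i.e. about 1 in 20, all of them spurious
```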

With great possibilities comes great peril. I think the era of big data is an improvement on abstruse debates about theory which can’t ultimately be resolved. But you can do a great deal of harm as well as good.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Epistemology 

As most long-time readers know, I generally screen, at least at a cursory level, comments by people who have not posted before. Except for purposes of entertainment I won't publish Creationist comments. Naturally some comments are offensive, but a surprising number of those I just don't let through are of the "not-even-wrong" or "too-stupid-to-understand-the-original-post" class. But yesterday a comment was in the mod queue which really confused me. My initial instinct was to spam it, but I was moderately intrigued, so I let it through. In response to my assertion of having read material which indicated that Gaelic was the language of the Irish peasantry before 1800, Paul Crowley asserted:

What you have read is quite wrong. It is a common misconception (especially in Ireland) based on wishful nationalistic thinking. Farmers and peasants do not drop their native language and learn to speak another without extreme compulsion. While there was some pressure, there was no compulsion. The ancient ruling class — as represented later by the Irish Earls, and as seen in the courts of local chiefs — spoke Gaelic, and it is they who left nearly all the records. Illiterate farmers leave very few records, but what little there is suggests that English has been the tongue of the great bulk of the Irish peasantry for as far back as we want to go. The rebels of 1598 all spoke English. Walter Raleigh had no difficulty understanding the speech of local people in Cork in the 1570s.

The great difficulty with the records is that the ‘data’ on this matter reflects aspirations rather than facts. Since the ‘English’ (actually the Norman-French) invaded in 1172, every self-respecting Irishman has declared his deep love and respect for the language so cruelly taken from him….

There were statements in the comment which I'm very skeptical of (e.g., "Farmers and peasants do not drop their native language and learn to speak another without extreme compulsion" is plainly bullshit; there are plenty of ethnographic and historical counter-examples to this!). But the commenter asserted forcefully, in cogent English. I don't know the area, so though I was very skeptical I let the comment through.

Paul Ó Duḃṫaiġ responded rather well, with citations. In hindsight I made a mistake in letting the original comment go through without citation. But I assumed that Paul & Paul would respond to Paul (the Irish are not creative in first names?), and they did. Paul Crowley's success in getting through my bullshit filter indicates the power of assertive coherency; far too many nuts exhibit standard nut style. Pegging someone as a nut by style rather than substance is far easier. In the case of substance you have to have a relatively good grasp of the field. Irish historical linguistics is not a field in which I'm very deeply knowledgeable, so I used my style bullshit detector, despite my misgivings.

This is analogous to the “Shaggy defense.” Make shit up in the face of overwhelming evidence, and see if anyone buys it. It worked with me. Live and learn.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Culture, Epistemology 

Nate Silver has an important post, Herman Cain and the Hubris of Experts. It's not really about Herman Cain. Rather, it's about the reality that pundits tend to underestimate uncertainty and complexity. Saying you don't know isn't as satisfying as making a definitive categorical assertion. This manifests particularly in the domains of sports and politics because there are clear and distinct criteria to assess predictive power. Politicians win or lose elections, while teams win or lose games. And yet despite the long history of minimal value-add on the part of pundits, they persist in both domains. Why? I think it's pretty obviously a cognitive bias toward storytelling. Similarly, in the 1930s the economist Alfred Cowles concluded that financial newsletters didn't help their readers "beat the market," but he also assumed these newsletters would persist. There was a psychological need for them.

The key here is to change the attitude of the pundit class. The populace will always have a preference for stories with plausible and clean conclusions over radical uncertainty. Not surprisingly, many professional pundits reacted with hostility to Silver's observation that they're quite often wrong. I don't venture into political punditry often, but when the Democrats passed health care reform I predicted that Mitt Romney would have no shot to win the Republican nomination. The facts in this case seemed so clear. Romney was going to be walloped over and over again on his record on health care reform when he was governor. I was wrong. Romney may not win, but obviously he's a contender. My logic was simple and crisp, but the logic was wrong. That's why you let reality play out. If what was "on paper" determined national elections, then we'd be talking about President Hillary Clinton.

Of course political journalists that engage in analysis still have a role to play. Don’t newspapers have horoscopes and style sections?

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Ideology, Science • Tags: Epistemology, Politics 

If you know of John Ioannidis' work, Jonah Lehrer's new piece in The New Yorker won't be a surprise to you. It's alarmingly titled The Truth Wears Off – is there something wrong with the scientific method? Here are some sections which you can't get without a subscription, and I think they get to the heart of the problem:

“Whenever I start talking about this, scientists get very nervous,” he says….

Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”

There is no mysterious "force" in the universe. The answer is probably going to come down to a combination of the reality of randomness (regression to the mean falls into this category), individual bias, and the cultural incentives of the system of scientific production. This is partly a coordination problem. Most social psychologists, to pick on one discipline which even other psychologists will finger-point toward, are probably aware that their results aren't going to be robust over the long haul. But they have tenure to gain, mortgages to pay, and fame to accrue. This is not furthering the collective system-building which is science, but the first person to opt out of the rat race for sexy findings with publishable p-values will soon be an ex-scientist.
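
A toy simulation of that mundane explanation (my own sketch, not anything from the article): posit a small true effect, let noisy studies estimate it, and publish only the estimates that clear statistical significance. The published record comes out inflated, and unfiltered replications regress back toward the modest truth, no cosmic force required.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2                 # small real effect, in standard-deviation units
SE = 0.15                         # sampling noise on each study's estimate
SIGNIFICANCE_CUTOFF = 1.96 * SE   # roughly the p < 0.05 threshold for this standard error

published, replications = [], []
for _ in range(10_000):
    original = random.gauss(TRUE_EFFECT, SE)     # the first study's estimate
    if original > SIGNIFICANCE_CUTOFF:           # only "significant" results get written up
        published.append(original)
        replications.append(random.gauss(TRUE_EFFECT, SE))  # later replication, no filter

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published estimate: {statistics.mean(published):.2f}")     # inflated, ~0.38
print(f"mean replication:        {statistics.mean(replications):.2f}")  # back near 0.20
```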

If you don’t have a subscription to The New Yorker, buying one off the newsstands for an article like this is much more worthwhile than another boring political profile. You should also check out Why Most Published Research Findings Are False. You can read that for free. Also see David Dobbs’ How to Set the Bullshit Filter When the Bullshit is Thick.

Note: Statistics are ubiquitous across many of the sciences, but the reality is that most people who use statistics don't understand them too well. That's not necessarily an issue; most people who use computers don't know how they work either. But then again, most people don't use the mouse as a foot pedal.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Epistemology 

Update: The title is way too strong as a reflection of my opinion. I’ve added a question mark.

A friend once observed that you can’t have engineering without science, making the whole concept of “social engineering” somewhat farcical. Jim Manzi has an article in City Journal which reviews the checkered history of scientific methods as applied to humanity, What Social Science Does—and Doesn’t—Know: Our scientific ignorance of the human condition remains profound.


The criticisms of a scientific program as applied to humanity are deep, and two-pronged. As Manzi notes, the "causal density" of human phenomena makes teasing causation from correlation very difficult. Additionally, the large scale and humanistic nature of social phenomena make it ethically and practically impossible to apply the methods of scientific experimentation to them. This is why social scientists look for "natural experiments," or rely on extrapolation from "WEIRD" subject pools. But as Manzi notes, many of the correlations themselves are highly context-sensitive and not amenable to replication.

He concludes:

It is tempting to argue that we are at the beginning of an experimental revolution in social science that will ultimately lead to unimaginable discoveries. But we should be skeptical of that argument. The experimental revolution is like a huge wave that has lost power as it has moved through topics of increasing complexity. Physics was entirely transformed. Therapeutic biology had higher causal density, but it could often rely on the assumption of uniform biological response to generalize findings reliably from randomized trials. The even higher causal densities in social sciences make generalization from even properly randomized experiments hazardous. It would likely require the reduction of social science to biology to accomplish a true revolution in our understanding of human society—and that remains, as yet, beyond the grasp of science.

At the moment, it is certain that we do not have anything remotely approaching a scientific understanding of human society. And the methods of experimental social science are not close to providing one within the foreseeable future. Science may someday allow us to predict human behavior comprehensively and reliably. Until then, we need to keep stumbling forward with trial-and-error learning as best we can.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Epistemology, Social Science 
Razib Khan
About Razib Khan

"I have degrees in biology and biochemistry, a passion for genetics, history, and philosophy, and shrimp is my favorite food. If you want to know more, see the links at http://www.razib.com"