Sunday, April 26, 2009

Brain and Culture: Neurobiology, Ideology, and Social Change


Author: Bruce E. Wexler

Brain and Culture is in five chapters: Introduction, Transgenerational Shaping of Human Brain Function, Effects of Sensory Deprivation and Sensory Enrichment on Brain Structure and Function, Self-Preservation and the Difficulty of Change in Adulthood, and The Meeting of Cultures. The book integrates research in neurobiology and cognition to generate an account of how the cultural environment shapes the brain, and to draw out the implications, for social theory, of the decrease in neuroplasticity from childhood to adulthood. The five chapters begin with a review of brain research, including both animal and human studies; continue with a review of the social nature of the human brain and the neurological basis of culture; and finish with an interpretation of conflicts, from the historic past to the recent past, in light of the biological origins of culture.

Wexler explicates two essential points in the book. First, the environment in which a human brain develops exerts important influences on how that brain develops. In particular, Wexler describes the development of young brains, and how parents and the social interactions of the culture create familiar patterns for a young human. Those early experiences create expected patterns of reality as a brain develops. Second, when an incongruity exists between the internal expectations of a brain and the external reality of a new situation, conflict can result. For an individual, the conflict can be exhibited as denial of new information, forgetting new information, or interpreting new information in a manner consistent with previous expectations. What is important, Wexler claims, is that the incongruities between the environment and the developed brain introduce distress and dysfunction. Wexler explores the incongruities that arise with the death of family members, with immigration, and with other cultural changes. In his discussion of these subjects, Wexler explores the relation between the internal structure of the brain and the external environment. For example, human frontal lobes (which, as Wexler points out, are "thought to be closely associated with values, morality, emotion, and other personality traits") are not fully mature until the age of 20 to 25 years. This late maturation may provide an evolutionary advantage, he says, in that it affords more time "to incorporate the growing collective wisdom and latest innovations."

In Part II of the book, "The Neurobiology of Ideology", Wexler uses empirical data from laboratory experiments, describing, for example, how brain-imaging studies have correlated activation of the amygdala -- induced when people view pictures of ethnically diverse human faces -- with social prejudice. Wexler explains that people develop internal, experience-determined neural structures that "limit, shape, and focus perception" on the aspects of environmental stimulation that they commonly experience. Their external and internal worlds, therefore, act in concordance with each other. Wexler argues that when people are faced with information that does not agree with their internal structures, they deny, discredit, reinterpret or forget that information. When changes in the environment are great, corresponding internal changes are accompanied by distress and dysfunction. The inability to reconcile differences between strange others and ingrained notions of "humanness" can culminate in violence. The neurobiological imperative to maintain a balance between internal structures and external reality fuels this struggle for control, which contributes to making the contact zone a place of intractable conflict. The result manifests itself in our world today in, to give two examples, racial inequality and intercultural hostility.

Wexler describes how the prejudicial beliefs that lead to cultural clashes derive directly from sociocultural input, beginning with the important adults (parents, for instance) to whom an individual is exposed during childhood. He makes a few other bigger leaps that are less easy to digest, such as when he compares a kitten's experience with unfamiliar oblique lines in a visual-plasticity experiment to that of an immigrant displaced from a village distinguished by flatlands to a city of skyscrapers. But his arguments are provocative and thoughtful nonetheless.

But do the "incongruities" have to lead to "dysfunction"? Might they not instead open up new possibilities for exploration and understanding? That is what the author says might happen in the last chapter of his book. Some unknowns do bring joy. Personality, sense of identity and taste can have a profound effect in determining whether unfamiliar stimuli are perceived as negative or positive. People often have positive reactions to new experiences, such as the sound of an agreeable piece of music never heard before or the smell of a delicious but unfamiliar recipe.

Wexler's position is that familiarity, or "consonance between inner and outer worlds," does generate a pleasurable experience as well. An external event that coincides with a past experience in a person's life, he asserts, is enjoyable "merely on the basis of familiarity and independent of any qualities of the object." But people often express negative reactions toward familiar stimuli too, such as a job one has held for a long time, or the familiar social environments that some immigrants avoid because they might incite memories of painful or stressful experiences. It is also possible to say, however, that not all goal-directed behavior can be explained by the internal-external dichotomy, as Wexler appears to claim; consider, for instance, what Doidge says in "The Brain That Changes Itself" (reviewed in Metapsychology 11:39).

In Brain and Culture, Bruce Wexler explores the socio-cultural implications of the close and changing neurobiological relationship between the individual and the environment. He performs the job marvelously, in a readable text.

Saturday, April 25, 2009

In Pursuit of the Gene: From Darwin to DNA


Author: James Schwartz

Schwartz aims to recount the history of genetics through the biographies of the principal researchers who advanced (and sometimes held back) the theory's development. The result is a highly readable work with something of the character of a soap opera about it. Rather than dusting off some ancient lab equipment and taking the reader through a technical account of the major experimental steps in the development of genetics, Schwartz has turned to the personal correspondence of the (mostly) men involved, to reveal the personal relationships between competing and collaborating researchers. It is this that lends the soap opera character to his narrative, for as Schwartz remarks "the early geneticists were a particularly passionate group, pathologically competitive in some instances and utterly selfless in others, prone to intense loyalties as well as overwhelming hatreds, singularly idealistic and ruthlessly pragmatic." (x) No daytime TV producer could hope for a cast of characters more likely to 'mix it up' with all manner of plot and intrigue.

The main narrative thread follows the search for the mechanism of heredity. Darwin's brilliant introduction of the notions of common descent and natural selection into biology had this one major oversight: while it was clear that the traits of an organism were passed on and that this explained the differential success of different organisms, the nature of the unit of heredity and how exactly it worked were unknown in Darwin's day. Although Darwin would not live to see it, the discrete nature of the unit had been demonstrated by Gregor Mendel just six years after the publication of On the Origin of Species, though it would take until 1900 for Mendel's work to have an impact on the scientific discussion. Darwin's own attempt to address the problem, the theory of pangenesis, was a large step in the wrong direction. The theory of pangenesis was founded on the assumption that offspring are a blend of each parent's traits, but this clearly would have the consequence that variation over time disappears, since each generation 'averages out' the distinctiveness of the preceding generation and thus tends to increasing sameness. In a fatal move, Darwin revived a Lamarckian notion of heritable acquired traits to supply the needed variation. Though both this and the 'blending hypothesis' would eventually prove mistakes, they would also have their champions in the subsequent search for a theory of heredity.
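The 'averaging out' problem is easy to verify numerically. In the minimal sketch below (my illustration, not from Schwartz's book), each offspring's trait is simply the mean of two randomly chosen parents, and the population variance roughly halves every generation:

```python
import random
import statistics

random.seed(0)

# Start with a population whose trait (say, a height deviation) has variance ~1.
pop = [random.gauss(0, 1) for _ in range(100_000)]

for gen in range(6):
    print(f"generation {gen}: variance = {statistics.variance(pop):.3f}")
    # Blending inheritance: each offspring is the average of two random parents.
    pop = [(random.choice(pop) + random.choice(pop)) / 2 for _ in pop]
```

Variance falls geometrically, so after a handful of generations the population is nearly uniform: exactly the loss of variation that forced Darwin to reach for Lamarckian acquired traits as a fresh source of differences.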

It may surprise some readers to learn of the intimate connection between the development of a theory of heredity and statistical methods. Darwin's cousin, Francis Galton, saw the possibility of applying the bell curve to inheritance, though his motive in this was the pursuit of his own dubious views on the improvement of the human mind through eugenics, the term itself having been introduced by Galton. Fortunately, it was the application of statistics that caught on among the next generation of biologists concerned with heredity, and major developments in both sciences followed from this relationship.
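Galton's bell-curve insight can be sketched with a toy model of regression toward the mean (the figures below are illustrative assumptions, not Galton's data): children of unusually tall parents tend to be tall, but less tall than their parents.

```python
import random
import statistics

random.seed(1)

mean, sd, r = 68.0, 2.5, 0.6   # height in inches; r = parent-child correlation (assumed)

pairs = []
for _ in range(50_000):
    parent = random.gauss(mean, sd)
    # The child's expected deviation from the mean is only a fraction r of the
    # parent's deviation; the rest is independent normal noise.
    child = mean + r * (parent - mean) + random.gauss(0, sd * (1 - r**2) ** 0.5)
    pairs.append((parent, child))

tall = [(p, c) for p, c in pairs if p > mean + sd]   # markedly tall parents
avg_parent = statistics.mean(p for p, c in tall)
avg_child = statistics.mean(c for p, c in tall)
print(f"tall parents average {avg_parent:.1f} in; their children {avg_child:.1f} in")
```

The children of tall parents land between the population mean and their parents' average, which is the pattern Galton observed and named.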

Galton's example presents a recurring pattern in Schwartz's narrative. Even while grinding his own idiosyncratic axe, Galton nevertheless made real advances and inspired others to take up the torch. What strikes the reader is how many of the early geneticists obsessively pressed their own peculiar (and mostly ill-fated) theories beyond any reason, and defended them in barely-polite intellectual combat against rivals. That any coherent theory arose out of this fray seems a wonder. But then, wonders never cease, as the saying goes: in spite of their quirks, genuine contributions were made by these odd men.

The star example of these rivalries must be Thomas Hunt Morgan's so-called 'fly room' at Columbia University. Many laypersons will recognize that fruit flies have something to do with genetics: here is the source. The basis of modern genetics came together in Morgan's lab, through breeding experiments on fruit flies conducted by a small collection of talented graduate students, including Alfred Sturtevant, Calvin Bridges and Hermann Muller. Despite the outward appearance of enthusiastic scientific collaboration, the fly room was the site of great strife and bad feeling over who was taking credit for group accomplishments, with Morgan himself seeming to be the principal culprit. The hostilities would reach the petty depths of trying to write each other out of history in memoirs. Despite attacks on his reputation, Muller would eventually win the Nobel Prize for artificially inducing mutations with X-rays.

For the reader more interested in the history of the science of genetics, In Pursuit of the Gene might be a disappointment. The science is there, of course, but thoroughly embedded in the human relationships of the scientists. Part of Schwartz's point in choosing this approach is to illustrate how personal the life sciences can become, given the directness of their implications for human life and relationships. This his book does very well. Galton, for example, seems to have come to his obsession with the heritability of genius shortly after failing in his attempt to become Senior Wrangler, the highest honor a Cambridge mathematics student can earn. It's not hard to imagine his scientific concerns as playing proxy for inner conflict.

Wednesday, April 22, 2009

False Economy: A Surprising Economic History of the World


Author: Alan Beattie

False Economy is a book about how economic triumphs and disasters have shaped the world – and why it’s so hard to change the course of history once decisions have been made. The book’s central idea is that our smart or stupid choices determine whether a country’s economic development is successful – but that success is still often a surprise.

The reader has to work to arrive at this or any other central idea, however. For much of this fascinating but sometimes maddening book, Alan Beattie, the FT’s world trade editor, seems to follow Mark Twain’s famous preface to The Adventures of Huckleberry Finn: “Persons attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot.”

Some themes took shape as I read – but they were often contradictory. We should learn the lessons of economic history, says Beattie, yet those lessons are often unclear.

Beattie is a legendary economic journalist whose analytical judgment exceeds that of most of his peers by an amount roughly equal to the income difference he reports between Botswana and Sierra Leone.

Each chapter is themed – cities, religion and so on – and the individual stories are mesmerising. The book’s inability to convey a clear point arises because Beattie is too honest to shoehorn all his material into one comprehensive theory of what makes some countries succeed and others fail, including the twists and turns along the way.

Beattie accurately reflects the collapse in self-confidence among economists on our ability to usefully recommend how “developing” countries can rapidly develop. And he’s right about the reasons for this: both success and failure have often caught us by “surprise”, the key word in the book’s subtitle.

We are given many examples of such surprises. Theological theorists of economic success traditionally celebrated Protestantism – until Catholic Ireland, Portugal and Spain took off. They damned Confucianism, then praised it once East Asia grew. The religious determinists are still confused now about whether Islam hinders growth, when Muslim success stories such as Malaysia consistently outperform Christian comparators such as the Philippines.

As for other factors, corruption is a disaster in Africa, but apparently equal levels of corruption don’t seem to hurt East Asia, Beattie shows.

The book is rife with interesting conundrums: the Nile river valley is one of the most fertile places on earth, yet Egypt imports half of its wheat; Peru rather than California has captured the US asparagus market; West Africa is the perfect location and climate to produce cocaine for Europe, but coke is instead made in distant Colombia – then routed through West Africa.

Beattie’s explanations, the best parts of the book, are on his home turf: international trade. In a wonderful exposition that should make it into all undergraduate economics classes, Beattie argues that, fertile Nile or not, Egypt is not so much importing wheat as importing water. Only the Nile provides water for the mostly desert country. Wheat takes much water to grow, so the water imports are contained in the wheat imports. By importing wheat, Egypt conserves its own water for drinking and uses other countries’ water to grow wheat shipped to Egypt. This is a wonderful illustration of how trade allows countries to import scarce resources, buying in the goods that would use them, and export their abundant resources, by selling the goods that use those.
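The arithmetic behind this "virtual water" idea is simple enough to put in a back-of-the-envelope sketch (the figures are round illustrative assumptions of mine, not Beattie's):

```python
# Approximate water needed to grow wheat (illustrative round figure, ~1,500 L/kg).
litres_per_kg_wheat = 1_500

# A hypothetical annual wheat import volume, in tonnes.
wheat_imports_tonnes = 10_000_000

litres = wheat_imports_tonnes * 1_000 * litres_per_kg_wheat
cubic_metres = litres / 1_000
print(f"water embedded in the imported wheat: {cubic_metres / 1e9:.0f} billion cubic metres")
```

On these assumptions, every tonne of wheat bought abroad is, in effect, about 1,500 cubic metres of someone else's water, which is why a desert country sensibly imports grain and saves the Nile for drinking.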

To Beattie, the surprising and sad thing is that there are so few “water” exchanges and other such beneficial trades. This is partly because special interests distort trade. For example, the US preaches free trade to Africa, but prefers to keep out African cotton and give $4bn in subsidies to 10,000 American cotton farmers instead. Why? Senators in those farming states would block any reform of the cotton subsidy.

Is importing Peruvian asparagus a happy counter-example of US free trade policy? Unfortunately not, Beattie tells us. It was part of a drug war programme to subsidise Peruvians to grow asparagus rather than coca leaves, giving them privileged access to the US market.

And the cocaine still pours out of Colombia, not West Africa, because even an illegal export needs good roads to get the coca leaves to the factory and then to the port. West Africa’s lack of good roads, Beattie points out, costs it much more in lost exports than even trade policies such as those on cotton.

Beattie is resolutely agnostic about how to succeed economically, which is admirable compared to the often pompous pronouncements pouring out of international agencies and think-tanks. Yet his examples suggest this agnosticism is a little overdone.

Beattie actually makes a good case for free trade, but is more comfortable pointing out paradoxes than confirming orthodoxy. He is even more uncomfortable giving any strong guidance on overall development success.

But in being so stubbornly agnostic, I think Beattie makes the same mistake as academics such as Dani Rodrik. Both Beattie and Rodrik are determined to explain all medium-term fluctuations in economic growth rates – they can’t accept that some of the idiosyncratic consequences of decisions resist explanation.

Looking in hindsight, one can emphasise the favourable factors, minimise the unfavourable factors and produce a plausible explanation. Sometimes these explanations will be orthodox economics 101 (free trade worked!); other times they will be heterodox (government industrial policy worked!). But these “explanations” cannot really be proved or disproved; each particular bout of success or failure is special. Ultimately, they give little guidance – to other countries, or even to the same country in a future period.

Economics has a better track record explaining very long run patterns of success or failure – explaining the level of development (per capita income) more than the economic growth rate. This requires economists to swallow our professional pride and admit that many fluctuations around very long run paths are indeed surprising – they will always resist explanation or prediction.

Even the long run patterns don’t explain everything; any prescription has counter-examples. We must give up the search for the grand unified theory of short run and long run growth and development, a bitter pill indeed. But at least this would keep us from chasing the ephemeral “miracles” for their unstable “lessons”, and keep us from trying to squeeze a prescription out of every last counter-example. Instead, we’d stick with ideas that, on average, for most countries, have stood the test of time.

Beattie’s supremely entertaining and informative book is a great reminder that the details of success are often impossible to predict or prescribe: no one can work out how to achieve each component. The best response is not to have increasingly convoluted advice by experts, but to let individuals with local knowledge roam free by trial and error to find their own successes.

So in the end, the economics profession does have more sensible things to say about achieving long-run success than Beattie allows: (relatively) free individuals, free markets, free trade, free thinking, and institutions that support all of the above.

William Easterly is author of ‘The White Man’s Burden: Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good’ (Penguin) and writes the blog ‘Aid Watch’

Friday, April 17, 2009

God: The Failed Hypothesis: How Science Shows That God Does Not Exist

Author: Victor J. Stenger

Treating the traditional God concept, as conventionally presented in the Judeo-Christian and Islamic traditions, like any other scientific hypothesis, Stenger examines all of the claims made for God's existence. He considers the latest Intelligent Design arguments offered as evidence of God's influence in biology. He looks at human behavior for evidence of immaterial souls. He discusses the findings of physics and astronomy in weighing the suggestions that the universe is the work of a creator. After evaluating all the scientific evidence, Stenger concludes that, beyond a reasonable doubt, the universe and life appear exactly as we might expect if there were no God.

Pro:
• Difficult, complicated science is presented in a manner that is understandable for most audiences
• Provides extensive aid to skeptics and nonbelievers trying to address common theistic arguments

Con:
• Although the science is made as understandable as possible, it may still be a bit much for some
• Probably won't be read by as many people as should read it

Description:
• Analysis of the scientific evidence relating to common attributes ascribed to God
• Argues that the scientific evidence is overwhelmingly contrary to the idea that God exists
• Explains how and why life, our planet, and our universe are all natural products of natural forces

Book Review

Given how much science has had to say about nearly everything else in our lives and how successful science has been in transforming nearly every aspect of them, it is at the very least initially implausible that science can and should be excluded from debates about the existence of gods. Then there is the fact that theists themselves frequently trot out arguments that rely upon scientific data — or at least misrepresentations of scientific data — in order to bolster their positions. Finally, we must face the fact that any alleged god that matters will have some sort of impact on our lives, our planet, and our universe.

Only a completely irrelevant god could leave no trace or imprint whatsoever, so if there is a god and it does matter, then it should be detectable even by a science that is completely limited to observations about the natural, material world. Indeed, most believers — and especially adherents of the three prominent monotheistic religions: Judaism, Christianity, and Islam — posit precisely such a god that is active, detectable, and relevant to our material universe.

Given all this, it's not feasible to pretend that science can have nothing to say; according to Victor J. Stenger, science does in fact have a great deal to say — and none of it will be comforting to the average believer. According to Stenger, science may not know everything, but it knows enough and has advanced far enough to provide substantial empirical evidence against the existence of the god which most people tend to believe in.

Stenger's book God: The Failed Hypothesis lays out a model of a god based upon common attributes which people typically ascribe to their god. Those attributes are: God created the universe; God designed the laws and structure of the universe; God makes changes in the universe whenever necessary; God created and designed life; God has a special plan or purpose for humanity; God gave humans immaterial, immortal souls; God created morality; God revealed truths such as these to humanity; God does not hide from humanity.

Each attribute corresponds to some feature of the natural world which should be both true and discoverable, if some being with that attribute exists. Using such a model of God, then, scientists should be able to look at the universe to determine whether it is consistent with any of those attributes. If so, then we have evidence that some being with the attribute likely exists; if not, then we have evidence that no being with that attribute exists — and if no being with one of those attributes exists, it will be difficult for traditional religions to explain how any being with the rest can still be believed in.

It is Stenger's conclusion, after examining the scientific data relating to every attribute, that the empirical scientific evidence is overwhelmingly against the existence of any being possessing any of them. In short, none of the standard attributes accepted by most believers as being true about their god can be salvaged in light of known facts about the universe. This, in turn, prevents any rational, reasonable, or justified belief in such a god from being salvaged. Life was not designed, it evolved naturally. The universe was not created, it arose naturally. Morality was not divinely created, it evolved naturally. The universe was not fine-tuned, it's just what we would expect to find.



This book is aimed at general audiences, not specialists in any scientific or philosophical fields. Sometimes this means recounting historical developments -- like the progress of creationism in America -- which will be all too familiar to most of the atheists reading it. For such atheists, some portions of the book will be uninteresting, but it makes sense to include the material because, hopefully, some readers will be doubters and believers who aren't already familiar with it all.

This isn't a book of philosophy, sociology, or theology: it's a scientific book, and given the very fuzzy theological concepts Stenger is trying to address, it's probably about as rigorously scientific as such a book can be. At the same time, though, scientific concepts and methods are explained in a manner that should be accessible to most audiences with at least a little familiarity with science — and all in under 300 pages. You can expect theists to offer broad redefinitions to their god-concept in response to scientific arguments like those here. In such a situation, we should ask: what does this new god have to do with the one most people have traditionally believed in, and does this new god really matter anymore?

Sunday, April 12, 2009

From Eternity to Here: The Quest for the Ultimate Theory of Time

Author: Sean Carroll

Why do we remember the past, but not the future? Why don't we meet people who grow younger as they age? Why do things, left by themselves, tend to become messier and more chaotic? What would Maxwell's Demon say to a Boltzmann Brain? The answers can be traced to the moment of the Big Bang -- or possibly before.

From Eternity to Here examines the arrow of time: why the past is different from the future. It's an easy question to ask, much harder to answer. The solution lies in the behavior of entropy, a measure of disorder, which tends to increase according to the celebrated Second Law of Thermodynamics. But why was entropy ever small in the first place? That's a question that has been tackled by thinkers such as Ludwig Boltzmann, Stephen Hawking, Richard Feynman, Roger Penrose, Alan Guth, and Sir Arthur Eddington, all the way back to Lucretius in ancient Rome. But the answer remains elusive.
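The one-way behavior of entropy that Carroll starts from can be seen in a classic toy model, the Ehrenfest urn (my illustration, not taken from the book): molecules in a box drift toward a 50/50 split between the two halves, and essentially never drift back.

```python
import random

random.seed(0)

N = 1_000        # molecules in the box
left = N         # start in a very low-entropy state: everything on the left

for step in range(20_001):
    if step % 5_000 == 0:
        print(f"step {step:5d}: {left:4d} molecules on the left half")
    # Pick a molecule uniformly at random and move it to the other half.
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
```

The imbalance decays toward N/2 and then merely fluctuates there; the reverse trip, back to all-on-one-side, is overwhelmingly improbable rather than forbidden. That statistical character is what makes the Second Law an explanation of the arrow of time rather than a postulate about it.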

The only way to understand the origin of entropy is to understand the origin of the universe -- by asking what happened at the Big Bang, and even before the Big Bang. From Eternity to Here discusses how entropy relates to black holes, cosmology, information theory, and the existence of life. The book tells a story that starts in the kitchen, where we can turn eggs into omelets but never the other way around, and takes us to the edges of the universe. Modern discoveries in cosmology -- dark energy and the accelerating universe -- and quantum gravity -- the possibility of time before the Big Bang -- come together to suggest a picture of a multiverse in which the arrow of time emerges naturally from the laws of physics.


  • Sean Carroll is a theoretical physicist at the California Institute of Technology. He received his Ph.D. from Harvard in 1993, and worked at MIT, the Institute for Theoretical Physics at UC Santa Barbara, and the University of Chicago before moving to Caltech. His research involves theoretical physics and astrophysics, focusing on issues in cosmology, field theory, and gravitation. He is the author of Spacetime and Geometry, a graduate-level textbook on general relativity; has produced a set of introductory lectures for The Teaching Company entitled Dark Matter and Dark Energy: The Dark Side of the Universe; and blogs regularly at Cosmic Variance. He lives in Los Angeles with his wife, science writer Jennifer Ouellette.

Tuesday, April 7, 2009

How Professors Think: Inside the Curious World of Academic Judgment


Author: Prof. Michèle Lamont

Excellence. Originality. Intelligence. Everyone in academia stresses quality. But what exactly is it, and how do professors identify it?

In the academic evaluation system known as "peer review," highly respected professors pass judgment, usually confidentially, on the work of others. But only those present in the deliberative chambers know exactly what is said. Michèle Lamont observed deliberations for fellowships and research grants, and interviewed panel members at length. In How Professors Think, she reveals what she discovered about this secretive, powerful, peculiar world.

Anthropologists, political scientists, literary scholars, economists, historians, and philosophers don't share the same standards. Economists prefer mathematical models, historians favor different kinds of evidence, and philosophers don't care much if only other philosophers understand them. But when they come together for peer assessment, academics are expected to explain their criteria, respect each other's expertise, and guard against admiring only work that resembles their own. They must decide: Is the research original and important? Brave, or glib? Timely, or merely trendy? Pro-diversity or interdisciplinary enough?

Judging quality isn't robotically rational; it's emotional, cognitive, and social, too. Yet most academics' self-respect is rooted in their ability to analyze complexity and recognize quality, in order to come to the fairest decisions about that elusive god, "excellence." In How Professors Think, Lamont aims to illuminate the confidential process of evaluation and to push the gatekeepers to both better understand and perform their role.

Saturday, April 4, 2009

Economics for Everyone: A Short Guide to the Economics of Capitalism


Author: Jim Stanford

Reviewed by Bill Burgess

Economics for Everyone is attracting a lot of attention in the labour movement in Canada for its popular exposé of many flaws in mainstream economic theory.

But what many people are looking for is a sound basis for understanding the rapidly developing economic crisis and what to do about it. Here the book fails. It betrays too much faith in capitalism.

Antidote to neo-classical economic theory

Jim Stanford is the staff economist for the Canadian Auto Workers union (CAW), and he explains economic concepts in clear, accessible language. The main points are summarized in a statement of “A Dozen Things to Remember About Economics,” available, along with further resources and study guides, at www.economicsforeveryone.ca/.

What makes this book different from most economic texts is Stanford’s emphasis that, contrary to the claims by mainstream (neo-classical) economists, their theory and policy are very political, and very pro-employer. They assume that markets are natural when they are really socially constructed. Wages, working conditions and social programs are not determined by objective economic “laws,” they are the result of class or trade union struggle.

One function of mainstream economics is to justify the profits that capitalists reap from economic activity. Neo-classical economists claim that profit rates are determined by what capital contributes to output. However, as Stanford explains, the neo-classical model of capital actually assumes the profit rates they claim it explains. The advocates of this model were forced decades ago to admit this circularity in their reasoning, but they lack any solution to this flaw in their supposedly objective explanation for profits.

One form of capital is fixed capital, like the paper machine as long as a football field that I used to work on. Each shift, tons and tons of paper rolled off my end of the machine. It was very hard to resist the notion that it was the machine that made the paper, that the machine was productive. Stanford explains that such fixed capital is instead a kind of tool that makes labour more productive.

The book’s discussion of labour costs takes up an important issue that is omitted by mainstream economists, namely the intensity of work. Neo-classical economic theory only considers the market price of labour time and skill. But labour intensity is regulated differently, by compulsion within the workplace. Because they consider only market relations, mainstream economists ignore the role of class relations of production.

“Free” trade is a favourite policy of neo-classical economists, who advocate it on the basis of the theory of “comparative advantage.” The theory claims that it always pays countries to specialize and trade, even if one country can produce all tradable goods at a lower cost than the other countries. Stanford shows why the theory does not apply. He describes how it was by not following the neo-classical prescriptions that East Asian economies managed to develop in recent decades.

The book includes a chapter on household and other non-market labour. Mainstream economists simply exclude the decisive contribution made by those who keep us all alive and able to go back to work each day.

In short, this book provides an accessible introduction to some of the concepts and terminology needed to discuss economic issues. It is particularly valuable for its popularization of flaws in neo-classical economic theory like those noted above. By the end of the book, I think readers will identify with economist Joan Robinson’s statement that, “The purpose of studying economics is…to avoid being deceived by economists.”

Too much faith in capitalism

The problem with Economics for Everyone is its inadequate assessment of capitalism. Stanford targets “bad” capitalism rather than capitalism itself.

On issues like the global ecological crisis and the failure of development in poor countries, Stanford suggests that capitalism is to blame, especially its neo-liberal (i.e. post-1970s) variant. Yet, these issues are marginal to his analysis of the economies in advanced capitalist countries. He never addresses their imperialist nature, notably the reality of imperialist war.

Stanford nods to the ideal of an economy organized to meet human need rather than private profit. However, this is not projected as a vital necessity for today. In his chapter titled “Improving Capitalism,” he instead writes that, “Fighting to make our respective countries more like the Nordic version of capitalism … is a challenge that rightfully deserves our first attention.” (‘Nordic’ here refers to the Scandinavian countries of Europe.)

In the final chapter titled “Replacing Capitalism?” Stanford notes “the scandalous failure of capitalism to meet basic needs for so many.” But he then concludes, “On the other hand, there is an absence of compelling real world evidence that any other system … would reliably do better.”

Stanford explicitly accepts the framework of capitalism in his detailed policy agenda. He proposes no measures to replace and displace the capitalist market. One example is that nationalization is limited to “natural monopolies” and industries producing what are narrowly defined as “public goods.” He is emphatic that “private business investment spending remains at the core of the economic strategy.”

What causes economic crises?

Written just before the financial collapse in 2008, Economics for Everyone notes that some economists suggest that “the ingredients may be in place for the commencement of a new period of sustained and relatively stable capitalist growth.” Stanford concludes that, “the jury is still out on whether this modern, tough-love incarnation of capitalism has really established the conditions for a longer-run winning streak.”

Well, that verdict has now been rendered, and it shows how utopian it was to believe that some kinder, gentler “Nordic” version of capitalism is on offer.

The book correctly notes that, “Those of us hoping for something better from the economy cannot wait around for capitalism to self-destruct.” Also correctly, it says, “The only factor that poses a genuine challenge to the current order is the willingness of human beings to reject the injustice and irrationality of this economy, and stand up to demand something better. Capitalism will not fall – rather, it must be pushed.”

But how and why would this happen? Stanford is very clear: “I do not see convincing evidence of an inherent, systematic vulnerability of capitalism.” In other words, he disagrees that this system is characterized by a deep-seated tendency towards producing more goods than people under capitalist relations of production can afford to buy. Socialists have concluded that the capitalists’ only “solution” is social and ecological barbarism.

One telling illustration of Stanford’s perspective is that while he agrees there is an “inherent instability of a decentralized, profit-driven economy,” his explanations for particular economic crises are shallow. They are blamed on “negative events or shocks” external to workings of capitalism proper.

For example, the book describes the global downturn in the early 1980s as being caused by U.S. monetary policy. Even the Great Depression of the 1930s is attributed to stock market speculation rather than expressing something deeper in the nature of capitalism. At a January 30, 2009, public meeting in Vancouver, Stanford similarly blamed the current crisis on financial speculation, and blamed this speculation on neo-liberalism. It is as if such crises float above capitalism itself.

Since he disagrees that capitalist economic crises are systemic, Stanford also underestimates what it takes for deep crises to be resolved. He claims that it “was massive military spending … that solve[d] the Great Depression.” In fact, this “solution” additionally required the fascist destruction of powerful working class movements in countries like Germany, and the widespread destruction of existing capital by horrific world war.

Socialism for the rich?

Stanford’s “vision” for “improving capitalism” is a “high-investment, sustainable economy.” But we are in a deep economic crisis, and as Keynes put it, trying to get capitalists to invest during a crisis is like pushing on a string. Reducing interest rates has failed, so governments are dumping in public money, hoping to maintain the cycle of borrowing and spending, investment and profit. The policy is “socialism for the rich.”

Stanford’s proposals are not very different. At the January 30 public meeting in Vancouver, his prescription was also more government spending, plus financial regulation.

Of course, we need to fight for programs to meet immediate human needs, like unemployment insurance, and to direct government spending to the most socially productive ends, like housing and public transit. However, borrowing and spending by capitalist governments will not solve the deep-seated problems that have brought the world economy to the brink of collapse. Yes, we should “stand up to demand something better.” But there is no way around the need to challenge capitalism as a system, not just bad, neo-liberal capitalism.

The question is how to mobilize people against the ravages of this capitalist economic crisis and find our way towards 21st Century Socialism. On these points, the perspective in Economics for Everyone is part of the problem rather than part of the solution.

Thursday, April 2, 2009

Intelligence and How to Get It: Why Schools and Culture Count


Author: Richard E. Nisbett

Social psychologist Nisbett asserts that intellect is not primarily genetic but is principally determined by societal influences. Nisbett's commanding argument, superb marshaling of evidence, and fearless discussions of the controversial carve out new and exciting terrain in this hotly debated field.

NYTIMES REVIEW:

Success in life depends on intelligence, which is measured by I.Q. tests. Intelligence is mostly a matter of heredity, as we know from studies of identical twins reared apart. Since I.Q. differences between individuals are mainly genetic, the same must be true for I.Q. differences between groups. So the I.Q. ranking of racial/ethnic groups — Ashkenazi Jews on top, followed by East Asians, whites in general, and then blacks — is fixed by nature, not culture. Social programs that seek to raise I.Q. are bound to be futile. Cognitive inequalities, being written in the genes, are here to stay, and so are the social inequalities that arise from them.

What I have just summarized, with only a hint of caricature, is the hereditarian view of intelligence. This is the view endorsed, for instance, by Richard J. Herrnstein and Charles Murray in “The Bell Curve” (1994), and by Arthur R. Jensen in “The g Factor” (1998). Although hereditarianism has been widely denounced as racism wrapped in pseudoscience, these books drew on a large body of research and were carefully reasoned. Critics often found it easier to impugn the authors’ motives than to refute their conclusions.

Richard E. Nisbett, a prominent cognitive psychologist who teaches at the University of Michigan, doesn’t shirk the hard work. In “Intelligence and How to Get It,” he offers a meticulous and eye-opening critique of hereditarianism. True to its self-help-like title, the book does contain a few tips on how to boost your child’s I.Q. — like exercising during pregnancy (mothers who work out tend to have bigger babies who grow up smarter, possibly because of greater brain size). But its real value lies in Nisbett’s forceful marshaling of the evidence, much of it recent, favoring what he calls “the new environmentalism,” which stresses the importance of nonhereditary factors in determining I.Q. So fascinating is this evidence — drawn from neuroscience and genetics, as well as from studies of educational interventions and parenting styles — that the author’s slightly academic prose style can be forgiven.

Intellectually, the I.Q. debate is a treacherous one. Concepts like heritability are so tricky that even experts stumble into fallacy. Moreover, the relevant data come mostly from “natural experiments,” which can harbor subtle biases. When the evidence is ambiguous, it is all the easier for ideology to influence one’s scientific judgment. Liberals hope that social policy can redress life’s unfairness. Conservatives hold that natural inequality must be accepted as inevitable. When each side wants to believe certain scientific conclusions for extra-scientific reasons, skepticism is the better part of rigor.

Nisbett himself proceeds with due caution. He grants that I.Q. tests — which gauge both “fluid” intelligence (abstract reasoning skills) and “crystallized” intelligence (knowledge) — measure something real. They also measure something important: even within the same family, higher-I.Q. children go on to make more money than their less-bright siblings.

However, Nisbett bridles at the hereditarian claim that I.Q. is 75 to 85 percent heritable; the real figure, he thinks, is less than 50 percent. Estimates come from comparing the I.Q.’s of blood relatives — identical twins, fraternal twins, siblings — growing up in different adoptive families. But there is a snare here. As Nisbett observes, “adoptive families, like Tolstoy’s happy families, are all alike.” Not only are they more affluent than average, they also tend to give children lots of cognitive stimulation. Thus data from them yield erroneously high estimates of I.Q. heritability. (Think: if we all grew up in exactly the same environment, I.Q. differences would appear to be 100 percent genetic.) This underscores an important point: there is no fixed value for heritability. The notion makes sense only relative to a population. Heritability of I.Q. is higher for upper-class families than for lower-class families, because lower-class families provide a wider range of cognitive environments, from terrible to pretty good.
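The point that heritability estimates depend on how much the environment varies in the sampled population can be illustrated with a toy simulation. The model below is purely hypothetical (the numbers and the additive genes-plus-environment setup are my own illustrative assumptions, not Nisbett's data): when the environmental spread is narrow, as in adoptive families, genes account for a larger share of the variance in scores.

```python
import random

random.seed(0)

def simulated_heritability(env_sd):
    """Return the share of score variance attributable to genes in a toy
    additive model: score = genetic component + environmental component.
    (Illustrative sketch only, not a real twin-study estimator.)"""
    genes = [random.gauss(100, 10) for _ in range(10_000)]
    envs = [random.gauss(0, env_sd) for _ in range(10_000)]
    scores = [g + e for g, e in zip(genes, envs)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(genes) / var(scores)

# Narrow range of environments (e.g. affluent adoptive families):
# genes appear to explain most of the variance.
print(round(simulated_heritability(env_sd=5), 2))

# Wide range of environments (the general population):
# the same genetic variation explains a much smaller share.
print(round(simulated_heritability(env_sd=15), 2))
```

The genetic variation is identical in both runs; only the environmental spread changes, yet the apparent "heritability" drops sharply in the second case. This is the sense in which heritability is a property of a population, not of the trait itself.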

Even if genes play some role in determining I.Q. differences within a population, which Nisbett grants, that implies nothing about average differences between populations. The classic example is corn seed planted on two plots of land, one with rich soil and the other with poor soil. Within each plot, differences in the height of the corn plants are completely genetic. Yet the average difference between the two plots is entirely environmental.
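The corn-seed analogy can also be sketched numerically. In this toy version (all numbers are my own illustrative choices), every plant draws a genetic height potential, and each plot adds a soil bonus shared by all its plants: differences *within* a plot are then entirely genetic, while the gap *between* plot averages is entirely environmental.

```python
import random

random.seed(1)

# Each plant's height = genetic potential + a soil bonus shared
# by every plant in the same plot. Numbers are illustrative.
genetic_potential = [random.gauss(150, 10) for _ in range(5_000)]

RICH_SOIL_BONUS = 30
POOR_SOIL_BONUS = 0

rich_plot = [g + RICH_SOIL_BONUS for g in genetic_potential]
poor_plot = [g + POOR_SOIL_BONUS for g in genetic_potential]

def mean(xs):
    return sum(xs) / len(xs)

# Within each plot, height differences between plants are purely genetic,
# yet the gap between the plot averages is purely environmental.
print(round(mean(rich_plot) - mean(poor_plot), 1))  # -> 30.0
```

Because both plots use the same underlying genetic draws, the 30-point average gap is caused entirely by the soil, even though every within-plot difference is genetic; the same logic shows why within-group heritability implies nothing about the cause of between-group averages.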

Could the same logic explain the disparity in average I.Q. between Americans of European and of African descent? Nisbett thinks so. The racial I.Q. gap, he argues, is “purely environmental.” For one thing, it’s been shrinking: over the last 30 years, the measured I.Q. difference between black and white 12-year-olds has dropped from 15 points to 9.5 points. Among his more direct evidence, Nisbett cites impressive studies in population genetics. African-Americans have on average about 20 percent European genes, largely as a legacy of slavery. But the proportion of European genes ranges widely among individuals, from near zero to more than 80 percent. If the racial gap is mostly genetic, then blacks with more European genes ought to have higher I.Q.’s on average. In fact, they don’t.

Nisbett is similarly skeptical that genetics could account for the intellectual prowess of Ashkenazi Jews, whose average I.Q. measures somewhere between 110 and 115. As for the alleged I.Q. superiority of East Asians over American whites, that turns out to be an artifact of sloppy comparisons; when I.Q. tests are properly normed, Americans actually score slightly higher than East Asians.

If I.Q. differences are indeed largely environmental, what might help eliminate group disparities? The most dramatic results come from adoption. When poor children are adopted by upper-middle-class families, they show an I.Q. gain of 12 to 16 points. Upper-class parents talk to their children more than working-class parents do. And there are subtler differences. In poorer black families, for example, children are rarely asked “known-answer questions” — that is, questions where the parents already know the right answer. (“What color is the elephant, Billy?”) Consequently, as Nisbett observes, the children are nonplussed by such questions at school. (“If the teacher doesn’t know this, then I sure don’t.”)

The challenge is to find educational programs that are as effective as adoption in raising I.Q. So far, Nisbett observes, almost all school-age interventions have yielded disappointing results. But some intensive early-childhood interventions have produced enduring I.Q. gains, at a cost of around $15,000 per child per year. Yet, by the author’s reckoning, it would cost less than $100 billion a year to extend such programs to the neediest third of America’s preschoolers. The gain to society would be incalculable.

Still, there are limits even to Nisbett’s optimism. Social policy can get rid of ethnic I.Q. gaps, he thinks, but “the social-class gap” in I.Q. “is never going to be closed.” I would frame the matter a little differently. Even if I.Q. inequality is inevitable, it may eventually become irrelevant. Over the last century, for reasons that aren’t entirely clear, I.Q. scores around the world have been rising by three points a decade. Some of this rise, Nisbett argues, represents a real gain in intelligence. But beyond a certain threshold — an I.Q. of 115, say — there is no correlation between intelligence and creativity or genius. As more of us are propelled above this threshold — and, if Nisbett is right, nearly all of us can be — the role of intelligence in determining success will come to be infinitesimal by comparison with such “moral” traits as conscientiousness and perseverance. Then we can start arguing about whether those are genetic.

Jim Holt is the author of “Stop Me if You’ve Heard This: A History and Philosophy of Jokes.” He is working on a book about the puzzle of existence.