Famine:
A Short History
Cormac Ó Gráda



Chapter I

The Third Horseman

Famyn schal a-Ryse thorugh flodes and thorugh foule wedres.
   —William Langland, Piers Ploughman
 
And lo a black horse . . . and he that sat on him had a pair of scales in his hand . . . a quart of wheat for a day’s wages.
   —Book of Revelation 6:5

IN THE DEVELOPED WORLD, famines no longer capture the headlines like they used to. Billboard images of African infants with distended bellies are less ubiquitous, and the focus of international philanthropy has shifted from disaster relief to more structural issues, particularly those of third world debt relief, economic development, and democratic accountability. Totalitarian famines of the kind associated with Joseph Stalin, Mao Tse-tung, and their latter-day imitators are on the wane. Even in Africa, the most vulnerable of the seven continents, the famines of the past decade or so have been, by historical standards, “small” famines. In 2002, despite warnings from the United Nations World Food Programme and nongovernmental relief agencies of a disaster that could affect millions, the excess mortality during a much-publicized crisis in Malawi was probably in the hundreds rather than the thousands. As for the 2005 famine in Niger, which also attracted global attention, experts now argue that it does not qualify as a famine by standard criteria. Mortality there was high in 2005, but apparently no higher than normal in that impoverished country.1

Writing about famine today is, one hopes, part of the process of making it less likely in future. The following chapters describe its symptoms, and how they have changed over time; more important, they explain why famines happened in the past, and why—since this is one of the themes of this book—they are less frequent today than in the past and, given the right conditions, less likely in the future. Research into the history of famine has borrowed from many disciplines and subdisciplines, including medical history, demography, meteorology, economic and social history, economics, anthropology, and plant pathology. This book is informed by all of them.

So is it almost time to declare famine “history”? No, if the continuing increase in the number of malnourished people is our guide; yes, perhaps, if we focus instead on malnourished people’s declining share of the world population and the characteristics of famine in the recent past. And if yes, has this been due to economic progress in famine-prone countries? Or should the credit go to the globalization of relief and better governance where famines were once commonplace? How have the characteristics and incidence of famine changed over time? Are most or all modern famines “man-made”? Can the history of past famines help guard against future ones? This book is in part an answer to such questions.

Famines have always been one of the greatest catastrophes that could engulf a people. Although many observers in the past deemed them “inevitable” or “natural,” throughout history the poor and the landless have protested and resisted at the approach of famines, which they considered to be caused by humans. The conviction that a more caring elite had the power and a less rapacious trading class had the resources to mitigate—if not eradicate—disaster was usually present. This, after all, is the message of Luke’s parable about Dives and Lazarus.2 It is hardly surprising, then, that famines have attracted both the attention of academics and policymakers and the indignation of critical observers and philanthropists. In today’s developed world the conviction that famines are an easily prevented anachronism, and therefore a blot on global humanity, is widespread and gaining ground. That makes them a continuing focus for activism and an effective vehicle for raising consciousness about world poverty.

Economist and demographer Robert Malthus was one of those who regarded famine as natural. In 1798, he famously referred to famine as “the last, the most dreadful resource of nature,”3 and indeed other natural disasters such as earthquakes, floods, and even volcanic eruptions tend to be more local and short-lived in their impact. The impact of famines is also more difficult to measure. We measure the energy expended in earthquakes on the Richter scale, volcanic eruptions by a Volcanic Explosivity Index, and weather by precipitation, temperature, humidity, and wind speed, but how can we measure famine? Excess mortality is an obvious possibility, but besides being often difficult to measure, it is as much a function of the policy response to famine as of the conditions that caused the crisis. The Indian Famine Codes, introduced in the wake of a series of major famines in the 1870s, defined famine by its early warning signals. These signals—rising grain prices, increased migration, and increased crime—dictated the introduction of measures to save life.

A recent study in this spirit defines the transition from food crisis to famine by rises in the daily death rate above one per ten thousand population, the proportion of “wasted” children (that is, children whose weight falls two standard deviations or more below the average) above 20 percent, and the prevalence of kwashiorkor, an extreme form of malnutrition mainly affecting young children.4 By the same token, “severe famine” means a daily death rate of above five per ten thousand, a proportion of wasted children above 40 percent, and again, the prevalence of kwashiorkor. The first two of these measures could not have been implemented in India a century ago, but the swollen bellies and reddened hair associated with kwashiorkor are age-old signs of crisis.5 In what follows, famine refers to a shortage of food or purchasing power that leads directly to excess mortality from starvation or hunger-induced diseases.
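
To make the thresholds concrete, here is a minimal sketch, in Python, of how they might be applied mechanically. The function name and the sample figures are hypothetical, and kwashiorkor is reduced to a simple yes/no flag, which the underlying clinical assessment of course is not.

def classify_crisis(deaths_per_10k_per_day, wasted_share, kwashiorkor_present):
    """Apply the famine / severe-famine thresholds cited above (illustrative only)."""
    if deaths_per_10k_per_day > 5 and wasted_share > 0.40 and kwashiorkor_present:
        return "severe famine"
    if deaths_per_10k_per_day > 1 and wasted_share > 0.20 and kwashiorkor_present:
        return "famine"
    return "food crisis"

# A hypothetical district reporting 2.3 deaths per 10,000 per day,
# 28 percent of children wasted, and kwashiorkor observed:
print(classify_crisis(2.3, 0.28, True))  # -> famine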

The etymology and meaning of words signifying famine vary by language. The Roman orator Cicero (106–43 BC) distinguished between praesens caritas (present dearness or dearth) and futura fames (future famine) or deinde inopia (thereafter want of means), and Roman sources employed several synonyms for both (e.g., difficultas annonae, frumenti inopia, and summa caritas).6 In Italian the word for famine, carestia, is derived from caritas, and signifies dearness. This suggests one measure of a famine’s intensity since, usually, the greater the increase in the price of basic foodstuffs and the longer it lasts, the more serious the famine. In medieval and early modern England, dearth signified dearness, but meant famine. For economist Adam Smith, however, dearth and famine were distinct, whereas by John Stuart Mill’s day “there is only dearth, where there formerly would have been famine.”7 Famine, in turn, is derived from the Latin fames. In German, Hungersnot connotes hunger associated with a general scarcity of food. The most common terms for famine in the Irish language are gorta (starvation) and, referring to the infamous 1840s, an drochshaol (the bad times). In pharaonic Egypt, the standard word for famine (hkr) derived from “being hungry,” but that signifying plague (i:dt) also connoted famine, highlighting the symbiotic relationship between famine and disease.

Many individual famines are remembered by specific names that only sometimes hint at their horrors. Examples include la famine de l’avènement (the famine of the Accession of Louis XIV) in France in 1662, bliain an áir (“the year of the slaughter”) in Ireland in 1740–41, the Chalisa (referring to a calendar date) and Doji Bara (“skulls famine”) in India in 1783–84 and 1790–91, the Tenmei and Tempo (Japanese era names) in Japan in 1782–87 and 1833–37, the Madhlatule (“eat what you can, and say nothing”) famine in southern Africa in the 1800s, Black ’47 in Ireland in 1847, the Mtunya (“the scramble”) in central Tanzania in 1917–20, Holodomor (“death by hunger”) in the Ukraine in 1932–33, Chhiyattarer Manvantar (the Great Famine of the Bengal year 1176) and Panchasher Manvantar (“the famine of fifty,” a reference to the Bengal year 1350) in Bengal in 1770 and 1943–44, manori (etymology unclear) in Burundi in 1943–44, and nạn đói Ất Dậu (“famine of the Ất Dậu Year”) in Vietnam in 1945.

In any language, however, the term famine is an emotive one that needs to be used with caution. On the one hand, preemptive action requires agreement on famine’s early warning signs; the very declaration of a famine acknowledges the need for public action, and may thus prevent a major mortality crisis. On the other hand, the overuse of the term by relief agencies and others may lead to cynicism and donor fatigue.

In the recent past, definitions of famine have included events and processes that would not qualify as famine in the catastrophic, historical sense. Some scholars have argued for a broader definition that would embrace a range extending from endemic malnutrition to excess mortality and its associated diseases. In support of this view, the term famine indeed represents the upper end of the continuum whose average is “hunger.”8 Malnutrition, which eight hundred to nine hundred million people still endure every day, might be seen as slow-burning famine. Moreover, in famine-prone economies malnutrition is usually endemic, and individual deaths from the lack of food are not uncommon. Yet classic famine means something more than endemic hunger. Common symptoms absent in normal times include rising prices, food riots, an increase in crimes against property, a significant number of actual or imminent deaths from starvation, a rise in temporary migration, and frequently the fear and emergence of famine-induced infectious diseases.

All of these symptoms are listed in one of the earliest graphic depictions of famine, which comes from Edessa in northern Mesopotamia (today’s Ourfa in southeastern Turkey) in AD 499–501. It describes in mordant detail many of the features that have characterized famine through the ages: high food prices (“there was a dearth of everything edible . . . everything that was not edible was cheap”); spousal or child desertion (“others their mothers had left . . . because they had nothing to give them”); public action (“the emperor gave . . . no small sum of money to distribute among the poor”); unfamiliar substitute foods (“bittervetches, and others were frying the withered fallen grapes”); migration (“many villages and hamlets were left destitute of inhabitants . . . a countless multitude . . . entered the city”); and infectious diseases (“many of the rich died, who were not starved; and many of the grandees too”).9 Although the list of famine’s horrors does not end there, what is striking is how little it has changed over the centuries—until the recent past, at least.

The outline of the rest of this introductory chapter is as follows. First, I briefly survey the link between living standards and geography or regionality, on the one hand, and vulnerability to famine, on the other. Next, I turn to the frequency of famines in the past. Finally, I describe in brief how famine is remembered in folklore and oral history.

THE ULTIMATE CHECK

The view that famine was the product of—though not necessarily a corrective for—overpopulation can be traced back nearly five millennia to the Babylonian legend of Gilgamesh. In this epic tale, the gods cut population back to size when their peace was destroyed by “the people bec[oming] numerous, the land bellow[ing] like wild oxen.” Another early reference to the link between population pressure and famine may be found in the Old Testament’s book of Nehemiah, dating probably from about 430 BC, in which overpopulation left the poor without food, and forced men with some property to mortgage it in order to buy food. It also made them sell their children into bondage or borrow at exorbitant rates from their fellow Jews.10 The first economist to describe the link may have been the Irish-born Richard Cantillon (who died in London in 1734), according to whom the human race had the capacity to multiply like “mice in a barn,” although he did not discuss the checks needed to prevent the earth from becoming overpopulated.11

For Robert Malthus (1766–1834), there was no equivocation: when all other checks fail, “gigantic inevitable famine stalks in the rear and, with one mighty blow, levels the population with the food of the world.”12 The Malthusian interpretation, stark and simple, was highly influential. It led historians to describe famines in India as “a demonstration of the normal effect of the fertility of nature on the fertility of man,” seventeenth-century Languedoc as a “society . . . suffering from a surplus of people” eventually producing a “violent contraction” through famine, and prefamine Ireland as “a case study in Ricardian and Malthusian economics.”13

Famines have nearly always been a hallmark of economic backwardness. Most documented famines have been the products of harvest failures—what were dubbed “natural causes” by the Victorian actuary Cornelius Walford—in low-income economies.14 Both the extent of the harvest shortfall and the degree of economic backwardness mattered. Today’s developed world has been spared major “death-dealing” famine during peacetime since the mid-nineteenth century, and this applies to England since the mid-eighteenth century at the latest.15 Japan, where famines had been common in the seventeenth century, suffered its last major famine in the 1830s.

At the other extreme is Niger, the focus of global media attention in 2005, and among the poorest economies in the world. The gross domestic product (GDP) per head in Ethiopia and Malawi, also still vulnerable to famine, is in real terms less than half that of the United States two centuries ago. And five of the six economies most prone to food emergencies since the mid-1980s—Angola, Ethiopia, Somalia, Mozambique, and Afghanistan—were ranked in the bottom 10 of 174 countries on the United Nations’ Human Development Index in the mid-1990s; the sixth, war-torn Sudan, was ranked 146th.16 There are exceptions to all historical generalizations, though. Ireland in the 1840s was a poor region of what was then the wealthiest economy in the world, while in 1932–33 the economy of the Soviet Union was backward, but by no means among the world’s poorest.

Today, given goodwill on all sides, famine prevention should be straightforward, even in the poorest corners of the globe. Transport costs (which I will discuss later) have plummeted since the nineteenth century, and the global GDP per capita has quintupled since the beginning of the twentieth century; bad news travels fast; food storage is inexpensive; international disaster relief agencies are ubiquitous; and nutritional requirements and medical remedies in emergencies are better understood. In addition, penicillin and electrolyte drinks for dehydration are readily available, albeit at a cost; most recently, the discovery of cheap, storable, easily transportable, nutrient-dense ready-to-use foods has facilitated the task of relieving the severely malnourished.

A combination of these factors certainly reduced the incidence of famine in the twentieth century. Nowadays, where crop failures are the main threat, as in southern Africa in 2002 and Niger in 2005, a combination of public action, market forces, and food aid tends to mitigate mortality during subsistence crises. Although noncrisis death rates in sub-Saharan Africa remain high, excess mortality from famine—unless linked to war—tends to be small.17

Why, then, did and does famine persist? In the past famines have usually been linked to poor harvests; a distinguishing feature of twentieth-century famines is that famine mortality was more often linked to wars and ideology than to poor harvests per se. Many of the major famines of the twentieth century were linked to either civil strife and warfare (as in the Soviet Union in 1918–22 or Biafra/Nigeria in 1970) or despotic autarky (as in China in 1959–61 or North Korea after 1996). Human action had a greater impact than, or greatly exacerbated, acts of nature. The relative importance of political factors—“artificial causes or those within human control”—and food availability tout court was reversed.18 Mars in his various guises accounted for more famines than Malthus.

Several of the past century’s major famines would have been less deadly—or might not have occurred at all—under more peaceful or stable political circumstances.19 Toward the end of World War I, the Mtunya (“Scramble”) in central Tanzania was mainly the product of excessive food procurements by the imperial powers, first by the Germans, and then by the British; similar pressures also led to famine in Uganda and French Africa. World War II brought famine to places as different as India, the western Netherlands, and Leningrad (today’s Saint Petersburg). In Bengal, fears of a Japanese invasion in 1942–43 determined the priorities of those in authority, and the so-called Denial Policy, which removed stored holdings of rice, cargo boats, and even bicycles from coastal regions lest they fall into the hands of the invaders, undoubtedly compounded the crisis. Most fundamentally, the poor of Bengal were left unprovided for due to military considerations. The main responsibility for the Ethiopian famine of 1984–85 rested with a regime waging a ruthless campaign against secessionists in the country’s northern provinces.20

In the book of Jeremiah, which describes a tempestuous period in Jewish history (ca. 580 BC), the sword and famine are mentioned together several times. In ancient Rome famines were few in peacetime, but crises flared during the Punic Wars and the civil wars of 49–31 BC. Classical Greece was also relatively free of famine before the Macedonian conquest in 338 BC. There are countless examples of the threat or reality of military activity leading to famine, even in the absence of a poor harvest. Warfare was also likely to increase the damage inflicted by any given shortfall. This was the case—to list some notorious examples—throughout Europe in the 1310s, Ireland in the 1580s and 1650s, the Indian Deccan in 1630, France in the 1690s, southern Africa in the 1810s and 1820s, Matabeleland in the 1890s, Finnish Ostrobothnia in 1808–9, Spain (as depicted by Francisco Goya in “The Disasters of War”) in 1811–12, and the Soviet Union in the wake of the October Revolution.

Still, another distinguishing feature of the past century was the rise of the totalitarian, all-embracing state. Totalitarianism greatly increased the human cost of policy mistakes and the havoc wrought by government, even in peacetime. The damage caused by poor harvests in the Soviet Union in 1932–33 and China in 1959–61 was greatly exacerbated by political action. What Adam Smith claimed, incorrectly, for famines in early modern Europe—that they never arose “from any other cause but the violence of government attempting, by improper means, to remedy the inconveniences of a dearth”—applies far more to the twentieth century than to the seventeenth or eighteenth.21

Clearly, then, politics, culture, and institutions also matter. Even Malthus did not entirely exclude cultural factors; in the 1800s he argued—atypically perhaps—that granting Irish Catholics the same civil rights as other UK citizens would check population growth, by making them look forward “to other comforts beside the mere support of their families upon potatoes.”22 Of course, these factors are not independent of the degree of economic development, but they are worth considering separately. Effective and compassionate governance might lead to competitive markets, sanctions against corruption, and well-directed relief. Healthy endowments of social capital might mean less crime, and a greater willingness to help one’s neighbor or community. Evidence that famines are very much the exception in democracies (see chapter 8) corroborates this view.

TIME AND PLACE

What does history tell us about the spatial spread of famines? The earliest recorded famines, all associated with prolonged droughts, are mentioned on Egyptian stelae (inscribed stone pillars) dating from the third millennium BC. From earliest times, Egyptian farmers relied on the Nile, swollen by annual monsoon rains in Ethiopia, to burst its banks and “water” the soil. The flooding deposited layers of highly fertile silt on the flat lands nearby, but it was a risky business: one flood in five was either too high or too low. The stelae commemorated members of the ruling class who engaged in philanthropy during one of the many ensuing crises.

Geography must have influenced the intensity and frequency of famines in the past, if only because some famine-related diseases were more likely in particular climates than in others. History indeed suggests that while no part of the globe has been always free from famine, some regions have escaped more lightly than others. Malthus believed that although untold millions of Europeans had their lives blighted by malnutrition in the past, “perhaps in some of these states an absolute famine may never have been known.”23 Even though in this instance Malthus was being atypically complacent, the historical demography of early modern Europe supports the case for a “low pressure demographic regime” in which the preventive check of a lower birthrate was more important than elsewhere.24

Most of the worst famines on record have been linked to either too much or too little rain. In Dionysios Stathakopoulos’s catalog of documented famines in the late Roman period, drought was the main factor in three cases out of four; some in the Near East were blamed on locust invasions, but none on excessive rainfall alone. In prerevolutionary China, drought was twice as likely to cause famine as floods. This was particularly so in wheat-growing regions; in Sichuan, a rice-growing province and the most famine prone of all, drought was responsible for three out of every four famines. Drought was also responsible for the massive Bengal famine of 1770, which may have resulted in millions of deaths—though probably not “at least one-third of the inhabitants,” as claimed by Indian governor general Warren Hastings.25 Zimbabwe’s earliest recorded subsistence crisis in the late fifteenth century was caused by a severe drought.26 The catastrophe in northern China in the late 1870s came in the wake of exceptional droughts in 1876 and 1877, while much of western and central India in the late 1890s saw virtually no rain for three years. At the height of the Great Leap Forward famine during summer 1960, “eight of Shantung’s twelve rivers had no water in them, and for forty days in March and June, it was possible to wade across the lower reaches of the Yellow River.”27

In temperate zones, cold or rain, or a combination of both, were more likely to be the problem. The Great European Famine of 1315–17 was the product of torrential downpours and low temperatures during summer 1315. The grand hiver of 1708–9 was the proximate cause of severe famine in France during that period, and the Great Frost of 1740 led to bliain an áir (the year of carnage) in Ireland in 1740–41. In France, ice-covered rivers were the most spectacular aspect of the “big winter” of 1708–9, while in mid-January 1740 one could walk across Ireland’s biggest lake for miles—an unprecedented feat. Liquids froze indoors and ice floes appeared at river mouths, while in Holland it was recorded that “the drip from the nose, and the spittle from the mouth, both are frozen before they fall to the ground.”28 In Kashmir, the great flood of 1640–42 wiped out 438 villages “and even their names did not survive.”29

Long-term climatic trends also probably mattered. In harsher, more marginal areas such as Scandinavia the colder weather made coping more difficult, and the abandonment of Norse settlements on Greenland during the fifteenth century along with the end of corn cultivation in Iceland during the sixteenth have been linked to climatic shift and famine. The 1690s (the nadir of the so-called little ice age) brought disaster to Scotland, Finland, and France.30

The extreme weather produced by the El Niño–Southern Oscillation (ENSO) of 1876–77 gave rise to the most deadly famines of the nineteenth century. As with all ENSOs, winds driving warm water westward across the southern Pacific Ocean provided the spur, and the resultant low air pressure led to extensive rainfall over the surrounding countries in Southeast Asia and Australasia. In due course, the area of low pressure shifted back east, causing drought in Southeast Asia and heavy rainfalls in the tropical parts of the Americas. The shift almost simultaneously produced droughts farther east, in Brazil and southern Africa. The combination of extreme droughts and monsoons led to millions of deaths under hellish conditions. Another El Niño followed in 1898, wreaking further havoc in India and Brazil’s Nordeste.

The impact of the late nineteenth-century ENSOs is well-known, but recent research has uncovered several more such synchronized climatic assaults. Examples include the great drought-famine of 1743–44, which devastated agricultural production across northern China, and the 1982–83 ENSO that sparked off the Ethiopian famine of 1984–85. El Niño struck again as recently as 1997. Yet the impact of these strikes was mild compared to those of the late 1870s and late 1890s.31

Major historical famines linked to extraordinary “natural events” seem to have been more common than ones associated with ecological shocks. Several famines have been connected to volcanic eruptions. The well-documented impact of the volcanic dust emanating from Laki in 1783 and Mount Tambora near Bali in 1815 on two of northern Europe’s last peacetime famines has prompted searches for links between other volcanic explosions and famines elsewhere. In Europe, the beginning of the Dark Ages has been linked to an undefined disastrous event ca. AD 530 that affected vegetable growth for over a decade. Qualitative accounts imply a massive famine around this time. Tree-ring evidence corroborates the severity of AD 536 as one of the coldest summers ever, and a “dust veil” from a huge volcanic aerosol cloud is a plausible explanation for it. In Japan, the Kangi famine of 1229–32 and the Shōga famines of 1257–60 have been tied to likely volcanic eruptions.32 Similarly the One Rabbit famine, which struck the Mexican Highlands a few decades before the arrival of the conquistadores, has been linked to the eruption of Kuwae, in Vanuatu, circa AD 1452. A volcanic eruption in Iceland in AD 934, one of the largest on record, is also held to have led to cold spells and poor crops in Europe. The freezing winter of 1740–41, which led to widespread famine in northern Europe, may also owe its origins to a volcanic eruption: a volcano on the Kamchatka Peninsula in Russia is one suspect, although Kamchatka is absent from the latest eruption lists derived from ice cores. Examples of ecological shocks associated with famines include Phytophthora infestans (potato blight, in Ireland and elsewhere in northern Europe in the 1840s), rinderpest (cattle plague, in Africa in 1888–92), and Helminthosporium oryzae (rice brown spot, in Bengal in 1943).33

Today, Africa is the continent most at risk from famine. Its premodern famines are poorly documented, notwithstanding accounts of individual famines in medieval Egypt, precolonial Zimbabwe, Nigeria, Mali, and elsewhere. Yet in the second (and later) editions of his Essay on the Principle of Population, Malthus claimed, mainly on the basis of reading explorer Mungo Park’s Travels in the Interior Districts of Africa (1799), that famines were common in Africa. Park interpreted the sale of humans into slavery as evidence of “the not unfrequent recurrence of severe want,” and referred in particular to a recent three-year famine in Senegambia, which had resulted in widespread resort to voluntary enslavement.34 Even more devastating was the mid-eighteenth-century famine that forced the ruling Hausa clans to cede much of the southern Sahel to the more drought-resilient Tuareg. Recent specialist accounts claim, however, that famines were rare in precolonial Zimbabwe, and that between the 1750s and 1913, the Hausa lands straddling northern Nigeria and Niger did not experience any “massive subsistence calamity that embraced the entire savannas and desert-edge community”—although regional crises were becoming “increasingly common.”35

The link between colonialism and famine, in Africa and elsewhere, is a controversial and ambivalent one. Rudyard Kipling’s facile depiction of colonialism as white men “filling full the mouth of Famine” across the British Empire has little basis in reality. On balance, the initial impact of colonial conquest and “pacification” was almost certainly to increase famine mortality (as in Mexico in the 1520s, Ireland in the 1580s and 1650s, Namibia/Angola before 1920, the Xhosa lands in South Africa in the 1850s, and northern Nigeria in the early 1910s), although where it replaced a dysfunctional indigenous ruler (as in Madagascar in the 1890s) it may well have reduced it.36 Its subsequent impact is less clear; it depended in part on whether it generated economic growth and whether the fiscal exactions of the colonists exceeded those of indigenous rulers. In the longer run, although colonial rule may have eliminated or weakened traditional coping mechanisms, it meant better communications, integrated markets, and more effective public action, which together probably reduced famine mortality.

Colonialism did not prevent massive famines in nineteenth-century Ireland and India, but those famines were less the product of empire per se than the failure of the authorities of the day to act appropriately. The colonial regime that presided over several major famines in eighteenth- and nineteenth-century India also helped to keep the subcontinent free of famine between the 1900s and the Bengal famine of 1943–44. The change was partly due to improved communications, notably through the railway, although the shift in ideology away from hard-line Malthusianism toward a focus on saving lives also mattered. Colonial exactions during World War I produced famine in several parts of Africa, but famines were almost certainly much fewer between the 1920s and the end of the colonial era than they had been before the post-1880 “scramble for Africa.”

The greater capacity of Africa to sustain population growth during the colonial era—the average annual rate of population growth rose from about 0.2 percent in 1700–1870 to 1.3 percent in 1870–1960—is striking, but the extent to which this was due to the decreasing incidence and severity of famines remains moot. Yet the improved communications resulting from empire certainly helped, and the medical knowledge brought by the colonizers must also have attenuated famine mortality because it weakened or sundered the link between epidemics such as smallpox and cholera, on the one hand, and famine, on the other.

Tragically, across much of Africa the departure of European colonizers in the mid-twentieth century saw not an end to famine but what John Iliffe terms a “return of mass famine mortality.” Iliffe attributes this to a combination of postcolonial wars and the collapse of famine-prevention mechanisms created in the later colonial period. The spatial incidence of famine across the continent since the 1960s is instructive in this respect. Civil war alone was enough to trigger a major famine in Nigeria in 1968–70; elsewhere poor harvests were usually a factor, but they were rarely the main cause of mass mortality—the major exceptions being the Sahel in the early 1970s and Darfur in the mid-1980s.37

The incidence of famine in the New World remains an enigma. The Brazilian Grande Seca of 1877–79—which took the lives of a half million or so—has been characterized as “the most costly natural disaster in the history of the western hemisphere.”38 This may well be so in absolute terms, since the population of the pre-Columbian New World was small. Pre-Columbian America was not famine free, however. The Famine of One Rabbit in 1454 was a major catastrophe in Mexico.39 Again, in 1520, “there was death from hunger; there was no one to take care of one another; there was no one to attend to one another.”40 Conditions worsened after the conquista. Using price and production data to distinguish epidemics from famines, David A. Brading and Celia Wu have uncovered serious famines in Mexico in 1695–96, 1713, 1749–50, and 1785–86. The last of these, perhaps the greatest catastrophe to strike Mexico since the conquest, is well documented. A study of the parish registers of León (which had a population of about twenty thousand at the time) suggests a sixfold rise in burials and a drop of two-thirds in the number of marriages in 1786, while baptisms fell by half in both 1786 and 1787. The gigantic rise in the price of maize, from a precrisis average of four reals to forty-eight reals per fanega in 1786, hints at the horrors endured.41 Still, famine in the Americas seems to have been less common in the past than in Europe, Africa, or Asia. Despite undoubted disasters such as those just mentioned, population pressure does not appear to have been as great in the New World as in parts of the Old.

At the other extreme, one of the globe’s most famine-prone places for nearly half a millennium has been Cape Verde, a volcanic archipelago of four thousand square kilometers located about six hundred kilometers off the coast of Senegal. Cape Verde was uninhabited when discovered by the Portuguese ca. 1460; its destiny in the following centuries was linked to slavery and the slave trade. Despite its name, Cape Verde is an arid landmass with minimal agricultural potential. The excess mortality associated with its major famines is unparalleled in relative terms. A famine in 1773–76 is said to have removed 44 percent of the population; a second in 1830–33 is claimed to have killed 42 percent of the population of seventy thousand or so; and a third in 1854–56 to have killed 25 percent. In 1860 the population was ninety thousand; 40 percent of Cape Verdeans were reported to have died of famine in 1863–67. Despite a population loss of thirty thousand, the population was put at eighty thousand in 1870. Twentieth-century famines in Cape Verde were less deadly, but still extreme relative to most contemporaneous ones elsewhere: 15 percent of the population (or twenty thousand) in 1900–1903; 16 percent (twenty-five thousand) in 1920–22; 15 percent (twenty thousand) in 1940–43; and 18 percent (thirty thousand) in 1946–48.42 The pivotal role of drought-related famine in the demography of Cape Verde need not be labored. Nevertheless, such death tolls imply extraordinary noncrisis population growth. For instance, if the population estimates for 1830 and 1860 are credited, making good the damage inflicted by the famine of 1830–33 would have required an annual population growth rate of about 4 percent between 1833 and 1860—despite the loss of a quarter or so of the population in 1854–56.
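
For what it is worth, the arithmetic behind that figure of about 4 percent can be reconstructed roughly as follows. The intermediate assumptions here (a 42 percent loss applied to the 1830 figure, a further quarter lost in 1854–56, and growth spread over 1833–60) are one reading of the estimates quoted above rather than a calculation reported in the sources.

pop_1830 = 70_000
pop_1833 = pop_1830 * (1 - 0.42)       # roughly 40,600 survivors of 1830-33
pop_1860 = 90_000
years = 1860 - 1833                     # 27 years of recovery
survival_1854_56 = 1 - 0.25             # a further quarter lost in 1854-56

# Solve pop_1833 * (1 + r) ** years * survival_1854_56 = pop_1860 for r:
r = (pop_1860 / (pop_1833 * survival_1854_56)) ** (1 / years) - 1
print(f"implied annual growth: {r:.1%}")  # roughly 4 percent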

HOW COMMON WERE FAMINES IN THE PAST?

Some have been forgotten altogether, because the object of Indian historians was generally to record the fortunes of a dynasty rather than the condition of a people.
   —Report of the Indian Famine Commission, 1880

An important unresolved puzzle about famines is, How often did they strike in the past? In general, the more backward an economy is, the less likely it is to yield documentary traces of famine. Yet again and again, historians have been unable to resist the temptation to infer the incidence and frequency of famines from the documentary record. That more than three-quarters of the famines listed in Walford’s idiosyncratic chronology of The Famines of the World Past and Present, published in 1879, are European and over half of the remainder are Indian is hardly surprising. Moreover, Walford’s Indian famines struck with increasing intensity over time: eleven before 1700, eleven more during the eighteenth century, and twenty-three during the nineteenth. The illustrious Cambridge History of India, published a century or so later, is equally guilty of discounting the more distant past; the volume covering the 1750–1970 period contains a four-page chronology of famine, while the sole mention in the volume covering the previous 550 years relates to the Deccan famine of 1630–32, the first about which there is significant documentary evidence. Less subject to chronological bias than Walford’s is Paul Greenough’s checklist of Indian famines between 298 BC and 1943–44: it identifies four famines before AD 1000, twenty-four between AD 1000 and AD 1499, eighteen in the sixteenth century, twenty-seven in the seventeenth, eighteen in the eighteenth, and thirty in the nineteenth.43

Long before Walford, Thomas Short produced a list of 254 famines in A General Chronological History of the Air . . . in Sundry Places and at Different Times (1749), extending back to that “which occurred in Palestine in the time of Abraham.” In an attempt to infer famine’s past demographic impact, Malthus invoked Short’s research, subtracting the fifteen famines that occurred “before the Christian era.” Malthus reckoned Short’s chronology to imply an average interval of only 7.5 years between famines.44 In 1846, the eminent statistician William Farr (1807–83) believed that he had discovered “the law regulating scarcities in England”—ten years of famine per century between AD 1000 and AD 1600—in references to them in ancient chronicles, but again, the fallibility of such sources is clear.45 A few years later, William Wilde produced a similar chronology in the 1851 Irish census, based on accounts in Gaelic and Anglo-Norman annals. Excluding reports of storms, cattle murrain, and the like, Wilde’s data imply a famine every fifteen years or so, and a famine straddling two or more years about every half century. The frequency was greater before the Black Death (1348) than after it, but again this may be a reflection of the shifting quality of the evidence.

Following the same tradition, the chronology of famine in the area around Timbuktu in the Malian Sahel has been inferred from surviving tarikhs (historical annals). They imply a sixteenth century that was relatively free of famine, followed by two centuries of recurrent disaster. The list begins with a famine in 1617 that led (allegedly) to the consumption of human flesh, and another in 1639, when the dead were buried on the spot “without washing the body or saying a prayer.” By the end of the eighteenth century, Timbuktu and its neighbors had become “small backward cities in an extensive backward region.”46 The distinction between famine proper and epidemic outbreaks in the tarikhs is usually clear enough. In the case of Ethiopia, however, not only do references to crises in Amharic hagiographic writings and Arabic sources make it difficult to distinguish between the two, but they also vary in quality over time. The documentation improves in the fifteenth and sixteenth centuries, and in 1543–44 according to an imperial chronicle, there was “a great famine, a punishment sent on the country by the glorious God,” but the emperor “fed the entire people as a father feeds his son.”47 A recent tally of famines in Ethiopia reckons there were four between AD 100 and AD 1400, four between AD 1400 and AD 1600, eight between AD 1800 and AD 1900, and twenty-three between 1900 and the present.48 Here, too, the apparent increasing incidence of famine is surely a product of the available documentation.

Geographer William Dando’s account in The Geography of Famine is in the same tradition, and equally problematic. On the basis of an unpublished data bank containing eight thousand famines over six millennia, Dando divided the secular chronology of famine by “major world famine region.” But the correspondence between region and “famine type” is purely a function of surviving documentation. Dando’s earliest region, northeast Africa and the Middle East, is where the first documented famines occurred; his latest region, which refers to the post-1700 period, is Asia, and this is also a function of when the sources date from. By the same token, Africa plays a marginal role throughout in Dando’s schema.49

A recent invaluable analysis of the surviving documentary evidence on famines in Rome and Byzantium circa AD 300–750 is quick to point out that the most urbanized regions of Italy and the Balkans are most often represented.50 These areas are followed by Syria, where the presence of Islamic scholars from the seventh century on led to increased recording of such phenomena. Least mentioned are Egypt, North Africa, and Palestine, but as noted by the author, this again is surely more a reflection of the lack of source material than the relative absence of famines.

The demographic evidence on famine in Japan before about 1800 is also thin. An ingenious analysis of earlier crises by Osamu Saito, based on sourcebooks published in 1894 and 1936, can only offer a crude chronology. It yields a weighted total (0.5 for regional famines, and 1.0 for national famines) of 185 years of famine between AD 600 and AD 1885, or one year in every seven. Still, nearly half the total records refer to the eighth and ninth centuries, when there were several multiyear famines. Focusing on the second millennium only suggests a rising incidence of famine between AD 1000 and AD 1500, and a decline thereafter. By this reckoning the eighteenth century endured 10.5 years of famine, while the nineteenth endured 6 years.51 The analyses of Stathakopoulos on the late Roman Empire and Osamu Saito on Japan show that sources like those utilized by Walford and Wilde have their uses when handled with care. Their fallibility is also clear. As a student of Indian famines noted in 1914, “The frequency of the mention of famine in the later history . . . increases in exact proportion with the precision and accuracy in detail of her historians.”52

Support for the Malthusian view that famines were a common occurrence in the past may be found in the work of historian Fernand Braudel, whose listing of famines in “a privileged country like France” mentions “ten general famines during the tenth century; twenty-six in the eleventh; two in the twelfth; four in the fourteenth; seven in the fifteenth; thirteen in the sixteenth; eleven in the seventeenth and sixteen in the eighteenth.” And this, Braudel believes, “omits the hundreds and hundreds of local famines.”53 On the basis of a listing of Indian famines over nearly two millennia, Alexander Loveday argued for a frequency of one per five years, with one really serious famine per half century, while W. H. Mallory reckoned that over two millennia of recorded history, from 108 BC to AD 1911, China experienced 1,828 famines, or one per year, somewhere in the empire. In Tanzania, according to Iliffe, “men measured out their lives in famines . . . not even the most favoured regions were spared.”54 Such sentiments have been echoed more recently by the likes of Stanford University biologist Paul Ehrlich.

Others have maintained that famines were not so frequent. As noted, Malthus believed that Europe and America were largely immune from famine. Some historians use European exceptionalism to highlight the risk of famine elsewhere. For one eminent scholar, “at the minimum the effective demographic shock in Asia was double that in Europe, and the best of the estimates suggest that it was an order of magnitude greater,” while another claims that normal mortality in Asia “may be said to contain a constant famine factor.”55 Malthus’s assertion that famines were “perhaps the most powerful of all the positive checks to the Chinese population” is questioned by recent research, however, which finds that the preventive check was more common in Qing China than previously thought, and that the short-run mortality response to rises in food prices (at least in Liaodong in the northeast) was much weaker than in Europe.56 The rapid growth of Chinese population during the eighteenth century—at about 1 percent per annum, or twice as fast as in Europe—makes endemic famine unlikely then, but in the following century it was a different story.

By definition, nothing is known of the severity or relative frequency of famines in the prehistoric era—between ca. 30,000 BC and ca. 3000 BC. Pre-Mughal India, pre–AD 1800 Africa, and the pre–AD 1500 New World are also virtually “prehistory” in this sense. Yet there are several indirect routes to the past. First, the vulnerability and health status of hunter-gatherer and semisettled populations in the present or more recent past may tell us something about the frequency of famines in past times. On the basis of a study of such populations, anthropologist Mark Nathan Cohen sees no reason why prehistoric hunter-gatherers would have been undernourished, or “suffered inordinately high rates of hunger or starvation.”57 Paleopathological evidence from skeletal remains suggests that life became harder with the shift from hunter-gatherer to settled farming communities.

Second, while the historical record implies that seven-year famines as described in the book of Genesis are rare, it also indicates that many of the deadliest famines on record have been due to back-to-back harvest failures. Famine scholar and human rights activist Alex de Waal has noted that “a visitor can only see a single year of drought, and that is not enough to cause famine.”58 In most cases, famines developed into major catastrophes only in the event of successive harvest failures; even the poorest societies could muster the resources to guard against occasional failures, which were much more frequent. At the same time, low yield-to-seed ratios and the high cost of storage imply that one bad year might have a secondary effect on food supplies in the following year.

Let us consider some “bang-bang” famines—that is, ones due to successive harvest failures. One of the first famines on record, in the reign of Djeser (ca. 2770–2730 BC), was attributed to the failure of the Nile to break its banks for seven years in a row. A key feature of the Great European Famine of the 1310s was its long-drawn-out character. People coped with the initial harvest failure of 1315, when rain caused much of the seed grain to rot before it could germinate. Few perished, it seems, in 1315, but the 1316 growing season was also cold and wet. Poor harvests in 1316 and 1317 converted privation into disaster. Contemporaries described the severe Scottish famine of the 1690s as “the seven ill years” or “King William’s dear years” (the price of oatmeal more than tripled).59 Other examples of famines following in the wake of a succession of bad harvests include the Bengal famine of 1770, which came after two bad years, “with complete failure of the rains in a third year,” the European famine of 1816–18, and the Great Finnish Famine of 1867–68.60 Again, Japan’s worst famines in the Tokugawa era—the Tenmei (1782–87) and Tempo (1833–37)—stretched over several “famine years,” and put a brake on population growth, while the calamitous death rate in part of the Indian state of Maharashtra in 1900 was the culmination of a disastrous decade of monsoon failures, poor harvests, and epidemics. Had the potato failed in Ireland only in 1845 there would have been no “Great Famine.”61 Finally, the Russian famine of 1921–22 is another famous example of crisis in the wake of two dismal harvests and several years of warfare.

Meteorological data offer some insight into the probability of back-to-back crop failures. For instance, monthly mean temperature data are available for an area in central England since 1659. The data are characterized by positive serial correlation—that is, better-than-average years tend to be followed by better-than-average years. Extreme temperatures matter more for harvests than annual averages, though. If “bad” years are defined as ones with deviations 10 percent or more from expected values, then the likelihood of such bad years occurring back-to-back is minuscule. The entire period yields only two cases of back-to-back cold years, in 1694–95 and 1697–98.62 There were no pairs of years where the temperatures were more than 10 percent above trend in both. In tropical zones, drought and floods matter more than temperature. The frequencies of drought and flood years between 1871 and 2002 in both India as a whole and the state of Rajasthan are described in table 1.2, along with the number of back-to-back extreme events. At both the national and state levels, the probabilities of occasional, extreme events were relatively high, but those of back-to-back events were low.
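
As a rough sketch of the back-to-back test just described, the following counts consecutive “bad” years in an annual series, flagging years that deviate from the series mean by 10 percent or more. The toy data are invented for illustration; the real exercise would use the central England temperature record itself.

def back_to_back_extremes(series, threshold=0.10):
    """Count pairs of consecutive years deviating from the mean by >= threshold."""
    mean = sum(series) / len(series)
    bad = [abs(x - mean) / mean >= threshold for x in series]
    return sum(1 for a, b in zip(bad, bad[1:]) if a and b)

annual_means = [9.2, 9.4, 8.1, 7.9, 9.3, 9.5, 9.0, 10.4, 9.1]  # hypothetical values
print(back_to_back_extremes(annual_means))  # -> 1 (the 8.1, 7.9 pair)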

Agricultural output data also provide some insight into the frequency of famines in the past, although such data are also scarce before the nineteenth century. The renowned accounts of the medieval bishopric of Winchester in southern England offer one straw in the wind: on the assumption that harvests 15 percent or more below average were extremely poor, the accounts for the 1283–1350 period returned only two back-to-back harvest deficits, in 1315–16 and 1349–50.63 Both were due to excessive rains and flooding.64 Crop output data are preferable to yield data, since the latter fail to take account of the impact of low yields on the acreage sown in the following year. Fitting a range of nineteenth- and twentieth-century agricultural output data to an appropriate polynomial, and then identifying bad years as those with shortfalls of over 10 or 20 percent, implies that such back-to-back events were “rare,” although they were more likely than might be expected on a random basis.65 To the extent that the underlying patterns were unlikely to change much over time and space, the results may be interpreted as tentative evidence that famines were less common in the past than claimed by Malthus or Braudel. On reflection, this is not implausible: given that life expectancy was low even in noncrisis years, frequent famines would have made it impossible to sustain population.
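
A minimal sketch of the detrending exercise described here, under invented data and an arbitrary choice of a quadratic trend and a 10 percent cutoff; the studies behind these claims fit their own series and trend specifications.

import numpy as np

output = np.array([100, 103, 96, 108, 111, 90, 88, 115, 118, 120,
                   105, 124, 127, 112, 130], dtype=float)    # hypothetical output series
years = np.arange(len(output))

trend = np.polyval(np.polyfit(years, output, deg=2), years)  # fitted quadratic trend
shortfall = (trend - output) / trend                          # proportional deficit
bad = shortfall >= 0.10                                       # "bad" harvest years

back_to_back = int(np.sum(bad[:-1] & bad[1:]))
print(f"bad years: {int(bad.sum())}, back-to-back pairs: {back_to_back}")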

Although some historic famines really stand out, trends in the relative severity of famines in western Europe can only be guessed at before the seventeenth century. A reduction in their frequency in the wake of the European discoveries of the fifteenth and sixteenth centuries may be assumed, if not proven. Other things being equal, the “Columbian exchange” of foodstuffs and farming methods—potatoes, maize, and tomatoes to Europe; wheat, horses, livestock, and capitalist agriculture to the Americas; maize, cassava, and groundnuts to Africa; and tomatoes and sweet potatoes to Asia—can only have reduced global vulnerability to famine. The reduction was gradual, as the European discovery of crops such as the potato and maize gave way to adoption. In western Europe at least, there is also evidence that the integration of food markets attenuated year-to-year price fluctuations from the middle of the second millennium on. The big increases in population between the sixteenth and nineteenth centuries—before industrialization or medical technology could have had much impact—corroborate this.

Proportionately, moreover, the damage wrought by famine was much greater in the nineteenth century and earlier than in the twentieth century (see table 1.1 above). While peacetime famines had disappeared from Europe by the early nineteenth century, with the awkward exceptions of Ireland in the 1840s, Finland in the 1860s, and Russia in 1891–92, thirty million is a conservative estimate of famine mortality in India and China alone between 1870 and about 1900. Data are lacking for major famines such as those in China before and during the Taiping Rebellion (1851–1864), and India in 1802–4, 1812, 1832–33, and during the 1860s, but one hundred million would be a conservative guess at global famine mortality during the nineteenth century as a whole. Given that the world population was much higher in the twentieth century than in the nineteenth, the relative damage wrought by nineteenth-century famines was much more severe. The late nineteenth century, however, saw a reduction in famine intensity in India, due to a combination of better communications and improvements in relief policy; in Russia too famine became more localized. In Japan, famine was common in the seventeenth century, less so in the eighteenth, and disappeared in the nineteenth.

Finally, elementary demographic arithmetic argues against famines being as severe a demographic corrective as Malthus and others have suggested. A series of famines that carried off, say, 5 percent of the population every decade would require population growth of 0.5 percent in noncrisis years to prevent population from declining in the long run. That would require living standards well above subsistence in noncrisis years. A more likely scenario is slower noncrisis growth, coupled with fewer or less severe famines. That would not rule out what Adam Smith called dearths (disettes), or the endemic malnutrition that, according to economic historian Robert Fogel, characterized preindustrial economies.66
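
The arithmetic is simple enough to set out explicitly, using only the figures in the text: losing 5 percent of the population each decade must be offset by noncrisis growth of roughly half a percent a year if numbers are not to decline.

decadal_loss = 0.05
# Growth needed over a decade so that (1 + g_decade) * (1 - decadal_loss) = 1:
g_decade = 1 / (1 - decadal_loss) - 1       # about 5.3 percent per decade
g_annual = (1 + g_decade) ** (1 / 10) - 1   # about 0.5 percent per year
print(f"required noncrisis growth: {g_annual:.2%}")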

The relative power of famine and epidemics as positive checks also bears noting. Nonfamine-related checks such as the epidemics responsible for the enormous declines in the populations of pre-Columbian America and precolonial Australia as well as the Black Death probably wreaked more demographic havoc than most famines in recorded history. Likewise, the influenza pandemic of 1919 killed more people than any twentieth-century famine, with the possible exception of the Great Chinese Famine of 1959–61, while today the demographic cost of HIV/AIDS exceeds that of famine in Africa’s recent population history.

Where famine has been conquered, did the era of famines end with a bang or a whimper? A neo-Malthusian perspective might posit a scenario whereby famines decline gradually in intensity and frequency before permanently disappearing from a region or country: a slow improvement in living standards would have entailed an ever-smaller proportion of the population at risk. The historical record on this is mixed. India experienced “a declining trend in the overall number of excess deaths” between the 1870s and the 1900s, followed by four famine-free decades. The Bengal famine of 1943–44, which killed over two million, was very much an “outlier.” Since that time India has been spared major famines. Colonial Africa, which saw few “famines that kill” between 1927 and the end of the colonial era (apart from Ethiopia), also fits such a scenario. Iliffe attributes the gradual improvement to a combination of better governance, improved communications, higher living standards, and more rainfall.67

Demographic historians have noted that the demise of famine in England also fits such a neo-Malthusian scenario. The late Andrew Appleby linked the virtual elimination of famine after the 1620s to the reduction in the number of tenant farmers, the growth of towns, and signs of a diversifying agriculture as population ceased to bite at the margin. Anthony Wrigley and Roger Schofield’s analysis of years of crisis mortality—which they define as years when the crude death rate was at least 10 percent above a twenty-five-year moving average—in England between the 1540s and the 1860s also suggests that both the size and duration of crises declined gradually over time, although it indicates further subsistence crises associated with significant excess mortality as late as 1728–30 and 1740–42. Meanwhile, a recent analysis of famine in Japan indicates that “in the seventeenth century famines occurred more or less regularly, and they gradually become less frequent in the eighteenth century.”68

Malthus highlighted the prevalence of years of “very great suffering from want” in Scotland, singling out 1680 “when so many families perished from this cause, that for six miles, in a well-inhabited extent there was not a smoke remaining.”69 But the experience is not all one-way. Ireland between the 1740s and the 1840s also broadly conforms to such a pattern of gradual decline, but then the Great Potato Famine brought the era of famines to a cataclysmic end. Finland’s last famine in 1867–68 was also a major one. In prerevolutionary Russia there is evidence of a gradual decline in famine intensity; then the famines of 1918–22 and 1932–33 were massive crises, the siege-famine of 1942–43 in Leningrad and the postwar crisis of 1946–47 less so. Thus the evidence is mixed, both because of the role of contingency in human behavior and because of the strong element of randomness in natural and ecological occurrences.

REMEMBERING FAMINE

Oral history and folk memory of famines may plug some of the gaps left by the lack of standard documentary sources. Ordinarily these sources are invoked only for the light they can shed on the recent past, as in the case of oral poetry describing the Ethiopian famine of 1984–85. This example is a reminder of the porosity of memory; a mere decade or so after the event, “most peasants regretted the fact that they had forgotten” the poems composed during the famine. Even those who composed verses at the time had forgotten most of them—or perhaps did not want to remember them.70

Nevertheless, much of what we know about famine in precolonial Africa comes from oral accounts, perhaps transmitted across several generations. Thus the chaos caused by the South African “Madhlatule” famine of the early 1800s was described over a century later as “far greater than the Mbete famine in Mpande’s time (in the early 1860s).” People had to guard their crops, “for starving people would eat the green mealies growing there.”71 The claim of a Sudanese herdsman at the time of the rinderpest outbreak of 1889–97 that “a similar calamity had occurred long ago: the Fulanis had suffered” highlights the singularity of the later outbreak.72 The evidence for cannibalism—bandits waylaying victims on the way to the city, and mothers eating their children—in Ethiopia in the 1890s rests entirely on (possibly embellished) folklore. Mashonaland suffered a catastrophic famine about 1860, “when so many people died that they had to be left unburied to be devoured by carrion.” Curiously, local missionaries did not record any human casualties from famine, but given that famine was widespread elsewhere in southern Africa at this time, the oral evidence from indigenous narrators is telling. Folk memories of famine in precolonial Burundi point to a “cumulative combination of climatic accidents, microbial shocks, and internal and international political instability, all occurring in a context of undue pressure on an agro-pastoral system and sociopolitical gridlock.”73 The tendency for particular famines to pass into folklore may imply that major famines were not so common or that those that were remembered dwarfed all the others.

Folklore offers a more intimate medium than the “colder” accounts of officialdom, and arguably gets closer to the way ordinary people felt and were affected, as the following few examples show:

Ireland, 1848–49: Michael Garvey got the cholera, and he and the entire household succumbed. They perished together. I think he died before his wife. . . . Somebody went to their cottage door and could see that they were all dead. All they did then was to set fire to the cottage, burn it, and knock in the walls. I remember myself in autumn-time how we used to pick blackberries near that spot—because there were lots of bushes where the house used to be—my mother warning us to keep away from the place. “Stay away from there,” she used to say, “or you will be harmed.”74

Bengal, 1943: “I was a widow. I stayed for several months longer in my in-law’s house, but I received no rice.” Sindhubala [the widow] began to sell off her brass and bell-metal utensils—plates, cups, etc.—and then purchased mug-dałl, salt, millet, and so forth, to eat. After selling all the utensils, she sold the cow. Then she began to eat wild vegetables, waterlily stalks, wild arum.
   Late in 1943 her father came to take her back to Tanguria. “He said it was not right for a woman to live alone in the household of her in-laws.” She agreed to leave but first sold her husband’s property—about 1½ bighas of land—to her brother-in-law for Rs. 136. [The price was low.] Her father took the money from her, giving her in return about ½ bigha and building on to his house a separate room for her. Some years later she managed to marry off her daughter, and her son-in-law now lives with her.75

Greece, 1942–43: When the Germans came [to Syros] no one would say anything [to you], the Italians [in contrast] would take [everything]. When you were going somewhere, whether you had cauliflowers or eggshells or lemon rind they would take them from you . . . whereas the Germans had this. They would say to you do anything you want but don’t mess about with me. That was all.76

Leningrad, 1941–44: But I wanted to say that even though it was so deadly cold, and almost everyone’s windows were broken, even then not one Leningrader cut down a living tree. No one ever did that. Because we loved our city, and we could not deprive it of its greenery. . . . They could tear down a fence, break up some kiosk, tear off an outer door. But they couldn’t saw down a tree. They burned furniture, various rags, letters (it was painful to burn letters). They burned many books (also a pity).77

Folklore is prone to forget the more distant past, however, and to suffer from chronological confusion. It is also subject to hidden biases and evasions. Thus, although about one-fifth of those who perished during the Great Irish Famine of the 1840s breathed their last in a workhouse, hardly any of the famine narratives collected mainly in the 1930s and 1940s refer to an ancestor in the workhouse. Given the enduring stigma attached to workhouse relief in Ireland, the silence could be due to selective memory; it may also be that the more articulate members of a community, those who transmit the memory, are atypical descendants of more resilient families, and so recall events witnessed rather than those experienced by their forebears. Accounts of participation in the public works, which employed seven hundred thousand people at their peak in 1847, are also doubly vicarious in this sense.78

A recent account of famine conditions on the Micronesian atoll of Chuuk in 1944–45 during Japanese occupation is based largely on the memories of elderly resident survivors, whose stories of substitute foods, intrafamily tensions, theft, and ingenuity in adversity highlight the power of oral history to retrieve anecdotes and impressions of famine often undocumented in conventional sources. At their best, not only do such stories offer new perspectives on the past but they also enrich our reading of the written record. The pitfalls of oral history, though, also need to be kept in mind: autobiographical memory tends to be self-serving, and rarely free of contamination by extraneous data. It can be subject to chronological confusion. Yet even silences can be revealing. For example, the resilience of Chuuk’s rural economy is implicit in these stories; excess mortality was light, even though the island had to sustain a population four times the norm in the face of blockade and nightly bombardments. Clearly it had the capacity to increase and diversify food output considerably at short notice. And the impression gained of the Japanese presence on Chuuk is relatively benign: there is evidence of the requisitioning of food and land on the part of the Japanese military, but it is telling that the hearsay reports of cannibalism refer to the Japanese troops, not to the indigenous population. Again, the oral record is silent on infectious diseases, which might have been expected to accompany any significant excess mortality.79

In Tanzanian folklore, the word famine is often used as a metaphor, and genuine famines were perhaps less frequent in the colonial era than usually claimed. Folklore “remembers” the well-documented famine of the 1890s, other serious famines in the 1830s and 1860s, and more frequent “localized food shortages.”80 In his classic study of Chinese agriculture, John Lossing Buck also relied on the memory of his informants for insight into the frequency of famines in the past. The informants recollected an average of three famines each. These famines, which lasted on average a year, reduced one in four of the population to “eating grass and bark,” forced one in eight to emigrate, and led one in twenty to starve.81 Such subjective accounts, though evocative and indispensable on other grounds, are rather fallible guides to the frequency of famines in the past.
