The first large-scale use of a traditional weapon of mass destruction (chemical, biological, or nuclear) involved the successful deployment of chemical weapons during World War I (1914–1918). Historians now refer to the Great War as the chemist’s war because of the scientific and engineering mobilization efforts by the major belligerents. The development, production, and deployment of war gases such as chlorine, phosgene, and mustard created a new and complex public health threat that endangered not only soldiers and civilians on the battlefield but also chemical workers on the home front involved in the large-scale manufacturing processes. The story of chemical weapons research and development during that war provides useful insights for current public health practitioners faced with a possible chemical weapons attack against civilian or military populations.

IN THE LATE AFTERNOON OF April 22, 1915, members of a special unit of the German Army opened the valves on more than 6000 steel cylinders arrayed in trenches along their defensive perimeter at Ypres, Belgium. Within 10 minutes, 160 tons of chlorine gas drifted over the opposing French trenches, engulfing all those downwind. Filled with pressurized liquid chlorine, the cylinders had been clandestinely installed by the Germans more than 3 weeks earlier. The order to release the gas was entrusted to German military meteorologists, who had carefully studied the area’s prevailing wind patterns. Disregarding intelligence reports about the strange cylinders prior to the attack, the French troops were totally unprepared for this new and horrifying weapon.1

The surprise use of chlorine gas allowed the Germans to rupture the French line along a 6-kilometer (3.7-mile) front, causing terror and forcing a panicked and chaotic retreat. Within a matter of minutes, this slow-moving wall of gas killed more than 1000 French and Algerian soldiers, while wounding approximately 4000 more.2 A British soldier described the pandemonium that flowed from the front lines to the rear:

[I watched] figures running wildly in confusion over the fields. Greenish-gray clouds swept down upon them, turning yellow as they traveled over the country blasting everything they touched and shriveling up the vegetation. . . . Then there staggered into our midst French soldiers, blinded, coughing, chests heaving, faces an ugly purple color, lips speechless with agony, and behind them in the gas soaked trenches, we learned that they had left hundreds of dead and dying comrades.3

The German High Command sanctioned the use of gas in the hope that this new weapon would bring a decisive victory, breaking the enduring stalemate of trench warfare. However, their faith in this wonder weapon was limited. Surprised by the apparent success of the attack, and having no plan to send a large offensive force in after the gas, the Germans were unable to take advantage of the situation. Within days, both armies once again faced each other from the same opposing fortifications. The attack that spring day, nonetheless, marked a turning point in military history, as it is recognized as the first successful use of lethal chemical weapons on the battlefield.

Here, I offer a window into the first weapon of mass destruction (WMD) by charting the development and use of gas warfare during World War I. Defined today as “man-made, supertoxic chemicals that can be dispersed as a gas, vapor, liquid, aerosol (a suspension of microscopic droplets), or adsorbed onto a fine talcum-like powder to create ‘dusty’ agents,” chemical weapons remain a viable public health threat for civilians and soldiers across the globe.4 If, in the world since the attacks of September 11, 2001, the threat of terror weapons seems a ubiquitous part of the daily news and the term WMD is now as familiar to soccer moms as to beltway defense planners, it is important to remember that the medical and public health consequences of chemical weapons use are as real today as they were in 1915.5

Although chemical weapons killed proportionally few soldiers in World War I (1914–1918), the psychological damage from “gas fright” and the exposure of large numbers of soldiers, munitions workers, and civilians to chemical agents had significant public health consequences. Understanding the origins of chemical warfare during World War I and its emergence during that conflict as a physical and psychological threat to both military and civilian populations can provide historical insight into possible contemporary medical responses to this enduring and technologically pervasive threat.

World War I had numerous causes, including colonial competition, economic rivalry, and various ideological and cultural clashes among the rising nation states of Europe. A complex and binding system of alliances among the Central Powers (Germany, Austria-Hungary, and Turkey) and the Allied Powers (Britain, France, Russia, and beginning in 1917, the United States) placed peace in a delicate balance. The tipping point came on June 28, 1914, with the assassination of Archduke Franz Ferdinand of Austria-Hungary by a Serbian national. This single act set off a chain of events that quickly plunged the world into a global war that eventually claimed between 9 million and 10 million lives and lasted 4 years.6

The economic and industrial forces that altered the face of Europe during the first decade of the 20th century were also instrumental in creating many of the technological innovations driving the war. In time, tanks, submarines, and aircraft revolutionized how World War I was waged on land, sea, and in the air. Chemical weapons were another new marvel of the war, and their successful research, development, and deployment reflected the increasing sophistication of scientific and engineering practice. At the same time, physicians and medical researchers (some of whom worked to create these weapons) struggled to create adequate defensive systems and medical procedures to limit casualties. By the time of the armistice on November 11, 1918, the use of chemical weapons such as chlorine, phosgene, and mustard gas had resulted in more than 1.3 million casualties and approximately 90 000 deaths (Table 1).

Although one could argue that primitive forms of chemical weapons were used in earlier conflicts, it was not until the 20th century that scientists, engineers, and physicians could predictably and consistently produce these weapons to inflict mass casualties.7 At the close of the 19th century, the various European powers became troubled by the potential of chemical weapons and began holding conferences and drafting treaties to limit or curtail the development and deployment of this new technology. Suspicion and self-interest among both allies and rivals generally limited the usefulness of these activities, an unfortunate political reality that continues to the present day. For instance, the Hague Declaration of 1899 and the Hague Convention of 1907 forbade the use of “poison or poisonous weapons” in warfare, yet more than 124 000 tons of gas were produced by the end of World War I.8

The development of these war gases, like many of the other new weapons systems created during this period, depended on the work of academic and industrial scientists who increasingly served the military needs of the state. Germany, arguably the world’s leader in science at the time, and without question the guiding force in academic and industrial chemistry, moved decisively in the research and production of chemical agents once the war began. Fritz Haber, a prominent German chemist and future Nobel laureate, led the German program.9 Haber, the so-called “father of chemical weapons,” moved enthusiastically between the front and the Kaiser Wilhelm Institute for Physical Chemistry and Electrochemistry in Berlin. As he organized and led the German chemical warfare program, solving ongoing problems in chemical agent development and deployment, his activities anticipated a troubling pattern of behavior among future generations of scientists, engineers, and physicians.

The Germans’ first use of gas mirrored their initial emphasis on the offensive aspects of chemical weapons research and their belief that a technological fix would bring a decisive victory.10 Those on the receiving end, France and Great Britain, moved first on the defensive aspects of these new weapons. By war’s end, however, the national programs among the warring nations focused on both the offensive and defensive aspects of chemical weapons. These programs often resulted from the complete mobilization of their nations’ academic, industrial, and economic resources for war.

The expansion of research brought in an array of specialists from chemistry, physics, and engineering and, increasingly, from medicine, biology, and physiology, further blurring ethical demarcations in medical research. Throughout the war, the British armed forces enlisted scientists in many academic institutions—including Oxford, Cambridge, University College London, the Army Medical College at Millbank, and the Lister Institute—to work on both aspects of gas warfare. The French government took a more direct approach to chemical weapons research by militarizing the chemistry, pathology, and physiology departments of 16 leading medical schools and institutes. Additionally, it essentially absorbed the University of Paris in order to direct, coordinate, and research all aspects of chemical warfare.11

Research conducted by Allied scientists on the nature of chemical warfare, along with the observations and experience of their combat troops, was employed by the United States with varying degrees of success when it joined the Allies. The United States had adopted a policy of formal neutrality at the beginning of the war, although ongoing Atlantic trade benefited the Allied cause. American sympathies for Britain, France, and other allies grew during the course of the war, aided both by the deadlock on the battlefield and the increasing menace of unrestricted submarine warfare by German U-boats. In 1917, a marked increase in U-boat attacks on commercial shipping, in addition to a potential German alliance with Mexico, led to a formal declaration of war on April 6, 1917.

The American response to chemical warfare is indicative of the growing sophistication of academic, industrial, and military research and development capabilities in the United States at a time when linkages between the federal government and science were becoming more pronounced.12 As the war intensified, increasing anxiety about possible entry into the conflict and the overall lack of military preparation prompted some in government, industry, and academe to begin planning. The origins of an organized program for chemical warfare in Washington came first from the civilian sector. On February 8, 1917, Van H. Manning, the director of the Bureau of Mines, offered the technical services of his agency to the Military Committee of the National Research Council. The familiarity of the bureau with research involving noxious gases, breathing apparatus, explosives, and gas detection technologies seemed well suited for the task.13

On the same day as the American declaration of war, the National Research Council subcommittee on noxious gases was appointed to “carry on investigations into noxious gases, generation, antidote for same, for war purposes.”14 Within one year, research was under way at a number of prestigious universities and medical schools, including the Massachusetts Institute of Technology, Johns Hopkins, Harvard, and Yale, in addition to some of the country’s leading industrial firms.15 The chemical warfare program was directed out of offices and laboratories at American University in Washington, DC. During the course of the war, research programs involving

gas investigations; defense problems; medical science problems; chemical research; gas mask research; pyrotechnic research; small-scale manufacturing; mechanical research; pharmacological research; [and] administration were carried out in Washington and across the country.16

Research began as the first US troops made preparations for combat. Fear of gas attacks against these members of the American Expeditionary Force (AEF) embarking for the European front initially focused research in the United States on defensive measures, with priority given to gas mask design and production.

As the American war effort intensified, research expanded to include offensive weapons, resulting in numerous discoveries, including the creation of one of the conflict’s only new chemical weapons, an arsenic-based agent similar to mustard gas called lewisite (β-chlorovinyldichloroarsine). Synthesized in his laboratory by Winford Lee Lewis, this deadly substance was soon mass-produced by the military under the direction of chemist and future Harvard president James B. Conant.17 By July 1918, research and development on agents such as lewisite passed from civilian to military control as the entire chemical weapons program moved from the Bureau of Mines to the army’s newly organized Chemical Warfare Service.

At that time, chemical warfare research in the United States involved more than 1900 scientists and technicians, making it the largest government research program in American history to that point.18 By the time the war ended, historians estimate that more than 5500 university-trained scientists and technicians and tens of thousands of industrial workers on both sides of the battle lines worked on chemical weapons.19 Both the military use and industrial production of chemical weapons presented a number of health risks.

As the war progressed, the knowledge gained by British, French, and German military planners and scientists on the nature of gas warfare quickly evolved into a kind of technological chess match. New offensive threats were met by an evolving array of defensive countermeasures. Overall, the deployment of chemical weapons met with mixed results as the tactics, strategy, and military culture of all of the armies continually struggled to adjust to this new weapon. Aside from tactical and strategic consequences, chemical weapons heralded larger cultural changes for combatant and observer alike. In perhaps his most celebrated poem, “Dulce et Decorum Est,” British soldier Wilfred Owen captured in verse the horrors of this new form of warfare, a horror that he had witnessed firsthand at the front.

Gas! Gas! Quick, boys!—An ecstasy of fumbling,
Fitting the clumsy helmets just in time;
But someone still was yelling out and stumbling,
And flound’ring like a man in fire or lime . . .
Dim, through the misty panes and thick green light,
As under a green sea, I saw him drowning . . . 20

The types of gas attacks Owen witnessed prior to his own death in 1918 were quite different in kind from those experienced by troops stationed at the front during the first years of the war.

Chemical warfare had begun in a tentative way before Ypres with the French use of tear gas grenades in 1914 and early 1915. Similarly, the British began developing a range of nonlethal chemical weapons meant to harass enemy troops. The Germans started experimental work on chemical agents in late 1914 at the suggestion of University of Berlin chemist and Nobel laureate Walther Nernst.21 This early research quickly produced an effective tear gas artillery shell. Although the Germans fared no better than the French with tear gas as a debilitating agent, German chemists, now with a formal program led by Fritz Haber, continued to work on the chemical weapons problem. By 1915, scientists at the Kaiser Wilhelm Institute had developed an effective chlorine gas weapon. By placing chlorine into specially designed cylinders, chlorine gas could be discharged in a dense cloud that eventually settled into enemy trenches. Interestingly, the German High Command envisioned gas not as a lethal weapon but as an effective tool to draw soldiers out of their trenches so that they could be killed or wounded with conventional weapons.22

As an offensive response to chlorine, both the French and the British quickly developed “annoyer” grenades, but these chemical weapons were not lethal and few even made it onto the battlefield. By mid-1915, both sides regularly used cylinders to deploy chlorine gas, and by mid-1916 both sides mixed chlorine and phosgene in an attempt to create larger numbers of casualties.23 By the end of the war, most belligerents employed a variety of chemical agents in combat, including chlorine, phosgene, and mustard gas (Table 1).

The Germans’ offensive use of chlorine led one British soldier to remark that it “was the most fiendish, wicked thing I have ever seen.”24 In a more graphic description, British Sergeant Elmer W. Cotton vividly remembered the suffering of gassed soldiers. He wrote in his diary that men were

propped up against a wall . . . —all gassed—their color was black, green & blue, tongues hanging out & eyes staring—one or two were dead and others beyond human aid, some were coughing up green froth from their lungs.25

This type of suffering moved the British and the French to quickly develop some type of protection from chlorine gas. The British promptly developed a primitive gas mask that a soldier described as “[a] piece of muslin, which we tied round the nose and mouth and around the backs of our heads,” but these were largely ineffective.26 Once chlorine was identified as the chemical agent, a thiosulfate-laced cotton pad effectively neutralized the gas.27

By July 1915, the British medical corps devised a wool hood soaked in thiosulfate, sodium bicarbonate, and glycerin. This “hypo helmet,” a hood that fully enclosed a soldier’s head, had a mica window so soldiers could see.28 Although these early masks were better than nothing, soldiers found them difficult to put on, uncomfortable, and easily damaged, thus limiting their efficacy.29 The development of the small box respirator by the British in 1916 provided effective protection from most chemical agents used throughout the war because it could be modified to neutralize new agents, such as mustard gas.

The gas mask drill—the donning of all protective gear as rapidly as possible under the most difficult conditions—became an integral part of life in the trenches and in preparing recruits for battle. As with the rifle drill that anchors infantry training, the discipline and skills needed to quickly and effectively don protective gear became a necessary part of life for all at the front. Soldiers were instructed always to have their mask handy, no matter where they were or what they were doing. Responding to the sound of a specific whistle, clanging alarm bell, or shout, soldiers would move with all due speed to put on their masks or hoods. Far from embracing gas masks as a life-saving technology, soldiers felt emasculated and claustrophobic in them. As one British officer explained,

We gaze[d] at one another like goggle-eyed, imbecile frogs. The mask makes you feel only half a man. The air you breathe has been filtered of all save a few chemical substances. A man doesn’t live on what passes through the filter—he merely exists. He gets the mentality of a wide-awake vegetable.30

According to British infantryman Albert Marshall, “By the second lot we’d got gas masks which came right over your head. They were terribly hot and awful, but still, they stopped the gas.”31

There was also no guarantee that gas masks would work. Although British and German masks were fairly reliable because of strict quality control measures, the French masks were notoriously unreliable. Moreover, treating wounded soldiers became even more difficult when both stretcher-bearers and injured men needed to don masks, and in some cases the masks caused more trauma to victims.32 The dangers at the front affected all those who lived and labored there. Besides affecting military populations, gas clouds caused civilian casualties because the wind often blew through villages and towns close to the front. Unlike soldiers, civilians did not necessarily have access to gas masks or the training to make sure the masks were used properly. Although the official number of civilian casualties was about 5200, the actual total was undoubtedly much higher.33 Gas attacks were so frequent that in time researchers on both sides even devised masks for horses, dogs, and messenger pigeons involved in military operations.34

As the Americans watched the war in Europe, they realized as early as 1915 that gas masks were an important defensive measure for their troops. The Board of Ordnance and Fortifications of the War Department stressed the importance of developing masks before the United States entered the war. In a November 5, 1915, meeting they decided that “Certain practices in the present European war have indicated the necessity for providing some equipment of this kind. . . . The design and supply should not be left unassigned and should be assigned to the Medical Department.”35

Within a month of the US entry into the conflict, the secretary of war ordered the surgeon general to supply 1 000 000 gas masks, 8500 “chemical sprayer[s] for cleaning trenches,” and 1000 “oxygen apparatus for resuscitating [the] wounded” by June 30, 1918.36 As the summer of 1917 progressed, it became evident that, unlike the other, purely laboratory-based research programs, gas mask manufacture was a new and complex industrial endeavor that required a separate institutional structure within the new chemical warfare research apparatus. On August 31, 1917, the Gas Defense Service (later known as the Gas Defense Division) was formally organized within the Army Medical Department under the auspices of the Office of the Surgeon General to carry out gas mask research and manufacture.37 Even with the nation’s elaborate institutional and research commitment to gas masks, masks had their limits and could not ward off the fear of being gassed.

By late 1915, gas warfare had become a psychological as well as physical weapon. Much as hellish multiday artillery barrages resulted in mental breakdowns associated with “shellshock,” the constant threat of exposure to even a single gas shell added to the already unbearable stress of life at the front. The fear of being gassed, along with periodic harassing gas attacks, kept soldiers on both fronts on edge and could lead to anomie, gas fright, and in some cases mental breakdowns. Soldiers on all sides felt that gas warfare was not a proper weapon and went beyond the bounds of humanity.38 H. Allen, who served at the front, described the psychological effects of nightly gas threats:

With men trained to believe that a light sniff of gas might mean death, and with nerves highly strung by being shelled for long periods and with the presence of not a few who really had been gassed, it is no wonder that the gas alarm went beyond all bounds. . . . Gas horns would be honked, empty brass shell-casings beaten, rifles emptied and the mad cry would be taken up. . . . For miles around, scared soldiers woke up in the midst of frightful pandemonium and put on their masks, only to hear a few minutes later the cry of “All safe.” . . . Two or three alarms a night were common. Gas shock was as frequent as shellshock.39

The psychological consequences of gas attacks were not limited to fear of the air that soldiers breathed. A 1918 US Army “after-action report” described an incident in which a platoon of machine gunners became convinced that their food was contaminated in a gas attack. When a shell exploded near the men while they ate, a soldier remembered:

[S]omeone yelled “GAS!” and said their food had been gassed. All the men were seized with gas fright and a few minutes later made their way to the Aid Station. . . . They came in in stooping posture, holding their abdomens and complaining of pains in the stomach, while their faces bore anxious, frightened expressions and some had even vomited.40

After being given bicarbonate of soda to settle their stomachs, a rest, and reassurances that they were not gassed, the men returned to the front. As both gas fright and gas attacks became more frequent and severe in 1917 and 1918, doctors and medics found it difficult to diagnose real as opposed to imagined gas attacks. For the most part, all the medical corps could do for gas casualties was prescribe bed rest and wait for symptoms to emerge. Medical units quickly shipped soldiers without obvious physical symptoms of gas exposure back to the front. Moreover, soldiers never knew if their gas mask would leak or if their filter would run out, which caused even more anxiety as belligerents used more caustic agents.41 The Allied efforts to contain the effects of chlorine and phosgene briefly stabilized the technological balance through the first half of the war. The German introduction of mustard gas would destroy this balance and elevate the violence and terror of chemical warfare to a new level.

By the spring of 1917, the defensive measures employed by the Allied armies to contain the German gas threat were increasingly successful, at least with respect to limiting fatalities. Surprisingly, such defensive success came as the offensive deployment of gas weapons became increasingly sophisticated. Since 1915, the integration of ongoing field and laboratory studies involving agent stability, meteorological conditions, and weapons design made the tactical planning involved in delivering gas to specific targets much more reliable.42 There were limits, though, to the destructive range of certain chemical weapons. Both sides now recognized that although a regular gas attack involving either chlorine or phosgene (under optimum conditions) could produce large numbers of casualties, such an attack usually resulted in relatively few fatalities if troops were properly prepared and outfitted.

In a bloody war of attrition, however, the ability to wound instead of kill had definite tactical and strategic value. The continual removal of large numbers of battle-ready troops from forward areas, even for short periods, severely compromised the ability of armies to conduct successful operations. Because of the respiratory damage both chlorine and phosgene caused, soldiers required a long convalescence before returning to combat. The average number of days an AEF gas victim spent recovering and away from the front was 60 days for chlorine and 45.5 days for phosgene.43 Although such losses certainly impeded the war effort, by the spring of 1917 the overall military effectiveness of gas attacks seemed to be diminishing. This would soon change.

In July 1917, aware of the loss of their technological superiority and perhaps their ability to win the war, the Germans deployed a new and more troublesome chemical agent: mustard gas. Although mustard was introduced late in the war, it became known as the “King of Battle Gases” because it eventually caused more chemical casualties than all the other agents combined, including chlorine, phosgene, and cyanogen chloride. Harry L. Gilchrist, medical director of the Gas Service, US Army Expeditionary Force, described the first mustard casualties:

At first the troops didn’t notice the gas and were not uncomfortable, but in the course of an hour or so, there was marked inflammation of their eyes. They vomited, and there was erythema of the skin. . . . Later there was severe blistering of the skin, especially where the uniform had been contaminated, and by the time the gassed cases reached the casualty clearing station, the men were virtually blind and had to be led about, each man holding on to the man in front with an orderly in the lead.44

Unlike the lung irritants chlorine and phosgene, mustard gas was a vesicant (similar to lewisite) that produced large blisters on any area of contact. Particularly severe blisters emerged when uniforms were soaked in mustard gas. If exposure was high enough, mustard gas could cause permanent eye damage, but this was infrequent.45 The complexity of treatment required in mustard injuries involved a new level of aid and medical care.

Caring for mustard victims differed from caring for chlorine or phosgene casualties. Once evacuated, chlorine and phosgene victims received oxygen and bed rest until they were healthy enough to return to the front. However, soldiers exposed to mustard gas, especially in high concentrations or for long periods of time, needed to bathe with hot soap and water to remove the chemical from their skin. If it was not scrubbed off within 30 minutes of exposure, blistering occurred. Portable shower units with specially trained medics helped minimize its blistering effect. These consisted of a “bath truck [that was] provided with [a] hot water boiler and a number of fold-down shower heads.”46

After the troops showered, the chemical corps issued them new uniforms in exchange for their contaminated clothing. These discarded clothes were then decontaminated and reissued to other exposed soldiers. Because mustard gas induced eye injuries, casualties had their eyes washed as quickly as possible to minimize the duration of acute conjunctivitis, which generally lasted several weeks. Soldiers’ care became increasingly difficult in the last year of the war with the increased frequency of gas attacks. Also, mustard gas damaged the lungs more severely than either chlorine or phosgene did, and these lesions were much more difficult to treat.47 The recuperation time from mustard gas exposure—46 days—was similar to that of phosgene.48

Mustard gas was a particular problem for both sides because it settled and persisted wherever it was released, contaminating the area. The vesicant often recontaminated soldiers and horses passing through unquarantined ground. Cecil Withers, a British soldier, remembered being exposed to mustard gas during a mortar attack:

I suffer badly from phlegm and from coughs and colds a lot. That all started when the British were shelling hard at the last Battle of the Somme. One of the shells disturbed the residue of mustard gas that had been lying there for months. They talk about secondary smoking . . . I got secondary gas.49

In addition, because mustard gas was heavier than air or water, it settled in ditches or at the bottom of trenches and puddles and created a persistent environmental hazard for troops, civilians, and animals alike. All a soldier needed to do was disturb the dirt, mud, or water and he would suffer from gas exposure. Persistency was a problem not only on the battlefield but also for the medical corps. Because of the volatility of mustard gas, a single gassed soldier could contaminate medical personnel, the ambulance, and other patients. The medical corps created a special evacuation system to minimize this type of contamination once large quantities of mustard gas were used in combat.50

Although new to gas warfare, the United States moved quickly and used mustard gas offensively in June 1918, when US mustard gas production was 30 tons per day. Lewisite, which might have replaced mustard gas had the war continued into the winter of 1919, was a “superior” weapon that caused instantaneous blistering, was lethal in minute quantities, was relatively difficult to detect, and perhaps more importantly, had a molecular structure that allowed rapid dissipation. This last factor allowed attacking forces to move into enemy territory without fear of contamination and injury.51

The bloody toll of mustard gas by war’s end is indicative of its usefulness as an offensive weapon. Although approximately 30% of all war casualties were victims of gas exposure, more than 80% of the approximately 186 000 British chemical casualties were caused by mustard gas alone, with a death toll of approximately 2.6%. This extremely large number of casualties among well-trained and equipped British troops indicates the destructiveness of mustard gas on the battlefield. AEF combat losses included more than 52 800 battlefield fatalities, with approximately 1500 dying of gas-related injuries.52 Unfortunately, death and injury caused by chemical agents were not restricted to the battlefield.

The scale of industrial chemical production during the course of the war was enormous and without precedent. Chemical companies, universities, and government laboratories in all the warring nations labored at great cost to produce chemicals not only for traditional war materiel such as munitions and fuel but also for a new generation of weapons. The production of certain chemical weapons was complex and often involved the synthesis of various chemical precursors needed for the completion of a specific chemical agent. Estimates by historians and military officials place production in excess of 124 200 tons of gas, indicating a substantial investment on the part of various governments and chemical manufacturers to meet expansive wartime production schedules.53 While thoughts of gas warfare usually bring to mind images of the battlefield, the work of those who manufactured war gases on the home front was no less vital, and in many ways it exposed them to as many, if not more, health risks.

Germany, then the world’s leading chemical manufacturer, produced numerous gas warfare munitions during the course of the war, including more than 33 million pounds of gas shells. The German firm Badische Anilin- und Soda-Fabrik produced the largest quantity of phosgene over the course of the war. The company’s ready access to carbon monoxide from its ammonia plant facilitated the manufacture of approximately 7200 tons of phosgene per year. Although Germany out-produced the Allies in phosgene production, the French, British, and Americans all made phosgene using a variety of techniques.54 By 1916, the Germans deployed diphosgene in artillery shells at Verdun. It superseded phosgene in German attacks largely because the diphosgene shells could be assembled in the field rather than in distant factories.55

Although postwar investigations on the state of the chemical industry during the war invariably gave Germany high marks overall for worker safety, a report by Lt Col James F. Norris of the US Army Chemical Warfare Service on the Bayer Company’s diphosgene (perchloromethylformate) facility at Höchst noted that German factory men were frequently poisoned by gas and that as many as one third were absent from the factory at any given time because of gas-related illnesses. Norris also suspected that several men died from gas exposure at the Höchst plant, but there was no conclusive proof of the linkage between working in the plant and the men’s deaths.56 Superior plant organization and design, however, reduced the risk to some German workers, as did their choice of production method, most importantly for mustard gas.57

The British and French had more difficulty manufacturing chemical agents, especially once they started producing mustard gas. Because the Allies used the more dangerous Guthrie’s process, industrial mustard production necessitated extra care and state-of-the-art facilities. Historian Ludwig Haber noted that safe production required “the tiled construction and plumbing of a municipal washhouse, the ventilation system of an up-to-date coal mine, and the nursing facilities of a CCS [casualty clearing station] in a quiet sector of the front.”58 Because they relied on smaller and older plants, however, British and French facilities often employed primitive open vats and substandard piping. Particularly dangerous jobs included cleaning the pipes, which inevitably became blocked during production, as well as repairing pumps and charging drums to prepare them for transporting various toxic and corrosive liquids. These industrial conditions produced the same injuries in civilians as they did in soldiers, leading to high rates of absenteeism and eventually to a rotational schedule that allowed workers a week off for every 20 days in the plant.

The effects were even more severe for mustard plant workers: “[W]hatever was touched inflamed and whatever was breathed irritated.”59 Factory workers suffered from chronic mustard gas poisoning because of long-term exposure; symptoms included “listlessness, ‘nervous debility’ . . . headaches, indigestion, spasms of the eyelids, breathlessness, and inability to do a full day’s work.”60 Industrial workers suffered from frequent cases of bronchitis, asthma, throat and lung infections, chest and heart problems, and depression or other recurring but vague ailments.61 Many gassed soldiers and factory workers found it difficult to work after the war because of frequent illness and difficulty breathing.

The most dangerous job regardless of location involved filling artillery gas shells. Shell filling produced even more injuries than normal chemical production, and German workers suffered as much as British, French, and American workers.62 Thousands of workers at British loading facilities were adversely affected; at one facility, one worker in nine became ill or was injured, and at another the rate reached 100% of the labor force.63 These recurring injuries among British laborers prompted changes in the approach of government and military planners to the problem of plant design and worker safety. One of the most significant responses was the Trench Warfare Supply Department’s appointment of F. Shufflebotham as acting medical supervisor in charge of all British gas facilities in July 1916.

Following the British model, the Americans adopted a more rigorous medical inspection program to facilitate worker safety. Even before industrial production began, the Bureau of Mines in Pittsburgh, Pennsylvania, was charged with instituting such a program. As large-scale production facilities came on line, the surgeon general’s office took over responsibility for workers (Figure 1). The office contracted with local physicians “whose duty it was to hold sick call, give physical examinations at regular intervals, examine applicants for work in the gas plant, and to be available for emergency calls at all hours.”64

By the end of the war, the production infrastructure, including federal facilities such as the Edgewood Arsenal in rural Maryland, employed more than 10 000 men and women. In addition to Edgewood, 9 other gas facilities produced over 140 tons of gas per day, “an amount greater than the production of Germany, Great Britain, and France combined.”65 Such mass production quotas, however, were not without risk. An analysis of a single 7-month period at Edgewood revealed 925 casualties and 3 fatalities, with more than 75 injuries traced directly to mustard gas production.66 For many soldiers and workers, the health consequences of gas exposure were lifelong. As veteran Albert Marshall noted:

[T]he gas is still with me today. It makes me itch every morning and at six every night. You can see my skin all dry. Tonight, my arm will itch from the top to the elbow. And so will the back of my neck. It feels like a needle pricking you. And that’s from ninety years ago.67

Although memories of the Great War have now receded into the past, chemical warfare has remained surprisingly resilient over time. Between the 1918 armistice and 1933, several international conferences were held to try to limit or abolish chemical weapons; these included the Washington Conference (1921–1922), the Geneva Conference (1923–1925), and the World Disarmament Conference (1933). Although progress was made toward outlawing the use of “asphyxiating, poisonous or other gases” per the Geneva Protocol of 1925, programs and research continued throughout the interwar period and most of the rest of the century, despite the public’s rejection of these weapons.68

Although condemned, chemical weapons continued to be used during the interwar years, largely against civilians in colonial possessions. The most notable instance was the Italian government’s aerial spraying and bombing of Ethiopian soldiers and civilians during the Second Italo-Abyssinian War. The Spanish also used chemical weapons in Morocco, the British in Russia and Iraq, and the Japanese in China. With the Japanese expansion into China and the rise of National Socialism in Germany in the 1930s, most countries, including the United States, refused to eliminate chemical weapons as a strategic option, although all major combatants dramatically scaled back their programs. The difficulties in regulating chemical weapons are perhaps best illustrated by the actions of the United States. Although the United States signed the Geneva Protocol on June 17, 1925, formal ratification by the US Senate did not come until April 10, 1975, almost 50 years later.

This political ambivalence reflected the ongoing debates about chemical weapons research and production that waxed and waned in the United States over the rest of the century. During World War II, for instance, the US military was vocal about avoiding the deployment or use of poison gas. At the same time, however, chemical weapons were a mainstay of the Army Air Corps strategic bombing campaigns in both the European and Pacific theaters.69 In less than 3 years, more than 220 000 tons of chemically based munitions such as napalm and magnesium were dropped on civilian targets in Germany and Japan, resulting in the deaths of hundreds of thousands of civilians. The air campaigns of World War II also reflect a disturbing rise in civilian casualties from all types of weapons, a trend that has increased steadily from the 1930s to the present day.70

The commitment to a host of long-term, large-scale military research programs during the Cold War, involving numerous weapons systems, provided an institutional and financial impetus for work on a variety of chemical weapons. In an effort to secure their positions as global superpowers, the United States, the Soviet Union, and their various North Atlantic Treaty Organization (NATO) and Warsaw Pact allies developed new chemical weapons programs based in part on German nerve agents like sarin. To expedite their search for new agents, the United States, Canada, and Britain entered a Tripartite Agreement as early as 1946 to share research on offensive and defensive aspects of chemical weapons. Chemical weapons research programs such as the US program often worked in tandem with nominally smaller but no less sophisticated biological weapons programs, which added a wealth of data on complex meteorological and delivery system problems. Over time, the increasing sophistication of chemical and biological weapons in the early postwar period led to the development of new generations of nerve agents.71

By the mid-1950s, the United States was searching for more powerful nerve agents; these efforts culminated in the development of VX, which was 3 times as lethal as sarin and had the additional tactical utility of battlefield persistence. The United States continued to develop traditional chemical (and nerve) weapons well into the late 1960s. At that time, changes in national security policy, coupled with the ongoing military situation in Vietnam, prompted a shift toward partial disarmament of some types of weapons of mass destruction alongside a simultaneous expansion of research and development programs for new weapons.

This schizophrenic policy continued into the 1980s, culminating in the creation of a new generation of so-called binary chemical weapons. Binary weapons used relatively harmless precursors that combined chemically within a warhead en route to a designated target, producing toxic agents capable of inflicting mass casualties. The promise of binary weapons came at a time of increasing political pressure over the dangers of large-scale chemical warfare.

By the late 1980s, in response to pressure from the Soviet Union and various NATO allies, the United States began the wholesale destruction of much of its chemical weapons stockpile (at places such as the army arsenal at Pine Bluff, Arkansas, and Johnston Atoll in the Pacific), both to disarm and to destroy older weapons that had become unstable. Research on various chemical weapons systems continued in the United States through the 1990s and on into the new millennium. At present, the United States maintains a large and sophisticated arsenal of chemical and nerve agents for tactical and strategic use.72

The future of these weapons may best be understood by returning to 2 scientists who had firsthand knowledge of their development and use during World War I.73 Their thoughts both at the time and years later may best encapsulate the contested nature of chemical warfare during the last century. Writing in the late 1960s, chemist James Conant, who directed US lewisite production during World War I, expressed moral ambivalence about poison gas, comparing it to weapons systems in general:

To me, the development of new and more gases seemed no more immoral than the manufacture of explosives and guns. . . . I did not see in 1917 . . . why tearing a man’s guts out by high explosive shell is to be preferred to maiming him by attacking his lungs or skin. All war is immoral.74

Otto Hahn, a future Nobel laureate in chemistry, was recruited by his colleague Fritz Haber to the German chemical weapons program. Hahn went to the eastern front to see for himself the capabilities of this new weapon. The experience left him profoundly shaken:

I was very ashamed and deeply agitated. First we attacked the Russian soldiers with our gas, and then, when we saw the poor chaps lying on the ground and slowly dying, we restored their breathing with our self-rescue equipment. The total insanity of war became obvious to us. First one attempts to eliminate the unknown enemy in his trench, but when one comes face to face with him, one cannot bear it and sets about helping him. Yet often we could no longer save the poor victims.75

Whether or not the pragmatic views of Conant were correct, the “insanity of war” remained a constant for the rest of the 20th century, even as the political means and technological methods of waging war continued to evolve. In the years since the end of the Cold War, the continued evolution of warfare, coupled with rapid developments in globalization, has made the threat of chemical warfare more immediate. The increased availability of various industrial chemicals, coupled with the expansion of asymmetrical warfare and terrorism, presents us with an uncertain future. During 2007, chemical weapons were used in Iraq against both civilian populations and American and British occupation forces. The detonation of small chemical bombs and of large chlorine tanker trucks (with the aid of explosives) in densely populated urban areas such as Fallujah and Baghdad created many of the same medical response challenges that confronted military aid workers and physicians almost a century ago. In many ways, the world still lives in the shadow of April 1915.

Table 1— Chemical Warfare Agents Developed During World War I
Agent: Chlorine (Cl2) | Phosgene: carbonyl chloride (COCl2) | Mustard gas: β,β′-dichlorethyl sulfide ((ClCH2CH2)2S) | Lewisite: β-chlorovinyldichloroarsine (ClCH=CHAsCl2)
US Army Chemical Warfare Service symbol: Cl | CG | HS | M-1
Physiological classification: lung injurant | lung injurant | vesicant | vesicant
Tactical classification: casualty agent | casualty agent | casualty agent | casualty agent
Vapor density (air = 1): 2.5 | 3.5 | 5.5 | 7.1
Persistency: summer, 5 min in open, 20 min in woods; winter, 10 min in open, 60 min in woods | summer, 10 min in open, 3 min in woods; winter, 20 min in open, 2 h in woods | summer, 24 h in open, 1 wk in woods; winter, several weeks in both open and woods | summer, 24 h in open, 1 wk in woods; winter, 1 wk in open and woods
Lethal concentration (mg/L or oz/1000 cu ft): 2.53 (30-min exposure), 5.60 (10-min exposure) | 0.36 (30-min), 0.50 (10-min) | 0.07 (30-min), 0.15 (10-min) | 0.48 (30-min), 0.12 (10-min)
Odor: pungent | fresh-cut hay | garlic or horseradish | like geraniums, then biting
Neutralization: alkali solution or solid | steam will hydrolyze; alkalis and amines react with CG | bleaching powder; 3% solution of sodium sulfide (Na2S) in water; steam; gaseous chlorine; or burial under moist earth | alcoholic sodium hydroxide spray
Physiological action: burns upper respiratory tract | burns lower lung surfaces, causing edema | dissolves in skin, then produces burns | dissolves in skin, then burns and liberates M-1 oxide, which poisons the body
Protection: gas mask (absorbents in canisters only) | gas mask (absorbents in canisters only) | gas mask and protective clothing | gas mask and best protective clothing
First aid: keep patient quiet and warm; treat for bronchial pneumonia | keep patient calm; administer heart stimulants; give oxygen in severe cases; treat like pleurisy | wash affected parts with kerosene or gasoline, then with strong soap and water; rub dry; rinse with hot clean water; agent must be removed within 3 min | eye casualty concentration 0.0001 mg/L at 1-h exposure; wash with oils, hot water, and soap; dry; first aid must be applied at once

Source. Compiled from Medical Aspects of Chemical and Biological Warfare, ed. Frederick R. Sidell, Ernest T. Takafuji, and David R. Franz (Washington, DC: Office of the Surgeon General; 1997).
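For readers who want to work with Table 1 programmatically, its core rows can be transcribed into a small lookup structure. This is a sketch using only values taken directly from the table; the dictionary layout itself is an illustrative choice, not part of the original source:

```python
# Table 1 transcribed as a lookup structure keyed on CWS symbol;
# values come directly from the table above.
agents = {
    "Cl":  {"name": "chlorine",    "class": "lung injurant", "vapor_density": 2.5},
    "CG":  {"name": "phosgene",    "class": "lung injurant", "vapor_density": 3.5},
    "HS":  {"name": "mustard gas", "class": "vesicant",      "vapor_density": 5.5},
    "M-1": {"name": "lewisite",    "class": "vesicant",      "vapor_density": 7.1},
}

# Every agent is denser than air (vapor density ratio > 1), which is why,
# as the article notes, these gases settled into trenches and low ground.
assert all(a["vapor_density"] > 1 for a in agents.values())

densest = max(agents.values(), key=lambda a: a["vapor_density"])
print(densest["name"])  # lewisite
```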

References

1. On the history of the battle at Ypres and its relationship to military history and the history of science and medicine, see Jeffery K. Smart, “History of Chemical and Biological Warfare: An American Perspective,” in Medical Aspects of Chemical and Biological Warfare, ed. Frederick R. Sidell, Ernest T. Takafuji, and David R. Franz (Washington, DC: Office of the Surgeon General, 1997), 15; L. F. Haber, The Poisonous Cloud: Chemical Warfare in the First World War (New York: Clarendon Press, 1986), 31–32. On the general topic of chemical weapons history in the United States, see Leo P. Brophy, Wyndham D. Miles, and Rexmond C. Cochrane, The Chemical Warfare Service: From Laboratory to Field (Washington, DC: Office of the Chief of Military History, Department of the Army, 1959); Brooks E. Kleber and Dale Birdsell, The Chemical Warfare Service: Chemicals in Combat (Washington, DC: Center for Military History, United States Army, 2003); Leo P. Brophy and George J.B. Fisher, The Chemical Warfare Service: Organizing for War (Washington, DC: Center of Military History, United States Army, 2004). On the cultural, political, and scientific history of chemical warfare in the United States, see Edmund P. Russell III, “ ‘Speaking of Annihilation’: Mobilizing for War Against Human and Insect Enemies, 1914–1945,” Journal of American History 82 (March 1996): 1505–1529; see also by Russell, War and Nature: Fighting Humans and Insects With Chemicals From World War I to Silent Spring (New York, NY: Cambridge University Press, 2001); Hugh Slotten, “Humane Chemistry or Scientific Barbarism? American Responses to World War I Poison Gas, 1915–1930,” Journal of American History 77 (September 1990): 476–498; William L. Sibert, “Chemical Warfare,” Journal of Industrial and Engineering Chemistry 11 (November 1919): 1060–1062; Newton D. Baker, “Chemistry in Warfare,” Journal of Industrial and Engineering Chemistry 11 (September 1919): 921–923.
2. Robert J.T. Joy, “Historical Aspects of Medical Defense Against Chemical Warfare,” in Medical Aspects of Chemical and Biological Warfare, 90.
3. O. S. Watkins, Methodist Report, cited in Amos Fries and C. J. West, Chemical Warfare (New York: McGraw Hill, 1921), 13; also cited in Joy, “Historical Aspects of Medical Defense,” 90.
4. For a very useful and up-to-date introduction to chemical weapons and chemical terrorism, see Jonathan B. Tucker, “Introduction,” in Toxic Terror: Assessing Terrorist Use of Chemical and Biological Weapons, ed. Jonathan B. Tucker (Cambridge, MA: MIT Press, 2000), 3. Tucker further explains the various classifications of chemical agents, which include choking agents that damage lung tissue (e.g., chlorine, phosgene), blood agents that interfere with cellular respiration (e.g., hydrogen cyanide), blister agents that cause severe chemical burns to the skin and lungs (e.g., mustard gas, lewisite), and nerve agents that disrupt nerve-impulse transmission in the central and peripheral nervous systems, causing convulsions and death by respiratory paralysis (e.g., sarin, VX). Tucker, 33.
5. For an introduction to war, medicine, and public health, see War and Public Health, ed. Barry S. Levy and Victor W. Sidel (Washington, DC: American Public Health Association, 2000), especially the following articles: William H. Foege, “Arms and Public Health: A Global Perspective” (3–11); Richard M. Garfield and Alfred I. Neugut, “The Human Consequences of War” (27–38). On the specific topic of chemical weapons, see Allan H. Lockwood, “The Public Health Effects of the Use of Chemical Weapons,” in War and Public Health, 84–97; Challenges in Military Health Care: Perspectives on Health Status and the Provision of Care, ed. Jay Stanley and John D. Blair (New Brunswick, NJ: Transaction Publishers, 1993). On current military–medical approaches to the threat of chemical warfare and viable medical responses to that threat, see Ernest R. Takafuji and Allart B. Kok, “The Chemical Warfare Threat and the Military Healthcare Provider,” in Medical Aspects of Chemical and Biological Warfare, 111–128; Ernest T. Takafuji, Anna Johnson-Winegar, and Russ Zajtchuk, “Medical Challenges in Chemical and Biological Defense for the 21st Century,” in Medical Aspects of Chemical and Biological Warfare, 667–686.
6. Historians continue to debate the exact number of casualties incurred during World War I, because record keeping from that period is incomplete and much disputed. Figures vary widely depending on which sources are used and how the historian is trying to measure them. In the case of Russia and Turkey, no system for tracking either military or civilian deaths existed, so it is impossible to determine them with any accuracy. Historian Modris Eksteins estimates that 9 million died, while historian John Keegan places the number at 10 million and Ian F.W. Beckett at 9 million to 10 million. Still others, like Fritz Haber’s son Ludwig Haber, use a percentage of the European population to estimate the dead. Modris Eksteins, Rites of Spring: The Great War and the Birth of the Modern Age (Boston: Houghton Mifflin, 1989); John Keegan, The First World War (New York: Alfred A. Knopf, 1999); Ian F.W. Beckett, The Great War, 1914–1918 (New York: Longman, 2001); Haber, The Poisonous Cloud, 31–39.
7. On the use of chemical weapons in the ancient world, see Adrienne Mayor, Greek Fire, Poison Arrows, and Scorpion Bombs: Biological and Chemical Warfare in the Ancient World (New York: Overlook Duckworth, 2003). On uses in the 19th century, see Edward M. Spiers, Chemical Warfare (Urbana: University of Illinois Press, 1986), 13–14. See also Joy, “Historical Aspects of Medical Defense,” 88–90.
8. The full text of the 1899 convention is available at http://www.yale.edu/lawweb/avalon/lawofwar/hague02.htm; the full text of the 1907 convention is available at http://www.yale.edu/lawweb/avalon/lawofwar/hague04.htm, both accessed January 15, 2007.
9. Recent interest in the life of Haber includes 2 new books: Dietrich Stoltzenberg, Fritz Haber: Chemist, Nobel Laureate, German, Jew (Philadelphia: Chemical Heritage Press, 2004), and Daniel Charles, Master Mind: The Rise and Fall of Fritz Haber, the Nobel Laureate Who Launched the Age of Chemical Warfare (New York: Ecco, 2005). For a brief intellectual biography of Fritz Haber, see W.A.E. McBryde, “1918 Nobel Laureate Fritz Haber,” in Nobel Laureates in Chemistry, 1901–1992, ed. Laylin K. James (Philadelphia: American Chemical Society and the Chemical Heritage Foundation, 1993), 114–123.
10. The concept of the “technological fix” is embedded in much of the history of technology. The idea that social, economic, and cultural problems can be solved quickly by applying technologies to them is most notably associated with the development of war technologies such as gas weapons, radar, and the gyroscope. However, as historians Thomas P. Hughes and Wiebe E. Bijker have illustrated, technological fixes are also an essential part of electrical systems and more everyday objects such as bicycles and light bulbs. On the technological fix, see Merritt Roe Smith and Leo Marx, Does Technology Drive History? The Dilemma of Technological Determinism (Cambridge, MA: MIT Press, 1996), particularly the introduction. On the relationship between Hughes’s concepts of the technological fix and technological momentum, see Thomas P. Hughes, Networks of Power: Electrification in Western Society 1880–1930 (Baltimore: Johns Hopkins University Press, 1983), 140–174, 201–226. For technological fixes and large systems, see Hughes, Rescuing Prometheus (New York: Pantheon Books, 1998).
11. The mobilization of academic resources in Great Britain to solve the problem of gas warfare reflected the overall commitment of the country at large to prosecuting the war successfully. Scientists at Birmingham, Cambridge, Finsbury Technical College, Imperial College London, and St. Andrews carried out specific military research programs. Because British soldiers were initially unprepared for and unprotected from German gas attacks, defensive measures took first priority. The Army Medical College at Millbrook took the initial lead in developing protective devices, but soon physiology departments at Bedford College, the Lister Institute (including the animal station), Oxford University, the School of Agriculture in Cambridge, and University College London all contributed to the effort. See Haber, The Poisonous Cloud, 106–138.
12. For an expansive analysis of industrialization in World War I, see Beckett, The Great War. On the relationship between science and technology in World War I in the United States, the best work remains Daniel J. Kevles’s The Physicists: The History of a Scientific Community in Modern America (New York: Alfred A. Knopf, 1978); see especially 102–138 and 446–450. See also Daniel J. Kevles, “George Ellery Hale, the First World War, and the Advancement of Science in America,” Isis 59 (1968): 427–437. On the specific topic of research on chemical weapons during World War I, see Daniel P. Jones, “Chemical Warfare Research During World War I: A Model of Cooperative Research,” in Chemistry and Modern Society, ed. John Parascandola and James C. Whorton (Washington, DC: American Chemical Society, 1983). For a general introduction to the topic of technology and war in the United States, see Alex Roland, “Science and War,” Osiris 1 (1985): 261–261; see also by Roland, “Technology and War: A Bibliographic Essay,” in Military Enterprise and Technological Change: Perspectives on the American Experience, ed. Merritt Roe Smith (Cambridge, MA: MIT Press, 1985), 347–379. For a discussion of military innovation exclusive of chemical warfare in the period immediately following World War I, see Williamson Murray, “Innovation: Past and Future,” in Military Innovation in the Interwar Period, ed. Williamson Murray and Allan R. Millett (New York: Cambridge University Press, 1996), 300–328. On other aspects of chemical warfare research during the period, see Sarah Jansen, “Chemical-Warfare Techniques for Insect Control: Insect ‘Pests’ in Germany Before and After World War I,” Endeavour 24 (2000): 28–33.
13. Brophy and Fisher, The Chemical Warfare Service: Organizing for War, 3.
14. The National Research Council report is quoted in Brophy and Fisher, The Chemical Warfare Service, 4.
15. M.W. Ireland, The Medical Department of the United States Army in the World War, Volume XIV: Medical Aspects of Gas Warfare (Washington, DC: Government Printing Office, 1926), 35–36. Ireland went on to list the organizations and facilities where war-related research was conducted; these included “the Bureau of Mines, Pittsburgh, PA; the National Carbon Co., Cleveland, OH; the Forest Products Laboratory, Madison, WI; the University of Chicago; the research laboratory of the American Sheet & Tin Plate Co., Pittsburgh, PA; the Bureau of Chemistry laboratory, Washington, DC; the Yale Medical School laboratory, New Haven, CT; the Massachusetts Institute of Technology, Cambridge, MA; the Mellon Institute, Pittsburgh, PA, and elsewhere. . . . Branch laboratories were organized from time to time at the Catholic University of America, Washington, DC; Johns Hopkins University, Baltimore, MD; Princeton University, Princeton, NJ; National Carbon Co., Cleveland, OH; Nela Park, Cleveland, OH; Harvard University, Cambridge, MA; Yale University, New Haven, CT; Wesleyan University, Middletown, CT; Ohio State University, Columbus, OH; [Bryn Mawr College,] Bryn Mawr, PA; Massachusetts Institute of Technology, Cambridge, MA; Cornell University, Ithaca and New York City, NY; University of Michigan, Ann Arbor, MI; Clark University, Worcester, MA; Worcester Polytechnic Institute, Worcester, MA; University of Wisconsin, Madison, WI; Sprague Institute, Chicago, IL; and Ordnance Proving Ground, Lakehurst, NJ” (35–36). On the history of American research universities, see Roger L. Geiger, To Advance Knowledge: The Growth of American Research Universities in the Twentieth Century, 1900–1940 (New York: Oxford University Press, 1986). For an introduction to the history of chemists and specific scientific research schools in the United States, see John Servos, Physical Chemistry From Ostwald to Pauling: The Making of a Science in America (Princeton, NJ: Princeton University Press, 1990).
See also John Servos, “Research Schools and Their Histories,” Osiris 8 (1993): 3–15; Robert E. Kohler, From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline (Cambridge, MA: Harvard University Press, 1982).
16. Ireland, Medical Department of the United States Army in the World War, 35.
17. James Conant became one of the architects of the academic–military–industrial complex in the United States during the postwar period. During World War II, Conant worked closely with Vannevar Bush in leading the Office of Scientific Research and Development. Both men were instrumental in directing much of the country’s military research during World War II, including the Manhattan Project. For a summary of his research activities during World War I, see James G. Hershberg, James B. Conant: Harvard to Hiroshima and the Making of the Nuclear Age (New York: Alfred A. Knopf, 1993), 35–48. See also James B. Conant, My Several Lives: Memoirs of a Social Inventor (New York: Harper & Row, 1970), especially 41–53.
18. Hugh Slotten, “Humane Chemistry or Scientific Barbarism? American Responses to World War I Poison Gas, 1915–1930,” Journal of American History 77 (September 1990): 485. See also Ireland, Medical Department of the United States Army in the World War, 25.
19. Haber, The Poisonous Cloud, 107–138.
20. Jon Silkin, The War Poems: Wilfred Owen (London: Sinclair-Stevenson, 1994), 24. Owen was killed in action, by bullet and not by gas, on November 4, 1918, at the Sambre–Oise Canal, a week before the armistice. For an introduction to the cultural and literary ramifications of Owen’s war poetry, see Tim Kendall, Modern English War Poetry (New York: Oxford University Press, 2006); Daniel W. Hipp, The Poetry of Shell Shock: Wartime Trauma and Healing in Wilfred Owen, Ivor Gurney and Siegfried Sassoon (Jefferson, NC: McFarland & Co., 2005); and Santanu Das, Touch and Intimacy in First World War Literature (New York: Cambridge University Press, 2005).
21. On the life of Walther Nernst, see W.A.E. McBryde, “Walther Hermann Nernst,” in Nobel Laureates in Chemistry, 1901–1992, ed. Laylin K. James (Philadelphia: American Chemical Society and the Chemical Heritage Foundation, 1993), 125–133.
22. William Moore, Gas Attack! Chemical Warfare 1915–1918 and Afterwards (New York: Hippocrene Books, 1987), 12.
23. Haber, The Poisonous Cloud, 51, 177.
24. Watkins, Methodist Report.
25. Albert Palazzo, Seeking Victory on the Western Front: The British Army and Chemical Warfare in World War I (Lincoln: University of Nebraska Press, 2000), 42.
26. Max Arthur, Last Post (London: Weidenfeld & Nicolson, 2005), 35–36. See also Palazzo, Seeking Victory on the Western Front, 42–43.
27. On gas attacks, see B.C. Goss, “An Artillery Gas Attack,” Journal of Industrial and Engineering Chemistry 11 (September 1919): 829–836; James C. Webster, “The First Gas Regiment,” Journal of Industrial and Engineering Chemistry 11 (July 1919): 621–629; Army War College, Gas Warfare, Part 2: Methods of Defense Against Gas Attacks (Washington, DC: War Department, 1918). For interesting views on gas warfare from earlier periods, see Edward S. Farrow, Gas Warfare (New York: E.P. Dutton & Company, 1920); War Office, Medical Manual of Chemical Warfare (London: His Majesty’s Stationery Office, 1939); Alden H. Waitt, Gas Warfare: The Chemical Weapon, Its Use, and Protection Against It (New York: Duell, Sloan and Pearce, 1942). On the design of gas masks and respirators, see P.W. Carleton, “Anti-Dimming Compositions for Use in the Gas Mask,” Journal of Industrial and Engineering Chemistry 11 (December 1919): 1105–1111; J. Perrot, Max Yablick, and A.C. Fieldner, “New Absorbent for Ammonia Respirators,” Journal of Industrial and Engineering Chemistry 11 (November 1919): 1013–1019; Army War College, Methods of Defense Against Gas Attacks. See also in that series: Army War College, Gas Warfare, Part 1: German Methods of Offense (Washington, DC: War Department, 1918); Army War College, Gas Warfare, Part 3: Methods of Training in Defensive Measures (Washington, DC: War Department, 1918); Army War College, Gas Warfare, Part 4: The Offensive in Gas Warfare, Cloud and Projector Attacks (Washington, DC: War Department, 1918); Palazzo, Seeking Victory on the Western Front, 42–43.
28. Palazzo, Seeking Victory on the Western Front, 43. See also Haber, The Poisonous Cloud, 46.
29. Haber, The Poisonous Cloud, 47.
30. Denis Winter, Death’s Men: Soldiers in the Great War (New York: Penguin Books, 1979), 124; also cited in Joy, “Historical Aspects of Medical Defense,” 92.
31. Arthur, Last Post, 35–36.
32. Joy, “Historical Aspects of Medical Defense,” 97. For information regarding current military procedures for dealing with chemical warfare casualties and chemical warfare defense, see the following in Medical Aspects of Chemical and Biological Warfare: Frederick R. Sidell, Ronald R. Bresell, Robert H. Mosebar, K. Mills McNeill, and Ernest T. Takafuji, “Field Management of Chemical Casualties” (325–336); Frederick R. Sidell, “Triage of Chemical Casualties” (337–350); Charles G. Hurst, “Decontamination” (351–360); Michael R. O’Hern, Thomas R. Dashiell, and Mary Frances Tracy, “Chemical Defense Equipment” (361–397).
33. Haber suggests that the actual figures were significantly higher; see The Poisonous Cloud, 248.
34. Joy, “Historical Aspects of Medical Defense,” 91.
35. The report is quoted in Ireland, Medical Department of the United States Army in the World War, 27.
36. Ibid, 28.
37. Ibid, 28, 33–37.
38. Eksteins, Rites of Spring, 162.
39. Winter, Death’s Men, 121.
40. Quoted in Rexmond C. Cochrane, The Third Division at Château Thierry, July 1918 (Army Chemical Center, MD: US Army Chemical Corps Historical Office, 1959), 91.
41. Joy, “Historical Aspects of Medical Defense,” 94.
42. A 1919 article by B.C. Goss of the Chemical Warfare Service recommended attacking in wind conditions of less than 3 miles per hour at a relative humidity of 40% to 50%. The author noted that having considered “temperature, wind, and humidity conditions, the hours between midnight and daylight are usually the most favorable for a gas attack, and, in addition, surprise is more easily possible at this time.” Successful gas-based artillery barrages were designed to hit “small, definitely located targets known to be occupied [with a] concentrated fire of 2 minutes duration.” B.C. Goss, “An Artillery Gas Attack,” Journal of Industrial and Engineering Chemistry 11 (September 1919): 831.
43. Joy, “Historical Aspects of Medical Defense,” 100.
44. Harry L. Gilchrist and Philip B. Matz, The Residual Effects of Wartime Gases (Washington, DC: Government Printing Office, 1933), 44; also cited in Joy, “Historical Aspects of Medical Defense,” 96.
45. German attacks employing mustard gas were extremely difficult for all Allied troops, especially the newly arrived AEF. Unaccustomed to gas attacks under combat conditions, even with chlorine and phosgene, the AEF moved to the front just as gas warfare entered its most violent and destructive phase. In addition, an earlier decision by American scientific, medical, and military leaders to focus almost exclusively on the gas mask in lieu of a more complete approach to gas defense and treatment may have been ill conceived. A 1926 US Army medical history of gas warfare noted that the “Medical Department of the United States Army with respect to gas warfare were [sic] concerned with furnishing gas masks and other prophylactic apparatus for the Army, rather than with preparation for the care and treatment of gas casualties.” Ireland, Medical Department of the United States Army in the World War, 27.
46. August M. Prentiss, Chemicals in Warfare (New York: McGraw-Hill Book Company, 1937), 579. On the environmental consequences of war, see also Richard P. Tucker and Edmund Russell, eds., Natural Enemy, Natural Ally: Toward an Environmental History of War (Corvallis, OR: Oregon State University Press, 2004).
47. Joy, “Historical Aspects of Medical Defense,” 98. On the medical aspects of vesicants in general and mustard gas in particular, see Frederick R. Sidell, John S. Urbanetti, William J. Smith, and Charles G. Hurst, “Vesicants,” in Medical Aspects of Chemical and Biological Warfare, 197–228; Frederick R. Sidell and Charles G. Hurst, “Long-Term Health Effects of Nerve Agents and Mustard,” in Medical Aspects of Chemical and Biological Warfare, 229–246.
48. Joy, “Historical Aspects of Medical Defense,” 101.
49. Arthur, Last Post, 83.
50. Joy, “Historical Aspects of Medical Defense,” 98.
51. For a full history of the development of lewisite, see Joel A. Vilensky, Dew of Death: The Story of Lewisite, America’s World War I Weapon of Mass Destruction (Bloomington: Indiana University Press, 2005).
52. These figures are based on Harry L. Gilchrist, A Comparative Study of World War Casualties From Gas and Other Weapons (Edgewood Arsenal, MD: Chemical Warfare School, 1928), 21, Table 7.
53. Spiers, Chemical Warfare, 13. On the general history of chemical manufacturing and chemical engineering in the United States, see John Kenly Smith Jr, “The Evolution of the Chemical Industry: A Technological Perspective,” in Chemical Sciences in the Modern World, ed. Seymour H. Mauskopf (Philadelphia: University of Pennsylvania Press, 1993). For a view of chemical engineering activity during World War I, see Charles O. Brown, “US Chemical Plant for Manufacturing Sodium Cyanide, Saltville, Virginia,” Journal of Industrial and Engineering Chemistry 11 (November 1919): 1010–1013; James F. Norris, “The Manufacture of War Gases in Germany,” Journal of Industrial and Engineering Chemistry 11 (September 1919): 817–829; Edward H. Hempel, The Economics of Chemical Industries (New York: John Wiley & Sons, 1939), 28–31.
54. T. Anthony Ryan, Christien Ryan, Elaine A. Seddon, and Kenneth R. Seddon, Phosgene and Related Carbonyl Halides (New York: Elsevier, 1996), 12–13.
55. Ibid, 15–16.
56. Norris, “Manufacture of War Gases in Germany,” 823–824. Norris was one of a group of Allied officers who toured the occupied zones along the Rhine after the war to gather intelligence (both military and industrial) on the state of the German chemical industry. His report on the Bayer Company plant at Höchst, which manufactured perchloromethylformate (or superpalite), is intriguing, as it offers a window into one of the best designed and “safest” chemical production facilities in Germany. Norris writes that “In the early days they had several small explosions which were not serious. They had a large number of cases of poisoning among the workmen, which were also not serious. All the workers were supplied with gas masks and put them on in case of emergency only. The workers suffered from chest and heart affections. Two or three died during the time [sic] though our informant would not admit this was necessarily the cause of death. They sometimes had as many as one-third of the workers absent owing to sickness. The workers were not changed as they wished to retain those who could stand it. Only men were employed in the plant.” Norris, 824.
57. Ibid, 824. See also Haber, The Poisonous Cloud, 257–258. The German military relied on the methods of Victor Meyer involving the preparation of thioglycol by the Badische Company, which was later converted to mustard gas at the Bayer Company plant at Höchst. Norris, 821–822.
58. Haber, The Poisonous Cloud, 251.
59. Ibid.
60. Ibid, 252.
61. Norris, “Manufacture of War Gases in Germany,” 824. See also Haber, The Poisonous Cloud, 257–258.
62. Haber, The Poisonous Cloud, 250–251.
63. “At first one man reported sick for every nine shells filled! . . . [B]etween 21 June and 7 December 1918, 1,213 casualties were recorded, the weekly rate occasionally reaching 100 per cent of the labour force. . . . Avonmouth and Chittening between them (there are no figures for Banbury) reported altogether 2,600 gas cases.” Haber, The Poisonous Cloud, 251. See also Industrial Fatigue Research Board, The Output of Women Workers in Relation to Hours of Work in Shell-Making, Report No. 2 (London: His Majesty’s Stationery Office, 1919).
64. Ireland, Medical Department of the United States Army in the World War, 32–33. The surgeon general’s responsibilities involved far more than local plant supervision. They included a research service responsible for work on the “chronic effects of low-concentration war gases, protective devices and therapy”; a field service responsible for work on the “selection, appointment, and training of contract surgeons and medical officers, installation of emergency ward[s], dispensary and first-aid equipment, and inspection”; and an education service that “collect[ed] reports, case histories, and information, prepar[ed] bulletins of information for medical officers and contract surgeons, assign[ed] problems for solutions in laboratories, and develop[ed] special course[s] of instruction at American University for medical officers, to be used later in factory instruction and assigned to the large plants for duty.” Ibid, 33.
65. Ibid, 25.
66. Haber, The Poisonous Cloud, 252.
67. Arthur, Last Post, 35–36.
68. The text of the Geneva Protocol is available from the US Department of State at http://www.state.gov/t/ac/trt/4784.htm#treaty, accessed January 17, 2007. For a detailed examination of these international conferences, see Spiers, Chemical Warfare, 41–47; Victor A. Utgoff, The Challenge of Chemical Weapons: An American Perspective (New York: Macmillan, 1990), 14–21.
69. For an introduction to the US strategic bombing campaigns during World War II and the public health ramifications of large-scale urban destruction, see Michael S. Sherry, The Rise of American Air Power: The Creation of Armageddon (New Haven, CT: Yale University Press, 1987); Tami Davis Biddle, “British and American Approaches to Strategic Bombing: Their Origins and Implementation in the World War II Combined Bomber Offensive,” in Airpower: Theory and Practice, ed. John Gooch (London, England: Frank Cass & Co, 1995), 91–144; see also Biddle, Rhetoric and Reality in Air Warfare (Princeton, NJ: Princeton University Press, 2002); Hermann Knell, To Destroy a City: Strategic Bombing and Its Human Consequences in World War II (Cambridge, MA: Da Capo Press, 2003); Alvin D. Coox, “Strategic Bombing in the Pacific, 1942–1945,” in Case Studies in Strategic Bombardment, ed. R. Cargill Hall (Washington, DC: Air Force History and Museums Program, 1998), 253–382; Kenneth Hewitt, “Place Annihilation: Area Bombing and the Fate of Urban Places,” Annals of the Association of American Geographers 73 (1983): 257–284. Finally, it is important to keep in mind that the US military’s invasion plans for Japan (Operation Downfall), originally scheduled to commence in November 1945, included provisions for the tactical and strategic use of atomic, biological, and chemical weapons against various military and civilian targets.
70. Throughout the 20th century, the proportion of war deaths suffered by civilians increased at a startling rate: civilians accounted for only 14% of deaths during World War I; by contrast, the figure for World War II was 67%; by the 1980s it was 75%, and in the 1990s it reached 90%. Garfield and Neugut, “The Human Consequences of War,” Table 3–3, 31.
71. On the nature of biological weapons programs during World War II, see Gerard J. Fitzgerald, “Babies, Barriers, and Bacteriological Engineers: Biological Warfare Research at LOBUND, 1930–1952,” Technology and Culture, in press; Edward M. Eitzen, “Historical Overview of Biological Warfare,” in Medical Aspects of Chemical and Biological Warfare, 415–423. For an introduction to biological and chemical warfare research activities after World War II, see Gradon Carter and Brian Balmer, “Chemical and Biological Warfare and Defence, 1945–1990,” in Cold War, Hot Science: Applied Research in Britain’s Defence Laboratories, 1945–1990, ed. Robert Bud and Philip Gummett (Amsterdam, The Netherlands: Harwood Academic Publishers, 1999), 295–338. See also G.B. Carter and G.S. Pearson, “North Atlantic Chemical and Biological Research Collaboration: 1916–1995,” Journal of Strategic Studies 19 (March 1996): 74–103; Mark Wheelis, Lajos Rózsa, and Malcolm Dando, Deadly Cultures: Biological Weapons Since 1945 (Cambridge, MA: Harvard University Press, 2006).
72. For a general overview of chemical weapons development in the United States, see Smart, “History of Chemical and Biological Warfare: An American Perspective,” in Medical Aspects of Chemical and Biological Warfare. On the postwar development of nerve agents, see Frederick R. Sidell, “Nerve Agents,” in Medical Aspects of Chemical and Biological Warfare, 129–179; Michael A. Dunn, Brennie E. Hackley Jr, and Frederick R. Sidell, “Pretreatment for Nerve Agent Exposure,” in Medical Aspects of Chemical and Biological Warfare, 181–196; Jonathan B. Tucker, War of Nerves: Chemical Warfare From World War I to Al-Qaeda (New York: Pantheon, 2006), 158. For an analysis of nerve agent development during the cold war, see Bradley S. Davis, “Transitional Perspectives on Conventional, Chemical and Biological Weapons Production,” in United States Post–Cold War Defense Interests: A Review of the First Decade, ed. Karl P. Magyar (New York: Palgrave Macmillan, 2004), 136. For an examination of contemporary debates over chemical weapons destruction in relation to public health, see Michael Greenberg, “Public Health, Law, and Local Control: Destruction of the US Chemical Weapons Stockpile,” American Journal of Public Health 93 (2003): 1222–1226, and Smart, “History of Chemical and Biological Warfare: An American Perspective.”
73. For fascinating views on the future of gas warfare as seen by contemporary observers during the interwar period, see Edward S. Farrow, Gas Warfare (New York: E.P. Dutton & Company, 1920), 166; War Office, “Aspects in Which the Use of Gas Is Likely to Differ in a Future War,” in Medical Manual of Chemical Warfare (London: His Majesty’s Stationery Office, 1939).
74. James B. Conant, My Several Lives: Memoirs of a Social Inventor (New York: Harper & Row, 1970), 49.
75. Armin Hermann, The New Physics: The Route Into the Atomic Age (Munich: Heinz Moos Verlag, 1979), 47, 73.

Gerard J. Fitzgerald, PhD (postdoctoral researcher, New York University, New York, NY), “Chemical Warfare and Medical Response During World War I,” American Journal of Public Health 98, no. 4 (April 1, 2008): 611–625.

https://doi.org/10.2105/AJPH.2007.111930

PMID: 18356568