Diseases

In view of the diverse characteristics of the main diseases of South Asia and of the various factors affecting them, it will be helpful to consider the recent history of the most important of them in turn.

Smallpox

Smallpox was held by nineteenth-century medical opinion to be "the scourge of India," responsible for more deaths than all other diseases combined. Endemic throughout much of the region, smallpox returned in epidemic strength every 5 to 7 years. So prevalent was the disease between February and May that it was known in eastern India as basanta roga, the "spring disease." With the onset of the monsoon season, however, smallpox abated, reaching its lowest ebb in October and November. Climate might provide a partial explanation for this marked seasonality, but social and cultural factors were influential, too. In India the dry spring months, a slack period in the agricultural year, were traditionally a time for congregation and travel, for religious fairs, pilgrimages, and marriages, all activities that provided the close social contact needed for transmission of the Variola virus.

In driving large numbers of destitute and undernourished villagers to seek food in relief camps and cities, famines also created conditions favorable to the dissemination of the disease. So, too, did the dislocation caused by warfare. Just as Maratha invasions may have contributed to an upsurge in smallpox mortality in eighteenth-century Bengal, so did the displacement of millions of refugees in the same area during Partition in 1947. Bangladesh's war of independence in 1971-2 also occasioned major outbreaks of the disease (Chen 1973).

In the absence of reliable statistics it is impossible to gauge the full extent of smallpox mortality before the late nineteenth century. One demonstration of the disease's long-standing importance was the reverence paid throughout South Asia to the Hindu goddess Sitala (or her counterparts) as the deity credited with the power to either cause or withhold the disease. A nineteenth-century writer further attested to the prevalence of smallpox when he speculated that as many as 95 percent of the 9 million inhabitants of the north Indian Doab had experienced the disease. So common was it, he claimed, that "it has become quite a saying among the agricultural and even wealthier classes never to count children as permanent members of the family . . . until they have been attacked with and recovered from smallpox" (Pringle 1869).

As these remarks suggest, smallpox was mainly a childhood disease, the chronology of its epidemics being largely determined by the buildup of a pool of susceptible infants born since the previous outbreak. Between 60 and 70 percent of fatalities occurred among children under 10 years of age, and of these half died before they were even one year old. One consequence of the spread of childhood vaccination, however, was to increase the proportion of adult victims.
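
The 5-to-7-year rhythm noted earlier can be read off from this demographic logic. As a crude heuristic (the symbols here are illustrative, not figures from the historical record), suppose each epidemic largely exhausts the susceptible pool, that births add roughly B new susceptibles a year, and that an outbreak can propagate again only once the pool passes some threshold S*. The interval between epidemics is then approximately

\[ T \;\approx\; \frac{S^{\ast}}{B}, \]

so a population whose birth cohort rebuilds the threshold within a handful of years will see epidemics recur on just such a cycle.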

Smallpox mortality probably reached a peak in British India in the middle and later decades of the nineteenth century before entering a period of sustained decline. From 1.4 million deaths recorded for the decade 1868-77, and 1.5 million for 1878-87, smallpox mortality fell to barely 1 million in 1888-97 and 800,000 in 1898-1907. From 0.8 smallpox deaths per 1,000 inhabitants in 1878-87, the ratio fell to 0.4 in 1908-17 and down to 0.2 in 1937-43, before the Bengal famine and Partition fueled a partial resurgence. Although in epidemic years in the 1870s the number of smallpox deaths soared as high as 2 per 1,000, by the 1930s there were fewer than 0.5 fatalities per 1,000 of the population (James 1909; Hopkins 1983).
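
These absolute totals and death rates hang together arithmetically. As a rough consistency check (assuming a registration population on the order of 190 million, a figure in line with contemporary censuses but not given above), 1.5 million deaths spread over the decade 1878-87 yield

\[ \frac{1.5 \times 10^{6}\ \text{deaths}}{10\ \text{yr} \times 1.9 \times 10^{8}\ \text{persons}} \;\approx\; 0.8\ \text{per 1,000 per year}, \]

which matches the quoted ratio.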

Significantly, smallpox mortality began to decline even before the worst famines of the period were over, and it is unclear exactly when medical intervention began to have an effect. In Sri Lanka, an island with a small population, vaccination may have eliminated endemic smallpox early in the nineteenth century; but because of constant reinfection from India, the disease was not finally eradicated there until 1972.

Although vaccination was introduced into India in 1802-3, its impact there at first was slight. Imported lymph was unreliable, and local sources of supply were not adequately developed until late in the century. The arm-to-arm method of vaccination was extremely unpopular, and the bovine origin of vaccinia provoked strong Hindu hostility. Vaccination had, moreover, a popular and well-established rival in variolation (smallpox inoculation), which was considered more effective and culturally acceptable. Faced with such opposition, the colonial regime was disinclined to commit substantial resources to vaccination, and during the nineteenth century it was practiced mainly among Europeans and those Indians, especially of the middle classes, most closely associated with them. Compulsory vaccination and the outlawing of variolation began in Bombay city in 1877 and were followed by the Government of India's Vaccination Act in 1880. However, the implementation of these acts was piecemeal and deficient. As late as 1945, primary vaccination was compulsory in only 83 percent of towns in British India and a mere 47 percent of rural circles. Compulsory revaccination was even rarer (Arnold 1988). Nevertheless, the evidence suggests that vaccination did contribute to the relatively low levels of smallpox mortality achieved by the early twentieth century. The number of vaccinations in India rose from approximately 4.5 million a year in the 1870s to twice that number in the 1900s and exceeded 40 million (nearly three-quarters of them revaccinations) by 1950. Given the heavy death toll smallpox had formerly levied among those under 10 years of age, vaccination significantly reduced mortality among infants and children.

Existing methods of smallpox control were, however, inadequate to secure the final eradication of the disease. In 1962, the year in which India launched its eradication program, the country reported 55,595 cases of smallpox and 15,048 deaths from the disease. There were a further 4,094 cases and 1,189 deaths in Pakistan (mainly East Pakistan). Five years later, the World Health Organization began its own eradication campaign, which concentrated upon early recognition and reporting of the disease, the isolation of cases, and the vaccination of contacts. The Bangladesh war of 1971-2 delayed eradication and enabled a fresh epidemic to flare up. In 1974 India again suffered a severe outbreak (showing that Variola major had lost none of its former destructiveness), with 188,003 cases and 31,262 deaths. South Asia was finally freed of smallpox in 1975.

Cholera

Like smallpox, cholera has had a long history in South Asia. Reliable accounts of the disease date back to at least the sixteenth century, and it was reported from several parts of India during the late eighteenth century (MacPherson 1872). But the cholera epidemic that erupted in the Jessore district of Bengal in August of 1817 assumed an unprecedented malevolence, and in barely 2 years it penetrated to almost every part of the subcontinent, including Sri Lanka, before setting out shortly afterward on the first of its "global peregrinations." Epidemics of cholera repeatedly swept through South Asia during the nineteenth century, and the disease was undoubtedly a leading cause of high mortality and (relatively) low population growth in India before 1914. Perhaps because of the absence of any existing immunity, mortality rates were particularly high during the 1817-19 epidemic (though not as high as an alarmed Europe was willing to believe).

The incidence of epidemic cholera in India can be related to a number of factors: The emergence of Calcutta during the eighteenth and nineteenth centuries as the most populous city of British India and an important regional center for trade and administration favored the wide dissemination of the disease from its endemic home in the Ganges-Brahmaputra delta. As elsewhere in the world, cholera often moved in the wake of advancing armies. Soldiers unwittingly helped spread the disease in northern and central India in 1817-18 and again during the Mutiny campaigns of 1857-8. The seasonal migration of laborers and plantation workers may also have had a comparable effect. They passed through endemic areas, lived in primitive accommodation at the sites of their employment, and drew water from contaminated sources.

Of even greater significance was the close association, much remarked upon during the nineteenth century, between epidemic cholera and Hindu pilgrimage. Pilgrims traveled long distances, mainly on foot (before the railway age); they were ill-clothed and ill-fed, and crowded in their thousands into insanitary towns or into makeshift camps at the main fairs and festivals. Cholera not only spread easily and rapidly among the pilgrims themselves but also was carried far afield as they dispersed to their homes. Several of the largest epidemics of the nineteenth century were traced to the festival of Rath Jatra, held annually at Puri in Orissa, close to the Bengal cholera zone, and to the bathing festivals held every 12 years, the Kumbh melas at Hardwar and Allahabad on the Ganges (Pollitzer 1959).

As a mainly waterborne disease, cholera was endemic in low-lying areas, such as lower Bengal, where the cholera vibrio flourished in village reservoirs ("tanks") and irrigation channels that were also used for drinking water. But epidemic cholera also occurred in conjunction with famine. Although major epidemics could occur (as in 1817-19) in the absence of famine conditions, the scale of cholera morbidity and mortality was greatly magnified by the migrations of the famine-stricken, by their lack of physical resistance to the disease, and by their dependence upon scant sources of water and food that quickly became contaminated.

Whereas in times of good harvests cholera tended to retreat to the low-lying areas where it was endemic, many of the worst epidemics between the 1860s and the early 1900s bore some relation to concurrent famine. The disease thus exhibited even more violent fluctuations than smallpox, dying away in some years, only to return in devastating strength once or twice in a decade. In 1874, for example, cholera mortality in India sank to 0.16 per 1,000 of the population; only 3 years later (with famine widespread), it reached 3.39 per thousand, before ebbing away again to 0.64 in 1880.

In 1900 cholera soared to its highest recorded peak with 797,222 deaths in British India, a ratio of 3.70 per 1,000: In this year of sickness and starvation, cholera accounted for a tenth of all mortality (Rogers 1928). But, having hovered at around 4 million deaths each decade between 1889 and 1919, cholera mortality finally began to fall, dropping to 2.2 million in 1920-9 and 1.7 million in 1930-9. There was a partial resurgence during the 1940s (the decade of the Bengal famine and Partition) to just over 2 million deaths, with 500,000 cholera deaths in 1943 alone, nearly 50 percent of them in Bengal. From a death rate of 0.74 per 1,000 in 1925-47, and 0.17 in 1948-63, the ratio continued to fall to 0.0017 in 1964-8. Epidemic cholera did not disappear entirely, however. In 1974, for instance, there were 30,997 reported cases in India with 2,189 deaths. In the same year Bangladesh suffered 5,614 cases and 177 deaths, while Sri Lanka, long the recipient of epidemics originating on the Indian mainland, had 4,578 cases and 343 deaths.

Although various theories were advanced during the nineteenth century to explain cholera's origins and spread, it was only in 1883 that Robert Koch finally identified cholera vibrios in a Calcutta tank. Even without this knowledge, sanitary measures (as taken in Europe and North America) had had some earlier effect, helping in particular to reduce mortality among European soldiers who, until the 1870s, suffered a high proportion of casualties from the disease. Improved sewers and filtered water lessened cholera mortality in several major cities (including Calcutta from 1869); the long-term decline in the disease has also been identified with the creation of public health departments in the provinces from the early 1920s. But the latter were poorly funded, and in India, with a predominantly rural society and a large and growing slum population, sanitary reform had a more limited impact than in the West.

Cholera inoculation proved a less effective form of prophylaxis than smallpox vaccination, and the connection between cholera and Hindu India's pilgrimages and sacred sites made the colonial authorities wary for a long time of provoking a religious backlash. But, for all this, anticholera inoculation is likely to have had some success in curbing the explosions of mortality formerly associated with pilgrimages and melas. For many years a voluntary practice, inoculation for pilgrims became compulsory in 1954 in time for the Allahabad Kumbh mela, when nearly a quarter of a million pilgrims were inoculated. Combined with intensive medical surveillance and prompt reporting, inoculation contributed to the decline of epidemic mortality, although, again, the absence of major famines after 1908 (apart from that of 1943-4 in Bengal) must also have been a contributory factor.

In addition, the virulence of the disease may have been on the wane by 1914 (or human resistance may have been increasing). The El Tor biotype, which invaded the region from Southeast Asia in 1964-5, proved less fatal than the "classical" form it largely displaced. Immunity acquired early in life, rather than inoculation or improved sanitation, possibly contributed most to cholera's long-term decline. But, despite this, cholera remains a recurrent threat to human health in South Asia.

Other Enteric Diseases

The reasons for the persistence of cholera (insanitary living conditions and contaminated water supplies) are also reasons for the continuing importance of other enteric diseases. Dysentery and diarrhea have long ranked among the major causes of sickness and death in South Asia and, like cholera, greatly swelled mortality during times of famine. Alexander Porter (1889), in his pioneering study, reckoned dysentery and diarrhea the chief cause of death among famine sufferers in Madras in 1877-8. In the high wave of mortality at the turn of the century, one death in every seven in British India was attributed to dysentery and diarrhea, placing these diseases in third place behind malaria and plague as major killers. But they also accounted for a great deal of mortality even in nonfamine times, particularly among children. In 1962 in India they were responsible for 179,714 deaths (equivalent to 0.4 per 1,000 population). In 1974, along with 846,858 cases of gastroenteritis and 333,687 of typhoid (with 3,623 and 924 deaths, respectively), dysentery, with 4.5 million cases and 2,182 deaths, stood high among the most commonly reported illnesses.

Malaria

Although malaria has undoubtedly been present in South Asia for a very long time, it was not recorded separately from other "fevers" until late in the nineteenth century, and so, until recent times, has been even more difficult to trace than smallpox and cholera. But, once identified as a distinct disease, malaria was soon recognized as a primary cause of much ill health and death. Almost a fifth of the mortality that occurred between the 1890s and the 1920s (amounting to some 20 million deaths) was attributed to malaria. As plague, smallpox, and cholera abated, so malaria gained prominence as the greatest single threat to health in South Asia. J. A. Sinton estimated in the 1930s that malaria was responsible for 3 or 4 times as many deaths as the 3 other diseases combined. Of 6.3 million deaths in British India between 1924 and 1933, at least 1 million, he believed, were directly attributable to malaria, while millions more, weakened by the disease, fell ready prey to other ailments and afflictions. In addition, the disease was a frequent cause of abortions and stillbirths.

As elsewhere, malaria in South Asia has had a close relationship with the environment and with environmental change. During the nineteenth century, the expansion of irrigation canals and the building of railway embankments and other major construction works that interfered with the natural lines of drainage or left patches of stagnant water created human-made environments in which malaria-bearing Anopheles mosquitoes could breed (Klein 1972). South Asia thus failed to experience the beneficial side-effects of the draining of marshlands and swamps that contributed so significantly to malaria's decline in Europe. The extension of irrigated cultivation and the dense settlement it commonly sustained in South Asia, combined with the development of new networks of labor migration and mobility, have also tended to create conditions favorable for human transmission of the disease.

Until recently, malaria infestation largely prevented agricultural colonization of areas like the Terai in the Himalayan foothills and the hill tracts along the Andhra-Orissa border. The partial eradication of malaria in the 1950s and 1960s facilitated the penetration and settlement of these areas, but its return severely affected immigrant farmers (e.g., in the Chittagong Hill tracts of Bangladesh) with no inherited or acquired immunity to the disease.

Like the other diseases so far discussed, malaria has some connections with human hunger. S. R. Christophers (1911) showed a close correlation between peaks of malaria mortality and morbidity in the Punjab between 1869 and 1908, and periods of high food prices, a basic index of widespread poverty and incipient famine. But in this instance the decline of famine was apparently not matched by any corresponding downturn in malaria morbidity and death.

Around 1950 malaria was still killing 500 out of every 100,000 of the population in South Asia, with India alone suffering 800,000 deaths a year. Although quinine prophylaxis and mosquito nets and screens had provided limited protection to a small minority (mainly Europeans), the advent of DDT from the end of World War II made a comprehensive assault on malaria possible for the first time. India launched a National Malaria Eradication Campaign in 1958, and such was the apparent success of DDT spraying there and in Sri Lanka that Davis (1956) was prompted to write of the "amazing decline" in Third World mortality that modern medicine made possible. But the triumph soon proved short-lived. Increasing anopheline resistance to DDT and the diminishing effectiveness of the drug chloroquine resulted in a startling recrudescence of malaria during the 1960s and 1970s. From a low point of 50,000 cases in India in 1961, the number rose to 1.4 million in 1972 and 1.9 million in 1973. It then shot up to 3.2 million in 1974 and 5.2 million in 1975. Two years later, in 1977, an estimated 30 million people in India were suffering from malaria.

Plague

Although not unknown to the subcontinent in previous centuries, plague arrived in epidemic force in 1896, when the disease broke out in Bombay city and then gradually spread to other parts of western and northern India. By 1907 close to 2 million plague deaths had been recorded, and at its peak in 1904-5 the number of deaths reached 1.3 million in a single year. By 1907-8 plague accounted for more than 14 percent of all mortality in British India, and its heavy toll swelled India's great mortality between the 1890s and the 1920s. In this third plague pandemic, India was exceptionally hard-hit. Of 13.2 million deaths recorded worldwide between 1894 and 1938, 12.5 million occurred in India: Nearly half of the deaths in India fell in the period from 1898 to 1908, with a further third between 1909 and 1918 (Hirst 1953). At first mainly a disease of the cities, plague moved steadily into the countryside. The Punjab was worst affected, with nearly 3 million fatalities between 1901 and 1921; in 1906-7 alone there were 675,307 deaths in the province, equivalent to 27.3 per 1,000 inhabitants.
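
The Punjab figures are internally consistent. Dividing the province's deaths by the stated rate implies a population of roughly

\[ \frac{675{,}307}{27.3 \times 10^{-3}} \;\approx\; 2.5 \times 10^{7}, \]

about 25 million, in keeping with the Punjab's census population of the period (the population figure itself is inferred here, not stated in the text).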

But for all the severity of the disease, plague in modern India never took as heavy a toll of human life as in Europe during the Black Death of 1347-9, partly because the deadly pneumonic form of the disease was absent. Several areas escaped largely unscathed. Northern and western India bore the brunt of the epidemic, whereas Bengal, the south, and Sri Lanka were far less affected. One possible explanation for this, favored by L. Fabian Hirst, was that the type of flea found on the rats in western and northern areas, Xenopsylla cheopis, was a more efficient vector of the plague bacillus than Xenopsylla astia, which was more common in southern regions.

At the start of the epidemic, in part responding to international pressure and the threat of commercial sanctions by other European powers, the British adopted measures of far greater severity than previously employed against cholera and smallpox. Suspects and victims of the disease were hospitalized or put in segregation camps; house searches were made to discover concealed cases and corpses; many thousands of travelers were examined, and walls and roofs pulled down to allow sunlight and fresh air into dark interiors. At this early stage the role of rat fleas in the transmission of the bacillus was not understood, and the assumption was that plague was a disease of "filth" or an "acute infectious fever" spread through close human contact.

But medical intervention on such a scale proved singularly ineffective. It failed to extirpate the disease, which continued to spread (in part through people fleeing from such drastic measures), and it led to evasion and even open defiance in several cities. In consequence, the British administration soon settled for a less interventionist policy, turning down W. M. Haffkine's suggestion of compulsory inoculation with the antiplague serum he had recently developed. Reliance was placed instead upon voluntary segregation and hospitalization, greater use of indigenous medical practitioners, and latterly (once the rat flea connection had been established) on the trapping and poisoning of rats.

These measures may have contributed to the decline of the disease, already marked by 1917; so, in the longer term, may the extensive use made of voluntary inoculation. It is possible that just as the prevalence of famine in the early years of the epidemic aided the spread of plague, so the later absence of famine contributed indirectly to its decline. The bulk movement of grain (and with it X. cheopis) in times of food shortages may have facilitated the spread of the disease: There is some evidence for this connection in its partial resurgence as a result of the massive movements of grain triggered by the Bengal famine of 1943-4. Another possibility is that whereas human susceptibility remained unchanged, rats developed a growing immunity to the plague bacillus. For whatever reason, plague was in decline in India from the early 1920s and, since 1950, has been rare, confined to a few isolated pockets, mainly in the southern Deccan.

Influenza

Influenza had a much shorter but more devastating career than did plague. As with plague, India was one of the areas of the globe hardest hit by the influenza pandemic of 1918-19. In the space of barely 3 months it lost between 12 million and 20 million lives, equivalent to nearly 5 percent of the total population of British India, and up to twice as many as had fallen victim to plague during the whole of the previous decade. As with plague, influenza entered India through international ports like Bombay, but then moved with lightning speed from one city to the next along the main lines of transport and communication. Southern and eastern India were again the areas least affected (Davis 1951; Mills 1986).
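
A quick calculation reconciles the range of estimates with the percentage cited. Taking the midpoint of 16 million deaths, and assuming a British Indian population in 1918 on the order of 315 million (a census-era figure not given above),

\[ \frac{16 \times 10^{6}}{3.15 \times 10^{8}} \;\approx\; 0.05, \]

or close to the "nearly 5 percent" quoted.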

The greatest number of deaths occurred during October, November, and December 1918, with the major casualties being young adults between the ages of 20 and 40. Connections between influenza and famine have often been denied, but the epidemic struck at a time of high prices and food shortages in several parts of India and caused disproportionate mortality among the poorer classes; grounds therefore exist for arguing that India suffered so severely from the epidemic precisely because so large a part of its population was at the time hungry or malnourished (Klein 1973; Mills 1986). As elsewhere, influenza died away almost as quickly as it had come, and though it has remained a common affliction in the region, it has been responsible for few deaths. In 1974, for example, 1,700,000 cases of influenza were reported for India, but there were only 87 deaths.

Tuberculosis

Tuberculosis was not a disease that attracted much medical attention before 1945, but it has increased in importance as other diseases have declined and as crowding, especially in the slums of South Asian cities, has grown. Already at the start of the century, TB and the other respiratory diseases accounted for at least one-seventh of all deaths. Fully 342,391 deaths from the disease were reported in India in 1961-2 (equivalent to 0.85 per 1,000), and between 100 and 150 out of every 100,000 city-dwellers died from this cause. By 1981 an estimated 1.5 percent of the total population of India was affected, placing TB among the leaders of India's many competing causes of death.

Miscellaneous

Several other diseases warrant mention, however briefly. Tetanus in India has been particularly associated with childbirth through the use of unsterilized implements to cut the umbilical cord. Cases numbered 32,825 in India in 1974, with 4,400 reported deaths. Hepatitis was widespread, too, with 126,071 cases and just over 1,000 deaths in 1974. In that year there were also 206,386 cases of whooping cough (300 deaths) and 333,697 cases of typhoid (924 deaths), placing these, along with malaria, dysentery, gastroenteritis, and influenza, among the main identifiable causes of morbidity.

Leprosy remains strongly entrenched in South Asia. Whereas the census of 1881 counted only 131,618 lepers in British India (surely a gross undernumeration), the census of India 90 years later, in 1971, put the figure at 3.2 million. Some experts, however, estimate at least 4 million cases of leprosy in the country. Not only is this a high figure in national terms, with 5 to 6 per 10,000 of the population affected, but it also makes India home to one-third of the world's leper population. Despite the availability of modern drug therapies, the socioeconomic conditions in which leprosy is transmitted (and perpetuated) have so far prevented its effective eradication.
