Monday, December 27, 2010

The Sumerian Worldview

The Sumerian civilization comprised some twenty temple-centred city-states that arose during the 4th millennium BCE in the fertile plain between the Tigris and Euphrates rivers south of present-day Baghdad. Although increasingly linked over the centuries by trade along the canals and rivers of southern Mesopotamia, each city was independently ruled by a priest-king and council of elders. By the end of the 4th millennium, several had populations exceeding 10,000, and the city of Uruk became the first in the world to surpass 50,000 inhabitants. Together they gave birth to the wheel, written language, mathematics (including geometry and algebra), astronomy, kiln-fired pottery, large-scale irrigation, monumental architecture, urban planning, the first codified legal system, epic literature, and the first schools, which flourished under the auspices of each city-state’s primary temple.

These city-states, which had evolved from earlier Neolithic villages, continued to evolve over the 2000 years of Sumerian history. Each was surrounded by a belt of agricultural land that contained many small hamlets connected by a network of roads, canals, and irrigation ditches. Typically situated on a major waterway, each city was linked by a main canal to its own harbour. The city itself was divided into commercial, civic, and residential spaces, and the residential areas were further divided according to the occupants’ work specialties and social status. Houses – about 90 square metres or 950 square feet in size – were designed so that rooms opened only onto a private inner courtyard, thereby maintaining a clear separation of public and private spaces.

At the centre of each city was a high temple around which the city had grown, and which itself had grown from a small one-room structure in Neolithic times into a successively larger and more elaborate complex that might occupy several acres. As cathedrals today give architectural expression to key components of the Christian worldview, with their high altars, cross-shaped nave and transepts, and steeples pointing heavenward, so these ancient temples gave expression to the Sumerian cosmology. The world was seen as a disc of land surrounded by a salt-water ocean which in turn floated on a primeval sea of fresh water. Above was a giant dome-shaped firmament within which the fixed motions of the heavenly bodies regulated time. Uniting this three-layered cosmos was the Cosmic Mountain or axis mundi, represented by the temple which was home to the patron god of the city and the place of meeting between gods and men. Doors along the long axis of the rectangular temple were entry points for the gods; doors along the short axis provided entry points for men. At the intersection of the two axes, at the centre of the temple, was a table for receiving offerings. Here one could turn 90 degrees to face a statue of the city’s patron god at the far end of the central hall.

The temple was initially built on a raised terrace of rammed earth, later on a higher platform of adobe brick, and still later on a much higher stepped pyramid or ziggurat (which may have inspired the Biblical story of the Tower of Babel). This raised platform represented the primordial land that, in creation, had emerged from the underlying sea. On it the temple was oriented so that its four corners pointed to the cardinal directions of the compass – the directions in which the four rivers were believed to flow from the Cosmic Mountain to water the earth. The roof of the temple served as the observatory from which priestly astronomers kept track of time. And, as the temple grew ever larger during the Dynastic Period (2900–2270 BCE), it became as well a storage and distribution centre for surplus food and the primary residence of the priests.

Not far away, the priestly governor (ensi) or king (lugal), together with his council of elders, held court at the palace. Like the temple, it had grown during the Dynastic Period from modest beginnings into what became known as “the Big House”. At the same time, and not coincidentally, the cities themselves became walled. During earlier centuries, there had been no military class, little or no armed conflict, and no reason for a city to be walled. But from the start of the 3rd millennium BCE, the growing power of the temple-palace alliance brought with it increasing violence. Cities became walled, undefended villages disappeared, rulers vied with one another for power, and cities began to engage in siege warfare with each other as they sought to expand their territories. The first historically recorded war took place between the cities of Lagash and Umma c. 2525 BCE. Lagash went on to annex almost all of Sumer and introduced the use of terror as a means of reducing other city-states to tribute. It was later conquered by the priest-king of Umma, who went on to claim an empire stretching from the Persian Gulf to the Mediterranean. He in turn was overthrown by Sargon of Akkad, who absorbed all of Sumer and proceeded to establish the far-flung Akkadian Empire. Violence was now institutionalised. What began as a cooperative constellation of city-states had, in just a few centuries, surrendered to the imperial ambitions of one dynasty after another.

Accompanying this escalating violence was a worldview that supported it. While the relatively benign presence of the Great Mother was still a focus of Sumerian worship, the nature spirits of the earlier Paleolithic and Neolithic eras had now become more godlike forces of nature. By the 4th millennium BCE, they had morphed into individual deities within a pantheon of gods and goddesses. Although still associated with the forces of nature and immensely powerful, they were now human in form, had human qualities and foibles, and among themselves were unequal in status. Some of them became patron deities of particular cities that they ruled through their earthly representative – the priest-king of that city. It followed that the increasingly frequent and violent power struggles between cities should be seen as a contest between their associated deities.

Nammu, goddess of the primeval sea, may have been the earliest deity. Sometimes described as the mother of all gods, she gave birth to heaven and earth – specifically to her first-born, An, the god of heaven, and then to Ki (later known as Ninhursag), the goddess of earth. From their union in turn came Enlil, god of the air, who separated heaven and earth. An carried off heaven, while Enlil carried off his mother, earth, and with her proceeded to create man and the entire world of plants and animals. An, the first male sky-god to appear in human history, was seriously concerned with power and served as supreme ruler and alpha-male of the pantheon, as well as the patron god of the city of Uruk – at least until Uruk was defeated by the city of Nippur, whereupon its own patron god, Enlil, replaced him as the supreme object of worship.

Each of these main players had specific functions. An was the power that gave being to all nature. Ninhursag governed wildlife and gave birth to kings. Enlil was god of the winds and of crop-growing weather. Of lesser rank, but significant nonetheless, were the sun god (Utu), the moon god (Nanna), and a host of male sky-gods who belonged to a kind of heavenly club known as the Anuna. There were deities for almost everything – for the other celestial bodies, for geographical features such as mountains and steppes, for important tools such as brick-moulds and ploughs, and for each of the crafts, as well as very personal gods who were served by individuals and households.

This pantheon of deities answered life’s most basic questions: How did we and the universe come into being? What is our place in the larger scheme of things? What drives the forces of nature, and what can we do to control them? It was the gods who brought all things into being and created humans from clay in order to serve them. “Worship your god every day,” reads the Babylonian Counsels of Wisdom, “with prayer and sacrifice, accompanied by incense. Present your free-will offering to your god, for this is proper. Offer him prayer, supplication, and prostration daily, and you will get your reward. Then you will have full communion with your god. Reverence begets favour; sacrifice prolongs life; and prayer atones for guilt.” Beyond the demand that we serve them, however, the Sumerian gods seem not to have specified any code of behaviour such as we find later in the Jewish Torah. Nor was behaviour motivated by any promise of heavenly reward or eternal damnation. Men and women were on their own to decide issues of right and wrong, good and evil. And regardless of how they had lived, all were destined to descend after death into a gloomy Underworld where they could expect to spend eternity as a ghost.

The Underworld was ruled, at least from 2400 BCE, by Gilgamesh. Earlier in that millennium he had been king of the city of Uruk, around whom such legends had grown that he was later elevated to the status of a god and king of the Underworld. The myths and legends surrounding him are related in The Epic of Gilgamesh, told and retold by the Sumerians, and later by the Assyrians and Babylonians, over at least two thousand years. Although details of the story differ from one version to another, the main theme – coming to terms with the fact of our mortality – remains the same. Knowing that all men must die, a youthful and irrepressible Gilgamesh and his companion Enkidu set out on a journey to accomplish a heroic deed (slay a mythical monster) and thereby achieve a kind of immortality. In the process, however, they insult the Great Mother who decrees Enkidu’s death and reduces Gilgamesh to resignation regarding his own mortality. Henceforth, because of his heroism in confronting death, he is made ruler of the Underworld and his blessing is invoked in Sumerian burial rites.

The same Gilgamesh narrative includes the story of the Flood and is almost certainly the source of the later Biblical account. According to the Mesopotamian version, the gods decided on a whim to destroy humankind with a flood. But Enki, patron god of the city of Eridu, was less than happy with the decision and told a man named Utnapishtim to build a very large boat in which to preserve himself, his family, and a host of animals. After six days and nights of riding out the flood, the boat grounded on Mount Nisir (in modern-day Iraq). After another seven days, three birds were released in succession. When the third one did not return, Utnapishtim knew the flood was over.

Alongside the growing pantheon of gods and goddesses, the Great Mother retained a central role in Sumerian mythology – worshipped as the goddess Inanna (“queen of heaven”) until the beginning of the 1st millennium BCE, and thereafter as the Babylonian goddess Ishtar.  An epic poem, The Exaltation of Inanna, described her as all-powerful and reigning in heaven. Associated with the life-giving powers of fertility and abundance, and hence not as war-mongering as many of her male counterparts, she was still far from domesticated. Sexual attraction is aroused in her presence, and she herself is described as sexually aggressive. In The Epic of Gilgamesh, her sexuality is excessive and downright dangerous when spurned.

In order to arouse sexual vigour and ensure the fertility of crops and animals within a city-state, the priest-king was ritually united with Inanna in a royal marriage ceremony. Over the course of two millennia, the marriage was celebrated at least once by the ruler of each major city. A vase found at Uruk illustrates the occasion there c. 3000 BCE. An inscription found at Lagash refers to the marriage being performed there c. 2250 BCE. And in ancient Babylon (c. 1700 BCE), the ritual was performed annually in association with a New Year festival. More than simply promoting fertility, it may have served as well to legitimise the king’s rule by placing him in a productive relationship with the Great Mother.

Gradually over the ensuing centuries, as Sumerian civilization disappeared under the onslaught of successive empires, the Great Mother was reinvented by the warring Father God worshippers. In some instances, the Mother Goddess became the wife or daughter of their chief god. Sometimes they got rid of the goddess altogether or demoted her to the status of a disobedient and trouble-making mortal woman. Pandora, whose name means “giver of all gifts,” was thus recast as a mortal woman who brings only trouble into the world. And the Hebrews turned the Mother Goddess into Eve who, because of her disobedience and questioning of male authority, ended forever humanity’s place in Paradise. Needless to say, this masculinisation of the Mesopotamian worldview accompanied not only increasing violence and warfare but also the subjugation of women through all the millennia since.
     

Monday, December 13, 2010

The Age of Civilizations and Empires

Civilization began with the emergence of cities and city-states. The word itself comes from the Latin civis, meaning a citizen – one who lives in a city. So a civilization is a constellation of cities that occupy a given geographical area, share a common language and culture, engage cooperatively in the production and importation of food and other necessities of life, and create organisational structures that ensure a continuity of government and social order.

Around the start of the 4th millennium BCE, the city-states of Mesopotamia, located in the alluvial plain between the Tigris and Euphrates Rivers in what is now Iraq, began coming together to form the first of the great Bronze Age civilizations. Many of these cities, such as Eridu (perhaps the oldest city in the world) and Ur (just 12 kilometres northeast of Eridu), each of which housed as many as 10,000 residents, grew up almost in sight of one another. Populated by the same Sumerian people, sharing the same language and culture, and increasingly linked by trade, it was virtually inevitable that these cities should come together as the Sumerian civilization.

The same process soon followed in other regions of what was now becoming the civilised world. Cities in the Indus River Valley came together in the mid-4th millennium BCE to form the Harappan or Indus Valley civilization. It flourished until c. 1800 BCE in what is now Pakistan, north-western India, and south-eastern Afghanistan. At about the same time, from c. 3600 BCE, settlements along the Nile River began to grow and advance rapidly towards civilization. What had previously been autonomous towns and villages were united first into the kingdoms of Upper and Lower Egypt, and then (c. 3150 BCE) unified under the great dynasties of the Egyptian civilization. Meanwhile, on the nearby Mediterranean island of Crete, migrants from Anatolia and/or the Levant had settled in agricultural villages from c. 7000 BCE. As elsewhere, some of these grew into palatial cities that later (from c. 2700 BCE) comprised the Minoan civilization. This in turn was eclipsed by yet another civilization, the Mycenaean, which had emerged from city-states on the mainland of Greece c. 1600 BCE.

These five – the Sumerian, Harappan, Egyptian, Minoan, and Mycenaean – were the great Bronze Age civilizations that marked the beginning of our collective adolescence. They were born in the 4th millennium and had all expired before the end of the 2nd millennium BCE. As is typical of early adolescence, it was a period marked by a new flowering of the human mind and the first purely mental productions of the human race (writing, the alphabet, the calendar, mathematics, astronomy, and so on). It was also a time of escalating adolescent hubris during which these civilizations and succeeding empires became intoxicated with their expansionist dreams and whatever mayhem was required to realise them. Indeed, it was precisely this escalating violence and mayhem that contributed to the demise of each of these civilizations, all within a few centuries of each other, in what historians call the Bronze Age Collapse.

There is no neat dividing line between a civilization and an empire. The former, as we have noted, is a complex agricultural and urban culture made up of a regionally defined constellation of city-states that share a common language, governmental structure, and sense of identity. An empire, on the other hand, is a geographically extensive group of states, united and ruled by a king or emperor who exercises military and political dominion over populations that are culturally and ethnically distinct from those of the ruling state. In their later stages, each of the Bronze Age civilizations either morphed into a more expansive empire itself or was conquered by some other expanding empire.

The city-states that had flourished as the Sumerian civilization from the start of the 4th millennium BCE were conquered by Sargon of Akkad (now the city of Fallujah in Iraq), c. 2250 BCE, to form the combined empire of Akkad and Sumer – often regarded as the first-ever empire. Having whetted his imperial appetite, Sargon went on to extend his empire as far west as modern-day Syria, Israel, and Turkey, and as far south as Oman. Within 100 years, however, the Akkadian empire itself had collapsed, to be succeeded by a brief Sumerian renaissance before the city of Ur was finally sacked and Sumer came under Amorite rule. That was the end of Sumerian civilization. From the start of the 2nd millennium BCE, Mesopotamia was dominated by successive empires – Assyrian, Babylonian, and Persian. The latter was the largest in ancient history and spanned three continents – Asia,  Africa, and Europe – before it too fell to yet another Conquering Hero: Alexander the Great.

Meanwhile, the Indus Valley civilization was similarly coming unstuck. By 1700 BCE, most of its great cities – more than a thousand settlements have so far been identified – had been abandoned. The reasons are unclear. Climate change seems to have triggered a severe decades-long drought at about that time. A decline in trade with Egypt and Mesopotamia may have been another factor. But invading hordes of barbarian horsemen from the north – Indo-European tribes from Central Asia known as Aryans – almost certainly put the final nail in this civilization’s coffin.

The heyday of Egyptian civilization was also succumbing to its own and others’ imperial ambitions. The stability that had characterised this great civilization since the middle of the 4th millennium BCE began to unravel when its own expansionist dreams brought it into conflict with the Hittite Empire for control of Syria and Palestine. The largest chariot battle ever fought reached an indecisive but costly conclusion when, in 1274 BCE, the Hittites caught the forces of Ramesses II at Kadesh in Syria in history’s first recorded military ambush. Nearly a century later, the pharaoh Ramesses III managed to defeat an invading confederacy of sea raiders (known as the “Sea Peoples”) in two great land-and-sea battles. But the heavy cost of such battles exhausted Egypt’s treasury. The death of Ramesses III in 1155 BCE marked the beginning of the end for Egypt. Some unknown environmental disaster (perhaps the eruption of the Hekla volcano in Iceland) dimmed the sun’s light and seriously arrested the growth of global vegetation for almost two decades. Then a combination of droughts, famine, civil unrest, official corruption, and endless bickering among Ramesses III’s heirs precipitated a more total collapse. In subsequent centuries a now-humbled Egypt was intermittently harassed and controlled by Libyans, Assyrians, Greeks, and Persians, before its ultimate conquest by Alexander the Great in 332 BCE and its annexation as a Roman colony in 30 BCE.

Unlike its sister civilizations, the Minoan civilization on the island of Crete never succumbed to imperial temptations. Primarily a mercantile people engaged in overseas trade, the Minoans seemed content to lead a peaceful life with no expansionist ambitions. There is no evidence for a Minoan army or for their domination of any peoples outside Crete. In sharp contrast to their warmongering contemporaries, warfare does not appear in their art – and when weapons are depicted, it is only in ritual contexts. Significantly, the Minoan cosmology was never invaded by the warring male sky-gods, and the Mother Goddess remained at the centre of their essentially matriarchal religion. Such a pacific way of life, however, was no guarantee against calamity and war. One of the largest volcanic eruptions in recorded history took place c. 1600 BCE on the Mediterranean island of Thera (now Santorini), devastating Minoan coastal settlements and perhaps inspiring Plato’s story of the lost island of Atlantis. Still another natural catastrophe, perhaps an earthquake or another eruption of Thera, further weakened Minoan society and made it ripe for invasion. In any case, c. 1400 BCE, the Mycenaean Greeks did just that. They destroyed much of the island, occupied the Minoan palaces, and effectively brought Minoan civilization to an end.

Of all the Bronze Age civilizations, the Mycenaean was by far the most militant and the shortest-lived. Emerging c. 1600 BCE among the cities of mainland Greece, it quickly became more an empire than a civilization, extending its reach to Crete, Turkey, Cyprus and Italy. Its swords and other artefacts have been found as far away as Germany and the Caucasus. Unlike Minoan society, Mycenaean society was dominated by a warrior aristocracy who advanced their interests through conquest. It is the setting of much ancient Greek literature, including the epics of Homer, whose Iliad recounts the probably legendary tale of the Mycenaean defeat of Troy in the Trojan War. No single explanation fits the archaeological evidence for the collapse of this warring civilization. Climate change, environmental catastrophe, invasion by the Dorians or the Sea Peoples, or the more widespread availability of iron weapons – all these may have contributed to its demise. The fact is that, from 1200 BCE, its palace centres and outlying settlements were being abandoned or destroyed, and within a hundred years any recognisable features of Mycenaean culture had disappeared.

So ended the 3000-year Age of Civilizations! By the end of the 2nd millennium BCE, all of these great Bronze Age civilizations had fallen prey, either to natural calamities, invading barbarians, or conquest by expansionist empires. Their legacy, however, continues to the present day, as certain of their defining characteristics continue to shape the civilised world.

One of these is the necessity of trade. Civilizations depend on the export and import of food and other essentials between their cities and more distant regions. This requires long-distance trade relationships and the development of transportation systems to service them. So ox-drawn, and later horse-drawn, carts are found from early in the 4th millennium BCE. The world’s oldest known roadway, the Sweet Track in England, dates from the same time, as does the construction of more sophisticated sailing vessels. Trade also required the invention of money to replace the previous barter system (the Sumerians began using silver bars, and the Egyptians gold bars, as a medium of exchange almost from the beginning) and the invention of writing (Sumerian cuneiform and Egyptian hieroglyphics) c. 3200 BCE in order to keep accounts. The accumulation of money soon became synonymous with power, and eventually necessitated the introduction of legal codes (e.g. the Sumerian Code of Ur-Nammu c. 2050 BCE and the Babylonian Code of Hammurabi c. 1760 BCE) to regulate business practices and the ownership of private property. All of these, of course, have been enormously elaborated in the centuries since.

Another defining characteristic involves an increasingly complex division of labour, the accumulation of wealth, private property, and class stratification based on ownership and control of production. These, together with the centralising of government in the person and court of a priest-king, led quickly to the emergence of a privileged ruling class and a complementary religious or priestly class. Overlapping networks of political, religious, economic, and military power differentially benefited these privileged groups by exploiting the mass of peasant producers, via taxation and slavery, and funnelling resources and power from the bottom to the top of the social hierarchy. The common thread is control. A small group of people, the ruling and priestly class, controls the mass of people through the institutions of civilization.

In this respect, not a great deal has changed over the centuries. A wealthy and powerful elite continues to control production, buy elections, manipulate governments, run the military-industrial complex, and manage the media. In this they are sanctioned by the religious establishment, supported by an educational system that selects who will and will not have access to high-status jobs, and protected against the threat of rebellion by contemporary versions of the ancient Coliseum that keep the masses entertained.

Two other defining characteristics are worth noting. One is institutionalized warfare and the magnification of military power. The other is the building of monumental tombs and ceremonial centres. The former, born in Mesopotamia in the 3rd millennium BCE and growing in its killing power ever since, has, within the last 100 years, gone virtually out of control. Any restraints are gone. Wars are fought over ideas as much as over territory and resources, and the wholesale destruction of entire populations has become commonplace. The latter, whether the ziggurats of Mesopotamia or the pyramids of Egypt or the monumental palaces of Minoa, like the contemporary skyscrapers of New York or Dubai or Toronto, seem to express a kind of adolescent “I’m-the-biggest-I’m-the-best” defiance of death and a reaching for immortality.

The Age of Civilizations and Empires is scarcely over, and its legacy is evident everywhere around us. The great empires that dominated Europe until recently – the Russian, German, Ottoman, and Austro-Hungarian – finally collapsed only in the chaos of World War I. The Empire of Japan’s divine mission to rule the world ended only in the radioactive fires of Hiroshima and Nagasaki. And the last great empire – the British – was ultimately subdued only as recently as 1947 by the diminutive figure of Mohandas Gandhi.

Is it possible that we might finally be emerging from our collective adolescence? As we move towards a global community, could humanity be moving to a new level of maturity? If so, can we make that transition before some catastrophic clash of ideologies and worldviews brings the entire human experiment to an end? I think so. At least, that remains my hope and my contention.

Sunday, December 5, 2010

The rise of cities, kings, and warfare

For more than 6000 years, from the dawn of the Neolithic Era when the first horticultural villages appeared in what is now Syria and the West Bank of Palestine, until the start of the Bronze Age (3500 BCE) when the first great civilizations emerged in Mesopotamia, Egypt, and the Indus Valley, humanity enjoyed a relatively settled, peaceful, and egalitarian existence. Like the latency period of our individual development (age 6–11), it was a time of calm before the storm of our collective adolescence engulfed us in the ensuing Age of Civilizations and Empires.

During the later stages of the Neolithic Era, a number of developments took place that led inevitably to the emergence of the great Bronze Age civilizations:
•    the growth of simple villages into large and complex city-states
•    the invention of kingship
•    the rise of militarism and large-scale warfare in an increasingly male-dominated world.

The growth of city-states

During much of the Neolithic period, our ancestors lived in pastoral-horticultural villages of anywhere from 150 to 2000 people. As their technology improved, and they discovered ways of farming more intensively (e.g. the polished stone axe for clearing forests, irrigation ditches, crop rotation, and the ox-drawn plough), they began to produce more food than was needed to meet the immediate needs of the community. A food surplus that could be stored, or possibly traded for other necessities, provided a measure of security hitherto unknown and attracted more and more people from the nomadic life of the hunter-gatherer to the settled life of the Neolithic village. As early as 7500 BCE, some villages were already approaching city size. By 5000 BCE, we see in some of them the first evidence of intensive year-round agriculture. And by 4500 BCE, some had grown in size to as many as 10,000 people.

People who lived in these early towns and cities now had time to concentrate on things other than growing food. Some became skilled in producing tools, others in weaving clothes, and others in building mud-brick houses. As early as 6000 BCE there is evidence of specialist classes – artisans, priests, traders, and administrators. And with such specialisation came social stratification and economic disparity – certainly in comparison with the egalitarian structure of earlier hunter-gatherer bands. The domestication of animals itself contributed to such disparity. Possession of livestock encouraged competition between families and led to inherited inequalities of wealth. But, for all that, such inequalities were still not pronounced. In the Anatolian settlement of Catal Hoyuk (6300-5500 BCE), for example, although some homes appear slightly larger or more elaborately decorated than others, there is on the whole a striking lack of difference in the size of homes and burial sites.

By the start of the Bronze Age, many of these burgeoning Neolithic towns had so grown in size and complexity as to become the first city-states. These were self-governing territories focused on a major urban centre with sovereignty over a surrounding region ranging from a few square miles to a vast hinterland that might itself contain other cities or towns. The earliest included the Mesopotamian cities of Uruk and Ur, the Indus Valley cities of Harappa and Mohenjo-daro, and the Egyptian cities of Hierakonpolis and Abydos. As the centre of economic, religious, cultural, and administrative life, the core city provided a variety of livelihoods while the surrounding area supplied food and other resources. Now there were marked disparities in wealth, power, and social class. The privileged classes included the priests, the ruling authorities, the wealthy traders, and the landowners. Those who worked the land were the peasants.

City-states reached their peak in Greece. By the 5th and 4th centuries BCE, the Greeks were organised into hundreds of city-states, including Athens, Sparta, Corinth, and Thebes. In Italy, what began as a 9th century BCE village became the city-state of Rome and thence the centre of a vast empire. The Middle Ages saw a revival of Italian city-states such as Florence, Genoa, Siena, and Venice. What became known as the Most Serene Republic of Venice controlled a vast land-and-sea empire throughout the eastern Mediterranean until it was finally conquered by Napoleon in 1797 CE. And some German city-states such as Bremen and Hamburg managed to survive into the 19th century. For the most part, however, unable to defend themselves against aggressive territorial empires, independent city-states went into serious decline after 1500 CE. Today, with the exception of Monaco, Singapore, and Vatican City, they are all consigned to history.

The invention of kingship

Large population centres, characterised by increasing social stratification and economic disparity, need some form of governmental control. Certainly this was true of the first city-states. By the beginning of the 4th millennium BCE, some had reached such size and complexity as to require an organised system of government. And so they invented kingship.

Precisely when or how the first kings came to power remains shrouded in prehistoric mist. Some early Sumerian texts (made possible only after cuneiform writing had been invented c. 3500 BCE) point to an earlier time, before kings existed, when the people wandered in a state of leaderless confusion – to which the gods responded by delivering to them the concept of kingship. “Kingship,” it was said, “descended from heaven.” The office, in other words, was of divine origin.

At about the same time, the priestly watchers of the Mesopotamian night skies discovered that the seven celestial lights – the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn – move at mathematically determined rates through fixed constellations. Following the principle of “as above, so below,” they concluded that this celestial order should be reflected in the social order and that human affairs should be governed by a king and members of his court who played out a ritual pantomime of identification with the heavenly bodies. So the first Priest-Kings arose – rulers through whom each city-state was governed in accordance with the will of its patron deity.

From the start, religion and politics were in cahoots. Religion legitimised the power structure while priests enjoyed the fruits of their royal patronage. Soon the surpluses accumulated by the great city-states were being funnelled to the king and his court. More and more the labour of the many filled the treasure chests of the few. Why would men and women willingly submit to such a regime? Because they wanted a visible god or representative of the deity – a kind of father figure – always present to receive their offerings, provide necessary leadership, and ensure their protection and prosperity. And for this they were willing to pay the price of their own subjugation.

It would be a mistake, however, to think of these early priest-kings as ruling their cities with anything resembling the tyranny of later Roman emperors or European monarchs. On the contrary, with little individual autonomy, they were locked into playing their prescribed role. Moreover, though the job had undeniable perks, the term of office was time-limited and ended after a certain span of years with the king, together with dignitaries of his court, being slain in the ancient custom of ritual regicide. An extension of the longstanding Neolithic tradition of human and animal sacrifice intended to ensure continuing fertility and prosperity for the community, regicide was part of the job description to which the king willingly submitted. However his reign began – typically he was chosen in some manner by the local deity to take on the mantle of kingship and become the consort of the Great Mother – it ended with his being ritually sacrificed.

The occasion seems linked to the orbits of the planets – most often to the 8-year cycle of Venus or the 12-year cycle of Jupiter. Stargazing priests would set the date, and members of the king’s council or family would carry it out. Nor was the king dispatched alone. Burial sites excavated in the ancient Sumerian city of Ur contain bodies from sixteen different royal courts, including not only those of the priest-kings themselves but of assorted members of their entourages.
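
A rough modern check of the arithmetic (my own illustration, not anything recorded in the Sumerian sources) shows why these two intervals would have stood out to careful sky-watchers: five successive synodic periods of Venus add up to almost exactly eight solar years, and Jupiter takes close to twelve years to return to the same place among the stars.

\[
5 \times 583.92\ \text{days} \approx 2{,}920\ \text{days}, \qquad 8 \times 365.25\ \text{days} \approx 2{,}922\ \text{days},
\]
\[
T_{\text{Jupiter}} \approx 11.86\ \text{years} \approx 12\ \text{years}.
\]

In other words, Venus reappears as morning or evening star on very nearly the same calendar date every eight years (drifting by only a couple of days per cycle), and Jupiter re-enters the same constellation roughly every twelve – natural markers, on this reading, for fixing the end of a reign.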

Though later kings, more concerned with their own well-being than that of the polis, conspired to have substitutes sacrificed in their stead, the sacrifice of divine or divinely-chosen figures has continued to be an important theme throughout history. Sacred regicide, evident in the early stages of every literate culture, was still being practiced in India in the 16th century CE. In Zimbabwe, as recently as 1810, priests were still ordering the strangulation of the king every four years. And the voluntary sacrifice of a divine saviour to effect our salvation continues to be the central motif in the dominant religion of the Western world. 

The rise of warfare

Throughout the Neolithic Era, the Mother Goddess was the ruling principle of the universe. Her presence and power were dispersed throughout the natural world. With the rise of city-states and a system of government modeled on our solar system, however, the cosmological order came more and more to be seen in its hierarchical dimension. Rather than power being dispersed throughout nature, it came more and more to reside above nature – in a celestial realm from which a deity communicated his will and exercised his power through a human ruler. And significantly, that power became more his than hers. The realm of nature spirits was becoming a pantheon of gods and goddesses, with increasingly aggressive male deities coming to the fore.

Other factors doubtless contributed to this shift towards male dominance. Much of the work on which the city-state depended and which therefore had economic value (e.g. clearing forests, ploughing fields, and digging irrigation ditches) required male muscle-power in a way that earlier tasks such as planting seeds with a pointed stick did not. It may also be that the very nature of a city-state requires a more aggressive and expansive energy. The increasing concentration of power and wealth at the top means progressively less for those at the bottom. The rich get richer and the poor get poorer, unless the state expands its resource base. Economic growth is essential to avoid social unrest and potential revolt. Eventually the city-state’s consumption will outstrip its own resources and drive it to consume the resources of its neighbours.

As early as the 8th millennium BCE, the village of Jericho, now a proto-city, found it necessary to fortify itself with a surrounding wall. Uruk in Mesopotamia is one of the world’s oldest known walled cities. By the 5th millennium BCE, many hitherto peaceful settlements, not only in the Middle East but in the Indus Valley as well, were fortified with a palisade and outer ditch as neighbouring communities quarrelled more and more over control of prime agricultural land. Nor was the threat only from neighbouring city-states. Bands of desert-dwelling nomads, with an eye on the rich fertile land, invaded many of these towns and villages and brought with them their warring male sky-gods. Clearly the level of testosterone was on the rise.

And it was only the beginning. By the dawn of the Bronze Age civilizations in the middle of the 4th millennium BCE, virtually all of the city-states that comprised them were fortified with walls and defended by armies. Until then, armed conflict had been largely limited to local quarrels as one city-state bumped into another, or as may have been necessary to fend off invading nomads. But in 2350 BCE, King Sargon of Akkad changed all that. He invaded Sumer in massive style – the first outright war of conquest and total subjugation – and the world since then has never looked back. Ironically, it was at this same time that city-states, in at least this part of the world, ended their practice of ritual regicide. End one form of ritual murder and institute a far more lethal one! Homo sapiens had discovered another mark of its species’ uniqueness: a seeming delight in massacring huge numbers of its own kind. Now we could shed blood on a massive scale.

So began the Age of Civilizations and Empires. The rise of city-states, the emergence of kings, and the institution of large-scale warfare in an expansionist, male-dominated world marked a massive upheaval in human society. And it initiated an empire-building era that has lasted until the present.

Wednesday, October 20, 2010

The Neolithic Worldview

The worldview throughout the period of the Great Migrations had been animistic. The whole realm of nature was animated by spirits. During the later stages of this period, nature, while still animistic, was more and more seen as female – represented by the Venus figurines found in the 35,000-year-old caves of Cro-Magnon Man. Now with the dawn of the Neolithic era 14,000 years ago, the feminisation of nature became increasingly focused in the Mother Goddess, worshipped by these early villagers as the fruitful giver of life and of all that was needed to sustain them. She would become more central still in the worldviews yet to emerge in the civilizations of ancient Mesopotamia, Egypt, and Crete. We have no record of the names by which she was known prior to the invention of written language 5000 years ago – but some that were recorded after that date are Nammu, Inanna, Ishtar, Iahu, Astarte, Kali, Isis and Matrona. In ancient Greece they called her Gaia.

By whatever name, she was the bountiful goddess Earth – the mother and nourisher of life, and receiver of the dead for rebirth. She was not the supernatural creator of nature, but the creative force of nature itself. All nature was alive, engaged in the creative-destructive dance of life and death. Nature loved and raged at her human children, giving them ample reason to love, fear, and respect her. The plants and animals belonged to her. The forces of nature – sun and moon, winds and seas, mountains and rivers – were members of her holy family. The universe was not a mechanism as we think of it today, but a vast dramatic enterprise manifested primarily in the seasonal cycles on which these early farmers had come to depend. And it was all an expression of the abiding presence of the Great Mother.

The oldest-known man-made place of worship, dated to 9000 BCE, was the hilltop sanctuary of Gobekli Tepe in what is now southeast Turkey. And Jericho, one of the earliest Neolithic villages, grew up about 8000 BCE around a still-earlier shrine to the mother-goddess, where she was venerated through the offerings of fruits and flowers. From that time, and throughout the Neolithic era, she was many goddesses rolled into one – guardian of childbirth, dispenser of healing, fount of prophecy, lady of the beasts, giver of life and death – all different facets of a single power. But above all she was the goddess of fertility.

The miracle of the planted seed and fruitful earth, wherein death is transformed into life, was, to the Neolithic villager, the great mystery. And the myth that grew up around this mystery yielded a practice that would remain at the core of human culture for thousands of years – the practice of sacrifice. The thinking went like this. As rotting vegetation gives rise to new shoots, so death must be the giver of life. And if that is so, then the way to increase life is to increase death. Hence, in all planting cultures, we find the rites of human sacrifice by which this primal mythic scene is enacted literally.

The sacrifice, moreover, had to be a blood sacrifice, because blood was the substance of new life. According to the Neolithic understanding of reproduction, it was not the male semen but a transformation of blood that caused pregnancy. Since the menstrual flow continues each month except when a woman is pregnant, it must be this withheld blood that is converted into new life – an idea supported by the obvious fact that the loss of blood leads to death. Just as the earth needs rain to bring forth crops, so the Great Mother needs blood to bring forth new life. Such is the logic behind the rites of human and animal sacrifice. The way to appease the Great Mother is to give her what she demands – blood! And invent a precise way in which to do it – ritual! In this way we can cooperate with the Great Mother to ensure fertility and life itself.

Over the course of time, human sacrifice was gradually replaced by animal sacrifice – a change echoed in the Biblical account of God demanding that Abraham sacrifice his son Isaac and then changing his mind to allow a ram to be sacrificed instead. Later still, barter sacrifice became acceptable, as in the ritual of chopping off one’s finger joints. “I give you this joint,” ran a Crow Indian prayer to the Morning Star. “Give me something good in exchange.” But whatever the sacrifice, the intent was always the same – to appease the Great Mother or whatever deity was appropriate to the occasion.

The same thinking influenced the burial rituals of this time. As life springs from death in the plant world, so it is in the human world. The dead are buried to be born again. So this is the first era in which we find ceremonial graves as a common practice.

Between 5000 and 4000 BCE, we find the first evidence, especially in the Middle East, of fortified walls being built around these hitherto peaceful, goddess-worshipping villages. They were an unsuccessful defence against invading waves of nomadic hunters who came from desert regions in search of a better life and who brought with them their warring male sky deities. These invasions were the first expression of large-scale violence among humans. The conquering tribes, ruled by men and their male gods, stayed to form more complex social orders, strip women of their equal status, pursue their competitive interests, and build their kingdoms. As the villages grew into warring city-states, the dominant cities so extended their control and imposed their customs on the surrounding territory as to become centres of power and government in what would become the great civilizations of Sumer, Assyria, Persia, and Egypt – ruled of course by male kings and pharaohs who had been anointed in their role by none other than the gods themselves. But that’s another story that we’ll leave for the next post.

Friday, October 15, 2010

The Neolithic Revolution

For more than 90% of our species’ existence, from the time of our emergence in Africa 200,000 years ago until as recently as 14,000 years ago, we lived as hunter-gatherers in small familial bands. During that entire span of time, changes in our worldview and lifestyle were minimal. We made modest advances in technology from primitive hand axes to bone-tipped spears and harpoons. The so-called Great Leap Forward of 60,000 years ago saw a breakthrough into syntaxed language and a more symbolic mode of thinking. And our exodus from Africa at about the same time required us to adapt to some dramatically different climatic conditions. By 30,000 years ago, our developing cognitive abilities had found expression in the representational art and Venus figurines with which we decorated our caves in Europe. And we were by then beginning to bury our dead with something resembling a religious mode of consciousness. But we were still just nomadic Stone Age hunter-gatherers, dressed in animal skins, living for the most part in caves, and maintaining a precarious existence in a world animated by nature spirits whom we did our best to keep happy.

Now all that was about to change – dramatically! Beginning about 14,000 years ago and gaining momentum quickly over the next few millennia, the Neolithic Revolution marked the single greatest transition in human history. We went from the only life we had known as wandering hunter-gatherers to that of settled villagers tending crops and herding domesticated animals.

We can only guess at how it began. We had already discovered a cooperative hunting partnership with dogs. Now, in northern Europe, as early as 12,000 BCE, we formed a different kind of partnership with reindeer – raising and herding them in exchange for their milk and meat. About the same time we seem to have noticed that the pits and seeds dropped along our habitual tracks were sprouting into the very plants that we worked so hard to gather. Aha! What if we deliberately planted these seeds and then stayed around to harvest their fruit? And instead of hunting down wild animals, what if we could domesticate them to our mutual advantage? Then we could have milk and meat and wool whenever we wanted.

It was an idea whose time had come. Comparatively quickly we went from nomadic to semi-sedentary to settled. We moved from simple gathering, to planting seeds with a pointed stick, to cultivating the ground with a hoe, and eventually to turning the earth with an ox-drawn plough. We evolved from simply planting seeds to selecting the best seeds from each harvest, to storing the harvests against times of need. In short, we added to our prowess as hunters the know-how required to grow reliable food supplies, raise captive animals, and, where climate and soil permitted, organise ourselves into self-sustaining villages.

Muraybet is the earliest known such agriculture-based settlement. Located on the banks of the Euphrates in what is now Syria, it was occupied from 12,500 to 9500 BCE by villagers who left behind evidence of their domesticated plants, harpoons and fish-hooks, flint-bladed sickles for harvesting, mortars for grinding, and the ever-present goddess figurines. Jericho, in what is now the West Bank of Palestine, is another example. Today it is one of the oldest continuously inhabited cities in the world – but its origin as a Neolithic village, built around an earlier shrine to the mother-goddess, dates to 9000 BCE. From sites such as these the revolution quickly spread to North Africa and northern Mesopotamia, to Asia and India by 8000 BCE, and finally to North America by 2500 BCE. In all these locations we find an expanding cultivation of plants and the domestication of animals such as dogs, sheep, goats, cattle, oxen, and pigs.

Farming quickly led to the production of surplus food, and with that to growing population centres. By 9000 BCE, we were living in villages of 200, and by 5000 BCE in cities of up to 10,000 residents. Swelling populations in turn required a form of social organisation and control more complex than that of simple hunting bands. Just as eukaryotic cells evolved nuclei and animal bodies evolved brains millions of years earlier, so our expanding human communities now required some way of organising themselves and managing their complexity. One solution was the ascription of ruling power to a monarch. A king’s tomb at Eynan, a dozen miles north of the Sea of Galilee, dated to 9000 BCE, is the earliest yet found.

As farming became more efficient, some members of these towns and villages could occupy themselves with concerns other than food production. They learned to spin yarn and weave cloth, fashion flint tools and weapons, mould decorative pottery and religious figures, build mud-brick buildings and wooden furniture, make musical instruments and lead others in worshiping their deities. By 6000 BCE we had developed clearly defined specialist classes – craftsmen, priests, administrators, etc. Just as insect colonies depend on the performance of specialised functions, so it is too for human communities beyond a certain size and level of complexity.

The Neolithic town of Catal Hoyuk in Turkey exemplifies the peaceful lifestyle of such communities. There is no evidence of fortification, warfare, conquest, slavery, or significant social inequality. Men and women worked as partners. Women’s roles were no less important than men’s. There is even evidence that those in need were provided for from public stores of food or from the goddess’s temple gardens.

However idyllic that may sound, life was not all candlelight and roses (or whatever the Neolithic equivalent of that may have been). They faced floods, droughts, malnutrition, and epidemics unknown to their hunter-gatherer forebears who had enjoyed a more nutritious diet and considerably less risk of famine. The downside of farming was that we became dependent on a smaller variety of crops that could fail; the downside of living in larger population centres was that we became vulnerable to infectious epidemics. The upside was that such challenges pushed us to be more inventive. We learned to extract medicine from plants, store food against times of need, agree on rules for sharing land, create canals to bring water from the river to our fields, and build boats to trade with neighbouring towns and villages.

All these revolutionary developments are linked to a very different sense of time. Prior to this era, we had wandered the earth, gathering and hunting as the need arose, with little or no thought for tomorrow. But the world of farming is the world of extended time. It requires making preparations for a future harvest, investing effort now for the sake of long-term goals, delaying present impulses to reap a future reward. This is a quite different mode of consciousness. It ushers us into a non-present world. Now we imagine the future with anticipation and anxiety, and confront our mortality with a deeper shudder. So this is the first era in which ritual burial and ceremonial graves became common practice.

This era corresponds to a stage in our individual development (age 7 – 12 years) that I call The Responsible Participant. Erik Erikson defined the key developmental task of this stage as “Industry versus Inferiority.” Children now work hard at being responsible. They are keen to share and cooperate – to join with others in being productive. Indeed the desire to be productive supersedes the whims of play. They are eager to learn and develop more complex skills. They now grasp calendar time and have a much better understanding of cause and effect. Jean Piaget described this as the “Concrete Operations Stage” during which the child engages in concrete problem-solving. Thinking is logical, but not yet abstract. The child can now imagine future scenarios, but his focus remains practical and concrete. Interestingly, belief in animism declines during this stage, though remnants of it may continue into later years.

All these developmental features are clearly expressed in what emerged during this Neolithic era of our collective history. It was a peaceful and productive time – akin to what Freud called the “latency period” in our individual development – the calm before the testosterone-crazed storm of adolescence that was soon to erupt in the ensuing age of Civilizations and Empires.

Monday, September 27, 2010

Spirits, Shamans, and Goddesses

There is a kind of consciousness that is uniquely human. Known as reflexive consciousness, it refers to our ability to think about ourselves, ponder our existence, and wonder about our destiny. Most of what we do – getting dressed, preparing breakfast, driving to work, and so on throughout the day – does not require reflective thought. We do these things automatically. But there are also times when we reflect on our lives and make choices based on such reflection. This kind of consciousness seems linked to our equally unique ability to express ourselves in syntaxed language, tensed verbs, and creative art – all of which were in full play from at least 35,000 years ago when the Cro-Magnon people were busy inventing more sophisticated technologies and painting the walls of their caves with representational art. It also seems linked to a heightened awareness of our mortality, a consciously embraced worldview, and the emergence of religious practice at about this same time in our collective history.

Many of these features of reflexive consciousness make their appearance in our individual development during that stage (age 4-7) that I have called The Curious Explorer. It is also the stage during which, according to developmental psychologist Jean Piaget, the young child attributes conscious intention to objects and events in the natural world – or what is known as animism. 

According to the animistic worldview that developed during humanity’s migratory era and that still prevails today in many indigenous hunter-gatherer cultures, the entire universe is alive and interconnected. Everything is animated by spirits. They exist in humans, animals, plants, rocks, natural phenomena such as thunder, and geographic features such as mountains and rivers – and everything that happens is under their control. They can be influenced, however, by rituals, often with sacrificial aspects, designed to win their favour or to keep malevolent spirits at bay.

The spirit that indwells all animals and humans survives physical death. In the case of humans, it may pass on to an easier world of abundant game, or it may remain on earth as a malignant ghost. Those who die a violent death may become malignant spirits that, intent on avenging their death, endanger those who come near the haunted spot. Many traditional Native American religions are fundamentally animistic. In some, such as the Navajo, the departed soul embarks on a journey to the spirit world that requires certain rituals to be performed by the survivors if it is not to become lost and wander forever as a ghost. Only in later cultures did the simple practice of offering food or lighting fires at the grave become elaborated to include the sacrifice of wives, slaves, and animals to provide the departed with such necessities in the future life.

In the animistic worldview, humans are very much a part of nature rather than separate from or superior to it. And because humans stand on a roughly equal footing with other animals, it is imperative to treat those animals with respect – especially since an animal may be the spiritual abode of one of your dead ancestors. Animal worship was sometimes intertwined with hunting rites. Archaeological evidence from both cave paintings and animal remains suggests that the bear cult involved a sacrificial ritual in which a bear was shot with arrows and then ritualistically buried near a clay bear statue covered with a bear fur, the skull and the body of the bear being interred separately. Other rituals and taboos were designed to please the souls of slain animals so that they would tell other still-living animals that they need not resist being caught and killed.

The practice of shamanism is closely linked to the animistic worldview. A shaman is an intermediary between the human and spirit worlds, capable of leaving his or her body to travel throughout a layered cosmos – flying above the earth to the spirit world or descending into the underworld – to negotiate with good and evil spirits on behalf of the tribe. By entering a trance, the shaman sends his or her soul into other worlds to seek out the underlying causes of mundane earthly events – and then fights, begs, or cajoles the spirits to offer guidance, ameliorate illness, or otherwise intervene in human affairs down here on the ground. It’s a risky business. The spirits themselves may be less than happy with the shaman’s interference; the plant materials used to induce the trance can be toxic or fatal if misused; and failure to return from an out-of-body journey can lead to death. To assist in the work, therefore, the shaman may have “spirit helpers” (usually the spirits of powerful or agile animals) who enable him or her to fly high like a hawk or dive deep like a fish into the spirit world.

Shamans perform a variety of functions – healing the sick, delivering solutions to community problems, predicting the future, leading sacrifices, and guiding the souls of the dead to their proper abode. Healing is accomplished by retrieving lost parts of the person’s soul or by cleansing the soul of the negative energies polluting it. The shaman’s spirit may enter the body of the patient to confront the spiritual infirmity and banish the infectious spirit. Sometimes medicinal herbs may be prescribed. And in the case of an infertile woman, the problem can be cured by contacting the soul of the wished-for child.

Given the value of these functions and the risks involved in performing them, the shaman usually enjoyed great power and prestige in his or her community – as evidenced by a 12,000-year-old shaman burial site in a cave in Galilee. The elderly woman’s body had been arranged with ten large stones placed on her head, pelvis, and arms. Among her unusual grave goods were 50 complete tortoise shells, a human foot, and body parts of assorted animals with whose spirits the woman had been in close relationship.

It seems clear that shamanism was practiced as early as 30,000 years ago – the date assigned to the earliest known undisputed shaman burial site in what is now the Czech Republic. Many of the cave paintings from this time – such as the half-human half-animal images, and images of humans wearing animal masks – are suggestive of shamanic practices. And the bone flutes and drums made of animal skins found in the graves of shamans from this period are in keeping with the use of music to induce shamanic trances.

To this day, shamanism is strongest in societies that still rely on hunting and gathering. It was only as agricultural societies became established that the shaman’s role evolved into that of a priestly class and animism gave way to more institutionalised forms of religion.

One final feature of the worldview prevalent at this time is the emergence of the first goddesses. Along with the cave paintings of the Cro-Magnon people, we find an abundance of female figurines, naked and unadorned, carved of stone or of mammoth bone or ivory. The female form was far and away the chief object of sculpture for these cave dwellers. Although some anthropologists have suggested that they may depict actual women, or represent a kind of stone-age pornography, the wider consensus is that they point towards the mythic role of woman as a mother-goddess, experienced as the source and giver of life. Many of these figurines have been found pressed into the earth in sacred settings in household shrines. One, known as the Venus of Hohle Fels, was found in Germany and dated to 35,000 BCE. Made of mammoth tusk, it so emphasizes the vulva and breasts as to make it clear that this was a fertility amulet, almost certainly used in rituals of sympathetic magic to ensure the fertility of women and the land.

To the people of this time, nature was not only animated by spirits but clearly female – a fruitful mother-goddess who gave them life and all that was needed to sustain them. She would more and more come to be symbolised as the Great Goddess of whom these early Venus figurines were the forerunners – the Magna Mater who would become central in the worldviews still to emerge in the civilizations of ancient Egypt, Mesopotamia, and Crete.

Friday, September 24, 2010

The World of the Great Migrations

Here’s a quick recap of what the previous post called “The African Exodus”.
  • 60,000 years ago, a first wave of migrants journeyed from Africa to Australia, leaving pockets of population scattered en route along the coasts of Pakistan, India, and the islands of Southeast Asia.
  • 50,000 years ago, as this coastal clan reached China, a second wave left Africa, settled in the Levant,  and followed the steppe highway eastward.
  • 35,000 years ago, these steppe migrants reached southern Siberia and entered China from the north.
  • At the same time, a separate group of the steppe clan (the Cro-Magnon people) travelled west from central Asia into Europe. 
  • 20,000 years ago, the steppe clan that was still travelling northeast crossed the land bridge from Siberia to Alaska and then, 15,000 years ago, expanded south into North, Central, and South America.
  • Only 4000 years ago did descendants of the coastal clan who had settled in Southeast Asia venture across vast expanses of the Pacific to colonise Polynesia.
What was the life of these migrants like? What do we know of their lifestyle, their social structure, and their worldview?

Because they survived by hunting large animals, their lifestyle was chiefly nomadic. Those living near the ice in northern Europe followed the herds of reindeer and caribou. Those living in warmer climes hunted mammoths. It was dangerous work, but they were sufficiently skilled to hunt many large animal species to extinction. The Cro-Magnon people in Europe invented a lunar calendar with which to predict the migration of their prey animals. And as early as 30,000 years ago, both in Europe and Asia, our forbears entered a partnership with dogs. Still evolving from wolves, these animals helped in herding and bringing down game, and were rewarded by scavenging what their human partners left behind. Evidence of humans and dogs being buried together at a site in Israel 14,000 years ago is testimony to the deepening bond that subsequently developed and has continued between man and his best friend to this day.

Like their African ancestors, the migrants lived and moved about in small groups, finding shelter in caves and sometimes in purpose-built huts or pits dug into the earth. Rarely would one group encounter another. Mostly they had to deal only with other animals – many of them extremely dangerous. In coastal areas where they could rely on fishing, their lifestyle would have been somewhat more settled.

These nomadic or semi-nomadic groups were neither large nor complex. Kinship and proximity were the binding elements. There was no significant inequality or distinction of rank. Possessions were simple, with no real difference in wealth. War as we know it didn’t exist. While some division of labour by gender may have existed for the sake of efficiency in acquiring food, this was probably the most gender-equal time in all of history. Some women were very highly regarded, as indicated by the burial, 30,000 years ago in what is now the Czech Republic, of one who was a shaman. And it seems likely that family groups at this time followed a pattern of matrilineal descent.

A diet of meat, roots and fruit was supplemented with wild cereal grains as early as 23,000 years ago. Bananas and tubers may have been cultivated in a rudimentary form of horticulture even earlier. Nor were they slow to learn the secrets of getting high. Wine making and the consumption of hallucinogenic plants seem to have originated during this same period. The widespread use of cooking and other food-processing is reflected in a trend towards smaller teeth in humans over the last 100,000 years. Our face, jaw, and teeth today are about 10% less robust than was the case 10,000 years ago, and 25% less robust than 30,000 years ago.

The Cro-Magnon people brought with them to Europe an inventive genius previously unknown. Fishing nets, harpoons, the bow and arrow, spears that could be thrown, skilfully crafted stone blades, lamps fuelled with animal fat, tailored garments decorated with beads, sewing needles, fireplace cooking utensils, and a variety of containers, some made with wood – all these were in use 30,000 years ago.

Most impressive, however, is the art that decorated the walls of caves such as Chauvet Cave in France. Dated to 30,000 years ago, it is a veritable gallery of prehistoric art, depicting animals and humans, risky hunting scenes, creatures that are half-human and half-animal, as well as assorted symbolic shapes and patterns. There are also engravings, jewellery, animal carvings, sculptures made of bone and ivory, Venus figurines, and the oldest example of ceramic art – the Venus of Dolni Vestonice, dated to about 27,000 years ago. Artistic creations such as these reflect a new capacity for abstract thought, conceptual understanding, and spoken language. While these had been evolving gradually over time, the pace quickened during this Upper Paleolithic period. And with them would have come as well greater emotional capacities for intimacy and sympathetic response to the needs of others.

These cognitive and emotional developments, together with an awakening curiosity about the larger world and a willingness to venture into it at greater risk, correspond with what is typical of early school-age children (aged 4 – 6) – the stage of development that I call “The Curious Explorer”. It is what Jean Piaget described as the “conceptual pre-operations” stage (4-7 years), and the stage in which the key developmental task described by Erik Erikson is “Initiative versus Guilt” (3-6 years). During this stage, initiative adds the element of planning to the tasks we undertake. In learning new skills, the child is also learning to master the world around him. He learns to take initiative for the sake of achieving goals. And his growing courage and independence lead to more risky behaviour. Whether such behaviour is the child’s crossing a street on his or her own, or the early migrants venturing across open sea to Australia, it is classic exploratory behaviour that characterises this stage.

At the heart of these developments is an emerging sense of self. While still centered primarily on the body rather than being a mental-ego, the self is nonetheless now very much separate from the world and seems central to it. And with that sense of separateness comes a heightened awareness of our mortality and of the threats against which the self must now be defended. Sometime around age four in our individual development, and during this migratory era in our collective development, we awaken to the finiteness and vulnerability of our separate existence. No longer protected from the vision of our mortality, we start defending our now-separate self against death and do what we can to make it seem enduring and immortal. In short, we begin to devise the innumerable strategies of death-denial that distinguish our species, that shape our worldviews, and that lie near the heart of our religious beliefs and rituals.

Just how that was expressed for our ancestors during this particular period of our history I will put on hold until my next Post.

Wednesday, September 22, 2010

The African Exodus

Sixty thousand years ago, somewhere on the African coast of the Red Sea, a tribal band of probably not more than 150 people left Africa and headed east. Following the coasts of Arabia and India, they quickly reached southeast Asia, crossed 60 miles of open sea to Australia without any way of knowing that a hospitable landfall awaited them, and then proceeded to colonise central and east Asia. Considering that Homo sapiens, not long before, had been reduced almost to the point of extinction, it was a remarkable accomplishment to say the least.

What drove them to leave Africa after staying close to home for well over a hundred thousand years? We can only speculate. The developments in language and symbolic thought that had marked “the great leap forward” and were accelerating at the time of the African exodus would have contributed to their ability to navigate such a journey – as would their developing skill in producing more refined stone and bone tools. The last Ice Age was also accelerating, driving early humans out of Africa’s drought-stricken interior to the coastal areas where they had learned to gather food from the sea. Tools from this period found at coastal sites indicate that they could migrate over long distances along the coast of eastern Africa. There was no reason why they couldn’t do the same between continents. All they had to do was cross the narrow strait between present-day Djibouti and Yemen and they had relatively easy access to the endless beaches of southeast Asia.

Were they, like early school-age children, prompted by a growing confidence and curiosity to explore the world beyond what had become familiar? Or was the exodus just a gradual and unwitting expansion of range driven by local conditions? Certainly the speed with which they completed their migration to Australia suggests that they were driven by more than a careless meandering.

Human artefacts from the Northern Territory of Australia (where they landed after crossing from New Guinea) as well as at Lake Mungo (1000 kilometres west of Sydney) are clear evidence that humans were there 60,000 years ago. They would have arrived in sufficient numbers to start a breeding population and then found their way 2000 miles inland from Australia’s north coast to a lush oasis known as the Willandra Lakes. These presumably were descendants of the same people who left Africa, on the other side of the planet, at more or less the same time – 60,000 years ago. Whatever their route from Africa, it allowed for very rapid movement. Exactly how they got there, why they came, and what was driving them are questions we may never be able to answer. We do know, however, that Australia had to be colonised from elsewhere. It had been disconnected from the continents of Eurasia, Africa, and the Americas for 100 million years. So it missed out on the placental mammals, primates, and hominid species that evolution had delivered elsewhere – pursuing instead its own path of marsupial species like kangaroos. And since humans most assuredly did not evolve from kangaroos, they must have come from somewhere beyond Australia.

Then 10,000 years later, according to the genetic and archaeological evidence, a second wave of migrants followed their curiosity and probably herds of grazing antelope out of Africa – this time to the Middle East. We know the Sahara as the largest desert in the world and a distinctly inhospitable place. But during certain periods of early human history it was a relatively moist region that allowed for human habitation. This had been true from 100,000 to 80,000 years ago – and again for a few thousand years some 50,000 years ago. This would have opened a route along the Red Sea, down the Nile to the Mediterranean, and then eastward across the Sinai to the Levant. Or perhaps this second wave of migrants made their exit, as the first wave probably had, across the 20-kilometre-wide strait of Bab al Mandab into southern Arabia.

However they got there, this was the last substantial exchange between Africa and Eurasia for tens of thousands of years. Another period of glaciation, the Wisconsin Ice Epoch, between 40,000 and 15,000 years ago, turned the Sahara into desert again, effectively closing the door between the two continents and dividing the human world into its African and Eurasian constituents.

The path was now open, however, to the rest of the Eurasian continent and beyond. While the first wave of migrants had taken the southern coastal route of Pakistan and India, this second wave could travel a virtual highway of steppe from the Gulf of Aqaba to northern Iran and on into central Asia and Mongolia. So East Asia was settled by modern humans from both the south and north. Those who took the southern coastal route arrived as early as 50,000 years ago, while those who took the northern route probably entered about 15,000 years later from the steppes of southern Siberia.

About the same time that those on the northern route reached East Asia (i.e. 35,000 years ago), another group from that same route took advantage of a climatic window of opportunity to head west from the central Asian steppes into Europe – into the territory that, until then, had been the preserve of the Neanderthals. Just 5000 years later, they had so dominated the region that their Neanderthal cousins were all but extinct. Known as the Cro-Magnon people (named after the cave in southwest France where some of their bones were first unearthed), they mostly inhabited southern Europe and the Balkans during the depths of the Wisconsin Ice Epoch, and expanded northward during the post-glacial period. They were notable for their advanced culture – called the Aurignacian culture – characterised by a new artistic and inventive genius that found expression in skilfully crafted stone and bone tools, antler-tipped spears, bows and arrows, woven clothing, cooking utensils, musical instruments, and above all in spectacular cave paintings and ivory sculptures.

Meanwhile, those who were following the northern steppe highway eastward had already reached southern Siberia by 40,000 years ago. Their journey seems to have stalled there for the next 20,000 years as they adapted to the frighteningly harsh conditions of the Asian Arctic during the Wisconsin glaciation. So the earliest sites in northeast Siberia date from 20,000 years ago. Considering that they had only relatively recently come from their tropical homeland, one can only imagine the hardships they would have endured in this frozen wilderness. Having honed their hunting skills on the central Asian steppes, they survived mainly on large mammals such as musk ox, reindeer, and mammoths, and eventually followed these herds eastward across the land bridge that had been exposed between Siberia and Alaska. Sea levels had dropped so dramatically during the ice age that the Bering Strait had dried up, allowing these northern pioneers to live a dual Asian-American existence.

At first, it would have been impossible for them to expand southward. A continuous sheet of ice covered most of northern Canada and eastern Alaska, keeping them locked in their northern home. Only when the ice began to retreat some 15,000 years ago were they able to enter the North American plains. Initially it may only have been a few dozen, or a few hundred at most, who made the journey into what must have been happy hunting grounds beyond their wildest dreams – a vast grassland teeming with large grazing animals. Almost immediately their population exploded, and within a thousand years they had journeyed all the way to the tip of South America and driven 75% of all the large mammals in the Americas to extinction. So much for any romantic notion that our indigenous forbears were naturally eco-friendly!

The last chapter in our populating this planet was not written until much later – some 4000 years ago. It was then that those who had long since peopled the islands of southeast Asia, and over thousands of years become both agriculturalists and consummate seafarers, pursued their island-hopping curiosity far out into the Pacific into what would become Polynesia. Despite Thor Heyerdahl’s contention that the Polynesians originated in South America, the linguistic evidence seems clear: Hawaii was settled from southeast Asia. But for all that this last chapter was written relatively recently, it is no less heroic than the exploits of those who first undertook this exodus from Africa 60,000 years ago. Even the most direct route to Hawaii by island-hopping would have required at least two enormous sea passages of a few thousand miles – a voyage on which they would have had to take their own crops, confident in their ability to survive wherever they might find land again. What greater testimony could there be to the irrepressible curiosity of this relatively hairless hominid who left his African homeland so long ago!

Sunday, September 19, 2010

The Great Leap Forward

We seem to like very much the idea of making a “great leap forward.”  In 1958 Chairman Mao used the phrase to describe his plan to modernise China’s economy. And “New Age” enthusiasts like to think that humanity is now making a “quantum leap” in consciousness. But the original “great leap forward”, according to many anthropologists, occurred sometime between 75,000 and 50,000 years ago when Homo sapiens became a species driven by language and culture. An explosion in our capacity for symbolic thought and self-awareness, accompanied by breakthrough developments in spoken language, brought with it an accompanying explosion in cultural creativity.

Over what period of time these advancements took place remains a matter of debate among anthropologists. One theory holds that a leap into “behavioural modernity,” or what is sometimes called the Upper Paleolithic Revolution, occurred almost suddenly some 50,000 years ago – perhaps as a result of a genetic mutation or a reorganization of the brain that led to a major advance in language. Proponents of this theory, known as the “big bang” theory of human mental evolution, base their argument on the abundance of artefacts, such as artwork and bone tools, that appear in the archaeological record after 50,000 years ago – indicating, they suggest, that prior to this date Homo sapiens lacked the cognitive skills required to produce such artefacts. Jared Diamond, an evolutionary scientist at UCLA, contends that, prior to this time, there is little evidence of cultural change. But then, coinciding more or less with our exodus from Africa to colonise the world, there is a sudden flowering of tool-making, sophisticated weaponry, sculpture, cave painting, body ornaments, and long-distance trade.

An alternative theory known as the Continuity Theory holds that “behavioural modernity” has resulted from a gradual accumulation of knowledge, skills, and culture occurring over hundreds of thousands of years of human evolution. Advocates of this view, such as geneticist Stephen Oppenheimer, contend that evidence of modern behaviour can be found at a number of sites in Africa and the Levant from a much earlier time. A ritual burial with grave goods, for example, has been uncovered at Qafzeh in Israel and dated to 90,000 years ago. Continuity theorists believe that what appears to be a later technological revolution is probably the result of increased cultural exchange within a growing human population.

The truth may lie somewhere between the extremes of these two theories. From about 75,000 years ago there appears to have been a marked acceleration in the development of human language, cognition, and culture. The evidence for this consists primarily of artefacts found at Blombos Cave, 30 meters above the sea on the southern tip of South Africa. Here we find the earliest undisputed evidence of art in the form of bracelets, beads, rock art, and ochre used as body paint.

Beads made from the shells of tiny molluscs, dating from 76,000 years ago, were found in clusters. Pierced holes in the shells, together with smooth worn patches, suggest that the beads were strung together into necklaces or bracelets which may have rubbed against clothing. Blombos Cave is also famous for its abstract engravings on red ochre from the same time. Together with the beadwork, these engravings suggest that the inhabitants of the cave had a complex sense of symbolism and a sufficiently developed language to describe the symbolic meaning that the beads and engravings represent. Here was the first tangible evidence of advanced, abstract thought.

Why should our ancestors have gone out of their way to collect high-quality red iron oxides? The red ochre must have been culturally significant. At first it looks like any lump of pinkish rock, but look more closely and you see a cross-hatched pattern carefully etched onto its surface. The ochre itself, ground into pigment, has been called the first Stone Age lipstick – as if, almost suddenly, people wanted to paint their bodies. Coincident with this is evidence that clothing also originated in Africa 75,000 years ago. It would of course have been useful when Homo sapiens left Africa and ventured into colder climes – but that migration did not take place until some 15,000 years later. It would seem that our taste for jewellery, fashion, art and cosmetics all emerged at about the same time. But why? Was it all about sexual attraction and signaling one’s genetic fitness with rare adornments? Or was it evidence of prestige and status? Even in this egalitarian society, some people would be more successful than others, and they may have wanted to signal their success with prized material items. This could, in other words, be the first evidence of social ranking marked by material possessions.

There were also significant advances in tool-making at this time. The harpoon had been invented 90,000 years ago. Now, around 70,000 – 65,000 years ago, toolmakers used the earlier technique of heating silcrete to temperatures of about 450 degrees Fahrenheit to produce small stone blades and points, making possible the manufacture of lightweight bows and arrows and projectile spears.

Consider all these developments, along with the evidence of more permanent dwellings, hearths, and group living, and we begin to see the first signs of an organised society communicating through language, symbolism, and rituals. Whether such developments occurred abruptly or more gradually, it seems clear that there was a significant advance in human cognition and culture from 75,000 years ago, leading to the African exodus of 60,000 years ago. The question is “Why?”

Climate was almost certainly a factor. With the onset of a new Ice Age some 80,000 years ago, our relatively settled life on the African savannah was forced to change. By 70,000 years ago it was getting downright nippy in the northern hemisphere. Great sheets of ice were bearing down on what would later be Seattle and New York.  In Africa a 10-degree Celsius drop in the average world temperature, as well as the fallout from the eruption of a super-volcano in Sumatra, brought extensive drought to the interior, forcing early humans to coastal regions where they could survive on seafood. Genetic evidence, however, suggests that they nonetheless suffered a massive decline in population at this time – dwindling to as few as 2000 individuals. Homo sapiens was literally on the brink of extinction. The upside was that, in adapting to these new and difficult conditions, our species also became a whole lot smarter. The deep-freeze and drought may have been the catalyst for the Great Leap Forward, favouring intelligence and more complex social structures as life became more difficult.

It may also be that just a few small genetic mutations at this time gave us these amazing minds and the power of abstract conceptual thought. Whatever the trigger, none of these changes could have occurred without the development of language and the social networks that language makes possible. More specifically, the Great Leap Forward depended on our mastery of syntax – the ability to create multi-word sentences that are structured with a subject, verb, and object. How those parts of speech are arranged varies from one language to another. English is characterised by a subject-verb-object (SVO) syntax; an SOV structure (as in Japanese or Turkish) is at least as common worldwide; VSO and VOS are used by about 15% of languages; and OSV is the rarest of all. But whatever the structure, our ability to communicate complex meaning depends on our understanding and use of syntax. It’s what distinguishes human from ape communication.

Just why we should have crossed the syntax barrier at this point in our history remains unclear. It parallels the development of language in children and seems to require the maturation of certain brain structures. Children begin to speak by babbling. At about 12 months, they begin to use actual words. Over the next year there is a massive expansion of single-word vocabulary and the emergence of two-word sentences. Between two and three years of age, children begin to put together three-word sentences with syntax. This is the stage in individual development that corresponds with the Great Leap Forward in humanity’s development.

It was all necessary in order to make possible the next stage in our development when, some 60,000 years ago, we began to leave our African homeland and spread into Europe and Asia. During the next couple of thousand years we walked around the coast of South Asia and reached Australia. A later wave of expansion took us into the Middle East and then on into Europe, Asia, and the Americas. A species that had almost gone extinct rallied to populate the entire world. And what set it all in motion happened first in Africa – this Great Leap Forward that marked our initiation as modern humans.

Thursday, September 16, 2010

The World of Early Homo Sapiens

By “early Homo sapiens” I mean the first modern humans who emerged in the African Rift Valley about 200,000 years ago. They maintained a relatively stable existence on the grasslands and in coastal regions of Africa until about 75,000 years ago when a rapidly accelerating Ice Age forced them to adapt in ways that anthropologists call the Great Leap Forward. Then, some 60,000 years ago, in the Great Migration, they left Africa to colonise the world.

Given the sparseness of the archaeological record, little can be said about their worldview. Indeed, given their still primitive level of cognitive development, they would have had no conceptual worldview as we think of it today. The best we can do, based on what evidence we have, is guess at what their experience of the world might have been.

Although his direct ancestry remains unclear, it seems likely that Homo sapiens, like his Neanderthal cousins in Europe, evolved from Homo heidelbergensis. Three fossil skulls found in Ethiopia and dated to 160,000 years ago are the oldest human remains yet discovered. His average brain size of 1485 cc is almost 50% larger than that of Homo erectus and slightly smaller than that of the Neanderthals. His appearance is distinguished from other Homo species by his nearly vertical forehead, very much smaller or non-existent eyebrow ridges, smaller teeth, a prominent chin, and a more gracile skeleton.

There are indications that, sometime after 160,000 years ago, four separate groups travelled south to the Cape of Good Hope, southwest to the Congo Basin, west to the Ivory Coast, and northeast to the coast of the Red Sea. Then, about 125,000 years ago, a group travelled across the Sahara and up the Nile to the Levant. Human remains found at sites in present-day Israel indicate that we were there from at least 110,000 years ago. From the end of the Illinoian Ice Age 130,000 years ago until the onset of the Intermediate Ice Age 80,000 years ago, the Levant was effectively an extension of northern Africa, with similar climatic conditions and animals. It would have been natural and relatively easy to follow the animals out of Africa. But then, soon after 80,000 years ago, modern humans abruptly disappeared from these sites. The encroaching Ice Age turned the Levant and North Africa into extreme desert and killed off the animals on which humans had relied for thousands of years. Those who had left Africa during warmer and wetter times either died off  themselves or migrated back to Africa.

They were hunter-gatherers who lived in small bands – necessarily small since the game present in any region was limited – composed of a few family groups based on long-term monogamous relationships, with both parents caring for their children. Survival was less an individual thing than a group achievement. And because survival depended on cooperation and the equal distribution of food to everyone, the bands were egalitarian. There was no elite, no social stratification, and no formal leadership. Decision-making would have been consensual. Nor was there any formal division of labour. While women probably took greater responsibility for gathering and men for hunting, each member of the group would have been skilled at all tasks essential for survival. And there is evidence that those too old or infirm to carry their weight were cared for by the group.

Surviving in the open grasslands would not have been easy. Maintaining a fire throughout the night helped keep the big cats at bay, but it required intelligence and cooperation for humans to hunt game that was much faster and stronger than they. As early as 165,000 years ago they had discovered a new way of fashioning tools by heating silcrete to a high temperature in a fire’s embers to create more consistent and sharper stone flakes. These in turn made possible the invention of stone-tipped spears and harpoons. But still the hunting of big game presented a challenge. It seems to have been accomplished by running the prey to exhaustion and then closing in for the kill at close quarters. Homo sapiens had emerged as a relatively hairless creature that perspired, so humans could run for extended periods of time and still maintain their internal body temperature. Their large prey, while swifter over short distances, could not maintain that pace. Panting rather than perspiring, the animals needed to stop periodically to avoid overheating. Eventually the pursuing humans would run them to ground.

Although primarily nomadic as they followed game from one region to another, they maintained central campsites (hearths and shelters) as home bases. Because their population was sparse – estimated at only one person per square mile – a given band would have hunted an area of perhaps 60 square miles from a single home base before moving on. Organised violence between bands was rare. The low population density, the abundance of food resources, the lack of any reason to hoard food beyond the group’s immediate needs, the survival value of cooperation, and the advantages of collaboration on hunting expeditions probably all contributed to the relative peacefulness of this period in our otherwise war-ravaged history.

Their diet consisted primarily of meat, fish, shellfish, leafy vegetables, fruit, nuts and insects. With a gradual drying up of the African interior that began 120,000 years ago, humans were attracted more and more to coastal environments where they could migrate easily along the coast and make their living from the sea. The cooking of shellfish is evident as early as 164,000 years ago at a site called Pinnacle Point in South Africa; large dumps of clam and oyster shells, dating from 125,000 years ago, have been found in Eritrea on the eastern Horn of Africa; and large 6-foot-long catfish were being caught with barbed fishing points 90,000 years ago in what is now the Democratic Republic of the Congo.

Our now-extinct hominid ancestors began cooking their food at least 250,000 years ago and perhaps much earlier. Because cooked food, and especially cooked meat, delivers significantly more energy for less effort, it would have contributed to the growth and maintenance of our larger brains. It probably also contributed to our becoming more sociable as we brought food back to the central cooking area. We can imagine the band gathering at the end of the day, kindling a fire in the hearth both to cook their food and ward off animals, and then settling in around the fire to eat, laugh, sing, and enjoy their emerging ability to converse in spoken language, in a scene not unlike that which has been repeated countless times to the present day.

Archaeological evidence tells us more about the cosmology and religious practices of European Neanderthals than it does about those of contemporaneous Homo sapiens in Africa – but we may reasonably assume that they were similar. For both, it would be the animals – their nearest neighbours – that played a central role. Neanderthals’ veneration of the bear in mountainous sanctuaries is matched by evidence of animal worship in the Tsodilo Hills in the Kalahari desert. A giant rock resembling a python, and a secret chamber inside a cave there, are accompanied by broken spear points (dated to 70,000 BCE) that had been offered as a sacrifice. The python is still worshipped by present-day !Kung San hunter-gatherers who are descendants of the early humans who first devised the practice. Similarly, the discovery of “butchered” human bones at both European and African sites may point to a ritual post-mortem bone cleaning for presumably religious reasons. And one of the skulls found in Ethiopia has grooves cut into it in a manner suggesting that it was carried around after death – possibly as part of an ancestor-worshipping ritual and indicating a belief in some kind of afterlife.

It is unlikely that they were concerned about their own death. While memory would have delivered some notion of the past, their orientation in time would have been predominantly the simple present. With little sense of time or causality, they would not easily have imagined their own death.  Nor would they have been able to plan far ahead. They would have responded to their environment either immediately or after only a short delay.

Their language skills and cognitive development in general were probably like those of a 2 – 4-year-old child – what I call “the Innocent Nestling.” It corresponds to what Jean Piaget described as the “pre-conceptual pre-operations” stage, during which the child begins to use mental symbols to understand his world. By the end of this stage, vocabulary consists of about 200 words which the child can put together in 2 – 3-word phrases. He is beginning to understand the relationship between things, and has some notion of cause and effect. A gradually emerging sense of self finds expression in words such as “me” and “mine”. By the age of 4 the child has a clear sense of “I”, though it is still much more a body-ego than a mental-ego. All of this was probably true as well of early Homo sapiens.

The worldview at this stage, both for early humans and for the Innocent Nestling, can be described as animistic and magical. Animism is the belief that, like oneself, everything is conscious and animated by some life force. It makes for a dramatic universe filled with spirit-powers. Magical thinking arises from the tendency at this stage to confuse psychic and external reality – an inability to fully differentiate the mental image of an object from the object itself. In the world of magic one manipulates an external object by manipulating an image or symbol of that object – as in sticking pins into a Voodoo doll. So in primitive hunting rites, a man draws the animal in the sand before dawn and, when the first sun-ray touches the drawing, drives a spear into it. Later that day he slays the actual animal and performs a ritual dance in the evening. But in his magical world, the symbolic act and the actual killing of the animal are so connected as to be inseparable. One cannot happen without the other.

Even today we have not totally outgrown such magical thinking. Many people continue to engage in symbolic acts either to bring good luck or to ward off misfortune. And when the right words are intoned by the right person in a properly celebrated ritual, the symbols of bread and wine are still believed by millions to become the actual body and blood of Christ.

Wednesday, September 8, 2010

Was there a Garden of Eden?

The first chapters of the Bible describe our human origins as deriving from a single set of parents in a place called the Garden of Eden. That mythical account is not far removed from what anthropologists and geneticists are now telling us – though we are still far from consensus on the matter.

The name Homo sapiens (Latin for “wise man”) was coined in the mid-18th century by the Swedish botanist Linnaeus, who recognised that all humans were part of one species. What does that mean? Since the mid-20th century a species has been defined as an interbreeding (or potentially interbreeding) group of organisms. If two organisms can produce fertile offspring together, they belong to the same species.

But Linnaeus also believed that our Homo sapiens species was divided into distinct sub-species or races. And when evolutionary theory became popular in the 19th century, it was widely believed that these races – identified as African, Native American, East Asian, and European – had evolved at different times and in different places. The theory sat comfortably with Europeans. It meant that they were the most recent and therefore the most advanced race to evolve, while the others, especially the dark-skinned Africans and Native Americans, were downright primitive by comparison. It found further support when the philosopher Herbert Spencer coined the phrase “survival of the fittest” – (a phrase that was never used by Darwin) – and used it to justify the social divisions inherent in late-19th century Britain. So now there was scientific justification for believing not only that the Aryan race was superior to all others, but that people who occupied the top strata of society deserved to be there since they must be more “fit” than the mere peasants.

By the end of the 19th century this widespread belief had given rise to the eugenics movement – (“eugenics” means “good birth”) – which in 1907 gave birth in Britain to the Eugenics Education Society, the stated objective of which was to improve the gene pool of humanity through the selective breeding of “fit” individuals. Supported by the best scientific evidence of the time, the elite loved it. By the 1910s and 1920s it was being used in the U.S. as justification for the forced sterilization of people believed to be mentally subnormal. And from there it was only a short step to the Nazi death camps and the systematic extermination of Jews, gypsies, homosexuals, and other supposedly inferior groups.

In the 1960s, we were still finding scientific justification for such thinking. The American anthropologist Carleton Coon advanced the idea, very like that of Linnaeus 250 years earlier but now known as the “multi-regionalism hypothesis”, that there are five distinct human subspecies that evolved at different times into their present form from ancestral hominids. The basic idea is that ancient hominid species migrated out of Africa over the past two million years, established themselves in East Asia very early on, and then evolved in situ into modern-day humans – creating the different races in the process. Again, in Coon’s scheme, it was the African “Congoids” that appeared first and have remained trapped ever since in an evolutionary dead-end. And of course the dominance of the more recently evolved Europeans is a natural consequence of their genetic superiority.

Since then the debate has continued to rage. In 1987, an analysis of mitochondrial DNA declared that all modern humans descended from one African population within the last 140,000 years. It was a serious blow to multi-regionalism. But by 1992, that study had been largely discredited. Then, in late 2000, a Swedish study of mitochondrial DNA again seemed to demonstrate that all modern humans emerged from Africa within the past 100,000 years and came from a breeding stock of no more than 10,000 individuals.

It began to look like the multi-regionalism hypothesis was dead. Indeed, most scientists who have studied the matter now agree that all modern humans evolved in Africa within the past 200,000 years. Their direct ancestor was probably the species Homo heidelbergensis that appeared on the scene in Africa some 500,000 years ago and then migrated into Europe. Those that went to Europe later evolved into Neanderthal Man, while those that remained in Africa evolved into modern Homo sapiens. Geneticist Spencer Wells claims to have identified our ancestry even more specifically as Mitochondrial Eve – the African great-great-…-grandmother of us all who lived about 150,000 years ago.

Where did Mitochondrial Eve live? Where, in other words, was the Garden of Eden where our first parents originated? We don’t know specifically, but it was almost certainly somewhere in the Great Rift Valley. A recent genetic survey suggests that a region on the coast of southwest Africa near the Kalahari Desert, at the southern terminus of the Rift Valley, may be our place of origin. It is now homeland to the Bushmen or San people who represent a direct link back to our earliest ancestors. But at that time, the San occupied a much larger area that stretched from southern Africa up the east coast as far as present-day Ethiopia – so the Garden of Eden could have been anywhere within that Valley.

How did this one family of humanity develop distinctive physical traits among different geographic groups – the traits that we used to think represented separate “races”? Some 60,000 years ago, Homo sapiens began to leave Africa and spread across the globe to colonise the entire planet. The physical traits that distinguish modern geographic groups subsequently developed as an adaptation to local environmental conditions. So skin colour varies with the intensity of sunlight in a given region to ensure the necessary synthesis of Vitamin D in the skin. It has nothing to do with racial identity. We really are one family.

Or are we? In actual fact, the multi-regionalism hypothesis is far from dead. Where, for instance, did the aboriginal people of Australia come from? Spencer Wells and others argue that they migrated there from Africa. But there’s a problem. Although modern humans are thought not to have left Africa before 60,000 years ago, human remains dating to 62,000 years ago have been found at what was once Lake Mungo in the interior of Australia. Even if our dating is off by a bit, migrants from Africa would certainly have had to hustle in order to arrive first in Indonesia, then build ships and navigate a few hundred kilometres of open sea, and finally move more than 2000 kilometres inland from the northern Australian coast to Lake Mungo.

More than this, as recently as 2001, Alan Thorne and his colleagues at the Australian National University reported that mitochondrial DNA from the oldest of the Mungo residents was genetically distinct. That lineage is no longer found in living humans – yet it should be, if he were descended from the people who left Africa. And then Rosalind Harding, a population geneticist at Oxford, found two genetic variants that are common among Asians and the indigenous people of Australia, but hardly exist in Africa. These variant genes, she is certain, arose more than 200,000 years ago, not in Africa but in East Asia – long before Homo sapiens reached the region. Where then did the aboriginal people of Australia come from? Could it be, as Alan Thorne proposes, that human evolution has been continuous and that different strains of Homo sapiens evolved from Homo erectus – who was already in Indonesia, and only a sea voyage away from Australia, long before the Africans even began their migration eastward?

I, for one, would still like to believe that we are all one family that originated in an updated version of the Garden of Eden somewhere in Africa. But the supporting evidence is not nearly as clear as we might like to think. With any kind of luck, a revival of the multi-regionalism hypothesis will not also revive the racism that we are having such difficulty leaving behind. The moral of the story seems to be this: Be careful lest you too quickly embrace supposedly “hard evidence” to support your own personal prejudices and predilections.