Top Quotes: “Upheaval: Turning Points for Nations in Crisis” — Jared Diamond
Introduction
“It’s neither possible nor desirable for individuals or nations to change completely, or to discard everything of their former identities. The challenge, for nations as for individuals in crisis, is to figure out which parts of their identities are already functioning well and don’t need changing, and which parts are no longer working out and do need changing. Individuals or nations under pressure must take honest stock of their abilities and values. They must decide what of themselves still works, remains appropriate even under the new changed circumstances, and thus can be retained. Conversely, they need the courage to recognize what must be changed in order to deal with the new situation. That requires the individuals or nations to find new solutions compatible with their abilities and with the rest of their being. At the same time, they have to draw a line and stress the elements so fundamental to their identities that they refuse to change them.”
Finland
“In the century before WWI Finland was just an autonomous part of Russia, not an independent nation. It was poor and received little attention within Europe, and almost no attention outside Europe. At the outset of WWII, Finland was independent but still poor, with an economy still focused on agriculture and forest products. Today, Finland is known around the world for its tech and its industry and has become one of the world’s richest countries, with an average per-capita income comparable to that of Germany and Sweden. Its security rests on a glaring paradox: it is a liberal social democracy that for many decades maintained an excellent and trusting relationship with the communist former Soviet Union, and now with current autocratic Russia. That combination of features constitutes a remarkable example of selective change.”
“Viipuri used to be the second-largest city of Finland until it was ceded to the Soviet Union, along with one-tenth of the total area of Finland, after a ferocious war in the winter of 1939–1940, plus a second war from 1941 to 1944. In October 1939 the Soviet Union made territorial demands on four Baltic countries: Finland, Estonia, Latvia, and Lithuania. Finland was the only country that refused those demands, despite the Soviet Union having an enormous army and a population almost 50 times larger than that of Finland. The Finns nevertheless put up such a fierce resistance that they succeeded in preserving their independence, even though their nation’s survival remained in grave doubt through a series of crises lasting a decade. The heaviest casualties were incurred during the three peak periods evidenced by the tombstones in Helsinki’s largest cemetery, as the Soviet army closed in on Viipuri in February–March 1940, then as the Finns recaptured Viipuri in August 1941, and finally as the Soviet army advanced again upon Viipuri in the summer of 1944.
Finland’s death toll in the war against the Soviet Union was nearly 100,000, mostly men. To modern Americans and Japanese and non-Finnish Europeans, who remember the nearly instantaneous death tolls of 100,000 each in the bombings of single cities (Hiroshima and Hamburg and Tokyo), and the total war deaths of around 20 million each suffered by the Soviet Union and China during WWII, Finland’s death toll of just over 100,000 over the course of five years may seem modest. But it represented 2.5% of Finland’s then-total population of 3,700,000, and 5% of its males. That proportion is the same as if 9,000,000 Americans were to be killed in a war today: almost 10 times the total number of American deaths in all the wars of our 240-year history.”
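As a rough check of that population scaling (a back-of-envelope sketch only; the U.S. population figure of about 330 million is an assumption for illustration, not stated in the text):

```latex
% Back-of-envelope check of the Finland-to-U.S. scaling.
% Assumption: a present-day U.S. population of ~330 million (not given in the text).
\[
  \underbrace{\frac{330{,}000{,}000}{3{,}700{,}000}}_{\text{population multiplier}} \approx 90,
  \qquad
  100{,}000 \ \text{Finnish dead} \times 90 \approx 9{,}000{,}000 \ \text{American-equivalent dead}.
\]
```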
“Finland identifies with Scandinavia and is considered part of it. Many Finns are blue-eyed blonds, like Swedes and Norwegians. Genetically, Finns are in effect 75% Scandinavian, and only 25% invaders from the east. But geography, language, and culture make Finns different from other Scandinavians, and they are proud of those differences. As for geography, descriptions of Finland by Finns reiterate two themes: ‘We are a small country,’ and ‘Our geography will never change.’ By the latter phrase, Finns mean that Finland’s land border with Russia (or previously with the Soviet Union) is longer than that of any other European country. Finland is in effect a buffer zone between Russia and the rest of Scandinavia.
Out of the nearly 100 native languages of Europe, all are related members of the Indo-European language family except for the isolated Basque language and four others. Those four are Finnish, the closely related Estonian language, and the distantly related Hungarian and Lapp (Saami) languages, all of which belong to the Finno-Ugric language family.”
“All Finnish government jobs require passing exams in both the Finnish and the Swedish languages. My friend told me that if, in the 1950s, you made only a single mistake in choosing between the accusative case and the partitive case, you flunked the exam and couldn’t get a government job.”
“The Finnish language is distinctive, beautiful, a source of national pride, and spoken by almost no one other than Finns themselves. The Finnish language formed the core of the Finnish national identity for which so many Finns were willing to die in their war against the Soviet Union.”
“Speakers of a proto-Finnish language arrived in Finland in pre-historical times, several thousand years ago. In historical times, i.e. after the first detailed written accounts of Finland began to be recorded around 1100, possession of Finland was contested between Sweden and Russia. Finland remained mostly under Swedish control until it was annexed by Russia in 1809. For most of the 19th century, Russia’s tsars let Finland have much autonomy, its own parliament, its own administration, and its own currency, and they didn’t impose the Russian language. But after Nicholas II became tsar in 1894 and appointed as governor a nasty man called Bobrikov (assassinated by a Finn in 1904), Russian rule became oppressive. Hence toward the end of WWI, when the Bolshevik Revolution broke out in Russia in late 1917, Finland declared its independence.
The result was a bitter Finnish Civil War, in which conservative Finns called Whites, consisting of Finnish troops trained in Germany and assisted by German troops who landed in Finland, fought against communist Finns called Reds, as well as against Russian troops still stationed in Finland. When the Whites consolidated their victory in May 1918, they shot about 8,000 Reds, and a further 20,000 Reds died of starvation and disease while rounded up in concentration camps. As measured by percentage of a national population killed per month, the Finnish Civil War remained the world’s most deadly civil conflict until the Rwandan genocide of 1994. That could have poisoned and divided the new country — except that there was quick reconciliation, the surviving leftists received back their full political rights, and by 1926 a leftist had become Finland’s prime minister. But the memories of the civil war did stoke Finland’s fear of Russia and of communism — with consequences for Finland’s subsequent attitude toward the Soviet Union.
During the 1920s and 1930s Finland continued to be fearful of Russia, now reconstituted as the Soviet Union. Ideologically, the two countries were opposites: Finland a liberal capitalist democracy, the Soviet Union a repressive communist dictatorship. Finns remembered oppression by Russia under the last tsar. They were afraid that the Soviet Union would seek to re-acquire Finland, for example by supporting Finnish communists to subvert the Finnish government. They watched with concern Stalin’s reign of terror and paranoid purges of the 1930s. Of most direct concern to Finland, the Soviets were constructing airfields and railroad lines in sparsely populated areas of the Soviet Union east of the Finnish border. Those railroad lines included ones running toward Finland, ending in the middle of forest short of the border, and serving no conceivable purpose except to facilitate an invasion of Finland.
In the 1930s Finland began to strengthen its army and its defenses under its General Mannerheim, who had led the victorious White troops during the civil war. Many Finns volunteered to spend the summer of 1939 at work strengthening Finland’s main defense lines, called the Mannerheim Line, across the Karelian Isthmus, which separated southeastern Finland from Leningrad (today St. Petersburg), the nearest and second-largest Soviet city. As Germany re-armed under Hitler and became increasingly antagonistic to the Soviet Union, Finland tried to maintain a foreign policy based on neutrality, to ignore the Soviet Union, and to hope that no threat would materialize from that direction. The Soviet Union in turn remained suspicious of its bourgeois neighbor that had defeated the communist side during the Finnish Civil War with the aid of German troops.
Just as Finland had strong geographic and historical reasons for being concerned about the Soviet Union, the Soviet Union also had strong geographic and historical reasons for being concerned about Finland. The pre-WWII border between Finland and the Soviet Union lay only 30 miles north of Leningrad. German troops had already fought in Finland against communists in 1918; British and French troops had already entered the Gulf of Finland to blockade or attack Leningrad during the Crimean War of the 1850s; and France had financed a big fortress in Helsinki harbor in the 1700s to prepare for an attack on St. Petersburg. In the late 1930s, Stalin’s fear of Germany under Hitler was growing, for good reason. Communists and Nazis exchanged virulent propaganda. Hitler had written in his autobiography of his vision of Germany expanding to the east, i.e. into the Soviet Union. Stalin had watched Hitler’s Germany absorb Austria in March 1938, take over Czechoslovakia in March 1939, and begin to threaten Poland. France, Britain, and Poland rejected Stalin’s proposals to cooperate in the defense of Poland against the growing German threat.
In August 1939 Finland and the rest of the world were stunned to learn that Hitler and Stalin had abruptly called off their propaganda war and signed the German-Soviet Non-aggression Pact. The Finns suspected, correctly, that the pact included secret agreements dividing up spheres of influence, with Germany acknowledging that Finland belonged to the Soviet sphere. The signing of the pact was quickly followed by Germany’s blitzkrieg invasion of Poland, followed within a few weeks by the Soviet Union’s invasion of eastern Poland. Stalin understandably wanted to push the Soviet Union’s border as far westwards as possible, in order to anticipate the growing German threat.
In October 1939 the Soviet Union, still fearful of an eventual German attack, was eager to push its western border even farther westwards. With the temporary security offered by the non-aggression pact, the Soviet Union issued ultimata to its four Baltic neighbors: Lithuania, Latvia, and Estonia, plus Finland. From the Baltic Republics the Soviet Union demanded Soviet military bases on their soil, plus right of transit of Soviet troops to those bases. Although the stationing of Soviet troops obviously left the republics defenseless, the republics were so small that they saw resistance as hopeless, accepted the Soviet demands, and were unable to avoid annexation by the Soviet Union in June of 1940. Encouraged by that success, in early October 1939 the Soviet Union made two demands upon Finland. One was that the Soviet/Finnish border on the Karelian Isthmus be moved back farther from Leningrad, so that Leningrad could not be bombarded or quickly captured (e.g. by German troops stationed again in Finland as they already had been in 1918). While there was no risk of Finland itself attacking the Soviet Union, it was realistic to fear some major European power attacking the Soviet Union through Finland. The second Soviet demand was that Finland let the Soviet Union establish a naval base on Finland’s south coast near Helsinki, and cede some small islands in the Gulf of Finland.
Secret negotiations between Finland and the Soviet Union continued through the months of October and November of 1939. The Finns were willing to make some concessions, but not nearly as many as the Soviets wanted, even though Finland’s General Mannerheim urged the Finnish government to make more concessions because he knew the weakness of the Finnish army and (as a former lieutenant general in tsarist Russia’s army) understood the geographic reasons for the Soviet demands from the Soviet point of view. But Finns from all parts of the Finnish political spectrum — leftists and rightists, Reds and Whites in the civil war — were unanimous in refusing to compromise further. All Finnish political parties agreed with that refusal by their government, whereas in Britain in July 1940 there were leading British politicians in favor of compromising with Hitler in order to buy peace.
One reason for Finns’ unanimity was their fear that Stalin’s real goal was to take over all of Finland. They were afraid that giving in to supposedly modest Soviet demands today would make it impossible for Finland to resist bigger Soviet demands in the future. Finland’s giving up its land defenses on the Karelian Isthmus would make it easy for the Soviet Union to invade Finland overland, while a Soviet naval base near Helsinki would allow the Soviet Union to bombard Finland’s capital by land and by sea. The Finns had drawn a lesson from the fate of Czechoslovakia, which had been pressured in 1938 into ceding to Germany its Sudeten borderland with its strongest defense line, leaving Czechoslovakia defenseless against total occupation by Germany in March 1939.
Finns’ second reason for not compromising was their miscalculation that Stalin was only bluffing and would settle for less than what he was demanding. Correspondingly, Stalin also miscalculated and thought that the Finns, too, were only bluffing. Stalin could not imagine that a tiny country would be so crazy as to fight against a country with a population almost 50 times larger. Soviet war plans expected to capture Helsinki within less than two weeks. A third reason for Finns’ refusal to make further concessions was their miscalculation that countries traditionally friendly to Finland would help defend Finland. Finally, some Finnish political leaders calculated that Finland’s army could resist a Soviet invasion for at least six months, even though General Mannerheim warned them that that was impossible.
On November 30, 1939 the Soviet Union attacked Finland, claiming that Finnish artillery shells had landed in the Soviet Union and killed some Soviet soldiers. (Khrushchev later admitted that those shells had actually been fired by Soviet guns from inside the Soviet Union, under orders from a Soviet general who wanted to provoke war.) The war that followed is known as the Winter War. Soviet armies attacked along the whole length of the Finnish/Soviet border, and Soviet planes bombed Helsinki and other Finnish cities. The Finnish civilian casualties in that first night of bombing accounted for 10% of Finland’s total civilian war casualties during the entire five years of WWII. When Soviet troops crossed the Finnish border and captured the nearest Finnish village, Stalin immediately recognized a Finnish communist leader named Kuusinen as head of a so-called ‘democratic’ Finnish government, in order to give the Soviet Union the excuse that it was not invading Finland but just coming to the defense of ‘the’ Finnish government. The establishment of the puppet government helped convince any still-doubting Finns that Stalin really did want to take over their country.”
“Finland’s chances of defeating the Soviet Union were zero, if Stalin were determined to win. The world had already seen how quickly Poland, with a population 10 times that of Finland and far more modern military equipment, had been defeated within a few weeks by German armies half the size of the Soviet Union’s armies. Hence Finns were not so insane as to imagine that they could achieve a military victory. Instead, as a Finnish friend expressed to me, ‘Our aim was to make Russia’s victory as slow, as painful, and as costly for the Russians as possible.’ Specifically, Finland’s goal was to resist for long enough that the Finnish government would have time to recruit military help from friendly countries, and that Stalin would tire of the military costs to the Soviet Union.
To the great surprise of the Soviet Union and of the rest of the world, Finland’s defenses held. The Soviets’ military plan of attacking Finland along the entire length of their shared border included attacks on the Mannerheim Line across the Karelian Isthmus, plus attempts to ‘cut Finland at the waist’ by driving all the way across the middle of Finland at the country’s narrowest point. Against Soviet tanks attacking the Mannerheim Line, the Finns compensated for their deficiencies in anti-tank guns by inventing so-called ‘Molotov cocktails,’ which were bottles filled with an explosive mixture of gasoline and other chemicals, sufficient to cripple a Soviet tank. Other Finnish soldiers waited in a foxhole for a tank to come by, then jammed a log into the tank’s tracks to bring it to a stop. Daredevil individual Finnish soldiers then ran up to the crippled tanks, pointed their rifles into the cannon barrels and observation slits, and shot Soviet soldiers inside the tanks. Naturally, the casualty rate among Finland’s anti-tank crews was up to 70%.
What most won the admiration of world observers for the Finnish defenders was their success in destroying the two Soviet divisions that attacked Finland at its waist. The Soviets advanced with motor vehicles and tanks along the few roads leading from the Soviet Union into Finland. Small groups of Finnish soldiers mounted on skis, wearing white uniforms for camouflage against the snow, moved through the roadless forest, cut the Soviet columns into segments, and then annihilated one segment after another. A Finnish veteran described the tactics that he and his fellow soldiers had used in those winter battles. At night, Soviet soldiers who had parked their vehicles in a long column along a narrow one-lane forest road gathered around big bonfires to keep themselves warm. (Finnish soldiers instead stayed warm at night with small heaters in their tents, invisible from the outside.) My friend and his platoon skied through the forest, invisible in their white camouflage uniforms, to within firing range of a Soviet column. They then climbed nearby trees while carrying their rifles, waited until they could identify the Soviet officers in the light of the bonfire, shot and killed the officers, and then skied off, leaving the Soviets frightened, demoralized, and leaderless.
Why did the Finnish army prevail for so long in defending itself against the Soviet army’s overwhelming advantages of numbers and of equipment? One reason was motivation: Finnish soldiers understood that they were fighting for their families, their country, and their independence, and they were willing to die for those goals. For example, when Soviet forces were advancing across the frozen Gulf of Finland, which was defended only by small groups of Finnish soldiers on islands in the gulf, the Finnish defenders were told that there would be no means of rescuing them: they should stay on those islands and kill as many Soviets as possible before they themselves were killed, and they did. Second, Finnish soldiers were accustomed to living and skiing in Finnish forests in the winter, and they were familiar with the terrain on which they were fighting. Third, Finnish soldiers were equipped with clothing, boots, tents, and guns suitable for Finnish winters, but Soviet soldiers were not. Finally, the Finnish army, like the Israeli army today, was effective far out of proportion to its numbers, because of its informality that emphasized soldiers’ taking initiative and making their own decisions rather than blindly obeying orders.”
“Realistically, the only countries from which Finland had any hopes of receiving many troops and/or supplies were Sweden, Germany, Britain, France, and the U.S. Neighboring Sweden, although closely connected to Finland through long shared history and shared culture, refused to send troops out of fear of becoming embroiled in war with the Soviet Union. While Germany had sent troops to support Finnish independence and had long-standing ties of culture and friendship with Finland, Hitler was unwilling to violate the non-aggression pact by helping Finland. The U.S. was far away, and President Roosevelt’s hands were tied by the U.S. neutrality rules resulting from decades of American isolationist policies.
That left only Britain and France as realistic sources of help. They did eventually offer to send troops. But both were already at war with Germany, and that war was the overwhelming preoccupation of the British and French governments, which could not permit anything else to interfere with their goal. Germany was importing much of its iron ore from neutral Sweden. Much of that ore was being exported from Sweden across Norway by railroad to the ice-free Norwegian port of Narvik, and then by ship to Germany. What Britain and France really wanted was to gain control of the Swedish iron fields, and to interrupt the ship traffic from Narvik. Their offer to send troops across neutral Norway and Sweden to help Finland was just a pretext for achieving those true aims.
Hence while the British and French governments offered help to Finland in the form of tens of thousands of troops, it turned out that most of those troops would be stationed at Narvik and along the Narvik railroad and in the Swedish iron fields. Only a tiny fraction of those troops would actually reach Finland. Even the stationing of troops would of course require the permission of the Norwegian and Swedish governments, which were remaining neutral and refused permission.”
“In January 1940 the Soviet Union finally began to digest the lessons of its horrifying troop losses and military defeats in December. Stalin disowned the puppet Finnish government that he had set up under the Finnish communist leader Kuusinen. That meant that Stalin was no longer refusing to acknowledge the real Finnish government, which sent out peace feelers. The Soviets stopped wasting effort on their attempts to cut Finland at the waist, and instead assembled huge concentrations of troops and artillery and tanks on the Karelian Isthmus, where the open terrain favored the Soviets. Finnish soldiers had been fighting continually at the fronts for two months and were exhausted, while the Soviet Union could throw in unlimited fresh reserves. Early in February, Soviet attacks finally broke through the Mannerheim Line, forcing the Finns to retreat to their next and much weaker defense line. Although the other Finnish generals under Mannerheim begged him to retreat even further to a better defensive position, Mannerheim had iron nerves: despite the heavy casualties now being inflicted on the Finnish army, he refused to pull back further, because he knew that it was essential for Finland still to be occupying as much of its territory as possible at the time of the inevitable peace negotiations.
In late February 1940, when the exhausted Finns were finally ready for peace, the British and French still urged the Finns to hold out. The French prime minister, Daladier, urgently wired Finland that he had troops and 100 bomber planes ready to take off, and that he guaranteed to ‘arrange’ the passage of those troops by land across Norway and Sweden. That offer induced the Finns to keep fighting for another week, during which several thousand more Finns were killed.
But the British then admitted that Daladier’s offer was a deceitful bluff, that those troops and planes were not ready, that Norway and Sweden were still refusing passage to the offered troops, and that the French offer was being made merely to advance the Allies’ own aims and to save face for Daladier. Hence Finland’s prime minister led a Finnish delegation to Moscow for peace negotiations. At the same time, the Soviet Union maintained its military pressure on Finland by advancing upon Finland’s second-largest city of Viipuri, capital of the Finnish province of Karelia, with lots of Finnish casualties.
The conditions that the Soviet Union imposed in March 1940 were much harsher than the conditions that the Finns had rejected in October 1939. The Soviets now demanded the entire province of Karelia, other territory farther north along the Finland/Soviet border, and use of the Finnish port of Hanko near Helsinki as a Soviet naval base. Rather than remain in their homes under Soviet occupation, the entire population of Karelia, amounting to 10% of Finland’s population, chose to evacuate Karelia and withdrew into the rest of Finland. There, they were squeezed into rooms in apartments and houses of other Finns, until almost all of them could be provided with their own homes by 1945. Uniquely among the many European countries with large internally displaced populations, Finland never housed its displaced citizens in refugee camps.
Why, in March 1940, did Stalin not order the Soviet army to keep advancing and to occupy all of Finland? One reason was that the fierce Finnish resistance had made clear that a further advance would continue to be slow and painful and costly to the Soviet Union, which now had much bigger problems to deal with — namely, the problems of reorganizing its army and rearming to prepare for a German attack. The poor performance of the huge Soviet army against the tiny Finnish army had been a big embarrassment for the Soviet Union: about eight Soviet soldiers killed for every Finn killed. The longer a war with Finland went on, the higher was the risk of British and French intervention, which would drag the Soviet Union into war with those countries and invite a British/French attack on Soviet oil fields in the Caucasus. Some authors concluded that the harsh March 1940 peace terms demonstrate that the Finns should instead have accepted the milder terms of October 1939. But Russian archives opened in the 1990s confirmed Finns’ wartime suspicion: the Soviet Union would have taken advantage of those milder territorial gains and the resulting breaching of the Finnish defense line in October 1939 in order to achieve its intent of taking over all of Finland, just as it did to the three Baltic Republics in 1940. It took the Finns’ fierce resistance and willingness to die, and the slowness and cost of the war against Finland, to convince the Soviet Union not to try to conquer all of Finland in March 1940.”
“Hitler decided to attack the Soviet Union in 1941. At some point, German military planners began discussions with Finnish military planners about ‘hypothetical’ joint operations against the Soviet Union. While Finland had no sympathy with Hitler and Nazism, the Finns understood the cruel reality that it would be impossible for them to avoid choosing sides and to preserve their neutrality in a war between Germany and the Soviet Union: otherwise, one or both countries would seek to occupy Finland. Finland’s bitter experience of having to fight the Soviet Union alone in the Winter War made the prospect of repeating that experience worse than the alternative of an alliance of expedience with Nazi Germany — ‘the least awful of several very bad options.’ The poor performance of the Soviet army in the Winter War had convinced all observers — not only in Finland but also in Germany, Britain, and the U.S. — that a war between Germany and the Soviet Union would end with a German victory. Naturally, too, the Finns wanted to regain their lost province of Karelia. On June 22, 1941 Germany did attack the Soviet Union. Finland declared that it would remain neutral, but on June 25 Soviet planes bombed Finnish cities, giving the Finnish government the excuse that night to declare that Finland was once again at war with the Soviet Union.
The second war against the Soviet Union, following the first Winter War, is called the Continuation War. This time, Finland mobilized one-sixth of its entire population to serve in or work directly for the army: the largest percentage of any country during WWII. That’s as if the U.S. today were to reinstitute the draft and to build up an army of over 50 million. Serving directly in the armed forces were males 16–50, plus some women near the front line. All Finns of both sexes not actually in the armed forces, ages 15–64, had to work in a war industry, agriculture, forestry, or other sector necessary for defense. Teens worked in the fields, in sawmills, and in anti-aircraft defense.
With the Soviet army preoccupied in defending itself against the German attack, the Finns quickly reoccupied Finnish Karelia, and (more controversially) also advanced beyond their former border into Soviet Karelia. But Finland’s war aims remained strictly limited, and the Finns described themselves not as ‘allies’ but just as ‘co-belligerents’ with Nazi Germany. In particular, Finland adamantly refused German pleas to do two things: to round up Finland’s Jews (although Finland did turn over a small group of non-Finnish Jews to the Gestapo); and to attack Leningrad from the north while Germans were attacking it from the south. That latter refusal of the Finns saved Leningrad, enabled it to survive the long German siege, and contributed to Stalin’s later decision that it was unnecessary to invade Finland beyond Karelia.”
“Finally, after the Soviet Union had made sufficient progress on pushing German troops out of the Soviet Union that it felt able to divert attention to Finland, in June 1944 it launched a big offensive against the Karelian Isthmus. Soviet troops quickly broke through the Mannerheim Line, but (just as in February 1940) the Finns succeeded in stabilizing the front. The Soviet advance then petered out, partly because Stalin set a higher priority on using his army to reach Berlin from the east ahead of American and British armies advancing from the west; and partly because of the dilemmas already faced during the Winter War: the expected high costs of overcoming further Finnish resistance, of guerrilla warfare in Finland’s forests, and of figuring out what to do with Finland if and when the Soviet Union did succeed in conquering it. Thus, in 1944 as in 1940, Finnish resistance achieved the realistic goal expressed by my Finnish friend: not of defeating the Soviet Union, but of making further Soviet victories prohibitively costly, slow, and painful. As a result, Finland became the sole continental European country fighting in WWII to avoid enemy occupation.
After the battlefront re-stabilized in July 1944, Finland’s leaders again flew to Moscow to sue for peace and signed a new treaty. This time, Soviet territorial demands were almost the same as they had been in 1940. The Soviet Union took back Finnish Karelia and a naval base on the south coast of Finland. The Soviet Union’s only additional territorial acquisition was to annex Finland’s port and nickel mines on the Arctic Ocean. Finland did have to agree to drive out the 200,000 German troops stationed in northern Finland, in order to avoid having to admit Soviet troops into Finland to do that. It took Finland many months, in the course of which the retreating Germans destroyed virtually everything of value in the whole Finnish province of Lapland. My Finnish hosts were still bitter that their former German allies had turned on Finland and laid waste to Lapland.
Finland’s total losses against the Soviets and the Germans in the two wars were about 100,000 men killed. In proportion to Finland’s population then, that’s as if 9 million Americans were killed in a war today. Another 94,000 Finns were crippled, 30,000 Finnish women were widowed, 55,000 Finnish children were orphaned, and 615,000 Finns lost their homes. That’s as if a war resulted in 8 million Americans being crippled, 2.5 million American women being widowed, 5 million American children being orphaned, and 50 million Americans losing their homes. In addition, in one of the largest child evacuations in history, 80,000 Finnish children were evacuated (mainly to Sweden), with long-lasting traumatic consequences extending to the next generation. Today, daughters of those Finnish mothers evacuated as children are twice as likely to be hospitalized for a psychiatric illness as are their female cousins born to non-evacuated mothers. The Soviet Union’s much heavier combat losses against Finland were estimated at about half-a-million dead and a quarter-of-a-million wounded. That Soviet death toll includes the 5,000 Soviet soldiers taken prisoner by the Finns and repatriated after the armistice to the Soviet Union, where they were immediately shot for having surrendered.
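The same multiplier of roughly 90 reproduces each of the American-equivalent figures quoted above (again a back-of-envelope sketch, assuming a U.S. population of about 330 million, which the text does not state):

```latex
% Each Finnish figure scaled by the ~90x population multiplier (330M / 3.7M);
% the U.S. population is an assumption for illustration.
\[
  94{,}000  \times 90 \approx 8{,}500{,}000  \quad (\text{quoted as ``8 million'' crippled}), \qquad
  30{,}000  \times 90 \approx 2{,}700{,}000  \quad (\text{``2.5 million'' widowed}),
\]
\[
  55{,}000  \times 90 \approx 5{,}000{,}000  \quad (\text{``5 million'' orphaned}), \qquad
  615{,}000 \times 90 \approx 55{,}000{,}000 \quad (\text{quoted as ``50 million'' homeless}).
\]
```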
The armistice treaty required Finland ‘to collaborate with the Allied powers in the apprehension of persons accused of war crimes.’ The Allied interpretation of ‘Finnish war criminals’ was: the leaders of Finland’s government during Finland’s wars against the Soviet Union. If Finland hadn’t prosecuted its own government leaders, the Soviets would have done so and imposed harsh sentences, probably death sentences. Hence Finland felt compelled to do something that in any other circumstance would have been considered disgraceful: it passed a retroactive law, declaring it illegal for its government leaders to have defended Finland by adopting policies that were legal and widely supported under Finnish law at the time that those policies were adopted. Finnish courts sentenced to prison Finland’s wartime president Ryti, its wartime prime ministers, its wartime foreign minister, and four other ministers plus its ambassador to Berlin. After those leaders had served out their sentences in comfortable special Finnish prisons, most of them were voted or appointed back into high public positions. The peace treaty required Finland to pay heavy reparations to the Soviet Union: $300 million, to be paid within six years. Even after the Soviet Union extended the term to eight years and reduced the amount to $226 million, that was still a huge burden for the small and un-industrialized Finnish economy. Paradoxically, though, those reparations proved to be an economic stimulus, by forcing Finland to develop heavy industries such as building ships and factories for export. (The reparations thereby exemplify the etymology of the Chinese word ‘wei-ji,’ meaning ‘crisis,’ which consists of the two characters ‘wei,’ meaning ‘danger,’ and ‘ji,’ meaning ‘opportunity.’) That industrialization contributed to the economic growth of Finland after the war, to the point where Finland became a modern industrial country (and now a high-tech country) rather than (as formerly) a poor agricultural country.
In addition to paying those reparations, Finland had to agree to carry out much trade with the Soviet Union, amounting to 20% of total Finnish trade. Finland’s imports from the Soviet Union consisted especially of oil. That proved to be a big advantage for Finland, because it didn’t share the dependence of the rest of the West on Middle Eastern oil supplies. But, as part of its trade agreement, Finland also had to import inferior Soviet manufactured goods, such as locomotives, nuclear power plants, and autos, which could otherwise have been obtained more cheaply and with much higher quality from the West.”
“Finns refer to the years 1945–1948 as ‘the years of danger.’ In retrospect, we know that Finland survived, but during those years that happy outcome seemed uncertain. The foremost danger was that of a communist takeover, through domestic communist subversion supported by the Soviet Union. Paradoxically for a democratic country that had been fighting for its survival against the communist Soviet Union, Finland’s Communist Party and its allies won a quarter of the seats in the March 1945 free elections for Finland’s parliament, and they tried to take over the police force. The Soviet Union had already occupied East Germany, was in the process of engineering communist takeovers of four Eastern European countries (Poland, Hungary, Bulgaria, and Romania), engineered a successful coup in Czechoslovakia, and supported an unsuccessful guerrilla war in Greece. Would Finland be next? The cost of reparations to the Soviet Union represented a heavy burden on the still largely agricultural, not-yet-industrialized Finnish economy. War had destroyed Finland’s infrastructure: farms had been neglected, manufacturing facilities had fallen into disrepair, two-thirds of Finland’s shipping fleet had been destroyed, and trucks were worn out, without spare parts, and reduced to burning wood instead of gasoline. Hundreds of thousands of displaced Karelians, crippled Finns, orphans, and widows required housing, money, and emotional support from those Finnish families that remained intact and healthy. Tens of thousands of Finnish children who had been evacuated to Sweden were returning, having been traumatized, forgotten their Finnish language, and nearly forgotten their parents during their years in exile.
In those years of danger, Finland devised a new post-war policy for averting a Soviet take-over. That policy became known as the Paasikivi-Kekkonen line, after Finland’s two presidents who formulated, symbolized, and rigorously implemented it for 35 years (1946–1981). The Paasikivi-Kekkonen line reversed Finland’s disastrous 1930s policy of ignoring Russia. Paasikivi and Kekkonen learned from those mistakes. To them, the essential painful realities were that Finland was a small and weak country; it could expect no help from Western allies; it had to understand and constantly keep in mind the Soviet Union’s point of view; it had to talk frequently with Russian government officials at every level, from the top down; and it had to win and maintain the Soviet Union’s trust, by proving to the Soviet Union that Finland would keep its word and fulfill its agreements. Maintaining the Soviet Union’s trust would require bending over backwards by sacrificing some of the economic independence, and some of the freedom to speak out, that strong unthreatened democracies consider inalienable national rights.
Both Paasikivi and Kekkonen knew the Soviet Union and its people very well. Paasikivi concluded that Stalin’s driving motivation in his relationship with Finland wasn’t ideological but strategic and geopolitical: i.e. the Soviet Union’s military problem of defending its second-largest city (Leningrad / St. Petersburg) against further possible attacks via Finland or via the Gulf of Finland, as had already happened in the past. If the Soviet Union felt secure on that front, Finland would be secure. But Finland could never be secure as long as the Soviet Union felt insecure. More generally, conflict anywhere in the world could make the Soviet Union uneasy and prone to place demands on Finland, so Finland had to become active in world peace-keeping. Paasikivi, and then Kekkonen, were so successful in developing a trusting relationship with Stalin, and then with Khrushchev and with Brezhnev, that, when Stalin was once asked why he had not tried to maneuver the Communist Party into power in Finland as he had in every other Eastern European country, he answered, ‘When I have Paasikivi, why would I need the Finnish Communist Party?’”
“The concrete pay-offs from Finland’s adherence to the Paasikivi-Kekkonen line have consisted of what the Soviet Union (and today, Russia) has and hasn’t done to Finland during the past 70 years. It hasn’t invaded Finland. It didn’t engineer a takeover of Finland by the Finnish Communist Party when that party existed. It did reduce the amount and extend the period of the war reparations that Finland owed and paid off to the Soviet Union. In 1956 it did evacuate its naval base and did withdraw its artillery on the Finnish coast at Porkkala, just 10 miles from Helsinki. It did tolerate Finland’s increasing its trade with the West and decreasing its trade with the Soviet Union, Finland’s association with the European Economic Community, and Finland’s joining the European Free Trade Association. It was fully within the Soviet Union’s power to do, not to do, or to forbid most of those things. The Soviet Union would never have behaved as it did if it had not trusted and felt secure with Finland and with Finland’s leaders.”
“In its foreign relations Finland constantly walked a tightrope between developing its relations with the West and retaining Soviet trust. To establish that trust immediately after the Continuation War in 1944, Finland fulfilled on time all the conditions of its armistice and subsequent peace treaty with the Soviet Union. That meant driving German troops out of Finland, conducting war crimes trials against Finland’s own wartime leaders, legalizing the Finnish Communist Party and bringing it into the government while preventing it from taking over Finland, and punctually paying its war reparations to the Soviet Union, even though that involved individual Finns contributing their jewelry and gold wedding rings.
In expanding its Western involvement, Finland made efforts to reduce chronic Soviet suspicion that Finland might become economically integrated into the West. For instance, Finland found it prudent to refuse the U.S.’ offer of badly needed Marshall Plan aid. While reaching agreements with or joining the Western European associations EEC and EFTA, Finland simultaneously made agreements with Eastern European communist countries, guaranteed most-favored-nation status to the Soviet Union, and promised the Soviet Union the same trade concessions that Finland was making to its EEC partners.
At the same time as Western countries were Finland’s major trade partners, Finland became the Soviet Union’s second-leading Western trade partner (after West Germany). Container shipments through Finland were a major route for Western goods to be imported into the Soviet Union. Finland’s own exports to the Soviet Union included ships, icebreakers, consumer goods, and materials to build entire hospitals, hotels, and industrial towns. For the Soviet Union, Finland was its major source of Western tech and its major window onto the West. The result was that the Soviets no longer had any motivation to take over Finland, because Finland was so much more valuable to the Soviet Union independent and allied with the West than it would have been if conquered or reduced to a communist satellite.
Because Soviet leaders trusted Presidents Paasikivi and Kekkonen, Finland chose not to turn over its presidents as in a normal democracy but maintained those two in office for a total of 35 years. Paasikivi served as president for 10 years until just before his death at 86, while his successor Kekkonen served for 25 years until failing health compelled him to resign at age 81. When Kekkonen visited Brezhnev in 1973 at the time of Finland’s negotiations with the EEC, Kekkonen defused Brezhnev’s concerns by giving Brezhnev his personal word that Finland’s EEC relationship wouldn’t affect Finland’s relationship with Russia. Finland’s parliament then enabled Kekkonen to fulfill that promise, by adopting an emergency law to extend his term for another four years, thereby postponing the scheduled 1974 election.
Finland’s government and press avoided criticizing the Soviet Union and practiced voluntary self-censorship not normally associated with democracies. For example, when other countries condemned the Soviet invasions of Hungary and Czechoslovakia and the Soviet war against Afghanistan, the Finnish government and press remained silent.”
“When a Finnish newspaper in 1971 did offend the Soviet Union by stating (truthfully) that the Baltic Republics were occupied by the Soviet Union in 1939, a Soviet newspaper denounced the statement as a bourgeois attempt to disrupt neighborly relations between Finland and the Soviet Union, and the Soviet foreign minister warned Finland that the Soviet Union expected the Finnish government to prevent such incidents in the future. The Finnish government obliged by calling on the Finnish press to exercise more ‘responsibility,’ i.e. to self-censor such potentially offensive statements.
Finland’s tightrope act served to combine independence from the Soviet Union with economic growth. In this respect, too, Finland as a small country has had to face realities: today’s 6 million Finns will never develop the economic advantages of scale enjoyed by 90 million Germans or 330 million Americans. Finland will never succeed in economic spheres dependent on a low standard of living and the resulting ability to pay workers the low wages still widespread outside Europe and North America. By world standards, Finland will always have few workers, who will always expect high wages. Hence Finland has had to make full use of its available workforce, and to develop industries earning high profits.
In order to make productive use of its entire population, Finland’s school system aims to educate everyone well, unlike the U.S. school system, which now educates some people well but not others. Finnish schoolteachers go through a very competitive selection process, are drawn from bright high school and university students, enjoy high status (even more than university teachers!), are well paid, all have advanced degrees, and have lots of autonomy in how they teach. As a result, Finnish students score at or near the top of world national rankings in literacy, math, and problem-solving abilities. Finland gets the best out of its women as well as out of its men: it was the second country in the world (after New Zealand) to extend the vote to women, and it was one of the first to have a female president. Finland even gets the best out of its police: again astonishingly to Americans, Finnish police have to have a university degree, are trusted by 96% of Finns, and almost never use their guns. Last year, Finnish police on duty fired only six shots, five of them just warning shots: that’s fewer than an average week of police gunshots in LA.
That strong focus on education yields a productive workforce. Finland has the world’s highest percentage of engineers in its population. It’s a world leader in tech.”
“Finland’s combined private and government investment in research and development equals 3.5% of its GDP, almost double the level of other European Union countries, and (along with the percentage of its GDP spent on education) close to the highest in the world. The result of that excellent educational system and those high investments in research and development is that, within half-a-century, Finland went from being a poor country to being one of the richest in the world. Its average per-capita income is now comparable to that of France, Germany, and the UK, which have populations over 10 times that of Finland and have been rich for a long time.”
“Finland’s foreign policy toward the Soviet Union has of necessity been byzantinely complex. The end result is that, in the 70 years since the end of WWII, Finland has come no closer to becoming a Soviet or (now) a Russian satellite. Instead, it has succeeded in steadily increasing its ties with the West while still maintaining good ties with Russia. At the same time, Finns know that life is uncertain, and so military service is still compulsory for Finnish men and voluntary for women. Training lasts up to a year and is rigorous, because Finland expects that its soldiers must really be able to fight. After that year of training, Finns are called up for reserve duty every few years until age 30–35 or older.”
“Finland illustrates well our theme of selective change and building a fence. In its eventual response (after Sept 1944) to the Soviet attack, Finland reversed its long-standing previous policy of trying to ignore and not deal with the Soviet Union. It adopted a new policy of economic involvement and frequent political discussions with the Soviet Union. But those changes were highly selective, because Finland remained unoccupied, politically self-governing, and a socially liberal democracy. That coexistence of two seemingly contrasting identities, one changed and the other unchanged, has puzzled and angered many non-Finns, who coined the scornful term ‘Finlandization’ and implied that Finland could and should have done something different.
Finland exhibits outstandingly strong national identity — much more than someone unfamiliar with Finland would have expected of such a small country that otherwise seems typically Scandinavian. Finland’s national identity and belief in Finland’s uniqueness have arisen especially from its beautiful but unique and difficult language.”
“Finland illustrates willingness to tolerate initial failure, and to persist in experimenting with solutions to a crisis until it finds a solution that works. When the Soviet Union issued its demands to Finland in October 1939, Finland did not respond by offering the economic and political involvement that it eventually adopted. Even if Finland had made such an offer then, Stalin would probably have refused the offer; it required Finland’s ferocious resistance in the Winter War to convince Stalin to leave Finland independent. Instead, from 1944 onwards, when Finland recognized the failure of its pre-war policy of ignoring the Soviet Union and of its wartime policy of seeking a military solution, Finland went through a long and almost uninterrupted period of experimentation in order to discover how much economic and political independence it could retain, and what it had to do to satisfy the Soviet Union in return.
Finland illustrates flexibility born of necessity. In response to Soviet fears and sensitivities, Finland did things unthinkable in any other democracy: it put on trial and imprisoned its own wartime leaders according to a retroactive law; its parliament adopted an emergency decree to postpone a scheduled presidential election; a leading presidential candidate was induced to withdraw his candidacy; and its press self-censored statements likely to offend the Soviet Union. Other democracies would consider these actions as disgraceful. In Finland those actions instead reflected flexibility: sacrificing sacred democratic principles to the extent required to retain political independence, the principle held most sacred.
Finland’s history illustrates belief in a non-negotiable core value: independence and not being occupied by another power. Finns were prepared to fight for that core value, even though they thereby risked mass death. Fortunately for Finns, they survived and also retained their independence.”
“The factor that initially hindered and subsequently favored Finland’s crisis resolution was lack of national consensus about the crisis, and then achievement of consensus. Throughout the 1930s Finland largely ignored the impending crisis with the Soviet Union, and then in 1939 miscalculated that Stalin’s demands were partly a bluff. From 1944 onwards there was instead a consensus, formulated as the Paasikivi-Kekkonen line, that the Finnish government had to talk frequently with Soviet political leaders and learn to see things from the Soviet point of view.
The three factors favorable to crisis solution that Finland conspicuously lacked, and for whose lack Finland had to compensate in other ways, were support from allies, available models, and freedom from geopolitical restraints. Of the nations discussed in this book, none received less support from allies than did Finland: all of Finland’s traditional and potential friends refused to provide the substantive help for which Finland had been hoping.”
Meiji-Era Japan
“Big differences coexist with big similarities between Japanese and American/European societies. In alphabetical order without trying to rank them in importance, some of the differences that people identify involve: apologizing (or not apologizing), the difficulty of learning to read and write, enduring hardships silently, extensive socializing with prospective business clients, extreme politeness, feelings toward foreigners, openly misogynous behavior, patient/doctor communication, pride in beautiful penmanship, reduced individualism, relations with parents-in-law, standing out as different from other people, the status of women, talking directly about feelings, unselfishness, ways of disagreeing with other people — and many other features.
All of those differences are legacies of traditional Japan, coexisting with Western influences on modern Japan. That mixing began with a crisis exploding on July 8, 1853, and accelerated with the Meiji Restoration of 1868, when Japan embarked on a program of selective change that extended over half-a-century. Meiji-Era Japan is perhaps the modern world’s outstanding example of selective national change, and of using other nations as models. Like Finland’s crisis, Japan’s began abruptly with a foreign threat (but not with an actual attack). Like Finland, Japan exhibited outstanding honest self-appraisal, and patience at experimenting with different solutions until it found ones that worked. Unlike Finland, Japan adopted much more comprehensive selective changes and enjoyed greater freedom of action. Hence Japan in the Meiji Era offers a good case study to pair with our discussion of Finland.”
“Before the Meiji Restoration, Japan’s actual ruler was a hereditary military dictator called the shogun, while the emperor was a figurehead without real power. Between 1639 and 1853, the shoguns limited Japanese contact with foreigners, thereby continuing a long Japanese history of isolation arising from the effects of Japan’s island geography.
The British and Japanese archipelagoes appear to be geographical equivalents of each other off Eurasia’s east and west coasts, respectively. Japan and Britain look roughly similar in area, and both lie near the Eurasian continent, so one would expect similar histories of involvement with the continent. In fact, since the time of Christ, Britain has been successfully invaded from the continent four times, Japan never. Conversely, Britain has had armies fighting on the continent in every century since the Norman Conquest of AD 1066, but until the late 19th century there were no Japanese armies on the continent except during two brief periods. Already during the Bronze Age over 3,000 years ago, there was vigorous trade between Britain and mainland Europe; British mines in Cornwall were the main source of tin for making European bronze. A century or two ago, Britain was the world’s leading trading nation, while Japanese overseas trade still remained small. Why do these huge differences apparently contradict straightforward geographic expectations?
The explanation for that contradiction involves important details of geography. While Japan and Britain look at a glance similar in area and isolation, Japan is actually five times farther from the continent (110 vs. 22 miles), and 50% larger in area and much more fertile. Hence Japan’s population today is more than double Britain’s, and its production of land-grown food and timber and in-shore seafood is higher. Until modern industry required importation of oil and metals, Japan was largely self-sufficient in essential resources and had little need for foreign trade — unlike Britain. That’s the geographic background to the isolation that characterized most of Japanese history, and that merely increased after 1639.
Europeans first reached China and Japan by sea in 1514 and 1542, respectively. Japan, which had already been doing some trade with China and Korea, then began trading with the Portuguese, Spanish, Dutch, and British. That did not consist of direct trade between Japan and Europe, but instead of trade at settlements on the Chinese coast and elsewhere in Southeast Asia. Those European contacts affected spheres of Japanese society ranging from weapons to religion. When the first Portuguese adventurers reaching Japan in 1542 shot ducks with their primitive guns, Japanese observers were so impressed that they avidly developed their own firearms, with the result that by 1600 Japan had more and better guns than any other country in the world. The first Christian missionaries arrived in 1549, and by 1600 Japan had 300,000 Christians.
But the shoguns had reasons to be concerned about European influence in general, and about Christianity in particular. Europeans were accused of meddling in Japanese politics, and of supplying weapons to Japanese rebels against the Japanese government. Catholics preached intolerance of other religions, disobeyed Japanese government orders not to preach, and were perceived as loyal to a foreign ruler (the Pope). Hence after crucifying thousands of Japanese Christians, between 1636 and 1639 the shogun cut most ties between Japan and Europe. Christianity was banned. Most Japanese were forbidden to travel or live overseas. Japanese fishermen who drifted to sea, got picked up by European or American ships, and managed to return to Japan were often kept under house arrest or forbidden to talk about their experiences overseas. Visits by foreigners to Japan were banned except for Chinese traders confined to one area of the port city of Nagasaki, and Dutch traders confined to Deshima Island in Nagasaki harbor. (Because those Dutch were Protestants, they were considered non-Christian by Japan.) Once every four years, those Dutch traders were ordered to bring tribute to the Japanese capital, traveling by a prescribed route under watchful eyes. Some Japanese domains did succeed in continuing to trade with Korea, China, and the Ryukyu Islands, the archipelago several hundred miles south of Japan that includes Okinawa. Intermittent Korean trade visits to Japan were disguised to Japanese audiences as visits tolerated to receive Korean ‘tribute.’ But all of those contacts remained limited in scale.
The small trade between the Netherlands and Japan was economically negligible. Instead, its significance to Japan was that those Dutch traders became an important source of information about Europe. Among the courses of instruction offered by Japanese private academies were so-called ‘Dutch studies.’ Those classes taught information acquired from the Netherlands about practical and scientific subjects: especially Western medicine, astronomy, maps, surveying, guns, and explosives. Within the Japanese government’s Bureau of Astronomy was an office devoted to translating Dutch books on those subjects into Japanese. Much information about the outside world (including Europe) also came to Japan via China, Chinese books, and European books translated into Chinese.
In short, until 1853 Japan’s contact with foreigners was limited, and was controlled by the Japanese government.”
“Japan in 1853 was very unlike Japan today, and even unlike Japan in 1900, in important ways. Somewhat like medieval Europe, Japan in 1853 was still a feudal hierarchical society divided into domains, each controlled by a lord called a daimyo, whose power exceeded that of a medieval European lord. At the apex of power stood the shogun, of the Tokugawa line of shoguns that had ruled Japan since 1603, and that controlled one-quarter of Japan’s rice-growing land. Daimyo required the shogun’s permission to marry, move, or erect or repair a castle. They were also required, in alternate years, to bring their retainers and take up residence at the shogun’s capital, at great expense to themselves. Besides the resulting tension between the shogun and the daimyo, other problems in Tokugawa Japan arose from the growing gap between the shogun’s expenses and his income, increasingly frequent rebellions, urbanization, and the rising merchant class. But the Tokugawa shoguns had coped with those problems and had remained in power for 250 years, and were at no imminent risk of being overthrown. Instead, the shock that led to their overthrow was the arrival of the West.
The background to Western pressure on Japan was Western pressure on China, which produced far more goods desired by the West than did Japan. European consumers especially wanted Chinese tea and silk, but the West produced little that China wanted in return, so Europeans had to make up that trade deficit by shipping silver to China. In order to reduce the hemorrhaging of their silver stocks, British traders got the bright idea of shipping cheap opium from India to sell to China at prices below those of existing Chinese sources. The Chinese government understandably responded by denouncing opium as a health hazard, banning its importation, and demanding that European smugglers surrender all the opium stored on their ships anchored off China’s coast. Britain objected to that Chinese response as an illegal restraint of trade.
The result was the Opium War of 1839–1842 between Britain and China, the first serious test of military strength between China and the West. Although China was far larger and more populous than Britain, it turned out that Britain’s navy and army were far better equipped and trained than China’s. Hence China was defeated and forced into humiliating concessions, paying a large indemnity, and signing a treaty that opened five Chinese ports to British trade. France and the U.S. then extracted the same concessions from China.
When the Japanese government learned of these developments in China, it feared that it would be only a matter of time until some Western power demanded a similar treaty port system in Japan. It did happen, in 1853, and the Western power responsible was the U.S. The reason why the U.S., among Western powers, became motivated to act first against Japan was its conquest of California from Mexico in 1848, accompanied by the discovery there of gold, which caused an explosion of American ship traffic to the Pacific coast. Sailings of American whaling and trading ships around the Pacific also increased. Inevitably, some of those American ships got wrecked, and some of those wrecks occurred in ocean waters near Japan, where shipwrecked sailors were killed or arrested in accordance with Tokugawa Japan’s isolationist policy. But the U.S. wanted those sailors instead to receive protection and help, and it wanted American ships to be able to buy coal in Japan.
Hence U.S. President Millard Fillmore sent Commodore Matthew Perry to Japan with a fleet of four ships, including two gun-bearing steam-powered warships infinitely superior to any Japanese ships at that time. (Japan had neither steamships nor even steam engines.) On July 8, 1853 Perry sailed his fleet uninvited into Edo Bay (now called Tokyo Bay), refused Japanese orders to leave, delivered President Fillmore’s letter of demands, and announced that he expected an answer when he returned the following year.
For Japan, Perry’s arrival, and his open threat of overwhelming force, conformed to our definition of ‘crisis’: a serious challenge that cannot be solved by existing methods of coping. After Perry’s departure, the shogun circulated Fillmore’s letter to the daimyo to ask their opinion about how best to respond; that was already unusual. Among their varied proposed responses, common themes were a strong desire to maintain Japan’s isolation, but recognition of the practical impossibility of Japan defending itself against Perry’s warships, hence the suggestion of compromising to buy time during which Japan could acquire Western guns and tech to defend itself. It was the latter view that prevailed.
When Perry returned on February 13, 1854, this time with a fleet of nine warships, the shogun responded by signing Japan’s first treaty with a Western country. Although Japan succeeded in putting off Perry’s demand for a trade agreement, it did make other concessions that ended its 215-year policy of isolation. It opened two Japanese ports as harbors of refuge for American ships, accepted an American consul to reside at one of those ports, and agreed to treat shipwrecked American sailors humanely. After the signing of that agreement between Japan and the U.S., the British and Russian and Dutch naval commanders in the Far East quickly reached similar agreements with Japan.”
“The 14-year period that began in 1854, when the shogun’s government (called the bakufu) signed Perry’s treaty ending Japan’s centuries of isolation, was a tumultuous period of Japanese history. The bakufu struggled to solve the problems resulting from Japan’s forced opening. Ultimately, the shogun failed, because the opening triggered unstoppable changes in Japanese society and government. Those changes in turn led to the shogun’s overthrow by his Japanese rivals, and then to much more far-reaching changes under the new government that was led by those rivals.
Perry’s treaty and its British, Russian, and Dutch equivalents didn’t satisfy the Western goal of opening Japan to trade. Hence in 1858 the new American consul in Japan negotiated a broader treaty that did address trade, and that was again soon followed by similar treaties with Britain, France, Russia, and the Netherlands. Those treaties became regarded in Japan as humiliating and were termed the ‘unequal treaties,’ because they embodied the Western view that Japan did not deserve to be treated in the way that Western powers treated one another. For instance, the treaties provided for extraterritoriality of Western citizens in Japan, i.e. that they were not subject to Japanese laws. A major goal of Japanese policy for the next half-century became the undoing of the unequal treaties.
Japan’s military weakness in 1858 relegated that goal to the distant future. Instead, the bakufu’s more modest immediate goal in 1858 was to minimize the intrusion of Westerners, and of their ideas and influence. That was achieved by Japan’s keeping up the fiction of obeying the treaties, while actually frustrating them by delaying, unilaterally changing agreements, taking advantage of Western unfamiliarity with ambiguous Japanese place names, and playing off different Western countries against one another. Through the 1858 treaties, Japan succeeded in limiting trade to just two Japanese ports, termed ‘treaty ports,’ and in restricting foreigners to specified districts within those ports beyond which foreigners were forbidden to travel.
The bakufu’s basic strategy from 1854 onwards was one of buying time. That meant satisfying Western powers (with as few concessions as possible), but in the meantime acquiring Western knowledge, equipment, tech, and strength, both military and non-military, so as to be able to resist the West as soon as possible. The bakufu, and also the powerful domains of Satsuma and Choshu that were nominally subject to the bakufu but enjoyed much autonomy, purchased Western ships and guns, modernized their militaries, and sent students to Europe and the U.S. Those students studied not just practical matters such as Western navigation, ships, industry, engineering, science, and tech, but also Western laws, languages, constitutions, economics, political science, and alphabets. The bakufu developed an Institute for the Study of Barbarian (i.e. foreign) Books, translated Western books, and sponsored the production of English-language grammars and an English pocket dictionary.
But while the bakufu and the big domains were thus trying to build up strength, problems resulting from Western contact were developing in Japan. The bakufu and domains became heavily indebted to foreign creditors as a result of expenses such as weapons purchases and sending students overseas. Consumer prices and the cost of living rose. Many samurai (the warrior class) and merchants objected to the bakufu’s efforts to monopolize foreign trade. Now that the shogun had asked the daimyo for advice after Perry’s first visit, some daimyo wanted to become further involved in policy and planning, rather than leaving it all to the shogun as before. It was the shogun who had negotiated and signed treaties with Western powers, but the shogun couldn’t control outlying daimyo who violated those treaties.
The result was several sets of intersecting conflicts. Western powers were in conflict with Japan about whether to open Japan more (the Western goal) or less (the prevalent Japanese goal) to the West. Domains such as Satsuma and Choshu, which had traditionally been in conflict with the bakufu, were now in sharper conflict with it, each side trying to use Western equipment, knowledge, and allies against the other. Conflicts between domains also increased. There was even conflict between the bakufu and the figurehead emperor at the imperial court, on whose behalf the bakufu supposedly acted. For instance, the imperial court refused to approve the 1858 treaty that the bakufu had negotiated with the U.S., but the bakufu proceeded to sign it anyway.
The sharpest conflict within Japan arose over Japan’s basic strategic dilemma: whether to try to resist and expel the foreigners now, or instead to wait until Japan could become stronger. The signing of the unequal treaties by the bakufu created a backlash in Japan, and anger at the shogun and the other lords who had permitted Japan to be dishonored. Already around 1859, resentful, hotheaded, naive young sword-wielding samurai began to pursue a new goal of expelling foreigners by a campaign of assassination. They became known as ‘shishi,’ meaning ‘men of high purpose.’ Appealing to what they believed were traditional Japanese values, they considered themselves morally superior to older politicians.”
“Shishi terrorism was directed against foreigners, and even more often against Japanese working for or compromising with foreigners. In 1860 a group of shishi succeeded in beheading the regent Ii Naosuke, who had advocated signing treaties with the West. Japanese attacks against foreigners climaxed in two incidents in 1862 and 1863 involving the domains of Satsuma and Choshu. On September 14, 1862 a 24-year-old English merchant, Charles Richardson, was attacked by Satsuma swordsmen on a road and left to bleed to death, because he was considered to have failed to show proper respect for a procession that included the father of Satsuma’s daimyo. Britain demanded indemnities, apologies, and execution of the perpetrators not only from Satsuma but also from the bakufu. After nearly a year of unsuccessful British negotiations with Satsuma, a fleet of British warships bombarded and destroyed most of Satsuma’s capital of Kagoshima and killed an estimated 1,500 Satsuma soldiers. The other incident occurred in late June 1863, when Choshu coastal guns fired on Western ships and closed the crucial Shimonoseki Strait between the main Japanese islands of Honshu and Kyushu. A year later, a fleet of 17 British, French, American, and Dutch warships bombarded and destroyed those coastal guns and carried off Choshu’s remaining cannon.
Those two Western retaliations convinced even Satsuma and Choshu hotheads of the power of Western guns, and of the futility of Japan’s attempting to expel the foreigners while in its current weak condition. The hotheads would have to wait until Japan had achieved military equality with the West. Ironically, that was the policy that the bakufu had already been following, and for which the hotheads had been excoriating the bakufu.
But some domains, especially Satsuma and Choshu, were now convinced that the shogun was incapable of strengthening Japan to the point where it could resist the West. The daimyo concluded that, while they shared the bakufu’s goal of acquiring Western tech, achieving that goal required reorganizing Japan’s government and society. Hence they sought gradually to outmaneuver the shogun. Satsuma and Choshu had formerly been rivals, had been suspicious of each other, and had fought against each other. Recognizing that the shogun’s efforts to build up military strength threatened both domains, they now formed an alliance.
After the former shogun’s death in 1866, the new shogun launched a crash program of modernization and reform, including importing military equipment and advisors from France. That increased the perceived threat to Satsuma and Choshu. When the former emperor also died in 1867, his 15-year-old son succeeded to the imperial throne. Satsuma and Choshu leaders conspired with the new emperor’s grandfather and thereby enlisted the support of the imperial court. On January 3, 1868 the conspirators seized the gates of the Imperial Palace in the city of Kyoto, convened a council stripping the shogun of his lands and of his position on the council, and ended the shogunate. The council proclaimed the fiction of ‘restoring’ the responsibility for governing Japan to the emperor, although that responsibility had previously actually been the shogun’s. That event is known as the Meiji Restoration, and it marks the beginning of what is termed the Meiji Era: the period of rule of the new emperor.”
“After that coup gave them control of Kyoto, the immediate problem facing the Meiji leaders was to establish control over all of Japan. While the shogun himself accepted defeat, many others did not. The result was a civil war between armies supporting and armies opposing the new imperial government. Only when the last opposition forces on Japan’s northern main island of Hokkaido had been defeated in June 1869 did foreign powers recognize the imperial government as the government of Japan. And only then could Meiji leaders proceed with their efforts to reform the country.
At the beginning of the Meiji era, much about Japan was up for grabs. Some leaders wanted an autocratic emperor; others wanted a figurehead emperor with actual power in the hands of a council of ‘advisors’ (that was the solution that eventually prevailed); and still another proposal was for Japan to become a republic without an emperor. Some Japanese who had come to appreciate Western alphabets proposed that alphabets replace Japan’s beautiful but complex writing system, consisting of Chinese-derived characters combined with two Japanese syllabaries. Some Japanese wanted to launch a war against Korea without delay; others argued for waiting. The samurai wanted their private militias to be retained and used; others wanted to disarm and abolish the samurai.
Out of this turmoil of conflicting proposals, the Meiji leaders decided soon in favor of three basic principles. First, although some of the leaders had been among the hotheads who wanted immediately to expel Westerners, realism quickly prevailed. It became as clear to Meiji leaders as it had been to the shogun that Japan was presently incapable of expelling Westerners. Before that could be done, Japan had to become strong by adopting Western sources of strength, meaning not just guns themselves but also far-reaching political and social reforms that provided the underpinnings of Western strength.
Second, an ultimate goal of Meiji leaders was to revise the unequal treaties that had been imposed upon Japan by the West. But that required Japan to be strong and to be seen by the West as a legitimate Western-style state, with a Western-style constitution and laws. For example, Britain’s foreign secretary, Lord Granville, bluntly told Japanese negotiators that Britain would recognize Japanese ‘jurisdiction over British subjects [resident in Japan] in precise proportion to their [Japanese] advancement in enlightenment and civilization,’ as judged by Britain according to British standards of advancement. It ended up taking 26 years from the Meiji coup until the time when Japan could get the West to revise the unequal treaties.
The third basic principle of Meiji leaders was to identify, adopt, and modify, in each sphere of life, the foreign model that was best matched to Japanese conditions and values. Meiji Japan variously borrowed especially from British, German, French, and American models. Different foreign countries ended up as models in different spheres: for instance, the new Japanese navy and army became modeled on the British navy and the German army, respectively. Conversely, within a given sphere, Japan often tried a succession of different foreign models.”
“Meiji Japan’s borrowing from the West was massive, conscious, and planned. Some of the borrowing involved bringing Westerners to Japan: for instance, importing Western schoolteachers to teach or to advise on education, and bringing two German scholars to help write a Japanese constitution drawing heavily on Germany’s constitution. But more of the borrowing involved Japanese traveling as observers to Europe and the U.S. A crucial step, undertaken just two years after the Meiji government had consolidated its power, was the Iwakura Mission of 1871–1873. Consisting of 50 government reps, it toured the U.S. and a dozen European countries, visited factories and government offices, met U.S. President Grant and European leaders, and published a five-volume report providing Japan with detailed accounts of a wide range of Western practices. The mission announced its purpose as being ‘to select from the various institutions prevailing among enlightened nations such as are best suited to our present condition.’ When war broke out between France and Prussia in 1870, Japan even sent two observers with a much narrower purpose: to watch first-hand how Europeans fight.
A by-product of these foreign travels was that Japanese with overseas experience tended to become Meiji Japan’s leaders, both in government and in private spheres.”
“In order to make this massive borrowing from the West palatable to Japanese traditionalists, innovations and borrowings in Meiji Japan were often claimed to be not new at all, but just returns to Japan’s traditional ways. For example, when the emperor himself in 1889 promulgated Japan’s first constitution, based heavily on the German constitution, in his speech he invoked his ascent ‘to the Throne of a lineal succession unbroken for ages eternal,’ and ‘the right of sovereignty of the State [that] we have inherited from Our Ancestors.’ Similarly, new rituals invented for the imperial court during the Meiji Era were claimed to be timeless old court rituals.
This reframing of innovations as supposedly retained traditions — the phenomenon of ‘invented traditions’ often invoked by innovators in other countries besides Japan — contributed to the success of Meiji leaders in carrying out drastic changes. The cruel fact was that the leaders faced a dangerous situation when they assumed power in January 1868. Japan was at risk of attacks by foreign powers, at risk from the civil war between the bakufu’s opponents and its supporters, at risk of wars between domains, and at risk of revolts by groups threatened with losing their former rank and power. Abolition of the samurai’s privileges did provoke several samurai rebellions, the most serious of them the Satsuma revolt of 1877. Armed peasant uprisings did break out periodically in the 1870s. But opposition to Meiji reforms turned out to be less violent than might have been anticipated. Meiji leaders proved skilled at buying off, co-opting, or reconciling their actual or potential opponents.”
“The selective changes adopted in Meiji Japan affected most spheres of Japanese life: the arts, clothing, domestic policies, the economy, education, the emperor’s role, feudalism, foreign policy, government, hairstyles, ideology, law, the military, society, and tech. The most urgent changes, effected or launched within the first few years of the Meiji Era, were to create a modern national army, to abolish feudalism, to found a national system of education, and to secure income for the government by tax reform. Attention then shifted to reforming the law codes, designing a constitution, expanding overseas, and undoing the unequal treaties. In parallel with this attention to pressing practical matters, Meiji leaders also began to address the challenges of creating an explicit ideology to enlist the support of Japan’s citizens.
Military reform began with purchasing modern Western equipment, enlisting French and German officers to train the army, and (later) experimenting with French and British models to develop a modern Japanese navy. The result illustrates Meiji skill at selecting the best foreign model: instead of selecting just one country’s armed forces as the model for all branches of the military, Japan ended up modeling its army on Germany’s army but modeling its navy on Britain’s navy (because in late 19th-century Europe Germany had the strongest army but Britain had the strongest navy!). As one example, when Japan wanted to learn how to build the fast battleships called battle-cruisers invented in Britain, Japan commissioned a British shipyard to design and build the first Japanese battle-cruiser, then used it as the model for building three more battle-cruisers in three different Japanese shipyards.
A national conscription law, adopted in 1873 and based on European models, provided for a national army of men armed with guns and serving for three years. Formerly, each feudal domain had had its own private militia of samurai swordsmen, useless in modern war but still a threat to the Japanese national government. Hence the samurai were first forbidden to carry swords or to administer private punishment, then hereditary occupations (including that of being a samurai) were abolished, then the ex-samurai were paid off in government stipends, and finally those stipends were converted to interest-bearing government bonds.
Another urgent order of business was to end feudalism. To make Japan strong required building a centralized Western-style state. That posed a delicate problem, because as of January 1868 the only real powers of the new imperial government were those just surrendered by the shogun; other powers remained with the daimyo (the feudal lords). Hence in March 1868 four daimyo, including those of Satsuma and Choshu who had instigated the Meiji Restoration, were persuaded to offer their lands and people to the emperor by an ambiguously worded document. When the emperor accepted that offer in July, the other daimyo were commanded to make the same offer, and as a sop they were then appointed as ‘governors’ of their former feudal domains. Finally, in August 1871 the daimyo were told that their domains (and governorships) would now be swept away and replaced with centrally administered prefectures. But the daimyo were allowed to keep 10% of their former domains’ assessed incomes, while being relieved of the burden of all the expenses that they had formerly borne. Thus, within three-and-a-half years, centuries of Japanese feudalism were dismantled.
The emperor remained the emperor: that didn’t change. However, he was no longer cloistered in Kyoto’s Imperial Palace: he was transferred to the effective capital of Edo, renamed Tokyo. In his 45 years of rule, the emperor made 102 trips outside of Tokyo and around Japan, compared with a total of just three trips by all emperors combined during the 265 years of the Tokugawa Era (1603–1868).
Education was subject to big reforms, with big consequences. For the first time in its history, Japan acquired a national system of education. Compulsory elementary schools were established in 1872, followed by the founding of Japan’s first university in 1877, middle schools in 1881, and high schools in 1886. The school system at first followed the highly centralized French model, shifting in 1879 to the American school model of local control, and then in 1886 to a German model. The end result of that educational reform is that Japan today ties for having the world’s highest percentage of literate citizens (99%), despite also having the world’s most complicated and hard-to-learn writing system. While the new national system of education was thus inspired by the West, its proclaimed purposes were thoroughly Japanese: to make Japanese people loyal and patriotic citizens revering their emperor and imbued with a sense of national unity.
A more mundane but equally important purpose of educational reform was to train recruits for jobs in government, and to develop Japan’s human capital so that Japan could rise in the world and prosper. In the 1880s, recruitment for the central government bureaucracy became based on an exam testing Western knowledge, rather than testing knowledge of Confucian philosophy. National education, along with the government’s official abolition of hereditary occupations, undermined Japan’s traditional class divisions, because now higher education rather than birth became the stepping-stone to high government office. Partly as a result, among the world’s 14 large rich democracies today, Japan is the one with the most equal division of wealth, and the one with proportionately the fewest billionaires in its population; the U.S. lies far at the opposite extreme in both respects.
The Meiji government’s remaining top priority was to devise an income stream to finance its government operations. Japan had never had Western-style national taxes. Instead, each daimyo had separately taxed his own lands to fund his own operating costs, while the shogun had similarly taxed just his own lands but also demanded additional money for specific purposes from all the daimyo. Yet the Meiji government had just relieved the ex-daimyo of their responsibilities as ‘governors,’ had converted their ex-domains into prefectures, and had decreed that those prefectures would now be administered by the central government, leaving the ex-daimyo with no need (so said Meiji leaders) for revenues to finance administrative operations of their own. Hence the Meiji Finance Ministry reasoned that it now needed at least as much annual revenue as the shogun and all the daimyo combined had previously extracted. It achieved that aim in a Western manner, by imposing a national 3% land tax. Japanese farmers periodically complained and rioted, because they had to pay cash every year regardless of the size of the harvest. But they might have considered themselves lucky if they could have foreseen modern Western tax rates. For example, in California today we pay a 1% state property tax, plus a state income tax of up to 12%, plus a national income tax of currently up to 44%.
Less urgent matters included substituting a Western-style legal system for Japan’s traditional system of justice. Law courts with appointed judges were introduced in 1871, followed by a Supreme Court in 1875. Criminal, commercial, and civil law reforms followed different paths of Westernization by experimenting with different foreign models. The criminal law code was initially reformed on a French model, then changed to a German model; the commercial law code used a German model; and the civil law code used French, British, and indigenous Japanese concepts before ending up as German-inspired. In each case, challenges influencing the choices included finding solutions compatible with Japanese views, plus adopting Western institutions in order to achieve international respectability necessary for revising the unequal treaties. For instance, that required abolishing traditional Japanese torture and broad use of the death penalty, which the West no longer considered respectable.
Modernization of Japan’s infrastructure began early in the Meiji Era. The year 1872 saw the founding of a national post system, and the building of Japan’s first railroad and its first telegraph line, followed by establishment of a national bank in 1873. Gas street lighting was installed in Tokyo. The government also got involved in Japan’s industrialization by setting up factories to produce bricks, cement, glass, machinery, and silk with Western machinery and methods. After Japan’s successful war of 1894–1895 against China, government industrial spending came to concentrate on war-related industries such as coal, electricity, gun factories, iron, steel, railroads, and shipyards.
Government reform was especially important if Japan was to achieve international respectability — and especially challenging. Cabinet government was introduced in 1885. Already in 1881, it had been announced that a constitution would be forthcoming, partly in response to public pressure. It then took eight years to devise a Western-style constitution in harmony with Japanese circumstances. The solution to that challenge depended on taking as a model not the U.S. constitution but the German constitution, because German emphasis on a strong emperor corresponded to Japanese conditions. Japan’s constitution invoked Japanese belief that its emperor was descended from the gods through an unbroken line of previous emperors extending back millennia in time. Japan was now a civilized nation with a constitutional government, equal to the world’s other constitutional governments (and — hint hint — no longer to be singled out by unequal treaties).
Like other spheres of Japanese life, Japanese culture became a mosaic of new Western elements and traditional Japanese elements. Western clothing and hairstyles are overwhelmingly prevalent in Japan today and were adopted quickly — by Japanese men. In the arts, traditional Japanese music, painting, woodblock prints, kabuki theater, and Noh plays survived alongside Western ballroom dancing, military bands, orchestras, operas, theater, painting, and novels.”
“In the last two decades of the Meiji Era, having dealt with mundane but urgent issues such as tax reform and law codes, the Meiji government was able to devote more attention to that task of imbuing Japanese with a sense of public duty. That was achieved partly by government support for traditional religion, and even more by government attention to education. Traditional Japanese religion served to unify Japanese people by teaching shared beliefs in the emperor’s divine descent, patriotism, civic duty, filial piety, respect for the gods, and love of country. Hence the government promoted the traditional Shinto religion and Confucian philosophy, subsidized the leading national Shinto shrines, and appointed their priests. Those values, associated with worship of the emperor as a living god, were featured prominently in the uniform national textbooks prescribed at every level of Japanese education.”
“The last remaining major line of selective change in the Meiji Era was Japan’s transformation from being a target to being an agent of overseas expansion and military aggression. We saw that Tokugawa Japan isolated itself and had no aspirations of overseas conquests. In 1853 Japan appeared to be at imminent risk from militarily much stronger foreign powers.
During the Meiji Era, however, Japan’s military reforms and industrial build-up removed that imminent risk and permitted instead a stepwise expansion. The first step was Japan’s formal annexation, in 1869, of the northern island of Hokkaido, originally inhabited by a people (the Ainu) quite different from the Japanese, but already partly controlled by the bakufu. In 1874 a punitive military expedition was sent to the island of Taiwan, whose aborigines had killed dozens of Ryukyu fishermen. At the end of the expedition, however, Japan pulled back its forces and refrained from annexing Taiwan. In 1879 the Ryukyu Islands themselves (the archipelago several hundred miles south of Japan) were annexed. From 1894 to 1895 Meiji Japan fought and won its first overseas war against China, and did annex Taiwan.
Japan’s 1904–1905 war against Russia enabled Meiji Japan for the first time to test itself against a Western power; both Japan’s navy and army defeated the Russians. That was a milestone in world history: the defeat of a major European power by an Asian power in an all-out war. By the resulting peace treaty, Japan annexed the southern half of Sakhalin Island and gained control of the South Manchurian Railroad. Japan established a protectorate over Korea in 1905 and annexed it in 1910. In 1914 Japan conquered Germany’s Chinese sphere of influence and Micronesian island colonies in the Pacific Ocean. Finally, in 1915 Japan presented China with the so-called Twenty-One Demands that would have converted China virtually into a vassal state; China gave in to some but not all of the demands.
Japan had already considered attacking China and Korea before 1894 but drew back, because it recognized that it wasn’t strong enough and that it risked giving European powers an excuse to intervene. The only occasion on which Meiji Japan overestimated its strength was in 1895, at the end of its war against China. The concessions that Japan had extracted from China then included China’s ceding to Japan the Liaotung Peninsula, which controls the sea and land routes between China and Korea. But France, Russia, and Germany reacted by joining together to force Japan to abandon the peninsula, which Russia proceeded to lease from China three years later. That humiliating setback made Japan aware of its weakness, standing alone, vis-a-vis European powers. Hence in 1902 Japan made an alliance with Britain, for protection and insurance, before attacking Russia in 1904. Even with the security offered by that British alliance, Japan waited to issue its demands against China, until the armed forces of European powers were tied up in WWI and unable to threaten intervention, as they had done in 1895.
In short, Japan’s military expansion in the Meiji Era was consistently successful, because it was guided at every step by honest, realistic, cautious, informed self-appraisal of the relative strengths of Japan and its targets, and by a correct assessment of what was realistically possible for Japan. Now, compare that successful Meiji Era expansion with Japan’s situation as of August 14, 1945. On that date Japan was at war simultaneously with China, the U.S., Britain, Russia, Australia, and New Zealand (as well as with many other countries that had declared war against Japan but weren’t actively fighting). That was a hopeless combination of enemies against which to fight. Much of the Japanese army had been pinned down for years in China. American bombers had gutted most major Japanese cities. The two atomic bombs had obliterated Hiroshima and Nagasaki. A British/American fleet was bombarding the Japanese coast. Russian armies were advancing against weak Japanese resistance in Manchuria and Sakhalin. Australian and New Zealand troops were mopping up Japanese garrisons on some Pacific islands. Almost all of Japan’s larger warships and merchant fleet had been sunk or knocked out of service. More than 3 million Japanese people had been killed.
It would have been bad enough if blunders of Japanese foreign policy had been responsible for Japan being attacked by all those countries. Instead, Japan’s blunders were worse: Japan itself had been the one to attack those countries. In 1937 Japan launched a full-scale war against China. It fought two brief but bloody border wars with Russia in 1938 and 1939. In 1941 Japan simultaneously and suddenly attacked the U.S. and Britain and the Netherlands, even while Japan was still susceptible to resumption of fighting with Russia. Japan’s attack on Britain automatically resulted in declarations of war by Britain’s Pacific dominions Australia and New Zealand; Japan proceeded to bomb Australia. In 1945 Russia did attack Japan. On August 15, 1945 Japan finally bowed to the long-delayed but inevitable outcome, and surrendered. Why did Japan from 1937 onwards blunder stepwise into such an unrealistic and ultimately unsuccessful military expansion, when Meiji Japan from 1868 onwards had carried out stepwise such a realistic and successful military expansion?
There are numerous reasons: the successful war against Russia, disillusionment with the Treaty of Versailles, the collapse of Japan’s export-led economic growth in 1929, and others. But one additional reason is especially relevant: a difference between Meiji-Era Japan and the Japan of the 1930s–1940s in knowledge and capacity for honest self-appraisal on the part of Japanese leaders. In the Meiji Era many Japanese, including leaders of Japan’s armed forces, had made visits abroad. They thereby obtained detailed first-hand knowledge of China, the U.S., Germany, and Russia and their armies and navies. They could make an honest appraisal of Japan’s strength compared to the strengths of those other countries. Then, Japan attacked only when it could be confident of success. In contrast, in the 1930s, the Japanese army on the Asian mainland was commanded by young hothead officers who didn’t have experience abroad (except in Nazi Germany), and who didn’t obey orders from experienced Japanese leaders in Tokyo. Those young hotheads didn’t know first-hand the industrial and military strength of the U.S. and of Japan’s other prospective opponents. They didn’t understand American psychology, and they considered the U.S. a nation of shopkeepers who wouldn’t fight.
Quite a few older leaders of the Japanese government and armed forces in the 1930s did know the strength of the U.S. and Europe first-hand. But Japan’s older leaders with overseas experience in the 1930s were intimidated and dominated, and several were assassinated, by young hotheads lacking overseas experience — much as shishi hotheads in the late 1850s and 1860s had assassinated and intimidated Japan’s leaders then. Of course, the shishi had no more overseas experience of the strength of foreign countries than did Japan’s young officers of the 1930s. The difference was that shishi attacks against Westerners provoked the bombardments of Kagoshima and Shimonoseki Strait by powerful Western warships, which demonstrated convincingly even to the shishi that their strategy had been unrealistic. In the 1930s there were no such foreign bombardments of Japan to force realism upon the young officers who had not been overseas.
In addition, the historical experience of the generation of Japanese leaders who came of age in Meiji Japan was virtually the opposite of the experience of Japan’s leaders of the 1930s. Meiji leaders had spent their formative years in a weak Japan at risk of attack by strong potential enemies. But to Japan’s leaders of the 1930s, war instead meant the intoxicating success of the Russo-Japanese War, the destruction of Russia’s Pacific fleet in Port Arthur harbor by a surprise attack that served as the model for Japan’s surprise attack against the American fleet at Pearl Harbor, and the spectacular destruction of Russia’s Baltic fleet by the Japanese navy in the Battle of Tsushima Strait. In Germany, we’ll see another example of successive generations within the same country holding drastically different political views as the result of different historical experiences.
Thus, part — not all, but part — of the reason for Japan initiating WWII against such hopeless odds was that young army leaders of the 1930s lacked the knowledge base and historical experience necessary for honest, realistic, cautious self-appraisal. The result was disastrous for Japan.”
Chile
“In 1973, Chile was taken over by a military dictatorship that smashed previous world records for government-perpetrated sadistic torture. In the course of a military coup on September 11, Chile’s democratically-elected president committed suicide in the presidential palace. Not only did the Chilean junta kill Chileans in large numbers, torture them in larger numbers, devise vile new techniques of psychological and physical torture, and drive still more Chileans into exile, it also directed terrorist political killings outside Chile, including what was, until the World Trade Center bombing of 1993, the only terrorist political killing of an American citizen on American soil (in DC in 1976). That military government remained in power for almost 17 years.
Today, 29 years after the military government stepped down, Chile is struggling with that government’s legacy. Some torturers and military leaders have been sent to prison, but the top military leaders were not imprisoned. Many Chileans, while deploring the torture, still view the military coup as necessary and unavoidable.”
“Chile is the longest and thinnest country in the world. While averaging only slightly more than 100 miles wide from west to east, it’s nearly 3,000 miles long from north to south: almost as long as the U.S. is wide. Geographically, Chile is isolated from other countries by the high chain of the Andes in the east separating it from Argentina, and by the world’s most barren desert in the north separating it from Bolivia and Peru. As a result, the only foreign wars that Chile has fought since achieving independence were two with its northern neighbors Bolivia and Peru in the years 1836–1839 and 1879–1883.
Despite that enormous length, Chile’s productive farmland, agriculture, and population are concentrated in just a fraction of the country’s area, within the Central Valley surrounding Santiago. Only 60 miles from Santiago is Chile’s main port of Valparaiso, the largest port on the west coast of South America. That geographic concentration, plus Chile’s ethnic homogeneity, has contributed to the unity of Chile, which has never had to deal with the geographic secessionist movements that have plagued most other countries of Chile’s territorial extent.
The big advantages of being located in the temperate zone at the southern end of South America are the higher average agricultural productivity and the lower average disease burden of temperate-zone areas compared to the tropics. As a consequence, Chile, Argentina, and Uruguay are the South American countries with the highest average per-capita incomes, even despite the chronically misguided economic policies of Argentine governments. Chile’s relative prosperity arises from its agriculture, fisheries, minerals, and manufacturing industries. Chile was already a big exporter of wheat to both California and Australia at the time of their gold rushes around 1850, and has remained an agricultural exporter ever since. In recent decades Chile became the leading exporter of fish products in South America, and among the leading ones in the world. Chile eventually developed more manufacturing than did most other Latin American countries.
Before European arrival the area that is now Chile supported only a sparse Native American population, lacking the cultural and political achievements of the rich, populous, powerful Inca Empire to the north. As in most of the rest of South and Central America, the Europeans who conquered and settled Chile were Spaniards, beginning in the 1540s. They imported few African slaves and intermarried with the Native Americans. Thus, unlike most other South American countries, Chile today is ethnically rather homogeneous and doesn’t have large unmixed Native American or African minorities. Instead, Chileans are overwhelmingly Spanish and mestizo, almost all of them Catholic, and almost all of them Spanish-speaking (unlike the large minorities speaking Native American languages in other Latin American countries). The largest minority group, the Mapuche Native Americans, constitutes only 1% of the population. Relatively few people are of other than Spanish and Native American ancestry.
Thus, Chile’s geography, history, and people have all contributed to its unity. That’s been a positive force in Chilean history, tending to make it less tumultuous than the histories of other Latin American countries. But a big negative force is one that Chile does share with many other Latin American countries: Spanish colonists established large land-holdings unlike the small farms established by European settlers of North America. Hence whereas the U.S. and Canada developed broad-based democratic governments from the very beginnings of their settlement by Europeans, in Chile a small oligarchy controlled most of the land, wealth, and politics. That concentration of political power has constituted a basic problem of Chilean history.
The underlying conflict between the intransigent oligarchy’s traditional power and the rising power of other classes of society could either have been resolved through political compromise or have remained unsolved due to political stalemate. The latter outcome became increasingly frequent after Chile adopted in 1925 a new constitution that staggered the elections of the president, Senate, and lower house of Congress among different years. That well-intentioned idea, adopted in the name of the virtuous principle of balance of power, unfortunately resulted in the presidency, control of the Senate, and control of the lower house usually belonging to different political parties, depending on which party happened to be strongest in a particular election year. Two subsequent changes in voting procedures increased the left-wing vote at the expense of the oligarchy’s previous dominance. One change was that Chilean women finally obtained the right to vote in municipal elections in 1934, and in presidential elections in 1949. The other change concerned ballot secrecy: voting in Chile had traditionally been open and public, making it easy for landowners to observe and influence how peasants voted, so the adoption of the secret ballot in 1958 produced a leftwards shift.
Chilean political parties came to constitute three blocs — left, center, and right — that were similar in strength. Hence the government was variously either left-controlled or right-controlled, depending on which way the center chose to lean. Each of those blocs contained more extreme and less extreme elements in conflict with each other. For example, within the left bloc, there were moderates (including most orthodox communists) who wanted to achieve change by constitutional means, competing with a radical left that was impatient and wanted revolutionary change. The army stayed out of modern Chilean political struggles — until 1973.
Exceptionally for Chile, where the leading presidential candidate had usually obtained just a plurality rather than a majority of votes, the 1964 election produced a big majority for the center’s candidate, Eduardo Frei. He was regarded as well-intentioned and honest. Fear of the Marxist program and rising strength of the left-wing coalition led many right-wing voters to support Frei, and his party also won control of Congress’ lower house in the 1965 elections. That raised hopes that Frei could enact major changes and end Chile’s political gridlock.
Frei acted quickly to enable the Chilean government to buy 51% control of Chile’s U.S.-owned copper-mining companies. He poured government investment into the Chilean economy, expanded access to educational opportunities for poor Chileans, succeeded in making Chile the biggest per-capita recipient of U.S. economic aid in Latin America, and initiated a program of agrarian reform to break up large land-holdings. But Frei’s ability to change Chilean society was restricted by Chile’s long-standing political stalemate. On the one hand, Frei’s program was too radical for the Chilean right. On the other hand, Frei wasn’t radical enough for Chilean left-wingers, who wanted even more Chilean control of the copper-mining companies, even more government investment, and even more land redistribution. Under Frei, the Chilean economy continued to suffer from strikes, inflation, and shortages. For instance, there were chronic meat shortages, and friends of mine fell victim to street violence. By 1969, all three Chilean political blocs were feeling frustrated by Chilean politics.”
“Developments in Chile from 1970 onwards were guided by two consecutive leaders who represented opposite extremes in politics and personality: Salvador Allende and Augusto Pinochet. They were similar only in sharing the fact that, to this day, it remains unclear why each of them acted as he did.
Allende was a quintessential Chilean professional, from an upper-middle-class family, rich, intelligent, idealistic, and a good speaker. Already in his student days he became a declared Marxist, and a founder of Chile’s Socialist Party, which was further to the left than Chile’s Communist Party. But Allende rated as moderate by Chilean socialist standards, because his aim was to bring Marxist government to Chile by democratic means, not by armed revolution. He graduated from medical school, and at the age of 31 became Chile’s minister of health, a job he carried out with acknowledged success. He ran for president in 1952, 1958, and 1964 and was defeated all three times, twice by large margins. Hence by the time that Allende once again ran for president in 1970 at the head of a Popular Unity coalition of socialists, communists, radicals, and centrists, his reputation was that of an unthreatening perennial loser.
In the 1970 elections, Allende received the largest share of the popular vote (36%), but only barely, because the much larger percentage (64%) of the electorate opposed to him was split between a right-wing coalition (35%, only 1.5% lower than Allende’s share) and a center coalition (28%). Since Allende had obtained only a plurality, his election required confirmation from Congress, which did confirm him in return for a series of constitutional amendments guaranteeing freedom of the press and other freedoms. Despite Allende’s unthreatening personality and history of behavior, his election immediately provoked an unsuccessful attempt by the U.S. to muster Chilean congressional support for rejecting his confirmation, and provoked the emigration of some Chileans who didn’t care to wait and see what policies Allende would implement. Why was the election of that gentle moderate greeted with such a strong negative reaction?
The reason was the declared goal of Allende and his coalition of bringing Marxist government to Chile: a prospect that horrified the Chilean right wing and centrists, the Chilean armed forces, and the U.S. government. After WWII, the Soviet Union embarked on a policy of world domination and developed its own atomic bombs, hydrogen bombs, and intercontinental ballistic missiles. It attempted to strangle democratic West Berlin in 1948 by closing all road access. It carried out brutal communist take-overs and bloody crushings of revolts in Czechoslovakia, East Germany, Hungary, and Poland. It established dictatorships propped up by Soviet troops in those and other Eastern European countries.
Most dangerous of all, after Fidel Castro had installed a Marxist government in Cuba, Castro and Khrushchev began to station ballistic missiles to be armed with nuclear warheads in Cuba only 90 miles from the U.S. For one terrifying week in October 1962, the world was closer to the brink of nuclear war than at any other time in history. Subsequent to the crisis, the gradual release of formerly classified information by both the U.S. and the Soviet Union made it clear that we had been even closer to destruction than had been appreciated at the time. Unbeknownst to America’s military leaders then, who knew that at least 162 missiles had already been stationed in Cuba but who thought that the missiles’ nuclear warheads had not yet arrived, many of the warheads had actually reached Cuba.
After the Cuban Missile Crisis, the Soviet Union responded by accelerating its programs to develop more powerful nuclear weapons and intercontinental ballistic missiles. The U.S. responded with the determination that never again would it tolerate the installation of a communist government in the Western Hemisphere. Any American president who failed to prevent such an installation would have been immediately impeached and removed from office for gross neglect of American interests, just as President Kennedy was warned that he would be impeached if he failed to get Soviet missiles out of Cuba. Beginning in the 60s, the U.S. also became preoccupied with communist threats in Vietnam and other Southeast Asian countries. The Chilean right, center, and armed forces were equally adamant that there would be no Marxist government in Chile, because they had seen what had happened to Cuba and to anti-Marxist Cubans after Castro had come to power. They wouldn’t tolerate that history repeating itself in Chile.
The other U.S. motive for concern about Chile was that Chile’s copper-mining companies, the biggest sector of the Chilean economy, were U.S.-owned and had been developed with U.S. capital, because Chile in the 19th century lacked the capital and the tech to develop copper mines by itself. Under President Frei, Chile had already expropriated (and paid for) a 51% interest in the companies; the U.S. feared (correctly, as it turned out) that Allende might expropriate the remaining 49% without paying. Hence, from the 60s onwards, through a program called the Alliance for Progress, the U.S. supported Latin American (including Chilean) centrist reform parties and poured foreign aid money into Latin American countries governed by such parties, in order to pre-empt support for leftist revolutions. Under President Frei, Chile became the leading recipient of U.S. development money in Latin America.
Given those realities, what policies did Allende adopt upon becoming president? Even though he knew that his candidacy had been supported by only 36% of Chilean voters and had been opposed by the Chilean armed forces and the U.S. government, he rejected moderation, caution, and compromise, and instead pursued policies guaranteed to be anathema to those opposing forces. His first measure, with the unanimous support of Chile’s Congress, was to nationalize the U.S.-owned copper companies without paying compensation, a recipe for making powerful international enemies. (Allende’s pretext for not paying compensation was to label company profits already earned above a certain rate of return as ‘excess profits,’ to be counted against, and thereby cancel out, the compensation owed.) He nationalized other big international businesses. He horrified the Chilean armed forces by bringing large numbers of Cubans into Chile, by carrying a personal machine gun given to him by Fidel Castro, and by inviting Castro to Chile for a five-week visit. He froze prices (even of small consumer items like shoelaces), replaced free-market elements of Chile’s economy with socialist-style state planning, granted big wage increases, greatly increased government spending, and printed paper money to cover the resulting government deficits. He extended President Frei’s agrarian reform by expropriating large estates and turning them over to peasant collectives. While that agrarian reform and Allende’s other goals were well-intentioned, they were carried out incompetently. For instance, a 19-year-old not-yet-graduated student economist was given major responsibility for setting Chilean prices of consumer goods. ‘Allende had good ideas, but he executed them poorly. Although he correctly recognized Chile’s problems, he adopted wrong solutions to those problems.’
“The result of Allende’s policies was the spread of economic chaos, violence, and opposition to him. Government deficits covered by just printing money caused hyperinflation, such that real wages (i.e. wages adjusted for inflation) dropped below 1970 levels, even though wages not corrected for inflation nominally increased. Foreign and domestic investment, and foreign aid, dried up. Chile’s trade deficit grew. Consumer goods, including even toilet paper, became scarce in markets, which were increasingly characterized by empty shelves and long queues. Rationing of food and even of water became severe. Workers, who had been Allende’s natural supporters, joined the opposition and mounted nationwide strikes; especially damaging to Chile’s economy were strikes by copper miners and truckers. Street violence and predictions of a coup grew. On the left, Allende’s radical supporters armed themselves; on the right, street posters went up proclaiming ‘Yakarta viene.’ Literally, that means ‘Jakarta is coming,’ a reference to Indonesian right-wing massacres of communists in 1965. That was an open threat by the Chilean right wing to do the same to Chilean leftists, which, as it turned out, they actually did. Even Chile’s powerful Catholic Church turned against Allende when he proposed mandatory educational curriculum reforms at private Catholic schools as well as at government schools, aimed at creating a generation of cooperative and unselfish Chilean ‘New Men’ by sending students into the fields as manual laborers.
The outcome of all those developments was the 1973 coup that many characterize as inevitable, even though the form the coup took was not inevitable. An economist friend of mine summed it up: ‘Allende fell because his economic policies depended on populist measures that had failed again and again in other countries. They produced short-term benefits, at the cost of mortgaging Chile’s future and creating runaway inflation.’ Many Chileans admired Allende and viewed him almost as a saint. But saintly virtues don’t necessarily translate into political success.
Why on earth did Allende, an experienced politician and a moderate, pursue extremist policies that he knew were unacceptable to most Chileans, as well as to Chile’s armed forces? One possibility is that Allende’s previous political successes misled him into thinking that he could defuse the opposition. He had already been successful as minister of health; he had initially assuaged congressional doubts over his election by agreeing to constitutional amendments that didn’t tie his hands on economic policy; and Congress had unanimously approved his expropriation of the copper companies without compensation. He now hoped to placate the armed forces by bringing all three of their commanders into his cabinet. The other possibility is that Allende was pushed to extreme measures, against his better judgment, by his most radical supporters, the Movement of the Revolutionary Left (Spanish acronym MIR), who wanted a quick revolution to overthrow Chile’s capitalist state. They were accumulating weapons, adopted the slogan ‘Arm the people,’ complained that Allende was too weak, and refused to listen to his entreaty: ‘Just wait patiently for a few more years.’ But it seems to me that, even at the time, and not just with the wisdom of hindsight, Allende’s policies were based on unrealistic appraisals.
The long-expected coup took place on September 11, 1973, after all three branches of the Chilean armed forces had agreed on a plan ten days prior. Although the CIA had been constantly supporting opposition to Allende and seeking to undermine him, even Americans who exposed CIA meddling in Chilean affairs agree that the coup was executed by Chileans themselves, not by the CIA. The Chilean air force bombed the president’s palace in Santiago, while Chilean army tanks shelled it. Recognizing his situation to be hopeless, Allende killed himself with the machine gun presented to him by Fidel Castro.
The coup was welcomed with relief and broad support by centrist and rightist Chileans, much of the middle class, and of course the oligarchs. By then, Chile’s economic chaos, foolish governmental economic policies, and street violence under Allende had become intolerable. Coup supporters regarded the junta merely as an unavoidable transition stage towards restoring the status quo of middle- and upper-class civilian political domination that had prevailed before 1970. At a friend’s December 1973 dinner party, 17 of the 18 guests predicted that the junta would last just two years. The 18th guest’s prediction of seven years was considered absurd by the other guests; they said that that couldn’t happen in Chile, where all previous military governments had quickly returned power to a civilian government. No one foresaw that the junta would remain in power for almost 17 years. It suspended all political activity, closed Congress, banned left-wing political parties and even the centrist Christian Democrats (to the great surprise of those centrists), took over Chile’s universities, and appointed military commanders as university rectors.
The junta member who became its leader, essentially by accident, had joined it at the last minute and had not led the coup planning: General Augusto Pinochet. Just a couple of weeks before the coup, the Chilean army had pressured its previous chief of staff into resigning, because he was opposed to a military intervention. By default, Pinochet, who had commanded the army units in the Santiago area, became the new army chief of staff. Even at that time, Pinochet was considered relatively old (58). Chile’s other army generals and armed forces commanders thought that they understood their colleague, as did the CIA, which had gathered extensive information about him. The CIA’s appraisal of Pinochet was: quiet, honest, harmless, friendly, hard-working, business-like, religious, with no known interests outside the military and the Catholic Church and his family — in short, not a person likely to lead a coup. The junta expected to operate as a committee of equals, with rotating leadership. Its members chose Pinochet as their initial leader mainly because he was the oldest member, because he was chief of staff of the largest branch of the Chilean armed forces (the army), and perhaps because they shared the CIA’s view of Pinochet as unthreatening. When the junta took power, Pinochet himself announced that its leadership would rotate.
But when it came time for Pinochet to rotate off and step down as leader, he didn’t do so. Instead, he succeeded in intimidating his fellow junta members by means of a secret intelligence service that he set up. Hundreds of incidents of dissent arose within the junta, but Pinochet usually succeeded in getting his way. Neither his fellow junta members, nor the CIA, nor anyone else anticipated Pinochet’s ruthlessness, his strong leadership, and his ability to cling to power — at the same time as he continued to project an image of himself as a benign old man and devout Catholic, an image depicted by the state-controlled media.
The barbaric deeds that happened in Chile after September 11, 1973, cannot be understood without recognizing the role of Pinochet, a leader who imposed his stamp on the course of history. I haven’t heard any plausible explanation for the sadism that Pinochet directed.
As soon as the junta took power, it rounded up leaders of Allende’s Popular Unity Party and other perceived leftists (such as university students and the famous Chilean folk singer Victor Jara), with the goal of literally exterminating the Chilean left wing. Within the first 10 days, thousands of Chilean leftists were taken to two sports stadiums in Santiago, interrogated, tortured, and killed. (For instance, Jara’s body was found in a dirty canal with 44 bullet holes, all of his fingers chopped off, and his face disfigured.) Five weeks after the coup, Pinochet personally ordered a general to go around Chilean cities in what became known as the ‘Caravan of Death,’ killing political prisoners and Popular Unity politicians whom the army had been too slow at killing.
Two months after the coup, Pinochet founded an organization that evolved into DINA, a national intelligence agency and secret police force. Its chief reported directly to Pinochet, and it became Chile’s main agent of repression. It was notorious for its brutality, even by the standards of the other intelligence units of the Chilean armed forces. It set up networks of secret detention camps, devised new methods of torture, and made Chileans ‘disappear’ (i.e. murdered them without a trace). One center, called La Venda Sexy, specialized in sexual abuse to extract information — for example, by rounding up a prisoner’s family members and sexually abusing them in front of the prisoner, using rodents and trained dogs.
In 1974 DINA began to operate outside Chile. It started in Argentina by planting a car bomb that killed Chile’s former army commander-in-chief Carlos Prats and his wife, because Prats had refused to join the coup and was feared by Pinochet as a potential threat. DINA then launched an international campaign of government terrorism, called Operation Condor, by convening a meeting of the heads of the secret police of Chile, Argentina, Uruguay, Paraguay, Bolivia, and eventually Brazil, in order to cooperate on cross-border manhunts of exiles, leftists, and political figures. Hundreds of Chileans were tracked down and killed in other South American countries and in Europe, and one even in the U.S. That U.S. case occurred in 1976 in Washington, DC, only 14 blocks from the White House, when a car bomb killed the former Chilean diplomat Orlando Letelier (minister of defense under Allende), plus an American colleague. Until the World Trade Center bombing of 1993, it remained the only known case of a foreign terrorist killing an American citizen on American soil.
By 1976, Pinochet’s government had arrested 130,000 Chileans, or 1% of Chile’s population. While the majority of them were eventually released, DINA and other junta agents killed or ‘disappeared’ thousands of Chileans (most of them under 35), plus four American citizens and various citizens of other countries. The killings were often preceded by torture, aimed at least partly at extracting information. It isn’t clear, though, to what extent the torture was also motivated by pure sadism. About 100,000 Chileans fled into exile, many of them never to return.
One has to wonder how a previously democratic country could descend to such depths of behavior, which far exceeded the previous military interventions of Chilean history in duration, number of killings, and sadism. Partly, the answer involves Chile’s increasing polarization, violence, and breakdown of political compromise, culminating under Allende in the arming of the Chilean far left and in the ‘Yakarta viene’ warnings of impending massacres by the far right. Allende’s Marxist designs and Cuban connections, much more than previous Chilean leftist programs, had made the armed forces fearful and prepared to take preventative actions.”
“Pinochet, like Hitler, seems to be an example of an evil leader who did make a difference to the course of history. Yet Chilean military crimes can’t be blamed on Pinochet alone, because no one has ever suggested that he personally shot or tortured anyone. At its peak, DINA had over 4,000 employees, whose job it was to interrogate, torture, and kill. Chileans aren’t uniquely evil: every country has thousands of sociopaths who would commit evil if ordered or even just permitted to do it. Anyone who has been imprisoned even in generally non-evil countries like Britain and the U.S., and who has had the misfortune to experience there the sadism of jailers and law enforcement officers who have not been specifically ordered to be sadistic, can imagine how those jailers and officers would have behaved if they had indeed received explicit orders to be sadistic.”
“The other main effort of Pinochet’s dictatorship, besides exterminating the Chilean left, was to reconstruct the Chilean economy on a free-market basis, reversing Chile’s prior norm of extensive government intervention. That reversal did not happen during Pinochet’s first year-and-a-half in power, when the economy continued to contract, inflation persisted, and unemployment rose. But from 1975 onwards, Pinochet turned over economic management to a group of neo-liberal economic advisors who became known as the Chicago Boys, because many of them had trained at the University of Chicago in association with the economist Milton Friedman. Their policies emphasized free enterprise, free trade, market orientation, balanced budget, low inflation, modernization of Chilean businesses, and reduced government intervention.
South American military governments usually prefer an economy that they control themselves for their own benefit, rather than a free-market economy that they don’t control. Hence the junta’s adoption of the Chicago Boys’ policies was unexpected, and it remains uncertain why it happened. It might not have happened at all without Pinochet, because the policies were opposed by some senior Chilean military officers, including one junta member whom Pinochet finally forced to resign in 1978. The adoption is sometimes attributed to the 1975 Chilean visit of Friedman himself, who met with Pinochet for 45 minutes and followed up the meeting by sending Pinochet a long list of recommendations. But Friedman came away from the meeting with a low opinion of Pinochet, who asked Friedman only one question during their conversation. In fact, the Chicago Boys’ program differed significantly from Friedman’s recommendations and drew on detailed plans that Chilean economists had already laid out in a document nicknamed ‘the brick’ (because it was so lengthy and heavy).
A possible explanation is that Pinochet recognized that he knew nothing about economics, portrayed himself as (or was) a simple man, and found appealing the Chicago Boys’ simple, consistent, persuasive proposals. Another factor may be that Pinochet identified the Chicago Boys and their policies with the U.S., which strongly supported Pinochet, shared his hatred of communists, and resumed its loans to Chile immediately after Pinochet’s coup. But his motives are not clear.
Whatever the motives, the resulting free-market policies included the re-privatization of hundreds of state-owned businesses nationalized under Allende (but not of the copper companies); the slashing of the government deficit by across-the-board cuts of every government department’s budget by 15–25%; the slashing of average import duties from 120% to 10%; and the opening of Chile’s economy to international competition. That caused the Chicago Boys’ program to be opposed by Chile’s oligarchy of industrialists and traditional powerful families, whose inefficient businesses had previously been shielded from international competition by high duties and were now forced to compete and innovate. But the results were that the rate of inflation declined from its level of 600% per year under Allende to just 9% per year, the Chilean economy grew at almost 10% per year, foreign investments soared, Chilean consumer spending rose, and Chilean exports eventually diversified and increased.
These positive results were not without setbacks and painful consequences. An unfortunate decision to tie the value of the Chilean peso to the U.S. dollar produced a big trade deficit and an economic crisis in 1982. The economic benefits for Chileans were unequally distributed: middle-class and upper-class Chileans prospered, but many other Chileans suffered and found themselves living below the poverty line. In a democracy it would have been difficult to inflict such widespread suffering on poor Chileans, as well as to impose government policies opposed by rich business oligarchs. That was possible only under a repressive dictatorship. Still, one Chilean friend not otherwise sympathetic to Pinochet explained, ‘Yes, but so many Chileans had already been suffering from Chile’s previous economic problems under Allende, without hope of an eventual improvement.’ When it became clear that the junta wasn’t just a temporary transitional phase but intended to remain in power, many middle-class and upper-class Chileans nevertheless continued to support Pinochet because of that (unequally distributed) economic improvement, and despite government repression. Optimism, and a sigh of relief at the end of the economic chaos that had prevailed under Allende, arose among those Chileans outside the sectors of Chilean society that were being tortured or killed.
Like many Chileans, the U.S. government supported Pinochet for more than half of the duration of his military dictatorship — in the U.S.’s case, because of his strong anti-communist stance. U.S. government policy was to extend economic and military aid to Chile, and publicly to deny Pinochet’s human rights abuses, even when those being tortured and killed were American citizens. As American Secretary of State Henry Kissinger put it, ‘…however unpleasantly [the junta] act, this government is better for us than Allende was.’ That American government support of Pinochet, and that blind eye to his abuses, continued through the presidencies of Nixon, Ford, Carter, and initially Reagan.
But from the mid-80s onward, two things turned the U.S. against Pinochet. One was the accumulated evidence of abuses, including abuses against American citizens — evidence that was becoming increasingly hard to ignore. A turning point was the horrifying killing in Santiago of Rodrigo Rojas, a Chilean teen who was a U.S. legal resident, and who died after being doused with gasoline and set on fire by Chilean soldiers. The other factor turning the Reagan government against Pinochet was Chile’s economic downturn of 1982–1984, which turned more of the Chilean public against Pinochet. Because the economic recovery from 1984 onwards failed to improve the lot of many Chileans, the Chilean left gained strength, Chile’s Catholic Church became an open focus of opposition, and even the Chilean military was becoming dissatisfied with him. In short, Pinochet was not just evil: worse yet from the perspective of the U.S. government, he had become a liability for American political interests.
In 1980 the junta proposed a new constitution that would entrench right-wing and military interests, and asked voters to legitimize Pinochet by voting to extend his term as president for eight years (from 1981 to 1989). After an election campaign tightly controlled by the junta, a big majority of Chilean voters approved the new constitution and Pinochet’s extended term. As that extended term approached its end in 1989, the junta announced another plebiscite in 1988 that would extend Pinochet’s presidency for yet another eight years until 1997, when he would be 82.
This time, though, Pinochet miscalculated and was outmaneuvered by his opponents. International attention forced the campaign to be conducted openly, and the balloting to be conducted honestly. The U.S. threw its resources behind the opposition, which organized a massive effort to register 92% of potential voters and mounted a brilliantly designed campaign around the simple slogan ‘No!’ To Pinochet’s surprise, the ‘No!’ campaign prevailed, with 58% of the vote. Although Pinochet’s initial response on the night of the election was to try to deny the vote’s outcome, the other junta members forced him to accept it. Still, 42% of Chileans had voted for Pinochet in that free election of 1988.”
“With that ‘No!’ victory, Pinochet’s opponents at last gained the opportunity to return to power in the presidential elections scheduled for 1990. But the ‘No!’ campaigners had consisted of 17 different groups, with 17 different visions for Chile after Pinochet. Hence Chile risked going down the path trodden by the Allied democracies that had defeated Germany and Japan in WWII, and of which Winston Churchill had written, ‘The great democracies triumphed, and so were able to resume the follies which had so nearly cost them their life.’ A similar question now faced Chile: would Chileans resume their follies of intransigence and of the no-compromise posture that had cost many of them their lives, and that had cost their country its democratic government?
Of Pinochet’s leftist opponents who were not killed, about 100,000 fled into exile, beginning around 1973. They remained in exile for a long time, about 16 years (until 1989). They thus had ample time to reflect on their former intransigence. Many of them went to Western or Eastern Europe, where they spent years watching how the socialists, communists, and other leftists of European countries operated, and how those leftists fared. Those Chilean exiles who went to Eastern Europe tended to become depressed upon discovering that intransigent leftist idealists in power didn’t create national happiness. Those exiles who fled to Western Europe instead observed moderate social democracies in action, the resulting high standard of living, and a calmer political atmosphere than the one that had prevailed in Chile. They discovered that leftists didn’t have to be radical and intransigent, but that they could achieve many of their goals by negotiating and compromising with people who held different political views. The exiles experienced the collapse of the Soviet Union and of Eastern Europe’s communist governments, and China’s bloody suppression of demonstrations in 1989. All of those observations served to temper the extremism and communist sympathies of Chile’s leftists.
Already during the ‘No!’ campaign of 1988, ‘No!’ backers of disparate views realized that they couldn’t win unless they learned to cooperate with each other. They also realized that Pinochet still enjoyed wide support among Chile’s business community and upper class, and that they couldn’t win, or (if they did win) that they would never be permitted to assume power, unless Pinochet supporters could be assured of their personal safety in a post-Pinochet era. Painful as the prospect was, leftists in power would have to practice tolerance toward former enemies whose views they loathed, and whose behavior toward them had been horrible. They had to declare their willingness to build ‘a Chile for all Chileans’: the goal that Patricio Aylwin, Chile’s first democratically elected president after Pinochet, proclaimed in his inaugural address in 1990.
Once the alliance of the 17 ‘No!’ groups had thus won the referendum, the alliance’s leftists faced the necessity of convincing the alliance’s centrists of the Christian Democratic Party that a new leftist government wasn’t to be feared and wouldn’t be as radical as Allende’s leftist government had been. Hence leftist and centrist parties joined in an electoral alliance termed Concertacion. Leftists agreed that, if the alliance won the 1990 election (which it did), they would let the presidency alternate between a leftist and a centrist, and would let the Christian Democrats fill the presidency first. Leftists agreed to those conditions because they realized that that was the only way that they could eventually return to power.
In fact, Concertacion proceeded to win the first four post-Pinochet elections, in 1990, 1993, 2000, and 2006. The first two presidents were Christian Democrats. The next two presidents were the socialists Ricardo Lagos and Michelle Bachelet; the latter was Chile’s first woman president, and also the daughter of a general who had been tortured and imprisoned by Pinochet’s junta. In 2010 Concertacion was defeated by a right-wing president (Sebastian Pinera), in 2014 the socialist Bachelet returned to power, and in 2018 the right-winger Pinera won again. Thus, Chile after Pinochet reverted to being a functioning democracy, still anomalous for Latin America, but with a huge selective change: a willingness to tolerate, compromise, and share and alternate power.
Besides abandoning political intransigence, the other major change of direction for Chile’s new democratic Concertacion governments, compared to the democratic governments of the pre-Pinochet era, concerned economic policy. The new governments continued most of Pinochet’s free-market economic policies, because those policies were seen to have been largely beneficial in the long run. In fact, Concertacion governments carried those policies even further, reducing import tariffs until they came to average only 3% by 2007, the lowest in the world. Free trade agreements were signed with the U.S. and with the EU. The main change that Concertacion introduced into the military government’s economic policies was to increase government spending on social programs and to reform labor laws.
The result has been that, since the 1990 change of government, the Chilean economy has grown at an impressive rate, and that Chile leads the rest of Latin America economically. Average incomes in Chile were only 19% of U.S. averages in 1975; that proportion had risen to 44% by 2000, while average incomes in the rest of Latin America were dropping over the same period. Inflation rates in Chile are low, the rule of law is strong, private property rights are well protected, and corruption, once pervasive, has decreased. A consequence (and also a partial cause) of this improved economic climate was the rapid doubling of foreign investment in Chile during the first seven years after the return of democracy.
However, Chile’s economic performance is far from a uniformly distributed success. Economic inequality remains high, socioeconomic mobility is low, and Chile continues as before to be a land of contrasting wealth and poverty, although Chile’s rich people today tend to be business leaders rather than the families of former large land-owners. But the overall big improvement of the Chilean economy means that, while the relative gap between the rich and poor persists, the absolute economic status of the poor has become much better. The percentage of Chileans living below the poverty line dropped from its level of 24% during Pinochet’s last year in power to only 5% by 2003.”
“The ‘No!’ electoral victory of 1989 did not mean that Chile was free of Pinochet and the armed forces. Far from it: before stepping down as president, Pinochet obtained legislation naming him senator-for-life, permitting him to appoint several new Supreme Court justices, and retaining him as commander-in-chief of the armed forces until he finally retired in 1998 at age 83. That meant that Pinochet, and his implicit threat of another military coup, were constantly on the minds of Chile’s democratic leaders. One Chilean explained, ‘It’s as if, upon Nazi Germany’s surrender, Hitler hadn’t committed suicide but remained senator-for-life and the German army’s commander-in-chief!’ Further strengthening the Chilean military’s position, Pinochet’s constitution included a provision (still in effect today) specifying that 10% of Chile’s national copper sales revenue (yes: sales, not just profit) must be spent each year on the military budget. That gives Chile’s armed forces a financial basis far in excess of the money needed to defend Chile against any credible foreign threat — especially considering that Chile’s last (and only its second) war ended over a century ago in 1883, that Chile’s borders are protected by ocean and desert and high mountains, and that Chile’s neighbors are not dangerous. Instead, the only likely use of Chile’s armed forces is against the Chilean people themselves.
The Chilean constitution approved under Pinochet contained three provisions favoring the right wing. One specified that, of the Senate’s 35 members, 10 were not elected by the public but were instead designated by the president from a list of officials likely to consist only of right-wingers (e.g. former chiefs of the army and navy). Former presidents became appointed senators-for-life. A second provision (not overturned until 2015) specified that each Chilean congressional district elected two representatives, the first of whom required just a plurality of voters, but the other of whom required an 80% majority; that made it very difficult for any district to elect two leftists. The last provision required a five-sevenths majority to change the constitution — but it’s difficult in a democracy (especially one as fractured as Chile) to get five-sevenths of the electorate to agree on anything. As a result, although decades have passed since Pinochet was voted out of the presidency, Chile still operates under a modified version of his constitution that most Chileans consider illegitimate.
It’s painful for any country to acknowledge and atone for evil deeds that its officials committed against its own citizens or against citizens of other countries. It’s painful because nothing can undo the past, and often many of the perpetrators are still alive, unrepentant, powerful, and widely supported. Acknowledgment and atonement have been especially difficult for Chile, because Pinochet was supported by such a large minority of Chilean voters even in the uncoerced plebiscite of 1988, because Pinochet remained commander-in-chief of the armed forces, and because the democratic government had good reason to fear another military coup if it proceeded against military perpetrators. On two occasions — when Pinochet’s son was being investigated, and when a human rights commission was beginning its work of investigating the atrocities — soldiers did appear on the streets in full military garb. Their appearance was supposedly just a ‘routine exercise’ — but the implicit threat was obvious to everyone.
Patricio Aylwin, the first post-Pinochet president, proceeded cautiously. When he promised justice ‘insofar as it is possible,’ Chileans hopeful for a reckoning felt disillusioned and feared that his phrase was just a euphemism for ‘no justice.’ But Aylwin did establish a Truth and Reconciliation Commission, which in 1991 published the names of 3,200 Chileans who had been killed or ‘disappeared,’ and a second commission in 2003 reported on torture. Speaking on TV, Aylwin was nearly in tears as he begged the families of victims for forgiveness, on behalf of the Chilean government. Such heartfelt apologies by government leaders for government cruelties have been vanishingly rare in modern history; the closest parallel is German chancellor Willy Brandt’s equally heartfelt apology to the victims of Germany’s former Nazi government.
A turning point in the reckoning with Pinochet was the British arrest warrant issued against him in 1998 while he was visiting a London clinic for medical treatment. The warrant was issued at the request of a Spanish judge seeking extradition of Pinochet to Spain to answer for crimes against humanity, and for the killings of Spanish citizens in particular. Pinochet’s lawyers initially argued that Pinochet should be immune from prosecution because torture and killings are legitimate functions of government. When the British House of Lords eventually rejected that defense, Pinochet’s lawyers then claimed that he was old and infirm and should be released on humanitarian grounds. The lawyers allowed him to be photographed only while he was in a wheelchair. After Pinochet had spent 503 days under house arrest, Britain’s home secretary denied Spain’s extradition request, supposedly because Pinochet lacked the strength to testify at a trial, but possibly because of the help Pinochet’s government had given to Britain during Britain’s Falkland Islands War of 1982 against Argentina. Pinochet then immediately flew to Chile. Upon his plane’s arrival he was unloaded in a wheelchair, and then stood up and walked across the tarmac to shake hands with the Chilean generals present to greet and congratulate him.
But even Chilean rightists were shocked by a U.S. Senate subcommittee’s revelation that Pinochet had stashed $30 million in 125 secret U.S. bank accounts. While rightists had been prepared to tolerate torturing and killing, they were disillusioned to learn that Pinochet, whom they had considered different from and better than other dishonest Latin American dictators, stole and hid money. Chile’s Supreme Court stripped Pinochet of the immunity from prosecution that he had enjoyed as senator-for-life. Chile’s tax authority (the equivalent of the U.S.’s IRS) issued a complaint against Pinochet for filing false tax returns. (Perhaps the authorities were inspired by the example of the notorious American gangster Al Capone, who was finally sent to jail for federal income tax evasion.) Pinochet was then indicted for other financial crimes and for murders and placed under house arrest; his wife and four children were also arrested. But in 2002 he was declared unfit to stand trial because of dementia. He died of a heart attack in 2006, at the age of 91.
Eventually, hundreds of Chilean torturers and killers were indicted, and dozens of them were sent to prison — including the director of Pinochet’s secret intelligence service. Many older Chileans continue to regard the sentences as too harsh, and continue to regard Pinochet as a wonderful man who was unjustly persecuted. Many other Chileans regard the sentences as too mild, too few, too late, aimed mainly at low-ranking rather than high-ranking criminals, and resulting in their being sent to special comfortable resort-like prisons. For instance, not until 2015 did Chilean judges charge 10 military officers with killing the famous singer Victor Jara in 1973, and seven others with killing Rodrigo Rojas in 1986: 42 and 29 years, respectively, after those deeds. In 2010 Chile’s President Michelle Bachelet opened a Villa Grimaldi Museum in Santiago that documents in horrifying detail the tortures and killings under the military government. That would have been utterly unthinkable as long as Pinochet remained army commander-in-chief.
Chileans are still wrestling with the moral dilemma of how to weigh the positive and negative sides of their country’s former military government: especially, the dilemma of how to balance its economic benefits against its crimes. The dilemma is insoluble. A simple answer would be: Why even try to weigh the benefits against the crimes? Why not just acknowledge that the military government did both beneficial things and horrible things? But Chileans did have to weigh them in the 1988 plebiscite, when they were offered only the chance of voting ‘yes’ or ‘no’ to keeping Pinochet as president for eight more years, and when they couldn’t vote ‘yes but…’ or ‘no but…’ Faced with that choice, 42% of Chileans voted ‘yes,’ despite the sickening deeds that eventually went on display at the Villa Grimaldi Museum.”
“How did Chile emerge from almost 17 years of military repression and record-smashing government cruelty without even deeper trauma than it did suffer? While Chile today is still struggling with the aftermath of the Pinochet years, I’m pleasantly surprised that Chileans are not more tormented. For that outcome, Chileans’ national identity and pride get much of the credit. Chileans have made a big effort to remain different from other Latin American countries, and to govern themselves effectively. They’ve been willing to adhere to their motto of ‘Building a Chile for all Chileans,’ despite the powerful motives of so many Chileans not to accept other kinds of Chileans as belonging to that same fatherland. Without that national identity, Chile could not have escaped political paralysis, and could not have returned to being the most democratic and richest country in Latin America.”
“In May 1945 Nazi Germany was militarily completely defeated, many of its Nazi leaders committed suicide, and the whole country was occupied by its enemies. After WWII, there were still plenty of ex-Nazis in the German government, but they could not openly defend Nazi crimes. Thus, Germany did eventually deal publicly with Nazi crimes. At the opposite extreme, when the Indonesian army killed or arranged the killings of over half-a-million Indonesians in 1965, the Indonesian government behind those mass killings remained in power, and it is still in power today. Not surprisingly, even today, more than 50 years later, Indonesians hesitate to talk about those killings.
Chile is an intermediate case. The Chilean military government that ordered killings yielded peacefully to a democratic government. But the military leaders remained alive and retained much power. Chile’s new democratic government initially didn’t dare proceed against military criminals. Today, it is still proceeding cautiously. Why? Because the army might come back. Because there are still lots of Chileans who defend Pinochet. Because ‘a Chile for all Chileans’ means, unfortunately, a Chile that includes former war criminals.”
Indonesia
“My hotel in 1979 had lobby walls decorated with paintings telling the story of Indonesian history. All of the paintings showed events of just the previous 35 years. The event that was the subject of most paintings was termed the 1965 Communist Revolt. Paintings, and explanatory text below them, vividly depicted how communists tortured and killed seven generals; and how one of the generals that the communists tried to kill managed to escape from his house over a wall, but his five-year-old daughter was shot by accident and died. The exhibit left the impression that the torture and killing of those generals and the young girl were the most horrible acts that had ever happened in Indonesian history.
The exhibit made no mention of what followed the deaths of the generals: the murder of about 500,000 other Indonesians at the instigation of the Indonesian armed forces. Not mentioning those killings in an exhibit on Indonesian history is quite an omission, because, among mass killings around the world since WWII, only a few others have exceeded the Indonesian death toll. In the two decades since, during my return visits and lengthy stays in Indonesia, not once did I hear those killings mentioned by my government friends — until a change of government in 1998. It’s as if General Pinochet’s government had killed 100 times more Chileans than it actually did, but as if those killings were never mentioned by surviving Chileans, nor by Chilean accounts of Chilean history.
Both Chile and Indonesia experienced a breakdown of political compromise, a leftist effort to gain control of the government, and a military coup that ended that effort and installed a long-lasting dictatorship. Both countries illustrate the role of not one but two successive leaders, with distinctive but contrasting personalities.”
“Indonesia is a new country that didn’t become independent until 1945, and that didn’t even become unified as a colony until around 1910. It has high mountains, including many active volcanoes. One of them, Krakatoa, is famous for the most catastrophic eruption in recent history (1883), an eruption that blew out almost the whole island and injected enough ash into the atmosphere to change the world’s climate for the following year.
Geographically, Indonesia is the most splintered country in the world, with thousands of inhabited islands scattered over an expanse of 3,400 miles from west to east. For most of the last 2,000 years, there were indigenous states on some Indonesian islands. But none of them came to control most of the Indonesian archipelago, nor was there a name or concept for what we know today as Indonesia. Linguistically, Indonesia is one of the world’s most diverse countries, with more than 700 different languages. It’s also religiously diverse: while most Indonesians are Muslims, there are also large Christian and Hindu minorities, as well as Buddhists, Confucians, and followers of local traditional religions. Although there has been religious violence and rioting, it has been on a much smaller scale than in South Asia and the Middle East. Many Indonesians of different religions are relatively tolerant of each other.”
“Beginning after 1510, the Portuguese, then (from 1595 onwards) the Dutch, and then the British attempted to establish colonies in the island chain that is now Indonesia. British control eventually became confined to parts of Borneo, and the only Portuguese colony that survived was in the eastern half of the island of Timor. The most successful colonists were the Dutch, concentrated on Java, which had by far the largest native population (more than half of the population of modern Indonesia). In the 1800s, in order to make their colonial efforts pay for themselves and then produce a profit, the Dutch developed export plantations on Java and Sumatra. But it was only around 1910, more than three centuries after their arrival in the Indonesian archipelago, that the Dutch gained control of the whole far-flung island chain. As an example of how long much of the archipelago remained unexplored by the Dutch, it wasn’t until 1910 that a Dutch governor discovered that the eastern Indonesian island of Flores and the nearby small island of Komodo are home to the world’s largest lizard, the Komodo dragon. Although it’s up to 10 feet long and weighs up to several hundred pounds, it had remained unknown to Europeans for four centuries.
It should be emphasized that the word ‘Indonesia’ didn’t even exist until it was coined by a European around 1850. The Dutch called their colony the ‘Indies,’ the ‘Netherlands Indies,’ or the ‘Dutch East Indies.’ The archipelago’s inhabitants themselves did not share a national identity, nor a national language, nor a sense of unity in opposition to the Dutch. For example, Javanese troops joined Dutch troops to conquer the leading state on the island of Sumatra, a traditional rival of Java.
In the early 1900s the Dutch colonial government began efforts to switch from a purely exploitative policy for their colony to what they termed an ‘ethical policy’ — i.e. finally trying to do some good for Indonesians. For example, they opened schools, built railroads and irrigation projects on Java, set up local government councils in the main towns, and attempted to relieve Java’s overpopulation by supporting emigration to less densely populated outer islands (against the wishes of those islands’ native populations). But those efforts of Dutch ethical policy produced limited results — partly because the Netherlands itself was too small to put much money into Indonesia; and partly because the efforts of the Dutch, as well as of subsequently independent Indonesia, to improve people’s lives were frustrated by rapid population growth, creating more mouths to feed. Indonesians today consider the negative effects of Dutch colonialism to have far outweighed the positive ones.
By around 1910, increasing numbers of the inhabitants of the Dutch East Indies were developing the beginnings of a ‘national consciousness.’ That is, they began to feel that they were not just inhabitants of their particular Dutch-governed sultanate in some part of Java or Sumatra, but that they belonged to a larger entity called ‘Indonesia.’ Indonesians with those beginnings of a wider identity formed many distinct but often overlapping groups: a Javanese group that felt culturally superior, an Islamic movement seeking an Islamic identity for Indonesia, labor unions, a communist party, Indonesian students sent to the Netherlands for education, and others. That is, the Indonesian independence movement was fragmented along ideological, geographic, and religious lines, presaging problems that continued to plague Indonesia after independence.
The result was not only strikes, plots, and agitation against the Dutch, but also conflict between those Indonesian groups, making for a confused situation. Their actions against the Dutch nonetheless reached the point that in the 1920s the Dutch adopted a policy of repression and sent many of the leaders to what was in effect a concentration camp, in a remote disease-plagued area of Dutch New Guinea.
An important contribution to eventual Indonesian unity was the evolution and transformation of the Malay language, a trade language with a long history, into Bahasa Indonesia, the shared national language of all Indonesians today. Even the largest of Indonesia’s hundreds of local languages, the Javanese language of Central Java, is the native language of less than one-third of Indonesia’s population. If that largest local language had become the national language, it would have symbolized Java’s domination of Indonesia and thereby exacerbated a problem that has persisted in modern Indonesia, namely, fear of Javanese domination on the part of Indonesians of other islands. The Javanese language has the additional disadvantage of being hierarchy-conscious, with different words used in speaking to people of higher or lower status. But Bahasa Indonesia is easy to learn. Only 18 years after Indonesia took over Dutch New Guinea and introduced Bahasa there, I found it being spoken even by uneducated New Guineans in remote villages. Bahasa’s grammar is simple but supple at adding prefixes and suffixes to many word roots, in order to create new words with immediately predictable meanings. For example, the adjective meaning ‘clean’ is ‘bersih,’ the verb meaning ‘to clean’ is ‘membersihkan,’ the noun ‘cleanliness’ is ‘kebersihan,’ and the noun ‘cleaning up’ is ‘pembersihan.’”
“After Japan declared war on the U.S. in December 1941 and began its expansion throughout the Pacific Islands and Southeast Asia, it rapidly conquered the Dutch East Indies. The oil fields of Dutch Borneo, along with Malayan rubber and tin, were in fact a major motive behind Japan’s declaring war, perhaps the biggest single motive, because Japan itself lacked oil and had depended on American oil exports, which President Roosevelt had cut off in retaliation for Japan’s war against China and occupation of French Indo-China. The Borneo oil fields were the nearest alternative source of oil for Japan.
At first, Japanese military leaders claimed that Indonesians and Japanese were Asian brothers in a shared struggle for a new anti-colonial order. Indonesian nationalists initially supported the Japanese and helped to round up the Dutch. But the Japanese mainly sought to extract raw materials (especially oil and rubber) from the Dutch East Indies for the Japanese war machine, and they became even more repressive than the Dutch had been. As the war turned against the Japanese, in September 1944 they promised independence to Indonesians, though without setting a date. When Japan did surrender on August 15, 1945, the Indonesians declared independence just two days later, ratified a constitution, and founded local militias. But they quickly discovered that the defeat of the Dutch by the Japanese, then the promise of independence by the Japanese, and finally the defeat of the Japanese by the U.S. and its allies did not ensure independence for Indonesia. Instead, in September 1945 British and Australian troops arrived to take over from the Japanese, and then Dutch troops arrived with the aim of restoring Dutch control. Fighting broke out that pitted British and Dutch troops against Indonesian troops.
The Dutch, invoking the ethnic diversity and huge territorial extent of the Indonesian archipelago, and probably driven by their own motive of ‘divide and rule’ to retain control, promoted the idea of a federation for Indonesia. They set up separate federal states within areas that they reconquered. In contrast, many Indonesian revolutionaries sought a single unified republican government for all of the former Dutch East Indies. By a preliminary agreement reached in November 1946, the Dutch recognized the Indonesian Republic’s authority — but only in Java and Sumatra. However, by July 1947 the Dutch became exasperated and launched what they termed a ‘police action,’ with the goal of destroying the Republic. After a cease-fire, then another Dutch ‘police action,’ and UN and U.S. pressure, the Dutch gave way and agreed to transfer authority to the Republic. The final transfer took place in December 1949 — but with two big limitations that infuriated Indonesians and that took them 12 years to overturn. One was that the Dutch did not yield the Dutch half of New Guinea. Instead, they retained it under Dutch administration, on the grounds that New Guinea was much less developed politically than was the rest of the Dutch East Indies, that it was not even remotely ready for independence, and that most New Guineans are ethnically as different from most Indonesians as either group is from Europeans. The other limitation was that Dutch companies such as Shell maintained ownership over Indonesian natural resources.
Dutch efforts to re-establish control over Indonesia between 1945 and 1949 were carried out with brutal methods, vividly depicted in the Indonesian history paintings in my hotel lobby. (For instance, one showed two Dutch soldiers raping an Indonesian woman.) Simultaneously, other brutal methods were employed by Indonesians against other Indonesians, because within Indonesia itself there was much resistance to the Indonesian Republic, viewed by many eastern Indonesians and Sumatrans as Javanese-dominated. Even in the 80s, I still heard much resentment, and longing for political separation from Indonesia, on the part of my non-Javanese Indonesian friends. There was also opposition to the Republican leadership from Indonesian communists, culminating in a 1948 revolt that the Republican Army crushed, killing at least 8,000 Indonesian communists — a foretaste of what was to happen on a much larger scale after the failed coup of 1965.”
“The new nation faced crippling problems that had been carried over from the pre-independence era, some of which now became further exacerbated. As an ex-colony long governed by the Netherlands for the Netherlands’ benefit, independent Indonesia began its existence greatly underdeveloped economically. Population growth (at nearly 3% per year in the 60s) continued to place a heavy burden on the economy after independence, as it had during Dutch times. Many Indonesians still lacked a sense of national identity and continued to identify themselves as Javanese, Moluccans, Sumatrans, or members of some other regional population, rather than as Indonesians. The Indonesian language that would eventually contribute to Indonesian unity was not yet widely established; instead, the 700 local languages were used. Those who did consider themselves Indonesian differed in their visions for Indonesia. Some Indonesian Muslim leaders wanted Indonesia to become an Islamic state. The Indonesian Communist Party wanted Indonesia to become a communist state. Some non-Javanese Indonesians wanted either much regional autonomy or else outright regional independence, and staged regional revolts, which the Republic’s military eventually defeated.
The military itself was a focus of schisms, and of debates about its role. Should the military be controlled, as in other democracies, by civilian politicians, of whom Indonesian military officers were becoming increasingly suspicious? Or should the military instead be more autonomous and pursue its own policies for Indonesia? The military saw itself as the savior of the revolution and the bulwark of national identity, and it demanded a guaranteed voting bloc in parliament. The civilian government, on the other hand, sought to save money by eliminating military units, reducing the size of the officer corps, and pushing soldiers out of the military and off the government payroll. There were also internal disagreements between branches of the armed forces, especially disagreements pitting the air force against other branches. There were disagreements between army commanders themselves, especially between revolutionary regional commanders and conservative central commanders. Military leaders extorted money from other Indonesians and from businesses for army purposes, raised money by smuggling and by taxing radio ownership and electricity, and increasingly took over regional economies, thereby institutionalizing the corruption that today remains one of Indonesia’s biggest problems.
Indonesia’s founding president, Sukarno (1901–1970), had begun his political career already in Dutch times as a nationalist leader against the Dutch colonial government. (Like many Indonesians, Sukarno had only a single name, not a first name and a family name.) The Dutch sent him into exile, from which the Japanese brought him back. It was Sukarno who issued Indonesia’s Proclamation of Independence on August 17, 1945. Well aware of Indonesia’s weak national identity, he formulated a set of five principles termed Pancasila, which to this day serves as an umbrella ideology to unify Indonesia and was enshrined in the 1945 constitution. The principles are broad ones: belief in one god, Indonesian national unity, humanitarianism, democracy, and social justice for all Indonesians.
As president, Sukarno blamed Indonesia’s poverty on Dutch imperialism and capitalism, abrogated Indonesia’s inherited debts, nationalized Dutch properties, and turned over the management of most of them to the army. He developed a state-centered economy that the army, the civil bureaucracy, and Sukarno himself could milk for their benefit. Not surprisingly, Indonesian private enterprise and foreign aid both declined. Both the U.S. and British governments became alarmed and sought to destabilize Sukarno’s position, just as the U.S. had tried to destabilize Allende in Chile. Sukarno responded by telling the U.S. to ‘go to hell with your aid’; then in 1965 he expelled the American Peace Corps and withdrew from the UN, the World Bank, and the IMF. Inflation soared, and Indonesia’s currency (the rupiah) lost 90% of its value during 1965.
At the time that Indonesia became independent, it had had no history of democratic self-government. Its experience of government was instead that of Dutch rule, which in its final decades approximated a police state, as did Japanese rule after 1942. Fundamental to any functioning democracy are widespread literacy, recognition of the right to oppose government policies, tolerance of different points of view, acceptance of being outvoted, and government protection of those without political power. For understandable reasons, all of those prerequisites were weak in Indonesia. Hence during the 1950s, prime ministers and cabinets rose and fell in quick succession. In the September 1955 elections an astonishingly high 92% of registered voters went to the polls, but the outcome was a stalemate, because the four leading parties each obtained 15–22% of votes and parliamentary seats. They could not compromise and went into political gridlock. That breakdown of compromise among several parties equally matched in strength was similar to Chile’s — with the difference that Chile at least had an educated literate population and a long history of democratic government, whereas Indonesia had neither.
Beginning in 1957, President Sukarno ended the gridlock by proclaiming martial law, then replaced Indonesian democracy with what he termed ‘guided democracy,’ which he considered more suitable to Indonesia’s national character. Under ‘guided democracy,’ the Indonesian parliament was supposed to practice ‘mutual cooperation’ or ‘consensus through deliberation,’ instead of the usual democratic concept of the legislature as a setting in which parties compete. In order to ensure that parliament would mutually cooperate with Sukarno’s goals, more than half of the seats in parliament were no longer elected offices but were instead appointed by Sukarno himself and assigned to so-called ‘functional groups’ rather than political parties, the army being one such ‘functional group.’
Sukarno became convinced that he was uniquely capable of divining and interpreting the wishes (including the unconscious wishes) of the Indonesian people, and of serving as their prophet. After the 1955 Bandung conference of Asian and African states, Sukarno extended his goals to the world stage and began to view it as his personal responsibility to have Indonesia play a leading role in Third World anti-colonial politics at a time when Indonesia’s own internal problems were so pressing. In 1963 he let himself be declared president-for-life.
Sukarno launched two campaigns to translate his anti-colonial stance into deeds, by trying to annex two territories on the verge of independence. The first campaign was directed at Dutch New Guinea, which because of its ethnic distinctness the Dutch had refused to cede to Indonesia after the revolution. The Dutch launched a crash program to prepare New Guineans for independence, and New Guinea leaders adopted a national flag and a national anthem. But Sukarno claimed Dutch New Guinea for Indonesia, increased diplomatic pressure on the Dutch, and in 1961 ordered all three branches of the Indonesian armed forces to take over Dutch New Guinea by force.
The result was a political success for Sukarno, but a tragedy for many of the Indonesian troops involved, and for those Dutch New Guineans looking forward to independence. A small patrol boat was sunk by a Dutch warship, causing the deaths of many Indonesian sailors. Indonesian paratroops were dropped by Indonesian air force planes into Dutch New Guinea. Presumably out of fear of Dutch anti-aircraft capabilities during daylight hours, the paratroops were dropped blindly at night over forested terrain, in an incredible act of cruelty. The unfortunate paratroops floated down into a hot, mosquito-infested sago swamp, where those who survived impact on sago trees found themselves hanging from the trees by their parachutes. The even smaller fraction who managed to free themselves from their parachutes dropped or clambered down into standing swamp water. Few survived.
Despite those Dutch military successes, the U.S. government wanted to appear to support the Third World anti-colonial movement, and it was able to force the Dutch to cede Dutch New Guinea. As a face-saving gesture, the Dutch ceded it not directly to Indonesia but instead to the UN, which seven months later transferred administrative control (but not ownership) to Indonesia, subject to a future plebiscite. The Indonesian government then initiated a program of massive transmigration from other Indonesian provinces, in part to ensure a majority of Indonesian non-New Guineans in Indonesian New Guinea. Seven years later, a hand-picked assembly of New Guinean leaders voted under pressure for incorporation of Dutch New Guinea into Indonesia. New Guineans who had been on the verge of independence from the Netherlands launched a guerrilla campaign for independence from Indonesia that is continuing today, over half-a-century later.
Sukarno’s other campaign to translate his anti-colonial stance into deeds was directed at parts of Malaysia, a group of former British colonies. Malaysia consists of states on the Malay Peninsula of the Asian mainland that achieved independence in 1957, plus two ex-British colonies (Sabah and Sarawak) on the island of Borneo, which is shared with Indonesia and Brunei. Sabah and Sarawak joined independent Malaysia in 1963. Whereas Sukarno claimed an Indonesian right of inheritance to Dutch New Guinea as a former part of the Dutch East Indies, he could make no such claim to Malaysian Borneo. Nevertheless, encouraged by his success in Dutch New Guinea, Sukarno began what he termed a ‘confrontation’ with Malaysia in 1962, followed by military attacks on Malaysian Borneo in the next year. But the population of Malaysian Borneo showed no sign of wanting to join Indonesia, while British and Commonwealth troops provided effective military defenses, and the Indonesian army lost its appetite for confrontation.”
“During the 1960s a complex three-way power struggle unfolded among the strongest forces in Indonesia. One force was Sukarno, the charismatic leader and skilled politician who enjoyed widespread support among Indonesians as the father of their country’s independence, and as the first and (until then) only president. The second force was the armed forces, which monopolized military power. The third force was the Indonesian Communist Party (PKI, Partai Komunis Indonesia), which lacked military power but had become by far the strongest and best-organized political party.
But each of these three forces was divided and pulled in different directions. While Sukarno’s ‘guided democracy’ rested on an alliance between himself and the armed forces, Sukarno also aligned himself increasingly with the PKI as a counter-weight against the armed forces. Chinese Indonesians had become so alarmed by anti-Chinese sentiment in Indonesia that many had returned to China. Yet Indonesia simultaneously increased its diplomatic alliance with China and announced that it would soon imitate China by building its own atomic bomb — to the horror of the U.S. and Britain. The armed forces became divided among Sukarno’s supporters, PKI supporters, and officers who wanted the armed forces to destroy the PKI. Army officers infiltrated the PKI, which in turn infiltrated the army. To remedy its military weakness, in 1965 the PKI with Sukarno’s support proposed arming peasants and workers, ostensibly to serve as a fifth national armed forces branch along with the army, navy, air force, and police. In frightened response, anti-communist army officers reportedly set up a Council of Generals to prepare measures against the perceived growing communist threat.
This three-way struggle came to a climax around 3am on the night of September 30-October 1, 1965, when two army units with leftist commanders and 2,000 troops revolted and sent squads to capture seven leading generals (including the army’s commander and the minister of defense) in their homes, evidently to bring them alive to President Sukarno and to persuade him to repress the Council of Generals. At 7am on October 1 the coup leaders, having also seized the telecom building on one side of the central square in Jakarta, broadcast an announcement on Indonesian radio declaring themselves to be the 30 September Movement, and stating that their aim was to protect President Sukarno by pre-empting a coup plotted by corrupt generals who were said to be tools of the CIA and the British. By 2pm the leaders had made three more radio broadcasts, after which they fell silent.
But the coup was badly bungled. The seven squads assigned to kidnap the generals were untrained, jittery, and assembled at the last minute. They hadn’t rehearsed the kidnappings. The two most important squads, assigned to kidnap (not to kill) Indonesia’s two highest-ranking generals, were led by inexperienced low-ranking officers. The squads ended up killing three of the generals in their houses. A fourth general succeeded in escaping over the back wall of his house compound; his squad accidentally shot his five-year-old daughter and instead seized his adjutant, whom they mistook for the general himself. The squads succeeded in capturing alive only the remaining three generals, whom they nevertheless proceeded to murder instead of carrying out their instructions to bring the generals alive to Sukarno.
Despite the fact that the coup leaders included a commander of President Sukarno’s bodyguard, whose job it was to know where Sukarno was at all times, the leaders could not find Sukarno, who happened to be spending the night at the home of one of his four wives. A crucial error was that the coup leaders made no attempt to capture the headquarters of the Indonesian Army Strategic Reserve (Kostrad), located on one side of the central square, although coup leaders did capture the other three sides of the square. The coup leaders had neither tanks nor walkie-talkies. Because they closed down the Jakarta telephone system at the time that they occupied the telecom building, coup leaders trying to communicate with one another between different parts of Jakarta were reduced to sending messengers through the streets. Incredibly, the coup leaders failed to provide food and water for their troops stationed on the central square, with the result that a battalion of hungry and thirsty soldiers wandered off. Another battalion went to Jakarta’s Halim air force base, where they found the gates closed and spent the night loitering on the streets outside. The PKI leader who was apparently one of the coup organizers failed to alert and coordinate actions with the rest of the PKI, hence there was no mass communist uprising.
The commander of the Army Strategic Reserve was, after Sukarno, Indonesia’s second political leader with unusual qualities that influenced the course of history. He resembled Sukarno in having the confusingly similar name of Suharto, and in being Javanese and politically skilled. Suharto differed from Sukarno in being 20 years younger (1921–2008), not having played a significant role in the struggle against the Dutch colonial government, and being little known outside Indonesian army circles until the morning of October 1, 1965. When Suharto learned of the uprising early on that morning, he adopted a series of counter-measures while playing for time and trying to figure out a fast-moving and confusing series of developments. He summoned the commanders of the two army battalions on the central square to come meet him inside Strategic Reserve hq, where he told them that their units were in revolt and commanded them to take orders from him; they dutifully obeyed. The coup leaders, plus Sukarno, to whom the fast-moving situation may have been as confusing as it was to Suharto, now gathered at the Halim air force base, because the air force was the branch of the Indonesian armed forces most sympathetic to the communists. Suharto responded by sending reliable troops to capture first the telecom building, then Halim air force base, which the troops succeeded in doing with minimal fighting. At 9pm that evening, Suharto announced in a radio broadcast that he now controlled the Indonesian army, would crush the 30 September Movement, and would protect President Sukarno. The coup leaders fled from Halim base and from Jakarta, proceeded separately by train and plane to other cities in Central Java, and organized other uprisings in which other generals were killed. But those uprisings were suppressed by loyalist army troops within a day or two, just as had been the uprising in Jakarta.”
“To this day, many questions about the failed coup remain unanswered. What seems clear is that the coup was a joint effort by two sets of leaders: some junior military officers with communist sympathies, and one or more PKI leaders. But why did professional military officers stage such an amateurishly bungled coup, with such lack of military planning? Why didn’t they hold a press conference to enlist public support? Was the involvement of the PKI in the coup confined to just a few of its leaders? Was Communist China involved in planning and supporting the coup? Why didn’t the coup leaders include Suharto on their list of generals to be kidnapped? Why didn’t the coup forces capture the Kostrad hq on one side of the central square? Did President Sukarno know of the coup in advance? Did General Suharto know of the coup in advance? Did anti-communist generals know of the coup in advance but nevertheless allow it to unfold, in order to provide them with a pretext for previously laid plans to suppress the PKI?
The last possibility is strongly suggested by the speed of the military’s reaction. Within three days, military commanders began a propaganda campaign to justify round-ups and killings of Indonesian communists and their sympathizers on a vast scale. The coup itself initially killed only 12 people in Jakarta on October 1, plus a few other people in other cities of Java on October 2. But those few killings gave Suharto and the Indonesian military a pretext for mass murder. That response to the coup was so quick, efficient, and massive that it could hardly have been improvised spontaneously within a few days in response to unexpected developments. Instead, it must have involved previous planning that awaited only an excuse, which the bungled coup attempt of October 1–2 provided.
The military’s motives for that mass murder arose from Indonesia’s breakdown of political compromise and democratic government in the 50s and early 60s, culminating in a three-way power struggle in 1965. It appeared that the armed forces were starting to lose that struggle. As Indonesia’s largest and best-organized political party, the PKI threatened the army’s political power and the money that the army extracted from state-owned businesses, smuggling, and corruption. The PKI’s proposal to arm workers and peasants as a separate armed force threatened the army’s monopoly of military power. As subsequent events would show, President Sukarno alone could not resist the army. But Sukarno was looking to the PKI as a potential ally to serve as a counter-weight to the army. In addition, the military itself was divided and included communist sympathizers, who were the organizers of the coup (along with one or more PKI leaders). Hence the coup gave anti-communist army officers an opportunity to purge their political opponents within the army itself. Not surprisingly, army commanders alarmed by the PKI’s rising power prepared their own contingency plan, for which the coup offered a trigger. It remains unknown whether Suharto himself was already involved in drawing up that contingency plan, or whether (like Chile’s Pinochet) he became at the last minute a leader of a military take-over prepared by others.
On October 4 Suharto arrived at an area called Crocodile Hole, where the coup squads had thrown the bodies of the kidnapped generals down a well. In front of TV cameras, the decomposing bodies were pulled out of the well. On the next day, the generals’ coffins were driven through Jakarta’s streets, lined by thousands of people. The military’s anti-communist leadership quickly blamed the PKI for the murders, even though the murders had actually been carried out by units of the military itself. A propaganda campaign that could only have been planned in advance was immediately launched to create a hysterical atmosphere, warning non-communist Indonesians that they were in mortal danger from the communists, who were said to be making lists of people to kill, and to be practicing techniques for gouging out eyes. Members of the PKI’s women’s auxiliary were claimed to have carried out sadistic sexual torture and mutilation of the kidnapped generals. President Sukarno tried to minimize the significance of the October 1 coup attempt and objected to the scale of the military’s counter-measures, but the military had now wrested control of the situation from Sukarno. From October 5 onwards, the military began a round-up aimed at eliminating every member of the PKI and of every PKI-affiliated org, and all of the families of those members.
The PKI reaction was not what one would expect of an org that had been planning a coup. Throughout October and November, when PKI members were summoned to come to army bases and police stations, many came willingly, because they expected just to be questioned and released. The PKI could have supported the coup and thwarted military counter-measures by mobilizing railroad workers to sabotage trains, mechanics to sabotage army vehicles, and peasants to block roads; but it did none of those things.
Because the Indonesian killings were not carried out with the meticulous organization and documentation of the Nazi WWII concentration camp killings, there’s much uncertainty about the number of Indonesian victims. The highest estimates are about 2 million; the most widely cited figure is the contemporary estimate of half-a-million arrived at by a member of President Sukarno’s own fact-finding commission. Indonesian killing technology was much simpler than that of the Nazis: victims were killed one by one, with machetes and other hand weapons and by strangling, rather than by killing hundreds of people at once in a gas chamber. Indonesian disposal of bodies was also haphazard, rather than carried out in specially built large ovens. Nevertheless, what happened in Indonesia in 1965–1966 still ranks as one of the world’s biggest episodes of mass murder since WWII.
A common misconception is that the killings were mainly of Chinese Indonesians. No, most of the victims were non-Chinese Indonesians; the targets were Indonesian suspected communists and their affiliates. Another misconception is that the killings were a spontaneous explosion by a population of irrational, emotionally unstable, and immature people prone to ‘run amok,’ a Malay expression that refers to individuals who go crazy and become murderers. No, Indonesians aren’t intrinsically unstable and murderous. Instead, the Indonesian military planned and orchestrated the killings in order to protect its own interests, and the military’s propaganda campaign convinced many Indonesian civilians to carry out the killings in order to protect in turn their own interests. The military’s killing campaign was evil but not irrational: it aimed to destroy the military’s strongest opponents, and it succeeded.
The situation as of the end of October 1965 was thus that Suharto commanded the loyalty of some but not all military leaders. Sukarno was still president-for-life, was still revered as Indonesia’s founding father, was still popular among military officers and soldiers, and was politically skilled. Suharto couldn’t just push Sukarno aside, any more than some ambitious American general could have pushed George Washington aside halfway through our beloved founding father’s second term.
Suharto had previously been considered just an efficient general, and nothing more. But now he proceeded to display political skills exceeding even Sukarno’s. He gradually won the support of other military leaders, replaced military and civil service officers sympathetic to the PKI with officers loyal to him, and over the next 2.5 years proceeded slowly and cautiously to displace Sukarno while pretending to act on Sukarno’s behalf. In March 1966 Sukarno was pressured into signing a letter ceding authority to Suharto; in March 1967 Suharto became acting president, and in March 1968 he replaced Sukarno as president. He remained in power for another 30 years.”
“In contrast to Sukarno, Suharto did not pursue Third World anti-colonial policies and had no territorial ambitions outside the Indonesian archipelago. He concentrated instead on Indonesian domestic problems. In particular, Suharto ended Sukarno’s armed ‘confrontation’ with Malaysia over Borneo, rejoined the UN, abandoned Sukarno’s ideologically motivated alignment with Communist China, and aligned Indonesia instead with the West for economic and strategic reasons.
Suharto lacked a university education and had no understanding of economic theory. Instead, he placed Indonesia’s ‘official’ economy (in contrast to the unofficial economy described below) in the hands of highly qualified Indonesian economists, many of whom had obtained Berkeley degrees. That resulted in the nickname of ‘the Berkeley mafia.’ Under Sukarno, the Indonesian economy had become saddled with deficit spending resulting in heavy debt and massive inflation. Like Pinochet’s Chicago Boys in Chile, Suharto’s Berkeley mafia instituted economic reforms by balancing the budget, cutting subsidies, adopting a market orientation, and reducing Indonesia’s national debt and inflation. Taking advantage of Suharto’s abandonment of Sukarno’s left-leaning policy, the Berkeley mafia encouraged foreign investment and attracted American and European aid for developing Indonesia’s natural resources, especially its oil and minerals.
Indonesia’s other body of economic planning was the military. Suharto declared, ‘The armed forces wish to play a vital role in the process of modernizing the state…if the army stands neutral in the face of problems in consolidating the New Order, it disavows its role as well as the call of history…The military has two functions, that is, as an armed tool of the state and as a functional group to achieve the goals of the revolution.’ Just imagine an American general becoming president, and saying that about the U.S. army! In effect, the Indonesian military developed a parallel government with a parallel budget approximately equal to the official government budget. Under Suharto, military officers constituted more than half of Indonesia’s mayors, local administrators, and provincial governors. Local military officers had the authority to arrest and hold indefinitely anyone suspected of actions ‘prejudicial to security.’
Military officers founded businesses and practiced corruption and extortion on a huge scale, in order to fund the military and to line their private pockets. While Suharto himself didn’t lead an ostentatiously lavish lifestyle, his wife and children were reputed to practice enormous corruption. Without even investing their own funds, his children launched businesses that made them rich. When his family was then accused of corruption, Suharto became angry and insisted that their new wealth was just due to their skills as business people. Indonesians gave Suharto’s wife a nickname meaning ‘Madam Ten Percent,’ because she was said to extract 10% of the value of government contracts. By the end of Suharto’s reign, Indonesia was ranked among the most corrupt countries in the world.
Corruption pervaded all aspects of Indonesian life. For instance, while I was working for the WWF in Indonesia, a friend pointed out to me an Indonesian WWF office director and whispered that his nickname was ‘Mr. Corruption’ — because he was not just normally corrupt, but exceptionally corrupt; a boat that overseas WWF donors had bought for that particular WWF office had ended up as a private boat of Mr. Corruption. As another example of non-governmental corruption, whenever I checked in at a domestic airport, the check-in employees came out to me from behind the counter and demanded the excess baggage charges in cash for their own pockets.
Suharto replaced Sukarno’s governing principle of ‘guided democracy’ with what came to be known as the ‘New Order,’ which supposedly meant going back to the pure concepts of Indonesia’s 1945 constitution and to the five principles of Pancasila. Suharto claimed to be stripping away the bad changes subsequently introduced by Indonesia’s political parties, for which he had no use. He considered Indonesian people to be undisciplined, ignorant, susceptible to dangerous ideas, and unready for democracy. In his autobiography he wrote, ‘We recognize deliberation to reach the consensus of the people…we do not recognize opposition as in the West. Here we do not recognize opposition based on conflict, opposition which is just trying to be different…Democracy must know discipline and responsibility, because without both those things democracy means only confusion.’
These Suharto leitmotivs — that there is only one way, and that there should be no disputes — applied to many spheres of Indonesian life. There was only one acceptable ideology, Pancasila, which civil servants and members of the armed forces had to study under a bureaucratic indoctrination program. Labor strikes were forbidden; they were contrary to Pancasila. The only acceptable ethnic identity was uniformly Indonesian, so Chinese Indonesians were forbidden to use Chinese writing or to keep their Chinese names. National political unity admitted no local autonomy for Aceh, East Timor, Indonesian New Guinea, or other distinct regions. Ideally, Suharto would’ve preferred just one political party, but parliamentary elections contested by multiple parties were necessary for an Indonesian government to appear legitimate on the international scene. However, a single government ‘functional group’ named Golkar always won elections with up to 70% of the vote, while all other political parties were merged into two other functional groups, one of them Islamic and the other non-Islamic, which always lost elections. Thus, Indonesia under Suharto came to be a military state, much as it was in the last decade of Dutch colonial government — with the difference that the state was now run by Indonesians, rather than by foreigners.
The historical display in my 1979 hotel lobby reflected Suharto’s emphasis on the aborted 1965 coup as a Communist Party plot, portrayed as the defining moment in modern Indonesian history. At the huge Pancasila Monument erected in 1969 to commemorate the killings of seven generals, considered ‘seven heroes of the revolution,’ a solemn ceremony of remembrance and of re-dedication to Pancasila was (and still is) held each year. On September 30 every year, all Indonesian TV stations were required to broadcast, and all Indonesian schoolchildren were required to watch, a grim four-hour-long government-commissioned film about the seven kidnappings and killings. There was of course no mention of the half-a-million Indonesians killed in retaliation. Not until a dozen years later were most political prisoners released.
Indonesia’s parliament reelected Suharto as president for one five-year term after another. After nearly 33 years, just after parliament had acclaimed him as president for a seventh five-year term, his regime collapsed quickly and unexpectedly in May 1998. It had been undermined by a combination of many factors. One was an Asian financial crisis that reduced the value of Indonesia’s currency by 80% and provoked rioting. Another was that Suharto himself, at age 77, had grown out of touch with reality, lost his political skills, and been shaken by the 1996 death of his wife, his closest partner. There was widespread public anger at corruption and at the wealth accumulated by his family. Suharto’s own successes had created a modern industrialized Indonesian society, whose citizens no longer tolerated his insistence that they were unfit to govern themselves. The Indonesian military evidently concluded, as had the Chilean military after the 1988 ‘No!’ vote, that it couldn’t stop the wave of protests, and that Suharto (like Pinochet) should resign before the situation got out of control.
In 1999, the year after Suharto’s fall, Indonesia carried out its first relatively free elections in more than 40 years. Since then, Indonesia has had a series of elections with voter turnouts far higher than voter turnouts in the U.S.: turnouts of 70–90%, whereas U.S. presidential voter turnouts barely reach 60%. In 2014 Indonesia’s latest presidential election was won by an anti-establishment civilian, the former governor of Jakarta, Joko Widodo, whose defeated opponent was an army general. Corruption has decreased, and sometimes it gets punished.”
“The bad legacies of the 1965 failed coup crisis are obvious. Worst are the mass murder of 500,000 Indonesians, and the imprisonment of 100,000 for more than a decade. Massive corruption reduced Indonesia’s rate of economic growth below the level that it would have enjoyed if so much money had not been diverted into the pockets of the military, running its own parallel government with a parallel budget. Suharto’s belief that his subjects were incapable of governing themselves postponed for several decades the opportunity for Indonesians to learn how to govern themselves democratically.
From the events of 1965, the Indonesian armed forces drew the lesson that success would be achieved by using force and killing people, rather than by solving problems that make people dissatisfied. That policy of murderous army repression has cost Indonesia dearly in Indonesian New Guinea, in Sumatra, and especially on the eastern Indonesian island of Timor, which had been divided politically between a Portuguese colony in the east and Indonesian territory in the west. When Portugal was shedding its last colonies in 1974, all geographic logic argued for East Timor becoming another province of Indonesia, which already accommodated so many provinces with different cultures, languages, and histories. Of course one can object that national boundaries aren’t shaped just by geographic logic, but East Timor is the eastern half of one small island in a long chain of many islands, all the rest of which are wholly Indonesian. Had the Indonesian government and army displayed even a minimum of tact, they might have negotiated an arrangement to incorporate East Timor with some autonomy into Indonesia. Instead, the Indonesian army invaded, massacred, and annexed East Timor. Under international pressure, and to the horror of the Indonesian army, Indonesia’s President Habibie, who succeeded Suharto, permitted a referendum on independence for East Timor in 1999. The population of course voted overwhelmingly for independence. Thereupon, the Indonesian army organized pro-Indonesia militias to massacre yet again, forcibly evacuated much of the population to Indonesian West Timor, and burned most of the new country’s buildings — to no avail, as international troops restored order and East Timor eventually became independent as the nation of Timor-Leste. The costs to the East Timorese were that about a quarter of the population died, and that survivors now constitute Asia’s poorest mini-nation, whose per-capita income is one-sixth that of Indonesia. The costs to Indonesians were that they now have in their midst a separate nation with sovereignty over a potentially oil-rich seabed whose revenues will not flow to Indonesia.
Hideous as it was in other respects, the Suharto regime did have positive legacies. It created and maintained economic growth, even though that growth was reduced by corruption. It attracted foreign investment. It concentrated its energy on Indonesia’s domestic problems, rather than dissipating it on world anti-colonial policies or on the effort to dismantle neighboring Malaysia. It promoted family planning, and thereby addressed one of the biggest fundamental problems that have bedeviled independent Indonesia as well as the previous Dutch colonial regime. (Even in the most remote villages of Indonesian New Guinea, I saw government posters describing family planning.) It presided over a green revolution that, by providing fertilizer and improved seeds, greatly increased the yields of rice and other crops, thereby massively raising agricultural productivity and Indonesians’ nutrition. Indonesia was under great strain before 1965; today, Indonesia shows no imminent risk of falling apart, although its fragmentation into islands, territorial extent of thousands of miles, hundreds of indigenous languages, and coexistence of religions were all recipes for disaster. Eighty years ago, most Indonesians didn’t think of themselves as Indonesians; now, Indonesians take their national identity for granted.
But many people give the Suharto regime zero credit rather than some credit. They object that Indonesia might have made those same advances under a regime other than Suharto’s: a historical ‘what if?’ question that can’t be answered with confidence.”
“Suharto did often illustrate honest, realistic, Machiavellian self-appraisal. In gradually pushing aside Indonesia’s popular founding father and first president Sukarno, Suharto proceeded cautiously, figured out at each step what he could get away with and what he couldn’t get away with, and eventually succeeded in replacing Sukarno, even though it took time. Suharto was also realistic in abandoning Sukarno’s foreign policy ambitions beyond Indonesia’s means, including guerrilla warfare against Malaysia and the attempt to lead a world anti-colonial movement.”
“For reconciliation after killings provoked by the breakdown of political compromise, Indonesia stands at the opposite extreme of Finland, with Chile intermediate: rapid reconciliation in Finland after the Finnish Civil War; much open discussion and trials of perpetrators in Chile, but incomplete reconciliation; and very limited discussion or reconciliation, and no trials, in Indonesia. Factors responsible for Indonesia’s lack of trials include the country’s weak democratic traditions; the fact that post-Pinochet Chile’s motto ‘a fatherland for all Chileans’ found less echo in post-Suharto Indonesia; and, most of all, that Indonesia remained a military dictatorship for 33 years after the mass killings, and that the armed forces remain much more powerful in Indonesia today than in Chile.”
“In the 80s and 90s, the operations of Indonesian commercial airlines were often careless and dangerous. In addition to being shaken down for bribes and diverted excess baggage charges, I experienced one flight on which large fuel drums were placed unsecured in the passenger cabin, the steward remained standing during take-off, and seatbelts and vomit bags for passengers (including one who was vomiting) were lacking. During another flight, the pilot and co-pilot were so absorbed in chatting with the stewardesses through the open door that they failed to notice that they were approaching the runway at too high an altitude, tried to make up for their neglect by going into a steep dive, had to brake hard on landing, and succeeded in stopping the plane only 20 feet short of the runway perimeter ditch. But by 2012 Indonesia’s leading airline, Garuda, was rated as one of the best regional carriers in the world. Every time since 2012 that I’ve checked in with overweight baggage, I’ve been sent to Garuda’s excess baggage office to pay the charges by credit card to Garuda itself in return for a receipt. I was regularly asked for bribes until 1996; I’ve never been asked for a bribe since 2012.
Until 1996, I would have regarded the phrase ‘Indonesian government patrol boat’ as an oxymoron. I had become accustomed to the Indonesian military’s activities as creating a need for patrolling, rather than as carrying out patrolling.
When I landed on the coast of Indonesian New Guinea in 2014, I was astonished to encounter big or colorful birds, which had formerly been the prime target of illegal hunting, now calling and displaying near and even in coastal villages: imperial pigeons, hornbills, Palm Cockatoos, and birds of paradise. Previously, those species were shot out or trapped near villages, and encountered only far from habitation.
In a New Guinea village, an Indonesian policeman had recently shot four New Guineans; in that district, the district administrator had been very corrupt. Of course, so what else is new? The difference, this time, was that both the policeman and the administrator were put on trial and sent to jail; that wouldn’t have happened before.
While these are signs of progress, they shouldn’t be exaggerated. Many of Indonesia’s old problems still persist, to varying degrees. Bribery is reportedly still widespread. My own Indonesian friends still don’t talk about the mass killings of 1965: my younger friends today weren’t alive then, and my other friends who were alive in 1965 have remained silent about it to me, although American colleagues tell me that they do encounter many Indonesians interested in the killings. There’s still fear of Indonesian military interference in Indonesian democracy: when a civilian politician defeated a general in the 2014 elections, anxious months passed before it became clear that the general wouldn’t succeed in his efforts to annul the election. In 2013 a rifle shot from the ground broke the windshield of my chartered helicopter over Indonesian New Guinea; it remained uncertain whether the shot had been fired by New Guinean guerrillas still fighting for independence, or by Indonesian troops themselves feigning guerrilla activity in order to justify a crackdown.
Indonesia is one of the nations with the shortest national history and the greatest linguistic diversity, and initially was at serious risk of its territory falling apart. The former Dutch colony of the Dutch East Indies might have dissolved into separate nation-states, just as the former French colony of Indo-China did dissolve into Vietnam, Cambodia, and Laos. That dissolution was evidently the intention of the Dutch when they tried to establish separate federal states within their colony in the late 1940s, in order to undermine the nascent unified Republic of Indonesia.
But Indonesia didn’t fall apart. It built from scratch, surprisingly quickly, a sense of national identity. That sense grew partly spontaneously, and partly was reinforced by conscious government efforts. One basis of that sense is pride in the revolution of 1945–1949, and in the throwing-off of Dutch rule. The government reinforces that spontaneous sense of pride by retelling the story of 1945–1949, with considerable justification, as a heroic struggle for national independence. Indonesians are proud of their wide territorial extent. Another basis of national identity is Indonesians’ rapid adoption of their easily learned and wonderfully supple Bahasa Indonesia as the national language, coexisting with the 700 local languages.
In addition to those underlying roots of national identity, the Indonesian government continues to try to reinforce identity by emphasizing the five-point framework of Pancasila, and by annual ceremonies remembering the seven murdered generals at Jakarta’s Pancasila Monument. I haven’t seen another hotel lobby display like the account of the ‘communist coup’ that greeted me in 1979. Indonesians now feel sufficiently secure in their national identity that they don’t need misleading accounts of a ‘communist coup’ to reinforce it.”
Germany
“Germany’s surrender on May 7–8, 1945, marked the end of WWII in Europe. The situation in Germany as of that date was as follows.
The Nazi leaders had committed or were about to commit suicide. Germany’s armies, after conquering most of Europe, had been driven back and defeated. About 7 million Germans had been killed, either as soldiers, as civilians killed by bombs, or as civilian refugees killed while fleeing, particularly from the advancing Soviet armies in the east taking revenge for the horrible things that the German military had done to Soviet citizens.
Tens of millions of Germans who survived had been traumatized by severe bombing. Virtually all of Germany’s major cities had been reduced to rubble, from bombing and fighting. Between 1/4 and 1/2 of the housing in German cities had been destroyed.
1/4 of Germany’s former territory was lost to Poland and to the Soviet Union. What remained of Germany was divided into four occupation zones that would eventually become two separate countries.
About 10 million Germans were homeless refugees. Millions were searching for missing family members, of whom some miraculously turned up alive years later. But most never turned up, and for many of them the time and place and circumstances of their deaths remain forever unknown.
As of 1945, the German economy had collapsed. The German currency was rapidly losing its value through inflation. The German people had undergone 12 years of Nazi programming. Virtually all German government officials and judges had been convinced or complicit Nazis, because they had had to swear a personal oath of allegiance to Hitler in order to hold a government job. German society was authoritarian.
Today, Germany is a liberal democracy. Its economy is the fourth largest in the world, and it’s one of the world’s leading export economies. Germany is the most powerful country in Europe west of Russia. It established its own stable currency; then it played a leading role in establishing a common European currency, and in establishing the EU that now joins it peacefully with the countries that it had so recently attacked. Germany has largely dealt with its Nazi past. German society is much less authoritarian than it once was.”
“WWII’s victorious Allies carved Germany into four occupation zones: American in the south, French in the southwest, British in the northwest, and Soviet in the east. While Berlin lay in the middle of the Soviet zone, it too was divided into occupation sectors of all four powers, like an island of non-Soviet occupation within the Soviet zone. In 1948 the Soviets imposed a blockade on American, British, and French overland access to their enclaves within Berlin, in order to compel the three Western Allies to abandon their enclaves. The Allies responded with the Berlin airlift and supplied Berlin by air for nearly a year, until the Soviets gave up and abandoned their blockade in 1949.
Also in 1949, the Allies joined their zones into one entity, known as West Germany. The Soviet zone became East Germany. Today, East Germany is routinely dismissed as a failed communist dictatorship that eventually collapsed and became in effect absorbed by West Germany. It’s easy now to forget that not just Soviet brute force but also German communist idealism contributed to East Germany’s founding, and that numerous German intellectuals chose to move to East Germany from West Germany or from exile overseas.
But the standard of living and freedom in East Germany eventually fell far behind that of West Germany. While American economic aid was pouring into West Germany, the Soviets imposed economic reparations on their zone, dismantled and carted away whole factories to Russia, and reorganized East German agriculture as collective farms. Increasingly, over the next two generations until re-unification in 1990, East Germans grew up unable to learn the motivation, acquired by people in Western democracies, to work hard to better their lives.
As a result, East Germans began fleeing to the West. Hence in 1952 East Germany sealed its borders to the West, but East Germans could still escape by passing from East Berlin into West Berlin, then flying from West Berlin to West Germany. The pre-war public transport system in Berlin included lines that connected West and East Berlin, so that anyone in East Berlin could get into West Berlin just by hopping on a train.
In 1953 dissatisfaction in East Germany blew up in a strike that turned into a rebellion, crushed by Soviet troops. Dissatisfied East Germans continued to escape to the West by way of the Berlin public transport system. Finally, on the night of August 13, 1961, the East German regime suddenly closed the East Berlin U-Bahn stations and erected a wall between East and West Berlin, patrolled by border guards who shot and killed people trying to cross the wall. The East Germans justified the wall by claiming that it was built to protect East Germany from West German infiltrators and criminals, rather than admitting that it was aimed at preventing dissatisfied East Germans from fleeing to the West. The Western Allies didn’t dare to breach the wall, because they knew that they were powerless to do anything for a West Berlin surrounded by East German and Russian troops.
From then on, East Germany remained a separate state from which there was no possibility of fleeing to West Germany without high probability of being killed at the border. (Over a thousand Germans died in the attempt.) There was no realistic hope for the re-unification of Germany, given the polarization between the Soviet Union and the communist Eastern European bloc on the one hand, and the U.S. and Western Europe on the other. It was as if the U.S. became divided at the Mississippi River between a communist eastern U.S. and a democratic western U.S., with no prospect of re-unification for the foreseeable future.
As for West Germany just after WWII, one policy considered by the victorious Western Allies was to prevent it from ever rebuilding its industries, to force its economy to revert just to agriculture under the so-called Morgenthau Plan, and to extract war reparations as the Allies had done after WWI and as the Soviets were now doing in East Germany. That strategy stemmed from the widespread Allied view that Germany had been responsible not only for instigating WWII under Hitler but also for instigating WWI under Kaiser Wilhelm II (a much-debated historical question), and that permitting Germany to re-industrialize could lead to yet another world war.
What caused that Allied view to change was the development of the Cold War, and the resulting realization that the real risk of another world war came not from Germany but from the Soviet Union. That fear was the dominating motive underlying U.S. foreign policy in the decades following WWII. The communist takeovers of all Eastern European countries already occupied by Soviet troops, Soviet acquisition of atomic bombs and then of hydrogen bombs, the 1948–1949 Soviet attempt to blockade and strangle the Western enclave in Berlin, and the strength of communist parties even in some Western European democracies (especially Italy) made Western Europe seem the most likely site for the Cold War to explode into another world war.
From that perspective, West Germany, lying in the center of Europe, and bordering on communist East Germany and Czechoslovakia, was crucial to the freedom of Western Europe. The Western Allies needed West Germany to become strong again, as a bulwark against communism. Their other motives for wanting Germany to become strong again were to reduce the risk that a weak and frustrated Germany might descend again into political extremism (as had happened after WWI), and to reduce the economic costs to the Allies of having to continue to feed and support an economically weak West Germany.
After 1945, it took several years, during which the West German economy continued to deteriorate, for that change of view to mature among the Western Allies. Finally, in 1948 the U.S. began to extend to West Germany the Marshall Plan economic aid that the U.S. had already begun to provide to other Western European countries in 1947. Simultaneously, West Germany replaced its weak and inflated currency with a new currency, the Deutsche Mark. When the Western Allies merged their occupation zones into a single West Germany, they retained veto power over its legislation. However, West Germany’s first chancellor, Konrad Adenauer, proved skilled at exploiting American fears of a communist assault, in order to obtain Allied acquiescence to delegate more and more authority to West Germany and less and less to the Allies. Adenauer’s economics minister, Ludwig Erhard, instituted modified free-market policies and utilized Marshall Plan aid to fuel a spectacularly successful economic recovery that became known as the ‘economic miracle.’ Rationing was abolished, industrial output and living standards soared, and the dream of being able to buy a car and a home became reality for West Germans.
West Germany was now more prosperous and contented than Britain. Note the irony: Germany had lost WWII and Britain had won it, but it was West Germany rather than Britain that then created the economic miracle. Politically, by 1955 West Germany had regained sovereignty, and Allied military occupation ended. After the Allies had fought two world wars in order to defeat and disarm Germany, West Germany began to rearm and to rebuild an army — not at its own initiative, but (incredibly!!) at Western urging and against a vote of the West German parliament itself, so that West Germany would have to share with the Allies the burden of defending Western Europe. From a 1945 perspective, that represented the most astonishing change in American, British, and French policy toward Germany.
The West German economy has been characterized by relatively good labor relations, infrequent strikes, and flexible conditions of employment. Employees and employers tacitly agree that employees won’t strike, so that businesses can prosper, and that employers will share the resulting business prosperity with their workers. German industry developed an apprentice system that it still has today, in which young people become apprenticed to companies that pay them while they’re learning their trade. At the end of the apprenticeship they then have a job with that company. Today, Germany has Europe’s largest economy.”
“At the end of WWII, the Allies prosecuted the 24 top surviving Nazi leaders at Nuremberg for war crimes. Ten were condemned to death, one of whom succeeded in committing suicide by poison during the night before his scheduled execution. Seven others were sentenced to long or lifelong prison terms. The Nuremberg court also tried and sentenced numerous lower-level Nazis to shorter prison terms. The Allies subjected much larger numbers of Germans to ‘denazification’ proceedings, consisting of examining their Nazi past and re-educating them.
But the Nuremberg trials and denazification proceedings didn’t solve the legacies of Nazism for Germans. Millions of lower-level Germans who had either been convinced Nazis or had followed Nazi orders were not prosecuted. Because the trials were conducted by the Allies rather than by Germans themselves, the prosecutions did not involve Germans taking responsibility for German actions. In Germany the trials came to be dismissed as mere revenge taken by the victors upon the vanquished. West Germany’s own court system also carried out its own prosecutions, but their scope was initially limited.
A practical problem for both the Allies and the Germans themselves in developing a functioning post-war government in Germany was that any government requires officials with experience. But as of 1945, the vast majority of Germans who had acquired experience in government acquired it under the Nazi government, which meant that all potential post-war German government officials (including judges) had either been convinced Nazis or at the very least had cooperated with the Nazis. The sole exceptions were Germans who had either gone into exile or had been sent by the Nazis to concentration camps, where they couldn’t acquire experience in governing. For example, West Germany’s first chancellor after the war was Konrad Adenauer, a non-Nazi whom the Nazis had driven out of his office as mayor of Cologne. Adenauer’s policy upon becoming chancellor was described as ‘amnesty and integration,’ which was a euphemism for not asking individual Germans what they had been doing during the Nazi era. Instead, the government’s focus was overwhelmingly on the urgent tasks of feeding and housing tens of millions of underfed and homeless Germans, rebuilding Germany’s bombed cities and ruined economy, and re-establishing democratic government after 12 years of Nazi rule.
As a result, most Germans came to adopt the view that Nazi crimes were the fault of just a tiny clique of evil individual leaders, that the vast majority of Germans were innocent, that ordinary German soldiers who had fought heroically against the Soviets were guiltless, and that (by the mid-50s) there were no further important investigations of Nazi crimes left to be carried out. Further contributing to that failure of the West German government to prosecute Nazis was the widespread presence of former Nazis among post-war government prosecutors themselves: for instance, it turned out that many of the 47 officials in the West German federal criminal bureau, as well as many members of the West German intelligence service, had been leaders of the fanatical Nazi SS org. Germans who had been in their 30s or 40s during Nazi rule continued to defend the Nazi era in private, listening to recorded Hitler speeches with pleasure and saying that the extermination of millions of Jews was mathematically impossible and the biggest lie ever told.
In 1958 the justice ministers of all West German states finally set up a central office to pool their efforts to prosecute Nazi crimes committed anywhere inside and even outside West German territory. The leading figure in those prosecutions was a German Jewish lawyer named Fritz Bauer, who’d been a member of the anti-Nazi Social Democratic Party and had been compelled to flee Germany in 1935. He began prosecuting cases as soon as he returned to Germany in 1949. From 1956 until his death in 1969 he served as chief prosecutor for the German state of Hessen. The central principle of Bauer’s career was that Germans should sit in judgment on themselves. That meant prosecuting ordinary Germans, not just the leaders whom the Allies had prosecuted.
Bauer first became famous for the Auschwitz trials, in which he prosecuted low-level Germans who’d been active at that largest of the Nazi extermination camps. The Auschwitz personnel whom he prosecuted consisted of very minor officials, such as clothes room managers, pharmacists, and doctors. He then went on to prosecute low-level Nazi police; German judges who had ruled against Jews or against German resistance leaders or had issued death sentences; Nazis who had persecuted Jewish business people; those involved in Nazi euthanasia, including doctors, judges, and euthanasia personnel; officials in the German foreign office; and, most disturbing to the German people, German soldiers guilty of atrocities particularly on the eastern front — disturbing because of the widespread German belief that atrocities had been committed by fanatical groups such as the SS but not by ordinary German soldiers.
In addition to those prosecutions, Bauer tried to track down the most important and most evil Nazis who had disappeared after the war. He received information about the location of Adolf Eichmann, who’d organized the round-up of Jews and had fled to Argentina. Bauer concluded that he couldn’t safely pass that information to the German Secret Service for them to capture and punish Eichmann, because he feared that they would just tip off Eichmann and allow him to escape. Instead, he relayed the news of Eichmann’s whereabouts to the Israeli Secret Service, which eventually succeeded in kidnapping Eichmann in Argentina, secretly flying him to Israel on an El Al jet, putting him on public trial, and eventually hanging him; the trial drew worldwide attention not just to Eichmann but to the whole subject of individual responsibility for Nazi crimes.
Bauer’s prosecutions revealed to Germans of the 1960s what Germans of the 1930s and 1940s had been doing during the Nazi era. The Nazi defendants being prosecuted by Bauer all tended to offer the same set of excuses: I was merely following orders; I was conforming to the standards and laws of my society at the time; I was not the person who had responsibility for those people getting killed; I merely organized the railroad transport of Jews to extermination camps; I didn’t personally kill anyone; I was blinded by belief in authority and ideology proclaimed by the Nazi government, and that made me incapable of recognizing that what I was doing was wrong.
Bauer’s response, which he formulated again and again at the trials and in public, was as follows. Those Germans whom he was prosecuting were committing crimes against humanity. The laws of the Nazi state were illegitimate. One cannot defend one’s actions by saying that one was obeying those laws. There is no law that can justify a crime against humanity. Everybody must have his own sense of right and wrong and must obey it, independently of what a state government says. Anyone who takes part in what Bauer called a murder machine, such as the Auschwitz extermination apparatus, thereby becomes guilty of a crime. In addition, it became clear that many of the defendants whom he put on trial, and who gave as an excuse that they did what they did because they were forced to do it, were acting not out of compulsion but out of their own convictions.
In reality, many, perhaps most, of Bauer’s prosecutions failed: the defendants were often acquitted by German courts even in the 1960s, and Bauer himself was frequently the target of verbal attacks and death threats. The significance of Bauer’s work was instead that he, a German, in German courts, demonstrated to the German public again and again, in excruciating detail, the beliefs and deeds of Germans during the Nazi era. Nazi misdeeds were not just the work of a few bad leaders. Instead, masses of ordinary German soldiers and officials, including many who were now high-ranking officials of the West German government, had carried out Nazi orders, and had therefore been guilty of crimes against humanity. Bauer’s efforts formed an essential background to the German student revolts of 1968.”
“Since the 1970s, German schoolchildren have been taught at length about Nazi atrocities, and many of them are taken on school outings to former concentration camps (KZs) that have been turned into exhibits. Such national facing-up to past crimes isn’t to be taken for granted — no other country takes that responsibility remotely as seriously as does Germany. Indonesian children are still taught nothing about the mass killings of 1965; young Japanese are taught nothing about Japan’s war crimes; and it’s not national policy for American children to be taught in grim detail about American crimes in Vietnam, and against Native Americans, and against African slaves.”
“Revolts and protests, especially by students, spread through much of the free world in the 1960s. They began in the U.S. with the Civil Rights Movement, protests against the Vietnam War, the Free Speech Movement at UC Berkeley, and the movement called Students for a Democratic Society. Student protests were also widespread in France, Britain, Japan, Italy, and Germany. In all these countries, the protests partly represented a revolt of the younger generation against the older one. But that confrontation of generations took a particularly violent form in Germany for two reasons. First, the Nazi involvement of the older generation of Germans meant that the gulf between the younger and the older generation was far deeper there than it was in the U.S. Second, the authoritarian attitudes of traditional German society made older and younger generations especially scornful of each other. While protests leading to liberalization were growing in Germany throughout the sixties, the lid blew off the protests in 1968. Why 1968?
Changes from year to year have been more rapid and profound in Germany than in the U.S. Americans can’t learn a lot about a person just by knowing their birth year, but Germans explain themselves to each other by saying, ‘My year of birth was 1945.’ That’s because all Germans know that their fellow citizens went through very different life experiences, depending on when they were born and were growing up.
Germans born in 1937 didn’t grow up with what we’d recognize as normal lives. All of them had bad things happen to them as children, due to the war — like being orphaned, watching the district where their parents lived being burned, losing siblings, being separated from parents for a decade, sleeping under a bridge because their towns were bombed every night and it was unsafe to sleep in a house, and being sent by parents to steal coal from a railroad yard so the family could stay warm. Thus, Germans born in 1937 were old enough to have been traumatized by memories of the war, and by the chaos and poverty that followed it, and by the closure of their schools. But they weren’t old enough to have had Nazi views instilled into them by the Nazi youth org called the Hitler Jugend. Most of them were too young to be drafted into the new West German army established in 1955.
Those facts about the different experiences of Germans born in different years help explain why Germany experienced a violent student revolt in 1968. On average, the German protesters had been born around 1945, just at the end of the war. They were too young to have been raised as Nazis, to have experienced the war, or to remember the years of chaos and poverty after the war. They grew up mostly after Germany’s economic recovery, in economically comfortable times. They weren’t struggling to survive; they enjoyed enough leisure and security to devote themselves to protest. In 1968 they were in their early 20s. They were teens during the 1950s and early 1960s, when Fritz Bauer was revealing the Nazi crimes of ordinary Germans of their parents’ generation. The parents of protesters born in 1945 would themselves have been born mostly between 1905 and 1925, meaning that they were viewed by their children as the Germans who had voted for Hitler, obeyed him, fought for him, or been indoctrinated in Nazi beliefs by Nazi youth and school orgs.
While Fritz Bauer was publishing his findings in the 60s, most of the parents of young 1945-born Germans didn’t talk about Nazi times but instead retreated into their world of work and the post-war economic miracle. If a child did ask, ‘Mommy, what were you doing during Nazi times?’ a parent might answer with a response similar to those that older Germans willing to talk gave me in 1961: ‘You young person, you have no idea what it’s like to live under a totalitarian state; one can’t just act on one’s beliefs.’ Of course that excuse didn’t satisfy young people.
The result was that 1945-born Germans discredited their parents and their parents’ generation as Nazis. A similar generational dynamic helps explain why student protests also took a violent form in Italy and Japan, the other two aggressor countries of WWII. In contrast, in the U.S. the parents of 1945-born Americans were viewed not as war criminals for fighting in WWII but as heroes.
Widely remembered as a symbolic moment of 1968 in Germany was an act by Beate Klarsfeld, a young non-Jewish German woman married to a Jewish man whose father had been gassed at Auschwitz. On November 7, 1968, she shrieked ‘Nazi!’ at West Germany’s Chancellor Kurt Kiesinger and slapped him in the face, because he’d been a Nazi party member. But the Nazi past itself was not the only cause of the protests. German students were protesting even more against things similar to what American students and ‘hippies’ of 1968 were protesting: the Vietnam War, authority, bourgeois life, capitalism, imperialism, and traditional morality. German 1968-era protesters equated contemporary capitalist German society with fascism, while conservative older Germans in turn regarded the violent young leftist rebels as ‘Hitler’s children,’ a reincarnation of the violent fanatical Nazi SA and SS orgs. Many of the rebels were extreme leftists; some actually moved to East Germany, which in turn funneled money and documents to sympathizers in West Germany.
German student radicals in 1968 turned to violence far more than did contemporary American student radicals. Some of them went to Palestine for training as terrorists. The best known of those German terrorist groups called themselves the Red Army Faction (RAF). The terrorists began by carrying out arson attacks on stores, then proceeded to kidnappings, bombings, and killings. Over the years the victims whom they kidnapped or killed included leaders of the German ‘establishment,’ such as the president of the West Berlin Supreme Court, a candidate for mayor of West Berlin, Germany’s federal prosecutor, the chief of Deutsche Bank, and the head of West Germany’s Employers’ Association. As a result, even most German leftists themselves felt increasingly endangered by the violence of the radical left, and withdrew their support. West German terrorism peaked during 1971–1977, reaching a climax in 1977 when three RAF leaders committed suicide in prison after the failure of a terrorist attempt to free imprisoned terrorists by hijacking a Lufthansa plane. Two further waves of terrorism followed, until the RAF announced in 1998 that it had dissolved.”
“The 1968 German student revolt is sometimes described as a ‘successful failure.’ While the student extremists failed in their goals of replacing capitalism with a different system and of overthrowing West Germany’s democratic government, they did achieve some of their goals indirectly, because parts of their agenda were co-opted by the West German government, and many of their ideas were adopted by mainstream German society. In turn, some of the 1968 radicals later rose to leading political positions in West Germany’s Green Party — such as Joschka Fischer, who, after being active as a stone-throwing radical, developed a taste for fine suits and wines and became Germany’s foreign minister and vice-chancellor.
Traditional German society had been politically and socially authoritarian. Those qualities, already present long before Hitler, were made explicit in Nazi society by its emphasis on ‘the leader principle.’ Not only was Hitler himself someone to whom all Germans swore unquestioning political obedience; social as well as political obedience to leaders was expected in other spheres and at other levels of German life under the Nazis.
Although Germany’s crushing WWII defeat discredited the authoritarian German state, the old elites and their thinking remained alive. Well into the sixties, spanking of children was widespread, not merely permitted but considered obligatory for parents. Leaders of companies made decisions controlling the careers of each person in their company entirely by themselves. Wherever one went, there were signs saying what was forbidden, and instructing how one should and shouldn’t behave.”
“Authoritarian behaviors and attitudes in Germany were already starting to change by the sixties. A famous example was the Spiegel Affair of 1962. When the weekly magazine Der Spiegel, which was often critical of the government, published an article questioning the strength of Germany’s army, Chancellor Adenauer’s defense minister Franz Josef Strauss reacted with authoritarian arrogance by arresting Der Spiegel’s editors and seizing their files on suspicion of treason. The resulting enormous public outcry forced the government to abandon its crackdown and compelled Strauss to resign. But Strauss nevertheless remained powerful, served as premier of Bavaria from 1978 to 1988 and ran for chancellor of Germany in 1980. (He was defeated.)
After 1968, the liberalizing trends that had already been underway became stronger. In 1969 they resulted in the defeat of the conservative party that had ruled Germany uninterruptedly in coalitions for two decades. Today, Germany is socially much more liberal than it was in the sixties. There’s no spanking: in fact, it’s now forbidden by law! Dress is more informal, women’s roles are less unequal, and there’s more use of the informal pronoun ‘Du’ and less use of the formal pronoun ‘Sie’ to mean ‘you.’”
“The government’s peaceful achievement of many of the goals of the 1968 student protesters accelerated under West German chancellor Willy Brandt. He had been born in 1913, was forced to flee from the Nazis because of his political views, and spent the war years in Norway and Sweden. In 1969 he became West Germany’s first left-wing chancellor, as head of the SPD, after 20 uninterrupted years of conservative German chancellors. Under Brandt, Germany began social reforms in which the government pursued student goals such as making Germany less authoritarian and promoting women’s rights.
But Brandt’s biggest achievements were in foreign relations. Under West Germany’s previous conservative leadership, the West German government had refused even to recognize the existence of the East German government, and had insisted that West Germany was the only legitimate representative of the German people. It had had no diplomatic relations with any Eastern European communist country other than the Soviet Union. It had refused to recognize the de-facto loss of all German territories east of the Oder and Neisse Rivers: East Prussia to the Soviet Union, and the rest to Poland.
Brandt adopted a new foreign policy that reversed all of those refusals. He signed a treaty with East Germany and established diplomatic relations with Poland and other Eastern Bloc countries. He acknowledged the Oder-Neisse line as the Polish/German border, and he thereby accepted the irrevocable loss of all German territories east of that line, including areas that had long been German and central to German identity: Silesia and parts of Prussia and Pomerania. That renunciation was an enormous step and constituted an unacceptably bitter pill for Germany’s conservative CDU Party, which announced that it would reject the treaties if it were returned to power in the 1972 elections. In fact, German voters endorsed Brandt’s swallowing of the bitter pill, and Brandt’s party won the 1972 elections with an increased majority.
The most dramatic moment of Brandt’s career happened during his 1970 visit to Warsaw. Poland was the country that had had the highest percentage of its population killed during WWII, and it had been the site of the biggest Nazi extermination camps. Poles had good reason to loathe Germans as unrepentant Nazis. On that visit, Brandt went to the Warsaw Ghetto, the site of an unsuccessful Jewish revolt against Nazi occupation in 1943. In front of the Polish crowds, Brandt spontaneously fell to his knees, acknowledged the millions of victims of the Nazis, and asked for forgiveness for Hitler’s dictatorship and WWII. Even Poles who continued to distrust Germans recognized Brandt’s behavior as unplanned, sincere, and deeply meant. In today’s world of carefully scripted, unemotional diplomatic statements, Brandt’s kneeling at the Warsaw Ghetto stands out as a unique heartfelt apology by the leader of one country to the people of another that had suffered greatly. By contrast, think of the many other leaders who didn’t kneel and apologize: American presidents to the Vietnamese, Japanese prime ministers to Koreans and Chinese, Stalin to Poles and Ukrainians, de Gaulle to Algerians, and more.
The political pay-off for West Germany from Brandt’s behavior didn’t come until 20 years after his visit, long after Brandt himself had resigned as chancellor in 1974. In the 70s and 80s there was still nothing that a West German chancellor could do directly to bring about the reunification of West and East Germany. The two chancellors who followed Brandt continued his policies of trading with East Germany, seeking reconciliation with Eastern European countries, and cultivating good personal relationships with leaders of the major countries on both sides of the Iron Curtain. The U.S. and Western Europe reached the conclusion that West Germany was now to be trusted as a democracy and a dependable ally. The Soviet Union and its Eastern Bloc partners reached the conclusion that West Germany was now to be valued as a major trade partner, and was no longer to be feared as a military or territorial threat.
Brandt’s treaty and subsequent agreements between the two Germanys enabled hundreds of thousands of West Germans to visit East Germany, and a small number of East Germans to visit West Germany. Trade between the two Germanys grew. Increasingly, East Germans succeeded in watching West German TV. That enabled them to compare for themselves the high and rising living standards in West Germany with the low and declining living standards in East Germany. Economic and political difficulties were also growing in the Soviet Union itself, which was becoming less able to impose its will on other Eastern Bloc countries. Against that background, the beginning of the end for East Germany was a step completely beyond the control of either West or East Germany: on May 2, 1989, Hungary, an Eastern Bloc country separated from East Germany by Czechoslovakia (another Eastern Bloc country to Hungary’s north), decided to remove the fence separating it on the west from Austria, a Western democratic country bordering on West Germany. When Hungary then officially opened that border four months later, thousands of East Germans seized the opportunity to flee by way of Czechoslovakia and Hungary to the west. (That official border opening date was 9/11, coincidentally also the date of Pinochet’s 1973 coup and the 2001 World Trade Center attacks.) Soon, hundreds of thousands of East Germans protesting against their government took to the streets, first in Leipzig, then in other East German cities. The East German government intended to respond by announcing that it would issue permits for direct travel to West Germany. However, the official making the TV announcement bungled it and said instead that the government would permit travel to West Germany ‘immediately.’ That night (11/9/89), tens of thousands of East Germans seized the opportunity to cross immediately into West Berlin, unmolested by the border guards.
While West Germany’s chancellor at the time, Helmut Kohl, didn’t create this opening, he did know how to exploit it cautiously. In May 1990 he concluded a treaty of economic and social unification (but not yet political unification) between East and West Germany. He worked hard and tactfully to defuse Western and Soviet reluctance to permit German reunification. For example, in his crucial July 1990 meeting with Soviet President Gorbachev, he offered the Soviet Union a big package of financial aid, and persuaded Gorbachev not only to tolerate German reunification but also to tolerate the reunified Germany remaining within NATO. On October 3, 1990, East Germany was dissolved, and its districts joined West Germany as new federal states.”
“Both world wars ended disastrously for Germany, because its leaders didn’t wait for favorable opportunities but instead took bold initiatives, with dreadful consequences.
Germany [has considerable] geographic constraints. Today, Germany shares land borders with nine countries, while its North Sea and Baltic coasts are exposed over water to eight other countries. In addition, Germany acquired three other land neighbors when it annexed Austria in 1938, and one more (Lithuania) between 1918 and 1939. Some of those countries formed part of two large land neighbors (Russia and the Habsburg Empire) until 1918. That makes a total of 20 recent historical neighbors of Germany (if one counts each historical entity once). Of those 20, 19 — all except Switzerland — have either invaded Germany, attacked it by sea, had German troops stationed in them or (in Sweden’s case) in transit, or been invaded by Germany between 1866 and 1945. Five of those 20 neighbors are or were powerful (France, Russia, the Habsburg Empire, Britain, and formerly Sweden).
It’s not just that Germany has neighbors. Northern Germany is part of the flat North European Plain, which is not dissected by any natural defense barriers: no mountain chains (unlike the Pyrenees that divide Spain from France, or the Alps that ring Italy), and only narrow rivers easily crossed by armies throughout history. Armies have rolled into northern Germany from the east and from the west, including the Soviet and Allied armies in WWII, Napoleon’s armies two centuries ago, and other armies before that.
Germany’s central geographic location surrounded by neighbors seems to me to have been the most important factor in German history. Of course, that location has not been without advantages: it has made Germany a crossroads for trade, tech, art, music, and culture. A cynic would note that Germany’s location also facilitated its invasion of many countries during WWII.
But the political and military disadvantages of Germany’s location have been enormous. The Thirty Years’ War, which was the major religious and political struggle between most of the leading nations of 17th-century Western and Central Europe, was fought mainly on German soil, reduced the population there by up to 50%, and inflicted a crushing economic and political setback whose consequences persisted for the next two centuries. Germany was the last large Western European country to be unified (in 1871), and that unification required the leadership of a highly skilled diplomat (Bismarck) with a unique ability to take account of the possible reactions of many other European powers. The military nightmare for the resulting unified Germany was the risk of a two-front war against both its western neighbor (France) and its eastern neighbor (Russia); that nightmare did materialize and led to Germany’s defeat in both world wars. After WWII, three of its neighbors plus the U.S. partitioned Germany. There was nothing that the West German government could do directly to achieve reunification: it had to await favorable opportunities created by events in other countries.
Differing geographic constraints have meant that bad leadership results in much more painful consequences for Germany than for geographically less constrained countries. For instance, while Germany’s Emperor Wilhelm II and his chancellors and ministers were notorious for their blunders and unrealism, Germany has had no monopoly on poor leadership: the U.S. and Britain and other countries have had their share. But the seas protecting the U.S. and Britain meant that inept leaders doing stupid things didn’t bring disaster upon their countries, whereas the ineptness of Wilhelm and his chancellors did bring disaster upon Germany in WWI.
The philosophy guiding the foreign policy of successful German politicians was summed up in metaphor by Bismarck: ‘One should always try to see where God is striding through world history, and in what direction he is heading. Then, jump in and hold on to his coattails, to get swept along as far as one can go.’ That was also Chancellor Helmut Kohl’s strategy in 1989–1990, when political developments in East Germany and the Soviet Union finally, after Brandt’s 1969–1974 initiatives, created the opportunity for reunification. That philosophy would have been unthinkable to Britain at the height of its imperial power, and still is unthinkable to the U.S. today. Instead, imperial Britain expected, and the U.S. today expects, to take initiatives and to be able to impose their will.”
“Germany actually constitutes not one but two opposite extremes: in its contrasting reactions to WWI and WWII.
By October 1918, shortly before the end of WWI, Germany’s last military offensives on the western front had failed. Allied armies were advancing and had been strengthened by a million fresh U.S. troops, and Germany’s defeat had become merely a matter of time. But German armies were still conducting an orderly retreat, and the Allies hadn’t yet reached Germany’s borders. Armistice negotiations were hastened to a conclusion by a mutiny of the German fleet and by outbreaks of armed insurrections in Germany. This permitted post-war German agitators, especially Hitler, to claim that the German army had not been defeated militarily but had been betrayed by a stab in the back from treacherous civilian politicians. The conditions of the Treaty of Versailles imposed upon Germany by the victorious Allies, including a notorious ‘war guilt clause’ branding Germany as the aggressor responsible for the war, provoked further German resentment. As a result, although many post-war German historians themselves analyzed pre-war Germany’s political blunders that had plunged Germany into war under unfavorable conditions, the prevalent post-war view of the German public was that Germany was a victim whose leaders had not been responsible for their country’s misfortunes.
Now, contrast this German sense of victimization after WWI with Germany’s post-war view after WWII. In May 1945 Germany’s armies had been defeated on all fronts, all of Germany had been conquered by Allied troops, and Germany’s surrender was unconditional. No German or non-German denied that WWII in Europe had resulted solely from Hitler’s intentions. Germans gradually learned of the unprecedented atrocities committed by German government policy in the concentration camps, and by the German military on the eastern front. German civilians themselves also suffered: especially in the bombings of Hamburg and Dresden and other German cities, in the flight of German civilians before the advancing Soviet troops, and in the expulsion of all ethnic German residents of Eastern Europe and former eastern German territory by Poles, Czechs, and other Eastern European governments just after the war’s end. The Soviet advance and the expulsions are estimated to have sent more than 12 million German civilians fleeing as refugees, killed more than 2 million of them, and subjected on the order of a million German women to rape.
Those sufferings of German civilians receive some attention in post-war Germany. But self-pity and a sense of victimization have not dominated Germans’ view of themselves after WWII, as they did after WWI. Part of the reason has been German recognition that the horrors inflicted by Russians, Poles, and Czechs on German civilians resulted from the horrors that Germans had so recently inflicted on those countries. The result of this painful reckoning with the past has been to Germany’s advantage today, in the form of much better security and better relations with former enemies than prevailed for Germany after WWI, or than prevail for Japan today.”
“Two further respects in which Germany is an extreme case for our purposes are linked: the role of leadership and honest self-appraisal or the lack thereof. Because Germany’s geographic location has chronically exposed it to more difficulties and dangers than face Britain and the U.S., the effects of good or bad leadership have been more obvious.
Among leaders whose effects were bad, Hitler holds first place in recent world history. One can of course debate whether the combination of the Treaty of Versailles, the collapse of Germany’s currency in 1923, and the unemployment and economic depression beginning in 1929 would have spurred Germany to go to war to overturn the treaty even without Hitler. But one can still argue that a WWII instigated by Germany without Hitler would’ve been very different. His unusually evil mentality, his charisma, his boldness in foreign policy, and his decision to exterminate all Jews were not shared by other revisionist German leaders of his era. Despite his initial military successes, his unrealistic appraisals led him repeatedly to override his own generals and ultimately to cause Germany’s defeat. Those fatally unrealistic decisions included his unprovoked declaration of war against the U.S. in December 1941, at a time when Germany was already at war with Britain and the Soviet Union, and his overriding of his generals’ pleas to authorize retreat by the German army trapped at Stalingrad in 1942–1943.
Second to Hitler in bad leadership in recent German history was Kaiser Wilhelm II, whose 30-year rule ended with his abdication and Germany’s defeat in WWI. One can again debate whether there would still have been a WWI without Wilhelm. However, such a war would probably have taken a different form, because Wilhelm, like Hitler, was unusual, albeit in a different way. While Wilhelm was much less powerful than Hitler, he still appointed and dismissed Germany’s chancellors, held the loyalty of most Germans, and commanded Germany’s armed forces. Although not evil, he was emotionally labile and unrealistic, had poor judgment, and was spectacularly tactless on numerous occasions, creating unnecessary problems for Germany. Among his many policies that resulted in Germany’s entering WWI under unfavorable circumstances was his non-renewal of Bismarck’s treaty between Germany and Russia, which exposed Germany to the already-mentioned military nightmare arising from its geographic location: a two-front war simultaneously against Russia and France.
A German counter-example of successful leadership and realistic appraisal is provided by Willy Brandt, whose recognition of East Germany and other Eastern Bloc countries, treaties with Poland and Russia, and acceptance of the loss of German lands beyond the Oder-Neisse Line reversed 20 years of previous West German foreign policy. While West Germany’s subsequent chancellors continued Brandt’s policies, one can argue that his leadership made a difference. The opposing CDU party continued to oppose those policies for the next several years; Brandt’s acceptance of the Oder-Neisse Line required outstanding realism and political courage lacking in his predecessors; and his successors lacked the charisma that made his visit to the Warsaw Ghetto so convincing and unforgettable. (West) Germany has had an uninterrupted chain of chancellors with good sense since WWII.
The remaining German counter-example of successful leadership that made a difference was Otto von Bismarck, the Prussian prime minister and then imperial German chancellor who achieved German unification in 1871. That unification faced overwhelming obstacles — notably, opposition from the smaller German kingdoms other than Prussia, opposition from the neighboring powerful Habsburg Empire and France that could be resolved only by wars, the more distant potential opposition of Russia and Britain, and the vexing question as to which German populations could realistically be incorporated into a unified Germany. Bismarck was an ultra-realist, familiar with the reasons for the failure of Germany’s 1848 revolutions, aware of the internal and external opposition to German unification, and accustomed to proceeding stepwise, beginning with small measures and moving on to stronger measures only if the smaller measures failed. He recognized that Prussia’s ability to initiate major events was limited by geopolitical constraints, and that his policy would have to depend on awaiting favorable opportunities and then acting quickly. No other German politician of his generation approached him in political skill. Bismarck has often been criticized for failing to groom a suitable successor, and for failing to cure problems in Germany that culminated in WWI, 24 years after his chancellorship ended. But it seems unfair to criticize him for the follies of Wilhelm II and Wilhelm’s appointees. And Germany could hardly have been unified over the prevailing opposition without Bismarck’s three wars, two of them very brief. (The unification of Italy required four wars, but Italy hasn’t been branded as warlike.) Once Germany had been unified in 1871, leaving millions of German-speaking people outside its borders, Bismarck was realistic enough to understand that he’d achieved the most that was possible, and that other powers would not tolerate further German expansion.”
“Today even small German cities have opera houses, older Germans can still afford to live comfortably after retiring, and villages preserve local color (because zoning laws specify that your house’s roof style has to conform to the local style).”
“Support from other countries has varied greatly with place and time in recent German history. American Marshall Plan aid, and West Germany’s wide use of it, made possible West Germany’s economic miracle after 1948. Conversely, negative economic aid — i.e. extraction of war reparations — contributed to the undermining of East Germany after WWII, and of Germany’s Weimar Republic after WWI.
Germany’s strong national identity helped it survive the traumas of devastation, occupation, and partition. (Some non-Germans would go further, and would argue that Germany has had too strong a national identity.) That national identity and pride are based especially on Germany’s world-famous music, art, literature, philosophy, and science; on the bond of the German language as codified by Martin Luther’s Bible translation, which transcends the variation among spoken German dialects; and on memories of shared history that enabled Germans still to identify themselves as one people despite centuries of political fragmentation.”
“Interestingly, recent German history provides four examples of an interval of 21–23 years between a crushing defeat and an explosive reaction to that defeat: the 23-year interval between 1848’s failed revolutionary unification attempt and 1871’s successful unification; the 21-year interval between 1918’s crushing defeat in WWI and 1939’s outbreak of WWII that sought and ultimately failed to reverse that defeat; the 23-year interval between 1945’s crushing defeat in WWII and 1968’s student revolts; and the 22-year interval between those revolts and 1990’s reunification. Of course, external factors played a role in determining those intervals, especially the interval between 1968 and 1990. But there’s nonetheless a significance to those parallels: 21–23 years is approximately one human generation. The years 1848, 1918, and 1968 were decisive experiences for Germans who were young adults then, and who two decades later became their country’s leaders and finally found themselves in a position to try to complete (1871, 1990) or to reverse (1939) that decisive experience of their youth. For the student revolts of 1968, the leadership and participation required were not those of seasoned politicians in their 40s or 50s, but those of unseasoned radicals in their twenties. As one German put it, ‘Without 1968, there would have been no 1990.’”
Australia
“In 1964 the fundamental fact of Australian society was still the contradiction between Australia’s geographic location on the one hand and its population makeup and emotional and cultural ties on the other. Australia’s population and national identity were mostly British. But Australia is almost halfway around the world from Britain — eight to ten time zones east of it. The Australian landscape is the most distinctive (and least British) of any continent inhabited by humans. Geographically, Australia is 50 times closer to Indonesia than to Britain. Yet in 1964, there were no signs of that proximity to Asia.
The official White Australia policy that had barred Asian immigrants, and the informal policies that had discouraged white Europeans other than the British, had disappeared by the 2000s. But Australia still speaks English, the Queen of Britain is still Australia’s figurehead of state, and the Australian flag still incorporates the British flag. It’s consistently ranked as one of the world’s most desirable places to live, with one of the most contented populations and highest life expectancies. It’s British, yet it’s not British. What happened to produce those selective changes?
Like Germany, Australia underwent a crisis that didn’t erupt on one day. (However, three military shocks within the space of 71 days in 1941–1942 stood out in importance.) Instead, Australia’s crisis, like Germany’s, was partly the unfolding of a response to the years of WWII. For both Germany and Australia, the war proved that traditional national solutions were no longer working, but the proof was much more cataclysmic and quickly convincing in war-shattered Germany than in Australia. The basic question for Australians has been the issue of national identity: who are we? WWII started to bring to the surface Australians’ recognition that their long-held self-image of being a second Britain halfway around the world was becoming out-of-date and no longer fitted Australia’s changed circumstances. But the war alone wasn’t enough to wean most Australians away from that self-image.
It takes time even for a single person to formulate a new answer to the question Who am I? It takes much longer for a nation, composed of millions of individuals divided into groups with competing views of their nation’s identity, to figure out: Who are we? Hence it should come as no surprise that Australians are still wrestling today with that question. Paradoxically, while crisis resolution in Australia has been slow — so slow that many Australians wouldn’t even consider there to have been a crisis — Australia is the one among our six nations studied in this book that experienced the widest unified set of changes announced within the shortest time, 19 days during the month of December 1972.”
“Approximately 50,000 years after Australia had been settled by the ancestors of Aboriginal Australians, the first European settlers arrived in January 1788, in a fleet of 11 ships sent out from Britain. The British government had sent that fleet not because it considered Australia a wonderful location attractive to British settlers, but because Britain had a problem with its exploding population of convicts that it wanted to dump somewhere far away. Australia and tropical West Africa had both been suggested as suitably remote locations, but it was becoming clear that West Africa’s tropical diseases made it an unhealthy place for Europeans. Australia appeared to offer multiple advantages: it was much more remote than West Africa; it wasn’t known to be (and in reality for the most part proved not to be) unhealthy for Europeans; and it offered potential Pacific Ocean bases for British navy ships, merchants, whalers, and timber and flax suppliers. And so the choice fell on Australia — specifically, on the environs of what became Sydney.
The First Fleet consisted of 730 convicts, their guards, administrators, workers, and a British naval officer as governor. More fleets and ships followed, bringing more convicts to Sydney and then to four other locations scattered around the continent. Soon the convicts and their guards were joined by British free settlers. However, 32 years later, in 1820, 84% of Australia’s European population still consisted of convicts and former convicts, and convict transport from Britain to Australia didn’t cease until 1868. To survive and prosper in frontier Australia was difficult, and so modern Australians of convict ancestry regard it as a badge of pride rather than of shame — like the pride felt by modern American descendants of the 1620 Mayflower settlers.
It was expected (correctly) that it would take a long time for the convicts and settlers to figure out how to grow enough food to feed themselves. Hence the First Fleet carried food shipments, which Britain continued to send out until the 1840s. Several decades passed before Australians could send significant exports back to Britain: at first, just products from hunting whales and seals; then from the 1830s onwards, wool from sheep; gold from a gold rush beginning in 1851; and once refrigerator ships for the long journey became available in the 1880s, meat and butter. Today, one-third of the world’s wool is grown by Australia’s abundant sheep population, five sheep for every human. But Australia’s economy since WWII has been dominated by mining of the minerals with which the continent is so richly endowed: Australia is a world-leading exporter of aluminum, coal, copper, gold, iron, lead, magnesium, silver, tungsten, titanium, and uranium.
In other British colonies, including India, Fiji, and West Africa, British colonists dealt with native people either peacefully by negotiating with local chiefs or princes, or else militarily by sending British armies against local armies or sizable tribal forces. Those methods didn’t work in Australia, where Aboriginal organization consisted of small bands without armies, chiefs, or princes. Aborigines lived a nomadic lifestyle and didn’t have fixed villages. To European settlers, that meant that Aborigines did not ‘own’ the land.
Hence European settlers simply took Aboriginal land without negotiation or payment. There were no battles against Aboriginal armies: just attacks by or against small groups of Aborigines, sometimes provoked by Aborigines killing sheep that they considered no different from the kangaroos and other wild animals that they were accustomed to hunting. In response, European settlers killed Aborigines; the last large massacre (of 32 Aborigines) took place as recently as 1928. When a British governor ordered the trial and hanging of Europeans who had murdered Aborigines, the Australian public strongly supported those murderers, and London’s colonial office realized that it couldn’t stop its British subjects in remote Australia from doing what they wanted — such as killing Aborigines.
Because Aborigines were hunter-gatherers rather than settled farmers, white Australians scorned them as primitive. One Australian rep said, ‘There’s no scientific evidence that [the Aborigine] is a human being at all.’ That scorn of Aborigines is still widespread even among educated Australians. As Aboriginal numbers declined due to diseases, killings, and land dispossession, white Australians came to believe that the Aborigines were dying out.”
“Aborigines were eventually forbidden to marry non-Aboriginals without government consent. There has been much controversy over a policy, developed in the 1930s, of forcibly removing mixed-race Aboriginal/white children and even Aboriginal children from Aboriginal homes, to be raised (supposedly for their own good) in institutions or foster homes. A movement, beginning in the 1990s, for white Australians to apologize to Aborigines has faced strong opposition. Prime Minister Kevin Rudd did give a formal apology in 2008, but his predecessor as prime minister, John Howard, had argued, ‘Australians of this generation should not be required to accept guilt and blame for past actions and policies over which they had no control.’
In short, British Australia’s White Australia policy was directed not just at non-white potential immigrants. It was directed also at the non-white original Australians into whose lands white British settlers were immigrating, whose right to those lands was denied, and who (many white settlers hoped) would die out quickly.”
“Throughout the first decades of the Australian colony, immigrating free settlers as well as convicts came from Britain (including Ireland, at that time still part of Britain). The first substantial group of non-British immigrants began to arrive in 1836 in South Australia. That colony had been founded not as a convict dump but by a land development company that carefully selected prospective settlers from Europe. Among those settlers were German Lutherans seeking religious freedom, a motive for immigration much more conspicuous in the early history of the U.S. than in that of Australia. Those German immigrants were skilled and white, developed market gardening and vineyards, adapted quickly to Australia, and aroused minimal opposition. More controversial was the arrival of tens of thousands of Chinese in the 1850s, drawn (along with many Europeans and Americans) by Australia’s first gold rush. That influx resulted in the last use of the British army in Australia, to quell riots in which a crowd beat, robbed, and even scalped Chinese.
A third wave of non-British arrivals arose from the development of sugar plantations in Queensland beginning in the 1860s. The workers were Pacific Islanders from New Guinea, other Melanesian islands, and Polynesia. While some of them were voluntary recruits, many were kidnapped from their islands in raids accompanied by frequent murders, a practice known as black-birding (because the islanders were dark-skinned). When plantations (especially of coconuts) were subsequently developed in German and Australian New Guinea, that same Australian model was adopted for bringing Pacific Island workers to New Guinea plantations. Such labor recruitment practices continued in New Guinea long into the 20th century: New Guinean labor recruiters in the sixties still took pains to explain that they recruited only voluntary laborers to whom they paid cash bonuses, and proudly insisted that they were not kidnapping black-birders (they still used that word), whereas some of the other recruiters with whom they competed still were. The workers did not make Australia’s resident population less white, because they came on fixed-term contracts and were expelled from Australia at the ends of their terms.
Still another group of non-British immigrants was a small number from India. Despite all these arrivals of modest numbers of Germans, Chinese, contract Pacific Islanders, and Indians, Australia remained by policy overwhelmingly British and white until after WWII.”
“Australia doesn’t recognize or celebrate an Independence Day, because there wasn’t one. The Australian colonies achieved self-government with no objections from Britain, and never severed their ties with Britain completely. Australia is still joined with Britain in a (British) Commonwealth of Nations, and still recognizes Britain’s sovereign as Australia’s nominal head of state. Why did the relaxation or severing of ties with Britain unfold differently in Australia and in the U.S.?
There were several reasons. One is that Britain learned lessons from its expensive defeat in the American Revolution, changed its policies toward its white colonies, and readily granted self-government to Canada, New Zealand, and its Australian colonies. In fact, Britain granted many features of self-government to Australia of its own initiative, before Australians had made any requests. A second reason was the much greater sailing distance from Britain to Australia. The First Fleet required eight months to reach Australia, and thereafter for much of the early 19th century the sailing times varied from half-a-year to a full year. The resulting slowness of communication made it impossible for the British colonial office in London to exercise close control over Australia; decisions and laws had to be delegated at first to governors, and then to Australians themselves. For example, for the entire decade from 1809 to 1819, the British governor of the Australian colony of New South Wales didn’t even bother to notify London of new laws that he was adopting.
A third reason was that the British colonial government had to station and pay for a large army in its American colonies. That army served to defend the colonies against the French army based in Canada and competing for control of North America, and also against less-well-armed but still formidable populous American Indian tribes with centralized government by chiefs. In contrast, no European power competed with Britain to colonize Australia, and the Aborigines were few, without guns, and not centrally led. Hence Britain never needed to station a large army in Australia, nor to levy unpopular taxes on Australians to pay for that army; Britain’s levying of taxes on the American colonies without consulting them was the immediate cause of the American Revolution. The last small contingent of British troops in Australia was withdrawn in 1870, by British initiative. Still another factor was that Britain’s Australian colonies were too unprofitable and unimportant for Britain to care about and pay much attention to. Much more profitable and important to Britain were its colonies of the U.S., Canada, India, South Africa, and Singapore. Finally, Britain’s principal Australian settlements for a long time remained separate colonies with little political coordination.
The course by which the Australian colonies achieved self-government was as follows. In 1828, 40 years after the arrival of the First Fleet, Britain established appointed (not elected) legislative councils in the two oldest of its Australian colonies, New South Wales and Tasmania. Those appointed councils were followed in 1842 by the first partly elected representative Australian colonial government (in New South Wales). In 1850 Britain drew up constitutions for its Australian colonies, but the colonies were subsequently free to amend those constitutions, which meant that they became largely free to design their own governments. The 1850 constitutions and subsequent amended constitutions did ‘reserve’ for Britain the decisions on some Australian matters such as defense, treason, and naturalization, and left Britain with the theoretical power to disallow any colonial law. In practice, though, Britain rarely exercised those reserved rights. By the late 1800s, the only major right consistently reserved for Britain was the control of Australian foreign affairs.
Along with those reserved rights that Britain retained, throughout the 1800s it continued to deliver to Australia important services that an independent Australia would have had to provide for itself, including military protection by British warships, as other European countries and Japan and the U.S. became increasingly assertive in the Pacific Ocean during the late 1800s. Another service involved the governors that Britain sent out to its Australian colonies. Those governors were not resented tyrants forced on protesting Australian colonies by a powerful Britain. Instead, they played an acknowledged essential role in Australian self-government, in which the Australian colonies often reached impasses. The appointed British governors frequently had to resolve disagreements between the upper and lower houses of a colonial legislature, had to broker the formation of parliamentary coalitions, and had to decide when to dissolve parliament and call an election.”
“Australia arose as six separate colonies — New South Wales, Tasmania, Victoria, South Australia, Western Australia, and Queensland — with far less contact among them than the contact among the American colonies. That limited contact was due to the geography of Australia, a continent with few patches of productive landscape separated by large distances of desert and other types of unproductive landscape. Not until 1917 did all five of the capital cities on the Australian mainland become connected by railroad. Each colony adopted a different railroad gauge (track separation), ranging from 3'6" to 5'3", with the result that trains could not run directly from one colony into another. Like independent countries, the colonies erected protective tariff barriers against one another and maintained customs houses to collect import duties at colonial borders. In 1864 New South Wales and Victoria came close to an armed confrontation at their border. As a result, the six colonies did not become unified into a single nation until 1901, 113 years after the First Fleet.
Initially, the colonies showed little interest in uniting. Settlers thought of themselves first as overseas British, and then as Victorians or Queenslanders rather than as Australians. The stirrings of interest in federation emerged only in the latter half of the 1800s, as Japan increased in military power, and as the U.S., France, and Germany expanded over the Pacific Ocean and annexed one Pacific island group after another, posing a potential threat to Britain’s Pacific colonies. But it was initially unclear what would be the territorial limits of a union of those British colonies. A first federal council of ‘Australasia’ that met in 1886 included reps of the British colonies of New Zealand and Fiji far from Australia, but only four of the six colonies that now form Australia were represented.
Although a first draft of an Australian federal constitution was prepared in 1891, unified Australia was not inaugurated until January 1, 1901. The preamble to the constitution declares agreement to ‘unite in one indissoluble Federal Commonwealth under the Crown of the UK of Great Britain and Ireland,’ with a federal governor-general appointed by Britain, and with a provision that decisions of Australia’s High Court could be appealed to Britain’s highest court. That Australian constitution illustrates that Australians still felt allegiance to the British Crown. The flag that was adopted then, and that remains the Australian flag today, consists of the British flag framed by the Southern Hemisphere star constellation of the Southern Cross.”
“Australians debating the constitution argued about many matters but were unanimous about excluding all non-white races from Australia. One of the first acts of the new Australian Federation in 1901 was the Immigration Restriction Act, passed by agreement of all political parties, aiming to ensure that Australia would remain white. The act barred the immigration of prostitutes, the insane, people suffering from loathsome diseases, and criminals (despite Australia’s origin as a dumping ground for criminals). The act also provided that no blacks or Asians would be admitted, and that Australians should be ‘one people, and remain one people without the admixture of other races.’”
“Britain’s colonial secretary objected to the Australian Commonwealth mentioning race explicitly, in part because that created difficulties at a time when Britain was trying to negotiate a military alliance with Japan. Hence the Commonwealth achieved the same goal of race-based immigration control without mentioning race, by requiring entering immigrants to take a dictation test — not necessarily in English, but in any European language at the discretion of the presiding immigration official. When a boatload of workers arrived from Malta (a British colony, but an ethnically mixed Mediterranean island), with the potential for passing a dictation test in English, they were instead administered the test in Dutch (a language unknown in Malta as well as in Australia) in order to justify expelling them. As for the non-whites already admitted as laborers, the Commonwealth deported Pacific Islanders, Chinese, and Indians but allowed two small groups of specialists (Afghan camel-drivers and Japanese pearl-divers) to remain.
The motive behind these immigration barriers was mainly the racism of the times, but partly also that the Australian Labor Party wanted to protect high wages for Australian workers by preventing the immigration of cheap labor.”
“Until things began to change after WWII, Australians’ sense of identity centered on their being British subjects. That emerges most clearly from the enthusiasm with which Australian troops fought beside British troops in British wars that had no direct significance for Australian interests. The first case was in 1885, when the colony of New South Wales sent troops to fight with British troops against rebels in the Sudan, a remote part of the world that could hardly have been less relevant to Australia. A bigger opportunity arose in the Boer War of 1899, between Britain and the descendants of Dutch colonists in South Africa, again with zero direct relevance to Australian interests. Australian soldiers performed well in the Boer War, winning five Victoria Crosses (Britain’s highest battlefield bravery medal), and thereby gaining glory and a reputation as loyal British subjects at the cost of only about 300 Australian soldiers dead in battle.
When Britain declared war on Germany in 1914 at the outset of WWI, it did so without bothering to consult Australia or Canada. Australia’s British-appointed governor-general merely passed on the announcement of war to Australia’s elected prime minister. Australians unhesitatingly supported the British war effort on a far larger scale than in the case of the Boer War or the Sudan War. In this case, the war did have a slight effect on Australian interests: it gave Australian troops a pretext to occupy the German colonies of northeast New Guinea and the Bismarck Archipelago. But Australia’s main contribution to WWI was a huge volunteer force — 400,000 soldiers, constituting more than half of all Australian men eligible to serve, out of a total population of under 5 million — to defend British interests halfway around the world in France and the Mideast. More than 300,000 were sent overseas, of whom two-thirds ended up wounded or killed. Almost every rural Australian town still has a cenotaph in the town center, listing the names of local men killed in the war.
What became the best-known Australian involvement in WWI was the attack of ANZAC troops (the Australia and New Zealand Army Corps) on Turkish troops holding the Gallipoli Peninsula. The ANZAC troops landed on April 25, 1915, suffered high casualties because of incompetent leadership by the British general commanding the operation, and were withdrawn in 1916 when Britain concluded that the operation was a failure. Ever since then, ANZAC Day (April 25), the anniversary of the Gallipoli landings, has been Australia’s most important national holiday.
To a non-Australian, the emphasis on ANZAC Day as the national holiday is beyond comprehension. Why should any country celebrate the slaughter of its young men, betrayed by British leadership, halfway around the world? The explanation is that nothing illustrated better the willingness of Australians to die for their British mother country. Gallipoli became viewed as the birth of the Australian nation, reflecting the widespread view that any nation’s birth requires sacrifice and the spilling of blood. The Gallipoli slaughter symbolized the national pride of Australians, now fighting for their British motherland as Australians, not as Victorians or Tasmanians — and the emotional dedication with which Australians publicly identified themselves as loyal British subjects.
That self-identification was reemphasized in 1923, when a conference of British Empire member countries agreed that British dominions could henceforth appoint their own ambassadors or diplomatic reps to foreign countries, instead of being represented by the British ambassador. Canada, South Africa, and Ireland promptly did appoint their own diplomatic reps. But Australia didn’t, on the grounds that there was no public enthusiasm for seeking visible signs of national independence from Britain.
However, Australia’s relationship with Britain has not been only that of a dutiful child seeking approval from its esteemed mother country; it has also included a love/hate component, especially after WWII.”
“The significance of WWII for Australia was very different from that of WWI, because Australia itself was attacked, and because there was heavy fighting on islands near Australia rather than just halfway around the world. The surrender of Britain’s big naval base at Singapore to Japanese troops is often regarded as a turning point in the evolution of Australia’s self-image.
During the two decades after WWI, Japan built up its army and navy, launched an undeclared war against China, and emerged as a danger to Australia. In its role as defender of Australia, Britain responded by strengthening its base on the tip of the Malay Peninsula at Singapore, although that base was 4,000 miles from Australia. Australia relied for protection on that remote British base and on the even more remote British fleet concentrated in the Atlantic and the Mediterranean. But Britain cannot be blamed alone for the eventual failure of its Singapore strategy, because Australia simultaneously neglected steps for its own defense. Australia abolished the draft in 1930 and built only a small air force and navy. The latter included no aircraft carriers, battleships, or warships larger than light cruisers, hopelessly inadequate to protect Australia and its international sea connections against Japanese attack. At the same time, Britain itself was facing a more serious and immediate threat from Germany and was lagging in its own military preparations against Japan.
Just as at the outset of WWI, when Britain declared war on Germany again in 1939, Australia’s prime minister promptly announced without even consulting parliament, ‘Great Britain has declared war, and as a result Australia is also at war [with Germany].’ As in WWI, Australia initially had no direct interest in WWII’s European theater halfway around the world, pitting Germany against Poland, Britain, France, and other Western European countries. But again, just as during WWI, Australia sent troops to fight in the European theater, mainly in North Africa and Crete. As the risk of attack from Japan increased, the Australian government requested the return of those troops to defend Australia itself. The British Prime Minister Winston Churchill tried to reassure Australians by promising that Britain and its fleet would use Singapore to protect Australia against Japanese invasion, and against any Japanese fleet that might appear in Australian waters. As events proved, those promises had no basis in reality.
Japan did attack the U.S., Britain, Australia, and the Dutch East Indies beginning on December 7, 1941. On December 10, just the third day after Japan’s declaration of war, Japanese bombers sank Britain’s only two large warships available in the Far East to defend Australia. On February 15, 1942, the British general in command at Singapore surrendered to the Japanese army, sending 100,000 British and Empire troops into prisoner-of-war camps — the most severe military defeat that Britain has suffered in its history. Sadly, the surrendering troops included 2,000 Australian soldiers who had arrived in Singapore only three weeks earlier in order to serve in the hopeless task of its defense. In the absence of British ships to protect Australia, the same Japanese aircraft carriers that had bombed Pearl Harbor heavily bombed Darwin on February 19, 1942. That was the first of more than 60 Japanese air raids on Australia, in addition to an attempted raid on Sydney Harbor by a Japanese submarine.
To Australians, the fall of Singapore was not just a shock and a frightening military setback: it was regarded as a betrayal of Australia by its British mother country. While the Japanese advance on Singapore was unfolding, Australia’s Prime Minister cabled Churchill that it would constitute an ‘inexcusable betrayal’ if Britain evacuated Singapore after all the assurances of the base being impregnable. But Singapore fell because Britain was stretched militarily much too thin between the European theater and the Far East, and because the attacking Japanese forces were tactically superior to the numerically superior defending British and Empire forces.
Australia had been guilty of neglecting its own defense. Nevertheless, Australian bitterness against Britain has persisted for a long time.”
“The lessons of WWII for Australia were two-fold. First and foremost, Britain had been powerless to defend Australia. Instead, the defense of Australia had depended on massive deployment of American troops, ships, and planes, commanded by the American General MacArthur, who established his headquarters in Australia. MacArthur directed operations, including those involving Australian troops, largely by himself: there was no suggestion of an equal partnership between the U.S. and Australia. While there was concern about the possibility of Japanese landings in Australia, they didn’t materialize. But it was clear that any defense of Australia against landings would have been by the U.S., not by Britain. As the war against Japan slowly unfolded over nearly four years, Australian troops fought against Japanese troops on the islands of New Guinea, New Britain, the Solomons, and finally Borneo. Those Australian troops played a vital front-line role in defeating Japan’s 1942 attempt to capture Australian New Guinea’s capital of Port Moresby. Increasingly thereafter, though, MacArthur relegated Australian troops to secondary operations far from the front lines. As a result, although Australia was attacked directly in WWII but not in WWI, Australia’s casualties in WWII were paradoxically less than half of those in WWI.
Second, WWII brought home to Australia that, while Australian troops served in both wars in the remote European theater, there were grave immediate risks to Australia nearby, from Asia. With reason, Australia now came to consider Japan as the enemy. About 22,000 Australian troops captured by the Japanese during the war were subjected to unspeakably brutal conditions in Japanese prisoner-of-war camps, where 36% of the Australian prisoners died: a far higher percentage than the 1% death toll of American and British soldiers in German prisoner-of-war camps, and of German soldiers in American and British camps. Especially shocking to Australians was the Sandakan Death March, in which 2,700 Australian and British troops captured by the Japanese and imprisoned at Sandakan on Borneo were starved, beaten, and marched across the island; the few who survived the march were then executed, so that almost all of those prisoners died.”
“After WWII there unfolded a gradual loosening of Australia’s ties to Britain and a shift away from Australians’ self-identification as ‘loyal British in Australia,’ resulting in a dismantling of the White Australia policy. The changes have been strung out over many decades, and they are still going on today.
WWII had immediate consequences for Australia’s immigration policy. Already in 1943, Australia’s prime minister concluded that the tiny population of Australians (less than 8 million in 1945) could not hold their huge continent against threats from Japan (population then of over 100 million), Indonesia (just 200 miles away) with a population approaching 200 million, and China (population 1 billion). By comparison with high population densities in Japan and Java and China, Australia looked empty and attractive to Asian invasion — so thought the prime minister, but Asians themselves did not think that way. The other argument for more immigration was the mistaken belief that a large population is essential for any country to develop a strong First World economy.
Neither of those arguments made sense. There always have been, and still are, compelling reasons why Australia has a much lower population density than does Japan or Java. All of Japan and Java is wet and fertile, and much of the area of those islands is suitable for highly productive agriculture. But most of Australia’s area is barren desert, and only a tiny fraction is productive farmland. As for the necessity of a large population, the economic successes of Denmark, Finland, Israel, and Singapore, each with a population only one-quarter the size of Australia’s, illustrate that quality counts more than quantity in economic success. In fact, Australia would be much better off with a smaller population than it presently has, because that would reduce human impact on the fragile Australian landscape and would increase the ratio of natural resources to people.
But Australia’s prime ministers in the 1940s were neither ecologists nor economists, and so post-war Australia did embark on a crash program of encouraging immigration. Unfortunately, there were not nearly enough applications from the preferred sources of Britain and Ireland to fill Australia’s immigration target, and the White Australia policy limited Australia’s other options. Inducing American servicemen who had been stationed in Australia to stay was not an attractive possibility, because too many of them were black. Instead, initially the ‘next best’ source from which post-war Australia encouraged immigration became Northern Europe. The third choice was Southern Europe, accounting for the plethora of Italian and Greek restaurants as early as the 1960s. Australian immigration supporters announced the surprising discovery, ‘With proper selection, Italians make excellent citizens’ (!!). As a first step in that direction, Italian and German prisoners of war who had been brought to Australia were permitted to remain.
Australia’s immigration minister from 1945–1949 was an outspoken racist. He even refused to allow Australian men who had been so unpatriotic as to marry Japanese, Chinese, or Indonesian women to bring their war-brides or children into Australia, calling them ‘permanently undesirable…a mongrel Australia is impossible.’ As an additional source, he wrote approvingly about the three Baltic Republics (Estonia, Latvia, and Lithuania), whose annexation by Russia had motivated emigration by thousands of well-educated white people with eye and hair color resembling those of the British. The result of that selective encouragement of immigration was that, from 1945 to 1950, Australia received about 700,000 immigrants (10% of its 1945 population), half of them reassuringly British, the rest from other European countries. In 1949 Australia even relented and permitted Japanese war brides to remain.
The undermining of the White Australia policy that produced today’s Asian immigrants resulted from five considerations: military protection, political developments in Asia, shifts of Australian trade, the immigrants themselves, and British policy. As for military considerations, WWII had made clear that Britain was no longer a military power in the Pacific; instead, Australia’s military ties had to be with the U.S. That became officially recognized by the 1951 ANZUS Security Treaty between the U.S., Australia, and New Zealand, without the participation of Britain. The Korean War, the rise of communist threats in Malaya and Vietnam, and Indonesian military interventions in Dutch New Guinea, Malaysian Borneo, and Portuguese Timor warned Australia of proliferating security problems nearby. The 1956 Suez Crisis, in which Britain failed to topple President Nasser of Egypt and was forced to yield to U.S. economic pressure, laid bare Britain’s military and economic weakness. To the shock of Australians, in 1967 Britain announced its intent to withdraw all of its military forces east of the Suez Canal. That marked the official end to Britain’s long-standing role as Australia’s protector.
As for Asian political developments, former colonies and protectorates and mandates in Asia were becoming independent nations, including Indonesia, East Timor, Papua New Guinea, the Philippines, Malaysia, Vietnam, Laos, Cambodia, and Thailand. Those countries were near Australia: Papua New Guinea only a few miles away, and Indonesia and East Timor only 200 miles away. They devised their own foreign policies, no longer subservient to the foreign policies of their former colonial masters. They were also rising economically.
As for trade, Britain had formerly been by far the largest trade partner of Australia, accounting for 45% of Australia’s imports and 30% of its exports even as late as the early 1950s. A rapid rise in Australian trade with Japan began with Australia’s overcoming its racist and WWII-driven hostility to Japan to sign a trade agreement with Japan in 1957, and then lifting its ban on exporting iron ore to Japan in 1960. By the 1980s, Australia’s leading trade partner was — Japan! — followed by the U.S., with Britain far behind. In 1982, Japan received 28% of Australian exports, the U.S. 11%, and Britain only 4%. But it was an obvious contradiction that, at the same time as Australia was telling Japan and other Asian countries how eager it was for their trade, it was telling them that it considered Japanese and other Asian people themselves unfit to settle in Australia.
The next-to-last factor undermining the pro-British White Australia policy was the shift in Australian immigrants themselves. All of those Italians, Greeks, Estonians, Latvians, and Lithuanians who immigrated after WWII were undoubtedly white, but they were not British. They didn’t share Australians’ traditional image of themselves as loyal subjects of the British. They also didn’t share the strong racist prejudices against Asians that were prevalent in Britain as well as in Australia as late as the 1950s.
Finally, Britain was also pulling away from Australia. For Britain as for Australia, its interests were changing, and its self-image was becoming increasingly out-of-date. The British government recognized those cruel realities before the Australian government did, but the acknowledgment was intensely painful on both sides. The changes in Britain were at their peak in the late 50s/early 60s. The British had traditionally viewed their identity as being based on ownership of the largest empire in world history, then on leadership of the British Commonwealth. The Empire and then the Commonwealth had been Britain’s leading trade partners, and major sources of troops. But Britain’s trade with the Commonwealth was decreasing and shifting toward Europe. Britain’s African and Asian colonies were becoming independent, developing their own national identities, formulating their own foreign policies even within the Commonwealth, and (over British objections) forcing South Africa out of the Commonwealth because of its racist apartheid policies.
In 1955 Britain decided to withdraw from negotiations among six Western European countries (France, Germany, Italy, Belgium, the Netherlands, and Luxembourg) to form a European Economic Community (the EEC or ‘Common Market,’ progenitor of today’s European Union). Contrary to 1955 British expectations, the Six did succeed in bringing the EEC into existence without Britain in 1957. By 1961, Britain’s Prime Minister Harold Macmillan recognized the shift in Britain’s interests. Europe was becoming more important to Britain than was the Commonwealth, both economically and politically. Hence Britain applied to join the EEC. That application and its sequels constituted a shock to Australia’s and Britain’s relationship even more fundamental than had been the fall of Singapore, although the latter was more dramatic and symbolic, and lingers today as a bigger cause of festering resentment among Australians.
Britain’s application created an unavoidable clash between British and Australian interests. The Six were erecting shared tariff barriers against non-EEC imports, barriers to which Britain would have to subscribe. Those barriers would now apply to Australian food products and refined metals, for which Britain still represented a major export market. Australian food exports to Britain would now be displaced by French, Dutch, Italian, and Danish foods. Both countries’ prime ministers knew this cruel reality. Macmillan promised Australia and other Commonwealth countries that Britain would insist on defending Commonwealth interests in Britain’s negotiations with the EEC. But it seemed doubtful that Macmillan would prevail, and in fact the Six refused to make significant concessions to Australia’s interests.
Australians denounced the application as immoral, dishonest, a basis for moral grievance — a betrayal of Gallipoli, of over a century of other Australian sacrifices for the British motherland, and of the British heritage underlying Australia’s traditional national identity. The shock was profoundly symbolic, as well as material. Worse symbolic shocks were still to come. Britain’s Commonwealth Immigration Act of 1962, actually aimed at halting Commonwealth immigration from the West Indies and Pakistan, avoided all appearances of racism by ending the automatic right of all Commonwealth citizens (including Australians) to enter and reside in Britain. Britain’s 1968 Immigration Act barred automatic right of entry into Britain for all FOREIGNERS (Australians were now declared to be foreigners!) without at least one British-born grandparent, thereby excluding a large fraction of Australians. In 1972 Britain declared Australians to be ALIENS (!). What an insult!
In short, it wasn’t the case that Australians were declaring their independence. Instead, the motherland was declaring its own independence, loosening its ties with the Commonwealth, and disowning its children.
British/European negotiations unfolded with agonizing slowness, starts, and stops. Britain’s admission to the EEC was finally negotiated in 1971, with membership taking effect in 1973. By then, Britain accounted for only 8% of Australian exports. Australian politicians had come to recognize that joining Europe was in Britain’s vital interests, that Australia shouldn’t and couldn’t oppose British interests, and that Australia’s previous relationship with Britain had become a myth.”
“From an Australian perspective, it may seem that Australian identity changed suddenly and comprehensively in 1972, when Australia’s Labour Party under Prime Minister Gough Whitlam came to power for the first time in 23 years. In his first 19 days in office, Whitlam embarked on a crash program of selective change, with few parallels in the modern world for its speed and comprehensiveness. The changes introduced in those 19 days included: the end of the military draft; withdrawal of all Australian troops from Vietnam; recognition of China; announced independence for Papua New Guinea, which Australia had been administering for over 50 years under a mandate from the League of Nations and then from the UN; banning visits by racially selected overseas athletic teams (a rule aimed especially at all-white South African teams); abolishing the nomination of Australians for Britain’s system of honors and replacing it with a new system of Australian honors; and officially repudiating the White Australia policy. Once Whitlam’s whole cabinet had been approved, it then adopted more steps in the crash program: reduction of the voting age to 18; an increase in the minimum wage; giving representation to both the Northern Territory and the Australian Capital Territory in the federal Senate; granting legislative councils to both of those territories; requiring environmental impact statements for industrial developments; increased spending on Aborigines; equal pay for women; no-fault divorce; a comprehensive medical insurance scheme; and big changes in education that included abolishing university fees, boosts in financial aid, and transfer from the states to Australia of the responsibility for funding tertiary education.
Whitlam correctly described his reforms as ‘a recognition of what has already happened’ rather than as a revolution arising out of nothing. In fact, Australia’s British identity had been gradually decreasing. The fall of Singapore in 1942 had been a first big shock, the 1951 ANZUS Security Treaty an early recognition, communist threats in Eastern Europe and Vietnam warning signs. But Australia still looked to and sided with Britain long after the fall of Singapore. Australian troops fought alongside British troops in Malaya against communist insurgents in the early 1960s. Australia allowed Britain to test atomic bombs in remote Australian deserts in the late 1950s, in an effort to maintain Britain as a world military power independent of the U.S. Australia was among the few nations to support Britain’s widely denounced attack on Egypt in the 1956 Suez Crisis. In 1954 the first visit to Australia by a reigning British monarch, Queen Elizabeth, was greeted by an enormous outpouring of pro-British sentiment: over 75% of all Australians turned out on the streets to cheer her. But — by the time that Queen Elizabeth visited Australia again in 1963, two years after Britain’s first EEC application, Australians were much less interested in her and in Britain.
The dismantling of Australia’s White Australia policy had similarly proceeded in stages before Whitlam made it official, with the admission of Japanese war-brides in 1949 being a first stage. Under the Colombo Plan for Asian development, Australia accepted 10,000 Asian student visitors in the 1950s. The despised dictation test for prospective immigrants was dropped in 1958. The Migration Act of that same year allowed ‘distinguished and highly qualified Asians’ to immigrate. Hence when Whitlam announced the end of the White Australia policy in 1972 and repudiated all official forms of racial discrimination, his actions aroused much less protest than one might have expected for the end of a policy that had been espoused so tenaciously for over a century. Between 1978 and 1982 Australia admitted more Indochinese refugees, as a percentage of its population, than any other country. By the late 1980s, nearly half of Australians were either born overseas or had at least one overseas-born parent. By 1991, Asians represented over 50% of immigrants to Australia. By 2010, the percentage of Australians actually born overseas (more than 25%) was second in the world, trailing only Israel. The influence of those Asian immigrants has been far out of proportion to their numbers: Asian students have come to occupy over 70% of the places in Sydney’s top schools, and Asians and other non-Europeans now make up more than half of Australian medical students.
Other changes in Australia have been political and cultural. In 1986 Australia ended the right of final appeal to Britain’s highest court, thereby abolishing the last real trace of British sovereignty and making Australia fully independent at last. In 1999 Australia’s High Court declared Britain to be a ‘foreign country.’ On the cultural front, the 1960s dominance of British cooking in Australia, symbolized by meat pies and beer, has been greatly broadened by many styles of international cuisine. Australian wines now include some of the greatest wines in the world.”
“Australia still maintains highly egalitarian social values and strong individualism. Australian society still has an unmistakably Australian flavor, such as a dedication to sports: especially Australian-rules football (invented in Australia and played nowhere else professionally), along with swimming, plus the British sports of cricket and rugby. Australia’s leaders themselves embrace the national pastimes even when they’re dangerous: Prime Minister Harold Holt died in office by drowning in 1967, while swimming in an ocean area with strong offshore currents.”
“Australia’s reappraisal of its core values, and its train of selective changes, are surely not over. In 1999 Australia held a referendum on whether Australia should abandon the Queen of Britain as its head of state and instead become a republic. While the referendum was defeated by a vote of 55% to 45%, decades earlier it would have been utterly unthinkable even to hold such a referendum. The percentage of Australians who were born in Australia is rapidly decreasing. It seems only a matter of time before there will be another referendum on whether Australia should become a republic, and the chances of a ‘yes’ vote will be higher. Within a decade or two, it’s likely that Asians will constitute over 15% of Australia’s population and its legislators, and over 50% of the students in top Australian universities. Sooner or later, Australia will elect an Asian as its prime minister. (A Vietnamese immigrant is already governor of South Australia.) As those changes unfold, won’t it appear incongruous to retain the Queen of Britain as its head of state, to retain her portrait on its currency, and to retain a flag based on the British flag?”
WWII Japan
“Japan today has the world’s third-largest economy, only recently overtaken by China’s. Japan accounts for 8% of global economic output, almost half that of the world’s largest economy (the U.S.’s), and more than double that of the UK, another famously productive country. In general, national economic outputs are the products of two numbers: the number of people in a country, multiplied by average output per person. Japan’s national output is high both because Japan has a large population (second only to that of the U.S. among rich democracies) and because it has high average individual productivity.
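As a purely illustrative restatement of that product (the symbols $N$ and $y$ are my own shorthand, not the book’s):

$$\text{national output} = N \times y, \qquad N = \text{population}, \quad y = \text{average output per person}.$$

Japan’s total is high because both factors are large; a country can also reach a high total with a smaller population and very high per-person output, or vice versa.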
While Japan’s large domestic debt attracts much attention, Japan is the world’s leading creditor nation. It has the world’s second-highest foreign exchange reserves, and it rivals China as the biggest holder of U.S. debt.
One important factor behind the economy’s strength is Japan’s high spending on research and development to drive innovation. Japan makes the world’s third-largest absolute annual investment in R&D, behind only China and the U.S. with their far larger populations. In relative terms, the proportion of GDP that Japan devotes to R&D, 3.5%, is nearly double that of the U.S. (only 2%), and still considerably higher than that of two other countries known for their R&D investments, Germany (3%) and China (2%).
Every year, the World Economic Forum reports for the world’s nations a number called the Global Competitiveness Index, which integrates a dozen sets of numbers influencing a country’s economic productivity. Japan for many years has consistently ranked among the world’s top 10 countries with respect to this index; Japan, Singapore, and Hong Kong are the only three economies outside Western Europe and the U.S. to rate in that top 10. The reasons for Japan’s high ranking include its excellent infrastructure and transport network (such as the world’s best railroads), and its healthy, well-educated workforce, especially proficient in math and science. Other reasons include: control of inflation; cooperative labor/employer relations; highly competitive local markets; high-quality research institutions churning out lots of scientists and engineers; a large domestic market; low unemployment; more patents filed per year per citizen than in any other country; protection of property rights and intellectual property; rapid absorption of tech; sophisticated consumers and business people; and well-trained business staff.
The only two countries whose economies exceed Japan’s are the U.S. and China, but they devote a large fraction of their budgets to military expenditures. Japan saves itself those costs, thanks to a clause of the U.S.-imposed 1947 constitution (now endorsed by a large fraction of Japanese people themselves) that reduced Japan’s armed forces to a bare minimum.”
“A second set of strengths of Japan, besides its economic ones, is its ‘human capital.’ That population numbers more than 120 million and is healthy and highly educated. Japanese life expectancy is the highest in the world: 80 for men, 86 for women. The socioeconomic inequality that limits opportunities for a large fraction of Americans is greatly reduced in Japan: Japan is the world’s third-most egalitarian nation in its distribution of income, behind only Denmark and Sweden. That’s partly a result of Japanese government school policies: schools in socioeconomically disadvantaged areas have smaller classes (more favorable teacher-to-student ratios) than do schools in richer areas, thereby making it easier for children of poorer citizens to catch up. Social status in Japan depends more on education than on heredity and family connections: again, the reverse of U.S. trends. In short, rather than investing disproportionately in just a fraction of its citizens, Japan invests in all of them — at least, in all of its male citizens.
Literacy and attained educational levels in Japan are close to the highest in the world. Enrollment of Japanese children in both kindergarten and secondary school is almost universal, although neither is compulsory. Student testing in nations around the world shows that Japanese students rank fourth highest in math and science functional literacy, ahead of all European countries and the U.S. Japan is second only to Canada in the percentage of adults — nearly 50% — who go on to higher education beyond high school. Offsetting those strengths of Japanese education is a frequent criticism by the Japanese themselves that it puts too much pressure on students to focus on test scores, and places insufficient emphasis on self-motivation and independent thinking. A result is that, once Japanese students escape the pressure-cooker atmosphere of high school and reach university, their dedication to studying declines.
Tokyo rivals Singapore as the cleanest city in Asia, and is one of the cleanest in the world. That’s because Japanese children learn to be clean and to clean up, as part of their responsibility to preserve Japan and to hand it on to the next generation, a trait that’s been traced back to ancient times. Visitors also notice the safety and low crime rates. Japan’s prison population is far smaller than that of the U.S.: about 80,000 vs. nearly 2.5 million, respectively. Rioting and looting are rare in Japan. Ethnic tensions are low compared to the U.S. and Europe, because of Japan’s ethnic homogeneity and very small ethnic minorities.
Finally, Japan’s strengths include big environmental advantages. Japanese agricultural productivity is high because of Japan’s combination of temperate climate, freedom from tropical agricultural pests, high rainfall concentrated in the summer growing season, and fertile volcanic soils. That contributes to Japan’s ability to support one of the highest average human population densities in the world, calculated with respect to the small percentage (12%) of Japan’s land area in which the population and the agriculture are concentrated. (Most of Japan’s area consists of steep forests and mountains supporting only small human populations and little agriculture.) Nutrient run-off from those fertile soils makes Japanese rivers and coastal waters productive of fish, shellfish, edible seaweeds, and other aquatic foods. Japan is the world’s sixth-largest producer of seafood. As a result of all of those environmental advantages, Japan was unusual in the ancient world in that, already at least 10,000 years before the adoption of agriculture, Japanese hunter-gatherers had settled down in villages and made pottery, rather than living as nomads with few material possessions. Until Japan’s population explosion within the last century-and-a-half, Japan was self-sufficient in food.”
“Asked to name Japan’s most serious problem, economists are likely to answer, ‘its huge national debt.’ The debt is currently 2.5 times Japan’s annual GDP, i.e., the value of everything produced in Japan in one year. That means that, even if the Japanese were to devote all of their income and efforts to paying off their national debt and produced nothing for themselves, it would still take them 2.5 years to pay off the debt. Worse yet, the debt has been continuously rising for years. For comparison, while American fiscal conservatives are greatly concerned by the U.S.’s national debt, it’s still ‘only’ 1 times our GDP. Greece and Spain are notorious for their economic problems, but Japan’s debt-to-GDP ratio is double that of Greece and four times that of Spain. Japan’s government debt is comparable to that of the entire eurozone of 17 countries, whose aggregate population is triple that of Japan.
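To make that repayment arithmetic explicit — an illustrative calculation using only the figures quoted above, with $D$ and $Y$ as my own shorthand:

$$T = \frac{D}{Y} = \frac{2.5\,Y}{Y} = 2.5 \text{ years}, \qquad D = \text{debt}, \quad Y = \text{annual GDP}.$$

By the same measure, the ratios quoted above imply roughly 1 year for the U.S., about 1.25 for Greece, and about 0.6 for Spain.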
Why didn’t the Japanese government collapse or default long ago under this burden? First, most of the debt is not owed to foreign creditors, but to bond-holding Japanese individuals, Japanese businesses and pension funds (many of them owned by the government itself), and the Bank of Japan, none of which play tough with the Japanese government. In contrast, much of Greece’s debt is owed to foreign creditors, who do play tough and press Greece to change its fiscal policies. Japan is a net creditor nation for other countries, which owe money to Japan. Second, interest rates are kept low (below 1%) by government policy, in order to keep a lid on government interest payments. Finally, Japanese as well as foreign creditors still have so much confidence in the government’s ability to pay that they continue to buy government bonds. In fact, that’s the main way in which Japanese individuals and companies invest their savings. But nobody knows how much higher the debt can rise before Japan’s creditors lose confidence and the government has to default.
Despite those low interest rates, the sizes of the debt and of Japan’s aged and retired population mean that debt interest and health and social security costs consume much of the government’s tax income. That reduces the government funds that would otherwise be available to invest in education, R&D, infrastructure, and other engines of economic growth that could stimulate tax revenues. Exacerbating that problem, Japanese government tax rates and hence government income are relatively low by developed-world standards. Ultimately, the debt is held mainly by older Japanese people, who invested their money either directly (buying bonds) or indirectly (receiving pensions from pension funds heavily invested in government bonds) — while the Japanese people ultimately paying the interest on the debt are mainly younger Japanese still working and paying taxes. Hence Japan’s debt in effect represents payments by younger Japanese to older Japanese, constituting an inter-generational conflict and a mortgage on Japan’s future. That mortgage is growing, because Japan’s young population is shrinking while its older population is growing.
The solutions proposed to reduce the debt include raising tax rates, reducing government spending, and reducing the pensions of older Japanese. Those and all other proposed solutions prove to be fraught with difficulties. Thus, Japan’s government debt is a big problem that is widely acknowledged in Japan, that has been around for a long time, that has been continuing to get worse, and for which no agreement on a solution is in sight.”
“The other fundamental problems are the four linked issues of women’s roles, Japan’s low and declining birth rate, its declining population size, and its aging population.
In theory, Japanese women and men have the same status. The 1947 Japanese constitution, drafted by the U.S. government of occupation and still in force today, contains a clause (drafted by an American woman) proclaiming gender equality. That draft clause was adopted over fierce Japanese government opposition, and some Japanese lawmakers still want to change the clause.
In reality, Japanese women face many barriers to equality: the gender gap in health, education, and participation in the workforce and in politics is greater in Japan than in any other rich industrialized nation except South Korea. I speculate that that’s because Japan is the rich industrialized nation in which a woman’s role was until recently most subordinate and stereotyped. For instance, while walking in public, a traditional Japanese woman was expected to remain three steps behind her husband. The barriers vary depending on location and age: e.g., they are stronger in rural areas and for older Japanese.
At home, the gender division within Japanese married couples is often referred to as the ‘marriage package.’ An inefficient division of labor prevails, whereby a Japanese husband puts in the work hours of two people outside the house and thereby sacrifices time that could be spent with his children, while his wife stays at home and sacrifices the possibility of a fulfilling career. Employers expect employees (mostly men) to stay late in the office and to go out for drinks with one another after work. That makes it difficult for Japanese husbands to share household responsibilities with their wives even if they want to. Japanese husbands do less housework than do husbands in other rich industrialized nations: e.g. only about two-thirds as many hours per week as American husbands. Japanese husbands with working wives perform no more hours of housework than do those with housewives. Instead, it’s predominantly the wives who care for their children, their husbands, their elderly parents, their husbands’ elderly parents — and manage the household finances in their remaining spare time. Many Japanese wives today swear that they will be the last generation of Japanese women to be saddled with these responsibilities.
In the workplace, Japanese women have low participation and low pay. Participation declines steeply with increasing level of responsibility. Whereas women account for 49% of Japanese university students and 45% of entry-level job holders, they account for only 14% of university faculty positions (vs. 33–44% in the U.S., the UK, Germany, and France), 11% of middle-level to senior management positions, 2% of board of director positions, and less than 1% of CEOs. At those higher levels Japan lags behind all major industrial countries except (again) South Korea. There are few women in Japanese politics, and Japan has never had a woman prime minister. Japan’s male/female pay differential for full-time employees is the third highest (exceeded only by South Korea and Estonia) among 35 rich industrialized nations. A Japanese woman employee is paid on average only 73% of the salary of a male employee at the same level, compared to 85% for the average rich industrial country, ranging up to 94% for New Zealand. Work obstacles for women include the long work hours, the expectation of post-work employee socializing, and the problem of who will take care of the children if a working mom is expected to stay out socializing, and if her husband is also unavailable or unwilling.
Childcare is a big problem for working Japanese mothers. On paper, Japanese law guarantees women four weeks of maternity leave before and eight weeks after childbirth; some Japanese men are also entitled to paternity leave; and a 1992 law entitles parents to take one whole year of unpaid leave to raise a child if they so choose. In practice, virtually all Japanese fathers and most Japanese mothers don’t take the leave to which they are entitled. Instead, 70% of Japanese working women quit work upon the birth of their first child, and most of them don’t return to work for many years, if ever. While it’s nominally illegal for a Japanese employer to pressure a mother into quitting work, Japanese mothers are actually so pressured. Little childcare is available to Japanese working mothers, because of the lack of immigrant women to do private childcare and because there are so few private or government childcare centers — unlike the situation in the U.S. and Scandinavia, respectively. The widespread view is instead that a mother should stay home, care for her small children herself, and not work.
The result is a dilemma for Japanese women in the workplace. On the one hand, many or most Japanese women want to work, and they also want to have children and to spend time with them. On the other hand, Japanese companies invest heavily in training an employee, expect to offer a lifetime job, and expect in return that the employee will work long hours and will remain for life. Companies are reluctant to hire and train women, because they may want to take time off to have children, may not want to work long hours, and may not return to work after giving birth to a child. Hence women tend not to be offered, and tend not to accept if offered, full-time high-level jobs with Japanese companies.
Japan’s current prime minister, Shinzo Abe, is a conservative who formerly did not display interest in women’s issues. Recently, however, he reversed course and announced that he wanted to find ways of helping mothers return to work — many suspect not because of his suddenly developing a concern for women, but because of Japan’s shrinking population and hence shrinking workforce. Half of Japanese university graduates are women. Hence the underemployment of Japanese women constitutes for Japan the loss of half of its human capital. Abe proposed that working moms should be able to take three years of maternity leave with the assurance of returning to their jobs, that the government expand public childcare centers, and that businesses receive financial incentives to hire women. But many Japanese women are opposed to Abe’s proposal: they suspect it’s just one more government conspiracy to keep Japanese women at home!
Low and dropping birth rates prevail throughout the First World. But Japan has nearly the world’s lowest birth rate: 7 births per year per 1,000 people, compared to 13 in the U.S., 19 averaged over the whole world, and more than 40 in some African countries. Furthermore, that already low birth rate in Japan is still declining. If in recent years one had linearly extrapolated the decline from year to year, one would have predicted that Japan’s birth rate would hit zero in 2017! Obviously, things didn’t get that bad, but the decline is real.
An alternative way of expressing births is by what’s called the total fertility rate: i.e. the total number of babies born to an average woman over her lifetime. For the whole world that number averages 2.5 babies; for the First World countries with the biggest economies, it varies between 1.3 and 2 babies (e.g. 1.9 for the U.S.). The number for Japan is only 1.27 babies, at the low end of the spectrum; South Korea and Poland are among the few countries with lower values. But the average number of babies that a woman has to bear in order for the population to remain stable — the replacement rate — is slightly more than 2. Japan along with some other First World countries has an average below that replacement rate. For other First World countries, that’s not a problem, because immigration keeps the population size constant or even growing despite low fertility. However, Japan’s near-absence of immigration means that Japan’s population is actually declining.
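To see what a fertility rate of 1.27 implies, here is an illustrative calculation (the replacement value of roughly 2.07 is my assumption for a low-mortality country, reflecting the slight excess of boys born plus a little childhood mortality):

$$\frac{\text{size of next generation}}{\text{size of current generation}} \;\approx\; \frac{\text{total fertility rate}}{\text{replacement rate}} \;=\; \frac{1.27}{2.07} \;\approx\; 0.61.$$

That is, without immigration, each Japanese generation would be only about three-fifths the size of the one before it.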
Part of the reason for Japan’s falling birth rate is that Japan’s age of first marriage has been rising: it’s now around 30 for both men and women. That means fewer pre-menopausal years in which a woman can conceive. A bigger reason for the falling birth rate is that the rate of marriage itself (i.e., the number of marriages per 1,000 people per year) is falling rapidly. One might object that the marriage rate is also falling in most other developed countries without causing the catastrophic drop in the birth rate that Japan is experiencing, because so many births elsewhere are to unwed moms: 40% in the U.S., 50% in France, and 66% in Iceland. But that mitigation doesn’t apply to Japan, where unwed mothers account for a negligible proportion of births: only 2%.
Why are Japanese people increasingly avoiding getting married and having kids? One reason is economics: it’s cheaper and more comfortable to remain single and live at home with one’s parents than to move out, marry, and have to pay for one’s own apartment plus the expenses of children. Especially for women, marriage and motherhood can be economically catastrophic by making it difficult for them to obtain or retain a job. Another reason offered is the freedom of being single, a consideration especially for women who don’t want to end up shouldering the responsibility of the household, husband, childcare, and their own and their husband’s elderly parents. Still another reason is that many modern Japanese, both men and women in equal proportions, consider marriage ‘unnecessary’ to a fulfilling life.
Despite those considerations, 70% of unmarried Japanese men and women still claim that they want to get married. Why, then, don’t they succeed in finding a suitable mate? Traditionally, that didn’t require effort on their part, because Japanese marriages were arranged by go-betweens who scheduled formal interviews at which young unmarried people could meet potential marriage partners. As recently as 1960, that was still the predominant form of marriage in Japan. Since then, the declining number of go-betweens, and the rise of the Western idea of romantic marriage, have caused such arranged marriages to drop to only 5% of all marriages. But many modern young Japanese are too busy working, too inexperienced at dating, or too awkward to develop a romantic relationship.
In particular, the phasing-out of arranged marriages in recent decades has coincided with the rise of electronic communication by email and texting, and with the consequent decline of social skills.”
“Japan’s low and still declining birth and marriage rates are directly responsible for two remaining problems widely recognized in Japan: the declining population, and the aging population.
Because Japan’s birth rate has for many years been below the replacement level, it was clear that Japan’s population would eventually cease rising and begin to fall. Still, it was a shock when census figures confirmed that that dreaded moment had actually arrived. After the 2010 census (Japan’s censuses are taken at five-year intervals) had shown a population of 128 million, the 2015 census yielded 127 million. From the current trends and age distribution of Japan’s population, it’s predicted that there will be a further drop by about 40 million by 2060, to a population of only 80 million.
The consequences of Japan’s falling population and its shift from rural to urban are already visible. Japan is closing schools at a rate of about 500 per year. Rural depopulation is causing villages and small towns to be abandoned. It’s feared that, without population growth as the supposed driver of economic growth, a less populous Japan will be poorer and less powerful on the world stage. In 1948 Japan was the world’s fifth most populous country; by 2007 it had only the 10th-largest population; and current projections are that within a few decades it will fall behind even such non-powerhouses as the Congo and Ethiopia. That’s considered humiliating, on the tacit assumption that a country with a smaller population than the Congo will be weaker and less important than the Congo.
Hence in 2015 Prime Minister Abe declared that his administration would aim to maintain Japan’s population at least at 100 million, by trying to boost the average total fertility rate from 1.4 to 1.8 children per woman. But boosting the output of babies will depend on the choices of young Japanese rather than of Abe.
Is Japan’s declining population a ‘problem’ for Japan? There are many countries that have much smaller populations than Japan’s, and that are nevertheless rich and important players on the world stage, including Australia, Finland, Israel, the Netherlands, Singapore, Sweden, Switzerland, and Taiwan. Of course those countries aren’t world military leaders, but neither is Japan today because of its constitution and widespread pacifism. To me, it seems that Japan wouldn’t be worse off but instead much better off with a smaller population, because that would mean less need for domestic and imported resources. Resource pressure has been one of the curses of modern Japanese history, and Japanese themselves think of their country as resource-starved.
Even those Japanese concerned about their country’s declining population agree that a much bigger problem is that Japan’s population is aging. Japan is already the country with the world’s highest life expectancy (84, compared to 77 for the U.S. and just 40–45 for many African countries), and with the highest percentage of old people. Already now, 23% of Japan’s population is 65+, and 6% is over 80. By the year 2050 those numbers are projected to be nearly 40% and 16%, respectively. (The corresponding numbers for Mali are only 3% and 0.1%.) At that point, Japanese people 80+ will outnumber kids <14, and people 65+ will outnumber those kids by more than 3 to 1.
A large number of old people creates a burden on the national healthcare system, because older people are much more subject to illnesses than are younger people: especially to chronic, incurable, hard-to-cure, or expensive-to-treat illnesses such as heart diseases and dementia. As the percentage of the population 65+ increases, the population’s percentage of retirees also increases, and its percentage of workers decreases. That means fewer young workers to serve as the ultimate sources of support for growing numbers of older retirees: either supporting them directly through financial support and personal care, or else supporting them indirectly through government pensions and senior healthcare systems funded by the taxed earnings of young workers. Japan’s ratio of workers to retirees has been falling catastrophically: from 9 workers per retiree in 1965, to 2.4 today, to a projected 1.3 in 2050.
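The arithmetic behind that falling ratio, as an illustrative sketch: if each retiree’s support is spread evenly across $W$ workers, each worker bears a share $1/W$ of one retiree’s support:

$$\frac{1}{9} \approx 11\% \;(1965), \qquad \frac{1}{2.4} \approx 42\% \;(\text{today}), \qquad \frac{1}{1.3} \approx 77\% \;(2050,\ \text{projected}).$$

On the quoted figures, the per-worker burden roughly quadruples between 1965 and today, and nearly doubles again by 2050.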
This same problem occurs throughout the developed world; Japan just has it to an extreme degree. How have other countries escaped this trap? The answer involves the first of what I see as Japan’s remaining three major problems: the ones that aren’t widely acknowledged as problems in Japan itself.
Japan is, and prides itself on being, the most ethnically homogeneous affluent or populous country in the world. It doesn’t welcome immigrants, makes it difficult for anyone who wants to immigrate to do so, and makes it even more difficult for anyone who’s succeeded in immigrating to receive Japanese citizenship. Immigrants and their children constitute 28% of Australia’s population, 21% of Canada’s, 16% of Sweden’s, and 14% of the U.S.’s, but only 2% of Japan’s. Among refugees seeking asylum, Sweden accepts 92%, Germany 70%, Canada 48%, but Japan only 0.2%. (For instance, Japan accepted only six and eleven refugees in 2013 and 2014, respectively.) Foreign workers constitute 15% of the workforce in the U.S. and 9% in Germany, but only 1% in Japan. Japan does admit temporary foreign workers who receive work visas of 1–3 years because of their high professional skills (e.g. as ship-builders or as construction workers for the Olympics). But such foreigners find it difficult to obtain permanent residency or citizenship.
The only significant immigration to Japan in modern times was of several million Koreans before and during WWII, when Korea was a Japanese colony. However, many or most of those Koreans were involuntary immigrants imported as slave labor. For instance, it’s not widely known that 10% of the victims killed at Hiroshima by the first atomic bomb were Korean laborers working there.
A couple of Japanese cabinet ministers have recently called for more immigration. For instance, Shigeru Ishiba, minister for local regions, said, ‘At one time, people from Japan migrated to South and North America and managed to fit in with the locals while maintaining their pride as Japanese…It doesn’t make sense to say no to foreigners who come to Japan when our people did the same thing overseas.’ Indeed, Peru has had a Japanese president, while the U.S. has had Japanese congresspeople. But the Japanese government is not currently reconsidering its opposition to immigration.
The government’s opposition reflects the negative views of immigration expressed by Japanese citizens in many public opinion polls, in which Japanese opinions fall at one extreme of the range of opinions held in affluent countries. Among Japanese polled, 63% are opposed to increasing the number of foreign residents; 72% agree that immigrants increase crime rates; and 80% deny that immigrants improve society by introducing new ideas, in contrast to the 57–75% of Americans, Canadians, and Australians who believe that they do.
It comes as no surprise that Japan, an ethnically homogeneous country with a long history of isolation and no immigration, values highly its ethnic homogeneity, while the U.S. has no ethnic homogeneity to value. Japan’s dilemma is that it suffers from widely acknowledged problems that other countries mitigate by means of immigration, but that Japan hasn’t figured out how to solve without resorting to immigration.
Despite the U.S., Canada, Australia, and Western Europe sharing Japan’s falling birth rate and aging of their native populations, those countries minimize the consequences by admitting large numbers of young immigrant workers. Japan can’t offset that declining workforce by employing more of its non-working educated mothers, because the large pool of immigrant workers hired as private childcare workers by so many American working moms scarcely exists in Japan.
The large pool of immigrant men and women who furnish most caretakers of senior citizens and most hospital nurses and staff in the U.S. also doesn’t exist in Japan.
While innovation is vigorous in Japan, as judged by the large number of patents awarded to Japanese inventors, Japanese are concerned about producing less breakthrough innovation than one would expect from Japan’s large investment in R&D. That’s reflected in the relatively modest number of Nobel Prizes awarded to Japanese scientists. Most U.S. Nobel Prize winners are immigrants or their offspring. But immigrants and their offspring are as rare among Japanese scientists as they are among the Japanese population in general. That relationship between immigration and Nobel Prizes is not surprising when one reflects that the willingness to take risks and to try something drastically new is a prerequisite for emigrating, and for innovating at the highest level.
In the short run, Japan is unwilling to solve these problems by immigration. In the long run, it’s unknown whether Japanese people will continue to suffer from these problems, or will instead choose to solve them by changing their immigration policy, or will figure out some yet-unknown solutions other than immigration. If Japan does decide to reevaluate immigration, a model palatable to Japan might be Canada’s policy, which stresses evaluating applicants for immigration on the basis of their potential value to Canada.
Japan’s next neglected big problem, after immigration, is the effect of Japan’s wartime behavior toward China and Korea on its current relations with those countries. During and before WWII, Japan did horrible things to people in other Asian countries, especially China and Korea. Long before Japan’s ‘official’ declaration of war on December 7, 1941, Japan was carrying out a full-scale undeclared war on China from 1937 onward. In that war, the Japanese military killed millions of Chinese, often in barbaric ways such as using tied-up Chinese prisoners for bayonet practice to toughen the attitudes of Japanese soldiers, killing several hundred thousand Chinese civilians at Nanking in 1937–1938, and killing many others in retaliation for the Doolittle Raid of April 1942. Although denial of these killings is widespread in Japan today, they were well documented. Japan annexed Korea in 1910, mandated that Korean schools use Japanese rather than Korean during the 35 years of Japanese occupation, forced large numbers of Korean women and women of other nationalities to become sex slaves in Japanese military brothels, and forced large numbers of Korean men to become virtual slaves for the Japanese army.
As a result, hatred of Japan is widespread today in China and Korea. In the view of Chinese and Koreans, Japan hasn’t adequately acknowledged, apologized for, or expressed regret for its wartime atrocities. China’s population is 11 times Japan’s, while the combined population of the two Koreas is more than half of Japan’s. China and North Korea both have nuclear weapons. China and both Koreas have big, well-equipped armies, while Japan’s armed forces remain minuscule because of the U.S.-imposed Japanese constitution, reinforced by widespread pacifism in Japan today. North Korea from time to time fires missiles across Japan, to demonstrate its ability to reach Japan. Yet Japan is locked in territorial disputes with both China and South Korea over tiny uninhabited islands of no intrinsic value in themselves, but important because of the fish, gas, and mineral resources within each island’s marine zone. That combination of facts spells big danger for Japan in the long run.
Unlike the Germans, the Japanese have not had a catharsis and rid themselves of the poison in their system. They have not educated their young about the wrong they had done. A Japanese prime minister expressed his ‘deepest regrets’ on the 52nd anniversary of the end of WWII (1997) and his ‘profound remorse’ during his visit to Beijing in 1997. However, he did not apologize, as the Chinese and Koreans wished Japan’s leader to do. I don’t understand why the Japanese are so unwilling to admit the past, apologize for it, and move on. For some reason, they don’t want to apologize. To apologize is to admit having done a wrong. To express regrets or remorse merely expresses their present subjective feelings. They denied that the massacre of Nanking took place; that Korean, Filipino, Dutch, and other women were kidnapped or otherwise forced to be ‘comfort women’ for Japanese soldiers; and that they carried out cruel biological experiments on live Chinese, Korean, Mongolian, Russian, and other prisoners in Manchuria. In each case, only after irrefutable evidence was produced from their own records did they make reluctant admissions. This fed suspicions of Japan’s future intentions. Present Japanese attitudes are an indication of their future conduct. If they’re ashamed of their past, they are less likely to repeat it.
Japanese history classes devote little time to WWII, say little or nothing about Japan’s role as aggressor, stress the role of Japanese as victims (of the atomic bombs) rather than as responsible for the deaths of millions of other people plus several million Japanese soldiers and civilians, and blame the U.S. for somehow tricking Japan into launching the war. (In all fairness, Korean, Chinese, and American schoolbooks present their own skewed WWII accounts.)
Chinese and Koreans might be convinced of Japan’s sincerity by Japanese responses analogous to Germany’s: for instance, if Japan’s prime minister were to visit Nanking, fall on his knees before Chinese spectators, and beg forgiveness for Japan’s wartime massacres at Nanking; if throughout Japan there were museums and monuments and former POW camps with photos and detailed explanations of Japanese wartime atrocities; if Japanese schoolchildren were regularly brought on school outings to such sites in Japan, and to sites outside Japan such as Nanking, Sandakan, Bataan, and Saipan; and if Japan devoted much more effort to depicting wartime non-Japanese victims of Japanese atrocities than to depicting Japanese victims of the war. All of those behaviors are non-existent and unthinkable in Japan, but their analogues are widely practiced in Germany. Until they are practiced in Japan, Chinese and Koreans will continue to disbelieve Japan’s scripted apologies, and to hate Japan. And as long as China and Korea are armed to the hilt while Japan remains without the means to defend itself, a big danger will continue to hang over Japan.”
“All peoples depend for their existence on renewable natural resources, including trees, fish, topsoil, clean water, and clean air. All of those resources pose problems of management, about which scientists have already accumulated much experience. If the world’s forests and fisheries were managed according to recommended best practices, it might be possible to harvest forest products and aquatic food for the indefinite future, in quantities sufficient to meet the needs of the world’s current population. Sadly, though, much actual harvesting is still destructive and non-sustainable. Most of the world’s forests are shrinking, and most fisheries are declining or have already collapsed. But no country is self-sufficient in all natural resources; all have to import at least some. Hence in most countries there are government agencies, branches of international environmental orgs (like the WWF) and local environmental orgs hard at work to solve these problems.
The problems are especially acute for Japan. Until 1853 Japan was closed to the outside world, did negligible importing, and was self-sufficient in natural resources. Forced to depend on its own forests, and alarmed by their declines in the 1600s, Japan pioneered the development of scientific forestry methods independently of Germany and Switzerland, in order to manage its forests. Now, because of Japan’s population explosion since 1853, its rise in living standards and consumption rates, its large population crammed into a small area, and its need for raw materials essential for a modern industrial economy, Japan has become one of the world’s biggest importers of natural resources. Among non-renewable resources, almost all of Japan’s needs for oil, natural gas, nickel, aluminum, nitrates, potash, and phosphate, and most of its needs for iron, coal, and copper, have to be imported. Among renewable natural resources, Japan ranks variously as the world’s leading or second- or third-leading importer of seafood, logs, plywood, tropical hardwoods, and paper and pulp materials.
That’s a long list of essential resources for which Japan depends on imports. As any of these resources becomes depleted worldwide, Japan will be the first or one of the first countries to suffer the consequences. Japan is also the major country most dependent on imported food to feed its citizens: Japan today has the highest ratio (a factor of 20) of agricultural imports to agricultural exports among major countries. The next highest ratio, that for South Korea, is still only a factor of 6, while the U.S., Brazil, India, Australia, and quite a few other major countries are net food exporters.
Japanese thus have good reason to view their country as resource-poor. One therefore expects that Japan, as the developed country with the most extreme dependence on resource imports, would be driven by self-interest to become the world’s leading promoter of sustainable resource exploitation. In particular, the rational policy would be for Japan to take the lead in sustainable exploitation of the world’s fisheries and forests on which Japan depends.
Paradoxically, the reverse is true. Japan appears to be the developed country with the least support for sustainable resource policies overseas. Japanese imports of illegally sourced forest products are much higher than those of the U.S. or of EU countries. Japan is a leader in opposing prudent regulation of ocean fishing and whaling.
For example, Japan is a big consumer of bluefin tuna, whose stocks are in steep decline from overfishing; that decline has stimulated efforts to preserve this valuable resource by agreeing on sustainable catches and imposing fishing quotas. Incredibly, when those tuna stocks were proposed in 2010 for international protection, Japan viewed it as a diplomatic triumph to have succeeded in blocking the proposal.
Japan today is also the leading and most insistent whaling nation. The International Whaling Commission determines quotas for hunting whales. Every year, Japan legally circumvents those quotas by killing large numbers of whales for the supposed purpose of research, then publishes little or no research on those dead whales and instead sells them for meat. Yet Japanese consumer demand for whale meat is low and declining, and surplus whale meat ends up as dog food and fertilizer rather than as human food. Maintaining whaling represents an economic loss for Japan, because its whaling industry has to be heavily subsidized by the government in several ways: direct subsidies to the whaling ships; the additional costs of more ships to escort and protect them; and the hidden costs of so-called ‘foreign aid’ paid to small non-whaling countries that are members of the International Whaling Commission, as a bribe in return for their pro-whaling votes.
Why does Japan pursue these stances? First, Japanese people cherish a self-image of living in harmony with nature, and they did traditionally manage their own forests sustainably — but not the overseas forests and fisheries that they now exploit. Second, Japanese national pride dislikes bowing to international pressure. One could describe Japan as ‘anti-anti-whaling’ rather than pro-whaling. Finally, awareness of Japan’s limited home resources has led it for the last 140 years to maintain, as the core of its national security and a keystone of its foreign policy, its claimed right of unrestricted access to the world’s natural resources. While that insistence was a viable policy in past times of world resource abundance, when supplies exceeded demands, it is no longer viable in today’s times of declining resources.
Efforts to grab overseas resources already drove Japan to self-destructive behavior once before, when it made war simultaneously on China, the U.S., Britain, Australia, New Zealand, and the Netherlands. Defeat then was inevitable. Now, too, defeat is again inevitable — this time by the exhaustion of both renewable and non-renewable overseas natural resources. If I were the evil dictator of a country that hated Japan and wanted to ruin it without resorting to war, I would do exactly what Japan is now doing to itself; I would destroy the overseas resources on which Japan depends.”
USA
“Among our foreign relations problems, many Americans are concerned about the long-term threat to us from the rise of China, which already has the world’s second-largest economy after that of the U.S. China’s population is more than four times bigger than ours. China’s economic growth rate has consistently exceeded that of every other major country. It has the world’s largest number of soldiers and second-highest military spending. It has possessed nuclear weapons for half a century. It already outstrips the U.S. in some spheres of advanced tech (such as alternative energy generation and high-speed rail transport). Its dictatorial government can get things done much faster than our democracy, hobbled by two parties and by checks and balances. To many Americans, it seems only a matter of time before China overtakes us economically and militarily. Of the problems of the past ten decades, those of the current decade of the 2010s really are the ones offering the most cause for anxiety.”
“While China has many more soldiers in its army, the U.S.’s long-standing investment in military tech and ocean-going warships more than counterbalances this. For instance, the U.S. has ten large nuclear-powered aircraft carriers capable of being deployed around the world; only France has even a single one, and few countries have any aircraft carrier at all, nuclear-powered or not. As a result, the U.S. is today the world’s sole global military power that can and does intervene around the world.”
“The U.S. ranks near the top of the world’s countries both for population and for income per person, whereas all other countries near the top for one of those two factors rank low for the other. Sixteen of the 20 countries with the world’s highest populations have low per-capita outputs or incomes, just 3–40% of the U.S.’s. (The three other rich countries in the top 20 for population are Japan, Germany, and France, whose populations are still just 21–39% of the U.S.’s.) The reason for the U.S.’s large population is its large area of fertile land — Russia and Canada have much lower populations, because a large fraction of their land is Arctic, suitable for only sparse habitation and no agriculture.
The U.S. is resource-rich, self-sufficient in food and most raw materials, and large in area, and has a population density less than 1/10th of Japan’s. It’s much easier for the U.S. to support its large population than it is for Japan.
In all per-capita measures, the U.S. exceeds by a large margin all other populous countries with large economies. The only countries in the world with per-capita GDPs or incomes higher than the U.S.’s are either small (populations of just 2–9 million: Kuwait, Norway, Qatar, Singapore, Switzerland, and the UAE) or tiny (populations of 30,000–500,000: Brunei, Liechtenstein, Luxembourg, and San Marino). Their wealth comes mainly from oil or finance, whose earnings are spread over few people, resulting in high GDP or income per person but a low rank in total national economic output (which equals output per person times population).”
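To make that parenthetical concrete, here is the arithmetic with assumed round figures (illustrative only, not numbers from the book):

\[ \text{total output} = \text{output per person} \times \text{population} \]
\[ \$80{,}000 \times 5{,}400{,}000 \approx \$0.43 \text{ trillion (a small rich country)} \]
\[ \$60{,}000 \times 330{,}000{,}000 \approx \$19.8 \text{ trillion (a U.S.-sized country)} \]

A country can thus lead the per-capita rankings while remaining a minor economy in total output.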
“We are fortunate to be endowed with excellent real estate. The lower 48 states lie entirely within the temperate zones, which are the most productive for agriculture, and the safest for public health. While China also lies largely within the temperate zones, much of southern China is subtropical, and part of it extends into the tropics. More seriously, China includes the world’s largest and highest plateau, of low value for agriculture, plus a large area of high mountains (including five of the world’s six highest mountains) offering no human economic value except mountain-climbing tourism and glaciers that supply water for rivers.
Temperate-zone soils are in general more fertile than tropical soils, due in part to the legacies of high-latitude Ice Age glaciers that repeatedly advanced and retreated over the landscape, grinding rocks and generating or exposing fresh soils. That happened not only in North America but also in northern Eurasia, contributing there to Eurasian soil fertility. But glaciation was especially effective in North America because of a peculiar geographic feature unique among the world’s continents. North America has a unique wedge-like shape, broadest towards the North Pole and becoming narrower at lower latitudes.
That shape had consequences for North American soils. Several dozen times during the Ice Age, glaciers formed in the Arctic and marched south, both in North America and in Eurasia. Because of North America’s tapering wedge shape, large volumes of ice were funneled into a narrower band and became heavier glaciers as they advanced toward lower latitudes. In Eurasia, without the wedge shape, the volume of ice formed at high latitudes moved into an equally broad band at low latitudes. The continents of South America, Africa, and Australia all end far short of the Antarctic Circle, and couldn’t generate ice sheets marching northwards. Hence creation of fertile young soils by the advance and retreat of glaciers originating in the high latitudes was most effective in North America, less so in Eurasia, and slight or non-existent in the southern continents. The result was the deep fertile soils that astonished and delighted immigrant European farmers, and that now constitute the world’s largest and most productive uninterrupted expanse of farmland. Thus, North America’s wedge shape and history of repeated past glaciations, combined with the moderate rainfall prevailing over most of the continent today, are the underlying reasons why the U.S. has high agricultural productivity and is the world’s largest exporter of food. In contrast, China has less fertile soils much damaged by erosion, and an average population density four times the U.S.’s, making China a net importer of food.
The other major geographic advantage of the U.S. is our waterways, both coastal and interior. They constitute a big money-saver, because transport by sea is 10–30 times cheaper than transport overland by road or rail. The long coasts of the U.S. are protected along the Atlantic and Gulf coasts by many barrier islands. All three U.S. coasts have big indentations within which lie sheltered deep-water ports, such as Long Island Sound, Chesapeake Bay, Galveston Bay, San Francisco Bay, and Puget Sound. As a result, the U.S. is blessed with many excellent protected natural harbors: more on our East Coast alone than in all the rest of the Americas south of the Mexican border. In addition, the U.S. is the world’s only major power fronting on both the Atlantic and Pacific Oceans.
As for interior waterways, the U.S. East Coast has many short navigable rivers. But our most important interior waterway is the huge Mississippi River system with its big tributaries (the Missouri and others), which drains more than half of our area, including our prime farmland of the Great Plains. Once barriers to navigation on those rivers had been engineered out of existence by construction of canals and locks, ships could sail 1,200 miles into the interior of the central U.S. from the Gulf Coast. Beyond the Mississippi’s headwaters are the Great Lakes, the world’s largest group of lakes, and the group carrying more shipping than any other. Together, the Mississippi and the Great Lakes constitute the world’s largest network of inland waterways. When one adds the Intracoastal Waterway to the Mississippi / Great Lakes system, the U.S. ends up with more navigable internal waterways than all the rest of the world combined. For comparison, Mexico has no large navigable river at all, and the whole African continent has only one river navigable to the ocean (the Nile). China has a much shorter coastline, not as good ports, a much lower fraction of its land area accessible to navigable rivers, and no big lake system.
The other advantage of our sea-coasts is as protection against invasion. It’s cheaper and safer to make deliveries from a ship off the coast than from a vehicle on land only if the people awaiting you on land welcome your planned delivery. Delivery by sea is expensive and unsafe if the people awaiting you are shooting at you. Amphibious landings have always ranked among the most dangerous forms of warfare. The U.S. was further protected from attack by our annexations of Hawaii and Alaska, which control the approaches to our Pacific Coast. Our land neighbors, Mexico and Canada, both have much too small populations and armies to threaten us (although we fought a war with each during the 19th century).
Hence the U.S. is virtually immune to invasion. None has ever been attempted in our history as an independent nation; the U.S. hasn’t been involved in a war on our mainland with a foreign power since the 1846–1848 Mexican War, which we ourselves initiated. Even mere raids on the U.S. mainland have been negligible: just a British raid on Washington during the War of 1812, Pancho Villa’s raid on Columbus in New Mexico in 1916, one shell fired by a Japanese submarine in WWII onto Santa Barbara, and six American civilians killed by an explosive-laden balloon launched from Japan also during WWII. In contrast, all other major nations have either been invaded (Japan, China, France, Germany, India), occupied (Japan, Italy, Korea, Germany) or threatened with imminent invasion (the UK) within the last century. Specifically, China was not only massively attacked from the sea and extensively occupied by Japan in 1937–1945, but was also attacked from the sea by the UK, France, and Japan in the previous century; has recently fought Russia, India, and Vietnam across its land borders; and frequently in the past was attacked by Central Asian armies, two of which (the Mongols and the Manchu) succeeded in conquering all of China.”
“Let’s consider our political advantages, which begin with the fact that our government has been a democracy uninterruptedly for the 230 years of our national existence. In contrast, China has had a non-democratic dictatorial government uninterruptedly for the 2,240 years of its national existence.
What really are democracy’s advantages — or at least its potential advantages, since our supposedly democratic government is losing some of them by deviating from actual democracy? We envy the dictatorial pace — for example, China’s adoption of lead-free gasoline took just one year, whereas that policy required a decade of debates and court challenges in the U.S. Skeptics can point to examples of disastrously harmful leaders who came to power through democratic election.
But dictatorships suffer from a far worse, often fatal, disadvantage. No one, in the 5,400-year history of centralized government on all of the continents, has figured out how to ensure that the policies implemented with enviable speed by dictatorships consist predominantly of good policies. Just think of the horribly self-destructive policies that China also implemented quickly, and whose consequences were unparalleled in any large First World democracy. Those included China precipitating the large-scale famine of 1958–1962 that killed tens of millions of people, suspending its system of education, sending its teachers out into the fields to work alongside peasants, and later creating the world’s worst air pollution. If air pollution in the U.S. became even half as bad as it now often is in many large Chinese cities, American voters would complain and throw out the government then in power at the next election. Think of the even more self-destructive policies implemented in the 1930s without broad-based decision-making by dictatorial governments in Germany and Japan, which launched those countries into wars that killed millions of their own citizens. That’s why Winston Churchill quipped that democracy is indeed the worst form of government, except for all of the alternative forms that at one time or another have been tried.
The advantages of a democracy are numerous. Debate and protests may reveal an idea to be the best policy, whereas in a dictatorship the idea would never have gotten debated. Think of how protests grew so vigorous that our government eventually decided to end the war in Vietnam; the same could not have happened in Germany under Hitler. Another basic advantage is that citizens know that their ideas are getting heard and debated. Even if their ideas aren’t adopted now, they know that they will have other opportunities to prevail in future elections. Without democracy, citizens are more likely to feel frustrated, to conclude correctly that their only option is to resort to violence, and even to try to overthrow the government. Knowledge that peaceful outlets for expression exist reduces the risk of civil violence.
A further basic advantage is that compromise reduces tyranny by those in power, who might otherwise ignore opposing viewpoints. Conversely, compromise also means that a frustrated minority agrees not to paralyze government. Still another basic advantage is that, in modern democracies with universal suffrage, all citizens can vote. Hence the government in power has an incentive to invest in all citizens, who thereby obtain opportunities to become productive, rather than those opportunities being reserved for just a small dictatorial elite.
In addition, the U.S. derives further advantages from its federal form of democratic government. In a federal system, important functions of government are reserved for regional democratic units and aren’t the prerogative of a single centralized national government. Fifty competing experiments in the states can test different solutions to the same shared problem, and may thereby reveal which solution works best. When California tried the experiment of allowing right turns on a red light at intersections in the early 1960s, it proved safe, other states were able to learn from California, and all states eventually adopted the same law.
A more consequential experiment was that Governor Brownback of Kansas maintained that cutting state taxes was more important to the well-being of Kansas citizens than was a fully funded system of public education. Beginning in 2012, he reduced state tax income to the point where drastic cuts in public education became necessary. By 2017, results from Kansas had convinced even Kansas legislators belonging to his own party that cutting public education was not a good idea, and so they voted to raise state taxes again. But our federal system permitted one state to test the idea by itself, and let the other 49 states learn from what happened.
The lack of these democratic advantages is in my opinion the single biggest disadvantage that will prevent China from ever catching up with the U.S. in average income per person — as long as it remains non-democratic. But a nominally democratic country loses these advantages if its democracy is seriously infringed, and democracy isn’t necessarily the best option for all countries; it’s difficult for democracy to prevail in countries lacking the prerequisites of a literate electorate and a widely accepted national identity.
The U.S. has had uninterrupted civilian control of our military throughout our entire history. That’s not true for China or for most Latin American countries, and it was disastrously untrue for Japan in the 1930s and 1940s. The U.S. has relatively low overt corruption, though it lags behind two dozen other countries. Corruption is bad for a country or for a business, because decisions become influenced by what’s good for corrupt politicians or business people, even though the decision may be bad for the country or the business as a whole. Corruption also harms businesses because it means they can’t count on contracts being enforced. That’s another huge disadvantage of China, which has much overt corruption. But the U.S. does have much covert corruption, because Wall Street and other rich entities and individuals influence U.S. government policy and actions by means of lobbying and election campaign contributions. While these money outlays are legal, they achieve results similar to those achieved illegally by corruption — legislators or officials adopt policies or actions harmful to the public good.”
“The U.S. is also preeminent in the ease with which even young people can found successful businesses. We have a long history of investment in education, infrastructure, human capital, and R&D. (China has only recently been catching up in investments in those areas.) As a result, the U.S. leads all the rest of the world combined in every major field of science, as measured by articles published or Nobel Prizes won. Half of the world’s top-10 scientific research universities and institutions are American. For almost a century and a half, we have held a big competitive advantage in inventions, tech, and innovative manufacturing practices.
“One last advantage is immigration. Emigration is self-selecting: it brings the youngest, healthiest, boldest, most risk-tolerant, most hard-working, ambitious, and innovative people from other countries to the U.S. Hence it comes as no surprise that more than one-third of American Nobel Prize winners are foreign-born, and over half are either immigrants themselves or else the children of immigrants. That’s because Nobel Prize-winning research demands those same qualities. Immigrants and their offspring also contribute disproportionately to American art, music, cuisine, and sports.”
“There are warning signs that the U.S. may be squandering its advantages today. High among those warning signs are four interlinked features that are contributing to the breakdown of American democracy, one of our historical strengths. The first, and most ominous, is our accelerating deterioration of political compromise. Fierce political struggles have been frequent, and majority tyranny or minority paralysis occasional, in American history. But, with the conspicuous exception of the breakdown of compromise that led to our 1861–1865 Civil War, compromises have usually been reached. A modern example is the relationship between President Reagan and Democratic Speaker of the House Tip O’Neill. While O’Neill disliked Reagan’s economic agenda, he recognized the president’s constitutional right to propose an agenda, scheduled House votes on it, and stuck to that scheduled agenda. Under Reagan and O’Neill, the federal government functioned: it met its deadlines, budgets were approved, government shutdowns were brief, and threats of filibusters were rare. Major pieces of legislation on which Reagan and O’Neill and their followers disagreed, but on which they nevertheless succeeded in reaching compromises, included lowering taxes, reforming the federal tax code, immigration policy, social security reform, reduction of non-military spending, and increases of military spending. While Reagan’s nominees for federal judgeships were usually not to Democrats’ tastes, and Democrats blocked some of them, Reagan nevertheless was able to appoint more than half of federal judges, including three Supreme Court justices.
But political compromise has been deteriorating from the mid-90s onwards, and especially from around 2005, both between the two parties and between the less moderate and more moderate wings of each party. That’s especially true in the Republican Party, whose more extreme Tea Party wing has mounted primary challenges against moderate Republican candidates who compromised with Democrats. As a result, the 2014–2016 Congress passed the fewest laws of any Congress in recent American history, was behind schedule in adopting budgets, and risked or actually precipitated government shutdowns.
Throughout our history, both minority and majority parties have recognized the potential for abuse of the filibuster, and have resorted only rarely to filibusters, and more rarely still to the cloture votes that end filibusters with 60 of the Senate’s 100 votes. During our first 220 years of constitutional government, our Senate opposed a total of only 68 presidential nominees for government positions by filibusters. But after Obama was elected, Republicans blocked 79 of his nominees by filibusters in just four years, more than in the entire previous 220 years. Democrats responded by abolishing the supermajority requirement for approving presidential nominees other than Supreme Court justices, thereby making it possible to fill government jobs but also reducing the safety valve available to a dissatisfied minority.
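A rough rate comparison makes that acceleration concrete (simple arithmetic on the figures quoted above):

\[ \frac{68 \text{ nominees}}{220 \text{ years}} \approx 0.3 \text{ per year} \qquad \text{vs.} \qquad \frac{79 \text{ nominees}}{4 \text{ years}} \approx 20 \text{ per year,} \]

roughly a sixty-fold increase in the rate of filibustered nominees.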
In Obama’s second term, the Republican-controlled Senate confirmed the lowest number of presidentially nominated judges since the early 1950s, and the lowest number of appeals court judges (the court level immediately below the Supreme Court) since the 1800s. The most frequent tactic used to block nominations was to refuse to schedule a Senate committee meeting to consider the nomination; the next most frequent was to refuse to schedule a full Senate vote on a nomination approved by the relevant committee. For instance, one nominee for an ambassadorship never got to serve, because he died while waiting more than two years for a confirmation vote that still hadn’t happened.”
“Why has this breakdown of political compromise accelerated within the last two decades? In addition to the other harm it causes, it’s self-reinforcing, because it makes people other than uncompromising ideologues reluctant to seek government service as an elected rep.
One suggested explanation is the astronomical rise in costs of campaigns, which has made donors more important than in the past. Many or most candidates are forced to rely on a small number of large donations. Of course those large donors give because they feel strongly about specific goals, and they give to candidates who support those goals. They don’t give to middle-of-the-road candidates who compromise.
A second suggested explanation is the growth of domestic air travel, which now offers frequent quick connections between DC and every state. Formerly, our reps served in Congress in DC during the week; they then had to remain in DC for the weekend, because they couldn’t return to their home state and back within the span of a weekend. Their families lived in DC, and their children went to school in DC. On weekends the reps and their spouses and children socialized with one another, and the reps spent time with one another as friends, not just as political adversaries or allies. Today, though, the high cost of election campaigns puts pressure on reps to visit their home state often for the purpose of fundraising, and the growth of domestic air travel makes that feasible. Many reps keep their families in their home state. The reps don’t get to know each other’s families, and they see one another only as politicians. At present, about 80 of the 535 members of Congress don’t even maintain a home in DC, but instead sleep on a bed in their office during the week, then fly back to their home state for the weekend.
A third explanation is gerrymandering, which derives its name from Governor Elbridge Gerry of Massachusetts, whose administration as early as 1812 redrew the state’s districts for the sole purpose of increasing the number of elected reps belonging to his party. The resulting districts had geographically weird shapes, one of them resembling a salamander and thereby giving rise to the term ‘gerrymandering.’
The consequence of gerrymandering for political compromise is that it makes clearer in advance which parties and which policies a majority of each district’s voters are going to support. Hence middle-of-the-road candidates are likely to be defeated if they take a middle-of-the-road position appealing to voters of both parties. Instead, candidates know that they should adopt a polarized platform appealing only to the party expected to win in their district. But gerrymandering can’t explain why Senators are as uncompromising as House members, nor can it explain polarization in districts that haven’t been redrawn.
Polarization, intolerance, and abusiveness are also increasing in other spheres of American life besides the political sphere. There have been changes in elevator behavior (people waiting to enter now less likely to wait for those exiting); declining courtesy in traffic; declining friendliness on hiking trails and streets (Americans under 40 less likely to say hello); more vicious academic debates; and above all, increasingly abusive speech of all sorts, especially in electronic communication. All of these arenas represent a decline in social capital.”
“In the remote areas of New Guinea where I do fieldwork, and where new communication tech hasn’t yet arrived, all communication is still face-to-face and full-attention. Traditional New Guineans spend most of their waking hours talking to one another, with no interruptions to look at their phone. In New Guinea, children in a village wander in and out of one another’s huts throughout the day.
The average American checks their phone every four minutes, spends 6+ hours per day looking at a screen, and spends 10+ hours a day (most waking hours) connected to an electronic device. We lose inhibitions about being rude when people are reduced to words on a screen. And once we’ve gotten accustomed to being abusive at a distance, it’s an easier next step to being abusive also to a live person.
But why hasn’t political compromise declined and social nastiness increased in other affluent countries as well? I can think of two possible explanations. One is that electronic communication and many other tech innovations became established first in the U.S., from which they and their consequences then spread to other affluent countries. By that reasoning, the U.S. is merely first, not forever unique. If this explanation is correct, it’ll only be a matter of time before other affluent countries develop political gridlock to the degree that the U.S. has already reached.
The other possible explanation is that, already in the past, the U.S. for several reasons had, and still has today, less social capital to oppose the arrival of the impersonalizing forces of modern tech. The U.S.’s area is more than 25 times greater than that of any other affluent country besides Canada. Conversely, U.S. population density is up to 10 times lower than in most other affluent countries; only Canada, Australia, and Iceland are lower. The U.S. has always placed a strong emphasis on the individual, compared to European and Asian emphases on the community; only Australia exceeds the U.S. in ratings of individualism among affluent countries. Americans move often. The much greater distances within the U.S. than within Japan or any Western European country mean that, when Americans do move, they are likely to leave their former friends much farther away. As a result, Americans have more ephemeral ties, and a high turnover of friends instead of lots of lifelong friends living nearby. So it’s going to require more conscious effort on the part of American political leaders and voters to halt our gridlock than it would in other countries.”
“Some differences make the U.S. less likely than Chile was to degenerate into a violent military dictatorship — and some make it more likely. Factors making that bad outcome less likely in the U.S. include our stronger democratic traditions, our historical ideal of egalitarianism, our lack of a hereditary landowning oligarchy like Chile’s, and the complete absence of independent political action by our military throughout our history. (The Chilean army did intervene briefly in politics a couple of times before 1973.) On the other hand, the factors making a bad outcome more likely in the U.S. include far more private gun ownership, far more individual violence today and in the past, and more history of violence directed against minority groups. The U.S. is very unlikely to suffer a takeover by our military acting independently. I instead foresee one political party in power in the U.S. federal or state governments increasingly manipulating voter registration, stacking the courts with sympathetic judges, using those courts to challenge election outcomes, and then invoking ‘law enforcement’ and using the police, the National Guard, the army reserve, or the army itself to suppress political opposition. That’s why I consider political polarization to be the most dangerous problem facing us today — far more dangerous than competition from China or from Mexico, about which our political leaders obsess more. There’s no way that China or Mexico can destroy the U.S. Only we Americans can destroy ourselves.”
“By the standard of its citizens voting, the U.S. is barely half-deserving of being called a democracy. Nearly half of American citizens eligible to vote don’t vote even for president. In each of the four most recent presidential elections, the number of eligible Americans who didn’t vote was about 100 million.”
“Among affluent democracies (the so-called OECD nations), the U.S. ranks at the bottom in voter turnout. Average turnouts of registered voters in other democratic countries are 93% in Australia, where voting is compulsory; 89% in Belgium; and 58–80% in most other European and East Asian democracies. Since Indonesia resumed free democratic elections in 1999, Indonesian voter turnout has fluctuated between 86% and 90%, while Italian turnout since 1948 has ranged up to 93%.
For comparison, the U.S. turnout of eligible voters for our national elections averages only 60% for presidential years and 40% for midterm years. The highest turnout recorded in modern American history, 62% for the 2008 presidential election, was far below even the lowest recent turnout in Italy or Indonesia. One reason why many Americans eligible to vote don’t do so: they can’t, because they’re not registered to vote. That’s a distinctive feature of American democracy that calls for explanation. In many democracies, eligible citizens don’t have to do anything to ‘register’ to vote: the government does it for them by generating a list of people automatically registered, from government lists of drivers’ licenses, taxpayers, residents, or other such databases. For instance, in Germany all Germans 18 and older automatically receive a card from the government notifying them that an election is coming up in which they are eligible to vote.”
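The 100-million figure quoted earlier is consistent with simple arithmetic on assumed round numbers (illustrative, not figures from the book): with roughly 230 million eligible voters and 60% presidential-year turnout,

\[ 230{,}000{,}000 \times (1 - 0.60) \approx 92{,}000{,}000 \]

eligible Americans not voting, on the order of 100 million.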
“In 2000 in Florida about 100,000 potential voters, the vast majority of them Democrats, were pruned off the list of registered voters. That pruning had an enormous effect on tipping the vote — a much greater effect than the subsequent well-publicized arguments over disqualifying mere hundreds of so-called chad ballots, to which the election’s outcome is commonly misattributed. The basic flaw in our American system of voter registration is that, in Florida and many other states, registered voter lists and election procedures are controlled by partisan processes at the state and local levels, rather than by non-partisan processes at the national level.”
“In 2013 the U.S. Supreme Court, by a 5-to-4 vote, overturned Congress’ 1965 formula for identifying districts to be subject to oversight, on the grounds that it had become unnecessary due to progress in registering black voters. The result was a rush by state legislatures to adopt new obstacles to voter registration. Until 2004, none of the 50 states required potential voters to show an ID in order to register or vote. Only two states had adopted such a requirement by 2008. But immediately upon the Supreme Court decision, 14 states adopted photo ID requirements or other such restrictions, and most states now have or are contemplating them.
Just as the earlier grandfather clauses didn’t specifically mention black people but were nevertheless successfully designed to disenfranchise them, modern methods have similar designs and similar successes. The percentage of potential voters who possess the required ID is considerably higher (depending on the age group, up to three times higher) for whites than for blacks or Latinos, and higher for rich people than for poor people. The reasons are banal ones with no direct relationship to deserving the right to vote: e.g., poorer people, and blacks in general, are more likely to lack a driver’s license because they haven’t paid a traffic fine. Alabama closed its DMV offices in counties with large black populations. In response to the resulting public outcry, Alabama reopened those offices — but just for one day a month. Texas maintained DMV offices in only one-third of its counties, forcing potential voters to travel up to 250 miles if they were determined to satisfy the ID requirement by getting a driver’s license.”
“All of these selective obstacles contribute to the fact that voter turnout is over 80% for Americans with incomes exceeding $150k, but under 50% for Americans with incomes under $20k.”
“No country approaches the U.S. in the expense and uninterrupted operation of our political campaigning. In contrast, in the UK election campaigning is restricted by law to a few weeks before an election, and the amount of money that can be spent for campaign purposes is also restricted by law.”
“In some dictatorships like Equatorial Guinea, one man (the president) possesses most of the national income and wealth.”
“Inequality is rising even within the ranks of rich Americans themselves: the richest 1% have increased their incomes proportionately much more than the richest 5%; the richest 0.1% have done proportionately better than the richest 1%; and the three richest Americans have combined net worths equal to the combined net worth of the 130 million poorest Americans. The percentage of billionaires in our population is double that of the major democracies with the next highest percentage of billionaires (Canada and Germany), and seven times that of most other major democracies.”
“Socioeconomic mobility is lower, and family intergenerational correlations of incomes are higher, in the U.S. than in other major democracies. For instance, 42% of American sons whose fathers belong to the poorest 20% of their generation end up in the poorest 20% of their own generation, whereas only 8% of sons of those poorest parents achieve rags to riches by ending up in the richest 20%. Corresponding percentages for Scandinavian countries are about 26% (below Americans’ 42%) and 13% (above Americans’ 8%).”
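A useful baseline for reading those percentages: under perfect mobility, a son’s income quintile would be statistically independent of his father’s, so every such figure would be 20%,

\[ P(\text{son in poorest 20\%} \mid \text{father in poorest 20\%}) = P(\text{son in poorest 20\%}) = 20\%. \]

The American figures of 42% and 8% are thus more than double and less than half that baseline respectively, while the Scandinavian figures of about 26% and 13% sit much closer to it.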
“Another reason for initially dismissing concerns about American investment in our future is the world dominance of American science and tech, which account for 40% of U.S. economic output: the highest percentage for any major democracy. The U.S. leads the world by far in output of high-quality science articles in every major area of science: chemistry, physics, biology, and earth and environmental sciences. Half of the top science and tech research institutions in the world are American. The U.S. leads the world in absolute spending on R&D (though not in relative spending: Israel, South Korea, and Japan all invest a higher percentage of their GDPs).”
“The U.S. is losing its former competitive advantage that rested on an educated workforce, and on science and tech. At least three trends are contributing to this decline: the decreasing amount of money we devote to education, the declining results that we get for the money that we do spend on education, and large variation among Americans in the quality of education that they receive.
Government funding of education (especially higher education) has been dropping since at least the turn of the century. Despite our growing population, state funding of higher education has grown at only 1/25th of the rate of state funding for prisons, to the point where a dozen U.S. states now spend more on their prison systems than they do on their higher education systems.
In math and science comprehension and test scores, American students now rank low among major democracies. That’s dangerous for us, because the American economy is so dependent on science and tech, and because math and science education plus years of schooling are the best predictors of national economic growth. But our educational spending per student, although in decline, is still high by world standards. That means that we’re getting a poor return on our educational investment. Why?
A big part of the answer is that, in South Korea, Finland, Germany, and other democracies, the teaching profession attracts the very best students, because teachers there are highly paid and enjoy high social status, which leads to low job turnover. South Korean applicants for training as primary schoolteachers have to score in the top 5% on national college entrance exams, and there are 12 teachers applying for every secondary school teaching job there. In contrast, American teachers have the lowest relative salaries (i.e. relative to average national salaries for all jobs) among major democracies. In Montana, for example, schoolteacher salaries are near the poverty level, and teachers have to take one or two additional after-hours jobs to make ends meet. All schoolteachers in South Korea, Singapore, and Finland come from the top third of their school classes, but nearly half of American teachers come from the bottom third of their classes.
In contrast to most other major democracies, where the national government funds education and sets standards, in the U.S. that responsibility falls on the individual states and local governments. State spending per student on public higher education varies 11-fold among states, depending on variation in state wealth, in tax revenues, and in political philosophies.”
“In Finland, the national government itself pays the salaries of teachers at private schools as well as at public schools, and pays the same salaries to teachers at both types of schools, so Finnish parents (unlike American parents) can’t buy a better education for their children by sending them to private school.
The U.S. is stinting its investment in the future of most Americans. While we have by far the largest population among wealthy democracies, most of that population is not being trained for the skills that are the engine of our national economic growth. But we’re competing against countries like South Korea, Germany, Japan, and Finland, which invest in the education of all their children. And China, whose population is five times that of the U.S., is now embarked on a crash program to improve the educational opportunities of its children. That bodes ill for the future of the competitive advantage that the U.S. economy has hitherto enjoyed.
All of these facts raise a paradox. The U.S. is the world’s richest country. Where’s our money going, if it’s not being invested by our government in our own future?
Part of the answer is that most of our money stays in taxpayers’ pockets; our tax burden is low compared to most other wealthy democracies. The other part of the answer is that much of our tax money goes toward government expenditures on prisons, the military, and health.”
“An American cultural advantage is flexibility, which expresses itself in many ways. Americans change their homes on average every five years, much more often than citizens of the other countries I discuss. Our long history of maintaining the same two major political parties — Democrats since the 1820s and Republicans since 1854 — is actually a sign of flexibility rather than rigidity. That’s because, whenever a third party started to become significant (such as Teddy Roosevelt’s Bull Moose Party, Henry Wallace’s Progressive Party, and George Wallace’s American Independent Party), it soon faded because its program became partly co-opted by one of the two major parties.”
“A trend among wealthy and influential Americans with disproportionate power is to recognize that something is wrong but, rather than devoting their wealth and power to finding solutions, to seek ways for just themselves and their families to escape American society’s problems. Currently favored strategies of escape include buying property in New Zealand (the most isolated First World nation), or converting abandoned American underground missile silos at great expense into luxurious defended bunkers. But there’s only so long that a luxurious micro-civilization in bunkers, or even an isolated First World society in New Zealand, can survive if the U.S. outside is crumbling. When will the U.S. take its problems seriously? When powerful rich Americans realize that nothing they do will enable them to remain physically safe, if most other Americans remain angry, frustrated, and realistically without hope.”
“Some problems that Americans regard as frustratingly insoluble are solved by Canadians in ways that earn widespread public support. For instance, Canada’s criteria for admitting immigrants are far more detailed and rational than the U.S.’s. As a result, 80% of Canadians consider immigrants good for the Canadian economy. Americans’ ignorance of Canada is astounding: they don’t realize how different Canada is, and how much Americans could learn from Canadian models for solving problems that are frustrating us.”
Nuclear Weapons
“The Hiroshima atomic bomb of August 6, 1945, killed about 100,000 people instantly, plus thousands more who died subsequently from injuries, burns, and radiation poisoning. A war in which India and Pakistan, or the U.S. and Russia or China, launched most of their nuclear arsenals at each other would instantly kill hundreds of millions. But the delayed worldwide consequences would be greater. Even if bomb explosions themselves were confined to India and Pakistan, the atmospheric effects of detonating hundreds of nuclear devices would be felt worldwide, because smoke, soot, and dust from fireballs would block most sunlight for several weeks, creating winter-like conditions of steeply falling temperatures globally, interruption of plant photosynthesis, destruction of much plant and animal life, global crop failures, and widespread starvation. A worst-case scenario is termed ‘nuclear winter’: i.e., the deaths of most humans due not only to starvation but also to cold, disease, and radiation.”
“One can identify four sets of scenarios culminating in the detonation of nuclear bombs by governments or by non-governmental terrorist groups. The scenario most often discussed has been a planned surprise attack by one nation with a nuclear arsenal on another nation that also has one. The purpose would be to destroy the rival nation’s arsenal completely and instantly, leaving the rival without an arsenal with which to retaliate. This scenario was the one most feared throughout the Cold War. Because the U.S. and the Soviet Union both possessed the nuclear capacity to destroy each other, the only ‘rationally planned’ attack would be a surprise one expected to be able to destroy the rival’s retaliatory capability. Hence both countries responded to that fact by developing multiple systems to deliver nuclear weapons, in order to eliminate the risk that all of their own retaliatory capacity could be eliminated instantly. For example, the U.S. has three delivery systems: hardened underground missile silos, submarines, and a fleet of bomb-carrying aircraft. Hence even if a Soviet surprise attack had destroyed every single one of the silos — unlikely, because the U.S. had so many silos, including deceptive dummy ones, hardened against attack, small, and requiring implausibly high accuracy for Soviet missiles to destroy every one of them — the U.S. could still respond with its bombers and its submarines to destroy the Soviet Union.
As a result, the nuclear arsenals of both countries provided ‘mutual assured destruction,’ and a surprise attack was never carried out. This rational consideration offers limited comfort for the future, because there have been irrational modern leaders: perhaps Iraq’s Saddam Hussein and North Korea’s Kim Jong-Un, plus some leaders of Japan, Germany, the U.S., and Russia. In addition, India and Pakistan today each possesses only a ground-based delivery system: no missile-carrying submarines. Hence a leader of India or Pakistan might consider a surprise attack to be a rational strategy offering a good chance of destroying the rival’s retaliatory capacity.
A second scenario involves an escalating series of miscalculations of a rival government’s response, and pressure by each country’s generals on their president to respond, culminating in mutual non-surprise nuclear attacks that neither side initially wanted. The prime example is the 1962 Cuban Missile Crisis, when the low opinion that the Soviet premier Khrushchev formed of U.S. President Kennedy at their 1961 Vienna meeting led Khrushchev to miscalculate that he could get away with installing Soviet missiles in Cuba. When the U.S. did detect the missiles, U.S. generals urged Kennedy to destroy them immediately (posing the risk of Soviet retaliation), and warned Kennedy that he risked being impeached if he didn’t do so. Fortunately, Kennedy chose less drastic means of responding, Khrushchev also responded less drastically, and Armageddon was avoided. But it was a very close call, as became clear only later, when both sides released documents about their activities then. For example, on the first day of the week-long crisis, Kennedy announced publicly that any launch of a Soviet missile from Cuba would require ‘a full retaliatory response upon the Soviet Union.’ But Soviet submarine captains had the authority to launch a nuclear torpedo without first having to confer with Soviet leadership in Moscow. One such Soviet submarine captain did consider firing a nuclear torpedo at an American destroyer threatening the submarine; only the intervention of other officers on his ship dissuaded him from doing so. Had the Soviet captain carried out his intent, Kennedy might have faced irresistible pressure to retaliate, leading to irresistible pressure on Khrushchev to retaliate further.
A similar miscalculation could lead to nuclear war today. For example, North Korea currently has medium-range missiles capable of reaching Japan and South Korea, and has launched a long-range ICBM (intercontinental ballistic missile) intended to be able to reach the U.S. When North Korea completes development of its ICBM, it might demonstrate it by launching one toward the U.S. That would be considered by the U.S. as an unacceptable provocation, especially if the ICBM by mistake came closer to the U.S. than intended. An American president might then face overwhelming pressure to retaliate, which would create overwhelming pressure on China’s leaders to retaliate in defense of their ally.
Another plausible opportunity for unintended retaliation by miscalculation involves Pakistan and India. Pakistani terrorists already conducted a lethal non-nuclear attack on Mumbai in 2008. In the foreseeable future, Pakistani terrorists might stage a more provocative attack; it might be unclear to India whether the Pakistani government itself was behind the attack; India’s leaders would be pressured to invade some neighboring portion of Pakistan, in order to eliminate the terrorist threat there; Pakistan’s leaders would then be pressured to use their small tactical nuclear weapons ‘just’ against the invading Indian army, perhaps miscalculating that India would consider such a limited use of nuclear weapons ‘acceptable’ and not requiring a full retaliatory response; but India’s leaders would be pressured to respond with their own nuclear weapons.
Both of those situations that could lead to nuclear war by miscalculation seem to me likely to begin to unfold within the next decade. The main uncertainty concerns whether leaders will then pull back as happened during the Cuban Missile Crisis, or whether escalation will run to completion.
The third type of scenario that could culminate in a nuclear war is an accidental misreading of technical warning signs. Both the U.S. and Russia have early warning systems to detect a launch of attacking missiles by the rival. Once missiles have been launched, are underway, and have been detected, the American or Russian president has about 10 minutes to decide whether to launch a retaliatory attack before the incoming missiles destroy the land-based missiles of his country. Launched missiles can’t be recalled. That leaves minimal time to evaluate whether the early warning is real or just a false alarm due to technical error, and whether or not to push a button that will kill hundreds of millions of people.
But missile detection systems, like all complex technologies, are subject to malfunctions and to ambiguities of interpretation. We know of at least three false alarms given by the American detection system. For example, in 1979 the U.S. army general serving as watch officer for the U.S. system phoned the then-Undersecretary of Defense in the middle of the night to say, ‘My warning computer is showing 200 ICBMs in flight from the Soviet Union to the U.S.’ But the general concluded that the signal was probably a false alarm, the Undersecretary did not awaken President Carter, and Carter didn’t push the button and needlessly kill a hundred million Soviets. It eventually turned out that the signal was indeed a false alarm due to human error: a computer operator had by mistake inserted into the U.S. warning system computer a training tape simulating the launch of 200 Soviet ICBMs. We also know of at least one false alarm given by the Russian detection system: a single non-military rocket launched in 1995 from an island off Norway toward the North Pole was misidentified by the automatic tracking algorithm of Russian radar as a missile launched from an American submarine.
These incidents illustrate an important point. A warning signal is not unambiguous. False alarms are to be expected and still happen, but real launches and real alarms are also possible. Hence when a warning alert does come through, the U.S. watch officer and president (and presumably a Russian watch officer and president in the corresponding situation) must interpret the alarm in the context of then-current conditions: is the current world situation such that the Russians (or Americans) are likely to assume the horrible risk of launching an attack that will guarantee immediate mass-destructive retaliation? In 1979 there were no current world events motivating a missile launch, Soviet/U.S. relations weren’t acutely troubled, and the U.S. watch officer and Undersecretary of Defense felt confident in interpreting the warning signal as a false alarm.
Alas, that comforting context no longer prevails. While one might naively have expected the end of the Cold War to reduce or eliminate the risk of nuclear war between Russia and the U.S., the result has been paradoxically the opposite: the risk is now higher than at any time since the Cuban Missile Crisis. The explanation is the deterioration of relations and of communication between Russia and the U.S.: a deterioration partly due to recent policies of Putin and partly due to imprudent American policies. In the late 90s, the U.S. made the mistake of dismissing the post-Soviet Union Russia as weak and no longer worthy of respect. In line with the new attitude, the U.S. prematurely expanded NATO to encompass the Baltic Republics that had formerly been part of the Soviet Union, supported NATO military intervention against Serbia over strong Russian opposition, and stationed ballistic missiles in Eastern Europe supposedly as a defense against Iranian missiles. Russian leaders understandably felt threatened by those and other U.S. actions.
U.S. policy toward Russia today ignores the lesson that Finland’s leaders drew from the Soviet threat after 1945: that the only way of securing Finland’s safety was to engage in constant frank discussions with the Soviet Union, and to convince the Soviets that Finland could be trusted and posed no threat. Today, the U.S. and Russia are not in constant frank communication, and each is failing to convince the other that it poses no threat of a premeditated attack. As a result, each poses a big threat to the other: that of a misinterpretation escalating into an attack that neither had planned.
The remaining scenario that could result in use of nuclear weapons involves terrorists stealing uranium, plutonium, or a completed bomb from a nuclear power (most likely Pakistan, North Korea, or Iran), or being given it by one. The bomb could then be smuggled into the U.S. or another target country and detonated. While preparing for 9/11, Al Qaeda did seek to acquire a nuclear weapon for use against the U.S. Terrorists might also steal uranium or a bomb without the help of a bomb-producing country, if security at the bomb storage site were inadequate. For instance, at the time of the dissolution of the Soviet Union, 600 kg of bomb-grade uranium from the former Soviet program remained in newly independent Kazakhstan, stored in a warehouse secured by little more than a barbed-wire fence; it could easily have been stolen. But more likely, terrorists would obtain bomb material by an ‘inside job,’ i.e. with the help of bomb-storage personnel or leaders of Pakistan, North Korea, or Iran.
A related risk often confused with that danger of terrorists acquiring a nuclear bomb is the risk of their acquiring a so-called ‘dirty bomb’: a conventional non-nuclear explosive bomb whose package includes non-explosive but long-lived radioactive material, such as the isotope cesium-137 with a half-life of 30 years. Detonation of the bomb in an American or other city would spread the cesium over an area of many blocks that would become permanently uninhabitable, as well as having a big psychological impact. Cesium-137 is readily available in hospitals because of its medical uses. Hence it’s surprising that terrorists haven’t already added cesium-137 to their non-nuclear bombs.
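To make ‘long-lived’ concrete, here is a quick worked decay calculation (my illustration, not from the book). With a 30-year half-life, the fraction of cesium-137 surviving after t years is

```latex
\frac{N(t)}{N_0} = 2^{-t/30}, \qquad \text{e.g.}\quad \frac{N(100)}{N_0} = 2^{-100/30} \approx 0.10
```

so a century after detonation, roughly a tenth of the original cesium-137, and of its radioactivity, would still remain.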
Of these four scenarios, the most likely is the one involving terrorists using a dirty bomb (easy to make) or a nuclear bomb. The former would kill just a few people, the latter ‘just’ a Hiroshima-like death toll of 100,000 people — but both would have consequences far eclipsing those death tolls. Less likely, but still possible, are the first three scenarios that could kill hundreds of millions of people directly, and ultimately most people on Earth.”
Climate Change
“The CO2 that we produce partly gets stored in the oceans as carbonic acid. The ocean’s acidity is already higher than at any time in the last 15 million years. That acidity dissolves the skeletons of coral, killing coral reefs, which are a major breeding nursery of the ocean’s fish, and which protect tropical and subtropical sea-coasts against storm waves and tsunamis. At present, the world’s coral reefs are contracting by 1–2% per year, so they’ll mostly be gone within this century, and that means big declines in tropical coastal safety and in protein availability from seafood. CO2 release also affects plant growth, variously either stimulating or inhibiting it.”
“The Himalayan snowpack provides most of the water for China, Vietnam, India, Pakistan, and Bangladesh. That snowpack, and the resulting water supply that those countries have to share, are shrinking; yet those countries have a poor track record of peacefully settling their conflicts.
Another consequence of the global warming trend is decreased food production on land, from drought and, paradoxically, from increased land temperatures (e.g. because they can favor the growth of weeds over crops). Decreased food production is a problem because the world’s human population, standard of living, and food consumption are projected to increase by about 50% over the next few decades, yet we already have a food problem now, with billions of people underfed. In particular, the U.S. is the world’s leading food exporter, and American agriculture is concentrated in the western and central U.S., which are becoming hotter, drier, and less productive.
Another consequence is that tropical disease-carrying insects are moving into the temperate zones. The resulting disease problems so far include the recent transmission of dengue fever and the spread of tick-borne diseases in the U.S., the recent arrival of tropical chikungunya fever in Europe, and the spread of malaria and viral encephalitis.”
“When CFC gases replaced the poisonous gases used in refrigerators up until the 1940s, it seemed like a wonderful and safe engineering solution to the fridge-gas problem, especially because lab testing had revealed no downside to CFCs. Unfortunately, lab tests couldn’t reveal how CFCs, once they got into the atmosphere, would begin to destroy the ozone layer that protects us from UV radiation. As a result, CFCs were banned in most of the world — but only several decades later. That illustrates why geoengineering would first require ‘atmospheric testing’: an impossibility, because we would have to ruin the Earth experimentally 10 times before we could hope to figure out how to make geoengineering produce just the desired good effects on the 11th try. Hence most scientists and economists consider geoengineering experiments extremely unwise, even lethally dangerous, and deserving to be banned.”
“Industrial exploitation of coal was followed by exploitation of oil, oil shale, and natural gas. For instance, the first oil well that extracted oil from underground was a shallow well drilled in Pennsylvania in 1859, followed by progressively deeper wells.
There are debates about whether we’ve already reached ‘peak oil’ — that is, whether we’ve consumed so much of the Earth’s accessible oil reserves that oil production will soon start to decline. However, there’s no debate about the fact that the cheapest, most accessible, and least damaging sources of oil have already been used up. The U.S. can no longer scrape up surface oil or drill shallow wells in Pennsylvania. Instead, wells have to be dug deeper (a mile deep or more), not just on land but also under the ocean floor, not just in shallow ocean waters but in deeper waters, and not just in Pennsylvania in the U.S.’s industrial heartland but far away in New Guinea rainforests and in the Arctic. Those deeper, more remote oil deposits are much more expensive to extract than were Pennsylvania’s shallow deposits, and the potential for oil spills causing costly damage is correspondingly higher. As costs of oil extraction increase, alternative but more damaging fossil fuel sources such as oil shale and coal, and non-fossil sources such as wind and solar, are becoming more economical. Nevertheless, oil prices today still permit big oil companies to remain highly profitable.”
“In the U.S., windmills have been estimated to kill at least 45,000 birds and bats each year. To place that number in perspective, consider that outdoor pet cats have been measured to kill an average of 300 birds per year per cat. If the U.S. population of outdoor cats is estimated at about 100 million, then cats can be calculated to kill at least 30 billion birds per year in the U.S., compared to the mere 45,000 birds and bats killed per year by windmills. The windmill toll is equivalent to the work of just 150 cats.”
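The arithmetic in that comparison is easy to check; here is a minimal sketch using only the estimates quoted above (the figures are the book’s, not independent data):

```python
# Sanity check of the cats-vs-windmills comparison, using the text's estimates.
windmill_kills_per_year = 45_000       # birds and bats killed yearly by U.S. windmills
kills_per_cat_per_year = 300           # birds killed yearly by one outdoor pet cat
outdoor_cats_in_us = 100_000_000       # estimated U.S. outdoor cat population

cat_kills_per_year = outdoor_cats_in_us * kills_per_cat_per_year
print(f"cats kill ~{cat_kills_per_year:,} birds/year")                                # 30,000,000,000
print(f"windmill toll = {windmill_kills_per_year / kills_per_cat_per_year:.0f} cats' worth")  # 150
```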
“In the U.S., the solution to these dilemmas will have to involve two components. One is to reduce energy consumption per person: ours is approximately double that of Europeans, despite Europeans enjoying a higher standard of living. Among the contributing factors are different government policies influencing car purchases. Europeans are discouraged from buying big cars with high fuel consumption, because the purchase tax on cars in some countries is set at 100%, doubling the cost of the car. Also, European government taxes on gasoline drive gas prices to more than $9/gallon, another disincentive to buying a fuel-inefficient car. Tax policies in the U.S. could similarly be used to discourage Americans from buying gas-guzzling cars.
The second component of the solution to energy dilemmas for the U.S., besides lowering overall energy consumption, will be to get more of our energy from sources other than fossil fuels — i.e., from wind, solar, tidal, hydroelectric, geothermal, and perhaps nuclear. After the 1973 Gulf oil crisis, the U.S. government offered subsidies to developers of alternative energy generation, and U.S. companies used those subsidies to develop efficient wind generators. Unfortunately, around 1980 the U.S. ended those subsidies, so the U.S. market for efficient windmills declined precipitously. Instead, Denmark, Germany, Spain, and other European countries improved on our windmill designs and now use them to generate much of their electricity. Similarly, China has developed long-distance power lines to transmit electricity from wind-generating sites in far western China to densely populated eastern China; the U.S. hasn’t developed such systems.”
“There’s nothing we can do to maintain the world’s reserves of non-renewable resources (minerals and fossil fuels) by our management practices. But management practices have big effects on reserves of renewable biological resources. Some of the world’s forests and fisheries, such as Germany’s forests and the Alaska wild salmon fishery, are already well managed. Unfortunately, most aren’t; they’re being overharvested, with the result that their fish or tree stocks are shrinking or disappearing. Some species, like the Atlantic swordfish, have been overharvested to the point of collapse. We also know how to manage topsoil, but sadly it’s too often mismanaged: it gets carried off into rivers and then into the ocean by erosion, or else its fertility and texture get degraded. In short, the world is currently mismanaging many or most of its valuable renewable biological resources.
Probably all natural resources could limit human societies, with the exception of atmospheric oxygen, which we show no signs of using up. Some minerals, especially iron and aluminum, are present in such huge amounts that they too seem unlikely to prove limiting — but the deposits that we’ve been extracting have been the shallow, accessible, cheaply extractable ones. With time, we shall inevitably come to depend on deeper reserves that are more costly to extract.”
“The open ocean is a ‘commons’: while ocean waters within 200 miles of land are considered the territory of the nation to which that land belongs, the water beyond that is owned by no one. (The name ‘commons’ comes from a term applied to much pasture land in the Middle Ages: it was considered available for use by the public.) Nations have the legal basis to regulate fishing within their 200-mile limit, but any boat of any country can fish anywhere in the open ocean. As a result, there’s no legal mechanism for preventing overfishing of the open oceans, and many ocean fish stocks are declining. Three other potentially valuable resources also lie in a commons beyond national limits: minerals dissolved in the ocean, fresh water in the Antarctic ice cap, and minerals lying on the sea floor. There’ve already been some attempts to exploit all three: after WWII a German chemist worked on a process to extract gold from ocean water; at least one attempt has been made to tow an iceberg from Antarctica to a water-poor Middle Eastern nation; and efforts are far advanced to mine some minerals from the ocean floor. But none of those three exploitations of the commons has proved practical yet; our current commons problem is ‘just’ open-ocean fisheries.”
“Average per-capita consumption of resources like oil and metals, and average per-capita production rates of wastes like plastics and greenhouse gases, are about 32 times higher in the First World than in the developing world. For instance, each year the average American consumes about 32 times more gasoline, and produces 32 times more plastic waste and carbon dioxide, than does the average citizen of a poor country. That factor of 32 has big consequences for how people in the developing world behave, and it also has consequences for what lies ahead for all of us.
To understand those consequences, let’s reflect on our concern with world population. Today there are more than 7.5 billion people, and that may rise to around 9.5 billion within this half-century. Several decades ago, many people considered population the biggest issue facing humanity. But since then we’ve come to realize that population is just one of two factors whose product is what really matters: local population times the local average consumption rate per person.
The much bigger problem for the world as a whole is not that Kenya’s population is growing at 4% per year; it’s that each of us 330 million Americans consumes as much as 32 Kenyans do. Since we outnumber Kenyans 6.6 to 1, the U.S. as a whole consumes about 210 times more resources than Kenya as a whole. Likewise, Italy’s population of 60 million consumes almost twice as much as do the 1 billion people of Africa.
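Those whole-country ratios follow directly from multiplying population by per-capita consumption; a minimal sketch (populations are the round numbers used in the text, and consumption is measured in arbitrary “developing-world per-capita units,” with the First World rate set to 32):

```python
# Whole-country consumption comparisons implied by the factor-of-32 claim.
us_pop, kenya_pop = 330e6, 50e6        # 330M Americans outnumber ~50M Kenyans 6.6:1
italy_pop, africa_pop = 60e6, 1e9
first_world_rate, developing_rate = 32, 1

print(us_pop * first_world_rate / (kenya_pop * developing_rate))      # ~211, i.e. "210 times"
print(italy_pop * first_world_rate / (africa_pop * developing_rate))  # ~1.9, i.e. "almost twice"
```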
Until recent times, the existence of all those poor people elsewhere didn’t constitute a threat to First World countries. ‘They’ out there didn’t know much about our lifestyle, and if they did learn about it and got envious or angry, they couldn’t do much about it. The reason why poor countries can now create problems for rich countries is globalization: the increased ease of communications and travel means that people in developing countries now know a lot about the big differences in consumption rates and living standards, and it’s now possible for many of them to travel to rich countries.”
“Only in poor countries, where much of the population does feel desperate or angry, is there toleration or support for terrorists.
People with low consumption rates want to enjoy the high-consumption lifestyle themselves. They have two ways of achieving it. First, the governments of developing countries consider an increase in living standards a prime goal. Second, tens of millions of people in the developing world seek the First World lifestyle now by emigrating, especially from Africa and parts of Asia, and also from Central and South America. Each such transfer of a person from a low-consumption to a high-consumption country raises world consumption rates.
Is everybody’s dream of achieving a First World lifestyle possible? Multiply current national populations by national per-capita consumption rates (of oil, metals, water, etc.) for each country, add them up, and then redo the sum with all developing countries raised to a First World consumption rate up to 32 times higher than their current ones: the result is that world consumption rates would increase about 11-fold. That’s equivalent to a world population of about 80 billion people.
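The multiply-and-sum method just described can be sketched as follows. The two-tier population split below is a deliberately crude illustration of mine, not the book’s country-by-country data (which is what yields its roughly 11-fold figure); the final line shows how an 11-fold rise translates into the “80 billion people” equivalence:

```python
# World consumption under the "everyone consumes like the First World" scenario.
# Rates are in developing-world per-capita units; the tier split is illustrative.
tiers = {                      # population (persons), per-capita consumption rate
    "first_world": (1.0e9, 32),
    "developing":  (6.5e9, 1),
}
current_total = sum(pop * rate for pop, rate in tiers.values())
all_at_first_world = sum(pop * 32 for pop, _ in tiers.values())
print(all_at_first_world / current_total)   # ~6x with this crude split; the book reports ~11x

# An 11-fold rise in total consumption, at today's average per-capita rate,
# is equivalent to an 11-fold rise in population:
print(7.5e9 * 11)                            # ~8.3e10, i.e. roughly 80 billion people
```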
I haven’t met any optimist mad enough to claim that we can support a world with the equivalent of 80 billion people. Yet we promise developing countries that, if they will only adopt good policies, like honest government and free-market economics, they too can become like the First World. That promise is a cruel hoax: we’re already having trouble supporting the First World lifestyle when only 1 billion out of 7.5 billion people enjoy it.
The only sustainable outcome for our globalized world that China, India, Brazil, Indonesia, African countries, and other developing countries will accept is one in which consumption rates and living standards are nearly equal around the world. But since the world doesn’t have enough resources to sustainably support this, are we guaranteed to end up in disaster?
No: we could have a stable outcome in which the First World and other countries converged on consumption rates considerably below current First World rates. The cruel realities of world resource levels guarantee that the American way of life will change; those realities of world resources cannot be negotiated out of existence.
That wouldn’t necessarily be a real sacrifice, because consumption rates and human well-being, while related, are not tightly coupled. Much American consumption is wasteful and doesn’t contribute to high quality of life. For example, per-capita oil consumption rates in Western Europe are about half those of the U.S., but the well-being of the average Western European is higher than that of the average American by any meaningful criterion, such as life expectancy, health, infant mortality, access to medical care, financial security after retirement, vacation time, quality of public schools, and support for the arts. There are other areas besides oil in which U.S. (and other First World country) consumption rates are wasteful, such as the destructive exploitation of most of the world’s fisheries and forests.
It’s certain that within the lifetimes of most of us, per-capita consumption rates in the First World will be lower than they are now. The only question is whether we shall reach that outcome by planned methods of our choice, or by unpleasant methods not of our choice. It’s also certain that, within our lifetimes, per-capita consumption rates in many populous developing countries will no longer be a factor of 32 below First World rates. Those trends are desirable goals, rather than horrible prospects that we should resist. We already know enough to make good progress toward achieving them; the main thing lacking has been political will.
“Israel has invaded and partially occupied Lebanon. Lebanon has served as a base for launching rocket attacks into Israel. Nevertheless, birdwatchers of those two countries succeeded in reaching a milestone agreement. Eagles and other large birds migrating seasonally between Europe and Africa fly south from Lebanon through Israel every autumn, then north again from Israel through Lebanon every spring. When aircraft collide with those large birds, the result is often mutual destruction — such collisions had been a leading cause of fatal plane accidents in both countries. That stimulated birdwatchers to establish a mutual warning system. In the autumn Lebanese birdwatchers warn their Israeli counterparts and Israeli air traffic controllers when they see a flock of large birds heading south toward Israel, and in the spring Israeli birdwatchers warn of birds heading north. While it’s obvious that this agreement is mutually advantageous, it required years of discussions to overcome prevailing hatreds, and to focus just on birds and airplanes.”
“A mere two nations (China and India) account for one-third of the world’s population; another pair (the U.S. and China) accounts for 41% of the world’s CO2 emissions and economic output; and five nations or entities (China, India, the U.S., Japan, and the EU) account for 60% of emissions and output. China and the U.S. have already reached an agreement in principle on CO2 emissions. That bilateral agreement was then joined by India, Japan, and the EU in the 2015 Paris Agreement. Of course it wasn’t enough, because it lacked a serious enforcement mechanism, and because the U.S. subsequently announced its intention to pull out. But it’s nevertheless likely to serve as a model or starting point for an improved future agreement. Even if the world’s 200 other nations with smaller outputs don’t join such a future agreement, just a five-way agreement among the five biggest players could go a long way toward solving the emissions problem, because those five can then put pressure on the other 200, e.g. by imposing trade tariffs and carbon taxes on countries that don’t adhere.
Another route toward solving world problems consists of agreements among a region’s nations; there are already such agreements for North America, Latin America, Europe, Southeast Asia, Africa, and other regions. The most advanced set of regional agreements, with the widest range of institutions, agreement spheres, and binding rules, is that of the EU, currently comprising some 27 European nations. The EU constitutes the biggest and most radical step forward yet taken by any world region.
After several thousand years of nearly constant warfare, culminating in Europe’s nations fighting the two most destructive wars in world history, no EU member has fought any war against any other EU member since the founding of the EU’s predecessors in the 1950s. In 1950, there was rigorous passport control at every national border; but restrictions on trans-border movements are now much more limited. A significant fraction of university positions in EU countries is held by non-nationals. Economies of EU nations are substantially integrated. Most EU nations share a common currency, the euro. For major world problems such as energy, resource use, and immigration, the EU discusses and sometimes adopts shared policies.
Other examples of more narrowly focused regional agreements include ones to eliminate or eradicate regional diseases. A major success was the eradication of rinderpest, a formerly dreaded cattle disease that inflicted huge costs on large areas of Africa and Eurasia. Following a long regional effort that took several decades, there’s now been no known case since 2001. Large-scale regional disease efforts currently underway in both hemispheres include ones to eradicate guinea worm and eliminate river blindness.
The third route consists of world agreements, hammered out by world institutions and reached not only by the UN with its comprehensive world mission, but also by other organizations with more specific missions — such as those devoted to agriculture, animal trafficking, aviation, fisheries, food, health, whaling, and other concerns. Just as with the EU, it’s easy to be cynical about the UN and other international agencies, whose power is generally weaker than the EU’s, and much weaker than the power of most nations within their national boundaries. But international agencies already have many achievements, and they provide a mechanism for further progress. Major successes have been the worldwide eradication of smallpox in 1980; the 1987 Montreal Protocol to protect the stratosphere’s ozone layer; the 1978 International Convention for the Prevention of Pollution from Ships (MARPOL), which reduced world pollution of the oceans by mandating separation of oil cargo tanks from water ballast tanks on ships, and then by requiring that all transport of oil at sea be by double-hulled tankers; the 1994 Law of the Sea Convention, which demarcated exclusive national and shared international economic zones; and the International Seabed Authority, which established the legal framework for seabed mineral exploitation.
Globalization both causes problems and facilitates solutions. One ominous thing that globalization means today is the growth and spread of problems around the world: resource competition, wars, pollutants, atmospheric gases, diseases, movements of people, and many others. But globalization also means something encouraging: the growth and spread of factors contributing to solutions, such as information, communication, recognition of climate change, a few dominant world languages, widespread knowledge of conditions and solutions prevailing elsewhere, and some recognition that the world is interdependent and stands or falls together.
Our problems, especially world population and consumption, have increased markedly since 2005. World recognition of our problems, and world efforts to solve them, have also increased markedly. It is certain that fewer decades now remain until the outcome is settled, for better or for worse.”
Conclusion
“During the centuries when Iceland was governed by Denmark, Icelanders frequently frustrated Danish governors by their apparent rigidity and hostility to proposed changes. Whatever well-intentioned suggestions for improvement the Danish government offered, Icelanders’ response was usually, ‘No, we don’t want to try something different; we want to continue doing things in our traditional way.’ Icelanders refused Danish suggestions about improving fishing boats, fish exports, fishing nets, grain agriculture, mining, and ropemaking.
That national rigidity is understandable when one considers Iceland’s environmental fragility. Iceland lies at high latitudes, with a cool climate and a short growing season. Icelandic soils are fragile and light, formed from volcanic ash, susceptible to erosion, and slow to regenerate. Iceland’s vegetation is easily stripped off by grazing or by wind or water erosion, and is then slow to regrow. In the early centuries of Viking colonization, Icelanders tried various subsistence strategies, all with disastrous results, until they eventually developed a set of sustainable agricultural methods. Having devised that set, they didn’t want to consider changes in their subsistence methods, or in other aspects of life, because of their painful experience: having finally devised one strategy that worked, whatever else they tried made things worse.”
“Jones and Olken ask: what happens to a nation’s economic growth rate when its leader dies in office from natural causes, compared to what happens when a leader doesn’t die? This comparison offers a natural experiment to test the effect of a change in leadership. If the Great-Man view is correct, then a leader’s death should be more likely to be followed by a change in the economic growth rate — increasing or decreasing, depending on whether the departed leader’s policies were bad or good — than are random moments when a leader doesn’t happen to die. For their database, they took every instance in the world of a national leader dying naturally between 1945 and 2000, assembling 57 such cases, which really do constitute a random perturbation: a leader’s economic policies don’t affect the likelihood of a heart attack or drowning. It turned out that economic growth rates were much more likely to change following a natural death than following random moments when a leader didn’t die. That suggests that, averaged over many cases, leadership does tend to affect economic growth.
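The logic of that natural experiment is easy to see in a toy simulation (entirely my own sketch; the model and parameters are invented and are not Jones and Olken’s data or methods):

```python
# Toy version of the Jones-Olken test: do growth rates shift more around actual
# leader deaths than around randomly chosen "placebo" moments?
import random
random.seed(0)

def growth_series(years=40, leader_effect=2.0, death_year=None):
    """Annual growth (%): background noise plus a leader-specific level
    that is redrawn if and when the leader dies in office."""
    leader_level = random.gauss(0, leader_effect)
    series = []
    for t in range(years):
        if t == death_year:
            leader_level = random.gauss(0, leader_effect)  # new leader, new policies
        series.append(2.0 + leader_level + random.gauss(0, 1.0))
    return series

def shift(series, t, window=5):
    """Absolute change in mean growth from the 5 years before t to the 5 after."""
    return abs(sum(series[t:t + window]) / window - sum(series[t - window:t]) / window)

deaths, placebos = [], []
for _ in range(57):                        # 57 natural deaths, as in the paper
    t = random.randrange(10, 30)
    deaths.append(shift(growth_series(death_year=t), t))
    placebos.append(shift(growth_series(death_year=None), t))

print(f"mean |growth shift| around a leader's death: {sum(deaths) / len(deaths):.2f}")
print(f"mean |growth shift| around a placebo moment: {sum(placebos) / len(placebos):.2f}")
# If leaders matter (leader_effect > 0), the first number clearly exceeds the second.
```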
In their second paper, Jones and Olken ask: what happens when a leader is assassinated? Of course, assassinations are not at all random events: they’re more likely to be attempted under some conditions (e.g. if citizens are dissatisfied with low economic growth). Hence Jones and Olken compared successful assassination attempts with unsuccessful attempts, when the bullet missed. That really is a random difference: national political conditions don’t affect the assassin’s aim. The database consisted of all 298 assassination attempts on national leaders from 1875 to 2005: 59 of them successful. It turned out that successful attempts were more likely to be followed by a change in national political institutions.
In both studies the effect of a leader’s death was stronger for deaths of autocratic leaders than for democratic ones — and stronger for autocrats with no constraints on their power than for autocrats constrained by legislatures or by political parties. These studies agree on a general conclusion: leaders sometimes make a difference. But it depends on the type of leader, and on the type of effect examined.”
“Small countries threatened by large countries should remain alert, consider alternative options, and appraise those options realistically. This lesson has sadly often been ignored. It was ignored by the Melians; it was ignored by the Paraguayans, who waged a disastrous war against the combined forces of the much larger Brazil and Argentina plus Uruguay from 1865 to 1870, resulting in the deaths of 60% of Paraguay’s population; it was ignored by Finland in 1939; it was ignored by Japan in 1941, when Japan simultaneously attacked the U.S., Britain, the Netherlands, Australia, and China while Russia remained hostile; and it was ignored by Ukraine in its recent disastrous confrontation with Russia.
Many general themes have emerged from the histories of national crises in this book. One set consists of the behaviors that have helped our seven nations to deal with crises, including acknowledging when one’s nation is in a crisis; accepting responsibility for change, rather than just blaming other nations and retreating into victimhood; building a fence to identify the national feature(s) needing to be changed, so as not to be overwhelmed with a sense that nothing about one’s country is working adequately; identifying other countries from which to seek help; identifying other countries’ models that have solved problems similar to those now facing one’s own country; being patient, and recognizing that the first solution attempted may not work and that several successive attempts may be necessary; reflecting on which core values continue to be appropriate, and which are no longer appropriate; and practicing honest self-appraisal.
Young countries need to construct a national identity, as Indonesia, Botswana, and Rwanda have been doing. For older countries, national identities may need revision, as may core values; Australia illustrates such revision.
Still another theme involves uncontrollable factors that influence crisis outcomes. A nation is stuck with its actual experience of previous crisis-solving, and with its geopolitical constraints. More experience can’t suddenly be acquired, and constraints can’t be wished away. But a nation can still take them realistically into account, as did Germany under Bismarck and Brandt.”