I have come to realize that many people have a hard time believing there is a concerted effort to enslave, and even annihilate, millions of people. I know that many also find it hard to understand how symbolism and manipulation work.
This post contains some really important foundational information. Please read it over more than once and take the time to break it down and seriously consider all that is revealed here. Much of what is posted here is written from the perspective of the ones who are behind these practices and technologies. I have only posted excerpts from these articles but they are posted as written. I believe that when people have an opportunity to review the information they are capable of coming to their own conclusions.
In recent times people have been reluctant to accept the FACT that there are EVIL people in the world. Whether or not you believe in GOD or the teachings of the BIBLE, you have to recognize that there are people who are motivated by their own selfish desires and nothing else. They have no conscience, no empathy or sympathy for others. Their thoughts are only evil all the time.
They are using science and technology, and manipulation techniques employing music, images, symbols, vibrations, and persuasion, to shape your thoughts, beliefs, and desires into agreement with their agendas while at the same time taking away all your freedoms. These things are being used against you without your awareness.
Things have now progressed to a point where they are most likely not reversible. There may still be a chance to turn things around, but there has to be a mass awakening.
The Importance of a Free Press
Vibrant journalism allows us to expand the scope of our knowledge and experience, enables conversations on issues of public concern, and holds the powerful to account.
A free press is guaranteed by the U.S. Constitution, but no right is truly guaranteed. Despite the United States’ historic role as a global champion of free speech, the nation often receives less-than-stellar marks when it comes to protecting the press (the United States is ranked only 45th out of 180 countries in a report on media freedom).
Why is freedom of the press important?
Freedom of the press is important because it plays a vital role in informing citizens about public affairs and monitoring the actions of government at all levels. While the media may be unpopular — 43 percent of Americans say the media supports democracy “very poorly” or “poorly,” a Knight Foundation/Gallup report found — this role should not be forgotten.
To protect our rights, we must understand our rights. Here are four fundamental facts we should all remember about freedom of the press:
The founding fathers valued a free press.
… our revolutionary forefathers knew that when the press examines the actions of government, the nation benefits. News organizations expose corruption and cover-ups, deceptions and deceits, illegal actions and unethical behavior—and they hold our leaders and our institutions accountable.
Freedom of the Press Quotes
“Republics and limited monarchies derive their strength and vigor from a popular examination into the action of the magistrates,” Benjamin Franklin declared. By sharing knowledge and sparking debate, a free press invigorates and educates the nation’s citizens. Freedom will be “a short-lived possession” unless the people are informed, Thomas Jefferson once said. To quote John Adams: “The liberty of the press is essential to the security of the state.”
The First Amendment protects the media.
The Bill of Rights was modeled after the Virginia Declaration of Rights, written by George Mason in 1776. During the Constitutional Convention, Mason and other Anti-Federalists, including James Monroe and Patrick Henry, believed that the U.S. Constitution failed to place specific limits on the government’s power. That led to the eventual creation of the Bill of Rights and its ten amendments, written by James Madison.
What does the First Amendment say about freedom of the press?
The First Amendment is one of the great statements in the history of human rights. It declares: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.” That means the government cannot punish you for your views, thoughts, or words, even if they’re unpopular, save for very narrow limits. We the people can say what we think—and the press can perform its essential role: to agitate, investigate, and scrutinize our leaders and institutions. That freedom is the difference between a democracy and a dictatorship.
The press is under attack.
Threats against journalists aren’t new. The Sedition Act of 1798 prohibited the publishing of “false, scandalous, and malicious writing” against the government, and was “perhaps the most grievous assault on free speech in the history of the United States,” writes Geoffrey Stone, author of Perilous Times: Free Speech in Wartime. Antiwar journalists were arrested during World War I and the Red Scare. In 1971, the U.S. government attempted to halt publication of the Pentagon Papers. Journalists such as former New York Times reporter Judith Miller have chosen jail sentences rather than reveal confidential sources, and in 2007, Joe Arpaio, then sheriff of Maricopa County in Arizona—agitated by the Phoenix New Times’ investigations into his commercial real estate transactions—arrested journalists at their homes on false charges.
Today, reporters face an increasingly hostile environment. Journalists and freelance writers have been forced to hand over cell phones and other devices to border agents for inspection when exiting or entering the United States, the nonprofit Committee to Protect Journalists reports. Border agents have also interrogated them about everything from private conversations to their social media posts.
At a local level, journalists were arrested at least 34 times in 2017, according to Reporters Without Borders. Nine journalists were arrested for covering protests in St. Louis, the group reports, and a journalist in North Dakota was arrested for covering a Dakota Access pipeline protest. Reporter Dan Heyman was jailed in West Virginia last year after asking then Health and Human Services Secretary Tom Price a question about healthcare legislation (the charge was willful disruption of state government processes). On a national level, the President has retweeted violent memes against CNN and railed against reporters and news outlets that criticize his administration, even stating that certain media outlets should lose their broadcasting licenses. He has called the press “enemies of the people,” a phrase also used by 20th century authoritarians.
These threats are a danger to free speech.
When we have honest and honorable news reporters, journalists are watchdogs—not cheerleaders. They ignite dialogue on essential issues. They share the truths that powerful people would rather conceal. They are the force that holds our leaders accountable for their actions.
Why is freedom of the press important in a democracy?
When our leaders threaten journalists, they are threatening the First Amendment, along with our most basic rights. “Our liberty depends on the freedom of the press,” said Jefferson, “and that cannot be limited without being lost.”
Posted Jul 29, 2016 by Martin Armstrong
The mainstream media became far more systemically involved as the government’s tool for spreading propaganda after two vital changes were made to the National Defense Authorization Act (NDAA), including an amendment that legalized the use of propaganda on the American public.
Our pretend “representatives” in Congress have proven that they do not represent the people, and instead exploit us for personal gain, both politically and monetarily. There had always been an anti-propaganda law that prevented the U.S. government from broadcasting programming to American audiences in order to influence them for political gain.
During the 1970s and 1980s, the prevailing view was that American taxpayers shouldn’t be funding propaganda for American audiences. On July 2, 2013, that law was silently repealed with the implementation of a new reform act. The bottom line: Congress knew what it was doing when it deliberately unleashed thousands of hours per week of government-funded radio and TV programs for U.S. domestic propaganda efforts.
Our “pretend” representatives see no problem with propaganda today. Congress also allowed the repeal of the Fairness Doctrine in broadcasting. They no longer require the mainstream media to tell the truth. Why do our politicians now believe they are entitled to lie, cheat, and use mainstream media to create state propaganda?
The Smith-Mundt Act prevented the government from engaging in propaganda domestically; the restriction applied only to domestic audiences. The U.S. government always had official propaganda programming, produced by the Broadcasting Board of Governors, that was intended solely for foreign consumption, such as Voice of America, Radio Free Europe/Radio Liberty, and the Middle East Broadcasting Networks. The tone and nature varied, and the programming was produced for more than 100 countries in 61 different languages. Nonetheless, it always carried a political intent. This runs counter to the First Amendment principle that a free press is to serve as a check on government. Thomas Jefferson maintained:
A press that is free to investigate and criticize the government is absolutely essential in a nation that practices self-government and is therefore dependent on an educated and enlightened citizenry. On the other hand, newspapers too often take advantage of their freedom and publish lies and scurrilous gossip that could only deceive and mislead the people. Jefferson himself suffered greatly under the latter kind of press during his presidency. But he was a great believer in the ultimate triumph of truth in the free marketplace of ideas, and looked to that for his final vindication.

If anyone needed a reminder of the dangers of domestic propaganda efforts, the past 12 months have provided ample reasons. Additionally, back in 2012, two USA Today journalists were targeted for reporting about millions of dollars in back taxes owed by the Pentagon’s top propaganda contractor in Afghanistan. The co-owners of the firm had to confess to creating phony websites and Twitter accounts designed to smear the journalists anonymously. Unfortunately, the government today is very focused on manipulating news and commentary on the internet, including social media, pretending to be ordinary people posting comments and honest reviews.
In a video accompanying the original post, former Clinton/Gore political consultant Naomi Wolf explains why we should be skeptical of overly theatrical news stories. She points out that propaganda has become legal thanks to the very people claiming to be our representatives.
Propaganda is the dissemination of information—facts, arguments, rumours, half-truths, or lies—to influence public opinion.
Propaganda is the more or less systematic effort to manipulate other people’s beliefs, attitudes, or actions by means of symbols (words, gestures, banners, monuments, music, clothing, insignia, hairstyles, designs on coins and postage stamps, and so forth). Deliberateness and a relatively heavy emphasis on manipulation distinguish propaganda from casual conversation or the free and easy exchange of ideas. Propagandists have a specified goal or set of goals. To achieve these, they deliberately select facts, arguments, and displays of symbols and present them in ways they think will have the most effect. To maximize effect, they may omit or distort pertinent facts or simply lie, and they may try to divert the attention of the reactors (the people they are trying to sway) from everything but their own propaganda.
Comparatively deliberate selectivity and manipulation also distinguish propaganda from education. Educators try to present various sides of an issue—the grounds for doubting as well as the grounds for believing the statements they make, and the disadvantages as well as the advantages of every conceivable course of action. Education aims to induce reactors to collect and evaluate evidence for themselves and assists them in learning the techniques for doing so. It must be noted, however, that some propagandists may look upon themselves as educators and may believe that they are uttering the purest truth, that they are emphasizing or distorting certain aspects of the truth only to make a valid message more persuasive, or that the courses of action that they recommend are in fact the best actions that the reactor could take. By the same token, the reactor who regards the propagandist’s message as self-evident truth may think of it as educational; this often seems to be the case with “true believers”—dogmatic reactors to dogmatic religious, social, or political propaganda. “Education” for one person may be “propaganda” for another.
The word propaganda itself, as used in recent centuries, apparently derives from the title and work of the Congregatio de Propaganda Fide (Congregation for Propagation of the Faith), an organization of Roman Catholic cardinals founded in 1622 to carry on missionary work. To many Roman Catholics the word may therefore have, at least in missionary or ecclesiastical terms, a highly respectable connotation. But even to these persons, and certainly to many others, the term is often a pejorative one tending to connote such things as the discredited atrocity stories and deceptively stated war aims of World Wars I and II, the operations of the Nazis’ Ministry of Public Enlightenment and Propaganda, and the broken campaign promises of a thousand politicians. Also, it is reminiscent of countless instances of false and misleading advertising (especially in countries using Latin languages, in which propagande commerciale or some equivalent is a common term for commercial advertising).
Related terms
Related to the general sense of propaganda is the concept of “propaganda of the deed.” This denotes taking nonsymbolic action (such as economic or coercive action), not for its direct effects but for its possible propagandistic effects. Examples of propaganda of the deed would include staging an atomic “test” or the public torture of a criminal for its presumable deterrent effect on others, or giving foreign “economic aid” primarily to influence the recipient’s opinions or actions and without much intention of building up the recipient’s economy.
Distinctions are sometimes made between overt propaganda, in which the propagandists and perhaps their backers are made known to the reactors, and covert propaganda, in which the sources are secret or disguised. Covert propaganda might include such things as political advertisements that are unsigned or signed with false names, clandestine radio stations using false names, and statements by editors, politicians, or others who have been secretly bribed by governments, political backers, or business firms. Sophisticated diplomatic negotiation, legal argument, collective bargaining, commercial advertising, and political campaigns are of course quite likely to include considerable amounts of both overt and covert propaganda, accompanied by propaganda of the deed.
Another term related to propaganda is psychological warfare (sometimes abbreviated to psychwar), which is the prewar or wartime use of propaganda directed primarily at confusing or demoralizing enemy populations or troops, putting them off guard in the face of coming attacks, or inducing them to surrender. The related concept of political warfare encompasses the use of propaganda, among many other techniques, during peacetime to intensify social and political divisions and to sow confusion within the societies of adversary states.
Still another related concept is that of brainwashing. The term usually means intensive political indoctrination. It may involve long political lectures or discussions, long compulsory reading assignments, and so forth, sometimes in conjunction with efforts to reduce the reactor’s resistance by exhausting him either physically through torture, overwork, or denial of sleep or psychologically through solitary confinement, threats, emotionally disturbing confrontations with interrogators or defected comrades, humiliation in front of fellow citizens, and the like. The term brainwashing was widely used in sensational journalism to refer to such activities (and to many other activities) as they were allegedly conducted by Maoists in China and elsewhere.
Another related word, advertising, has mainly commercial connotations, though it need not be restricted to this; political candidates, party programs, and positions on political issues may be “packaged” and “marketed” by advertising firms. The words promotion and public relations have wider, vaguer connotations and are often used to avoid the implications of “advertising” or “propaganda.” “Publicity” and “publicism” often imply merely making a subject known to a public, without educational, propagandistic, or commercial intent.
Contemporary propagandists with money and imagination can use a very wide range of signs, symbols, and media to convey their messages. Signs are simply stimuli—“information bits” capable of stimulating, in some way, the human organism. These include sounds, such as words, music, or a 21-gun salvo; gestures (a military salute, a thumbed nose); postures (a weary slump, folded arms, a sit-down, an aristocratic bearing); structures (a monument, a building); items of clothing (a uniform, a civilian suit); visual signs (a poster, a flag, a picket sign, a badge, a printed page, a commemorative postage stamp, a swastika scrawled on a wall); and so on and on.
A symbol is a sign having a particular meaning for a given reactor. Two or more reactors may of course attach quite different meanings to the same symbol. Thus, to Nazis the swastika was a symbol of racial superiority and the crushing military might of the German Volk; to some Asiatic and North American peoples it is a symbol of universal peace and happiness. Some Christians who find a cross reassuring may find a hammer and sickle displeasing and may derive no religious satisfaction at all from a Muslim crescent, a Hindu cow, or a Buddhist lotus.
The contemporary propagandist can employ elaborate social-scientific research facilities, unknown in previous epochs, to conduct opinion surveys and psychological interviews in efforts to learn the symbolic meanings of given signs for given reactors around the world and to discover what signs leave given reactors indifferent because, to them, these signs are without meaning. (Now, AI has made this easy as pie. The information/data AI has been collecting all these years, and continues to collect, gives them all they need to program individuals, families, neighborhoods, church groups, counties, countries… you name it.)
Media are the means—the channels—used to convey signs and symbols to the intended reactor or reactors. A comprehensive inventory of media used in 20th- and 21st-century propaganda could cover many pages. Electronic media include e-mail, blogs, Web- or application (app)-based social networking platforms such as Facebook and Twitter, and electronic versions of originally printed media such as newspapers, magazines, and books. Printed media include, in addition to those just mentioned, letters, handbills, posters, billboards, and handwriting on walls and streets. Among audiovisual media, the Internet and television may be the most powerful for many purposes. Both can convey a great many types of signs simultaneously; they can gain heavy impact from mutually reinforcing gestures, words, postures, and sounds and a background of symbolically significant leaders, celebrities, historic settings, architectures, flags, music, placards, maps, uniforms, insignia, cheering or jeering mobs or studio audiences, and staged assemblies of prestigious or powerful people. Other audiovisual media include public speakers, movies, theatrical productions, marching bands, mass demonstrations, picketing, face-to-face conversations between individuals, and “talking” exhibits at fairs, expositions, and art shows.
The larger the propaganda enterprise, the more important are such mass media as the Internet and television and also the organizational media—that is, pressure groups set up under leaders and technicians who are skilled in using many sorts of signs and media to convey messages to particular reactors. Vast systems of diverse organizations can be established in the hope of reaching leaders and followers of all groups (organized and unorganized) in a given area, such as a city, region, nation or coalition of nations, or the entire world. Pressure organizations are especially necessary, for example, in closely fought sales campaigns or political elections, especially in socially heterogeneous areas that have extremely divergent regional traditions, ethnic and linguistic backgrounds, and educational levels and very unequal income distributions. Diversities of these sorts make it necessary for products to be marketed in local terms and for political candidates to appear to be friends of each of perhaps a dozen or more mutually hostile ethnic groups, of the educated and the uneducated, and of the very wealthy as well as the poverty-stricken.
Modern research and the evolution of current theories
After the decline of the ancient world, no elaborate systematic study of propaganda appeared for centuries—not until the Industrial Revolution had brought about mass production and raised hopes of immensely high profits through mass marketing. Near the beginning of the 20th century, researchers began to undertake studies of the motivations of many types of consumers and of their responses to various kinds of salesmanship, advertising, and other marketing techniques. From the early 1930s on, there have been “consumer surveys” much in the manner of public opinion surveys. Almost every conceivable variable affecting consumers’ opinions, beliefs, suggestibilities, and behaviour has been investigated for every kind of group, subgroup, and culture in the major capitalist nations. Consumers’ wants and habits were studied for a limited time in the same ways in the socialist countries—partly to promote economic efficiency and partly to prevent political unrest. Data on the wants and habits of voters as well as consumers are now being gathered in the same elaborate ways in many parts of the world. Beginning in the early 21st century, many Web sites (especially social networking platforms) and Internet service providers, as well as thousands of applications developed for use with browsers and smartphones, collected massive amounts of personal data about the consumers who used them, generally without their informed consent. Such data potentially included consumers’ ages, genders, marital status, medical histories, employment histories and other financial information, personal and professional interests, political affiliations and opinions, and even geographic locations on a minute-by-minute basis. The collected data was then sold to information or data brokers, who aggregated it and sold it to advertising firms, who in turn used it to identify potential customers for their corporate clients and to make their commercial messages more effective.
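To make the aggregation step concrete, here is a minimal sketch in Python of how a broker might merge fragmented per-site records into a single profile keyed by a user identifier. All field names and sample records are invented for illustration; real brokers use far more elaborate schemas and identity-matching techniques.

```python
from collections import defaultdict

# Hypothetical fragments of one consumer's behavior, as reported
# by different sites and apps (all values invented).
raw_records = [
    {"user_id": "u42", "source": "social_app", "age": 34, "interests": ["politics"]},
    {"user_id": "u42", "source": "shopping_site", "purchases": ["running shoes"]},
    {"user_id": "u42", "source": "news_site", "articles_read": ["election coverage"]},
]

def aggregate_profiles(records):
    """Merge fragmented records into one profile per user ID."""
    profiles = defaultdict(dict)
    for rec in records:
        uid = rec["user_id"]
        for key, value in rec.items():
            if key in ("user_id", "source"):
                continue  # identifiers and provenance are not profile traits
            if isinstance(value, list):
                profiles[uid].setdefault(key, []).extend(value)
            else:
                profiles[uid][key] = value
    return dict(profiles)

print(aggregate_profiles(raw_records))
# {'u42': {'age': 34, 'interests': ['politics'],
#          'purchases': ['running shoes'],
#          'articles_read': ['election coverage']}}
```

The point of the sketch is simply that once records share a stable identifier, fragments that are individually innocuous combine into a profile detailed enough to target.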
Large quantities of such information were also collected about voters and drawn upon for nationwide political advertising campaigns costing billions of dollars annually. Such messages have taken up a high percentage of advertising space or time on social networking platforms and other popular Web sites, in newspapers and magazines (both electronic and printed), and on radio and television. Critics have argued that advertising expenditures on such a scale, whether for deodorants or presidents, tend to waste society’s resources and also to preclude effective competition by rival producers or politicians who cannot raise equally large amounts of money. A rising tide of consumer resistance and voter skepticism has led to various attempts at consumer education, voter education, counterpropaganda, and proposals for regulatory legislation. Most such proposals in the United States have been unavailing.
As far back as the early 1920s, there developed an awareness among many social critics that the extension of the vote and of enlarged purchasing power to more and more of the ignorant or ill-educated meant larger and larger opportunities for both demagogic and public-spirited propagandists to make headway by using fictions and myths, utopian appeals, and “the noble lie.” Interest was aroused not only by the lingering horror of World War I and of the postwar settlements but also by publication of Ivan Pavlov’s experiments on conditioned reflexes and of analyses of human motivations by various psychoanalysts. Sigmund Freud’s Group Psychology and the Analysis of the Ego (1922) was particularly relevant to the study of leaders, propagandists, and followers, as were Walter Lippmann’s Public Opinion (1922) and The Phantom Public (1925).
In 1927, an American political scientist, Harold D. Lasswell, published a now-famous book, Propaganda Technique in the World War, a dispassionate description and analysis of the massive propaganda campaigns conducted by all the major belligerents in World War I. This he followed with studies of communist propaganda and of many other forms of communication. Within a few years, a great many other social scientists, along with historians, journalists, and psychologists, were producing a wide variety of publications purporting to analyze military, political, and commercial propaganda of many types. During the Nazi period and during World War II and the subsequent Cold War between the U.S. and the Soviet Union, a great many researchers and writers, both skilled and unskilled, scholarly and unscholarly, were employed by governments, political movements, and business firms to conduct propaganda. Some of those who had scientific training designed very carefully controlled experiments or intelligence operations, attempting to quantify data on appeals of various types of propaganda to given reactors.
In the course of this theory building and research, the study of propaganda advanced a long way on the road from lore to science. By the second half of the 20th century, several hundred more or less scholarly books and thousands of articles had shed substantial light on the psychology, techniques, and effects of propaganda campaigns, major and minor.
Eventually nearly every significant government, political party, interest group, social movement, and large business firm in the advanced countries developed its own corps of specialized researchers, propagandists, or “opinion managers” (sometimes referred to as information specialists, lobbyists, legislative representatives, or vice presidents in charge of public relations). Some have become members of parliaments, cabinets, and corporate boards of directors. The most expert among them sometimes are highly skilled or trained, or both, in history, psychiatry, politics, social psychology, survey research, and statistical inference.
Many of the bigger and wealthier propaganda agencies conduct (overtly and covertly) elaborate observations and opinion surveys, among samples of the leaders, the middle strata, and the rank and file of all social groups, big and little, whom they hope to influence. They tabulate many kinds of data concerning those contents of the Internet, the press, films, television, and organizational media that reach given groups. They chart the responses of reactors, through time, by statistical formulas. They conduct “symbol campaigns” and “image-building” operations with mathematical calculation, using large quantities of data. To the ancient art of rhetoric, the “technique of orators,” have been added the techniques of the psychopolitical analyst and the media specialist and the know-how of the administrators of giant advertising agencies, public relations firms, and governmental ministries of information that employ armies of analytic specialists and “symbol-handlers.”
It is a commonplace among the highly educated that people in the mass—and even people on high educational and social levels—often react more favourably to utopian myths, wishful thinking, and nonrational residues of earlier experiences than they do to the sober analysis of facts. Unfortunately, average citizens who may be aware of being duped are not likely to have enough education, time, or economic means to defend themselves against the massive organizations of opinion managers and hidden persuaders. Indeed, to affect them they would have to act through large organizations themselves and to use, to some extent, the very means used by those they seek to control. The still greater “curse of bigness” that may evolve in the future is viewed with increasing concern by many politically conscious people.
Propagandists must realize that neither rational arguments nor catchy slogans can, by themselves, do much to influence human behaviour. The behaviour of reactors is also affected by at least four other variables. The first is the reactors’ predispositions—that is, their stored memories of, and their past associations with, related symbols. These often cause reactors to ignore the current inflow of symbols, to perceive them very selectively, or to rationalize them away. The second is the set of economic inducements (gifts, bribery, pay raises, threats of job loss, and so forth) which the propagandist or others may apply in conjunction with the symbols. The third is the set of physical inducements (love, violence, protection from violence) used by the propagandist or others. The fourth is the array of social pressures that may either encourage or inhibit reactors in thinking or doing what the propagandist advocates. Even those who are well led and are predisposed to do what the propagandist wants may be prevented from acting by counterpressures within the surrounding social systems or groups of which they are a part.
In view of these predispositions and pressures, skilled propagandists are careful to advocate chiefly those acts that they believe the reactor already wants to perform and is in fact able to perform. It is fruitless to call upon most people to perform acts that may involve a total loss of income or terrible physical danger—for example, to act openly upon democratic leanings in a totalitarian fascist country. To call upon reactors to do something extremely dangerous or hard is to risk having the propaganda branded as unrealistic. In such cases, it may be better to point to actions that reactors can avoid taking—that is, to encourage them in acts of passive resistance. The propagandists will thereby both seem and be realistic in their demands upon the reactor, and the reactor will not be left with the feeling, “I agree with this message, but just what am I supposed to do about it?”
For maximum effect, the symbolic content of propaganda must be active, not passive, in tone. It must explicitly or implicitly recommend fairly specific actions to be performed by the reactor (“buy this,” “boycott that,” “vote for X,” “join Group Y,” “withdraw from Group Z”). Furthermore, because the ability of the human organism to receive and process symbols is strictly limited, skillful propagandists attempt to substitute quality for quantity in their choice of symbols. A brief slogan or a picture or a pithy comment on some symbol that is emotion laden for the reactors may be worth ten thousand other words and cost much less. In efforts to economize symbol inputs, propagandists attempt to make full use of the findings of all the behavioral sciences. They draw upon the psychoanalysts’ studies of the bottled-up impulses in the unconscious mind, they consult the elaborate vocabulary counts produced by professors of education, they follow the headline news to determine what events and symbols probably are salient in reactors’ minds at the moment, and they analyze the information polls and attitude studies conducted by survey researchers.
There is substantial agreement among psychoanalysts that the psychological power of propaganda increases with use of what Lasswell termed the triple-appeal principle. This principle states that a set of symbols is apt to be most persuasive if it appeals simultaneously to three elements of an individual’s personality—elements that Freud labelled the ego, id, and superego. To appeal to the ego, skilled propagandists will present the acts and thoughts that they desire to induce as if they were rational, advisable, wise, prudent, and expedient; in the same breath they say or imply that they are sure to produce pleasure and a sense of strength (an appeal to the id); concurrently they suggest that they are moral, righteous, and—if not altogether legal—decidedly more justifiable and humane than the law itself (an appeal to the superego, or conscience). Within any social system, the optimal blend of these components varies from individual to individual and from subgroup to subgroup: some individuals and subgroups love pleasure intensely and show few traces of guilt; others are quite pained by guilt; few are continuously eager to be rational or to take the trouble to become well informed. Some cautious individuals and subgroups like to believe that they never make a move without preanalyzing it; others enjoy throwing prudence to the winds. There are also changes in these blends through time: personalities change, as do the morals and customs of groups. In large collectivities like social classes, ethnic groups, or nations, the particular blends of these predispositions may vary greatly from stratum to stratum and subculture to subculture. Only the study of history and behavioral research can give the propagandist much guidance about such variations.
Propagandists are wise if, in addition to reiterating their support of ideas and policies that they know the reactor already believes in, they include among their images a variety of symbols associated with parents and parent surrogates. The child lives on in every adult, eternally seeking a loving father and mother. Hence the appeal of such familistic symbolisms as “the fatherland,” “the mother country,” “the Mother Church,” “the Holy Father,” “Mother Russia,” and the large number of statesmen who are known as the “fathers of their countries.” Also valuable are reassuring maternal figures like Queen Victoria of England, the Virgin Mary, and the Japanese sun goddess. In addition to parent symbols, it is usually well to associate one’s propaganda with symbols of parent substitutes, who in some cases exert a more profound effect on children than do disappointing or nondescript parents: affectionate or amiable uncles (Uncle Sam, Uncle Ho Chi Minh); admired scholars and physicians (Karl Marx, Dr. Sun Yat-sen); politico-military heroes and role models (Abraham Lincoln, Winston Churchill, Mao, “the wise, mighty, and fatherly Stalin”); and, of course, saintly persons (Joan of Arc, Mahatma Gandhi, Martin Luther King, Jr., the Buddha). A talented and well-symbolized leader or role model may achieve a parental or even godlike ascendancy (charisma) and magnify the impact of a message many times.
Media of propaganda
There are literally thousands of electronic, written, audiovisual, and organizational media that a contemporary propagandist might use. All human groupings are potential organizational media, from the family and other small organizations through advertising and public relations firms, trade unions, churches and temples, theatres, readers of novels and poetry, special-interest groups, political parties and front organizations to the governmental structures of nations, international coalitions, and universal organizations like the United Nations and its agencies. From all this variety of media, propagandists must choose those few media (especially leaders, role models, and organizations) to whose messages they think the intended reactors are especially attentive and receptive.
In recent years the advent of personal computers and mobile phones and the development of the Internet have brought about a massive, worldwide proliferation of systems and facilities for news gathering, publishing, broadcasting, holding meetings, and speechmaking. At present, almost everyone’s mind is bombarded daily by far more media, symbols, and messages than the human organism can possibly pay attention to. The mind reels under noisy assortments of information bits about rival politicians, rival political programs and doctrines, new technical discoveries, insistently advertised commercial products, and new views on morality, ecological horrors, and military nightmares. This sort of communication overload already has resulted in the alienation of millions of people from much of modern life. Overload and alienation can be expected to reach even higher levels in coming generations as still higher densities of population, intercultural contacts, and communication facilities cause economic, political, doctrinal, and commercial rivalries to become still more intense.
Research has demonstrated repeatedly that most reactors attempt, consciously or unconsciously, to cope with severe communication overload by developing three mechanisms: selective attention, selective perception, and selective recall. That is, they pay attention to only a few media; they fail (often unconsciously) to perceive therein any large proportion of the messages that they find uncongenial; and, having perceived, even after this screening, a certain number of unpleasing messages, they repress these in whole or in part (i.e., cannot readily remember them). Contemporary propagandists therefore try to find out: (1) what formative experiences and styles of education have predisposed their intended audiences to their current “media preferences”; (2) which of all the Web sites, electronic or printed publications, television shows, leaders, and role models in the world they do in fact pay attention to; and (3) by which of these they are most influenced. These topics have thus become the subjects of vast amounts of commercial and academic research.
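The three mechanisms can be pictured as a filtering pipeline. The toy Python sketch below (outlets, scores, and thresholds are all invented) shows how each stage shrinks the set of messages that survives to memory:

```python
import random

random.seed(1)

# Toy messages: (outlet, congeniality score from the reactor's viewpoint, -1..1).
messages = [("outlet_a", 0.8), ("outlet_a", -0.6), ("outlet_b", 0.3),
            ("outlet_c", -0.9), ("outlet_a", 0.1), ("outlet_b", -0.2)]

# Selective attention: the reactor follows only a few media.
attended = [m for m in messages if m[0] in {"outlet_a", "outlet_b"}]

# Selective perception: strongly uncongenial messages are not perceived at all.
perceived = [m for m in attended if m[1] > -0.5]

# Selective recall: perceived-but-unpleasing messages are usually repressed.
recalled = [m for m in perceived if m[1] >= 0 or random.random() > 0.7]

print(len(messages), "sent ->", len(attended), "attended ->",
      len(perceived), "perceived ->", len(recalled), "recalled")
```

Whatever the parameter values, the qualitative point survives: only a fraction of what propagandists send is ever attended to, perceived, and remembered, which is why the three research questions above matter so much to them.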
In most cases, reactors are found to pay the most attention to the Web sites, publications, shows, leaders, and role models with whose views they already agree. People as a rule attend to communications not because they want to learn something new or reconsider their own philosophies of life but because they seek psychological reassurance about their existing beliefs and prejudices. When propagandists do get people’s attention by putting messages into the few media the people heed, they may discover that, to hold people’s attention, they must draft a message that does not depart very far from what people already want to believe. Despite the popular stereotypes about geniuses of politics, religion, or advertising whose brilliant propaganda converts the multitudes overnight, the plain fact is that even the most skilled propagandist must usually content himself with a very modest goal: packaging a message in such a way that much of it is familiar and reassuring to the intended reactors and only a little is so novel or true as to threaten them psychologically. Thus, revivalists have an a priori advantage over spokespersons of a modernized ethic, and conservative politicians an advantage over progressives. Propaganda that aims to induce major changes is certain to take great amounts of time, resources, patience, and indirection, except in times of revolutionary crisis when old beliefs have been shattered and new ones have not yet been provided. In ordinary periods (intercrisis periods), propaganda for changes, however worthy, is likely to be, in the words of the German sociologist Max Weber, “a slow boring of hard boards.”
For reasons just indicated, the most effective media as a rule (for messages other than the simplest of commercial advertising) are not the impersonal mass media like electronic and printed newspapers and news services and television but rather those few associations or organizations (reference groups) with which individuals feel identified or to which they aspire to relate their identity. Quite often, ordinary people not only avoid but actively distrust the mass media or fail to understand their messages, but in the warmth of a reference group they feel at home, assume that they understand what is going on, and feel that they are sure to receive a certain degree of emotional response and personal protection. The foremost reference group, of course, is the family. But many other groups perform analogous functions—for instance, the group of sports fans, the church, the trade union, the club, the alumni group, the clique or gang. By influencing the key members of such a group, propagandists may establish a “social relay” channel that can amplify their message. By thus concentrating on the few, they increase their chances of reaching the many—often far more effectively than they could through a plethora of communications aimed at larger audiences. Therefore, one important stratagem involves the combined use of mass media and reference-group channels—preparing materials for such media as news releases or broadcasts in ways designed specifically to reach certain groups (and especially their elites and leaders), who can then relay the messages to other sets of reactors.
World-level control of propaganda
One of the most serious and least understood problems of social control is above the national level, at the level of the world social system. At the world level there is an extremely dangerous lack of means of restraining or counteracting propaganda that fans the flames of international, interracial, and interreligious wars. The global system consists at present of a highly chaotic mixture of democratic, semidemocratic, and authoritarian subsystems. Many of these are controlled by leaders who are ill educated, ultranationalistic, and religiously, racially, or doctrinally fanatical. At present, every national regime asserts that its national sovereignty gives it the right to conduct any propaganda it cares to, however untrue such propaganda may be and however contradictory to the requirements of the world system. The most inflammatory of such propaganda usually takes the form of statements by prominent national leaders, often sensationalized and amplified by their own international broadcasts and sensationalized and amplified still further by media in the receiving countries. The only major remedy would lie, of course, in the slow spread of education for universalist humanism. A first step toward this might be taken through the fostering of an energetic and highly enlightened press corps and educational establishment, doing all it can to provide the world’s broadcasters, news publications, and schools with factual information and illuminating editorials that could increase awareness of the world system as a whole. Informed leaders in world affairs are therefore becoming increasingly interested in the creation of world-level media and multinational bodies of reporters, researchers, editors, teachers, and other intellectuals committed to the unity of humankind.
Concerns about democracy in the digital age
About half of the experts responding to this canvassing said people’s uses of technology will mostly weaken core aspects of democracy and democratic representation, but even those who expressed optimism often voiced concerns. This section includes comments about problems that were made by all respondents regardless of their answer to the main question about the impact of technology on democracy by 2030. These worries are organized under seven themes.
Empowering the powerful: Corporate and government agendas generally do not serve democratic goals or achieve democratic outcomes. They serve the goals of those in power
An internet pioneer and technology developer and administrator predicted, “My expectation is that by 2030, as much as 75% of the world’s population will be enslaved by artificial intelligence-based surveillance systems developed in China and exported around the world. These systems will keep every citizen under observation 24 hours a day, seven days a week, monitoring their every action.”
Dan Gillmor, co-founder of the News Co/Lab at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, and professor of practice in digital media literacy commented, “Governments (and their corporate partners) are broadly using technology to create a surveillance state, and what amounts to law by unaccountable black-box algorithm, far beyond anything Orwell imagined. But this can only happen in a society that can’t be bothered to protect liberty – or is easily led/stampeded into relinquishing it – and that is happening in more and more of the Western democracies. The re-emergence of public bigotry has nothing to do with technology, except to the extent that bigots use it to promote their malignant goals. Meanwhile, the institutions that are supposed to protect liberty – journalism among them – are mostly failing to do so. In a tiny number of jurisdictions, people have persuaded leaders to push back on the encroachments, such as a partial ban on government use of facial recognition in San Francisco. But the encroachments are overwhelming and accelerating.”
Leah Lievrouw, professor of information studies at the University of California-Los Angeles, wrote, “To date, virtually no democratic state or system has sorted out how to deal with this challenge to the fundamental legitimacy of democratic processes, and my guess is that only a deep and destabilizing crisis (perhaps growing out of the rise of authoritarian, ethnic or cultural nationalism) will prompt a serious response.”
Seth Finkelstein, programmer, consultant and winner of the Electronic Frontier Foundation’s Pioneer Award, wrote, “Warren Buffett has said, ‘There’s class warfare, all right, but it’s my class, the rich class, that’s making war, and we’re winning.’ We can examine how this class warfare changes with advances in technology, analogous to how military warfare has been affected by technology.”
Miguel Moreno, professor of philosophy at the University of Granada, Spain, an expert in ethics, epistemology and technology, commented, “There is a clear risk of bias, manipulation, abusive surveillance and authoritarian control over social networks, the internet and any uncensored citizen expression platform, by private or state actors. There are initiatives promoted by state actors to isolate themselves from a common internet and reduce the vulnerability of critical infrastructures to cyberattacks. This has serious democratic and civic implications. In countries with technological capacity and a highly centralized political structure, favorable conditions exist to obtain partisan advantages by limiting social contestation, freedom of expression and eroding civil rights.”
Richard Jones, an entrepreneur based in Europe, said, “Government will lag exploitation of data by state and corporate actors in unforeseen ways. Biased censorship (both well-intentioned and corrupt) and propaganda onslaughts will shape opinions as – combined with an anti-scientific revolution – confidence in the institutions and establishment figures essential to peaceful orderly improvement of societies crumbles further. Hysterical smear attacks will further intensify as attempts to placate minority pressure groups continue. Biased technocratic groupthink will continue its march toward authoritarianism. Charismatic leadership will flourish in truly liberal systems. Authoritarianism will take root elsewhere. Online preference surveys may be developed to guide many choices facing government, but it is not clear that can correct the current democratic deficit in a helpful way. As during the Gutenberg process, accompanying the digestion of ‘free-range’ information will be the reevaluation of secular and religious values and objectives.”
John Sniadowski, a systems architect based in the United Kingdom, wrote, “It is proving very difficult to regulate multinational corporations because of the variety of different national government agendas. A globally enacted set of rules to control multinationals is unlikely to happen because some sovereign states have very illiberal and hierarchical control over agendas and see technology as a way to dominate their citizens with their agendas as well as influence the democratic viewpoints of what they consider to be hostile states. Democracy in technological terms can be weaponized.”
Kevin Gross, an independent technology consultant, commented, “Technology can improve or undermine democracy depending on how it is used and who controls it. Right now, it is controlled by too few. The few are not going to share willingly. I don’t expect this to change significantly by 2030. History knows that when a great deal of power is concentrated in the hands of a few, the outcome is not good for the many, not good for democracy.”
Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, said, “As of 2015, the outcomes of upwards of 25 national elections around the world were being determined by Google’s search engine. Democracy as originally conceived cannot survive Big Tech as currently empowered. If authorities do not act to curtail the power of Big Tech companies – Google, Facebook and similar companies that might emerge in coming years – in 2030, democracy might look very much as it does now to the average citizen, but citizens will no longer have much say in who wins elections and how democracies are run. My research – dozens of randomized, controlled experiments involving tens of thousands of participants and five national elections – shows that Google search results alone can easily shift more than 20% of undecided voters – up to 80% in some demographic groups – without people knowing and without leaving a paper trail (see my paper on the search engine manipulation effect). I’ve also shown that search suggestions can turn a 50/50 split among undecided voters into a 90/10 split – again, without people knowing they have been influenced. The content of answer boxes can increase the impact of the search engine manipulation effect by an additional 10% to 30%. I’ve identified about a dozen largely subliminal effects like these and am currently studying and quantifying seven of them. I’ve also shown that the ‘Go Vote’ prompt that Google posted on its home page on Election Day in 2018 gave one political party at least 800,000 more votes than went to the opposing party – possibly far more if the prompt had been targeted to the favored party.”
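Epstein’s actual experimental designs are not reproduced here, but the amplification his numbers imply can be illustrated with a toy Monte Carlo model in Python (the voter count, the number of results read, and the tilt probability are all invented): undecided voters who read a handful of results and side with the majority of what they read end up far more lopsided than the per-result bias itself.

```python
import random

random.seed(0)

def simulate_undecided(n_voters=50_000, results_read=5, p_favorable=0.5):
    """Toy model: each undecided voter reads a few search results and
    sides with whichever candidate most of them favor. p_favorable is
    the probability a single result favors candidate A (0.5 = neutral)."""
    votes_a = 0
    for _ in range(n_voters):
        favorable = sum(random.random() < p_favorable for _ in range(results_read))
        if favorable > results_read / 2:
            votes_a += 1
    return votes_a / n_voters

print(f"neutral ranking (p=0.5): A wins {simulate_undecided(p_favorable=0.5):.1%}")
print(f"tilted ranking  (p=0.7): A wins {simulate_undecided(p_favorable=0.7):.1%}")
# A 70/30 tilt in individual results yields roughly an 84/16 split among
# voters in this toy model -- modest per-result bias compounds.
```

This is only a sketch of the general mechanism, not Epstein’s methodology; its sole point is that majority-rule reading of several biased results amplifies the underlying tilt, which is the shape of the 50/50-to-90/10 shift the quotation describes.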
A longtime internet-rights activist based in South Africa responded, “Whether the powers of states and tech corporations can be reined in effectively is the current struggle. The genie is out of the bottle and it does not bode well for systems of democracy that have already been undermined in Western states. A state of global cyber war now exists and is likely to persist over the next decade. The oligopoly of state-supported tech companies, whether in the U.S. or China, will be difficult to break. It is trite to differentiate between a Google or an Alibaba – both received substantial state support from their respective governments – the Googles by failure to apply antitrust law to prevent monopolization, the Alibabas by state protection against competition in China.”
David P. Reed, a pioneering architect of the internet and an expert in networking, spectrum and internet policy, wrote, “‘Democracy’ in 2030 will be democracy in name only. The mechanisms of widespread corporate surveillance of user behavior and modification of user behavior are becoming so sophisticated that the citizen interests of democratic-structured countries will no longer be represented in any meaningful way. That is, by collecting vast amounts of information about user preferences and responses, and the use of highly targeted behavior modification techniques, citizens’ choices will be manipulated more and more in the interests of those who can pay to drive that system. The current forms of democracy limit citizen participation to election events every few years, where issues and candidates are structured by political parties into highly targeted single-vote events that do not represent individuals’ interests. Instead, a small set of provocative ‘wedge’ issues are made the entire focus of the citizen’s choice. This is not representation of interests. It is a managed poll that can easily be manipulated by behavior modification of the sort that technology is moving toward.”
A pioneering technology editor and reporter for one of the world’s foremost global news organizations wrote, “I do not have great faith that the institutions tasked with ensuring that online discourse is civil and adheres to standards of truth and fairness will be able to prevail over tendencies of autocratic governments and powerful private sector actors to use cyberspace for narrow political ends. … The internet has never had an effective governing body with any considerable clout to set policy that might guarantee network neutrality on a global scale, inhibit censorship and apply such conventions as the Universal Bill of Human Rights. Further, a handful of platforms whose moral compass has been questioned have come to dominate the online world. Some are dominated by governments. Others owe allegiance only to shareholders.”
Jerry Michalski, founder of REX, the Relationship Economy eXpedition, wrote, “‘Capital G’ Government has devolved into a phony consumer mass-marketing exercise. ‘Small g’ governance could involve active, ongoing collaboration among citizens, but it won’t as long as the major platforms they use have as their business models to addict them to TikTok videos, and to sell off their private data to companies that want to stalk them.”
Jonathan Kolber, author of “A Celebration Society: Solving the Coming Automation Crisis,” said, “Deepfakes will completely muddy the difference between facts and falsehood, a distinction that few citizens are equipped to make even now. This will have devastating effects upon democratic institutions and processes. … We are increasingly seeing George Orwell’s nightmare unfold as governments learn to use internet-enabled smart devices (televisions, smartphones, etc.) for surveillance. When the Internet of Things extends to smart cars, smart homes and so forth, the surveillance will be universal and unending. Governments are also increasingly redefining facts and history.”
A professor of computer science said, “Artificial intelligence technology, especially machine learning, has a feedback loop that strongly advantages first movers. Google’s advantages in being a better search engine have now been baked in by its ability to accumulate more data about user search behavior. This dynamic is inherently monopolistic, even more so than prior technological advances. Persuasive technologies built using these technologies are capable of refining and shaping public opinion with a reach and power that totalitarian governments of the 20th century could only dream of. We can be sure that today’s regulatory mood will either dissipate with nothing done, or more likely, become a driver that entrenches existing monopolies further by creating technical demands that no competitor can surmount. Democratic institutions will have a very difficult time countering this dynamic. Uber’s ‘greyball’ program, intended to defeat regulation and meaningful audit, is a harbinger of the future.”
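The first-mover feedback loop this respondent describes can be written as a simple difference equation: market share feeds data, data feeds model quality, quality feeds share. The minimal Python sketch below (all constants invented; this models no real market) shows a modest initial edge compounding toward dominance.

```python
def step(shares, learning_rate=0.05):
    """One round of the flywheel: the service with more users collects
    more data, improves faster, and attracts users from its rival."""
    a, b = shares
    quality_gap = learning_rate * (a - b)  # more users -> more data -> better model
    a = min(max(a + quality_gap, 0.0), 1.0)
    return (a, 1.0 - a)

shares = (0.55, 0.45)  # a modest first-mover head start
for _ in range(20):
    shares = step(shares)
print(f"after 20 rounds: {shares[0]:.0%} vs {shares[1]:.0%}")
# The share gap grows by a constant factor each round, so a small head
# start compounds -- the monopolistic dynamic the respondent describes.
```

In this toy, the share gap multiplies by (1 + 2 × learning_rate) each round, so any nonzero initial advantage grows exponentially until saturation; real markets add frictions, but the direction of the loop is the same.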
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” said, “Social media will continue to enable new and more-sophisticated forms of propaganda and disinformation. Artificial intelligence will enable deepfake videos that the average citizen will be taken in by. Facebook, YouTube and Twitter will continue to enable this content in their unending chase for revenue. Politicians will make noises about regulation, but since these platforms will become their primary source of advertising and publicity, they will never commit to the elimination of Safe Harbor and other rules that protect the social networks.”
Bulbul Gupta, founding adviser, Socos Labs, a think tank designing artificial intelligence to maximize human potential, responded, “Given the current state of tech and artificial intelligence ownership, I expect democracy to be even more unequal between the haves and have-nots by 2030, and a major uprising happening from the masses who are being quickly left behind. Tech and AI are owned by their creators, the top 1%, with decisions made about the 100% in every sector of society that have little to no transparency, human judgment or much recourse, and that may not get made the same if they were being forced to happen face to face. People will need their own personal AIs in their corner to protect their basic civil and human rights.”
Richard Lachmann, professor of political sociology at the State University of New York-Albany, said, “Democracy will continue to weaken but technology is only a secondary factor. More important in the decline of democracy are the disappearance or weakening of labor unions, the growing power of corporations in all sectors due to mergers, extreme levels of inequality and the ability of the rich and of political actors to manipulate ‘veto points’ to paralyze government initiatives, which then increases citizens’ cynicism about politicians and lessens their participation. All of these preceded the expansion of the internet and will not be significantly lessened by citizens’ online activities.”
Vince Carducci, researcher of new uses of communication to mobilize civil society and dean at the College for Creative Studies, wrote, “Institutional changes are occurring more as a function of power and money than of technology, particularly in the selection of candidates and in the judicial system. Those are more of a threat than technology.”
A cofounder of one of the internet’s first and best-known online communities wrote, “Democracy is under threat. The blame can’t ultimately go to the internet or to computer-aided automation or to artificial intelligence. The vast power of personal and corporate wealth to wield these technologies in support of their selfish interests will increasingly suppress egalitarian and democratic values.”
A research scientist for a U.S. federal agency wrote, “We are in a period of growing isolationism, nativism and backlash that will weaken democracies around the world, and it will probably have reached a peak by 2030. Although technology and online dissemination of information will be a tool of information and disinformation, and it will be a tool of policing populations, the underlying economic and environmental shifts are mostly responsible for changes resulting in weaker democracies.”
A retired professor commented, “Corporations will have more power over employees and customers. This will be achieved as part of the ongoing corporate takeover of democratic institutions, which U.S. President Eisenhower warned of long ago. Technologies of identification and surveillance will expand in usage, eating away at the private sphere of social life. Social media will continue to reinforce strong social ties among family and friends while reducing the formation of the weak social ties among acquaintances that support the intergroup cooperation necessary in a diverse society. Worsening climate and its consequences for health, agriculture and infrastructure will create increasingly irrational forms of blame and global conflict. Global conflicts will include electronic and biological forms of aggression against the militarily powerful countries. More citizen backlash is to be expected, but it will likely be directed against inappropriate targets. Societies as we know them will stumble from disaster to disaster, toward a massive die-off of our species. I hope I’m wrong. I would like to see our species survive with its democratic values intact. I have grandchildren. I would like their grandchildren to inherit a better world than the one that our present technocratic capitalist economy is racing toward.”
Anonymous respondents commented:
- “The internet under capitalism will only serve the few, not the many, and democracy will weaken as a result. The problem is about competitive economic imperatives rather than technological affordances.”
- “It’s not the technology that will cause the changes, but the systems and structures that create various tech.”
- “The loudest voices will continue to be those that are heard. While the media may change, the elite will still run everything.”
- “Technology companies and governments have incentives to avoid addressing the ways in which internet platforms damage democratic institutions.”
- “Power corrupts. Look at the tech giants today – manipulation and propaganda. They are elitists who think they know best.”
- “The combination of big data and supercomputing power seems to be having a negative effect on democracy, and I see no signs that that can be effectively policed or regulated, particularly given the power (and data troves) of very large internet companies and of governments.”
- “I do not believe that governments understand the tools, and they will fail repeatedly to regulate or organize them properly; I also do not have faith that private companies are democratic, and therefore they are apt to reinforce capitalism alone, not democracy.”
Charles Ess, professor of digital ethics, at the University of Oslo, said, “Democracy – its foundational norms and principles, including basic rights to privacy, freedom of expression and rights to contest and conscientiously disobey – may survive in some form and in some places by 2030; but there are many strong reasons, alas, to think that it will be pushed to the margins in even traditionally democratic countries by the forces of surveillance capitalism, coupled with increasing citizen feelings of powerlessness against these forces, along with manipulation of information and elections, etc. Not to mention China’s increasingly extensive exports of the technologies of ‘digital authoritarianism’ modelled on their emerging Social Credit System.”
There is simply no reason to believe that technology can strengthen democracy. – Gina Neff
Rob Frieden, a professor of telecommunications law at Penn State who previously worked with Motorola and has held senior policy positions at the Federal Communications Commission and the National Telecommunications and Information Administration, said, “Technological innovations appear better suited for expanding government power than for improving the ability of individuals to evade surveillance. Across the entire spectrum of political ideology, national governments can justify increased budgets for ever-more-sophisticated surveillance technologies based on noble-sounding rationales, such as national security. Governments have little incentive and incur even fewer penalties when they fail to calibrate surveillance technology for lawful reasons. Innocent people will have reasonable privacy expectations eroded, particularly with technologies that have massive processing power and range coupled with an ambiguous mandate. Unless and until citizens push back, governments will use surveillance technologies to achieve goals beyond promoting national security. We risk becoming inured and numbed by ubiquitous surveillance, so much so that pushback seems too difficult and unproductive.”
Emilio Velis, executive director, Appropedia Foundation, said, “The way user participation has been shaped by technological platforms for the past 10 years turned the power of decentralized information back to the big corporations, platforms and stakeholders. Or, even worse, it has weakened the capacity of individuals of action while maintaining a false perception that they have control.”
Peter Lunenfeld, professor of design, media arts and digital humanities, University of California-Los Angeles, and author of “Tales of the Computer as Culture Machine,” wrote, “Commercial platform-driven communication technologies like Facebook, Twitter and their eventual successors are unlikely to strengthen representative democracy in the coming decades of the 21st century. They may add ‘voices’ to the conversation, but they will be unlikely to support and sustain the 20th century’s dominant forms of successful democracies – those that designated representatives to debate and legislate on their behalf, from coherent parties that had established ideologies and platforms. What we are starting to see is the development of dialoguing ‘communities’ that mimic the give and take of true democratic action without offering actual power to their participants, like the Italian Five Star Movement, or the emergence of personality-driven, single-issue pop-ups like Nigel Farage’s Brexit Party. Like Five Star and the Brexit Party, future political movements will use social media to offer the affordances of democratic dialogue without actually empowering participants to control or direct the movements. Social media technologies are creating skeuomorphs of democracies; they will have design attributes that look and feel democratic, but they will be authoritarian to the core.”
An anonymous respondent commented, “The degree of tracking of comments by individuals will increase dramatically in the future as DeepMind-style algorithms are applied to internet-based material. It will become much harder for people to make comments without knowing that their attitudes are being logged and accumulated by organisations of all manner, so there will be a reluctance to speak one’s mind. Hence ‘free speech’ will be constrained and thus the democratic process hindered.”
A distinguished professor of electrical engineering and computer science who is an expert in the future of communications networks at a U.S. university wrote, “Social media makes it possible to reach voters in targeted ways and deliver information from a distance that is tailored to specific goals, rather than fostering local community discussion and participation. The lack of privacy in internet service platforms, along with artificial intelligence and big data, now make it possible for candidates to identify and influence voters in ways that could not have been imagined only a few years ago. Without corrective action (such as new election rules limiting the use of private citizen information), these new capabilities could lead to increased political instability and possibly the breakdown of entire democratic systems. The U.S. appears to be the first such casualty in the Western world.”
Sam Adams, a 24-year veteran of IBM now working as a senior research scientist in artificial intelligence for RTI International, architecting national-scale knowledge graphs for global good, said, “The internet provides a global megaphone to everyone in that anyone can publish their opinions and views instantly and essentially for free. The problem with everyone having a megaphone is that we get drowned in more noise than useful information. This is even more problematic since interest groups from all sides have used their power and resources to amplify their own voices far above the average citizen, even to the point of effectively silencing the average citizen by burying their smaller voice under a landslide of blaring voices controlled by wealthy interest groups. Given the interest-driven news cycles and echo chambers of social media, only the loudest or most extreme voices get repeated. This further exacerbates the level of emotion in the public discussion and drives listeners to the extremes instead of more common ground. A democracy must fairly represent its people’s views if it is to succeed. And part of that fairness in this technology-dominant world must include balancing the volume of the voices.”
Philip Rhoades, a business futurist and consultant based in Australia, wrote, “The neoliberal, developed Western world is sliding into fascism as the world’s sixth mass extinction reaches its inevitable conclusion. As this ecological collapse and political regression proceeds, modern technology will mostly be used for suppression of the great majority of people/citizens. Some technology may help defend the populations against state suppression and terror, but its effectiveness will be minor in the greater scheme of things.”
An artificial intelligence expert predicted, “‘Democracy’ is likely to be even more of an elitist endeavor by 2030 than it is now. Life is good if you’re a big corporation, but not if you’re an ordinary working-class citizen. Who has a voice in this world will depend even more on money and power. Civic technologists will first promise to save democracy with technology but then start charging for it after five years because ‘someone has to pay for maintenance.’ And they will get away with it, because no one will remember that political rights are a basic right and not a commodity.”
An anonymous respondent wrote, “Recently Hong Kong protesters had to buy single-trip transit cards with cash to be able to exercise democratic power; this will be impossible when mass face-recognition technology is implemented. Essentially, it is becoming almost impossible to behave democratically.”
Anonymous respondents commented:
- “Technology is going to aggregate people’s individual voices and remove individual democracy.”
- “Democratic regimes could become less democratic from the misuse of surveillance systems with the justification of national security.”
- “I am sadly confident that democratic institutions will not be affected in any positive way in future by citizen’s perspectives; instead, technology will continue to create disenfranchised, disempowered citizens.”
Exploiting digital illiteracy: Citizens’ lack of digital fluency and their apathy produce an ill-informed and/or dispassionate public, weakening democracy and the fabric of society
James S. O’Rourke IV, a University of Notre Dame professor whose research specialty is reputation management, said, “As Neil Postman wrote in 1985, ‘We no longer engage in civil public discourse. We are simply amusing ourselves to death.’ Among the more insidious effects of digital life has been a reduction in tolerance for long-form text. People, particularly the young, will read, but not if it involves more than a few paragraphs. Few among them will buy and read a book. News sites have discovered that more people will click on the video than scroll through the text of a story. Given how easy it now is to manipulate digital video images, given how easy it is to play to people’s preconceptions and prejudice, and given how indolent most in our society have become in seeking out news, opinion and analysis, those who seek to deceive, distract or bully now have the upper hand. Jesuits have long cautioned that ‘No man can understand his own argument until he has visited the position of a man who disagrees.’ Such visits are increasingly rare. The long-predicted ‘filter bubble’ effect is increasingly visible. People will simply not seek out, read or take time to understand positions they do not understand or do not agree with. A sizeable majority now live with a thin collection of facts, distorted information and an insufficient cognitive base from which to make a thoughtful decision. Accurate information is no longer driving out false ideas, propaganda, innuendo or deceit.”
Bernie Hogan, senior research fellow, Oxford Internet Institute, said, “Technology without civics is capitalism with crystallised logic and unbounded scope. Democratic institutions and civic societies are premised on boundaries and intelligible scales, like the ‘local paper’ or the ‘provincial radio.’ Technology is allowing for the transcendence of scale, which we might think is great. Certainly, from a logistics and delivery side it is very impressive. But social cohesion requires levels of understanding that there’s a coherent bounded population to care about and define one’s identity through and against. It requires people seeing and doing things as more than consumers and occasional partisan voters.”
People don’t know what to believe, so they often choose either to believe nothing or to believe whatever their gut tells them. – a research scientist
Larry Rosen, a professor emeritus of psychology at California State University-Dominguez Hills, known as an international expert on the psychology of technology, wrote, “I worry that many in the public do not and will not have the skills to determine truth from fiction, and twisted truth can and does lead to misunderstanding of the content.”
Mark Andrejevic, associate professor of communications, University of Iowa, wrote, “Much of my career has been built around my profound concerns about the impact that technology is having on democratic processes of deliberation, public accountability and representation. This is because technology needs to be understood within the context of the social relations within which it is deployed, and these have been conducive to privileging an abstract consumerist individualism that suppresses the underlying commitment to a sense of common, shared or overlapping interests necessary to participation in democratic society. I see the forms of hyper-customization and targeting that characterize our contemporary information environment (and our devices and mode of information ‘consumption’) as fitting within a broader pattern of the systematic dismantling of social and political institutions (including public education, labor unions and social services) that build upon and help reproduce an understanding of interdependence that make the individual freedoms we treasure possible. Like many, I share concerns about rising political polarization and the way this feeds upon the weaponization of false and misleading information via automated curation systems that privilege commercial over civic imperatives. These trends predate the rise of social media and would not have the purchase they do without the underlying forms of social and civic de-skilling that result from the offloading of inherently social functions and practices onto automated systems in ways that allow us to suppress and misrecognize underlying forms of interdependence, commonality and public good. I am not optimistic that anything short of a social/political/economic disaster will divert our course.”
Carlos Afonso, an internet pioneer and digital rights leader based in Rio de Janeiro, Brazil, wrote, “Thinking here of a planet with 7 billion-plus persons, most of them (including many of the supposedly ‘connected’) are unable to discern the many aspects of disinformation that reaches them through traditional (entrepreneurial) media, social networking apps and local political influences.”
A longtime CEO and internet and telecommunications expert commented, “Citizens will increasingly act absent of any understanding of critical analysis and reasoning, fact-checking or even rule of law. Under the guise of ‘acting out against injustice’ we will continue to see cyber vigilantism, whereby social media firestorms effectively ‘try and convict’ anyone accused of word or deed not supportive of their values.”
Gretchen Steenstra, a technology consultant for associations and nonprofit organizations, wrote, “I am concerned about the higher velocity of information that does not include all critical and supporting information. Data is used to inform one view without context. Consumers do not fact-check (on many issues, regardless of party). Americans are not focused on social responsibility or downstream impacts – they only want instant results. Continuous media weakens people’s ability to seek information and form their own opinion. Constant connectedness prevents reflection and never lets your brain relax. No one can argue with the desire for understanding.”
A fellow at a think tank’s center for technology and innovation wrote, “Democracy will be driven by more artificial intelligence systems, which will automate a range of decisions. Consequently, individuals may have limited input into their own decisions because data will be extrapolated from machines. What this will mean is a looser connection to democratic processes or connections driven by what one sees, hears and senses through dominant platforms. Without some level of policy restraint when it comes to specific use cases, such as voting, technology may serve to erode public trust, while simultaneously relying less on actual public input due to the level of sophistication that emerging technologies offer.”
Anonymous respondents commented:
- “People will not use the internet to research the issue, rather, they will simply go with whatever biased opinion is put in front of them.”
- “The problem is that, with the erosion of critical-thinking skills, the blurring of true journalism versus opinion journalism (and the prevalence of ‘sound bites’ in lieu of serious debate based on facts) and the lack of proper policy and governance principles, these tools are being used to spread false information.”
- “The public, made more gullible by short attention spans and eroding reasoning skills, becomes a malleable target for those who seek to erode the fundamental institutions of our democracy.”
- “I’m less concerned about technology than I am the ability and willingness of my fellow citizens to educate themselves about the sources of information they consult.”
- “The biggest threat to democracy is people’s lack of critical-thinking skills to be able to distinguish between information and misinformation.”
Waging info-wars: Technology can be weaponized by anyone, anywhere, anytime to target vulnerable populations and engineer elections
Richard Bennett, founder of the High-Tech Forum and ethernet and Wi-Fi standards co-creator, wrote, “The economic model of social media platforms makes it inevitable that these tools will do more harm than good. As long as spreading outrage and false information generates more profits than dealing in facts, reason, science and evidence, the bad guys will continue to win. Until we devise a model where doing the right thing is more profitable than exploiting the public’s ignorance, the good guys will keep losing. … One hypothetical change that I would like to see would be the emergence of social media platforms that moderate less for tone and emotion and more for adherence to standards of truthfulness and evidence. Making this approach succeed financially is the major obstacle.”
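Bennett's hypothetical change amounts to swapping the feed's scoring function. The sketch below is a minimal illustration under invented inputs; `predicted_engagement` and `evidence_score` are hypothetical fields for the sake of the example, not any real platform's API.
```python
# Minimal sketch: the same two posts ranked two ways. The "status quo"
# ranking optimizes predicted engagement; Bennett's alternative weights
# adherence to evidence. Both scores here are made-up illustrative values.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical click/outrage model output, 0..1
    evidence_score: float        # hypothetical fact-check/sourcing rating, 0..1

posts = [
    Post("Outrageous unsourced rumor!", predicted_engagement=0.9, evidence_score=0.1),
    Post("Sober report with cited sources", predicted_engagement=0.4, evidence_score=0.9),
]

def engagement_rank(p: Post) -> float:
    return p.predicted_engagement                  # status quo: outrage wins

def evidence_rank(p: Post, w: float = 0.8) -> float:
    return w * p.evidence_score + (1 - w) * p.predicted_engagement

print([p.text for p in sorted(posts, key=engagement_rank, reverse=True)])
print([p.text for p in sorted(posts, key=evidence_rank, reverse=True)])
```
The first ranking surfaces the rumor; the second inverts the order. As Bennett says, the code is trivial; making the evidence-weighted version financially viable is the obstacle.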
Mutale Nkonde, adviser on artificial intelligence at Data & Society and fellow at Harvard’s Berkman Klein Center for Internet and Society, wrote, “Without significant regulation, our future elections will be ruled by the parties that can optimize social media recommendation algorithms most effectively. In the present moment, those are parties like Cambridge Analytica who used fear, racism and xenophobia to influence elections across the world.”
Eduardo Villanueva-Mansilla, associate professor of communications at Pontificia Universidad Catolica, Peru, and editor of the Journal of Community Informatics, said, “The lack of agreement about how to deal with these issues among governments is a serious threat to democracy, as much as the potential for misuse of technological innovations. In the next decade, the complete control by a few multinational firms will be completely outside of regulatory and policy reach of developing countries’ governments. This will increase the instability that has been normalized as a feature of governance in these countries.”
This is not like armed revolution; this is small numbers of employees able to affect what thousands, if not millions, see. – Rich Salz
A research leader for a U.S. federal agency said, “Working to be respectful of First Amendment rights while not allowing the perpetuation of mis- or disinformation is of critical concern. I don’t expect that to be resolved within the next 10 years. We are living in the times of 50 shades of gray. In many cases, the determination is not black and white. The headline may be misleading, but not entirely untrue. I think that’s appealing to the media right now.”
Kenneth R. Fleischmann, associate professor at the School of Information at the University of Texas-Austin, wrote, “Technology will have complex effects on society that will be difficult to predict, that depend on the decisions of tech companies, governments, the press and citizens. … Trust will be key, not just blind trust, but trust based on transparent provenance of information that can help users exercise their autonomy and agency.”
Anonymous respondents commented:
- “Technology will weaken our ability to come to consensus; by nurturing smaller communities and fringe ideas, it will make compromise and finding a modus vivendi much more difficult.”
- “Social media will continue to erode faith in facts and reason; echo chambers and emotion-driven communications plus security problems in voting will undermine public discourse and faith in elections.”
- “There seems to be no realistic way to check the effects of IT on polarization and misinformation. The true beliefs and actions of political leaders will continue to have decreasing influence on voting.”
- “Foreign countries and hate groups will grow more sophisticated in their ability to infiltrate the web with biased stories and ads designed to suppress or sway voters and negatively impact public opinion.”
- “While it enables voices to be heard, tech has already weakened democracy by enabling governments and corporations to erode privacy and silence those who might otherwise speak out.”
- “We don’t need mass armies anymore. New technology enables centralized control to a degree never imagined before.”
- “In 2030, there will still be splintering and increased political polarization as individuals are able to challenge democratic ideals and influence political processes through anonymous activities.”
- “Democracy is, and will always be, filled with fake news and preposterous bloviation.”
Weakening journalism: There seems to be no solution for problems caused by the rise of social media-abetted tribalism and the decline of trusted, independent journalism
Christopher Mondini, vice president of business engagement for ICANN, commented, “The decline of independent journalism and critical thinking and research skills resulting from easy reliance on the internet makes citizens more susceptible to manipulation and demagoguery. A growing proportion of politically active citizens are digital natives with no recollection of life before social media became the primary medium for debate and influence. The pursuit of clicks, retweets and page views encourages extremist or provocative rhetoric. Viral memes and soundbites distract from thoughtful analysis, deliberation and debate. Of course, the vast majority of citizens are not politically active, but they increasingly consume news and adopt a worldview shaped by their online communities. Participation in political processes may rise because of newly inflamed passions brought about by online discourse, but those passions may crowd out more measured voices.”
Yaakov J. Stein, CTO, RAD Data Communications, based in Israel, responded, “Social media as they are at present have a polarizing effect that destabilizes democracy. The reason is that advertising (and disinformation) is targeted at and tailored to people according to their preexisting views (as predicted based on their social media behavior). This strengthens these preexisting views, reinforces disparagement of those with opposing views and weakens the possibility of being exposed to opposing views. The result is that free press no longer encourages democracy by enabling people to select from a marketplace of ideas. Instead the right to free press is being used to protect the distribution of disinformation and being manipulated to ensure that people are not exposed to the full spectrum of viewpoints. Perhaps an even more insidious result is that people attempting to keep open minds can no longer trust information being offered online, and that free information online has led to the bankruptcy of traditional news outlets that spend resources on fact-checking.”
Rey Junco, director of research at CIRCLE in the Tisch College of Civic Life, Tufts University, said, “We can expect that attempts to influence public perceptions of candidates and elections are not only ongoing, but that they will continue to be successful. Technology use by citizens, civil society and governments will first weaken core aspects of democracy and democratic representation before there is a restructuring of technological systems and processes that will then help strengthen core aspects of democracy. There are two issues at play: 1) Ideological self-sorting in online spaces that is bolstered by algorithmic polarization and 2) The relative unwillingness of technology companies to address misinformation on their platforms. Individuals who get their news online (a larger proportion who are young – Pew Research) choose media outlets that are ideologically similar and rarely read news from the opposing side (Flaxman, Goel, & Rao, 2018). In fact, these individuals are rarely exposed to moderate viewpoints (Flaxman, Goel, & Rao, 2018). Social media, in turn, allow for not just informational self-sorting as with online news, but such self-sorting is bolstered through algorithmic curation of feeds that promotes ideological separation. … Although major technology companies are aware of how misinformation was promoted and propagated through their networks during the 2016 elections and resultant congressional hearings on the topic, little has been done to mitigate the impact of such deliberate spreading of misinformation. Analyses from the security and intelligence communities show that state actors continue their attempts to manipulate public sentiment in social spaces, while the increased polarization of traditional outlets has minimized the impact of these reports. State actors are emboldened by the fact that the United States has not addressed the spread of misinformation through technological change or through public education.”
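The self-sorting-plus-curation dynamic Junco describes can be illustrated with a toy opinion model, loosely in the spirit of bounded-confidence simulations. This is my own sketch, not a reproduction of the cited studies: each user's feed shows only the ten posts closest to their current view, and views drift toward what is shown. All parameters are arbitrary.
```python
# Toy echo-chamber simulation: similarity-ranked feeds plus opinion drift.
# 200 users start with views spread uniformly over [-1, 1]. Each round the
# "algorithm" shows every user the 10 posts nearest their own view, and the
# user's view moves 10% of the way toward the average of that feed.
# The counts (200, 10), drift rate (0.1) and 50 rounds are arbitrary choices.

import random

random.seed(1)
users = [random.uniform(-1.0, 1.0) for _ in range(200)]

for _ in range(50):
    posts = list(users)  # assume each user posts their own view
    users = [
        0.9 * u + 0.1 * (sum(feed) / len(feed))
        for u in users
        for feed in [sorted(posts, key=lambda p: abs(p - u))[:10]]
    ]

clusters = sorted({round(u, 2) for u in users})
print(f"{len(clusters)} distinct opinion clusters remain: {clusters}")
```
Nothing is censored in this toy world, yet cross-cutting exposure vanishes on its own: the uniform spread of views fragments into tight, mutually invisible clusters, which is the algorithmically bolstered self-sorting Junco points to.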
An associate professor of computer science who previously worked with Microsoft, said, “I worry about three related trends: 1) the increasing decentralization of news generation, 2) the lack of easy-to-use, citizen-facing mechanisms for determining the validity of digital media objects like videos and 3) personalization ecosystems that increase the tendency toward confirmation bias and intellectual narrowing. All three trends decrease the number of informed voters and increase social division. Governments will eventually become less averse to regulating platforms for news generation and news dissemination, but a key challenge for the government will be attracting top tech talent; currently, that talent is mostly lured to industry due to higher salaries and the perception of more interesting work. Increasing the number of technologists in government (both as civil servants and as politicians) is crucial for enabling the government to proactively address the negative societal impacts of technology.”
Kenneth Sherrill, professor emeritus of political science, Hunter College, said, “When I’m pessimistic, I believe that the fragmentation of information sources will interact with selective attention – the tendency only to follow news sources that one expects to agree with. This will generate even greater polarization without any of the moderating effects and respect for democratic processes that come from genuine participation. This can lead to the collapse of democratic processes. Right now, I’m pessimistic. The 2020 election may be the test.”
Eric Keller, lecturer in international relations and U.S. foreign policy, University of Tennessee-Knoxville, wrote, “Social media will heighten the current strong polarization that we already have. This is mainly from ‘information stovepipes’ and mutually reinforcing narratives that demonize the opposition. This creates the danger of democratic institutions being degraded in the name of ‘saving’ them from the opposing political party.”
A Europe-based internet governance advocate and activist said, “If current trends continue, there won’t be a real democracy in most countries by 2030. The internet’s funding model based on targeted advertising is destroying investigative journalism and serious reporting. More and more of what is published is fake news. Citizens cannot make informed decisions in the absence of reliable information.”
The coordinator of a public-good program in Bulgaria wrote, “By 2030 we will still see fighting between small groups and communities that leads to extremes. This will give grounds for governments to become more authoritarian and build up even stronger control via the internet.”
Bill D. Herman, researcher working at the intersection of human rights and technology said, “The combination of news fragmentation, systematic disinformation and motivated reasoning will continue to spiral outward. We’re headed for a civil war, and the hydra-headed right-wing hate machine is the root of the problem.”
An internet pioneer and technology developer and administrator said, “The foundation of democracy is an informed public. By undermining the economic foundation of journalism and enabling the distribution of disinformation on a mass scale, social media has unleashed an unprecedented assault on the foundation of democracy. The decline of newspapers, to just highlight one downside, has had a quantifiable effect (as measured in bond prices) on governmental oversight and investor trust.”
A professor and expert in learning in 3D environments said, “The explosion in the volume of information has led to the majority of people tending to rely on or trust the major platforms to filter and distribute information rather than managing their own personal learning environments with feeds from trusted independent sources. … As the filtering mechanisms become more sophisticated and more personalized to the individual, the opportunities for the wealthy to manipulate opinion will become even greater. The democratic system depends fundamentally on free access to reliable information, and once this is gone the system will effectively become less and less democratic.”
Mike Douglass, an independent developer, wrote, “Facebook sold people on the idea that a race to accumulate ‘friends’ was a good thing – then people paid attention to what those ‘friends’ said. As we now know, many of those ‘friends’ were bots or malicious actors. If we continue in this manner, then things can only get worse. We need to reestablish the real-life approach to gaining friends and acquaintances. Why should we pay any attention to people we don’t know? Unfortunately, technology allows mis/disinformation to spread at an alarming rate.”
Eric Goldman, professor and director of the High-Tech Law Institute at the Santa Clara University School of Law, commented, “Our politicians have embraced internet communications as a direct channel to lie to their constituents without the fact-checking of traditional media gatekeepers. So long as technology helps politicians lie without accountability, we have little hope of good governance.”
Janet Salmons, consultant with Vision2Lead, said, “The internet, with unregulated power in the hands of commercial entities that have little sense of social responsibility, will continue to unravel Western-style democracies and civic institutions. Companies profiting from sales of personal data or on risky practices have little self-interest in promoting the kinds of digital and advanced literacy people need to discern between fact and fiction. In the U.S., the free press and educational systems that can potentially illuminate this distinction are under siege. As a result, even when people are presented with the opportunity to vote or otherwise weigh in on decision-making, they do so from weak and uninformed positions. The lowest common denominator, the mass views based on big data, wins.”
A researcher and teacher of digital literacies and technologies said, “In the early internet days, there was a claim it would bring a democratization of power. What we’re seeing now is the powerful having larger and more overwhelming voices, taking up more of the space rather than less. This leads to polarization, rather than a free-flowing exchange of ideas. Anyone falling within the middle of a hot issue is declared a traitor by both sides of that issue and is shamed and/or pushed aside.”
An anonymous respondent commented, “Increased engagement is largely a product of the media environment, and – in places where the press is absent, restricted or has become blatantly politicized – that engagement will bear the marks of a distorted information environment.”
Responding too slowly: The speed, scope and impact of the technologies of manipulation may be difficult to overcome as the pace of change accelerates
Kathleen M. Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, said, “Disinformation and deepfakes in social media as well as the ability of individuals and media-propaganda teams to manipulate both who is and can communicate with whom and who and what they are talking about are undermining democratic principles and practice. Technological assistants such as bots, and information tools such as memes, are being used in ways that exploit features of the social media and web platforms, such as their prioritization rules, to get certain actors and information in front of people. Human cognitive biases, and our cognitive tendencies to view the world from a social or group perspective, are exploited by social media-based information maneuvers. The upshot is that traditional methods for recognizing disinformation no longer work. Strategies for mitigating disinformation campaigns as they play out across multiple media are not well understood. Global policies for 1) responding to disinformation and its creators, and 2) technical infrastructure that forces information to carry its provenance and robust scalable tools for detecting that an information campaign is underway, who is conducting it and why do not exist.”
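Carley notes that infrastructure forcing information to carry its provenance does not yet exist. The core mechanism is not exotic, though; the sketch below is a minimal thought experiment in which content travels with origin metadata and a verifiable tag. An HMAC with a shared key stands in for a real public-key signature, and all field names and key handling are invented for illustration, not an existing standard.
```python
# Minimal sketch of provenance-carrying content: a record bundles the text
# with its claimed origin and a tag that breaks if either is altered.
# hmac with a shared key is a stand-in for a proper digital signature.

import hashlib
import hmac
import json

ORIGIN_KEY = b"newsroom-signing-key"  # placeholder for a real private key

def publish(content: str, origin: str) -> dict:
    record = {"content": content, "origin": origin}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(ORIGIN_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"},
        sort_keys=True,
    ).encode()
    expected = hmac.new(ORIGIN_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

item = publish("Quote attributed to candidate X...", origin="example-news.org")
assert verify(item)            # intact record checks out
item["content"] = "Doctored quote"
assert not verify(item)        # any tampering breaks the tag
```
In a deployed scheme the tag would be a signature made with the origin's private key and checked against its public key, so no secret sharing would be needed; the unsolved problems Carley names are scale, adoption and campaign detection, not the cryptography itself.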
Jason Hong, professor in the Human-Computer Interaction Institute at Carnegie Mellon University, said, “Basically, it’s 1) easier for small groups of people to cause lots of damage (e.g., disinformation, deepfakes), and 2) easier for those already in power to use these technologies than those who need to organize. In the early days of the internet, new technologies empowered new voices, which led to a lot of utopian views. However, we’ve seen in recent years that these same technologies are now being used to entrench those already in power. We see this in the form of targeted advertising (being used for highly targeted political campaigns), analytics (being used for gerrymandering), disinformation and fake news (being used both domestically and by foreign powers, both unintentionally and intentionally) and filter bubbles where people can seek out just the information that they want to hear. All of this was possible before the internet, but it was harder because of natural barriers. We also haven’t seen the political effects of deepfakes and are just starting to see the effects of widespread surveillance by police forces.”
Mark Raymond, assistant professor of international security, University of Oklahoma, wrote, “Over the next 30 years, democracy faces at least three kinds of technology-based risks. First, actual or apparent manipulation of voting data and systems by state actors will likely undermine trust in democratic processes. Second, social media manipulation (by states and by political campaigns and other nonstate actors) will compound echo chamber effects and increase societal polarization. Decreased trust will heighten social conflict, including, but not limited to, conflict over elections. Third, ‘deepfakes’ will undermine confidence even in video-based media reports. Taken together, there is the risk that these trends could increase the willingness of voters to accept fundamentally authoritarian shifts in their politics. Absent that, it is still likely that increased polarization will make the operation of democratic systems (which are heavily dependent on mutual acceptance of informal norms) incredibly difficult.”
Emmanuel Edet, legal adviser, National Information Technology Development Agency, Nigeria, said, “The core concepts of democracy, representation, elections and tenure of government will be greatly undermined by artificial intelligence. The use of social media coupled with faceless artificial intelligence-driven opinions can manipulate popular opinion that will deny people the right to express their choice for fear of going against the crowd.”
Matt Moore, innovation manager at Disruptor’s Handbook, Sydney, Australia, said, “The issue is not that essential democratic institutions will change, it is that they will not change enough. Elections, voting, representatives, parties – none of these things will go away. They may mean more or less (likely less) than they used to. The number of democracies in the world is likely to decrease as weak or destabilised states fall into authoritarian populism. Western democracies will continue to age and grow more economically unequal. States like China will continue to grow in power, often using new technologies to control their populations. Everyone is talking up the potential of blockchain for democracy. This is mostly nonsense. The issue is not that people do not have the opportunity to vote enough. It is that no one really knows what that vote means. Many of those who vote – or rather, who do not vote – have no sense of what their vote means. Many of those who are voted for, also do not know what that vote means – which is why they rely on polling and focus groups. Deliberative democracy offers a potential new form of political engagement and decision-making – if (and this is a big ‘if’) it can be made to work beyond isolated experiments.”
Mike O’Connor, retired, a former member of the ICANN policy development community, said, “There is cause for hope – but it’s such a fragile flower compared to the relative ease with which the negative forces prevail. ‘A lie can get around the world while truth is getting its boots on’ – pick your attribution.”
A longtime technology journalist for a major U.S. news organization commented, “Our laws and Constitution are largely designed for a world that existed before the industrial age, not to mention the information age. These technologies have made the nation-state obsolete and we have not yet grasped the ways they facilitate antidemocratic forces.”
Hume Winzar, associate professor and director of the business analytics undergraduate program at Macquarie University, Sydney, Australia, said, “Corporations and government have the information and the technology to create highly targeted messages designed to favour their own agendas. We, as citizens, have demonstrated that we rarely look beyond our regular news sources, and often use easily digested surrogates for news (comedy shows, social media). We also seem to have very short memories, so what was presented as a scandal only a year ago is usual, even laudable, now. … None of this is new. The British and the U.S. have been manipulating foreign news and propaganda for many decades with great success, and the church before them. But now the scale and the speed of that manipulation is perhaps too great to combat.”
Ian Fish, ICT professional and specialist in information security based in Europe, said, “I expect the imbalance of power between the major global corporations and democratic national governments will increase to the detriment of democracy. I also expect non-democratic governments’ disruption of democratic norms to increase faster than the democracies can react.”
Puruesh Chaudhary, a futurist based in Pakistan, said, “Democracy needs to develop the capacity to negotiate in the interest of an ordinary citizen, who may not have direct influence on how key decisions play out in geopolitics but is invariably affected by it. The democratic institutions have to have systems that operate at the pace of technological advancements that have an impact on the society.”
Trust suffers when people’s infatuation with technology entices them away from human-to-human encounters
Several respondents argued there were circumstances when humans’ “slowness” was an advantage, but that technology was thwarting that side of life. They believe that a major cause of the loss of trust is the fact that many people are spending more time online in often-toxic environments than they spend in face-to-face, empathy-enabling, non-digital social situations.
Angela Campbell, professor of law and co-director, Institute for Public Representation at Georgetown University, said, “We are just seeing the beginning of how technology is undercutting democracy and social relations necessary to a democratic society. We don’t have good ways of telling what is true and what is false, what is opinion and what is fact. Most people do not yet understand how powerful technologies (especially combined with a lack of privacy protections) allow them to be manipulated. In addition, as people spend more time using technology, they spend less time interacting with other people (in person) and learning important social skills like respect and empathy.”
Yves Mathieu, co-director at Missions Publiques, Paris, France, responded, “Technology creates new forms of communication and messaging that can be very rough and divisive. Some contributors are rude and violent, posting very poor comments and insulting or threatening elected citizens. There will be a strong need for face-to-face formats, as the technologies will not allow a real process of deliberation. There will be a need for regular meetings with voters, meetings where people will have the time and the possibility to exchange arguments and increase their understanding of each other’s positions. Combined with media coverage, this will reduce the divide that we know today, as it will increase mutual understanding.”
An anonymous respondent commented, “The expanded use of technology with respect to the democratic processes will tend to weaken one of the most important aspects of democracy and the democratic processes – the use of technology instead of person-to-person dialogue seriously degrades (or removes altogether) meaningful dialogue and exchange of ideas between individuals. When individuals use technology to express their political views/opinions instead of having direct human interactions, these views tend to be more extremely stated than if that person is speaking a view/opinion to another person. Also, in many cases, if someone else expresses a different view from what the original individual expressed, the first person is much less likely to pay any attention to a view expressed using technology than if that view were expressed in a person-to-person discussion. Additionally, the increased use of technology for analyzing segments of society to ‘shape’ delivery of messages for particular segments will result in an increase of messages that distort the reality of the message or distort the results of what the message is describing.”
A futurist and consultant said, “Democracy currently has a crisis in global leadership. Without significant change in 2020, for which I am hopeful, I can’t hold a lot of hope for democracy in 2030. I’m afraid the question is not what will change, but what must change. Without changes in democratic institutions, the future of democracy itself is in question. There is an urban/rural split at work in tandem with a severe disparity in the distribution of wealth – with climate change overshadowing it all. Technology will have a hand in providing as well as impeding solutions.”
Arthur Asa Berger, professor emeritus of communications, San Francisco State University, commented, “People who use Facebook are affected in negative ways by a ‘net effect,’ in which they exhibit impulsivity, grandiosity, etc., as explained in my book, ‘Media and Communication Research Methods’ (Sage). Some young people text 100 times a day and never talk on the phone with others, leading to a radical estrangement from others and themselves. The internet is used by hate groups, neofascists, right-wing ideologues, terrorist organizations and so on.”
An anonymous U.S. policy and strategy professional said, “Technology allows the creation of a bullying environment that polarizes people to the point at which they do not attempt to understand other opinions or views, weakening public discourse and driving outrage and attacks on minority views.”
Japheth Cleaver, a systems engineer, commented, “At the moment, the major social media networks function not by neutrally and dispassionately connecting disparate communicators (like the phone system) but are designed to reinforce engagement to sell as many targeted ads as possible. This reinforcement creates resonant effects throughout a society’s culture, and in-person contextual interaction drops away in favor of the efficiencies that electronic communication offers, but without any of the risk of the like-minded ‘bubble’ being burst, as that would hurt engagement. The internet as a communications overlay is fine. The internet as a replacement for public space seems detrimental.”
Melissa Michelson, professor of political science, Menlo College, and author, “Mobilizing Inclusion: Redefining Citizenship Through Get-Out-the-Vote Campaigns,” said, “The future will include a complex interplay of increased online activity but also increased skepticism of those virtual interactions and an enhanced appreciation of offline information and conversations. As more adults are digital natives and the role of technology in society expands and becomes more interconnected, more and more aspects of democracy and political participation will take place online. At the same time, the increasing sophistication of deepfakes, including fake video, will enhance the value of face-to-face interactions as unfiltered and trustworthy sources of information.”
Anonymous respondents commented:
- “Unless there is transparency, tech will be the new digital atomic bomb – it has moved faster than individuals’ or the law’s understanding of its unintended consequences and nefarious uses.”
- “At the current rate of disregard and lack of responsibility by those who own and run large tech companies, we are headed toward a complete lack of trust in what is factual information and what is not.”
- “Public institutions move slowly and thoughtfully. People doing nefarious things move more quickly, and with the internet, this will continue to challenge us.”
- “It is the personal and social norms that we’re losing, not the technology itself, that are at the heart of many of our problems. People are a lot less civil to each other in person now than they were just a few decades ago.”
- “More access to data and records more quickly can help citizens be informed and engaged, however more information can flood the market, and people have limited capacity/time/energy to digest information.”
“Today, the military is more focused on manipulating news and commentary on the Internet, especially social media, by posting material and images without necessarily claiming ownership,” reported the Post.
The Panopticon Is Already Here
Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe.
Story by Ross Andersen
SEPTEMBER 2020 ISSUE
I visited the institute on a rainy morning in the summer of 2019. China’s best and brightest were still shuffling in post-commute, dressed casually in basketball shorts or yoga pants, AirPods nestled in their ears. In my pocket, I had a burner phone; in my backpack, a computer wiped free of data—standard precautions for Western journalists in China. To visit China on sensitive business is to risk being barraged with cyberattacks and malware. In 2019, Belgian officials on a trade mission noticed that their mobile data were being intercepted by pop-up antennae outside their Beijing hotel. After clearing the institute’s security, I was told to wait in a lobby monitored by cameras. On its walls were posters of China’s most consequential postwar leaders. Mao Zedong loomed large in his characteristic four-pocket suit. He looked serene, as though satisfied with having freed China from the Western yoke. Next to him was a fuzzy black-and-white shot of Deng Xiaoping visiting the institute in his later years, after his economic reforms had set China on a course to reclaim its traditional global role as a great power.
The lobby’s most prominent poster depicted Xi Jinping in a crisp black suit. China’s current president and the general secretary of its Communist Party has taken a keen interest in the institute. Its work is part of a grand AI strategy that Xi has laid out in a series of speeches akin to those John F. Kennedy used to train America’s techno-scientific sights on the moon. Xi has said that he wants China, by year’s end, to be competitive with the world’s AI leaders, a benchmark the country has arguably already reached. And he wants China to achieve AI supremacy by 2030.
Xi’s pronouncements on AI have a sinister edge. Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.
China’s government has a history of using major historical events to introduce and embed surveillance measures. In the run-up to the 2008 Olympics in Beijing, Chinese security services achieved a new level of control over the country’s internet. During China’s coronavirus outbreak, Xi’s government leaned hard on private companies in possession of sensitive personal data. Any emergency data-sharing arrangements made behind closed doors during the pandemic could become permanent.
China already has hundreds of millions of surveillance cameras in place. Xi’s government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens.
China’s government could soon achieve an unprecedented political stranglehold on more than 1 billion people. Early in the coronavirus outbreak, China’s citizens were subjected to a form of risk scoring. An algorithm assigned people a color code—green, yellow, or red—that determined their ability to take transit or enter buildings in China’s megacities. In a sophisticated digital system of social control, codes like these could be used to score a person’s perceived political pliancy as well.
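The real system's inputs and thresholds were never published, so any reconstruction is speculative, but a toy version shows how compactly such gating logic can be written, and how opaque it is to the person being scored. Every feature, weight and cutoff below is invented purely for illustration.
```python
# Hypothetical reconstruction of a color-code risk scorer. The features,
# weights and thresholds are invented; the real system's are not public.

def color_code(visited_outbreak_area: bool,
               contact_with_case: bool,
               days_since_last_trip: int) -> str:
    score = 0
    score += 40 if visited_outbreak_area else 0
    score += 50 if contact_with_case else 0
    score += max(0, 14 - days_since_last_trip)  # recent travel adds risk
    if score >= 50:
        return "red"      # barred from transit and most buildings
    if score >= 20:
        return "yellow"   # restricted movement
    return "green"        # free movement

print(color_code(visited_outbreak_area=True,
                 contact_with_case=False,
                 days_since_last_trip=3))  # prints "red" (score 51)
```
The point of the sketch is the article's: swap the epidemiological features for measures of "political pliancy" and the same three-color gate scores dissent instead of infection risk.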
A crude version of such a system is already in operation in China’s northwestern territory of Xinjiang, where more than 1 million Muslim Uighurs have been imprisoned, the largest internment of an ethnic-religious minority since the fall of the Third Reich. Once Xi perfects this system in Xinjiang, no technological limitations will prevent him from extending AI surveillance across China. He could also export it beyond the country’s borders, entrenching the power of a whole generation of autocrats.
China has recently embarked on a number of ambitious infrastructure projects abroad—megacity construction, high-speed rail networks, not to mention the country’s much-vaunted Belt and Road Initiative. But these won’t reshape history like China’s digital infrastructure, which could shift the balance of power between the individual and the state worldwide.
American policy makers from across the political spectrum are concerned about this scenario. Michael Kratsios, the former Peter Thiel acolyte whom Donald Trump picked to be the U.S. government’s chief technology officer, told me that technological leadership from democratic nations has “never been more imperative” and that “if we want to make sure that Western values are baked into the technologies of the future, we need to make sure we’re leading in those technologies.”
Despite China’s considerable strides, industry analysts expect America to retain its current AI lead for another decade at least. But this is cold comfort: China is already developing powerful new surveillance tools, and exporting them to dozens of the world’s actual and would-be autocracies. Over the next few years, those technologies will be refined and integrated into all-encompassing surveillance systems that dictators can plug and play. The emergence of an AI-powered authoritarian bloc led by China could warp the geopolitics of this century. It could prevent billions of people, across large swaths of the globe, from ever securing any measure of political freedom. And whatever the pretensions of American policy makers, only China’s citizens can stop it. I’d come to Beijing to look for some sign that they might.
This techno-political moment has been long in the making. China has spent all but a few centuries of its 5,000-year history at the vanguard of information technology. Along with Sumer and Mesoamerica, it was one of three places where writing was independently invented, allowing information to be stored outside the human brain. In the second century A.D., the Chinese invented paper. This cheap, bindable information-storage technology allowed data—Silk Road trade records, military communiqués, correspondence among elites—to crisscross the empire on horses bred for speed by steppe nomads beyond the Great Wall. Data began to circulate even faster a few centuries later, when Tang-dynasty artisans perfected woodblock printing, a mass-information technology that helped administer a huge and growing state.
As rulers of some of the world’s largest complex social organizations, ancient Chinese emperors well understood the relationship between information flows and power, and the value of surveillance. During the 11th century, a Song-dynasty emperor realized that China’s elegant walled cities had become too numerous to be monitored from the imperial capital, so he deputized locals to police them. A few decades before the digital era’s dawn, Chiang Kai-shek made use of this self-policing tradition, asking citizens to watch for dissidents in their midst, so that communist rebellions could be stamped out in their infancy. When Mao took over, he arranged cities into grids, making each square its own work unit, where local spies kept “sharp eyes” out for counterrevolutionary behavior, no matter how trivial. During the initial coronavirus outbreak, Chinese social-media apps promoted hotlines where people could report those suspected of hiding symptoms.
Xi has appropriated the phrase sharp eyes, with all its historical resonances, as his chosen name for the AI-powered surveillance cameras that will soon span China. With AI, Xi can build history’s most oppressive authoritarian apparatus, without the manpower Mao needed to keep information about dissent flowing to a single, centralized node. In China’s most prominent AI start-ups—SenseTime, CloudWalk, Megvii, Hikvision, iFlytek, Meiya Pico—Xi has found willing commercial partners. And in Xinjiang’s Muslim minority, he has found his test population.

The Chinese Communist Party has long been suspicious of religion, and not just as a result of Marxist influence. Only a century and a half ago—yesterday, in the memory of a 5,000-year-old civilization—Hong Xiuquan, a quasi-Christian mystic converted by Western missionaries, launched the Taiping Rebellion, an apocalyptic 14-year campaign that may have killed more people than the First World War. Today, in China’s single-party political system, religion is an alternative source of ultimate authority, which means it must be co-opted or destroyed.

By 2009, China’s Uighurs had become weary after decades of discrimination and land confiscation. They launched mass protests and a smattering of suicide attacks against Chinese police. In 2014, Xi cracked down, directing Xinjiang’s provincial government to destroy mosques and reduce Uighur neighborhoods to rubble. More than 1 million Uighurs were disappeared into concentration camps. Many were tortured and made to perform slave labor. Uighurs who were spared the camps now make up the most intensely surveilled population on Earth.

Not all of the surveillance is digital. The Chinese government has moved thousands of Han Chinese “big brothers and sisters” into homes in Xinjiang’s ancient Silk Road cities, to monitor Uighurs’ forced assimilation to mainstream Chinese culture. They eat meals with the family, and some “big brothers” sleep in the same bed as the wives of detained Uighur men.

Meanwhile, AI-powered sensors lurk everywhere, including in Uighurs’ purses and pants pockets. According to the anthropologist Darren Byler, some Uighurs buried their mobile phones containing Islamic materials, or even froze their data cards into dumplings for safekeeping, when Xi’s campaign of cultural erasure reached full tilt. But police have since forced them to install nanny apps on their new phones. The apps use algorithms to hunt for “ideological viruses” day and night. They can scan chat logs for Quran verses, and look for Arabic script in memes and other image files.

Uighurs can’t use the usual work-arounds. Installing a VPN would likely invite an investigation, so they can’t download WhatsApp or any other prohibited encrypted-chat software. Purchasing prayer rugs online, storing digital copies of Muslim books, and downloading sermons from a favorite imam are all risky activities. If a Uighur were to use WeChat’s payment system to make a donation to a mosque, authorities might take note. The nanny apps work in tandem with the police, who spot-check phones at checkpoints, scrolling through recent calls and texts. Even an innocent digital association—being in a group text with a recent mosque attendee, for instance—could result in detention. Staying off social media altogether is no solution, because digital inactivity itself can raise suspicions. The police are required to note when Uighurs deviate from any of their normal behavior patterns.
Their database wants to know if Uighurs start leaving their home through the back door instead of the front. It wants to know if they spend less time talking to neighbors than they used to. An algorithm monitors electricity use for unusual patterns, which could indicate an unregistered resident.

Uighurs can travel only a few blocks before encountering a checkpoint outfitted with one of Xinjiang’s hundreds of thousands of surveillance cameras. Footage from the cameras is processed by algorithms that match faces with snapshots taken by police at “health checks.” At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA. Some Uighurs have even been forced to participate in experiments that mine genetic data, to see how DNA produces distinctly Uighur-like chins and ears. Police will likely use the pandemic as a pretext to take still more data from Uighur bodies.
Uighur women are also made to endure pregnancy checks. Some are forced to have abortions, or get an IUD inserted. Others are sterilized by the state. Police are known to rip unauthorized children away from their parents, who are then detained. Such measures have reduced the birthrate in some regions of Xinjiang by more than 60 percent in three years.

When Uighurs reach the edge of their neighborhood, an automated system takes note. The same system tracks them as they move through smaller checkpoints, at banks, parks, and schools. When they pump gas, the system can determine whether they are the car’s owner. At the city’s perimeter, they’re forced to exit their cars, so their face and ID card can be scanned again.
The lucky Uighurs who are able to travel abroad—many have had their passports confiscated—are advised to return quickly. If they do not, police interrogators are dispatched to the doorsteps of their relatives and friends. Not that going abroad is any kind of escape: In a chilling glimpse at how a future authoritarian bloc might function, Xi’s strongman allies—even those in Muslim-majority countries such as Egypt—have been more than happy to arrest and deport Uighurs back to the open-air prison that is Xinjiang.
Xi seems to have used Xinjiang as a laboratory to fine-tune the sensory and analytical powers of his new digital panopticon before expanding its reach across the mainland. CETC, the state-owned company that built much of Xinjiang’s surveillance system, now boasts of pilot projects in Zhejiang, Guangdong, and Shenzhen. These are meant to lay “a robust foundation for a nationwide rollout,” according to the company, and they represent only one piece of China’s coalescing mega-network of human-monitoring technology.
China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.

All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.

Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.
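The matching step at the core of a system like the one the activist found is well understood in the open literature. Below is a toy sketch, in Python with NumPy, of one common approach: compare a face embedding captured at a camera against a gallery of enrolled embeddings keyed by ID number. The embedding model, the similarity threshold, and the gallery keys are placeholders, not details of any deployed Chinese system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the enrolled ID whose stored embedding best matches the
    probe embedding, or None if nothing clears the placeholder threshold."""
    best_id, best_score = None, threshold
    for person_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Once a telecom face scan ties an embedding to a national-ID number, a single match at any camera links the sighting to everything else filed under that ID, which is what makes the integration the activist glimpsed so consequential.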
City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.
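As a toy illustration of that benign end of the spectrum, here is a short Python sketch of the red-light-timing idea: count cars per intersection from camera detections and stretch the green phase where queues build. The detection stream, constants, and intersection names are invented for the example.

```python
from collections import Counter

def retime_lights(detections, base_green=30.0, per_car=0.5, cap=90.0):
    """Count cars per intersection from a stream of (intersection, kind)
    detections and lengthen the green phase where traffic is heaviest."""
    counts = Counter(site for site, kind in detections if kind == "car")
    return {site: min(base_green + per_car * n, cap)
            for site, n in counts.items()}

# Example: a busy intersection gets a longer green than a quiet one.
plan = retime_lights([("5th_and_main", "car")] * 40 +
                     [("riverside", "car")] * 8)
# {'5th_and_main': 50.0, 'riverside': 34.0}
```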
But City Brain and its successor technologies will also enable new forms of integrated surveillance. Some of these will enjoy broad public support: City Brain could be trained to spot lost children, or luggage abandoned by tourists or terrorists. It could flag loiterers, or homeless people, or rioters. Anyone in any kind of danger could summon help by waving a hand in a distinctive way that would be instantly recognized by ever-vigilant computer vision. Earpiece-wearing police officers could be directed to the scene by an AI voice assistant.
City Brain would be especially useful in a pandemic. (One of Alibaba’s sister companies created the app that color-coded citizens’ disease risk, while silently sending their health and travel data to police.) As Beijing’s outbreak spread, some malls and restaurants in the city began scanning potential customers’ phones, pulling data from mobile carriers to see whether they’d recently traveled. Mobile carriers also sent municipal governments lists of people who had come to their city from Wuhan, where the coronavirus was first detected. And Chinese AI companies began making networked facial-recognition helmets for police, with built-in infrared fever detectors, capable of sending data to the government. City Brain could automate these processes, or integrate its data streams.
Even China’s most complex AI systems are still brittle. City Brain hasn’t yet fully integrated its range of surveillance capabilities, and its ancestor systems have suffered some embarrassing performance issues: In 2018, one of the government’s AI-powered cameras mistook a face on the side of a city bus for a jaywalker. But the software is getting better, and there’s no technical reason it can’t be implemented on a mass scale.

The data streams that could be fed into a City Brain–like system are essentially unlimited. In addition to footage from the 1.9 million facial-recognition cameras that the Chinese telecom firm China Tower is installing in cooperation with SenseTime, City Brain could absorb feeds from cameras fastened to lampposts and hanging above street corners. It could make use of the cameras that Chinese police hide in traffic cones, and those strapped to officers, both uniformed and plainclothes. The state could force retailers to provide data from in-store cameras, which can now detect the direction of your gaze across a shelf, and which could soon see around corners by reading shadows. Precious little public space would be unwatched.

America’s police departments have begun to avail themselves of footage from Amazon’s home-security cameras. In their more innocent applications, these cameras adorn doorbells, but many are also aimed at neighbors’ houses. China’s government could harvest footage from equivalent Chinese products. They could tap the cameras attached to ride-share cars, or the self-driving vehicles that may soon replace them: Automated vehicles will be covered in a whole host of sensors, including some that will take in information much richer than 2-D video. Data from a massive fleet of them could be stitched together, and supplemented by other City Brain streams, to produce a 3-D model of the city that’s updated second by second. Each refresh could log every human’s location within the model. Such a system would make unidentified faces a priority, perhaps by sending drone swarms to secure a positive ID. The model’s data could be time-synced to audio from any networked device with a microphone, including smart speakers, smartwatches, and less obvious Internet of Things devices like smart mattresses, smart diapers, and smart sex toys. All of these sources could coalesce into a multitrack, location-specific audio mix that could be parsed by polyglot algorithms capable of interpreting words spoken in thousands of tongues. This mix would be useful to security services, especially in places without cameras: China’s iFlytek is perfecting a technology that can recognize individuals by their “voiceprint.”

In the decades to come, City Brain or its successor systems may even be able to read unspoken thoughts. Drones can already be controlled by helmets that sense and transmit neural signals, and researchers are now designing brain-computer interfaces that go well beyond autofill, to allow you to type just by thinking. An authoritarian state with enough processing power could force the makers of such software to feed every blip of a citizen’s neural activity into a government database. China has recently been pushing citizens to download and use a propaganda app. The government could use emotion-tracking software to monitor reactions to a political stimulus within an app. A silent, suppressed response to a meme or a clip from a Xi speech would be a meaningful data point to a precog algorithm.
All of these time-synced feeds of on-the-ground data could be supplemented by footage from drones, whose gigapixel cameras can record whole cityscapes in the kind of crystalline detail that allows for license-plate reading and gait recognition. “Spy bird” drones already swoop and circle above Chinese cities, disguised as doves. City Brain’s feeds could be synthesized with data from systems in other urban areas, to form a multidimensional, real-time account of nearly all human activity within China. Server farms across China will soon be able to hold multiple angles of high-definition footage of every moment of every Chinese person’s life.

It’s important to stress that systems of this scope are still in development. Most of China’s personal data are not yet integrated together, even within individual companies. Nor does China’s government have a one-stop data repository, in part because of turf wars between agencies. But there are no hard political barriers to the integration of all these data, especially for the security state’s use. To the contrary, private firms are required, by formal statute, to assist China’s intelligence services.
The government might soon have a rich, auto-populating data profile for all of its 1 billion–plus citizens. Each profile would comprise millions of data points, including the person’s every appearance in surveilled space, as well as all of her communications and purchases. Her threat risk to the party’s power could constantly be updated in real time, with a more granular score than those used in China’s pilot “social credit” schemes, which already aim to give every citizen a public social-reputation score based on things like social-media connections and buying habits. Algorithms could monitor her digital data score, along with everyone else’s, continuously, without ever feeling the fatigue that hit Stasi officers working the late shift. False positives—deeming someone a threat for innocuous behavior—would be encouraged, in order to boost the system’s built-in chilling effects, so that she’d turn her sharp eyes on her own behavior, to avoid the slightest appearance of dissent.
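A deliberately simplified Python sketch of what such a continuously updated score could look like appears below. The event names, weights, decay constant, and threshold are all invented for illustration; the point is only that the update is a single cheap arithmetic step, so it can run on every data point, for every citizen, indefinitely.

```python
# Hypothetical event weights; a real system's features are not public.
WEIGHTS = {
    "mosque_visit": 2.0,
    "vpn_detected": 3.0,
    "contact_with_flagged_person": 1.5,
    "propaganda_app_inactive": 1.0,
}

def update_score(score: float, event: str, decay: float = 0.99) -> float:
    """Streaming update: decay the old score, add the new event's weight.
    One arithmetic step per data point, so the profile never sleeps."""
    return score * decay + WEIGHTS.get(event, 0.0)

def flag_for_review(score: float, threshold: float = 5.0) -> bool:
    # A low threshold deliberately over-flags. As the article notes,
    # false positives would strengthen the system's chilling effect.
    return score >= threshold
```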
If her risk factor fluctuated upward—whether due to some suspicious pattern in her movements, her social associations, her insufficient attention to a propaganda-consumption app, or some correlation known only to the AI—a purely automated system could limit her movement. It could prevent her from purchasing plane or train tickets. It could disallow passage through checkpoints. It could remotely commandeer “smart locks” in public or private spaces, to confine her until security forces arrived.

In recent years, a few members of the Chinese intelligentsia have sounded the warning about misused AI, most notably the computer scientist Yi Zeng and the philosopher Zhao Tingyang. In the spring of 2019, Yi published “The Beijing AI Principles,” a manifesto on AI’s potential to interfere with autonomy, dignity, privacy, and a host of other human values.
It was Yi whom I’d come to visit at Beijing’s Institute of Automation, where, in addition to his work on AI ethics, he serves as the deputy director of the Research Center for Brain-Inspired Intelligence. He retrieved me from the lobby. Yi looked young for his age, 37, with kind eyes and a solid frame slimmed down by black sweatpants and a hoodie.
On the way to Yi’s office, we passed one of his labs, where a research assistant hovered over a microscope, watching electrochemical signals flash neuron-to-neuron through mouse-brain tissue. We sat down at a long table in a conference room adjoining his office, taking in the gray, fogged-in cityscape while his assistant fetched tea.
I asked Yi how “The Beijing AI Principles” had been received. “People say, ‘This is just an official show from the Beijing government,’” he told me. “But this is my life’s work.”

Yi talked freely about AI’s potential misuses. He mentioned a project deployed to a select group of Chinese schools, where facial recognition was used to track not just student attendance but also whether individual students were paying attention.

“I hate that software,” Yi said. “I have to use that word: hate.”

He went on like this for a while, enumerating various unethical applications of AI. “I teach a course on the philosophy of AI,” he said. “I tell my students that I hope none of them will be involved in killer robots. They have only a short time on Earth. There are many other things they could be doing with their future.”

Yi clearly knew the academic literature on tech ethics cold. But when I asked him about the political efficacy of his work, his answers were less compelling. “Many of us technicians have been invited to speak to the government, and even to Xi Jinping, about AI’s potential risks,” he said. “But the government is still in a learning phase, just like other governments worldwide.”

“Do you have anything stronger than that consultative process?” I asked. “Suppose there are times when the government has interests that are in conflict with your principles. What mechanism are you counting on to win out?”

“I, personally, am still in a learning phase on that problem,” Yi said.

Chinese AI start-ups aren’t nearly as bothered. Several are helping Xi develop AI for the express purpose of surveillance. The combination of China’s single-party rule and the ideological residue of central planning makes party elites powerful in every domain, especially the economy. But in the past, the connection between the government and the tech industry was discreet. Recently, the Chinese government started assigning representatives to tech firms, to augment the Communist Party cells that exist within large private companies.

Selling to the state security services is one of the fastest ways for China’s AI start-ups to turn a profit. A national telecom firm is the largest shareholder of iFlytek, China’s voice-recognition giant. Synergies abound: When police use iFlytek’s software to monitor calls, state-owned newspapers provide favorable coverage. Earlier this year, the personalized-news app Toutiao went so far as to rewrite its mission to articulate a new animating goal: aligning public opinion with the government’s wishes. Xu Li, the CEO of SenseTime, recently described the government as his company’s “largest data source.”

Whether any private data can be ensured protection in China isn’t clear, given the country’s political structure. The digital revolution has made data monopolies difficult to avoid. Even in America, which has a sophisticated tradition of antitrust enforcement, the citizenry has not yet summoned the will to force information about the many out of the hands of the powerful few. But private data monopolies are at least subject to the sovereign power of the countries where they operate. A nation-state’s data monopoly can be prevented only by its people, and only if they possess sufficient political power.

China’s people can’t use an election to rid themselves of Xi. And with no independent judiciary, the government can make an argument, however strained, that it ought to possess any information stream, so long as threats to “stability” could be detected among the data points.
Or it can demand data from companies behind closed doors, as happened during the initial coronavirus outbreak. No independent press exists to leak news of these demands to.

Each time a person’s face is recognized, or her voice recorded, or her text messages intercepted, this information could be attached, instantly, to her government-ID number, police records, tax returns, property filings, and employment history. It could be cross-referenced with her medical records and DNA, of which the Chinese police boast they have the world’s largest collection.

Yi and I talked through a global scenario that has begun to worry AI ethicists and China-watchers alike. In this scenario, most AI researchers around the world come to recognize the technology’s risks to humanity, and develop strong norms around its use. All except for one country, which makes the right noises about AI ethics, but only as a cover. Meanwhile, this country builds turnkey national surveillance systems, and sells them to places where democracy is fragile or nonexistent. The world’s autocrats are usually felled by coups or mass protests, both of which require a baseline of political organization. But large-scale political organization could prove impossible in societies watched by pervasive automated surveillance.
Yi expressed worry about this scenario, but he did not name China specifically. He didn’t have to: The country is now the world’s leading seller of AI-powered surveillance equipment. In Malaysia, the government is working with Yitu, a Chinese AI start-up, to bring facial-recognition technology to Kuala Lumpur’s police as a complement to Alibaba’s City Brain platform. Chinese companies also bid to outfit every one of Singapore’s 110,000 lampposts with facial-recognition cameras.
In South Asia, the Chinese government has supplied surveillance equipment to Sri Lanka. On the old Silk Road, the Chinese company Dahua is lining the streets of Mongolia’s capital with AI-assisted surveillance cameras. Farther west, in Serbia, Huawei is helping set up a “safe-city system,” complete with facial-recognition cameras and joint patrols conducted by Serbian and Chinese police aimed at helping Chinese tourists to feel safe.
In the early aughts, the Chinese telecom titan ZTE sold Ethiopia a wireless network with built-in backdoor access for the government. In a later crackdown, dissidents were rounded up for brutal interrogations, during which they were played audio from recent phone calls they’d made. Today, Kenya, Uganda, and Mauritius are outfitting major cities with Chinese-made surveillance networks.
In Egypt, Chinese developers are looking to finance the construction of a new capital. It’s slated to run on a “smart city” platform similar to City Brain, although a vendor has not yet been named. In southern Africa, Zambia has agreed to buy more than $1 billion in telecom equipment from China, including internet-monitoring technology. China’s Hikvision, the world’s largest manufacturer of AI-enabled surveillance cameras, has an office in Johannesburg.

China uses “predatory lending to sell telecommunications equipment at a significant discount to developing countries, which then puts China in a position to control those networks and their data,” Michael Kratsios, America’s CTO, told me. When countries need to refinance the terms of their loans, China can make network access part of the deal, in the same way that its military secures base rights at foreign ports it finances. “If you give [China] unfettered access to data networks around the world, that could be a serious problem,” Kratsios said.

In 2018, CloudWalk Technology, a Guangzhou-based start-up spun out of the Chinese Academy of Sciences, inked a deal with the Zimbabwean government to set up a surveillance network. Its terms require Harare to send images of its inhabitants—a rich data set, given that Zimbabwe has absorbed migration flows from all across sub-Saharan Africa—back to CloudWalk’s Chinese offices, allowing the company to fine-tune its software’s ability to recognize dark-skinned faces, which have previously proved tricky for its algorithms.
Having set up beachheads in Asia, Europe, and Africa, China’s AI companies are now pushing into Latin America, a region the Chinese government describes as a “core economic interest.” China financed Ecuador’s $240 million purchase of a surveillance-camera system. Bolivia, too, has bought surveillance equipment with help from a loan from Beijing. Venezuela recently debuted a new national ID-card system that logs citizens’ political affiliations in a database built by ZTE. In a grim irony, for years Chinese companies hawked many of these surveillance products at a security expo in Xinjiang, the home province of the Uighurs.
If China is able to surpass America in AI, it will become a more potent geopolitical force, especially as the standard-bearer of a new authoritarian alliance.
China already has some of the world’s largest data sets to feed its AI systems, a crucial advantage for its researchers. In cavernous mega-offices in cities across the country, low-wage workers sit at long tables for long hours, transcribing audio files and outlining objects in images, to make the data generated by China’s massive population more useful. But for the country to best America’s AI ecosystem, its vast troves of data will have to be sifted through by algorithms that recognize patterns well beyond those grasped by human insight. And even executives at China’s search giant Baidu concede that the top echelon of AI talent resides in the West.
Historically, China struggled to retain elite quants, most of whom left to study in America’s peerless computer-science departments, before working at Silicon Valley’s more interesting, better-resourced companies. But that may be changing. The Trump administration has made it difficult for Chinese students to study in the United States, and those who are able to are viewed with suspicion. A leading machine-learning scientist at Google recently described visa restrictions as “one of the largest bottlenecks to our collective research productivity.”
Meanwhile, Chinese computer-science departments have gone all-in on AI. Three of the world’s top 10 AI universities, in terms of the volume of research they publish, are now located in China. And that’s before the country finishes building the 50 new AI research centers mandated by Xi’s “AI Innovation Action Plan for Institutions of Higher Education.” Chinese companies attracted 36 percent of global AI private-equity investment in 2017, up from just 3 percent in 2015. Talented Chinese engineers can stay home for school and work for a globally sexy homegrown company like TikTok after graduation.
China will still lag behind America in computing hardware in the near term. Just as data must be processed by algorithms to be useful, algorithms must be instantiated in physical strata—specifically, in the innards of microchips. These gossamer silicon structures are so intricate that a few missing atoms can reroute electrical pulses through the chips’ neuronlike switches. The most sophisticated chips are arguably the most complex objects yet built by humans. They’re certainly too complex to be quickly pried apart and reverse-engineered by China’s vaunted corporate-espionage artists.
Chinese firms can’t yet build the best of the best chip-fabrication rooms, which cost billions of dollars and rest on decades of compounding institutional knowledge. Nitrogen-cooled and seismically isolated, to prevent a passing truck’s rumble from ruining a microchip in vitro, these automated rooms are as much a marvel as their finished silicon wafers. And the best ones are still mostly in the United States, Western Europe, Japan, South Korea, and Taiwan.
America’s government is still able to limit the hardware that flows into China, a state of affairs that the Communist Party has come to resent. When the Trump administration banned the sale of microchips to ZTE in April 2018, Frank Long, an analyst who specializes in China’s AI sector, described it as a wake-up call for China on par with America’s experience of the Arab oil embargo.
But the AI revolution has dealt China a rare leapfrogging opportunity. Until recently, most chips were designed with flexible architecture that allows for many types of computing operations. But AI runs fastest on custom chips, like those Google uses for its cloud computing to instantly spot your daughter’s face in thousands of photos. (Apple performs many of these operations on the iPhone with a custom neural-engine chip.) Because everyone is making these custom chips for the first time, China isn’t as far behind: Baidu and Alibaba are building chips customized for deep learning. And in August 2019, Huawei unveiled a mobile machine-learning chip. Its design came from Cambricon, perhaps the global chip-making industry’s most valuable start-up, which was founded by Yi’s colleagues at the Chinese Academy of Sciences.
By 2030, AI supremacy might be within range for China. The country will likely have the world’s largest economy, and new money to spend on AI applications for its military. It may have the most sophisticated drone swarms. It may have autonomous weapons systems that can forecast an adversary’s actions after a brief exposure to a theater of war, and make battlefield decisions much faster than human cognition allows. Its missile-detection algorithms could void America’s first-strike nuclear advantage. AI could upturn the global balance of power.
On my way out of the Institute of Automation, Yi took me on a tour of his robotics lab. In the high-ceilinged room, grad students fiddled with a giant disembodied metallic arm and a small humanoid robot wrapped in a gray exoskeleton while Yi told me about his work modeling the brain. He said that understanding the brain’s structure was the surest way to understand the nature of intelligence.
I asked Yi how the future of AI would unfold. He said he could imagine software modeled on the brain acquiring a series of abilities, one by one. He said it could achieve some semblance of self-recognition, and then slowly become aware of the past and the future. It could develop motivations and values. The final stage of its assisted evolution would come when it understood other agents as worthy of empathy.
I asked him how long this process would take.
“I think such a machine could be built by 2030,” Yi said.
Before bidding Yi farewell, I asked him to imagine things unfolding another way. “Suppose you finish your digital, high-resolution model of the brain,” I said. “And suppose it attains some rudimentary form of consciousness. And suppose, over time, you’re able to improve it, until it outperforms humans in every cognitive task, with the exception of empathy. You keep it locked down in safe mode until you achieve that last step. But then one day, the government’s security services break down your office door. They know you have this AI on your computer. They want to use it as the software for a new hardware platform, an artificial humanoid soldier. They’ve already manufactured a billion of them, and they don’t give a damn if they’re wired with empathy. They demand your password. Do you give it to them?”
“I would destroy my computer and leave,” Yi said.
“Really?” I replied.
“Yes, really,” he said. “At that point, it would be time to quit my job and go focus on robots that create art.”
If you were looking for a philosopher-king to chart an ethical developmental trajectory for AI, you could do worse than Yi. But the development path of AI will be shaped by overlapping systems of local, national, and global politics, not by a wise and benevolent philosopher-king. That’s why China’s ascent to AI supremacy is such a menacing prospect: The country’s political structure encourages, rather than restrains, this technology’s worst uses.
Even in the U.S., a democracy with constitutionally enshrined human rights, Americans are struggling mightily to prevent the emergence of a public-private surveillance state. But at least America has political structures that stand some chance of resistance. In China, AI will be restrained only according to the party’s needs.
It was nearly noon when I finally left the institute. The day’s rain was in its last hour. Yi ordered me a car and walked me to meet it, holding an umbrella over my head. I made my way to the Forbidden City, Beijing’s historic seat of imperial power. Even this short trip to the city center brought me into contact with China’s surveillance state. Before entering Tiananmen Square, both my passport and my face were scanned, an experience I was becoming numb to.
In the square itself, police holding body-size bulletproof shields jogged in single-file lines, weaving paths through throngs of tourists. The heavy police presence was a chilling reminder of the student protesters who were murdered here in 1989. China’s AI-patrolled Great Firewall was built, in part, to make sure that massacre is never discussed on its internet. To dodge algorithmic censors, Chinese activists rely on memes—Tank Man approaching a rubber ducky—to commemorate the students’ murder.
The party’s AI-powered censorship extends well beyond Tiananmen. Earlier this year, the government arrested Chinese programmers who were trying to preserve disappeared news stories about the coronavirus pandemic. Some of the articles in their database were banned because they were critical of Xi and the party. They survived only because internet users reposted them on social media, interlaced with coded language and emojis designed to evade algorithms. Work-arounds of this sort are short-lived: Xi’s domestic critics used to make fun of him with images of Winnie the Pooh, but those too are now banned in China. The party’s ability to edit history and culture, by force, will become more sweeping and precise, as China’s AI improves.
Wresting power from a government that so thoroughly controls the information environment will be difficult. It may take a million acts of civil disobedience, like the laptop-destroying scenario imagined by Yi. China’s citizens will have to stand with their students. Who can say what hardships they may endure?
China’s citizens don’t yet seem to be radicalized against surveillance. The pandemic may even make people value privacy less, as one early poll in the U.S. suggests. So far, Xi is billing the government’s response as a triumphant “people’s war,” another old phrase from Mao, referring to the mobilization of the whole population to smash an invading force. The Chinese people may well be more pliant now than they were before the virus.
But evidence suggests that China’s young people—at least some of them—resented the government’s initial secrecy about the outbreak. For all we know, some new youth movement on the mainland is biding its time, waiting for the right moment to make a play for democracy. The people of Hong Kong certainly sense the danger of this techno-political moment. The night before I arrived in China, more than 1 million protesters had poured into the island’s streets. (The free state newspaper in my Beijing hotel described them, falsely, as police supporters.) A great many held umbrellas over their heads, in solidarity with student protesters from years prior, and to keep their faces hidden. A few tore down a lamppost on the suspicion that it contained a facial-recognition camera. Xi has since tightened his grip on the region with a “national-security law,” and there is little that outnumbered Hong Kongers can do about it, at least not without help from a movement on the mainland.
During my visit to Tiananmen Square, I didn’t see any protesters. People mostly milled about peacefully, posing for selfies with the oversize portrait of Mao. They held umbrellas, but only to keep the August sun off their faces. Walking in their midst, I kept thinking about the contingency of history: The political systems that constrain a technology during its early development profoundly shape our shared global future. We have learned this from our adventures in carbon-burning. Much of the planet’s political trajectory may depend on just how dangerous China’s people imagine AI to be in the hands of centralized power. Until they secure their personal liberty, at some unimaginable cost, free people everywhere will have to hope against hope that the world’s most intelligent machines are made elsewhere.
Considering that Democrats and their allies in the media spent 3 years trying to illegally impeach Trump, it really shouldn’t come as a surprise that they are now trying to illegally steal the election.
But when all is finally said and done, and after all the lawsuits and recounts, there may be one message that stands out above all else. Trump has once again exposed yet another corrupt American institution.
Regardless of what you may think of Trump’s policies or governing style, one thing is clear. He has exposed more deep-rooted government fraud in 4 years than anyone else over the past 200 years.
Trump was forced to take on a corrupt FBI that had set out to destroy his presidency with the so-called “insurance policy” that was exposed in text messages between disgraced FBI agent Peter Strzok and his lover, FBI lawyer Lisa Page.
Before Trump took office, the FBI was held in high regard, especially among Republican rank-and-file voters. But now, as the FBI has been shown to be a totally corrupt and political operation, virtually no Republican has a positive view of the organization. Many Trump voters would be happy to see the FBI dismantled and a new organization put in its place, something I personally agree with.
Next is the American intelligence apparatus. Together with British intelligence, the CIA and their allies peddled the phony “Steele Dossier.” This was a total intelligence psy-op from the start, and the CIA was knee-deep in it. Just as with the FBI, Trump exposed the true role of American intelligence agencies in the modern age. He showed that they are also nothing but political enforcers, gathering and manufacturing dirt on political enemies within the United States. Whether you try to oppose their endless wars or expose their drug trafficking, the CIA will take you down if you dare speak out against them. But now, Americans know the truth and there is no longer any debate, and they can thank Trump for that.
Of course, this list wouldn’t be complete without mentioning the media. Trump’s biggest accomplishment may very well be his destruction and exposure of the mainstream news media. A once-trusted fourth estate now lies in self-inflicted ruin. The media has a lower trust score than Congress, if you can believe that, and it’s all because of Trump. He exposed the fraud that is mainstream media, and they will never regain the stature or control they once had.
Finally, we have big tech. People loved social media and big tech companies, but Trump exposed what they truly are. Instead of being places to share cat videos and silly memes, social media companies are actually run by power-hungry oligarchs set on controlling the flow of information in free societies. Trump exposed their rampant censorship and manipulation. Now, even those who despise Trump are forced to agree that social media companies are indeed a problem, and their current status is not conducive to a free flow of information and ideas.
So as we now move into the coming weeks and months, where widespread voter fraud will be exposed via the courts and recounts, Trump will once again be doing what he has done for the last 4 years. He will be exposing yet another corrupt institution in America, and this may be his most important undertaking yet: exposing the fraudulent nature of the very foundation of democracy, our voting system.
Note: If you enjoyed this article, please make sure to share it!
For More Information on this Topic see the following Post: