
=Series Introduction=

Nanette S. Levinson, American University

International communication (IC) as a distinct area of study has grown dramatically in the last 50 or so years, and especially since the emergence of the internet. It is multidisciplinary in nature, bringing together an array of approaches and methodologies. The breadth and vibrancy of its research agendas and the attention given to its topics and concepts (especially since 1998) indicate its increasing centrality to the study of international affairs. Within the ISA itself, the International Communication section wrote its charter in 1999 to become an official, self-standing section. During this time, other professional associations that incorporate the study of international relations and/or comparative politics have welcomed new sections dealing specifically with international communication. There is also profitable and exciting intellectual overlap across sections, as is evident in reading this compendium's essays as a whole. The essays from this section, published here in alphabetical order, provide strong evidence of this growth and vibrancy. They also reveal the multiple disciplines included in the study of international communication. A representative group of these disciplines includes anthropology, communication, computer and information science, economics, geography, international relations, law, linguistics, political science, public administration, public policy, psychology, and sociology. What does this multidisciplinarity actually mean? One can argue that no single degree program possesses sole legitimacy for research in the field. (At the master's level, a growing number of universities around the world offer degrees in international communication or in similarly named fields such as global communication.)
Reading the biosketches of the authors writing essays on international communication reveals the range of disciplines covered as well as the global nature of the field and its research foci today. This does not mean that all research in the field is global in nature, nor that any given author or researcher works across several disciplines. The essays also indicate that various disciplines were more or less central, or even present, at various points in time. Finally, recent work in the field encompasses research that employs critical approaches and emphasizes consideration of class, gender, and race as well as ethical perspectives.

To provide a more nuanced and thematically sequenced overview of the field, I suggest the following order for reading the essays from this section. In “A History of International Communication Studies,” Elizabeth C. Hanson provides an extraordinary view and strong analysis of the history of IC, a perfect place to begin. Painting a picture of the field as a “loose topical confederation,” she illustrates its research traditions and highlights its leitmotifs and fractal patterns, including overlaps with related fields and research approaches. She even brings the history up to date with a treatment of digital divides and research on related inequalities. Joachim K. Rennstich, in his essay “World System in the Information Age,” uses a network approach to examine the world system and information's role in its development. This work adds to the broad foundation for reading in more detail about a range of topics in our field, from technology standards to network economics and beyond. Focusing specifically on the “Economics of International Communication,” Stefan H. Fritsch highlights the significance of a political economy approach in the field of IC as well as the economic importance of communication. He examines research on the role of multinational corporations and international organizations, along with work on the role of states.
His discussion of political economy and global governance as well as technology standards presages well the essays on standards and on internet governance. Also stemming from a political economy approach, J.P. Singh's treatment of “International Communication Regimes” encompasses an overview of theories of regimes and sets forth research on telecommunications regimes in particular and in historical perspective. It also provides a regime chronology and introduces the topics of internet governance and e-commerce, each of which is the subject of its own essay in this compendium. Jeffrey A. Hart, in his essay on “Technology Standards in International Communication,” examines the politics of standards in the world economy with a special focus on communication technology standards. His comprehensive overview includes research from international relations scholars as well as economics and political economy scholars, thus providing a useful catalog of research approaches in this area. Turning to the next essay on “Technology and Development in International Communication,” Nanette S. Levinson adds a discussion of developing countries and the roles of information and communication technologies. This essay categorizes the various research foci and methods for viewing development from early models/paradigms such as in modernization or dependency theory to recent approaches including participatory development and multistakeholderism. It illustrates the research progression from a focus solely on the nation-state to a more wide-angle focus. Research with this broader view includes consideration of civil society, international organizations, and the private sector as well as state actors; it also introduces consideration of culture. 
Laura Roselle, in her “Foreign Policy and Communication” essay, sets forth a broad-brushed and vital definition of contemporary foreign policy; the work she reviews covers not just a traditional domestic foreign policy viewpoint but also a global and more inclusive view. In a way, her essay echoes the history of the foreign policy field itself. She covers domestic determinants of foreign policy and the roles of mass media, and tracks current and emerging research trends. These new trends include the possible effects of new media on foreign policy making, effects that contribute to the complexity of this review topic today. One example provided in this essay is research focusing on new media and their possible roles in the “framing” of issues and options. Writing about “International Communication in Social Movements and Interest Groups,” Kenneth Rogerson analyzes emerging research on political advocacy and societal change. Again, there is the inclusion of both domestically focused and cross-national research. He captures the excitement and vibrancy of new technology-related opportunities with research on the blogosphere, mobile technologies, and other new channels of advocacy and change. As Rogerson observes, the addition of international communication to international relations allows for greater explanatory power and potential in this research arena. In “The Information and Communication Revolution and International Relations,” Jonathan D. Aronson and Peter Cowhey deal directly with the last two decades in information/communication technology, the global economy, and international relations. In so doing, they also set the scene for other essays in this compendium, including those on foreign policy, economics of international communication, and even entertainment technologies. They particularly examine and assess the role of states, and of the United States, in communication policy making.
In her broad treatment of the “Global Knowledge Society and Information Technology,” Shalini Venturelli examines research from a range of disciplines, including those outside of the humanities and social sciences. This essay explores how nations develop new knowledge and the related, significant policy implications. From knowledge production to knowledge transfer, she highlights research that deals with creativity, culture, and cultural environment. Again, the theme of state and nonstate actors emerges in this examination of the knowledge society, past, present, and future. Priscilla M. Regan's essay on “Global Privacy Issues” highlights a key issue set related to the global knowledge society. This issue set focuses on public–private boundaries, includes a human rights research perspective, and involves intellectual, social, cultural, and cross-national legal dimensions of privacy-related debates and research. Privacy issues pose challenges, as Regan discusses, for a related research arena in international communication, one that is increasingly taking center stage in practice and policy debates. Gabriel Weimann, in his cutting-edge essay on “Terrorism and Counterterrorism on the Internet,” provides an analysis of terror conceptualizations as well as research on the use of the internet by terrorists. Beginning with definitions of cyberterrorism in the literature, he goes on to analyze counterterrorism and its numerous challenges. This essay vividly illustrates why international communication research is so important today, as well as the key overlaps with other related research areas in this section of the compendium. The challenges highlighted in the Regan and Weimann essays set the scene for a different but related set of problems and research opportunities. Milton Mueller's essay on “Internet Governance” portrays the internet itself as a subject of political activity by many actors, traditional or not, and as a subject of policy debate both domestically and internationally.
Noting the wide range of research fields involved and their topics of study, this essay highlights existing and new actors, institutions, and policy spaces. It also creatively addresses research methods and topics needing attention (such as those also covered in other essays, including cybersecurity, privacy, surveillance, identity, and intellectual property). Moving from governance //of// the internet to government //using// the internet, Gisela Gil-Egui's essay on “E-government” focuses on research related to various governments’ deployment of information and communication technologies and, more recently, the internet to conduct business both internally (analogous to business-to-business in the private sector) and with their clients, serving citizens better by providing services via the internet. She highlights the move in the literature from a supply-side or technology-push perspective to a more “demand-side” or user perspective; she also notes an increase in theory-based literature in this emerging international communication studies arena. Another “E” research area in the field is e-commerce. Sarah Cleeland Knight and Catherine L. Mann, in their essay on “Electronic Commerce,” provide definitions of this area as well as its historical context. They analyze the key intellectual and social dimensions related to its study. Using the global (international institutions), nation-state, and individual firm or person levels of analysis, these authors explore a wide range of electronic commerce research and identify key trends. Yet another “E” area of emerging research interest is that of “Entertainment Technologies.” In his essay reviewing research on video games and information technology-enabled social networking environments, Craig Hayden includes critical scholarship in his analyses and emphasizes the key connections to international studies, including political economy and international politics.
He even connects video games to the emerging research on public diplomacy, virtual worlds, and social networking technologies. It may be especially interesting to read this essay alongside the earlier mentioned essay on “International Communication in Social Movements and Interest Groups.” Derrick L. Cogburn's essay on “Computer-Mediated Communication Technology and Cross-National Learning” serves as a stepping-off point for considering another related area of emerging research: work on cross-national learning using information and communication technologies. It also provides possible answers to some of the issues highlighted in other essays related to international communication. Discussing research on collaboratories and cyberinfrastructure, this essay emphasizes the potential of collaboratories for bridging digital divides and underlines the importance of trust and of culture as two variables that can make a difference in the success of such cross-national collaborative learning. The essay goes on to discuss recent research on virtual cross-national teams and highlights significant infrastructure issues. While this introductory essay focuses primarily on the essays covering research in the field of international communication, there are important implications for teaching and learning in our field, including the teaching of research methods. (The Cogburn essay provides an important foundation here.) Many opportunities lie ahead for studying pedagogical issues and methodological approaches that match the complexity and change of our evolving field. The very pace of change for communication-related technologies, embedded in ecosystems that themselves often exhibit patterns of imbrication, poses special research challenges, both temporal and substantive. (Consider also the uncertainties, and their consequences, that such changes generate.)
This calls for recognition of the roles of time, innovation, and uncertainties in our work, amplified by our more traditional foci on power, culture, and related elements. The centrality of communication and information-related technologies (including those fostering social media) to our work requires innovative ways of theorizing and researching that capture networks and related co-processes and concomitant change. In sum, what impresses me most about these essays, taken together and individually as well, are the ways in which the field of international communication already addresses complexity, change, and connections and includes an array of qualitative, quantitative, and mixed methods research approaches. Authors today engage with culture and communication (including communication technologies) at many levels, involving state as well as nonstate actors, partnerships and collaborations, governance and governments; they also craft research that can be domestic or cross-national and that can extend to a locality, level, country, region, diaspora, or the globe as a whole. Whatever their disciplinary approaches, including the cross-disciplinary, these researchers are engaging with knotty and vital research questions and are truly no longer on the periphery of the international affairs research arena. The very inclusion of online resources (and their variety and range) at the end of each essay throughout this compendium, no matter what the ISA section, also reflects the nuanced importance of the work in which IC researchers are engaged today. We are doing things differently across all of our fields; the research questions and history highlighted in the IC section essays reveal particularly the strengths of our work as well as the research challenges and opportunities ahead.

Derrick L. Cogburn
==== Subject [|International Studies] » [|Active Learning in IS], [|International Communication] ====
==== Key-Topics [|information], [|information and communication technology (ict)], [|learning], [|teaching] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Over the last decade, information and communication technologies have continued to emerge at an increasingly rapid pace. These technologies provide an underlying infrastructure to support opportunities for innovative synchronous, asynchronous, and blended cross-national research and learning experiences in international communication. Through the use of these technologies, it is possible for students around the world to work in global virtual teams in collaboration with real organizations engaged in actual policy processes. This essay will trace the development, current status, and future prospects of the literature on information and communication technologies and cross-national learning. The reality for scholars working in this area, however, is that the relevant literature is highly interdisciplinary. Thus, in order to address this subject, we must include an analysis of the multidisciplinary literature related to this problem. Over the past decade, the social, technological, and economic processes of globalization have proceeded at a dizzying pace. The transformation of the global economy and the era of globalization have had a significant impact on the organization of global society (see, inter alia, Castells 1996–8). Some scholars refer to this transformation as the emergence of a “global information society” or a “knowledge society” ([|Mansell and Wehn 1998]). One of the most important recent manifestations of these developments was the hosting in Geneva (2003) and Tunis (2005) of the United Nations World Summit on the Information Society, or WSIS ([|www.itu.int/wsis]). Other scholars argue that the developments of this period represent a fundamental shift in the underlying techno-economic paradigm of society ([|Kodama 1994]; [|Freeman 1997]). Regardless of how one characterizes this period, it is clear that the knowledge, skills, and abilities required for socioeconomic development are changing rapidly and dramatically.
These skills include the need to understand better how to manipulate symbolic knowledge and how to work in global virtual teams ([|Reich 1997]; [|Cogburn 1998]). New applications of information and communication technologies (ICTs) and new organizational models have helped to create important developments in areas such as e-commerce, e-government, and e-learning. Companies, governments, nongovernmental organizations (NGOs), and international organizations have worked to develop strategies for dealing with these monumental changes, including developing “global” strategies for building networks, fostering cooperation, and expanding their geographic reach. Universities are no exception. Universities in both developed and developing countries are struggling to come to grips with these changes and to provide opportunities for their faculty and students to work with and learn from colleagues around the world. They are working out how to “internationalize” their campuses and curricula ([|Altbach 2004]). For students, these changes have far-reaching implications for what they learn while pursuing formal studies, how they learn it and from whom, how they apply what they learn, and how they prepare themselves personally for these challenges. Perhaps more importantly, universities are exploring mechanisms to ensure their students are acquiring the skills necessary to succeed in a global knowledge-based economy. Some of these key skills include the ability to manipulate and master abstract concepts and symbolic information, and to identify, critique, and retrieve vast amounts of digital information ([|Hiltz 1995a]). In addition, regardless of whether students are going into industry, science, government, international organizations, or civil society and nongovernmental organizations, they will be expected to master the ability to work in global virtual teams.
Working in such geographically distributed teams requires students to work across multiple time zones, languages, and cultures; with persons with different levels of technology training, support, and access; with persons of different disciplinary backgrounds, institutional cultures, and expectations; and on teams with highly mobile and transient members ([|Cogburn 1998]). These requirements are becoming commonplace for successful engagement in the knowledge economy, regardless of whether one is speaking of global and multinational corporations, small and medium-sized companies, international organizations, nongovernmental organizations, or national governments ([|Freeman 1994]). While the need for this diverse set of knowledge, skills, and abilities has become more pronounced, most students in both developed and developing countries find themselves matriculating without having developed these skills in any significant way. Reorienting the university toward an institutional model that can handle these challenges has not been an easy task. Traditionally, universities are highly structured institutions, with most of their human capital organized into four distinct categories: (1) administrators, (2) faculty, (3) staff, and (4) students. Further, universities are structured according to schools and colleges or other academic units, and subdivided by departments, programs, and/or academic discipline. Historically, even on a single university campus there can be fairly rigid lines among these various divisions, and rarely does significant collaboration occur between these components. There was perhaps even less interdisciplinary and cross-institutional collaboration between universities in a given country, and less still across universities in multiple countries.
Of course, there are “exchange” programs, where students, faculty, and on occasion administrators have migrated to other institutions for “study abroad” programs, fact-finding trips, or faculty sabbaticals and research leaves, but these experiences, for the most part, continue to be episodic and focused on experiencing the “other” and taking it back to one's home institution. Today, especially in the sciences, there is increasing evidence of growing cross-disciplinary collaboration on university campuses. What is fundamentally of interest in this essay is the question of whether or not there are ways of structuring teaching, learning, and research experiences that are more collaborative, ongoing, and authentic, across universities and across countries. In a previous paper, we identified what we called a “triple track approach to maximizing collaborative learning” in these complex, cross-national learning environments, and those ideas also guide our approach here ([|Cogburn and Levinson 2008]). One organizational mechanism about which we have written, and which could serve as a model to facilitate this kind of “authentic” global collaborative learning environment, is the scientific collaboratory, a term that blends the words “collaboration” and “laboratory” (Wulf 1993). In 1989, William A. Wulf called the collaboratory “a center without walls, in which the nation's researchers can perform their research without regard to physical location – interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries.” The Computer Science and Telecommunications Board of the National Research Council (NRC) further clarified the collaboratory concept and raised awareness within the scientific community about its application in a report entitled //National Collaboratories: Applying Information Technology for Scientific Research// (National Research Council 1993).
A collaboratory is more than an elaborate collection of ICTs; it is a new networked organizational form that also includes social processes; collaboration techniques; formal and informal communication; and agreement on norms, principles, values, and rules within the network. To date, most collaboratories have been developed largely in the physical sciences (e.g. physics, upper atmospheric research, and astronomy) and recently in additional areas of research such as HIV/AIDS. Since these collaboratories first appeared, a substantial and growing knowledge base has emerged to help us understand their development and application in science and industry (National Research Council 1993; [|Finholt and Olson 1997]; Olson and Olson 2000; Finholt, in press). Very recent work includes collaboratories in the definition of cyberinfrastructure, one of the top priorities of the US National Science Foundation. An additional body of knowledge exists for understanding the application of ICTs to learning at nearly all levels, and for understanding the implications for pedagogical strategies and a myriad of learning styles. These approaches, driven by both public and private sector initiatives, include computer-mediated communication (CMC), computer-supported collaborative learning (CSCL), technology-enhanced learning (TEL), and other forms of what might be called “distance” education. It appears that the majority of these initiatives explore primarily asynchronous computer-assisted learning ([|Hazemi et al. 1998]). There has also been an increasing number of studies exploring the use of global virtual teams in education, some of them using a synchronous approach ([|Harasim et al. 1997]; [|Jarvenpaa and Leidner 1998]).
While we have advanced our knowledge of technology-enhanced learning, there are still many outstanding questions, particularly related to globally distributed synchronous collaborative learning and the science of learning that should emerge (The Learning Federation 2000). Our knowledge of this area could be strengthened by exploring these concepts at the intersection of research on, on the one hand, scientific collaboratories and cyberinfrastructure, and, on the other, corporate virtual teams, coupled with emerging research on computer-supported collaborative learning (CSCL). This research approach should move beyond the laboratory, taking findings uncovered in these controlled environments and testing them in field settings. Further, even more questions exist about the particular challenges of actively involving developing countries in the conduct of globally distributed collaborative knowledge work.

Purpose
For all these reasons, it is important to identify and evaluate new methods of teaching international affairs and studies of globalization that capitalize on the tremendous advancements in ICTs. These approaches should take advantage of lessons learned from collaboratories and cyberinfrastructure that allow diverse groups of geographically distributed learners to collaborate in ways that are at times “beyond being there,” or more interactive than if they were located in the same laboratory or seminar room ([|Hollan and Stornetta 1992]). These new methods of teaching must draw upon the best thinking in a diverse group of academic and professional disciplines. The purpose of this essay is to review the historic and contemporary literature related to communication technology and cross-national learning. It will do so by discussing this literature within the context of a decade-long exploration of using these technologies to create geographically distributed cross-national collaborative learning. From 1999 to 2008, several universities around the world, mainly from South Africa and the United States, participated in the Global Graduate Seminar on Globalization and the Information Society: Information, Communication, and Development, also known as the Globalization Seminar. The Globalization Seminar was created initially between the University of Michigan School of Information (Ann Arbor), the University of the Witwatersrand Graduate School of Public and Development Management (Johannesburg), and the American University School of International Service (Washington, DC). In 2004, the headquarters of the project moved from the University of Michigan to the Syracuse University School of Information Studies. Working in collaboration with the Web-Based Information Science Education (WISE) consortium, the project branched out to include universities and students in India, Mexico, and Canada, and across the US.
The underlying goal of the project was to understand better the sociotechnical infrastructure required to support successful cross-national teaching and learning in interdisciplinary globalization studies. We wanted to explore the degree to which ICTs could support a variety of innovative pedagogical models to build human capacity for a knowledge-intensive global economy. As an organizing framework, the project adapted the “collaboratory” approach, originally created to facilitate geographically distributed collaboration in science, to building a distributed learning environment. We focused on highly interactive, commercially available web-based tools that work well in both low and high bandwidth environments. The following section provides a brief review of the literature that guides the essay and shapes the theoretical foundation for the study, including the collaboratory model and the use of cyberinfrastructure for distributed collaborative learning. We then describe the structure of the seminar and the synchronous and asynchronous technology infrastructure developed to support the distributed collaborative learning environment. Next, we describe some of the various methodologies used over the years in the study, and then present the findings, organized as best practices and lessons learned. We conclude the essay with a discussion of the implications of these findings for university administrators, faculty, staff, and students.

Literature Review
Some of the most exciting developments in international communication today involve the increasing convergence of lessons learned from the diverse but related interdisciplinary fields of computer-supported cooperative work (CSCW), CSCL, human–computer interaction (HCI), and international studies. This convergence is evident in a number of ways, including new studies of how transnational civil society organizations use ICTs to coordinate their geographically distributed participation in global policy processes such as the UN World Summit on the Information Society ([|Klein 2003]; [|Siochrú 2003]; [|Cogburn 2004]; [|Selian 2004]; [|Jordan and Surman 2005]), in distance-based capacity building for such complex policy areas as internet governance ([|Kleinwächter 2004]; [|Cogburn 2006]), and in the implications of ICT use in cross-cultural distributed environments ([|Cogburn and Levinson 2003]; [|Abbott et al. 2004]; [|Zakaria and Cogburn 2006]). Many of these amazing developments are due to innovative applications using the internet as a delivery platform and the increasing availability of advanced commercial and open source information and communication technologies capable of supporting the synchronous and asynchronous needs of diverse, cross-national collaborative learning teams. Our exploration in this study has been guided by six broad and interdisciplinary streams of literature, which are: (1) knowledge creation, education, and learning; (2) group/team dynamics; (3) building trust in virtual teams; (4) culture in global virtual teams; (5) geographically distributed collaborative learning; and (6) infrastructure for distributed collaborative learning. In this section, we briefly explore some of the key ideas in each of these areas.

Knowledge Creation, Education, and Learning
Around the world, numerous studies point to the impact of the transformation of a global knowledge-based economy on primary, secondary, and tertiary education (see [|Garmer and Firestone 1996]; [|Brown and Duguid 2000]; [|Duderstadt 2000]). The archetype of a traditional student, teacher, and learning institution is undergoing profound changes, not only in the highly industrialized countries, but in the developing world as well. For example, the African Virtual University is a high-profile “attempt to use, on a grand scale, the power of modern information technologies to increase access to desperately needed educational resources in Sub-Saharan Africa” (World Bank, n.d.). In many ways, the “nontraditional” student – one who has to work to support him-/herself, enters tertiary education later in life after significant life experiences, and requires both more flexibility and highly specialized knowledge – is becoming increasingly the norm on college and university campuses. ICTs can have a tremendous impact on meeting the expectations of these students. However, we must be careful as we explore this environment. In many cases, the hype does not reflect the reality and lessons learned on the ground. Tiffin and Rajasingham (1995:1) argue that “Schools as we know them are designed to prepare people for life in an industrial society,” and that we must try to understand and develop the “kind of [educational] system needed to prepare people for life in an information society.” They suggest that the emerging information and communications infrastructure could have the effect of reducing the need for people to move physically from rural to urban areas, thus easing the burden on overcrowded transportation systems and fragile ecosystems. “The information society may prove to be a telesociety with a revival of rural areas and a return to the cottage industries that existed prior to the industrial revolution” ([|Tiffin and Rajasingham 1995]:2). 
This project also recognizes the monumental shift in the global economy and the need to equip students with skills appropriate to this era. Brown and Duguid (2000:208) argue that many colleges and universities are beginning to move rapidly to meet these challenges. They see the pressures on the tertiary system as being threefold: (1) a radically changing student body, with new kinds of requirements of the educational system; (2) increasing competition, especially from nontraditional and private sectors; and (3) the application of new ICTs ([|Brown and Duguid 2000]:208–10). Jamil Salmi, Education Sector Manager at the World Bank, conceptualizes these challenges somewhat differently and focuses on (1) economic globalization, (2) the growing importance of knowledge, and (3) the information and communications revolution ([|Salmi 2000]). James J. Duderstadt, former president of the University of Michigan, argues that “universities must find ways to sustain the most cherished aspects of their core values, while discovering new ways to respond vigorously to the opportunities of a rapidly evolving world” ([|Duderstadt 2000]:3). 
In their report analyzing a major think tank initiative on the use of information technology to create a learning society in the United States, Garmer and Firestone (1996:5) argue that “the revolutions in computers and communications technology have given teachers and students an immense array of tools to enhance learning.” Citing a range of new technologies, from CD-ROMs to multimedia applications and wireless delivery platforms, Garmer and Firestone (1996:5) suggest that these technologies can “engage students in discovery through simulation and exploration of new concepts, connect them to people and ideas beyond the classroom, and expand educational content.” They also argue (1996:6) that these technologies can “aid teachers in adapting materials to different learning styles and promote equity in education by providing a diverse range of resources and experiences to students who might not otherwise be able to afford them.” However, Tiffin and Rajasingham (1995:5) argue that in technology-enhanced learning environments, there is the need for “a balance between computer interaction and human interaction. In the future we will need to strike a balance between telelearning and conventional classroom learning.” These arguments have influenced the design of the Global Graduate Seminar model to include in its initial stages a mixture of co-located faculty involvement with virtual synchronous learning, or what we call the “circuit-rider” model. However, [|Brown and Duguid (2000)] provide significant insight into the challenges for tertiary educational institutions. They point to the tremendous value that such institutions hold in their “credentialing” authority (i.e., their ability to grant degrees that are recognized by the educational establishment; [|Brown and Duguid 2000]:214–15). 
One strong argument that they make in support of the survival of some types of academic institution is that “knowledge [itself] doesn't market very easily […] it's hard to detach and circulate. It's also hard for buyers to assess” ([|Brown and Duguid 2000]:215). In this project, the Global Graduate Seminar participants will register for the seminar at their respective university. Each respective university handles all certification and “credentialing.”

Group/Team Dynamics
One effective way to facilitate collaborative learning is to assign students to teams that work on class-related projects. The dynamics of these teams can strongly influence the effectiveness of this method for student learning ([|Brown and Dobbie 1999]; [|Johnson et al. 2002]). Hence universities and educators need a good understanding of the social and psychological factors (especially cross-cultural communication patterns) that influence team dynamics. We defined team dynamics in terms of three components: team performance, leadership style, and the interdependence among team members ([|House et al. 1971]; [|Jago 1982]). In this study, the teams played a critical role in helping to create the learning environment for the students. Each student was assigned to a “virtual” team with no other members from their university. As a result, the global virtual teams in this study were highly diverse in terms of nationality, geographic region, technological and professional expertise, and rationale for taking the course. [|Tuckman (1965)] identified four stages of team formation: forming, storming, norming, and performing. The level of trust developed in the early stages of team formation is crucial to the whole team's later performance. In global virtual teams, the diversity of members' backgrounds, cultures, and races affects how long it takes a team to build trust during the first three stages; in homogeneous teams, trust can develop more quickly. Research ([|McKnight et al. 1998]; [|Rocco 1998]) shows that a climate for effective cooperation is not likely to emerge without specific organizational intervention, especially leadership training activities. Leadership learning interventions before and at the beginning of the life of the team, focused on building trust, are therefore critical to team success. 
The concept of emergent leadership is also important for teams without an assigned leader ([|Yamaguchi et al. 2002]). Below, we explore in more detail the dynamics of face-to-face (FTF) and geographically distributed groups.

Dynamics in FTF and Distributed Groups
From the social psychological literature on group dynamics, we know that a range of factors affect group work in any environment. Some of the most important factors include social facilitation and social loafing, deindividuation, and leadership style. Other important factors that are known to affect group dynamics are culture, common ground, and trust. In this section, we briefly review and compare this important literature as it relates to our study of group dynamics in both face-to-face and distributed teams.

Social Facilitation/Social Loafing
Social facilitation theory suggests that when people work in the presence of others, including their group members or co-workers, they are more likely to perform better on tasks than when performing those tasks alone ([|Zajonc 1965]). In contrast, social loafing theory proposes that the opposite occurs: when people work in groups, individuals put forth less effort ([|Steiner 1972]; [|Latane et al. 1979]). Social loafing appears to exist in many different cultures and on different types of task ([|Gabrenya et al. 1983]), and it can be mediated by gender and moderated by one's perspective on individualism or collectivism and by task motivation. In this study, we are not testing for social loafing, but have designed the teams and their tasks to maximize any possible social facilitation effect and minimize any social loafing effect.

Deindividuation
The deindividuation thesis proposes that participation in groups may lead some people to behave in more aggressive, uninhibited, and socially unacceptable ways than they would as individuals ([|Zimbardo 1970]; [|Diener 1980]; [|Rogers and Prentice-Dunn 2008]). This uninhibited behavior has also been shown to exist in CMC environments, where “flaming” and “mail storms” are becoming increasingly prevalent ([|Reicher and Levine 1994]). Since this behavior exists in both FTF and distributed groups, this study will look for evidence of deindividuation and any impact it might have on the other factors being studied.

Leadership
Another aspect of group dynamics related to our study is leadership style, particularly emergent leadership (i.e., the type of leadership that emerges in natural settings when the group is initially leaderless). The literature shows two distinct types of emergent leadership: //task-focused// leadership and //relationship-focused// leadership. Task-focused leadership is direct: it focuses almost exclusively on accomplishing the task at hand and is often associated with dominance behavior (e.g., initiating structure). Relationship-focused leadership is indirect: it focuses on improving group cohesion and is often associated with affiliative behavior, such as democratic decision making. On structured tasks, task-focused leadership is generally seen as more effective than relationship-focused leadership. However, on unstructured tasks, relationship-focused leadership offers some advantages and may even be more effective than task-focused leadership ([|Stogdill and Coons 1957]; [|Fiedler 1958, 1967, 1971, 1981]), especially amongst mixed-gender groups ([|Yamaguchi et al. 2002]).

Culture, Common Ground, and Trust
Several factors can contribute to the ease or difficulty of establishing common ground within a group (e.g., shared cultural background, experiences, previous conversations, surroundings). According to [|Clark (1993; 1996)], this “common ground” of knowledge is required for two or more people to understand each other. Similarly, according to [|Rogers (1999)], homophily and heterophily (similarity and difference on certain attributes) influence the degree to which an innovation can be diffused into a group. Distributed teams may have less initial common ground, and the constraints of CMC may make it more difficult to identify or build common ground than in FTF teams. We also know that communications media affect cooperation and self-reported trust in group work: FTF groups report the highest levels of cooperation, followed by video, audio, and then chat conditions ([|Bos et al. 2002]). Higher levels of group participation have been found in CMC environments. CMC groups may also be more “disorganized, democratic, unrestrained, and perhaps more creative than groups communicating more traditionally” ([|Kiesler et al. 1984]). However, this increased democratization may lead to more difficulty in decision making in CMC environments ([|Kiesler et al. 1984]). Thus we expect to find differences in these areas between our FTF and distributed teams, and will explore those differences in the study.

Building Trust in Virtual Teams
Collaboration in teams requires a significant amount of shared interaction, decision making, and responsibility for the project's success ([|Ingram and Parker 2002]). These collaborative activities are strongly influenced by the level of trust among team members, especially when the completion of one's own work depends on the ongoing cooperation of another person or group of people ([|Deutsch 1958]; [|Lewis and Weigert 1985]; [|Butler 1991]; [|Mayer et al. 1995]; [|McAllister 1995]; [|Jones and George 1998]; [|Holton 2001]; [|Bos et al. 2002]; [|Zheng et al. 2002]). Thus trust is a key factor for interdependent actors to work together effectively, and the initial level of trust among group members is crucial to its evolution. Trust theorists have argued that trust develops gradually over time ([|McKnight et al. 1998]): trust is usually low to medium when a team is first formed and grows gradually thereafter, since people are not likely to begin with a high level of trust toward strangers. In virtual teams, members are physically distributed across different national, cultural, racial, and economic boundaries, which further challenges initial trust levels. Exploring these issues, [|Rocco (1998)] found that trust broke down in electronic contexts but could be repaired by some initial face-to-face activities. Studies by [|Jarvenpaa and Leidner (1998)] also confirm that two-week trust-building exercises have a significant effect on team members' perceptions of others' ability, integrity, and benevolence – perceived characteristics that contribute to the construction of trust. Both of these approaches were integrated into this study: specific get-acquainted and trust-building exercises were used during the first two weeks of the seminar, beginning with the second year of its implementation.

Trust and Culture in Global Virtual Teams
In their study of global virtual teams in university settings, [|Jarvenpaa and Leidner (1998)] examined whether trust can exist in virtual teams, how this trust develops, and what communication behaviors facilitate trusting relationships in virtual teams. The virtual teams participated in a six-week collaborative learning project organized by the University of Texas at Austin. The project involved 350 graduate business students in 24 countries working in teams of between four and six individuals. These virtual teams communicated by email and accessed information from the project's internet site while completing two voluntary individual tasks and one required team task (i.e., developing a new internet site for information systems professionals, and writing a three- to five-page explanation of the site). [|Jarvenpaa and Leidner (1998)] archived and analyzed all email messages sent to each team's address (team mailing list), and later used this information to prepare case descriptions of twelve virtual teams. The researchers also administered two surveys to measure levels and outcomes of trust in the virtual teams. The survey responses of individual team members were averaged to develop overall team measures of trust. Jarvenpaa and Leidner (1998:23) report that global virtual teams can develop trust but suggest it may take the form of “swift, depersonalized, action-based trust” rather than a more “interpersonal and socially based trust.” As for how global virtual teams develop trust, Jarvenpaa and Leidner found that initial electronic messages were crucial to establishing high levels of trust because they set the tone for team interaction. At the start of the project, high-trust teams conveyed confidence and optimism in their early messages, whereas low-trust teams expressed more skepticism in initial messages ([|Jarvenpaa and Leidner 1998]). 
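Jarvenpaa and Leidner's team-level trust measure (averaging individual members' survey responses into an overall team score) can be sketched in a few lines. The data values and the `team_trust` helper below are hypothetical, purely for illustration of the averaging procedure, not a reproduction of their instrument.

```python
# Hypothetical sketch: aggregating individual trust-survey responses
# (e.g., items on a 7-point Likert scale) into one team-level score,
# in the spirit of Jarvenpaa and Leidner's team measures.

def team_trust(responses):
    """Average each member's item scores, then average across members."""
    member_means = [sum(items) / len(items) for items in responses]
    return sum(member_means) / len(member_means)

# One team of four members, each answering three trust items (1-7 scale).
team_a = [
    [6, 5, 6],   # member 1
    [4, 5, 5],   # member 2
    [7, 6, 6],   # member 3
    [5, 5, 4],   # member 4
]

print(round(team_trust(team_a), 2))  # → 5.33
```

Averaging first within each member keeps a respondent who answered more items from dominating the team score, which matters when survey completion varies across members.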
These findings, Jarvenpaa and Leidner point out, are consistent with earlier research on the lasting impact of initial group communication patterns ([|Gersick 1988]; [|Gersick and Hackman 1990]). Moreover, teams displaying the highest levels of trust throughout the entire project generally engaged in frequent communication characterized by behaviors such as making social introductions, supporting each other, taking individual initiatives, providing feedback, clarifying and developing consensus on tasks, notifying team members of upcoming absences, and addressing technical problems ([|Jarvenpaa and Leidner 1998]). [|Lipnack and Stamps (1997]:231) stress that virtual teams must work at building trust in all phases of their development because they “have only their shared trust in one another as their guarantee for the success of their joint work.” The people, purpose, and link elements of virtual teams, shown in [|Table 1] below, can equally serve as sources of trust or mistrust ([|Lipnack and Stamps 1997]). Ravitz (1997:363) argues that a “climate” of interaction in which “ideas are encouraged, generated, and expressed freely” is central to the development of trust in virtual collaborative learning environments. [|Jarvenpaa and Leidner (1998)] also found that high-trust virtual teams exhibited a strong task focus in their communication behaviors, even while engaging in parallel social exchanges. They explain that this finding confirms earlier research ([|Walther and Burgoon 1992]; [|Adler 1995]; [|Chidambaram 1996]) showing that “social exchanges can make computer-mediated groups ‘thicker’ as long as the social exchange is not at the expense of a task focus” ([|Jarvenpaa and Leidner 1998]:24). Moreover, [|Jarvenpaa and Leidner (1998)] report that initiatives by individual team members – and, more importantly, team responses to these initiatives – were crucial to developing trust and unity in the virtual teams. 
Citing Pearce et al.'s (1992) conceptualization of responses as trusting behaviors in face-to-face communication, [|Jarvenpaa and Leidner (1998)] suggest that team responses to individual initiatives are particularly important in the more uncertain environment of computer-mediated communication. Trust will change within virtual teams based on the degree to which team members keep promises, engage competently in work, express themselves truthfully about important issues, care about each other, contribute to the success of the team, care about the success of the team, have consistent expectations of each other, acknowledge their mistakes, feel comfortable sharing ideas with the team, have developed friendships with team members, can disclose aspirations, confide in team members about personal difficulties, are considerate of others' feelings, are friendly, and socialize (or would socialize) together.

In terms of the impact of culture, [|Jarvenpaa and Leidner (1998)] found that culture did not influence perceptions of trust in the project's global virtual teams. They suggest (1998:25) that “electronically facilitated communication may make cultural differences irrelevant” by eliminating most nonverbal cues such as dress, gestures, greeting styles, and accents. As cultural differences become less noticeable, perceived similarity among virtual team members may rise ([|Jarvenpaa and Leidner 1998]). This finding contrasts starkly with those relating to culture in the pilot phase of this study. [|Atkins et al. (2000)] found that cultural differences profoundly influenced the development of trust in the global syndicates, including pronounced differences in economic ideology and attitudes toward capitalism and socialism. [|Hofstede (1997)] agrees with this assertion, calling culture the “software of the mind”: “Every person carries within him or herself patterns of thinking, feeling, and potential acting which were learned throughout their lifetime […] [we] will call such patterns of thinking, feeling, and acting mental programs, or, as the sub-title goes: ‘software of the mind’” ([|Hofstede 1997]:4). This study will use Hofstede's (1997) construction of culture as an independent variable to explore its impact on the development of trust and the effectiveness of global virtual teams.

**Table 1** People/purposes/links model of virtual teams
||~ Foundation concepts ||~ Inputs ||~ Processes ||~ Outputs ||
|| People || Independent members || Shared leadership || Integrated levels ||
|| Purpose || Cooperative goals || Interdependent tasks || Concrete results ||
|| Links || Multiple media || Boundary-crossing interactions || Trusting relationships ||
//Source//: Adapted from [|Lipnack and Stamps 1997], p. 49
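Treating culture as an independent variable, as proposed above, can be operationalized by scoring each team member on a Hofstede dimension and relating a team's cultural spread to its trust score. The sketch below is illustrative only: the dimension scores, trust scores, and the negative association in this toy data are hypothetical, not findings of the study.

```python
# Hypothetical sketch: relating a team's cultural diversity (spread of
# members' scores on one Hofstede dimension, e.g. individualism) to its
# team-level trust score via a Pearson correlation.

from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each team: members' individualism scores (0-100, illustrative values)
# paired with a team trust score (1-7 scale, illustrative values).
teams = [
    ([91, 89, 80, 76], 5.8),   # culturally similar team
    ([91, 20, 46, 67], 4.1),   # culturally diverse team
    ([80, 71, 68, 65], 5.5),
    ([14, 25, 91, 48], 3.9),
]

diversity = [pstdev(scores) for scores, _ in teams]  # within-team spread
trust = [t for _, t in teams]
r = pearson(diversity, trust)
print(r < 0)  # in this toy data, more diversity goes with lower trust
```

Using the within-team standard deviation as the diversity measure is one simple choice; a real analysis would also need far more teams and controls before drawing any conclusion about culture and trust.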

Geographically Distributed Collaborative Learning
Since the type of knowledge work that we are interested in often requires the ability to learn with others in collaborative teams, we have explored the literature on distance-independent and distributed collaborative learning. While there have been some notable exceptions (e.g., [|Jarvenpaa and Leidner 1998]; [|Cadiz et al. 2000]), most studies of computer-supported collaborative learning have examined asynchronous approaches ([|Hazemi et al. 1998]). Nonetheless, from this important body of literature, we know that learning is social, that “peer networks” and collaborative learning are as important as faculty interaction, and that they can enhance student performance ([|Fjermestad and Hiltz 1999]; [|Brown and Duguid 2000]). [|Tiffin and Rajasingham (1995)] suggest that the balance between human interaction and computer interaction is a critical factor in the success of a virtual learning environment. [|Brown and Duguid (2000)] suggest that this balance is even more important when the learning environment becomes more complex and geographically distributed. [|Hiltz (1990b)] finds that “collaborative learning” enhances student ratings of virtual courses. Thus we expect that students engaged in virtual teams (global syndicates) that evolve into “learning communities” will have more collective and individual success in the seminar and a higher degree of satisfaction with it. Brown and Duguid (2000:137) argue that “learning is a remarkably social process. Social groups provide the resources for their members to learn. Other socially based resources are also quite effective.” Their argument suggests that building the seminar participants into a healthy community of practice is the best way – if not the only way – to achieve this rich knowledge management and transfer. 
They go further, suggesting that as the learning process binds people together, they are able “to form social networks along which knowledge about that practice can both travel rapidly and be assimilated readily” ([|Brown and Duguid 2000]:141). These “networks of practice” are valuable in linking together people working in similar areas who may never actually meet each other or work together. The global syndicates in this project are designed to provide the space for “learning communities” or “networks of practice” to develop. Brown and Duguid (2000:221) also argue that peer networks are as important a resource as faculty and university technology resources. In their analysis of a Stanford engineering course taught using the TVI (tutored video instruction) method, Brown and Duguid (2000:221–2) found that the distance students, who were formed into groups that became learning communities, consistently outperformed the residentially based students when tested on course material. This result held even though the distance students entered the course with “lower academic credentials” ([|Brown and Duguid 2000]:221). Further, Brown and Duguid argue that the TVI method

> requires viewers to work as a group and one person from that group to act as tutor, helping the group to help itself. This approach shows, then, that productive learning may indeed rely heavily on face-to-face learning, but the faces involved are not just those of master and apprentice. They include fellow apprentices. ([|Brown and Duguid 2000]:222)

These findings support the understanding, developed during the pilot phase of the Global Graduate Seminar, that the global virtual teams used in the seminar are critical to a successful distributed learning environment ([|Atkins et al. 2000]). The implications of these arguments for this project are numerous. 
For example, Brown and Duguid (2000:136) argue that true learning and knowledge transfer should become much more demand-driven. They argue forcefully that “people learn in response to need”: “When people cannot see the need for what's being taught, they ignore it, reject it, or fail to assimilate it in any meaningful way. Conversely, when they have a need, then, if the resources for learning are available, people learn effectively and quickly” ([|Brown and Duguid 2000]:136). We learn from Brown and Duguid that new information and communications technologies can have a tremendous impact on building communities of practice, but they must be implemented with sufficient knowledge of and attention to social dynamics. Therefore, in this project, we built the global syndicates and the Cotelco participants into a “network of practice” that can share ideas, information, and learning. Further, the Cotelco participants at their respective universities and enterprises have slowly become “communities of practice” over the last decade. Previous research has shown that students who experience collaborative learning in the virtual classroom are likely to rate virtual course outcomes more highly than traditional course outcomes ([|Hiltz 1990b]). [|Harasim et al. (1995]:30) define collaborative learning as “any learning activity that is carried out using peer interaction, evaluation, and/or cooperation, with at least some structuring and monitoring by the instructor.” Collaborative learning in the virtual classroom is grounded in “a learner-centered model that treats the learner as an active participant” ([|Harasim et al. 1997]:149). Using active and collaborative learning approaches, promoting meaningful feedback, and offering opportunities for intergroup collaboration, resource sharing, and collaborative writing have all been identified as fostering collaborative learning in virtual distance education ([|Palloff and Pratt 1999]). 
In the virtual or online classroom, the “learning community” replaces the traditional lecture as the main vehicle for education ([|Palloff and Pratt 1999]). The development of community assumes equal importance to course content in the virtual classroom because knowledge is collaboratively produced through this community ([|Palloff and Pratt 1999]). As Palloff and Pratt explain (1999:5, original emphasis), learning is driven by “//the interactions among students themselves, the interactions between faculty and students, and the collaboration in learning that results from those interactions//.” Jarvenpaa and Leidner (1998:2) define the global virtual team as a “temporary, culturally diverse, geographically dispersed, electronically communicating work group.” [|Lipnack and Stamps (1997)] propose a “people/purposes/links” model of virtual teams working in intra- or interorganizational settings. As shown in [|Table 1] above, this systems model focuses on inputs, processes, and outputs, and generates nine principles for virtual teamwork. [|Lipnack and Stamps (1997)] argue that the nature and variety of virtual team links are what distinguish these teams most strongly from traditional, collocated teams. Multiple media constitute “the nervous system for the virtual team,” and the team members' interactions through these media form the team's “thinking” and shared knowledge ([|Lipnack and Stamps 1997]:104). The boundaries crossed in virtual teamwork include organizations, disciplines, distance, time, and cultures ([|Lipnack and Stamps 1997]). These boundary-crossing interactions offer the basis for building trusting relationships in virtual teams.

Infrastructure for Geographically Distributed Collaborative Learning
As [|McLellan (1997)] explains, Schrage's model of collaboration offers thirteen themes to inform the design of internet or Web-based education: (1) competence; (2) a shared, understood goal; (3) mutual respect, tolerance, and trust; (4) creation and manipulation of shared spaces; (5) multiple forms of representation; (6) playing with the representation; (7) continuous but not continual communication; (8) formal and informal environments; (9) clear lines of responsibility but no restrictive boundaries; (10) decisions not having to be made by consensus; (11) physical presence not being necessary; (12) selective use of outsiders for complementary insights and information; and (13) collaboration's end (McLellan 1997:186). In McLellan's (1997) information-design course, for example, the class listserv provided an informal space where the students discussed class assignments and shared personal information. Frequent short assignments were emphasized to foster active discussions on this listserv. Moreover, different representations (e.g., text and visuals, audio and multimedia) were featured in course learning activities; student input on deciding discussion topics and dealing with technical issues was encouraged; and student biographies, photographs, and email addresses were posted on the course's Web page to help students connect with their virtual classmates. Some of the terminology used by Tiffin and Rajasingham (1995:10) is helpful in our project as well. 
They use the term “virtual learning space” (VLS) “to encompass any kind of distributed virtual reality that can be used for learning.” As with our approach, Tiffin and Rajasingham (1995:10) “avoid the term ‘virtual classroom’ because it suggests that the place a virtual class is held is an electronic simulation of a conventional classroom.” Others, such as Sam Hsu and his colleagues, are comfortable using the term “virtual classroom” without appearing to assign the term any of the limitations that concerned Tiffin and Rajasingham ([|Hsu et al. 1999]). However, we also recognize the importance of “managing the metaphor” and of using a variety of techniques to help familiarize ourselves with a new environment ([|Norman 1998]). Thus the use of the term “seminar” in the actual graduate seminar under study, and references to the “seminar room,” have been built into the design of the project. However, following [|Tiffin and Rajasingham (1995)], we do not want to constrain the seminar participants prematurely into thinking about a seminar as they always have in the past. This is not what we are trying to achieve at all. In fact, what we want is perhaps a “better-than-being-there” experience ([|Hollan and Stornetta 1992]). Based on this literature, we used the computer-supported collaborative learning (CSCL) environment of the Global Graduate Seminar to create a learning experience that is difficult or impossible to replicate in a strictly physical setting ([|Atkins et al. 2000]). 
As Tiffin and Rajasingham put it (1995:12), “what we are seeking is a new paradigm of education with new standards and outcomes, something that may have no resemblance to classrooms as we know them.” In terms of physical infrastructure, Tiffin and Rajasingham (1995:15) suggest that learners could participate in this virtual learning environment from almost anywhere, including their home, a conventional school, or a “local community center.” Minoli provides a more thorough analysis of the various technology options to consider in modern distance-learning initiatives ([|Minoli 1996]:13–37). The infrastructure used in our Global Graduate Seminar allowed users to participate from anywhere they had access to the web. During the pilot phase of the study, students were able to access the seminar from Canada and Japan while traveling on business, and thus did not miss a day of the seminar. Ideally, geographically distributed learning environments should be flexible, robust, and highly interactive, allowing the learner to operate simultaneously at multiple levels and move between them with ease. In their analysis, Tiffin and Rajasingham show some of the limitations of earlier computer-aided instruction (CAI) models, acknowledging that many early CAI approaches were far too linear and ignored much of the complexity that actually occurs in the learning process. Sam Hsu and his colleagues at the Center for Distance Education Technologies (CDET) provide a very useful summary of the recommended steps in “the process of conceiving, planning, designing, implementing, and maintaining a virtual classroom” ([|Hsu et al. 1999]). 
They identify ten important elements for establishing a successful virtual learning initiative, each of which was considered in the design of this project: (1) needs assessment, (2) cost analysis, (3) planning, (4) design, (5) preparation/distribution, (6) enabling communications, (7) implementing student assessment, (8) implementing classroom management, (9) systems setup, and (10) maintenance ([|Hsu et al. 1999]:98). Tiffin and Rajasingham suggest that adding virtual reality applications to a learning environment could help people to share virtual experiences, and thus to remember and learn from those experiences better. These findings suggest that we should also explore the potential of virtual reality technologies as we design the virtual learning environment for the project. Tiffin and Rajasingham argue that computer-generated virtual reality (CGVR) could contribute its rich variety of tools to the creation of a virtual learning environment. This virtual learning environment could help to give birth to the “virtual class” as an alternative to the physical classroom ([|Tiffin and Rajasingham 1995]:142). This virtual class need not seek to replace the physical university or other secondary and tertiary institutions. Thus we have included amongst the collaborative tools used in the project a low-cost, web-based virtual reality package called EduVerse. Although web-based virtual reality is not as immersive as other forms of CGVR, this package allows the students to engage in socializing activities before and after the seminar, or in their own time. [|Garmer and Firestone (1996)] also support the importance of public and private sector partnerships in the implementation of distance-learning initiatives, arguing that “partnerships and cross-sectoral collaborations can vastly improve learning opportunities” ([|Garmer and Firestone 1996]:13). 
They argue that private sector leaders can support these initiatives in a number of ways, including (1) speaking out in support of equitable access to the new tools of learning, (2) seeking opportunities to get involved in partnerships with schools, (3) developing creative funding strategies, (4) creating materials and support networks for teachers and administrators, and (5) educating the public about the benefits of integrating technology into the classroom ([|Garmer and Firestone 1996]:13). The findings in this literature review support the ongoing approach taken in the Globalization Seminar and in Cotelco, especially the structure of the global syndicates and the collaboratory infrastructure for Cotelco. These arguments all suggest that it is critical for participating university and industry partners to explore and better understand the implications of these new technologies. This joint project has always aimed to provide a scientific understanding of the potential of geographically distributed learning and global virtual teams. Such an understanding might help these universities to conceptualize targeted interventions that use information and communications technologies both to strengthen the universities – their faculties, staff, and students – in their mission to create, preserve, and disseminate new knowledge, and to provide service to their local, regional, national, and global communities. The Globalization Seminar and Cotelco are examples of such interventions. They attempt to meet the challenge of a diverse student body, provide a response to some of the competitive challenges in the tertiary sector in South Africa, and seek to understand and apply new information and communications technologies. Finally, we have explored the tools and social processes required to support the kind of distributed knowledge work under investigation here. 
Nearly all of the CSCW literature suggests that the appropriate mixture of technologies is important to support the development of distributed collaborative communities. More sophisticated and media-rich computer-mediated communication (CMC) environments – such as those that include video, audio, electronic messaging, multimedia visual stimuli, and shared tools – may help to minimize any differences between CMC and face-to-face (FTF) environments ([|Kiesler et al. 1984]). Also, students are often more willing to interact with their professors in CMC environments than in FTF ones ([|Welsch 1982]; [|Kiesler et al. 1984]). However, due to the instantaneous nature of electronic communications, students may have increased expectations for immediate feedback and become frustrated and dissatisfied when that does not occur ([|Kiesler et al. 1984]). As such, there are seven key design considerations to keep in mind for our technology environment: (1) creation and manipulation of virtual spaces, (2) multiple forms of representation, (3) continuous but not continual communication, (4) management of the metaphor, (5) diversity of access points, (6) interactivity, and (7) socialization ([|Tiffin and Rajasingham 1995]; McLellan 1997; Norman 1998). We expect to find that the students overcame what may have been initial fears to become comfortable with both the synchronous and asynchronous technologies used in the seminar. Although they may not be the most dominant factor, the technologies used to facilitate distributed learning play significant roles in the effectiveness of the education. These technologies support cross-national collaborative learning in various ways. One of the most important conceptual divisions in technologies that support distributed learning is between //synchronous// and //asynchronous// environments. 
In asynchronous environments, interaction takes place at different times (e.g., individuals send messages when they want to, and receivers pick up and respond to the messages when they want to). Key technologies in this asynchronous space are email and learning management systems (LMSs). Email is obviously used to send messages back and forth and to enhance communications among the students. LMSs are designed to serve primarily as document repositories and as an asynchronous platform from which to build the learning community. On the other hand, synchronous tools require the participants to communicate at the same time. Basic synchronous tools include instant messaging, chat, and presence-awareness packages, in addition to audio and video conferencing and full-blown web conferencing. In many ways, the principal trade-off is between interactivity and flexibility – synchronous technologies provide tremendous levels of interactivity among geographically distributed participants, while asynchronous technologies allow for “anytime, anywhere” access to the material. People can choose to engage individually with the learning materials in the LMS when it is most convenient for them. The widespread availability of commercial LMSs like WebCT and Blackboard and open source alternatives like Moodle ([|www.moodle.org]) and Sakai ([|www.sakaiproject.org]) probably explains why the asynchronous mode of distance education is the most dominant. In contrast, commercial web-conferencing applications are relatively expensive and have no real open source alternatives ([|Cogburn and Kurup 2006]). While asynchronous approaches are popular, their interactivity and support for the growth of trust and other team dynamics may be limited. However, asynchronous approaches may be useful in coping with disparate time zones, work patterns, and university cultures ([|Cristian 1996]; [|Cogburn and Levinson 2003]; [|Benbunan-Fich and Hiltz 2006]). 
Research conducted primarily in developed countries suggests that a “blended approach” or the appropriate mixture of various synchronous and asynchronous technologies is important to support the development of distributed collaborative learning ([|Hiltz 1990a; 1990b]; [|Steeples et al. 1996]; [|Veerman et al. 1999]). More sophisticated and media-rich CMC environments, such as those that include video, audio, electronic messaging, multimedia visual stimuli, and shared tools, may help to minimize any differences between CMC and face-to-face environments ([|Kiesler et al. 1984]).

Gaps in the Literature
The literature does not cover in depth the following three key areas to which this project seeks to contribute: (1) empirical studies of specific virtual teams operating over an extended period of time that are composed of members from both developing and developed nations, (2) studies of specific virtual teams that focus on the interaction between cross-cultural communication and team effectiveness, and (3) longitudinal examinations of cross-national virtual teamwork at the university level. This case study provides such a long-term view of cross-national ICT-enabled virtual teams at public and private universities in developed and developing nations.

Background to Collaboratories and Cyberinfrastructure
In 1993 the US National Research Council published a landmark report entitled //National Collaboratories//, which articulated a vision of how information and communication technologies could be brought to bear on the challenge of facilitating scientific collaboration amongst geographically distributed scientists (National Research Council 1993). The report built on earlier work by William Wulf and others from a 1989 workshop sponsored by the National Science Foundation (NSF) and identified the increasing demands on scientists to collaborate with colleagues who may be located in research laboratories all over the world. Wulf called a collaboratory a “center without walls” and urged the nation's researchers to take advantage of the opportunities afforded by modern information and communication technologies to make closely coupled distributed collaboration possible ([|Wulf 1989]:7). Early examples of fields taking advantage of these collaboratories include space physics ([|Olson et al. 1998]), oceanography, and molecular biology (National Research Council 1993). Each of these scientific communities could immediately benefit, in various ways, from its researchers being better networked with other researchers (Finholt 2001; 2002). An NSF-funded project at the University of Michigan, called the Science of Collaboratories ([|www.scienceofcollaboratories.org]), studied these various collaboratories and identified several common elements that predict the success or failure of such initiatives. One of the most important observations was that those collaboratories that paid significant attention to the social dimensions – not just the technological – had a higher likelihood of success. 
While the collaboratory movement started in the National Science Foundation, other federal agencies quickly picked up the baton, and the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), and others recognized the need for increased collaboration amongst their scientists as well (Finholt 2001; 2002). However, in many ways, the collaboratory movement took on a patina of elitism: access and participation were seen as limited to certain scientists, and certainly did not extend beyond these high-profile scientific circles ([|Cogburn 2003; 2005]). This is the opposite of what many of the early collaboratory developers had hoped would emerge through the “distributed intelligence” capabilities of a collaboratory ([|Finholt 2002]:5). In this conception, the increased use of information and communication technologies could allow for increased interaction between scientists at nonelite institutions and scientists at elite institutions (Finholt 2005:5).

Broadening the Reach of Collaboratories to Close the Digital Divide
In 2003, Dan Atkins was asked by the NSF to chair a Blue Ribbon panel to examine the status of collaboratories and to explore ways to broaden the concept to include social and behavioral scientists and beyond. This panel created a new term – “cyberinfrastructure” – to express its vision that collaboratory infrastructure should become much more widespread, and could make an even greater impact on science, technology, and national competitiveness by involving larger and more dispersed communities in geographically distributed collaboration (Atkins et al. 2006). While the Atkins Commission Report, as the document has become known, broadens the conception of collaboratories to encompass larger and more diverse groups of scientists, others, such as we in our work in Cotelco, have pushed the boundaries of this concept even further. Within Cotelco we have evolved the collaboratory concept to include even larger groups of geographically distributed social actors: in learning environments, such as our Global Graduate Seminar on Globalization and the Information Society (taught in real time between three universities in South Africa and three in the United States) ([|Cogburn and Levinson 2003]; [|Atkins et al. 2000]); in policy environments, such as our collaboratory for the Task Force on WSIS organized by the World Federation of United Nations Associations (WFUNA); and in distributed groups of social scientists. These projects all used collaboratory approaches to enhance the participation of social actors of various kinds, especially those excluded from or less effective in global policy processes, such as developing countries and civil society organizations. This aspect of the “digital divide” receives far less attention than the dominant understanding of the concept, which focuses on access to telecommunications, the internet, and the World Wide Web. 
Researchers studying collaboratories have identified three overarching domains around which collaboratory practices have coalesced (see [|Figure 1]). We characterize these domains as: (1) people-to-people, (2) people-to-information, and (3) people-to-facilities. Each of these domains is critical to the needs of geographically distributed collaborative learners. For example, while physicists conceptualize “access to facilities” as the need to collectively view a telescope pointed at the upper atmosphere, Globalization Seminar participants need to have blended (face-to-face and distributed) access to the physical facilities of a seminar room or lecture hall.
Figure 1 Collaboratory domains. //Source//: [|www.scienceofcollaboratories.org]

Future Directions and Trends
This essay has attempted to provide an overview of the interdisciplinary literature relevant to communication technology and cross-national learning. It has anchored this overview in our experiences over the last decade using ICTs to create a distributed collaborative learning environment between the US, South Africa, and other countries around the world. We will end with a few thoughts about the future of computer-mediated communication technology and cross-national learning, particularly with respect to research, theory, and methodology.

Research
From a research perspective, we see two important future directions. One direction is the continued need for empirical studies of long-term cross-national learning teams using computer-mediated communication tools to support their work on tasks that are as realistic as possible. These field studies are needed to complement the growing body of evidence coming from laboratory-based experiments and computer simulations of collaboration. Also, lessons learned from these lab experiments should continue to be integrated into the planning and instrumentation of the field-based studies. A second research direction is to focus more on identifying the factors that influence learning goals in distributed collaborative environments. More research needs to be done on the impact of these computer-mediated communication environments on actual learning objectives.

Theory
Conceptually, more research needs to be conducted not only on the ways in which culture affects virtual teams, but on the ways in which a person “transcends” their cultural background when working in computer-mediated communication environments.

Methodology
From a methodology perspective, we continue to support the idea of taking a mixed-methods approach, blending qualitative, quantitative, and social-network analysis.

Stefan H. Fritsch
==== Subject [|International Studies] » [|International Communication], [|International Organization] ====
==== Key-Topics [|communication], [|governance], [|networks], [|technology] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
In recent decades, international communication has developed into a central issue of global politics, economics, and culture. Although communication has always influenced the development of human consciousness and societies, communication's impact only began to increase on a globally significant level during the mid-nineteenth century. Generally considered one of the central driving forces behind globalization, communication has provided the backbone for deep integration of trade, finance, culture, and so on. Constant invention and innovation, resulting in new information and communication technologies (ICTs), as well as geographic expansion, therefore became the fundamental driving force of systemic transformation in the global political economy, especially during the second half of the twentieth century. The microelectronic or digital revolution since the 1970s, combined with rapidly changing global politico-economic frameworks, has resulted in the next development stage of the capitalist market economy, which has become deeply embedded in the collective memory through terms such as the “knowledgeable society,” the “information age,” the “postindustrial society” or “digital capitalism.” As a result, global information and communication; the underlying infrastructure of large technological systems for its production, distribution, and storage; and related content-delivering industries have all been the focus of growing academic and public discussions for two reasons. First, dynamic business sectors revolving around ICTs are increasingly seen as vital to the economic welfare, security, and cultural reproduction of societies across the world ([|Hanson 2008]). Second, they have been identified as major driving forces behind the fundamental transformation of structures, interaction processes, actors, and relevant policy issues in the global political economy ([|Rosenau 1990]; [|Skolnikoff 1993]; [|Talalay et al. 1997]). 
Therefore it comes as no surprise that scholarly attention to issues of international communication has grown remarkably during the last decades and can generally be characterized by an unusually high degree of interdisciplinarity, involving disciplines like economics, international relations and international political economy, communication and technology studies, sociology, cultural studies, geography, urban studies, history, and law. The goal of this essay is to introduce the reader to the basic theoretical discussions, concepts, actors, structures, processes, and policy issues that together make up the study of international communication.

Knowledge, Information, and Communication
Although the importance of knowledge, information, and communication has been widely accepted in contemporary economic and political thought, their economic function only became the subject of scientific interest from a variety of disciplinary perspectives during the 1930s and 1940s. Until the mid-1960s, knowledge and technology were largely defined as external variables, or as quasi-given ([|Rosenberg 1976]; [|Fagerberg 1994]). This represents an interesting blank spot, taking into consideration that Adam Smith already wrote in 1776, with regard to the importance of education and knowledge, that “man educated at the expense of much labor and time […] may be compared to one of those expensive machines” ([|Smith 1776/2003]). From the 1930s on, economists would begin to investigate the effects of knowledge creation – in the sense of (vocational) training and education and technological innovation – for capital formation and productivity increases ([|Schumpeter 1935]; [|Walsh 1935]). Public and academic discussions revolving around the growing importance of knowledge, information, and their communication often use those terms interchangeably. In order to clarify terms one has to separate knowledge from information. [|Machlup (1962]:7), as one of the first, defines knowledge as “anything that is known by somebody, and the ‘production of knowledge' as any activity by which someone learns of something he has not known before even if others have.” [|Bell (1973]:175) defines knowledge more narrowly as “a set of organized statements of facts or ideas, presenting a reasoned judgment or an experimental result, which is transmitted to others through some communication medium in some systematic form.” [|Porat (1977]:2) defines information similarly as “data that have been organized and communicated.” What transforms knowledge into information is the process of communication, the proactive sharing of information with others. 
During the last two decades, economics has investigated more closely the role of information for business, as well as the development and organization of information markets and the behavior of market actors in rapidly changing technological environments ([|Stigler 1971]; [|Evans and Wurster 2000]). However, economics has always defined information, and its production and distribution through liberal market mechanisms, as neutral. Classical economic theory has several fundamental problems with the analysis of global communication industries and markets, among others “the absence of an adequate quantitative measure of information, and hence, of the value per unit of information” or the fact “that the sharp division between ‘producers' and ‘consumers' necessary for the application of neoclassical efficiency and optimality (or ‘welfare') theorems is inapplicable in the process of production and distribution of knowledge” ([|Parker 1994]:48). The question of who controls and regulates information and communication (channels), and of how the process of information exchange is structured and organized, forms the center of scholarly curiosity in the political economy of communication.

From an Industrial to a Postindustrial Information Society
From the early 1950s on, a growing body of scholarly work from various disciplines began to investigate the role of information and communication in the political economy of empires, states, business enterprises, and individuals. The general observation has been that technologically advanced economies since the late 1960s have been in the process of moving beyond industrial capitalism to information-based economies that will bring profound changes in the form and structure of the economic system, in the sense that information and communication represent major input factors in practically all economic sectors. As [|Melody (1994]:21) observes, “The state of information in the economy has pervasive effects on the workings of the economy generally.” Innis was among the first to investigate the influence of media – from the oral communication of preliterate cultures, through writing and print, to electronic media – on the establishment, stabilization, and reproduction of societies and their politico-economic interactions with others ([|Innis 1950/2007]). In the 1950s and 1960s, economists such as [|Machlup (1962)], [|Boulding (1966)], and [|Drucker (1969)] were among the first to recognize and investigate the growing importance to national economies of information and its production, storage, and distribution through large information and communication networks. They came to the realization that knowledge production and distribution, as well as the large-scale incorporation of information and communication services into industrial production processes, had begun to generate a growing portion of national gross domestic products (GDPs). An ever-increasing number of highly skilled jobs, prominently termed “knowledge worker” ([|Bell 1973]) or “symbolic analyst” ([|Reich 1991]), revolved around the production, dissemination, and management of knowledge, information, and communication. 
As [|Bell (1973]:126) summarizes in his seminal work:

> In preindustrial societies – still the condition of most of the world today – the labor force is engaged overwhelmingly in the extractive industries: mining, fishing, forestry, agriculture. Life is primarily a game against nature. One works with raw muscle power, in inherited ways, and one's sense of the world is conditioned by dependence on the elements – the seasons, the nature of the soil, the amount of water […] Industrial societies […] are goods-producing societies. Life is a game against fabricated nature. The world has become technical and rationalized. The machine predominates, and the rhythms of life are mechanically paced […] A post-industrial society is based on services. Hence, it is a game between persons. What counts is not raw muscle power, or energy, but information.

Bell and others have repeatedly been accused of technological determinism, which perceives technology as the dominant source of sociopolitical and economic conditions and their change. Technology, so their critics say, has been interpreted as an agent of domination and oppression rather than liberation, as the source of the problem rather than the solution. However, crude forms of technological determinism have been rejected by most scholars ([|Smith and Marx 1994]; [|Castells 2000]; [|Kaplan 2004]). The need to analyze the extent to which ICTs and various information and communication-related products and services have changed the structure of national economies generated a number of pathbreaking studies. Seminal works by [|Porat (1977)] and [|Nora and Minc (1980)] defined more precisely what became known as the “information society” and, in extensive quantitative studies, tried to assess the economic contribution of ICTs, and the services produced and distributed through them, to national economies. 
The importance to various economic processes of information and communication as input factors became the center of attention for a growing body of economic literature concerned with the formulation of a new “techno-economic paradigm” ([|Freeman 1987]; [|Dosi et al. 1988]; [|Archibugi and Michie 1997]), which investigated the relationship between information, communication, technological innovation, and the prerequisites for competitiveness in a postindustrial, information-based global political economy. Empirical research during the 1990s investigated the contribution of ICTs to economic growth and development ([|World Bank 1999]; [|Collecchia and Schreyer 2001]; [|OECD 2002]; [|Jalava and Pohjola 2002]; [|Schreyer 2002]), and showed that ICTs not only contribute significantly to economic growth in general and in newly established service (tertiary) sectors, but also improve performance – i.e., productivity – in “old” (secondary/manufacturing) sectors ([|Ark 2001]; [|Pilat and Lee 2001]).

The Commodification of Information and Communication
Aside from the generally growing economic significance of information and communication for the economic development of societies, the commodification of information itself has been the subject of investigation by political economists. First analyzed by classical political economists like Smith and Marx, commodification describes a transformation process by which a product's value is not determined by its use value – i.e., the satisfaction of specific human needs and wants – but instead through the price a product can command in exchange – i.e., its exchange value ([|Smith 1776/2003]; [|Marx 1867/1976]). According to these scholars, commodification is a fundamental characteristic of capitalist development, geared towards the generation of surplus value or profit. With regard to the commodification of information and communication, one can distinguish two cases. In the first, information is the final product. In the second, information is an intermediate component of production. In either case, scholars have scrutinized the means “whereby capitalist social relations are insinuated or accepted into what had earlier been non-capitalist forms” ([|Schiller 2007]:21). As will be shown below, the commodification process has been one of the major trajectories in the political economy of communication, especially as a result of the liberalization and deregulation of information and communication sectors since the late 1970s, which transformed communication from a public into a private good ([|Schiller 1999]). This has resulted in new conceptions of public and private information, in the development of information and communication markets, and in the development of property rights associated with marketable information. 
Scholars influenced by critical theory, primarily, have highlighted the special and particularly powerful character of communication as a commodity: besides its ability to generate profit, “it contains symbols and images whose meaning helps to shape consciousness” ([|Mosco 1996]:147). That is, mass media and ICT industries in capitalist society not only serve the goal of profit generation, but also provide the outlet for (ideological) messages that reflect and advance the interests of capital as a whole and of specific class fractions, and so serve the successful reproduction of capitalism ([|Schiller 1973; 1976; 1984; 1989]; [|Herman and Chomsky 1988]). Ideological and economic aspects, however, cannot and should not be separated; doing so oversimplifies the analysis ([|Garnham 1979; 1990]). Further research investigated the process of “extensive commodification” of previously only lightly touched areas, like public education, government information, media, culture, and telecommunication. This topic has been researched in communication studies ([|Schiller 1989]; [|Garnham 1990]), geography ([|Harvey 1989]), urban studies ([|Davis 1990]), and cultural studies ([|Davis 1986]).

Technology and International Communication
Since various technologies form the basis of international communication, technology studies in recent decades have tried to explain processes of technological invention and innovation and the impact of new ICTs on the architecture of international communication. ICT and media industries from the very beginning have been defined by “creative destruction,” a metaphor for constant technological and organizational innovation. As [|Schumpeter (1942]:83) argued, “Revolutions are not strictly incessant; they occur in discrete rushes which are separated from each other by spans of comparative quiet. The process as a whole works incessantly, however, in the sense that there is always either a revolution or absorption of the results of revolution.” Although the political economic literature on communication often speaks of technological revolution, research in recent decades has pointed to the rather evolutionary aspects of technological innovation. A central concept is the notion of “path dependence”: technological progress often builds on previous inventions and innovations, which predetermine, to some extent, the next steps ([|Rosenberg 1994]). Innovation studies have developed explanatory models that differentiate between market-pull (demand) and technology-push in order to understand which factors contribute to basic research as well as to the development and market introduction of new technologies ([|Mansfield 1974]; [|Gilpin 1975]). Understood as a loose chain of innovations, the technological evolution of ICTs can be divided into several development stages: (1) telegraph, (2) telephone, (3) wireless telegraphy (later radio or broadcasting), (4) television, (5) geosynchronous satellites, (6) computers (including personal computers or PCs), (7) fiber-optic cables, (8) the internet, and (9) mobile communication (cell phones, wireless internet). 
Some of these inventions or innovations happened almost simultaneously; others clearly developed on the basis of previous technologies. They all had particular development histories and were introduced in specific historical and economic situations ([|Noble 1977]; [|Aitken 1985]; [|Wasserman 1985]; [|Douglas 1987]; [|Hughes 1989]; [|Saxby 1990]). However, they all share some basic characteristics, which came to define ICTs and the broader structures of contemporary international communications. First, they all share the character of large technological networks. Information and communication networks are basically open systems. They tend to grow, because the larger the network – i.e., the more nodal points or participants the system contains – the larger the economies of scale, defined as lower costs per unit of information, and the greater the usefulness for the network participants in terms of connectivity with other participants. That is, transaction costs decline with the growing size of a network. This observation has been named Metcalfe's law after its originator, Robert Metcalfe. Following the logic of path dependence, authors have pointed to so-called lock-in effects or “positive network externalities” that represent another characteristic of large information and communication networks ([|Rohlfs 1974]; [|Oren and Smith 1981]). Since the usefulness of a network for participants grows with its size, the economic incentives for leaving a network decrease as more users join it, because leaving the network could result in higher transaction costs or the loss of specific product applications. This observation has also been made for various electronic consumer products and the technological standards they are based on, and it explains why the role of technological standards is so central to market success ([|Katz and Shapiro 1985; 1986; 1994]). 
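The network economics described here can be illustrated numerically. The following is a minimal sketch, not drawn from the works cited: the quadratic value function is the textbook reading of Metcalfe's law (usefulness grows roughly with the number of possible pairwise links), and the flat-fixed-cost model is a deliberate simplification of the economies of scale mentioned above.

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise links in a network of n participants.

    Metcalfe's law takes a network's usefulness to grow roughly with
    this quantity, n * (n - 1) / 2, i.e. with the square of its size.
    """
    return n * (n - 1) // 2


def cost_per_participant(n: int, fixed_cost: float) -> float:
    """Illustrative economy of scale: a fixed infrastructure cost
    spread over n participants falls as the network grows."""
    return fixed_cost / n


# Growing the network tenfold multiplies the possible links roughly a
# hundredfold while dividing the per-participant share of fixed costs by ten.
for n in (10, 100, 1000):
    print(n, metcalfe_value(n), cost_per_participant(n, 1_000_000.0))
```

This also makes the lock-in logic concrete: a participant leaving a large network gives up far more potential connections than one leaving a small network, while rejoining an alternative network means paying its fixed costs across fewer members.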
Second, since the 1960s the microelectronic or digital revolution has helped to overcome the former separation of ICTs into different media (telephone, internet, PC, TV, radio) and information sectors (language, text, pictures, data, video) through the introduction of computer technology, fiber-optic cable, and communication satellites into existing communication infrastructures. This new technology, based on the encoding of information in binary digits, largely replaced the previous technologies (which were based on analogue electrical wave technology) and allowed for convergence between formerly separate categories of information industries and distribution channels ([|Negroponte 1996]). Higher speed and reliability of information distribution, as well as constant growth of data volume through expanded network capacity (bandwidth), caused dramatic price reductions in global communication ([|Singh 2002]) and enabled new applications and information services. The accelerated speed of innovation in software and computer chip technology has led to Moore's law, which predicts that the capacity of microprocessors and memory devices doubles roughly every eighteen months while the price per operation stays the same ([|Moore 1996]). Third, networked digital information and communication technologies are characterized by high fixed costs, such as investments in infrastructure or other technical components such as microchips, and low variable costs, such as the addition of new network participants or the copying of music CDs from an original digital recording. That is, use by any number of parties does not degrade a digital artifact's quality ([|Mowery 1996]). This explains why such networks have to grow as much as possible in order to amortize the high initial investments and lower the costs for each participant. 
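Both regularities in the paragraph above reduce to simple arithmetic. The sketch below (all figures hypothetical, chosen only for illustration) projects capacity growth under an eighteen-month doubling period, and shows how a high fixed cost amortized over many users, plus a small marginal cost per user, drives the average cost per participant down as the network grows.

```python
def moore_capacity(base: float, years: float, doubling_period_years: float = 1.5) -> float:
    """Projected capacity after `years`, doubling every `doubling_period_years` (Moore's law)."""
    return base * 2 ** (years / doubling_period_years)

def average_cost(fixed: float, marginal: float, users: int) -> float:
    """Average cost per user: fixed costs amortized over all users plus the marginal cost."""
    return fixed / users + marginal

# Ten doubling periods (fifteen years at eighteen months each) yield 2**10 = 1024x capacity.
print(moore_capacity(1.0, 15.0))  # 1024.0
# As the user base grows, the average cost falls toward the (low) marginal cost.
print(average_cost(10_000_000, 0.5, 1_000_000))   # 10.5
print(average_cost(10_000_000, 0.5, 10_000_000))  # 1.5
```

The second pair of calls makes the amortization point: a tenfold increase in users cuts the average cost from 10.5 to 1.5 without the underlying infrastructure changing at all.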
Fourth, another major policy issue in global communication is the protection of intellectual property and the centrality of technological standards, both of which are directly related to the previous issue. Since knowledge represents the core product of ICT and media content industries, both aim for strong intellectual property laws to protect their initial investments in technological standards and content. The regulation of intellectual property protection has primarily been a matter of national jurisdiction. However, as will be shown below, corporate lobbying of governments has resulted in a growing number of bilateral and multilateral agreements and regimes for the global protection of intellectual property ([|Leaffer 1991]; [|West 1995]). This explains the rising number of lawsuits revolving around issues of patent or intellectual property infringement and the growing importance of these issues on the national and global levels ([|Clapes 1993]; [|Moore 1997]). However, in order to remain at the cutting edge of technological innovation without losing proprietary knowledge or market share, companies often apply the strategy of open, yet owned, standards. They pursue a strategy of limited licensing of architectural standards to other manufacturers in order to increase equipment or software interoperability and create positive network externalities ([|Borrus and Zysman 1997]).

Individuals as Commodities, Consumers, and Political Actors
A growing number of scholars have analyzed the role of individuals in the political economy of communication and, in a broader sense, in international relations. One aspect that has generated a number of studies revolves around individuals' important role as consumers of global communication services and media content. From this point of view, the audience of the message or the consumer of a communication service has been analyzed as part of the broader commodification process ([|Garnham 1979]; [|Mosco 1996]). On the demand side, consumers decide on their media consumption taking into account budgetary and time constraints, ideological preferences, and individual characteristics. On the supply side, media corporations choose story content, format, and attributes to maximize the number of readers, viewers, or listeners, and therefore profit. The role of the audience has generated one of the most interesting discussions of the commodification process, revolving around the character of the audience as either commodity or labor. This discussion helped to highlight the reciprocal relationship between media, the advertising industry, and the audience in the broader context of media commodification ([|Smythe 1977; 1978]; [|Murdock 1978]; [|Jhally 1990]). It added new insights into the audience's role, going beyond earlier mass society theories ([|Gasset 1957]), which conceived of the audience as an inert mass, as well as pluralist positions, which went to the opposite extreme by emphasizing the audience's role as coproducer of media content ([|Fiske 1989]; [|Ang 1991]). The latter, however, overestimated the audience's level of control over the media production process. A second group of scholars, primarily from international relations, became interested in individuals and their usage of information and communication, focusing instead on their political empowerment through new ICTs like television, videocassettes, the internet, or mobile communication. 
These “technologies of freedom” ([|Pool 1983]) enable citizens to gather more information outside state- or corporate-controlled media outlets and lead to a pluralization of content production and distribution, causing a “skill revolution” through which individuals are better informed about local and global issues and increasingly able to organize with like-minded people around the world in order to channel their demands, for example in the form of nongovernmental organizations (NGOs) ([|Rosenau and Fagan 1997]; [|Rosenau 2003; 2007]). However, recent research has put more emphasis on the difficulties for individuals to gather reliable information in a global media system. The central problem is related to the question of media ownership and the selective and biased provision of information to media consumers. Explicit attention has been paid to the relationship between media ownership and production, media bias, and the effects of media reports on public awareness regarding specific policy issues ([|McCluskey and Swinnen 2004]; [|Strömberg 2004]; [|Swinnen and Francken 2006]).

The Post-Westphalian State
States, as one of the main actors in the global political economy of communication, have been extensively analyzed in the last few decades. Until the invention of the printing press with moveable type by Johann Gutenberg, the production, storage, and dissemination of information in Latin through a European network of seminaries and monasteries were largely controlled by the Catholic Church ([|Dudley 1991]). The cheap mass production of books in common languages not only challenged the monopoly of the church, but also made the reduction of language pluralism as well as the formation of national identities much easier ([|Eisenstein 1979]; [|Anderson 1991]). Secular rulers quickly realized the usefulness of this new medium to extend their control over citizens and strengthen state authority. The following centuries would further strengthen the state as a central organizational agent of information and communication. ICTs and global communication take a central role in discussions about the role of the state in the contemporary global political economy, since they have generally been identified as the main drivers of the process of globalization, defined as growing complex interdependence between individuals, national economies, and corporations ([|Morse 1976]; [|Keohane and Nye 1998; 2001]; [|Rogerson 2000]). A vital discussion revolves around the question of system modification versus transformation by ICTs. Globalization sceptics, mainly represented by state-based realism or neorealism, doubt that technology-driven globalization has transformed the structures and processes of the global system and reject the observation that states' sovereignty – i.e., their autonomous ability to control economic processes – has eroded and is increasingly shared with various international organizations and nonstate actors on multiple levels ([|Weiss 1998]; [|Hirst and Thompson 1999]; [|Rugman 2001]; [|Gilpin 2002]). 
Hyperglobalists have emphasized the dramatic transformation processes in the global system driven by technological progress, mainly new ICTs, which undermine the system-building and system-stabilizing principles of political sovereignty: territoriality and autonomy have been significantly eroded ([|Strange 1994; 1995; 1996]), thereby making it necessary to search for new modes of political organization ([|Ohmae 1993; 1994]). According to a third perspective, which might be called transformationalist, globalization processes further the establishment of newer, as well as the re-strengthening of older, spheres of authority or polities, which cause increasingly complex structures on the global level ([|Held et al. 1999]; [|Rosenau 2003]; [|Ferguson and Mansbach 2004]; [|Held and McGrew 2007]). The state-centric world, represented by the “Westphalian” state, is enriched by a multicentric world consisting of actors who are less bound by sovereignty or territory, but rather organize along the logic of global networks in which concepts of time and space are radically modified ([|Innis 1950/2007; 1964]; [|Carnoy and Castells 2001]). In some aspects the resulting patterns of global politics resemble earlier forms of political organization during the Middle Ages, with complex and overlapping patterns of politicoeconomic governance ([|Friedrichs 2001]; [|Slaughter 2004]). For example, ICTs have contributed to two parallel developments in the global system, namely integration and fragmentation, resulting in the paradox of fragmegration ([|McPhail 1989]; [|Rosenau 1997]).

The Post-industrial Competition State
Most observers, though, agree on the growing relevance of information and communication-related issues for state survival in a broad sense. With regard to the relationship between power/influence and information and communication, [|Strange (1994)] developed the concept of structural power, which emphasizes the importance for states of controlling knowledge generation, storage, and distribution in order to stay at the cutting edge of technological innovation and competitiveness. Another form of power, which gains more importance in the knowledge society, is soft power or cultural-communicative power, defined as a state's ability to disseminate its cultural, societal, ideological, and political core values into other countries so that those societies accept and internalize them ([|Nye 1990]). Instead of classical power categories such as military power, new power categories such as patents, technological standards, size and capacity of communication networks (broadband width, PCs per capita, number of cell phones or landlines, etc.), and levels of interconnectivity between states become vital for measuring state power in a globalized knowledge economy ([|Porter 1990]). Scholars from diverse theoretical backgrounds agree that the role of states in the governance of global information and communication has changed dramatically over the last 150 years, a transformation that has generated a large body of scholarly literature. States have played a central role in the creation of all major ICTs – the telegraph, radio, telephone, satellite technology, internet, and mobile communication ([|Deibert 1997]; [|Abbate 1999]; [|Hanson 2008]) – as well as in their domestic and global regulation. They were responsible for extending them globally, e.g. to improve control over colonial territories and to further their national-security and economic interests ([|Headrick 1988]; [|Hugill 1999]). 
Importantly, though, state bureaucracies often controlled ICTs directly in the form of public postal, telegraph, and telephone administrations (PTTs), while in the US, for example, the government allowed AT&T to develop the national telephone system on the basis of a private monopoly.

The Liberalization of International Communication
State control of information and communication networks in the form of universal public services persisted in most states until the 1970s as part of the Keynesian welfare state and embedded liberalism, which tried to combine liberal global market principles with domestic welfare systems to absorb external shocks ([|Ruggie 1982]). From the 1970s on, liberalization policies originating in the US and Great Britain incrementally spread throughout the world and caused a fundamental reorganization of global information and communication along liberal market principles. The consequences for states have been far-reaching. Direct state control of and intervention in markets was replaced by a focus on structuring liberal regulatory frameworks within which market transactions take place. As competition states ([|Cerny 1990]), they try to attract foreign direct investment (FDI) through investments in education, infrastructure, and research and development capacities, and so increase their world market share ([|Stopford and Strange 1991]; [|Rosecrance 1996]; [|Lawton et al. 2000]), or they merge with other states to form larger politicoeconomic units in order to regain control ([|Castells 2000]). In these transformation processes, the state has always played a double role. As [|Mosco (1996]:200) states, there is “certainly ample evidence to support the view that the contemporary state has reacted to changes in corporate and industry structure, as well as to changes in technologies and services. Nevertheless, there is also support for the view that these changes have come about with the active legal, regulatory, and policy directions of the state.” This has also been the main topic of another school of thought, the neo-Marxist regulation school, which tries to overcome the (in its view) fruitless discussion about the loss of state power in a globalized economy. 
For regulation scholars, the “policy debate over deregulation is disingenuous at best, because deregulation is not an alternative. Rather, the debate comes down to the choice among a mix of forms that foreground the market, the state, or interests that lie outside of both. Eliminating government regulation is not deregulation but, most likely, expanding market regulation” ([|Mosco 1996]:201). Successive developmental periods in capitalism are rather based on specific combinations of regimes of accumulation and modes of regulation. According to this view, capitalism – and with it information and communication – is undergoing a transition from monopolistic to flexible regulation ([|Boyer 1986]; [|Lipietz 1988]; [|Bowles et al. 1990]; [|Jessop 1990]).

Technology, Communication, and Structural Adjustments
Besides states, the global political economy of communication has most deeply been affected by the activities of multinational corporations (MNCs). A vast literature has analyzed MNCs and their role as drivers of global FDI, intra- and interindustry trade, technological innovation, and the development of transnational production and distribution structures ([|Dunning 1974]; [|Vernon 1977]; [|Daniels and Lever 1996]; [|Caves 2007]). The MNC itself has been described as a result of industrial modernization processes, which required new organizational structures to effectively manage processes of mass production and distribution by eliminating problems such as imperfect market information and market insecurities ([|Coase 1937]; [|Chandler 1977]; [|Beniger 1986]). MNCs have been of central importance to the development of global information and communication industries as (1) producers of telecommunication equipment and information and communication networks ([|Chandler 2001]), (2) producers and distributors of media content ([|Mosco 1996]) and (3) large-scale consumers of various information and communication services ([|Charles 1996]; [|Junne 1997]). The microelectronic revolution of the 1970s and accelerated technological innovations have resulted in a dramatic fall of prices for international communication services, which enabled MNCs to reorganize their economic activities on a global scale and thereby improve their productivity. Those productivity increases were largely based upon a transformation of industrial production processes from vertically integrated Fordist modes of standardized mass production to transnational production networks based on “flexible specialization” ([|Sabel 1982]; [|Piore and Sabel 1984]; [|Sabel and Zeitlin 1985]). ICTs thereby enabled companies to respond rapidly to changing consumer demand and concentrate on their core business ([|Harrison 1994]).

Corporate Control of Global Communication
Much attention has been paid to MNCs, especially in the communication, media, and high-technology industries, as main drivers of the liberalization and privatization of domestic and global communication since the 1970s ([|McChesney 2008]). Historically, as [|Smith (1991)] and [|Tunstall (1977)] have demonstrated, transnational media enterprises are as old as the mass media themselves. Thus, the production and distribution of news in the nineteenth century was controlled by three press conglomerates – the British Reuters, the French Havas, and the German Wolff – which divided global markets into monopoly zones that kept out competition, at least temporarily ([|Cooper 1942]; [|Schiller 1976]). The same can be said for manufacturers of telecommunication equipment and consumer electronics, which often were protected by their special relationships with national governments, either through direct state ownership or as “national champions” that became the backbone of innovation and competition policies ([|Peterson 1993]; [|Peterson and Sharp 1998]). Other studies have pointed to the dominant role of corporate demand for the development of wireless signal transmission (radio telegraphy) and telephone services ([|Herring and Cross 1936]; [|Dilts 1941]; [|Edelmann 1950]; [|Smythe 1957]). The increasingly private and global (domestic and international) corporate control of communication and media industries, however, also nurtured fears of reduced public access to information and communication networks and services, and of a diminished ability to cross-subsidize between local and international services in order to lower the prices of local service access, a main goal of state policies during the era of state-owned or state-controlled communication systems ([|Stone 1993]; [|McPhail 2006]; [|McChesney 2007]). 
Various studies ([|Mansell 1993]; [|Mowlana 1997]; [|Wilkin 2001]) investigated the primacy of ICT service provision for large corporate information consumers over the provision of universal and cheap access to communication for individual or noncorporate consumers, which together with corporate control over communication networks “translates into less societal control and reduced democratic accountability” ([|Schiller 2007]:96). Opinions regarding MNCs' position in and impact on the global political economy of communication range widely. Some scholars point to the positive effects of higher efficiency and the potentially lower costs for consumers as a result of intense competition between corporate market participants ([|Thurow 1992]; [|Ohmae 1994; 2000]). Others, however, point to various problematic consequences of market concentration. According to still others, the formation of global oligopolistic market structures in various communication and media markets, consisting of a relatively small number of global MNCs (e.g., Microsoft, Intel, Disney, Time Warner, Sony, Vivendi Universal, Bertelsmann, NTT, Vodafone, Telefonica, T-Mobile, etc.), actually reduces market competition, as foreseen in neoclassical models. The merger mania of the mid- to late 1990s in the telecommunication and media industries was a clear signal of market and capital concentration, which resulted in ever larger and more powerful MNCs or “behemoths” ([|Smith 1991]; [|Korten 2001]). ICTs provided MNCs with unprecedented opportunities to restructure their activities globally. Outsourcing and offshoring labor-intensive production processes allowed them to focus on their core business. Hierarchical structures emphasizing vertical integration have given way to flat, horizontal, and globally integrated network structures consisting of many value-generating entities within larger profit-generating networks ([|Casson 1990]; [|Phillips 2000]; [|Scholte 2000]). 
ICTs thereby enable MNCs to make use of time arbitrage, defined as the exploitation of “time discrepancies between geographic labour markets to make a profit” ([|Nadeem 2009]:21), as well as lower labor costs or geographically concentrated technological competence in the form of innovation clusters ([|Saxenian 1994]; [|Koski et al. 2002]). Another strand of research has more specifically focused on this concentration of technological, communicative, and politicoeconomic activity in urban centers or “global cities,” and the consequences in terms of growing inequalities between rural and urban and between global core and peripheral regions ([|Sassen 2001]; [|Neal 2008]). Despite the often differing opinions regarding the effects of MNCs for the global political economy of communication, as for other sectors of the global economy, most scholars agree that the sociopolitical struggles over their role and impact will most likely increase ([|Vernon 1998]; [|Chandler and Mazlish 2005]).

Transforming the Global Governance of International Communication
International Governmental Organizations (IGOs) or functional organizations help states to manage complex interdependence. Since international communication is one of the driving forces behind globalization, defined as growing interdependence, scholarly attention has regularly turned to IGOs as the global arena for negotiations concerning issues of international communication. The International Telegraph Union (1865) and the World Radiotelegraph Conference (1906) merged in 1932 into the International Telecommunication Union (ITU). The ITU became one of the cornerstones in the global governance of international telecommunication in all its aspects, charged with ensuring the openness and interoperability of national telecommunication systems ([|Codding 1972]; [|Codding and Rutkowski 1982]; [|Savage 1989]). Between 1865 and the 1970s, the global regulation of telecommunication was primarily driven by states and geared toward the provision of universal (public) service and cross-financing between international and domestic telephony and other services. The primary tasks of the ITU were the regulation of technical standards, market-entry rules, and prices ([|Zacher 1996]). As [|Cowhey (1990]:169) describes, “Like other service industries, telecommunications was traditionally oriented toward domestic markets, and competition in both services and equipment was limited. There were three important rationales for the system: it would increase reliability in the performance of tasks central to the public order (such as the provision of communications), would tap economies of scale or scope in the provision of services (the network thesis), and would advance considerations of equity expressed in the ideal of universal service.” While the regulation and adaptation of national or international technological standards in the telecommunications sector was relatively easy, technological standard harmonization for consumer products remained difficult ([|Zacher 2002]). 
Various studies have investigated so-called “standard wars” in various areas of consumer electronics (video, TV), PCs, or mobile communication ([|Dai et al. 1996]; [|Chandler 2001]; [|Gandal et al. 2003]; [|Steinbock 2003]). It is interesting to note that with regard to technological standard harmonization, the ITU increasingly delegates the negotiations to either regional regulatory bodies or even sectoral members (MNCs), who have to balance tensions between standardization and competition. This means that standard setting increasingly happens in a bottom-up mode, driven by markets and consumer demand, instead of the more traditional state-dominated top-down process ([|Salter 1999]). Another major interest for scholars has been the increasing liberalization and privatization of national and global communication networks, which has also affected the ITU and other organizations. Beginning in the USA during the 1960s, the introduction of digital telecommunication technologies (packet-switching), their convergence with computers, and the resulting cost-saving effects, together with increasing market competition, offered corporations and various business alliances arguments for a continuous loosening of state regulations that further blurred the lines between voice and data transmission and between local, national, and intra- and inter-firm networks ([|Krasner 1991]; [|Schiller 1999]). Finally, these liberalization and privatization efforts were brought to the international level within the Uruguay Round negotiations that established the World Trade Organization (WTO). The General Agreement on Trade in Services (GATS) and several additional agreements resulted in a far-reaching liberalization of domestic and global telecommunication markets ([|Aronson and Cowhey 1988]; [|Drake and Noam 1997]; [|Fredebeul-Krein and Freytag 1997]). 
Besides the developments described above, in which states and IGOs remained – in a formal sense – in the center of the policymaking and negotiation process, although heavily lobbied by corporate interests, scholars have also paid increasing attention to governance issues that do not directly involve states or functional organizations. Areas of interest have been the increasing dominance of strategic alliances between MNCs in pre-market research and development (R & D) and other business areas ([|Sharp 1997]; [|Inkpen 2001]), the creation of networked “knowledge oligopolies” ([|Mytelka and Delapierre 1999]), and the process of technological standard setting, as for example the alliance between Microsoft and Intel, also described as “Wintelism” ([|Kim and Hart 2002]). All these processes represent new modes of “governance without government” ([|Rosenau and Czempiel 1992]).

Future Issues and Research
Recent scholarly research has addressed several issues that will most likely become more salient in the near future. One revolves around the growing tensions between the concept of intellectual property protection and that of freedom of information/public access, and the integration of selective or temporary access/ownership into the technological architecture of communication networks and consumer products ([|Lessig 1999; 2001; 2004]). Another problem likely to become more important is that of “digital gaps” between information-rich and information-poor people, states, and regions, and how information technologies, or their lack, affect human development. This is not only a specific aspect of the larger north–south problematic, but also of growing domestic concern for industrialized countries ([|Persaud 2001]; [|UNDP 2001]). Finally, the question of democratic control of global governance structures and processes in the field of information and communication has been the focus of research activities. The question is how the tensions between the increasing demand for global governance solutions, caused by complex interdependence, and the democratic control of those solutions can be resolved. This is a special aspect of the larger globalization debate and of the impact of ICTs on globalization. Research suggests that ICTs have a double character: they can lead to greater homogenization, but at the same time provide the means to conserve surprising cultural and political diversity ([|Held et al. 1999]; [|Hanson 2008]).

Gisela Gil-Egui
==== Subject [|International Studies] » [|International Communication] ==== ==== Key-Topics [|governance], [|information], [|information and communication technology (ict)], [|innovation] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
“E-government” (also known as “digital government” or “online government”) refers to a set of public administration and governance goals and practices involving information and communication technologies (ICTs), especially, but not exclusively, the internet. Although the integration of ICTs in the operations of public agencies can be traced back to the late nineteenth century (e.g., with the adoption of tabulating machines for data management in the US census of 1890), the notion of e-government denotes an “outward” orientation in the use of such technologies, in order to serve public agencies' external audiences and constituents ([|Coursey and Norris 2008]). However, the scope of that service is the object of much debate and, consequently, no consensual definition of e-government had been formulated at the time of this publication. For example, the World Bank offers an instrumental definition of e-government as “the use by government agencies of information technologies (such as Wide Area Networks, the Internet, and mobile computing) that have the ability to transform relations with citizens, businesses, and other arms of government” (World Bank n.d., para. 1). Conversely, the United Nations has framed the notion of e-government in different terms through successive reports on the matter, whether as delivery of governmental information and services through the internet and the World Wide Web ([|United Nations 2002]); as the creation of public value resulting from governments' harnessing of information technologies ([|United Nations 2003]); as the promotion of economic and human development resulting from governments' effective deployment of automated public services ([|United Nations 2004]); as a tool for inclusion and empowerment, especially in the case of disenfranchised populations ([|United Nations 2005]); or as a resource for knowledge management in the public sector ([|United Nations 2008]). 
Therefore, functions sought and/or performed through e-government range from provision of basic information on a 24/7 basis, to the facilitation of transactions with public agencies, to the promotion of transparency and accountability in the public sector, to the opening of channels for citizen participation, to the empowerment and inclusion of a plurality of stakeholders in shared governance and collective decision making.

Early History: Technological Imperatives
Coined in the mid-1990s, amidst a wave of public administration reforms taking place in the US ([|Gore 1993]; [|National Performance Review 1993]; [|National Science Foundation 1999]) and other developed nations ([|Osborne and Gaebler 1992]), as well as amidst the boom of online activity generated by the emergence of the World Wide Web, the notion of e-government as equivalent to better government, economic growth, human development, and, in general, the knowledge society, was quickly and rather uncritically embraced by practitioners and scholars alike ([|Heeks and Bailur 2007]). Observers such as [|Gabberty and Vambery (2007)] and [|Dawes (2008)] argue that many local and national public administrations embarked on projects for the automation of both back-end and front-end operations of their agencies, under technologically deterministic assumptions about procedural and organizational reengineering caused by the mere adoption of ICTs, which would in turn lead to higher efficiencies and efficacies. Meanwhile, academic explorations on e-government, particularly in the fields of computer and information science, focused initially on assessing information processing deficits; developing customized applications; and formulating hypotheses on the economic, administrative, and political potentials of ICTs applied to governance (see, for example, [|Dunleavy and Margetts 2000]; [|Gant and Gant 2001]; [|Warkentin et al. 2002]). In this sense the prehistory of e-government resonates, to some degree, with assumptions from the school of thought in public administration known as “new public management” (NPM), which proposed, from the late 1980s, a restructuring of governmental agencies by adopting a market-based approach intended to ensure cost efficiencies in the public sector ([|Dunleavy et al. 2006]). 
Similarly, a number of early studies on e-government presumed, frequently without sufficient empirical substantiation, direct or causal relations between ICTs, more efficient and streamlined government, and even enhanced democracy (e.g., [|Stowers 1999]; [|Baum and Di Maio 2000]; [|Ho 2002]; [|Thomas and Streib 2003]).

Growth and Diversification
By the early 2000s, research on e-government began to show some diversification, as scholars from different disciplines, including politics, communication, and sociology, paid increasing attention to the intersections of structural factors, hardware, and culture in the adoption and use of ICTs (see, for example, [|Cresswell and Pardo 2001]; [|Welch and Wong 2001]; [|Shi 2002]). Additionally, case studies and showcases of best practices multiplied to the point of building a highly diverse, international, and multilevel base for the literature on the subject (e.g., [|Cullen and Houghton 2000]; [|Thompson 2002]; [|Golden et al. 2003]; [|Timonen and O'Donnell 2003]; [|Barnes and Vidgen 2004]; [|Bhatnagar 2004]; [|Cho and Choi 2004]; [|Zhou 2004]). It is also at this point that a number of individual and organizational authors began formulating one-off and periodic assessments of e-government initiatives, gradually introducing parameters for benchmarking in the field, such as those produced by the United Nations, the Information Society Initiative of the European Commission, the Organization for Economic Cooperation and Development, [|Steyaert (2004)], and West (from 2001 to present). Noteworthy in this regard is the first meta-analysis of e-government assessment, produced by [|Janssen et al. (2004)], which evaluated the potential impact of external benchmarking and comparative studies of performance on countries' policies on the matter. By the dawn of the twenty-first century, the number of e-government websites from local and national administrations had grown sufficiently to allow some generalizations based on empirical observation. Several authors thus proposed sequences of evolutionary stages in attempts to model the process of e-government ([|Layne and Lee 2001]; [|Koh and Prybutok 2003]; [|Reddick 2004], among others). 
According to most accounts in this regard, governments will evolve from a static presence on the internet to more interactive and transactional features, and then to seamless integration between online and offline realms and across agencies, leading to growing citizen participation and, eventually, to some ideal situation of e-democracy. A problem with these models of e-government development is that they frequently move from description to prescription without clearly explaining such a transition. [|Coursey and Norris (2008)] note that while the initial stages of information provision and interactivity can be empirically verified, later stages constitute more normative scenarios than necessary outcomes within a supposedly linear progression in the development of e-government. In these authors' view, the models not only fail to explain how the transition to the final stages of evolution takes place (overlooking questions related to the financial, organizational, technological, and other challenges that e-government implementation faces in different contexts), but also ignore the fact that in recent years some developing countries venturing into e-government for the first time have been able to learn from other nations' experiences and, therefore, leapfrog into intermediate stages of the process without going through the initial ones. By the mid-2000s, concerns with the “demand side” of e-government – that is, with individual, organizational, and collective users – came to inform a significant portion of the scholarly and applied production in the field. This complemented the first wave of publications on the subject, which had focused mostly on “supply side” issues, such as the development of applications, the streamlining of procedures and, in general, unilateral determinations of the needs and purposes to be served through the automation of governmental functions. 
The new wave of e-government, in contrast, takes on deeper explorations of government-to-citizens (G2C) ([|Carter and Bélanger 2005]; [|Sweeney 2006]), government-to-business (G2B) ([|Scholl 2003]), and government-to-government (G2G) ([|Joia 2004]; [|Iyer et al. 2006]) interactions. In its attempt at gaining a better understanding of the different audiences and constituencies involved in the provision of electronic public services, this production explores institutional and cultural factors affecting users' engagements with both the technology and content of e-government ([|Shelley et al. 2004]; [|Sipior and Ward 2005]).

Critical and Comprehensive Approaches
Echoing arguments from the latest literature on the digital divide, this approach to e-government frequently adopts a critical stance to denounce oversimplifications, determinisms, and omissions in the formulation of e-governance projects, as well as in the evaluation and adoption of e-government and the assessment of its effectiveness ([|Rennie 2006]; [|Bekkers and Homburg 2007]; [|Bolgherini 2007]). Indeed, while studies on the digital divide seem to have arrived, by the mid-2000s, at some epistemological consensus on the need to approach their subject from a more comprehensive perspective that transcends the mere availability of hardware, the production on e-government during the same period moves towards valuing interdisciplinary inquiries and “big picture” analyses. Such a holistic perspective transcends the immediacies of the top-down e-government proposals that had characterized the field until then, in order to consider context, users' needs, and social constructions of technology. The increasing diversity and comprehensiveness observed in the e-government literature does not, however, imply a disregard for specificity. Rather, it means that explorations of even the most particular questions on e-government are recognizing the complexity of the subject and, consequently, acknowledging the importance of considering a plurality of sources, methodologies, and/or approaches. Some specific issues related to e-government that have been examined through “multifocal lenses” in recent years include the digital inclusion of disenfranchised populations ([|European Commission 2007]), tensions between e-government expansion and individual privacy ([|Tolley and Mundy 2009]), the funding of e-government ([|Reeder and Pandy 2005]; [|Wild and Griggs 2006]), transparency and accountability ([|Pina et al. 2007]), e-procurement ([|Kumar and Peng 2006]), and the protection of the security of data and the integrity of information systems in the face of hacking and “cyberterrorism” ([|Alfawaz et al. 2008]; [|Smith and Jamieson 2006]).

Seeking Theoretical Foundations while Keeping up with New ICTs
Today, a survey of the first decade or so of publications explicitly including the term “e-government” as a keyword reveals that scholarly and applied production on online and digital governance has grown to define a new and discrete field of study, with specialized periodicals (e.g., //Journal of E-Government, Electronic Journal of e-Government, Electronic Government, International Journal of Electronic Government Research//), dedicated fora (International Conference on eGovernment and eGovernance, International EGOV Conference, Service-Oriented Architecture for E-Government, iGOV Global Exchange, European Conference on eGovernment), recognized experts, and even competing schools of thought. Theoretically and methodologically speaking, though, this field reflects the immaturity and contradictions of a nascent domain that is still in the process of building its epistemological foundations. As noted in some of the first inventories of e-government literature, conducted by [|Andersen and Henriksen (2005)], [|Bertucci and Szeremeta (2005)], [|Grönlund (2005)], [|Heeks and Bailur (2007)], [|Norris and Lloyd (2006)], and [|Ridley (2008)], only a small portion of studies have ventured into theory testing or theory development. Moreover, queries on e-government conducted in major academic databases and in major commercial search engines retrieve mostly studies on e-government generated in the US and in English, thus evidencing deficits in the global circulation of relevant knowledge on the subject. Gaps affecting expert interaction and information sharing persist not only between scholars and practitioners, but also between developed and developing regions, particularly Africa and Latin America. The future, nevertheless, looks promising for the field of e-government, as production on the subject keeps growing exponentially, involving authors from a diversity of disciplines. 
The potential impact of new technological advances on the public sector (e.g., third and fourth generations of mobile telephony, versatile personal digital assistants (PDAs), Web 2.0, and Geographical Information Systems (GIS)) is already being gauged, consequently informing new paths of research and application development, such as that of “m-government” ([|Trimi and Sheng 2008]; [|Vincent and Harris 2008]). Yet beyond the particularities of each emerging technology, reflection on the intersections between ICTs and government is moving away from an exclusive focus on hardware or on functionality, to ponder broader questions on governance. In other words, as ICTs get embedded into the core operations of public agencies, understanding their long-term perils and possibilities becomes less a matter of the technologies themselves, and more a matter of people, organizations, institutions, culture, and the historical circumstances surrounding them – less about “e-” or any other defining prefix or infrastructure, and more about //government// and //governance//, in the broadest senses of the terms.

Craig Hayden
==== Subject [|Culture] » [|Popular Culture] [|International Studies] » [|International Communication] ==== ==== Key-Topics [|communication], [|networks], [|propaganda], [|representation] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Entertainment technologies are increasingly relevant to international studies. The rapid proliferation of their consumption, their growth as a global industry, and the ways in which they have been utilized by international actors for political purposes reveal a growing significance for scholars of international studies. The term “entertainment technologies” does carry connotations of mass communication and news media technologies such as radio, television, and the internet. To avoid conceptual overlap with traditional academic literatures in political communication and media studies, entertainment technologies are discussed in this essay as those forms of media communication that are primarily purposed to provide forms of play, fantasy, and other forms of recreation. Entertainment technologies are presented both as vehicles for content and as modes of social interaction. The significance of entertainment technology for international studies is evident in multiple studies across fields relevant to international studies – such as communication, media studies, geography, critical studies, and related subfields. From [|James Der Derian's (2001)] pathbreaking work on video games and the simulation of war to [|Marwan Kraidy's (2007)] analysis of reality television in the Arab world, international studies scholars have explored how these technologies are a factor in a wide array of international phenomena. The kinds of entertainment technology discussed in this essay include video games, virtual worlds and online role-playing games, recreational social networking technologies, and, to a lesser degree, traditional mass communication outlets. As studies evidence, the impact of entertainment technologies is often visible at the intersection of “traditional” international relations concerns, such as national security, political economy, and the relation of citizens to the nation-state, and new modes of transnational identity and social action. 
Thus the study of entertainment technologies in the context of international studies is often interdisciplinary – both in method and in theoretical framework. Scholarly focus on the significance of entertainment technologies draws attention to the ways in which such technologies are present or otherwise implicated in broader social and political formulations. It should be noted that much of the scholarly attention to entertainment technologies related to international studies is critical in nature, and often draws from perspectives in political economy and media studies. Much of the early academic work related to entertainment technologies in the international context can be categorized as political-economic studies of the media and communication industries (see [|Thussu 2006a]). Yet the focus on technology is secondary to the political ramifications of regulation and governance in this tradition. This essay, however, provides an overview of research about the role of technology and its relation to subjects germane to international studies. Perhaps the most prominent research in this category of work focuses on the relationship between video game developers and the United States Department of Defense. This work reveals qualitatively new political-economic configurations between the state, the entertainment industry, and the military. The entertainment products that emerge from these relationships reflect distinct modes of representation – how international action and conflict are constructed in the game's message and structure in ways that may have significant sociocultural ramifications for media consumers. How the world is presented in such content may impact how users develop attitudes toward more “traditional” social roles, institutions, and foreign policy. 
While representation is often a significant focus for critical scholarship of entertainment technologies, other studies have also examined how these technologies reflect or facilitate new modes of social identification and, importantly, political action. For example, “virtual worlds” have alternatively been discussed as a promising venue for “public diplomacy” between nation-states and foreign audiences, and as environments that reveal persistent intercultural barriers to communication and understanding. There are two primary emphases in the scholarly treatment of entertainment technologies. At the level of audience consumption and participation, media outlets considered as entertainment technologies can be discussed as means for acquiring information and cultivating attitudes, and as a “space” for interaction. At the more “macro” level of social relations and production, representation can work to reinforce modes of belonging, identity, and attitudes. The macro-level approach depicts entertainment content as the product of political and cultural economies sustained by the production of image and “spectacle” (see [|Banet-Weiser and Gray 2009]). For international studies scholars, entertainment technologies open up new areas of study that traverse levels of analysis and direct attention to new sources of political and social influence. This essay reviews a series of studies and cases dealing with video games, ICT-enabled social networking environments that include “virtual worlds,” and interactive components of traditional media forms. 
The essay concludes with a discussion of how new and emergent entertainment technologies blur the practice of entertainment consumption with the traditional roles and practices associated with political action – through what media studies scholar [|Henry Jenkins (2006)] calls “convergence.” Entertainment technologies are presented here as instructive “texts” to be mined by scholars for evidence of influential symbols and practices that contribute to common understandings of politics, identity, and conflict. These technologies are also shown to evidence the interests of international actors in utilizing such technologies to facilitate political objectives and organization.

The Critical Context
Entertainment technologies are not new, nor is their relevance for international studies (see [|Lasswell 1927]; [|Mattelart 1994]; [|Tehranian 1999]; [|Price 2002]; [|Taylor 2003]; [|Thussu 2006a]). The production, regulation, and dissemination of these technologies have been at the center of controversies over the flow of news and cultural products since the dawn of popular communication in the nineteenth century. More recently, the dominance of the US cultural industries has been a driving concern behind historical debates in communication modernization projects of the mid-twentieth century, media imperialism, and dependency theory ([|Schiller 1976]; [|Tomlinson 1991]; [|Halloran 1997]; [|Vincent et al. 1999]). This dominance can be traced to the early intervention of the state in promoting US cultural products abroad and its influence on international trade liberalization ([|Miller 2005]; [|Thussu 2006a]). In the early days of mass entertainment forms such as cinema, the state was central to the emerging global political economy of entertainment. It can be argued that the consequences of the US subsidization of its film and entertainment industry ultimately led to subsequent controversies in the United Nations over informational sovereignty (the NWICO movement of the 1970s), the rise of the neoliberal trade regime in entertainment goods and services, and the resurgence of cultural protectionism and identity movements that grew in reaction to Western cultural hegemony (see [|Siochru et al. 2002]). In the wake of US success in dominating global cultural flows, nation-states have demonstrated interest in growing, protecting, and promoting cultural industries tied to entertainment products (see [|Thussu 2006b]). 
In contrast to political-economy-oriented studies of international communication, the rapid proliferation of communication technologies has been described as partly responsible for the social and economic transformations characteristic of globalization (for an early synthesis of this argument see Appadurai 1996). The work of Manuel Castells, in particular, has situated technology as part of a widespread cultural transformation with implications for the practice of politics and identity in the twenty-first century. For Castells, information and communication technologies do not necessarily determine social change, but instead reflect shifting values of identity, politics, and organization ([|Castells 2004]; [|2007]). Castells's notion of the “network society” establishes a precedent for envisioning how values associated with the use and consumption of communication technologies may translate into attitudes and behaviors in other spheres of social activity. [|Castells (2004)] cites the “hacker ethic” of the Open Source movement that started in software development as an example of how a communication practice may have ramifications for how communication technologies mediate relations between political hierarchies and individuals. The connection between technology and social action has been increasingly noted by media scholars, as the technology becomes more interactive and users have become more horizontally connected in networks (social or otherwise). The capacity of communication technology to reach and recruit for political causes has long been a concern for political communication and media studies scholars. Harold Lasswell, writing in the mid-twentieth century, described the increased presence of the military in civic space through forms of entertainment technology like radio. The “garrison state” that Lasswell warned of is seen plainly in television programming during the early years of the Cold War in the United States ([|Stahl 2006]:113). 
During the early years of the Cold War, popular shows like //Battle Report-Washington// blurred the lines between popular communication and government propaganda ([|Bernhard 2003]) and anticipated the increasing role of news as entertainment that would develop with the consolidation of the media industries ([|Baum 2002]; [|Thussu 2008]). The “blurring” of boundaries between political and commercial interests evident in entertainment content was a principal concern of critical theorists, and during the twentieth century scholars argued that entertainment media functioned to stabilize political authority and reaffirm the ideological positions of the status quo that kept certain groups in power ([|Althusser 1998]; [|Horkheimer and Adorno 2002]; [|Benjamin 2006]). The legacy of critical theorists continues to provide a common justificatory framework for analyses of entertainment technology in fields related to media studies. In this view, media content serves to limit the options by which a society can imagine social alternatives – thus leaving “mass society” distracted and disinclined to focus efforts on political transformation. This form of critical scholarship assumes a strong correlation between media content and effect – and at least initially provided a notion of the “audience” as largely passive receivers not likely to interpret information and personally ascribe meaning. Later scholars from this tradition would identify specific political uses and consequences with content. For example, [|Guy Debord (1983)], [|Douglas Kellner (1992)], and [|Jean Baudrillard (1995)] argued that entertainment media provided a “spectacle,” which was construed as a //primary// means of control in the modern nation-state. Entertainment technologies enabled state power through distraction and diversion as well as coercion, as discussed in scholarly criticism of media coverage of the first Gulf War. 
More contemporary media studies generally accept an “active-audience” perspective on the effects of media content. Media entertainment and news audiences may reject, accept, or reinterpret the content and messages they receive ([|Hall 1980]; [|Fiske 1987]; [|Jenkins 1992]). In this framework, “meaning” emerges between the materials “encoded” by the producer and “decoded” by the media consumer. This perspective anticipates the ramifications of participatory technologies like the internet, as well as immersive video games that involve players in engaging the “message” of content as much as passively receiving content. With the widespread proliferation of digital and networked ICTs, entertainment technology became increasingly salient for international studies in three ways. First, domestic constituents for foreign policy positions could be reached through less direct and monolithic messaging strategies and entertainment modes like games, film, and television. Essentially, entertainment technologies became increasingly capable of cultivating dispositions amenable to certain political perspectives. Second, entertainment technologies facilitate intercultural contact and provide qualitatively new scenarios for emergent cultural practices that may be distinct from the spaces created by the medium. Finally, the interactive nature of entertainment technologies may facilitate new modes of participation and interaction for political and cultural purposes. The following sections illustrate these major themes. They are by no means exhaustive, but draw upon a wide range of academic inquiry into entertainment technologies relevant for international studies.

Video Games: The Military–Entertainment Complex
Video games have evolved alongside technological developments spurred by the Cold War, and a significant body of scholarship has focused on the relationship between games and the United States military. The growing use of simulations by the US military since the first Gulf War in 1991 has spurred scholarly interest in games, a body of work that has evolved to incorporate critical and cultural perspectives on the impact of video games. Scholars have since identified significant partnerships forged between US defense institutions, game developers, and, more recently, more horizontally integrated entertainment companies – what [|J.C. Herz (1997)] initially called the “military–entertainment complex.” The relationships between these organizations and institutions have been well documented by international relations scholar James Der Derian in his definitive volume //Virtuous War: Mapping the Military–Industrial–Media–Entertainment Network// ([|2001]). His book identifies critical junctures in the development of simulation technology in video games for the preparation, practice, and indeed execution of war as manifest in the “MIME-NET.” Der Derian's work is significant in that it draws together the implications of representation with political-economic arrangements. His update of the famous Eisenhower warning against the influence of the “military–industrial complex” broadens the critique of US civil–military relations into a more profound assessment of how mediated violence, representational technologies, and the norms of war and conflict that their products cultivate can transform sociocultural attitudes towards war and peace. More importantly, Der Derian sees a significant link between the implications of virtual violence and the dream of “virtuous” war – where war is transformed, cleansed of its normative sanctions, and made a viable (and perhaps preferred) strategic alternative. 
Der Derian's arguments have been unpacked in greater detail by scholars from a variety of disciplines that explore the intersection of security and cultural practice – with contributions from communication, critical geography, and media studies (see [|Stahl 2006]; [|Power 2007]). These studies also share some assumptions about the effects of representation (content, imagery, and issues of consumption) and the implications of industrial collaboration with military institutions – which can have tangible ramifications for the formation of policies that ultimately lead to war. Studies have identified how specific entertainment technologies work to create new understandings of international politics and of the role of the citizen in a democratic society, and how the technologies themselves are appropriated to serve the interest of the state through certain aesthetics of play. This critical work warns of a spillover effect – where the narratives, images, and normative characteristics of video game representation provide “a nexus for the militarization of cultural space” ([|Stahl 2006]:113). These studies do not explicitly assert that games tell people what to think. Rather, games draw upon “narrative, generic, and associational frameworks” and also come “freighted with a range of socio-cultural-ideological meanings” ([|King and Krzywinska 2006]:168). The ways in which audiences interact with the content of games, the “active nature of play,” gives their messages more potency ([|King and Krzywinska 2006]:169). The characteristics of the medium are made more significant for critical scholars by the sheer scale of the industry itself, whose growth has outpaced both the film and music industries in recent years ([|Bangeman 2008]). The growth of the video game industry has paralleled considerable advances in video game technology. 
Games have also evolved from simple two-dimensional abstract depictions of conflict to complex, three-dimensional simulations – often explicitly of very real operational theaters of combat. Contemporary games related to combat include both first-person shooters and massively multiplayer online roleplaying games (or MMORPGs). Games of both these kinds now involve a high degree of interactivity and visual realism. Yet the impetus for this kind of game technology is not entirely market-driven. The US military has historically been interested in using such technologies for simulation purposes. This has led to collaborative efforts – either beginning as adaptations of existing game software, or where versions of military-funded projects eventually find their way into domestic distribution in the gaming market. In the aftermath of the first Gulf War, the United States Marine Corps developed a modified version of the first-person shooter //Doom// ([|Stahl 2006]:117). In the 1990s the US military increasingly commissioned games for training purposes. After September 11, 2001 games such as //Real War: Rogue States// and //Full Spectrum Warrior// were released to the public – with plotlines and combat scenarios mirroring the situations facing US military operations in the Middle East and Afghanistan, or at least providing close analogues to real countries and regions. The collaboration trend was formalized in 1999, with the formation of the Institute for Creative Technologies (or ICT) at the University of Southern California. This $45 million venture was designed to provide advanced military simulations – by taking advantage of academic expertise and access to professional game designers and Hollywood screenwriters. This partnership reflected the increasing use of simulation as a training tool, with the end product eventually being adapted for public consumption. 
[|Roger Stahl (2006]:117) describes the facility as a definitive example: “The ICT is a microcosm of much broader trends in military and game industry collaboration, reflecting the mobilization of information-age warfare across an entire spectrum of media.” Video games based on war scenarios in semifictional combat zones, developed for the United States military, both served the needs of the military and cultivated a market demand for similar games. [|Amy Harmon of the //New York Times// wrote in 2003], “What is new is both the way the games are filtering down through the ranks to the lowest level of infantry soldiers, and the broader vision that is being contemplated for them at the highest levels of the Pentagon.” Yet these games grew to be more than an organizational solution to training needs within the military. Their proliferation into the social realm of popular culture portended consequences identified by critical media scholars.

Effects of Representation in Video Games
[|Marcus Power's (2007)] analysis of the wargaming industry's collaboration with the US Department of Defense examines the “entanglements” of the military and the video game production industry, and argues that exposure to games produced in such tight coordination with the demands and perspectives of the military can potentially affect how audiences view the role of the military as a tool of international relations. The scenarios represented in video games like //Full Spectrum Warrior: Ten Hammers// provide purposive approximations of US military engagements – and can work to shape “popular, everyday understandings of geopolitics.” Put another way, the games present an engaging and distinctly military logic that is cultivated through the gaming experience. Power's argument echoes the critical argument that military “objectives, rationales, and structures” are introduced into the civilian spaces of entertainment media ([|Woodward 2005]:4). Representation is thus a vehicle for the “militarization” of everyday life that can distort traditional institutions of democratic participation. Jordan Crandall ([|Crandall 2005]:20) argues that militarization is distinctly tied to “media and entertainment industries […] it's a powerful rhetorical frame and a machine of territorialization, indoctrination, and recruitment.” Crandall situates this phenomenon squarely in the youth culture of video games. This claim does not necessarily posit that the youthful audience for games unwittingly accepts the potentially propagandistic game messages. Instead, the “militarization” effect of this form of entertainment technology is tied strongly to how the logics of military thinking and lifestyle are represented and consumed in the act of playing ([|Stahl 2006]). In particular, the vehicle of video games allows the audience to participate in a “clean, sanitized, and enjoyable version of war […] that obscures the ‘realities,’ contexts, and consequences of war” ([|Power 2007]:274). 
Video games, in essence, provide powerful media frames to thematize conflict and deflect alternative conceptions of conflict. Roger Stahl argues that there is a distinct intertextual quality to video game popularity after the events of September 11, 2001. He notes that the gaming industry grew after September 11, 2001, and in particular after the launch of the Iraq War. [|Stahl's (2006]:118) critique considers how “the economy of war-themed games restructures the civic field” and sanctions the use of force. For Stahl, the kind of messages carried by these games does little to question the “why” of killing, and places the use of force, as the preferred and indeed rational option of statecraft, beyond debate. [|Stahl's (2006]:118) analysis shows these games to be intertextual in that he finds the games “mobilize rhetorics consistent with the War on Terror.” The narrative of the games, the roles played by the protagonists and antagonists, and the justificatory assumptions behind acts of violence mirror the official discourse coming from the American administration at the time. [|Stahl (2006]:118) observes “a strong disdain for diplomacy and a preference for force […] [and] the new enemy, the rogue state, is often condemned as insane and thus beyond the reach of reason.” What works for the Bush administration's “War on Terror” rhetoric provides the basic premise for much of the wargame genre. [|Stahl (2006]:119) also notes that media representation is not confined to the games themselves, but extends to the advertisements and strategies of appeal surrounding game promotion. Advertisements for games in the //Tom Clancy's Rainbow Six// series reveal “a disarming cynicism about the nature of the fourth estate” – where alternative voices for conflict resolution are viewed as beyond serious consideration. Representation also has implications for the timing of international events. 
The pace of video games offers another opportunity to witness intertextual spillover into more “real” aspects of experiencing international events. [|Patrick Crogan (2003]:280) offers that games expand militaristic culture into the “domestic sphere” in part by changing our temporal aesthetics – we are cultivated to expect history to unfold in //game time//. This is seen both in the game-like re-creation of the Pearl Harbor attacks in the 2001 film //Pearl Harbor//, and in the US rhetoric in advance of Operation Iraqi Freedom, where weapons inspections receded from viability because the administration told the public it was “running out of time” ([|Stahl 2006]:120). These critical observations are admittedly interpretivist accounts of sociocultural ramifications, but they do provoke serious consideration of how cultural products cultivate and influence attitudes towards the institutions responsible for implementing international affairs. Critical researchers argue that games of these kinds “put a friendly, hospitable face on the military, manufacturing consent and complicity among consumers for military programmes, missions and weapons” ([|Power 2007]:278). Power cites Karen Hall, who argues this is accomplished by “mystifying the relationships between consumers, institutions and economies of violence” ([|Hall 2000]:13). The result bears directly upon the notion of citizenship itself, by linking patriotic themes to the consumption of military stories. For Hall, gamers perform, practice, and consume a militarized, technologically based form of citizenship training ([|Hall 2000]:3). Another “effect” of representation can be anticipated from how the enemy is portrayed and how the conduct of combat is manifest in gameplay. For example, the immensely popular //America's Army//, developed and distributed for free by the US Army, is set in Afghanistan, with the protagonist (not surprisingly) portrayed as an American. 
The enemy, however, is neither named nor developed as a complex character. In the game space, the “enemy is irrelevant and technology provides a virtual cure for a global insecurity” ([|Kumar 2004]:14). As Power explains, this kind of participatory exposure via games to international relations and its subjects can have larger implications: “It is also important to attend to the roles that digital games have as affective assemblages through which geopolitical sensibilities emerge and are amplified in order to explore the kinds of affective resonances that digital games create among gamers” ([|Power 2007]:284). Lt. Col. Wardynski, the director of the US Army's Office of Economic and Manpower Analysis, is largely responsible for the //America's Army// video game project, and describes it not as a recruitment tool but rather as a successful attempt to put the Army in popular culture ([|Stahl 2006]:125). This intersection of culture and military is seen as problematic. [|Andy Deck (2004]:1) claims “the entertainment industry has assumed a posture of cooperation toward a culture of perpetual war.” Games saturate popular culture with specific worldviews and attitudes towards force, diplomacy, and the “other” that are conducive to the uncritical acceptance of a particular foreign policy. The technological medium allows for a kind of “realism” that can be marketed, yet this “realism” is stripped of the gruesome details of violence. Video games are marketed by virtue of their realism – viewers are invited to “perform the acts” of the warfighter and embody the ideals of the cause ([|Galloway 2004]). The medium also allows for a kind of interactive participation in current events that reduces the complexity of combat situations and their larger geopolitical context. This reductive retelling is manifest in how time relates to events – they are narrated in accordance with more dramatic (and entertaining) conventions. 
As Brian Cowlishaw argues, video games are marketed and sold as //realistic//, but in practice portray the experience of war as //cinematic// ([|Cowlishaw 2005]:6). Games are “interactive movies about war with all the boring parts taken out” ([|Cowlishaw 2005]:6). [|Power (2007]:286) argues that this kind of engagement can diminish the perceived costs of war when it is portrayed as a so-called “realistic” game: “The power of many digital war games lies in the ability to transpose fear into historically based combat scenarios with clear battle lines, in a war that is safe and winnable.” Power's critique directs attention to how games diminish the politics of war (indeed, video games often put political deliberation beyond question) while at the same time asserting political attitudes in the manner in which target populations are “othered” in their representation. Specific political attitudes are encoded into the representational frames that make up gameplay in such a way as to discourage contention. Geopolitics is rendered in such a way as to promote a particular view of political strategy ([|Hughes 2007]). The encoding of political messages, of course, does not necessarily mean that video games produce specific kinds of political subjects, fashioned from game players. The critical aspects of wargames must be considered alongside their established use within military activity. [|Geoff King (2005)] acknowledges that gameplay has both a legitimate training function and an interpellating quality. The concern for critical scholars like Der Derian, Power, and Stahl, however, is that such encoding //infiltrates// the social sphere unnoticed. Stahl, in particular, links the militarization of the “social field” with how video games invite users to participate in the message (as opposed to just receiving it) and how they reorder the flow of time surrounding real events. 
He calls games a “third sphere” of cultural production where they recode “the social field with military values and ideals” ([|Stahl 2006]:125–6). Stahl links this phenomenon with John Arquilla and David Ronfeldt's (1996) notion of “netwar” – a concept that reenvisions modern conflict to include informational battlegrounds that transcend earlier boundaries of warfare. Video games represent an instrument of netwar in the sense that they directly influence the ideational sphere of identity and political ideology – by reformulating the experience of citizenship. The participatory nature of games invites the user into uncritical acceptance of military objectives and rationales, cultivating the audience as “citizen soldiers.” This is a problem for Stahl, because democratic citizenship involves a space for public deliberation, where political decisions can be debated. The social role modeled in popular war-based video games is that of a soldier who takes orders, not a citizen who may question policy. This amounts to a “depoliticization of the public sphere” ([|Stahl 2006]:125). Stahl's argument is significant in the study of entertainment technology because it marks the shift from entertainment technologies as spectacle to technologies that actively involve audiences in the production of meaning within the technology itself. If the cultural resources of the social field are dominated by narrow perspectives embodied in such games, then the ability to question the logic of engaging in violence is diminished. Given the rapid pace of global media consolidation and horizontal integration, the critical consequences of video game consumption may not be limited to the United States or Western countries. Entertainment technologies of these kinds may serve to amplify other ideological perspectives and agendas. Even without direct marketing, these technologies and, indeed, narrative conventions have been appropriated by the US's adversaries. 
For example, the Syrian publisher Dar Al-Fikr released //UnderAsh// (later called //Under Siege//) in 2001, a first-person shooter portraying the story of Ahmed, a Palestinian fighter facing the Israeli military. Hezbollah also developed the shooter //Special Force// in 2003 ([|Machin and Suleiman 2006]). Dan Devlin, a US Defense Department expert on public diplomacy, testified before the House Permanent Select Committee on Intelligence that video games were used to train and instruct young people to attack US forces in Iraq using modified versions of the game //Battlefield 2// ([|2006]). Video games are clearly no longer bound to Western markets, and have been purposively deployed and refashioned to meet specific storytelling needs relevant to political and war-fighting objectives. Research on the critical aspects of video game technology has focused largely on the US experience, though as the previous examples indicate, further international work is clearly warranted. Further content and audience studies may also provide greater insight into the critical claims made by media scholars. Matthew Thomson suggests that many popular games discussed in previous studies do not necessarily represent and align with the strategic imperatives of US policy ([|Thomson 2009]). Nevertheless, [|Der Derian (2001]:xvii) urges attention to the implications of how technology translates war into the social realm: “Virtuous war requires a critical awakening if we are not to sleepwalk through the manifold travesties of war.” The globalization of media industries and the rapid pace of technological proliferation heighten the salience of Der Derian's warning.

Virtual Worlds and MMOs
While war- and combat-based video games reveal structures of political economy that amplify political agendas and may cultivate ideational dispositions toward geopolitics, entertainment technologies also provide venues for cross-cultural interaction and trans-border relations that transcend obvious physical barriers. Virtual worlds represent opportunities for inquiry into cross-cultural encounters that are framed by social constraints imposed by the medium, but that also reflect cultural behaviors and attitudes external to the technology. “Virtual Worlds” is a term requiring some clarification. Ralph Schroeder defines virtual worlds as multi-user or collaborative environments in “which users experience other participants as being present in the same environment and interacting with them – or ‘being there together’” ([|Schroeder 2008]:2). Virtual worlds are persistent online places “that people experience as ongoing over time and that have large populations which they experience together with others as a world for social interaction” ([|Schroeder 2008]:2). The most popular platform of this kind is the online world of //Second Life//, which is designed not as a game but as a venue for socializing and creative expression. MMORPGs like //World of Warcraft// are subsets of this kind of environment, but are still primarily games ([|Steinkuehler and Williams 2006]). Virtual worlds are nevertheless environments that can enable emergent cultures incorporating participants from a variety of national, regional, and otherwise different cultural backgrounds – which may reveal the inertia of cultural beliefs and attitudes imported into the practice of game playing. 
As Edward Castronova has argued, virtual worlds as technological platforms of social interaction provide evidence of how social institutions develop within technical constraints (like the design of a game), while acknowledging or channeling cultural influences external to “the world.” [|Castronova (2005)] presents gaming in virtual worlds as a viable laboratory for exploring the emergence and evolution of social institutions – which in turn has implications for a host of social scientific concerns. Castronova's perspective suggests how gaming in virtual worlds can be an instructive indicator of how technology mediates beliefs and participation in social institutions and norms. Analysis of games of these kinds reveals the constructed nature of incentives and motivations, and the performative elements of identity formation, as evident in the space of interaction enabled by the game or virtual environment. [|T. Taylor's (2006a; 2006b)] incisive ethnographic exploration of the extremely popular MMORPG //World of Warcraft// illustrates the potential of cultural inquiry into gaming as a kind of international phenomenon. Her research examines how technological platforms provide means for expression of existing cultural attitudes as much as a space for the formation of emergent practices and identities. Taylor acknowledges the speculative potential of such technological platforms for purposive cultural diplomatic interventions: “As we encounter people from other countries and cultures in mundane, playful situations, the artificial or corrosive boundary lines that shape offline life might be productively eroded” ([|Taylor 2006a]:319), yet she also cautions that stereotypes from the “real” world are expressed in the interactions she observes online. Among the aspects of behavior and discourse she notes in her study of //World of Warcraft//, Taylor observes that language plays a distinct role in how social roles are regulated and ascribed. 
In //World of Warcraft//, players are often reprimanded by other players for not using English. As the game is largely cooperative, language becomes a key to how players interact and are afforded opportunities. Language thus provides an initial reflection of real-world disparities in English education – and by extension social stratifications ([|Taylor 2006a]:321). Disparities in English-language training have led to the formation of player guilds that are often defined by national affiliation. And nationality-based groups have become an important kind of signifier. Taylor observes that the ways in which these groups play the game have developed into perceived national “styles” of play. Players “read into” styles of play and apply generalizations and stereotypes. The most obvious convergence of stereotyping, style, and language in //World of Warcraft// is around the floating signifier “Chinese Gold Farmer” ([|Nakamura 2009]). The term refers to the cottage industry of Chinese game players who develop characters and in-game resources for sale in real-world currency to other players around the world. This activity is, of course, not limited to Chinese “farmers.” The term, however, has grown larger than simply a referent to the quasi-illicit “labor” provided by Chinese hired gamers. It has become a derogatory label – one which oddly enough has little to do with the act of “farming” resources and more with how the in-game actions coincide with other social markers of “gold farming.” For example, gamers who regularly “farm” resources for real-world sale to other gamers may not face branding as //Chinese// “gold farmers” unless they fail to understand English. Taylor cites Nick Yee, who observes, “What fascinates me is how race/nationality is now invoked to create the social category known as ‘gold farmers’ (rather than the other way around)” ([|Yee 2006]). 
[|Constance Steinkuehler's (2005; 2006)] research on the MMORPG //Lineage// has yielded similar, and potentially troubling, observations about the nature of political communities within the games. Her research showed how concern over the supposed “Chinese” activity superseded the designed interaction dynamic of in-game competition: “Instead, folks are joining forces in a sort of ‘us versus them’ mentality to wage perpetual field war against all (perceived) Chinese. In other words, the one game mechanic that made //Lineage// unique – clan sieges for castle control – has been substituted by a game mechanic of quite a different sort: farmer farming” ([|Steinkuehler 2005]:12). Taylor's research highlights the potential for virtual environments to become vehicles for social practices of discrimination and stereotyping – and provides a check on utopian visions of a global virtual community. She argues that the technology itself may be part of the issue: “Stratification and systems of categorization become embedded, indeed embodied, within technical systems. Methods to deal with conflict and complexity can get folded into game architectures and automated systems. As a result, we get things like the segregation of servers based on region and language, or the growing use of ‘instanced’ game content […]” ([|Taylor 2006]:323). Taylor's ethnographic observations suggest that entertainment technologies may have distinct medium-specific “effects.” She does not, however, suggest a model of technological determinism. Rather, MMOs reveal the pervasive dynamics of conflict and identity refracted through the constraints of the medium. The Chinese Gold Farmer phenomenon in particular reveals how conflict is socially constructed, with language and national identity linked as signifiers within the game's space of play. This suggests further exploration of how games both reflect and facilitate existing tendencies toward intercultural conflict, affiliation, and stereotyping.

Video Games and History
Research on military-inspired video games suggests concern over the broader social and political-economic effects of such games on youth culture and, implicitly, over how such games influence public understanding of geopolitical issues. [|Kevin Schut (2007)] provides a comprehensive overview of how video games are understood to reorganize perceptions of how the world works. Schut's analysis offers that video games have significant potential for teaching history, but that the manner in which events are reduced to specific factors in the context of gameplay inevitably diminishes the complexity of a historical moment and emphasizes certain competitive and often gendered perspectives on history. Schut presents evidence that ideological or narrow perspectives on history are built into the structure of games themselves – either as incentives intrinsic to the game mechanic or in the representation of events that the game purports to portray. For example, [|Shoshana Magnet's (2006)] study of the game //Tropico// reveals capitalist and ethnocentric representation strategies. [|Kevin Chen (2003)] reaches similar conclusions about the game //Civilization 3//. [|Salen and Zimmerman (2004)] recapitulate this argument – that games encode ideological positions. Schut's position on the way games present history is philosophical in that he engages the epistemological constraints imposed by mediated presentation. He cites [|Ted Friedman (1999)], who offers that games, like any other medium, organize perception in a specific way that can have implications for how we perceive the world. While Schut acknowledges that audiences are conceived as having some form of agency – they are not just receptacles for the messages embedded in video games – “what they receive has been created (often thoughtfully) and delivered with tools that have specific abilities and limitations” ([|Schut 2007]:217). 
When games are designed to represent history, the constraints of the medium are imprinted on the presentation. As [|Friedman (1999)] indicates, when we play games we are taught to “think like a computer.” For [|Schut (2007]:223), computers “process symbols in a highly systematic manner.” When games present history, they inevitably reduce the dynamics of history (as a playable simulation) to a series of rules (see [|Murray 1997]; [|Manovich 2001]). There is certainly recognized value in using historical simulations to teach historical scenarios of international politics (see Weir and Baranowski, in press). Video games can represent a kind of “counterfactual” exercise, where events are not destined to follow a preordained path ([|Ferguson 1997]). Yet Schut's point is that games show historical actors doing only what the game allows them to do, with a linear progression of events and causality and little room for complexity. The incentive structure, the fixed attributes of the actors, and the options available to those who would play history or politics in a video game scenario are still at the mercy of the decision rules offered by the game itself. Schut's analysis is relevant to international studies insofar as games may uncritically replicate assumptions about international behavior, bounded social categories like ethnicities and nation-states, and the “logical” objectives of international actors in ways that do not reflect the contingent nature of international relations and, indeed, the constructed dimension of international political norms. The constraints of the technology mean that video game representation of history tends to reflect systemic conceptions of history and international politics.

Public Diplomacy, Virtual Worlds, and Social Networking Technologies
Taylor's discussion of virtual worlds presents virtual environments as spaces for interaction that can provide for productive engagement between national, ethnic, and regionally defined cultures. While Taylor recognizes that such technological venues are not without the “baggage” of real-world cultural experience, MMORPGs like //World of Warcraft// or virtual worlds like //Second Life// present unique possibilities for overcoming existing social barriers to constructive communication. What Schroeder described as the “persistent spaces” of virtual worlds was recognized by the USC Center on Public Diplomacy as a potential tool for nation-states seeking to engage foreign audiences in meaningful and effective ways. USC launched the “Public Diplomacy in Virtual Worlds Project” in 2005 to explore how virtual worlds can “create better understanding between people of different cultures and nationalities” (Center on Public Diplomacy). The Center established a permanent online presence in //Second Life//, and worked with the US State Department's International Information Program (IIP) to host products and services. The program was justified by the notion that the nature of interaction within the medium can “transcend national boundaries and ideologies.” As [|Angela Adrian (2007)] argues, games, by nature of their social activity and involvement, have the potential to contribute to a nation's “soft power.” The potential of virtual worlds for public diplomacy has been enthusiastically lauded. Jean Miller of Linden Lab, the creator of //Second Life//, argues: > They are basically a giant forum for communication. Virtual worlds are not just a good environment for public diplomacy; it is a great environment for it. It can facilitate communication across borders and culture that has never been done to that scale before – people getting the chance to chat/talk/meet/play with hundred of people that they have never met before. 
> ([|Terdiman 2006]) These arguments implicitly draw upon the well-established “contact hypothesis” (see [|Allport 1954]; [|Schiappa et al. 2005]), in that cross-cultural contact in mediated, virtual environments can circumvent preestablished prejudices and stereotypes. Adrian (2006) cites testimony from the //Serious Games Summit// of 2006 that “Americans aren't so bad if you play with them.” While Taylor's research suggests some limitations to anecdotal claims about games and public diplomacy, nationality appears to recede in the context of play, according to those with experience in //Second Life//. The USC initiative is presented as a technological solution to cultural fault lines. As program directors Joshua Fouts and Douglas Thomas argued in 2005, virtual worlds provide a “unique opportunity to create, foster and sustain intercultural dialogue,” and “perceptions of national values, ideals, and character are both reinforced and altered by the real time interactions that occur in these spaces” ([|Fouts and Thomas 2005]). The technology provides the crucial element; virtual worlds can mitigate the significance of social perception, culture, and bias that exist between international audiences. Fouts has since elaborated the notion of “digital diplomacy” as a viable alternative to “traditional” methods of public diplomacy. In a 2008 report entitled “Digital Diplomacy: Understanding Islam through Virtual Worlds,” Rita King and Fouts present virtual worlds as a venue for gauging public opinion and as an effective means to engage audiences crucial to US foreign policy objectives. For the United States, there is already precedent for leveraging such technologies. King and Fouts note that the US Department of Defense planned to spend $300 million on entertainment programming for Iraq as part of a larger strategic communication plan. 
They argue that over 300 million people worldwide participate in some form of virtual world – which represents a unique opportunity to support flagging public diplomacy efforts. Fouts and King point to examples of how individuals and nongovernmental entities have used the media platform to create cross-cultural points of contact and education – especially for dialogue //with// and //between// Islamic communities online, such as the //Al-Andalus// virtual Islamic community and the virtual hajj to Mecca in //Second Life//. They highlight that virtual worlds (as well as other participatory technologies) have become a space for public deliberation, and as such exemplify the rapidly shifting terrain of influence among global audiences. Fouts and King recommend that governments follow the lead of corporations and individuals already connecting to audiences in virtual worlds, in part because these are the spaces in which legitimacy is increasingly cultivated. The “nonofficial” status of actors within entertainment and social media spheres appears to engender more credible messages and, indeed, spokespersons (see [|Cooper 2007]). The argument that such “entertainment” technologies can impact international relations is not new, though earlier academic discussion in international studies centered on the role of civil society actors. [|Ronald Deibert (1997)] drew upon the Canadian media theorists Marshall McLuhan and Harold Innis to claim that the media ecology was responsible for significant social transformation in international politics during the nascent period of media globalization in the 1990s. Attention to the media structure – how media access and technology are situated in the lives of citizens, groups, and indeed cultures – is necessary because identity, attitude, and agency are increasingly tied to mediated resources. 
In this framework, entertainment media are significant because they can be a site of meaning creation and political action for pivotal stakeholders. This idea was put into practice in US policy in 2008, when US Undersecretary of State for Public Diplomacy and Public Affairs James Glassman introduced “Public Diplomacy 2.0” as a corrective to existing moribund public diplomacy activities ([|Glassman 2008]). Glassman suggested that social networking technologies, such as Twitter, Facebook, and YouTube, are critical outlets that reach the “target” audience of US public diplomacy. He argued that entertainment technologies of these kinds are inherently democratic and participatory, and that the United States can embrace these to facilitate conversation rather than engage in monological messaging. Put simply, public diplomacy should be conceived as a conversation between peers, rather than simply the dissemination of one-way messages. Glassman cites the success of “One Million Voices against the FARC” as an example of how social networking technologies can facilitate political objectives. The “One Million Voices” campaign organized a massive, global protest against the Colombian militant organization in less than six weeks on Facebook ([|Holguin 2008]). Glassman attributes the success of the event to the collective ownership of the process. Instead of a central political authority, like the Colombian government, dictating desired attitudes towards the extremist group, the protest was a grassroots effort facilitated by the social networking technology. Glassman subsequently organized facilitative programs for the US State Department – including a video contest on the meaning of democracy and a summit of pro-democracy youth leaders in partnership with YouTube, NGOs, and other media companies ([|Cormann 2008]). In each case, the programs were open to anyone and included voices in opposition to US policy objectives. 
This strategy reflects what [|Ali Fisher (2008)] has termed “open source” diplomacy, a kind of engagement that deemphasizes the institutional asymmetry between audience and the state and encourages collective production of messages and diplomatic solutions. Others have observed the capacity of such technologies to counter the US message ([|Christensen 2008]). Indeed, the Islamic militant movement is linked via a sophisticated web of “virtual media production centers” that continue to produce entertainment and informational materials to sustain their movement ([|Kimmage 2008]). Yet the possibilities of collaborative communication encounters suggest positive potential for connecting nation-states to other diplomatic stakeholders and constituents via entertainment technologies.

Convergence: Political Transformation and Entertainment Technologies
Virtual worlds and social media technologies suggest that ICTs designed for entertainment have the potential to be useful for political objectives. The critical element behind this potential is not necessarily the content that these technologies convey, but the kind of political action they can enable. Marwan Kraidy describes this kind of medium-centric perspective as a consequence of “hypermedia.” For Kraidy, the ways in which people interact via technologies represent a “symbolic field” that defines media-saturated societies enabled by significant technological convergence ([|Kraidy 2007]:140). Kraidy's notion of convergence here suggests both the multi-platform nature of technology (the ways in which content is produced and consumed across different forms of technology) and the way in which social practices are expressed within these diverse technological outlets. Kraidy identifies several cases in the Arab world where media outlets appear to signal social and political transformation through the way in which the media were used and consumed. For example, the reality show //Star Academy//, produced in Lebanon, was shown to represent nascent democratic practices. As [|Kraidy argues (2007]:141), “In this context, reality television acts as a catalyst (among others) because its commercial and dramatic logics promote participation in public events through the interactive use of information technologies, in activities like voting, mobilization, and alliance building.” The ways in which technology enables social interactions that parallel the requirements of real-world political action suggest how “hypermedia space operates as a tool to extend the scope of agency into ‘real’ public space” ([|Kraidy 2007]:153). Kraidy offers that political change is not something imposed top-down in the Arab context, but something emerging in the spaces provided by entertainment technologies. 
Presented in this way, entertainment technologies become increasingly central to social and political life. As Kraidy argues, they influence the “social epistemology” – the way individuals and groups perceive the range of alternatives available to social, political, and cultural arrangements. Understood in this way, technologies are vital to the construction of meaning and the means by which attitudes are acted upon. The salience of technology in this view can be traced to the pioneering work of political scientist [|Ithiel de Sola Pool (1983)], and intellectual inheritors such as Henry Jenkins. Jenkins emphasizes that media convergence has potentially profound implications for the nontrivial aspects of life outside entertainment. He describes convergence as

> the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. Convergence is a word that manages to describe technological, industrial, cultural, and social changes depending on who's speaking and what they think they are talking about. ([|Jenkins 2006]:2)

Convergence enables a unique kind of participatory culture. The idea underscores Kraidy's arguments about political change in the Arab world, and directly challenges basic assumptions about how messages and media communication influence audiences in a linear, and clearly demarcated, fashion.
As [|Jenkins (2006]:3) argues, “Rather than talking about media producers and consumers as occupying separate roles, we might now see them as participants who interact with each other according to a new set of rules that none of us fully understands.” Jenkins offers that the reality of convergence forces reconsideration of what media technologies do to people, the consequences of who owns the means of production, and how power may be redistributed through the political agency that new media technologies encourage and enable. [|Andrew Chadwick (2007)] demonstrates that new “digital network repertoires” have emerged in the wake of successful online social organization for political goals. Social networking practices take place in entertainment-oriented venues like Facebook and YouTube, but they also leave sedimentary traces of latent political potential. These same networks can be leveraged for political organization and influence because of their intrinsic legitimacy. Similar capacities for political transformation have been noted for mobile communication technologies ([|Castells et al. 2007]). Taken together, the social and interactive potential of contemporary entertainment technologies implies qualitatively new and distinct modes of political power that are nonhierarchical (see [|Shirky 2008]). While entertainment technologies may not have been designed for such potential, they have come to model, and represent, more horizontal hierarchies of information. The effect of such immersive interactivity is that the audiences for messages and content are more attuned to being communicated to, and more aware of asymmetries of power and identification between individuals, corporations, and nation-states seeking influence through communication technologies.

Laura Roselle
==== Subject [|International Studies] » [|Foreign Policy Analysis], [|International Communication] ====
==== Key-Topics [|CNN effect], [|communication], [|war] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Many readers will associate the topic of foreign policy and communication with the role of mass media in the agenda setting, decision making, and implementation of foreign policy. However, this chapter draws together a number of different literatures that address how communication, broadly speaking, affects foreign policy, both in the policy-making process (at its various steps) and at a higher level associated with the nexus of foreign policy and international relations. Scholars of foreign policy, mass media and communication studies, audience costs, policy legitimacy, and discourse analysis have all addressed issues of communication and foreign policy. This is because communication refers to the transmission or conveying of information through a system of symbols, signs, or behavior. Foreign policy, for its part, includes not only the foreign policy process involving agenda setting, formulation, adoption, implementation, and evaluation, but also the placement of foreign policy decision making within the wider realm of cultural, social, and political context in the global system. Communication serves to connect individuals and groups; (re)construct the context; and define, describe, and delineate foreign policy options. The current trend is one of synthesis in many areas and a greater attention to the psychological processes associated with who communicates, how, to whom, and with what effect in the realm of foreign policy, and to the structural characteristics of communication or discourse. The focus, but not the exclusive area of interest here, will be on mediated communication and its role in foreign policy. (For more on foreign policy, the foreign policy section has over 40 chapters in this compendium.)
The major areas of scholarly work that impart important insights on foreign policy and communication include: (1) the making of foreign policy and the role of the mass media as domestic determinants in this process; (2) how foreign policy is understood as a communicated message by allies and adversaries in international relations; and (3) constructivism, poststructuralism, and discourse analysis. Within the area of foreign policy and media falls work associated with the CNN effect, framing, and public opinion – all focused on the foreign policy-making process. Work within international relations has focused on how foreign policy signals international intent, including threat and willingness to cooperate. The literature on domestic audience costs, for example, addresses reputation by asserting the importance of creating credible foreign policy threats, and is beginning to address how exactly these communicative processes work. Constructivism and discourse analysis provide an important contribution by emphasizing the need to look at the (re)construction of ideas, identities, and interests rather than taking them for granted. Communication associated with discourse is central in this process. One way to break down the communication process itself is to look at //who// communicates //what// to //whom// via what //channels// for what //purpose//. This is certainly an appropriate way to analyze the literature, particularly from the United States, on communication and the foreign policy-making process itself. The linearity of this model, however, is called into question by the scholarship on discourse and is taken up by many scholars, including a strong international group. This scholarship raises methodological issues or critiques as much of the literature in the area of poststructuralism and discourse analysis eschews the use of positivist methodologies or causal inferences. 
A final section in this essay sets out the work from both traditions in the study of communication and foreign policy during war and conflict.

Communication and the Domestic Determinants of Foreign Policy
The foreign policy process is often studied by analyzing the steps associated with public policy making, including agenda setting, formulation, adoption, implementation, and evaluation. Communication is central to each stage and can be grouped into interpersonal communication (within groups or among individuals) and mass mediated communication (in which technology mediates communication). For example, some scholars study the effects of communication within elite groups involved in decision making. Work in this category includes analyses of how groups shape policy options, such as in Allison's research on the Cuban Missile Crisis. Janis's work on groupthink, for example, suggests that groups will develop a narrower range of policy options due to communication patterns associated with group dynamics that discourage innovative, risky, or unpopular suggestions from being brought up. There is also work on how individuals interpret messages (communication) about particular policies based on individual characteristics such as level of risk acceptance or avoidance (prospect theory), operational codes, historical analogies, and cognitive processes. See the compendium essays on many of these topics under the foreign policy section. While noting that communication is inextricably linked to these individual and group processes associated with the formation of foreign policy, post–World War II foreign policy scholars interested in communication, especially in the United States, have traditionally concentrated on mass media as a domestic determinant of foreign policy (see also Doug Van Belle's compendium piece). Often the focus on media is related to the policy-making process itself.
For example, identification of a problem and its classification as something that can be solved or addressed is linked with agenda setting; formulation of a policy is linked with interest articulation and decision making; and policy legitimation is linked with the implementation of the policy. Foreign policy scholars often see media as one among many players in the process. However, for many political communication scholars studying foreign policy and communication, the media //is// the message. That is, there is an active group of scholars who focus their attention on the role of mass media as an institution in the setting of the foreign policy agenda and the shaping of public opinion about foreign policy decisions.

Who Communicates
The actors who communicate during the foreign policy making process include the political leadership, opposition groups, advocacy groups, interested foreign audiences including political leaders, the public, and the media themselves. Not all are involved at all times and with all issues or potential policies. [|Western (2005)], for example, argues that an advocacy process shapes the debate over specific policies. Even protest groups are involved in this process ([|Knopf 1998]). While advocacy groups are often important, much of the literature has focused on the interaction of the political leadership and the mass media in the development, presentation, and legitimation of foreign policies. Much of this work has focused on the case of the United States; however, there is a growing recognition that comparative work is needed in this area. For example, [|Kriesi (2004)] argues that decision makers, challengers, and the media are the most prominent actors, but that their roles differ across institutional context and according to issue-specific context. Two important variables, then, are the concentration of power in parliament and government and the institutional accessibility of actors. Walter Lippmann is often cited as the first to examine the interaction of media as an actor in policy making with the 1922 publication of //Public Opinion//. Lippmann argued that journalism with its coverage of particular events and in particular ways could shape public opinion about matters of the day, and therefore could affect the functioning of democracy. One of the classic works that included an explicit analysis of the US press and foreign policy was [|Bernard Cohen's 1963] work that dealt with the competition between the political leaders’ desire to preserve the prerogatives of diplomacy and the desire of the press to enhance democracy (and to meet the bottom line). This pointed to a complex relationship that is still the focus of much scholarly research today. 
By 1991, O'Heffernan had reviewed the relationship of media to the foreign policy-making process, arguing that “interdependent mutual exploitation” explained the relationship. Political leaders, on the one hand, often want to preserve a wide range of options in foreign policy making, do not want to be constrained by public opinion, yet want to use media to reach audiences with their own messages. Media, on the other hand, seek to inform the public and highlight important political issues, including those in the foreign policy realm – all while earning profits and not alienating important governmental information sources.

Media's Effect on Foreign Policy – CNN Effect
Many scholars have focused on media's substantive effect on foreign policy decision making, and the debate over a CNN effect falls clearly into this category (see [|Gilboa (2005)] for one review of the literature). Introduced first as a shorthand for the idea that media coverage can push political leaders to make, change, or implement particular foreign policies, the “CNN effect” idea addressed the perception of a different media context in the world of 24/7 news networks. The instant global dissemination of footage and reporting from distant locations suggested that a qualitative shift in the relation between media and policy would occur insofar as such footage might stir publics to demand sudden humanitarian or military interventions before policy makers had a chance to formulate an official position. [|Steve Livingston (1997)] argues that the media may play three non-mutually exclusive roles. The first is as an agenda-setting agent. The idea here is that media coverage of a problem, event, or circumstance, such as natural disasters or other human tragedies, compels or pushes policy makers to commit funding or personnel to that area. Second, the media may act as a block against the achievement of a certain foreign policy by directing the audience's attention to a particular issue. Third, media coverage can function to decrease the perceived timeframe for policy making, pushing leaders to make decisions more quickly than might be desired. Other researchers have attempted to establish the degree to which anecdotal stories of the CNN effect have merit. Strobel asserts, in work on policies related to peace operations, that “the news media are rarely, if ever, independent movers of policy” (1997:5). [|Robinson's (2002)] study of media coverage of humanitarian crises argues that when policy is uncertain and the framing of the government's actions/non-actions is critical, media coverage can have an effect on policy making.
Van Belle's earlier studies of foreign aid allocations and media coverage showed that media coverage was related to allocations. The more coverage a country received, the more foreign aid was allocated ([|Rioux and Van Belle 2005]; [|Van Belle et al. 2004]). A later study of post–Cold War allocations, however, did not show the same effect, something [|Van Belle (2007)] attributes to the end of a very structured Cold War foreign policy-making process. This work emphasizes the role of media in shaping the foreign policy process, either by shaping the agenda or by affecting the timeframe of decision making. It is often difficult, however, to determine which way the causal arrow goes. Does media coverage force an issue onto the policy agenda, or does media coverage focus on issues pushed there by political elites? Many of the academic studies of the CNN effect have determined that the relationship is complex, and involves competition between and among various political actors and media outlets. [|Entman's (2004)] network activation model incorporates many of these ideas, suggesting that there is a hierarchy of actors including the administration, other political elites, and media organizations and journalists, and that discord and uncertainty affect how media cover issues related to foreign policy. [|Wolfsfeld (1997)] argues that there is a political contest between political leaders and the opposition and that this contest shapes the degree to which mass media affect political conflict. That is, the contestation over media can shape foreign policy because who wins the access and framing battle for media coverage also shapes the context of foreign policy making. While political leaders usually have the upper hand in this competition, under certain circumstances challengers can use the media to exert political influence.
An important additional consideration for scholars has been how journalistic norms and standard operating procedures affect the formation of news about foreign policy ([|Cook 1994]). Some might argue that communication scholars have sometimes overestimated the role of media in foreign policy making, while foreign policy scholars have often underestimated the role of mass media. Overall, however, there has been a growing recognition that the process is not unidirectional and is, in fact, quite complicated. [|Miller's work (2007)], for example, contends that to understand the role of media pressure on foreign policy one must understand the conversation between political leaders and the media. In particular, leaders may feel compelled to respond to media coverage and questions posed during press conferences, and these responses are shaped by reputational concerns. Miller accordingly distinguishes among media coverage, media pressure, and media influence. At times the media do shape the political agenda, but there are often political interests quite willing to activate and enable the coverage of those issues. This general discussion of foreign policy and mediated communication raises one of the most important concepts associated with the literature on foreign policy and communication: framing.

What Is Communicated: Framing
While some of the literature on the role of media in foreign policy making focuses on how media bring issues to the forefront or agenda, other research emphasizes that how messages are framed is of crucial importance as well. Much of the literature on foreign policy either implicitly or explicitly addresses the role of framing. This is because communication implies a framing process. The selection of words, images, ideas, and themes – or framing – of foreign policy issues, policy options, and policies themselves affects the process of foreign policy making at every step. [|Just et al. (1998]:134) say that frames refer to “structures … that select or highlight particular bits of information in constructing an argument or in evaluating an object.” [|Leighley (2004]:258) defines a frame as the presentation or conceptualization “of an issue, event, or idea associated with other beliefs or values.” Right away one notices the many levels of components or structures that may be associated with a frame. These structures comprise events, issues/subjects, ideas, and actors as frame components that establish definitions of problems, policies or issues, causal interpretations and proposed solutions, and convey affect or moral judgment. [|Entman (2004]:5) defines framing as “selecting and highlighting some facets of events or issues, and making connections among them so as to promote a particular interpretation, evaluation, and/or solution.” [|Wolfsfeld (1997]:35) defines an interpretive frame as “central organizing idea[s] for making sense of relevant events and suggesting what is at issue.” Most definitions of frame have in common the notion of an organizing principle that structures meaning. These definitional differences will be discussed below in more detail because they have implications for what scholars assume about the purpose and effects of framing.
[|Entman (2004]:7) applies “frame” to texts or messages, rather than to the “interpretive processes that occur in the human mind.” He distinguishes framing from work focused more on cognition, including work on schemas and heuristics. Frames can refer to large or small components. These include master frames ([|Snow and Benford 1992]) that are broad in scope and include values about the political system or process, and issue frames that are more narrowly focused on specific policy or political issues. [|Druckman (2004)] makes the distinction between equivalency and issue frames. Equivalency frames are those that are logically equivalent presentations. International relations scholars have picked up on the issue of framing in discussions of identity construction ([|Lynch 1999]; [|Risse 2000]; [|Schimmelfennig 2003]; [|Muller 2004]) and normative change ([|Keck and Sikkink 1998]). Framing at its broadest includes the construction of the story of realism itself ([|Beer and Hariman 1996]), and at its most specific includes the changing of one word in a survey question to elicit a different response. Finally, [|Tuchman (1978]:209) makes an important point when she says that frames “both produce and limit meaning.” So, an additional purpose of framing is to keep competing frames out of the discourse or to counter them. In the foreign policy realm, the communication of frames lies at the heart of agenda setting, policy advocacy, and policy legitimation. Much of the scholarship in this area assesses how political leaders ([|Smoller 1990]; [|Kernell 1997]; [|Grossman and Kumar 1981]) and/or opposition ([|Wolfsfeld 1997]) frame messages about foreign policies. Political leaders usually have a considerable ability to shape the way issues, policies, and events are depicted. When one talks about spin ([|Brown 2003]), one is talking about framing. 
Leaders may choose to communicate using particular frames during different policy phases, at varying levels of elite and public consensus, and under different international contexts ([|Roselle 2006]). [|Wolfe (2008)], for example, argues that loss framing is more prevalent before a foreign policy is implemented and gain framing is more prevalent after. [|Entman's (2004)] cascading network activation model is also about how frames are created under particular conditions. And as some IR theorists have noted, political leaders can attempt to “filter identity discourses” within a state ([|Checkel 2004]:234), and can frame policies “with public justifications which enact the identity and moral purpose of the state” ([|Lynch 1999]:18). Still, leaders in the United States and in many other countries complain about their inability to get their message on television, and often claim that media are biased and/or antagonistic ([|Grossman and Kumar 1981]). Successful frames are tied directly to familiar, compelling, and/or persuasive values, myths, or identities. [|Snow et al. (1986)] refer to this as frame alignment. Others refer to this as resonance, and the literature in the realm of foreign policy has moved to assess how identity shapes foreign policy over time. [|Herman (1996)], for example, explains changes in Soviet foreign policy by analyzing changes in “collective ideational constructs” that were successfully communicated and won acceptance within a particular political and structural environment. [|Schimmelfennig (2003)], in his study of European integration, recognizes the importance of collective identity within a rhetorical action framework that emphasizes strategic behavior. [|Entman (2004]:17) argues that “presidential control over framing of foreign affairs will be highest when dealing with the culturally congruent or incongruent. In response to these situations, elites outside the administration tend to remain silent, and their quiescence allows the administration's claims to flow unimpeded, directly through the media.”
National identity has been addressed in the literature as “a constructed //and// public national self-image based on membership in a political community as well as history, myths, symbols, language, and cultural norms commonly held by members of a nation” ([|Hutcheson et al. 2004]:28). National identity, then, should constrain how leaders seek to legitimize policies. [|George (1989)] suggests that because information about policies will be more detailed and sophisticated at the elite level and less so at the level of the mass media, leaders’ communication via the mass media will be more broadly consistent with dominant national values, myths, and identities. In his work on coalition building, [|Snyder (1991)] notes that because these “myths are necessary to justify the power and policies of the ruling coalition, the leaders must maintain the myths or else jeopardize their rule” (1991:17). These myths are not simply used strategically by groups as political instruments, although that certainly is true: “[o]ften the proponents of these strategic rationalizations, as well as the wider population,” notes Snyder, “came to believe them” (1991:2). These beliefs then affect future policy decisions.

To Whom: Public Opinion
This discussion of resonant frames highlights the importance of the audience in the literature on communication and foreign policy. Just as various actors’ communication may affect foreign policy making, mass media communication about foreign policy will be directed at various audiences. These audiences include the political elite and decision makers themselves, the public, and foreign audiences. Members of the political elite have high information needs, pay closer attention to communication related to issues of interest to them, and also have more knowledge on those issues. Many members of the public may be quite interested in foreign policy events or issues during times of high tension or crisis, but during relatively calm periods the public is not usually attentive to foreign issues. From radio to television to the internet, the range of media has expanded, and much of the public's information about the world and about foreign policy comes from mass media. This explains the growing interest in the role of media in the conduct of foreign policy. It also points to the importance of different audiences in the policy-making process. Certainly political elites and interest groups may be involved in policy making, but the development and expansion of mass media has increased the amount of information available to the general public. The literature on foreign policy and communication is often linked to that on foreign policy and public opinion. The literature on foreign policy and public opinion ties in directly to communication processes because public opinion is shaped by communicated messages, narratives, images, and stories about foreign affairs, events, states, and people. Many scholars place this literature into two groups: one asserts that public opinion is unstable and not a major determinant of foreign policy making, while the other asserts that public opinion is more rational and that leaders do or should pay it more attention ([|Holsti 1992]).
Although much of the literature in both areas does not directly address the communicative link between public opinion and political leaders, a growing subset does address this area. [|Entman (2000)], for example, suggests that political elites do not necessarily rely on public opinion polls for an understanding of public opinion, but they do rely on media coverage of public opinion. Much of the literature suggests that the public pays closest attention to foreign affairs in crisis or during war ([|Knecht and Weatherford 2006]). Particularly in these situations political leaders have the ability to shape mass media coverage: the events are most often happening far away from the homeland, and the political leaders often have the most up-to-date information. At least theoretically, leaders will pay attention to public opinion when the public is paying attention. However, leaders’ courting of overseas opinion is an increasingly significant phenomenon, exemplified by attempts early in the Obama Administration to engage with the Muslim world. The opinion of people “on the move” is another emerging research agenda. In Europe there is a body of audience ethnographic studies indicating that, particularly for migrant and diasporic publics, news about war, conflict, and international affairs is particularly important in determining individuals’ sense of security and belonging in the UK, Germany, and elsewhere ([|Gillespie 2006, 2007]).

Via What Media: Communications Revolution
There is no doubt that the mass media environment has become increasingly complex as the twentieth and twenty-first centuries have been marked by a communications revolution. The literature in communication and political communication acknowledges that the type of media can shape the framing of messages ([|McLuhan 1994]). From works on the differences in newspaper and television coverage to the new role of the internet, the characteristics of the medium itself are important to consider. This is directly relevant to foreign policy in a number of ways. [|Dizard (2001)], for example, argues that electronic information and resources are affecting foreign policy by (1) raising a new set of strategic issues; (2) changing how information is used and stored in the foreign policy-making establishment; and (3) fostering the rise of public diplomacy. [|Robin Brown (2005)] argues that “the diffusion of communications technologies, ranging from the telephone to the Internet, is producing a more open, more public, political environment and that this environment modifies the type of political strategies that work.” The rise of 24/7 television news coverage and the internet are the focus of special attention in the literature on the communication revolution and foreign policy. Besides the complicated effects associated with televising events and decisions to a worldwide audience, [|Hanson (2008)] suggests that the development of the internet has increased the transparency of governmental actions and events around the world. In particular, new technologies allow nongovernmental actors to communicate more easily and allow international events to be more broadly transmitted. Communication transparency, then, has a place in the foreign policy-making process.
[|Livingston (2003]:257) categorizes transparency into (1) domestic transparency that focuses on the state's disclosure of information; (2) imposed transparency that attempts to gain access to information from others; and (3) systemic transparency that refers to the proliferation of communication technology. The increased reach and availability of communication technologies allows nongovernmental groups to organize and communicate their positions on issues related to foreign (as well as domestic) policies. In this sense, technology has empowered additional actors in the foreign policy process. Studying what is covered in the mass media about foreign policy shows that the media within a state tend to focus on their own state's involvement ([|Archetti 2008]). Related to the study of what is shown in the news is work on communicated messages more broadly. For example, [|Baum (2004)] argues that the new trend of soft news, which places greater emphasis on dramatic, human-interest themes and episodic frames and less emphasis on knowledgeable information sources or thematic frames, tends to induce suspicion and distrust of a proactive or internationalist approach to US foreign policy, particularly among the least politically attentive segments of the public.

Communication for What Purpose?
The literature on communication and foreign policy has clearly addressed not only how actors communicate in the realm of foreign policy, but why they do so. [|Entman (2004]:4), for example, says that political leaders “peddle their messages to the press in hopes of gaining political leverage,” while [|Brown (2005)] emphasizes that political actors use media to mobilize support. Likewise, [|Pan and Kosicki (2001]:59) suggest that “framing is a discursive means to achieve political potency in influencing public deliberation. It is an integral part of the process of building political alignments.” Nor is the achievement of legitimacy a matter only for nation-states: non-state political actors such as Al-Qaeda strive to use political communication to legitimate their beliefs and actions and elicit consent from dispersed actual or potential followers ([|Hoskins and O'Loughlin 2008]). Domestically, leaders may be concerned to one degree or another with securing domestic support for, or acquiescence to, a foreign policy decision from a variety of groups including elites, interest groups, and/or the public, a fact that raises concerns about policy legitimacy and coalition-building. Students of American presidential communication have long emphasized the importance of elite and popular support ([|Denton and Hahn 1986]; [|Tulis 1987]; [|Stuckey 1991]). Leaders use media to explain and justify policy decisions because in a democracy leaders rely on the public for votes. 
[|George (1989]:584) notes that //policy legitimacy// is important to the President of the United States so that “the forces of democratic control and domestic pressures do not hobble him and prevent him from conducting a coherent, consistent, and reasonably effective long-range policy.” In the United States, policy legitimacy is tied to the role of political elites and public opinion in policy making because these political forces have a powerful role in decision making and may act as a counterweight to leaders and their agendas. Therefore, policy legitimacy is important because it creates a “fundamental consensus” which eases constraints on policy making ([|George 1989]:585). Moreover, it is important to remember that policy ideas are conveyed through political communication. When leaders attempt to legitimize policy, communication is central for shaping both the context for elite discussion and public opinion, something that certainly reflects what [|Tulis (1987]:4) calls the rhetorical presidency. This task of legitimation is undertaken largely through the mass media. As Trout asserts: “the process of shaping the image of the environment in support of a given policy at a given time is both politically significant and at the foundation of legitimation” (1975:256). According to [|George (1989)], policy legitimacy has two components. First, a leader “must convince people that he knows how to achieve these desirable long-range objectives” ([|George 1989]:235). George calls this the cognitive component that establishes the feasibility of the policy. Second, an American leader must convince others in his administration, Congress, and the public “that the objectives and goals of his policy are desirable and worth pursuing – in other words, that his policy is consistent with fundamental national values and contributes to their enhancement” ([|George 1989]:235). 
This sounds very much like [|Gelpi et al.'s (2005/2006)] argument that the public's tolerance for casualties depends on beliefs about the rightness or wrongness of the war and beliefs about the chances of success. This brings us to the blurring of the lines between a domestic and international focus in the scholarship.

Foreign Policy and Communication in International Relations
Acknowledging that categories are artificially constructed, the next general area of scholarly literature that addresses communication and foreign policy does so not from a primarily domestic or comparative perspective, but through the lens of international relations. Communication and foreign policy are inextricably linked to diplomacy, signaling, and threat perception, for example. The literature on honor, face, prestige, and reputation highlights the importance of audience and the strategic nature of communication. The vast literature on diplomacy is covered in a number of other essays in the compendium and will not be directly addressed here. In addition, foreign publics as audience are covered extensively in the related essay on public diplomacy in this compendium. [|David Wedgwood Benn (1992]:3) has written that “the role of information is so fundamentally important in the shaping of political perceptions that one is sometimes in danger of overlooking it.” Certainly the realist paradigm in international relations minimized the importance of communication in foreign policy, yet other work called for attention to information framing and reception in the international realm. Classic works in this area include those of [|Jervis (1970, 1976)], [|Axelrod (1976)], and [|Riker (1986)]. In his early work, [|Jervis (1970)] distinguishes between signals and indices. Signals are characterized by tacit or explicit agreements about their meaning, while indices “carry inherent evidence that the image is correct because they are believed to be inextricably linked to the actor's capabilities or intentions” (1970:xi). Communication in the form of signals and indices, then, is central to how foreign policy is understood by adversaries and allies. Work on image and reputation in the international system ([|Jervis 1989]; [|Jervis et al. 1985]) is also about the framing or communication of intent. 
In [|O'Neill's (1999)] work on symbolism that addresses honor, social face, prestige, and moral authority, he stresses communication in a strategic context. O'Neill argues that communication goes beyond language and rhetoric. Much of this work highlights the importance of credibility as well as framing in the communication of foreign policies and in foreign policy behavior. Jervis also notes that the reception of foreign policies is shaped by a number of characteristics, including prior images and rationalization.

Audience Costs
One current thread in the scholarly research agenda on foreign policy and communication is the study of audience costs. Foreign policy making demands that leaders communicate not only with domestic audiences but with international audiences as well. In particular, states must signal or communicate their foreign policy intentions. Those who study audience costs assert that political leaders can better communicate foreign policy intentions in democracies. In democracies the public can hold leaders accountable by voting them out of office if leaders back down from a policy (e.g., a threat). Thus, the argument suggests that leaders in democracies will not make threats lightly; other states’ leaders understand this dynamic, and therefore the threats are more believable. There have been a number of critiques of this argument based on the need to understand better the communicative processes involved. [|Slantchev (2006)], for example, points out that making commitments credible is difficult at best. Schultz notes that while rational choice models assume that more information is good, this is not always the case because it is not true that actors always “use information correctly and efficiently” (2000:60). [|Warren (2008)], in his theory of communicative structuralism, suggests that the literature on audience costs assumes a communication network, signals transmitted via that network, a mass audience to receive the signals, and the means to effect a coordinated response. He argues that the structure of communication networks affects the creation and development of mass audiences and that this has significant implications for theories associated with audience costs. In particular, mass media networks must be sufficiently free and sufficiently dense. 
[|Baum (2008)] argues that “media in multi-party democracies are more likely to make competing frames – including alternatives to the government's preferred frame – available to citizens when the chief executive engages the nation in a foreign conflict.” He showed this in his study of support for the war in Iraq across countries, finding that people in countries with greater media access and a greater number of political parties were more likely to oppose the Iraq War, and that their countries were less likely to supply troops.

Constructivism/Poststructuralism/Discourse Analysis
The role of mass media is central to understanding the construction of foreign policy, and is useful to scholars interested in “discursive structures and framing processes” ([|Lynch 1999]:262). As discussed above, frames are political instruments set within a broader social context, and they have purposes that include more than persuasion. Thus framing at the domestic level and through mass media is central to the construction and maintenance of state identities ([|Bruner 2002]; [|Rowland and Frank 2002]; [|Nau 2002]), and this invites examination of the role of communication in the formation and (re)construction of the broader social context. While constructivism opened a door in international relations to the examination of identity and interest construction in part discussed above, recent poststructural work has focused specifically on discourses – at the heart of communication processes. In international relations theory, early constructivist literature on the role of state identity ([|Katzenstein 1996]; [|Lapid and Kratochwil 1996]; [|Wendt 1999]; [|Kubalkova 2001]; [|Hopf 2002]; [|Checkel 1997, 2004]) argued that identity affects foreign policy and international relations. Identity, wrote [|Lynch (1999]:22), indicates: “how each state understands the meaning and purpose of regional and international organizations, the role the state should play in the world, and the kinds of interests worth pursuing.” State identity, then, directly affects the context for foreign policy making. This perspective suggests that identities are complex and multifaceted, and must be (re)constructed over time ([|Checkel 2004]); how this is accomplished was less clear and is directly related to communication. [|Wendt (1996]:57) argued that “rhetorical practice,” through “consciousness-raising, dialogue, discussion and persuasion, education, ideological labor, symbolic action, and so on” could affect identities and interests. 
Others who address communication and identity focus on communication among or between states, and argue against realists who “dismiss public justifications as empty talk, with no impact on the actual pursuit of policy” ([|Lynch 1999]:44). That is, the possibilities for policies are constructed as identity and interests are constructed. Yet, the specific communication dynamics associated with constructivism are still understudied and underspecified. [|Rousseau (2006)], for example, argues that characteristics of the communication system within a state affect the degree of shared identity, i.e., the greater the concentration of the media, the higher the level of shared identity. One critique of constructivism is that it has focused too much attention on ideas or culture and too little attention on the agency of actors within an institutional or strategic context ([|Barnett 1999]). Communication is central to an actor's agency. Other critiques of some constructivist work question whether or not this construction can be studied with a positivist methodology or whether the search for causal mechanisms is both fruitless and detrimental to the full understanding that foreign policy is embedded in meanings and constructions without linear direction. In particular, poststructuralists, following in the tradition of Derrida, Foucault, Kristeva, Laclau, and Mouffe, make this argument. For example, as [|Hansen (2006]:1) writes, “[t]he relationship between identity and foreign policy is at the center of poststructuralism's research agenda: foreign policies rely upon representations of identity, but it is also through the formulation of foreign policy that identities are produced and reproduced.” So foreign policy cannot be understood as a linear or causal relationship between variables such as identity as independent and policy as dependent. These are mutually constructed. 
Important works include [|Campbell's (1992)] analysis of US foreign policy; [|Crawford's (2002, 2004)] use of informal argument analysis to understand the underlying beliefs and political arguments about slavery and colonialism; [|Fierke's (1998)] work on the end of the Cold War; [|Hansen's (2006)] work on the Bosnian War; and [|Hoskins and O'Loughlin's (2007)] study of the interplay of media and political discourses in the representation of terrorist threats and policy responses. Crucial here is the acknowledgment of the importance and political nature of language that is both structured and unstable. Hansen does make the argument that a poststructuralist methodology – understood as “the procedures and choices by which theory becomes analysis” (2006:2) – is desirable. Others agree, suggesting that a systematic and transparent methodology adds legitimacy to the scholarly work ([|Crawford 2004]; [|Hopf 2004]). Some argue, as Holland does, that poststructuralism is strong on discourse but poor on cultural context and constructivism is strong on culture but weak on discourse. He offers yet another option, writing that critical geopolitics “offers a preferential starting point, conceptualizing foreign policy as culturally embedded discourse” ([|Holland 2008]:6). This brief essay can in no way cover all of the issues associated with the various schools of thought associated with constructivism, poststructuralism, discourse analysis, and/or critical geopolitics, but a fundamental contribution of these literatures must be acknowledged. Take, for example, works associated with Securitization Theory or the Copenhagen School ([|Buzan et al. 1998]), or Critical Discourse Analysis ([|Fairclough 2007]; [|Nabers 2009]). These works take discourse seriously. The study of discourse takes as central “public, discursive activity” or communication to the understanding of foreign policy and international relations more broadly. 
Some scholars suggest that public discourse structures the behavior of states by constraining and constructing the realm of the possible in foreign policy: “In particular overall policy must hold a definite relationship to discursive structures, because it is always necessary for policy makers to be able to argue where ‘this takes us’” ([|Waever 1996]). Overall, argues Waever, “[p]ublic, discursive activity constitutes a realm with its own coherence, logic and meaningful tensions and by studying this, one can capture strong structuring logics at play in foreign policy.” The notion of structure is central here as there is a clear distinction between discourse analysis and attempts to discern individual or collective meaning (in a psychological sense) from articulated messages. Structure here is akin to Waltz's sense of structure. So, discourse here is not “cheap” and is layered, having a significant effect on foreign policy behavior by constituting the meaning of the present situation, the identities of those involved, and the nature of the relations between them. Rather than treat policy and discourse as independent variables and seek to construct explanations of what caused policy change (e.g., “when and how does discourse matter?”), analysis of discourse asks how the policy came to be something that would be considered in the first place ([|Holland 2008]:10) and what other policies were thereby rendered unwarranted or even unthinkable.

War and Crisis
This essay concludes with a section on how the literature on foreign policy and communication deals with war and crisis. Events such as war and crisis have driven much of the literature in the areas explored above, especially because international threats and violence have been among the central issues of foreign policy making. Because communication is central to foreign policy generally, it is central to foreign policy and war. In this area, too, communication is key because “[w]ithout rhetorical framing, it would be impossible for any policy maker to present a case for war” ([|Wolfe 2008]:2). There has been an ongoing debate about the role of media during war and conflict. While many politicians, military planners and officers, and journalists claim that media can “lose a war,” most of the scholarly literature in this area, again, paints a more complex picture. Hallin's classic work on Vietnam, for example, showed that US media coverage of the Vietnam War was, for the most part, supportive of the war until elite consensus began to falter. In addition, as the war dragged on and events on the ground seemed not to match the statements of leaders (creating a credibility gap), media had a more difficult time reconciling images and words. Building on the study of the Vietnam War, many scholars have sought to understand the coverage of particular wars. Analysis of media coverage of the Falklands War, for example, showed how the British government was able to control information in part because of the short duration and long distance and isolated location of the war ([|Harris 1983]). The two Persian Gulf Wars have also provided historical cases for recent research on communication and war. Research on the role of media in the conduct of the first Persian Gulf War assesses the “pool” system of organizing journalists, the military control of information, and the emphasis on technology and a “bloodless war” ([|Taylor 1992]; [|Young and Jesser 1997]). 
The video images of so-called “smart bombs” suggested a technological marriage of military and media capabilities. Bennett and Paletz's edited book, [|//Taken by Storm// (1994)], is a seminal piece on the first Persian Gulf War. Much of the recent literature on the second Gulf War is focused on the role of multiple media outlets, including al Jazeera, and the performance of the American press in legitimizing the use of force in 2003. Some have even addressed a so-called “al Jazeera” effect ([|Seib 2008]). [|Hoskins (2004)] explored how previous wars in Vietnam and Iraq offered “template” images or episodes through which journalists and political leaders tried to make sense of the unfolding 2003 Iraq War and its aftermath. In addition, scholars have been interested in how the embedding of journalists has affected the coverage of war ([|Tumber and Palmer 2004]). Some suggest that embedding allowed television viewers to see and empathize with soldiers in the field while obscuring the bigger political issues and international context. More generally, scholars have looked at how media structure and messages affect the likelihood of a state going to war – that is, choosing to wage war. Van Belle argues that shared press freedom across countries affects whether or not those states go to war. This is due to the role of the free press in the foreign policy-making process. First, a free press demands that leaders seek to explain and gain support for their foreign (and domestic) policies, and second, the policies must at the very least appear to be responsive to the public. 
In addition, [|Van Belle (2000)] argued that press freedom is a “much more robust indicator of peaceful coexistence between states than democratic political structure.” Those who study audience costs argue that leaders who are more likely to be removed from office (either through the ballot box – [|Fearon 1994]; [|Schultz 2001], or through an elite ouster – [|Weeks 2008]) make more credible threats and are more cautious in initiating conflict. Another group of scholars argues that, at least in countries with open media systems, information about casualties and various views on the conflict (including oppositional views) heighten audience costs and make states with open media less likely to initiate conflict ([|Van Belle 2000]; [|Choi and James 2006]). [|Mansfield and Snyder (2005)], on the other hand, argue that during transitions of political systems, leaders must try to consolidate power and are much more prone to removal from office than before, and this //increases// the likelihood of conflict. They argue that this is because of the use of nationalist rhetoric that rallies the population during transition, making it more difficult for the leader to back down. So, in established democracies and in open media systems, because leaders are more prone to removal, conflict is //less// likely, while in transitional states, because leaders are more prone to removal, conflict is //more// likely. The central explanation for the difference in these cases seems to be that in transitional states a point may be reached beyond which leaders cannot back down because they have built up public support for conflict behavior to such a high degree, and opposition political elites within the system stand ready to exploit this. [|Fearon (1994)] and [|Baum (2006)] do recognize that there may be a threshold beyond which leaders are more likely to go to war in democracies, but the focus in the literature has been on the pacifying effects of audience costs. 
[|Snyder (2000)] makes the argument that “democratization produces nationalism” (p. 45) because leaders use nationalism to consolidate power. Nationalism can thus rally people around a new government, and can increase the likelihood of conflict, according to this argument. By promoting nationalism and, in essence, implying a promise of action to support nationalism (i.e., increasing audience costs), a leader is less able to back down, and conflict becomes more likely (see [|Baum, 2006], for a similar idea). Mansfield and Snyder make a similar argument about transitional, rather than democratizing, states: “[w]hen an autocratic regime breaks up, there is a dramatic rise in the importance of mass political ideology for legitimating the power of ruling authorities and other elites. The people can no longer simply be repressed or bought off; they must be persuaded” (2005:60–1). Mass persuasion implies a primary role for the media in this process as citizens turn to media to understand political changes, leadership decisions, and the construction of the idea of the state and its foreign policy. Snyder adds that this dynamic is strongest during the early phase of democratization, in part because institutions, including the media, are not strong enough (or independent enough) to counter “the influence of nationalist mythmakers” (p. 54). Leaders’ abilities to use media depend on (1) their ability to control sources; (2) journalists’ independence and professionalism; and (3) segmentation of the media market (p. 56). A group of scholars is examining the role of technological advancement in current and future war scenarios. Specifically, new technologies have the potential to increase the amount of information available both to military commanders and to political leaders about the course of the war ([|Owens 2000]). In addition, new technologies bring the possibility of new threats from small groups ([|Arquilla and Ronfeldt 2001]). 
This point will be discussed in the section on terrorism below. Finally, there is a group of scholars who question what makes states secure in the first place. [|Fierke (2007)] writes, reviewing definitions of security, that “security is about being and feeling safe from harm or danger” and that security is a contested concept. That political leaders so often equate security with military security against external military threats, many would argue, already assumes quite a lot. This assumption then leads to foreign policies that prioritize military solutions to international differences. Hence, the construction or communication of a worldview that gives military instruments of power primacy is central to the understanding of war and peace in the international system. “What is Foreign Policy?” is an important question for scholars who work in this vein ([|Gaskarth 2006]:332).

Terrorism
Clearly related to historical developments, an area of research that has grown significantly in the past decade is that of the role of communication and the mass media in relation to terrorism and policies related to it. (See the related essay Terrorism and Counter Terrorism in Cyberspace.) A central question raised in this literature is to what degree mass media facilitate terrorism by giving terrorists a platform for their grievances and a showcase for their violence, and with what effect. The technological revolution in communications has only contributed to the ability of non-state actors, including those using violence, to send messages to the world. This creates a complex foreign policy-making environment for political leaders. Research in this area focuses on the dynamics of communication related to terrorism. For example, one result of foreign terrorist activities is that foreign sources and American victims are preferred over American political sources in the mass media ([|Nacos 2007]). So, according to Nacos, during and after terrorist events, American political leaders lose their usual preeminence in the media. This means that the foreign policy-making environment is changed as terrorists and their allies can often communicate via the mass media with a broad audience. Nacos notes that this is different when terrorists strike on American soil. In this case, American political officials have the ability to dominate the mass media and communicate directly with the American people. Studies of political communication on and after 9/11 suggest that American political leaders used media effectively to promote support for governmental policies in reaction to 9/11 ([|Reynolds and Barnett 2003]). Yet there are differences across different countries due to differences in the structure and role of media in various societies. 
For example, the BBC and CNN covered terrorist activity in different ways, in part due to different journalistic traditions and structures ([|Barnett et al. 2008]). Russian media coverage of Chechnya has been framed in terms of terrorism and, for the most part, has demonized the Chechens ([|Oates 2005]). Another set of scholars associated with cultural studies sees terrorism itself as communication (see, for example, [|Schmid and De Graaf 1982]). Violent acts labeled terrorism are a means through which a group can communicate, to be sure, but this literature also suggests that the structure of media itself – particularly in commercial media systems – encourages or enables this type of communication. Violence sells. In addition, through discourse analysis these scholars deconstruct the language used by political leaders and others to produce political violence: “the media and culture are directly implicated in the wars of meaning which pervade contemporary politics” ([|Lewis 2005]:249). In response to the emergence of transnational terrorism, innovative recent studies have addressed the diffusion and contestation of communications by or about terrorist groups across national borders and how media and now citizens in different countries transfer and translate stories across different channels and platforms ([|Corman et al. 2008]; [|Hoskins and O'Loughlin forthcoming 2009]; [|Awan et al. forthcoming 2010]). [|Debrix (2008)], for example, analyzes tabloid geopolitics, arguing that certain mediatized discourses “take advantage of contemporary fears, anxieties, and insecurities to produce certain political and cultural realities and meanings that are presented as commonsensical popular truths” (p. 5). [|Weber (2006)] analyzes US film in the post–9/11 period to illuminate the discourse on US foreign policy. The breadth in the scope of the literature on terrorism mirrors the breadth in the literature on communication and foreign policy more generally. 
Scholars working within different disciplines, subfields, and intellectual traditions have all addressed the importance of understanding communicative processes associated with foreign policy. Often this work is found in disconnected scholarly research streams: scholars in one group have not usually reached across to others. This, however, is changing significantly. Scholars are beginning to pursue interdisciplinary and more internationally inclusive research working groups. One example is a working group, in part supported by ISA workshop monies, that is studying strategic narratives and international relations. This international and cross-disciplinary collaboration presents an exciting opportunity for future research within this area.

Shalini Venturelli
==== Subject [|International Studies] » [|International Communication] ==== ==== Key-Topics [|geopolitics], [|information], [|information and communication technology (ict)], [|intellectual property], [|science] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
The Global Knowledge Society is a broad interdisciplinary effort to probe the socioeconomic, technological, and geopolitical dimensions of knowledge production, growth, diffusion, and exploitation, in terms of the impact on the development of societies worldwide. The intrinsic subject, namely the structural conditions for the genesis of new ideas and their social utilization in specific national environments, leaves practically no major tradition of inquiry untouched. The field conjoins scholarly inquiry not only across all areas of social science – including international relations, international communication, information technology, international development, and economics – but also across the physical sciences and humanities. Its emergence as a field of inquiry in the last decade of the twentieth century signaled intellectual recognition of the increasingly decisive role of knowledge infrastructure in national development and international policy (for example, [|NSTC 2006]; [|World Economic Forum 2006]). But the field also addresses a historical void in traditional social science – from economics and political science to international affairs and development studies – for explaining structural and environmental differences in societal rates of knowledge generation, application, and adoption. How nations develop new knowledge, or successfully adopt and integrate ideas and innovations from elsewhere to solve their particular development problems and pressing societal needs in the short and long term, is now acknowledged as a key determinant of their long-term prospects and viability. Understanding the comparative lessons of successful knowledge development for societies everywhere is unquestionably pertinent to science but is also vital to national and international policy, thus carrying the dual merits of compelling intellectual interest and social usefulness in a global context (see [|Task Force 2005]). 
The field confronts phenomena of considerable complexity rivaling other complex systems found in molecular biology, for instance. There is good reason for the parallel since the domain of ideas and knowledge-generation is governed by multitudes of interrelated factors and knowledge-development conditions, including the random mutation, selection, and adoption of ideas and innovations from an infinite range of sociocultural possibilities. Moreover, within any single national context, conditions of knowledge generation and adoption vary over time and circumstance between disparate social environments that are themselves in turn transformed by the smallest shifts in knowledge growth. Another source of complexity that challenges the field is its innate interdisciplinarity, on a macro scale spanning major disciplinary orders such as the biophysical sciences, information technology, and humanistic inquiry, as well as on a micro scale within each disciplinary order, such as within the social sciences. Each discipline and field selects some factors for investigation, and together these multiply the range of relevant factors. Even further, the interaction among theoretical and methodological perspectives flowing from different disciplines creates its own overlay of evolving conceptual complexity. Thus investigation of the Global Knowledge Society presents a fascinating space to observe real-time transformations in cross-pollinated paradigm formation – itself a broad terrain of conceptual innovation. However, the one area of consensus in the research and policy community (see [|Task Force 2005]) is acknowledgment of the major historical realignment and transformation of nations taking place through an important shift from an industrial society, whose foundations are premised on physical resources that are natural, technological, and capital in origin, to a knowledge-based society, whose primary resources are conceptual and creative in origin. 
Understanding this shift, and why some societies have taken the lead in knowledge innovation while others persistently fail to benefit from the rapid expansion in knowledge production and dissemination, may be characterized as the general investigative problem of this comparatively recent field of study.

Understanding Knowledge Development: Theoretical and Policy Challenge
The problem of knowledge growth, which is neither uniform across societies nor constant over time, has been described as one of the most profound and indefinable phenomena in human history ([|Mokyr 2002]). A society's knowledge system and capacity for idea generation could well be termed a “sociocultural puzzle,” to borrow a phrase from [|Marvin Harris (1978)]. Scholars have attempted to unravel the puzzle and to understand the forces behind divergent forms of knowledge development across societies. In the past decade, interest has deepened further with new engagement from the fields of the history of technological change, economic history, international communication, and information technology development. Simultaneously, the spread of digital networks has heightened the importance of the knowledge-based economy for policymakers, communities, and regions seeking solutions to the immediate challenges of globalization, industrial decline, and the outsourcing of jobs in all sectors ([|Landry 2000]; [|Govindarajan and Gupta 2001]; [|Jalan 2003, 2005]; [|Bhagwati 2004]). Accordingly, one important focus of reformers in both developed and developing countries has been to prioritize knowledge production, since a lag in this area is now regarded as a threat to long-term prosperity, capacity for innovation, and the chance to develop and strengthen the foundations of human capital in education, creative enterprises, science, information systems, and technology ([|OECD 1996, 2005]; [|World Bank 1999; 2005]; [|Hartley 2005]; [|United Nations 2005]). To accomplish this goal, researchers and policymakers have stressed the importance of areas such as intellectual property rights, the information technology and scientific infrastructures, investment in IT industries, educational reforms, and integration of domestic initiatives into the global economy – all with varying results. 
Despite research and policy efforts to explain why and how knowledge growth occurs, there is an absence of certainty in view of large, persistent variations even among societies at similar stages of socioeconomic development ([|Mansell and Wehn 1998]; [|Sen 2000]; [|OECD 2003; 2005]). The difficulty arises from the inherently elusive and complex nature of knowledge innovation, though this has not deterred public perception of its vital importance to the survival and future prosperity of every community ([|European Union 2001]; [|CEDP 2004]; [|European Commission 2005a; 2005b]; [|European Council 2005]). A great deal has been written on information technology infrastructure and the “information society,” especially on the restructuring of communications policies and industries ([|Noam 1992]; [|Klinger 1996]; [|Yarborough 2001]; [|Bouwman 2003]), while others have turned their attention to the political and economic effects of the diffusion of new information technologies such as the internet ([|Shapiro and Varian 1999]; [|Castells 2001]; [|Litan and Rivlin 2001]; [|Friedman 2005]), or to the application of information technology for development goals ([|Rogers 1995]; [|Mansell and Wehn 1998]; [|Morales-Gómez and Melesse 1998]; [|OECD 2002]; [|Gripenberg et al. 2004]). In a few other fields where the question of knowledge innovation is at issue, cultural perspectives have played a role. Most notably, important studies on the history of technological innovation draw a link between culture and knowledge development and employ cultural factors to explain historical shifts in scientific, economic, and technological change ([|Jones 1987; 1995]; [|Jacob 1997; 1988]; [|Landes 1998a]; [|Beise and Stahl 1999]; [|Pomeranz 2000]; [|Ziman 2000]; [|Mokyr 2002]). 
This approach contrasts with recent analyses in ecological history and biogeography, which posit that natural physical resources and geography govern a society's fate as a millennial constant, fixing the upper limit on knowledge production and economic development relative to other societies. Patterns of technological and knowledge development, even culture itself, are explained, in this approach, as mere byproducts of a society's interaction with its physical and biological environment over long periods of local human settlement ([|Diamond 1997; 2005]; [|Lang 1997; 2002]; [|Olsson and Hibbs 2005]). In modern anthropological investigations, on the other hand, culture and not geography is the determinant and occupies the center of analysis, although the classical stance in research and cultural policy grounded in anthropological frameworks is to regard cultural forms as distinctive and essentialist properties inherited across generations. At times this has perpetuated the notion that culture is in the nature of an artifact, organized into integrated and cohesive canons of structural, functional, or symbolic systems ([|Weber 1913; 1919; 1923]; [|Geertz 1973]; [|Hall 1976]; [|Bourdieu 1979]; [|European Council 2002]; [|UNESCO 2005; 2003; 2002]). The question of a society's knowledge system is thus affixed to and determined by its received cultural form, in the same way that some studies in biogeography might perceive culture to be structurally affixed to and a product of an inherited physical environment. Finally, socio-legal studies of knowledge and innovation policies acknowledge the relation between culture and knowledge in the analysis of laws governing creativity and innovation, as well as in the regulation of access to information technology ([|Woodmansee 1984]; [|Boyle 1996]; [|Coombe 1998]; [|National Research Council 2000]; [|Lessig 2001]). 
But other cultural factors governing creativity, and divergent cultural understandings of innovation, intellectual property, and the public domain of ideas and forms of expression, remain to be explored under this approach. Although development of the knowledge sector in the information age is of even greater significance to the survival and prosperity of human communities and of nations than at any time previously, comparative examination of knowledge systems at the macro-social level has to date received only a technologically centered or historically oriented treatment in the literature across many fields. The determinants and conditions of knowledge growth under varying sociocultural, technological, institutional, and policy structures have thus far not been comprehensively explored, because of the international scope spanning multiple cultural zones, the profound interdisciplinary challenges, and the sheer complexity of the subject matter. These deterrents are real, but the inherent difficulties in grasping the phenomenon of knowledge development and the theoretical and methodological challenges of comparative international field research will necessarily need to be resolved. This is because assessment of the combined implications drawn from single-factor and single-issue investigations in a number of fields already suggests the need for a new approach that is both broader and deeper in scope, and because the issue grows ever more important for communities and societies everywhere.

Path to a Knowledge Society: Unraveling the Comparative Puzzle
Many would rightly argue that [|Francis Bacon (1605/1996: 120–299)] earns the distinction of being the first scholar to recognize the pivotal role of knowledge development in national progress. He analyzed the problem with uncommon prescience in a pre-industrial age and projected the challenges ahead with a mixture of theoretical insight and empirical regard for the design of national policies promoting knowledge growth. In many of his letters and writings, Thomas Jefferson, political leader and policymaker, also repeatedly articulated this indispensable relation between knowledge and the development of a republic (for example, 1813/1943). On the other hand, classical social theory for most of the nineteenth and twentieth centuries – encompassing economics, anthropology, sociology, and political science – instead of confronting or accounting for the role of knowledge generation and adoption in social change, tended to assume or explain it away as a mysterious X factor running in the background of shifting market mechanisms, cultural and social transformation, and political and geopolitical processes. From Adam Smith and David Ricardo to Max Weber, Karl Marx, and Bronislaw Malinowski, the relevance of knowledge generation to understanding differences in long-term societal development and technological fluctuations was ignored. This gap persisted through most of the twentieth century, even when social theories themselves seemed to rely upon a silent assumption of knowledge conditions that are bound to be present and acting upon the social phenomena they describe. Not until the mid-to-late twentieth century, with the emergence, for instance, of Joseph Schumpeter's economic analysis of “creative destruction” (1934; 1942) and of Clifford Geertz's cultural analysis of continuous processes of the production of symbolic meaning (1973; 1983), were the foundations of social change resulting from knowledge production and dissemination over time taken up with any seriousness.

Factors of (Knowledge) Production: Economics and Endogenous Change
Manifestations of these shifts in knowledge growth, often more readily observed in forms of technological innovation, began to absorb economists in their explanations of the “post-industrial economy,” the “information economy,” and, most recently, the “knowledge-based economy,” and in efforts to account for expansion and change in a society's economic passage from agrarian and industrial production to information and technology production. Beginning in the final decade of the last century, economists and economic historians (for example, [|Freeman 1997]; [|Landes 1998a]; [|Mokyr 1990]; [|Romer 1990; 1994]) began placing the evolution of technology at the center of modern economic growth, with technology in essence serving as a measure of the tangible application of new knowledge or as an indicator of innovations in existing knowledge. The notion of “growth” in economic theory at its core stood for growth in the public stock of knowledge and in its economic value, or, to put it differently, higher rates of expansion in the knowledge base of economy and society. But underlying problems remained: recognition of technological change as an expression of knowledge change still leaves open the question of the origins of change in the first place. The question was not easily answered, for example, in [|Romer's (1990)] economic models of endogenous change and the many subsequent models in the discipline (for example, [|Grossman and Helpman 1991]; [|Helpman 1994]) that built upon his original breakthrough and initiated the trend toward foregrounding knowledge and technology in contemporary economic research. 
Scanning the research landscape beyond the field of economics, we note that, aside from the many compelling theoretical insights on scientific and social knowledge contributed by a number of scholars in sociology and philosophy (such as [|Polanyi 1958/1962]; [|Kuhn 1962]; [|Foucault 1972]; [|Popper 1972]; [|Geertz 1983]), the general theory of knowledge //growth// in contemporary social contexts must be inferred bit by bit from multiple cross-disciplinary propositions on the origins of technological change and innovation. Each proposition contains an element of validity and can be organized by the type of relationship postulated between one or more environmental conditions, such as culture, economy, infrastructure, civilizational grouping, geographical location, natural resources, state system, religious system, strategic power, and individual characteristics of scientists and inventors (a “genius meter,” let us say), on the one hand, and technological inventions and applications, on the other. Outside the field of economic theory, already touched on above, the substantive interdisciplinary literature may be classified into several streams or rough models, which this essay organizes into the following groups: “Distributed Information Networks,” “Technological Diffusion,” “Genius Theory of Invention,” “Creative and Proprietary Incentives,” “Cultural Legacy,” and “Idea Evolution.” A brief assessment of each is provided, along with its potential relevance to the design of a cross-disciplinary framework. The essay's approach considers some relevant factors in each stream, and identifies some elements that would still be needed to construct a far more dynamic knowledge development model for international and cross-national comparison than is currently available in the literature. 
The discussion outlines the search for a more effective multi-disciplinary approach whose complex framework might match that of its subject matter, and one ultimately required in any serious comparative study of knowledge growth and development. The models touched on here all share the common goal of unraveling the puzzle for research as well as for effective international policymaking, especially for application to societies enduring chronic gaps in rates of knowledge development and absorption.

Distributed Information Networks: Knowledge Ensues through IT Networks
A key national obstacle to building a knowledge system is the problem of broad social access to information. Historically, most societies have endured varied forms of centralized knowledge monopolies managed by elite classes and institutions. As a result, control of the knowledge base and restrictive mechanisms limiting contributions to its growth are considered serious structural constraints on progress in any age or place ([|Innis 1950]; [|Valente 1996]). As information technologies (IT) emerge, from the technology of writing to the internet, existing information monopolies are challenged and the circle of social participation in ideas is enlarged. This overriding assumption in a “Distributed Information Networks” approach is also echoed in conceptualization and research on new network technologies. Examining IT innovation from separate directions, [|Negroponte (1995)], [|Gershenfeld (1999; 2005)], and [|Levy (2001)], for example, regard new information technologies as powerful instruments for eliminating institutionally embedded information bottlenecks. As technological networks expand, others argue ([|Freeman et al. 1992]; [|Saxenian 1999]; [|Shapiro and Varian 1999]; [|Castells 2001]; [|Gordon 2001]; [|Litan and Rivlin 2001]; [|Coe and Bunnell 2003]), information control is gradually deinstitutionalized, resulting in a structurally distributed environment where information is held, circulated, and passed along multiple pathways within the network rather than channeled through institutional gates and doorways that inevitably regulate the terms of participation (see [|Putnam and Feldstein 2003]). To promote a distributed rather than centralized information system penetrating across a society would require construction of technological networks for information sharing that are resistant, through social usage, to institutional commands ([|Wasserman and Faust 1994]; [|Stein et al. 2001]; [|Baumol 2002]). 
This argument is borne out in other key instances, most notably the distributive network effects of print technology from the sixteenth to the eighteenth centuries ([|Eisenstein 1983]), still in evidence, and the electric communication networks of the late nineteenth century ([|Marvin 1988]), both of which overturned entrenched information monopolies and fueled the spread of ideas. The problems of social access and the level of a network's socio-economic reach are therefore important factors in any comparative study of knowledge development (see [|Kling 1999]). Yet the assumption in this approach that universal network penetration is an automatic trigger of higher rates of knowledge production is a considerable leap. This approach may be said to insufficiently recognize that ubiquitous access to information networks is not in and of itself a sufficient condition for building effective knowledge systems and thus cannot serve as their sole or even most consequential predictor ([|Venturelli 1998; 2002a; 2002b]). The approach further assumes that information networks bear an isomorphic relation to knowledge production, in which the geopolitical map of the network mirrors that of knowledge creation and distribution. There are of course reasonable grounds to question the validity of this correspondence of information technology networks with knowledge, suggesting the need to explore other missing factors.

Technological Diffusion: Technology Transfer Equated with Knowledge Transfer
Similar properties describe the “Technological Diffusion” approach. Despite similar limitations, the diffusion model has played a prominent international role in the development policies of the past half-century – both in industrialized and developing countries. It tends to carry weight not just in communication technology and development studies but also among decision makers, because of its simplicity and usefulness in explaining every type of technological innovation from the quill pen to the automobile (for example, [|Valente 1996]). And like the previous approach, it possesses an important kernel of truth and historical validity. Since the roots of underdevelopment under this approach are theorized to result primarily from technological scarcity, its postulate of a single compelling causal linkage has gained ascendancy in international policies for the modernization of impoverished communities. Leading communication and information technology studies argued that development gaps could be “leapfrogged,” or passed over, by means of information technology transfers, with wide relevance to impoverished zones within industrialized societies or regions of the world ([|Lerner and Schramm 1969]; [|Rogers 1976]; [|Yarbrough 2001]). The key proposition in IT and development initiatives stems from an understanding about the role of information technologies – such as radio, publishing, newspapers, television, telephony, the internet, and digital media. This role lies in leveraging socioeconomic advancement by overcoming the drag of traditional practices and promoting social knowledge of the development process. Hence the spread of modernist ideas would be a spontaneous outcome of the diffusion of the technology itself ([|Lerner and Schramm 1969]; [|Rogers 1995; 1986; 1976]; [|Mansell 1998]; [|Singhal and Rogers 2001]; [|Courtright 2004]). 
The same view dominates policy thought and research on the communications revolution of the digital age ([|Grossman and Helpman 1991]; [|Noam 1992]; [|King 1994]; [|Morales-Gomez and Melesse 1998]; [|Ruttan 2001]; [|OECD 2002; 2003]). At first this approach appears indistinguishable from the Distributed Information Networks model. The key departures lie in assumptions about technology: the network approach focuses on decentralized information pathways branching unpredictably, made possible via network distribution technologies, while the diffusion approach privileges the information technology itself, since technology adoption is equated with information adoption. While the degree of departure seems small, the impact on research and policy is substantial. In the former, one devises measures for information networks, regarding this factor as a core trigger of an idea chain reaction; in the latter, one counts the total numbers of technological innovations and their penetration rates, on the assumption that these numbers and rates are what constitute the information chain. Needless to say, factors identified in both approaches have a role to play in a more complex and dynamic knowledge system model designed for comparative study, though many of their hidden assumptions may need modification.

Plumbing the Creative Factor: Genius Theory of Invention vs. Creative and Proprietary Incentives
The third important factor in the literature on knowledge development is addressed in separate ways by two models: “Genius Theory of Invention” and “Creative and Proprietary Incentives.” The former explains progress as the product, over historical time, of the cumulative sum of individual acts of “genius” or inimitable talent, for which there are aesthetic, intellectual, biological, and motivational/psychological explanations ([|Simonton 1981; 1999]; [|Dissanayake 1992]; [|Eysenck 1995]; [|Murray 2003]). Here the weight is assigned to individual creativity and potential rather than to the general social, cultural, and institutional environment of knowledge creation. The “Creative and Proprietary Incentives” approach, on the other hand, argues for a more rationalistic and predictable character of creative or innovative phenomena. These phenomena are defined as rational outcomes of economic incentives which society guarantees through socio-legal mechanisms such as intellectual property rights, contractual rights, commercial rights, and other rights and regulatory codes for creators, inventors, and industries ([|Woodmansee 1984]; [|Boyle 1996]; [|Coombe 1998]; [|Khan and Sokoloff 1998]; [|National Research Council 2000]; [|Lessig 2001]; [|Tully 2003]). There is considerable substance to both propositions on the creative factor, yet several crucial aspects of creativity are unaccounted for in the two models. For one, research within these models would need to consider a collection of vital factors not just individually but in conjunction in order to sketch a meaningful picture. 
For instance, the framework may be redesigned to focus on what we may call the “creative conditions of knowledge development.” Then the focus will automatically shift to conditions that at the very least would need to include dimensions such as the social environment of creative and innovative expression within civil society; conditions governing access to the means of producing creative ideas; conditions for enrichment of the public sphere from where ideas can be drawn for modification and exploitation to create further innovations; learning conditions permeating each level of the educational system that promote or inhibit creative ideas; conditions of personal freedom for independent and iconoclastic thought; resources for inventiveness; tolerance of unorthodoxy; and cross-cultural determinants for socio-legal recognition of intellectual property rights. These are just a few among the many additional and important dimensions of the creative factor ([|Venturelli 2001]) unspecified within our two models.

Cultural Legacy and Knowledge Advantage: How Cultural Environment Shapes Knowledge Environment
Another model relevant to a multi-disciplinary framework is the “Cultural Legacy” approach, whereby a society's development pathway is explained by its past or by the binding forces of tradition, worldview, and canonical inheritance. Weber's definitive comparative cultural analysis of civilizational differences (1946; 1951; 1961) has yet to be surpassed. His influence is evident even in later studies ([|Lewis 1982]; [|Huntington 1996]) that remain tethered to an approximate version of the Confucian–Protestant endpoints on the spectrum of cultural values. The analysis is compelling because Weber's framework appears to fit many trajectories of societal evolution around the world. When integrated into the literature on technological change, the framework is an almost exact overlay onto a map of civilizational and cultural characteristics, and thus mirrors the grammar of civilizations argument. Drawing their cases largely from analysis of the Industrial Revolution (for example, [|Mokyr 1990; 2002]), these studies offer real evidence and a persuasive argument that cultural traits and preferences play a major role in the selection and cultivation of the knowledge base of a society, thus accounting for many deep causal factors in contemporary national variations in technological development and innovation rates. [|Jacob (1997; 1988)] makes a significant contribution to this research by tracking technological change back to its roots in conceptual, and thus cultural, upheavals, especially in the form of knowledge transformations which gradually propel the pace and range of innovative practices. 
[|Landes (1998a; 1998b)] provides another piece of the sociocultural puzzle by identifying factors that trigger inversions of knowledge growth, in such a manner that a society whose knowledge system is expanding relatively faster suddenly reverses course, while another simultaneously switches its historically established trajectory of idea production in the opposite direction, from slow or normal to precipitous advancement. In short, the two societies utterly flip over and trade relative positions, as occurred between China and Islamic societies, on one side, and Western Europe, on the other, in the period between the twelfth and fifteenth centuries. Precise cultural elements are sometimes enumerated to account for technological growth and innovation, and each study enumerates its own set. [|Jones (1987; 1995)], for example, credits adaptive traits or “reciprocity,” that is to say, openness to sharing and exchanging ideas, as the key to a society's capacity for development, while [|Goldstone (1987)] points to the determinative role of a cultural climate of orthodoxy or unorthodoxy. This literature, and in particular Jacob's research, holds great promise for comparative studies of knowledge change to draw upon, since it has the advantage of relating cultural context to a society's innovative progress. The approach gains even greater importance in view of the marginal attention to issues of knowledge growth in the fields of anthropology, biogeography, information network research, IT and development, communication studies, and socio-legal studies ([|Venturelli 2002a; 2002b; 2005]). The overall picture offered under the Cultural Legacy model contains many blank spaces and, moreover, appears constructed predominantly from historical evidence predating the twentieth century, not from explorations of contemporary circumstances governing comparative knowledge systems in the twenty-first. 
Employing evidence from technological innovation alone to represent the general topography of knowledge development is another limitation that would need some re-conceptualization. A further question for this approach, but also for other cultural analyses drawn from anthropology and biogeography, is whether inter-cultural and inter-civilizational factors could matter all that much in explaining knowledge growth and change, since intra-societal differences within the same cultural, civilizational, and geographic groupings may be just as great or even greater. To improve our grasp of knowledge change and to explain the relative characteristics of contemporary knowledge systems, we would need significant advances in the interdisciplinary framework for examining structures and mechanisms that promote, for example, orthodoxy destruction, the conditions of creativity, the exploitation of ideas within social information networks, structures of the public domain, and the density of interconnections between information pathways that crisscross the sociocultural and institutional landscape. Thus the “Cultural Legacy” factor requires an appropriate set of adjustments if it is to serve our understanding of not just historical outcomes, but also current and future trends in comparative knowledge development. Nevertheless, one must acknowledge the profound importance of this model linking culture and innovation to any serious effort to grasp comparative knowledge development over time. Credit should be accorded in particular to the significant conceptual and methodological contributions made in studies undertaken by [|Jacob (1997; 1988)], and also by [|Mokyr (1990)], [|Jones (1995)], and [|Landes (1998a)]. To build from this research, in the final analysis, would call for application to contemporary rather than historical knowledge systems.

Disrespecting Boundaries: A Scientific Probe of Idea Evolution
Models emerging outside the social sciences and humanities also offer some rich possibilities. Loosely grouped by this essay under the label of “Idea Evolution,” they are drawn from research in the natural sciences, particularly molecular science, which admittedly makes them many levels removed from sociocultural research. Discoveries at the molecular level emerge far too rapidly for the social sciences to comprehend and integrate at a parallel pace, and they are too numerous to account for here. After all, these distinct research practices are largely akin to conversations in closed rooms. Instead, two exemplary research cases are addressed here as instances of natural science from which one could draw some rich implications for the design of better questions and a deeper grasp of the complex, dynamic, and hidden mechanisms potentially at work in information and knowledge development. For instance, interesting parallels may be drawn from explorations in genetics research that are analogous to the field of study with which this essay is concerned. Are processes of idea mutation analogous to general processes of genetic change, for example? While there has long been speculation on whether cultural evolution, scientific progress, and technological change mirror genetic evolution ([|Popper 1994]), our understanding is at too primitive a stage to assert any concrete correspondence, whether at the unit levels of society, the single mind, or the single idea. Yet this essay attempts a small step toward examining the validity of this analogy, at least at the macro-social or bird's eye level of a knowledge system. Intriguing implications of at least two interpretations from molecular science may be worthwhile to consider. The first is illustrated by [|Dawkins' (1989)] tentative hypothesis, almost offered in passing in his theory of the “selfish gene,” that idea units or “memes,” as he puts it (p. 
189), could indeed be similar to genes insofar as both are essentially replicators or self-copying entities. Might this apply to our understanding of a knowledge system as a whole? It may, but only if we assume that a knowledge system, like similar propositions about the phenomenon of culture, is really a system of reproduction. Yet, as pointed out earlier, that is precisely the problem in some models of cultural analysis, whereby only the reproduced elements are seen as relevant, thus projecting the static, hereditary, and essentialist properties of culture and the social environment while ignoring the more complex, replication-resistant, and unorthodox properties often seen as anomalies. This essay assesses principal models of knowledge development, and thus innovation, primarily from the standpoint of the way in which they account for conditions that induce more or less knowledge generation, production, and dissemination, and by how they account for creativity rather than canon replication. For consistency, if one were to use the same yardstick, then the replicator thesis borrowed from Dawkins is far better suited to explaining cultural transmission than to explaining knowledge change. Thus Dawkins' natural science model applied to knowledge can be largely said to reinforce the classical theories of cultural analysis. Still remaining in the domain of molecular science but working with a different model, we are led to conclude that, if understanding knowledge growth is our aim, then what is far more interesting than the //replication// of ideas is the question of the //mutation// of ideas. 
Both processes of course are relevant in a broad sense to understanding knowledge systems, since the first could explain the persistence of a knowledge tradition and social structures and barriers, while the second might explain the transformation of knowledge traditions, such as during the Renaissance, Reformation, and Enlightenment, or in the ideological departures of the American Revolution that gave rise to an entirely new type of sociocultural knowledge system. Yet mutation of genes, organisms, and the knowledge base is an even more complex phenomenon to account for than replication. A second, contrasting example from molecular science, illustrating the degree of complexity demonstrated in recent genetics research, is [|Nimwegen et al.'s study (1999)]. This second model suggests that mutational diversity in protein function causes mutational disruptiveness, but the destabilizing process also paradoxically generates system robustness and tolerance for much higher levels of mutation. The research further shows that mutations often concentrate in highly connected parts of an RNA network, resulting in phenotypes that are relatively robust against mutations (p. 9716). What may the study of the knowledge society infer from this stream of scientific research? This essay argues that there are indeed several intriguing implications for the knowledge development literature. One implication is the possibility that, instead of replicating mechanically, Dawkins' “memes,” or idea units, might generate idea disruption in large information environments at points where information pathways are most densely interconnected, thereby disrupting the network. Yet, by analogy to recent findings in protein evolution research, such disruption, even while inducing destabilizing forces, may actually render a knowledge system more robust and more tolerant of further idea mutation. 
One way to examine this would be on a cross-societal scale, whereby varying patterns of knowledge development could conceivably be assessed. Knowledge systems that show higher frequencies of idea disruption and greater network density for information sharing, production, and distribution across multiple layers of the societal and institutional system can then be compared with sociocultural knowledge systems that are less polymorphic in their expressive forms and network pathways, manifest clearly defined information trajectories, and show fewer interconnections among vertical, horizontal, and concentrically organized structures for information production and distribution. How may we describe the type, degree, and direction of knowledge change, idea generation, and innovation that occur under both types of systems, and what factors and conditions appear to sustain each? Is it replication capability, mutation capability, or particular combinations of both that produce powerful forms of knowledge expansion? Conversely, which of these conditions is more vulnerable to innovation contraction? Research in the biological sciences provides at least two models of change to work from in comparative studies relevant to national sustainability and robustness in knowledge generation.

The Many X Factors of Knowledge Production: Solving the Puzzle
The promising set of factors drawn from the preceding models in the literature would have to be refined, clarified, and adapted to the objectives of comprehensive comparative national study, so that the categories of evidence investigated match defined criteria. Several of the models – for example, “Cultural Legacy,” with its emphasis on the role of culture in historical cases of innovation, and “Idea Evolution,” patterned after research in evolutionary biology with relevance to the study of knowledge replication and mutation – suggest the need for rethinking the mystery of persistent societal differences in knowledge growth within and between countries. Furthermore, as pointed out in the preceding discussion, key factors and properties would still be needed to resolve the many open questions, unclarified premises, and missing elements within existing models in the literature. A number of factors pertinent to substantive comparative national research – on what accounts for more or less successful transitions to the knowledge society, higher or lower rates of knowledge generation, and greater or lesser knowledge development across social sectors – remain to be unearthed both within and beyond the models addressed in this essay. Comprehensive frameworks remain to be designed that can, first, better explain and describe the primary macro-social components across national knowledge systems located in different world regions and, second, account for the conditions that shape relative cross-societal capacities for progressive levels of idea generation, for knowledge stasis and stability, or for knowledge-system degradation and contraction. A deeper interdisciplinary approach is therefore imperative, both for explaining particular national cases and for discovering general forces and lessons applicable to underdeveloped and developing societies. 
Working through the separate models evident in the literature contributes key variables and insights. But combining them intelligently in a more comprehensive, empirically derived paradigm that addresses the complexity of knowledge growth holds greater promise for research and for application to policies in higher education, civil society knowledge networks, scientific infrastructure, and innovation and entrepreneurship, strengthening the knowledge sector in countries at highly varied stages of socioeconomic development.

Exploring the Knowledge Society Ahead
Though much has been made of the “revolutionary” nature of the information age, to date very little research attention has been paid to whether, and in what sense, the technological, policy, and institutional shifts of the end of the twentieth century and the beginning of the twenty-first alter the fundamental sociocultural, institutional, and historically entrenched basis of knowledge change in comparative national contexts. That is to say, the evidence of inherent alterations in the knowledge system stemming from the revolution in digital and networking information technology is yet to be established in research (for an example, see [|King 2002]). We may only infer from the transformation of information technology infrastructure and IT industries that knowledge growth is also happening, though we cannot know why that is the case in some societies and communities but not in others, and why particular forms of knowledge change vary so widely from one locale to the next. The issue is of particular importance given public demand and policy pressures to exploit the opportunities of globalization in areas such as how to maintain and extend higher rates of innovation; how to sustain a society's existing knowledge leadership against global competitive challenges from new forms of competition; how to address the problems of chronic underdevelopment; or how to start from the ground floor in building knowledge institutions and redesigning basic elements of a functioning knowledge system in lagging communities facing socioeconomic decline. To improve our understanding of the underlying and intricate dimensions of the problem, the methodology for a comprehensive cross-societal empirical investigation would need to combine national data collection with field research and policy analysis to identify the sociocultural, technological, institutional, and policy-legal characteristics of a knowledge society. 
By multi-methodological means and deeper interdisciplinarity, it would uncover patterns of similarity and variation in trajectories of dynamic knowledge change. Based on the results of such investigation, a successful study could construct an integrated knowledge development paradigm that accounts for the complex factors involved in sustaining dynamic levels of innovation and idea generation. To fill these large gaps, future research should aim for the following objectives. First, for real intellectual merit, an interdisciplinary assessment of comparative knowledge growth in contemporary national knowledge development would have to be attempted on far larger scales than currently found in the literature. Second, and more important, it should strive for broad societal and educational impact in multiple ways, and aim to serve as a model to guide policymakers at the local, national, and transnational levels in the design of effective policy strategies geared to expansion of the knowledge society. As such, it would offer far deeper understanding for the international community and furnish more rigorous grounds for recommendations to improve secondary and higher education, scientific infrastructure, and socio-legal and economic incentives for knowledge creators and producers, to strengthen mechanisms for knowledge transfers and knowledge sharing in civil society, and to address many other areas of improvement based on the results of comparative investigations. Third, future research on the knowledge society should bring together researchers and policymakers from many disciplines across the natural and social sciences to review the substance of the field's comparative methods and findings using interdisciplinary frameworks and complex factors. Instead of conversations in closed rooms, scientists would work in concert to build an effective agenda for cross-national research and international collaboration. 
Finally, the results of these progressively enhanced efforts in research and policy design should be widely disseminated, not just within the research and policy communities but also in civil societies worldwide, given the increasingly significant role played by formal and informal non-state networks in the social exploitation of knowledge. And, as [|Jacob (1988; 1997)] observes, the social exploitation of knowledge can be regarded as one of the most prominent proximate causes of idea generation and knowledge application within any economy and society, and at any and every stage of its development.

DOI: 10.1111/b.9781444336597.2010.x

Introduction
The value of privacy has long been recognized across cultures and historical time periods, and indeed is one that some argue is physiologically embedded in human beings as well as in animals ([|Westin 1984]). Over the same span of time, and again across cultures, questions of appropriate intrusions on privacy have been debated, most often framed as concerns about the relationship between the public and private realms, the state and society, and the individual and society. National statutes and constitutions, as well as international agreements, frequently express a country's understanding of the role and importance of privacy in these relationships. In modern times, national and international debates and research about privacy have primarily focused on the privacy of personally identifiable information. The timing of such concerns coincided with the large-scale use of computers for processing personal information, which for advanced industrial countries occurred generally in the mid-1960s. This essay begins with a discussion of the intellectual and social dimensions of global privacy issues and an overview of the associated literature. Conceptual and empirical research interests in privacy cross a number of disciplinary lines, including philosophy, anthropology, sociology, political science, law, and economics. The essay then proceeds to focus on information privacy, which has become the dominant global privacy concern during the twentieth and twenty-first centuries. 
For purposes of analysis, global information privacy issues are categorized into four key periods: the 1960s, which focused on computerization of records; the 1980s, when trade and exchanges of personally identifiable information became the policy issue; the 1990s, during which the technological changes generated by the internet and computer networking raised new privacy policy issues; and finally the 2000s and beyond, when various surveillance measures in the post–9/11 period generated national and international debates about privacy and security.

Intellectual and Social Dimensions
As a value, privacy's importance extends back to the earliest times, with some recognition of privacy concepts noted in the Bible, classical Greek writings, the sayings of Mohammed, and Jewish law ([|Konvitz 1966]; [|Moore 1984]; [|Hixson 1987]; [|Rosen 2000]). As will be discussed below, philosophers, social scientists, and legal analysts note that privacy is an enormously difficult concept to define. Nevertheless, there is agreement that privacy is an important value and that its meaning and importance vary by culture, as recognized by anthropologists and social psychologists ([|Mead 1949]; [|Altman 1977]; [|Margulis 1977]; [|Moore 1984]), as well as more recently by researchers conducting cross-cultural studies of privacy in the context of health care provision ([|Monshi and Zieglmayer 2004]). Scholars also agree that privacy is a “relative, contextual concept” ([|Gutwirth 2002]:29). Despite cultural differences, privacy tends to be rather universally viewed as important in protecting certain realms of life that are seen as off limits to society more generally. The home, for example, is often viewed as a sanctuary to which people can retreat for solitude and intimacy beyond the unwanted gaze of others. In this sense privacy is regarded as a boundary delineating what is public or semi-public from what is private. Aristotle, for example, noted such a boundary in delineating the //oikos//, the private sphere associated with the household and family, from the //polis//, the public sphere associated with the body politic and the work of government ([|Aristotle 1962]). In Enlightenment liberal thinking, the public–private distinction was retained, with emphasis on the tension between the individual and the larger social and political entity of which the individual was a part. 
Thomas McCollough points out: “Both Hobbes and Locke began in their political theorizing with the atomistic unit of the self-interested individual; the problem was how essentially separate individuals, with private and conflicting interests, could coexist in tolerable harmony” ([|McCollough 1991]:66). In the //Second Treatise on Government// (1690), [|Locke (1952)] argued that the government is a mechanism for public protection of certain private ends, including life, liberty, and property. In [|//On Liberty// (1859)], John Stuart Mill posited that the private sphere of the individual was important not only in individual development but also in realizing the preferred public sphere: “In proportion to the development of his individuality, each person becomes more valuable to himself, and is therefore capable of being more valuable to others. […] When there is more life in the units there is more in the mass which is composed of them” ([|Mill 1939]:998). This public–private distinction, as Jean Bethke Elshtain pointed out, operates both as a symbolic form and as a political and moral exigency and has profound implications for certain social groups, including women ([|Elshtain 1981]). Where this boundary is placed and the circumstances under which it may be legitimately breached vary much according to culture and historical time period. Such a boundary is now viewed as increasingly problematic as technological changes make it more difficult to define spaces in such a simplistic way ([|Nissenbaum 1998]) but for much of history such a boundary was recognized by many countries and cultures ([|McCloskey 1971]). Conceived as a boundary between public and private realms, privacy has been of most interest to liberal political philosophers and to the Western, particularly the Anglo-American, legal community. 
The most often quoted and most often referenced writing in this tradition is [|Samuel Warren and Louis Brandeis's 1890] //Harvard Law Review// article in which they defined the “right to privacy” as the “right to be let alone” ([|Warren and Brandeis 1890]:193). They focused on a technological change, instantaneous photographs and newspaper publishing, which made it possible to invade “the sacred precincts of private and domestic life” ([|Warren and Brandeis 1890]:195). Their notion of privacy was defined very much in terms of the individual and the need for a personal space. As they eloquently stated, “The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual” ([|Warren and Brandeis 1890]:196). Warren and Brandeis anchored the right to privacy in the common law, specifically the protection for intellectual and artistic property which they viewed as arising from the principle of an “inviolate personality” ([|Warren and Brandeis 1890]:205). The Warren and Brandeis article precipitated legal discussions about whether the common law did in fact protect privacy and what aspects of the common law were most relevant. In examining tort law development in the US, William Prosser found that by 1960 most courts recognized privacy rights as being protected from four kinds of invasions: intrusion, public disclosure of private facts, placing someone in a false light in the public eye, and appropriation of an individual's name or likeness ([|Prosser 1960]). The Warren and Brandeis article also generated legal and philosophical discussion about whether the right to privacy was protected by common law or was a more fundamental human right. 
Edward Bloustein took issue with Prosser and argued that privacy involved the protection of “human dignity” ([|Bloustein 1964/1984]:181), with social value attached to privacy because there was a “community concern for the preservation of the individual's dignity” (1964/1984:191). Arnold Simmel similarly viewed privacy as protecting “the sacredness of the person” ([|Simmel 1968]:482) and an invasion of privacy as “an offense against the rights of the personality – against individuality, dignity, and freedom” ([|Simmel 1968]:485). This line of thought was important in moving privacy out of the confines of the common law and situating it more firmly as a fundamental right, and one that entailed broader social importance. Although the common law view of a right to privacy continued to be dominant in Anglo-American law, the human dignity view was adopted in constitutional protections in several European countries, such as France and West Germany, and in the international movement for human rights ([|Flaherty 1989]:9). For example, Article 12 of the 1948 United Nations Declaration of Human Rights stated: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks” ([|www.un.org/Overview/rights.html], accessed Nov. 8, 2009). The same year, the Organization of American States in its Declaration of the Rights and Duties of Man avowed that “Every person has the right to the protection of the law against abusive attacks upon his honor, his reputation, and his private and family life” ([|www1.umn.edu/humanrts/oasinstr/zoas2dec.htm], accessed Nov. 8, 2009). 
In 1966, the International Covenant on Civil and Political Rights embraced a similar concept of privacy in Article 17: “No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks” ([|www.hrweb.org/legal/cpr.html]). The European Convention for the Protection of Human Rights and Fundamental Freedoms of 1950 noted the importance of such a right in Article 8: “Everyone has the right to respect for his private and family life, his home and his correspondence.” It then, however, went on to recognize the need for some legitimate limitations on this right:

> There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others. ([|http://conventions.coe.int/treaty/EN/Treaties/html/005.htm], accessed Nov. 8, 2009)

This list of interests that arguably compete with the right to privacy is quite extensive, and the need to balance such competing rights and interests has presented a challenge at a variety of historical moments. Questions about the relationship among human rights, human dignity, and political regime type have been of interest to both political philosophers and international studies scholars. Howard and Donnelly, for example, conceive of human dignity as reflecting an understanding of the moral worth of the person, while human rights are social practices that encompass entitlements to make claims against the state. They argue that human rights actually require a liberal regime ([|Howard and Donnelly 1986]:802). 
Of most interest to this discussion is their typology reflecting the valuation of privacy in different regime types: liberal regimes place a high value on privacy; minimal regimes a very high value; traditional and communist regimes a very low value; and corporatist and developmental regimes a low value ([|Howard and Donnelly 1986]:814). The qualification in the European human rights convention, conceding the need for some limitations on the right to privacy, reflects the struggles that philosophers and legal scholars continued to have in defining privacy and in identifying the realm of privacy that is legitimately protected given the competing needs of societies, governments, and other individuals. Alan Westin began his seminal book //Privacy and Freedom// by stating that “Few values so fundamental to society as privacy have been left so undefined in social theory or have been the subject of such vague and confused writing by social scientists” ([|Westin 1967]:7). A large component of the writing about privacy focused on the functions privacy served and the needs of individuals for solitude, intimacy, anonymity, autonomy, emotional release, self-evaluation, and relationships of love, friendship, and trust ([|Westin 1967]; [|Fried 1968]). There was also renewed attention at this time to more clearly delineating the appropriate boundaries between private and public. A collection of essays by the American Society for Political and Legal Philosophy well represents this discourse. The editors framed their discussion in terms of modern polities growing “more congested, complicated, and powerful vis-à-vis their citizens” ([|Pennock and Chapman 1971]:vii), thus increasing disputes about the boundaries between the private and the public. A number of writers began by acknowledging that privacy had some social importance or was derived from a social context. 
Carl Friedrich, for example, argued that privacy served several functions in a democratic society and that the destruction of privacy was often regarded as the core of totalitarianism ([|Friedrich 1971]:107–19). But most of the scholarship during the 1970s underscored privacy's importance to the individual and its role as a boundary between public and private ([|Rachels 1975]; [|Scanlon 1975]; [|Thompson 1975]). Sociologists offered a perspective on privacy that was more rooted in its importance to society. Robert Merton argued that “Privacy is not just a personal predilection; it is an important functional requirement for the effective operation of social structure” ([|Merton 1957]:375). Social psychologists, such as [|Altman (1977)] and [|Margulis (1977; 2003)], underscored privacy as a social process and argued that understanding it requires appreciation of a range of social interactions involving people, the societal context, the physical environment, and the time period. While much twentieth-century thinking about privacy focused on its value for the individual, in the twenty-first century scholars are placing more weight on its broader importance as a public, social, and collective value ([|Regan 1995; 2002]), as a key component of “contextual integrity” ([|Nissenbaum 2004]), as “inter-subjectively constituted through social interaction” (Steeves in press), and as a resource for the making of identity and social meaning (Phillips in press). In addition to the recognition of privacy as a social value as well as an individual value, philosophical and social science thinking about privacy in the late twentieth and early twenty-first centuries has been informed by the writings of [|Michel Foucault (1977)], a “foundational thinker” in the interdisciplinary area of “surveillance studies” ([|Wood 2003]:235). 
Foucault elaborated on Jeremy Bentham's concept of the Panopticon, adopting it as the central image for understanding the impact of modern surveillance techniques with their concomitant technology of power and social control. Foucault emphasized the design of the Panopticon, which assured that one may always be seen without knowing when, creating “a state of conscious and permanent visibility that assures the automatic functioning of power” ([|Foucault 1977]:201). Power was then automated, disindividualized, and made efficient. Most importantly, the panoptic arrangement is generalizable and easily transferred to other social settings, resulting in “the disciplinary society” ([|Foucault 1977]:209). Other sociologists, such as [|Anthony Giddens (1985)] and [|David Lyon (1994)], disagreed that this type of disciplinary power expresses the nature of state administrative power and social power more generally. Nonetheless, Foucault's concepts of social control and classification have served to underscore the power implications – rather than merely the privacy implications – of surveillance and to inform theoretical critiques of, and empirical research on, surveillance and privacy ([|Gandy 1993]; [|Green 1999]; [|Norris and Armstrong 1999]). Oscar Gandy, for example, refers to organizational monitoring of individuals as the “panoptic sort” – “a kind of high-tech cybernetic triage through which individuals and groups of people are being sorted according to their presumed economic or political value” ([|Gandy 1993]:1–2). The result of the panoptic sort's “identification, classification and assessment” ([|Gandy 1993]:24) is discrimination against certain groups.

The 1960s: Computerization and Privacy – National Perspectives on a Global Trend
In the late 1960s and early 1970s, government agencies and private sector organizations increasingly adopted computers to collect, retain, exchange, and manipulate personally identifiable information ([|Miller 1971]; [|Westin and Baker 1972]; [|Rule 1973]). In all countries, this innovation in record-keeping precipitated a concern with the rights of the individuals who were the subjects of that data and with the responsibilities of the organizations processing the information. Two models emerged during this time: some countries adopted a data protection approach and others a civil liberties approach ([|Flaherty 1989]; [|Bennett 1992]; [|Regan 1995]). The data protection approach viewed the problem as one of accountability and responsibility on the part of the organizations collecting and using personally identifiable information. The solution was then framed in terms of placing procedural requirements on, and establishing oversight mechanisms for, these organizations. The civil liberties approach viewed the problem as one of possible violations of the rights of individuals in the context of their disclosure of information to organizations and the organizations' subsequent uses and elaborations of that information. The solution in this model was framed in terms of giving individuals legal rights by which they could find out what personally identifiable information was being collected, and how it was used and exchanged, together with grievance mechanisms by which they could challenge organizational practices and information quality. At the core of both of these approaches was the framework of “fair information principles”; the two approaches differed mainly in whether these principles would be enforced by government oversight or by individual redress of grievances. The principles were first developed in the US by the Department of Health, Education, and Welfare's (HEW) Advisory Committee on Automated Personal Data Systems. 
Its report, //Records, Computers, and the Rights of Citizens//, recommended the enactment of a Code of Fair Information Practices. Other countries adopted similar fair information principles in their national legislation, and the Organisation for Economic Co-operation and Development (OECD) incorporated the core of these principles in its 1980 guidelines on the protection of privacy (see [|Table 1] for a summary of both sets of fair information principles).

Table 1 Fair information principles

Several political scientists and legal scholars evaluated these two approaches, both in terms of understanding why countries were more likely to adopt one approach instead of the other and in terms of evaluating which approach was likely to be more effective. David Flaherty conducted a detailed comparative study of the adoption and implementation of privacy and data protection laws in five countries – the Federal Republic of Germany, Sweden, France, Canada, and the US. His in-depth study of each country's politics and legal issues was based both on interviews with key participants and on government reports and documents. His analysis concluded with an emphasis on the critical role that an independent data protection agency plays in ensuring the effective implementation of laws designed to protect personally identifiable information. As he stated, “it is not simply enough to pass a data protection law” ([|Flaherty 1989]:381). Priscilla Regan examined how issues of policy implementation affected the formulation and adoption of personal information policies in the US and Britain. She concluded that in this case, when implementation questions were raised during policy formulation, programmatic goals were sacrificed to questions about how the policy would be executed, and the interests of the bureaucracy weakened the personal information policy adopted ([|Regan 1984]). Colin Bennett analyzed the policy processes resulting in privacy or data protection legislation in Sweden, the US, West Germany, and Britain in order to determine whether policy convergence or divergence was a result of technological determinism, emulation, elite networking, harmonization, or penetration. 
He concluded that pressures for convergence of policy in this area were likely to increase as the technology itself became more transnational, as insecurities about its effects intensified, and as international regimes promoted harmonization among laws ([|Bennett 1992]:251). Analyses of privacy and data protection laws in a number of countries also generated further refinement of the approaches that countries were taking to address the issues of information privacy. [|Bennett (1992:153–61)] identified five models. These models have served as a way of characterizing a country's policies and have been used particularly by scholars who seek to understand whether the effectiveness of privacy protection varies based on the model adopted. Although research indicates that the various models have strengths and weaknesses, it also concludes that country-specific characteristics – particularly culture, expectations, trust in government, and the role of government – influence their effectiveness ([|Flaherty 1989]; [|Bennett 1992]; [|Gellman 2003]). By the late 1980s most advanced industrial countries had adopted laws conforming roughly to one of these models (see [|Table 2]). National laws vary in a number of significant ways: some regulate only computerized records while others regulate paper and computerized records; some regulate the public and private sectors similarly while others regulate them differently; some protect only citizens of that country while others protect residents; and some protect “natural persons” while others protect “legal persons,” including corporations.
 * //Source//: US Department of Health, Education, and Welfare, Secretary's Advisory Committee on Automated Personal Data Systems (1973), //Records, Computers and the Rights of Citizens// (Washington, DC: Government Printing Office) and OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data

**1973 HEW Code of Fair Information Practices**
 * There must be no personal record-keeping system whose very existence is secret.
 * There must be a way for an individual to find out what information about him or her is in a record and how it is used.
 * There must be a way for an individual to prevent information about him or her that was obtained for one purpose from being used or made available for other purposes without his or her consent.
 * There must be a way for an individual to correct or amend a record of identifiable information about him or her.
 * All organizations creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.

**1980 OECD Guidelines on the Protection of Privacy**
 * **Collection Limitation Principle**. There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.
 * **Data Quality Principle**. Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.
 * **Purpose Specification Principle**. The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.
 * **Use Limitation Principle**. Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with Paragraph 9 except: (a) with the consent of the data subject; or (b) by the authority of law.
 * **Security Safeguards Principle**. Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification or disclosure of data.
 * **Openness Principle**. There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.
 * **Individual Participation Principle**. An individual should have the right: (a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to him; (b) to have communicated to him, data relating to him within a reasonable time; at a charge, if any, that is not excessive; in a reasonable manner; and in a form that is readily intelligible to him; (c) to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and (d) to challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed or amended.
 * **Accountability Principle**. A data controller should be accountable for complying with measures which give effect to the principles stated above. ||
 * • voluntary control – organizations responsible for self-regulation of their practices to keep them consistent with fair information practices;
 * • subject control – data subjects responsible for raising questions about information practices using their rights of access and correction;
 * • licensing – databases must be licensed by the government and those managing those databases must be maintained in accordance with the license;
 * • data commissioner – ombudsman office to assist individuals and serve as a forum to address concerns;
 * • registration – all computerized databases containing personally identifiable information are registered with the government.

Table 2 National privacy laws
 * ~ //Country// ||~ //Title of Law// ||~ //Date// ||
 * ~ //Source//: National Omnibus Laws, [|www.privacyexchange.org/legal/nat/omni/nol.html] ||
 * Argentina || Personal Data Protection Act || 2000 ||
 * Australia || Privacy Act || 1988 ||
 * || Privacy Amendment (Private Sector) Act || 2000 ||
 * Austria || Data Protection Act || 2000 ||
 * Belgium || Consolidated Version of the Belgian Law of December 8, 1992 on Privacy Protection in Relation to the Processing of Personal Data as Modified by the Law of December 11, 1998 implementing Directive 95/46/EC || 1992/1998 ||
 * Canada || The Privacy Act || 1983 ||
 * || Personal Information Protection and Electronic Documents Act || 2000 ||
 * Chile || Act on the Protection of Personal Data || 1998 ||
 * Czech Republic || Act on the Protection of Personal Data in Information Systems || 1992 ||
 * || Act of 4 April 2000 on the Protection of Personal Data and on Amendment to Some Related Acts || 2000 ||
 * Denmark || Danish Private Registers Act (Consolidated) || 1978 ||
 * || The Danish Public Authorities Registers Act (Consolidated) || 1978 ||
 * || Act on Processing of Personal Data, Act No. 249 || 2000 ||
 * Estonia || Personal Data Protection Act || 1996 ||
 * Finland || Act on the Amendment of the Personal Data Act (986) || 2000 ||
 * France || Act on Data Processing, Data Files and Individual Liberties || 1978 ||
 * Germany || Federal Data Protection Act || 1990 ||
 * || Federal Data Protection Act (Amended) || 1994 ||
 * || Federal Data Protection Act || 2002 ||
 * Greece || Law No. 2472 on the Protection of Individuals with Regard to the Processing of Personal Data || 1997 ||
 * Hungary || Act LXIII of 1992 on the Protection of Personal Data and the Publicity of Data of Public Interests || 1992 ||
 * Iceland || Act Concerning the Registration and Handling of Personal Data || 1989 ||
 * Ireland || Data Protection Act || 1988 ||
 * || Data Protection (Amendment) Act || 2003 ||
 * Israel || Protection of Privacy Law || 1981 ||
 * Italy || Processing of Personal Data Act || 1997 ||
 * Japan || Law for the Protection of Computer Processed Data Held by Administrative Organs || 1988 ||
 * Latvia || Personal Data Protection Law || 2000 ||
 * Liechtenstein || Data Protection Act || 2002 ||
 * || Ordinance on the Data Protection Act || 2002 ||
 * Lithuania || Law on Legal Protection of Personal Data || 2003 ||
 * Luxembourg || Organising the Identification of Physical and Legal Persons by Number || 1979 ||
 * || Regulating the Use of Nominal Data in Data Processing || 1979 ||
 * || Protection of Persons with Regard to the Processing of Personal Data || 2002 ||
 * Malta || Data Protection Act || 2001/2002 ||
 * The Netherlands || Data Protection Act || 1989 ||
 * || Personal Data Protection Act || 2000 ||
 * New Zealand || Privacy Act || 1993 ||
 * || Privacy Amendment Act || 1993 ||
 * || Privacy Amendment Act || 1994 ||
 * Norway || Act Relating to Personal Data Registers || 1978 ||
 * || Personal Data Act || 2000 ||
 * Poland || Protection of Personal Data || 1997 ||
 * Portugal || Act on the Protection of Personal Data || 1998 ||
 * Romania || Law No. 677/2001 for the Protection of Persons Concerning the Processing of Personal Data and Free Circulation of Such Data || 2001 ||
 * Russia || Information Computerization and Protection of Information || 1995 ||
 * || Participation in International Information Exchange || 1996 ||
 * Slovak Republic || Act No. 428 of 3 July 2002 on Personal Data Protection || 2002 ||
 * Slovenia || Personal Data Protection Act || 1999 ||
 * Spain || Law 15/99 on the Protection of Data of a Personal Character || 1999 ||
 * Sweden || Personal Data Protection Act || 1998 ||
 * Switzerland || The Federal Law on Data Protection || 1992 ||
 * Taiwan || Computer-Processed Personal Data Protection Law || 1995 ||
 * Thailand || Official Information Act || 1997 ||
 * United Kingdom || Data Protection Act || 1998 ||
 * United States || Privacy Act || 1974 ||

The 1980s: National Laws and Trade Issues
The global economic and communication systems are fundamentally global information systems. These systems collect, store, exchange, and manipulate vast quantities of information, including personally identifiable information. Given the variation in national laws, international and regional bodies recognized that domestic laws could affect the flow of personal information into and out of a country. This brought scholarly and policy attention to the issue of transborder data flows and to questions about whether privacy and data protection laws constituted non-tariff trade barriers. The implications of national data protection laws for transborder data flows provoked heated debate in a number of regional and international forums, where discussion was framed primarily in terms of free flow of information, championed largely by the US, versus trade restrictions. This debate sparked a number of articles in business, law, and political science journals ([|Eger 1978]; [|Bigelow 1979]; [|McGuire 1979]; [|Buss 1984]; [|Wigand 1985]; [|Regan 1993]). During the 1980s and 1990s the focus of policy attention concerning transborder data flows was the European Union (EU) and the development of its “Directive on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data” (hereafter referred to as the Data Protection Directive). The Directive was part of the EU's development of a single European market and the harmonization of policies affecting that market. 
The EU's proposed Directive generated debate between the EU and the US on three key issues: the degree of individual control over uses of personal information, couched largely in terms of individuals' “opting-in” or “opting-out” of information practices; the level of national protection countries needed to ensure prior to transfers of personal information; and the nature of the national enforcement authority or regime that was consistent with the requirements of the EU Directive. Analyses of the EU Data Protection Directive, of the compatibility of national laws with the Directive, and of the national and international policy deliberations on this issue can be found in a number of law review, business, and political science articles and books ([|Branscomb 1994]; [|Schwartz 1995]; [|Cate 1997]; [|Swire and Litan 1998]; [|Regan 1999]; [|Reidenberg 1999; 2000]). Although the debate over the EU Data Protection Directive was framed largely in terms of differences between the EU and the US, there were also differences among European countries that affected the development and meaning of the Directive. Germany and France advocated stronger protections that were consistent with their national laws. Some European countries favored weaker protections, again consistent with national laws, and others had not yet passed legislation. Although the members of the EU did not share a common perspective on the specifics, they did share a common perspective on the need, for free-trade purposes, to harmonize regulations governing transborder data flows ([|Schwartz and Reidenberg 1996]; [|Shaffer 2000]). Ultimately the US and the EU agreed on “Safe Harbor Privacy Principles” to establish a framework for the exchange of personally identifiable information. 
This agreement became the consensus policy position once American businesses realized that it was unlikely either that the EU would issue a general ruling that US law was adequate to meet the requirements of the Data Protection Directive or that individual contractual arrangements between American companies and the EU would be easily negotiated. The US Department of Commerce and the EU's Internal Market Directorate General developed the Safe Harbor agreement after a rather tortuous two-year process that was finalized in 2000 ([|Farrell 2003]; [|Regan 2003]). Research, especially by political scientists, continues to analyze the ways in which national laws articulate with those of other countries and with international agreements ([|Bennett and Raab 2006]).

The 1990s: The Networked World
Somewhat paralleling the principally business-dominated debate and analyses over transborder data flows was a broader discussion about privacy issues resulting from global communication and information systems, particularly the internet. The focus in policy and scholarship was less on variations in national laws and more on two features of networked communication systems: first, the technical infrastructure supporting the flow of information, an analysis in which computer scientists joined with legal and policy experts to examine how privacy might be invaded in such a networked environment and whether technology might at least be part of the solution; and second, the globalization of communication systems and information flows, an analysis in which political scientists and communication scholars explored the causes and implications of the globalization trend, including how that trend was affecting cultural and social views of privacy. Each of these will be discussed below. In terms of the technical infrastructure, there are a number of ways in which personally identifiable information can be automatically captured as one surfs the internet – and this capture occurs regardless of national or geographic boundaries. First, each site that someone visits obtains the internet protocol (IP) address of the computer being used. Although the IP address itself does not yield personally identifiable information, it does allow tracking of the internet movements from that computer. Second, “cookies” can be placed on a user's hard drive so that a website can determine a user's prior activities at that website on a return visit. Although users can monitor and/or delete cookies, some sites require users to accept cookies. Third, “web bugs,” graphics embedded in a web page or email, can monitor who is reading the page or message ([|Lyon and Zureik 1996]; [|Agre and Rotenberg 1997]; [|Regan 2002]). 
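The cookie mechanism described above can be sketched in a few lines of standard-library Python. This is an illustrative sketch only: the `visitor_id` name and the helper functions are invented for the example, not drawn from any system discussed in this essay.

```python
# Illustrative sketch: how a website can recognize a returning visitor with a
# cookie. All names here (visitor_id, the helper functions) are hypothetical.
import uuid
from http.cookies import SimpleCookie

def make_set_cookie_header() -> str:
    """First visit: the server assigns a random identifier and sends it back."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = uuid.uuid4().hex
    cookie["visitor_id"]["path"] = "/"
    # The value the server would emit in a Set-Cookie header,
    # e.g. "visitor_id=...; Path=/"
    return cookie["visitor_id"].OutputString()

def read_visitor_id(cookie_header: str):
    """Return visit: the server parses the Cookie header the browser echoes."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    return cookie["visitor_id"].value if "visitor_id" in cookie else None

# The first response sets the cookie; the browser then echoes it on every
# later request, letting the site link those requests to a single visitor.
header = make_set_cookie_header()
returned = header.split(";")[0]  # the part the browser sends back
assert read_visitor_id(returned) is not None
```

The privacy concern the essay raises follows directly: once such an identifier is joined with registration or purchase data, the otherwise anonymous browsing trail becomes personally identifiable.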
As organizations in both the public and the private sectors increasingly conducted transactions and provided services online, new privacy issues emerged in a number of sectors. For example, in electronic government, officials and legislatures provided new rules, procedures, and protections for the submission of personally identifiable information over the internet and for access to such information by other organizations. Similarly, moves to electronic health records and websites offering health advice have provoked privacy and access concerns, as has electronic banking. These policy and management issues have again generated policy discussions and actions at the national level but also at the international level, as the internet does not recognize geographic boundaries ([|Johnson and Post 1996]; [|Reidenberg 1996]; [|Swire 1998]). Because of the emphasis on the internet, technology specialists became more vocal players in both national and international debates. As policy makers realized that there were limitations to laws and organizational policies, attention shifted to the technology and the possibilities of inserting privacy protections into the architecture itself. Although proposals for encryption ([|Chaum 1992]) had been discussed for some time as an information privacy protection, the emphasis now turned to the possibility of writing privacy protections into the technical codes and standards for the computer and information systems that formed the networks ([|Lessig 1999]). Organizations such as the World Wide Web Consortium (W3C) and the Institute of Electrical and Electronics Engineers (IEEE) served as forums for discussions of policy problems and solutions. W3C established a working group that developed principles for a Platform for Privacy Preferences (P3P) in 1998; P3P enabled websites to express their privacy practices in a standard format that could then be retrieved automatically and interpreted. 
The goal was both to make web use more convenient and to enable users to build their preferred privacy standards into their web practices. Privacy protections were viewed as a key component in establishing trustworthy networked information systems, and various groups, including the Computer Science and Telecommunications Board of the US National Research Council and the Rathenau Institute of the Royal Netherlands Academy of Sciences, held meetings of international technical and policy specialists and published various reports with policy and technical proposals ([|National Research Council 1999; 2001]; [|Gutwirth 2002]). In terms of the implications and causes of globalization of communication systems and information flows, research and analysis is quite broad-ranging but includes attention to privacy as one of the social values whose meaning and protection were affected by globalization. Much of the research focuses on the potential of global communication systems to foster the development of a global public sphere ([|Mitzen 2005]) or global civil society ([|Comor 2001]) and the emergence of a “globally oriented citizen” ([|Parekh 2003]:12). In this view networked communication systems provide the capacities to form transnational networks which have the potential to circumscribe state-based systems. The potential for this, rather than its inevitability, is underscored, as more than system integration is required in order for the potential to be realized; indeed, “transnational intersections of culture, meaning, and identity are required” ([|Comor 2001]:390) and “local relationships tend to prescribe the context through which global influences are adopted and understood” ([|Comor 2001]:398). Global communication systems could modify the local context by influencing changes in lifestyle and culture, which would then affect conceptual systems that may lead to more globalized intersections of culture, meaning, and identity ([|Comor 2001]). 
Other analysts of global communication systems and the possible emergence of a global public sphere, including Habermas, emphasize the role of legal rules in “converting normative ideals to social facts” ([|Mitzen 2005]:404) including legal rules protecting “the preconditions for communicative action such as the rights to privacy” ([|Mitzen 2005]:404). Geoffrey Herrera sees the evolution of a global digital information network as involving “a three-way political struggle between centralized political authorities (states), centralized economic entities (firms) and individuals as both consumers and citizens” ([|Herrera 2002]:93). Privacy protections for personally identifiable information are important in different ways to each of these three sets of actors and therefore implicated in the struggle. Viewed as a struggle of such proportions, the question becomes who can most influence the outcome of the struggle – raising questions of American cultural imperialism. David Rothkopf, for example, suggests that “Americans should promote their vision for the world, because failing to do so or taking a ‘live and let live' stance is ceding the process to the not-always-beneficial actions of others. Using the tools of the Internet Age to do so is perhaps the most peaceful and powerful means of advancing American interests” ([|Rothkopf 1997]:49).

Surveillance Post–9/11
The privacy landscape and discourse changed dramatically throughout the world after the terrorist attacks in the US on September 11, 2001. Concerns about privacy and civil liberties were trumped by concerns about security and identifying possible terrorists. Pew Research and Gallup public opinion polls in the US conducted soon after 9/11 indicated support for sacrificing civil liberties, more extensive surveillance of communications, and a national ID card ([|Bartlett 2001]; [|Gallup 2001]). Congress, with virtually no opposition, passed the USA PATRIOT Act of 2001 (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism) 45 days after the attacks. Other countries passed similar measures. The Canadian Bill C-36 contained measures to enhance the government's ability to prevent and detect terrorist activity. The Council of Europe's Cyber Crime Convention was ratified not only by the 41 member states, but also by the US, Japan, and South Africa. The hasty passage and draconian character of these initiatives provoked concerns, not only among human rights activists who saw these measures as clashing with civil liberties and privacy protections, but also in more conservative circles that viewed the measures as giving unchecked investigative power to law enforcement and intelligence officials ([|van Est and van Harten 2002]). One international area of controversy involved airline passenger data. Because the 9/11 terrorists used airplanes as their weapons, much subsequent attention has focused on identifying and apprehending potential terrorists before they get on an airplane. To that end the US initiated plans for a new passenger screening system to replace the existing system operating on airlines' reservations systems. CAPPS II (Computer-Assisted Passenger Prescreening System), proposed soon after 9/11, would access more diverse data and perform more sophisticated analyses. 
It would begin with the airlines transmitting Passenger Name Record (PNR) data, including name, phone number, itinerary, and method of payment, to CAPPS II, which would then request identity authentication from commercial data providers, who would send CAPPS II an identity authentication score; finally CAPPS II would use government databases, including classified and intelligence data, to compute a risk assessment score, which would be transmitted to the check-in counter ([|Government Accountability Office 2004]). Testing of CAPPS II was delayed by difficulties in obtaining passenger data from the airlines, which voiced privacy concerns, and by opposition to the system from other countries, especially those of the European Union. Following lengthy negotiations, European Union officials on May 28, 2004 reached an agreement with the Bureau of Customs and Border Protection (CBP) in the Department of Homeland Security (DHS) for air carriers to provide CBP with electronic access to PNR ([|Commission of European Communities 2004]). Another proposal for an integrated system designed to track the movements and activities of individuals is US-VISIT (United States Visitor and Immigrant Status Indicator Technology), which was proposed as a dynamic interoperable system to collect and retain biographic, travel, and biometric data (i.e., photograph and fingerprints) pertaining to visitors ([|United States Department of Homeland Security 2003]). Both CAPPS II and US-VISIT were subject to technical and budgetary difficulties in their development and pilot-testing and were analyzed by researchers as examples not only of international data systems raising questions of national sovereignty and civil liberties but also of the public management and accountability of advanced technology initiatives. In general throughout the world, surveillance became the frame for discussion of privacy policy and research on global information privacy in the post-9/11 period. 
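The multistage flow just described can be sketched as a toy pipeline. Every field name, score scale, and threshold below is invented for illustration; none of it reflects the actual (never-deployed) CAPPS II system, only the shape of the data flow the GAO report describes: PNR in, identity check, risk score out.

```python
# Hypothetical sketch of the CAPPS II-style data flow described above.
# Field names, scores, and thresholds are all invented for illustration.
from dataclasses import dataclass

@dataclass
class PNR:
    """Passenger Name Record fields the airline would transmit."""
    name: str
    phone: str
    itinerary: str
    payment_method: str

def authenticate_identity(pnr: PNR) -> float:
    """Stand-in for the commercial data providers' identity check (0..1)."""
    # A real provider would match the PNR against commercial records;
    # here a traceable payment method simply scores higher.
    return 0.9 if pnr.payment_method == "credit card" else 0.5

def assess_risk(identity_score: float) -> str:
    """Stand-in for the government-database risk assessment step."""
    # A real system would consult classified and intelligence data; here a
    # simple threshold produces the result sent to the check-in counter.
    if identity_score >= 0.8:
        return "routine screening"
    elif identity_score >= 0.4:
        return "additional screening"
    return "referred to law enforcement"

pnr = PNR("A. Traveler", "555-0100", "IAD-CDG", "credit card")
assert assess_risk(authenticate_identity(pnr)) == "routine screening"
```

Even this toy version makes the essay's privacy point visible: the pipeline joins airline, commercial, and government data about one person, which is precisely what raised the sovereignty and civil liberties objections.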
Much research was conducted in the field of surveillance studies, which had intellectual roots primarily in sociology and science, technology, and society (STS), was interdisciplinary in approach, and was international in membership. David Lyon's Surveillance Project at Queen's University, funded in part by the Social Sciences and Humanities Research Council of Canada, is a prime example of the collaborative international research efforts that are ongoing in this field. His group has successfully completed research on: borders, citizens and surveillance ([|Topal 2005]); national ID card systems ([|Bennett and Lyon 2008]); and a nine-country survey of public attitudes to and experiences with the global flow of personal data, with special focus on privacy and surveillance ([|Zureik 2009]). In 2008, the Queen's research team received funding for a multiyear international collaborative research effort, The New Transparency Project, to investigate three questions: What factors contribute to the general expansion of surveillance as a technology of governance in late modern societies? What are the underlying principles, technological infrastructures, and institutional frameworks that support surveillance practice? What are the social consequences of such surveillance both for institutions and for ordinary people ([|www.surveillanceproject.org/projects/the-new-transparency/about], accessed Jun. 2009)? Somewhat independently of the effects of 9/11, some international relations and communication scholars were already questioning whether the development of the global digital information network itself was diminishing or strengthening state capacity and autonomy ([|Herrera 2002]). This line of research examined whether data processing and related surveillance capabilities were leading to enhanced state power and threats to personal privacy. 
Another line of research that pre-dated 9/11 concerns about surveillance focused on the internationalization of law enforcement and the development of international organizations and arrangements ([|Marx 1997]; [|Deflem 2000]). Both of these lines of research supported the importance of understanding the underlying historical, institutional, and technological contexts in which post–9/11 surveillance occurred.

Future Research Directions
Although the topics discussed above serve to organize the research on global information privacy into time periods, each of these is also a topic of continuing research. Philosophical questions about the meaning of privacy and the various ways in which privacy is likely to be defined and constructed in different cultures will continue to be of interest to philosophers and a variety of social scientists ([|Kerr et al. in press]). Such questions elicit not only theoretical analysis but also empirical study through methods such as public opinion surveys, focus groups, and in-depth interviews ([|Zureik 2009]). Many privacy issues appear both within countries and internationally as a result of advances in computer and information technologies, and there is no question that such technological advances will continue and be of interest to technologists and social scientists. One area in particular that is likely to be of research interest is location or mobile privacy, which is taking on significant importance with the spread of wireless technologies, global positioning satellites, and RFID (radio frequency identification) systems ([|Bennett and Regan 2003]). Variations and conflicts among national laws, other countries' laws, and international agreements in the area of information privacy will continue to present challenges for policy formulation and implementation. Finally, the surveillance issues that emerged as a global concern after 9/11 are not expected to abate and will continue to be analyzed from a variety of relevant perspectives and disciplines.

Elizabeth C. Hanson
==== Subject [|International Studies] » [|International Communication] ====
==== Key-Topics [|communication], [|information and communication technology (ict)], [|propaganda] ====

DOI: 10.1111/b.9781444336597.2010.x
Communication is at the heart of all international interaction and, indeed, all human interaction. Boundaries for the subject matter of international communication are difficult to establish, and the substantive content consists of multiple streams of diverse research. It is, therefore, appropriate to refer to international communication studies in the plural form. A “loose topical confederation” may be a more accurate description than a field or subfield of study. The intellectual impetus for international communication research has come from a variety of disciplines, notably political science, sociology, psychology, social psychology, linguistics, anthropology, and, of course, communication science and international relations. Many scholars who study topics that might well be included have not identified themselves as scholars of international communication. The basic questions that fuel this diverse research concern the intersection of communication and international relations, but how to define and how broadly to construe these two areas of inquiry? Harold Lasswell, widely acknowledged as a founding father of international communication, described the communication process in terms of the questions: “Who says what in which channel to whom with what effect?” This famous formulation excludes nonverbal forms of communication, as well as other kinds of transactions that have been considered under the rubric of international communication. It does not include the question “Why?,” the purposive element, or the impact of technology, an important topic in recent research. International relations as a discipline has its own definitional and boundary controversies, particularly regarding the role of nonstate actors and the primacy of the nation state. The term “global communication” has often been used as a more inclusive term, as the processes of globalization have accelerated and the significance of nonstate actors has increased. 
Nevertheless, “international communication” persists as the conventional term to include all types of communication that occur across national boundaries or affect international outcomes. Although highly diverse in content, international communication scholarship, past and current, falls into distinct research traditions or areas of inquiry. The content and focus of these have changed over time in response to innovations in communication technologies and to the political environment. In retrospect, threads can be discerned, but they are overlapping. Certain threads disappear, only to resurface later under a different label. To convey this complexity, this essay will move chronologically, identifying the major foci of scholarly interest as they emerge in response to technological and political change. How the various topics evolved and became areas of inquiry will be indicated along the way.

The Persuasion Paradigm
The development and spread of radio and film in the 1920s and 1930s increased public awareness of, and scholarly interest in, the mass media and their impact on public opinion. The extensive use of propaganda as an instrument of policy by all sides in World War I, and the participation of social scientists in the development of this instrument, provided an impetus for the development of both mass communication and international communication studies. There is some variation in the early histories of systematic communication research, but the three scholars who are generally cited and who are also most relevant to international communication are Harold Dwight Lasswell, Paul Felix Lazarsfeld, and Wilbur Lang Schramm ([|Lerner and Nelson 1977]; [|Rogers 1994]). Lasswell's five-question encapsulation of the communication process – “Who says what in which channel to whom with what effect?” – provided a framework for much future communication research. His scholarship on communication began with his dissertation at the University of Chicago, //Propaganda Technique in the World War//, which was published in 1927. His advisor, Charles E. Merriam, had worked for the Creel Committee on Public Information, which had designed, organized, and conducted extensive domestic and international propaganda activities during World War I. Lasswell's dissertation/book analyzed the various propaganda techniques and strategies used by the Germans, British, French, and Americans and indicated factors that affected their impact. He defined propaganda here as “the control of opinion by significant symbols […] by stories, rumors, reports, pictures and other forms of social communication” ([|Lasswell 1927]:9). His focus on the symbols and themes used in the messages foreshadowed his later development of content analysis as a research tool ([|Lerner and Nelson 1977]). 
His research and teaching on propaganda and public opinion helped to launch the teaching of university courses on this subject and contributed to the growth of scholarly interest during the 1930s. By 1935 there were already 4500 publications listed in an annotated bibliography on the subject by Lasswell and colleagues ([|Lasswell et al. 1935]), which was compiled under the auspices of the Social Science Research Council. Meanwhile, as the decade progressed, the “skillful management of mass attitudes” that contributed to the rise of Fascism in Italy and the National Socialist Movement in Germany intensified scholarly interest in the subject. Lasswell's book //World Revolutionary Propaganda: A Chicago Study// ([|Lasswell and Blumenstock 1939]) was another empirical investigation of symbol manipulation. The book was a case study of communist propaganda among Chicago's unemployed during the Great Depression. The findings indicated that, despite some favorable circumstances, communist propaganda was blocked by American nationalism and individualism. Lasswell's interest in the factors that facilitated and inhibited the communist world's revolutionary appeal to a particularly vulnerable population demonstrated the interaction of macro and micro factors in politics at the local, national, and international levels ([|Almond 1996]). The intellectual environment of the University of Chicago played a vital role in the development of Lasswell's scholarship and in the origins of communication research. Significant funding for social science research in the 1920s and 1930s from the Laura Spelman Rockefeller Memorial Fund and Rockefeller Foundation (which merged in 1932) attracted a highly competent group of scholars from various disciplines and encouraged exchange and collaboration among them. 
It was at the University of Chicago during those two decades that empirical social science research began to flourish and where cross-disciplinary collaboration established a pattern for future political communication research ([|Nimmo and Sanders 1981]). The Rockefeller Foundation took a particular interest in communication research and helped to shape its direction in even more direct ways. From 1937 to 1944 it supported the Radio Research Project, directed by Paul F. Lazarsfeld. The impetus for this project came from several directions ([|Czitrom 1982]). Analysis of propaganda and its effects led logically to investigations of public opinions and attitudes: how to explain their origins, persistence, and shifts. George Gallup's founding of the American Institute of Public Opinion in 1935 and his statistical method of survey sampling held out the promise of a new scientific approach to these questions. There was a growing sense that increased literacy and the spread of newspapers, periodicals, motion pictures, and especially the radio had created a new situation that needed investigation. As described in the foreword to the first issue of //Public Opinion Quarterly// in 1937, “Always the opinions of relatively small publics have been a prime force in political life, but now, for the first time in history we are confronted nearly everywhere by mass opinion as the final determinant of political and economic action” ([|Czitrom 1982]:124). Sociologists and social psychologists had begun to look at the individual and social effects of what later (1940s) came to be known as the mass media. For example, the sociologist [|Robert Park (1922)], at the University of Chicago, had studied the immigrant press in the United States, and the Payne Fund Project ([|Charters 1933]) had conducted an extensive quantitative study on the role of motion pictures in American society, particularly their effect on children. 
Finally, the rapid diffusion of radio was spurring advances in market research, a measurement challenge for a medium without subscriptions or circulation figures ([|Czitrom 1982]). The rather vague purpose of the Rockefeller grant for the Radio Research Project, initially located at Princeton, was to study the psychological and social effects of radio. The associate directors were Hadley Cantril, a psychologist at Princeton and a founder of //Public Opinion Quarterly//, and Frank Stanton, a CBS researcher (later president of the network). Under Lazarsfeld's direction, the project conducted dozens of studies that analyzed radio content and the demographics of the radio audience, correlating preferences with social stratification ([|Czitrom 1982]). In addition to several project summary publications, Lazarsfeld incorporated some of this research in his book //Radio and the Printed Page: An Introduction to the Study of Radio and its Role in the Communication of Ideas// ([|Lazarsfeld 1940]), which [|Czitrom (1982)] claims was a key step in the consolidation of the field of communication. In 1939 Lazarsfeld moved to Columbia University with the Radio Research Project. The project, which had been renamed the Office of Radio Research, became in 1944 the Bureau of Applied Social Research with a much broader focus. The emphasis here, as in the project's earlier stages, was on the social psychology of short-term effects of the mass media. In the early years commercially sponsored research made up about half of the budget ([|Rogers 1994]). Later, during World War II and for the decade that followed, more than half of the budget came from funding for government projects ([|Rogers 1994]). Lazarsfeld used a variety of quantitative and qualitative methodological approaches, including survey research, laboratory experiments, community studies, content analysis, and a few innovations. 
He advanced survey research methodology by combining the survey interview with multivariate data analysis. His focused interview was designed to access the individual's perception of a media message. Focus group interviews and a method for measuring the emotional responses of the audience to radio programming were two of his most important methodological contributions ([|Rogers 1994]). He conducted the first comprehensive studies of radio in the US and was the single most important individual in launching mass communication research ([|Rogers 1994]). Lazarsfeld, even more than Lasswell, helped to shape the early direction of communication research to emphasize mass communication effects ([|Rogers 1994]). Their heavy emphasis on micro-level effects narrowed the focus of communication study to “essentially a process of persuasion” ([|Czitrom 1982]:132) and steered communication scholars away from other topics, notably macro-level issues ([|Rogers 1994]). Strategic considerations prior to and during World War II reinforced the emphasis on this genre of research. One response to the approaching involvement of the US in World War II was the Rockefeller Foundation's convening and funding of a communication seminar, which met monthly at its headquarters in New York between 1939 and 1940. The initial purpose of bringing together this diverse group of leading scholars interested in communication (including both Lasswell and Lazarsfeld) was to provide theoretical guidance regarding future communication research ([|Rogers 1994]). Lasswell's five-question formulation became the basic framework for discussion. Communication was conceptualized as “one-way, and intentional, oriented toward achieving a desired effect” ([|Rogers 1994]:223). As the crises mounted in Europe, the discussions began to direct the application of communication research to government policy. 
When the United States entered the war at the end of 1941, the network of scholars who had participated in the seminar moved almost en masse to Washington, DC to play an important role in conducting applied communication research ([|Rogers 1994]). World War II became an important catalyst for research in mass communication, increasing its legitimacy and visibility and guaranteeing funding and support, as well as bringing together scholars from around the country interested in studying media and public opinion. Analytical tools of communication research were applied to the tasks of mobilizing domestic public support for the war, understanding enemy propaganda, and developing psychological warfare techniques to influence the morale and opinion of allied and enemy populations ([|Simpson 1994]). As these tasks required an interdisciplinary approach, a network of relationships developed among the social scientists who were located in numerous agencies and involved in various aspects of the war effort. In the War-Time Communications project for the Library of Congress and the Department of Justice, Lasswell developed a systematic, quantitative content analysis method for monitoring the foreign language press. Along with other social scientists, including Nathan Leites and Edward Shils, Lasswell analyzed the content of Nazi communications for information on internal political and morale conditions in Germany and occupied Europe for the Foreign Broadcast Intelligence Service of the Federal Communications Commission ([|Almond 1996]). Lazarsfeld and other specialists in survey research and social psychology were also employed by the military services. 
Survey research and various interviewing methods were used by the military services to address personnel issues such as recruitment and morale, by the Department of Agriculture in its effort to increase food production, by the Treasury Department in its efforts to sell bonds, and by the various intelligence services ([|Almond 1996]).

The Cold War Impact
World War II generated intense scholarly interest in the potential of the mass media for influencing opinions, attitudes, and behavior to meet US strategic needs ([|Czitrom 1982]). As the hot war segued into a cold war, US foreign policy goals continued to shape the direction of much research in international communication. Besides the East–West conflict, two other international developments influenced scholarship: the integration and disintegration of Europe and the rise of new nationalisms in countries that were former European colonies. In this new international context, efforts to conceptualize international communication as a field of study accelerated. A major impetus to the development of the subject as a field was a Ford Foundation four-year grant in 1952 to the Center for International Studies (CIS) at the Massachusetts Institute of Technology (MIT) for a research program in international communication, which became an influential center for research in the field in the 1950s and 1960s. The approach of the scholars associated with this center was to view international communication as a more complex process than a simple one-way reaction to mass media. The basic orientation of the work emanating from the center was articulated in the report that the Planning Committee, which the center had appointed to advise on the use of the grant, published in condensed form in the journal //World Politics// ([|MIT 1954]). It defined international communication in very broad terms as “the interchange of words, impressions and ideas which affect the attitudes and behavior of different peoples toward each other” ([|Mowlana 1996]:9). In even more sweeping terms, the report declared: “The study of communication is but one way to study man, and the study of international communication is but another way to study international relations” ([|MIT 1954]:359). 
The MIT Planning Committee report referred to the large body of cumulative research and indicated two recurring problem areas: 1) “To what extent do changes in the structure of world politics interact with changes in the structure of world communication?” and 2) “What are the strategy and tactics of communication in achieving the aims of national policy in world affairs?” ([|MIT 1954]:374–375). Although the criteria for selecting areas for future research were both “scientific merit and political significance,” the latter dominated the discussion. Indeed, the report asserted that “there was every reason why a program in communication research should select the great problems of our time.” These were stated in almost apocalyptic terms: 1) “The conflict between the Communist and the free worlds is of decisive importance to the balance of power and the future character of our civilization;” 2) “The future course of Western civilization is dependent upon the result of efforts to find new and stronger forms of economic, political, and military organization in Western Europe;” and 3) “The rise of new nationalisms in Asia and Africa […] may profoundly affect balance of power and the status of Western civilization in the years to come” ([|MIT 1954]:365–366). The report suggested various research approaches, which would be appropriate for addressing these problems and which also had scientific merit. Stressing the importance of studying the structure of a society in order to understand the communication processes within it, the report identified “elite communication” as an important direction for future research. For example, who are the opinion leaders in a society, what are their characteristics, how are their images formed, what is the relationship between elite and mass opinion, what is the process of mediation between the mass media and the audience, and how do attitudes and behavior change under the impact of communication? 
The report identified “historical studies, field research, and laboratory experiments” as important types of research and expressed the need for improved methodologies for field research abroad, especially in non-Western and communist societies. It outlined a sample field study in India as an area in which to study the rise of new nationalisms and the impact of international communication on political decision making in the Third World. Suggested sample projects for studying the East–West conflict were diplomatic negotiations between Soviet and Western powers and non-Western interpretations of news from communist versus Western sources. Two special issues of [|//Public Opinion Quarterly//, Winter 1952–53] and Spring 1956, were devoted entirely to research in international and political communication respectively, in order to demonstrate the large volume of research that was accumulating in the field and to serve as a forum for discussion on the subject. These special issues were a product of a committee for the development of research in the field of communication that was appointed by the parent organization of the journal, the American Association for Public Opinion Research (AAPOR). Both issues grappled with the difficulties encountered in defining the field, establishing boundaries, and conducting the necessary interdisciplinary research. A section on “problem areas” focused on such methodological issues as how to adapt research methods, notably survey research, to populations abroad, especially non-Western populations and those not accessible for political reasons. The need to develop better concepts, better methods, and more pertinent and far-reaching data was noted repeatedly. The MIT Planning Committee report and the two issues of //Public Opinion Quarterly// exemplified scholarly pursuits in international communication that were already under way and foreshadowed much of the research that would be conducted in the late 1950s, 1960s, and beyond. 
A sample of articles from the journal issues illustrates how the three problem areas outlined in the MIT report provided direction for future research. The first might be summarized by the title of an article by [|Peter H. Rossi and Raymond A. Bauer (1952–53)], “Some Patterns of Soviet Communications Behavior.” This particular article was concerned with the patterns of exposure to the media among the population in the Soviet Union (radio, foreign radio, newspapers, magazines, books, movies, theater, and lectures) and how that exposure was related to involvement in the system. A variety of indirect approaches were employed by researchers on communication within the Soviet system, and there was considerable discussion within the two //Public Opinion Quarterly// issues about the validity of these methods. The Bauer and Rossi study involved interviews with Soviet displaced persons in the United States and Europe, and the data pertained to 1940. It was part of a larger study on Soviet communications behavior, which in turn was part of the Harvard Project on the Soviet Social System ([|Bauer et al. 1956]). Another study, by [|Ivor Wayne (1956)], explored Soviet and American “themes and values” through a content analysis of popular magazines in both countries. In order to assess the impact of the Voice of America on the Soviet system, [|Alex Inkeles (1952–53)] analyzed references to the VOA in the Soviet media, because access to the Soviet population as an audience was not possible. This approach represented only the “official reaction” to US broadcasts, but he claimed that it offered some clues about the impact on the system. The study also provided a case study of “the exchange of propaganda between two vast, competing mass communication systems” ([|Inkeles 1952–53]:612). This strain of research obviously represented a further development of the earlier propaganda studies, concerned with the effects of mass media on strategically important populations. 
This article was part of his work on Soviet society, published in [|//Public Opinion in Soviet Russia: A Study in Mass Persuasion// (1958)]. The assumption that international broadcasting was playing an increasingly important role in the worldwide tug of war for the minds of men encouraged considerable research that compared the characteristics of international communications from Soviet and non-Soviet sources. [|Paul Kecskemeti (1956)] looked at the “operating principles of Soviet foreign mass propaganda,” [|Harold Mendelsohn and Werner Cahnman (1952–53)] examined communist broadcasts to Italy, and [|Daniel Lerner (1952–53)] discussed the theory of international coalitions and suggested how international communication research could help to identify common interests that would draw “neutralists” into the “American-centered coalition.” Winning hearts and minds in the “non-industrial countries” – the second problem area – was deemed particularly important among these scholars in the early 1950s. [|Bruce Smith (1952–53)] analyzed the predispositions and “value constellations” of “non-industrial audiences” and suggested lines of communication research in this area. He indicated some significant advantages of Soviet communications and problems that hampered Western efforts, including past behavior that belittled and insulted the cultures of these countries. Communism's appeal and the challenge it posed to the West received considerable attention during the 1950s, for example Gabriel Almond, //The Appeals of Communism// (1954). Although this theme did not disappear, a different approach, as well as terminology, came to dominate both scholarship and policy making regarding “less developed countries” or the “Third World” in the 1960s. The word “propaganda” appears less often because, as [|Ralph K. White (1952–53]:539) pointed out, “The world is more and more tired of propaganda.” A more indirect approach to the Cold War goals of containing the Soviet Union and gaining the allegiance of developing countries would use international communication research as a tool of development and modernization. That development paradigm is discussed later. The third problem area considered in the international communication research of the 1950s and 1960s was Europe. Some of this research was oriented toward the effectiveness of international broadcasting, as illustrated in [|Mendelsohn and Cahnman (1952–53)] and [|Lerner (1952–53)], but other research went beyond this dominant “persuasion paradigm” to develop new, less Cold War-oriented topics and substantive areas. Karl Deutsch's work on nationalism and international integration, which also contributed to the study of international communication, was motivated less by the East–West conflict than by the desire to study the conditions that made peaceful change and collaboration possible ([|Puchala 1981]). His wide-ranging research was particularly concerned with patterns of social communication and their relationship to political organization and integration. Deutsch's dissertation, [|//Nationalism and Social Communication// (1953)], developed a new model of nationalism based on the idea of “a people bound together by habits of, and facilities for, communication” ([|Merritt and Russett 1981]:6). “Membership in a people,” Deutsch argued, “consists in the ability to communicate more effectively, and over a wider range of subjects, with members of one large group than with outsiders” ([|Deutsch 1953]:71). His developmental model of political unification posited first the development of functional linkages and increased flows of transactions between communities that “enmesh people in transcommunity communications networks” ([|Puchala 1981]:156). 
A high volume of reward-producing transactions generates social-psychological processes that lead to assimilation and integration into larger communities. This model was intended to explain the formation of large-scale political communities, hence integration at the international as well as the national level. At both levels mutual responsiveness and “two-way channels of communications between elites and mass and among non-elites are central to his conception of successful integration” ([|Merritt and Russett 1981]:8). The empirical focus for Deutsch's investigations into political integration was Western Europe ([|Deutsch et al. 1957]). Deutsch was also interested in conditions that lead to political disintegration or dysfunctional integration. Drawing on his study of what he called the work of “communication engineers,” and foreshadowing his later work that more explicitly theorized from the field of cybernetics (1963), he postulated conditions that were likely to destabilize an amalgamated political community. One was an imbalance between the loads placed on the system by increased transactions and the capabilities to accommodate those loads. Inequities in the distribution of burdens and rewards were also a source of hostility that could make integration efforts self-defeating ([|Deutsch 1954]; [|Merritt and Russett 1981]). Deutsch was convinced not only that measuring the balance of communication flows was both feasible and valuable, but also that “statistics of communications flows constitute essential background data for almost any effective analysis of international communication” ([|Deutsch 1956]:145). He was particularly interested in measuring the balance of communication flows within a system, such as a country, and the flow of messages across its boundaries. 
“Inside–outside ratios of communication or transaction flows can [also] suggest something about the extent to which some particular human activity, such as science, is ‘international’ or ‘national’ and in what direction it may be changing” ([|Deutsch 1956]:159). His interest in cybernetics and in communication and transaction flows convinced him of the need for quantitative, replicable data, a conviction that contributed to his development as a pioneer in quantitative international relations. The balance of communication flows became a matter of controversy and concern in the 1970s. Deutsch's research empirically investigated idealistic assumptions about the relationship between communication flows and international understanding that were prominent in the euphoria of the immediate post–World War II period. When the United Nations Educational, Scientific, and Cultural Organization (UNESCO) was established in 1945, its constitution expressed the signatories' belief in the “free exchange of ideas and knowledge” and their determination “to increase the means of communication between their peoples and to employ these means for the purposes of mutual understanding and a truer and more perfect knowledge of each other's lives.” Soon afterwards, the [|Hutchins Commission on Freedom of the Press (1946)] published a report that advocated the free flow of information across borders as a means to a better world ([|Rogers and Hart 2003]). 
The report argued ([|Hutchins Commission 1946]:14) that what was needed in the field of international communication was “the linking of all the habitable parts of the globe with abundant, cheap, significant, true information about the world from day to day, so that all men increasingly may have the opportunity to learn, know, and understand each other.” In the decade that followed World War II, UNESCO sponsored the collection and publication of data on the world's network of mass communication facilities, studies on international news agencies ([|UNESCO 1953]), and the handling of world news in various countries ([|Kayser 1953]). It published or sponsored social scientific studies on the roots of intergroup tension and on nations’ images of one another (for example [|Klineberg 1950]; [|Buchanan and Cantril 1953]). UNESCO also sponsored and assisted with the development of several international professional organizations, such as the International Political Science Association, which were designed to facilitate scientific exchange and communication as well as cross-cultural research. In the context of the Cold War, however, the ideas of a free flow of information and freedom of information became another tool of US foreign policy to penetrate closed communist societies and to demonstrate the superiority of the US way of life. [|Siebert et al.'s (1956)] //Four Theories of the Press: The Authoritarian, Libertarian, Social Responsibility, and Communist Concepts of What the Press Should Be and Do// provided a conceptual framework for categorizing media systems that “served primarily to celebrate the Anglo-American models” ([|Downing 1996]:xiii, 191; [|McDowell 2003]). [|Davison (1965)] similarly compared the role of communication in various societies with a view to using communications to advance US foreign policy. 
By the middle of the 1950s, a bibliography on //International Communication and Political Opinion// ([|Smith and Smith 1956]) was published that contained almost 2,600 entries on relevant research since 1945. The categories included political persuasion and propaganda activities, channels of international communication, audience characteristics, and methods of research and intelligence. An article by one of these authors ([|Smith 1956]) for a special issue of //Public Opinion Quarterly//, which analyzed trends in international communication research over the decade following World War II, indicated that the bulk of attention had been devoted to propaganda and political warfare. As in the hot war of World War II, government support in the Cold War was a powerful stimulus for research in international communication within the persuasion paradigm. Federal funds sponsored evaluations of various informational and educational programs, such as efforts to measure audiences of the Voice of America, the United States Information Libraries, and even exhibits traveling abroad. Also sponsored were surveys of mass and leadership opinions, which formed the basis for studies of images that audiences had of the US, the USSR, and other countries.

Modernization and the Development Communication Paradigm
As new states began to emerge from colonial empires, communication became an important component of research on development. Indeed, “development communication” (or communication and development) was recognized as a distinct field of research and policy ([|Stevenson 1994]). Two books were particularly important in establishing what became the dominant paradigm of development for the 1950s and 1960s: [|Daniel Lerner's (1958)] //The Passing of Traditional Society: Modernizing the Middle East// and [|Wilbur Schramm's (1964)] //Mass Media and National Development//. Both emphasized the role of the mass media in guiding and accelerating development. For Lerner, the mass media provided a vicarious contact with the world for those constrained by traditional ways of thinking, enabling them to imagine different ways of doing things and to aspire to a better life. Thus wider exposure to the media in a traditional society helped the process of transition to a “modernized” state; that is, one that followed the Western model of development. [|Wilbur Schramm (1964)] buttressed Lerner's view regarding the potential of the mass media to raise the aspirations of people in developing countries and saw the media as a “bridge to the wider world, as the vehicle for transferring new ideas and models from the North to the South and, within the South, from urban to rural areas” ([|Thussu 2000]:57). The mass media were “the great multiplier,” amplifying, spreading, and accelerating the efforts of all the agents of change. [|Schramm's (1964)] book appeared early in the UN's designated “Decade of Development,” when its agencies, the US government, universities, and private companies were beginning to provide significant funding for research on how to “modernize” the newly independent countries ([|Thussu 2000]). 
Schramm's book, published in conjunction with UNESCO, became “both a technical manual for communication development and a bully pulpit for advocating the use of mass media as a key component of development programs” ([|Stevenson 1994]:234). This paradigm guided both national and international development programs throughout the 1960s. It resurfaced in the 1980s with a focus on telecommunication ([|Hudson 1984]) and again in the 1990s in modified form under the comprehensive label “information and communication technologies for development.”

Critical Perspectives in the 1970s
Development communication began to meet stiff criticism in the 1970s, as the more general modernization paradigm was challenged by an array of scholars, researchers, and Third World political leaders. The optimism of the modernization theorists confronted the reality of many failed projects and a general lack of significant progress toward development. A wave of scholars criticized the focus on the internal causation of underdevelopment and emphasized external constraints, the structural biases in the international economy that put developing countries at a disadvantage, and the vulnerability of dependence. Multiple variations of dependency theory, world systems theory, and an assortment of other related approaches agreed that the modernization model of development served merely to strengthen the dominance of the wealthy, developed countries and maintain the dependence of the countries at the periphery of the global system. They viewed the modernization paradigm as an instrument of neo-imperialism. One set of variations focused specifically on the role of communication in maintaining structures of economic and political power ([|Thussu 2000]). [|Galtung (1971)], for example, cited “communication imperialism” as one of the five types of imperialism in his now classic article on “structural imperialism.” A critical perspective that is often labeled “cultural imperialism” ([|Schiller 1976]) or “media imperialism” ([|Mattelart 1979]) was concerned about the detrimental effects on developing countries of Western, and particularly American, domination and control of global communications industries. According to this view, Western media hegemony inhibited indigenous industries in Third World countries, reinforced patterns of dependency, and imposed Western cultural values. 
These critical perspectives also challenged another paradigm that had reigned since the end of World War II; that is, the free flow of communications, which [|Schiller (1976)] argued actually leads to an asymmetrical flow. UNESCO modified its position from free flow of information to free and balanced flow. Structural inequalities in international communication became a high-priority concern among newly assertive developing countries, which called for a New World Information and Communication Order (NWICO) linked to their demands for a New International Economic Order. In response, UNESCO appointed the International Commission for the Study of Communication Problems, chaired by Sean MacBride. The final report ([|MacBride 1980]) concluded with 82 recommendations, all aimed at “eliminating imbalances and disparities in communication and its structures, and particularly information flows” ([|MacBride 1980]:253). Among the specific infrastructural changes recommended was “more equitable sharing of the electro-magnetic spectrum and geostationary orbit.” The discussions of the commission and the final report generated significant political debate, with some Western countries, most notably the US and UK, concerned that efforts to balance the flow of communication and information across national borders could amount to impeding that flow.

International News Flow/Coverage and International Transactions
During the decade of the 1970s and somewhat beyond, voluminous research was conducted more specifically on patterns of international news flow and coverage. [|Mowlana (1983)] estimated in an annotated bibliography that between 1973 and 1983 there were 441 papers, books, articles, reports, and book chapters published on this subject. These included studies that dealt with the volume and direction of news flows among countries and those that analyzed not only the amount but also the type of news disseminated ([|Hur 1984]). In a summary and critique of 80 major studies of international news flows and coverage, [|Hur (1984)] discussed research trends, categorized the different methodological approaches, and identified their shortcomings. Geographical approaches looked at the international news flow into a specific country, coverage of international news by the media of a specific country, or news flows or coverage across two or more countries. He found that the generality of theories proposed was seriously limited by unevenness and lack of inclusiveness in the geographical focus. Similarly, there was a lack of comparable data in the various media approaches and unevenness in the media selected for study. Most attention was devoted to newspapers, then television. There was a lack of cross-media research to explain variations in international news flow and coverage. Hur also found a general lack of longitudinal studies. Another category included studies focused on specific events or specific periods of time. In these studies [|Hur (1984)] found problems of typology that limited their cumulative effect. He also noted the lack of studies looking at long-term patterns of international news flows and coverage, which was significant because these patterns are subject to change. 
[|Hur (1984)] concluded that because of these shortcomings in the research, “there are relatively few research findings that can form ‘theories’ of international news flow or coverage” and, although there were well-formulated theoretical propositions in some studies, adequate empirical support was lacking. [|Hur (1984)] also noted a lack of multivariate research up to that point. This was an important deficiency because so many different independent variables (many never operationalized) were proposed in this literature to explain variations in international news flows and coverage, for example the power hierarchy of nations, cultural affinity, and economic variables such as export and import values. A cross-national study of 29 countries sponsored by UNESCO ([|Sreberny-Mohammadi 1984]; [|Stevenson and Shaw 1984]) found geographical proximity to be a major determinant of news coverage for most countries. In sum, few reliable generalizations resulted from this research because of the difficulty in operationalizing some variables; because findings were divergent; and because comparability in the data across countries, time, and media was lacking. A cursory examination of the [|Mowlana (1983)] bibliography and the [|Hur (1984)] critique indicates that research on international news flows and coverage during the 1970s was conducted for the most part by scholars in mass communication, with little input from or connection to scholars in international relations or political science. The 1960s and 1970s marked the rise, spread, and institutionalization of communication studies in American universities. Hundreds of university departments were established, some as completely new entities and others arising out of such existing departments as speech and journalism. 
Communication research as a field gained coherence and other advantages with its separate institutionalized structure; but it “lost some of its strong academic connections to the other social sciences,” which had characterized this research in its beginnings ([|Rogers and Balle 1985]:6). A strong emphasis among the new stream of scholars was on media studies, and, even among those interested in international communication, most became more professionally integrated into communication studies than international relations. From the 1970s onward only a small number of scholars have had a foot in both communication studies and international relations. One major edited volume, //Communication in International Politics// ([|Merritt 1972]), attempted to revive the study of international communication in political science and to give it more rigor. The book, which was dedicated to Harold Lasswell, was a product of a set of panels organized at the American Political Science Association under the presidency of Karl Deutsch. It considered international political communication in broad Deutschian terms of transaction flows across national borders, which included student exchanges, tourism, trade, diplomacy, and especially the exchange of ideas. [|Merritt (1972)] delineated three types of communicators relevant for international politics – governmental actors, nongovernmental actors, and cultures – which yielded nine types of communication flows. Merritt defined the communication process as a transmission of values, while [|Bobrow (1972)] in another chapter saw it as the transfer of meaning, and both deplored the lack of conceptual development in international political communication. The book's stated purpose was to synthesize and integrate existing research beyond isolated empirical findings and to encourage the building of theory. However, most research in international communication for the next two decades took place within communication sciences and journalism. 
Meanwhile, during those two decades technological and political changes were occurring that would transform the communication environment and inspire a wave of research on new topics.

The Globalization of Communication
Two mutually reinforcing trends in the 1980s and 1990s shifted the discourse on international communication to global communication. The progressive development and diffusion of fiberoptic cables, satellites, and the Internet were eroding the barriers of space and time, increasing the speed, and reducing the cost of transferring all kinds of information. At the same time, a widespread political shift was occurring toward liberalization in international trade, as well as in domestic policies regarding telecommunications and broadcasting. These developments spurred the growth of a privatized global media structure and new global networks for communication and information access. They also mobilized scholars from multiple disciplines to study the implications of these developments. One response was to think more broadly about the impact of new information and communication technologies on power structures and social change. The terms information revolution and information society were invoked to convey the profound impact these new technologies were having on all aspects of society and the international system ([|Castells 2000]). Some saw a significant break from the past; others stressed continuities and pointed to a historical tendency to focus on the novelty of new technologies ([|Webster 1995]). While not espousing a version of technological determinism, some scholars emphasized the ways in which the specific properties of technologies can shape new ways of thinking and hence social and political structures ([|Pool 1990]; [|Deibert 1997]). [|Deibert (1997, 2000)] drew on a tradition of scholarship known as “medium theory,” while most work that considered the relationship between changing communications technology and society did not make an explicit link to that approach. As with earlier technologies, optimists and pessimists debated the potential for positive versus negative effects. 
The international dimensions of these debates tended to focus on the impact of the new information and communication technologies (ICTs) on three major issue areas: the global economy, the nation state, and foreign policy. Multiple issues received attention within each of these categories.

The Global Economy
There has been general agreement that the new ICTs not only provide the infrastructure for a global economy, but also facilitate new forms of transnational and global economic organization ([|McGrew 2005]). Considerable attention was given to the ways in which these technologies have improved the capacity of firms to organize at a distance, and have provided the flexibility to disperse aspects of the production and distribution processes across different national locations ([|Deibert 2000]). These developments stimulated new directions of research from political economy ([|Comor 1994]; [|Mosco 1996]) and other critical perspectives. Major concerns of this literature are the economic, political, cultural, and ideological effects of corporate consolidation in the communication and media industries. As the media are commercialized and centralized they increase their command over information flows, their political influence, and their ability to set the media-political agenda ([|Herman and McChesney 1997]). Information and cultural products “are commodified and are designed to serve market ends, not citizenship needs” ([|Herman and McChesney 1997]). The central question for critical perspectives is “who owns and controls the distribution of communication, and for what purpose and intent” ([|Mowlana 1993]:72). This question is deemed important because control of the communication process and how that control is exercised condition how and what human beings think and therefore how they act ([|Comor 1994]; [|McPhail 2006]). Other analysts have viewed the new communications environment with a pluralist perspective, emphasizing its complexity and paradoxical tendencies. They acknowledge that the combination of the new expansive technologies and the widespread political shifts toward liberalization facilitate the concentration of economic power in enormous Western, especially American, communications conglomerates. 
But they emphasize how these same developments have generated new possibilities at the regional, national, local, and individual levels. New channels of communication, production centers, regional networks, and news exchange agencies have multiplied ([|Gurevitch 1996]; [|Sinclair et al. 1996]; [|Sreberny-Mohammadi 1996]; [|Thussu 2000]). Scholarly interest in the distributional effects of the new information and communication technologies (ICTs) generated a whole new genre of research under the rubric of the “digital divide” ([|Norris 2001]). The term has come to refer to great disparities in access, both within and across countries, to information and communication technologies more generally, not merely the Internet. It served as a focal point of debate about the political, social, and particularly the economic impact of ICTs, especially for developing countries. The research revived some earlier themes in international communication. The development communication paradigm was given a big boost, updated, and broadened to “ICT for development,” often stated as the acronym ICT4D. This perspective saw the digital revolution as a historic opportunity for developing countries to take a quantum leap forward to develop their productive capacities and to become integrated in the global economy. It assumed that access to ICTs opens the doors to wider economic and social development opportunities and has the potential to address poverty, inequality, and just about every other problem. Most scholars writing from this perspective, while enthusiastic about the possibilities, clearly acknowledged the structural, institutional, and cultural constraints ([|Wilson 2006]) and were more restrained than some of the materials issuing from international and nongovernmental organizations, especially from ICT-related corporations. The literature on ICTs and development raised broader issues regarding the relationship between technology and inequality. 
Some questioned whether the digital divide is bridgeable at all ([|Van Dijk 2005]), while others emphasized social variables in determining the benefits of ICTs ([|Warschauer 2003]) and the importance of projects and applications that are relevant to the particular social and cultural context ([|Keniston and Kumar 2004]). Still others applied a critical perspective to the push to bridge the digital divide, arguing that these efforts “will have the effect of locking developing countries into a new form of dependency on the West, trapping them in an increasing complexity of hardware and software that is designed by developed country entities for developed country conditions” ([|Wade 2002]:443). One angle of this argument reflected a view – echoes of earlier critiques of development communication – that digital technologies are simply one more instrument for the powerful to maintain control over the powerless, while misallocating funds that could be used to meet more basic needs of underprivileged populations.

The Nation State
The expansive character of the new ICTs focused attention on their impact on the nation state: its centrality in the international system, its sovereignty, and its relationship to its citizens. Although there was little sign of the state withering away in the international communication literature, there was widespread agreement that much has changed regarding the basis of state power, the context in which states operate, and the ways in which they exercise power. Information technology is now one of the most important power resources; and control over information creation, processing, flows, and use has become the most effective form of power ([|Braman 2007]). States are adopting and must adopt new information and media policies to maintain their sovereignty and to exercise their power effectively ([|Price 2002]; [|Braman 2007]). Controlling the flow of information has become more difficult and costly, however, leading some scholars to consider the potential of the new ICTs to open and undermine closed regimes, even to democratize them ([|Kalathil and Boas 2003]). From certain critical perspectives, the power of nation states is now subordinate to the power of the transnational corporations in a globalized economy ([|Tehranian 1999]). From a pluralist perspective, an array of nonstate actors has proliferated and gained influence, challenging the exclusive prerogative of the state to act on the world stage ([|Livingston 2001]; [|Brown 2004]). Much attention was given to the new political environment that ICTs have helped to create by empowering nonstate actors to connect, communicate, and mobilize more effectively across national boundaries. One strand of the literature focused on transnational advocacy groups as a manifestation of an emerging civil society, a more inclusive political process ([|Warkentin 2001]). 
A second strand of this literature pointed to the darker side of these technologies that can be used by anyone for any purpose, including criminal syndicates, drug cartels, and terrorists ([|Arquilla and Ronfeldt 2001]). The impact of ICTs on the relationship between the state and its citizens was considered in a variety of ways. The ability of governments to provide more direct and efficient services over the Internet and to engage in two-way communication with their citizens (e-government) has the potential both to strengthen the bonds between governments and citizens and to enhance governments’ capacity to monitor their citizens. On the other hand, ICTs also enable subnational and transnational ethnic, religious, and cultural groups that are geographically dispersed to connect and to consolidate a sense of common identity, challenging that of the territorially based state. In any case, the Internet changes the way we do and think about politics ([|Chadwick 2006]). The increased volume, speed, and scope of cultural products across national boundaries from satellite and Internet technologies intensified concerns about threats to the cultural integrity of states that have persisted since the first exports of Hollywood films. Critics stressed the dominant role of Western, especially US, industries in the flow of products and worried not only about the economic impact on indigenous cultural industries, but also about the effect on the society's cultural values ([|McPhail 2006]). Others pointed to an increasingly complex media environment with the burgeoning of new production centers, networks, and export markets in some developing countries ([|Sinclair et al. 1996]). To expand market share, the global conglomerates had to collaborate and make some adjustments to local cultures. These local and global collaborations have often resulted in a form of cultural hybridization, as well as the firm establishment of commercial models ([|Thussu 2000]).

Foreign and Security Policy
The advent of live satellite television news coverage in the 1980s generated an upsurge of scholarly interest in the role of the media in foreign policy making. The pioneers of communication research, grounded in social psychology, were interested in the effects of the media on mass audiences or individuals: their attitudes, opinions, and beliefs. Although their research generally found only limited effectiveness in changing attitudes, two mass communication researchers ([|McCombs and Shaw 1972]) later concluded that the mass media did have significant indirect effects by influencing what people think about and thus the “public agenda.” This “agenda-setting” concept generated a whole genre of research in the study of mass communication and in political communication focusing on the US political process. Other research concentrated on the concept of “framing,” finding that the way issues are framed shapes the way the public understands the issues, their causes, and solutions ([|Iyengar 1991]; [|Wolfsfeld 1997]; [|Entman 2004]). Much research on foreign policy making emphasized the inherent advantages of political power, which, combined with journalistic norms, ensured that news content was shaped more by the preferences of political officials than by news media priorities ([|Sigal 1973]; [|Bennett 1990]). The new media environment of the 1990s led to new investigations regarding the media–foreign policy nexus. Prompted in part by a widespread public perception that the television images of starving children in Somalia had pushed the US to intervene in 1992, while subsequent images of a dead soldier being dragged through the streets had forced the US to withdraw, a body of research developed to study the “CNN effect.” Much attention was given to the ways in which satellite television affected the decision-making process, for example speeding it up and making the actions of leaders more transparent. 
However, empirical studies of the CNN effect generally failed to find evidence that the news media were seizing the policy initiative away from political officials ([|Livingston and Eachus 1995]; [|Mermin 1999]). [|Gilboa (2005)] analyzes this body of work and concludes that there has been a lack of evidence that global television networks are decisive actors affecting foreign policy decision making and international outcomes. New technological developments brought more changes in the latter part of the 1990s that created an even more challenging media environment for leaders seeking to mobilize support for their policies. The proliferation of satellite news channels, especially Arabic-language channels, challenged the hegemony of the Western news media and provided alternative, widely distributed interpretations of events ([|Seib 2005]). The spread of the Internet provided even more abundant sources of information and alternative interpretations. These developments stimulated interest in the consequences for diplomacy and foreign policy making of this new competitive environment, where politics has increasingly become a contest for attention and credibility. (See the essay in this compendium on public diplomacy.) Considerable attention was given to the need for public diplomacy geared to a new political environment where nonstate actors are empowered by the new ICTs to participate more assertively in world politics ([|Livingston 2001]; [|Brown 2004]). Public diplomacy and the management of information were considered particularly important in international conflicts. Some of the literature on the impact of ICTs on the conduct of war echoed themes of the earliest international communication research on propaganda, about shaping the perceptions of allies and neutrals while demoralizing the enemy. An array of security threats posed by the new ICTs and the various forms of conflict that may emerge in cyberspace were analyzed ([|Libicki 2007]). 
Information warfare was discussed from both offensive and defensive perspectives, demonstrating how information systems can serve as both weapons and targets ([|Rattray 2001]). Another strand of literature focused on the use of ICTs by nonconventional combatants, especially terrorists, and how the Internet, cell phones, and other technologies empower these groups ([|Arquilla and Ronfeldt 2001]; [|Weimann 2006]). There was widespread agreement that the new ICTs both enhance the military advantage of great powers and make them more vulnerable, significantly changing the conduct of warfare. The effects of the mass media on terrorism have also been explored ([|Norris et al. 2003]). Diplomatic communication has received surprisingly little systematic attention in the past, despite its obvious importance. But recently, path-breaking research projects have been conducted using constructivist approaches and linguistic and dialogical analysis to examine how language structures the interactions of leaders and their policies (e.g., [|Duffy et al. 1998]; [|Goh 2005]; [|Duffy and Goh 2008]; [|Duffy and Frederking 2009]). Another notable new approach is [|David Sylvan and Stephen Majeski's (2009)] cybernetic account of US foreign policy using a theoretical framework based on organizational feedback processes that operate on information from the US foreign policy bureaucracy.

Managing the New Information and Communication Environment
Traditionally, telecommunication services were territorially organized, and radio and television systems developed everywhere as a collection of national systems serving primarily domestic audiences (except for shortwave broadcasting and trade in taped television programs). Policies and most regulation came from national governments, apart from where international agreement was required for optimum operability (as with the telegraph) or to prevent interference among radio frequencies. The emergence of border-crossing communication technologies, particularly satellites and the Internet, raised a host of issues regarding national sovereignty, jurisdiction, surveillance, and personal privacy that directed attention to questions of policy, law, and regulation. Most controversy focused on the control of the Internet or, as often expressed, Internet governance. In the early days of the Internet there was a widespread assumption that its basic architecture ensured that it could not be controlled. Research in the second decade of the Internet has challenged that assumption. One line of argument claimed that national governments retain the power to shape the architecture of the Internet in various ways and to enforce national laws within their territory ([|Goldsmith and Wu 2006]). Part of the explanation is coercion and another part is economic; that is, the need of e-business for government support. Other research deplored a trend toward greater control and less openness to communicate and to innovate ([|Lessig 1999]; [|Mueller 2004]; [|Zittrain 2008]). A model of “multistakeholder governance” was put into practice in the Internet Governance Forum. This approach to Internet policy, which is just beginning to receive attention in the literature, involves bringing together governments, the private sector, and civil society in partnership.

Looking Backward, Looking Forward
There are many other areas of research that might have been included in this historical narrative of a subject with a broad scope and no clear boundaries. Although the subject of international communication encompasses diverse and disparate topics and involves multiple disciplines, certain overlapping themes, strands, or threads can be discerned. Much of the research on international communication from the beginning has focused in various ways on the impact of communication media. Early propaganda and mass communication researchers were interested in the direct effects of the media on attitudes and opinions and tended to focus on the strategic uses of communication by governments. In the context of the Cold War and an emerging Third World, communication scholars became interested in the capacity of the mass media to change societal attitudes as the essential step to development. Toward the end of the twentieth century, interest in the new information and communication technologies revived this development communication strand of research. As a global communication infrastructure began to emerge, research proliferated regarding other effects of the new technologies, especially satellite television and the Internet, on the global economy, foreign policy decision making, international outcomes, and national sovereignty. A second strand of research focused on communication flows. In the brief euphoric period after World War II, many assumed that the increased flow of messages across national borders could enhance international understanding. Karl Deutsch was also interested in the relationship between communication flows and political community, but his work investigated the conditions that affected this relationship. In the 1970s an enormous literature, primarily in the communication sciences, examined patterns in the coverage of international news flows. 
Structural inequalities in communication flows between developed and developing countries also became a focus of attention in the same decade. A third theme that runs through the literature concerns the relationship between communication and power. There is a long tradition of critical theorists who see the media and communication structures as a force to maintain the hegemony of entrenched economic and political power interests. The cultural imperialism perspective, a critical response to the development communication paradigm, has continued to focus on the deleterious effects of Western media hegemony on developing countries. As Western, especially US, communication industries have become global conglomerates, the issue of corporate ownership and control of the media has gained even more attention. Other strands of the power and communication theme have focused on government controls and the use of the media to advance government policies. The issue of who should control the media and for what purpose, which has long interested scholars, has generated a new line of research, namely Internet governance. Also fitting into this category are the controversies over the impact of the new information and communication technologies on state power and sovereignty and on international hierarchies of power. The themes indicated above are simply suggestive of ways in which the diverse topics in international communication might be tied together to form coherent research traditions. But in order to increase our cumulative knowledge, future research should attempt to combine some of these topics into new theoretical frameworks and to conduct more studies in which hypotheses are tested. More interaction with scholars in communication science, political communication, and other relevant disciplines would suggest new approaches, concepts, and areas of inquiry. 
It was, in fact, interdisciplinary exchange and collaboration that gave rise to the earliest work in international communication, when the Chicago School and World War II physically brought together scholars from multiple disciplines. After the institutionalization of communication studies as a separate discipline, subfields relevant to the study of international communication developed, but they rarely connected with international relations scholarship. Mass communication and intercultural communication are two examples. Political communication managed to connect with both communication science and political science, but the tendency of all three to focus on the individual level (opinions, attitudes, beliefs) may have discouraged more interest on the part of international relations scholars. The micro level may have gained importance in the new media and political environments. If nonstate actors are playing more assertive roles in world politics, then the impact of new communication technologies on individual attitudes has consequences for foreign policy and diplomacy. The systematic analysis of media effects, as well as cultural exchange, may warrant renewed attention. There are also areas of inquiry in international relations that could be studied more directly as international communication topics, although they are not generally identified as such. Deterrence, negotiation, bargaining, and conflict resolution are a few examples. Cognitively oriented approaches and concepts from the social sciences more generally that relate language, culture, and policy could also be applied to international communication. Future research must continue to investigate the myriad ways in which new communication technologies are affecting world politics. This task has been a major focus of recent work, but it is a formidable one because of the pace of change and the scope of its impact. 
The task will require both macro and micro approaches, case studies, and aggregate data analysis: all the research methods that have been used thus far, plus new ones. The field is vibrant, dynamic, and wide open for future research.

Subject [|International Studies]
==== Key-Topics [|CNN effect], [|information and communication technology (ict)], [|networks] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
The networked information infrastructure that blends computing and communications is the largest construction project in human history. During the last two decades advances in information and communication technology (ICT) and an accompanying revolution in logistics (e.g., the advent of containerization) fundamentally reshaped the global economy. The production and the distribution of goods changed fundamentally as complex global supply chains changed where and how the world undertook these functions. The services supporting and complementing the “goods” economy, ranging from research and design through finance and logistics, became the dominant share of the world's output, and all these activities grew markedly more global, more information intensive, and more communications intensive. These upheavals resulted in a significant increase in the world's productivity and wealth ([|Mann and Rosen 2002]; [|Mann 2006]; [|Levinson 2006]). They also transformed important aspects of the conduct of international relations. This essay is divided into five distinct sections. This section reviews the major trends in information and communication technology that are transforming the commercial and technology landscape. The second section argues that the United States will continue to serve as the “demandeur” in international high technology policy for the next two decades. Section three considers the implications of the ICT revolution for international institutions and governance. The final two sections consider the consequences of the ICT revolution for foreign policy making and for the conduct of international relations. In considering the technology and communication revolution we first specify three long-term trends that revolutionized the ICT infrastructure. The first trend involves the end points on the ICT networks: What are their number, scope (ubiquity), and heterogeneity? How many and what type of processors and data sources connect at the edge of the network? 
Consider the evolution of terminals. First there were voice-only dumb terminals, then there were dumb data terminals, and finally powerful, networked personal computer (PC) terminals emerged. The number, ubiquity, and heterogeneity of network end points accelerated as PC connections to the internet proliferated and as voice and data mobility spread. The second trend involves the price point for a specific speed or quality of service in ICT markets. This point determines which applications might be usefully deployed across a network. Sometimes the necessary performance levels are not available at any price. In the 25 years leading up to 1984, the price for services of comparable quality and speed declined sharply. The decline in cost structures spanned applications and services. The third trend was that the breadth of applications supported by the network increased substantially, as determined by the processing capabilities, the location of the processing and application logic, and interoperability across the network. Mainframes were limited in their processing power and in their ability to run applications that relied on data from multiple systems and resources. Client–server architectures continue to evolve: early applications relied mainly on dumb data-entry terminals at the edge of the network. But as applications increasingly run partly in “the Cloud” and partly on devices at the edge, additional flexibility and resources both at the edge and in the network will be needed. A second stage of the technology and policy revolution continuing the convergence of computing, software, and communications began with the breakup of AT&T in 1984 and extended through 2000. After the decision to break up AT&T, the US government began to preach the virtues of facilities-based competition ([|Aronson and Cowhey 1988]). 
In the United States and internationally the telecommunications market experienced the gradual but forceful introduction of competition in all infrastructure, hardware, software, and services segments. Three important commercial developments spilled over into international relations. First, the gathering momentum of the microprocessor revolution for personal computing, competition in communications networking, and a second generation of computer networking architecture shifted the market horizon again. By the mid-1980s, the semiconductor industry began to enable deeper network architecture changes and revolutionize ICT devices’ power at the edge of the network. Telecommunications switching grew more sophisticated, but this happened more slowly than intelligence could be incorporated in computers and other devices operating at the network's edge. This “flipped” the logic of network architecture even as Moore's Law took hold and the spread of PCs in business and consumer arenas created new demands for networked applications and services. Second, there was explosive growth of mobile wireless. In developing countries, mobile wireless connections rapidly overtook wireline connections when the introduction of second-generation (2G) systems greatly upgraded capacity and quality while reducing costs. By 2000, mobile communications had emerged as a vertically integrated competitor to the wired network in all market segments except for data. (A decade later, mobile broadband data services (3.5G) began to explode in Japan, Korea, and elsewhere.) Third, the internet and its commercialization also were hugely important. The internet revolutionized the architecture and underlying capacity of the network. Cisco shipped its first router in 1986, allowing companies and network providers to begin to “interconnect” their networks. In 1991 US policy changes enabled the commercial use of the internet. This set the stage for the ICT growth of the 1990s.
By 1994, the internet swamped commercial email services. In August 1995, Netscape went public, igniting the “dot com” boom. In the United States, and to a limited extent elsewhere, new internet services providers and later large content and e-commerce applications aimed to take advantage of the network's power and scope. A myriad of smaller, more specialized applications also emerged that built their businesses on powerful, cheaper PCs, broadband networking at the office, and widespread narrowband networking in the home. These opportunities spread rapidly throughout industrial and developing countries. The emergence of the internet provided Tim Berners-Lee with the base from which he launched a suite of software applications – now known as “the World Wide Web” – that further altered these dynamics ([|Berners-Lee 1999]). HTML, the markup language that enabled the Web, consciously avoided the Microsoft approach and embraced open application programming interfaces (APIs) (an API is a set of routines, data structures, object classes and/or protocols that support the building of applications). Netscape's web browser and the subsequent inclusion of Microsoft's browser in Windows sounded the death knell of proprietary online services and freed consumers and companies from reliance on proprietary software systems to access the web ([|Greenstein 1993]). As policy and technology development progressed in the United States, parallel changes were underway elsewhere. Usually changes originated first in the United States, but not always. A significant exception was the takeoff of the mobile wireless infrastructure. However, change remains dynamic. Starting in the late 1990s, new computing and information architectures (e.g., “the Cloud” and “the Grid”) began emerging that implicitly rest on a much different set of capabilities and market organization than in the past ([|Stockinger 2007]). (There are disputes over the definitional lines.
We use “the Grid” to indicate an architecture that joins multiple computing platforms within a predefined organization. It is a subset of “the Cloud,” a virtual “on demand” approach that allows decentralized users to tap networked computing and storage as needed. Interfaces must be open but we do not assume that they must be produced by open-source code.) These architectures assume that powerful broadband networks intersect with two other emerging trends: (1) the integration of massive and inexpensive information storage with network architecture and services; and (2) the emergence of virtual computer systems that collectively and flexibly harness many computers, including high-end supercomputers, to mesh on demand to meet user needs. In short, the global information economy – including telecommunications, information technology, and increasingly all forms of copyrighted content – is at an inflection point. At this inflection point, if policy permits, a shift in the strategic context of the market invites a new direction in networked ICT infrastructure. But we believe that, more and more, the new leverage points are pervasive modularity in ICT capabilities and ubiquitous, inexpensive broadband networking. The “Cheap Revolution,” a pithy sobriquet coined by [|Rich Karlgaard (2003)], captures the consequences for commerce of the cumulative impact of (1) the dizzying price-performance dynamics ranging from microelectronics innovations involving computer chips through data storage; (2) innovations in regard to fiber optic and wireless bandwidth; (3) changes in software design and costs; and (4) the emerging cost and delivery structure of digital content. All four of these processes reflect the advantages of modularity, but software and content were the slowest to yield to the logic of modularity. This process also will have continuing implications for international relations.
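The compounding behind these price-performance dynamics is easy to make concrete. A minimal sketch (the two-year halving period is our illustrative, Moore's-Law-style assumption, not a figure from this essay):

```python
# Price of a fixed quantum of computing or storage, assuming price-performance
# halves every two years. (Illustrative parameters only, not data from the essay.)

def cost_after(years, start_cost=1.0, halving_period=2.0):
    """Price of a fixed unit of capability after `years` of compounding decline."""
    return start_cost / (2 ** (years / halving_period))

for years in (10, 20, 30):
    print(f"after {years} years: {cost_after(years):.6f} of the original price")
```

Twenty years of such compounding cuts the price of a fixed capability by a factor of roughly a thousand, which is why the authors treat the “Cheap Revolution” as a structural rather than incremental change.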
Briefly, first, a microelectronics revolution enabled the Cloud architecture, but also spawned two other forces. Terminals became more powerful and escaped the desktop. For many in the developing world, the first experience of the web will be on phones, not personal computers. In addition, terminals and devices on the edge of the network, as exemplified by radio-frequency identification devices (RFIDs) and sensors, open entirely new applications and architectures with huge growth potential. A second driver of the Cheap Revolution is the ubiquitous broadband packet-switched network, which will stimulate network traffic and the geographic spread of ICT applications in unexpected ways. With the predominantly wireline, circuit-switched, telephone architecture in rapid decline, incumbent networks and their suppliers tried to slow the transition in network architectures. Nonetheless, after 2000 the transformation of the general telecom infrastructure began to accelerate ([|Endlich 2004]). Broadband service will become faster, ubiquitous, and a hybrid of many network infrastructures ([|Cave et al. 2006]). This combination will support new information services, a dizzying array of applications, and content delivery to an ever-growing number of subscribers. [|Figure 1] illustrates the most important trends. Figure 1 The mobile network revolution begins. //Sources//: [|www.chetansharma.com] (mobile data users and total mobile internet subscribers); [|www.cdg.org] (provider data costs and mobile download rate). The third part of the Cheap Revolution is software. Although modularity began when IBM broke up the integration of its hardware and software components (which led to the creation of an independent software industry), modularity has been slower to come to software. 
Software is becoming more open and modular, especially at the infrastructure layer, in part because the rise of the web propelled changes in software design (and associated standards) and in part because of market pressures. A critical change is the growth of multiple operating systems as a reality that informs any major suppliers to the enterprise IT market. [|Figure 2] shows the stunning impact of operating system (OS)-agnostic applications on software. A huge percentage of applications once ran routinely only on Windows; the inflection point means that applications can run on anything. A significant factor in promoting this shift is that large users demanded that their huge investments in heterogeneous software systems, each installed for a special purpose, become interoperable ([|Cortada 2005]). Figure 2 The growth of agnosticism. //Sources//: Gartner Research (2005), as cited in [|Cowhey and Aronson 2009], fig. 3.6. Fourth, a parallel change is under way in media content, which has far-reaching consequences for commerce, journalism, and international politics. Specifically, digital content is more convertible across networks and terminal systems. As the media industry is disaggregated, screens for television shows are migrating to mobile phones, computers, and iPods. The distribution pipe includes broadband, cable, satellite, and now mobile broadband. Smart terminals plus broadband are challenging media stalwarts. These devices challenge the geographic boundaries of traditional broadcast models.
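The shift toward OS-agnostic, interoperable applications described above rests on open interfaces and platform-neutral data formats. A hypothetical sketch of the principle (the payload is our invention, not code from any system the essay discusses): an application that exchanges data through an open standard behaves identically whatever the operating system on either end.

```python
# Two heterogeneous systems interoperating through an open standard (JSON)
# rather than a proprietary, platform-bound format.
# The payload below is hypothetical, purely for illustration.
import json
import platform

record = {"order_id": 1042, "amount_usd": 99.50}

wire = json.dumps(record)    # serialize with an open, vendor-neutral format
received = json.loads(wire)  # any OS, language, or vendor stack can decode it

# The round trip gives the same result on Windows, Linux, or macOS.
print(platform.system(), received == record)
```

This is the logic behind the large users' demand for interoperability: once data crosses an open interface, the operating system underneath stops mattering.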

The United States Will Remain the Agenda Setter
Since 1945 the US market has been the most consistent agenda setter for the global market. American policy choices shaped other countries’ strategic choices. This is not a uniform story, but overall on international economic, trade, and ICT issues the US was the dominant force. Now, as economic gloom haunts the world, even as a new President settles into office in the United States, predictions abound that American dominance in international relations will give way to the leadership of China or others. By contrast, we believe that if the United States acts vigorously on the policy front, it can maintain its international leadership position until at least 2025. Substantial policy missteps could markedly alter the situation, but especially before 2020 a combination of inertia and continuing American dominance in many arenas should guarantee that the US remains the pivot of global relations. This view rests on five premises. First, the US has a large lead in its deployed ICT stock that is extremely difficult for other countries to overcome. This creates meaningful advantages in America's ability to deploy complex innovations across the economy. The United States has both the experience and the cumulative infrastructure investment to innovate rapidly and massively. Second, the US has the largest investment base and flows in the critical areas for innovation – national R&D spending, capitalization of the high tech industry, and private venture capital expenditure in IT and telecom. Third, the US will remain the leader for the foreseeable future in software, networked digital applications, high value-added commercial content, and high-end IT computing systems and solutions. Fourth, the US will continue to be among the top three global markets across the full range of ICT markets, from networking to software to services.
In view of the breadth of the US position, the relative US position in any specific market segment (such as the world telecom service market or particular equipment markets) is less relevant than commonly claimed. Moreover, in view of the still sometimes fragmented nature of the “single” European market and the complexities tied to the less-than-transparent Chinese technology market, the effective market power of the US often is greater than the raw numbers suggest. The US is a single giant market that operates under relatively transparent rules and with a market framework that involves flexible capital and labor resources. Fifth, the United States is the leading producer of high value-added content (movies, television, music, video games), a critical element at present. Further, US legal decisions related to content (digital rights management (DRM), intellectual property rights (IPR), sharing, and monetization issues) would set the stage for any global arrangements in this arena. Two counter-arguments sometimes are raised to suggest why the United States might not continue as the pivot point in world ICT relations. We believe that these suggestions overlook the fundamental market situation. The first argument for decreasing US importance in world markets revolves around China. The increasing numbers of Chinese engineers, the emergence of Chinese firms such as Huawei as global leaders, and the sizzling Chinese domestic market are cited as evidence that China is assuming a global leadership position. Central to this argument is the ability of China to parlay the size of its domestic market into scale economies on the production side and the ability to leverage homegrown standards into leadership positions in adjacent market areas. 
This reasoning assumes that China can develop a shrewd plan and implement it, but for familiar political reasons such as corruption, huge labor displacement, changing demographics as the pool of younger rural workers available to industry shrinks, skyrocketing demand for natural resources, and environmental and health crises, China's continued economic boom is not a sure thing ([|Kennedy 2006]). A second argument is that the continuing decline of US spending in major ICT market segments will erode America's dominant position. We believe that these stories are overblown. The US still is the largest player in world ICT across the board. It ranks between first and third in world standings for most market categories. Inferring leadership for hardware is trickier because of hardware's global production model. The largest segment of the market is communications. The OECD communications services data from 2005 placed total revenues at $1.22 trillion, about 39 percent of which was from mobile. The United States accounted for about one-third of the OECD market and was the largest revenue market for mobile in the OECD. Together, the US and Japan constitute 47 percent of the OECD mobile market (OECD). The US also remains the dominant ICT market overall with between 30 and 40 percent of the $3 trillion services and equipment market, but European IT spending is approaching US levels. Although Europe is growing faster, the US still dwarfs all other geographic regions in total ICT spending (more than 40 percent of the total in 2005). In short, although the United States may grow less quickly relative to other market centers, it remains the dominant market across the full ICT landscape. Although the EU (with 27 member states in 2009) now exceeds the American market in overall size, it is a less perfectly integrated market. Still, its magnitude means that it is the logical starting point for US international policy negotiations about ICT.
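The market-share claims above can be sanity-checked with simple arithmetic. A back-of-envelope sketch using the 2005 OECD percentages cited in the text (the derived dollar values are our computations, not numbers reported in the source):

```python
# 2005 OECD communications services figures as cited in the text.
total_revenue = 1.22e12        # total OECD communications revenue, USD
mobile_share = 0.39            # ~39 percent of revenue from mobile
us_share = 1 / 3               # US ~ one-third of the OECD market
us_japan_mobile_share = 0.47   # US + Japan ~ 47 percent of OECD mobile

mobile_revenue = total_revenue * mobile_share             # ~ $476bn
us_revenue = total_revenue * us_share                     # ~ $407bn
us_japan_mobile = mobile_revenue * us_japan_mobile_share  # ~ $224bn

print(f"OECD mobile revenue:        ${mobile_revenue / 1e9:.0f}bn")
print(f"US communications revenue:  ${us_revenue / 1e9:.0f}bn")
print(f"US + Japan mobile revenue:  ${us_japan_mobile / 1e9:.0f}bn")
```

Even on these rough numbers, the US share of communications revenue alone exceeds the entire GDP of most OECD members, which is the sense in which the raw percentages understate effective US market power.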

The Impact of the ICT Revolution on Institutions and Governance
The information revolution accelerated changes in actors’ roles in international relations. The web and the information revolution had tremendous security, political, economic, social, and cultural consequences. These changes altered the roles of countries, companies, non-governmental actors, and international institutions in the conduct of international relations. The information revolution altered the role of government policy makers in four main ways. First, policy makers now have access to far more information, perhaps too much information. Paralysis through information overload is a real danger. Second, global networks mean that decision making can be centralized or decentralized. Governments generally have centralized decision making, reducing the importance of ambassadors and embassies and tempting political leaders sometimes to micro-manage military situations and economic negotiations in distant lands because they can, not because they should. Third, global networks erode the monopoly of information in the hands of governments. Firms, journalists, and non-governmental organizations often have better information than governments. Fourth, global networks provide greater transparency to everybody, making it difficult for countries unilaterally to take national policy decisions when the problems are global. Globalization and global networks also allow business firms to think and act in terms of a global marketplace, heightening their international influence. The global movement of money and information allows firms to achieve global production strategies and simultaneously makes it more difficult for national governments to regulate them. In the absence of effective international regulation, especially after the push toward deregulation by the George W. Bush administration, these firms gained considerably greater influence.
Global networks empowered non-governmental organizations (NGOs) and led to a vast increase in their numbers on the international stage. NGOs now create, track, and disseminate information, and motivate and organize individuals and groups sympathetic to their goals to pursue specific policy outcomes in areas such as human rights advocacy, environmental protection, and women's rights. A striking example of the positive influence of NGOs was their major role in the negotiations to ban landmines that resulted in the Ottawa Treaty. (The Ottawa Treaty, formally the Convention on the Prohibition of the Use, Stockpiling, Production, and Transfer of Anti-Personnel Mines and on their Destruction, completely bans all anti-personnel landmines. As of May 2009, 156 countries have ratified and two more have signed but not yet ratified it. An additional 37 countries, including the United States, Russia, China, and India, have not become signatories.) Similarly, NGOs drew attention to the plight of women and children being trafficked across borders and raised the issue much higher on the international agenda. NGOs can also block government action, as when environmental NGOs and labor unions joined to disrupt the attempt by governments to launch a WTO Trade Round in Seattle in November 1999. Ironically, international institutions such as the WTO and the IMF are both more important and less effective international actors because of the rise of global networks. They are more important because in the absence of effective national policies to deal with globalization these institutions are the logical venues through which to organize cooperative international policies. They are less effective because critics of such institutions, who complain that they are neither democratic nor even-handed, have stymied their initiatives at major junctures. As globalization proceeds, governance issues grow more complicated. 
At each stage governments and private firms react to new developments which in turn alter the dynamics of globalization and international relations. At the same time social movements, religious groups, terrorists, revolutionaries, and criminal organizations, which are focused on their own goals and interests, try to manipulate globalization and global networks to their own advantage. As complexity and numbers increase, international relations grows ever more complicated and the chance increases that networks will fall apart, leading to system breakdown, economic collapse, and violence. Unless a flexible system of governance emerges, challenges that undermine cooperative international relations are likely to persist and grow. There are three main options. First, governments can try to muddle through, reacting as new circumstances and issues arise. The problem is that national regulations are less and less effective when dealing with global issues and transnational movements. Second, governments can maintain a deregulatory stance, step aside, and put their faith in the magic of markets. However, as they pursue power and profit, large firms and their well-compensated executives frequently distort markets. Over time, firms may behave better and practice self-regulation, fearful that their behavior will be exposed globally. However, as events surrounding the global economic downturn that began in late 2008 demonstrated, the record of self-regulation is spotty at best. Further, criminal organizations, terrorists, and other rogue actors can be counted on to “cheat” whenever it is in their interest. Third, governments may try to work through international institutions such as the ITU, WTO, or IMF. Here too there is a problem. Activists and NGOs fear that international institutions are undemocratic and serve as puppets for rich firms and governments. 
Thus, although the international telecommunications regime has been significantly amended and updated since 1984, the effort to achieve improved international relations has proceeded only in fits and starts. With technology changing so rapidly, rules negotiated in prolonged negotiations are always out of date before they come into force. Thus the telecommunications agreements negotiated during the 1990s did not address important information issues raised by the proliferation of the internet and World Wide Web. The only hope for rules to remain relevant is that they be flexible enough to evolve along with the system. But that is so complicated that critics worry that if the wrong rules are negotiated too early, the impact could be negative. The challenge for policy makers is to be sensitive to inputs from firms and NGOs, to figure out which rules are needed (and which are not) and how they should be structured, implemented, and enforced. Nobody has solved the challenge of constructing and implementing a sustainable regime for managing global networks, global firms, and global economies. The task grows ever more complex because there are increasing numbers of relevant players – developing countries, global firms, labor unions, and NGOs. Moreover, as the web powers the transition toward globalization, every country, large firm, and NGO is actively engaged in the process because they realize that the agreements that are struck will determine whether they are winners or losers in the emerging world information economy. Their future is at stake. There is considerable debate about the impact of globalization on risk and uncertainty, growth and inequality, democracy and freedom, and family and social relationships. But globalization is a dynamic process that governments and other actors continuously influence. The information revolution caught policy makers unprepared but, as it continues to unfold, the choices that governments (and other actors) make about policy do matter.
So far governments and international institutions have no coherent plan about how or even whether they should guide the information revolution or about how to create an international regime for cyberspace. Here, four key challenges facing policy makers with regard to cyberspace, which knows no geography, are considered. The legal and policy areas most directly affected by the ICT revolution can be grouped into four main areas that impact (1) individuals, (2) the content that flows over global networks, (3) the global communication infrastructure, and (4) the global regulatory environment, including issues related to network security – cybersecurity. Each of these areas requires attention because of the global nature of cyberspace; all of them may require global cooperation and coordination. The relative influence of governments, firms, NGOs, and IGOs (intergovernmental organizations), religious and social movements, criminal and terrorist organizations, and individuals will be critical as the information revolution continues to unfold and globalization proceeds. Yet, the balance of influence among these actors varies from issue to issue.

ICT and the Conduct of Foreign Policy
There has been considerable discussion of the impact of the internet and web on democratic and authoritarian rule ([|Kamarck and Nye 1999]; [|Kalathil and Boas 2003]). Less attention has focused on the impact of ICT breakthroughs on the conduct of foreign policy ([|Dizard 2001]). In general, the foreign policy information cycle unfolds over four stages: (1) relevant information is collected using various technologies from a wide array of sources; (2) information is transmitted across a secure global network; (3) specialists analyze, synthesize, and present masses of information to the appropriate officials who then must take decisions; (4) governments try to implement their decisions by winning support from legislatures, courts, and other powerful interest groups. Advances in ICT significantly improved governments’ ability to collect and transmit information. Progress at the other two stages is more problematic because “in many cases bureaucracies and leaders are overwhelmed by the information they collect and decision-making may actually be impaired by information glut” ([|Aronson 1991]). The failure of the intelligence agencies to prevent the events of 9/11 and the false claims that Saddam Hussein possessed weapons of mass destruction were just the most prominent examples of failure. The global spread of the internet and its bottom-up nature generate terabytes of new information waiting to be analyzed. Surveying opinion is more precise, affordable, and focused. And that is just the publicly available information. Security and intelligence services generate mountains of their own classified data. However, the collection of information does not translate automatically into better outcomes. The gatekeepers may not be able to distinguish relevant information from meaningless garbage. Further, key policy makers may simply fail to take in the information that they need to inform their decisions. 
Satellites and fiber optic cables made global networks easier to build and more secure. Information can be transmitted with speed and security from any point on the planet to any other point. The cost of transmission and storage of a set amount of information has fallen drastically, even as the amount of information transmitted has skyrocketed. By the mid-1970s it already was possible for the words spoken by an American pilot flying over the SS //Mayagüez//, an American freighter seized in May 1975 by Khmer Rouge forces of Cambodia, to be repeated to President Ford in real time. The speed and capacity to transmit information have increased steeply since then. Still, this is not altogether a good thing. Secure fiber optic cables operated by other countries are much more opaque to US authorities than old cable and satellite transmissions. Decision makers are struggling to cope with masses of information. Information management techniques often have replaced intuition, historical parallels, and years of experience as the main guides to decision making. Policy makers receive piles of data generated by computers, satellites, and human assets, which are analyzed and synthesized by their subordinates. There is a danger that one form of bias is being substituted for another. An additional consequence of the advent of advanced information gathering capabilities is that decision making is growing more centralized. The President and his top political appointees can make most of the important decisions, even when lower-level officials in the field are better positioned to make decisions. In many cases ambassadors are relegated to the role of cheerleaders for American business and have marginal decision-making authority. This is particularly the case in large, important countries when friends and supporters of the President are nominated without much regard to their foreign policy credentials. 
These ambassadors are symbols of America, but the important decisions are made in Washington. During the lead-up to the final implementation of policies, new ICT technologies allow government decisions to be widely disseminated and quickly explained. These same technologies allow other interested parties to communicate their views just as effectively. Bloggers and talking heads, NGOs and corporate enterprises all air their views and influence the debate. Further, new ICT technologies make it almost impossible to keep secrets. It probably is more difficult than ever before for quiet diplomacy to succeed because almost everything leaks out. Similarly, policy compromise and agreement are more difficult because so many countries and interest groups are involved. For example, when the list of official representatives who converged on Tunis in November 2005 for the World Summit on the Information Society (WSIS) grew to 335 single-spaced pages, the likelihood of any significant breakthroughs was vanishingly small from the start ([|ITU 2005]).

The CNN Effect: Top Down and Bottom Up
The “CNN effect” relates to the idea that since the late 1980s broadcasts from CNN, BBC, and other news channels have had a major impact on the conduct of foreign policy in the United States and elsewhere. The CNN effect, a phenomenon that may alter “the extent, depth, and speed of the new global media,” is a development of the past two decades ([|Livingston 1997]). CNN's wall-to-wall coverage of the collapse of communism, the Tiananmen Square protests in 1989, and the first Gulf War all brought critical images and foreign policy issues to the forefront of America's political consciousness. The CNN effect usually refers to a range of real-time modern media, and is not exclusive to CNN or even 24-hour broadcast cable news. Almost 20 years later the polarity of influence reversed. Individuals at the grassroots level could upload their photos and thoughts from any part of the globe onto the internet. Using websites such as YouTube, Flickr, Facebook, and Twitter, individuals can rapidly reach larger numbers of sympathizers and policy makers. These innovative websites helped foster the rise of “citizen journalism,” which allows individuals with no formal connection to news organizations to become an integral part of the news reporting process ([|Gillmor 2004]). Online news is growing in importance and influence. Social networking now allows individuals to coordinate their activities and rapidly gather into “smart mobs” that grab the attention of the media and of policy makers ([|Rheingold 2003]). NGOs, smart mobs, and determined activists may not immediately change policies, but they do elevate issues higher up the policy agenda ([|Keck and Sikkink 1998]). Simultaneously, the future of traditional print media is in doubt.

The Consequences for International Relations
As the ICT revolution spreads across the planet it resets the international relations playing field. The possibilities for winners and losers going forward are reshuffled. Old ways of doing business and conducting policy are being thrown into question. These shifts have significant consequences for security, and political, economic, social, and cultural interactions.

Consequences for International Security Relations
The information revolution altered the nature of intelligence operations, political opposition, and the waging of war. Robert Keohane and Joseph Nye have distinguished among three different kinds of information: (1) free information that is made available at no charge to the recipient, (2) commercial information that is made available for a price, and (3) “strategic information that confers great advantage on actors only if their competitors do not possess it” ([|Keohane and Nye 1999]). It is this third category that takes precedence and may provide special insight for foreign policy makers. However, access to more information does not automatically translate into better policy decisions or greater national security. Three components of this sea change are discussed: intelligence gathering and its impact on foreign policy; the rise of “activism, hacktivism, and cyberterrorism” ([|Arquilla and Ronfeldt 2001]); and the use of networked information in military conflict ([|Singer 2009]). First, global communication networks help governments collect and analyze vast quantities of information to inform their decisions. However, greater intelligence collection often does not translate into better policy or prevention of terrorism. The information collection capabilities of modern intelligence services were already evident in 1983. Within hours after a Soviet fighter downed Korean Airlines 007, President Reagan released the taped conversations between the Soviet pilot who shot down the plane and his ground base. Eighteen years later, despite extensive efforts and intelligence gathering technological advances, efforts failed to prevent the September 11, 2001 terror attacks on the World Trade Center and the Pentagon, or the Madrid train bombings two and a half years to the day later.
Similarly, despite confident claims by American and British leaders that Iraq was poised to unleash weapons of mass destruction, a year after the spring 2003 invasion of Iraq no such weapons had been found. Even when important information exists, locating it and recognizing its importance in time to prevent disasters can be challenging. Thus, figuring out which intelligence matters becomes imperative in the conduct of electronic espionage, especially because cyber-terrorists have access to almost the same information on the web. Information overload may also leave less room for the intuition, trust, and secret understandings that were the traditional instruments of the process. In short, more information may be a blessing when bureaucrats and political leaders can manage, analyze, and synthesize the data. It can be a curse when abundant information overloads or dehumanizes the decision-making process to the detriment of creativity and flexibility. Similarly, global networks allow governments to centralize decision making, increasing the influence of a narrow circle of top leaders. This may not translate into sound, efficient policy choices. Indeed, many large firms have decided to decentralize their decision-making processes to give more authority to those closer to the customers. Second, governments and others now routinely try to use “soft power” to influence the views of others through television, radio, and print media and via the web. Those who generate the information view it as “public diplomacy”; those on the receiving end are more likely to see such broadcasts as propaganda. In the aftermath of 9/11 the United States launched an Arabic-language radio station to provide an American perspective to those who otherwise might not listen. Famously, in the mid-1990s the Zapatistas in Chiapas, Mexico, knowing they could never win a military struggle, launched a social netwar to make their case against the Mexican government to the world.
By making their plight transparent to the world, they created a playing field on which they could compete and sometimes triumph ([|Castells 2004]). Those dissatisfied with the current order found in global networks a tool that allowed diverse individuals to organize in order to make their voice heard. Activists and NGOs of all political persuasions have seized on the web as a mechanism to maximize their influence and lobbying clout. Advocacy networks in support of human rights issues and the environment, opposing violence against women, and seeking the end of landmine use have been especially noteworthy ([|Keck and Sikkink 1998]). Similarly, during the Battle of Seattle, anti-globalization activists used new global communications technologies to organize against the WTO and the forces of globalization that they opposed. A more virulent form of activism occurs when hackers, for fun, fame, or politics, break into networks and try to cripple or sabotage them or infect them with viruses, worms, and other forms of attack. There also is significant evidence of government-sponsored cyber attacks. For example, in 2001 at the nadir of US–Chinese relations, Chinese hackers launched waves of cyber attacks on US government computer systems in an effort to penetrate and sabotage them. Moreover, since 2003 American computer networks run by, among others, NASA, the National Laboratories, and major defense contractors have been the target of coordinated attacks (sometimes designated as Titan Rain) that appear to be examples of state-sponsored espionage, originating in China. Other examples include the 2007 massive, crippling cyber attacks launched from Russia that targeted a wide range of Estonian organizations (Economist 2007), and the August 2008 cyber attacks originating in Russia that swamped Georgian websites as Russia and Georgia battled on the ground. 
In addition, the Pentagon apparently has considered launching direct cyber attacks on its foes to bring down their computer and communications systems, but there is reluctance to go all out because uncertainty remains about cyber warfare's place in the rules of armed conflict. Weaker states and terrorist organizations cannot compete with the military firepower of the United States and Britain, but they can respond robustly by attacking computer networks. Third, global data communication networks and new information technologies are changing modern warfare. Knowledge is the key to destruction as well as to production. The potential power of information weapons was demonstrated in the 1991 and 2003 invasions of Iraq. The military was bolstered by AWACS (the Airborne Warning and Control System), which scanned the sky for enemy aircraft and missiles and sent targeting data to allied forces from modified Boeing 707s. In parallel, J-STARS (the Joint Surveillance and Target Attack Radar System) helped detect, disrupt, and destroy Iraqi ground forces during Desert Storm with speed and precision. Similarly, the battle for Kosovo was fought from the air: smart planes directed by smart computers delivered smart bombs. In this virtual war the attacking forces suffered no fatalities during the fighting. The continuing conflicts in Iraq and Afghanistan have been notable for substituting drones, robots, and other technologies operated from afar for troops on the ground wherever possible.

Consequences for International Politics
The political consequences of globalization and global networks also are both positive and negative. “E-government” that engages citizens more directly in the political process is technologically feasible. E-government could evolve into “information government” that concentrates on “information flows within government and between government and citizens” ([|Mayer-Schönberger and Lazer 2007]). At the same time, the process, politics, and political implications that result from the new technologies could foment civil unrest and confusion. On the positive side, new communications and information technologies are beginning to enable advances in e-government, e-democracy, and e-participation ([|UN World Public Sector Report 2003]). Governments and candidates now routinely use the web to provide citizens and supporters with information. Digital media also can promote e-democracy across the globe ([|Boler 2008]). Politicians and parties now rely on the web to solicit contributions. Increasingly, governments and candidates use the web to elicit views from their people and to seek input to assist them in their decision making. A few isolated localities have also experimented with e-voting in elections. The lasting legacy of Governor Howard Dean, the unsuccessful candidate for the 2004 Democratic presidential nomination who went on to head the Democratic National Committee, was to show how the internet could be used to motivate and involve supporters and to raise funds. Barack Obama took the use of the internet, the web, and even sites like YouTube to new dimensions in his successful run for the presidency in 2008. Simultaneously, sponsored and independent bloggers informed and commented on all things political. Indeed, it is striking that governments are losing their hegemony over the political process.
New communications and information technologies empower NGOs, firms, revolutionaries, terrorists, fundamentalist religious leaders, extremists of all stripes, criminal syndicates, and political subversives as well as well-meaning social movements, reformers, and activists. This raises concerns that decentralized, fragmented, anarchic chaos is on the horizon that may overwhelm the positive benefits of communications and information technology. Or, alternatively, governments well beyond China may feel that their only option is to crack down and reassert their control over the internet and their citizens.

Consequences for International Economic Relations
The strongest case for globalization and global networks is that they promote economic growth through increased trade and investment. Companies and countries that are early adopters of communications and information technologies may enjoy an information edge as they compete and grow. Globalization and global communications do not, however, guarantee that growth will be distributed equitably within or between countries. Furthermore, global flows of funds and information may undermine national policies and facilitate crime and corruption. It is unclear, for example, whether national monetary authorities can control the money supply or exchange rates in a globalized economy, especially when large sums are being illegally laundered. In short, national governments are challenged as they try to manage global firms and markets effectively. The problem of the “digital divide” is especially poignant. Manuel Castells notes, “Uneven development is the most dramatic expression of the digital divide.” Moreover, the digital divide within and between countries should not be “measured by the number of connections to the Internet, but by the consequences of both connection and lack of connection.” The “social unevenness of the development process is linked to the networking logic and global reach of the new economy. […] Education, information, science, and technology become the critical sources of value creation in the Internet-based economy” ([|Castells 2001]). To be competitive within a networked world economy, countries, and the firms and individuals within them, must have access to global flows of capital and information. It is but a short logical jump from this starting point to contend that if legitimate, legal capital flows, and especially information flows, are restricted, alternatives will be found. If large parts of the population in poorer countries are shut out of the new economy, global criminal activities will arise to create illicit transnational networks instead.
Inevitably, such activities undermine the legitimacy and stability of governments and of civic culture and can, in extreme instances, result in the destruction of the rule of law, the collapse of state authority, and sometimes violence and civil war. Similarly, illegal activities could undermine trust in, and the functioning of, the world economy. Organized crime has a long history. The Sicilian Mafia, the Cali cartel, the Chinese triads, the Japanese Yakuza, Russian criminal networks, and their predecessors have operated for centuries. But globalization and global networks have prompted criminal networks to form transnational strategic partnerships to ply their illegal, often violent trade. Since the 1980s sophisticated transnational criminal organizations have used global communications and transportation technologies to expand their grasp and become more efficient. The United Nations Conference on Transnational Crime noted in 1994 that criminal organizations were active in crime involving the transnational movement of drugs, weapons and weapons-grade materials, people and body parts, and money. Drug smuggling, from Colombia to Thailand, is the dominant global criminal activity. Ironically, the greatest threat facing the drug trade may be drug legalization, not government success at shutting down the supply side. Weapons trafficking is a multi-billion dollar business that can easily spill over to supply arms and munitions to revolutionaries, terrorists, and criminals. The smuggling of nuclear weapons-grade material for possible use by “rogue” states or terrorists is a rising concern. The safekeeping of Russian nuclear material has long worried specialists; in 2004 the head of Pakistan's nuclear program confessed that he had illegally sold materials abroad. The smuggling of illegal immigrants eager for a better life has increased as opportunities in richer and poorer countries have diverged.
The trafficking in women for menial work and prostitution, of children, and of body parts also has increased. Money laundering through global networks is the glue that holds all of the other transnational criminal activities together.

Social Networking, Global Culture, and Public Diplomacy
The rise of new information and communications technologies creates a second digital divide, separating those who are comfortable using new technologies from those who are not. Those who are connected to the technology are also increasingly connected to virtual communities with which they regularly share information and ideas, even if their members have never met in physical space. These smart mobs gather and disperse, intellectually and physically, with remarkable speed ([|Rheingold 2003]). The rise of the personal network platform also appears to be on the horizon. In short, one consequence of global networks is that they enable individuals and non-state actors to relate and interact with institutions and with one another in new ways. Another consequence, related to the transparency created in an interconnected world, is that individuals lose significant amounts of their privacy. It is now routine to “google” those you meet. A slightly deeper examination will reveal credit reports, parking tickets, and employment records. Ironically, those plotting terrorism often choose not to use new communications sources precisely because doing so could expose their activities in advance. On the cultural side, communications networks redefine questions of identity, of determining “Who is us?” Again, technology pulls identity in conflicting directions. On the one hand, the internet allows people to get in touch or stay in touch with their roots and maintain their family, ethnic, religious, and cultural ties. Unlike travelers and immigrants of previous generations, those who move across the globe today do not cut ties with family, friends, and their workplace, because phone and email connections are usually cheap and available. At the same time, cultures blend into one another and become more global because of shared attachments to news, movies, video games, fashion, design, and even cuisine.
The technology allows people to create new groups of friends and associates online using games like World of Warcraft and by meeting in virtual worlds like Second Life. Thus hyphenated identities are slowly giving way to multiple identities shared among global citizens. On the diplomatic side, communications networks may bolster the prospects for successful public diplomacy. Once, America reached out to citizens of other countries through Voice of America and Radio Marti. The United States sent art exhibits, jazz artists, and cultural exhibitions on tour. Today, the idea of public diplomacy and the possibilities of “soft power” are popular notions, and the tools provided by the information revolution are constantly in flux ([|Nye 2004]). One week after taking office President Obama reached out to the Muslim world by granting his first formal interview as president to Al Arabiya, an Arabic satellite television station (Obama to Arabs). Presidential addresses and press conferences are now routinely streamed live on YouTube. Diplomats may reach out or negotiate via teleconferences, saving time and money and preventing jet lag. Second Life and other virtual worlds may open up new ways for policy makers to coordinate among themselves or to just introduce themselves, their countries, and their cultures to others. In summary, globalization has tremendous consequences in different arenas. However, globalization is a dynamic process, not an end point. As new consequences emerge, companies, countries, and individuals adjust. These adjustments feed back and impact factors driving globalization, and so the process continues to unfold. To borrow a popular notion, globalization is a journey, not a destination. International communications and information technologies shrink the world and make it accessible to people everywhere.

J.P. Singh
==== Subject [|International Studies] » [|International Communication] ==== ==== Key-Topics [|communication], [|governance], [|international cooperation] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
International cooperation is indispensable to understanding global politics. International regimes, frequently understood as regularized patterns of cooperative interaction or behavior among international actors such as nation-states, provide the most concrete instances of such cooperation. The overview presented here describes historical communication regimes in telecommunications, broadcasting, and, more recently, electronic commerce. Regime theory as it evolved historically is discussed first. Theories of international regimes now overlap with, and are at times subsumed within, theories of global governance.

Understanding Regimes
Regime theory, a product of the field of International Relations, arose in the United States in the late 1970s to conceptualize global cooperation in economic relations. It was thus no coincidence that regime theory achieved prominence just as scholars and practitioners began to pay heed to the political economy of deepening global interdependence. Until then, international relations researchers primarily studied security relations among nation-states.

Regimes Defined
The word regime comes from the French //régime//, meaning rule or authority. As used in international relations, it refers to regularized cooperative patterns of interaction or behavior among international actors. However, most regime theory has concentrated on cooperation among nation-states, often at the behest of other actors such as international governmental organizations, firms, and non-governmental organizations. Regimes often serve to socialize international actors into cooperation and, in turn, are products of such socialization. A well-accepted definition of regimes comes from the Stanford University scholar [|Stephen Krasner (1983)], who notes that regimes are principles, norms, rules, decision-making procedures, and, oftentimes, institutions which are explicitly or implicitly agreed upon by international actors in specific issues (known as issue-areas). Principles, the most general part of regimes, are broad understandings or beliefs about the way the world works. Norms, a slightly narrower conception than principles, specify standards of behavior. Lately, international norm development has received a great deal of attention from scholars. Rules prescribe or proscribe particular courses of action. Decision-making procedures lay down guidelines for everyday interactions and dispute resolution. Communication regimes are founded on the principles of sovereign interaction while encouraging smooth information flows among nation-states, especially to enable international commerce ([|Zacher with Sutton 1996]). Thus regime norms have encouraged the reduction of barriers to information flows, sometimes in cases where weak states have felt coerced into cooperating. Regime rules have often been spelled out in international agreements, conferences, and organizations.
Decision-making procedures for these rules range from informal discussions to international law, international regulatory agencies, and formal dispute resolution bodies such as the one operating under the World Intellectual Property Organization for the arbitration of internet domain name disputes.

Why Regimes?
The motivating basis for international regimes lies in the mutual or convergent interests of international actors, which tilt them toward cooperation to achieve their objectives. This departs from orthodox //realpolitik// reasoning about the international system as lacking central authority, or being anarchic. Without such authority, nation-states have to fend for themselves and are thus constantly jostling for survival and locked in a struggle for power. On this view, international regime formation would be considered suspect, at least by nation-states, as it takes authority away from them and makes them suspicious of any breaches of their sovereignty. Such reasoning is, however, myopic on several counts. First, even nation-states must exist as a society in the international system, and while no society is devoid of conflict, the obverse is also true: a society can also cooperate. Thus the shadow of the future for an international society, where self-interest takes account of the long run, can often lead to cooperation and, consequently, international regime formation. This does not necessarily contradict //realpolitik//; it refines and deepens the analysis. Regimes are often posited as intermediate factors, or intervening variables, between the self-interested motivations of international actors and the particular outcomes of international interaction. Thus, while regimes might find their basis in cooperation, the everyday outcomes arising under particular regimes need not be cooperative.

Theories of Regimes
Under what circumstances will international actors be motivated to create or sustain regimes out of self-interest? Here, several answers are often proposed from various theoretical perspectives.

State Power
Explanations rooted in state power locate regime formation and sustenance either in a preponderance of state power or in a convergence of state interests ([|Krasner 1991]; [|Zacher 2002]). Preponderant power, aptly described by Thucydides as the strong doing what they can and the weak suffering what they must, leads to regime formation when a strong power, such as an international leader or a hegemon, shoulders the burden of getting other states to agree through moral suasion, incentives, or sanctions. At times, weaker states may be coerced into cooperating. Such explanations are favored by realists, especially for security regimes such as NATO where state power plays an important role. The convergence explanation notes that state interests can lead to regime formation under a number of different circumstances. While realists emphasize commonality of interests, neo-liberals underscore their mutual adjustment. Neo-liberals find their intellectual rationale in theories of political idealism favored by Immanuel Kant and Woodrow Wilson, or in theories of free trade traced back to Adam Smith. First, where states must co-exist in international society, the shadow of the future may make them cooperate. Second, as international regimes are a form of collective action, such action may be easier for a small group of nation-states to undertake. Thus, international summitry among the most powerful states (in forums such as the G8, the OECD, or the European Union) may lead to cooperation with or without hegemonic persuasion. Another view is that states may see long-term or short-term benefits from cooperation that make them overlook costs such as risks to their sovereignty. Finally, states may undertake regime formation at the behest of powerful lobbies in their territories. Instances of this include the many human rights regimes, such as the 1997 international treaty banning landmines, or economic regimes framed because of lobbying from global businesses.
Just as the preponderance and convergence explanations differ on regime creation, they differ along the same lines on regime sustenance. Robert Keohane's famous 1984 neo-liberal book //After Hegemony// argued that hegemony is not a necessary condition for regime sustenance. For realists, regimes mostly fail if hegemons do not sustain them; for neo-liberals, regime institutions, once created, have a life of their own and thus enhance cooperation.

Economic Interdependence
Theories of economic interdependence take the mutuality of state interests as their point of entry for analysis, thus overlapping with the explanation provided above, but they move beyond states in noting the many other factors that lead to regime formation and sustenance. First, international institutions can play a big role in bringing about the mutual adjustment of interests. Examples include the role of the European Commission in convincing member states to agree to cooperative measures, or that of the WTO (World Trade Organization) in bringing nation-states to the negotiating table. Second, domestic and transnational actors can often be forces in their own right for international cooperation, or they can move states toward it. Examples of the former include the accelerating trend toward setting standards privately among firms instead of going through states or international organizations. The recently formed Internet Corporation for Assigned Names and Numbers (ICANN) is an example of increasingly private forms of international governance. To take another example, states and international organizations are also allowing for industry “self-regulation,” as in the Safe Harbor agreement crafted between the United States and the European Union for data privacy issues in trans-border data flows. All these instances will be discussed in detail later. Third, the role of users of telecommunication services, especially large users such as MNCs, is particularly important for communication regimes. Some of the most important changes, domestically and internationally, have come about owing to the needs of users. The need of large users for private computer networks finally led the International Telecommunication Union (ITU) to adopt its Recommendation D-6, allowing such independent networks to emerge, a development often resisted by the state-owned monopoly telecommunication carriers. Large suppliers can play a role, too.
Many large telecommunication service providers and equipment manufacturers have led the way since the 1980s in liberalizing telecommunication markets worldwide. Conceptually, interdependence theory also borrows from models in institutional economics in positing cooperation. Most of these theories are rooted in notions of market failure: regimes exist to enable international transactions and exchange because markets are unable to do so by themselves. The way for actors to cooperate and forge mutually beneficial outcomes is via regime formation. The rationale of mutual benefit is often located in the reduction of transaction costs for actors involved in any particular issue-area. Thus, if the ITU coordinates the allocation of radio frequencies to different countries, it reduces transaction costs for all involved, compared with each actor seeking a series of bilateral agreements. Regimes thus also help to resolve problems of collective action. Here, the task of regime formation starts with an influential core (or powerful) group of states, and is then “multilateralized” to include other actors.

Collective Understandings
This explanation of regime formation takes into account many of the cognitive processes involved as actors mutually adjust their interests. It points to the many factors that may produce an intersubjective understanding of cooperation among actors, which facilitates regime formation. Scholars variously conceptualize these collective understandings as the socialization of agents, hegemonic ideologies, or epistemic communities. The last refers to a small group of influential members, organizations, or actors agreeing upon a particular cognitive framework. For example, in the case of the global environmental accord to reduce CFC emissions into the atmosphere, the epistemic community of scientists played an important role in convincing policy-makers and the public of the risks of ozone layer depletion ([|Haas 1990]). Often, the ascendance of liberal ideas in policy-making since the late 1970s with the Reagan–Thatcher era, supplemented by the push for liberalization by agencies like the World Bank and the IMF, is taken to be the collective understanding that led to regime change in telecommunications in the 1980s. Even for prior periods, there is now evidence that the monopoly model in telecommunications was accepted because of the collective understanding of the engineers staffing these carriers (whose self-interest was best served by such a market structure). Another central insight coming out of these schools of thought is that before turning to how interests lead to regime formation, we need to understand how those interests arise in the first place. The processes of collective interest formation mentioned above then give a sense of social purpose to the actors involved. In such scenarios, the individual interests of actors are secondary to the formation of the social sense of purpose.

Technology
The nature of, and changes in, communication technologies are factors in their own right in understanding communication regimes. While seldom taken as a singular explanation, technology, along with other factors, is often taken to account for the origins of the interests of various international actors. The “natural” monopoly in telecommunications, which led to the institutionalization of this market structure for over a hundred years, rested on the technological rationale that telecommunication networks required large amounts of investment, and large numbers of users, to be profitable. The monopoly model broke down when technological innovations began to challenge the notion of high investment costs. Technological explanations are at the heart of just about every regime feature. For example, content controls in satellite broadcasting have been hard to enforce because satellites leave their footprint over large areas. On the other hand, national authorities have often tried to control content by jamming broadcasts or by tinkering with radio and television receivers. The decentralized governance model of the internet is often connected to the technological decentralization of the internet itself. More recently, [|Cowhey and Aronson (2009)] argue that communication networks are now modular: they can be broken down, and network functions – infrastructural conduits or content-related flows – can be broadly distributed, allowing for broad transformations in communications governance if politics would allow it. In their analysis, technology is the necessary condition for change, but politics and policy specify the exact direction.

Critique and Synthesis of Theories
Each of the explanations mentioned above is, by itself, found to be lacking in one respect or another, and thus scholars often provide a synthetic explanation to account for regime formation. Power-based theories are often too state-centric and coercion-based, and fail to account for the mutual convergence of interests in many circumstances. Interdependence theories, on the other hand, are critiqued for being naive about power and for failing to account for instances where mutuality of interest might have existed but regimes did not come about. Theories rooted in collective understandings often do not tell us when and how regimes will come about. Finally, as noted above, technology-based explanations need to be modeled through their effects on actors to explain particular outcomes. A few well-known theoretical syntheses may be mentioned. [|Stephen Krasner's (1991)] explanation of power-driven regime formation in communication brings in technological aspects to account for several variations across regimes. [|Mark Zacher (2002)] combines state power with interdependence and technology to provide a complete account. [|Peter Cowhey (1990)] finds most of his rationale in interdependence factors but also accounts for the intersubjective understandings of epistemic communities (of engineers, economists, and policy-makers) that provided the conceptual and intellectual rationale for the particular nature of regimes. [|Singh (2008)] combines varying conditions of power, from both states and markets, with collective understandings to show how international negotiations lead to variable outcomes for communication and global governance.

Strength and Scope of Regimes
Regimes vary in their levels of compliance (strength) and in the number of issues (scope) they cover. Rules may exist at the international level, but if they are not implemented at the national or sub-national level, the result may be a weak regime. For example, Europe-wide directives for telecommunication liberalization issued from European Union headquarters in Brussels were not adhered to by member states. Similarly, the complex telecommunication accord fashioned by the WTO in 1997 was deemed by a few scholars too complex and difficult to implement. On the other hand, the understanding that each nation would have a monopoly can be seen as a mutual agreement that nation-states adhered to vis-à-vis each other until the late 1970s. Sometimes regimes encompass a number of institutions of varying strength. The UN-convened Internet Governance Forum (IGF), which has convened multiple stakeholders including governments, businesses, and civil society since July 2006, is often viewed as more of a “talking shop” to address concerns regarding US domination of internet governance, while de facto governance authority rests with ICANN. In terms of scope, scholars until the turn of the century found it convenient to speak of a single telecommunications regime covering issues such as satellites, radio and television broadcasting, surveillance, and the sending of voice or data messages. The fact that the prime international institution dealing with all of these sub-issue areas was the ITU (at least until the late 1980s) also made it possible to speak of them as one regime. Convergence of technologies also made it hard to speak of separate regimes: a satellite can carry voice, data, and images, and it is thus difficult to speak of a satellite regime in broadcasting versus one in telecommunications when both are often affected by the same rules.
In dealing with the history of communication regimes, this essay follows the dominant academic convention of positing a telecommunication regime that includes a range of issues. However, it does depart from this reasoning toward the end of the essay in positing regimes in internet and electronic commerce, which are no longer centered around the ITU or governed by the same set of rules as earlier regimes. (See [|Tables 1] and [|2] for important dates and glossary of terms.)

Table 1 Important dates for international communication regimes
 * 1837 || Telegraph invented ||
 * 1843 || First telegraph message ||
 * 1851 || Telegraph cable laid under the English Channel ||
 * 1860s || Telegraph cables laid across the Atlantic and the English Channel ||
 * 1865 || Founding of the International Telegraph Union (ITU) ||
 * 1876 || Alexander Graham Bell's telephone patented ||
 * 1901 || Radio signal sent across the Atlantic by Marconi ||
 * 1902 || Radio broadcasting begins ||
 * 1906 || International Radiotelegraph Union (IRU) formed ||
 * 1927 || Telephone service begins across the Atlantic ||
 * 1932 || International Telecommunication Union formed by merging the earlier ITU with IRU ||
 * 1934 || FCC created with the Communications Act of 1934 ||
 * 1939 || TV broadcasting begins ||
 * 1942 || Voice of America broadcasts begin ||
 * 1945 || One of the first computers, ENIAC, performs complex calculations ||
 * 1947 || Transistor invented ||
 * 1947 || ITU is made part of the United Nations. International Frequency Registration Board (IFRB) comes into being taking over work from the Berne Bureau ||
 * 1950 || Color television broadcasts begin ||
 * 1955 || Network computer SAGE introduced ||
 * 1956 || Voice messages sent across transatlantic cables ||
 * 1959 || Microchip invented ||
 * 1964 || Birth of Intelsat ||
 * 1967 || Signing of Outer Space Treaty ||
 * 1969 || ARPANET, precursor to internet, formed ||
 * 1971 || World Administrative Radio Conference recommends keeping satellite broadcasting flows to a minimum ||
 * 1981 || IBM PC introduced ||
 * 1983 || Consent Decree leads to break-up of AT&T ||
 * 1988–89 || Major ITU conventions reformulate rules to allow digitization and liberalization ||
 * 1989 || World Wide Web comes into being ||
 * 1989 || ITU allows firms’ networks to operate under same rules as telephone companies ||
 * 1992 || European Community's liberalization begins ||
 * 1994 || US privatizes internet governance ||
 * 1996 || Passage of US Telecommunication Act ||
 * 1997 || Signing of the Fourth Protocol for telecommunications liberalization of GATS ||

Table 2 Glossary
 * basic services || Telecommunication services where the content of the message does not change during transmission ||
 * CCIR || International Radio Consultative Committee at the ITU ||
 * CCITT || International Telegraph and Telephone Consultative Committee at the ITU ||
 * DBS || Direct Broadcasting Satellites ||
 * domain name || Name identifying internet site. It usually has two parts: one on the right is the top-level domain name (the most general), the one on the left is the second-level domain name (the most specific) ||
 * geostationary orbit || Orbit circling the globe 36,000 km above the equator where satellites are positioned and move with the Earth's rotation, thus appearing to be stationary ||
 * global commons || Resources such as the sky and the seas owned and used collectively by nation-states ||
 * HDTV || High definition television ||
 * IFRB || International Frequency Registration Board at the ITU ||
 * Intelsat || International Telecommunication Satellite Organization ||
 * Most Favored Nation || Clause in the WTO articles noting that favors conferred by one nation to another must be conferred to all member-states of the WTO ||
 * settlements || Bi-lateral agreements among telecommunication service carriers for division of revenues governing jointly provided (international) services ||
 * value-added services || Telecommunication services in which the content of the message is changed or value is added during transmission ||

The Historical Telecommunications Regime: 1865–1980s
The story of the dominant international communications regime is a familiar one. It was dominated, until the late 1970s/early 1980s, by state- or privately owned monopolies in communication industries. A tacit agreement at the international level sanctioned this monopoly cartel. Recently, this cartel has come undone and communication markets worldwide have moved toward privatization and liberalization. The following analysis points out major features of the regime with reference to the major technologies that underlie it.

Foundations of the Regime: Telegraph and Telephone
The invention of the telegraph in 1837 coincides, not accidentally, with the growing strength of the commercial and industrial revolutions. The fit between communications technologies and the intra- and inter-organizational needs of capitalism is regularly noted. Mark Zacher has observed that capitalism came with a “mandate for interconnection.” This mandate is explained differently in various paradigms (as noted above); nonetheless, its roots in capitalism as fostered by nation-states are undeniable. Capitalism fostered economic and other exchanges among peoples that led to new regimes in issues such as transportation and shipping, trade, posts and telecommunications, and migration and slavery. Principles and norms of capitalism and of the sovereignty of the nation-state, upon which capitalism rested, found their way early into the design of the international telecommunication regime and remain its guiding pillars. It was not long after the invention of the telegraph that the need for a regime arose. By the mid-1860s, telegraph cables spanned the Atlantic and the distance between London and Calcutta. Napoleon III called for a conference in Paris in 1865, which led to the birth of the International Telegraph Union (precursor to the present-day ITU) to ensure that flows of communication would supplement the freer flows of commerce. It was at this time in Europe that the major powers, including Britain, France, Prussia, and Italy, reduced their tariff barriers toward one another. The telegraph's spread demanded, and the ITU began to provide, rules for interconnection, equipment standardization, pricing agreements among countries, and a mechanism for decision-making to address all these needs. The latter became even more important after the invention and spread of the telephone after 1876. The emphasis on national sovereignty in Europe shaped everything that the ITU designed. 
An early compact was that each nation would own its own monopoly in telecommunications and, depending on national capacity, its own torch-bearer for equipment manufacturing. The monopoly rule would later be buttressed by the cost calculations of engineers, who argued that network benefits could be optimized only if there was a single “natural” monopoly in every nation. What the regime ensured was that these monopolies would be interconnected with each other. The rules of joint provision of services (where two nations were sending messages to each other) and joint ownership (of cables and, later, wireless networks) extended the sovereignty principle to international communications. Bilateral agreements (known as settlements) on the division of revenues, usually divided equally between the states involved, were also regularized through the ITU. Another legacy of the early years of the ITU, and of its basis in national sovereignty, is its one-nation, one-vote principle. (Later, we will examine how this principle differs from voting weighted according to shares in Intelsat or particular constituencies as in ICANN.) The system of voting would benefit the weaker powers, especially in postcolonial times. As with any other regime, the system of national monopolies interconnected with each other was a set of political bargains legitimized through international institutions. For Cowhey (1991), the system was held in place by the epistemic community of engineers and bureaucrats at the ITU and national authorities in telecommunications.
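The settlements arithmetic described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the accounting rate, and the traffic figures are hypothetical assumptions, not figures drawn from the essay or from ITU practice.

```python
# Illustrative sketch of the historical settlements system described above:
# two national monopolies agree on an accounting rate per minute for a
# jointly provided route and, under the typical equal split, the carrier
# originating more traffic pays the other the settlement rate on the
# traffic imbalance. All names and figures here are hypothetical.

def settlement_payment(accounting_rate: float,
                       minutes_a_to_b: float,
                       minutes_b_to_a: float,
                       split: float = 0.5) -> float:
    """Net payment from carrier A to carrier B (negative if B pays A)."""
    settlement_rate = accounting_rate * split   # each carrier's per-minute share
    imbalance = minutes_a_to_b - minutes_b_to_a
    return settlement_rate * imbalance

# Example: a $1.00/minute accounting rate; A originates 10m minutes, B 4m.
net = settlement_payment(1.00, 10_000_000, 4_000_000)
print(net)  # 3000000.0 -> A owes B $3m under the equal split
```

The equal split is the historical default noted in the text; carriers facing one-way traffic imbalances thus made large net payments, a point that becomes important in the later discussion of settlement deficits.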

Deepening the Regime: Radio Broadcasting
The principles of the telecommunications regime, as listed in [|Table 3], carry over into radio (and, later, television broadcasting). These principles – unimpeded information flows, development of global commons, standardization, and sovereignty – all began to be developed with respect to radio broadcasts from the early years of the broadcasting regime, formally inaugurated with the founding of the International Radiotelegraphy Union (IRU) in 1906.

Just as the ITU came about a couple of decades after telegraph messages began to be sent, the IRU developed after radio signals began to be sent in the 1890s and radio broadcasts began in 1900. The basis of radio came from the identification of electromagnetic radiation by James Clerk Maxwell at Cambridge in 1864, confirmed by the German physicist Heinrich Hertz in 1888. Commercial exploitation of this wireless technology began with Guglielmo Marconi's efforts in the 1890s, when the technology began to be deployed for maritime communications. The eponymous Marconi Company tried to become a global monopoly, supported by the British and Italian governments, through aggressive pursuit of patent suits disallowing stations using Marconi equipment from interconnecting with others using different equipment. An ITU conference in 1903 and the founding of the IRU in 1906 both tried to check the power of the Marconi Company. However, it was not until tragedies like the //Titanic// and stock-trading scandals involving Marconi that Britain finally gave way on the interconnection issue at an IRU conference in 1912. Radiotelegraphy and broadcasts raised the issue of interference among broadcasts. The first action taken by the IRU was to start reserving particular bands of the electromagnetic spectrum for specific services, starting with one for maritime communications in 1906. By the 1980s, there were 35 such bands (including aeronautics, cellular, and radar). The IRU allotted frequencies within certain bands to assure the safety of maritime and aeronautic travel. Second, the IRU began to register frequencies if they were not in use, on the principle of first-come, first-served, although some concessions were later made to reserve a part of the spectrum for developing countries in the postcolonial era. The third major feature of broadcasting concerned the development of understandings on the issue of broadcast jamming. 
Almost all the major powers used jamming of some sort in the 1930s. However, a minimal-interference understanding developed in the postwar era, even as debates over jamming continued, whereby states more or less accepted that radio broadcasts to their territories were not illegal but that they had the sovereign right to jam them if they deemed them to be security threats or hostile. The latter rationale was often used by the Eastern bloc and the Soviet Union and, at times, by developing countries. The efforts at regime expansion and deepening described above were eventually formalized through organizations, most of them located within the International Telecommunication Union (created in 1932 by merging the old ITU with the IRU). As mentioned before, the principle of one-nation, one-vote was followed; nonetheless, major powers and users exerted more influence. Rule-making at the ITU develops out of its conventions and administrative conferences. Of these, the World Administrative Radio Conferences (WARC) and the World Administrative Telegraph and Telephone Conferences (WATTC) are historically important. Telecommunications rules submitted for approval at these conferences come from the International Consultative Committee for Telephone and Telegraph (CCITT) and the International Consultative Committee for Radio (CCIR) in the ITU. Furthermore, the International Frequency Registration Board (IFRB), and its successor the Radio Regulations Board (RRB), were key for frequency allocation. All of the bodies mentioned in this paragraph were reorganized by the ITU in 1992 (see later).

Table 3 Features of the international communications regime
 * ~ //Regime features// ||~ //Monopoly era (1865 to early 1980s)// ||~ //Liberalization era (early 1980s to present)// ||
 * Nature/scope |||| Telecommunications, broadcasting, electronic commerce, and internet ||
 * Strength || Telecommunications: strong || Telecommunications: strong ||
 * || Broadcasting: weak to strong depending on issue || Broadcasting: increasingly strong ||
 * ||  || Electronic commerce: weak to strong depending on particular issue ||
 * ||  || Internet: strong ||
 * International institutions || Telecom and broadcasting: ITU, UNESCO (1970s) || Telecom and broadcasting: ITU, GATT/WTO ||
 * || Satellites: Intelsat || Satellites: Intelsat ||
 * ||  || Internet: ICANN, WIPO, WSIS ||
 * ||  || Standards: ISO ||
 * Principles and norms || Unimpeded flows of international commerce ||  ||
 * || Global commons ||  ||
 * || Interconnection and standardization ||  ||
 * || National sovereignty (sometimes in conflict with other principles and norms) ||  ||
 * Rules || National monopolies in telecom services and equipment || Liberalization of telecommunication markets ||
 * || International coordination for allocation of frequencies and orbital slots || Moves toward cost-based pricing settlements ||
 * ||  || Privatization and liberalization of cable and satellite providers ||
 * || International agreements for prices and interconnection/joint provision of services ||  ||
 * || Joint ownership for international cables ||  ||
 * Decision-making procedures || ITU: one-nation, one-vote via standing bodies, committees, and important conferences; Intelsat: voting weighted by share-holdings || Multilateralization of decision-making: ITU, GATT/WTO, ISO, WIPO, OECD involving international agreements; ICANN: internet governance provided through a mix of private and public authorities ||

Regime Challenges: Satellite and Television
The politics of the origins of global satellite and television broadcasting follow from the two great interstate rivalries, east–west and north–south, of the postwar period. The lead taken by the Kennedy Administration in the United States toward installing a global satellite system was partially the result of a desire to steal the show from the Soviets with this technology after the latter launched the first satellite in space, Sputnik-1, in 1957. The use of satellites for surveillance and espionage, and the difficulty, at least until the 1980s, of obtaining such imagery for civilian uses, further underscores power politics. Nonetheless, beyond the impetus provided by the Cold War, another set of satellites developed for commercial uses. The latter's development and influence on the communication regimes goes beyond east–west rivalry. As for north–south politics, direct broadcasting satellites, beginning with NASA's ATS-6 broadcasts to remote areas in the United States and to villages in India in the 1970s, were soon resented by developing countries, who contended that television broadcasters must seek their permission before beaming signals to their territories. These two contentions brought new concerns into the communications regime, resulting in rules and decision-making procedures that diverged from the old. In Peter Cowhey's words, satellite technology “was the first challenge to the ITU system.” The overall character of the regime, however, centered as it was on state-cartel monopoly provision of services, remained the same. The categorization and acceptance of outer space as equivalent to maritime space, and therefore a type of global commons, helped the United States ensure the safety and functioning of satellites in space. 
This started with efforts by the United States in the 1950s, pushed via an important report in 1955 from the National Security Council that advocated the launching of a scientific satellite “as a test of the principle of ‘Freedom of Space.’” This was codified by the Outer Space Treaty of 1967, which noted that outer space would be “free for exploration and use by all states.” The system of assigning slots to satellites in geostationary orbit (GSO) followed that of the assignment of radio frequencies in organizing the global commons taken up by the ITU. The principle was that of “first-come, first-served,” though, over time, the developing world succeeded through successive international agreements in reserving a few slots for its future use. The way the satellite services themselves began to be owned and provided, however, differed from the traditional ITU regime. In a race to prevent the Soviets from doing anything similar, the United States led the effort to develop a global satellite system with the quick passage of the Communications Satellite Act of 1962, which created the Communications Satellite Corporation (Comsat). The next move was to create the International Telecommunication Satellite Organization (Intelsat), a global consortium, in 1964, in which the US had a 61 percent share, though West European powers succeeded in instituting rules that sought to constrain US influence. Thus, while country shares weighted Intelsat voting, voting rules required that any important motion be supported by at least 12.5 percent of the votes in addition to those of the country with the highest share. The United States therefore could not carry its resolutions with the simple majority it possessed on its own. The 1964 agreement creating Intelsat was an interim agreement and was replaced by permanent agreements in 1969 and 1973, with more than 100 countries joining. 
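The Intelsat voting constraint just described can be illustrated with a small sketch. This is a simplified reading of the rule (treating the largest shareholder's support as necessary and requiring at least 12.5 percent of additional votes), and the share figures other than the 61 percent US stake are hypothetical.

```python
# Simplified sketch of the Intelsat weighted-voting constraint described
# above: an important motion needs at least 12.5 percent of the votes in
# addition to those of the largest shareholder, so the US (61 percent)
# could not carry motions alone. The pass rule here is a simplified
# reading of the text, and the non-US share figures are hypothetical.

def motion_passes(shares: dict, supporters: set) -> bool:
    largest = max(shares, key=shares.get)
    if largest not in supporters:
        return False  # simplification: treat the largest holder's support as required
    other_support = sum(shares[c] for c in supporters if c != largest)
    return other_support >= 12.5

# Hypothetical share distribution echoing the 61 percent US stake.
shares = {"US": 61.0, "UK": 10.0, "France": 8.0, "Germany": 8.0, "Others": 13.0}
print(motion_passes(shares, {"US"}))                  # False: 61 percent alone fails
print(motion_passes(shares, {"US", "UK", "France"}))  # True: 18 percent joins the US
```

The design point the rule captures is that weighted voting rewarded the largest investor while the 12.5 percent rider forced it to build at least a minimal coalition, the constraint the West European powers sought.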
The shares in Intelsat were owned by national telecommunication monopolies, and services were regulated as jointly provided. The next challenge to the communication regime arose in the 1970s from developing countries (supported by a few developed countries) concerned about DBS flows into their territories. Before the satellite era, TV signals at most reached contiguous states; DBS-TV had a large footprint. What emerged from these negotiations was that DBS broadcasts needed to be based on the “prior consent” of receiving countries. The 1971 World Administrative Radio Conference was the first to rule that such broadcasts be minimized, subsequently followed by resolutions from the United Nations Educational, Scientific and Cultural Organization (UNESCO). The DBS issue itself became part of a larger challenge from the developing world that culminated in 1976 with calls for a New World Information and Communication Order (NWICO), which sought to correct the one-way flows of information from the global north to the south and also called into question the negative news coverage of the developing world. The Soviet Union supported NWICO. Information flows from north to south were often equated with cultural imperialism. The forums for these demands were the ITU as well as UNESCO. Beyond raising the world's consciousness about the plight of the developing world, the NWICO issue did not yield any important gains for these countries.

Telecommunications Regime Change: 1980s to Present
This section documents the liberalization phase from the late 1970s to the present. While the origins of this change lie in technological change, the main impetus came from a remarkable coalition of powerful states and large users who called for a liberalized, competitive marketplace in telecommunications. Telecommunications technology by the 1970s had evolved to a point where the monopoly argument was increasingly unsustainable at the national or global level. Judicial and Federal Communications Commission (FCC) rulings in the United States, starting in the late 1950s, had begun to affirm the rights of potential service providers and large users to own and operate their own networks and interconnect with AT&T's monopoly network. This task was aided by innovations in telecommunications technology, as, for example, in the license obtained by Microwave Communications Inc (MCI) to provide service between St. Louis and Chicago in 1969. Nevertheless, the workings of these technology changes were played out through states and service users, as detailed below. It needs to be emphasized, however, that the regime change has come in the form of changed rules and decision-making procedures (see [|Table 3]) and not so much in terms of principles and norms. In fact, the calls for a liberalized telecommunication marketplace, detailed below, were in harmony with regime principles and norms inasmuch as the latter called for unimpeded flows of international commerce, the creation of global commons, and the ensuring of global interconnection and standardization. The only principle to change significantly, and there are a few who argue that it has not changed, is state sovereignty, which has declined in telecommunications as a number of authoritative functions that states performed have now moved to international organizations or to private firms.

Coalition for Change and “Big Bang”
A powerful coalition for regime change arose on behalf of the large users (multinational firms) of telecommunications services, who accounted for a majority of long-distance telecommunication traffic in the world. These users, most of them relying on data-based networks for their operations, found themselves increasingly hamstrung by the inefficient way in which most telecommunications monopolies operated. The monopolies were mostly run as overly bureaucratized government departments, little concerned with either expanding their infrastructures or improving their quality. The irony was that the services were in high demand (facing inelastic demand curves), and thus the government monopolies were often used, especially in the developing world, as “cash cows.” The large users, located in the developed world, put pressure on their home governments for international regulatory reform. Several factors helped their task. First, neo-liberal or pro-market ideas were on the rise in policy-making, academia, and international organizations. The large users saw their needs best met through a competitive marketplace rather than through monopolies. Their calls initially found a ready reception in those home governments that had already begun to liberalize many sectors of the economy; the cases of President Ronald Reagan in the United States and Prime Minister Margaret Thatcher in the United Kingdom are particularly important. Second, and relatedly, the demand for international reform followed the liberalization of domestic telecommunications in key countries such as the US, UK, and Japan, which together accounted for nearly two-thirds of the global telecommunications market. The United Kingdom was the first, in 1982, to privatize its monopoly, British Telecom, and introduce duopolistic competition by licensing a second common carrier, Mercury, owned by Cable and Wireless. AT&T was broken up in 1984 and competition was introduced in long-distance services. 
Japan began the privatization and competition process in 1985, but the state retained a large oversight role in introducing particular forms of competition in the various service markets. Third, as telecommunication markets opened up in these countries, competition began to develop among service providers and the preferred national equipment manufacturers, even in countries that had not yet liberalized, who now wanted to enter international territories in search of revenues. These service and equipment providers joined the large users in the international coalition for reform. Finally, the European Community (now European Union) began to cajole and move member states toward liberalizing their telecommunications. This came on the heels of several important national and European Commission reports and policy initiatives that touted the benefits of liberalization. A Green Paper in 1987 urged telecommunications reform and pushed countries toward adding telecommunications to the creation of the European Union in 1992. While individual countries started moving toward liberalization and privatization in the 1980s, the marketplace did not become competitive until 1998. Nonetheless, the early European efforts aided the coalition mentioned above.

GATT/WTO Negotiations in Telecommunications
One of the most significant changes in the structure of the telecommunications regime is the shifting of authority away from the ITU to institutions like the GATT (General Agreement on Tariffs and Trade)/WTO in officiating over the regime. As pressure for telecommunication liberalization arose among states such as the US, UK, and Japan, it was felt that the ITU's close connection with national monopolies would tie its hands and stand in the way of meaningful change. Furthermore, pressures in telecommunications were part of a broader set of pressures for the liberalization of services (banking, hotels, airlines, etc.) in general. The Uruguay Round of the GATT (1986–94) was instrumental in designing a framework for services liberalization through its Group of Negotiations on Services (GNS). This framework served as the backdrop for the WTO telecommunications negotiations from 1994 to 1997. While the GNS agenda applied to many service industries (including financial services and shipping), the agreement which emerged from its deliberations, the General Agreement on Trade in Services (GATS), is particularly important in the case of telecommunications; the agreement's implications for the global governance (or regimes) of telecommunications are a central feature of GATS. Formally, GATS consists of 29 articles, 8 annexes, and 130 schedules of commitments. The annexes cover specific sectors, including telecommunications. GATS is enforceable through the dispute settlement body of the WTO and overseen by one of three newly established councils, the Council for Trade in Services. Countries tabled two thousand pages of “specific commitments” pertaining to schedules of progressive liberalization (market access and fair treatment), which, like the schedules of tariffs under GATT, are considered legally binding upon member states. 
These commitments pertain to eight sectoral annexes, including one on value-added or specialized telecommunications (others include those on financial services, transport, audio-visuals, and labor mobility). The benefits of GATS are allowed only for signatory countries (there were 106 in 1994), though member states may ask for exceptions for up to ten years. Sixty-seven governments made commitments in the telecommunications annex of GATS in 1994, which covered value-added services. Initially, the annex was to cover basic services too, but developing countries found coalition partners among the Europeans, who were also averse to basic services being negotiated just then. The US also wanted to impose cost-based pricing schemes in telecommunications; developing countries, whose cause was spearheaded by India in Geneva, would have lost important revenue bases had these schemes been introduced immediately. Again, the Europeans helped the developing countries' cause by themselves not agreeing. Finally, important issues concerning satellite uplinks and downlinks (which would later almost derail the WTO telecom negotiations) were also left unnegotiated because of opposition from many states. GATT's Uruguay Round of trade negotiations created the WTO and instituted the GATS agreement, which called for ongoing sectoral negotiations. The WTO telecommunication negotiations, begun in May 1994, took up the unfinished agenda of GATS related to the liberalization of basic services. Three years of complicated negotiations followed, almost coming undone in April 1996 when the United States responded to weak liberalization offers from others by walking out of the talks. Nonetheless, the February 15, 1997 accord was hailed by the United States and the WTO as a major victory. Ninety-five percent of world trade in telecommunications, at an estimated $650 billion, would fall under WTO purview beginning January 1, 1998, the date of implementation. 
The WTO telecommunications accord, signed by 69 countries in 1997, including 40 less developed countries, formalized the new regime in telecommunications; a hundred governments had joined by the end of 2008. Historically, telecommunications sectors were controlled or operated by national monopolies. The new regime allowed the sector to be governed by the global rules underlying WTO processes. Among other things, cross-national investments in telecommunications are allowed (or hastened, given that this process predates 1997), and trade in basic and many value-added telecommunications services is governed by free trade norms, both features backed by the WTO rules of transparency and Most Favored Nation (MFN) treatment. An important feature of the Fourth Protocol was the Reference Paper, which introduced regulatory disciplines to observe the WTO rules. [|Nicolaides (1995]:270) notes that “the GATS does not address the central obstacle to effective governance of the global information economy: the problem of regulatory fragmentation among national jurisdictions.” The Fourth Protocol allowed countries to make market access and national treatment commitments for their telecommunications sectors; the Reference Paper was perhaps the most important outcome of the negotiations in providing regulatory teeth for those commitments. The Reference Paper was ready by October 1995 and mostly reflected regulatory frameworks in the United States and the European Union, although its universal service (access for everyone) feature reflected significant input from countries like India. It was divided into six provisions: competitive safeguards, interconnection, universal service, public availability of licensing criteria, independent regulators, and allocation and use of scarce resources. It reflected in its language the general obligations of GATS, such as MFN and transparency. 
However, because it would have been difficult to amend GATS to accommodate regulatory scheduling, member states decided to append the Reference Paper as a set of additional commitments alongside those of market access and national treatment. While 69 countries signed the accord in 1997, 53 countries signed the Reference Paper appended to the Protocol. Currently, the WTO's Fourth Protocol is understood to be the major regime in telecommunications. Several well-known disputes settled through the WTO dispute settlement mechanism have further deepened the rules and decision-making procedures governing the regime. The ITU, the major institution that harbored the old monopoly regime, is now involved mostly with the technical standards and interconnection protocols governing the regime. As we will examine later, it has recently pushed for involvement in internet governance through encouraging the World Summit on the Information Society.

Changes within the ITU and Intelsat
As the thrust of the global telecommunications regime shifted toward liberalization and the WTO, major changes took place in the ITU and Intelsat. The ITU was initially seen as resisting liberalization, but the current view is that it eventually came around to supporting it; its member states, having signed on to the WTO agreement, could hardly have done otherwise than support the ITU as it shifted its stance. The ITU still remains the premier authority arbitrating interconnection protocols and frequency distribution and arbitrations, and it weighs in on prices and standards (see below). A major organizational restructuring also took place at its 1992 plenipotentiary conference to simplify its decision-making and allow it to adjust to the new regime. Its work was rationalized along three sectors: Telecommunication Standardization (ITU-T) took the place of the CCITT, Radiocommunication (ITU-R) took over from the CCIR and IFRB, and Telecommunication Development (ITU-D) was created to streamline the many activities of the ITU addressing the digital divide. Intelsat, meanwhile, is now a much weaker organization as a result of the regime change toward liberalization. As competitive private and regional satellite systems have developed, Intelsat has become one among many telecommunication satellite carriers in the world, although it remains the largest provider of fixed satellite services. The share of the United States had fallen to less than 20 percent by 2001 from the high of 61 percent at Intelsat's creation. In fact, most developed countries no longer think of Intelsat as especially important to their interests. Technological innovations in, and the laying of, undersea fiber optic cables have further eroded Intelsat's primacy in international telecommunications traffic, from almost 100 percent of the traffic in 1964 to single digits currently. 
Intelsat was privatized in 2001 after PanAmSat, a commercial competitor, lobbied the US Congress to pass the Open-market Reorganization for the Betterment of International Telecommunications (ORBIT) Act in 2000. In July 2007, Intelsat merged with PanAmSat. It now operates a fleet of 52 satellites in the GSO and generated revenue of $2.4 billion in 2008 ([|Intelsat 2009]), a fraction of total revenues in international telecommunications and video transmission.

Prices and Standards
Telecommunication prices and standard-setting have moved toward cost-based and market-based solutions, respectively, paralleling the general direction of the liberalized telecommunication regime. As noted before, telecommunication prices (or “tariffs,” as they are formally known) were decided according to historical settlement principles. Especially as regime change came about, the cost of terminating calls in many countries, mostly developing ones, was quite high compared to what competitive carriers in other countries charged their customers for those calls. For countries like the United States, this resulted in settlements deficits (calculated to be about $6 billion in 1997). Efforts by the United States during the GATS negotiations to move countries toward cost-based pricing proved unsuccessful. Consequently, the United States, via the Federal Communications Commission, issued its now famous “benchmarks” order for international settlements in 1997, which barred settlements above a certain level. This unilateral move, while severely criticized by other countries, did reduce international tariffs worldwide and jump-started talks (at the ITU and OECD) on cost-based pricing schemes. Pricing principles in areas other than telephony are already quite market driven: lease and resale of capacity over international networks are faceless transactions conducted via electronic bids (much like airline prices), and traffic over the internet is also governed by market-based pricing.

In order to ensure smooth international interconnection, standard-setting has been one of the chief activities of the ITU since the 1920s. As new technologies grew and others converged, standard-setting became increasingly complex. 
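The settlements arithmetic behind such deficits can be sketched briefly. Under the traditional accounting-rate system, the two carriers split an agreed per-minute rate (commonly 50/50), and the carrier originating more traffic pays for the net imbalance of minutes. The figures below are hypothetical, chosen only to illustrate the mechanics, not drawn from the essay's data:

```python
# Illustrative sketch of traditional accounting-rate settlements.
# Two carriers split an agreed per-minute "accounting rate" (often 50/50);
# the carrier that originates more traffic pays for the net imbalance.

def net_settlement(outbound_minutes, inbound_minutes, accounting_rate, split=0.5):
    """Payment owed by the originating carrier for its net outbound minutes."""
    net_minutes = outbound_minutes - inbound_minutes
    return net_minutes * accounting_rate * split

# Hypothetical example: a carrier sends 500m minutes, receives 100m,
# under a $1.00/minute accounting rate split 50/50.
payment = net_settlement(500_000_000, 100_000_000, accounting_rate=1.00)
print(f"Net settlement out-payment: ${payment:,.0f}")  # $200,000,000
```

With terminating charges far above cost in many countries, imbalanced traffic of this kind is how large bilateral deficits accumulated.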
As with regime change, users and carriers demanded smooth information flows for technologies including radio, television, satellites, computers, data networks, fiber optics, cellular phones, Third Generation (3G) wireless, and, more recently, Web 2.0 and cloud computing applications. Arbitrating among the powerful economic interests behind standard-setting was often hard, and it has always been a somewhat political process. The processes of frequency allocation and of determining slots in the GSO can themselves be viewed as standard-setting exercises. In fact, what comes out of the ITU is known as a “recommendation” rather than a standard, precisely because of the difficulty of reconciling conflicting constituency interests. The ITU also often has to confer with other agencies, such as the International Organization for Standardization (ISO), and with industry consortia in order to decide on standards; ISO may now even be more important than the ITU for information technology standards. Sometimes important standards are set by national or regional authorities (for example, the European Telecommunications Standards Institute) and then “internationalized.” Of late, standard-setting is increasingly becoming private, largely because of the slow pace of the national and international standard-setting agencies: it is easier for firms to decide on a standard among themselves than to wait for agency arbitration. Private standard-setting can vary from standards set by a dominant single producer (for example, Microsoft), to “standards wars” (for example, HDTV and cellular standards in Europe versus the US), to negotiated, consensual decision-making (as in many interconnection and open standards like UNIX and HTML).

Internet Governance
The internet has become enormously important for commercial and social purposes, and the many emerging forms of internet governance reflect a mix of private and public authorities at national and international levels. This section reviews the emerging arrangements first for internet governance and then for electronic commerce. The two are not mutually exclusive: electronic commerce depends on the internet, and the issue of domain names in internet governance is directly related to commercial trademarks. However, internet governance is now centered on one organization, while electronic commerce reveals a patchwork of arrangements spanning its many facets.

Internet Corporation for Assigned Names and Numbers (ICANN)
ICANN, located in Marina Del Rey, California, was founded in November 1998 to regulate internet domain names and the associated internet protocol addresses used for transferring messages. It is a private organization, overseen by a 19-member board of directors. Hailed as a model of self-regulation, it is sometimes seen as a major shift away from intergovernmental organizations serving as regime institutions. The corporation is housed in the US, whose government retains considerable oversight over it. However, pressure from the EU led to the creation of the Governmental Advisory Committee (GAC), which somewhat diluted the US government's insistence that the corporation remain totally private. Nevertheless, critics of US domination of ICANN continue to propose alternatives, especially under United Nations auspices. A closer look at this emerging regime indeed reveals the influence of powerful political and economic interests ([|Mueller 2002]). ICANN would not have been possible had the US government, through its Department of Commerce, not intervened to arbitrate the claims of rival coalitions seeking to assert their dominance over the internet. Such struggles can be traced back to 1992, but they became especially severe after the World Wide Web's popularization in the mid-1990s led to a proliferation of domain names. Network Solutions Inc. (NSI), a private firm, received a five-year contract from the US-based National Science Foundation to provide these domain name addresses. A rival coalition, centered around the Internet Assigned Numbers Authority (IANA), started with academics and engineers but, after a couple of failed attempts at establishing its cause, was able to bring a number of important players on board, after which it was called the International Ad-Hoc Committee (IAHC). The US Department of Commerce White Paper on domain names and IP addresses resolved the IAHC-NSI standoff, mostly legitimizing the IAHC-led coalition. 
Mostly at the behest of the European Commission, the White Paper also requested WIPO to set up a service to resolve domain name disputes, which WIPO did through its Arbitration and Mediation Center; the result is known as the Uniform Domain-Name Dispute-Resolution Policy (UDRP). The early history of ICANN's global functioning has been politically messy, even if the actual task of assigning domain names is comparatively easy. The direct elections of five directors for the At-Large Membership were seen by some as international democracy and by others as messy populism, and were soon discontinued. Critics also do not think that ICANN reflects bottom-up decision-making practices, as it claims to do; the US government holds a great deal of de facto decision-making power. The World Summit on the Information Society (WSIS) has been particularly important in questioning US domination. WSIS began in 1998 as an International Telecommunication Union initiative to examine digital divide issues. Quite soon, it became the forum for addressing the grievances of developing countries over being left out of domain name governance, along with a host of other issues – spam, child pornography, data privacy, freedom of speech – many of which went far beyond the ICANN mandate. The main demand of the international coalition, to which the EU lent support in mid-2005, was to bring ICANN under the United Nations. The US government and ICANN, supported by business groups worldwide, resisted these moves, and without the incumbents' support the moves eventually failed. Internet governance had begun to emerge as a salient issue in the planning and Prepcom meetings for the first WSIS summit, held in Geneva in December 2003, a move led by influential developing countries such as China, Brazil, India, and South Africa. The second WSIS summit, held in Tunis in mid-November 2005, pushed this further. 
The United Nations Secretary General created an Internet Governance Forum (IGF), led by Special Advisor Nitin Desai, to convene “a new forum for multi-stakeholder policy dialogue” reflecting the mandate from the WSIS processes ([|Internet Governance Forum 2009]). The IGF features a unique form of multi-stakeholder diplomacy, convening annual meetings and consultations among states, business, and civil society organizations. The fourth annual meeting will take place in November 2009 at Sharm El Sheikh, Egypt. However, critics dismiss the IGF as a talking shop and WSIS as ineffective in challenging ICANN's dominance. On the issue of internet governance itself, the US government remains opposed to considering any alternatives to ICANN, while the European Union tries to balance its business groups' support for ICANN against member states' varying levels of support for WSIS and the IGF.

Electronic Commerce
If capitalism depends on information flows, its electronic counterpart could not even have emerged without information networks. The emerging regime in electronic commerce reflects the overall rubric of the principles and norms of global liberalization in communication, but it also goes a step further in diffusing authority among several constituencies and organizations ([|Singh 2008]). Electronic commerce needs not just an information infrastructure but also an enabling set of sectors that facilitate the flow of commerce over the networks. It is for this reason that many countries and regions appoint e-commerce czars to coordinate the activities of the many sectors. In the United States, Ira Magaziner is believed to have played this role during the crucial period when the Clinton administration released its seminal report, “A Framework for Global Electronic Commerce,” on July 1, 1997. [|Figure 1] summarizes the three layers that are necessary for electronic commerce to take place ([|Singh and Gilchrist 2002]). As can be seen from the figure, the bottom layer, the information infrastructure, falls under the rubric of the telecommunications and internet regimes discussed above.

Figure 1 Three layers of the electronic commerce network

Many regional and international arrangements and negotiations are ongoing to provide rules and decision-making procedures for electronic commerce. Many examples can be given. Postal services are becoming competitive in developed countries. The WTO imposed a moratorium on customs duties on electronic commerce in 1998. Encryption, as both an engineering exercise and a political issue, is an international priority for ensuring the security of transactions. The Safe Harbor Agreement of 2000 concluded three years of negotiations between the United States and the European Union over data privacy issues. Interestingly, the agreement favors industry self-regulation, with minimal government oversight, as the means of protecting privacy. 
This speaks again to the development of private authority in international regimes. Others note that the de facto standard of data privacy is emerging from regional and national levels of data privacy governance in the EU, thereby questioning the importance or dominance of US-led standards ([|Newman 2008]). The events of September 11, 2001 brought to the fore security concerns regarding the data flows underlying electronic commerce, and produced the first challenge to the Safe Harbor agreement mentioned above. Although prior to 2000 the US government had argued for unfettered data flows, its position reversed dramatically after 2001. The challenge came from the new US requirement that airlines turn over vast swathes of passenger data, or passenger name records (PNR), to the newly created US Department of Homeland Security. The airlines complied; the EU hedged and worked out a negotiated compromise with the US in [|May 2004]. However, in a ruling on May 30, 2006, the European Court of Justice, ruling on behalf of the European Parliament and the European Data Protection Supervisor (EDPS), annulled the agreement between the Council of the European Union and the US as overstepping the Council's competency or jurisdiction. A new EU–USA agreement on PNR, signed on June 28, 2007, made PNR transfers permissible for law enforcement reasons. Nevertheless, there is considerable skepticism within the EU about this position. Data privacy is becoming an important aspect of internet governance, especially with the diffusion of surveillance and biometric technologies; one example is radio-frequency identification (RFID) tags, used increasingly in shopping malls, transportation systems, and passports.

Conclusion
This essay has described the international communication regime from the workings of the telegraph to the transactions of electronic commerce. Except for the declining sovereignty of nation-states, the principles of the regime have remained fairly stable in promoting international commerce, the global commons, and interconnection. However, regime rules and decision-making procedures have changed from the national monopolies that held sway until the 1980s to allow for a liberalized and competitive marketplace in communications. A few future trends may also be discerned from the features of the liberalized communication regime described above. First, as economic interdependence deepens, any residual monopoly features of the communication regime will fade away as users and firms seek seamless networks facilitating information flows. If internet governance is an indication of the future, the international regime will feature multiple stakeholders including states, firms, civil society organizations, and international organizations. Second, regime institutions now seem to be governed by authority diffused among several organizations rather than concentrated in one, such as the ITU, as used to be the case. The erstwhile international communications regime is now best described as a set of mini-regimes in old and new issue-areas such as telecommunications, standards, privacy, and surveillance. Third, there is now evidence that private authority will play a role in governance alongside governmental and intergovernmental authorities; the cases of private standard-setting and ICANN were described above. Fourth, international negotiations will become increasingly important in setting the rules and decision-making procedures for regimes: when rules cannot be made by fiat, negotiations matter, and this seems especially to be the case with emerging regimes. 
These negotiations also increasingly feature concerns regarding de facto US hegemony in issues such as internet governance, or challenges to US domination of marketplaces such as telecommunications. Fifth, evolving communication technologies allow for interconnections and collaborations across infrastructural platforms and geographic distances. This allows for a wide variety of players in any issue-area of the communication regime, rather than the vertically organized monopoly that used to be the case. There is a shift in the current literature away from the notion of regimes toward global governance. Whereas regime analysis tended to focus on global //institutions// in the context of rules and decision-making procedures, a global governance lens focuses on the //processes// underlying coordination, collaboration, negotiation, and problem-solving ([|Singh 2002, 2008]). Global governance also emphasizes the intersubjective element often overlooked in regime theory. In [|Rosenau's (1992]:4) words, governance is “a system of rule that is as dependent on intersubjective meanings as on formally sanctioned constitutions and charters.” Especially with the decline of power politics understood from the perspective of nation-states, global governance models will increasingly emphasize governance at the diffused level of socialized intersubjective understandings. At this level, power politics takes on a whole new dimension, as in Bourdieu's notion of “diffused power” or Foucault's notion of “governmentality,” wherein the structuring power relations are almost invisible but nevertheless preponderant. Future scholarship will need to attend to the evolving features of global governance and also answer another question that this essay has avoided: is there anything conceptually unique about international communication regimes, or are they qualitatively similar to regimes in other issue-areas such as security, the environment, and human rights? 
In answering the latter question, the role of technologies and networks, multiple stakeholders, and intersubjective understandings will be important. Increasingly, scholars will also need to address interstitial issues between communication and security regimes, along with the human rights concerns associated with surveillance, privacy, and the digital divide.

Kenneth Rogerson
==== Subject [|International Studies] » [|International Communication] [|Sociology] » [|Social Movements] ==== ==== Key-Topics [|information], [|information and communication technology (ict)], [|interest groups], [|protests] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Political advocacy is primarily the mobilization of ideas and people with the goal of influencing the thinking of policy makers or society, either to (1) promote a specific point of view or (2) enact policy, in the form of laws or programs, that benefits those ideas or people. Advocacy happens in many places, on many levels, and through different methods, often described with terms like grassroots movements, interest groups, lobbying, and social movements. Though the concept is often associated with democratic societies, these types of activities can happen in non-democratic societies as well. One assumption about advocacy is that democratic, pluralistic societies contain many voices that want to be heard, and that the political process should, in theory, provide as many avenues or forums as possible for those voices to be heard. In non-democratic societies, these activities traditionally challenge the status quo and take the form of protests, revolutions, and underground, clandestine activities.

Who Advocates for Change?
There are a variety of explanations for how ideas reach the ears of those who can make a change in a society. These explanations derive from multiple disciplines, comprising models of the political, societal, and economic factors that open or restrict the processes by which advocacy succeeds or fails. But first, it is important to ask who does the advocating for political and social change. One of the foundational theoretical discussions in this area came out of [|Mancur Olson's (1971)] work on how groups operate in the political process. Olson's basic premise is that groups are organized to pursue a common good or benefit: it is in individuals' interest to join groups that give them benefits, behavior described as rational or interest-maximizing. Another term for Olson's groups is “civil society,” which includes interest groups but also encompasses families, churches, and neighborhoods – associations that may not be political all the time. An interest group, by contrast, is inherently political: an organization that “becomes active in [the] political process and seeks to have an impact on public policy” ([|Hrebenar 1997]:8). An interest group represents a collection of individuals to the government and works for equitable access to public, or collective, goods. Such groups try to get the attention of policy makers to do something for their benefit; but, since they can be part of the policy-making process, they may be seen as policy makers themselves. Another important point Olson raises is the free rider problem: the larger a group gets, the easier it becomes for members to enjoy its benefits without contributing much toward them. A social movement is also a type of group that advocates for political change. The differences between interest groups and social movements lie in size and level of focus: interest groups usually focus on one particular policy or a very specific issue. 
Social movements exhibit greater geographic diversity and focus on issues that may require a broad range of policy changes; they are large communities working to change broad social practices and structures. In some cases, interest groups may be subsets of social movements. There are other terms for these types of groups: vested interests, special interests, pressure groups, and lobbyists (see [|Hrebenar 1997]:8). Some of these terms are used interchangeably and may not always be clearly defined, but all refer to societal groups that are advocating in some way for political, social, and/or economic change.

How Do They Advocate for Change?
Scholars studying social movements and interest groups over the past few decades have identified a number of factors that contribute to their success or failure. These models have a variety of names and emphasize different concepts as key to the movements' success or failure. One is the //classical model//, epitomized by the French Revolution, which others call disturbance theory (see [|Salisbury 1969]). Its characteristics are a strain in the political structure, such as infighting among political elites; a disruptive psychological state among those whom the strain affects, such as lack of food; and some type of social mobilization leading to change, such as riots in the streets and massacres of the wealthy. Another model is the //resource mobilization// model. Its conceptual flow begins with a closed system structure, often led by a group of political elites, followed by tactical responses using existing social resources such as funding, people, and leaders, leading to some type of change. This model is usually applied to research on small, targeted activities such as those of Mothers Against Drunk Driving ([|www.madd.org]), a coalition that advocates stricter penalties for driving under the influence of alcohol. A further model comes from [|Doug McAdam's (1982)] work on the civil rights movement. He called the advocacy of the 1960s in the US the “//political process//” model. It includes political opportunities, defined as a willingness in the political structure to listen to proposals for change; cognitive liberation, or having a strong, salient message; sufficient and appropriate resources; and a response from the opposition that lends some legitimacy to the advocacy, all leading to some type of change. 
More recently, [|Keck and Sikkink (1998)] have explained that transnational advocacy networks (TANs) are invaluable in linking domestic interest groups that may be stifled by the system in which they exist with others around the world who may have an interest or a stake in what they are doing. These TANs share information and services, exchange people and experts, and provide outside sources of funding that they hope will lead to policy change. The authors emphasize the importance of “information politics” or the ability to provide information to relevant audiences. Though they have phrased them in various ways, most scholars have included the following interrelated concepts and characteristics as vital to the success of political advocacy: access to resources (money, people, and time), good leadership, a sense of identity or common focus, and the opportunity to be heard. But each also includes a reference to a movement's ability to get its message to any number of constituencies – policy makers, opinion leaders, and decision makers; potential participants; or the public at large. The channel through which to accomplish this has traditionally been the mass media. New communications technologies have expanded the channels (media) through which these groups can do their work and distribute their messages.

International Communication and Political Advocacy
One big change that new information and communication technologies (ICTs) create for any political advocacy movement is a new window through which to see the world and be seen by it. Though websites and email are often used as the principal examples of these ICTs, they are not the only types: there are also cell phones, satellites, digital cameras, social networking sites, blogs, video sharing sites like YouTube, and others. Some of these are different technologies altogether; others are simply new uses of existing technologies, sometimes referred to as Web 2.0. Authors have attempted to place these technologies in the historical context of their development. [|Harold Innis's (2007)] seminal work on the political power of communication networks actually ends with the printing press, emphasizing that the tools used to connect people may simply be extensions of past communications and technological achievements. In his book //The Creation of the Media: Political Origins of Modern Communication//, Paul Starr picks up where Innis leaves off, arguing that the connection between political action and citizens “increasingly depended on […] media for access to the public's eyes and ears” through advances in information and technology ([|Starr 2004]:385). More recently, Elizabeth Hanson has placed ICTs in their historical development context by arguing that “one innovation leads to another, as each encourages experimentation to address its predecessor's limitations” ([|Hanson 2008]:38). The connection between political advocacy and ICTs is a fluid one. In //The Marketing of Rebellion//, [|Clifford Bob (2005)] notes that one reason some very local protests gained international notoriety was the combination of smart targeting of their messages (intelligent use of resources) with international audiences with whom the message would resonate (using appropriate channels). 
But, more importantly, there were similar protest movements that did not get much international attention: unexpected good timing can be just as vital as targeted messaging. With the caveat that ICTs are, for the most part, used for entertainment, their growth and development potentially add to both the number of voices and the number of channels for vocalizing them politically. Ideally, access to structures of political change is no longer limited to traditional political contacts but can now come from anyone, either at home or through a publicly or privately funded locale such as a library, university, or internet cafe. This provides people or groups with the opportunity to get their message out to a wider audience than before the internet. This characterization is what has come to be known as “internet optimism”: the belief that ICTs have the potential to affect society positively. There is also “internet pessimism”: the view that ICTs are simply tools used to solidify existing global inequities – in socioeconomic status, in racial and ethnic groups, and in gender, for example. Both of these attitudes have had a profound effect on how political advocacy manifests through these technologies. Internet optimists would say that previously unheard groups can finally be heard and their policy requests more seriously considered. Pessimists would argue that existing arms of power can use these new forms of communication to better monitor opposition activities and act to suppress them. This relationship has played out in two intersecting ways. First, there are a number of new channels and techniques for connecting and distributing a politically mobilizing message. Second, these channels and techniques operate at a number of different political levels: international, regional, national, and local (or subnational). 
While these characterizations may not capture every single example, they provide a working heuristic for understanding this complex relationship.

New Channels and Techniques
Across levels and issues, ICTs have provided new channels of communication: websites, email interaction, blogs, video-exchange sites, texting, and social networking sites, to name a few. Some of these have built on existing applications of technology and others are truly new ideas. There are also new techniques such as a different type of activism focused on technological tactics, mobilizing existing or new constituents, and fundraising.

Channels
Websites continue to proliferate, expressing the perspectives of many groups on their social, economic, and political status and the potential for change in that status. These range from sites advocating homes for the homeless ([|www.homesforthehomeless.com/]), food for the hungry ([|http://foodbankscanada.ca/main.cfm]), and better race relations ([|www.sairr.org.za/]) to others encouraging mobilization for better police coverage of certain areas of a city, changes in neighborhood activities ([|www.nettila.net/]), or the removal of a local judge. With access to the requisite equipment and training, these sites can be updated quickly to reflect the needs of the interested parties. Email distribution lists have slashed the time needed to get a message out to a large number of people. Email messages also travel much faster, and may be more reliable, than traditional mail or even fax to some places. The use of generic email hosting services such as Gmail, AOL, Hotmail, and Yahoo has broadened the possibilities for fairly discreet message exchange. The blogosphere, online personal commentary on innumerable topics, has provided a voice for a variety of politically and socially relevant subjects. Blog writers have targeted journalists who they felt were presenting insufficiently researched information, called out politicians for doing things they felt were wrong, and mobilized their readers to respond to political situations. In //Typing Politics//, [|Richard Davis (2009)] argues that blogs are becoming political players simply because politicians recognize that bloggers have audiences and, therefore, influence over them (see Chapter 5). Recognizing, again, that much of the venue is used for entertainment, video and photo exchange sites like YouTube, Daily Motion, and Flickr provide space for the exchange of information and the advocacy of a point of view. 
During the 2008 US presidential campaign, the candidates utilized this medium to reach potential voters. Bob Boynton studied this phenomenon and found that third-party Libertarian candidate Bob Barr had the most-viewed single video at the outset, even though the other candidates had more total videos, supporting the idea that these new channels provide space for more voices ([|Boynton 2008]:5). Cell phones and other wireless communication devices can host many of the applications already discussed, but one specific activity – texting – is unique to cell phones. While texting is akin to instant messaging or chatting (or the online activity of Twittering), it can be politically useful in its own right. In //Mobile Communication and Society//, Castells and his co-authors provide a summary of the research on the impact of cell phones. Not all of it is politically related, but there have been examples of texting in political mobilization: in January 2001, thousands of Filipinos participated in massive demonstrations to oust then President Joseph Estrada, and studies indicate that the gatherings were organized principally through cell-phone texting ([|Castells et al. 2007]:186 ff). Social networking sites are a more intricate form of online communication, moving past chatting, email, and simply visiting websites. Earlier incarnations of social networking sites were called chat rooms (also known as bulletin boards, instant messaging, discussion groups, and, at one time, USENET groups). Participants (or members) of social networking sites are able to find other like-minded individuals with whom they can exchange ideas, rant about others' actions, and, to a certain extent, flame (criticize on the internet) people or groups whom they see as responsible for problems. These forums are relatively anonymous, and, overall, participants use them to voice fairly strong feelings about a subject. 
The anonymity also means that participants do not necessarily have to agree with the other participants, but may pretend to do so in order to change ideas or foment dissension. One drawback to this proliferation of ICT channels is what [|David Shenk (1997)] has called data smog. In his book of the same name, subtitled “Surviving the Information Glut,” he argues that humans have a difficult time processing all the information available through technological sources. Another issue is the passive nature of the medium: users must seek out information themselves, and, often, the information they seek conforms to their existing interests and desires, something [|Andrew Shapiro (1999)] has called “narrowing our horizons” – the tendency to see only what we want to see. But this also means that activists who want to mobilize like-minded people to advocate for political change may be able to accomplish that goal more easily than before.

Techniques
In addition to new channels, new techniques have evolved to exploit the unique characteristics of the World Wide Web, such as hacktivism, mobilization, fundraising, and fact checking. Much of this activity requires a working understanding of the domain name system. Activists must know how to register domain names and be creative in choosing them. Some of this activity has come to be known as cybersquatting: buying a number of potential domain names (such as likely campaign site names) and then either selling them to the person or group concerned or creating parody or flaming sites. One effective tactic is to register domain names that are very similar to, or a common variation on, the name used by the target of the activism. One group that objected to Microsoft Corporation's business practices bought “[|mircosoft.com]” and posted negative material about the computer company's activities. Hacktivists are usually very technically literate people who use the internet and other technologies for politically motivated protests. The most visible form of their actions is the “denial-of-service” attack, which disables a website by flooding it with repetitive requests. Other manifestations include defacing websites by taking over web pages and replacing them with politically charged ones, stealing and possibly publishing sensitive information that was supposed to be secure, and posting watchdog websites that offer constant commentary on an organization's actions (see, for example, [|www.icann.org] and [|www.icannwatch.org]). New techniques for mobilization via ICTs are also emerging. [|Howard Rheingold (2002)] discussed this in his book //Smart Mobs: The Next Social Revolution//, describing the use of the internet and cell phones to gather people quickly and relatively anonymously for a variety of purposes. In the 2004 Spanish elections, there is some evidence that text-messaging influenced the outcome. 
In Spain, political demonstrations are forbidden in the 24 hours before elections. In this election, cell phone traffic increased 20 percent the day before the election and 40 percent on the day of the election – a virtual political gathering ([|Pfanner 2004]). The election outcome, the defeat of the Popular Party, surprised many observers. Though a causal connection cannot be made, the potential of wireless activity as a tool for political mobilization is worth exploring. In addition, the internet permits information to be posted and disseminated quickly and then taken down just as quickly. During the 2000 US presidential election, a group of activists followed the three presidential debates around the country, mobilizing protestors through quickly created websites whose URLs were www.(the day of the debate).org. Though the protests did not draw large numbers, some believe that participation was increased because of these sites. Other mobilization techniques have included encouraging mass emails through a website, providing contact information for the targets of the protest, and posting information that might persuade people to get involved. A third technique centers on resources, specifically fundraising. With the increasing effectiveness of security software, accepting money online is becoming more commonplace. This trend is not only national: the ease of credit card transactions – and the concomitant ease of foreign currency exchange – coupled with the willingness of credit card companies to provide consumer fraud protection, has made the possibilities for donating to a chosen cause global. One unique way of raising money on the web is thehungersite.com ([|www.thehungersite.com/]). Site creators have convinced some companies to advertise with them, and whatever the companies pay for the advertising goes toward food programs. 
The site encourages visitors to click on the advertising and to buy from the sponsoring companies. A fourth technique has arisen through blogs and social networking sites: citizens who monitor government and media perform their own evaluation of “official” sources of information. In September 2004, a US lawyer watched a CBS news story discussing documents which claimed that then president George Bush had not fulfilled his duty in the US National Guard. Within 24 hours he had posted evidence on his blog that the documents were fabricated. CBS news anchor Dan Rather backtracked on the story and eventually left the network ([|Devine 2005]:48). ICTs are empowering groups to check facts against other sources of information. Detractors claim that the shortcomings of technology, plus the monitoring and tracking ability of others with greater knowledge or experience, negate any positive impact ICTs may have. It is true that technology exists to monitor and even prevent “undesirable” communication, but ICTs – including uses yet to be discovered – have still provided movements with a platform they deem valuable, even if to a smaller rather than larger extent. As stated above, a variety of characteristics are required to obtain policy change. The information itself (the messages) and the channels through which it flows seem to work in tandem with the other requisites: access to resources (money, people, and time), good leadership, a sense of identity or common focus, and the opportunity to be heard. Indeed, this interrelationship may be necessary for a movement's success.
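The domain-name variation tactic described in this section (the “[|mircosoft.com]” example) amounts to generating near-miss spellings of a target name. A minimal, purely illustrative sketch of one such rule – adjacent-letter transposition – follows; the function and its scope are my own illustration, not anything the activists described here actually used:

```python
def typo_variants(domain: str) -> list[str]:
    """Generate adjacent-letter transpositions of a domain's first label --
    one simple family of the near-miss names cybersquatters register.
    Illustrative only: real typosquatting also exploits omissions,
    doubled letters, and lookalike characters."""
    name, _, rest = domain.partition(".")
    variants = set()
    for i in range(len(name) - 1):
        # swap characters i and i+1
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        if swapped != name:  # skip no-op swaps of doubled letters
            variants.add(swapped + "." + rest)
    return sorted(variants)

print(typo_variants("microsoft.com"))
```

Running this on “microsoft.com” yields, among other variants, the “mircosoft.com” name cited above.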

Categories of Political Advocacy
Traditional social science has examined advocacy in two broad areas: (1) social action and social mobilization, which can, but do not necessarily, lead to social movements; and (2) interest and lobbying groups. These can also be divided along international and national lines, though the distinction is sometimes necessarily blurred as groups target whichever political entities can help them achieve their goals. Social movements can be organized at differing levels and across issue areas. There are national movements, which take place within the geopolitical boundaries of an existing state, such as the Civil Rights Movement in the US in the 1960s; subnational movements, which begin as subnational but take on greater regional or international dimensions, such as Kurdish attempts to find a homeland or any of the conflicts in the Balkans during the 1990s; and international movements, which focus on issues not traditionally tied to a state, such as the abolition of land mines or the consequences of globalization. ICTs have had an impact on each of these types of movements.

National Movements and ICTs
Movements have a wide variety of purposes and goals. At the national level, movements are generally internal to an existing state, and their desired audience is the government of that state rather than the regional or international community. Many are overtly political – advocating a specific political party or position, ranging from the relatively calm, nonpartisan analysis of public policy ([|www.publicagenda.org]) to calls for secession ([|www.partiquebecois.org]) or political/geographical independence ([|www.rojavatv.org.uk]). Others are social – advocating education reform, improved welfare services, or better race and ethnic relations, sometimes focusing on specific regions, communities, or even neighborhoods. ICTs add to this process by providing an outlet for those who might not have one through traditional media channels. For many movements at this level, even though the target audience may be small, some technological prowess is needed and even expected. Regardless of the number of page views or responses, a well-designed and well-maintained website adds a sense of legitimacy to the movement, and connection through wireless communication is vital. Grass-roots movements to cut teen smoking or decrease violence on television, for example, use their websites to link to research that supports their position but that they could never have funded themselves. Because the target is often a local politician or policy-making body, these sites can be used for some very personal interchanges, with photos of the offending garbage dump or testimonials from local citizens.

Interest Groups and Political Platforms
One large subgroup in this area is the use of ICTs by political interest groups and as a tool for political parties. Throughout the 1990s, ICTs became an extremely politicized tool for elections, campaigns, and policy agenda formation. By the beginning of the twenty-first century, most candidates and political parties in countries where elections are considered democratic had regularly updated websites. In addition, countless interest groups, think tanks, and grass-roots organizations posted their positions, and media websites gathered and analyzed election-related information in unprecedented scope and depth. There was also an explosion of parody sites – web pages that made fun of candidates and positions, some overtly and others subtly. And, finally, there was a proliferation of misinformation distributed via technology. One of the most discussed examples was an email circulated during the 2008 US presidential campaign claiming to have evidence that Democratic candidate Barack Obama was Muslim. Even though Obama always insisted he was a Christian, information circulating via technology has an impact: “The rumors and misconceptions are under the radar, but their sway with voters shows up in the polls” ([|Vaughan 2008]). Though the complete, often complex, effects of the internet and other technologies are only beginning to be systematically studied, there is anecdotal evidence of their impact on democratic processes. For example, some scholars credit the internet for the gubernatorial victory of third-party candidate Jesse Ventura in the US state of Minnesota. Though Ventura was a virtually neglected candidate for much of the race, there is evidence that his websites, coupled with his name recognition as a professional wrestler, were instrumental in mobilizing general support and eventually votes. 
US presidential candidate Barack Obama announced his vice presidential running mate via text message – but only to people who had signed up to be notified through his campaign website ([|Stelter 2008]). One area that has been more of an issue in the US, but is also emerging in other democratic countries, is mobilizing the electorate to actually get out and cast votes. In the 2004 Spanish election discussed above, turnout at the polls was 77 percent, up from 69 percent in the previous election. Relatedly, experiments with voting over the internet have been somewhat successful, and the possibilities for widespread use of this technique are on the horizon. But the internet has also been used as a forum for voter motivation. In the US, for example, the nonprofit “Rock the Vote” campaign is designed to inspire a younger generation to be more involved in politics ([|www.rockthevote.org]). Part of its success is its partnership with the global media organization MTV. Even between elections, the movement has used its website and other media to keep youth politically engaged.

Subnational Movements and ICTs
One of the unique characteristics of subnational groups is that they seek international recognition – and, they hope, resources and legitimation – for their cause. ICTs provide methods for reaching this goal by providing a platform to get the message out, sometimes in real time and usually through numerous sources; by uniting small, disparate populations that may have similar goals; and by focusing them on the situation in one place. One of the earliest, and most cited, examples of how the internet was instrumental in this process was the rebellion in the Chiapas region of southern Mexico in 1994. By 1996, the Zapatista rebels were creating web pages and sending emails to the Mexican government demanding protection against the negative effects of neoliberalism on the rural poor. They also used the space to post essays and poetry about the possibility of a “new” democracy in Mexico. Across the internet, they issued a global call for a forum on the issues they were facing and, in July 1996, 3000 grassroots activists from 42 countries met to discuss the situation (see [|Bob 2005]). A second example is the Kosovo war of the spring of 1999, sometimes referred to as the “Internet War.” The Kosovars spent much of their time trying to stay online (principally via online radio) in order to broadcast their situation to the world. One technique was the online diary, a precursor to the blog: young people from Kosovo would recount horrific experiences and post them on websites or send them out to the world via mass emails. These came into the hands of journalists and policy makers, who used them liberally to support intervention in the area. Another type of organization is the Chinese religious movement Falun Dafa. Li Hongzhi, the leader and founder of Falun Dafa ([|www.falundafa.org]), was forced into exile in the US. He uses the organization's website not only to post information, but also to mobilize followers. 
Some believe that an unauthorized demonstration of more than 10,000 people in Beijing was possible because of internet access. The Chinese government has claimed that Falun Dafa is a doomsday cult that advocates superstition, provokes disturbances, and threatens social stability. It has banned websites about Falun Dafa and installed software that attempts to block internet users in China from reaching Falun Dafa sites based elsewhere. In addition, it has created an anti–Falun Dafa site in an attempt to counteract the movement, and Chinese officials have been known to spam, hack, and put viruses on sites that contain Falun Dafa information.

International (Trans-Border) Social Movements
Truly international social movements usually cut across issues more than across geographical or geopolitical boundaries. Groups that have mobilized around certain issues have had varying degrees of success in using ICTs to promote their agendas. Activists have been able to use ICTs around a number of issues: human rights, religion, political oppression, social problems such as hunger and literacy, war and peace, the environment, and health. These are different from Keck and Sikkink's TANs, which are defined as global networks that aid a group within a country. Some characteristics of ICTs provide added value for international movements. One is anonymity, the ability to hide behind technology. A second, seemingly paradoxical, characteristic is the power of the internet to reach large numbers of people and become known for one's work. Third, as discussed, ICTs can function as a mobilizing agent just as they do at the national level, but the “governments” which are the focus of attention are international organizations. Though the technology exists to track down IP address owners and server locations, some movements are able to capitalize on the sense of anonymity the internet can offer. One group that has made its mark on the internet is the environmental movement known as the Earth Liberation Front ([|http://earth-liberation-front.org/]). There is no leader and no main office. All communications go through a press officer who says he receives all communiqués anonymously and distributes them via the website and other traditional media channels. Those who want to participate in the front's activities can go to the website and learn how to accomplish specific tasks like “Setting Fires With Electrical Timers.” If there is interference from governmental authorities, the website provides guidance on what to do: “If an Agent Knocks – Federal Investigators and Your Rights.” The web keeps the movement together in a virtual way. 
Alternatively, others want the opposite of anonymity: to be known by as many people as possible for what they do. One of the most cited examples is Nobel Peace Prize winner Jody Williams's effort to eradicate landmines. Sharing some characteristics with the Earth Liberation Front, the International Campaign to Ban Landmines (ICBL, [|www.icbl.org]) also has no front office, but is an association of independent NGOs working around the world. A description of Williams at the Nobel website acknowledges that something of a “mythology” has arisen that what made the ICBL so unique was its reliance on electronic mail; though not the sole factor, it was an important one. A third benefit is the ability of individuals and groups to mobilize around issues addressed in international organizations. One of the most visible examples has been the protests against the World Trade Organization at the end of the 1990s and the early 2000s, many of which were promoted through the internet. Another, less well-known, example is the defeat of the Multilateral Agreement on Investment (MAI) in the Organisation for Economic Co-operation and Development (OECD). The agreement was intended to develop rules on how member states treat foreign investors. A broad coalition of NGOs, though not very well funded, was able to use the internet as a strong component in ensuring that the agreement was not passed.

Effectiveness of ICTs in Political Advocacy
For those who study politics and the relationship between the governing and the governed in the context of new technologies, the differing expectations of internet optimists and pessimists have come to be called the mobilization and reinforcement theses. Briefly, the //mobilization thesis// states that use of ICTs will attract those who, for other reasons, are unable, or find it difficult, to make connections in the offline world; these people or groups then organize with the goal of changing a specific policy. [|Richard Davis (1999]:175) states, “This new tool [ICTs] will reinvigorate the public's interest […] since people will see the potential for acquiring information and expressing opinions.” The //reinforcement thesis// claims that all ICTs do is encourage greater activity by those who would already be involved in politically oriented action. Pippa Norris puts it this way: “The more skeptical perspective suggests that online resources will be used primarily for reinforcement by those citizens who are already active and well connected via traditional channels. […] But this function continues to dash the hopes of those who believe that the internet should facilitate a more deliberative or direct form of democracy” ([|Norris 2001]:218–19). Norris ultimately concludes that the reinforcement thesis reflects the reality of the political process. Some scholars have come down on both sides. [|Bruce Bimber (1998]:158) writes: “There are many theoretical and empirical reasons to doubt a simple and direct connection between changes in information and communication technology and the political behavior of the public.” He continues that “this does not lead me to reject the idea that the Net will have significant effects on public life,” such as decentralizing media organizations' control over the flow of news or opening up government to more public scrutiny. 
One criticism of the research on these two theses is that it has, for the most part, been limited to a national context, though other scholars have been attempting to broaden the perspective. [|Anna Greenburg (1991)] observes that the analysis makes sense only at the individual level. She describes how the two hypotheses might look through the lens of social movement theory, which focuses on groups: the theory points to the central role of institutions, and groups and organizations are a source of “pre-existing communications networks” for which the internet can serve as a channel (1991:96). In summary, ICTs are not the only thing political and social activists need to succeed, but they can add dimensions to the existing assets that groups use to achieve their goals. At the same time, there are trade-offs: groups become more visible and, sometimes, more vulnerable to scrutiny and security breaches. Even though it may be some time before large numbers of individuals around the world are connected to ICTs, groups of activists, as they pool resources, have a good chance of exploiting the potential of the internet and other technologies in the near future. In //Information and American Democracy: Technology in the Evolution of Political Power//, [|Bruce Bimber (2003]:191) finds that ICTs may be most influential at the very local level, with small, often ad hoc, groups, because of their capacity for “speed, opportunism, and event-driven political organization.” The marriage of ICTs with political and social advocacy breaks down not only traditional geopolitical boundaries but disciplinary and conceptual boundaries as well. Much of the work on political advocacy comes from sociology, and international relations scholars have adapted it to better explain what they see. 
The inclusion of international communications has enriched the understanding of how, when, where, and why political advocacy is or is not effective.

Milton Mueller
==== Subject [|International Studies] » [|International Communication] ====
==== Key-Topics [|communication], [|governance], [|Internet], [|networks], [|technology] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Nothing brings together studies of international relations and communication more completely than the internet and the problem of its global governance. The world's convergence on the internet protocols for computer communications, coupled with the proliferation of a variety of increasingly inexpensive digital devices that can be networked, has created a new set of geopolitical issues around information and communication. These problems relate not only to the management and control of the internet itself, but also to a broader set of public policy issues, such as freedom of expression, privacy, transnational crime, the security of states and critical infrastructure, intellectual property, trade, and economic regulation. Political scientists and IR scholars have been slow to attack these problems. This is partly due to the difficulty of recognizing governance issues when they are embedded in a highly technological context. One cannot assess, for example, the real meaning of the ongoing debate over political oversight of the root of the domain name system unless one understands something about the architecture and technical function of domain names. Another barrier to understanding is that the institutions and processes surrounding the internet are not the established intergovernmental venues IR scholars learned about in school. They are new ones that developed, or are still developing, around the internet: the Regional Internet Registries, the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force (IETF), and the Internet Governance Forum. Insofar as the political science literature has noticed the internet at all, it tends to focus on the //use// of the internet by political actors. While that is an important area of inquiry, internet governance studies look at the internet more as an //object// or //target// of political activity than as a new tool for engaging in generic politics. 
In other words, the field of internet governance examines how international politics are fostered by contention over the //substantive policy issues raised by the growth of the global internet itself//. Businesses, interest groups, governments, and civil society activists all strive to shape the internet's availability, cost, openness, freedom, privacy, messages, and other aspects of its performance or structure. Because the internet constitutes a critical part of the infrastructure for a growing digital economy, internet governance becomes an increasingly high-stakes arena for political activity. Of course, internet as tool of political actors and internet as target of political activity overlap, as transnational policy networks have formed around internet governance which often rely on online methods. If IR and political science have only begun to awaken to this field, it is also true that the technical experts and technology law specialists involved in internet governance often have a less than adequate appreciation of the basic international relations or political science issues with which they are engaged. They often fail to link problems of internet governance to what is known about global governance in other domains. Too often, they tend to consider the internet //sui generis// and fail to draw appropriate lessons from other sectors, even ones with significant parallels and isomorphism, such as environmental governance, security studies, and human rights. In short, internet governance is an area that poses new problems but can nevertheless benefit from what is known about older and existing global governance problems in other contexts. This essay will begin with a definitional discussion of the internet and internet governance. 
It will then position the IG debates inside the broader topic of information and communication policy, highlighting its close relation to, and evolution out of, debates over digital convergence, telecommunications policy, and media regulation. From there it will move to a topical map of the field, focusing on the specific debates and discussions that have taken place across disciplines such as law, political science, economics, and some of the technical literature.
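Because the debate over political oversight of the DNS root recurs throughout this field, it may help to see concretely why the root matters: domain names resolve right to left through a strict chain of delegations that begins at the root zone. The following toy sketch is my own illustration, not anything from the source, and models only the order of delegation, not actual DNS queries:

```python
def delegation_chain(domain: str) -> list[str]:
    """List the zones a resolver consults, outermost first, to resolve a
    domain name. Whoever controls the root zone controls the first link
    of every such chain -- hence the political stakes of root oversight."""
    labels = domain.rstrip(".").split(".")
    chain = ["."]  # resolution starts at the root zone
    for i in range(len(labels) - 1, -1, -1):
        # each step appends one more label, moving right to left
        chain.append(".".join(labels[i:]) + ".")
    return chain

# The root delegates "org." to the .org registry, which delegates
# "icann.org." to ICANN, which names its own hosts.
print(delegation_chain("www.icann.org"))
```

Each zone in the chain is operated independently, which is precisely why a single point of coordination at the top became such a contested institutional question.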

The Internet
The internet is not a hardware standard or a physical infrastructure. It is a set of software instructions (known as “protocols”) for transmitting data over networks. Contrary to the popular myth that the internet was designed to survive a nuclear war, the protocols were designed to facilitate the movement of data across independently managed networks and different physical media ([|Abbate 2000]). The internet protocols work on wireless and wired networks, on copper and fiber networks; they can be used as the underlying networking platform for almost any kind of higher-level software application, such as the World Wide Web, word processing, streaming video, voice communication or games. Indeed, it is actually a misnomer, though one that is probably impossible to eradicate now, to speak of “the” internet. Internetworking is really a //process// that occurs among many networks, not a single thing. Additionally, like many contemporary ICT standards (e.g., computer operating systems, the major application suites, mobile phone handset specifications), TCP/IP is transnational if not completely global in scope. Global compatibility has always been the objective, and one of the key norms, of the technical community that developed the software and standards.
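The medium-independence described above comes from layered encapsulation: an application's data is wrapped in a transport-layer segment, which is wrapped in a network-layer packet, while the physical medium is handled entirely below IP. A schematic sketch of this layering using plain dictionaries (field names are simplified illustrations, not real packet formats):

```python
def encapsulate(app_data: bytes, src_port: int, dst_port: int,
                src_ip: str, dst_ip: str) -> dict:
    """Toy model of TCP/IP layering. The link layer (copper, fiber,
    radio) is deliberately absent: IP does not specify what carries it,
    which is why internetworking runs over almost any medium."""
    # transport layer: ports identify the conversing applications
    segment = {"src_port": src_port, "dst_port": dst_port, "data": app_data}
    # network layer: IP addresses identify the conversing hosts
    packet = {"src_ip": src_ip, "dst_ip": dst_ip, "payload": segment}
    return packet

# A web request: HTTP data rides on TCP (port 80), which rides on IP.
pkt = encapsulate(b"GET / HTTP/1.1", 49152, 80, "192.0.2.1", "198.51.100.7")
```

Nothing in the packet names the wire underneath, which is the design choice that lets the same protocols span wireless, copper, and fiber networks.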

What is Internet Governance (IG)?
The term “internet governance” first came to prominence between 1996 and 1999, when it became associated with a vital but relatively narrow set of policy issues related to the global coordination of internet domain names and addresses ([|Kahin and Keller 1997]; [|Kahin and Nesson 1997]). The discussion centered on the question of who would control a global, centralized institutional framework to coordinate domain name and address assignment, and how it would be structured. The encounter with those problems culminated in a notable institutional innovation, the Internet Corporation for Assigned Names and Numbers (ICANN), and thus many associate the concept of internet governance with ICANN. ICANN later became the provocation for international clashes over the US role and the position of state and nonstate actors in internet governance. The vehicle for this clash was the United Nations World Summit on the Information Society (WSIS) from 2002 to 2005. We will have more to say about the implications of WSIS later; in this definitional discussion the point is that WSIS altered the prevailing definition. It led to the creation, in 2004, of a UN Working Group on Internet Governance (WGIG) that was charged with developing a working definition of the term. The WGIG succeeded in expanding the meaning of //internet governance// beyond ICANN, applying the term to any and all “shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the internet.” The definition obviously drew on [|Krasner's (1983)] canonical definition of international regimes; but, reflecting the enlarged role of nonstate actors in managing the internet, the definition noted that these shared processes involve not just governments but business and civil society as well. See [|MacLean (2004)] for an edited collection of papers from practitioners and scholars that were fed into the WGIG process in an attempt to come up with a definition. 
See [|Drake (2005)] for an edited collection of articles by the people who served on the WGIG. The WGIG/WSIS definition ratified the position of nonstate actors in internet governance and put practically all of the traditional problems of communication and information policy within its frame. Indeed, in the early stages of the WSIS process, definitional debates centered on the distinction between a “narrow” definition that encompassed only ICANN-related functions, and a “broad” definition that seemed to include anything and everything related to the governance of information and communication technologies, both international and domestic. Both extremes miss the mark. Confining concepts of internet governance to ICANN was arbitrary, driven more by concrete historical associations than by a coherent concept. Many other governance processes affect the internet, such as standardization bodies, World Intellectual Property Organization treaties, law enforcement activities related to cybercrime and so on. On the other hand, any attempt to stretch the term internet governance to include things like the construction of physical telecommunications infrastructure, spectrum management, open standards, e-government and the like is simply based on an uncritical and unhelpful attempt to conflate all forms of information and communication technology governance with the internet. A more precise definition would characterize internet governance as //collective action by governments, civil society and/or the private sector operators of the networks and services connected by the internet, to establish global agreements about the standards, policies, and rules of conduct governing communications that rely on the TCP/IP protocols//. In other words, internet governance includes only those technical, legal, regulatory, and policy problems that arise as a direct consequence of the involved parties’ mutual use of the internet protocols to communicate. 
Within that framework, three distinct types of governance functions have been identified ([|Mathiason et al. 2004]). They are: (1) technical standardization; (2) resource allocation and assignment; and (3) policy formulation, policy enforcement, and dispute resolution. Each function is characterized by different processes, requires different kinds of expertise and different methods of enforcement, and is often carried out by different organizations. It clarifies the analysis greatly to keep the three functions distinct.

The Internet and Communication-Information Policy
There has always been a domain of public policy focused on communication and information, and this domain has always had international aspects. To appreciate the importance of internet governance, it is first necessary to understand the nature of this broader policy domain and then to understand how the internet has transformed it. Communications and information policy refers to the role of laws, regulations, and public institutions in shaping the deployment and use of communication and information systems. The period from the 1970s to the first decade of the twenty-first century is distinguished by a revolution in information and communication technologies. As this revolution has progressed, the boundaries of communities and polities have been redefined, new industries have arisen and older ones declined, laws and regulations have been rewritten, cultural identities and repertoires have been altered, and economies and organizational capabilities have been transformed. International relations theorists have of course attended to the new importance of this policy domain ([|Keohane and Nye 1998]; [|James N. Rosenau and Singh 2002]; [|Braman 2004]). Within this broader domain there is a set of longstanding issues and problems that have defined the policy agenda for older media as well as the internet. These would include:
 * •  freedom of expression, censorship, and content regulation;
 * •  privacy and data protection, state surveillance of the population, and the technologies and institutions for establishing identity;
 * •  copyright, trademark, and patent protection;
 * •  cultural policies regarding subsidization of education, media content production, and the arts;
 * •  competition policy, economic regulation, and trade policy.
In addition to these, there are issues and problems that are unique to the internet and/or have been added to the policy dialogue by its emergence:
 * •  cybercrime and cybersecurity, including internet-based information warfare;
 * •  critical internet resources (domain names and IP addresses);
 * •  the relationship between technical standards, software “code,” and governance.

Why Internet Governance is Interesting – and Hard
The internet's emergence as the dominant platform for global communication has been a disruptive force in communication-information policy. Aside from posing new issues, the internet has dramatically altered the context within which the classical set of communication-information governance issues is resolved. Three reasons for the exceptional impact of the internet on communication-information policy can be identified. One is that operational control over the internet is highly distributed and decentralized. The internet protocols were intended to support a network of networks; i.e., they were designed to make it easy for many different, independently managed networks to interoperate. There are now about thirty-five thousand distinct “autonomous systems,” or independently managed networks, connected to it (a number that is still increasing). Moreover, the billions of devices connected by these tens of thousands of networks are often intelligent and programmable in and of themselves; thus, they can make use of networks creatively and behave in unanticipated ways. The level of coordination and complexity required for effective governance thus seems to be orders of magnitude higher than before. Related to this is the still-rapid pace of technological change. As noted before, technical structures strongly influence policy and regulation in this domain. Thus, efforts to control or regulate the internet may produce a sequence of countermeasures to elude control or exploit new vulnerabilities, producing what is sometimes called an “arms race.” In this respect the internet represents an extreme case of the “complex interdependence” heralded by theorists of globalization ([|Keohane 1988]; [|Keohane and Nye 1998]). 
The governance problems posed by the large number of autonomous networks and devices are compounded by the fact that the borders of these networks and authority over their administration do not align with the boundaries of the state. Here is a second reason why the internet has proven to be disruptive of established systems of governance. There have, of course, been cross-border communication technologies for centuries. But the internet is not just cross-border; it is truly global in three unique respects. First, its standards and identifier resources were defined and implemented without regard to national territories or jurisdictions. (The delegation of country code top-level domains is an important exception to this claim – but it is the exception that proves just how significant the alignment of internet domains with national authority could have been and how significant the misalignment was where it was not maintained. See the discussion of ccTLD research in the “Topical Map of the Literature” section, below.) Second, software-based protocols create a virtual space that operates independently of geographic limits. The cost structure of internet infrastructure is not as distance-sensitive as it was in earlier times. Finally, global liberalization of the telecommunications industries has created transnational enterprises and interconnection arrangements. In telecommunications governance, we started with local, regional and national networks and gradually achieved global interconnection and freer trade in services via negotiations among states and national operators. In internet governance we moved in the opposite direction, starting with global compatibility among all users of the internet protocols by default, and then tried to carve out various forms of national or local control by technical, legal, and regulatory means. A third reason why the internet has been disruptive is encapsulated by the term //digital convergence//. 
Digital convergence refers to the absorption of all media forms by networked digital computers. Voice, video, and data communications used to be distinct technologies, operated by separate industries under separate legal and regulatory regimes. As communications networks and information formats have become digitized, they have all converged on the internet protocols. The internet is now the support system for our post office, television and radio broadcasters, telephone networks, bookstores, libraries, government services, and retail shopping malls. It provides a single platform for multiple functions once performed by different technologies operating under distinct legal and regulatory regimes. It is also the site of newer, unprecedented media forms such as social networking sites, chat rooms, and peer-to-peer file sharing. Thus, convergence forces us to reassess the applicability of older models of regulation and governance, and often leads to institutional change.

The Internet and Global Governance of ICTs
Although the internet represents a historical disjunction, it is important to situate it within broader international trends in the communication industries. Two parallel processes that became evident in the 1980s set the stage for the current problems of global internet governance. One was the aforementioned liberalization of the telecommunications industry, which paved the way for the spread of a distributed and free internet. The other was a concerted effort to globalize the protection of intellectual property rights, which set in motion one of the primary political drivers of efforts to regulate and control the internet. Both seemed to emanate from the United States, and both came to center on trade policy. From the 1980s on, the US and its allies liberalized their domestic telecommunications. These reforms were extended internationally by shifting the rulemaking power for telecommunications away from the International Telecommunication Union (ITU) toward a new institution, the World Trade Organization ([|Cowhey 1990]). The ITU was the world's oldest international organization and was dominated by national telephone monopolies in Europe and by protectionist developing countries. The concept of “trade in services” became the rationale for opening international telecommunication markets to competition ([|Drake and Nicolaidis 1992]). In 1997 an agreement was reached on a sweeping free-trade pact in basic telecommunication services. Concluded in February 1997, the WTO Agreement on Basic Telecommunications Services (BTA) is an annex to the Fourth Protocol of the General Agreement on Trade in Services (GATS). It was implemented on February 5, 1998. Classified as a fully liberalized “information service,” the internet thrived in the new environment. This occurred only a few months after a trade agreement on information technology equipment, the Declaration on Trade in Information Technology Products (ITA). 
The ITA commits participants to eliminating duties completely on the IT products covered by the Agreement. Developing country participants have been granted extended periods for some products. ITA participants account for about 97 percent of world trade in information technology products. During roughly the same period as the spread of telecommunication liberalization, intellectual property protection was also linked to the trade regime ([|Sell 1998]). The 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement was the culmination of a concerted effort by drug companies, the software industry, and motion picture producers ([|May 2000]; [|Drahos 2003]). It established minimum standards for many forms of intellectual property protection and strengthened global enforcement against countries or actors who deviated from those standards. Aggrieved IPR owners or their governments could invoke the WTO's authoritative dispute resolution process to enforce their rights. The globalization of intellectual property protection set the stage for a running battle between copyright and trademark interests and the internetworking protocols that spread virally over the newly liberalized telecommunications infrastructure. The dialectic between intellectual property protection and open, global networks capable of facilitating peer-to-peer sharing of media has been one of the chief shaping forces of internet governance. The rise of the internet and a liberalized telecommunication infrastructure tilted the playing field against those seeking control over the distribution and use of digital content. Trademark concerns played a major role in the formation of ICANN, and copyright protection played a major role in debates over notice and takedown of web content. 
The tension is discussed further in the section below on “Intellectual Property in Digital Media.” It has also played a major role in fomenting debate and new laws or institutions around the problem of identification and surveillance, e.g., when copyright interests push for surveillance of users by internet service providers.

Topical Map of the Literature
Scholarly discourse about internet governance is already stimulating debates about fundamental issues in political science and IR. The growing literature on internet governance deals with questions such as:
 * •  To what extent are society and its institutions shaped by technology, and to what extent are technological systems subordinate to cultural, political, and economic forces?
 * •  Is the internet leading to new models of governance, involving multiple stakeholders and less hierarchical, networked relations, or is it simply the latest in a long line of global governance challenges? More broadly, what constitutes institutional change and innovation in the international arena and how would we recognize it?
 * •  What role should territorial jurisdiction play, if any, in the virtual space of digital communications?
 * •  Do powerful new communication technologies overcome collective action problems and thereby alter the relationship between states and civil society at the transnational level?
 * •  Is the US the hegemon of the global internet governance regime, and if so, how much benefit does it derive from that special position and what impact does its dominance have on the global order and industrial policies?
 * •  Is information warfare conducted through the internet a new strategic high ground or a peripheral, overly hyped arena?
There are multiple disciplines and fields involved in this discourse. A disproportionate amount of the early literature comes from the field of law, but the disciplinary mix has become more balanced over time. In addition to law, international relations, and political science, scholars in communication and information studies often bridge the gap between the technical disciplines and the more traditional social science disciplines. Contributions also come from sociology and economics and the various flavors of institutionalism associated with those disciplines. Because of the close relationship between governance and technical knowledge in this field, computer scientists and engineers often make important contributions to internet governance discourse.

Internet vs. Territorial Jurisdiction and the Nation-state
One of the earliest and most fundamental themes in the IG literature is whether the internet's capacity for transnational, borderless communication does or should undermine national control and sovereignty. David Johnson and David Post, two American law scholars, were among the first to call attention to the misalignment of national boundaries and internetworking ([|Johnson and Post 1996; 1997]). Focusing on the internet's operational reliance on voluntary transborder cooperation, they developed an argument for a “decentralized, emergent law” as an alternative to traditional hierarchical, state-centric control. In their argument, the basic unit of governance is the network operator and, indirectly, the communities organized around them; the basic tool of governance is the decision to connect or disconnect. Johnson and Post's argument is often confused or equated with a cruder, technological-determinist argument that the internet //cannot// be controlled or is inherently resistant to state control. But the emergent law argument was normative, not positive: it asserted that the internet //should// follow a new model of nonnational governance, not that it necessarily //would// be governed in that way. Their arguments mirrored the concept of networked governance in political science, which may have developed independently ([|Scharpf 1993]; [|Kooiman 2003]; [|Sørenson and Torfing 2007]). Indeed, later on [|Johnson, Crawford, and Palfrey (2004)] drew explicitly on concepts of peer production ([|Benkler 2006]) to argue for a form of networked governance that relies on participants’ unilateral decisions to disconnect and isolate bad actors and to trust and connect with beneficial partners. They advanced this as a more flexible and effective form of governance than either interstate agreements or a supranational world government. This antinational, networked governance argument generated a reaction. 
The opposing views came from the field of law as well. Law professors such as Jack Goldsmith and Joel Reidenberg attacked the concept of cyberspace as its own jurisdiction as “cyberanarchy” or a “denial of service attack against the legal system” ([|Goldsmith 1998]; [|Reidenberg 2005]). A few political scientists joined in this critique. Daniel Drezner, for example, challenged the assumption that the internet leads to a decline in state autonomy relative to other global actors ([|Drezner 2004]). Later, Goldsmith and Tim Wu (another law scholar) mounted a comprehensive and relatively popular attack on the idea of the borderless internet ([|Goldsmith and Wu 2006]). Their case drew on growing knowledge of China's and other authoritarian states’ efforts to arrest internet-based dissidents and to censor the internet in various ways ([|Kalathil and Boas 2003]). A useful and important empirical body of literature has since grown up around the analysis and identification of efforts by national governments to block and filter internet content. State censorship of the internet involves the systematic use of technological measures to make access to internet content conform to the regulations of the national state. A milestone in this confrontation was a French litigation against Yahoo! for making Nazi memorabilia available on its service ([|Reidenberg 2002]); another was the growing documentation of the methods China used to block and filter website access ([|Zittrain and Edelman 2003]). A project based in Ronald Deibert's lab at the University of Toronto has developed technical tools and methods for the systematic, global analysis of internet filtering by states ([|Deibert et al. 2008]). More important than the argument that the internet //could be// subject to political forms of control was the cyber-conservatives’ argument that human society never fully escapes the need for some kind of coercive authority to enforce basic rules against stealing, fraud, and violence. 
But what kind of rules, and who makes them? Unlike Johnson, Post, or the theorists of networked governance, cyber-conservatives have been unable or unwilling to consider new forms of governance. Their work tends to imply that the internet has provoked no institutional innovation and raises no new issues in international relations. Goldsmith and Wu flatly claim, for example, that “Public goods and related virtues of government control of the internet are necessary across multiple dimensions for the internet to work, and as a practical matter only traditional territorial governments can provide such public goods.” The claim that //only// traditional territorial governments can solve the public goods problems of internet governance is a strong one, yet the argument has a hard time accounting for the institutional turmoil surrounding the rise of the internet. The creation of ICANN is especially problematic for it, and it is to ICANN that we now turn.

ICANN, WSIS, and IGF
A substantial body of internet governance scholarship concentrates less on the debate over the role of nation-states per se and more on the ways in which the problem of governing the internet is actually generating new institutional forms and methods of governance. The Internet Corporation for Assigned Names and Numbers (ICANN) is the inevitable starting point of this strand of literature. This is appropriate because there is little doubt that ICANN was both an institutional innovation in its own right and a disruptive change that led to reactive adjustments in other international organizations and arenas. ICANN's creation led indirectly to the World Summit on the Information Society – a global conference process that tried to consider the full range of communication-information policy but became focused on internet governance. And WSIS in turn led to another new international institution, the UN Internet Governance Forum. The origin of ICANN as a global governance scheme is documented most thoroughly in [|Mueller (2002)]. The ICANN regime resolved political conflicts over property rights that had been created by attempts to appropriate a new global resource. It also addressed the coordination problems posed by managing critical internet resources in a manner that would retain global compatibility. As Wolfgang Kleinwächter noted in 2001, ICANN was a “silent subversive” because of the way it altered the role of states in global governance ([|Kleinwächter 2000; 2001]). Indeed, for a time ICANN was perceived as a paradigm of new forms of governance ushered in by the networked age ([|Ahlert 2001]; [|Levinson 2002]; [|Hofmann 2005]). Others, however, while recognizing its novelty, mounted strong challenges to the model's legality and legitimacy ([|Froomkin 2000]; [|Weinberg 2000]). 
ICANN was controversial because the United States unilaterally delegated to a private nonprofit corporation global authority over the root of the domain name and internet address spaces, and empowered it to resolve a number of key public policy problems through the issuance of private contracts. These contracts were a means of addressing competition policy issues in the commercial market for domain names, domain name–trademark conflicts, the allocation of internet addresses, and related problems. Klein, Palfrey, and several others explore another interesting aspect of the ICANN experiment, namely its early attempt to use global, democratic elections to keep its Board accountable ([|Klein 2001b]; [|Palfrey 2004]). Another strand of this literature explores a fascinating linkage between national geography and the virtual space of the internet, namely the “country code” top-level domains (ccTLDs). Registries for ccTLDs manage the two-letter top-level domains that refer to countries (such as .uk for the UK or .cn for China). The two-letter codes are based on an official international list of countries that correspond (roughly) to national territories. Thus in ccTLDs, national boundaries and internet domain-name administration are more or less aligned, and this purely semantic connection, which emerged as an afterthought in the early history of the internet, has allowed states to assert “sovereignty” or approval rights over the administration and delegation of ccTLDs. This has created for ccTLD administrators a special place in the ICANN regime. Daniel Paré's book contains a detailed case study of Nominet, the organization that operates the .uk ccTLD ([|Paré 2003]). In a more systematic study of ccTLDs, Y. J. Park analyzes ICANN as an international regime that serves as the nexus between domestic and international politics in negotiations between state actors and nonstate actors over the control and regulation of domain names ([|Park 2008]). [|McDowell et al. 
(2007)] explore the distinction between geographic identity and virtual identity in small-country domains. The country code .tv for the tiny island nation of Tuvalu, for example, proved to be of great economic value and is marketed worldwide as a domain for video content. Such practices, they contend, simultaneously contradict and support existing international institutions and systems of governance. Drawing on international law rather than constructivist political science, Froomkin critically analyzes efforts by national governments to assert property rights over their names in cyberspace ([|Froomkin 2004]). Given the overarching problematique of the relationship between internetworking and national sovereignty, the role of governments in ICANN's formation has always been a topic of interest. Volker Leib and Daniel Drezner, for example, examined EU–US interactions in the initial negotiations over ICANN ([|Leib 2002]; [|Drezner 2007]). In general, however, the changing role and actions of governments that directly participate in ICANN through its Governmental Advisory Committee are underresearched. Another badly underresearched area is the attempt to introduce multilingual scripts into the domain-name system standards, which raises a number of interesting cultural, political, and technical issues. If ICANN was a disruptive change, then the World Summit on the Information Society (WSIS) can be seen as a systemic reaction to it. Both international institutions such as the ITU and developing country governments resented the US unilateralism it embodied and used WSIS as the vehicle to attack it from 2003 to 2005. The WSIS process politicized and broadened debates over internet governance. Before, significant numbers of people involved in ICANN and the internet could get away with claiming that what they were doing was not governance or regulation at all, but “technical management.” After WSIS, the political dimension of internet governance was frankly acknowledged. 
Representatives from large developing economies, such as Brazil and China, succeeded in focusing attention on the political bargains and biases underlying the US position in the ICANN regime, while the established internet technical and commercial interests fretted that WSIS amounted to an attempt by the United Nations to “take over the Internet” ([|Drake 2005]). The literature on WSIS is large and of uneven quality, but contains many important insights into international institutions, the participation of civil society in global governance, the role of the United States, and of course internet governance itself. One of the ironies of WSIS is that it was supposed to address the full range of communication-information policy, but ended up becoming almost entirely focused on internet governance. A good descriptive analysis of the WSIS process from the standpoint of a traditional civil society “development” advocate and UN system insider can be found in [|Souter (2007)]. Hans Klein provides a valuable analysis of the politics of WSIS placed in the context of UN summits ([|Klein 2004]; see also [|Kleinwächter 2004]; [|Raboy 2004]). What emerged from the WSIS process more decisively than any other outcome was an affirmation of the principle of multistakeholder governance. WSIS created a new set of expectations regarding the ability of civil society actors to participate in intergovernmental processes ([|Padovani and Tuzzi 2004]; [|O'Siochru 2004]; [|Raboy 2004]; [|Hintz 2005]). While it often disappointed stronger advocates of participatory democracy and failed to resolve the debates about ICANN and the US unilateral role in internet governance, it did create a new institutional vehicle for carrying on discussion and debate around those issues: the Internet Governance Forum. There is a huge amount of policy literature and occasional papers around IGF, but very little deep scholarly analysis. 
The one major contribution so far is from [|Malcolm (2008)], who not only carefully traces the developments of the IGF's first two years, but also offers a normative analysis of how it can be reformed to fulfill the promise of multistakeholder governance. Another good post-WSIS analysis of internet governance is [|Mathiason (2009)].

Cybersecurity, Privacy, Identity, Surveillance
Along with the mass adoption of the internet protocols has come the discovery and exploitation of its technical vulnerabilities. Security – against crime, surveillance, data theft, intrusion or cyberattacks – now forms one of the key preoccupations of internet governance studies. Research in the field approaches this problem from several angles. One strand of this research extends classical law and policy research around electronic privacy and surveillance into the new social configurations formed around the internet ([|Diffie and Landau 1998]). For example, it focuses on the data protection and privacy issues associated with social networking sites or other forms of user-generated content, the tracking of user behavior on the internet by advertisers, facial recognition software and the like ([|Cranor 2002]; [|Solove 2007]). It also carries on longer-term discourses about the appropriate role of identity documents, anonymity, and pseudonyms in the online world ([|Froomkin 1999]; [|Hosein and Whitley 2005]). A related literature, more populated by political scientists than law scholars, investigates the negotiations among great powers over international privacy, surveillance, and data protection ([|C. Bennett and Raab 2006]). In particular, researchers have examined how the US and Europe reconcile or compete over their different approaches to data protection and privacy regulation ([|C. Bennett 1992]; [|Farrell 2003]; [|Drezner 2007]; [|Mueller and Chango 2008]). Others focus more on international cooperation over surveillance and identification in the wake of the terrorist attacks on New York and Washington in 2001 ([|Hosein 2004]). Oddly, there is very little scholarly literature on one of the major intergovernmental developments in this area, the Council of Europe's Cybercrime Convention, which was completed in 2001. 
Governments and industry still conceive of security primarily in technological terms, and thus a huge amount of technical research on internet security has been produced in computer science and engineering. Since about 2000, however, an interdisciplinary social science literature has attempted to synthesize computer science/information systems knowledge with the insights of economics. The new field of //information security economics// is based on the insight that the internet's security problems are not simply technical but are driven by the incentives of actors and firms ([|Anderson 2001]; [|Anderson and Moore 2007]). Work in this area has documented and analyzed spam and various kinds of internet-based fraud as a product of transnational organized crime rings ([|Moore 2008]). It feeds into policy discourse by analyzing, e.g., the cost–benefit tradeoffs of internet service providers’ efforts to secure their networks and customers, the assignment of liability to software producers or internet service providers, the impact of network externalities, and the ways in which markets interact with government action in response to security problems ([|van Eeten and Bauer 2008]). Some interdisciplinary researchers are also adding a political dimension to this analysis, e.g., by exploring the relationship between governance, states, and the development of security-related internet technical standards ([|Kuerbis 2009]). Some political science research in this area is more explicitly focused on the national and transnational power implications of the internet's vulnerabilities. In this literature, the term “security” means exactly what it does in mainstream international relations research ([|Deibert and Rohozinski 2009]). 
In other words, this research deals with “cyberwar” or internet-based attacks on one state by another, the use of the internet by terrorist groups, and the threat to critical infrastructures that might be posed through network vulnerabilities ([|Arquilla and Ronfeldt 2001]). All of this work can be characterized as “internet governance” in that it deals with stakeholder efforts to manage the implications of their exposure to the internet, either through public policy, law, and regulation or through technical adaptations and protections, standards improvements, or market mechanisms. The cybersecurity literature overlaps with the networked governance and multistakeholder governance research. These investigations reveal collective action by internet service providers and standard-setting organizations, as well as governments and law enforcement agencies. Traditional hierarchical state action is the exception rather than the rule.

Intellectual Property in Digital Media
The tension between the internet's capacity to quickly and easily share digital content and the protection of intellectual property rights has already been mentioned ([|Boyle 1996; 1997]; [|Lessig 2001]; [|Vaidhyanathan 2001]; [|Halchin 2004]). This has emerged as one of the most important global governance issues of our time. The digitization of copyrighted materials has made the internet the perfect distribution mechanism, allowing individuals to locate and share valuable content with unprecedented ease. Peer-to-peer file-sharing systems or new online businesses such as Apple iTunes can make movies, music, and books available globally. But the change is a disruptive one, as users’ ability to transmit and share information often outstrips the ability of those who would own it to erect fences around property so that it can be exchanged in the market. Incumbent media giants have had the resources to pursue a protectionist strategy on a global basis. They have not only vigorously litigated against what they see as copyright and trademark infringement on a transnational basis, they have also sought new international treaties, new regulations of internet service providers, and global standards for digital rights management and anti-circumvention technologies ([|Litman 2001]). The internet domain-name system has also led to conflicts with trademark rights, leading to the creation of a new global dispute resolution system administered by ICANN and WIPO ([|Burk 1995]; [|Froomkin 2002]; [|Galloway and Komaitis 2005]). If the rise of the internet has mobilized copyright and trademark owners to seek institutional changes that can support exclusive property rights in the new context, it has also sparked a new social movement that pushes in the opposite direction ([|Boyle 1997]; [|Lessig 2005]). This movement is inspired by institutional analyses that view a //commons// as a governance model. 
Open, nonproprietary access to information is seen as especially appropriate because consumption of information is nonrivalrous ([|Ostrom 1990]; [|Kranich 2004]). The “Access to Knowledge” (A2K) movement has its origins in the developers of free/open-source software, who pioneered new contractual mechanisms deliberately designed to prevent informational resources from being privately appropriated ([|Raymond 2001]; [|Stallman 2002]; [|O'Mahony 2003]; [|Weber 2004]). This model of software production and governance has been dubbed “commons-based peer production.” Long before that term was coined, the standards underlying the internet itself were developed in that way; as noted before, the IETF is one of the earliest open standards development institutions. Taking free software as its model, the Creative Commons project extended this institutional innovation to other forms of digitized content, such as music, photos, or texts. There is some critical discourse about this model ([|Elkin-Koren 2005]). By 2000–2 or so, the open-source, open-content actor-network had become a full-fledged social movement that melded the free software movement with critics of the patent system in drugs and biotechnology and advocates of copyright reform. It had a unique method of organizing, a legal/institutional strategy for creating and maintaining a commons, and a designated enemy: the corporate multinational copyright, patent, and trademark interests and their backers in government ([|Coleman and Hill 2004]; [|Elliott and Scacchi 2008]). The A2K movement, like its opponent, was transnational in scope and self-consciously took its cause into international organizations (notably WIPO) as well as national legislatures. Thanks to alliances with some developing country governments (notably Brazil and India), a new “Development Agenda” for the World Intellectual Property Organization was articulated ([|The South Centre 2002]; [|May 2007]). 
This involved an alliance between developing country states and civil society. Its purpose was to shift the priorities of WIPO away from protecting the patents and copyrights of the richer countries toward a greater emphasis on the impact of information and intellectual property policies on economic development.

Actor-networks
The WSIS inspired a number of research projects on transnational issue networks and social movements around communications and information ([|Klein 2001]; [|W.L. Bennett 2003; 2004]). Many of the civil society networks that converged on the WSIS process were more interested in traditional media and communication-information policy issues ([|Raboy 2004]). Some studies focus on the self-organization of civil society actors and use network analysis techniques to assess the relationships among the different issue networks that deal with different aspects of communication-information policy ([|Mueller et al. 2007]). Other studies attempt to assess the impact civil society actors had ([|Padovani and Tuzzi 2004]; [|O'Siochru 2004]; [|Hintz 2005]). The formation of the IGF effectively institutionalized the opportunity for civil society actors to remain active around internet governance. Recent research by Elena Pavan takes a detailed look, using network analysis, at online and offline connections among themes and people in the IGF ([|Pavan and Diani 2008]). An ongoing study by [|Levinson and Smith (2008)] also probes the structures and alliances in the Forum. Clifford Bob, on the other hand, examines the “institutionalization of contention” in the IGF around privacy and freedom of expression on the internet ([|Bob 2008]).

Future Directions and Missing Matter
In an area as new and as volatile as internet governance, any attempt to identify major gaps in the current agenda is bound to be of limited value. This is especially true when the author of the current essay has already played a role in shaping the direction of the existing research oeuvre, and thus by definition is in the worst position to locate and identify existing blind spots. It does not hurt, however, to make some effort to assess future directions in theory and methodology and to consider important elements that have been overlooked. Methodologically, some of the most promising new research comes from scholars who are able to exploit the information-generating tools of the internet itself to compile and analyze data about the internet. Some of the best examples of this approach come from areas that have, up to now, been tangential to //governance// research: e.g., the computer science-oriented visualizations of routing structures and network topology by kc claffy at the Cooperative Association for Internet Data Analysis (CAIDA), or the cultural studies-oriented analysis of linking patterns and domain structures on the internet by Richard Rogers at the University of Amsterdam (which he calls “digital methods”). So far these methods have yielded fascinating relational data that, while possibly relevant to governance and policy studies to a well-informed observer, are still one step removed. We take a step closer with the use of software tools by scholars such as Ron Deibert, Rafal Rohozinski, and Nart Villeneuve to detect, analyze, and even circumvent internet censorship; likewise with their efforts to detect and study botnets ([|Deibert et al. 2009]). Economics-of-security scholars such as Tyler Moore, Jean Camp, and Michel van Eeten are also increasingly able to mine huge, computer-generated databases of phishing activities, routing information, spam sources, and the like.
It is possible to imagine a broader diffusion and further evolution of these methods to bear more directly on the problems of internet governance. This implies a synthesis of technical knowledge and social science that is still too rare. Areas of research that are underdeveloped have been mentioned in passing. The issue of multilingual domain names is a particularly good example of where internet governance research could go. It brings in differences of culture and localization, but also requires knowledge of the technical standards underlying the internet and of how changes in those standards are related to changes in operational practice and the inevitable conflicts over policy issues that come in their wake. If the internet fragments along linguistic or national lines, the cultural and national dimensions of local internets will be a rich topic for investigation. One of the biggest challenges, and open doors, is to incorporate the growing corpus of mid-level internet governance research into higher-level theoretical treatments of globalization and global governance. Does concrete knowledge of internet governance alter what the high-level theorists think, or reinforce points they have already made? It is hard to tell yet, because the theorists are too far removed from what they sometimes dismiss as “empirics”; their attempts to incorporate stories from internet governance to make their points usually just demonstrate their ignorance of the new landscape. In a similar vein, we can look forward to a rapid acceleration of research on “infowar” and on national and global security studies involving the internet. The nexus between military security and cyberspace, whether real or alleged, is already being exploited in the political arena; we await more detailed, data-driven studies to fully understand what is really happening there.

Nanette S. Levinson
==== Subject [|Geography] » [|Development] [|International Studies] » [|International Communication] ==== ==== Key-Topics [|communication], [|dependency], [|evolution], [|institutions], [|technology] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
Approaches to communication and development practice and studies have changed dramatically in the last fifty years. Whether the issue is the role of individual national governments, the focus on top-down vs. bottom-up approaches, or even the assumption that culture matters, discussions centering on communication and development have altered substantially. Examples of these changes include: a focus on the nation-state fifty years ago versus a more multistakeholder focus today; a central, top-down direction fifty years ago versus a more bottom-up or combined direction today; and a primary focus on political and economic development then versus a more nuanced view including cultural components and social development today. However, some issues remain much the same: communication and development as it relates to poverty worldwide; access to a specific communication mode (from the mass media in the 1960s to internet- and mobile-technology-related media in the early 2000s); media interventions (whether for political development in the sixties or health campaigns in the nineties and beyond); and the ICT (Information and Communication Technology) and development policy-making challenges for governments and, more recently, international organizations, private sector organizations, and nongovernmental organizations. This essay traces the major approaches over the last fifty years, highlights the changing panoply of players (and related technologies) involved in discussions of communication and development policy and policy making, and identifies emerging trends in the field. It also briefly describes selected methods and measures used to approach technology and development in international communication. See also the essays on e-commerce, digital divide, and gender for related discussions.

Modernization
With the publication of [|Schramm's 1964] book //Mass Media and National Development//, the modernization paradigm came to the center of attention for that particular point in history and for that era's scholars and policy makers. Central to this approach is the notion that what worked well for developed, democratic Western nations would work well for developing nations; what was needed was the diffusion of a modernization approach. This involves a linear, one-way approach: information flows from a government to the people. Moreover, the mass media play a central role within a country, disseminating information to promote democracy and, ultimately, modernization. [|Lerner (1958)] set the groundwork for a modernization approach, setting forth a stage theory of political development facilitated by the mass media: urbanization, literacy, media exposure, and then integration into modern, participatory society. Adding an economic focus, still very linear and also involving stages, [|Rostow (1960)] argued that there were five steps in economic development, moving from a traditional society to one with high levels of mass consumption. (Later in his lifetime, he added another step, beyond mass consumption, where quality of life becomes central.) Key to modernization approaches (whether the focus is on political or economic development), in addition to the role of the mass media and the staged or linear nature of the approach, is the central role of a nation-state government. It is a top-down, Western-nation-to-developing-nation approach, paralleling the development of an innovation diffusion approach (see below) to communication and development. There is no attention to culture or to social change.

Diffusion of Innovation
Stemming from the modernization approach, and ultimately writing about its “passing” in 1976, Everett Rogers added a focus on interpersonal sources in the diffusion process. But, again, the diffusion of innovation approach, especially in its earliest form ([|Rogers and Shoemaker 1971]), was a linear and unidirectional approach: a Western government would diffuse an innovation (such as new agricultural processes to promote development). To consider the innovation “adopted” by the recipient government, country, or village, the recipient needed to use the innovation in the exact form in which it was originally disseminated. There was little attention to adaptation in the early days of this approach. As time went by and more studies were completed, [|Rogers (1976)] modified the original diffusion of innovation model to take into account the importance of interpersonal sources. He recognized a different role for communication, including the role of radio programs. Highlighting three new elements (participation, mass mobilization, and group efficacy), he even began to argue for field experiments, as opposed to surveys, in the study of innovation diffusion.
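The S-shaped adoption dynamics underlying this literature, driven by a mix of media and interpersonal influence, are often formalized in the related Bass diffusion model. The sketch below is not drawn from Rogers' own work and uses illustrative parameter values; it shows how an external-influence coefficient p (mass media) and an internal-influence coefficient q (interpersonal word of mouth) jointly generate cumulative adoption in a population of size m:

```python
# Minimal discrete-time Bass-style diffusion model (illustrative only).
# m: population size; p: external (media) influence;
# q: internal (interpersonal, word-of-mouth) influence.
def bass_diffusion(m=1000, p=0.03, q=0.38, steps=25):
    adopters = 0.0
    trajectory = []
    for _ in range(steps):
        # New adopters this period: media reach plus peer influence,
        # applied to the not-yet-adopting remainder of the population.
        new = (p + q * adopters / m) * (m - adopters)
        adopters += new
        trajectory.append(adopters)
    return trajectory

curve = bass_diffusion()
```

With q = 0 adoption is driven purely by mass media, the early modernization picture; raising q shifts the weight toward the interpersonal sources Rogers emphasized, steepening the middle of the S-curve.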

Dependency Paradigm
Reacting to these primarily Western-based theories and approaches, some theorists, especially those from Latin America, began to formulate an alternative paradigm for viewing communication and development. [|Amin (1974)] and [|Cardoso and Faletto (1979)] view the relationship between developed and developing nations as one of core and periphery. The obstacles to development, in their views, are external to the developing nation: developed nations at the core exploit those on the periphery. Again, this is primarily an economic view of development. From the developing nation perspective, then, the dependency paradigm argues that a developing nation needs to remove itself from the world market and pursue self-reliance. Brazil is an example of a government that tried to develop its own computer industry, especially in the mid-1970s, with mixed results. See, for example, [|Crandall and Flamm (1989)].

Monistic-Emancipatory Approach
[|Mowlana and Wilson (1990)] build on the work of [|Ibn Khaldun (1958)], who lived from 1332 to 1406. (Khaldun wrote about moving from a simple to a complex organization with no separation between society's religion and politics.) They argue for a monistic-emancipatory approach to communication and development. This approach is nonlinear and involves ethics, spirituality, and an emphasis on the community. Recognizing the complexity of communication and development as well as the role of religion, it advocates a bottom-up strategy and popular participation. It also takes a monistic view, requiring the unity of God, human beings, and nature. While this approach has not become central in the literature, it does presage trends in participatory approaches.

Culture, Power, and Gender Dimensions
By 2001, Wilkins and Mody had added a greater focus on the process of communication and social change, highlighting the role of culture (missing from modernization and early innovation diffusion studies). They adopt a critical approach and emphasize concerns with power and with “the gendered nature of development discourse” ([|Wilkins and Mody 2001]:387). Adding social movement theory to their repertoire of tools for understanding communication and development, they discuss the role of the media in strategic social change and express concern with cultural homogenization. Thus, they call for a focus on who holds knowledge and on knowledge as a resource in itself. This requires sensitivity to specific cultural contexts at specific times and places. Their work extends communication and development studies to include work on, for example, health communication media campaigns (in order to diminish the spread of HIV/AIDS in developing nations). Adding nuanced dimensions to the study of communication and development, even though focused on media roles, [|Wilkins and Mody (2001)] also discuss environmental concerns, increasing roles of the private sector, and even the impact of corruption. These elements provide a foundation for some of the discussion in the emerging trends section of this essay.

Institutional Theory Approach
[|Wilson (2004)] points out how institutions play a key role in communication and development. As [|Zucker (1987)] argues, institutional theory applies well when looking at groups of organizations over time and assists in examining the environments of organizations as socially constructed normative spheres. Using institutional theory to understand communication and development-related processes calls for a longer time horizon and in-depth looks at institutional change processes. While technological discontinuities such as the internet can cause rapid changes, most institutional change is incremental. Institutional theory calls attention both to the “cues” given by institutional frameworks and to isomorphic processes as central in diffusing innovations and effecting institutional change. One major illustration of the application of institutional theory is the abrupt change in many developing and developed nations from a central government agency that planned, controlled, and regulated all of telecommunications in a nation to an increasing role for privatization and a concomitant change in nation-state institutions. A compelling illustration of these processes at work can be seen in [|Sandholtz (1993)], who vividly portrays the rapid institutional change in Europe from nation-state monopolies regulating telecommunications to the dramatic creation of one new regionwide and powerful institution, ETSI (the European Telecommunications Standards Institute). Another related example rooted in an institutional theory perspective is the copying by nation-state governments of the idea of privatization of telecommunications. Whether this idea works or not, governments increasingly copied it and restructured both agencies (as above) and policy processes to encompass privatization.

Industrial Policy
Industrial policy refers to the ways in which a country can promote its growth, productivity, and competitive advantage. Until the advent of the internet, the concept of industrial policy did not really include information-related technologies. In fact, nation-state government agencies charged with promoting economic development did not deal at all with telecommunications policy. That policy space was usually the purview of the agency charged with the provision and regulation of postal and telephone matters. As noted earlier, communications, after the birth of the internet (a discontinuous technology), increasingly became the purview of institutions dealing with economic competition and economic advantage. Thus, the precepts of industrial policy (a nation-state government's toolkit for promoting its economic advantage and competing in the world system) came to include information and communications-related industries. In fact, the early 2000s have also seen the policy space for telecommunications-related issues, primarily in developed nations, expand to include a number of government agencies such as commerce (with primary responsibility for industrial policy), defense/security, and state or foreign ministry. The work of [|Mansell and Wehn (1998)] provides an example of industrial policy recommendations for developing nations with regard to information and communication technologies. They provide specific templates and “tools” for ways in which developing nations can use ICTs in achieving sustainable development. Emphasizing education, they also discuss how developing countries can build national information infrastructures, an important topic of that time. Today the big change from the early days of the modernization paradigm is the switch from an exclusive focus on what governments can themselves do to what governments can do vis-à-vis the private sector.
An additional change is from what nation-state governments in developing countries should do in general to prescriptions that are more individual for specific countries, each of which faces specific challenges. One trend that cuts across these themes is capacity building. What can a developing nation implement to enhance capacity and how should it measure such capacity? Another newer dimension is industrial policy at the regional level. This focus reflects the growth of regional structures and the relative success of Europe as a region. Thus, there have been attempts on the part of ASEAN, OAU, and other regional entities to promote communication and development strategies, some focusing on regional organizations and their nation-state governments and others involving a focus on the private sector.

Strategic Restructuring Model
A more recent take on innovation diffusion with a focus on information and communication technologies as the innovation, as well as on nation-state government policies, is evident in the work of [|Ernest Wilson (2004)] and his Strategic Restructuring Model. This model highlights the following dimensions as central to ICT diffusion in developing nations, especially over time: structures (including political, economic, and social structures in a nation); institutions (ministries of information, state-owned enterprises, etc.); and politics and government policies. It also highlights the role of key individuals in developing nations as champions of an innovation and the important role of institutions. Unlike both the modernization and dependency paradigms (where little feedback or social science data were collected), this model stems from extensive field research in developing nations. It adds power to the diffusion model by characterizing diffusion as a negotiation process. This study does not find a major role for multinational corporations in the diffusion process to developing nations. It highlights the local context as well as local institutions and tells an empowering story of social networks in the ICT revolution.

Evolutionary Paradigm
[|Modelski (1996)] links institutions, especially in world politics (his area of focus), to evolutionary theory and evolutionary change. Examining global political evolution, he looks at the long cycle involving the rise and decline of world powers. Highlighting the role of time, he makes a strong argument for the role of evolutionary frameworks in understanding world politics. Taking a similar stance, but focusing on populations of organizations over time, [|Monge, Heiss, and Margolin (2008)] also argue for an evolutionary or population ecology approach in their examination of communication networks in organizational communities. The basic argument in each is that, over time, an environment selects out certain types of organizations for survival. There have been numerous powerful analyses demonstrating the appearance (or disappearance) of a range of organization types over long periods of time. This paradigm can also be profitably applied to the field of communication and development. Others ([|Levinson 2008]) have highlighted the power of evolutionary approaches to help explain the growth of public–private partnerships in communication and development arenas, as well as the trend toward multistakeholderism not only in communication and development but also in global environmental and health arenas. The evolutionary approach emphasizes environmental characteristics, such as complexity and uncertainty, as helping to shape over long periods of time those organizations and organizational forms that survive.

A Network or Interorganizational Approach
[|Monge, Heiss, and Margolin (2008)] also link network theory to communication and the evolution of organizations. A network is a collection of nodes or entities that can exist at the individual, organizational, or interorganizational levels. Looking at the environment of a network provides evidence of its resource configuration and the way in which members of a network may use the network to acquire, exchange, or shape resources. Community ecology, a subset of evolutionary and population ecology dynamics, studies the very processes by which members of a network – organizations in a community – have relationships that help them acquire needed resources. The evolutionary component refers to the variation, selection, and retention processes at work here. It traces how organizations adapt to this environment over time. Much work on communication and development today uses network theory rather than evolutionary theory. A network approach captures well today's complex patterns of competition and cooperation among organizations such as private sector organizations, governments at all levels, international organizations, and nongovernmental organizations. It also facilitates analysis of alliances and partnerships to foster communication and development, an especially vital approach in light of today's global financial challenges.
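The vocabulary of nodes, ties, and resource acquisition can be made concrete with a toy computation. The sketch below (the organizations and ties are invented for illustration) builds a small undirected interorganizational network and computes normalized degree centrality, one of the simplest network-analytic measures of how well connected an actor is:

```python
from collections import defaultdict

# Hypothetical ties in a communication-and-development issue network:
# each pair exchanges resources (funding, information, expertise).
ties = [
    ("NGO-A", "UN-agency"),
    ("NGO-A", "Ministry"),
    ("NGO-B", "UN-agency"),
    ("Firm", "Ministry"),
    ("Firm", "UN-agency"),
]

# Build an undirected adjacency list.
adjacency = defaultdict(set)
for a, b in ties:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Normalized degree centrality: the share of all other actors
# each actor is directly tied to.
n = len(adjacency)
centrality = {node: len(neighbors) / (n - 1)
              for node, neighbors in adjacency.items()}

most_central = max(centrality, key=centrality.get)
```

Here the hypothetical UN agency emerges as the best-connected actor. Empirical studies apply the same logic to observed, much larger networks, typically with richer measures (betweenness, brokerage) than plain degree.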

An Ecosystem Approach and Today's Players
A final approach combines both the units of a network at the organizational or interorganizational level and the characteristics and components of the environments in which they are set. This can facilitate both short-term and long-term analyses. It also captures well today's variegated entities and the concomitant patterns of combinations and permutations that are key on the communication and development scene. As noted earlier, nation-state governments are no longer the only game in town when it comes to communication and development. With the advent of complex and converging information-intensive technologies, the panoply of players (and their interconnections) in the policy shaping and making arenas has changed dramatically. Today, the notion of multistakeholderism is beginning to take hold as a follow-on to the United Nations-convened World Summit on the Information Society, the second phase of which ended in 2005. Stemming from WSIS and its Working Group on Internet Governance (see [|Drake 2008]), the Internet Governance Forum (IGF) had its inaugural meeting in Athens, Greece in 2006. While it was established as an outcome of WSIS as a nondecision-making body focused on multistakeholder discussion of internet governance issues, the IGF has had, and continues to have, access for developing nations and for disadvantaged groups as a key concern. There are, of course, serious questions as to whether all or most citizens of developing nations actually have access to such policy shaping and making, whether at the IGF or elsewhere. But there are civil society organizations and some international organizations that are becoming increasingly involved in such discussions. The ecosystem approach ([|Levinson and Smith 2008]) allows for examining both the like and unlike organizations involved and the characteristics of their environments, including possible technological uncertainty/complexity, culture, and resources (or the lack thereof).
It also includes a focus on the connections and patterns of linkages (including the absence and strength of connections) and what can flow and does flow across those links (information, technology, other resources).

Emerging Trends
This section identifies emerging research trends in communication and development. It begins with a “back to the future” trend, then considers research on new actors in communication and development, and then moves on to the arena of new technologies. Finally, it highlights two very recent developments. The first links environmental studies research to communication and development research, and the second treats cyberinfrastructure initiatives, social media, and web 2.0 research. This paves the way for a discussion of implications for additional research, with a focus on co-creation processes as a new knowledge niche with great potential for contributing to the field of communication and development. Linking co-creation processes to communication and development, together with emerging technologies (such as cyberinfrastructure and mobile technologies) and social entrepreneurship research, provides both rich potential for future research and new ways of thinking and researching about communication and development in global context.

Back to the Future Trends
The recent “One Laptop Per Child” initiative under the leadership of Nicholas Negroponte of MIT's Media Lab (see [|www.laptop.org]) captures aspects of both the modernization and the diffusion of innovation paradigms. Here is an example of a professor in a leading US technology-focused university with an idea – a specific, inexpensive laptop technology designed especially for children in developing country environments – disseminating this innovation, using the media, and meeting with government leaders in select developing nations to promote his idea and to make a difference ([|Hatch 2009]). A second back-to-the-future trend can be seen in the terminology “ICT4D.” ICT4D refers to the use of information and communication technologies to bring about development. The very phrasing of this term implies a top-down or innovation diffusion approach. Often the nation-state government and/or international organization is at the center of such work. The 2001 volume of the annual //Human Development Report// focused for the first time on this topic. It created a Technology Achievement Index, correlated it with measures of human development, and argued that digital gaps do not have to be permanent. Looking at this Report and other indices of development supplied by international organizations, one can see the nation-state as the central focus. There have also been //Human Development Reports// examining ICTs and development with a regional focus. (See the UNDP's //Promoting ICTs for Human Development in Asia// (2005) for an example.) Examining primarily economic development, UNCTAD produces a yearly report; the most recent is the //Information Economy Report, 2007–2008//. Here, too, the focus is on governments and on policy implications. This UNCTAD report finds that the higher the income in a country, the lower the cost of access.
In 2009, the ITU issued a report, //Measuring the Information Society: The ICT Development Index 2009//, as a response to the WSIS meeting outcomes and as a way to make sense of the various indices that have appeared since the 2001 //Human Development Report//. This 2009 ITU edition concludes that disparities still exist, even though all countries improved (in terms of access, not use!) over the five-year period examined. The least developed countries remain toward the bottom of the index. Formulating an ICT Price Basket, the report shows the high cost of access, and the lowest broadband access, in the developing nations. The focus is again primarily economic, although the report does include data from UNESCO regarding literacy in the countries studied. Recognizing this limitation of the term ICT4D, the World Bank has begun to use the term e-development. It still has a primarily nation-state focus, but it provides data on e-governance and other e-related services ([|Schware 2005]). Another option now in use is the term ICT in development rather than for development. [|Warschauer (2004)] reminds us compellingly through his fieldwork that a focus on disseminating innovations is not enough. What is needed, to continue using the language of the innovation diffusion paradigm, is an understanding of the recipient culture. Placing a bunch of computers in a developing country classroom does not in any way guarantee effective use, or even any real use. A third trend relates to the roles of the nation-state in communication and development. The nation-state in early communication and development studies was the central actor and key focus. As will be noted below, today there are both new actors and new venues for communication and development policy issues internationally. At the same time, since 2001, nation-state governments, especially in developed nations, appear increasingly concerned with defense and security issues and thus with ICTs as well.
Yet renewed attention, especially in the development community, is being paid to information and communication technologies and the ideal of an information society as highlighted in the WSIS discussions. Thus, there are new actors at the table (see below) and the power equations of such actors vis-à-vis nation-states are in flux.
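Composite measures such as the Technology Achievement Index or the ITU's ICT Development Index discussed above are typically constructed by normalizing heterogeneous indicators to a common scale and then averaging them. A stylized sketch of that construction follows; the country names and indicator values are invented for illustration and are not actual ITU or UNDP data:

```python
# Illustrative (invented) indicator values per country:
# internet users per 100 people, mobile subscriptions per 100,
# adult literacy rate (%).
indicators = {
    "Country-A": {"internet": 75.0, "mobile": 110.0, "literacy": 99.0},
    "Country-B": {"internet": 20.0, "mobile": 60.0, "literacy": 85.0},
    "Country-C": {"internet": 5.0, "mobile": 25.0, "literacy": 60.0},
}

def composite_index(data):
    names = sorted({k for vals in data.values() for k in vals})
    # Min-max normalize each indicator to [0, 1] across countries...
    lo = {k: min(v[k] for v in data.values()) for k in names}
    hi = {k: max(v[k] for v in data.values()) for k in names}
    # ...then average the normalized scores into one index per country.
    return {
        country: sum((vals[k] - lo[k]) / (hi[k] - lo[k]) for k in names)
                 / len(names)
        for country, vals in data.items()
    }

scores = composite_index(indicators)
```

Min-max normalization puts users-per-100 and literacy percentages on the same 0–1 scale before averaging; real indices add weighting schemes and subindices, but the normalize-then-aggregate logic is the same, which is why choices of indicators and weights are themselves contested policy questions.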

New Actors and Roles in Communication and Development
Technology experts, an epistemic community, are taking their places in communication and development discussions, along with other actors. Indeed, some have argued ([|Mattli and Büthe 2003]) that there is much power in standards-setting exercises, and it is technical experts who are often involved in such meetings. They are also involved in such entities as ICANN, the private-sector not-for-profit headquartered in California, now in its tenth year of dealing with internet domain names and related issues. ICANN, which does involve technology experts in its discussions, has been criticized for not having enough input from developing nations in its regular meetings, which are scheduled in various parts of the world, including developing nations. Nor, according to this argument, does it have enough developing country involvement in its Governmental Advisory Committee (the GAC). Indeed, a related criticism, especially on the part of some developing nations, is the absence of real power for ICANN's GAC. Similarly, some criticize the Internet Governance Forum (IGF) for not placing enough emphasis on access and on developing nations, even though the IGF, by its very definition, is supposed to be multistakeholder in nature. The IGF in its yearly agenda does continue to highlight access issues and to discuss capacity building in developing nations. Additionally, the year 2008 saw the beginnings of replication of multistakeholder forums at the regional (Europe, Africa) and nation-state (England) levels. Additional IGFs at these more local levels are under discussion. One of the interesting shifts in recent times has been the move from a sole focus on nation-states to a focus on other actors as well, as can be seen in the creation of the IGF as a multistakeholder forum. International organizations have been reinventing themselves to capture the shifting sands here. For example, both the ITU and the OECD have recently grappled with the roles of civil society.
Each has decided to involve “civil society” actively. They have recognized that the number of nongovernmental organizations dealing with communication and development has grown exponentially. The OECD (accessed 3/23/09 at [|www.oecd.org]) notes that its interest in involving civil society dates back a decade, to the OECD Ministerial on e-commerce and to the WSIS meetings. At the June 2008 ministerial meeting held in South Korea to discuss the internet economy, the OECD Secretary General called for formalizing the participation of both civil society and the technical community. Thus, there are now two new groups: the Civil Society Information Society Advisory Council and the Internet Technical Advisory Committee. They join the private sector and labor groups that already participate in OECD activities. A similar trend is evident in the ITU, which has held a workshop to discuss the roles of civil society. For their part, NGOs very much want a seat at the multistakeholder table. While it is easy to identify international organizations with variegated interests in communication and development (the ITU, World Bank, IMF, UNCTAD, UNDP, WTO, and OECD, for example), it is much more complicated to identify who really is “civil society.” This is particularly significant when talking about development. Are, for example, the civil society organizations at a specific policy table representative of civil society in a developing nation? Related to this question is a new question about the role of diasporas. Very recent research ([|Brinkerhoff 2008]) indicates how a diasporic community can use ICTs, among other options, to support family and home-country economic development. The private sector also is keenly watching the multifaceted communication and development arena, now populated by nation-state governments, local governments, regional governments, international organizations, technical experts, and civil society. 
Scholars such as [|Prahalad (2006)] highlight the potential of developing nations as a market for the private sector or, as he calls it, “The Fortune at the Bottom of the Pyramid.” Additional scholars and nongovernmental organizations have jumped on this bandwagon. See, for example, the website at [|www.nextbillion.net], highlighting projects all over the world that are linked by their focus (and their business models) on development through enterprise, as the site notes. Related to these initiatives is the role of social enterprise in recent communication and development efforts. Social entrepreneurs in both not-for-profit enterprises and for-profit businesses with a social mission strive to effect economic and/or social change and development. There are tensions among these actors, each with its own culture and interests, and each acting in contexts fraught with technological change, increasing interconnections, and sometimes political as well as technological uncertainty. As a result of the Working Group on Internet Governance recommendations to the final session of the World Summit on the Information Society, there is the earlier-mentioned IGF, now in its fourth year and soon (by mandate at the time of its creation) to be evaluated. The research (including [|Cogburn 2006]; [|Kleinwächter 2007]; [|Levinson 2008]; [|Marsden 2008]; [|Mueller 2004]; [|Mueller, Mathiason, McKnight 2004]; [|Weber and Menoud 2008]) on this multistakeholder venue indicates its complexity, whether in participants or topics. Each of the first three IGF annual meetings has, as noted earlier, included discussions of access. This microcosm of multistakeholderism captures the complex relations and tensions among nation-states, international organizations, private sector, and civil society actors. 
Some private sector actors are concerned, among other issues, that an international organization such as the ITU might replace, slow down, or supplement markets and current mechanisms for dealing with internet governance issues, including those of developing nations. There are also concerns that certain governments may continue to impose restrictions on internet use and impede markets. There are, of course, examples of the private sector effectively promoting development at the local level without direct involvement of nation-state governments or international organizations. The e-Choupal case in India ([|Chitnis et al. 2007]), where a private sector Indian company dramatically changed the way farmers do their work, illustrates the use of information and communication-related technologies to improve local farming efforts while recognizing and reflecting local culture effectively. Other examples include the Grameen Phone initiative in Bangladesh, which builds on the Grameen Bank model, and Kiva.org, which uses internet technologies to link individual funders directly with development-related projects in other parts of the world. Recent research ([|Bessette 2004]) also highlights the roles of communities and ICTs in developing nations. Possibly paralleling early work on the role of mass media in modernization, today's focus on community radio in developing nations transforms the media's role from a top-down mechanism for change to a local, bottom-up, and culturally sensitive agent of change. (See also UNESCO's work on community media and on gender and community media.) The focus on community also allows for work on participatory processes and participatory development approaches. This leads to another emerging research focus, that of sustainability. One of the newest trends in studying communication and development is a focus both on the environment and on communication. 
There are at least two interrelated aspects to this intertwining of two significant but heretofore rather separate arenas of international issues. One is the interconnection between technology type and environmental impact, or the greening of communication technologies in both developed and developing nations; environmental issues are particularly key for developing nations. The second is the broader set of policy issues encompassing both environmental and communication policy decision-making. Indeed, the IGF itself, and even its dynamic coalitions, provide examples of internet governance-related innovations copied from the rhetoric and practice of multistakeholderism in earlier global UN-led environmental policy-making discussions ([|Levinson 2008]).

Three Technology Types and their Uses
Another emerging trend in communication and development focuses first on the nature of a technology itself and then on its uses in communication and development contexts. Three related technologies are at issue here: open source technologies, mobile technologies, and social media/web 2.0 technologies. Research on open source technologies in the context of communication and development highlights ease of access and the lowering of costs for using ICTs in developing nations. Some research focuses on government roles in selecting technology standards for acquisitions and operations in their purview. For example, there is research ([|Ghosh 2004]) on Extremadura, Spain, where the government selected open source rather than Microsoft technology. (Such decision-making may echo the dependency paradigm.) There can be political elements involved in such decisions as well: some developing countries and localities prefer to use software that is open for collaboration and that does not stem from one large country's powerful multinational business. Research here is primarily on government decisions, roles, and outcomes. A second trend focuses more on the infrastructure for collaboration: cyberinfrastructure (CI), e-science, or, as it is known in Europe, the grid. Here most work is at the nation-state or regional level. In 2007 the United States National Science Foundation created a high-level office to promote and study cyberinfrastructure, and both England and Europe have offices related to similar endeavors. While there have been efforts to involve scientists and engineers in developing nations through their regional professional associations, there has been much less attention to the roles of civil society – especially developing nation civil society – in fostering dialogue about CI policy and the development of CI. The story is different when looking at research related to the third technology type, mobile technology. 
Research here is mushrooming, especially research related to developing nations. There appears to be a good “fit” between this technology type and the needs of individuals and organizations in a development setting. Again, as pointed out at the beginning of this essay, culture plays an important role and cannot be forgotten. As [|Kam et al. (2009)] observe in their research on teaching literacy to India's youth using mobile video games, culture shapes what is and is not successful. But few studies yet track long-term social and economic outcomes. See [|Donner (2008)] for a comprehensive review of mobiles and development. The above discussion of technology types leads to an examination of research on “leapfrogging” (see, for example, [|Singh (1999)]). This argument is a counterbalance to the staged and linear requirements inherent in the modernization and related approaches: by using certain kinds of technologies (such as mobile technologies), a developing nation can leapfrog over stages and develop more quickly. South Korea's economic progress with regard to mobile phones provides an illustration of successful leapfrogging. The final emerging trend centering on technology is the rapid-fire growth of social media and web 2.0 technologies and their possible convergence with mobile/cell phone technologies. Such technologies link people, craft networks, and shape possible outcomes of the linkages and information exchanged. They have the potential for social, political, and economic outcomes. The newest aspect of social media technologies is their use in contests and challenges to promote social, economic, and/or political change. Three examples come from the government, university, and NGO sectors: the USAID Challenge, the UC Berkeley Human Rights Mobile Challenge, and the Social Action Change the Web Challenge (see [|www.netsquared.org]). 
Again, however, additional research is needed on the roles of culture, especially in interaction with web 2.0 technologies. There clearly is something new here, and much more research is needed to capture possible impacts on development and relationships to international organizations, governments, and the private sector.

Co-creation Processes and Communication and Development
Open source, cyberinfrastructure, and mobile technologies foster the presence of innovative co-processes such as co-creation, a final trend in the communication and development field. Research in innovation studies ([|von Hippel 2007]) highlights the role of the user in co-creating innovations; research in labor–management negotiations highlights co-processes and cross-party learning as a result of negotiation ([|Culpepper 2008]); and research on citizens and their local government puts co-creation at the center of new trends in public administration (Bovaird 2007). This very recent work in three complementary domains presages the importance of co-processes, such as those found in participatory development, in shaping positive impacts. (Note, however, that these same cyberinfrastructure and mobile technologies can promote co-processes in the conduct of, for example, cybercrime or cyberterrorism in the context of developing nations.) The aforementioned technologies provide a foundation for virtual as well as face-to-face co-processes. Such processes also move the field away from a purely top-down or bottom-up approach. Rather, they allow civil society, international organizations, the private sector, and/or governments at all levels to work in co-creation processes that affect social, political, and economic development. They also allow for examining processes involving the private sector in developed and developing nations.

Research Needs
More research is needed to capture such processes (including cross-organizational learning and improvisation in communication and development) and to recognize the roles of power and culture (and how they may shape outcomes in these settings). Furthermore, taking a co-processes approach guards against the early assumption in the field of communication and development that there is one correct pathway that can simply be disseminated. As even the World Bank ([|Schware 2005]) points out, there are different challenges for different countries. Indeed, international organizations may not always be necessary in solving development challenges. Recent developments in social entrepreneurship indicate that no governments or international organizations necessarily have to be involved in communication and development efforts for them to be successful; others, however, may argue that such efforts are piecemeal. Another key area of needed research on multistakeholderism is the issue of trust. To what extent does trust exist across stakeholder groups? Does this trust level increase if and as the stakeholders interact in networked fashion over time? Additionally, there is the need to focus on possible connections among stakeholders in the practice of multistakeholderism. What is the flow (if any, and in what directions and intensities) of ideas and other resources among the stakeholders? What are the outcomes of such processes? Recognizing that there are national cultures, organizational cultures, alliance cultures, and professional cultures, how does cross-cultural communication play a role?

Measures and Methods
This leads to a discussion of the aforementioned key need to begin to measure impacts more accurately ([|Heeks and Molla 2009]) in the communication and development arena. Looking back at the fifty years of approaches highlighted here, there has been a change in the methods used to collect data on communication and development. Methods rooted in both the modernization and innovation diffusion approaches tended to be checklists or surveys; little qualitative work or what we today call mixed methods (quantitative as well as qualitative) was present. As technology itself changed, and with the advent of internet and now mobile-related technologies, researchers have used a variety of methods, including the traditional checklists and surveys. There is a new ITU index (2009) (again with the nation-state as a central component) and also an impact assessment along with monitoring and evaluation frameworks. Today's possible methodological toolkit for understanding complex communication and development issues includes network analysis and mapping ([|Padovani and Pavan 2007]); participant observation and other quasi-ethnographic methods; and content analysis and case studies. Case studies tend to be the most prevalent; they are used to capture the rich data needed to understand the complexities of culture and cross-cultural communication in development settings. There are also methods for examining long cycles and large populations of organizations drawn from the population and community ecology fields of study. These are less popular in the study of communication and development, but perhaps they will increase in popularity given the need to capture the growing interest in the interconnections between environmental and communication concerns. One of the major future research challenges is assessing/measuring the presence of multistakeholderism and co-creation processes in this field. To what extent is there change, and what types of change? 
And to what extent, if any, does multistakeholderism make a difference when it comes to social, political, or economic development? What are the comparative roles of developing nations in this new multistakeholder era? Are developing nations shaping such changes, and to what extent? Which methods are most appropriate for capturing answers to these complex questions, and can technology itself play a role? While there have been extraordinary changes in communications-related technologies over the last five decades – changes that parallel in magnitude those at the beginning of the industrial revolution in the United States, for example – poverty is still a major problem in our world, as is the absence of democratic governments. Ideas for using new communication-related technologies to foster development include e-governance, e-government, e-health, and e-education. Recently, some scholars have replaced the “e” with an “m” in order to focus on mobile technologies and their potential power in development. Much of this discourse still centers on the nation-state and its roles in development. Some success stories exist, usually in the form of case studies. The challenge ahead and the way forward, to borrow terminology from the UN and the IGF, is to design research teams that recognize the complexity of today's communication and development issues and include all relevant actors (not just nation-state government-focused studies). Perhaps there is even a role for co-creation processes in the conduct of future research related to communication and development, especially with a focus on outcomes and impacts.
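The network analysis and mapping mentioned above can be sketched in miniature. The stakeholder network below is entirely hypothetical – the actors and ties are invented for illustration, not drawn from [|Padovani and Pavan 2007] or any other cited study – but it shows the kind of degree-centrality calculation on which such mappings rest.

```python
# A toy multistakeholder network: nodes are actor types, edges are
# (invented) observed working ties among governance venues and stakeholders.
ties = [
    ("IGF", "nation-states"), ("IGF", "civil society"),
    ("IGF", "private sector"), ("IGF", "technical community"),
    ("ITU", "nation-states"), ("ITU", "civil society"),
    ("OECD", "private sector"), ("OECD", "civil society"),
]

def degree_centrality(edges):
    """Fraction of the other nodes each node is directly tied to."""
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return {n: degree[n] / (len(nodes) - 1) for n in nodes}

centrality = degree_centrality(ties)
for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor:20s} {score:.2f}")
```

In this invented mapping the IGF is the most central node and civil society the most central stakeholder type; scores like these give one quantitative handle on the trust and flow questions raised above.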

Jeffrey A. Hart
==== Subject [|International Studies] » [|International Communication] ==== ==== Key-Topics [|communication], [|information and communication technology (ict)], [|intellectual property], [|technology] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
A technical standard is a norm or requirement, usually established in a formal document, that sets uniform engineering or technical criteria (“Standard”). Three types of technical standards are reference, minimum quality, and compatibility standards. A reference standard is “a material, device, or instrument whose assigned value is known relative to national standards or nationally accepted measurement systems” (United States Nuclear Regulatory Commission). For example, all countries have an agency that sets measurement standards for time, distance, weight, etc. In the United States, the agency currently responsible for this service is the National Institute of Standards and Technology (NIST). NIST is the successor to the National Bureau of Standards in the Department of Commerce; the federal power to fix standards of weights and measures dates back to the Constitution of 1789. Reference standards have existed for centuries to assure, for example, that a coin has the right amount of gold or silver and that a scale that says a cut of beef weighs a pound is properly calibrated. Markets need reference standards to reduce uncertainty about the metallic content of money and about measurable quantities of goods ([|Kindleberger 1983]; [|Spruyt 1994]). A minimum quality standard sets criteria for quality, permitting sellers to certify a good or a service as meeting (or not meeting) those criteria. An example is an average fuel-efficiency standard for automobiles: a law requiring that all automobiles of a certain type meet a given fuel-efficiency standard establishes a minimum for gas mileage below which the average vehicle cannot legally go. Consumers may find this useful, especially when the price of gasoline is rising. Compatibility standards set criteria for how a device works with other devices. An example of a compatibility standard is the size of batteries that go into electronic devices. 
If a device requires an AA size battery with 1.5 volts, then both producers and consumers can be sure that pretty much any AA battery they purchase will work with that device. For components like batteries, the compatibility standard usually includes the physical dimensions as well, so that product designers can be certain that all batteries in that category will fit into the designated space. From here on, the terms “compatibility standards” and “technology standards” will be used interchangeably. Compatibility standards can also be about “interfaces” such as connectors or ways of interacting with devices. One popular interface standard in personal computers is the universal serial bus (USB) connector, which connects any two devices that support the USB standard. Another interface standard that most people are familiar with is the RJ-11 connector used to connect a telephone to a telephone jack. Interfaces are not always physical. Computers use graphical user interfaces (GUIs), such as the icons on a Windows desktop, to make it easier for consumers to go from computer to computer without having to learn a new GUI. An important historical example of a compatibility standard is the gauge of railroad tracks. Two national railroads with different gauges are incompatible in a particular way: at the border between the two countries, it will be impossible for the trains of one country to continue into the neighboring country, so passengers and cargo will have to be unloaded, carried across the border, and reloaded on the other side. Making track gauges incompatible was sometimes justified during periods of international political instability as necessary protection against foreign invasion. However, when countries wished to promote the free movement of goods and services, or data, across borders, they tended to move to compatible systems ([|Friedlander 1995]; [|Shapiro and Varian 1999a]:208–10). 
In the area of international network infrastructures, such as the global airline network, it may be useful to have compatibility standards so that participants do not have to learn new procedures as they move across borders. For this reason, all pilots and air traffic controllers around the world are required by the International Civil Aviation Organization to use English for communication purposes and to adopt a standard list of terms and commands to ease mutual understanding. This is generally justified as promoting not just economic efficiency but also passenger safety ([|Forster and King 1995]; [|Zacher 1996]; [|Golich 1989]). Economists analyze standard setting in technology from two perspectives. First, they examine the relationship between the development of technology standards and the smooth operation of markets. A general assertion is that standards reduce transaction costs for everyone and therefore are collective goods ([|Kindleberger 1983]:78). More recently, economists have focused on networks and how standards help consumers and producers benefit from network economies. When the existence of standards permits rapid growth in the user base of a particular technology, economists hypothesize that users are likely to benefit more rapidly from what they call network economies ([|Katz and Shapiro 1985, 1994]; [|Gandal 2002]). Second, economists focus on the strategic interactions among actors (generally firms and governments of nation-states) in standard setting, using game theory as their guide to analysis. Actors may benefit disproportionately from the adoption of one standard rather than another, but all actors lose if no standard emerges. Thus, standard setting is analyzed as a game of coordination ([|Abbott and Snidal 2001]; [|Mattli and Büthe 2003]:9–10). Economists have also considered the possibility that standards can be used strategically for the advantage of a subset of actors. 
There may even be “standards wars” – prolonged conflicts over which standards to adopt and preserve ([|Shapiro and Varian 1999a]). In addition, the setting of standards creates a principal–agent relationship between the actors affected by standards and those charged with creating and enforcing them ([|Mattli and Büthe 2005]). Political scientists and international relations (IR) scholars have also adopted these approaches. They go beyond them, however, to treat standards as part of an overall governance system or regime, especially in international affairs. IR scholars, in particular, have studied this in connection with theories of governance and regime change ([|Abbott and Snidal 2001]; [|Spruyt 2001]; [|Mattli and Büthe 2003]; [|Mattli and Büthe 2005]).
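The coordination structure described above can be made concrete with a minimal two-firm standards game. The payoff numbers below are invented purely for illustration (they are not drawn from the cited literature): each firm earns more if its own preferred standard wins, any common standard beats incompatibility, and incompatible choices pay nothing.

```python
# Illustrative two-firm standards-adoption game (hypothetical payoffs).
# payoffs[(row_choice, col_choice)] = (row_firm_payoff, col_firm_payoff)
payoffs = {
    ("A", "A"): (3, 2),  # both adopt firm 1's preferred standard A
    ("B", "B"): (2, 3),  # both adopt firm 2's preferred standard B
    ("A", "B"): (0, 0),  # incompatible standards: the market stalls
    ("B", "A"): (0, 0),
}

def pure_nash_equilibria(payoffs, strategies=("A", "B")):
    """Return strategy pairs where neither firm gains by deviating alone."""
    equilibria = []
    for r in strategies:
        for c in strategies:
            u_r, u_c = payoffs[(r, c)]
            row_best = all(payoffs[(alt, c)][0] <= u_r for alt in strategies)
            col_best = all(payoffs[(r, alt)][1] <= u_c for alt in strategies)
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('A', 'A'), ('B', 'B')]
```

Both all-A and all-B are equilibria, while the mismatched outcomes are not: each firm benefits disproportionately from the equilibrium built around its own standard, but both lose if no common standard emerges – exactly the coordination-game logic the literature describes.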

How Technology Standards are Established
There are three main ways for standards to be established:
 * 1 market competition;
 * 2 private standard-setting organization; and
 * 3 governmental imposition ([|David and Greenstein 1990]:3).

Market Competition
Standards that emerge from market competition may do so in a variety of ways. First, a dominant firm may impose its preferred standard on everyone else. An example is the Microsoft Windows operating system, which Microsoft essentially imposed on users of PC-compatible computers ([|David and Greenstein 1990]). Another contemporary example is the format used for music that is played only on Apple iPod devices and sold only on iTunes ([|Dedrick, Kramer, and Linden 2008]). This sort of imposed standard is generally not very popular, even though it reduces uncertainty in the marketplace, especially when the imposed standard is used as a barrier to entry against potential competitors. Second, competition among a small number of major firms may result in multiple standards. For example, in the early days of the VCR, two major standards competed with one another: BetaMax (backed by Sony) and VHS (backed by nearly everyone else). The eventual triumph of VHS is seen as evidence of the general undesirability of competing standards, especially to the degree that multiple standards create uncertainty for consumers and hence retard market growth. The fact that VHS was technologically inferior to BetaMax shows that the winners of standards competitions are not always those based on the most advanced technologies. Nevertheless, consumers may benefit from competition between multiple standards in the market: the final adoption of the VHS standard probably reflected consumer preferences (in this case, a preference for the system that provided lower cost players with sufficiently high quality video images). Another example of private standards competition is the more recent contest between HD DVD and Blu-ray players of high definition videos. 
Just as in the BetaMax/VHS competition, the market was initially held back by consumer uncertainty. The recent victory of Blu-ray, as in the case of VHS, did not indicate that it was the superior technology, but rather that both producers and consumers had come to see it as the only viable choice in the long run: Toshiba, the backer of HD DVD, had run out of money and was generating financial losses after aggressively promoting its standard. Finally, there may be no agreement on standards because a number of important market actors believe that setting standards is not in their interest. An underlying reason for this is that the technology is changing rapidly, and neither producers nor consumers are willing to pay the costs of freezing the technology in order to reduce market uncertainty via standards. Often a different sort of standardization occurs during periods of rapid technological change, one that focuses on interfaces or what has come to be called “interoperability” ([|Lynch 1993]).
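A common way to illustrate why an inferior standard can nonetheless win is a simple path-dependent adoption model in the spirit of increasing-returns analyses: each new adopter picks the standard whose intrinsic quality plus installed-base bonus is highest, and early random choices can get locked in. The quality numbers, noise level, and adoption rule below are invented for illustration, not taken from the cited studies.

```python
import random

def simulate_adoption(n_adopters=1000, network_weight=0.05, seed=None):
    """Path-dependent standards adoption: each adopter chooses the standard
    whose intrinsic quality + installed-base bonus + idiosyncratic taste is
    highest. Early random choices compound into lock-in."""
    rng = random.Random(seed)
    quality = {"BetaMax": 1.2, "VHS": 1.0}  # invented: BetaMax "better"
    installed = {"BetaMax": 0, "VHS": 0}
    for _ in range(n_adopters):
        def utility(standard):
            return (quality[standard]
                    + network_weight * installed[standard]
                    + rng.gauss(0, 0.5))  # idiosyncratic taste shock
        winner = max(installed, key=utility)
        installed[winner] += 1
    return installed

# Once one standard's installed base pulls ahead, the network bonus swamps
# the quality difference and the market locks in -- sometimes onto the
# technically inferior standard, depending on early random draws.
print(simulate_adoption(seed=0))
```

Running the simulation with different seeds shows the market almost always tipping decisively to one standard, which is the dynamic the VHS and Blu-ray cases suggest: the winner reflects the path of adoption, not necessarily the best technology.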

Private Standard-Setting Organizations
Standards may also emerge without the intervention of governments if private actors negotiate standards in private (and hence voluntary) standard-setting organizations (SSOs). An example in the United States is the American National Standards Institute (ANSI). The members of ANSI are individuals, private firms, government agencies, universities, and other standards organizations; membership is voluntary, and full membership dues for private firms depend on the size of annual revenues, up to a maximum of $26,000 annually. The mission of ANSI is to “enhance both the global competitiveness of US business and the US quality of life by promoting and facilitating voluntary consensus standards and conformity assessment systems, and safeguarding their integrity” (ANSI 2009). ANSI works with international private standards organizations such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The study of national and international nongovernmental SSOs has recently become an active area of research in both economics and political science. The privatization of the telecommunications agencies of major industrialized nations reduced the role of intergovernmental organizations in the setting of telecommunications standards and increased the importance of SSOs ([|Genschel and Werle 1993]). Standards for the internet were set largely in nongovernmental bodies, mainly the Internet Engineering Task Force (IETF), although national governments and international intergovernmental organizations intervened from time to time ([|Simcoe 2007]). In recent years, a controversy has developed among legal scholars around the policies of SSOs regarding intellectual property rights. Some SSOs require their members to license patents associated with standards at royalty rates that are reasonable and nondiscriminatory (RAND); others do not. 
When patent holders charge unreasonably high royalties, that is called a “patent holdup.” If done for a number of associated patents, it is called “royalty stacking” ([|Anton and Yao 1995]; [|Lemley 2007]; [|Lemley and Shapiro 2007]; [|Sidak 2007]). Patent holdups and royalty stacking can have the effect of delaying the growth of new markets simply for the purpose of extracting payments from those who are inconvenienced.
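The arithmetic behind royalty stacking is simple but worth seeing. The per-patent rates and patent count below are hypothetical, chosen only to show how individually modest-looking royalties can accumulate into a heavy burden on a standard-compliant product.

```python
# Hypothetical illustration of royalty stacking: each per-patent rate looks
# modest on its own, but stacked across every patent reading on one
# standard they claim a large share of the product's selling price.

def stacked_royalty(rates):
    """Total royalty burden as a fraction of the selling price, assuming
    each patent holder independently charges its rate on the full price."""
    return sum(rates)

rates = [0.02] * 25          # 25 essential patents, each charging 2%
burden = stacked_royalty(rates)
print(f"{burden:.0%} of the selling price goes to royalties")  # 50%

# By contrast, a pooled or RAND-capped aggregate rate of, say, 5% would
# leave only 0.2% per patent across the same 25 patents.
per_patent_cap = 0.05 / 25
print(f"{per_patent_cap:.2%} per patent under a 5% aggregate cap")  # 0.20%
```

The gap between the stacked total and a capped aggregate rate is one way to see why RAND commitments, patent pools, and the holdup litigation discussed above matter for market growth.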

Government-Imposed Standards
Finally, governments may impose standards on market players, although they might do so only after consulting them and/or after considerable lobbying by private actors with a stake in the outcome of governmental decisions. In the United States, a variety of agencies may be involved in standard-setting activities. Below I will talk about the role of the Federal Communications Commission (FCC) in the setting of standards for digital television, but other agencies are frequently involved. The Department of Defense, for example, establishes minimum quality standards for its contractors under the military specification (MILSPEC) system. The Environmental Protection Agency (EPA) establishes air and water quality standards; the Department of Education establishes standards for education under the No Child Left Behind laws; and so forth. Sometimes public agencies combine government-mandated standards, often referred to as “command-and-control” standards, with voluntary standards in order to achieve goals that would not otherwise be attainable ([|Kollman and Prakash 2001]).

National, Regional, and International Standards
Standards may be set at national, regional, or international levels. The European Union has a strong preference for regional standards because of its desire for a single European market ([|Crane 1979]; [|Frenkel 1990]; [|Tate 2001]; [|Austin and Milner 2001]; [|Egan 2001]; [|Nicolaïdis and Egan 2001]). The creation of NAFTA has resulted in some pressure for harmonization of at least some standards across its members. Incompatibilities can still crop up on either a national or regional basis; when they do, they may be part of a larger program of protecting national or regional firms and other stakeholders. The ability of a particular nation-state to influence regional or international standards is one of the criteria used in assessing national power and prestige. Thus, US dominance in the setting of computer standards is perceived to be both an indicator and a result of US economic power ([|Kim and Hart 2002]). European successes in challenging US technology standards in, for example, TV broadcasting and cellular phones are cited as evidence of Europe's increasing power and independence from US influence ([|Lembke 2003]). The Japanese government's initial success in getting its standard for high definition television adopted as an international standard was seen as a sign of growing Japanese economic power ([|Hart 2004]). Incompatible standards may be used primarily to protect national or regional interests, as was the case in the European adoption of the PAL and SECAM standards for television broadcasting and equipment ([|Crane 1979]; [|Besen and Johnson 1986]). This incompatibility, of course, imposes a cost in that it may limit exports of goods, services, and technology to other regions. Europe opted not to do this with its second-generation cellular phone standard, GSM, and with its rejection of Europe-only standards for networks and the World Wide Web ([|Lembke 2003]).

Who Owns the Standard?
Some standards are proprietary, that is, owned, and others are not. Some standards are promulgated by public agencies, others by private firms or private standards bodies. There is also the possibility of mixed ownership, for example, when a standard is set for technologies developed by a research consortium that has both private and public participants. Many research consortia combine private and public sources of funding. Consortia often create pools of intellectual property rights and permit members to license the technologies underlying a new standard on a more favorable basis than nonmembers. It is possible for a proprietary standard to be openly available to all via licensing of the underlying technologies. There may be a simple application, certification, or fee system so that actors adopting the standard can advertise their compliance with it, even if they did not participate in the creation of the underlying technologies. Thus, even if your firm was not involved in creating the US digital television standard (ATSC), you can still produce products that are ATSC compliant and advertise them as such. Similarly, you can license the technology underlying the DVD standard even though you did not participate in developing the standard or the technologies behind it. The logic of technology platforms is closely related. A technology platform is a set of technologies and standards that must be understood and mastered by a firm before it can compete fully in markets related to the platform. Competition in high technology markets often boils down to competition over creating new platforms. Some scholars call this “architectural” competition, because the large players are competing to define the architecture, the overall design, for a new technology platform ([|Kim and Hart 2002]; [|Ernst 2005]). An example of a technology platform is the PC-compatible computer. 
Another is the MP3 player or the iPod with its attendant technologies, services, add-ons, etc. ([|Dedrick, Kramer, and Linden 2008]). Just as there is a certain amount of prestige and profit attached to being able to influence a stand-alone international standard, there is considerably more prestige and profit connected with the ability to influence technology platforms ([|Kim and Hart 2002]). Consider a non-ICT example: the hybrid automobile or hydrogen-fueled motor vehicles. Japanese firms got out ahead of the pack with hybrid technologies and thereby defined the hybrid technology platform. General Motors tried to do the same with hydrogen-fueled vehicles with financial support from the US government. Unfortunately for GM and the US, the timing was wrong. The move to hydrogen fuel was larger and more difficult than the move to hybrids: hydrogen vehicles required an entirely new distribution network for fuels. So the initiative in developing fuel-efficient vehicles, which had already shifted to Japan and East Asia, remained there with the development of hybrids.

Globalization and Global Production Networks
Standards have risen in importance not just because of prestige and profit potential, but also because of the move toward a globalized world economy. The reduction in tariff and nontariff barriers made possible by the GATT/WTO trade regime, the removal of capital controls and progressive liberalization of global capital flows, and the end of the Cold War have resulted in a more open world economy. In the globalizing world economy, firms attempt to locate parts of their value chain activities wherever costs are low and quality is high. Thus many firms have call centers and R&D centers in India, electronics manufacturing in China or elsewhere in East Asia, and engineers from the developing world working in “body shops” both at home and in foreign subsidiaries at lower wages than engineers from the industrialized world ([|Ernst 2005]). The elongation of value chains that is part and parcel of contemporary globalization would be difficult in the absence of technology standards. Tom Friedman is correct to attribute great importance to technological innovations like the internet and open-source software as “flatteners” that permit people who were previously unable to participate in the global economy (because of the difficulty of coordinating economic activities over large geographic distances) to do so ([|Friedman 2005]).

The International Politics of Specific Technology Standards Competitions
In this section of the essay, I would like to turn to an examination of some specific recent technology standards competitions as illustrative of some of the general points above. The ones I have chosen to include here are:
 * • high definition television (HDTV) and digital television (DTV);
 * • ISDN and the internet;
 * • cellular telephones;
 * • high-definition video recorders.

HDTV and DTV
HDTV and DTV standards were developed and promulgated from the early 1980s on. The first country to do so was Japan. Japan's analog HDTV standard, Hi-Vision, was at first embraced in the United States, especially by the film industry, but later rejected in favor of a digital approach. The Europeans decided to adopt their own analog HDTV standard, HD-MAC, in the late 1980s. Shortly after the US adopted the digital approach in 1993, the Europeans abandoned HD-MAC and moved to adopt their own, incompatible standard for digital television, DVB. The Japanese stuck with Hi-Vision for too long before switching to their own incompatible digital television standard, ISDB. Thus, in the case of HDTV and DTV, no international standards consensus emerged. Instead, the result was three incompatible regional standards ([|Hart 2004]).

ISDN and the Internet
Network standards competitions in the 1980s resulted in a variety of proprietary standards put forward by large mainframe computer firms like IBM, Siemens, and NCR, and efforts by the European Union to create a European standard under the ISDN banner and the so-called Open Systems Interconnection (OSI) model of networking. All the proprietary network approaches were blown away by the huge and rapid success of the internet and its TCP/IP family of standards. The victory of TCP/IP over ISDN/OSI took Europe by surprise, but the region adapted quickly and made a relatively smooth transition. The internet standards, unlike the DTV standards, became global standards ([|Hart 2004]); incompatible rivals were simply unable to compete. The internet standards were, for the most part, not proprietary: there were few barriers to adoption in the form of intellectual property or licensing fees. In addition, although there were initially problems with security and authentication (important for e-commerce) connected with the internet, users were happy with the lightness and interoperability of the system in comparison with its proprietary alternatives ([|Genschel 1997]; [|Weber 2004]; [|Drezner 2007]).

Cellular Telephones
There have been three generations of cellular telephone technology since the 1980s. The first generation was analog, the second was digital, and the third was digital with internet-like data services. Early innovators like Motorola and Ericsson dominated the first generation. Later entrants like Nokia, Samsung, and Qualcomm became influential in standard setting by the second. The US opted for an anarchic system of competing standards in both the first and second generations, while limiting competition somewhat in the third. Europe successfully promoted a unified European standard, GSM, in the second generation, but has not been able to follow that with a major success in the third. Despite incompatible standards across firms and regions, the market for cellular phones has grown rapidly, especially in the developing world, where landline telephone services are still mainly provided by monopoly providers. Third-generation cellular phones are becoming the preferred internet access point for those who can afford them, so there is considerable controversy and contestation over next-generation Web 2.0 services like social networking and interactive mobile video ([|Funk 2002, 2009]; [|Lembke 2003]).

High-Definition Video Recorders
With the rapid increase in demand for HDTV receivers following the deployment of DTV services, there was a standards competition between two incompatible high-definition video playback systems: HD-DVD and Blu-ray. HD-DVD was the child of Toshiba and its allies; Blu-ray was championed by Sony and Philips and their allies. Consumers, unable to figure out which of the two standards would prevail, delayed their purchases. Prices remained high. When a few companies made dual-standard players available, the price was too high to win over consumers. Eventually Toshiba threw in the towel, and Blu-ray emerged victorious. A notable difference between this case and the others is the lack of governmental intervention. Neither the European Union nor the US government had much at stake, and the Japanese government probably did not want to favor one Japanese firm over another. Now that the competition is over, everyone is relieved, especially the US film industry, which preferred the Blu-ray system's copy protection and digital rights management (DRM) features.

Summary and Conclusions
Technology standards have always been important in the world economy, but they are becoming more so in the electronic age. Firms compete with one another for the prestige of establishing a new standard, and especially new technology platforms or architectures, and governments (including regional regimes like the EU and NAFTA) try to get standards adopted internationally that result in more local jobs and income. This is increasingly difficult as the world economy becomes more global, but that does not stop the various players from trying. The economics of standards is important, but a full understanding of technology standards requires a combination of economic and political perspectives. The challenge for IR scholars will be to add to the overall discourse on technology standards by using their competitive advantage in studying national and international political dynamics. Unlike economists, IR scholars are not centrally concerned with the efficient operation of markets. They will continue to focus on issues that are important to their own discipline, such as the international struggle for power and the evolution of international regimes and institutions. They should join the sociologists in examining the role of nongovernmental actors in standard setting, because the suggestion of some sociologists that a global civil society is emerging is worth investigating ([|Loya and Boli 1999]). They should follow the economists in their studies of the potential abuses of intellectual property rights associated with standards, particularly in the area of patent holdups and royalty stacking, because such practices may be associated with power-seeking behavior on the part of both multinational corporations and national governments. The interdisciplinary work on global production networks and architectural competition has important implications for the evolution of international politics. 
IR scholars and political scientists should continue to contribute to the research on open standards and the new model of engineering and competition that is associated with the open source software movement. In short, the study of technology standards must become an important part of the overall agenda of IR research.

Gabriel Weimann
==== Subject [|International Studies] » [|International Communication] ==== ==== Key-Topics [|counterterrorism], [|terrorism] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
The growing presence of modern terrorism on the internet is at the nexus of two key trends: the democratization of communications driven by user-generated content on the internet, and modern terrorists' growing awareness of the internet's potential for their purposes. The internet has long been a favorite tool of terrorists. Decentralized and providing almost perfect anonymity, it cannot be subjected to control or restriction, and it allows access to anyone who wants it. Large or small, terrorist groups have their own websites, using this medium to spread propaganda, raise funds and launder money, recruit and train members, communicate and conspire, and plan and launch attacks. Al Qaeda, for example, now operates hundreds of websites, and many more appear every year. Besides websites, modern terrorists rely on e-mail, chatrooms, e-groups, forums, virtual message boards, and resources like YouTube, Facebook, and Google Earth. Fighting online terrorism raises the issue of countermeasures and their cost. Since the advent of the internet, counterterrorism and security services all over the world have seen it as both a danger and a useful instrument. Official statements have warned us of the ability of modern terrorists to use the internet both for global communications and for cyberattacks on crucial facilities and infrastructure. Recently, many security services and agencies have been focusing on monitoring the Net, tracking down the terrorists who use it, and learning from their internet messages. There are numerous attempts, some secret and some not, to apply various systems and defense mechanisms against terrorists on the internet. We will review some of these efforts and then examine their cost in terms of civil liberties.

The Theater of Terror Conceptualization
From its early days terror has combined psychological and theatrical aspects: the word “terror” comes from the Latin //terrere//, which means “to frighten” or “to scare.” During the “popular” phase of the French Revolution, the “Reign of Terror” was officially declared in September 1793; 16,000 people were guillotined, and executions of those labeled “internal enemies” of France took place throughout the country (about 20,000 to 40,000 people were killed in all). Executions were conducted before large audiences and were accompanied by sensational publicity, thus spreading the intended fear. Contemporary terrorists, however, have been exposed to new opportunities for exerting mass psychological impact as a result of technological advances in communications. During the 1970s, academic observers remarked increasingly on the theatrical proficiency with which terrorists conducted their operations. As Jenkins concluded in his analysis of international terrorism: “Terrorism is aimed at the people watching, not at the actual victims. Terrorism is a theater” ([|Jenkins 1975]:4). Modern terrorism can be understood in terms of the production requirements of theatrical engagements. Terrorists pay attention to script preparation, cast selection, sets, props, role playing, and minute-by-minute stage management. Just like compelling stage plays or ballet performances, media-oriented terrorism requires full attention to detail in order to be effective. Terrorist doctrine gradually came to recognize the potential of the mass media: acts of terrorism were more and more perceived as means of persuasion and psychological warfare, in which the victim is “the skin on a drum beaten to achieve a calculated impact on a wider audience” ([|Schmid and de Graaf 1982]:14). The most powerful and violent performance of the modern “theater of terror” was the September 11, 2001 attack on American targets. In November 2001, shortly after the 9/11 attacks, Osama bin Laden discussed the twin attacks. 
Referring to the suicide terrorists whom he called “vanguards of Islam,” bin Laden marveled, “Those young men said in deeds, in New York and Washington, speeches that overshadowed other speeches made everywhere else in the world. The speeches are understood by both Arabs and non-Arabs, even Chinese” (the quotes are taken from the translations of a videotape, presumably made in mid-November 2001 in Afghanistan). In her study “The terrorist calculus behind 9–11,” [|Nacos (2003)] argued that bin Laden revealed that he considered terrorism first and foremost as a vehicle to dispatch messages – “speeches” in his words – and, with respect to the events of September 11, 2001, he concluded that Americans in particular had heard and reacted to the intended communication. The psychological impact on the targeted population was not lost on bin Laden and his associates. In commenting on the impact of the terror attack on the American enemy, the al Qaeda leader remarked with obvious satisfaction, “There is America, full of fear from north to south, from west to east. Thank God for that.” Moreover, by striking hard at America, argues [|Nacos (2003)], the terrorists forced the mass media to explore their grievances in ways that by far transcended the quantity and narrow focus of the precrisis coverage. Media coverage of Islam-related issues changed in a rather dramatic fashion after al Qaeda's attacks on September 11 when the US media tried to answer the question that President Bush had posed in his speech before a joint session of the US Congress: Why do they hate us? In the process, the perpetrators of the violence achieved perhaps their most important media-dependent goal, namely to publicize their causes, grievances, and demands.

The Terrorist Production
The emergence of media-oriented terrorism led several communication and terrorism scholars to reconceptualize modern terrorism within the framework of symbolic communication production: “As a symbolic act, terrorism can be analyzed much like other media of communication, consisting of four basic components: transmitter (the terrorist), intended recipient (target), message (bombing, ambush) and feedback (reaction of target audience)” ([|Karber 1971]:529). Dowling suggested applying the concept of “rhetoric genre” to modern terrorism, arguing that “terrorists engage in recurrent rhetorical forms that force the media to provide the access without which terrorism could not fulfill its objectives” (1986:14). Some terrorist events become what Bell has called “terrorist spectaculars” (1978:50), which can best be analyzed through the “media event” conceptualization (for a comparative analysis of media events and terrorist spectaculars, see [|Weimann 1987]). The growing importance attributed to publicity and the mass media by terrorist organizations was revealed both in the diffusion of media-oriented terrorism ([|Weimann and Winn 1994]; [|Nacos 2002]) and in the tactics of modern terrorists, who have become more media-minded. It is clear that media-wise terrorists plan their actions with the media as a major consideration. They select targets, locations, and timing according to media preferences, trying to satisfy the media's criteria for newsworthiness, media timetables, and deadlines. They prepare visuals for the media, such as video clips of their actions, taped interviews and declarations by the perpetrators, films, and press releases or video news releases (PRs or VNRs). Modern terrorists feed the media, directly and indirectly, with their propaganda material, often disguised as news items. They also monitor the coverage, examining closely the reporting of various media organizations. 
The pressure of terrorists on journalists takes many forms, from open and friendly hosting to direct threats, blackmail, and even the killing of journalists. Finally, terrorist organizations operate their own media: television channels (Hezbollah's Al-Manar), news agencies, newspapers and magazines, radio channels, video and audio cassettes, and – most recently – terrorist websites on the internet.

The New Arena: Terror on the Internet
Postmodern terrorists are taking advantage of the fruits of globalization and modern technology – especially the most advanced communication technologies – to plan, coordinate, and execute their deadly campaigns. No longer geographically constrained within a particular territory, or politically or financially dependent on a particular state, they rely on advanced communication, including the internet. Terrorism and the internet have been related in two ways. First, the internet has become a forum for both terrorist groups and individual terrorists to spread their messages of hate and violence, to communicate with one another and with their supporters and sympathizers, and even to launch psychological warfare against their enemies. Second, individuals and groups have tried to attack computer networks, including those on the internet – what has become known as cyberterrorism or cyberwarfare. At this point, terrorists are using and abusing the internet and benefiting from it more than they are attacking it. The network of computer-mediated communication (CMC) is ideal for terrorists-as-communicators: it is decentralized, it cannot be subjected to control or restriction, it is not censored, and it allows free access to anyone who wants it. The structure of modern terrorist organizations makes computer-mediated communication even more important and useful for them. The loosely knit network of cells, divisions, and subgroups typical of modern terrorists makes the internet an ideal and necessary tool for intergroup and intragroup networking. 
The rise of virtually networked terrorist groups is part of a broader shift to what Arquilla and Ronfeldt (2001a; 2001b) have called “Netwar”:

> Netwar refers to an emerging mode of conflict and crime at societal levels, involving measures short of traditional war in which the protagonists are likely to consist of dispersed, small groups who communicate, coordinate, and conduct their campaigns in an internetted manner, without a precise central command. Netwar differs from modes of conflict in which the actors prefer formal, stand-alone, hierarchical organizations, doctrines, and strategies, as in past efforts, for example, to build centralized revolutionary movements along Marxist lines ([|Arquilla, Ronfeldt, and Zanini 2001]:47).

Websites are only one of the internet's services used by modern terrorism; there are many other facilities on the Net – email, chat rooms, e-groups, forums, virtual message boards – that are used more and more by terrorists. Many of the terrorist websites are used for psychological campaigns against enemy states and their military forces. The messages, verbal and graphic, attempt to demoralize and scare the enemy or to create feelings of guilt, doubt, and inner division. Terrorists use the internet to post frightening footage of executions, beheadings, sniper killings, and deadly bombings in order to intimidate the enemy's troops. They also use the Net to deliver threats and messages to enemy governments and populations. The current literature offers a profuse array of works describing the ways in which terrorists use the internet (e.g., [|Whine 1999]; [|Crilley 2001]; [|Hosenball, Hirsh, Soloway, and Flynn 2002]; [|T.L. Thomas 2002; 2003]; [|Gerstenfeld, Grant, and Chiang 2003]; [|Weimann 2004; 2006a]; [|Rosenau 2005]; [|Zanini and Edwards 2005]; [|Conway 2006a; 2006b]; [|Hoffman 2006]; [|Kohlmann 2006]; [|Lachow and Richardson 2007]; [|Freiburger and Crane 2008]).

The Advantages of the Internet for Modern Terrorism
The great virtues of the internet – ease of access, lack of regulation, vast potential audiences, fast flow of information, multimedia applications and so forth – have been converted into the advantage of groups committed to terrorizing societies to achieve their goals. The internet takes very little skill to use, has few regulations, provides a worldwide audience to whom information can be sent quickly at a low cost, and allows for anonymity of the user ([|Whine 1999]; [|Weimann 2004; 2006a]; [|Lachow and Richardson 2007]). These design elements allow terrorists to engage in their activities with minimal risks ([|Whine 1999]; [|Weimann 2004; 2006b]). The anonymity offered by the internet is very attractive for modern terrorists ([|Rogers 2003]). The internet provides this anonymity as well as easy access from everywhere with the option to post messages, to email, to upload or download information – and to disappear into the dark. When American forces in Afghanistan shut down al Qaeda's camps, the terror group moved its base of operations to the internet. The internet has become a valuable tool for the terrorist organization, not just to coordinate operations and launch attacks, but also as a virtual training camp and a tool for indoctrination and recruitment. In reality, the internet became for al Qaeda what experts call an “online terrorism university.” More than 300 new pages of al Qaeda-related manuals, instructions and rhetoric are published on the internet every month. “It is not necessary … for you to join in a military training camp, or travel to another country … you can learn alone, or with other brothers, in [our online] preparation program,” announced al Qaeda leader Abu Hadschir Al Muqrin. Paradoxically, the very decentralized network of communication that the US security services created (out of fear of the Soviet Union) now serves the interests of the greatest foe of the West's security services since the end of the Cold War: international terror. 
The roots of the modern internet are to be found in the early 1970s, during the days of the Cold War, when the US Department of Defense was concerned with reducing the vulnerability of its communication networks to nuclear attack. The Defense Department decided to decentralize the entire system by creating an interconnected web of computer networks. After twenty years of development and use by academic researchers, the internet quickly expanded and changed its character when it was opened up to commercial users in the late 1980s. By the mid-1990s, the internet connected more than 18,000 private, public, and national networks, with the number increasing daily. Hooked into those networks were about 3.2 million host computers and perhaps as many as 60 million users spread across all seven continents. In 2005, the Net passed a dramatic milestone: the one-billionth user went online. According to Morgan Stanley estimates, 36% of internet users are now in Asia and 24% are in Europe. Only 23% of users are in North America, where it all started. It took 36 years for the internet to gain its first billion users; with use growing by roughly 18% per year, the estimated population of internet users reached 1.5 billion by January 2009. This decentralized, uncensored network is, as noted above, ideal for terrorists-as-communicators. Moreover, the structure of modern terrorist organizations is in many ways compatible with the structure of the internet: the loosely knit network of cells, divisions, and subgroups typical of modern terrorist groups takes full advantage of the internet for intergroup and intragroup networking. Al Qaeda, for example, has shown itself to be a remarkably nimble and adaptive entity, mainly due to its decentralized structure ([|Hoffman 2003]). 
By its very nature, the internet is in many ways an ideal arena for the activities of terrorist organizations. Most notably, it offers: These advantages have not gone unnoticed by terrorist organizations, no matter what their political orientation. Islamists and Marxists, nationalists and separatists, fundamentalists and extremists, racists and anarchists: all find the internet alluring. Today, all active terrorist organizations maintain websites, and many maintain more than one website and use several different languages. As the following illustrative list shows, these organizations and groups come from all corners of the globe and they all are not active on the Net: In July 2004 the independent National Commission on Terrorist Attacks upon the United States (the 9/11 Commission) released its findings in a 570-page report. The report points to the use of modern communication technologies for planning and execution of the 9/11 attacks: “Terrorists, in turn, have benefited from this same rapid development of communication technologies.” The importance of the internet, and its uses by al Qaeda for the attacks, was noted, too: > The emergence of the World Wide Web has given terrorists a much easier means of acquiring information and exercising command and control over their operations. The operational leader of the 9/11 conspiracy, Mohamed Atta, went online from Hamburg, Germany, to research U.S. flight schools. Targets of intelligence collection have become more sophisticated. These changes have made surveillance and threat warning more difficult > (National Commission on Terrorist Attacks, //The 9/11 Commission Report// 2004:88). The report highlights the uses of the internet by the al Qaeda operatives, including searching the Web for information on US flight schools (p. 157), using internet communications (p. 157), equipping the hijackers with email accounts (p. 529, note 140), coordinating the attackers’ actions using email (p. 
530, note 152), downloading anti-American webpages (p. 221), and gathering flight information from the internet (p. 222). Many of the terrorists on the Net belong to radical Islamist groups and organizations. Paradoxically, it is those who criticize and attack Western modernity, technology, and media who are using the West's most advanced modern medium, the internet. This should come as no surprise after the publication of several studies and especially of Gary Bunt's books //Virtually Islamic, Islam in the Digital Age//, and //iMuslims: Rewiring the House of Islam//. Bunt's research is a detailed description of the diverse manifestations of the Islamic presence online. He suggests that there has been a significant redirection of resources into the Net by Islamic organizations that adapted to the digital age, preferring the Net over traditional channels of communication. This trend is reflected in the volume of militant Islamic materials online and in the growing sophistication of Islamic websites. For example, the presentation of video clips and audio broadcasts on Islamic sites applies some of the most recent developments in computer technology. Bunt argues that “the Islamic Internet landscape changes frequently, with new sites emerging on a daily basis. Some very proactive players change their content and format regularly, attempting to draw readers to their message(s) in order to establish links or a sense of community” ([|Bunt 2000]:10). Chat rooms are often unregulated and unmonitored by scholars and clerics, can provide a virtual hangout for teenage and young adult Muslims, and are sometimes rife with anti-//kuffar// (-nonbeliever) sentiment. Bunt concludes, “The Internet is clearly important in disseminating a broad range of Islamic political-religious opinions and concerns to a global audience. Thus, many extremist Islamist activists and terrorists now see the Internet as a vital tool” ([|Bunt 2000]:14). 
According to Bunt's latest book ([|2009]), the internet has profoundly shaped how Muslims perceive Islam, and how Islamic societies and networks are evolving and shifting within the twenty-first century. While these electronic interfaces appear new and innovative in terms of how the media is applied, much of their content has a basis in classical Islamic concepts, with an historical resonance that can be traced back to the time of the Prophet Muhammad. Monitoring terrorist presence on the Net revealed thousands of terrorist websites. While in the late 1990s, there were merely a dozen terrorist websites; by 2000 virtually all terrorist groups had established their presence on the internet and in 2003 there were over 2,600 terrorist websites ([|Weimann 2004]). The number rose dramatically and by 2006 there were over 5,600 websites serving terrorists and their supporters ([|Weimann 2006a; 2006b]) and the recent estimates are close to 8,000 websites.
The internet offers terrorist groups a range of advantages:
 * easy access;
 * little or no regulation, censorship, or other forms of government control;
 * potentially huge audiences spread throughout the world;
 * anonymity of communication;
 * fast flow of information;
 * interactivity;
 * inexpensive development and maintenance of a Web presence;
 * a multimedia environment (the ability to combine text, graphics, audio, and video and to allow users to download films, songs, books, posters, and so forth);
 * the ability to shape coverage in the traditional mass media, which increasingly use the internet as a source for stories.
The terrorist groups with an established online presence span the globe:
 * //From the Middle East//, Hamas (the Islamic Resistance Movement), the Lebanese Hezbollah (Party of God), the Al Aqsa Martyrs Brigades, Fatah Tanzim, the Popular Front for the Liberation of Palestine (PFLP), the Palestinian Islamic Jihad, the Kahane Lives movement, the People's Mujahedin of Iran (PMOI – Mujahedin-e Khalq), the Kurdistan Workers’ Party (PKK), the Turkish-based Popular Democratic Liberation Front Party (DHKP/C), and the Great East Islamic Raiders Front (IBDA-C), which is also based in Turkey.
 * //From Europe//, the Basque ETA movement, Armata Corsa (the Corsican Army), the Real Irish Republican Army (RIRA), and various groups associated with al Qaeda.
 * //From Latin America//, Peru's Tupac Amaru (MRTA) and Shining Path (Sendero Luminoso), the Colombian National Liberation Army (ELN-Colombia), and the Revolutionary Armed Forces of Colombia (FARC).
 * //From Asia//, al Qaeda, the Japanese Supreme Truth (Aum Shinrikyo), Ansar al Islam (Supporters of Islam) in Iraq, the Japanese Red Army (JRA), Hizb-ul Mujehideen in Kashmir, the Liberation Tigers of Tamil Eelam (LTTE), the Islamic Movement of Uzbekistan (IMU), the Moro Islamic Liberation Front (MILF) in the Philippines, the Pakistan-based Lashkar-e-Toiba, and the rebel movement in Chechnya.

How Terrorists Use the Internet
Today, all terrorist organizations, large or small, have their own websites ([|Weimann 2004; 2006a]; [|Hoffman 2006]). They use this medium to spread propaganda, raise funds and launder money, recruit and train members, communicate and conspire, and launch attacks while governments are trying to counter and catch them using traditional means ([|Vatis 2001]; [|Conway 2002; 2006a; 2006b]; [|Thomas 2003]; [|Weimann 2004; 2006a]; [|Coll and Glasser 2005]; [|Glasser and Coll 2005]; [|Swartz 2005]; [|Talbot 2005]; [|Cronin 2006]; [|Labi 2006]; [|Lynch 2006]; [|Rogan 2006]). Terrorism and the internet are related in several ways. First, the internet has become a forum for terrorist groups and individual terrorists both to spread their messages of hate and violence and to communicate with one another and with sympathizers. Second, individuals and groups may attack computer networks, including those on the internet, in what has become known as cyberterrorism or cyberwarfare. At this point, terrorists are using the internet for propaganda and communication more than they are attacking it, but future terrorists may indeed see greater potential for cyberterrorism than do the terrorists of today. Cyberterrorism may also become more attractive as the real and virtual worlds become more closely coupled. Unless these systems are carefully secured, conducting an online operation that physically harms someone may be as easy tomorrow as penetrating a website is today. Websites are only one of the internet's services used by modern terrorism; many other facilities on the Net – email, chat rooms, e-groups, forums, online magazines, virtual message boards – are used more and more by terrorists. For example, according to Katz and Devon:
> Yahoo! has become one of al Qaeda's most significant ideological bases of operation. Utilizing several facets of Yahoo!'s service, including chat functions, e-mail, and most importantly, Yahoo! Groups, al Qaeda and its supporters have inserted themselves like a cancer into a company that screams, “American pop culture,” and made it as much their own as a training camp in Khost…. Creating a Yahoo! Group is free, quick, and extremely easy, and al Qaeda and its supporters have wasted no time in starting up several Yahoo! Groups with topics related to the terrorist group and the downfall of Western civilization. Very often, the groups contain the latest links to jihadist websites, serving as a jihadist directory, and are sometimes the first to post al Qaeda communiqués to the public. ([|Katz and Devon 2003]:1)

More recently, uploading, downloading, and viewing videotapes and segments has become very popular. YouTube was established in February 2005 as an online repository facilitating the sharing of video content and claims to be “the world's most popular online video community.” A 2007 report from the Pew Internet and American Life Project put the percentage of US online video viewers using YouTube at 27%, ahead of all other video-sharing sites. In the 18–29 year old age group, this lead is even more pronounced, with 49% of US online video viewers using YouTube. In fact, //CNNMoney// reported that, in January 2008 alone, nearly 79 million users worldwide viewed more than three billion YouTube videos. Terrorist groups realized the potential of this easily accessed platform for the dissemination of their propaganda and radicalization videos. Terrorists themselves have praised the usefulness of this new online apparatus: “A lot of the funding that the brothers are getting is coming because of the videos. Imagine how many have gone after seeing the videos. Imagine how many have become shahid [martyrs],” convicted terrorist Younis Tsouli (the so-called “Irhabi007”) testified. Hezbollah, Hamas, al Qaeda and its numerous affiliates, the LTTE, and the Shining Path of Peru all have propaganda videos on YouTube. 
In 2008, Hamas allegedly launched its own video-sharing website, although the group denied ownership of the site. AqsaTube, in addition to choosing a similar name, was designed to look just like YouTube and even copied its logo. Once certain internet providers refused to host the website, Hamas launched PaluTube and TubeZik, while the Tamil Tigers have launched TamilTube. These videos are not aimed only at Middle Eastern Muslim youths: more recent videos posted on these video-sharing websites are dubbed in English or have English subtitles. A study conducted by [|Conway and McInerney (2008)] analyzed the online supporters of jihad-promoting video content on YouTube, focusing on those posting and commenting upon martyr-promoting material from Iraq. The findings suggest that a majority are under 35 years of age and resident outside the Middle East and North Africa, with the largest percentage of supporters located in the United States. As the researchers concluded:
> What is clearly evident however is that jihadist content is spreading far beyond traditional jihadist websites or even dedicated forums to embrace, in particular, video sharing and social networking – both hallmarks of Web 2.0 – and thus extending their reach far beyond what may be conceived as their core support base in the Middle East and North Africa region to diaspora populations, converts, and political sympathizers.

Recent studies have identified numerous, albeit sometimes overlapping, ways in which contemporary terrorists use the internet. Some of these parallel the uses to which everyone puts the internet – information gathering, for instance. Some resemble the uses made of the medium by traditional political organizations – for example, raising funds and disseminating propaganda. Others, however, are much more unusual and distinctive – for instance, hiding instructions, manuals, and directions in coded messages or encrypted files. 
The various uses of the Net by modern terrorists may be grouped into two broad categories: communicative uses and instrumental uses.

The Communicative Uses of the Internet by Terrorism
The internet has significantly expanded the opportunities for terrorists to secure publicity. Until the advent of the internet, terrorists’ hopes of winning publicity for their causes and activities depended on attracting the attention of television, radio, or the print media. The fact that terrorists themselves have direct control over the content of their websites offers further opportunities to shape how they are perceived by different target audiences and to manipulate their image and the images of their enemies. Most terrorist sites do not celebrate their violent activities. Instead – regardless of their nature, motives, or location – most terrorist sites emphasize two issues: the restrictions placed on freedom of expression; and the plight of their comrades who are now political prisoners. These issues resonate powerfully with their own supporters and are also calculated to elicit sympathy from Western audiences that cherish freedom of expression and frown on measures to silence political opposition. A common element on the terror sites is the organization's communiqués and the speeches and writings of its leaders, founders, and ideologists. The sites often present a word-for-word series of official statements by the organizations, which the visitor can browse through, along with selected announcements arranged by date. They tend to recycle materials distributed in the past through the mass media and other communication means. Some terrorist sites house a veritable online “gift shop” through which visitors can order and purchase books, video and audiocassettes, stickers, printed shirts, and pins with the organization's badges. Who are the targeted audiences of these sites? Are they appealing to current and potential supporters, to the international community, or to their enemies (namely the public who is part of the opposing sociopolitical community in the conflict)? 
An analysis of their contents indicates an attempt to approach all three audiences ([|Conway 2002; 2006a; 2006b]; [|Tsfati and Weimann 2002]). The slogans and text at these sites or portions of the site appeal strongly to the supporter public. Of course, the sites in local languages (especially Arabic) target these audiences more directly than do the English and other language versions. These pages include much more detailed information about recent activities of the organizations and elaborate in detail about internal politics and relationships between local groups. Reaching out to supporters is also evinced by the fact that the sites offer appropriate items for sale, including printed shirts, badges, flags, and video and audio cassettes. But an important target audience, in addition to supporters of the organizations, is the international “bystander” public – surfers who are not involved in the conflict. This is evident from the presentation of basic information about the group, the leaders, and the extensive historical background material (with which the supporter public is presumably familiar). Most of the sites offer versions in several languages in order to enlarge their international audience. The sites make use of English in addition to the local language of the organization's supporters. Judging from the content of many of the sites, one might also infer that journalists constitute another bystander target audience. Press releases by the organizations are often placed on the websites. The detailed background information might also be useful for international reporters. Approaches to the “enemy” audiences are not as clearly apparent from the content of many sites. However, on some sites the desire to reach this audience is evident in the efforts to demoralize the enemy or to create feelings of guilt. 
The jihadists try to utilize their websites to change public opinion in their enemies’ states, to weaken public support for the governing regime, to stimulate public debate and, of course, to demoralize the enemy. The internet is used by them to deliver threats and messages to enemy governments and enemy populations. The internet can also be used to harm the credibility of enemy media, enemy officials, and the establishment. In this case the target audience is the enemy population, but the attack is on the credibility of its official media. The internet also grants terrorists a cheap and efficient means of networking. Many terrorist groups, among them Hamas and al Qaeda, have undergone a transformation from strictly hierarchical organizations with designated leaders to affiliations of semi-independent cells that have no single commanding hierarchy. Through the internet, these loosely interconnected groups are able to maintain contact with one another – and with members of other terrorist groups. The internet connects not only members of the same terrorist organizations but also members of different groups. For instance, dozens of sites supporting terrorism in the name of jihad permit terrorists in places as far removed from one another as Chechnya and Malaysia to exchange ideas and practical information about how to build bombs, establish terror cells, and carry out attacks. The use of the internet by modern terrorists is also a key ingredient in the concept of terrorism as psychological warfare. “Cyber-fear,” argues [|Thomas (2003)]:
> is generated by the fact that what a computer attack //could// do (i.e., bring down airliners, ruin critical infrastructure, destroy the stock market, reveal state secrets, etc.) is too often associated with what //will// happen … It is clear that the Internet empowers small groups and makes them appear much more capable than they might actually be, even turning bluster into a type of virtual fear. The net allows terrorists to amplify the consequences of their activities with follow-on messages and threats directly to the population at large, even though the terrorist group may be totally impotent. In effect, the Internet allows a person or group to appear to be larger or more important or threatening than they really are.

A terrifying example is the way that Pakistani captors used the Net to entrap the Jewish-American reporter Daniel Pearl through false email communications, kidnap and murder him, and then post the gruesome video on the internet. This pattern was later repeated by Abu Mussab al Zarqawi and the insurgents in Iraq, who beheaded numerous hostages and posted the videotaped executions online.

Practical Uses of the Internet by Terrorism
In addition to communicative uses of the internet, terrorists use the medium for instrumental purposes. The internet may serve terrorists as an excellent source of useful information. The World Wide Web alone offers about a billion pages of information, much of it free – and much of it of interest to terrorist organizations. Terrorists, for instance, can learn from the internet about the schedules and locations of targets such as transportation facilities, nuclear power plants, public buildings, airports and ports, and even counterterrorism measures. Dan Verton, in his book //Black Ice: The Invisible Threat of Cyber-Terrorism// (2003), explains that “al Qaeda cells now operate with the assistance of large databases containing details of potential targets in the US. They use the internet to collect intelligence on those targets, especially critical economic nodes, and modern software enables them to study structural weaknesses in facilities as well as predict the cascading failure effect of attacking certain systems.” Numerous tools are available to facilitate such data collection, often called data mining, including search engines, email distribution lists, and chat rooms and discussion groups. Many websites offer their own search tools for extracting information from databases on their sites. Word searches of online newspapers and journals can likewise generate useful information for terrorists; some of this information may also be available in the traditional media, but online searching capabilities allow terrorists to capture it anonymously and with very little effort or expense. According to Secretary of Defense Donald Rumsfeld, speaking on January 15, 2003, an al Qaeda training manual recovered in Afghanistan tells its readers, “Using public sources openly and without resorting to illegal means, it is possible to gather at least 80 percent of all information required about the enemy.” Without recruitment, terrorism cannot prevail, survive, or develop. 
Recruitment provides the killers, the suicide bombers, the kidnappers, the executioners, the engineers, the soldiers, the armies of future terrorism. The internet has become a useful instrument for modern terrorists’ recruitment ([|Weimann 2005]). The internet combines several advantages for the recruiters: it makes information gathering easier for potential recruits by offering more information, more quickly, and in multimedia format; the global reach of the Net allows groups to publicize events to more people; and by increasing the possibilities for interactive communication, new opportunities for assisting groups are offered, along with more chances for contacting the group directly. Online recruitment by terrorist organizations is said to be widespread, though the internet is used more for initial attraction, ideological recruitment, and social support than for direct recruitment. Moreover, the online process is more often activated to reward recruits and suicide terrorists, thus serving as an additional indirect recruitment initiative. Finally, terrorist recruiters may use interactive internet technology to roam online chat rooms looking for receptive members of the public, particularly young people, using sophisticated profiling procedures. Terrorists use the internet to set up and activate virtual training camps: they use online communications to provide information to fellow terrorists, including maps, photographs, directions, codes, and technical details of how to use explosives ([|Weimann 2006b]). The Net is home to dozens of sites that provide information on how to build chemical and explosive weapons. Many of these sites post the //Terrorist's Handbook// and //The Anarchist Cookbook//, two well-known manuals that offer detailed instructions on how to construct a wide range of bombs. 
Another manual, //The Mujahadeen Poisons Handbook//, written by Abdel-Aziz in 1996 and “published” on the official Hamas website, details in 23 pages how to prepare various homemade poisons, poisonous gases, and other deadly materials for use in terrorist attacks. Terrorists use the internet not only to learn how to build bombs and use arms but also to plan and coordinate specific attacks. Al Qaeda operatives relied heavily on the internet in planning and coordinating the September 11 attacks. Thousands of encrypted messages that had been posted in a password-protected area of a website were found by federal officials on the computer of arrested al Qaeda terrorist Abu Zubaydah, who reportedly masterminded the September 11 attacks. The first messages found on Zubaydah's computer were dated May 2001, and the last were sent on September 9, 2001. The frequency of the messages was highest in August 2001. To preserve their anonymity, the al Qaeda terrorists used the internet in public places and sent messages via public email. It is often simple to use the internet in public facilities without being traced or identified; at many public libraries, //hawalas// (store-front money exchanges) or internet cafes, terrorists and their followers can access the internet without presenting identification. Finally, like many other political organizations, terrorist groups use the internet to raise funds. Al Qaeda, for instance, has always depended heavily on donations, and its global fundraising network is built upon a foundation of charities, nongovernmental organizations and other financial institutions that use websites and internet-based chat rooms and forums to solicit and gather funds. The fighters in the Russian breakaway republic of Chechnya have likewise used the internet to publicize the numbers of bank accounts to which sympathizers can contribute. 
According to [|Thomas (2003)], the internet is also used “to put together profiles”: internet user demographics (culled, for instance, from personal information entered on online questionnaires and order forms) allow terrorists to identify users with sympathy for a particular cause or issue. These individuals are then asked to make donations, typically through emails sent by a front group (i.e., an organization broadly supportive of the terrorists’ aims but operating publicly and legally and usually having no direct ties to the terrorist organization).

Cyberterrorism
The threat posed by cyberterrorism has grabbed the attention of the mass media, the security community, and the information technology (IT) industry. Journalists, politicians, and experts in a variety of fields have popularized a scenario in which sophisticated cyberterrorists electronically break into computers that control dams or air traffic control systems, wreaking havoc and endangering not only millions of lives but national security itself. Because most critical infrastructure in Western societies is networked through computers, the potential threat from cyberterrorism is, to be sure, very alarming. Hackers, although not motivated by the same goals that inspire terrorists, have demonstrated that individuals can gain access to sensitive information and to the operation of crucial services. Terrorists, at least in theory, could thus follow the hackers’ lead and then, having broken into government and private computer systems, cripple or at least disable the military, financial, and service sectors of advanced economies. The growing dependence of our societies on information technology has created a new form of vulnerability, giving terrorists the chance to approach targets that would otherwise be utterly unassailable, such as national defense systems and air traffic control systems. The more technologically developed a country is, the more vulnerable it becomes to cyberattacks against its infrastructure. What should be considered as cyberterrorism? [|Dorothy Denning (2000; 2001; 2002)] has put forward an unambiguous definition in numerous articles:
> Cyberterrorism is the convergence of cyberspace and terrorism. It refers to unlawful attacks and threats of attacks against computers, networks and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives. 
> Further, to qualify as cyberterrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear. Attacks that lead to death or bodily injury, explosions, or severe economic loss would be examples. Serious attacks against critical infrastructures could be acts of cyberterrorism, depending on their impact. Attacks that disrupt nonessential services or that are mainly a costly nuisance would not.

It is important to distinguish between cyberterrorism, hacking, and “hacktivism,” a term coined by scholars to describe the marriage of hacking with political activism. Hacktivism, although politically motivated, does not constitute cyberterrorism. Hacktivists do want to protest and disrupt; they do not want to kill or maim or terrorize. However, hacktivism does highlight the threat of cyberterrorism: the potential that individuals with no moral restraint may use methods similar to those developed by hackers to wreak havoc. Moreover, the line between cyberterrorism and hacking or hacktivism may sometimes blur, especially if terrorist groups are able to recruit or hire computer-savvy hacktivists or if hacktivists decide to escalate their actions by attacking the systems that operate critical elements of the national infrastructure, such as electric power networks and emergency services. Why are hackers seen as threatening, and why are they often associated with terrorism? First, because the hackers themselves like to exaggerate their abilities. Douglas Thomas, a professor at the University of Southern California, spent seven years studying computer hackers in an effort to understand better who they are and what motivates them. According to Thomas:
> Hacking stories make good copy, but they are very rarely accurate, tending to exaggerate threats and downplay the realities of the event. 
> There is a big difference between hacking into NASA's central control system (which has //not// happened) and hacking into the server that hosts their web page (which has happened repeatedly). Most media reports fail to distinguish between the two (or to explain that hacking a web page is essentially the same as spray painting a billboard, posing very little actual risk). ([|Thomas 2002])

So how real is the threat of cyberterrorism? Cyberterrorism conjures up images of vicious terrorists unleashing catastrophic attacks against computer networks, wreaking havoc and paralyzing nations. This is a frightening scenario, but how likely is it to occur? Could terrorists cripple critical military, financial, and service computer systems? The vulnerability of the energy industry is at the heart of //Black Ice: The Invisible Threat of Cyber-Terror//, a book written by former intelligence officer [|Dan Verton (2003)]. Verton argues that America's energy sector would be the first domino to fall in a strategic cyberterrorist attack against the United States. He explores in frightening detail how the impact of such an attack could rival, or even exceed, the consequences of a more traditional, physical attack. Verton claims that during any given year, the average large utility company experiences about one million cyberintrusions that require investigation to ensure that critical system components have not been compromised. Amid all the dire warnings and alarming statistics that the subject of cyberterrorism generates, it is important to remember one simple statistic: so far, there has been no recorded instance of a terrorist cyberattack on US public facilities, transportation systems, nuclear power plants, power grids, or other key components of the national infrastructure. Cyberattacks are common, but they have not been conducted by terrorists and they have not sought to inflict the kind of damage that would qualify them as cyberterrorism. 
As Joshua Green reports in “The Myth of Cyberterrorism” (2002), when US troops recovered al Qaeda laptops in Afghanistan, officials were surprised to find its members more technologically adept than previously believed. They discovered structural and engineering software, electronic models of a dam, and information on computerized water systems, nuclear power plants, and US and European stadiums. But the evidence did //not// suggest that al Qaeda operatives were planning cyberattacks, only that they were using the internet to communicate and coordinate physical attacks. Neither al Qaeda nor any other terrorist organization appears to have tried to stage a serious cyberattack. As Denning concluded (2002), “At least for now, hijacked vehicles, truck bombs, and biological weapons seem to pose a greater threat than cyber terrorism. However, just as the events of September 11 caught us by surprise, so could a major cyber assault. We cannot afford to shrug off the threat.” Still, there is growing evidence that modern terrorists are seriously considering adding cyberterrorism to their arsenal. [|Verton (2003)], for example, argues that “al Qaeda [has] shown itself to have an incessant appetite for modern technology,” and provides numerous citations from bin Laden and other al Qaeda leaders that show their recognition of this new cyberweapon. Paradoxically, success in “the war on terror” is likely to make terrorists turn increasingly to unconventional weapons such as cyberterrorism. Furthermore, the next generation of terrorists is now growing up in a digital world, one in which hacking tools are sure to become more powerful, simpler to use, and easier to access. “While bin Laden may have his finger on the trigger, his grandchildren may have their fingers on the computer mouse,” remarked Frank Cilluffo of the Office of Homeland Security in a statement that has been widely cited. 
The notion of “coupled” attacks, or the use of “magnifiers” (combining conventional strikes with cyberattacks), is the most alarming: for instance, a terrorist group might simultaneously explode a bomb at a train station and launch a cyberattack on the communications infrastructure, thus compounding the destructive impact of the event. The challenge before us is to assess what needs to be done to address this ambiguous but potentially grave threat of cyberterrorism, and to do so without inflating its real significance or manipulating the fear it inspires.

The Challenge: Online Counterterrorism
Counterterrorism on the Net is certainly lagging behind the terrorists’ manipulative use of this medium. Given the growth of internet research in recent years, it is rather surprising that research on online countermeasures has been overlooked, or at least has not yet produced an efficient strategy or effective tactics. Several factors combine to explain this gap: (a) difficulties in tracking and tracing cyber communications, (b) the lack of globally accepted processes and procedures for the investigation and prevention of cyberterrorism, and (c) inadequate or ineffective information sharing systems between the public and private sectors, among governments, and among counterterrorism agencies ([|Westby 2006]). But the technological reasons are marginal when compared with the legal problems. Responding to terrorist websites is an extremely sensitive and delicate issue, since most of the rhetoric disseminated on the internet is considered protected speech under the First Amendment. The case of Carnivore may illustrate the problematic state of online countermeasures. In February 1998, Attorney General Janet Reno unveiled plans to establish a new FBI command center to fight “cyberattacks” against the nation's critical computer networks. In October 2001 the US House of Representatives approved an antiterrorism bill that gave law enforcement officials expanded surveillance powers to monitor internet behavior and email. In the immediate aftermath of the 9/11 attacks, FBI agents were already visiting the offices of internet service providers (ISPs), network providers, and email vendors around the country in search of those who perpetrated the attacks. The tool they used to conduct that investigation was the controversial email surveillance system, Carnivore. The system forces internet service providers to attach a black box to their networks – essentially a powerful computer running specialized software – through which all of their subscribers’ communications flow. 
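The basic operation of such a black box – tapping a stream of all subscribers’ communications and retaining only what matches a designated target – can be sketched in a few lines. The following is a purely hypothetical illustration: the record layout, field names, and function names are invented for this example and bear no relation to Carnivore's actual software.

```python
# Hypothetical sketch of per-target filtering over a captured stream of
# communications. All field and function names are invented for
# illustration; this is NOT Carnivore's actual design or interface.

def emails_for_account(stream, account):
    """Retain complete email records sent to or from one target account."""
    return [rec for rec in stream
            if rec["kind"] == "email" and account in (rec["to"], rec["from"])]

def headers_for_account(stream, account):
    """Retain only the TO/FROM headers (pen-register style) for one account."""
    return [{"to": rec["to"], "from": rec["from"]}
            for rec in emails_for_account(stream, account)]

def traffic_for_ip(stream, ip):
    """Retain all records whose source or destination is one IP address."""
    return [rec for rec in stream if ip in (rec["src"], rec["dst"])]

# A tiny mock stream: several subscribers' traffic flowing through the tap.
stream = [
    {"kind": "email", "to": "target@mail.example", "from": "x@mail.example",
     "src": "10.0.0.2", "dst": "10.0.0.7", "body": "..."},
    {"kind": "email", "to": "y@mail.example", "from": "z@mail.example",
     "src": "10.0.0.3", "dst": "10.0.0.7", "body": "..."},
    {"kind": "web", "url": "http://example.org/",
     "src": "10.0.0.2", "dst": "10.0.0.9"},
]

# Only the first record matches the target account and is retained.
intercepted = emails_for_account(stream, "target@mail.example")
```

Even this toy version makes the civil-liberties concern discussed below visible: to find its one match, the filter must read every record in the stream, including those belonging to entirely innocent users, before discarding them.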
In traditional wiretaps, the government is required to minimize its interception of nonincriminating – or innocent – communications. But Carnivore does just the opposite, scanning through tens of millions of emails and other communications from innocent internet users as well as the targeted suspect. To use an analogy, Carnivore is like the telephone company being forced to give the FBI access to all the calls on its network when it only has permission to seek the calls of one subscriber. Carnivore can be configured to do one of several things:
 * record all of the email messages sent to and from a specific email account;
 * record all of the network traffic to and from a specific IP address;
 * record all of the email headers (i.e., TO and FROM addresses) sent to and from a specific email account;
 * record all of the servers, webpages, or FTP files visited by a particular IP address;
 * track everyone who accesses a particular webpage or FTP file.
When the FBI's use of Carnivore was revealed in July 2000, members of Congress expressed concern and stated their intent to examine the issues and draft appropriate legislation. Because Carnivore provides the FBI with access to the communications of all subscribers of a monitored internet service provider (and not just those of the court-designated target), it raises substantial privacy issues for millions of internet users. The virtual war between terrorists and counterterrorism forces and agencies is certainly a vital, dynamic, and ferocious one. The National Security Agency, the CIA, the FBI, the Defense Intelligence Agency, other US and foreign intelligence agencies, and some private contractors are fighting back, cracking terrorist passwords, monitoring suspicious websites, cyberattacking others, and planting bogus information. However, as some argue, there could be better ways to counter the threat: “The government efforts are inadequate. 
The private sector is doing a better job than the government. Our enemies have embraced the internet. We have to ask how closely the government is monitoring it” ([|Hoffman 2007]). This is not the place to offer a definitive answer to terrorist exploitation of the internet, but two conclusions should be stated. First, we must become better informed about the uses to which terrorists put the internet and better able to monitor their activities. Journalists, scholars, policy makers, and even security agencies have tended to focus on the exaggerated threat of cyberterrorism and have paid insufficient attention to the more routine uses made of the internet ([|Weimann 2004; 2006a]). Those uses are numerous and, from the terrorists’ perspective, invaluable. Hence, it is imperative that security agencies continue to improve their ability to study and monitor terrorist activities on the internet and explore measures to limit the usability of this medium by modern terrorists. Second, while we clearly must defend our societies better against terrorism, we must not in the process erode the very qualities and values that make our societies worth defending. The internet is in many ways an almost perfect embodiment of the democratic ideals of free speech and open communication; it is a marketplace of ideas unlike any that has existed before. Unfortunately, the freedom offered by the internet is vulnerable to abuse from groups that, paradoxically, are themselves often hostile to uncensored thought and expression. But if, fearful of further terrorist attacks, we circumscribe our own freedom to use the internet, then we hand the terrorists a victory and deal democracy a blow. The use of advanced techniques to monitor, search, track, and analyze communications carries inherent dangers. 
Although such technologies might prove very helpful in the fight against cyberterrorism and internet-savvy terrorists, they would also hand participating governments, especially authoritarian governments and agencies with little public accountability, tools with which to violate civil liberties domestically and abroad. It does not take much imagination to recognize that the long-term implications could be profound and damaging for democracies and their values, adding a heavy price in terms of diminished civil liberties to the high toll exacted by terrorism itself.

Joachim K. Rennstich
==== Subject [|International Studies] » [|International Communication] ====
==== Key-Topics [|communication], [|information and communication technology (ict)], [|world system analysis] ====

DOI: 10.1111/b.9781444336597.2010.x

Introduction
The world system here is understood as the structural world-historical development of an interconnected social system that has developed over the past centuries. The accounts of its development vary and are discussed elsewhere in detail ([|Modelski and Thompson 1996]; [|Chase-Dunn and Hall 1997]; [|Rennstich 2008]). The study of the development of a global world system centers on the distribution of power within it and its manifestation in the structure of the world system. While the central position of leadership within the world system has periodically shifted, the question arises whether this leadership can still be exerted by (single) states or alternative units in such a global world system, especially in light of new communication networks and digital technologies and their role in the rise of a new “Information Age” – that is, a system in which the control of information becomes the most critical aspect of system development and control. The term “Information Age” commonly describes the rise of the centrality of information in societies as a result of technological change, especially the rise of digital forms of communication (for a more detailed discussion, see footnote 33 in [|Castells 1996]:21). The dating of the start of the information age differs substantially; however, it is often seen as starting with innovations in communication technologies since the 1970s or with the rise of digital information networks in the mid-1990s. While differing on the role and types of subunits, most sides of the debate agree on one thing: the transformation of the world system into a system that now spans and includes the entire globe. 
The focus here is therefore on the question of the current and possible future development of the world system: Has the evolution of the system come to a halt, or is the current state of systemic “chaos” just part of a regular transformation, or perhaps of a transformation similar to the one that took place with the rise of Europe as its (new) center in the sixteenth century? What impact does the development of new, and especially digital, technologies have on its future development and structure? These questions are critical if one is to identify the system's future modus operandi and thus the necessary means of control within it – or, put differently, what constitutes power and who can aim to wield it. Does systemic leadership continue to exert itself in a similar fashion as in the past (a single state possessing a disproportionate share of power in a system of states that acts as the overarching organizing principle of the world system) or not (new power-centers striving for the creation of far-reaching systems under their control, i.e., a return to empire-systems)? To put these questions in a proper context, it is necessary to understand the structural formation and development of the world system, with a special focus on the type of linkages (or networks) that mark the development of the world system as a global “web of webs” ([|McNeill and McNeill 2003]) and the role of information in this development.

The Structural Formation of the World System
For Wallerstein and authors in his tradition ([|Wallerstein 1974]; 2000; [|Hopkins and Wallerstein 1996]), the //differentia specifica// of the world system born out of sixteenth-century Europe was the ceaseless accumulation of capital, a feature characterizing no other historical system that ever existed before. This view does not deny the existence of previously existing interaction networks. However, they are viewed as so systemically different in their operating principle that they need to be analytically categorized as separate entities, marked on the basis of their different organizational principle as “world empires.” From this perspective, the expansion of the world system into a truly global, all-encompassing interaction network of social (including cultural, economic, and political) relations results in a new phase of world system development, marking, if not the end, then at least an unknown outcome of the current state of systemic chaos and thus the “end of the world as we know it.” In this view, the source and location of power changes dramatically. Nation-states lose their previous power status and thus their ability to leave their dominating imprint on the structure and future development of the world system. The operational mode of production and thus the critical mode of world system development in this view has shifted (although whereto seems unclear to most authors), as did the main unit of systemic development and control (from the modern, sovereign state to a multitude of actors). The next logical step is to ask: might this be the time when the world system reverts to a world characterized by the protocapitalist empire-created modus operandi or to alternative, far more participatory, inclusive forms of development and global governance (or world system control)? An alternative view (e.g., [|Goldstein 1988]; [|Thompson 2000]; [|Denemark et al. 
2000]; [|Perez 2002]; [|Geels 2005]; [|Rennstich 2008]) on the evolution of the historical world system into today's (global) world system argues instead that it is not (primarily) the mode of production which determines the overall developmental patterns and outcomes of this game (i.e., world system development), but the nature of the evolution of the world system itself (i.e., the evolutionary process of world system development), of which the various modes, and thus the mode of production, are (only) an element. In this view, the driver of all world system history influencing the outcome of “development” in any particular part of the system is an element of the prevailing conditions of development (in particular capital accumulation) of the whole world system. If one can accept this notion of system development, world system development takes on a rather evolutionary character: the nature and the rules of [|Frank and Gills's (1993)] “game” (i.e., the process of world system evolution) do not change as much as implied by the Wallersteinian world system view of development. What does change are techniques of competition, of which the basic modi operandi have in fact been around for a considerably longer time than since the sixteenth century. The actors, however, are merely changing positions. From this perspective, systems change in character and developmental style (largely driven by technological and organizational change, described by [|Perez (2002)] as a “technological style”) and control over much of the past century of world history, but not so significantly as to merit a world system of their own. A world system, in this view, is conceived of as the social organization of the human species, viewed as one population. This population exists in either an organized or unorganized state, united by basic institutions such as cities or writing, states, or state systems, technologies, or intersubsystem networks such as trading networks. 
However, there is wide disagreement over the identification of the composition of its population (e.g., system identity, the level and existence of interaction between its subsystems, its evolutionary development, etc.). The singular perspective is based on the concept of a Kantian universal history of mankind and argues that the various human cultures have experienced a significant degree of interaction with each other at every stage of their history (and never more so than during great transformations of the world system). This view contrasts with a plural perspective of a number of (more than twenty) separate civilizations pursuing essentially independent careers ([|Wilkinson 2000]). For the purpose of the discussion here it is useful to accept the concept of “predominant modes of production” in the broader sense of technological styles, as they play a critical role in the developmental process not only of societies but of the world system as a whole. The evolutionary world economic process, which establishes the major modes of organization of production and exchange in agriculture, mining, industry, and other economic activities, has so far developed over a number of centuries (or millennia, in the view of others). During this process, periods of productive development and surges of new technologies (enabling new technological styles), such as bronze or iron, alternate with others that expand networks of interchange, pioneering new trade routes and thus enabling the broader dispersal of innovations. A major shift (in terms of the general mode of organization) has taken place during the emergence of the modern era with a shift from a command economy toward a market structure, slowly covering the entire globe. 
A number of authors ([|Castells 1996]; [|Rosecrance 1999]; [|Sassen 2006]; [|Rennstich 2008]) suggest that another such shift of similar magnitude is currently taking place as part of the transition from analog-based information to digital-based forms of information. The following section places this transition in a historical and world systemic context (for alternative accounts, see e.g., [|Rosenau and Singh 2002]).

The Role of Information and Information Technologies in World System Development
The pace of the innermost process of world system development, captured in the successive development of Kondratieff waves ([|Tinbergen 1983]; [|Goldstein 1988]; [|Thompson 1990]; [|Barnett 1998]), is determined by two biological control parameters: the cognitive (i.e., the collective learning rate), driving the rate of exchanging and processing information at the micro-level; and the generational (i.e., the development of successive human cohorts), constraining the rate of transfer of knowledge (i.e., information integrated into a context) between successive generations at the macro-level ([|Devezas and Modelski 2003]). Information and knowledge are two separate but intertwined concepts, and the centrality of both in the developmental process of the world system (especially one that is part of an “information age”) requires a closer look at the historical development of their organization. A classic definition of information (from a mathematical and scientific viewpoint) refers to the reduction of uncertainty in a communication system ([|Shannon 1948]). It thus includes any pattern of energy or matter we can find in nature as a container of information. Information should not, however, be confused with the concept of knowledge. Knowledge does not simply equal information, but rather refers to ideas and facts that the human mind has internalized and understood, often acquired and assembled in a complex fashion – a complexity that makes it nearly impossible to simulate mechanically (i.e., through artificial intelligence). It is, in other words, information embedded in a larger socioeconomic, cultural, and political context. As societies grow more complex and the amount of accumulated knowledge rises, information handling becomes an important determinant of successful organization and mastery of this complexity ([|Headrick 2000]). 
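Shannon's definition of information as the reduction of uncertainty can be made concrete with a short calculation. The sketch below simply computes the entropy of a source in bits; it is an illustration added here, not part of the cited works.

```python
import math

def entropy(probs):
    # Shannon entropy H = -sum(p * log2(p)), in bits: the average
    # uncertainty of a source, i.e., the information gained on average
    # when one of its outcomes is observed.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin resolves one full bit of uncertainty per toss; a heavily
# biased coin resolves less, and a certain outcome carries no information.
fair = entropy([0.5, 0.5])      # 1.0 bit
biased = entropy([0.9, 0.1])    # about 0.47 bits
certain = entropy([1.0])        # 0.0 bits
```

The point of the example is Shannon's: information is a property of how much an observation narrows down the possibilities, independent of what the message means, which is exactly why it must not be conflated with knowledge.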
Rather than aiming to identify a starting point for a knowledge society (characterized by the rise of an “information era”), it seems more useful to view the entire development of humankind as the development of a knowledge society. This development has not been a linear progression, but rather a process marked by periods of sharp acceleration in the amount of information that people had access to and in the creation of information systems to deal with it. To understand the evolution of the world system marked by the rise of the new global digital network, it is thus necessary to understand the forms of information systems that mark its development. According to [|Innis (1950)], a crucial element of the interaction between “cultures” (i.e., different social groups that have embedded information as knowledge in different contexts) is their adoption and use of different communication systems to control space. [|Headrick (2000)] defines information systems as the methods and techniques by which people organize and manage information, rather than the content of the information itself. Information systems in this understanding are supplements to the mental functions of thought, memory, and speech, and thus the technologies of knowledge. He uses five dimensions on which to categorize information systems, namely information (1) gathering; (2) classification; (3) transformation; (4) storage; and (5) communication. Employing these dimensions, he identifies the rise of a new information system, driven, like previous information systems, by the combination of information demand, supply, and organization, emerging in the period 1700–1850. This new information system ultimately provided the basis for the digital information system that is now emerging as the central nervous system of the global system. 
[|Hobart and Schiffman (1998)] highlight the rise of a distinct new information system based on its digital character, rooted in the cultural (combined with the technological) developments of the eighteenth and nineteenth centuries. In this system information no longer acts as a universal, abstract model of the world, either classifying or analytical, but rather has become a world unto itself, in which abstract symbols can be assigned arbitrarily to any objects and procedures whatsoever. As an important precursor, the rise of relational mathematics in the modern age realized the information potential of number and organized it in a broad-reaching, reductionist hierarchy; digital information has elicited the information potential of purely abstract symbols, fabricating a realm of pure technique apart from any foundation in knowledge. [|Hugill (1993]; 1999) emphasizes the two-way flows of information that predominate as mechanisms of military (i.e., political) and economic control. He argues that the geopolitical interests of trading states (states that exert their power mainly in external networks) and territorial states (i.e., internal network-based states) differ in terms of the military and communications systems they employ. Whereas trading states have an interest in exerting weak control over long distances, territorial states wish to exert strong control over short distances. The former thus tend to invest in long-range military and communications systems; in other words, they aim to establish external networks of control. The pattern of existing technology being transformed in innovative spurts and clusters again proves to be the breeding ground for the emergence of a new long cycle of global system development. [|Spar (2001)] connects the ventures of Portuguese explorers of the fifteenth century to the development of the telegraph and radio in the middle of the nineteenth century, and the advent of satellite television and the internet in the twentieth century. 
She identifies a common dynamic in the development of new information systems: bursts of innovation at the beginning create new commercial opportunities and open a gap between economic, social, and technological activity on the one hand and political control on the other, with economic and technological development driving the political advancement of the system. [|Hall and Preston (1988)] make a similar argument: the origins of the newly emerging system must be traced back to the transformations in communication system technologies beginning roughly around the middle of the nineteenth century, with the invention of the electrical telegraph (1830s) as well as the telephone, the typewriter, and the phonograph (1875–90). These new inventions marked the emergence of what the authors call “New Information Technology” industries, embracing the technologies (i.e., mechanical, electrical, electromechanical, electronic) that record, transmit, process, and distribute information.

The Role of Networks in the Development of the World System
In effect, the global world system is made up of a variety of complex intraorganizational and interorganizational networks (or “webs”) intersecting with geographical networks structured particularly around linked clusters of socioeconomic activity ([|Cioffi-Revilla and Merritt 1981]; [|Cioffi-Revilla et al. 1987]; [|Gunaratne 2002]). These networks are characterized at once by their path dependencies and by the major transformations they undergo as a result of major technological innovations, especially in transportation and communication technologies ([|Modelski and Thompson 1996]; [|Thompson 2000]; [|McNeill and McNeill 2003]; [|Rennstich 2005b]). For a better understanding of the world system in the information age, it is therefore necessary to understand the close relationship between communication and transportation networks. Some authors ([|Hall and Preston 1988]) have even argued that the information infrastructure may be just as important as the infrastructure of physical transport, or even more so. What differentiates the currently developing technological style from previous network-centric technological styles is its digital nature. This affects its scale (geographically as well as in the units it connects) and its impact on the creation of new leading sectors.

Network Economics
Network-centric markets are characterized (and distinguished from others) by: (1) complementarity, compatibility, and standards; (2) consumption externalities; (3) switching costs and lock-in; and (4) significant economies of scale in production ([|Shy 2001]). Goods and services that are part of network markets should thus be viewed as systems of complements rather than individual products (e.g., computer hardware and digital software, or Digital Versatile Disc [DVD] players and DVDs). One is relatively useless without the other; the real use of the services or items only comes into effect within a system. This raises the need for compatibility (e.g., software running on certain hardware platforms, cable connections featuring compatible designs) and thus raises the issue of common standards and the need for coordination. In other words, questions of coordination and standards become crucial in network markets. These standards in turn unlock the unique features of network externalities, which can profoundly affect the market behavior of firms and individuals. Once users of these systems have invested in their use (by obtaining certain technology, licensing contracts, training and learning, etc.), they experience so-called lock-in because switching costs (from one system to another) can be relatively high (e.g., reinvestment in the above lock-in factors, as well as additional search costs and loyalty costs). Switching costs affect price competition in two opposing ways. First, in the case of already locked-in customers, firms may raise prices, knowing that consumers will not switch unless the price difference exceeds the switching costs to an alternative system. Second, in the case of consumers not yet locked into one system, system providers/sellers will compete fiercely (e.g., through discounts, free trials, complimentary products and services, etc.) 
in order to attract customers and create a critical mass of installed bases of consumers (i.e., customers locked into the providers/sellers system). In economic terms, the combination of often very high fixed sunk costs with almost negligible marginal costs implies that the average cost function declines sharply with the number of items sold. Once a critical mass is obtained, network markets can be extremely profitable (e.g., Microsoft's system software, Windows, enables a profit margin of over 80 percent for the firm). Since all successful networks rely on a critical mass to develop their network externalities (i.e., when the value of a good depends on the number of other people who use it) and thus raise the value of the offered system, the establishment of standards and their control becomes a clear determinant of commercial success. The convergence between organizational requirements and technological change has established networking as the fundamental form of competition in the now truly globalized economy ([|Ernst 1994]; [|Hatzichronoglou 1996]). Those networks also act as gatekeepers. Barriers to entry into the most advanced industries (such as electronics or biotechnology) have skyrocketed, making it extremely difficult for challengers to enter the market by themselves. It even hampers, as best demonstrated in the case of biotechnology, the ability of large corporations to open up new product lines or to innovate their own processes in accordance with the pace of technological change. Cooperation and networking offer the only possibility to share costs and risks, as well as to keep up with constantly renewed information. Inside the networks, new possibilities are abundant. Outside the networks, survival is increasingly difficult. Under the conditions of fast technological change, networks, not firms, have become the actual operating unit. However, firms continue to be the main organizational framework for the operating units. 
It is the form of the corporate organizational structure that changes, not the role of the corporation as the organizational structure ([|Castells 1996]). The role of the network, or what [|Castells (1996)] calls “the networking logic,” therefore substantially changes the character of the global economic environment. Whereas traditional rules of competitive strategy focus on competitors, suppliers, and customers, in the informational network economy companies selling complementary components become equally important. Cross-national production networks permit firms to weave together the constituent elements of the value chain into competitively effective new production systems while facilitating diverse points of innovation, and in turn have turned large segments of complex manufacturing into a commodity available in the market. Taken together with the above-discussed merging of all networks into one supranetwork (or digital web of webs, the internet), this global network and its digital information infrastructure enable the merging of many different markets, both on a horizontal and a vertical scale, which in the past had been separate entities, into one exchange space.
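The switching-cost and scale logic of network markets described above can be illustrated with a minimal numeric sketch. All figures are invented for illustration only; the two functions simply encode the stated rules.

```python
def will_switch(current_price, rival_price, switching_cost):
    # A locked-in consumer switches systems only when the rival's price
    # advantage exceeds the cost of switching (retraining, reinvestment,
    # search and loyalty costs).
    return (current_price - rival_price) > switching_cost

def average_cost(fixed_sunk, marginal_cost, units_sold):
    # High fixed sunk costs combined with near-zero marginal costs mean
    # average cost per unit falls sharply as sales volume grows.
    return fixed_sunk / units_sold + marginal_cost

# An incumbent can raise prices up to the switching cost without losing
# locked-in customers: a $20 price gap does not beat a $30 switching cost.
keeps_customer = not will_switch(100, 80, 30)

# Scale economies: $1,000,000 sunk development cost, $1 marginal cost.
small_run = average_cost(1_000_000, 1, 1_000)       # $1,001 per unit
large_run = average_cost(1_000_000, 1, 1_000_000)   # $2 per unit
```

The second function shows why critical mass matters: once the installed base is large, each additional unit is nearly free to serve, which is the mechanism behind the high profit margins the text attributes to dominant network products.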

Protocols and Standards
In markets driven by network economies, standards and protocols reign supreme ([|Talalay et al. 1997]; [|Brynjolfsson and Kahin 2000]; [|Latham and Sassen 2005]; [|Brousseau and Curien 2007]). The transition from an internal network system to an external network-based one is reflected not only in the structure of economic organization, but also in the kinds of networks themselves ([|Borrus and Zysman 1997]). Provider-supplied networks are defined and controlled by the network company, which provides a set of services or possibilities to its customers. User-driven networks are at least in part defined and controlled by the user, who designs them to fulfill specific functions. These user-driven networks generate a competitive market for the systems enabling them and often constitute disruptive technologies (i.e., technologies that build on previous technologies but establish a new lineage and force users into a new network). In this respect, provider-supplied networks (for example, the telegraph or, initially, also phone networks) were the natural extension of network industries based on an internal networking logic. Control over the network was regarded as essential for systemic control. By contrast, user-driven networks are much more attuned to an external networking logic, since they allow for competing systems relying on a set of standards that enable end-to-end interoperability of the corporate communications infrastructure. Suppliers in such an environment rely on open-but-owned systems: open at the interface to permit interconnection of systems from other vendors, but owned to reap a return from innovation. In short, users demanded highly functional and interoperable systems. 
The by now classic example of this transition from the dominance of system providers controlling all or large parts of the value chain to the dominance of standard setters in the disintegrated value chain is the rise of the Wintel standard accompanying the personal computer (PC) revolution. Rather than controlling the production of entire systems, as was traditionally the case in the information processing industry, IBM's personal computer strategy encouraged the provision of alternative systems (i.e., IBM-compatible PCs) to ensure a faster growth of the overall platform. This allowed the main critical component providers (Intel for the hardware and Microsoft for the software) over time to set the standards that enabled the interoperability of the various system component providers (i.e., hardware and software providers competing against alternative computing systems). A crucial factor allowing firms to wage (and win) a war of standards is control over intellectual property rights and patents – or, put differently, an advantage in intellectual capital (presumably the main currency of control and power in the information age). The importance of standards and protocols, as captured in the code that physically manifests them, has been widely theorized and studied by [|Lessig (1999]; 2001), who argues that “code” becomes governing “law” and as such becomes an object of traditional manifestations of political control. His work rightly predicted the interest and ability of not only private actors (such as firms as well as individuals) but also existing governing institutions, most significantly nation-states, in obtaining control over these standards (see also discussions of this question in [|Post 1995]; [|Spar 2001]; [|Sassen 2006]).

New Technologies and Leading Sectors
Most authors who study current and future world system development in one way or another stress the importance of information and communication technology (ICT) as a new leading sector ([|Mensch 1979]; [|Tinbergen 1983]; [|van Duijn 1983]; [|Hall 1985]; [|Bruckmann 1987]; [|Goldstein 1988]; [|Modelski and Thompson 1996]; [|Lipsey 1999]; [|Bornschier and Chase-Dunn 1999]; [|Rennstich 2008]). A leading sector is not necessarily very large in comparison to other economic sectors. What determines the “leadership” role of a sector is whether its impact on growth tends to be disproportionate in its early stages of development. Although the establishment of a new economic sector will initially require much greater shares of investment resources than its early output would seem to justify, the expected long-term returns provide incentives for investors to make the necessary resources available. As the sector continues to exploit its particular contribution to efficiency and productivity, its linkage to overall economic growth should stabilize. Most of these authors focus on the hardware side of ICT (microprocessors, electronic components). Others include the software aspect (as well as a new sector made possible only by recent developments in ICT: biotechnology). As in earlier leading sectors, we can trace the development of ICTs through various K-waves. [|Hall and Preston (1988)] show how information technology emerged through the last four K-waves, beginning with the development of the telegraph in the 1830s. [|Standage (1998)] traces the modern-day internet back to the mid-1800s and the development of a telegraph network. However, simply identifying ICT industries as a key sector is not sufficient. Many countries (and certainly all the likely candidates for systemic leadership) now realize the strategic importance of ICT and try to develop their industry accordingly. 
What is crucial is the emergence of an appropriate socioeconomic complex ([|Freeman and Louçã 2001]; [|Perez 2002]). This combination of creating economic innovations and fostering sociopolitical institutions forms the nucleus of a new technological style. As a result, these complexes also foster an early head start in emerging leading sector development. Focusing on the analytical level of states, the superiority of past leading nations appears (apart from certain key necessary, but not sufficient, elements for every potential leading state, such as population size, geographical features, etc.) to be in no way mere fate or some sort of “destiny.” Instead, states in the past obtained leadership positions through the establishment of a supportive and enabling institutional environment for other agents (individuals, firms, etc.). A combination of private and public institutions can under certain circumstances foster the “innovational milieu” that leads to the typical clustering of economic and sociopolitical innovations, which in turn can lead to the development of new leading sectors and ultimately of a new technological style. The development of a leading economy based on these leading sectors formed the foundation of political and economic leadership in the world economy. Britain's success during the period of the Industrial Revolution was well observed by other states at the time. However, nowhere were these improvements so widespread and effective as in Britain, in large part as a result of her formal and informal institutions. Britain featured the optimal set of institutions to allow for the development of a new socioeconomic paradigm of doing things better and more efficiently. This enabled her to set new standards that others had to follow. In other words, Britain was able to set the standards of a new technological style. 
This also holds true for the development of leading sectors in the information-era world system, namely ICT, networking, biotechnology, and the development of new energy forms ([|Rennstich 2008]). As discussed earlier, what characterizes the current technological revolution underlying the innovative clustering of this new phase of world system development is not so much the centrality of knowledge and information, but the application of such knowledge and information to knowledge generation and information-processing/communication devices, in a cumulative feedback loop between innovation and the use of innovation. Whereas the leading sectors of earlier, externally network-dominated K-waves (first the Baltic and Atlantic trade routes, later the Eastern trades) were predominantly maritime, the leading sectors of this period increasingly run along digital commercial trade routes ([|Rennstich 2005b]). A critical part of this digital network is the internet.

The Internet
The internet serves as a trade route in the sense that the new commodity of the possibly emerging, digital information-based long wave ([|Rennstich 2005b]; 2008) is transported along its lines. Information itself, however, is not the only commodity. E-commerce, the electronically enabled retailing of software, digital books, digital services (e.g., online brokering and e-banking), and digital outsourcing (e.g., data processing), is now a common phenomenon. It is reasonable to add the growing number of web-enabled transactions (e-business) in nondigital items and services, both business-to-business and consumer-to-business, to this count. In sum, the internet as infrastructure and enabler of networking already constitutes a significant global digital trade route and is increasingly developing into the central interchange circuit not only for commercial exchange but for almost every other form of human interrelation. From its roots as a US defense network, through its role as an international virtual college of scientific and academic researchers, to the globally expanding World Wide Web (WWW), the history of the internet has been one of exponential growth in both the number of users and the number of hosts connected to the network, and it is well documented elsewhere ([|Abbate 1999]; [|Berners-Lee and Fischetti 1999]). In essence, the internet is a “network of networks.” Its most important feature is a set of standardized protocols, that is, conventions by which computers exchange data, sliced into little packets, over various kinds of carriers. Central to the success of the internet was the development of the two main protocols governing this process: IP (Internet Protocol) and TCP (Transmission Control Protocol). Other protocols and standards, such as the Hypertext Markup Language (HTML) or the domain name system governed by the Internet Assigned Numbers Authority (IANA), have proven equally important.
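The packet-switching principle behind TCP/IP described above can be illustrated with a minimal sketch (a toy model, not an implementation of the actual protocols): a byte stream is sliced into numbered packets, which may traverse the network out of order, and the receiver reorders them by sequence number to reconstruct the stream.

```python
# Toy illustration of the packet-switching idea behind TCP/IP:
# data are sliced into (sequence number, payload) packets that may
# arrive out of order; the receiver reassembles the stream by
# sequence number. (Sketch only, not real TCP.)
import random

def slice_into_packets(data: bytes, size: int = 8) -> list[tuple[int, bytes]]:
    """Split a byte stream into (sequence_number, payload) packets."""
    n_packets = (len(data) + size - 1) // size
    return [(i, data[i * size:(i + 1) * size]) for i in range(n_packets)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder packets by sequence number and concatenate payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may arrive out of order."
packets = slice_into_packets(message)
random.shuffle(packets)            # simulate out-of-order delivery
assert reassemble(packets) == message
```

The division of labor mirrors the text: IP's job corresponds to moving the individual packets, while TCP's job corresponds to the numbering and reassembly that present applications with a reliable, ordered stream.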
While still heavily dominated by the United States in terms of the numbers of both users and hosts, the internet is now widely accessible on a global scale. As a result of the internet's emergence as the common global standard of the digital network, the US maintains its central role in this global network. Whereas every region and nearly every country is now tied into the digital network through a direct internet connection to the United States, direct connections between other countries are less common. This is especially visible in the connection structure between major regions, such as Europe and Asia, where direct connections are almost nonexistent. As a result, the United States still serves as a central switching facility for interregional data traffic and thus as the central node of the digital external network system ([|Townsend 2001a]). Also important in the larger context of the historical development of the world system is the re-emergence of major cities as important nodes of external network development ([|Sassen 1997]; [|Townsend 2001b]). During the transition from an internal network-based system to an externally based one, so-called global cities acted mainly as sites (or network nodes) where transnational flows of goods, capital, and people were tied into national and regional economies. Evidence shows that new telecommunications networks reflect a more complex system of interurban information flows than implied by earlier work centered on the global city hypothesis, connecting a wider range of cities in more complex ways. This renewed focus on a centers-and-hinterlands structure of the global system, as well as the geographic centrality of the United States for its functioning (and control), makes it clear that despite its increasingly digital nature the global system is still very much a geopolitical one in the traditional sense ([|Barnett 2001]).
Geography continues to matter as an organizing principle and as a constituent of social relations ([|Kitchin 1998]). It cannot be entirely eliminated because virtual space interacts with the world beyond ICT networks and cyberspace; only in combination do the two constitute the external networks on which the global world system is based. Increases in capacity, speed, and digitalization have made it possible to integrate graphics, text, video, and sound (including voice) in applications, while the integration of computing and communication technologies has made it possible to access and use services and applications interactively. Increasing bandwidth and speeds now permit transport integration and unprecedented flexibility and performance in using the network as infrastructure for economic activities. The trend towards large numbers of highly sophisticated devices relying on a network also constitutes a discontinuous transformation in the demands placed upon the network infrastructure, in terms of both the transmission volumes and the new patterns of use it will have to accommodate. With the increasing sophistication of mobile technology, the “poor man's e-mail” ([|Rennstich 2008]), the Short Message Service (SMS), has become the interface of choice for access to the digital commercial system for many users outside of the US, and increasingly there as well. Now firmly integrated into other existing technologies of the central digital nervous system, this technology has enabled the essential integration of the hinterlands into the major center network. A digital divide certainly remains a reality in terms of the level of integration, in both width and depth. It is, however, by now a divide that is being bridged by new digital technologies of different levels of sophistication, which together create a truly global (in terms of its geographic reach) digital external network system ([|Donner 2008]).

Biotechnology
Following in the footsteps of the rise of ICT as a leading sector (and now to a large degree intricately interwoven with that technology), the biotechnology industry can trace its origin in its current form back to the late 1960s and early 1970s ([|Ouzounis and Valencia 2003]). The scientific results enabling genetic engineering techniques built upon more than twenty years of basic research in molecular biology, microbiology, and related fields on DNA (deoxyribonucleic acid), genes, and cells ([|McKelvey 2000]). The genetic engineering techniques developed in the 1970s enabled controlled changes to DNA and largely followed the logic and possibilities that molecular biologists had already understood in theory but had lacked the practical techniques to realize. Alongside basic research unlocking the genetic information of molecules such as DNA, the commercial uses of genetic engineering, mainly for the production of pharmaceuticals, began to develop in the 1970s ([|Ouzounis and Valencia 2003]). Often seen as the start of the new biotech industry, the 1976 founding of the Californian biotech firm Genentech provided a model in which basic scientists and venture capitalists joined together. In general, these firms would sell R&D contracts to established firms in order to develop new scientific knowledge and techniques and to adapt scientific activities to commercial purposes. Until the mid-1980s, most biologists had little use for computers other than to compare DNA sequences and as a communication tool, in the form of electronic mail over the precursor of the internet.
In the late 1980s, however, a significant transformation within the biomedical sciences finally became a widespread phenomenon: the computer-enabled shift from single-gene studies to experiments involving thousands of genes at a time, from small-scale academic studies to industrial-scale ones, and from a molecular approach to life to an information-based one, highly dependent on sophisticated computing and processing power ([|McKelvey 2000]). By now, biology, electronics, and informatics were converging and interacting in their applications, their materials, and their conceptual approaches. In addition, the manipulation and duplication of genes and genetic patterns (i.e., cloning and recombination) has become a standardized technical process ([|Enriquez and Goldberg 2000]). The convergence of supercomputers, advanced mathematics, and robotics has made possible the fully automated mapping of genomes, creating vast amounts of biological data. These data are now at the center of the commercially most important area on the software side of biotechnology: bioinformatics ([|Ouzounis and Valencia 2003]; [|Perez-Iratxeta et al. 2007]). Bioinformatics is a spectrum of technologies, covering computer architecture (e.g., workstations, servers, and supercomputers), storage and data management systems, knowledge management and collaboration tools, and the life-science equipment needed to handle biological samples. Bioinformatics companies sell both software and services for manipulating these kinds of data. Most do not produce any novel data themselves: they find ways to transform other organizations’ data. Biological data are flooding in at an unprecedented rate, and biotechnology is now mainly an information industry.
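The kind of computation that this information-based approach automates can be suggested with a small sketch: the edit (Levenshtein) distance between two DNA sequences, a basic building block of sequence comparison. This is purely illustrative; production bioinformatics pipelines rely on far more sophisticated, optimized alignment tools.

```python
# Toy illustration of DNA sequence comparison: the edit (Levenshtein)
# distance, i.e., the minimum number of substitutions, insertions,
# and deletions needed to turn one sequence into another.
# (Illustrative only; real pipelines use optimized alignment tools.)
def edit_distance(a: str, b: str) -> int:
    """Dynamic-programming edit distance using two rolling rows."""
    prev = list(range(len(b) + 1))          # distances for empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a[:i] to empty b
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion from a
                            curr[j - 1] + 1,    # insertion into a
                            prev[j - 1] + cost))  # (mis)match
        prev = curr
    return prev[-1]

assert edit_distance("GATTACA", "GATTACA") == 0
assert edit_distance("GATTACA", "GACTATA") == 2
```

Scaled from single comparisons like this to millions of sequences at industrial scale, such computations are what turned biology into the data-driven industry the paragraph describes.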
New partnerships between IT and biotech companies are being formed at a very fast rate, as our global system process model, which anticipates an innovative surge of commercial and organizational activity in this area, would suggest. The list covers bioinformatics, DNA microarrays (gene chips), data analysis and visualization, chemical and biological library integration, detection of human genetic variation (single nucleotide polymorphisms, SNPs), microfluidics, and in silico research.

New Energy Forms
The development of new energy forms has been an important component of long-term world system development and transformation ([|Hoffmann 2001]; [|Devezas et al. 2008]; [|Koh and Magee 2008]). The environmental impact of the current use of fossil fuels in particular, together with their relatively low levels of energy intensity and efficiency (measured as the ratio of economic activity to the rate of energy use), would lead us to expect the rise of new energy forms in the near future of world system transformation. While the specific future mix of renewable and nuclear energy sources is uncertain, the more general logistic dynamics of the energy system seem to be continuing as they have for nearly two centuries. A discussion of concrete technologies remains too speculative at this point, however, in comparison to the transformations in the development of digital networks and biotechnologies.
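The logistic dynamics referred to here are conventionally modeled as an S-shaped substitution curve. The following minimal sketch, with purely hypothetical parameter values (growth rate k and midpoint year t0), shows the functional form used in such long-wave energy studies.

```python
# Minimal sketch of the logistic (S-shaped) substitution pattern used
# in long-wave studies of energy systems:
#     f(t) = 1 / (1 + exp(-k * (t - t0)))
# where f(t) is the share captured by a new energy form at time t.
# The parameter values below are hypothetical, for illustration only.
import math

def logistic_share(t: float, k: float = 0.1, t0: float = 2030.0) -> float:
    """Share of the energy mix at time t under a logistic transition."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

assert abs(logistic_share(2030.0) - 0.5) < 1e-12   # midpoint of the transition
assert logistic_share(2100.0) > 0.99               # approaching saturation
```

The curve's slow start, rapid middle phase, and eventual saturation are what allow the general pattern to persist across two centuries of successive energy transitions even when the specific technologies remain uncertain.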

Leadership and World System in the Information Age
Alongside neorealist structural studies of power distribution in a Westphalian state system (marked by the dominance of the sovereign modern state), a new debate has emerged over what constitutes the kind of power that enables a state to exert influence over others. The new currency of power, in this view, is coined through cooperation rather than coercion. So-called soft power (in the form of cooperation, or “pull”) replaces hard power (in the form of coercive force, or “push”) as the critical element in such an environment ([|Nye 1990; 2004b]). From this perspective, the effects of complex interdependencies on the rules of engagement in a transforming and globalizing world system, and the rise of regional powers, seem of greater interest than the question of a possible challenge to the old hegemonic power status of the current leader, the United States ([|Keohane 1989]; [|Keohane and Nye 1997]; [|Grant and Keohane 2005]). Especially relevant here is the work of Nye, who argues for the need to place any discussion of power in the modern world system in the context of “the information age” and to recognize that the distribution of power resources in the contemporary information age varies greatly across different issues ([|Nye 2004a]). For these observers, the very concept of single-center power status in the traditional sense (of a mostly coercive nature executed by states) seemed to have lost any explanatory or predictive strength. Through the introduction of “new” forms of power, however, it seems possible for the existing leader to lose one kind of power and substitute another for it, thus securing its relative share of power in the system as a whole.
These analytical developments were largely the result of a division of labor in political science, where students of security focused largely on an independent system of sovereign states battling over high politics (and hard power), whereas studies of the international political economy focused on issues of low politics (and soft power) in at times overlapping, but mostly separate, systems. As a result, many political scientists lost interest in the world-systems-based (and other long-term structural systemic) concepts of hegemonic power and systemic leadership in and over a world system centered on a single power. However, the failure of a “new world order” based on a cooperative, interdependent world of states and nonstate actors to emerge after the fall of the Berlin Wall, together with the aftermath of 9/11, muddied the analytical waters deeply and put to rest the “end of history.” Not surprisingly, the concepts of hegemony and world system leadership enjoy new popularity in both academic and more popular treatments of the subject ([|Hardt and Negri 2000]; [|Chomsky 2003]; [|Ferguson 2004]; [|Johnson 2005]). While it is true that the constituting elements of an interdependent world have not suddenly vanished, recent events in world politics (the American and also European responses to the 9/11 attacks, and the rise of China as a regional, if not global, power, to name but a few) have demonstrated the continued role and importance of “traditional” (i.e., coercive) capabilities for the establishment and projection of power in the global system. The disproportionate massing of a traditionally critical set of capabilities in a single and similarly traditional unit (the sovereign modern state) has brought the analytical focus back to the need for a thorough understanding of the historical and cyclical system of world system leadership.
However, just as in the political science literature, scholars more traditionally associated with the question of global hegemony struggled throughout the 1990s to connect the world they seemed to experience with the traditional world system concepts. Some declared an age of transition for the world system, and even the end of the world as we know it ([|Hopkins and Wallerstein 1996]). [|Bornschier (1999)], for example, argues for “hegemony without a hegemon,” and the predominant question seems to be that of (expected) systemic chaos and a rather uncertain future, characterized by weakened states and a lack of alternatives to the structures instilled by the declining hegemonic power. This view of “declining” states has, however, been challenged recently ([|Weiss 2003]). Rather than observing a decline of the state, these authors argue instead for a transformation of the role of states relative to the one they occupied in earlier stages of the world system ([|Cederman 1997]; [|Rosecrance 1999]; [|Everard 2000]; [|Sassen 2006]; [|Dunn et al. 2007a; 2007b]). The two views may not agree on a common story line of world system development. They do, however, agree on its most powerful actors (at least in the past). In most treatments of the subject, systemic leadership within the world system is located in the state and is marked by the inability of the existing leader to prevent the relative decline of its dominant position. This shift in the geographical and sociopolitical location of power has been explained as the outcome of the leader's experience of success in the current setting, which creates an entrenched institutional setting (in a broader sense) that proves adaptive in defending its turf but less so in fostering the rise of new leading sectors.
It is important in this context to keep the evolutionary development of the world system in mind: leadership during the early stages of the weaving of the world system required different capabilities and took different forms than the exercise of a disproportionate share of power in the network-centric system in existence today ([|Rosecrance 1999]; [|Rennstich 2002]; [|Göransson and Söderberg 2005]). This development is also by no means linear ([|Sterman 1989]; [|Devezas and Corredine 2002]). Being able to exercise leadership in the global web of 2020 does not simply require x times more capabilities than it did in the 1800s. Rather, it is important to differentiate between divergent types of capabilities, different meanings of control, and, as a result, different concepts of what establishes leadership of the world system and thus the ability to (re)shape its structure ([|Modelski and Thompson 1996]). Whereas previous innovations and technologies that developed into new leading sectors dominating the development of the world system were largely enablers of external network (and thus mostly trade network) domination, the leading sectors and their accompanying technologies of the industrial phase allowed control of complexity on a much larger scale than previous technologies did ([|Rennstich 2005b]). This transition can best be seen in the structural change of textile manufacturing under British organization. In the seventeenth and eighteenth centuries, production networks set up by companies such as the English East India Company on the (eastern) outer realms of the British-controlled (and thus European-centered) network of the world economy spanned entire continents and included a sophisticated system of financing and the outsourcing of production to external, independent contractors.
In the latter half of the eighteenth century and in the nineteenth century, this production system was replaced by factories organized around individual firms at the center of a less externally oriented, but more vertically integrated, world economy centered first in Britain and later in the United States. With the increasing demise and unraveling of the Fordist model of internal network dominance beginning in the 1970s, the punctuation of the global system seems to have given birth to a new phase of extended external network dominance ([|Rennstich 2005a]). Whereas Japanese mastery of internal network management in manufacturing during the 1970s and 1980s, through a closed network model of production, provided the basis for an increasing share of existing leading-sector production, the parallel development, mainly in the high-tech regions of the United States, created a new “open systems business model.” This created the decentralized environment for the emergence of new innovative clusters that allowed for the diffusion crucial to all previous new leading-sector developments. Initially these external networks remained mainly within the boundaries of national economies, with networking emerging as a means of coordination that enhanced the resource creation activities of enterprises. Later, however, these networks increasingly extended across national borders and regions. Fostered by the rise of digital communication interfaces (including mobile technologies), which significantly lowered the cost of access to and creation of open systems, and by the availability of standardized and truly global logistical solutions, a multitude of cost-efficient organizational open systems have replaced previously closed systems or open national systems ([|Rennstich 2008]).
One of the main characteristics of systemic leadership transitions in most treatments of the subject seems to be the inability of the existing leader to establish a similar leadership position in a newly emerging and structurally different commercial and organizational arrangement. This shift in the geographical and political location of power has been explained as the outcome of the leader's experience of success in the current setting, which creates an entrenched institutional setting (in a broader sense) that proves adaptive in defending its turf but less so in fostering the rise of new leading sectors ([|Rennstich 2004]). However, the case of Britain's continued leadership over an extended period of time (and across separate long waves) shows that this is not always so ([|Rasler and Thompson 1994]). This has been attributed to a switch from one network system to another ([|Rennstich 2004]), itself the result of a change in the main mode of production tied to the overall mode of “global web weaving” (commercial maritime, industrial, and digital commercial). A similar transition seems to be taking place at this moment of world system development as well. The possibility of dominance over network flows therefore seems to extend, at least for the foreseeable future, the ability of states not only to maintain their dominant position within the world system, but also to wield important control over the structure of the world system as a whole.