Information, as we now understand, is power. A majority of people in developing societies face significant barriers to gaining access to information that we take for granted, and this is especially true for children. The nonprofit One Laptop Per Child (OLPC), founded at MIT in 2005 by Nicholas Negroponte, conceived of a low-cost laptop that would remove the barriers impeding access to education and information for the world's neediest children: a $100 machine freely distributed to children in developing countries. The OLPC laptop is a unique low-cost, highly designed product, built with particular attention to user needs and a challenging use environment. The computer is designed to be drop-proof, splash-proof, and kid-friendly.
In October 2007, Uruguay placed an order for 100,000 laptops, making it the first country to purchase a full order of laptops for every child within its borders. The first real, non-pilot deployment of the OLPC technology happened in Uruguay in December 2007 (Krstić, 2007). Since then, 200,000 more laptops have been ordered to cover all public school children between 6 and 12 years old, and Uruguay reported in the press that it had become the first nation in the world where every primary school child received a free laptop (Jarvis, 2009) as part of the Plan Ceibal (Education Connect). Over 2.5 million laptops have since been distributed in dozens of countries around the world - with mixed results. Although connectivity for children was the primary goal, the program has met with mixed reviews, including a lack of technical support and problems with ease of use, security, content filtering, and privacy. Government officials in some countries have criticized the project's appropriateness in terms of price, cultural emphasis, and priority compared to other basic needs of their people. Other complaints include minimal teacher training and a cost roughly double the original design target.
I will leave it to the reader to judge whether the effort has been worth the cost to date; this will be a subject for discussion later in the forum, so please do a little research on your own to help voice your opinion there. What should be obvious after the next section, though, is the potential that could be unleashed by combining universal connectivity to information with universal access to content. An adjacent development is the rapid expansion of Massive Open Online Courses (MOOCs), as typified by Coursera. Scientific American recently detailed the issues and potential of adding universal university-level content to the oncoming connectivity.
So, in a little more than 30 years, the Web has expanded from a nascent technology to a tool that transforms how people, businesses, and governments communicate and engage - the Arab Spring being one example, the protests for democracy in Hong Kong another. The Web's economic impact has been expansive, contributing significantly to many nations' gross domestic product (GDP) and fueling new, innovative industries. It has generated societal change by connecting individuals and communities (or allowing them to connect), providing access to information and education, and promoting greater transparency. However, not all countries have harnessed the Internet's benefits to the same degree. If we examine the evolution of Internet penetration globally, we can observe some factors that enable the development of a vibrant Internet ecosystem, as well as the barriers that impede more than 60 percent of the global population from getting online.
A September 2014 report by McKinsey & Company detailed some of these factors. The Executive Summary is uploaded in the Professional Online Library.
Several key findings emerged:
- Over the last decade, the global online population grew to just over 2.7 billion people, with 1.8 billion joining the ranks since 2004. This growth has been fueled by five trends: the expansion of mobile network coverage and increasing mobile Internet adoption, urbanization, shrinking device and data-plan prices, a growing middle class, and the increasing utility of the Internet.
- At the current trajectory, an additional 500 million to 900 million people are forecast to join the online population by 2017. However, these gains will still leave up to 4.2 billion people offline. The rate of growth of worldwide Internet users slowed from a three-year compound annual growth rate (CAGR) of 15.1 percent in 2005–2008 to 10.4 percent in 2009–2013. Without a significant change in technology, income growth, the economics of access, or policies to spur Internet adoption, the rate of growth will continue to slow.
- About 75 percent of the offline population is concentrated in 20 countries and is disproportionately rural, low income, elderly, illiterate, and female. We estimate that approximately 64 percent of these offline individuals live in rural areas, whereas 24 percent of today’s Internet users are considered rural. As much as 50 percent of offline individuals have an income below the average of their respective country’s poverty line and median income.
- The offline population faces barriers to Internet adoption spanning four categories: incentives, low incomes and affordability, user capability, and infrastructure. Despite the increasing utility of the Internet in providing access to information, opportunities, and resources to improve quality of life, there remain large segments of the offline population that lack a compelling reason to go online. Barriers in this category include a lack of awareness of the Internet or use cases that create value for the offline user, a lack of relevant (that is, local or localized) content and services, and a lack of cultural or social acceptance.
- The issues cannot be considered in isolation: McKinsey found large, systematic positive correlations both among the barrier categories and with Internet penetration rates. They measured the performance of 25 countries against a basket of metrics relating to each category of barriers to develop the Internet Barriers Index and found that all factors correlate strongly and separately with Internet penetration, and all regressions indicate an elastic effect - that is, improvements on each individual pillar of the Internet Barriers Index will have a disproportionately positive impact on Internet penetration. One of these factors was infrastructure, which implies that improving infrastructure might have secondary and tertiary effects on the other barriers.
- Approximately 2 billion people, or nearly half the offline population, reside in ten countries that face significant challenges across all four barrier categories. An additional 1.1 billion people live in countries in which a single barrier category dominates.
- Some nations around the world have recognized the transformational impact of bringing more of their population online and are moving aggressively on several fronts to do just that. Governments are setting ambitious goals for mobile Internet coverage and investing to extend fixed-broadband infrastructure and increase public Wi-Fi access. At the same time, network operators and device manufacturers are exploring ways to further reduce the cost of access and provide service to underserved populations. In addition, content and service providers are innovating on services that could improve the economic prospects and quality of life of Internet users.
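The compound annual growth rates cited in the findings above follow the standard CAGR formula. A minimal sketch, using illustrative numbers rather than the report's underlying data:

```python
# Compound annual growth rate (CAGR): the constant yearly growth rate
# that takes a quantity from `start` to `end` over `years` years.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Illustrative only: a population growing from 1.00 to 1.52 billion
# over three years corresponds to about 15% annual growth.
print(f"{cagr(1.0, 1.52, 3):.1%}")  # ≈ 15.0%
```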
Projects like One Laptop Per Child (OLPC) and The Other 3 Billion (O3b) are expanding access and attempting to drive connectivity costs down. The long-term impacts of such infrastructure expansion remain to be seen, and results to date have been mixed. The future is open for debate as transaction costs continue to decrease.
Listen to the Experts (Optional Talk)
An optional video talk by Mr. Negroponte on the OLPC program can be viewed below. In it, he talks about how One Laptop Per Child is doing, two years in. He was speaking at a conference while the first XO laptops were rolling off the production line and recaps the controversies and recommits to the goals of this far-reaching project (16:33).
Eric Schmidt of Google has stated that technology has the potential to be a “great leveler” that would empower the poor like never before. In contrast, dictatorial regimes are increasingly looking to control who has access to the web by filtering information content. Schmidt, who stepped down as Google’s CEO in 2011 after more than a decade, has called on the international community to “fight for the future of the web” and has stated that at least 40 governments are now known to engage in online censorship, compared to only a few just 10 years ago. In May 2012, he stated, “Last year we saw in Egypt what happened when a government tried to turn the Internet off,” referencing the moment when the then-embattled regime of Hosni Mubarak tried to block the web in the face of mass street protests. “Now many governments are attempting to build their own walled Internet, a Balkanized web in which you and I do not see the same information and no one knows what has been censored.” He added: “States will struggle to sell propaganda to the public as citizens get constant access to mobile phones and social networks. In times of war and suffering it will be harder to ignore the voices that cry out for help.” Google's own struggles with the Chinese government are now part of ICT history and are well documented; Google initially went into China agreeing to self-censor search results, in contrast to its corporate philosophy of openness.
Google has changed the way knowledge is accessed. Most of us have been affected by the free flow of information made available since Google was incorporated in the 1990s. While no business is completely transparent about its corporate goals, Google offers individuals around the world a degree of transparency in access to information, having stated from its founding that it wanted to organize all the world’s information. The result has often been conflict, not only with governments such as China's, but also with other US companies such as Amazon, Comcast, eBay, Apple, and Microsoft.
As far back as 2007, according to Pandia, Google was estimated to run between 1 and 3 million servers in data centers around the world, and in 2012 it processed over 1.2 trillion searches, a figure Google reports annually on its “Zeitgeist” site. While this seems like a high number, consider that only about one-third of the planet’s population has access to the Web (let alone a Web without censorship controls) - it shows just how many questions get asked on this one website.
One of Google’s latest innovations centers on returning identical results to a user’s query from anywhere on the planet, so that identical search results are guaranteed regardless of the user's geographic location. Think about this for a moment - exactly identical search results regardless of geographic location! This is a game changer and can only be perceived as a threat by repressive governments wanting to control the "message."
Google has developed a globe-spanning database, “Spanner,” that stretches around the planet yet behaves as if it were all in one place (another death of distance?). Unveiled in the fall of 2012, it is the first worldwide database that “spans” seamlessly across hundreds of data centers' caches of information. Time synchronization is the key to the success of Spanner's global architecture. Historically, attempts at database synchronization have relied on the Network Time Protocol (NTP), which connects machines online to atomic clocks and is used by organizations all over the world. The flaw in this architecture lies in the variable delay of transmitting information around the globe: the resulting accuracy has never been robust enough for tight global synchronization. Google therefore developed its own alternative, the TrueTime API. Google equips its Spanner data centers with their own atomic clocks and GPS receivers connected directly to the machines themselves, providing each database node with both a location and the time. There is actually redundancy built in, since one aspect of GPS is itself time derived from the atomic clocks aboard the GPS constellation. Perhaps the duality of the timepieces has to do with the possible relativistic effects of bodies in orbit?
The synchronization process used by Spanner connects the servers' atomic clocks together with the GPS receivers, achieving accuracy through a consensus of the clocks with each other and with the GPS satellites in orbit. The result is a common clock “spanning” all servers regardless of their geographic location. This enables global database synchronization and replication with a degree of robustness, redundancy, and accuracy never possible previously. Spanner is both globally spanning and consistent, and it is more resistant to network delays, data-center outages, and the other software and hardware failures typical of IT architecture. Google uses it to replicate its data across numerous data centers and to move information between replicas as necessary; if one replica is unavailable for some reason, Spanner shifts to another, and it can also do so simply to improve overall performance. The implementation has not yet been completed, but it may become the basis for a ubiquitous, uniform information-retrieval service worldwide. This will not resolve the connectivity issues for underserved regions, or for regions (China, for instance) where content is censored, but it is the beginning of a uniform global architecture. Uniform delivery of search results runs into all the issues of censorship we have discussed in prior weeks. The full research paper can be downloaded. Article 19 of the U.N. Universal Declaration of Human Rights states that people shall have the right to access information "through any media and regardless of frontiers”; that goal seems potentially fulfilled by this approach to access and content, and this is exactly the threat perceived by many non-progressive nations.
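To make the timing idea concrete, here is a minimal Python sketch of a TrueTime-style uncertainty interval and the "commit wait" it enables. This is an illustration of the concept only, not Google's actual API; the `EPSILON` bound and all names are invented for the example:

```python
import time

class TrueTime:
    """Toy model of a TrueTime-style clock: instead of a single timestamp,
    now() returns an interval (earliest, latest) guaranteed to contain the
    true time. EPSILON stands in for the uncertainty bound that Spanner
    derives from its atomic clocks and GPS receivers."""
    EPSILON = 0.007  # 7 ms uncertainty (illustrative, not Google's figure)

    def now(self):
        t = time.time()
        return (t - self.EPSILON, t + self.EPSILON)

def commit(tt):
    """Commit wait: choose a timestamp at the latest edge of the current
    uncertainty interval, then wait until that timestamp is guaranteed to
    be in the past on every server. Any transaction that commits afterward,
    anywhere, will receive a strictly greater timestamp."""
    _, latest = tt.now()
    commit_ts = latest
    while tt.now()[0] < commit_ts:  # wait out the uncertainty window
        time.sleep(0.001)
    return commit_ts

tt = TrueTime()
t1 = commit(tt)
t2 = commit(tt)
assert t1 < t2  # timestamps respect real-time order across commits
```

The design choice to *wait out* the clock uncertainty, rather than trying to eliminate it, is what lets globally distributed replicas agree on transaction order without constant coordination.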
Imagine being able to have the same search results delivered to your desktop regardless of your location - San Francisco, Rio, or even Beijing. If access and content cannot be controlled, governments lose power. Information typically censored in some nations is censored no longer. This is a game changer for some countries - China being among them. Another game changer will be when everyone has ubiquitous access to the same knowledge from anywhere.
Listen to the Experts. Five billion people can’t use the Internet. Economist Aleph Molinari is working to close the digital divide by giving the digitally excluded access to computers and the know-how to use them. In 2008, he founded Fundación Proacceso, and in 2009 he launched the Learning and Innovation Network, which uses community centers to educate under-served communities about different technologies and tools. To date, the network has graduated 28,000 users through 42 educational centers throughout Mexico.
Finally, there is another aspect to information access: control, whether automatic or via governance. We've talked in the course about differences in access due to the availability of technology infrastructure, differences in governance, and so on - all of them results of human activity. One aspect we haven't talked about is the use of algorithms to push us only information that we might like. The foremost example of this type of technology is apparent when you shop on Amazon: "People who bought X (some product) also looked at or bought W, Y and Z (other products of a similar type)."
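A "people who bought X also bought Y" feature can be approximated with simple co-purchase counts. The sketch below is a toy illustration with invented basket data, not Amazon's actual method:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; item names are illustrative only.
baskets = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "camera_bag"},
    {"laptop", "sd_card"},
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, k=3):
    """Items most often purchased alongside `item`, best first."""
    scores = Counter({other: n for (i, other), n in co_counts.items() if i == item})
    return [other for other, _ in scores.most_common(k)]

print(also_bought("camera"))  # "sd_card" ranks first: bought with a camera twice
```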
This is going on with Google and others as well. The information sent to us as search results is often filtered according to our captured and recorded preferences. While the "Spanner" technology is designed to give you the same search results regardless of geographic location, it is not designed to give everyone in any one location the same results. Google (as well as others) pushes you search results based on the data it has collected about you - its algorithms decide what to send based on what they expect you want to see!
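The effect can be illustrated with a toy re-ranker: the same query and the same candidate results, ordered differently for two users based on a stored interest profile. All data, names, and weights here are invented for illustration; real systems use far richer signals:

```python
# Base relevance scores for the ambiguous query "jaguar".
results = {
    "Jaguar the car brand": 0.90,
    "Jaguar the animal": 0.88,
    "Jacksonville Jaguars (NFL)": 0.80,
}

# Interest profiles inferred from each user's past behavior (invented).
profiles = {
    "alice": {"cars": 0.9, "wildlife": 0.1, "sports": 0.0},
    "bob":   {"cars": 0.0, "wildlife": 0.2, "sports": 0.9},
}

# Topic assigned to each candidate result.
topics = {
    "Jaguar the car brand": "cars",
    "Jaguar the animal": "wildlife",
    "Jacksonville Jaguars (NFL)": "sports",
}

def personalized_ranking(user):
    """Blend base relevance with the user's profile affinity for each topic."""
    def score(doc):
        return results[doc] + 0.5 * profiles[user].get(topics[doc], 0.0)
    return sorted(results, key=score, reverse=True)

print(personalized_ranking("alice")[0])  # Jaguar the car brand
print(personalized_ranking("bob")[0])    # Jacksonville Jaguars (NFL)
```

The same three candidates, the same query, yet two users see different top results - which is exactly the filtering the text describes.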
As I was learning to use the Internet in the early years, it felt like something very different from what it is today. It was a connection to the world - something that would connect us all together globally. Five percent of the planet was connected in 2000; 40 percent is connected as of this writing in August 2014. In those early years, I was certain that it was going to be great for society, democracy, and the globe in general. Today, there has been a shift in how information flows online because of these filtering algorithms, and how they work is largely invisible to the user. It could become a real problem. I've always gone out of my way to meet and discuss issues with people of all political leanings; I enjoy the dialogue, and I enjoy hearing what other people are thinking about and why they make the decisions they do. What's happening in media like Google, Amazon, and Facebook is that my information appetite is being filled more and more based on my past searches and preferences. This is potentially a huge problem if it comes to contribute to the ultimate example of "groupthink."
Try an experiment: ask three or four friends to run a Google search on a topic like "Morocco" or "Israel" at about the same time of day and then send you a screen capture of the results. You will be amazed at the differences in what is returned. As observed by Eli Pariser, the former executive director of MoveOn.org, "different people get different things. Huffington Post, the Washington Post, the New York Times - all flirting with personalization in various ways. And this moves us very quickly toward a world in which the Internet is showing us what it thinks we want to see, but not necessarily what we need to see." Eric Schmidt of Google has stated, "In the future, it will be very hard for people to watch or consume information that has not in some sense been tailored for them."
We've been here before as a culture. Early in the 20th century, newspapers weren't much concerned about the civic responsibilities of their reporting. Then it came to be recognized that printed news mattered, and we now realize that you can't have a functioning democracy if citizens don't get a good flow of information. A hundred years ago, newspapers were critical because they acted as a filter, and as a result a form of journalistic ethics evolved. It was not perfect by any means, but we learned to depend on it for the next 100 years or so.
We are back in the same situation today on the Internet, except that there seem to be few ethics in place when it comes to the filtering. Just when we need to engage with people who think differently - maybe radically so - the Internet is putting in place automated filtering that lumps us together with people of like mind, "communities of interest" so to speak, that we neither intentionally nor knowingly chose. We need to better understand what these filters are doing and how we can overcome them if we choose; otherwise they threaten to become a medium that creates a more fractionalized and myopic society, even more partisan than today's.