So there’s been all this hype (most likely unknown to most people) about what is being called Web 2.0. It’s a geeky name for a geeky idea: the Internet community recovering from the burst bubble of 2000 and figuring out new ways to exist. A lot of it has to do with things I’ve written about here before: a more participatory Internet that encourages the kind of back-and-forth that creates and sustains a civil society, and that is used by a community dedicated to the philosophy of open source and free information. I find some of this encouraging; for my work I’ve been reading Robert McChesney, Pippa Norris, and others who, five or six years ago, were writing critically about the economic e-wonder of the information e-highway (the language gets dated real quick). All of the optimism they reported on, and were critical of, centered on the mountains of money that could be made online.
The techie world soon sobered up and has spent the last five years regrouping. The result? Web 2.0, an Internet that focuses on user participation and community building. In Wired’s words: “Blogs, Wikipedia, open source, peer-to-peer: behold the power of the people.”
People are excited about del.icio.us, a social bookmarking system in which users keep an online list of bookmarks and share them with other members of the community; the continued popularity of blogging; and the ongoing efforts to make software and information free from corporate interests.
These are completely laudable efforts, but I’d like to go a step further and use the enthusiasm behind these technologies to introduce them to people who so far have had little to no contact with computers and the Internet, using a new kind of language to do it. Instead of going into poor communities in the U.S. or developing nations in Africa with the attitude that learning these skills will make people richer or land them jobs, we can encourage people to use them to create and maintain communities and to develop new ways of overcoming social, economic, and political barriers to communication. This is idealism in the extreme, but I do think these new technologies can help: by making blogging not only easy but a skill as ingrained as making a phone call, say, or by getting people used to the idea of communicating with relatives in another country through chat rooms or voice-over-the-Internet calls.
However, we also need skeptics to keep us in our seats and contextualize things. Nicholas Carr offers the kind of reality-check we need:
And so all the things that Web 2.0 represents – participation, collectivism, virtual communities, amateurism – become unarguably good things, things to be nurtured and applauded, emblems of progress toward a more enlightened state. But is it really so? Is there a counterargument to be made? Might, on balance, the practical effect of Web 2.0 on society and culture be bad, not good? To see Web 2.0 as a moral force is to turn a deaf ear to such questions.
Although he lands on the side of pessimism rather than a restrained optimism, Carr is right to raise these questions. In the work I’m doing I’ll be exploring why and how some see the Internet as a moral force, and I’ll subject those ideas to serious scrutiny, because in reality the moral force comes from the people who use the tools, not the tools themselves.