Open Source Television
Originally on Disinfo.net, but the link keeps disappearing every time I put it on Blogger.
Mark Pesce, author of The Playful World: How Technology Is Transforming Our Imagination (Ballantine Books, New York, 2000), has given a paradigm-breaking speech to Australia's Smart Internet CRC on Open Source Television. © 2004, Mark D. Pesce. Rights for reuse granted under the Creative Commons Attribution License.
Mark Pesce
Lecturer, Digital Media Programme
Australian Film Television and Radio School
Playful World
14 July 2004
Preamble
The worldwide consolidation of media industries has led to a consequent closure of the public airwaves with respect to matters of public interest. As control of this public resource becomes more centralized, the messages transmitted by global media purveyors become progressively less relevant, less diverse, and less reflective of ground truth.
At present, individuals and organizations work to break the stranglehold of these anti-market-media-mega-corporations through the application of the courts and the law. However, because of the inherent monopoly that anti-market media maintain on the public mindset, legislators have been understandably reluctant to make moves toward media diversification. We are thus confronted with a situation where many people have interesting things to say, but there are progressively fewer outlets where these views can be shared.
The public airwaves, because they are a limited resource, are managed by public bodies for the public interest. While honorable, the net effect of this philosophy of resource management has been negative: a public resource has become the equivalent of a beachfront property, its sale generating enormous license revenues, but its transfer to the private domain denying the community access to the sea of ideas.
If a well-informed public is the necessary prerequisite to the democratic process, then we must frankly admit that any private ownership of public airwaves represents a potential threat to the free exchange of ideas. Now that private property has mostly collectivized the electromagnetic spectrum, and with little hope that this will soon change, we must look elsewhere to find a common ground for the public discourse.
We are fortunate that such ground already exists on the Internet.
Introduction: Computational Communication and Social Emergence
The architecture of Linux, the Internet, and the World Wide Web are such that users pursuing their own "selfish" interests build collective value as an automatic byproduct. In other words, these technologies demonstrate some of the same network effect as eBay and Napster, simply through the way that they have been designed.
– Tim O'Reilly, "The Open Source Paradigm Shift"
First and foremost, the Internet is a communications medium. We tend to think of it as a medium for communication between computers, but this is mistaking the forest for banks of trees. The computers communicate only to service the needs of human communication. Flexible communication strategies are the one identifiable trait that separates us from the great apes, the cetaceans, and dogs. This flexible communication capability is the essence of what makes us social beings. Organic defects in communication, such as aphasia or autism, produce an organic sympathy in us, because we understand, from birth, that no man is an island so long as he remains in communication with his fellow men.
Our philosophers believe that language defines the scope of consciousness. Wittgenstein believed that all philosophy was a result of the imperfections of language; Orwell created Newspeak to portray the prison of a language which extinguished all political thought; FOX NEWS renames a “war of choice” as a “war of liberation,” and – voila! – an entirely new set of associations are brought to mind. By definition, communication is a social act, and because it is a social act, it is also a political act.
Thus, despite all of the efforts of anti-market forces to constrain the Internet into a particular or narrow category of possibilities – of politically correct speech, if you will – the Internet is inherently a social medium, and thus, by extension, a political medium. The Chinese know this; they maintain gateways through which the entire nation’s TCP/IP traffic pours, the better to be examined, weighed, and judged. You can be arrested in China for accessing the wrong website – as you can in Australia, the United States, or the UK, if you’re using the Internet to satisfy a pedophilic fancy.
Although academics in the sciences prefer to think of their own research as value-neutral, the reality of the situation is always more complex; the process of science is itself overlaid with various and unseemly rivalries, competitions and intrigues; the products of the scientific process are highly coveted by governmental and commercial entities. The political and economic fate of nations now depends upon scientific endeavor. He who has the most toys wins.
Nowhere is that more true than in Internet research. What began as an investigation into fault-tolerant networking (the better to survive nuclear war) became a new form of human communication, the like of which has not been seen since Gutenberg. For those of us, including myself, who lived through the hype and burst-bubble of the last decade, such statements might provoke a studied cynicism: nothing is that revolutionary, nothing quite so disruptive to the status quo. But if that were really true, would “Google” have gone from brand name to verb in just two years? The utopian promise of the Internet – that all information would be freely available to everyone all the time – is coming to pass. And although anti-market forces have attempted to constrain the development of the Internet into purely commercial directions (so as to reinforce their anti-market hegemony), this strategy has been unsuccessful. For every “walled garden” or “Chinese wall” erected to block the free flow of information, an alternative has spontaneously arisen, lacking the impediments of its progenitor. In this sense, the Internet is endlessly protean and autopoietic.
The best example of this phenomenon is the story of Encyclopedia Britannica, which launched itself onto the Web in the middle of 1999. In a classic example of underestimating your market, Britannica’s servers crashed after about 72 hours, because there was so much demand for their nearly endless supply of high-quality articles. Once restored, it became one of the shining lights of the Internet – a continuously available supply of accurate information on almost any subject you might think up. Demand was strong, and only grew stronger. The strength of that demand caused problems for Britannica, because it required an endlessly increasing supply of bandwidth and servers to handle the growing set of users. The business minds at Britannica had not developed a business strategy to cope with the incredible demand for their services; the world’s most popular information repository began to lose money. It seems incredible, but there it is: popularity killed Britannica.
What do I mean? In order to cover their operating expenses, Britannica adopted a “walled garden” strategy. It became a subscription-based service; you could pay a fee of US $5 a month to have access to Britannica, or you could satisfy yourself with an abstract from its database of articles. A number of Britannica’s users were less than satisfied with this new arrangement, and decided to set the matter aright by creating their own encyclopedia: an open-source work, distributed and freely available – either to read articles within it, or to add articles to it. Using a simple but powerful web technology known as the wiki, Wikipedia has grown from its inception in 2001 to span more than a quarter of a million articles on nearly every topic imaginable. Although many of its articles lack the polish of similar entries within Britannica, its open and collaborative nature has produced an ever-more-accurate database of content, one which will no doubt soon surpass Britannica in its scope, relevance and value.
This is but one of a growing number of examples of O’Reilly’s Law of the relationship between software architectures and social engineering. There are many others, including the World Wide Web, LINUX, Gnutella, and Apache. In fact, we have enough examples of this relationship that we can choose to design for it. We can leverage the emergent social behaviors produced by new Internet-enabled forms of communication in our software designs. These principles of emergent communication must be taught alongside the basics of packet windows, protocol stacks and error-correcting codes. If we honestly expect to educate engineers who will work across the bulk of the 21st century, we need to give them the complete specifications of the operating environment within which they work, an environment that includes both the communications hardware of computing devices and the social software of human beings. Any research which explores one but ignores the other not only gives the student a woefully inadequate articulation of the realities of the comprehensive operating environment, but deprives these students (and their instructors) of an opportunity to think comprehensively about the true potential of the combination of computational communication and social emergence.
I would not entertain the idea of issuing such a pedagogical challenge to a roomful of researchers without an example to back it up; this discussion should not be bootless. We need something to sink our teeth into, a problem to solve. (Knowing the social dynamic of research communities, I know what gets you out of bed in the morning.) The rest of this paper describes my own attempt to put this principle into practice. At this point it is little more than a proposal, but it is my own attempt to formalize the preceding thesis with practice.
Part One: Liberty
They might be better off I think,
The way it seems to me
Making up their own shows
Which might be better than TV.
– Talking Heads, “Found a Job”
June 2004 was a watershed month on the Internet. For the first time, the volume of video traffic surpassed the volume of audio traffic. In a practical sense this means that the vast and mostly invisible file-sharing networks are now being used to distribute television programmes and motion pictures. I have used these networks myself in order to find television programming which is not broadcast in Australia, but which I am thoroughly addicted to in the USA. This is arguably a form of copyright violation, as I haven’t the permission of the copyright holder to display this programming, even for my own private use. But since this programming isn’t available at all in this country, either on a free-to-air or cable TV network, I am forced to satisfy my habit by lurking in the Internet’s shadier districts, where, if I am not careful, I could pick up a viral infection, or worse.
There’s a lot of television programming produced every year in the major English-speaking nations: the US, UK, Canada, Australia and New Zealand. There are many hundreds of thousands to millions of hours of programming available in back catalogs. There’s such a wealth that there is no way that the five free-to-air broadcasters could hope to deliver even the smallest part of it to Australian audiences. Even with a few hundred FOXTEL DIGITAL cable channels, it’s still beyond all bounds. There’s just too much out there. It’s like trying to read through the US Library of Congress; more books are issued every day than you can ever read, so you’re constantly falling further and further behind.
Short of having an infinite number of television channels available to the viewer, there is no way to deliver the existing catalog of English-language audiovisual content to a viewer. And so far we’ve only considered professional television productions, excluding the ever-increasing selection of “amateur” content: short films, documentaries and home movies that must be considered as equal in importance to professional works, even if their audience is limited to a handful of viewers. And when you expand your scope to non-English language content (as we should, because the world is primarily Mandarin-speaking, not English-speaking) you realize that the idea of the television channel has become a restrictive anachronism.
The television channel was created as a way to regulate the limited resource of radio frequency spectrum. Before the advent of spread-spectrum communications, radio spectrum was seen as a zero-sum proposition: giving to one would mean taking away from someone else. Thus all available FM radio frequencies in Sydney and Melbourne are occupied by broadcasters. However, there are only five VHF television broadcasters in Australia, while the spectrum could easily accommodate seven, plus at least twenty UHF broadcasters. Why so few? Again, a scarcity argument is used, only this time the scarce resource is advertiser dollars: if there were, say, a fourth commercial free-to-air television network, the billions of dollars in advertising revenue collected by the three commercial networks would have to be divided among four players, depriving them of the revenue they need to satisfy the “local content” requirements mandated by the Federal government. In other words, they need to keep the monopoly on television channels so they can continue to churn out an embarrassingly poor string of Australian-produced television dramas.
But we know this is all so much lies, damned lies and public relations: the real reason they want to keep the spectrum under their tight control is so they can continue to act in collusion as anti-market forces. The free-to-air commercial broadcasters pay a pretty penny for protection, donating liberally (so to speak) to the political parties, thereby ensuring their continued stranglehold on commercial television broadcasting. The same situation exists in the United States, with the same results: broadcast television is a filter and block for the television audience, dictating what should and should not be seen. The free speech implications of this situation are obvious, and emerge from the social engineering of so-called “mass media”.
That’s enough discussion of things as they were. All of this has changed, because of the advent of digital television broadcasting. DTV programmes are delivered as an MPEG2-format data stream: small packets of audiovisual data encoded, transmitted, delivered and assembled at the television set into a high-resolution image. Because this data is entirely digital, the stream can be recorded to digital media and endlessly reproduced with no loss in quality. As it is in MPEG2 format, it can easily be burned onto a DVD, or transcoded into another, more compact format, such as DivX or Windows Media 9, burned to a Video CD, posted on a website, disseminated through a peer-to-peer file sharing network, published via BitTorrent, sent as an email attachment, etc. Bits are bits are bits, and it matters not whether they’re the latest Britney Spears song, an episode of The Sopranos, or Spider-Man 2.
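To make the "bits are bits" point concrete, here is a minimal sketch of the kind of transcode described above, shelling out to the ffmpeg command-line tool. It assumes ffmpeg is installed with MPEG-4 and MP3 encoders; the filenames are hypothetical.

```python
# A minimal sketch: re-encode a captured MPEG-2 DTV stream into a more
# compact MPEG-4 (DivX-family) file by shelling out to ffmpeg.
# Assumption: ffmpeg is installed; the filenames are hypothetical.
import subprocess

def transcode_to_mpeg4(source: str, dest: str, video_kbps: int = 1000) -> None:
    """Re-encode an MPEG-2 capture as MPEG-4 video with MP3 audio."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", source,              # e.g. a recorded DTV transport stream
            "-c:v", "mpeg4",           # MPEG-4 Part 2 video, the DivX family
            "-b:v", f"{video_kbps}k",  # target video bitrate
            "-c:a", "libmp3lame",      # MP3 audio
            "-b:a", "128k",
            dest,
        ],
        check=True,                    # raise if ffmpeg reports failure
    )

transcode_to_mpeg4("recording.ts", "recording.avi")
```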
Now, while I can’t advocate wholesale copyright violation – and we’ll come to that a bit further along – I do believe that it is appropriate to examine the politics of scarcity with respect to television broadcasting, and engineer a solution which effectively routes around the problem (to steal a phrase from John Gilmore), recapitulating the Britannica to Wikipedia process. As media consumers, we need to liberate ourselves from the anti-market forces of the free-to-air commercial networks, and, as creators and purveyors of audiovisual content, we need to free ourselves from the anti-market forces of commercial networks as programme distributors. In other words, we need to develop a comprehensive computational and emergent strategy to disintermediate the distributors of audiovisual media, directly connecting producers to consumers, and further, erasing the hard definition between producer and consumer, so that a producer’s product will only be identifiable by its inherent quality, in the eyes of the viewer, and not by the imprimatur of the distributor.
The idea of audiovisual media delivered over the Internet, or IPTV, is hardly new. Apple’s QuickTime was released in 1991. Since 1995 you have been able to use the RealPlayer to deliver streaming video on the computer desktop. In the last decade the efficiency of coding algorithms has improved tremendously – the best of these is arguably Microsoft’s Windows Media 9 Series (which proves that while money can’t buy everything, it can buy some very clever mathematicians) – so VHS-quality video can be delivered to the desktop in a 240 Kbps stream, and DVD-quality in just 1 Mbps.
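A quick sanity check on those bitrates (my arithmetic, not a figure from the text) shows why they matter for downloading as well as streaming:

```python
# How much disk does an hour of video cost at the stream rates quoted above?
def hour_size_mb(kbps: int) -> float:
    """Megabytes for one hour of video at the given kilobits per second."""
    return kbps * 1000 / 8 * 3600 / 1_000_000

vhs_hour = hour_size_mb(240)    # ≈ 108 MB per hour at VHS quality
dvd_hour = hour_size_mb(1000)   # ≈ 450 MB per hour at DVD quality
```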
Technologically, the pieces are in place for a radical reconfiguration of the technology of programme delivery to the TV viewer. Digital television, thought to be the endpoint of this revolution, was actually only its beginning, and while digital televisions are very useful as display monitors, their broadcast tuners with their sophisticated analog electronics will be completely obsolete once broadband supplants broadcast as the delivery medium. The digital TV is a great output device, but a lousy tuner, because the design of the device reinforces the psychology of spectrum scarcity.
What we need, therefore, is a new device, which sits between the Internet, on one hand, and the digital television set, on the other, and acts as a new kind of tuner, thereby enabling a new, disintermediated distribution mechanism. The basic specification for this device is quite simple: it would be capable of locating, downloading and displaying audiovisual content, in any common format, on the viewer’s chosen display device. That display device doesn’t even need to be a digital television - it could be a PC. Or the soon-to-be-released PSP, the PlayStation Portable. Or a 3G cell phone. This intermediary device – the “Internet tuner,” if you will – could be a hardware-based set-top box, or a piece of software running on a more general-purpose computing device – it doesn’t really matter. But, as we know from Tim O’Reilly, the software architecture of this tuner is key to producing an emergent social effect.
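As a thought experiment, that basic specification reduces to three operations. The following Python skeleton is purely hypothetical – every class and method name is my own invention for illustration, not an existing API:

```python
# A hypothetical sketch of the tuner's basic specification:
# locate, download, display. None of this is an existing API.
from dataclasses import dataclass

@dataclass
class Programme:
    title: str
    source: str   # where a copy can be fetched: a URL, a peer network, etc.
    fmt: str      # e.g. "mpeg2", "divx", "wmv9"

class InternetTuner:
    def locate(self, query: str) -> list[Programme]:
        """Search indexes and peer networks for matching programmes."""
        raise NotImplementedError

    def download(self, programme: Programme) -> bytes:
        """Fetch the programme, from a server or from peers."""
        raise NotImplementedError

    def display(self, data: bytes, fmt: str) -> None:
        """Decode and hand the frames to whatever display is attached:
        a digital TV, a PC, a PSP, a 3G phone."""
        raise NotImplementedError
```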
Part Two: Facility
“Kids come in to tour the studios expecting to see something amazing. They’re disappointed to see a whole bunch of folks hunched over computer monitors, making funny faces. I really want to get a giant red button in my office and tell the kids, ‘This is the button we press to make Gollum. Just press it and you’ll make some art.’ They’d love it.”
– Bay Raitt, chief character animator of Weta Studios, and creator of Gollum
The beauty of television is its elegant simplicity. There are just two options which a television viewer manipulates in the course of normal viewing: volume and channel. Until about fifteen years ago, these were the only exposed controls on a television set. When the remote control arrived, all of that changed. Remote controls are crowded with buttons for features that most people rarely use, yet those buttons must be on the remote because the television set has sacrificed physical controls on the unit itself in favor of virtual interfaces accessed via the remote.
Few people own a television in isolation. Increasingly, television is the video display device for a home theatre, and is connected to a VCR, a DVD player, a digital video recorder (DVR), a cable TV set top box, a video game console, an audio amplifier, and so forth. Each of these devices has its own controls and therefore its own remote control. It is not uncommon to have five or six separate remote controls in a home theatre system. This creates a culture of expertise – not unlike the early days of home computing – where one family member is the “master” of the various remotes, while other family members use the default system settings, which are most often adequate for their immediate needs. This situation is so common – and so vexing – that there is now a thriving market in thousand-dollar remote controls, complete with GUIs, which attempt to simplify complex home theatre configuration tasks. Although the claim can still be made that television remains a very accessible and easy-to-use medium, the truth of the situation is actually somewhat different. And as home theatre devices proliferate, the interface problem will only grow worse.
I mention this because the most formidable task in the creation of an Internet-based television tuner is the design of an interface which is accessible to all TV viewers, regardless of their level of expertise. A television viewer using the Internet tuner must be completely unaware of the sophisticated processes required to acquire and play Internet-delivered audiovisual content. Any interface more complex than the FOXTEL DIGITAL Electronic Programme Guide – which is actually quite usable – will present an insurmountable barrier to the usage of the tuner. The tuner must be able to configure itself automatically, without any worries about firewalls or IP address configuration or DNS hosts, etc. All of the things that an Internet expert does without thinking must be completely invisible to the user of the tuner. In that sense, one design requirement of the tuner interface is to keep the details of its operation hidden from the user.
Interfaces are too frequently the most overlooked component of consumer electronics devices, but are absolutely the most vital element of their design, for interfaces are where the user meets the actual capabilities of the device. Interface is communication, interface is a language, and the user’s fluency in that language determines their level of satisfaction as they interact with the device. Interactions do not occur in isolation. Every time a user interacts with a device, he brings with him the memory of all prior interactions with that device. Some of the best interfaces maintain a memory of their use, and employ persistent data to reconfigure their behavior. Interaction must be seen as a continuum; it is not a single event but an evolving relationship which, if properly constructed, evolves the interface as the user grows more familiar with it. The Internet tuner should have an interface which concisely represents the enormous sea of audiovisual programming available to the viewer, without overwhelming the viewer in a sea of choices. Hence, the interface serves a role similar to a free-to-air broadcaster, distilling the overwhelming set of possibilities to a manageable few.
How does the tuner interface perform this magical act of reduction? To borrow from the example of the TiVO PVR, it must understand likes and dislikes. Programmes can be grouped by genre, and, for this reason, TiVO can make recommendations on the order of, “If you liked that programme, you’ll probably like this one as well.” If the Internet tuner keeps an exhaustive record of the viewer’s interactions with audiovisual programming (including which programmes were abandoned midway through) it will be able to make recommendations drawn from the viewer’s choices, using these as a basis to scour the immensity of the Internet to find those few thousand programmes which might most interest the viewer. If this part of the interface works effectively – which is to say, invisibly – the biggest problem won’t be that the user will be drowned in choices, but rather that these recommendations will be too constrained by all the choices the viewer has already made. Any predictive capability can too easily spiral into a feedback loop of diminishing choice which only appears to be effective because there is such a wealth of choice that any subset of it seems prodigiously rich.
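A toy sketch of that recommendation step: score genres from the viewing record, penalising programmes abandoned midway through, then rank the catalogue. The scoring rule is illustrative only – it is not TiVO's algorithm:

```python
# Illustrative genre scoring from a viewing record; not TiVO's method.
from collections import defaultdict

def genre_scores(history):
    """history: list of (genre, finished) pairs from the viewing record."""
    scores = defaultdict(float)
    for genre, finished in history:
        scores[genre] += 1.0 if finished else -0.5  # abandonment counts against
    return scores

def recommend(catalogue, history, n=10):
    """catalogue: list of (title, genre) pairs found out on the network."""
    scores = genre_scores(history)
    return sorted(catalogue, key=lambda item: scores[item[1]], reverse=True)[:n]

picks = recommend(
    catalogue=[("Programme A", "drama"), ("Programme B", "comedy")],
    history=[("drama", True), ("comedy", False)],
)
```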
Thus, the interface which seeks to shield the user must also provide an opening, a craft to sail the crowded seas of Internet audiovisual content. An interface to unbridled complexity is more difficult to create than one which filters it away, but it is a design necessity, lest the Internet tuner simply reinforce the scarcity model of free-to-air broadcasting. How can we approach this? By applying the principles of social networking.
As stated in my thesis, all networking is inherently social. Yet, over the last year we’ve seen the birth of explicitly social networking, with systems such as Friendster, LinkedIn and the ever-more-popular Orkut, Google’s contribution to the field. These systems rely upon the “six degrees of separation” principle to link you to your friends, their friends, their friends’ friends, and so on, so that your place within an ever-widening social network is thoroughly delineated. Recently I signed up for Orkut, and spent a few hours creating my social network, mostly by visiting my friends’ lists of friends and adding their friends – who are also my friends – to my list of friends. After I had completed that task, I created an Orkut “Community” – basically a bulletin board for people of aligned interests. I named my community “The List”, and described it in the following words:
Please share with the list the best book, movie, or track you've come across: this week, this year, this lifetime. Share the best. Enlighten your friends. Long explanations are unnecessary, and beside the point. Just share your list. Share early. Share often. Peace.
My goal was to create a way that I could learn from my friends – people whose opinions I might be inclined to trust – what I should be reading, viewing, playing, etc. I wanted to create an explicit place for one of the functions of real-world social networks, “word of mouth” which spreads good ideas and filters out the bad. “Quality will out,” my friend Susan Mainzer is fond of saying, and discussions of quality experiences are a main preoccupation of social networks. How do I determine the difference between hype and reality? I listen to my friends, consider their opinions, then make decisions, drawing in part from their experiences.
For this reason, the Internet tuner should fully but invisibly incorporate the features of social networking. The tuner knows what your friends have been watching, and it knows how much you trust (or ignore) your friends’ opinions. This is sufficient for the tuner to generate a wider selection of viewing options, ones which you might not have considered on your own. This procedure is similar in nature to the collaborative filtering methodology of Pattie Maes’s Firefly system, which is now incorporated into Amazon.com and many other online shopping systems, but instead of measuring your tastes against a faceless group of people with similar tastes, it adds the discord and chaos of friends who almost certainly do not share your tastes, yet with whom you share strong affiliations. Collaborative filtering à la Firefly should also be included in the tuner interface, because it allows the tuner to find not-exactly-alike-but-closely-related selections.
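To illustrate, here is a hypothetical sketch of that social layer: widen the candidate pool with what friends have watched, weighted by how much the viewer trusts each friend's opinion. The names, weights and data shapes are all assumptions:

```python
# Illustrative friend-weighted recommendation; names and weights are made up.
def friend_recommendations(friends_watched, trust, n=10):
    """friends_watched: {friend: [titles]}; trust: {friend: weight 0.0-1.0}."""
    scores = {}
    for friend, titles in friends_watched.items():
        weight = trust.get(friend, 0.0)  # ignored friends contribute nothing
        for title in titles:
            scores[title] = scores.get(title, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:n]

picks = friend_recommendations(
    friends_watched={"susan": ["Programme C"], "ben": ["Programme C", "Programme D"]},
    trust={"susan": 0.9, "ben": 0.4},
)
```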
Finally, for those who want to surf the full richness of the entire sea of possibilities, it should be possible to step behind the curtain, abandoning the idea of interface-as-filter, and give the user an unadulterated (but easily manipulated) view into the chaos of the whole. People should be able to get anything they want, might want – even what they don’t want – just by looking for it. This interface might look something like Google – very spare in interface elements, but capable of overloading the inquisitive surfer with a wealth of possibilities.
The physical interface to the tuner’s logical interface should require neither a keyboard nor a mouse; it should be operated by a remote control with just a few buttons for scrolling and selection, as found on a DVD remote control. The constraints of a simple physical interface will keep interface designers from adding unnecessary sophistication to the tuner.
When all of these architectural elements are combined behind a deceptively simple interface, the Internet tuner becomes a true portal and a companion as one surfs through the ever-growing audiovisual content of the Internet. Because the tuner preferences are ultimately reducible to a set of data, they can easily be transferred between instances of the Internet tuner. Hence, wherever you go, your particular version of television travels with you. Indeed, you’ll be disinclined to access television through someone else’s tuner configuration; it will feel as though you’re trawling through an alien environment, without any of the familiarity generated by your own interactions with the tuner. In its ultimate aspect, the Internet tuner will behave much like Google’s “I’m Feeling Lucky” button – just turn the tuner on, and you’ll be watching what you want to watch, when you want to watch it, wherever you want to watch it.
A device like that wouldn’t be harder to use than television. It’d be easier.
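That portability – the claim above that the tuner's preferences reduce to a set of data and can travel between tuners – can be sketched in a few lines. The JSON schema here is purely hypothetical:

```python
# A sketch of preference portability; the schema is an assumption.
import json

def export_profile(path, history, trust):
    """Write the viewing record and friend-trust weights to a file."""
    with open(path, "w") as f:
        json.dump({"history": history, "trust": trust}, f)

def import_profile(path):
    """Read a profile back, e.g. on a new device or a travelling tuner."""
    with open(path) as f:
        profile = json.load(f)
    return profile["history"], profile["trust"]

export_profile("my_tuner.json", history=[["drama", True]], trust={"susan": 0.9})
history, trust = import_profile("my_tuner.json")
```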
Part Three: Equality
Technology that disrupts copyright does so because it simplifies and cheapens creation, reproduction and distribution. The existing copyright businesses exploit inefficiencies in the old production, reproduction and distribution system, and they'll be weakened by the new technology. But new technology always gives us more art with a wider reach: that's what tech is for.
Tech gives us bigger pies that more artists can get a bite out of. That's been tacitly acknowledged at every stage of the copyfight since the piano roll. When copyright and technology collide, it's copyright that changes.
– Cory Doctorow, Microsoft Research DRM Talk, 17 June 2004
Any attempt to replace the broadcast model of audiovisual programme distribution has to compete against the enormous efficiencies offered by radio spectrum broadcasting. A truism in broadcasting is that it costs no more money to add more viewers. The inverse is true with broadband; every new viewer requires more bandwidth, more server space, more electricity, more physical infrastructure. That fact turned out to be the Achilles’ heel of Encyclopedia Britannica. Yet it need not have been so.
This month the BBC begins testing their “Flexible TV” system, which is something like the Internet tuner I’ve described here, except that their software package is designed to receive only the previous and forthcoming weeks of BBC programming. It is a closed system in that respect, and, even when it goes into wider distribution, will only be available for UK residents. The BBC faces the same problem with broadband distribution as was faced by Britannica, but their solution is novel: they will be using peer-to-peer file sharing techniques to superdistribute content throughout the growing legion of viewers. Each programme distributed through the system is segmented into small pieces, and each computer running the BBC Internet Media Player (and Dirac, an open-source codec designed by the BBC) freely shares these programme segments with its peers on the Internet. Thus you don’t have to rely on the BBC’s servers being available, or even reliable – as content is superdistributed into the file sharing network, it becomes increasingly easy to find and retrieve. The more Internet Media Players in use on the Internet, the faster the response of the network as a whole. Under this methodology, every new viewer yields more total bandwidth, a virtuous cycle which gives broadband distribution its own economy of scale.
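Here is a minimal sketch of that segmenting-and-sharing scheme, under stated assumptions: the segment size is arbitrary, SHA-1 stands in for whatever content addressing a real system would use, and the peer protocol itself is elided. This is not the BBC's implementation:

```python
# Illustrative superdistribution: split a programme into hash-addressed
# segments so any peer holding a segment can serve it.
import hashlib

SEGMENT_SIZE = 256 * 1024  # 256 KB per segment: an arbitrary choice

def segment(data: bytes):
    """Return (sha1_hex, chunk) pairs so segments can be requested by hash."""
    return [
        (hashlib.sha1(data[i:i + SEGMENT_SIZE]).hexdigest(),
         data[i:i + SEGMENT_SIZE])
        for i in range(0, len(data), SEGMENT_SIZE)
    ]

def reassemble(manifest, fetch):
    """manifest: ordered segment hashes; fetch: a callable that asks the
    peer network (or, failing that, the origin server) for a segment."""
    return b"".join(fetch(h) for h in manifest)
```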
(A small aside: when I came up with the idea to use superdistribution within the Internet tuner to defeat network congestion and failures, I assumed it was a unique innovation. As it turns out, I’m just thinking with the crowd on this one. Peer-to-peer superdistribution is the future of distribution.)
Thus, if broadband distribution is to compete with broadcast distribution, every consumer of content must also become a broadcaster of content. This does not mean that every consumer must necessarily become a producer of content; but there’s such a wealth of home movies, dance tracks created in Acid or GarageBand, Flash movies, etc., that it is very likely that every consumer will produce and publish at least some of their own content, much as many people today have their own web sites. This equality between producer and consumer is one of the key concepts designed into the architecture of the Internet tuner.
Although the Internet tuner will open its doors to a wide class of “amateur” content, most viewers will be interested in watching “professional” content, that is, content produced by professional production crews. This could be the latest TV series from FOX or HBO, an archival programme, a feature motion picture, and so forth. The Internet tuner makes no distinction between amateur and professional content; indeed, amateur creations are sometimes of higher quality than professional productions, making up for modest production values with better storytelling. The difference between amateur and professional content has nothing to do with the programming itself, but rather with the copyright restrictions placed upon that content. Professional content must be paid for – whether with television commercials, a theatre ticket, the purchase of hard media, or a service subscription. That’s the only recognizable point of differentiation. Until the advent of the Internet tuner, professional content also held a monopoly over the distribution channel, but as the tuner replaces the distribution channel with its own peer-to-peer form of superdistribution, this monopoly will be disintermediated out of existence. Nonetheless, viewers will want to watch professional content, and content producers want to sell their programming to viewers – by any means necessary.
Economics can be thought of as an emergent quality of a sufficiently complexified society; any society which grows beyond a tribal stage necessarily establishes a system of value exchange. This means that the Internet tuner, as something which links computational communication and social emergence, invariably has an economic aspect. Because the tuner will deal with works protected by copyright, its economic capabilities must be explicit, after the manner of Apple’s iTunes, rather than implicit, as is the case with a Web browser.
Although it is obvious to me that the Internet tuner itself must be FOSS (free and open source software – more on this further along), it must have the capability to deal with transactions – micropayments – in order to open it to the world of professional content. If the Internet tuner is simply a tool for piracy – standing in as the audiovisual equivalent of Napster – it will be fought against by the anti-market forces which control the creation and distribution of materials protected by copyright. If, on the other hand, the Internet tuner provides an attractive platform for the distribution of for-pay audiovisual media, copyright holders will not fight against it – even if they do not at first wholeheartedly embrace it.
There are numerous systems available to handle micropayments – including PayPal and Ron Rivest’s Peppercoin – so this presents neither a technical nor a legal barrier to the architecture of the tuner. The larger issue comes in the form of digital rights management (DRM), a hotly-contested battleground fought by copyright holders against the consumers who seek the full exercise of their rights of ownership (as they perceive them) over materials legitimately purchased from those copyright holders. For example, in the US, although it is legal to make a copy of a video DVD, it is not legal to sell software which will do so, because that software defeats the Content Scrambling System used to encrypt DVDs, and thus violates the anti-circumvention clause of the Digital Millennium Copyright Act. Consumers need to make copies of DVDs (because DVDs are far more delicate than videotapes) but run the risk of fines and jail time if they do so.
Copyright holders will not release DRM-free versions of their content onto the Internet, because they have already experienced what pirates will do with unencrypted movie files. While DRM systems can have flexible “policies” which allow for a wide range of possible uses of accessible media files, copyright holders tend toward extreme paranoia with respect to digital technologies. This means, for example, that a song purchased through Apple’s iTunes can only be copied to a few machines, and can only be burned to a limited number of CDs. DRM inevitably sacrifices the flexibility of the digital medium on the altar of commerce. Nonetheless, the Internet tuner must have a complete, flexible and strong DRM capability, with a well-integrated micropayments system. Without these two basic architectural components, the Internet tuner would remain a curiosity, lacking the branded content that people have grown to expect from television. In this case, the unendurable must be endured: an open-source project must collaborate with the archons of copyright to create a system sufficiently secure to attract their offerings. When copyright-encumbered offerings are freely and equally available through the Internet tuner, viewers will be able to draw their own conclusions about the relative value of professional content.
Once the distinction between consumer and distributor has been erased, and both professional and amateur offerings are equally available through the tuner, a thought must be given to equality of access: the tuner should be able to play every conceivable form of audiovisual content available on the Internet. There is a very wide range of audiovisual codecs available, and new ones are created nearly every day. Even if one could incorporate every known codec into the Internet tuner, there is no way that a software engineer of today could anticipate new codecs, or new media delivery formats, such as 3D or Ultra High-Definition video. The Internet tuner must be open to all audiovisual formats, treating them as equals within the broader environment of the Internet.
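One way to sketch that openness is a decoder registry that new formats can join after the tuner has shipped. The design below is an illustration of the idea, not a specification:

```python
# An illustrative decoder registry: codecs are plugins, not hard-wired.
DECODERS = {}

def register_decoder(fmt):
    """Class decorator: make a decoder available for the named format."""
    def wrap(cls):
        DECODERS[fmt] = cls
        return cls
    return wrap

@register_decoder("mpeg2")
class Mpeg2Decoder:
    def decode(self, data: bytes):
        ...  # hand off to the actual MPEG-2 decode path

def decoder_for(fmt):
    """Look up a decoder; an unknown format means 'go fetch a plugin'."""
    if fmt not in DECODERS:
        raise LookupError(f"no decoder installed for {fmt!r}")
    return DECODERS[fmt]()
```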
The only way this can be achieved as a design goal is by making the Internet tuner a FOSS project. With the code to the Internet tuner free and fully exposed to both developers and users, it will be easy (in a relative sense) to adapt the tuner architecture to improvements in encoding, communications, superdistribution methodologies, and so forth. Beyond that, the Internet tuner effort should be prepared to accept object library contributions where appropriate (as is often the case with codecs and DRM architectures), as these are a middle ground where FOSS and proprietary methodologies can safely meet.
With equality between consumers and distributors, between amateurs and professionals, and between developers and users, the Internet tuner stands the best possible chance of fulfilling Tim O’Reilly’s rule that an architecture which plays into selfish interests can produce emergent strengths.
Conclusion: Aux Armes, Citoyens!
It steam-engines when it comes steam-engine time.
– Charles Fort
Somewhere in the middle years of the Web bubble, I heard venture capitalist Ann Winblad of Hummer Winblad Venture Partners lecture about innovation in the Internet era. Among the rules she gave as truisms, one struck home: “If you’re working on a product,” she said, “take it as a given that at least five other teams are hard at work on it, too.” When the idea for the Internet tuner popped into my head – just about 9 weeks ago, after I’d given a lecture in Melbourne about digital television and the death of radio spectrum television broadcasting – I presumed that I’d stumbled onto a completely novel idea. In the interregnum, I’ve discovered how wrong I was. Projects like the BBC Internet Media Player, MythTV on LINUX, Media Portal for Xbox and Windows, VideoLAN Client (VLC) for Mac OS X, Windows and LINUX – the list goes on and on. Just four weeks ago TiVO announced that they’re going to release a software upgrade which will make their PVRs Internet-aware, so that they can locate and download Internet audiovisual content. These ideas are floating around the commercial software community, too, in products like Microsoft IPTV, and SnapStream’s Beyond TV.
Many people are working toward the features of the Internet tuner, but none of them – to my knowledge – have brought these pieces together with an emphasis on the emergent qualities of the tuner as a tool for communication. The rise of broadband “peercasting” and the death of radio spectrum television broadcasting make the Internet tuner a disruptive technology of the first order. Let me be clear: the Internet tuner or something very much like it will do for audiovisual media what the Web did for print – make it immediately accessible from anywhere, at any time, for any reason. Because of the Web, libraries are transforming from repositories of knowledge into centers where people come to be pointed toward online repositories. The library is evolving into a physically constituted Google. Although some libraries view the Web as a new form of competition, the wisest have also learned how to adapt to the wealth of Internet-based information available through them, adding context to content.
The same will be true for Internet-based audiovisual distribution. The world we’re heading into isn’t an either/or, but a concatenation of “and”s. This and this and this and this and this, ad infinitum. In that sense, it doesn’t matter that there are competitors for this proposed-but-as-yet-still-quite-mythical Internet tuner. In fact, it will work to the tuner’s advantage.
Not so very long ago, when a software engineer needed something, they wrote a program or a function to produce the desired result. Object-oriented programming was supposed to introduce the idea of “reusability” to the software engineering process, but it hasn’t worked out that way. That doesn’t mean that programmers “roll their own” every time they need something done. Most often the first thing a programmer will do is Google for the specific function they need to perform, to see if someone else has done it, somewhere else, and made that code available for others to use. While this principle of using others’ work within your own is a particular feature of the UNIX/GNU/LINUX operating environments, it has also thoroughly infected both Microsoft’s and Apple’s offerings.
The truth of the matter is that, in 2004, nearly any task that needs to be performed by a piece of software has already been written into some other piece of freely available software. The smartest software engineers harvest the collective intelligence of the Internet, using that intelligence as a complement to their own creative practice. That’s why it’s a wonderful thing that the Internet tuner has more than a few competitors; each of these competitors (in the FOSS space) can be used to form core components of its architecture.
Let’s break that out into some real-world examples. The media portion of the tuner could be patterned on a combination of VideoLAN Client and RealNetworks’ Helix; the superdistribution architecture could be adapted from giFT, the FOSS peer-to-peer networking library; the interface databases could be written to run atop MySQL; and so forth. That said, not quite everything that the Internet tuner needs is freely available. In some cases, the project’s software engineers – distributed geographically and collaborating through the Internet – will “roll their own,” adding that work to the ever-increasing list of freely available software designs.
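The composition argument can be restated as a manifest wiring existing FOSS components to roles, flagging the gaps the project's own engineers would fill. The mapping below merely restates the examples in the text; it is not a real build plan:

```python
# The tuner as a manifest of existing components; illustrative only.
TUNER_COMPONENTS = {
    "playback":          "VideoLAN Client / Helix",  # media decode and display
    "superdistribution": "giFT",                     # peer-to-peer transport
    "preference_db":     "MySQL",                    # interface databases
    "drm":               None,  # no obvious FOSS candidate: roll our own,
                                # or accept a proprietary object library
}

# Roles with no component are where the project "rolls its own".
to_build = [role for role, comp in TUNER_COMPONENTS.items() if comp is None]
```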
If this sounds a lot like the process that gave us LINUX and Apache, that’s no accident. The collective intelligence of open source software development has already become one of the greatest engines of creation in the 21st century. Even so, LINUX and Apache are invisible to the average computer user. The Internet tuner, on the other hand, is the very visible interface to a much larger set of services and audiovisual content distributed throughout the Internet. With so much wealth – in code and in content – just lying around, waiting to be harnessed by “insanely cool” projects, it is almost as though the Internet tuner is an idea that has been coolly biding its time, waiting until the critical moment when it passes from pleasant fantasy into inevitability.
It is my belief that the day of the Internet tuner has come, that the dam which has thus far held back the overwhelming torrent of audiovisual content is about to burst. Those who have built their houses on the sand of anti-markets will be swept away in the flood. Some will be caught unawares, while others will stubbornly try to stick their finger in the dike, hoping that time reverses, the genie re-enters the bottle, that the Internet and personal computers are somehow un-invented. That doesn’t seem likely. Indeed, the Internet tuner seems thoroughly in tune with the times – and the great advantage of good timing is that obstacles are removed by historic processes rather than individual efforts.
So we find ourselves, on this revolutionary day, in a unique historical space. We could be the peasants, storming the Bastille of media. We should be.
Aux Armes, Citoyens!
Mark Pesce
12 Ix – 13 Men
10 – 11 July 2004
Sydney
Open Source Television – © 2004, Mark D. Pesce. Rights for reuse granted under the Creative Commons Attribution License.
Mark Pesce, author of The Playful World: How Technology Is Transforming Our Imagination (Ballantine Books, New York, 2000), has given a paradigm-breaking speech to Australia's Smart Internet CRC on Open Source Television. © 2004, Mark D. Pesce. Rights for reuse granted the Creative Commons Attribution License.
Mark Pesce
Lecturer, Digital Media Programme
Australian Film Television and Radio School
Playful World
14 July 2004
Preamble
The worldwide consolidation of media industries has led to a consequent closure of the public airwaves with respect to matters of public interest. As control of this public resource becomes more centralized, the messages transmitted by global media purveyors become progressively less relevant, less diverse, and less reflective of ground truth.
At present, individuals and organizations work to break the stranglehold of these anti-market-media-mega-corporations through the application of the courts and the law. However, because of the inherent monopoly that anti-market media maintain on the public mindset, legislators have been understandably reluctant to make moves toward media diversification. We are thus confronted with a situation where many people have interesting things to say, but there are progressively fewer outlets where these views can be shared.
The public airwaves, because they are a limited resource, are managed by public bodies for the public interest. While honorable, the net effect of this philosophy of resource management has been negative: a public resource has become the equivalent of a beachfront property, its sale generating enormous license revenues, but its transfer to the private domain denying the community access to the sea of ideas.
If a well-informed public is the necessary prerequisite to the democratic process, then we must frankly admit that any private ownership of public airwaves represents a potential threat to the free exchange of ideas. Now that private property has mostly collectivized the electromagnetic spectrum, and with little hope that this will soon change, we must look elsewhere to find a common ground for the public discourse.
We are fortunate that such ground already exists on the Internet.
Introduction: Computational Communication and Social Emergence
The architecture of Linux, the Internet, and the World Wide Web are such that users pursuing their own "selfish" interests build collective value as an automatic byproduct. In other words, these technologies demonstrate some of the same network effect as eBay and Napster, simply through the way that they have been designed.
– Tim O'Reilly, "The Open Source Paradigm Shift"
First and foremost, the Internet is a communications medium. We tend to think of it as a medium for communication between computers, but this is mistaking the forest for banks of trees. The computers communicate only to service the needs of human communication. Flexible communication strategies are the one identifiable trait that separates us from the great apes, the cetaceans, and dogs. This flexible communication capability is the essence of what makes us social beings. Organic defects in communication, such as aphasia or autism, produce an organic sympathy in us, because we understand, from birth, that no man is an island so long as he remains in communication with his fellow men.
Our philosophers believe that language defines the scope of consciousness. Wittgenstein believed that all philosophy was a result of the imperfections of language; Orwell created Newspeak to portray the prison of a language which extinguished all political thought; FOX NEWS renames a “war of choice” as a “war of liberation,” and – voila! – an entirely new set of associations are brought to mind. By definition, communication is a social act, and because it is a social act, it is also a political act.
Thus, despite all of the efforts of anti-market forces to constrain the Internet into a particular or narrow category of possibilities – of politically correct speech, if you will – the Internet is inherently a social medium, and thus, by extension, a political medium. The Chinese know this; they maintain gateways through which the entire nation’s TCP/IP traffic pours, the better to be examined, weighed, and judged. You can be arrested in China for accessing the wrong website – as you can in Australia, the United States, or the UK, if you’re using the Internet to satisfy a pedophilic fancy.
Although academics in the sciences prefer to think of their own research as value-neutral, the reality of the situation is always more complex; the process of science is itself overlaid with various and unseemly rivalries, competitions and intrigues; the products of the scientific process are highly coveted by governmental and commercial entities. The political and economic fate of nations now depends upon scientific endeavor. He who has the most toys wins.
Nowhere is that more true than in Internet research. What began as an investigation into fault-tolerant networking (the better to survive nuclear war) became a new form of human communication, the like of which has not been seen since Gutenberg. For those of us, including myself, who lived through the hype and burst-bubble of the last decade, such statements might provoke a studied cynicism: nothing is that revolutionary, nothing quite so disruptive to the status quo. But if that were really true, would “Google” have gone from brand name to verb in just 2 years? The utopian promise of the Internet – that all information would be freely available to everyone all the time – is coming to pass. And, although anti-market forces have attempted to constrain the development of Internet into purely commercial directions (so as to reinforce their anti-market hegemony) this strategy has been unsuccessful. For every “walled garden” or “Chinese wall” erected to block the free flow of information, an alternative has spontaneously arisen, lacking the impediments of its progenitor. In this sense, the Internet is endlessly protean and autopoeic.
The best example of this phenomenon is the story of Encyclopedia Britannica, which launched itself onto the Web in the middle of 1999. In a classic example of underestimating your market, Britannica’s servers crashed after about 72 hours, because there was so much demand for their nearly endless supply of high-quality articles. Once restored, it became one of the shining lights of the Internet – an continuously available supply of accurate information on almost any subject you might think up. Demand was strong, and only grew stronger. The strength of that demand caused problems for Britannica, because it required an endlessly-increasing supply of bandwidth and servers to handle the growing set of users. The business minds at Britannica had not developed a business strategy to cope with the incredible demand for their services; the world’s most popular information repository began to lose money. It seems incredible, but there it is: popularity killed Britannica.
What do I mean? In order to cover their operating expenses, Britannica adopted a “walled garden” strategy. It became a subscription-based service; you could pay a fee of US $5 a month to have access to Britannica, or you could satisfy yourself with an abstract from its database of articles. There were a number of Britannica’s users who were less than satisfied with this new arrangement, and decided to set the matter aright, by creating their own encyclopedia, an open-source work, distributed and freely available – either to read articles within it, or to add articles to it. Using a simple but powerful web technology known as Wiki, the “Wikipedia” has grown from its inception in 2001 to span more than a quarter of a million articles on nearly every topic imaginable. Although many of its articles lack the polish of similar entries within Britannica, its open and collaborative nature has produced an ever-more-accurate database of content, one which will no doubt soon surpass Britannica in its scope, relevance and value.
This is but one of a growing number of examples of O’Reilly’s Law of the relationship between software architectures and social engineering. There are many others, including the World Wide Web, LINUX, Gnutella, and Apache. In fact, we have enough examples of this relationship that we can choose to design for it. We can leverage the emergent social behaviors produced by new Internet-enabled forms of communication in our software designs. These principles of emergent communication must be taught longside the basics of packet windows, protocol stacks and error-correcting codes. If we honestly expect to educate engineers for who will work across the bulk of the 21st century, we need to give them the complete specifications of the operating environment within which they work, an environment that includes both the communications hardware of computing devices and the social software of human beings. Any research which explores one but ignores the other not only gives the student a woefully inadequate articulation of the realities of the comprehensive operating environment, but deprives these students (and their instructors) of an opportunity to think comprehensively about the true potential of the combination of computational communication and social emergence.
I would not entertain the idea of issuing such a pedagogical challenge to a roomful of researchers without an example to back it up; this discussion should not be bootless. We need something to sink our teeth into, a problem to solve. (Knowing the social dynamic of research communities, I know what gets you out of bed in the morning.) The rest of this paper describes my own attempt to put this principle into practice. At this point it is little more than a proposal, but it is my own attempt to formalize the preceding thesis with practice.
Part One: Liberty
They might be better off I think,
The way it seems to me
Making up their own shows
Which might be better than TV.
– Talking Heads, “Found a Job”
June 2004 was a watershed month on the Internet. For the first time, the volume of video traffic surpassed the volume of audio traffic. In a practical sense this means that that vast and mostly invisible file-sharing networks are now being used to distribute television programmes and motion pictures. I have used these networks myself in order to find television programming which is not broadcast in Australia, but which I am thoroughly addicted to in the USA. This is arguably a form of copyright violation, as I haven’t the permission of the copyright holder to display this programming, even for my own private use. But since this programming isn’t available at all in this country, either on a free-to-air or cable TV network, I am forced to satisfy my habit by lurking in the Internet’s shadier districts, where, if I am not careful, I could pick up a viral infection, or worse.
There’s a lot of television programming produced every year in the major English-speaking nations: the US, UK, Canada, Australia and New Zealand. There are many hundreds of thousands to millions of hours of programming available in back catalogs. There’s such a wealth that there is no way that the five free-to-air broadcasters could hope to deliver even the smallest part of it to Australian audiences. Even with a few hundred FOXTEL DIGITAL cable channels, it’s still beyond all bounds. There’s just too much out there. Its like trying to read through the US Library of Congress; more books are issued every day than you can ever read, so you’re constantly be falling further and further behind.
Short of having an infinite number of television channels available to the viewer, there is no way to deliver the existing catalog of English-language audiovisual content to a viewer. And so far we’ve only considered professional television productions, excluding the ever-increasing selection of “amateur” content: short films, documentaries and home movies that must be considered as equal in importance to professional works, even if their audience is limited to a handful of viewers. And when you expand your scope to non-English language content (as we should, because the world is primarily Mandarin-speaking, not English-speaking) you realize that the idea of the television channel has become a restrictive anachronism.
The television channel was created as a way to regulate the limited resource of radio frequency spectrum. Before the advent of spread-spectrum communications, radio spectrum was seen as a zero-sum proposition: giving to one would mean taking away from someone else. Thus all available FM radio frequencies in Sydney and Melbourne are occupied by broadcasters. However, there are only five VHF television broadcasters in Australia, while the spectrum could easily accommodate seven, plus at least twenty UHF broadcasters. Why so few? Again, a scarcity argument is used, only this time the scarce resource is advertiser dollars: if there were, say, a fourth commercial free-to-air television network, the billions of dollars in advertising revenue collected by the three commercial networks would have to be divided among four players, depriving them of the revenue they need to satisfy the “local content” requirements mandated by the Federal government. In other words, they need to keep the monopoly on television channels so they can continue to churn out an embarrassingly poor string of Australian-produced television dramas.
But we know this is all so much lies, damned lies and public relations: the real reason they want to keep the spectrum under their tight control is so they can continue to act in collusion as anti-market forces. The free-to-air commercial broadcasters pay a pretty penny for protection, donating liberally (so to speak) to the political parties, thereby ensuring their continued stranglehold on commercial television broadcasting. The same situation exists in the United States, with the same results: broadcast television is a filter and block for the television audience, dictating what should and should not be seen. The free speech implications of this situation are obvious, and emerge from the social engineering of so-called “mass media”.
That’s enough discussion of things as they were. All of this has changed, because of the advent of digital television broadcasting. DTV programmes are delivered as an MPEG2-format data stream: small packets of audiovisual data encoded, transmitted, and reassembled at the television set into a high-resolution image. Because this data is entirely digital, the stream can be recorded to digital media and endlessly reproduced with no loss in quality. As it is in MPEG2 format, it can easily be burned onto a DVD, or transcoded into another, more compact format, such as DivX or Windows Media 9, burned to a Video CD, posted on a website, disseminated through a peer-to-peer file sharing network, published via BitTorrent, sent as an email attachment, etc. Bits are bits are bits, and it matters not whether they’re the latest Britney Spears song, an episode of The Sopranos, or Spider-Man 2.
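To make that concrete: the transcoding step described above is, today, a single command. Here is a minimal sketch, assuming the ffmpeg command-line tool is installed; the filenames are hypothetical placeholders, and H.264 stands in for the DivX or Windows Media transcodes mentioned above.

```python
# Minimal sketch of the transcoding step described above. Assumes the
# ffmpeg command-line tool is installed; filenames are hypothetical.
import subprocess

def transcode_recording(mpeg2_file: str, output_file: str) -> None:
    """Re-encode a captured MPEG-2 broadcast stream into a more compact
    H.264/MP4 file, suitable for posting or peer-to-peer distribution."""
    subprocess.run(
        ["ffmpeg",
         "-i", mpeg2_file,    # the recorded DTV transport stream
         "-c:v", "libx264",   # re-encode the video as H.264
         "-c:a", "aac",       # re-encode the audio as AAC
         output_file],
        check=True,           # raise if ffmpeg reports an error
    )

transcode_recording("captured_broadcast.ts", "programme.mp4")
```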
Now, while I can’t advocate wholesale copyright violation – and we’ll come to that a bit further along – I do believe that it is appropriate to examine the politics of scarcity with respect to television broadcasting, and engineer a solution which effectively routes around the problem (to steal a phrase from John Gilmore), recapitulating the Britannica to Wikipedia process. As media consumers, we need to liberate ourselves from the anti-market forces of the free-to-air commercial networks, and, as creators and purveyors of audiovisual content, we need to free ourselves from the anti-market forces of commercial networks as programme distributors. In other words, we need to develop a comprehensive computational and emergent strategy to disintermediate the distributors of audiovisual media, directly connecting producers to consumers, and further, erasing the hard definition between producer and consumer, so that a producer’s product will only be identifiable by its inherent quality, in the eyes of the viewer, and not by the imprimatur of the distributor.
The idea of audiovisual media delivered over the Internet, or IPTV, is hardly new. Apple’s QuickTime was released in 1991. Since 1995 you have been able to use the RealPlayer to deliver streaming video on the computer desktop. In the last decade the efficiency of coding algorithms has improved tremendously – the best of these is arguably Microsoft’s Windows Media 9 Series (which proves that while money can’t buy everything, it can buy some very clever mathematicians) – so VHS-quality video can be delivered to the desktop in a 240 Kbps stream, and DVD-quality in just 1 Mbps.
Technologically, the pieces are in place for a radical reconfiguration of the technology of programme delivery to the TV viewer. Digital television, thought to be the endpoint of this revolution, was actually only its beginning, and while digital televisions are very useful as display monitors, their broadcast tuners with their sophisticated analog electronics will be completely obsolete once broadband supplants broadcast as the delivery medium. The digital TV is a great output device, but a lousy tuner, because the design of the device reinforces the psychology of spectrum scarcity.
What we need, therefore, is a new device, which sits between the Internet, on one hand, and the digital television set, on the other, and acts as a new kind of tuner, thereby enabling a new, disintermediated distribution mechanism. The basic specification for this device is quite simple: it would be capable of locating, downloading and displaying audiovisual content, in any common format, on the viewer’s chosen display device. That display device doesn’t even need to be a digital television – it could be a PC. Or the soon-to-be-released PSP, the PlayStation Portable. Or a 3G cell phone. This intermediary device – the “Internet tuner,” if you will – could be a hardware-based set-top box, or a piece of software running on a more general-purpose computing device – it doesn’t really matter. But, as we know from Tim O’Reilly, the software architecture of this tuner is key to producing an emergent social effect.
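To make that specification tangible, consider a toy sketch of the tuner’s core loop – locate, download, display – written here in Python. Everything in it is an illustrative stand-in: the discovery index, the example URL and the display back-end are assumptions, not a real design.

```python
# Toy sketch of the tuner's basic specification: locate, download, display.
# All three layers are hypothetical stand-ins for a real architecture.
from dataclasses import dataclass
from urllib.request import urlopen

@dataclass
class Programme:
    title: str
    url: str
    fmt: str  # e.g. "mpeg2", "divx", "wm9"

def locate(query: str) -> list[Programme]:
    """Stand-in for the discovery layer (search index, peer network, etc.)."""
    return [Programme("Example Show", "http://example.org/show.avi", "divx")]

def download(programme: Programme) -> bytes:
    """Stand-in for the transport layer (plain HTTP here; P2P in a real tuner)."""
    with urlopen(programme.url) as response:
        return response.read()

def display(data: bytes, fmt: str) -> None:
    """Stand-in for the display back-end (digital TV, PC, PSP, 3G phone)."""
    print(f"displaying {len(data)} bytes of {fmt} video")

def tune(query: str) -> None:
    for programme in locate(query):
        display(download(programme), programme.fmt)
```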
Part Two: Facility
“Kids come in to tour the studios expecting to see something amazing. They’re disappointed to see a whole bunch of folks hunched over computer monitors, making funny faces. I really want to get a giant red button in my office and tell the kids, ‘This is the button we press to make Gollum. Just press it and you’ll make some art.’ They’d love it.”
– Bay Raitt, chief character animator of Weta Studios, and creator of Gollum
The beauty of television is its elegant simplicity. There are just two options which a television viewer manipulates in the course of normal viewing: volume and channel. Until about fifteen years ago, these were the only controls exposed on a television set. When the remote control arrived, all of that changed. Remote controls are crowded with buttons for all sorts of features that most people rarely use, yet those buttons must live on the remote, because the television set has sacrificed physical controls on the unit itself in favor of virtual interfaces accessed through the remote.
Few people own a television in isolation. Increasingly, television is the video display device for a home theatre, and is connected to a VCR, a DVD player, a digital video recorder (DVR), a cable TV set top box, a video game console, an audio amplifier, and so forth. Each of these devices has its own controls and therefore its own remote control. It is not uncommon to have five or six separate remote controls in a home theatre system. This creates a culture of expertise – not unlike the early days of home computing – where one family member is the “master” of the various remotes, while other family members use the default system settings, which are most often adequate for their immediate needs. This situation is so common – and so vexing – that there is now a thriving market in thousand-dollar remote controls, complete with GUIs, which attempt to simplify complex home theatre configuration tasks. Although the claim can still be made that television remains a very accessible and easy-to-use medium, the truth of the situation is actually somewhat different. And as home theatre devices proliferate, the interface problem will only grow worse.
I mention this because the most formidable task in the creation of an Internet-based television tuner is the design of an interface which is accessible to all TV viewers, regardless of their level of expertise. A television viewer using the Internet tuner must be completely unaware of the sophisticated processes required to acquire and play Internet-delivered audiovisual content. Any interface more complex than the FOXTEL DIGITAL Electronic Programme Guide – which is actually quite usable – will present an insurmountable barrier to the use of the tuner. The tuner must be able to configure itself automatically, without any worries about firewalls or IP address configuration or DNS hosts, etc. All of the things that an Internet expert does without thinking must be completely invisible to the user of the tuner. In that sense, one design requirement of the tuner interface is to keep the details of its operation hidden from the user.
Interfaces are too frequently the most overlooked component of consumer electronics devices, yet they are absolutely the most vital element of their design, for the interface is where the user meets the actual capabilities of the device. Interface is communication, interface is a language, and the user’s fluency in that language determines their level of satisfaction as they interact with the device. Interactions do not occur in isolation. Every time a user interacts with a device, he brings with him the memory of all prior interactions with that device. Some of the best interfaces maintain a memory of their use, and employ persistent data to reconfigure their behavior. Interaction must be seen as a continuum; it is not a single event but an evolving relationship which, if properly constructed, evolves the interface as the user grows more familiar with it. The Internet tuner should have an interface which concisely represents the enormous sea of audiovisual programming available to the viewer, without drowning the viewer in choices. Hence, the interface serves a role similar to a free-to-air broadcaster, distilling the overwhelming set of possibilities to a manageable few.
How does the tuner interface perform this magical act of reduction? To borrow from the example of the TiVO PVR, it must understand likes and dislikes. Programmes can be grouped by genre, and, for this reason, TiVO can make recommendations on the order of, “If you liked that programme, you’ll probably like this one as well.” If the Internet tuner keeps an exhaustive record of the viewer’s interactions with audiovisual programming (including which programmes were abandoned midway through) it will be able to make recommendations drawn from the viewer’s choices, using these as a basis to scour the immensity of the Internet to find those few thousand programmes which might most interest the viewer. If this part of the interface works effectively – which is to say, invisibly – the biggest problem won’t be that the user will be drowned in choices, but rather that these recommendations will be too constrained by all the choices the viewer has already made. Any predictive capability can too easily spiral into a self-reinforcing feedback loop of diminishing choice which only appears to be effective because there is such a wealth of choice that any subset of it seems prodigiously rich.
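As a sketch of how that record-keeping might drive recommendations, consider the following fragment. It is deliberately naive: genres are scored from the viewing record, with abandoned programmes counting against a genre, and unseen programmes ranked accordingly. The data layout and weights are invented for illustration, not TiVO’s actual method.

```python
# Naive sketch of recommendation from a viewing record: score genres,
# penalise abandonment, rank unseen programmes. Illustrative only.
from collections import defaultdict

def genre_scores(history: list[tuple[str, bool]]) -> dict[str, float]:
    """history holds (genre, finished) pairs from the viewing record."""
    scores: dict[str, float] = defaultdict(float)
    for genre, finished in history:
        scores[genre] += 1.0 if finished else -0.5  # abandonment counts against
    return scores

def recommend(catalogue: list[tuple[str, str]],
              history: list[tuple[str, bool]],
              top_n: int = 5) -> list[tuple[str, str]]:
    """catalogue holds (title, genre) pairs scoured from the wider Internet."""
    scores = genre_scores(history)
    ranked = sorted(catalogue, key=lambda p: scores.get(p[1], 0.0), reverse=True)
    return ranked[:top_n]

history = [("drama", True), ("drama", True), ("reality", False)]
catalogue = [("The Wire", "drama"), ("Big Brother", "reality")]
print(recommend(catalogue, history))  # drama outranks the abandoned genre
```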
Thus, the interface which seeks to shield the user must also provide an opening, a craft to sail the crowded seas of Internet audiovisual content. An interface to unbridled complexity is more difficult to create than one which filters it away, but it is a design necessity, lest the Internet tuner simply reinforce the scarcity model of free-to-air broadcasting. How can we approach this? By applying the principles of social networking.
As stated in my thesis, all networking is inherently social. Yet, over the last year we’ve seen the birth of explicitly social networking, with systems such as Friendster, LinkedIn and the ever-more-popular Orkut, Google’s contribution to the field. These systems rely upon the “six degrees of separation” principle to link you to your friends, their friends, their friends’ friends, and so on, so that your place within an ever-widening social network is thoroughly delineated. Recently I signed up for Orkut, and spent a few hours creating my social network, mostly by visiting my friends’ lists of friends and adding their friends – who are also my friends – to my list of friends. After I had completed that task, I created an Orkut “Community” – basically a bulletin board for people of aligned interests. I named my community “The List”, and described it in the following words:
Please share with the list the best book, movie, or track you've come across: this week, this year, this lifetime. Share the best. Enlighten your friends. Long explanations are unnecessary, and beside the point. Just share your list. Share early. Share often. Peace.
My goal was to create a way that I could learn from my friends – people whose opinions I might be inclined to trust – what I should be reading, viewing, playing, etc. I wanted to create an explicit place for one of the functions of real-world social networks: the “word of mouth” which spreads good ideas and filters out the bad. “Quality will out,” my friend Susan Mainzer is fond of saying, and discussions of quality experiences are a main preoccupation of social networks. How do I determine the difference between hype and reality? I listen to my friends, consider their opinions, then make decisions, drawing in part from their experiences.
For this reason, the Internet tuner should fully but invisibly incorporate the features of social networking. The tuner knows what your friends have been watching, and it knows how much you trust (or ignore) your friends’ opinions. This is sufficient for the tuner to generate a wider selection of viewing options, ones which you might not have considered on your own. This procedure is similar in nature to the collaborative filtering methodology of Pattie Maes’s Firefly system, which is now incorporated into Amazon.com and many other online shopping systems, but instead of measuring your tastes against a faceless group of people with similar tastes, it adds the discord and chaos of friends who almost certainly do not share your tastes, yet with whom you share strong affiliations. Collaborative filtering à la Firefly should also be included in the tuner interface, because it allows the tuner to find not-exactly-alike-but-closely-related selections.
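A minimal sketch of that trust-weighting follows, under stated assumptions: the friends, the trust values and the data layout are all invented for illustration. Each friend’s viewing counts in proportion to how much you trust their taste, so programmes favoured by trusted friends surface even when they fall outside your own established habits.

```python
# Sketch of trust-weighted recommendations from a social network.
def friend_recommendations(friend_viewing: dict[str, list[str]],
                           trust: dict[str, float],
                           already_seen: set[str]) -> list[str]:
    scores: dict[str, float] = {}
    for friend, titles in friend_viewing.items():
        weight = trust.get(friend, 0.0)  # 0.0 = ignored, 1.0 = fully trusted
        for title in titles:
            if title not in already_seen:
                scores[title] = scores.get(title, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)

viewing = {"susan": ["Quality Doc"], "alex": ["Quality Doc", "Cat Videos"]}
trust = {"susan": 0.9, "alex": 0.3}
print(friend_recommendations(viewing, trust, already_seen=set()))
```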
Finally, for those who want to surf the full richness of the entire sea of possibilities, it should be possible to step behind the curtain, abandoning the idea of interface-as-filter, and give the user an unadulterated (but easily manipulated) view into the chaos of the whole. People should be able to get anything they want, might want – even what they don’t want – just by looking for it. This interface might look something like Google – very spare in interface elements, but capable of overloading the inquisitive surfer with a wealth of possibilities.
The physical interface to the tuner’s logical interface should require neither a keyboard nor a mouse; it should be operated by a remote control with just a few buttons for scrolling and selection, as found on a DVD remote control. The constraints of a simple physical interface will keep interface designers from adding unnecessary sophistication to the tuner.
When all of these architectural elements are combined behind a deceptively simple interface, the Internet tuner becomes a true portal and a companion as one surfs through the ever-growing audiovisual content of the Internet. Because the tuner preferences are ultimately reducible to a set of data, they can easily be transferred between instances of the Internet tuner. Hence, wherever you go, your particular version of television travels with you. Indeed, you’ll be disinclined to access television through someone else’s tuner configuration; it will feel as though you’re trawling through an alien environment, without any of the familiarity generated by your own interactions with the tuner. In its ultimate aspect, the Internet tuner will behave much like Google’s “I’m Feeling Lucky” button – just turn the tuner on, and you’ll be watching what you want to watch, when you want to watch it, wherever you want to watch it.
A device like that wouldn’t be harder to use than television. It’d be easier.
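Because the tuner’s preferences reduce to data, the portability claimed above is almost trivial to realize; here is a minimal sketch, with a wholly invented schema standing in for whatever a real tuner would store.

```python
# Sketch of preference portability: serialise the tuner's state so it can
# travel with the viewer between tuner instances. The schema is invented.
import json

def export_preferences(prefs: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(prefs, f, indent=2)

def import_preferences(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

prefs = {
    "genres": {"drama": 2.0, "reality": -0.5},   # learned genre scores
    "trust": {"susan": 0.9, "alex": 0.3},        # social-network weights
    "history": [],                                # viewing record
}
export_preferences(prefs, "my_tuner.json")
assert import_preferences("my_tuner.json") == prefs
```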
Part Three: Equality
Technology that disrupts copyright does so because it simplifies and cheapens creation, reproduction and distribution. The existing copyright businesses exploit inefficiencies in the old production, reproduction and distribution system, and they'll be weakened by the new technology. But new technology always gives us more art with a wider reach: that's what tech is for.
Tech gives us bigger pies that more artists can get a bite out of. That's been tacitly acknowledged at every stage of the copyfight since the piano roll. When copyright and technology collide, it's copyright that changes.
– Cory Doctorow, Microsoft Research DRM Talk, 17 June 2004
Any attempt to replace the broadcast model of audiovisual programme distribution has to compete against the enormous efficiencies offered by radio spectrum broadcasting. A truism in broadcasting is that it costs no more money to add more viewers. The inverse is true with broadband; every new viewer requires more bandwidth, more server space, more electricity, more physical infrastructure. That fact turned out to be the Achilles’ heel of Encyclopedia Britannica. Yet it need not have been so.
This month the BBC begins testing their “Flexible TV” system, which is something like the Internet tuner I’ve described here, except that their software package is designed to receive only the previous and forthcoming weeks of BBC programming. It is a closed system in that respect, and, even when it goes into wider distribution, will only be available for UK residents. The BBC faces the same problem with broadband distribution as was faced by Britannica, but their solution is novel: they will be using peer-to-peer file sharing techniques to superdistribute content throughout the growing legion of viewers. Each programme distributed through the system is segmented into small pieces, and each computer running the BBC Internet Media Player (and Dirac, an open-source codec designed by the BBC) freely shares these programme segments with its peers on the Internet. Thus you don’t have to rely on the BBC’s servers being available, or even reliable – as content is superdistributed into the file sharing network, it becomes increasingly easy to find and retrieve. The more Internet Media Players in use on the Internet, the faster the response of the network as a whole. Under this methodology, every new viewer yields more total bandwidth, a virtuous cycle which gives broadband distribution its own economy of scale.
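The segmentation idea is easy to sketch. Below, a programme’s byte stream is cut into small pieces, each identified by its hash, so any peer holding a verified piece can serve it; the piece size and manifest format are illustrative assumptions, not the BBC’s actual wire protocol.

```python
# Toy sketch of superdistribution's segmentation step: cut a programme
# into hashed pieces that any peer can serve and any receiver can verify.
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KB pieces; the real size is an assumption

def segment(data: bytes) -> list[tuple[str, bytes]]:
    """Return (sha1_digest, piece) pairs for a programme's byte stream."""
    return [
        (hashlib.sha1(piece).hexdigest(), piece)
        for piece in (data[i:i + PIECE_SIZE]
                      for i in range(0, len(data), PIECE_SIZE))
    ]

def reassemble(pieces: dict[str, bytes], manifest: list[str]) -> bytes:
    """Rebuild the programme from pieces fetched from any peer, verifying
    each against the digests listed in the manifest."""
    out = []
    for digest in manifest:
        piece = pieces[digest]
        assert hashlib.sha1(piece).hexdigest() == digest, "corrupt piece"
        out.append(piece)
    return b"".join(out)

digests_and_pieces = segment(b"x" * 600_000)  # roughly three pieces
manifest = [d for d, _ in digests_and_pieces]
assert reassemble(dict(digests_and_pieces), manifest) == b"x" * 600_000
```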
(A small aside: when I came up with the idea to use superdistribution within the Internet tuner to defeat network congestion and failures, I assumed it was a unique innovation. As it turns out, I’m just thinking with the crowd on this one. Peer-to-peer superdistribution is the future of distribution.)
Thus, if broadband distribution is to compete with broadcast distribution, every consumer of content must also become a broadcaster of content. This does not mean that every consumer must necessarily become a producer of content; but there’s such a wealth of home movies, dance tracks created in Acid or GarageBand, Flash movies, etc., that it is very likely that every consumer will produce and publish at least some of their own content, much as many people today have their own web sites. This equality between producer and consumer is one of the key concepts designed into the architecture of the Internet tuner.
Although the Internet tuner will open its doors to a wide class of “amateur” content, most viewers will be interested in watching “professional” content, that is, content produced by professional production crews. This could be the latest TV series from FOX or HBO, an archival programme, a feature motion picture, and so forth. The Internet tuner makes no distinction between amateur and professional content; indeed amateur creations are sometimes of higher quality than professional productions, making up for modest production values with better storytelling. The difference between amateur and professional content has nothing to do with the programming itself, but rather with the copyright restrictions placed upon it. Professional content must be paid for – whether through television commercials, a theatre ticket, the purchase of hard media, or a service subscription. That’s the only recognizable point of differentiation. Until the advent of the Internet tuner, professional content also held a monopoly over the distribution channel, but as the tuner replaces the distribution channel with its own peer-to-peer form of superdistribution, this monopoly will be disintermediated out of existence. Nonetheless, viewers will want to watch professional content, and content producers want to sell their programming to viewers – by any means necessary.
Economics can be thought of as an emergent quality of a sufficiently complexified society; any society which grows beyond a tribal stage necessarily establishes a system of value exchange. This means that the Internet tuner, as something which links computational communication and social emergence, invariably has an economic aspect. Because the tuner will deal with works protected by copyright, its economic capabilities must be explicit, in the manner of Apple’s iTunes, rather than implicit, as is the case with a Web browser.
Although it is obvious to me that the Internet tuner itself must be FOSS (free and open source software – more on this further along), it must have the capability to deal with transactions – micropayments – in order to open it to the world of professional content. If the Internet tuner is simply a tool for piracy – standing in as the audiovisual equivalent of Napster – it will be fought against by the anti-market forces which control the creation and distribution of materials protected by copyright. If, on the other hand, the Internet tuner provides an attractive platform for the distribution of for-pay audiovisual media, copyright holders will not fight against it – even if they do not at first wholeheartedly embrace it.
There are numerous systems available to handle micropayments – including PayPal and Ron Rivest’s Peppercoin – so this presents neither a technical nor a legal barrier to the architecture of the tuner. The larger issue comes in the form of digital rights management (DRM), a hotly contested battleground on which copyright holders fight consumers who seek the full exercise of their rights of ownership (as they perceive them) over materials legitimately purchased from those copyright holders. For example, in the US, although it is legal to make a copy of a video DVD, it is not legal to sell software which will do so, because that software defeats the Content Scrambling System used to encrypt DVDs, and thus violates the anti-circumvention clause of the Digital Millennium Copyright Act. Consumers need to make copies of DVDs (because DVDs are far more delicate than videotapes) but run the risk of fines and jail time if they do so.
Copyright holders will not release DRM-free versions of their content onto the Internet, because they have already experienced what pirates will do with unencrypted movie files. While DRM systems can have flexible “policies” which allow for a wide range of possible uses of accessible media files, copyright holders tend toward extreme paranoia with respect to digital technologies. This means, for example, that a song purchased through Apple’s iTunes can only be copied to a few machines, and can only be burned to a limited number of CDs. DRM inevitably sacrifices the flexibility of the digital medium on the altar of commerce. Nonetheless, the Internet tuner must have a complete, flexible and strong DRM capability, with a well-integrated micropayments system. Without these two basic architectural components, the Internet tuner would remain a curiosity, lacking the branded content that people have grown to expect from television. In this case, the unendurable must be endured: an open-source project must collaborate with the archons of copyright to create a system sufficiently secure to attract their offerings. When copyright-encumbered offerings are freely and equally available through the Internet tuner, viewers will be able to draw their own conclusions about the relative value of professional content.
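What a “flexible policy” might look like in code, reduced to its barest outline: a rights object attached to a purchase, consulted before each use. The limits below mirror the iTunes-style restrictions mentioned above; the fields and API are purely illustrative, and a real DRM scheme would also involve encryption and device attestation.

```python
# Bare-bones sketch of a flexible DRM usage policy. Purely illustrative:
# real DRM also encrypts the content and attests the playback device.
from dataclasses import dataclass

@dataclass
class UsagePolicy:
    max_devices: int = 5   # iTunes-style machine-authorisation limit
    max_burns: int = 7     # iTunes-style CD-burn limit
    devices_used: int = 0
    burns_used: int = 0

    def authorise_device(self) -> bool:
        """Permit playback on one more device, if the policy allows it."""
        if self.devices_used < self.max_devices:
            self.devices_used += 1
            return True
        return False

    def authorise_burn(self) -> bool:
        """Permit one more CD burn, if the policy allows it."""
        if self.burns_used < self.max_burns:
            self.burns_used += 1
            return True
        return False

policy = UsagePolicy()
assert policy.authorise_device()  # first device: allowed
```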
Once the distinction between consumer and distributor has been erased, and both professional and amateur offerings are equally available through the tuner, a thought must be given to equality of access: the tuner should be able to play every conceivable form of audiovisual content available on the Internet. There is a very wide range of audiovisual CODECs available, and new ones are created nearly every day. Even if one could incorporate every known codec into the Internet tuner, there is no way that a software engineer of today could anticipate new CODECs, or new media delivery formats, such as 3D or Ultra High-Definition video. The Internet tuner must be open to all audiovisual formats, treating them as equals within the broader environment of the Internet.
The only way this can be achieved as a design goal is by making the Internet tuner a FOSS project. With the code to the Internet tuner free and fully exposed to both developers and users, it will be easy (in a relative sense) to adapt the tuner architecture to improvements in encoding, communications, superdistribution methodologies, and so forth. Beyond that, the Internet tuner effort should be prepared to accept object library contributions where appropriate (as is often the case with CODECs and DRM architectures), as these are a middle ground where FOSS and proprietary methodologies can safely meet.
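One plausible shape for that openness is a plug-in registry: decoders register the formats they handle, so supporting a new CODEC is a contribution, not a rebuild. A sketch follows, with an invented registry API standing in for whatever the real project would adopt.

```python
# Sketch of an open codec architecture: decoders register the formats
# they handle, so a new CODEC plugs in without rebuilding the tuner.
# The registry API is an invented illustration.
CODECS: dict[str, type] = {}

def register_codec(*formats: str):
    """Class decorator: register a decoder for the formats it handles."""
    def wrap(cls: type) -> type:
        for fmt in formats:
            CODECS[fmt] = cls
        return cls
    return wrap

@register_codec("mpeg2")
class Mpeg2Decoder:
    def decode(self, data: bytes) -> bytes:
        # Real decoding would live in a contributed object library.
        return data

def decoder_for(fmt: str):
    try:
        return CODECS[fmt]()
    except KeyError:
        raise ValueError(f"no codec registered for {fmt!r}")

print(type(decoder_for("mpeg2")).__name__)  # Mpeg2Decoder
```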
With equality between consumers and distributors, between amateurs and professionals, and between developers and users, the Internet tuner stands the best possible chance of fulfilling Tim O’Reilly’s rule that an architecture which plays into selfish interests can produce emergent strengths.
Conclusion: Aux Armes, Citoyens!
It steam-engines when it comes steam-engine time.
– Charles Fort
Somewhere in the middle years of the Web bubble, I heard venture capitalist Ann Winblad of Hummer Winblad Venture Partners lecture about innovation in the Internet era. Among the rules she gave as truisms, one struck home: “If you’re working on a product,” she said, “take it as a given that at least five other teams are hard at work on it, too.” When the idea for the Internet tuner popped into my head – just about nine weeks ago, after I’d given a lecture in Melbourne about digital television and the death of radio spectrum television broadcasting – I presumed that I’d stumbled onto a completely novel idea. In the interim, I’ve discovered how wrong I was. Projects like the BBC Internet Media Player, MythTV on LINUX, Media Portal for Xbox and Windows, Video LAN Controller for Mac OS X, Windows and LINUX – the list goes on and on. Just four weeks ago TiVO announced that they’re going to release a software upgrade which will make their PVRs Internet-aware, so that they can locate and download Internet audiovisual content. These ideas are floating around the commercial software community, too, in products like Microsoft IPTV, and SnapStream’s Beyond TV.
Many people are working toward the features of the Internet tuner, but none of them – to my knowledge – have brought these pieces together with an emphasis on the emergent qualities of the tuner as a tool for communication. The rise of broadband “peercasting” and the death of radio spectrum television broadcasting make the Internet tuner a disruptive technology of the first order. Let me be clear: the Internet tuner or something very much like it will do for audiovisual media what the Web did for print – make it immediately accessible from anywhere, at any time, for any reason. Because of the Web, libraries are transforming from repositories of knowledge into centers where people come to be pointed toward online repositories. The library is evolving into a physically constituted Google. Although some libraries view the Web as a new form of competition, the wisest have also learned how to adapt to the wealth of Internet-based information available through them, adding context to content.
The same will be true for Internet-based audiovisual distribution. The world we’re heading into isn’t an either/or, but a concatenation of “and”s. This and this and this and this and this, ad infinitum. In that sense, it doesn’t matter that there are competitors for this proposed-but-as-yet-still-quite-mythical Internet tuner. In fact, it will work to the tuner’s advantage.
Not so very long ago, when a software engineer needed something, they wrote a program or a function to produce the desired result. Object-oriented programming was supposed to introduce the idea of “reusability” to the software engineering process, but it hasn’t worked out that way. That doesn’t mean that programmers “roll their own” every time they need something done. Most often the first thing a programmer will do is Google for the specific function they need to perform, to see if someone else has done it, somewhere else, and made that code available for others to use. While this principle of using others’ work within your own is a particular feature of the UNIX/GNU/LINUX operating environments, it has also thoroughly infected both Microsoft’s and Apple’s offerings.
The truth of the matter is that, in 2004, nearly any task that needs to be performed by a piece of software has already been written into some other piece of freely available software. The smartest software engineers harvest the collective intelligence of the Internet, using that intelligence as a complement to their own creative practice. That’s why it’s a wonderful thing that the Internet tuner has more than a few competitors; each of these competitors (in the FOSS space) can be used to form core components of its architecture.
Let’s break that out into some real-world examples. The media portion of the tuner could be patterned on a combination of Video LAN Controller and RealNetworks’ Helix; the superdistribution architecture could be adapted from giFT, the FOSS peer-to-peer networking library; the interface databases could be written to run atop MySQL; and so forth. That said, not quite everything that the Internet tuner needs is freely available. In some cases, the project’s software engineers – distributed geographically and collaborating through the Internet – will “roll their own,” adding that work to the ever-increasing list of freely available software designs.
If this sounds a lot like the process that gave us LINUX and Apache, that’s no accident. The collective intelligence of open source software development has already become one of the greatest engines of creation in the 21st century. Even so, LINUX and Apache are invisible to the average computer user. The Internet tuner, on the other hand, is the very visible interface to a much larger set of services and audiovisual content distributed throughout the Internet. With so much wealth – in code and in content – just lying around, waiting to be harnessed by “insanely cool” projects, it is almost as though the Internet tuner is an idea that has been coolly biding its time, waiting until the critical moment when it passes from pleasant fantasy into inevitability.
It is my belief that the day of the Internet tuner has come, that the dam which has thus far held back the overwhelming torrent of audiovisual content is about to burst. Those who have built their houses on the sand of anti-markets will be swept away in the flood. Some will be caught unawares, while others will stubbornly try to stick their finger in the dike, hoping that time reverses, the genie re-enters the bottle, that the Internet and personal computers are somehow un-invented. That doesn’t seem likely. Indeed, the Internet tuner seems thoroughly in tune with the times – and the great advantage of good timing is that obstacles are removed by historic processes rather than individual efforts.
So we find ourselves, on this revolutionary day, in a unique historical space. We could be the peasants, storming the Bastille of media. We should be.
Aux Armes, Citoyens!
Mark Pesce
12 Ix – 13 Men
10 – 11 July 2004
Sydney
Open Source Television – © 2004, Mark D. Pesce. Rights for reuse granted the Creative Commons Attribution License.
1 Comment:
The site is Disinfo.com not disinfo.net