Observations from where technology meets business

Open Source and Enterprise 2.0

I wonder what Matt Asay is thinking now that Lockheed Martin has officially announced they are open sourcing the software they developed to run on top of Microsoft SharePoint to implement Enterprise 2.0 capabilities on their intranet. This announcement shouldn’t be a surprise for those who attended the Enterprise 2.0 Conference since Christopher Keohane and Shawn Dahlen told us they were doing this in a keynote interview with Andrew McAfee two weeks ago on the main stage.

Last year Asay said he would have been disappointed if he attended the E2.0 Conference since open source didn’t play a prominent role. I posted a summary that refuted these claims. This year’s E2.0 conference was no different. For example, in addition to Lockheed Martin’s announcement there was also a lively panel session on open source search (which I wrote about on the CCS blog). I’m not going to run through all of the examples of where open source played a role in the E2.0 conference this year since I think it has matured to the point where open source plays a supporting role in many IT initiatives and it is no longer the surprise it once was.

But this official announcement by Lockheed Martin seems like a watershed moment. It’s great to see a large company recognize the value of open sourcing code it would never sell and gains no competitive advantage from keeping to itself. The advantage comes from knowing how to use it.

Web Squared, In Wordle

Below is the Wordle cloud representation of O’Reilly’s and Battelle’s Web Squared paper.

Wordle - Web Squared

I am fascinated by the prominence of the word “data.”

Border Disputes in Enterprise Use of Open Source 3C

This is the fourth in a series of blog posts intended to help IT managers understand open source licensing and its implications. In this post I cover the risk of inadvertently licensing proprietary software as open source by mixing it with a GPL-licensed product.

Recall that my research focuses on the use of communication, collaboration, and content management (3C) solutions within enterprises. As it turns out, a large number of the leading open source 3C products are licensed under the GPL. So care must be taken if organizations choose to integrate these products with enterprise systems. The concern has to do with the GPL’s Copyleft provision, which states that a system must be licensed under the GPL if any GPL-licensed source code was used to create it and it was distributed. A previous blog post discussed hereditary and permissive open source licenses in more detail.

But let’s say this up-front: if your plans do not include modifying the source code of (or integrating with) a GPL-licensed product, or if you have no intention of distributing any software which involves GPL-licensed software (either linked or integrated with another system), then you should have nothing to worry about.

The issue I want to highlight here is where organizations are considering integrating a GPL-licensed product with an enterprise system. In some cases it is fairly straightforward to determine if a piece of software falls under the GPL. For example, if a developer links their source code with GPL-licensed source code then the resulting program is considered a derived work and would have to be licensed under the GPL, if it were distributed.

In her book “The Open Source Alternative: Understanding Risks and Leveraging Opportunities” Heather Meeker does a good job describing the “border dispute” of the Copyleft provision in the GPL. The problem is the GPL itself is not always clear as to what defines a program that is considered derived from GPL software. Meeker is very good at setting the scope of the issues and then exploring a number of scenarios regarding the applicability of Copyleft.

However, while Meeker’s background and analysis are interesting, I don’t intend to explore the legal issues brought on by the GPL in cases such as loadable kernel modules in Linux or proprietary operating systems running within a Xen host. What I want to briefly touch on here are the issues surrounding the use of GPL-licensed software within enterprise environments, most notably those which may have to integrate with corporate systems.

So let’s explore these issues by going through two scenarios:

  1. Configuring Drupal to use an enterprise’s directory system. For example, an enterprise uses Microsoft Active Directory as a central source of usernames and passwords. Should we be concerned with integrating Active Directory with a Drupal installation via an LDAP interface using the LDAP integration module?
  2. Surfacing data originating from an enterprise system within a sidebar on a blog running WordPress. For example, the latest sales numbers are displayed on a blog written by a marketing director, via a plugin which pulls data from their proprietary sales tracking system.

At first it may not be obvious that either of these scenarios would qualify as a derived work and be subjected to Copyleft. In both cases the GPL-licensed products involved (Drupal or WordPress) would probably not run on the same computers as the proprietary systems with which they are communicating.

To get some guidance we can refer to the Free Software Foundation. As Meeker says in her book: “The Free Software Foundation (FSF) is in some ways the de facto enforcer of the GNU General Public License (GPL).” To help clarify these types of questions about the GPL, the FSF has a frequently asked question list on their website. The answer to the question “I'd like to incorporate GPL-covered software in my proprietary system. Can I do this?” provides some illumination as to how the GPL applies to the above scenarios. It says in part:

“However, in many cases you can distribute the GPL-covered software alongside your proprietary system. To do this validly, you must make sure that the free and non-free programs communicate at arms length, that they are not combined in a way that would make them effectively a single program.”

In other words, it matters a great deal how a GPL-licensed program integrates with a proprietary system, regardless of whether they run on the same machine (and, given the pervasive connectivity of the Internet, it doesn’t matter if the two systems are on the same network or even in the same company). If they operate as a single program then the FSF considers them subject to Copyleft.

So, the key is to be able to demonstrate that the two systems involved can still be considered separate. In the first scenario (Drupal using Active Directory) communication between the two systems is done via a well-known protocol (LDAP). All of the software used in the Drupal system can operate as a separate entity by connecting to any LDAP-compatible directory, not just this particular Active Directory. The two systems are clearly separate programs.
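The “separate programs” argument can be sketched in code. Because the application speaks only a well-known protocol, any compliant directory can be substituted, and nothing binds it to one proprietary system. A minimal Python sketch (the class names and the toy `bind` method are illustrative; they are not Drupal’s or LDAP’s actual API):

```python
# Sketch: an application that authenticates users through a generic
# directory-protocol interface. Any server implementing the interface
# can be swapped in -- the application is not bound to one directory.

class DirectoryServer:
    """Abstract interface standing in for a standard protocol such as LDAP."""
    def bind(self, username: str, password: str) -> bool:
        raise NotImplementedError

class ActiveDirectory(DirectoryServer):
    def __init__(self):
        self._accounts = {"smith": "s3cret"}
    def bind(self, username, password):
        return self._accounts.get(username) == password

class OpenLDAPDirectory(DirectoryServer):
    def __init__(self):
        self._accounts = {"smith": "s3cret"}
    def bind(self, username, password):
        return self._accounts.get(username) == password

def login(directory: DirectoryServer, username: str, password: str) -> bool:
    # The application depends only on the protocol interface, so the
    # two systems remain clearly separate programs.
    return directory.bind(username, password)

print(login(ActiveDirectory(), "smith", "s3cret"))    # works against AD...
print(login(OpenLDAPDirectory(), "smith", "s3cret"))  # ...or any other directory
```

The design point is the substitutability: the moment the application would only work against one specific proprietary back end, the “arms length” argument gets weaker.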

The second scenario is less clear. In her book Meeker suggests “to focus on the spirit of the GPL rather than its letter” and goes on to say the degree to which the proprietary code can be considered a “black box” will help clarify whether there is sufficient “arms length” (as the FSF describes it above) between the two.

If the proprietary sales tracking system was custom-written by company employees and the company wrote a custom extension to serve the data along with a custom WordPress plugin to pull the data and display it within the blog, then this sounds like two systems performing as a single program and subject to Copyleft, if either of the two were distributed.

On the other extreme, let’s say the company purchased a widely available sales tracking system which provides sales data as an RSS feed, and the feed is fetched by a WordPress RSS plugin (which can be used with any RSS feed) that simply displays the feed items on the blog. This certainly seems to qualify as a “black box” and the two systems can be considered separate.
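A rough Python sketch of this “black box” arrangement, using only the standard library (the feed contents and function name are made up for illustration):

```python
# Sketch: a WordPress-style sidebar widget consuming sales data as a
# plain RSS feed. The proprietary sales system is a "black box" behind
# the feed; the plugin works with any RSS source.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Sales Tracker</title>
  <item><title>Q2 sales: $1.2M</title></item>
  <item><title>Q1 sales: $0.9M</title></item>
</channel></rss>"""

def sidebar_items(feed_xml: str) -> list:
    # A generic RSS consumer: it knows nothing about the system that
    # produced the feed, only the RSS format itself.
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

for title in sidebar_items(FEED):
    print(title)
```

Nothing on the blog side references the sales system’s internals; swap in any other RSS feed and the widget behaves identically, which is exactly the separation argument.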

In closing, while it makes sense for enterprises to use GPL-licensed software (after all, a large number of the leading open source 3C products are GPL-licensed), care must be taken when integrating it with proprietary systems.

This is a repost of a blog originally posted on the Collaboration and Content Strategies Blog

GSA 6.0: Details and Analysis

Last week Google announced the release of GSA 6.0, software which runs on Google Search Appliances used by enterprises to provide a Google-like search experience for intranet-based content. Major features of this release include:

  • New options for combining and coordinating the operation of multiple appliances.
  • Support for a new appliance offering, the GB-9009, and enhanced support for the existing GB-7007. The GB-1001, GB-5005, and GB-8008 are being retired.
  • A new Policy ACL construct which implements “early binding” security trimming of search results (only return content to which the user has access).

The GSA home page and Google blog posts announcing the new release aren’t entirely clear on which of the listed features are fully supported, which are considered “beta,” and which are considered a “Google Lab Feature.” However, the Guide to Software Release 6.0 summarizes this information well.

Overall Impressions

As much as Google likes to tout user features, such as query suggestions or user-added results, these features are still considered “beta” and most enterprises won’t turn them on until they are fully supported. For all intents and purposes, the new supported capabilities of this release are mostly infrastructure-focused.

New software features permit multiple appliances to operate as a single federated collection of appliances. This comes at the same time Google is completing a turnover of its entire GSA product lineup. The GB-1001, GB-5005, and GB-8008 appliances are being replaced with the GB-7007 and GB-9009. By enabling the appliances to work together customers can now scale their search capacity by adding more appliances rather than by upgrading to a higher capacity appliance.

The new Policy ACL feature looks to be an innovative approach to security trimming search results. However, only adventurous enterprises will benefit from its use at this time. Policy ACLs empower the appliance to make security access decisions itself (without the assistance of content source repositories) before returning search results. In other words, the appliance returns links only to content to which the user requesting the search has access.

Prior to this release the GSA provided this capability through “late binding” techniques, which deferred to the content source repository for access decisions at query time, a much less efficient method. However, the Policy ACL feature only provides the building blocks (e.g., administrative screens, APIs, new databases on the appliance) for enabling early binding security trimmed search results. Content repository connectors (e.g., for searching SharePoint, Documentum, and others) available for use with the GSA still use the late binding methods.

The rest of this blog post provides additional details on these new features.

Details on combining and coordinating the operation of multiple appliances

Two new features in GSA 6.0 enable flexibility in how appliances are purchased and deployed. The feature called “GSA-to-GSA Unification” in the announcement is referred to as “Federation” and “Dynamic Scalability” in the product documentation. It enables multiple search appliances deployed in different locations to work together, providing a combined user experience that searches across all of the appliances. An appliance designated as the “primary node” works with other secondary node appliances to coordinate the entire setup. Each appliance can operate independently, serving search results for the collections it manages. In addition, a search submitted to the primary node appliance can be executed (federated) across all participating appliances. The primary node appliance then combines these results into a single set of search results for the user.
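Conceptually, the primary node’s job can be sketched in a few lines of Python (the data structures and scoring are invented for illustration; they are not the GSA’s actual federation format):

```python
# Sketch: a primary-node appliance fanning a query out to secondary
# appliances and merging the results into one ranked list.

def search_appliance(index, query):
    # Each appliance serves (url, score) results for the collections
    # it manages; the index here is a toy dict of url -> (text, score).
    return [(url, score) for url, (text, score) in index.items()
            if query in text]

def federated_search(primary_index, secondary_indexes, query):
    # The primary node queries itself, federates the query to the
    # secondaries, then combines everything best-scoring first.
    results = search_appliance(primary_index, query)
    for idx in secondary_indexes:
        results.extend(search_appliance(idx, query))
    return sorted(results, key=lambda r: r[1], reverse=True)

us_index = {"http://us/doc1": ("vacation policy", 0.9)}
eu_index = {"http://eu/doc7": ("vacation policy eu", 0.7)}
print(federated_search(us_index, [eu_index], "vacation"))
```

The hard parts the sketch glosses over are exactly the ones the documentation warns about: normalizing relevance scores across appliances and handling secured content consistently.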

This is an intriguing capability that can enable large enterprises to distribute appliances around the world to search local content stores while still offering a unified/global search experience for those who need it. However, properly configuring the appliances to work cooperatively looks to require significant planning, especially when dealing with secured content (which most enterprise networks deal with).

This feature can also be used to permit multiple appliances to work together so enterprises can scale their GSA farms horizontally (by adding more appliances) rather than upgrading to a larger capacity appliance. However, the “Multibox” feature also mentioned in this announcement will make this much easier.

Multibox provides a method of making multiple appliances operate as a single large appliance by sharing crawling and index serving duties. However, although included in GSA 6.0, this feature is still considered in beta.

Details Regarding Support for Early Binding Security Trimming

GSA 6.0 supports the use of early binding for checking access to content. Early binding is an approach for filtering search results so that only content the user submitting the query has access to is returned. Another way this has been described is “security trimming” of search results.

Earlier versions of GSA only supported late binding to implement security trimming. Late binding methods perform an access control check during a search operation (therefore, it is done late in the process, as opposed to early binding, which checks early in the process, during indexing). In short, GSA late binding impersonates the user submitting the search and tries to access a piece of content before displaying a link to it. If the access check fails then the matching search result is not shown. [For more discussion of early versus late binding, see this presentation from Mark Bennett and Miles Kehoe given at the 2008 Enterprise 2.0 Conference.]

The challenge in implementing early binding is in the complexities that can arise when indexing content from multiple sources. With early binding, access control information is added to the search index as additional metadata stored along with the content. This additional metadata allows query processing to take into account the security semantics of the user submitting the search (my ID, my groups, what roles I assume). So, a search request like “vacation policies” is translated into something like “vacation policies which user smith has access to.” There is an index-time component to early binding (i.e., store the access control list along with the content) and a search-time component (i.e., determine who is submitting a query and what groups they belong to). With early binding no additional post-query access checks against external systems are required (as there are with late binding).
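A toy Python sketch of the early binding idea (the index layout and field names are illustrative, not the GSA’s internal format):

```python
# Sketch of early binding: access control metadata is stored in the
# index at crawl time, so query processing can trim results without
# calling back to the source repositories.

INDEX = [
    {"url": "http://hr/vacation", "text": "vacation policies",
     "allow_users": ["smith"], "allow_groups": ["hr"]},
    {"url": "http://exec/plan", "text": "vacation policies draft",
     "allow_users": [], "allow_groups": ["executives"]},
]

def early_bound_search(query, user, groups):
    # "vacation policies" becomes, in effect,
    # "vacation policies which user smith has access to".
    hits = []
    for doc in INDEX:
        if query not in doc["text"]:
            continue
        if user in doc["allow_users"] or set(groups) & set(doc["allow_groups"]):
            hits.append(doc["url"])
    return hits

print(early_bound_search("vacation", "smith", ["staff"]))
# -> ['http://hr/vacation']
```

The filtering happens entirely against index metadata; a late binding system would instead have to contact each source repository at query time to ask whether smith may see each matching document.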

GSA implements early binding through the use of a construct called “Policy ACLs” (Policy Access Control Lists). These are made up of a URL pattern (which defines the URLs the policy secures) and a list of allowed users and groups. Policy ACLs can be added through administrative screens (one by one via a simple web form or uploaded via a specially formatted text file), programmatically via the new Policy ACL API, or as exact-match URL patterns embedded within feeds along with content and metadata (however, at this time, I cannot find the documentation for defining a Policy ACL within a feed). Identities of users are determined by the same methods the appliance uses to authenticate (HTTP Basic, NTLM, Kerberos, etc.). Groups can be mastered by an LDAP directory or within a group database stored on the appliance that can be programmatically updated via the Policy ACL API.
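A hypothetical Python sketch of a Policy ACL check (shell-style wildcards stand in for the appliance’s actual URL pattern syntax, and the default-allow fallback for unmatched URLs is an assumption of this sketch, not documented GSA behavior):

```python
# Sketch of a Policy ACL: a URL pattern plus lists of allowed users
# and groups. fnmatch's shell-style wildcards approximate the idea
# of a URL pattern here.
from fnmatch import fnmatch

POLICY_ACLS = [
    {"pattern": "http://intranet/hr/*", "users": ["smith"], "groups": ["hr"]},
    {"pattern": "http://intranet/finance/*", "users": [], "groups": ["finance"]},
]

def is_authorized(url, user, groups):
    for acl in POLICY_ACLS:
        if fnmatch(url, acl["pattern"]):
            # Allowed if the user is named directly or belongs to
            # any of the ACL's groups.
            return user in acl["users"] or bool(set(groups) & set(acl["groups"]))
    # No matching policy: this sketch treats the URL as public.
    return True

print(is_authorized("http://intranet/hr/benefits", "smith", []))
print(is_authorized("http://intranet/finance/q2", "smith", ["hr"]))
```

A connector supporting early binding would have to generate entries like these from the source repository’s own permission model, which is exactly the normalization problem discussed below.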

Google has prebuilt content connectors for SharePoint, Documentum, LiveLink, and FileNet. These enable the GSA to index and provide search for these repositories. However, none of these connectors have been updated to support GSA early binding. To properly support early binding these connectors will have to map the security semantics of each of the source repositories to GSA’s Policy ACLs via the ACL API or within a feed.

So it appears the only enterprises that will benefit from this level of support for early binding are those capable of writing custom feed connectors that map the security semantics of their source application to what is provided by GSA. It will be interesting to see if Google developers add early binding support to their standard GSA connectors anytime soon (or at all). Their challenge will be in normalizing the security semantics of these content repositories with the semantics of the GSA’s Policy ACL construct.

Why Do Wikis Have More Adoption?

John Tropia says “What’s happening is that wikis are actually replacing a process, they are becoming a new way to do group work.” I think the explanation is simpler. To me, wikis represent documents and people understand documents. However, wikis store documents on a web page rather than within a file.

Wikis are tapping into a schema which people already use. After a little introduction it’s easier for people to understand wikis than other social software because they are based on something familiar.

So wikis aren’t necessarily replacing processes but they are replacing directories of documents (and improving navigation among them). People understand documents. They are necessary to get much of their work done. Documents are built into existing processes.

I may hate files, but I absolutely need documents. What I am hoping we will see is a redefinition of document.

Why wikis have more adoption?

What sparked today’s post is a post from Sameer, 2009 is the year of Enterprise 2.0? Hold your horses….

In his post we see that Wikis are gaining more traction. I think this is because they are more:

  • group based tools
  • based around a task (an environment of certainty)
  • help with process failure, and
  • don’t require network effects like blogs and social networks …ie. wikis and forums don’t need lots of people to take off, all they require is a small group of people.

Library clips: Do group tools get more traction due to not requiring network effects, and being in the context of certainty

This Week’s “Well Duh!” Moment

Being a father of teenagers I try not to use too many of their idioms within my own vocabulary (I embarrass them enough as-is). But sometimes I am at such a loss for words to describe a head-shaking moment of disbelief that the phrase “Well Duh!” fits all too well.

While I know there are probably many reasons why this hasn’t been done before, the announcement on the SharePoint Team Blog (a team which works on a product that provides, among many features, web content management) that the SharePoint marketing site is now being run on SharePoint seems kind of…well…late.

Lights… Camera… Action!

Today, we launched the SharePoint marketing website on SharePoint Server 2007. 

SharePoint on SharePoint: Launch of new website

btw, I think this means that about 0.08% of the entire Microsoft.com site (based on the results of these two queries at Live.com) is managed by the company’s web content management software. This appears to run counter to Microsoft's reputation for eating their own dogfood.

The Roles Open Source Can Play in an Enterprise Vendor's Business

This is the third in a series of blog posts intended to help IT managers understand open source licensing and its implications. In this post I cover the roles an open source product can play in an enterprise vendor’s business (which are enabled by open source licenses) and what this means for the enterprise itself.

A previous post discussed the four key levers of open source licensing that can support a vendor’s business model:

  1. Hereditary versus permissive licensing (and the impact of Copyleft)
  2. Source code ownership rights
  3. Limitations around attribution
  4. What defines “distribution”

These levers are important to enabling the roles a vendor intends a software product to play in support of their business objectives, which are listed below. I’ll go through each of these in detail:

  • Commoditizing a Function
  • Increasing Accessibility to Code, But Retaining Ownership
  • Sustaining a Community
  • Delivering Services
  • Providing Support

Commoditizing a Function: This role is enabled by permissive open source licenses. For example, the Apache web server has commoditized basic web serving. Large companies like IBM have been the biggest contributors to these types of projects, often supplying the most developers and even contributing code that becomes the basis of new projects.

Nearly all open source products benefit from these commoditized components since they provide the foundation for other opportunities (e.g., LAMP-based solutions). IBM benefits by incorporating these in commercial product offerings, enabling contributions to come from a wider audience of developers (thereby sharing some cost), providing a forum to explore new technologies and standards (reducing risk and increasing interoperability), and reducing the chance of competing technologies becoming standards (for example, the Apache web server has remained the most popular web server despite efforts from companies like Microsoft).

Increasing Accessibility to Code, But Retaining Ownership: This role is enabled by dual licensing of software. In particular, companies have licensed software they own with both a proprietary license and a hereditary open source license, such as the GPL. The classic example of this is MySQL. In the content management market, Alfresco is a good example.

The key to enabling this dual licensing approach is having ownership rights to the source code. For example, take a look at the Sun Contributor Agreement. The author of a contribution that makes it into the core product must give ownership rights to Sun.

In this role the GPL discourages competitors from incorporating MySQL source code into a proprietary product (since distributing the resulting product would require it to be licensed under the GPL) while also increasing visibility enabling others to interoperate with it. Nearly every Linux distribution comes packaged with the GPL version of MySQL, making it an easy choice for many developers to use. I’m sure Alfresco would love to have the same type of relationship with Linux at some point.

Sustaining a Community: For all of the complaints made against the GPL (mostly by commercial software vendors) it is hard to dispute the success of a number of communities supporting GPL-licensed products. For example, the Drupal community has over 350,000 members and the WordPress community has produced over 4,200 plugins. These are extremely active communities and are examples of what many established commercial vendors have been striving to build for years.

None of the main commercial entities involved in these products (Acquia for Drupal and Automattic for WordPress) control the source code, nor license it separately. The GPL seems to level the playing field, enabling community members to compete on aspects other than proprietary software offerings. Instead, community members use the source code as a basis for competing in downstream markets such as custom website development, various types of services, and support. The result is a robust open source product that incorporates cutting-edge innovations at a pace commercial software vendors have difficulty matching.

Delivering Services: One of the ways Acquia and Automattic leverage their open source products is to provide services and support. For example, basic hosted WordPress.com blogs are free. But Automattic also provides premium blogging services as well as software support for enterprises. Acquia also provides support and later this year will offer site hosting and a premium site-search service (based on Solr, an Apache-licensed product, and provided via a Drupal module). Of course, many online service providers use open source software. The most recognizable are Google and Yahoo.

Providing Support: Although community leaders (such as Acquia, Alfresco, and Automattic) offer paid support services for their open source products, there are also companies that provide support for a broader selection. These include OpenLogic and SpringSource. For example, OpenLogic provides support for MediaWiki (the software which runs Wikipedia) as well as hundreds of other open source products including development tools.

Most enterprises will need some type of commercial support for open source products, unless they intend to maintain a staff with sufficient skills to do it themselves. So which type of support should you use, a contract with a community leader or a support-only vendor? Each situation can be different but, in most cases, I would first consider going with the community leader (since this contributes to the long-term viability of the product), unless the use of the product is limited and the risk of problems low. Either way, I think there is room in the market for both types of open source vendors and their positioning will be sorted out over time.

Clearly, open source products need viable community leaders and these companies will likely provide the best support. However, enterprises can only be expected to maintain so many vendor relationships, and there are literally hundreds of opportunities for enterprises to use open source (many of which do not have a commercial entity behind them), so the open source support vendors should have plenty of business to go after.

What does this all mean to enterprises wanting to use open source solutions? When considering a particular open source product, first understand the role it plays in the plans of the vendors supporting it.

  • Products with strong communities behind them will tend to be more innovative, likely have enterprise support options, and be around for some time.
  • Those products without strong communities need to be assessed as if they were a commercial product being sold by the vendor providing support.
  • If they have neither a strong community nor a viable commercial vendor behind them, either reconsider your options or proceed with caution and look for signs that the community will continue growing.

The key is understanding the long-term viability of an open source product, whether it is backed by a vendor or a community.

In my next post I will discuss some of the risks involved with open source licensing, thoughts on mitigating these risks, and open questions that still remain.

This is a repost of a blog originally posted on the Collaboration and Content Strategies Blog

Key Levers of Open Source Licenses

This is the second in a series of blog posts intended to help IT managers understand open source licensing and its implications. In this post I cover the basics of open source licensing while also highlighting aspects that can enable vendor business models and influence how an enterprise approaches using open source products.

The definition of an open source license is generally considered to be the responsibility of the Open Source Initiative (OSI), a non-profit corporation formed in 1998 to promote the development and use of open source software. The OSI definition of an open source license is called the Open Source Definition (OSD) and is made up of 10 requirements that must be met before a license can be considered open source. OSI maintains a list of approved software licenses that have gone through their license approval process. Software creators can use the OSI Certification Mark logo and notification text if their software is licensed with one of the OSI approved licenses.

OSI presently lists 72 approved open source licenses. However, only a few of them are in wide use. For example, Freshmeat (a site that tracks open source software) says that over 60% of open source projects use the GNU General Public License (GPL). To make open source licenses easier to understand it is best to break these down into two broad categories: hereditary licenses and permissive licenses.

The most controversial, and most recognizable, hereditary open source license is the GPL. The terms of the GPL most relevant to enterprises using open source products are:

  • The source code must be made freely available.
  • Once software is licensed as GPL its licensing terms cannot be changed (the only exception being an owner, who retains all rights to the original source code).
  • Any modifications to the GPL-licensed source code, or any software incorporating GPL software, must also be licensed under the GPL and may also be required to be made freely available. This is also known as the “Copyleft” provision. However, this provision is only triggered when the software is distributed.

In the diagram below, source code licensed under the GPL is mixed with “Your Source Code,” which could be as simple as small changes to the GPL-licensed source code or as complex as software intended to be kept proprietary (either custom developed or purchased). The GPL’s “Copyleft” provision requires the resulting software to also be licensed under the GPL, if it has been distributed. Resulting software not distributed is not required to be licensed under the GPL.

 Hereditary License Example

For many businesses, which sell commercially licensed software, the GPL is viewed as “viral” and a threat to their business model. Open source products licensed under a permissive license, such as the Apache License, are generally acceptable to commercial software vendors and not seen as a threat. The diagram below illustrates why:

Permissive License Example 

The key points from the above diagrams are:

  • Enterprises need to be careful when integrating with products with open source hereditary licenses. However, I should also note that care should be taken when integrating with any licensed software.
  • The act of distributing the resultant software (“Something New”) triggers the Copyleft provision of the GPL. If the software is not distributed, the provision is not invoked. In an enterprise context, distribution of source code generally means giving copies of it to someone outside of the enterprise. However, there are other hereditary open source licenses that define “distribution” much more broadly, including the act of using the software over a network (for example, a web application being used over the Internet), which may not involve the transfer of source code at all.

Other levers that are involved but are not obvious from the above diagrams:

  • A source code owner (the original author or somebody who has obtained ownership rights) can license the software any way they like. For example, they can release one version under the GPL and another under a commercial license.
  • A few open source licenses have other restrictions requiring some form of attribution. This can be simply attributing ownership in the source code or retaining the use of the owner’s name or logo in screens seen by someone using the product.

To sum it up, the key levers of open source licensing, which can support a vendor’s business model, and which enterprises should be aware of are:

  1. Hereditary versus permissive licensing (and the impact of Copyleft)
  2. Source code ownership rights
  3. Limitations around attribution
  4. What defines “distribution”

My next post will discuss the types of open source businesses these levers enable and what this means for enterprises wanting to use open source solutions.

This is a repost of a blog originally posted on the Collaboration and Content Strategies Blog

Open Source Software Licensing for IT Managers

A recent article from Bruce Perens, a leading open source advocate, entitled “How Many Open Source Licenses Do You Need?” reminded me of how confusing open source licensing can be for IT managers who aren’t plugged into the open source world. Although the article is published on a website intended to be read by IT managers it was clearly written for companies producing open source software. After writing down some of my own ideas for explaining open source licenses to IT managers I decided to turn these into a series of posts.

But first, some background. My research at Burton Group focuses on the use of communication, collaboration, and content management (3C) solutions within large enterprises. This means I don’t come to this topic solely from a developer’s or a system or network administrator’s perspective. The 3C solutions we cover are usually considered infrastructure components that can be leveraged by applications, so we tend to have a foot on both the infrastructure and application sides of IT.

What we see driving IT management interest in open source is:

  1. Saving money through lower licensing costs.
  2. A desire to tap into innovative solutions, particularly those driven by active open source communities.
  3. Using open source software to learn about emerging technologies (for example, much of what is called Web 2.0 was first built with open source).

The concerns IT managers have with open source software are:

  • How well does the software work?
  • Will it integrate with my existing infrastructure and applications?
  • How well is it supported?
  • Is it secure?
  • Are there any risks with using open source software?

In my client dialogues about open source, a topic that often drives discussion is licensing. One of the keys to effectively applying open source solutions is understanding a vendor’s business model and the role the software plays in its market plans. An open source license supports that role and business model. Therefore, understanding open source licenses is important because:

  • They are an indication of the business model used by an open source company (and commercial software and services companies using open source software).
  • Each of these business models has a different set of factors to pay attention to in terms of how they use open source and what they must do to succeed.
  • They also influence how an enterprise might use an open source product. For example, maybe you are using an open source library in one of your applications, or perhaps you are using a full product (like an open source blog) but are thinking of integrating it with an application. Each of these scenarios comes with a different set of opportunities and risks.

My next post will be an introduction to specific open source licenses, the levers they can bring to bear on a software market, and what they can mean to an enterprise using open source software.

This is a repost of a blog originally posted on the Collaboration and Content Strategies Blog

YouTube For Wii Review

This weekend I tried the new YouTube on your TV site using our Wii console. It’s pretty good for a beta. In my opinion, this is the first site that made spending the $5 for the Internet Channel (the Opera browser) worth it. Of course, YouTube picture quality is low, but for many types of videos it is more than sufficient.

For example, I was pleasantly surprised to find presentations from the TED conference on YouTube. The TEDtalksDirector channel is exceptional. The speakers TED attracts are smart, articulate, often have provocative ideas, and the talks are delivered in a concise, approachable format.

The YouTube/TV/Wii viewing experience is decent and will be familiar to anyone who has used YouTube on a computer. However, finding videos beyond those featured on the front page is a chore. Until the navigation improves, I browse channels on my laptop to find the videos I’d like to see, then locate them on the Wii using the search function.

In any case, this effort shows how sites like YouTube are changing television viewing. I don’t care for most of what is on YouTube, but it does provide a new channel to potential viewers. The TED recordings are good examples of how the site can be used to deliver quality material.
