I sat in on an interesting meeting on Wednesday where I was helping a client with some content management issues. Content management, you ask? Chris, I thought you worked primarily in social media? We do, and this story ultimately gets there… so bear with me. Besides, I did build the very first version of Stratfor.com back in 1998, so I do know portals and information management. And even though Stratfor’s first site was really, really ugly, it permanently changed the company for the better. A few of its early-stage investors may disagree… but the survivors have thrived there. But I digress.

It is a social project because the client is trying to build an intranet site to help geographically distributed employees communicate better. The problem is that the company has launched portal after portal with much fanfare, but with very little adoption. Each one has been an expensive, well-intended experiment, yet the problem remains the same year after year.

As I sat in this meeting, I heard about the problems, the failures of the past, and the feedback my client had gathered about previous sites. And then the light bulb went on. I had a moment of clarity about content management for the enterprise and where things seem to be heading in 2008 and beyond. The best way to walk you through my thinking is probably to walk you through trends on the Web over the last 10 years, give you my epiphany, and wrap up with implications. Work with me here though – this is perhaps one of my longest blog posts yet.

Web 1.0 – 1995-2002

Let’s start with the early days of the Web, when search wasn’t particularly popular. The algorithms weren’t very good and many web sites weren’t indexed at all. It was easier for a user to go to a portal like Yahoo, or to remember a destination site & go there, to get what he/she needed. But something changed as the Web grew & sites proliferated: it became too difficult for a typical user to remember the hundreds if not thousands of web sites necessary to gather information on a regular basis. Yahoo tried to help with this information overload problem by creating a directory of Web content, with a pseudo-functional search function built atop it. But its search wasn’t very good either, and the directory was just another proprietary system for managing content. If you were in one part of the directory, were you in the right place? And content often needed to be cross-referenced. We ran into this issue at Stratfor early on, when we had multiple classifications for our content. For example, a story about a skirmish between Colombia and Venezuela over drug interdiction could be classified into any number of categories: Latin America, Military Conflict, Venezuela, Drug War, Drug Interdiction, Colombia. That’s a limited example, but you can probably imagine how this gets out of hand very quickly when you go through the exercise for every piece of new content you create.
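To make the cross-referencing problem concrete, here is a minimal sketch in Python (the data is illustrative, not Stratfor’s actual system) of how a single story can live under every category it belongs to via an inverted tag index – something a one-slot directory can never do:

```python
from collections import defaultdict

# One story, many legitimate classifications.
story = {
    "title": "Colombia-Venezuela skirmish over drug interdiction",
    "tags": {"Latin America", "Military Conflict", "Venezuela",
             "Drug War", "Drug Interdiction", "Colombia"},
}

# Inverted index: tag -> list of stories, so the same story is
# reachable from every category it carries.
index = defaultdict(list)
for tag in story["tags"]:
    index[tag].append(story["title"])

print(index["Venezuela"])  # the story shows up here...
print(index["Drug War"])   # ...and here, with no duplication of content
```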

As content management systems were born to solve some of these problems, Google cut right to the chase & put together a superior search algorithm that let a user bypass the navigational nonsense of hierarchical web content organization entirely. Now all a user has to do is enter a few keyword terms, and Google will bring back the most relevant content on the Web for his/her needs. It makes sense in retrospect – millions of sites build content and nobody really organizes it well… so enter a solution!

Web 2.0 & the rise of the high-value snippet – 2003-2010?

Fast forward to the world of the participatory web – where anyone can create content via a blog, an article, a white paper, etc. In the participatory web, thought leaders emerge through a consistent commitment to creating & posting high-quality information. As their sites become more valuable, they win more and more adoption and a long tail of users emerges. For the average Joe, keeping up with the latest news, commentary, and developments – and doing so in real time – becomes a big deal. Two realities emerge: 1) site managers need better tools for managing content. Let’s face it, in this new world the content management systems of 1998 grow more irrelevant every day. Perhaps more importantly, 2) users grow weary of visiting multiple sites just to keep up with the participatory web. It becomes harder and harder to keep up with what is happening.

We’ll get back to the implications for content management in a moment. But on the users’ side, RSS readers emerged to help people navigate the glut of available content. With an RSS reader, a user can scan updates from hundreds of web sites, news outlets, and blogs to which he/she is subscribed. A user can drill down & read full articles, or just look through everything to see what is happening, all through a single interface. It’s a killer time saver… every day after breakfast, I can be up to speed on 80% of what I really need to know by scanning 90% of the sites I need to monitor. And odds are a friend will forward me anything that is highly relevant to my interests… so there is yet another backstop.
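For the curious, this is roughly what an RSS reader is doing under the hood. A minimal sketch in Python using the popular feedparser library; the feed URLs here are placeholders:

```python
import feedparser  # third-party library: pip install feedparser

# The sites I monitor; these URLs are illustrative placeholders.
feeds = [
    "https://example.com/news/rss.xml",
    "https://example.org/blog/feed",
]

# Pull every feed and print just the "snippets" (titles + links),
# so I can decide in seconds which items deserve a full read.
for url in feeds:
    parsed = feedparser.parse(url)
    for entry in parsed.entries[:10]:  # latest ten items per feed
        print(f"{entry.title} -> {entry.link}")
```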

How is this possible? I would argue it is the utility of what I call the “snippet” – or “tweet” for you twitterers out there. With RSS, you get a ton of value because you quickly consume all the content updates on a given site by reading only what is important to you. Instead of surfing to a Web site only to find that nothing of any significance has changed, you can quickly scan your RSS reader and check everything at once. The collective weight of all the titles you ignore (or “snippets” you blast through) makes you decidedly more productive *and* it ensures that you aren’t caught off guard by any information you should know.

Twitter and FriendFeed do this with the mundane details of my friends’ lives. Most of what is broadcast via Twitter & FriendFeed is drivel that I don’t really need to know. But there are nuggets, and I would argue that the 5% that are relevant can make a person’s social life more interesting.

The Epiphany: the coming revolution in enterprise content management

Now, bringing all this back to content management… the enterprises I have worked with haven’t yet figured this out. Call it risk aversion, call it a lack of awareness of what is going on in the Web today – a lot of things probably contribute. But when history is written, I think these Web 2.0 technologies will be remembered more for their ultimate impact on communication within the enterprise than for their impact on consumers. Call it a hunch on my part, but I think we’ll see corporations catch up – both to gain efficiencies in communication inside the firewall and to reach consumers who are using these technologies in new, innovative ways.

As for navigation, I believe that any site that hosts a library of content should avoid a traditional nav structure at all costs. A traditional “left or top nav” structure is effectively dead; it is a relic of the Web 1.0 world. Any nav structure is by definition proprietary and therefore doomed to failure. Sure, information architects & site designers/developers can supplement a traditional nav with search. But even assuming the search algorithms work at all, the site won’t be effective unless customers or employees are forced to use it. Why?

  1. Users may not understand what a nav structure means – as a user, should I find news on the border skirmish between Colombia & Venezuela in a certain part of the site? Or should I just do the easy thing & search for it?
  2. Nav structures don’t deal well with cross-indexed content – this problem becomes much more complex over time, *especially* if you intend for users of a site to contribute to that site.
  3. Nav structures lend themselves to content silos – it is almost impossible to get into the mind of the content publisher to ensure that users have the same understanding of the nav structure & the philosophy behind posting content in certain places.
  4. People consume content differently now & nav structures are a relic of 1997 – they have given way to keyword search, content tags, and rich advanced search features (popularity, date added, etc.). See the sketch after this list.
  5. Users don’t look for new content – users are busy, and they rely on tools that monitor the Web sites of interest to them. They expect to be notified of the arrival of new content because technologies like RSS make it possible.
  6. Users don’t care where new content sits in your nav, but they do care that your site is regularly updated – nobody likes going to a stale site. Activity and a vibrant community are now critical to a site’s success, and nothing communicates activity like fresh updates every day. The time/date stamp is your friend. Perhaps inadvertently, blogs and web sites like TechCrunch have really nailed this point, but people building portals and content repositories have not yet caught on.
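
To illustrate points 2, 4, and 6 together, here is a minimal sketch in Python (hypothetical data and field names, not any particular client’s system) of retrieval over a flat, tagged article pool: cross-indexing comes for free, there is no hierarchy to misread, and sorting by date keeps freshness front and center.

```python
from datetime import date

# A flat pool of tagged articles -- no nav hierarchy, just metadata.
articles = [
    {"title": "Q3 sales wrap-up",   "tags": {"sales", "quarterly"},
     "added": date(2008, 7, 2),  "views": 412},
    {"title": "New travel policy",  "tags": {"hr", "policy"},
     "added": date(2008, 7, 9),  "views": 980},
    {"title": "Sales tooling tips", "tags": {"sales", "howto"},
     "added": date(2008, 6, 20), "views": 77},
]

def find(tag, sort_key="added"):
    """Return articles carrying `tag`, freshest (or most viewed) first."""
    hits = [a for a in articles if tag in a["tags"]]
    return sorted(hits, key=lambda a: a[sort_key], reverse=True)

# "sales" content is found without knowing where it "lives" in a nav,
# and the date stamp surfaces the newest material first.
for a in find("sales"):
    print(a["added"], a["title"])
```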

The fact is that users will adopt your site & subscribe to it if 1) there is interest in the subject matter, 2) your information architecture prioritizes the refresh of content/new information as Job 1, and 3) you use new technologies to reinforce that your site is alive and always relevant to a user’s interests. Then you can build communities of engaged Users working together more productively for fun or professional success.

Successful content sites in the future will shy away from nav structures and site maps in favor of Tags, and will treat new content as “Articles” rather than pages that Users are forced to hunt down after each update. Users will be able to subscribe to just about any content site via RSS, and they’ll be able to comment and/or revise things as needed. When enterprises finally bring the best of the Web 2.0 world into their content management projects, they’ll see levels of success they have never seen before.
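On the publishing side, exposing new Articles as an RSS feed costs very little. Here is a minimal sketch using only Python’s standard library; the channel details, article data, and URLs are placeholders:

```python
import xml.etree.ElementTree as ET
from email.utils import formatdate

# Hypothetical Articles; in practice these come from the content store.
articles = [
    {"title": "New travel policy", "url": "https://intranet.example.com/articles/42"},
    {"title": "Q3 sales wrap-up",  "url": "https://intranet.example.com/articles/41"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Company Intranet Updates"
ET.SubElement(channel, "link").text = "https://intranet.example.com/"
ET.SubElement(channel, "description").text = "Every new Article, pushed to subscribers"

for a in articles:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = a["title"]
    ET.SubElement(item, "link").text = a["url"]
    ET.SubElement(item, "pubDate").text = formatdate()  # RFC 822 timestamp, per RSS 2.0

# Serve this XML at a stable URL and any RSS reader can subscribe.
print(ET.tostring(rss, encoding="unicode"))
```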

In my rambling, I didn’t get to implications, but I’ll talk about that more in a future post.