Monday, May 31, 2010

Why Standards? Why Interoperability?

The Importance of Standards

Standards and interoperability have always been the foundation of the video conferencing industry, and for many people in this business the need for them is self-explanatory. Using standards to connect systems from different vendors into a seamless network is the best way to assure end-to-end audio and video quality and protect customer investments. However, we frequently forget that – as video becomes an intrinsic part of all kinds of communication tools – the overwhelming majority of new users do not quite appreciate the importance of standards. As a result, questions about what standards are and why we need them have been popping up much more often lately. Instead of answering them in separate emails, I decided to write a detailed, and maybe somewhat lengthy, but hopefully balanced and useful overview of the topic.

A technical standard is an established norm or requirement, usually a formal document that defines uniform engineering or technical criteria, methods, processes, and practices. Technical standards are important in all areas of our lives. They make sure, for example, that electrical appliances support 120V in the USA and 230V in Germany, and that when you plug a toaster into the outlet, it just works.

Another demonstration of the power of standards can be found in the railway system. Most of the world’s railways are built to a standard gauge that allows engines and cars from different vendors to run on the same rails. The decision of the Royal Commission in 1845 to adopt George Stephenson’s gauge (4 feet 8 1⁄2 inches, or 1435 mm) as the standard gauge stimulated commerce because different railway systems could be seamlessly interconnected.

In the area of communications technology, analog telephone standards still connect billions of PSTN users today. With the migration to digital technology, new standards for voice (and later video and other communications) had to be agreed on to enable systems and their users across the world to communicate.

How do Communications Standards Emerge?

Standards are usually created by consensus: a number of vendors meet at an industry standardization organization and, over a period of time (which may be months or years), work out a specification that is acceptable to all parties involved. Difficulties emerge when participating vendors already have proprietary implementations and try to model the standard after what they already have. The motivation is mostly financial: redesigning existing products to meet a standard is expensive. When implementing a standard, vendors may also be giving up some intellectual property and competitive differentiation. Negotiating a standard is therefore a balancing act between the vendors’ interests and the interest of the community and industry.

Sometimes the standardization process stalls, and other means are used to set standards. Governments may get tired of waiting for an agreement in a particular industry, pick a standard, and allow sales only of products that comply with it. Governments are especially concerned with security standards, and there are many examples of government-set standards for communication security.

Markets sometimes enforce de-facto standards: a player in a certain market segment gains such a large market share that its specification becomes the standard for everybody else who wants to connect to the market leader. This creates a lot of problems in emerging markets, where market shares change rapidly and the companies rising fast today lose to the “next big thing” tomorrow. Standards are designed for the long run, while proprietary implementations may come and go very quickly.

Skype and Google

Today is no different from any other day in human history, and the battle between standards and proprietary implementations continues. Skype is getting a lot of attention with its hundreds of millions of users (only a small portion of them active, but nevertheless), and analysts and consultants frequently ask me “Is Skype not too big to ignore?” and “Shouldn’t everybody make sure they can connect to Skype?” Another group of analysts and consultants is very excited about Google’s recent buying spree in voice and video technology. Google’s acquisitions of GIPS and On2, and last week’s announcement that the On2 VP8 video codec is being made open source, led to another set of questions about the impact of Google’s move on the reigning champion – H.264 – and on the alternative candidate for the HTML5 codec, Ogg Theora.

In general, there are two ways to promote a proprietary codec: claim that it has better quality than the standard codecs, and claim that it is “clean” of intellectual property rights (IPR) claims. Google tried both arguments, positioning VP8 as both higher quality than H.264 and as fully royalty free (as compared to the licensable H.264). Unfortunately, neither argument is easy to support. Independent comparisons showed that VP8 quality is lower than H.264 Main and High Profile, and only somewhat comparable to H.264 Baseline Profile. Moreover, H.264 is a toolbox that includes many options to improve codec performance, and its capabilities have not been exhausted. A recent proof point is Polycom’s implementation of H.264 High Profile for real-time communication – the first and so far only one in the industry – which led to a dramatic reduction of the network bandwidth required for video calls (white paper).

The IPR situation is also not as clear as Google wants us to believe, because VP8 uses many of the mechanisms found in H.264 and other codecs, which are probably covered by someone’s IPR. If you dig deep enough into any video or audio compression technology, you will at some point find similarities – for example, in the areas of frame prediction and encoding – so the IPR situation is never completely clear.

By the way, “open” only means that Google discloses the spec and allows others to implement it. Licensing, however, is only a small portion of the implementation cost, and redesigning products to add yet another open codec is an expensive proposition for any vendor.

Gateways and Chips

The introduction of proprietary codecs decreases interoperability in the industry. People sometimes dismiss the issue by saying “Just put a gateway between the two networks to translate between the different audio and video formats.” This sounds simple, but the impact on both the network and the user experience is dramatic. Gateways decrease scalability and become network bottlenecks, while the cost goes up because gateways, especially for video, require powerful and expensive hardware. Did I mention that the user experience suffers because gateways decode and re-encode the media, which introduces additional delay (reducing interactivity) and degrades audio/video quality?
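To see why the delay matters, here is a rough back-of-the-envelope sketch of a one-way delay budget with and without a transcoding gateway. All the numbers are illustrative assumptions, not measurements of any particular product:

```python
# Illustrative one-way delay budget for a video call, in milliseconds.
# Every number below is a hypothetical assumption for the sake of the example.

encode     = 40   # sender encodes a frame
network_a  = 30   # sender -> gateway (or sender -> receiver on a direct call)
decode_gw  = 30   # gateway decodes codec A
encode_gw  = 40   # gateway re-encodes into codec B
network_b  = 30   # gateway -> receiver
decode     = 30   # receiver decodes

direct    = encode + network_a + decode
gatewayed = encode + network_a + decode_gw + encode_gw + network_b + decode

print(f"direct call: {direct} ms one-way")    # ~100 ms
print(f"via gateway: {gatewayed} ms one-way") # ~200 ms
```

Even with generous assumptions, the extra decode/encode stage roughly doubles the one-way delay, and the second round of lossy compression is where the picture quality goes.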

So the real goal is to achieve interoperability, and standards are the means to do that without using gateways. Some more technically inclined folks may ask, “Why don’t you support multiple codecs in every endpoint and server, and just select one of them based on who you are talking to?” This is a great idea, and in fact it works to a certain extent. For example, Polycom video endpoints today support at least three video codecs (H.264, obviously, but also H.263 and H.261 for backward compatibility) and several audio codecs (G.719, G.722, G.711 …). You can of course add a few more codecs, but very quickly the codec negotiation process becomes so complex that the whole call setup turns into a nightmare.
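As a minimal sketch of the basic idea (not Polycom’s actual implementation), an endpoint can walk its own preference list and pick the first codec the remote side also offers. The preference lists below are hypothetical; real SIP/SDP negotiation also has to reconcile profiles, levels, packetization modes, and bitrates for every codec pair, which is where the complexity explodes:

```python
# Minimal sketch of codec selection between two endpoints.
# Preference lists are hypothetical examples, not a product configuration.

LOCAL_VIDEO = ["H.264", "H.263", "H.261"]   # our preference order, best first
LOCAL_AUDIO = ["G.719", "G.722", "G.711"]

def pick_codec(local_prefs, remote_offer):
    """Return the first locally preferred codec that the remote side also offers."""
    remote = set(remote_offer)
    for codec in local_prefs:
        if codec in remote:
            return codec
    return None  # no common codec: the call fails or a gateway is needed

# Example: a remote endpoint that only supports older codecs
remote_video_offer = ["H.263", "H.261"]
remote_audio_offer = ["G.711"]

print(pick_codec(LOCAL_VIDEO, remote_video_offer))  # -> "H.263"
print(pick_codec(LOCAL_AUDIO, remote_audio_offer))  # -> "G.711"
```

Every additional codec multiplies the number of combinations both sides must be tested against, which is why “just add one more” is never as cheap as it sounds.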

Also, codecs are most efficiently implemented in hardware (chips), and adding more codecs to a chip is not a trivial matter; it requires stable specs and a business case that spans a long period of time. Adding capabilities to chips increases their price, whether or not you ever use the extra codec. The worst-case scenario for a chip vendor is to spend the effort to add support for a proprietary codec, only to find out that the vendor owning that codec is not around anymore or has decided to move on to something else. The benefit of established standards is that there is already a substantial investment in hardware and software to support them. Therefore, while encouraging technology innovation, we at Polycom always highlight the need to support and cherish established industry standards that provide the foundation for universal connectivity around the world.

Is “Proprietary” Good or Bad?

There are a lot of good reasons to avoid proprietary implementations, and vendors that care about the industry as a whole collaborate and cooperate with other vendors for the common good. A recent example is Polycom’s submission of Siren 22 (a proprietary audio codec) to the ITU-T for standardization. Siren 22 is a great codec with outstanding quality, used by musicians for live music performance over IP networks. Ericsson submitted an alternative proposal, and Polycom worked with Ericsson to combine the two codecs into one superior codec that was accepted as the ITU-T G.719 standard (full story). This takes us to another benefit of a standard: it has been evaluated and discussed by a wider industry audience, which has concluded that the standard is necessary to satisfy a particular need. The process also guarantees that there is no functional or application overlap with other standards.

Some readers may be confused by the term “proprietary” being used with a negative connotation. Proprietary and patented technologies are often advertised as a guarantee that one product is better than another. We at Polycom are also proud of the many proprietary technologies that make our phones sound better and our video endpoints deliver crisper pictures. There is nothing wrong with improving quality and the user experience through proprietary means. “Proprietary” gets in the way when it hinders communication with other devices and servers in the network, and when it only allows communication within the proprietary system. Such closed systems create islands of communication that do not connect well with the outside world.

How can a proprietary implementation become a standard? Vendors can submit their proprietary implementation to a standardization organization such as the ITU-T or IETF, and argue for the benefits of the particular specification with other vendors, scientists, independent experts from the research community, and governments. Depending on how crowded this part of the market is, discussions may conclude quickly or drag on for years. The main complaint from vendors who opt to go to market with proprietary implementations is that they need fast Time To Market (TTM) to capture a business opportunity, while the standardization process takes time. It is again a discussion about the balance between personal gain and community benefit. Business opportunities come and go, but the need for stability and continuity in the communication market remains. While getting a standard approved takes substantial effort and patience, standards are still the only way to assure stability, backward compatibility, and customer investment protection across the industry.

The Wisdom of Standards

It took a lot of effort and hard work to create the standards we have today, and dropping them in favor of a myriad of proprietary implementations (“open” or not) seriously undermines interoperability efforts in the industry.

There are many examples of companies that tried, and were partially successful, with proprietary implementations but later realized the wisdom of standards. For years, PBX vendors like Avaya, Cisco, and Siemens marketed proprietary systems that provided enhanced functionality only internally or when connected to another system from the same vendor. Once connected to a system from a different vendor, functionality went down to just the basics. If you have monitored this market, you have seen how over the past few years all vendors moved to SIP-based systems which, while still having proprietary extensions, provide a high level of interoperability. In another example, Adobe introduced the proprietary On2 VP6 video codec into Flash and ran with it until Flash Version 8. Then, in Version 9, Adobe yielded to pressure from partners and customers and added support for the H.264 standard.

Beyond Standards and Towards True Interoperability

Does standards compliance guarantee full interoperability? Standards have a few mandatory functions and a wide range of optional ones. Vendors who support only the mandatory functions achieve basic interoperability; coordinating which optional functions to support is required for advanced interoperability. Many people have asked me about the recent announcement of the Unified Communications Interoperability Forum (UCIF), in which Polycom is a founding member with a board seat. Similar to a judge who interprets the law, UCIF will interpret standards in the Unified Communications space and come up with specifications and guidelines for how to make UC standards work 100%. The foundation is already laid by the existing standards for voice, video, presence, IM, etc. UCIF members will together make sure these communication tools work across networks and provide advanced functionality through a seamless user interface.


In summary, the discussion about standards vs. proprietary is really about fierce competition versus a collaborative “the rising tide lifts all boats” approach. There are plenty of areas where vendors can compete (user interfaces, audio/video capture and playback, compression profiles), but there are also areas where working jointly with existing standards and towards emerging standards drives growth of the communications market, and prosperity for the entire industry.

1 comment:

  1. This is an excellent post, describing standards and interoperability that are critical in all industries. Mine, the electronic design automation industry, faces the same challenges and seeks innovative solutions as well.

    Someday, we should compare notes.

    Karen Bartleson
    Author, "The Ten Commandments for Effective Standards"

    www.synopsys.com/blogs/thestandardsgame
