Monday, May 31, 2010

Why Standards? Why Interoperability?

The Importance of Standards

Standards and interoperability have always been the foundation of the video conferencing industry, and for many people in this business the need for standards and interoperability is self-explanatory. Using standards to connect systems from different vendors into a seamless network is the best way to ensure end-to-end audio and video quality and protect customer investments. However, we frequently forget that – as video becomes an intrinsic part of all kinds of communication tools – the overwhelming majority of new users do not quite appreciate the importance of standards. As a result, questions about what standards are and why we need them have been popping up much more often lately. Instead of answering these questions in separate emails, I decided to write a detailed, and maybe somewhat lengthy, but hopefully balanced and useful overview of the topic.

A technical standard is an established norm or requirement, usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices. Technical standards have been important in all areas of our lives. They make sure, for example, that electrical appliances support 120V in the USA and 230V in Germany, and that when you plug a toaster into the outlet, it just works.

Another demonstration of the power of standards can be found in the railway system. Most of the railways in the world are built to a standard gauge that allows engines and cars from different vendors to use the same rails. The decision of the Royal Commission in 1845 to adopt George Stephenson’s gauge (4 feet 8 1⁄2 inches, or 1435 mm) as the standard gauge stimulated commerce because different railway systems could be seamlessly interconnected.

In the area of communications technology, analog telephone standards still allow billions of PSTN users to connect with each other today. With the migration to digital technology, new standards for voice (and later video and other communications) had to be agreed on to enable systems and their users across the world to communicate.

How do Communications Standards Emerge?

Standards are usually created by consensus: a number of vendors meet at an industry standardization organization and, over a period of time (which may be months or years), work out a specification that is acceptable to all parties involved. Difficulties emerge when participating vendors already have proprietary implementations and try to model the standard after what they already have. The motivation is mostly financial: redesigning existing products to meet a standard is expensive. When implementing a standard, vendors may also be giving up some intellectual property and competitive differentiation. Negotiating a standard is therefore a balancing act between the vendors' interests and the interest of the community and the industry.

Sometimes the standardization process stalls, and other means are used to set standards. Governments may get tired of waiting for an agreement in a particular industry, pick a standard, and allow sales only of products that comply with it. Governments are especially concerned with security standards, and there are many examples of government-set standards for communication security.

Markets sometimes enforce de-facto standards: a player in a certain market segment has such a large market share that its specification becomes the standard for everybody else who wants to connect to the market leader. This creates a lot of problems in emerging markets where market shares change rapidly, and companies that are rising fast today lose to the “next big thing” tomorrow. Standards are designed for the long run, while proprietary implementations may come and go very fast.

Skype and Google

Today is no different from any other day in human history, and the battle between standards and proprietary implementations continues. Skype is getting a lot of attention with its hundreds of millions of users (a small portion of them active, but nevertheless), and analysts and consultants frequently ask me “Is Skype not too big to ignore?” and “Shouldn’t everybody make sure they can connect to Skype?” Another group of analysts and consultants is very excited about Google’s recent buying spree in voice and video technology. Google’s acquisitions of GIPS and On2, and last week’s announcement about making the On2 VP8 video codec open source, led to another set of questions about the impact of Google’s move on the reigning champion, H.264, and on the alternative HTML5 codec candidate, Ogg Theora.

In general, there are two ways to promote a proprietary codec: claim that it has better quality than standard codecs, and claim that it is “clean” of intellectual property rights (IPR) claims. Google tried both arguments, positioning VP8 as both higher quality than H.264 and fully royalty free (as compared to the licensable H.264). Unfortunately, neither argument is easy to support. Independent comparisons showed that VP8 quality is lower than H.264 Main and High Profile, and roughly comparable to H.264 Baseline Profile. H.264 is a toolbox that includes many options to improve codec performance, and its capabilities have not been exhausted. A recent proof point is Polycom’s first and only implementation of H.264 High Profile for real-time communication, which led to a dramatic reduction in network bandwidth for video calls (white paper).

The IPR situation is also not as clear as Google wishes us to believe, because VP8 uses many of the same mechanisms as H.264 and other codecs, and some of them are probably covered by someone’s IPR. If you dig deeper into any video or audio compression technology, you will at some point find similarities, for example in the areas of frame prediction and encoding, so the IPR situation is never completely clear.

By the way, “open” only means that Google discloses the specs and allows others to implement them. Licensing, however, is only a small portion of the implementation cost, and redesigning products to add yet another “open” codec is an expensive proposition for any vendor.

Gateways and Chips

The introduction of proprietary codecs decreases interoperability in the industry. People sometimes dismiss the issue by saying “Just put a gateway between the two networks to translate between the different audio and video formats.” This sounds simple, but the impact on both the network and the user experience is dramatic. Gateways dramatically decrease scalability and become network bottlenecks, while the cost goes up because gateways, especially for video, require powerful and expensive hardware. Did I mention that the user experience suffers because gateways introduce additional delay (which reduces interactivity) and degrade audio and video quality?
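
To make the latency point concrete, here is a rough back-of-the-envelope sketch. All per-stage figures are purely illustrative assumptions of mine, not measurements of any particular product; the point is simply that a transcoding gateway adds a full decode/re-encode cycle and an extra network leg to every stream.

```python
# Rough one-way latency budget for a video call, with and without a
# transcoding gateway in the path. All per-stage values are illustrative
# assumptions, not measurements of any specific product.

ENCODE_MS = 40        # assumed capture + encode time at the sending endpoint
NETWORK_LEG_MS = 30   # assumed network transit per leg
TRANSCODE_MS = 80     # assumed decode + re-encode time inside the gateway
DECODE_MS = 40        # assumed jitter buffer + decode + render at the receiver

def direct_call_latency():
    """Endpoints negotiate a common codec; no transcoding in the path."""
    return ENCODE_MS + NETWORK_LEG_MS + DECODE_MS

def gatewayed_call_latency():
    """A gateway fully decodes and re-encodes the stream between two codecs."""
    return ENCODE_MS + NETWORK_LEG_MS + TRANSCODE_MS + NETWORK_LEG_MS + DECODE_MS

if __name__ == "__main__":
    print(f"direct call:      ~{direct_call_latency()} ms one-way")
    print(f"via gateway:      ~{gatewayed_call_latency()} ms one-way")
    print(f"added by gateway: ~{gatewayed_call_latency() - direct_call_latency()} ms")
```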

So the real goal is to achieve interoperability, and standards are the means to do that without using gateways. Some more technically inclined folks might say “Why don’t you support multiple codecs in every endpoint and server, and just select one of them based on who you are talking to?” This is a great idea and in fact works to a certain extent. For example, Polycom video endpoints today support at least three video codecs (H.264, obviously, but also H.263 and H.261 for backward compatibility) and several audio codecs (G.719, G.722, G.711, and others). You can of course add a few more codecs, but very quickly the codec negotiation process becomes so complex that call setup turns into a nightmare.
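
As a rough illustration of that negotiation step, here is a minimal sketch loosely modeled on the offer/answer idea used in SIP/SDP. The codec lists are hypothetical, and a real endpoint also has to agree on profiles, levels, resolutions, and packetization options for every codec it advertises, which is where the complexity really explodes.

```python
# A minimal sketch of codec selection, loosely modeled on offer/answer
# negotiation. Codec names and preference orders are illustrative only.

def select_codec(offer, answerer_supported):
    """Return the first codec in the caller's preference order that the
    answerer also supports, or None if there is no common codec."""
    for codec in offer:
        if codec in answerer_supported:
            return codec
    return None

caller_offer = ["H.264", "H.263", "H.261"]        # caller's preference order
callee_codecs = {"ProprietaryX", "H.263", "H.261"}  # hypothetical callee without H.264

print(select_codec(caller_offer, callee_codecs))    # -> "H.263": falls back to an older codec
```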

Also, codecs are most efficiently implemented in hardware (chips), and adding more codecs to a chip is not a trivial matter; it requires stable specs and a business case that spans a long period of time. Adding capabilities to a chip increases its price, whether you ever use that codec or not. The worst-case scenario for a chip vendor is to spend the effort to add support for a proprietary codec, only to find out that the vendor owning that codec is not around anymore or has decided to move on to something else. The benefit of established standards is that there is already a substantial investment in hardware and software to support them. Therefore, while encouraging technology innovation, we at Polycom always highlight the need to support and cherish established industry standards that provide the foundation for universal connectivity around the world.

Is “Proprietary” Good or Bad?

There are a lot of good reasons to avoid proprietary implementations, and vendors that care about the industry as a whole collaborate and cooperate with other vendors for the common good. A recent example is Polycom’s submission of Siren 22 (a proprietary audio codec) to the ITU-T for standardization. Siren 22 is a great codec with outstanding quality, used by musicians for live music performance over IP networks. Ericsson submitted an alternative proposal, and Polycom worked with Ericsson to combine the two codecs into one superior codec that was accepted as the ITU-T G.719 standard (full story). This points to another benefit of a standard: it has been evaluated and discussed by a wider industry audience, which has concluded that the standard is necessary to satisfy a particular need. The process also guarantees that there is no functional or application overlap with other standards.

Some readers may be confused by the term “proprietary” being used with a negative connotation. Proprietary and patented technologies are often advertised as a guarantee that one product is better than another. We at Polycom are also proud of a lot of proprietary technologies that make our phones sound better and our video endpoints deliver crisper pictures. There is nothing wrong with improving quality and the user experience through proprietary means. “Proprietary” gets in the way when it hinders communication with other devices and servers in the network, and when it only allows communication within the proprietary system. Such closed systems create islands of communication that do not connect well with the outside world.

How can a proprietary implementation become a standard? Vendors can submit their proprietary implementation to a standardization organization such as the ITU-T or IETF and argue for the benefits of the particular specification with other vendors, scientists, independent experts from the research community, and governments. Depending on how crowded this part of the market is, discussions may conclude fast or drag on for years. The main complaint from vendors who opt to go to market with proprietary implementations is that they need fast time to market (TTM) to capture a business opportunity, while the standardization process takes time. It is again a discussion about the balance between personal gain and community benefit. Business opportunities come and go, but the need for stability and continuity in the communications market remains. While getting a standard approved takes substantial effort and patience, standards are still the only way to assure stability, backward compatibility, and customer investment protection across the industry.

The Wisdom of Standards

It took a lot of effort and hard work to create the standards we have today, and dropping them in favor of a myriad of proprietary implementations (“open” or not) would seriously undermine interoperability efforts in the industry.

There are many examples of companies that had partial success with proprietary implementations but later realized the wisdom of standards. For years, PBX vendors like Avaya, Cisco, and Siemens marketed proprietary systems that only provided enhanced functionality internally or when connected to another system from the same vendor. Once connected to a system from another vendor, functionality went down to just the basics. If you have monitored this market, you have seen how over the past few years all vendors moved to SIP-based systems which, while still having proprietary extensions, provide a high level of interoperability. In another example, Adobe introduced the proprietary On2 VP6 video codec into Flash and ran with it until Flash Version 8. Then, in Version 9, Adobe yielded to pressure from partners and customers and added support for the H.264 standard.

Beyond Standards and Towards True Interoperability

Does standards compliance guarantee full interoperability? Standards have a few mandatory functions and a wide range of optional functions. Vendors who support only the mandatory functions achieve basic interoperability, while coordinating which options are supported is required for advanced interoperability. Many people have asked me about the recent announcement of the Unified Communications Interoperability Forum (UCIF), in which Polycom is a founding member with a board seat. Similar to a judge who interprets the law, UCIF will interpret standards in the Unified Communications space and come up with specifications and guidelines for how to make UC standards work 100%. The foundation is already laid by the existence of standards for voice, video, presence, IM, etc. UCIF members will together make sure these communication tools work across networks and provide advanced functionality through a seamless user interface.
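
The mandatory-versus-optional distinction can be shown with a simplified sketch. The feature names below are hypothetical placeholders of mine, not taken from any specific standard or UCIF profile; the idea is that two “compliant” products only get advanced functionality where their optional feature sets happen to overlap.

```python
# A simplified sketch of why "standards compliant" does not automatically
# mean "fully interoperable". Feature names are hypothetical placeholders.

MANDATORY = {"basic_call", "g711_audio"}
OPTIONAL = {"hd_video", "content_sharing", "encryption", "far_end_camera_control"}

def interoperability_level(endpoint_a, endpoint_b):
    common = endpoint_a & endpoint_b
    if not MANDATORY <= common:
        return "no interoperability"
    advanced = common & OPTIONAL
    return f"basic interoperability + advanced options: {sorted(advanced) or 'none'}"

vendor_a = MANDATORY | {"hd_video", "encryption"}
vendor_b = MANDATORY | {"hd_video", "content_sharing"}

print(interoperability_level(vendor_a, vendor_b))
# -> basic interoperability + advanced options: ['hd_video']
```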


In summary, the discussion about standards versus proprietary is really about fierce competition versus a collaborative “the rising tide lifts all boats” approach. There are plenty of areas where vendors can compete (user interfaces, audio/video capture and playback, compression profiles), but there are also areas where working jointly with existing standards and towards emerging standards drives growth of the communications market, and prosperity for the entire industry.

Monday, May 3, 2010

Science Discovery and Advanced Networking 1.5 Miles Below the Earth's Surface

The Spring 2010 Internet2 Conference was superb! I have witnessed the increase in quality and diversity of the Internet2 conferences over the years, and hope that I have contributed to these changes, too. The fact is that Internet2 events are larger and more diverse, and they now include not only educational and research institutions but also government and health organizations, as well as a growing international attendance. Polycom’s participation in these events has also increased over time. In addition to the numerous presentations I have given (links are below in the ‘Speaking Engagements’ section), we have done amazing demos, including the TPX three-screen telepresence system that we built at the Fall 2009 Internet2 Conference as part of the telepresence interoperability effort. The spring event last week gathered 700 participants and was another excellent opportunity to experience collaboration tools with video capabilities, including the Polycom CMA Desktop and PVX soft clients, while Polycom HDX equipment was used in many sessions to connect remote participants from all around the world.

But nothing can compare to the astonishing video and audio quality used to connect LIVE both the former Homestake gold mine near Lead, South Dakota, and the office of the Governor of South Dakota to the conference hotel, the Marriott Crystal Gateway in Arlington, Virginia. All attendees gathered in the big ballroom for the general session "Science Discovery and Advanced Networking 1.5 Miles Below the Earth's Surface", which focused on the plans to convert the Homestake mine into a Deep Underground Science and Engineering Lab (DUSEL), where physicists, biologists, and geologists could research fundamental questions about matter, energy, life, and the Earth.

It looks like every kind of scientific research would benefit from the underground lab: geologists want to study the rocks and figure out why there is no more gold in the mine, while physicists want to study neutrinos and dark matter, and hide from the cosmic radiation that seems to screw up a lot of the experiments. Whatever they end up doing in this lab, it will result in a lot of data that has to be transported to research institutes around the world over a very fast network. And since getting in and out of the mine is not easy, advanced voice and video communication is needed for scientists underground to stay in touch with their peers on the surface. The general session gave a preview of what Polycom audio-video technology can do in the tough mine environment characterized by dust, water, and wide temperature variation.

The mine itself is up to 8,000 feet (2,438 meters) deep, and therefore the deepest in North America, but most of the work today is done at 4,850 feet (1,478 meters) underground, and that is exactly where the Polycom HDX 8000 system was installed. Optical fiber goes to the surface and connects to South Dakota’s Research, Education and Economic Development (REED) network, which supports two 10 gigabit per second waves and links the state’s six public universities. REED also connects with the Great Plains regional research and education network in Kansas City, which peers with Internet2. Internet2 links with the Mid-Atlantic regional network, which had a 1 gigabit per second link to the conference site in Arlington. Pretty much the same network, except the underground part, was used to connect the second remote participant in the session: the Governor of South Dakota, Michael Rounds. The original plan to have him in the mine was scrapped because of safety concerns, and another Polycom HDX 8000 system connected the governor’s office to Arlington.

I have seen many demos of Polycom technology over good networks. The Polycom corporate IP network is designed for audio and video and provides very good quality. BUT nothing I have seen compares to the perfect network used during the general session last week. Not a single packet was lost and the delay was just not there, so the interaction between on-site and remote participants was flawless. The HDX 8000 systems ran at High Definition 1080p video quality and full-band (22 kHz) audio quality over connections of 6 megabits per second. On one hand, the audience could see, hear, and almost smell the thick air in the deep mine. On the other hand, the pristine quality delivered a fully immersive experience and made everyone in Arlington feel ‘in the mine’. It felt surreal to be so close and so far away at the same time. 700 conference attendees joined me in that experience.
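
For readers curious how much work the video codec does in such a call, here is a quick back-of-the-envelope calculation. The frame rate and chroma format are my assumptions, since they were not stated during the session; only the 1080p resolution and the 6 megabit per second call rate come from the demo itself.

```python
# Back-of-the-envelope compression ratio for the demo's video stream.
# Assumed (not stated in the session): 30 frames per second and an
# 8-bit 4:2:0 uncompressed source.

width, height = 1920, 1080      # 1080p resolution
fps = 30                        # assumed frame rate
bits_per_pixel = 12             # 8-bit 4:2:0 -> 1.5 bytes = 12 bits per pixel

raw_bps = width * height * bits_per_pixel * fps
call_bps = 6_000_000            # the 6 Mbps call rate mentioned above

print(f"uncompressed:   {raw_bps / 1e6:.0f} Mbps")    # ~746 Mbps
print(f"call bandwidth: {call_bps / 1e6:.0f} Mbps")
print(f"compression:    ~{raw_bps / call_bps:.0f}:1")  # roughly 124:1
```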

It is impossible to capture the immersive experience during the session but I will try to at least give my blog readers some feeling of the event.

I took a picture of the Governor of South Dakota Michael Rounds speaking about the creation of an underground science lab in the Homestake mine. I also shot a short video of this part of the session.

When Kevin Lesko, DUSEL Principal Investigator, spoke from the Homestake mine, I took a still picture and shot a short video of him, too.

The Q&A part of the session used a split screen to allow conference attendees to see both the Governor and the team underground at the same time, and engage in live discussion. Here is a picture and a video clip from that part of the session.

The interaction in the Q&A session was spectacular. The Governor and the team in the Homestake mine answered numerous questions from the audience, and the conversation across the distance felt completely natural. At the conclusion of the session, Internet2 President and CEO Doug Van Houweling thanked all contributors. He thanked Polycom for providing the video equipment for this incredible discussion that highlighted both the advances of audio-video technology and the enormous capabilities of the Internet2 network.

Throughout the 75-minute session, the audio and video quality was impressive. Several attendees came to me after the session to share their surprise and excitement about the immersive experience. Most of them wanted to know how to make their own video conferencing systems deliver similar quality, which of course led to discussions about recent advances in audio and video technology, including compression, cameras, microphones, and networking.

I am sure several of my blog followers attended the session "Science Discovery and Advanced Networking 1.5 Miles Below the Earth's Surface", and I would love to get their comments.