Monday, March 9, 2009

Summary of International SIP Conference, Paris, January 2009

I finally found time to post a summary of this year's (10th) International SIP Conference http://www.upperside.fr/sip2009/sip2009intro.htm. I presented at the event last year and this year, and can say that there was a huge difference between the two events. The discussions last year were mostly around peer-to-peer architectures based on SIP. This year’s event was more of a reality check of what SIP has accomplished and which promises remain unfulfilled. The SIP Conference reinforced my impression from other industry events in 2008 that standards in every communication field are under attack and that proprietary implementations are gaining momentum. This can be explained by the wave of new applications that emerged in the last few years and that were not around when the current standards were designed. And while it is possible to implement new applications using existing standards such as SIP, many developers and companies choose to develop proprietary technologies that better fit their particular application. Unfortunately, while this approach optimizes application performance, it leads to non-interoperable islands.

Back to the SIP Conference … The audience included representatives from the SIP research and development communities in Europe, North America, and Asia-Pacific. Two of the SIP creators – Henry Sinnreich and Henning Schulzrinne – presented, and I also enjoyed the keynote by Venky Krishnaswamy from Avaya and the presentation from my friend Ingvar Aaberg from Paradial.

Venky kicked off the event with a keynote about SIP's history and present status. SIP was meant to be the protocol of convergence but found its wide adoption in Voice over IP, while having mixed success in non-VoIP applications such as instant messaging and presence. Venky focused on the changes in the way people communicate today and on the many alternatives to voice communication: SMS, chat, social networking, blogs, etc. Most of the interest today is therefore in using SIP for advanced services, video and collaboration, and Web 2.0 applications. The bottom line is that SIP is now one of many protocols and has to coexist with all the other standard and proprietary protocols out there.

Henning focused on the need for better interoperability across SIP implementations. Today, interoperability is the vendors’ responsibility, but interoperability problems hurt the entire SIP community. SIPit interoperability test events are great for new SIP devices in development but do not scale to cover all SIP devices and their frequent software updates. Henning argued that an online SIP interoperability test tool is required to automate the test process. He suggested starting with tests of simple functions such as the registration call flow, codec negotiation, and signaling delay measurement, and then expanding the tests to more complex functions, e.g. security.
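
To make the idea concrete, here is a minimal sketch of the kind of automated check Henning described: send a SIP REGISTER over UDP and measure the signaling delay to the first response. The registrar address, user name, and local IP below are placeholders I made up for illustration; a real test tool would also validate the response, follow the authentication challenge, and check codec negotiation on an INVITE.

```python
# Minimal SIP registration probe: send REGISTER and time the first response.
# sip.example.com, alice, and 192.0.2.10 are placeholder values.
import socket
import time
import uuid

REGISTRAR = ("sip.example.com", 5060)   # hypothetical registrar under test
LOCAL_IP = "192.0.2.10"                 # placeholder local address

def build_register(user: str, domain: str) -> bytes:
    call_id = uuid.uuid4().hex
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]
    lines = [
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP {LOCAL_IP}:5060;branch={branch}",
        "Max-Forwards: 70",
        f"From: <sip:{user}@{domain}>;tag=reg-{call_id[:8]}",
        f"To: <sip:{user}@{domain}>",
        f"Call-ID: {call_id}@{LOCAL_IP}",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{user}@{LOCAL_IP}:5060>",
        "Expires: 3600",
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines).encode()

def measure_register_delay(user: str, domain: str, timeout: float = 2.0) -> float:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    start = time.monotonic()
    sock.sendto(build_register(user, domain), REGISTRAR)
    response, _ = sock.recvfrom(4096)      # first response, e.g. 401 or 200
    delay = time.monotonic() - start
    status_line = response.split(b"\r\n", 1)[0].decode(errors="replace")
    print(f"{status_line}  ({delay * 1000:.1f} ms)")
    sock.close()
    return delay

if __name__ == "__main__":
    measure_register_delay("alice", "sip.example.com")
```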

Henry now works for Adobe and is trying to persuade their application guys (Flash, AIR, and Connect) to use SIP for collaboration applications. Since Adobe limits software size to assure fast downloads, the SIP client must have a small footprint, and Henry’s message to the Conference was about the need for a simplified SIP specification. There are currently about 140 SIP RFCs (Request for Comments, or RFC, is what standards documents are called in the IETF). While this complexity is business as usual for telecom vendors, it seems to be too much for software companies such as Adobe, Google, and Yahoo. Henry suggested focusing on the endpoint / user agent functionality – since only the endpoint knows what the user wants to do – and combining the ten most important RFCs into a base SIP specification: draft-sinnreich-sip-tools.txt at ftp://ftp.rfc-editor.org/in-notes/internet-drafts/. Henry is also very interested in cloud computing, which allows reducing the number of servers.

Thinking back, the complexity discussion comes up every time a new set of companies enters the communications market. I have lived through two waves of complexity discussions. The first one was when VoIP emerged and new VoIP companies criticized legacy PBX vendors for the complexity of their protocols and for the hundreds of obscure features supported in their PBXs. Later, there was a complexity discussion – if not an outright war – between companies pioneering SIP and companies using H.323. At the time, SIP was just a couple of RFCs and H.323 was this big binder including several specifications and all sorts of annexes. So the SIP proponents called for simplicity and argued that SIP should replace H.323 and make everything simpler. Now that SIP has reached 140 RFCs, the argument that SIP is too complex comes from the proprietary camp. I think it is important to put these things into perspective. Nevertheless, I really hope that Henry’s effort succeeds in the IETF, and I am looking forward to meeting him at the 74th IETF Meeting in San Francisco, March 22-27.

Ingvar talked about the tradeoff between SIP accessibility and security. The ICE firewall traversal standard is emerging, but there are still interoperability issues. ICE does dynamic end-to-end probing and deploys STUN and TURN only when necessary. What are the ICE alternatives? A VPN creates a private network, is complex to deploy, and, since all traffic is relayed, consumes a lot of bandwidth. Session Border Controllers have connectivity and QoS issues. HTTP tunneling has serious QoS issues. So I guess there is no real alternative to ICE, then.
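
For readers less familiar with ICE, here is a rough sketch of the server-reflexive probing step it relies on: send a STUN Binding Request (RFC 5389) and decode the XOR-MAPPED-ADDRESS to learn how the NAT sees this host. The STUN server address is a made-up placeholder; real ICE gathers host, server-reflexive, and relayed (TURN) candidates and then runs connectivity checks between candidate pairs, which is where the end-to-end probing happens.

```python
# STUN Binding Request sketch: discover the server-reflexive address/port.
# stun.example.com is a placeholder STUN server.
import os
import socket
import struct

STUN_SERVER = ("stun.example.com", 3478)  # hypothetical STUN server
MAGIC_COOKIE = 0x2112A442

def discover_reflexive_address(timeout: float = 2.0):
    # 20-byte STUN header: Binding Request (0x0001), zero-length body,
    # magic cookie, and a random 96-bit transaction ID.
    transaction_id = os.urandom(12)
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + transaction_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(request, STUN_SERVER)
    response, _ = sock.recvfrom(2048)
    sock.close()

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (type 0x0020).
    pos = 20
    while pos + 4 <= len(response):
        attr_type, attr_len = struct.unpack_from("!HH", response, pos)
        value = response[pos + 4 : pos + 4 + attr_len]
        if attr_type == 0x0020 and len(value) >= 8:
            x_port = struct.unpack_from("!H", value, 2)[0] ^ (MAGIC_COOKIE >> 16)
            x_addr = struct.unpack_from("!I", value, 4)[0] ^ MAGIC_COOKIE
            ip = socket.inet_ntoa(struct.pack("!I", x_addr))
            return ip, x_port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
    return None

if __name__ == "__main__":
    print(discover_reflexive_address())
```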

My presentation, ‘SIP as the Glue for Visual Communications in UC’, was about the applications and key characteristics of visual communications and the need to integrate it with VoIP, IM, and presence applications. I focused on the integration of Polycom video systems into Microsoft, IBM, Alcatel, and Nortel environments.

3 comments:

  1. Proprietary implementations will always be favored because of the performance benefits they bring. If you use VoIP over EVDO as an example, it all boils down to a bandwidth efficiency issue. A binary format is needed so that the signaling can be sent over the control channel for a quick response time. Fast call setup times and reduced latencies are sometimes the only differentiating factors amongst different vendors. If there is ever a need to make the product interoperable with another vendor’s, a gateway can be used.

    From the business side, when a proprietary technology earns the status of de facto standard, the licensor of that technology is afforded a legal monopoly and an unprecedented amount of market control. You’re seeing this from QUALCOMM with its CDMA technology in the 3G marketplace and Intel with its x86 architecture in the PC market.

  2. There are examples of successful business models built around both standard and proprietary technologies. If you argue that Qualcomm was successful in developing CDMA and pushing it as a standard, my counterargument would be that GSM found much wider adoption than CDMA, mainly because it was created by consensus and did not require a license from a single company.

    Proprietary technologies create competitive advantage, and using them is fine as long as they do not hinder interoperability and limit competition. So while there is a place for proprietary technologies in communication products (e.g. acoustic technologies in phones and other clients), communication protocols tend to be standards.

    If you look at the history of networks, standardization started with the lower layers (physical, link, network, transport), which led to the worldwide adoption of Ethernet and IP. Applications, on the other hand, reside at the highest layer and are still mostly proprietary.

    The battle between standards and proprietary implementations is now being fought at the session layer, where SIP and H.323 reside. The session layer is interesting because it is the boundary between networking vendors and application vendors, and more players are involved.

  3. Part of why CDMA was behind in terms of early adoption is that it was standardized later, plus the well-known fact that people didn't want to pay Qualcomm royalties. As CDMA proved to be the technically superior solution, the global GSM community chose WCDMA as the air interface for its 3G evolution path.

    Open standards and proprietary technologies give people a range of choices in how they build their systems. This forces vendors to continually update their products in order to remain competitive, and standards bodies to evolve the protocols they're championing.

    Has CDMA hindered interoperability? The answer is NO; it has actually supported interoperability in the way customers demand. An AT&T customer can contact a Verizon customer and vice versa.
