I finally found time to post a summary of this year's (10th) International SIP Conference http://www.upperside.fr/sip2009/sip2009intro.htm. I presented at the event last year and this year, and can say that there was a huge difference between the two events. The discussions last year were mostly around peer-to-peer architectures based on SIP. This year's event was more of a reality check of what SIP has accomplished and which promises remain unfulfilled. The SIP Conference reinforced my impression from other industry events in 2008 that standards in every communication field are under attack and that proprietary implementations are gaining momentum. One explanation is the wave of new applications that emerged in the last few years and were not around when the current standards were designed. And while it is possible to implement new applications using existing standards such as SIP, many developers and companies choose to develop proprietary technologies that better fit their particular application. Unfortunately, this approach, while optimizing application performance, leads to non-interoperable islands.
Back to the SIP Conference … The audience included representatives from the SIP research and development communities in Europe, North America, and Asia-Pacific. Two of the SIP creators – Henry Sinnreich and Henning Schulzrinne – presented, and I also enjoyed the keynote by Venky Krishnaswamy from Avaya and the presentation from my friend Ingvar Aaberg from Paradial.
Venky kicked off the event with a keynote about SIP's history and present status. SIP was meant to be the protocol of convergence but got wide adoption in Voice over IP while having mixed success in non-VoIP applications such as instant messaging and presence. Venky focused on the changes in the way people communicate today and on the many alternatives to voice communication: SMS, chat, social networking, blogs, etc. Most interest today is therefore in using SIP for advanced services, video and collaboration, and Web 2.0 apps. The bottom line is that SIP is now one of many protocols and has to coexist with all the other standard and proprietary protocols out there.
Henning focused on the need for better interoperability across SIP implementations. Today, interoperability is the vendors' responsibility, but interoperability problems hurt the entire SIP community. SIPit interoperability test events are great for new SIP devices in development but do not scale to cover all SIP devices and their frequent software updates. Henning argued that an online SIP interoperability test tool is needed to automate the test process. He suggested starting with simple functions such as the registration call flow, codec negotiation, and measuring signaling delay, and then expanding the tests to more complex functions, e.g. security.
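To make the idea concrete, here is a minimal sketch (in Python; this is my illustration, not Henning's actual tool) of the simplest test he mentioned: build a REGISTER request per RFC 3261, check the response status, and time the exchange. The domain, user, and tokens are made-up example values, and the transport is injected as a callback so the flow can be exercised without a live server.

```python
import re
import time

def build_register(domain, user, branch, tag, call_id):
    """Compose a minimal RFC 3261 REGISTER request (CRLF line endings)."""
    return (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP client.{domain};branch={branch}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={tag}\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: {call_id}\r\n"
        f"CSeq: 1 REGISTER\r\n"
        f"Contact: <sip:{user}@client.{domain}>\r\n"
        f"Expires: 3600\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

def parse_status(response):
    """Extract the numeric status code from a SIP response status line."""
    m = re.match(r"SIP/2\.0 (\d{3}) ", response)
    if not m:
        raise ValueError("not a SIP response")
    return int(m.group(1))

def timed_exchange(send):
    """Run one REGISTER exchange and report the status and signaling delay."""
    t0 = time.monotonic()
    response = send(build_register("example.com", "alice",
                                   "z9hG4bK776asdhds",  # branch must start with z9hG4bK
                                   "1928301774", "a84b4c76e66710"))
    delay = time.monotonic() - t0
    return parse_status(response), delay
```

A real online tool would, of course, send this over UDP or TCP to the device under test and compare many more header fields; the point is that the registration flow is mechanical enough to automate.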
Henry now works for Adobe and is trying to persuade their application teams (Flash, AIR, and Connect) to use SIP for collaboration applications. Since Adobe limits software size to ensure fast downloads, the SIP client must have a small footprint, and Henry's message to the Conference was about the need for a simplified SIP specification. There are currently about 140 SIP RFCs (Request for Comments, or RFC, is what standards documents are called in the IETF). While this complexity is business as usual for telecom vendors, it seems to be too much for software companies such as Adobe, Google, and Yahoo. Henry suggested focusing on the endpoint / user agent functionality – since only the endpoint knows what the user wants to do – and combining the ten most important RFCs into a base SIP specification: draft-sinnreich-sip-tools.txt at ftp://ftp.rfc-editor.org/in-notes/internet-drafts/. Henry is also very interested in cloud computing, which allows reducing the number of servers.
Thinking back, the complexity discussion comes up every time a new set of companies enters the communications market. I have lived through two waves of complexity discussions. The first came when VoIP emerged and new VoIP companies criticized legacy PBX vendors for the complexity of their protocols and the hundreds of obscure features supported in PBXs. Later, there was a complexity discussion – if not an outright war – between companies pioneering SIP and companies using H.323. At the time, SIP was just a couple of RFCs, while H.323 was a big binder of several specifications and all sorts of annexes. So the SIP proponents called for simplicity and argued that SIP should replace H.323 and make everything simpler. Now that SIP has reached 140 RFCs, the argument that SIP is too complex comes from the proprietary camp. I think it is important to put these things into perspective. Nevertheless, I really hope that Henry's effort succeeds in the IETF, and I am looking forward to meeting him at the 74th IETF Meeting in San Francisco, March 22-27.
Ingvar talked about the tradeoff between SIP accessibility and security. The ICE firewall-traversal standard is emerging, but there are still interoperability issues. ICE does dynamic end-to-end probing and brings in STUN and TURN when necessary. What are the alternatives to ICE? A VPN creates a private network, is complex to deploy, and, since all traffic is relayed, consumes a lot of bandwidth. Session Border Controllers have connectivity and QoS issues. HTTP tunneling has serious QoS issues. So there seem to be no real alternatives to ICE.
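For readers less familiar with the mechanism: the probes ICE sends are STUN Binding Requests, and the server's answer carries the client's public address XOR'ed against a fixed "magic cookie". A minimal sketch of that RFC 5389 packet layout (offline: it only builds and decodes packets; a real ICE agent would send them over UDP and retransmit):

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442    # fixed value defined by RFC 5389
BINDING_REQUEST = 0x0001     # STUN message type for a Binding Request

def build_binding_request(txn_id=None):
    """20-byte STUN header: message type, length 0 (no attributes),
    magic cookie, and a 12-byte transaction ID."""
    txn_id = txn_id if txn_id is not None else os.urandom(12)
    assert len(txn_id) == 12
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + txn_id

def decode_xor_mapped_ipv4(xport, xaddr):
    """Recover the public (reflexive) IPv4 address and port that a STUN
    server reports in the XOR-MAPPED-ADDRESS attribute."""
    port = xport ^ (MAGIC_COOKIE >> 16)
    addr = xaddr ^ MAGIC_COOKIE
    ip = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
    return ip, port
```

This is the "probing" half of ICE; when a direct path cannot be found, the same message format is reused over TURN to allocate a relay.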
My presentation, 'SIP as the Glue for Visual Communications in UC', was about the applications and key characteristics of visual communications and the need to integrate them with VoIP, IM, and presence applications. I focused on the integration of Polycom video systems into Microsoft, IBM, Alcatel, and Nortel environments.