Saturday, November 19, 2011

LTE and the Future of Mobile Networking


Upperside Conferences invited me to present on the advances in voice and video technology at the Voice over LTE conference last week. Upperside's conferences always focus on a specific technology subject and are perfect if you want to learn "everything" about it. This time the talks were all about Long Term Evolution (LTE), a key new technology that is revolutionizing mobile networks.

LTE versus VoLTE

LTE is a radio access technology that was developed in 3GPP to enhance performance and efficiency in mobile networks – more specifically, to increase bit rates (up to 150 Mbps), improve cell spectrum efficiency, and reduce air interface latency. Since LTE itself is a pure IP packet service, transporting voice, SMS, and other media like video and IM has to be specified separately.

Voice over LTE is defined as an end-to-end service that includes not only the efficient LTE radio but also the IP Multimedia Subsystem (IMS) core, the Evolved Packet Core (EPC), and LTE capable devices (dongles, tablets, and smart phones). The key difference between LTE and VoLTE is therefore that LTE is the wireless interface while VoLTE is a complete solution for transporting voice and SMS over the new network.

The problem is that the solution – although fairly new – already needs to be extended with IM, video, and other functions, so the "voice" in VoLTE is very limiting. The mobile networking community therefore created the Rich Communication Suite – enhanced (RCS-e), which extends the voice and SMS services with, for example, IM/chat, file transfer, and image and video share. It is roughly the equivalent of what we call Unified Communications in the enterprise communication space. Similar to UC, RCS-e is based on service/capability discovery and uses the SIP protocol. The role of SIP in enterprise UC is described here.

As with any other technology, getting the acronyms right is half of the work, so here are the important ones.

IMS is an architectural framework for delivering multimedia services. The idea is to decouple the application from the access method, so that the same functionality can be accessed over wireless (3G, 4G) and fixed networks. This idea is not new: web applications, for example, do not care about the underlying network as long as it carries HTTP; this is probably why they are dubbed Over The Top (OTT) applications in mobile networking lingo. IMS is a revolution in mobile networks, where traditionally separate functions were implemented for each type of network. By developing applications only once in the IMS core, mobile SPs can shorten time-to-market and compete more successfully with OTT applications. IMS is also designed to provide QOS for applications across the different access networks, and this is a major differentiator for mobile SPs.

EPC is basically an IP router with mobility intelligence that includes handover (switching the connection when the mobile device moves from one radio cell to another), roaming (providing access when users leave their own SP's network and enter the network of another SP), etc.

From Circuit Switched Voice to VoLTE

The LTE radio technology is different from 3G, and building LTE networks requires substantial capital investment, as pioneers Verizon and MetroPCS in the USA and Vodafone in Europe know very well. As a result, LTE networks will coexist with 2G and 3G networks for a long time, and it is critical to find a way to switch calls seamlessly from LTE to non-LTE networks when the user leaves LTE coverage. There are two ways to do that – Single Radio Voice Call Continuity (SRVCC) and Circuit Switched Fall Back (CSFB) – the former using a single radio and being less expensive, the latter using dual radios and costing more.

QOS

In terms of bandwidth, LTE can theoretically provide up to 150 Mbps shared in a radio cell. LTE bandwidth is symmetric, that is, the upstream bit rate can be equal to the downstream bit rate; this makes LTE the best choice for symmetric services such as online gaming and real-time voice and video. If there are 200 users in the radio cell, each user can get 0.75 Mbps, which is enough for high-quality H.264 video.
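
As a back-of-the-envelope check, here is the arithmetic behind that per-user estimate as a tiny Python sketch; the cell capacity and user count are simply the numbers quoted above, while real cells schedule per-user rates based on radio conditions and load:

    # Rough per-user share of a fully loaded LTE cell.
    CELL_CAPACITY_MBPS = 150.0   # theoretical LTE downlink per cell (quoted above)
    ACTIVE_USERS = 200

    per_user_mbps = CELL_CAPACITY_MBPS / ACTIVE_USERS
    print(f"~{per_user_mbps:.2f} Mbps per user")   # ~0.75 Mbps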

A bigger QOS problem in LTE networks is network congestion, which leads to rapidly increasing delay (and packet loss) with very little advance notice. In video conferencing, the receiving endpoint sends a congestion notification to the sender (SIP does that via RTCP, while H.323 uses the H.245 Flow Control message), and the sender down-speeds, that is, reduces either resolution or frame rate. Mobile networking vendors are researching ways to detect congestion at the radio cell level; that would require the radio node to send congestion notifications. Since the radio node sees all traffic from all users in the radio cell, it can give an advance warning. It will be important for mobile UC applications with video capabilities to listen for such notifications and down-speed – even if their own session is performing well.
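
To make the down-speeding behavior concrete, here is a minimal, hypothetical Python sketch of a sender reacting to receiver feedback; the loss thresholds and rate steps are illustrative assumptions, not values taken from any product or standard:

    # Illustrative sender-side down-speed logic driven by receiver feedback
    # (e.g., RTCP receiver reports in SIP or H.245 Flow Control in H.323).
    def adjust_send_rate(current_kbps, loss_fraction, min_kbps=256, max_kbps=4096):
        # Return a new target bitrate based on the reported packet loss fraction.
        if loss_fraction > 0.05:      # heavy loss: back off aggressively
            return max(min_kbps, int(current_kbps * 0.5))
        if loss_fraction > 0.01:      # mild loss: back off gently
            return max(min_kbps, int(current_kbps * 0.85))
        return min(max_kbps, int(current_kbps * 1.05))   # no congestion: probe upward

    rate = 2048
    for reported_loss in (0.0, 0.02, 0.08, 0.0):
        rate = adjust_send_rate(rate, reported_loss)
        print(rate)   # 2150, 1827, 913, 958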

Packet loss in LTE networks usually becomes a problem when the user is at the periphery of the cell where the radio signal is weak. When the signal strength is good and the user does not move, frame error rate can be as low as 0.2% which results in negligible packet loss.

To provide Quality of Service to applications, VoLTE defines QOS Class Identifiers (QCI); each of them is appropriate for a certain type of traffic. For example, the QCI1 bearer is optimized for VoIP/VoLTE. Tests show transmission latency of 140-160 ms, which has to be added to the RTP delay and the voice/video codec delay. The resulting end-to-end latency can therefore be higher than 200 ms, and more work has to be done to reduce the latency below the 200 ms limit critical for interactivity on voice and video calls.
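
A simple latency budget shows why; the radio number below is the midpoint of the 140-160 ms range quoted above, while the codec and jitter-buffer figures are illustrative assumptions only:

    # Hypothetical one-way latency budget for voice over a QCI1 bearer.
    budget_ms = {
        "LTE radio + bearer transport": 150,  # midpoint of the quoted 140-160 ms
        "codec framing and look-ahead": 30,   # assumed value for illustration
        "receive-side jitter buffer": 40,     # assumed value for illustration
    }
    total = sum(budget_ms.values())
    print(f"Estimated one-way latency: {total} ms")   # 220 ms, above the 200 ms target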

Another important QCI for real-time communication is QCI5, which is used for signaling (call setup/teardown). Currently, the end-to-end call setup time in 2G/3G networks is about 6 seconds, and VoLTE performs better: call setup times measured over the QCI5 bearer are about 2-3 seconds, even when the LTE device is in battery-saving mode.

The QOS issues in mobile networks listed above are not very different from the QOS issues in fixed IP networks a decade ago. Video conferencing technology has matured over a longer period of time and has therefore already implemented mechanisms for compensating for bandwidth reduction (for example, the down-speeding implemented in Polycom HDX video endpoints), packet loss (for example, Polycom Lost Packet Recovery), lip sync issues, etc.

LTE as a Fixed Network Replacement

I often hear that LTE is not a substitute for fixed access networks such as VDSL and FTTH, and my reaction is always "Why not?" As I learned at the VoLTE conference, there is a business case for using LTE instead of DSL to provide high-speed network access. For example, DSL providers in Germany pay a fee of 10 Euros for using the last mile of copper, and Internet SPs are very eager to get rid of this cost. Since LTE can provide bandwidths comparable to fixed lines, modem vendors are adding LTE to their portfolio of access technologies. A great example is the FRITZ!Box, the most popular home gateway in Germany, which combines a modem (DSL, cable, and since October – LTE), a router, a firewall, a Wi-Fi access point, and a DECT base station. It is a perfect solution for people like me who hate cables lying around the living room or office. Reported throughput over LTE is up to 100 Mbps downstream and 50 Mbps upstream, which makes it a Category 3 LTE device.

I have already heard that some service providers in the USA are experimenting with LTE as a fixed-line replacement for services to businesses. Considering the cost and time necessary to set up a fixed line (such as T1 or T3) to the customer premises, using LTE is a very attractive alternative for service providers. Now imagine that the business has an IP-PBX that uses SIP trunking to connect to a service provider. Bundling LTE and SIP trunking services is suddenly not a far-fetched idea.

All in all, I think LTE will impact both residential high-speed access and business services provided by service providers. The only "if" is Quality of Service. According to the discussions at the VoLTE conference, QOS mechanisms in LTE perform well in test environments and initial field deployments, but will they do a good job once the LTE network is flooded with LTE-capable devices competing for resources?

Conclusion

The great thing about LTE is that it makes the mobile network look like any other IP network. Enterprise IP applications that used to require complex gateways to interface with mobile networks will now be able to run on the mobile network without much customization. The low-hanging fruit is running Unified Communications soft clients like Polycom Real Presence Mobile on media tablets and leveraging the LTE network to connect back to the enterprise network. Going IP end-to-end will not only help reduce complexity but also cut latency to the absolute minimum. More work will be required in the area of QOS, especially in congestion detection and notification at the radio cell level.

Mobile Service Providers will continue to develop IMS-based applications, also leveraging RCS-e, and it will be interesting to track the adoption of IMS applications versus other, so-called OTT, applications. While most Video Networker followers come from an enterprise background and are therefore familiar with the efforts in the enterprise UC environment, we should not ignore the efforts in the mobile networking space to solve the fundamental UC problem.

Thursday, October 6, 2011

EduTech and the New Polycom Office in Moscow

The EduTech conference in Moscow was a gathering of representatives from schools, universities, and corporate training organizations in the Russian Federation to discuss new technologies and methods for teaching and training remotely. Understandably, this topic is very hot in a country that stretches over 9 time zones (11 before the reform in 2010) and that requires a lot of communication between Moscow and the regions.

I had the pleasure to present on my favorite topic "Music Performance and Instruction over High-Speed Networks" (in Russian "Видео для музыкального обучения и трансляции концертов"). The presentation was part of a session dedicated to video technology for education, and resulted in many questions and discussions during and after the session. Did I mention that I enjoy presenting in Russian?

While in Moscow, I also got an early peek at the new Polycom office and Executive Briefing Center that officially opened on October 4. The office is located on Paveletskaya Square, which makes it easily accessible from Domodedovo International Airport (DME) – a non-stop train connects the two – and through the Moscow subway system. The Business Center "Paveletskaya Plaza" is a beautiful 26-story tower visible from afar and has a modern lobby with well-organized security.

Polycom has the entire 23rd floor of the building which results in spectacular views in all directions.

The new office is not only home to the Polycom employees in Moscow but also hosts the latest Polycom technology, including RPX 400 and OTX 300 immersive telepresence systems, all connected via high-bandwidth networks to other Polycom offices. I could not resist the temptation and placed a couple of telepresence calls across the Atlantic. The picture was crystal clear, and the stats showed 6 Mbps of network bandwidth with no packet loss.

To the followers of Video Networker in the Russian Federation: I highly recommend making an appointment (otherwise security will not let you in) and visiting the new Polycom office in Moscow.

Monday, September 12, 2011

How the Migration to IP Improves Voice Quality

Back in the early years of Voice over IP, the quality was not great in comparison to TDM systems. Since IP networks did not have enough bandwidth and quality of service, voice had to be compressed heavily to be sent over the IP network. TDM solutions, by contrast, did not compress voice, and since there was a physical connection between the TDM system and the TDM phone, they did not need to do much packetization either. The result was better voice quality on TDM phones than on IP phones – until the advent of fast IP LANs and wideband audio codecs in the 2000s entirely changed the balance.

Most VOIP phones shipping today have some sort of HD Voice support. Polycom has been shipping HD Voice for 10 years, starting with 7 kHz voice, then moving to 14 kHz audio in 2003 and to 20+ kHz audio in 2006. While the voice industry as a whole is only now moving to 7 kHz voice, Polycom has moved well beyond – to 14 kHz and even 20+ kHz audio (with the Siren 22 and G.719 codecs). It is not just about "voice" anymore but rather about "audio" – the technology has gone beyond speech transmission and allows for high-quality music and mixed content.

HD Voice is not only about better-quality codecs. The acoustics of the handset have to be improved, while microphones and speakers have to be modified to capture and play higher-quality voice and audio. Echo cancellation and other algorithms have to be adjusted to support the wider frequency band. The challenges around transmitting high-quality audio are described in a joint white paper by Polycom and the Manhattan School of Music. The paper highlights our focus on audio quality and demonstrates our capability to meet the requirements of the most demanding users: musicians. The technology developed for this high-end application trickles down to room and personal telepresence systems and telephones, effectively spreading across the entire Polycom portfolio.

So how does a communication system capture the value of the high-quality audio now available in VOIP phones? The key is migration to a distributed architecture that routes voice streams without transcoding them back to the TDM format (G.711 codec). If audio is delivered without transcoding between two communication partners on the system, the quality remains the highest possible (assuming that both partners have high-quality VOIP phones). The matter gets a little more complicated with multipoint calls because most voice conferencing servers embedded in enterprise voice systems support only G.711. If a video conferencing server such as the Polycom RMX is part of a Unified Communications solution, the unused video ports on this server can be configured to support audio (up to the highest audio quality of 20+ kHz). Audio requires far less performance than video; therefore, one video port becomes 40 audio ports, and that is enough scalability for enterprise deployments. Long term, however, wideband audio will gradually be supported on all conference servers in enterprise systems, starting with the 7 kHz G.722 wideband codec, which is widely supported in newer IP phones.
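
A quick capacity sketch in Python illustrates the scaling; the only input is the 1:40 video-to-audio port ratio mentioned above, and actual RMX capacities depend on model and configuration:

    # Rough audio-port capacity when unused video ports are repurposed for audio.
    AUDIO_PORTS_PER_VIDEO_PORT = 40   # ratio quoted above; model-dependent in practice

    def audio_capacity(unused_video_ports):
        return unused_video_ports * AUDIO_PORTS_PER_VIDEO_PORT

    print(audio_capacity(10))   # 10 spare video ports -> 400 wideband audio ports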

Once the multipoint problem is solved, the only remaining issue is connectivity to other systems across service provider networks. Most voice systems today still use TDM connections (such as T1 or PRI) to connect to service providers. This TDM connection takes the voice quality down to G.711 due to physical limitations. Newer systems, however, support the so-called SIP trunking standard (the specification is managed by the SIP Forum), which allows connecting the enterprise voice system to a service provider over an IP connection and a virtual trunk with SIP signaling. This virtual trunk does not impose any physical limitations on the voice streams (it is just IP packets crossing the network); therefore, any voice quality can be supported – as long as both the enterprise and the SP systems can handle it. SIP trunks enable wideband voice to travel among enterprise communications systems around the world without any transcoding or quality loss.
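
Conceptually, preserving wideband quality across such a trunk comes down to the codec list in the SDP offer. The Python snippet below builds a simplified offer that prefers G.722 and keeps G.711 as a fallback; the payload numbers are the standard static assignments (9 for G.722, 0 for G.711 µ-law), while the rest of the session description is a bare-bones sketch rather than a complete, production-grade offer:

    # Minimal sketch of an SDP audio offer preferring wideband G.722 over G.711.
    def build_audio_offer(local_ip, rtp_port):
        return "\r\n".join([
            "v=0",
            f"o=- 0 0 IN IP4 {local_ip}",
            "s=wideband-call",
            f"c=IN IP4 {local_ip}",
            "t=0 0",
            f"m=audio {rtp_port} RTP/AVP 9 0",  # G.722 listed first, G.711 as fallback
            "a=rtpmap:9 G722/8000",             # G.722 is signaled with an 8000 Hz clock rate
            "a=rtpmap:0 PCMU/8000",
        ]) + "\r\n"

    print(build_audio_offer("192.0.2.10", 49170))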

Will wideband voice make its way to wireless handsets? The latest generation of wireless handsets – for example, the Polycom Spectralink 8400 – already supports wideband voice (7 kHz, G.722, G.722.1), and as long as the voice stream can reach the destination in its original form, the receiver enjoys the clarity and improved intelligibility of wideband voice communication. The challenges related to multipoint conferencing and SP trunking apply equally to wireless phones, as they are treated like any other phone in the IP communication system environment.

In conclusion, voice technology has made amazing progress over the past decade. The work of researchers and engineers is now finally finding its way into enterprise communication solutions that provide better quality, reduce misunderstandings and fatigue, and, in general, make human interactions over distance more natural and effortless.

Thursday, July 14, 2011

Focus Webinar “The Truth about Unified Communications for SMB”

I got invited to speak about UC and video in a Focus webinar for the Small and Medium Business Community. Since I usually talk to larger organizations, this webinar was a great opportunity to evaluate how the solutions available in the UC and video space apply (or don't apply) to SMBs.

First of all, the SMB definition varies by country: for example, the US Government defines companies with fewer than 500 employees as SMBs, while in Germany it is companies with fewer than 250 employees. Since the audience of the webinar was mostly in North America, I created a story around a fictional SMB with about 300 employees distributed across three larger offices and several small sales offices.

The webinar had two parts. In the first 30 minutes, I covered definitions of "collaboration" and "communication", UC scope and market size, and talked about the real value of UC to SMBs. Then I described the types of UC solutions, the value of video as part of the UC solution, and finished by dispelling the myth about superior single-vendor solutions, which also directly relates to the trend towards UC ecosystems, and the increased importance of standards and interoperability.

In the second part, I focused on what SMBs should consider when deploying video. The presentation basically led the audience through the steps of building a video network from scratch to a fully functional multi-site network that spans across geographies and connects to partners, suppliers, and customers. The extensibility and scalability aspect is very important because many SMBs are growing fast and want to make sure a video starter kit can later be expanded to support more users.

I was able to cover not only video fundamentals but also IP network readiness aspects when connecting distributed offices and external organizations.

Webinar attendance was great, and resulted in many questions, some of which I answered in the Q&A session and some online. I am getting a lot of follow-up questions and it looks like the webinar had a huge impact.

If you want to watch the webinar recording, please click here.

Monday, June 13, 2011

“Music Performance and Instruction over High Speed Networks”

I have written many white papers but the one that elicits the strongest emotional response has always been “Music Performance and Instruction over High Speed Networks” which Christianne Orto and I wrote back in 2008. This paper tells the fascinating story of collaboration between Polycom and the Manhattan School of Music to enable remote music performances and instructions.

A lot of things have happened since 2008; for example, Polycom introduced new technologies in the area of voice and video communication, while MSM found new applications for the technology. A few weeks ago, Christianne and I decided to update the paper in early June, just before the International Society for Technology in Education (ISTE) conference in Philadelphia and the Network Performing Arts Production workshop in Barcelona. The updated white paper has just been posted here. Please have a look and send us your feedback!

Friday, June 10, 2011

How Will the Migration from IPv4 to IPv6 Impact Voice and Visual Communication? Take 2

The “World IPv6 Day” (June 8, 2011) was the first global test intended to help service providers and vendors prepare for the inevitable migration to IPv6. How is IPv6 different from IPv4? Why is IPv6 so important to the Internet and private intranets? What is driving IPv6 adoption? How will the migration to IPv6 affect voice and visual communication? Is Polycom ready for IPv6? Find answers in my new white paper.

Friday, May 6, 2011

What is New in the US Research and Education Community?



Internet2 is a non-profit organization that operates the high-speed backbone for the US Research and Education (R&E) community. It counts 200+ of the largest US universities and research organizations as members, plus a lot of other members – international partners, vendors, etc. – for a total of about 350 members. I have represented Polycom in Internet2 since 2007 and sit on one of the governing councils, the Applications, Middleware, and Services Advisory Council, or AMSAC. Internet2 members meet twice a year. While the Fall Internet2 Member Meeting moves around the country (the next one will be in Raleigh, NC), the spring event is always in Arlington, Virginia. The latest meeting took place April 18-20, 2011, and was another great opportunity to meet the US R&E community and some international participants.

Here are some meeting highlights:
  • Internet2 CEO David Lambert announced the new Internet2 initiative in cloud services and the new Network Development and Deployment Initiative (NDDI).
  • Internet2 is expanding its high-speed backbone network
  • Video is a hot topic for the US R&E community
  • There is a need for new audio-video infrastructure to connect the R&E community
  • Migration from IPv4 to IPv6 may not be an issue in the backbone anymore but local R&E networks are still struggling, as are some commercial providers
  • Wide deployment of digital certificates in the R&E community improves network security  

New Internet2 Initiatives

David Lambert, Internet2 CEO, announced that I2 and HP were working on cloud services. Internet2 has spent quite a lot of time looking for the appropriate partner in this space and the HP offer was best suited for the needs of the R&E community. David announced the Network Development and Deployment Initiative (NDDI) that includes I2, Indiana University, and Stanford University (Clean Slate Program). Internet2 will offer a new service - Open Science Scholarship and Science Exchange (OS3E) - to meet community requirements. The service will be first available in fall 2011 and will use OpenFlow technology. The goal is to create the equivalent of Linux for networking and allow for open source development. They basically asked switch/router vendors to turn off the control plane and allow remote computers to control them. Internet2 will be a national test bed for OpenFlow. Matt Davy from the Global Research NOC at Indiana University and Rob Vietzke from Internet2 will lead the project. They will work closely with international partners: CANARIE (Canada), JANET (UK), GEANT (Europe), JGNX (Japan), and RNP (Brazil).

How does that relate to video communications? There have been efforts in the industry to make the IP networks video application aware, and that requires communication between a call control engine on the application side and a policy engine on the IP networking side. The limitation is that each IP networking vendor uses a different policy engine, and there is no single application that can control the entire mixed-vendor network. With the new OpenFlow architecture there is a "standard" API to talk to all IP networking equipment, no matter who makes it. That will potentially give us even more control of the end-to-end QOS in the IP network, which is a benefit to video applications.
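
As a thought experiment, here is what "one API for every switch" could look like on the controller side for video QOS. The classes and method names below are invented for illustration and do not correspond to any real OpenFlow controller; the point is simply that the same rule can be pushed to equipment from any vendor:

    # Hypothetical controller-side logic: install the same video-priority rule
    # on every switch in a mixed-vendor network. All names here are invented.
    class FlowRule:
        def __init__(self, match, queue):
            self.match = match    # e.g., {"ip_proto": "udp", "udp_dst_range": (16384, 32767)}
            self.queue = queue    # priority queue ID on the switch

    class Switch:
        # Stand-in for a vendor-neutral, controller-managed switch.
        def __init__(self, name):
            self.name, self.rules = name, []
        def install(self, rule):
            self.rules.append(rule)
            print(f"{self.name}: match {rule.match} -> queue {rule.queue}")

    def push_video_policy(switches, media_ports=(16384, 32767), queue_id=2):
        rule = FlowRule({"ip_proto": "udp", "udp_dst_range": media_ports}, queue_id)
        for switch in switches:
            switch.install(rule)

    push_video_policy([Switch("core-1"), Switch("edge-7")])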

Internet2 is Expanding its High-speed Backbone Network

US UCAN funding will be used to expand the Internet2 network. A map of the network expansion was presented in the demo area. The middle section connecting the West and East Coasts will be built first, followed by the south span, and then the north span of the network. The expansion will require building several new so-called GigaPoPs that host optical and IP routing equipment. I took a picture of the equipment installed in such a GigaPoP. On the left side is the Ciena optical equipment. On the right side are a small Cisco 2600 router, an HP server, and a giant Juniper T1600 router with huge blades. The expansion of the Internet2 backbone is necessary to carry the additional traffic from anchor institutions: community centers, rural hospitals, etc. Applications such as distance learning and telehealth will drive video traffic from and to these institutions, resulting in a lot of video traffic over the expanded Internet2 backbone.

Video is a Hot Topic for the US R&E Community

The session "Where Videoconferencing and Telepresence Meet Immersion and Interoperability" drew a lot of attention. Internet2 members are big video users and Internet2 itself offers video services to the community. Polycom has been partnering with Intrenet2 for many years and a lot of the services are leveraging Polycom infrastructure. Ben Fineman from Internet2, talked about a successful telepresence interop test with 32 telepresence screens, connecting equipment from Polycom, Cisco, LifeSize, etc. My presentation focused on telepresence interop and the challenges of connecting multi-screen (multi-codec) systems. I provided an overview of the Telepresence Interoperability Protocol (TIP) that Polycom will be supporting within few months to enable short-term interop across Polycom and Cisco telepresence systems. Then I focused on the long-term telepresence interoperability efforts in the IETF CLUE Working Group, and on Polycom's work in this area. Since I attended the last IETF meeting, I was able to provide a lot of detail about CLUE and answer questions from the audience.

The third presenter in the session was supposed to be Sean Lessman from Cisco, but he canceled right before the meeting since he was on his way out of Cisco. At the last minute, Michael Harttree from the Cisco CTO office jumped in. Michael was not very familiar with telepresence and talked instead about the trend towards more video (streaming, surveillance, etc.) in the network. There are many types of video floating around, and the challenge is how to separate them and treat them appropriately (in terms of latency budget) on the IP network. This reminded me of the discussion in the IETF MMUSIC group about a more detailed description of the type of traffic in SDP, so that this description can be preserved across SP networks (which modify QOS settings).

New Infrastructure for Audio and Video Services to the R&E Community

I have been attending meetings of the Audio Video Communication Infrastructure Special Interest Group (SIG) for quite a while. The group focuses on connecting VOIP and video networks with PBX and PSTN to deliver seamless communication across the R&E community. A hot topic is the use of E.164 numbers versus alternatives such as SIP URIs, leveraging standards such as ENUM and existing systems such as GDS.

Ben Fineman from Internet2 and Walt Magnussen from Texas A&M are very active in this group, and I always enjoy the opportunity to discuss these topics with them. The consensus so far is that Internet2 should request from the ITU an international "country code" that would allow Internet2 to assign numbers across the R&E community. Agreements with commercial SPs have to be signed to make sure the traffic is routed appropriately. I am very excited about this topic because a lot of new Unified Communications services can be developed for the R&E community on the IP network. Unfortunately, connectivity to the PSTN is still essential for the success of UC deployments.

Migration from IPv4 to IPv6

Leslie Daigle from the Internet Society (ISOC) delivered a keynote about the importance of IPv6. Only a very small portion of Internet traffic today is IPv6, and businesses have claimed for a long time that there is no business case for IPv6. On the other hand, the need for IP address space is big, and companies are trying to buy address space from other users. Based on the recent sale of Nortel's IPv4 address block, we know that the going price for an IPv4 address is about $11.25. But residential and mobile providers need even bigger IP address space than enterprises, and content providers also need to enable IPv6 in their services. IPv6 is gradually starting to make business sense because IPv4 addresses now have a price attached, NATs are hard and expensive to operate, and certain applications, e.g. games, do not work well in NAT environments.

When Google turned on IPv6 on YouTube, IPv6 traffic spiked. That means there are a lot of IPv6-capable clients out there. It is estimated that about 0.5% of Google customers would not be able to reach the service if Google alone turned on IPv6. No company wants to lose customers to competitors; therefore, Google, Yahoo, and Facebook agreed to turn on IPv6 for 24 hours on June 8, 2011 (World IPv6 Day). Note that IPv4 will not be turned off.

The call to action to service providers is to announce plans for IPv6 and create momentum around it. It is important for network administrators to include the World IPv6 Day in their change plan, so that no other changes happen on that exact day. With all of the excitement around the migration to IPv6, I decided to write a white paper on that issue. I have tons of information about IPv6 (some is captured in a previous post) and intend to focus on the impact of IPv6 on video communications. Stay tuned! I will post a link to the paper when it is ready.

Digital Certificates to Improve Network Security in the R&E Community

InCommon is a part of Internet2 that provides services to the R&E community. These services range from authentication to group management to – recently – low-cost digital certificates. Security is very important for voice and video communications, and Polycom products support digital certificates, so I was curious how universities deploy them.

John Krienke, Internet2 COO, talked about the partnership between Internet2 and Comodo. Comodo listened to the requirements for campus administration, and allows sub-domains for local certificate management. They provide tools to find all server certificates, and since the Comodo license is per site, you can assign a certificate to each server and do not need wildcard certificates.

Paul Kaski from the University of Texas System shared his experience with the InCommon certificate service. His organization used VeriSign for 11 years, but due to budgetary constraints it could not afford the steep price tag anymore and started using the InCommon service in 2H'2010. The estimated cost saving is $325K per year. The main advantages of the InCommon service are very quick SSL certificate approval, an easy admin interface, and an available API for both SSL and user certificates.

Digital certificates are a great way to authenticate users, devices, and servers in the network. Certificates definitely increase security in the network, and the only drawback I can think of is the cost. Now that R&E organizations have access to lower-cost certificates and to the tools to manage them in a campus environment, I expect wide adoption.

Conclusion

The Spring Internet2 Member Meeting was a great opportunity to take a snapshot of the technology developments in the US R&E community. The community is ahead of the commercial sector in some areas (advanced networking, IPv6 migration) and lagging in others (applications). I think there is an opportunity for commercial vendors, especially ones like Polycom that rely on open standards and interoperability, to participate in the creation of new applications and services for the R&E community.

Friday, April 15, 2011

Unified Communications Forum in Moscow


The UC Forum took place March 22-23, 2011 in Moscow and was the first industry event of its kind in the Russian Federation. The Forum was very well organized; it piggybacked on a well-established Call Center conference that has been running for 10+ years, which provided great facilities, a registration desk, audio-video support, etc. The venue, the Radisson Slavyanskaya Hotel and Business Center, was excellent, and I did appreciate having the conference center, the hotel, and the restaurants under one roof when the weather outside is not quite spring-like. The event organizers would like to establish the UC Forum as an annual event and stay at the same location, so that over time participants have the option to gradually shift from call center sessions to UCF sessions. Having seen the struggles of many new industry events, I think this is a smart approach.

Polycom's partner CROC Inc. had a booth showing Microsoft-Polycom integration, and a range of Polycom products. The booth was centrally located and quickly became a convenient meeting point.

The conference was moderated by Denis Klementiev, who did an excellent job introducing the speakers and managing questions from the audience. I counted about 80 people in the room (there were 150 registered participants, but people were coming in for a particular session and then moving on). Almost all presentations, including my talk, were in Russian, and this put the audience at ease and led to many questions, side discussions, and introductions.

My speaking slot was on the first day of the conference, and I always sit in the sessions before mine so that I do not repeat things and can refer to information already covered by previous speakers. Here are the highlights.

Mikhail Kochergin from Microsoft talked about the business case for UC. One low-hanging fruit is unified directories, which eliminate the need to enter the same employee information in multiple directories (PBX, email, web, etc.) and save cost and time. Mikhail then focused on cost savings from teleworking (which seems to be very important for Moscow with its horrendous commute traffic) and from lower real estate cost (less office space). He also touched on some vertical applications such as telehealth, where UC truly saves lives. It turns out that 8-9 people die every year in the Russian Federation while travelling to a medical facility; these and other lives could have been saved through telehealth applications. Mikhail analyzed how the major players approach UC and stressed that Microsoft was focusing on ease of use and on allowing any device to access UC services.

Stanislav Cherkov from CROC Inc. presented 5 case studies with Microsoft Lync and Exchange, but also with audio and video equipment from Polycom. He talked about savings from IP telephony among distributed corporate offices across the Russian Federation and highlighted the tremendous traffic increase once the UC solutions were deployed. Most demand seems to be for integration of voice, instant messaging, presence, email, and calendaring, but multipoint video is often required, as is integration with Avaya, which has a strong position in the Russian voice communications market.

Next was my presentation that focused on the global developments around UC, as well as on the standardization and interoperability work in international organizations. I included some UC market segmentation information and market forecast that shows robust growth of both 'Basic UC' that enables presence indicators to guide manual user selection of voice, email, or IM from a unified communications client and 'Enhanced UC' that augments basic UC by tying into business processes, supporting mobile workers, and seamlessly integrating videoconferencing to drive business differentiation. I covered the different deployment models – on-premise, hosted, and cloud-based – and focused on the BroadCloud service developed jointly by BroadSoft and Polycom. Finally, I provided a summary of the work in UCIF, IMTC, and other organizations with focus on interoperability. The global perspective was very well received and resulted in a lot of questions, so that the session ran over, and I had to "borrow" time from the next speaker: Pavel Teplov from Cisco (sorry!)

The bottom line is that UC is impacting all areas of communication. Since no single vendor can address all UC areas, vendor ecosystems are gaining momentum, while standards and interoperability are becoming more critical … as are organizations such as UCIF that test and certify interoperability. The presentation gave me the opportunity to reiterate Polycom's commitment to the Russian market, the agreement with РКСС to manufacture Polycom equipment in the Russian Federation, and the opening of a new demonstration center in Moscow in fall 2011.

I stayed for the rest of the UC Forum and found all presentations very practical and informative. They provided a great overview of what is happening in the Russian Federation in terms of UC deployments. In particular, I enjoyed the presentation by Andrey German, who is responsible for the video communications of the Superior Court of the Russian Federation. Apart from the fact that they are using a lot of Polycom equipment, I found the application truly unique and compelling. It turns out the Russian Federation has a law that allows court proceedings to be conducted over video if the court decides it is appropriate. That is very cost-effective in a country that spans 9 time zones (11 before President Dmitry Medvedev cut the number to 9 last year) and is in fact the largest country in the world – 17 million square kilometers, or 6.56 million square miles.

The Russian trade press was in the audience, and a lot of the questions came from journalists. I did a quick search today and found several articles about the UC Forum, for example, by IKS Media and World Info Comm. A detailed description of the UC Forum in Russian is available here.

Wednesday, April 6, 2011

What Did the 80th IETF Meeting Mean to HD Voice, HD Video, and Unified Communications?

IETF has been making Internet standards – unpretentiously called "Requests For Comments", or "RFCs", but nevertheless working quite well – for exactly 25 years now. Happy birthday, IETF! May the next 25 be equally exciting! This anniversary also means that the Internet has matured, and I could feel it in the discussions at the 80th IETF meeting (IETF 80) last week. Prague was a beautiful venue for the event, and the meeting hotel, the Hilton Prague, provided excellent facilities.
I do not attend most IETF meetings – that would be quite difficult with my busy schedule and with three IETF meetings on three different continents happening every year. The last one I attended (IETF 74) was in San Francisco two years ago, and I was very excited to find out what had changed in two years. The very strong Polycom team included Mary Barnes, a long-time IETFer; Stephen Botzko, who also covers ITU-T for Polycom; Mark Duckworth; and me.

From the 133 or so sessions I counted in the program, I attended the ones related to voice and video (over IP, of course, since it is all about the Internet), Unified Communications, and related technologies. I could identify three main discussion topics: connecting IP communication islands, enabling UC applications on the network, and handling multiple media streams.


Connecting IP Communications Islands

IETF recognizes that there are still islands of IP communication and that the vision of IP networks replacing the Public Switched Telephone Network (PSTN) is far from being fulfilled. This led to the VIPR activities (VIPR stands for "Verification Involving PSTN Reachability"), which leverage nothing else but the PSTN to allow more voice to flow over IP and never cross the PSTN. Since IP communication islands do not trust each other, the VIPR idea is to use a basic phone call to verify that the destination is what it claims to be. On a more generic level, the mechanism can be used to extend trust established in one network (e.g. PSTN) to another network (e.g. IP), but the VIPR working group seems to be focusing on the narrow and practical application of connecting voice over IP islands without PSTN gateways. VIPR is very important to HD voice because it enables direct end-to-end HD voice connections. PSTN gateways, on the other hand, always take the voice quality down to "toll quality" (3.4 kHz, G.711), even if handsets and conference servers support HD voice. VIPR can also be used for video, and in fact is even more beneficial for video, since the PSTN does not support video at all. Once the PSTN is used to verify the destination, all subsequent calls between source and destination can be completed over IP. Again, the quality is not limited by any gateway, only by the available bandwidth, and HD video can flow freely end-to-end.
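
At a very high level, the call flow described above could be sketched as follows; this is hypothetical pseudocode in Python that simplifies away the actual VIPR validation exchange, in which both sides prove knowledge of the completed PSTN call:

    # Sketch of the VIPR idea: the first call goes over the PSTN, the destination
    # is then validated over IP, and later calls use the cached direct SIP route.
    validated_routes = {}   # E.164 number -> SIP URI learned via validation

    def place_call(number):
        if number in validated_routes:
            return f"SIP call to {validated_routes[number]} (HD voice/video possible)"
        complete_pstn_call(number)                 # ordinary PSTN call
        sip_uri = validate_destination(number)     # simplified validation step
        if sip_uri:
            validated_routes[number] = sip_uri
        return f"PSTN call to {number} (toll quality)"

    def complete_pstn_call(number):
        print(f"dialing {number} over the PSTN")

    def validate_destination(number):
        # Placeholder: VIPR derives proof from details of the completed PSTN call.
        return f"sip:{number}@example.net"

    print(place_call("+14085551234"))   # first call: PSTN
    print(place_call("+14085551234"))   # subsequent calls: direct SIP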

Another interesting discussion that reminded us – who mostly live in the IP world – of the existence of the PSTN was about Q.850 error codes generated by switches in the PSTN network. I remember the discussions about mapping these error codes to SIP error codes from previous IETF meetings, but it turns out these mappings do not work well because some of the Q.850 codes have no equivalent in SIP, and inventing new error codes only complicates SIP and confuses SIP servers. So the proposal on the table is to update RFC 3326, "The Reason Header Field in SIP", to transport the original Q.850 codes. Well, as they say, when mapping does not work, it is best to encapsulate and let network elements decide what to do with the information inside.
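
For reference, RFC 3326 already defines how a Q.850 cause is carried in the Reason header; here is a minimal Python sketch that formats one (cause 16, "normal call clearing", is just an example value):

    # Building a SIP Reason header that carries the original Q.850 cause code
    # per RFC 3326. Cause 16 ("normal call clearing") is used as an example.
    def reason_header(cause, text):
        return f'Reason: Q.850;cause={cause};text="{text}"'

    print(reason_header(16, "Normal call clearing"))
    # -> Reason: Q.850;cause=16;text="Normal call clearing"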

And then there was the discussion about lost Quality of Service (a.k.a. DSCP) settings when IP traffic passes through a service provider IP network. Since SPs do not really like their customers telling them how to prioritize traffic in their network, they basically reset the QOS values in IP packets coming from customer LANs to something they can use in the SP network. The problem is that the destination IP LAN may want to honor the original QOS, but the packets coming in from the SP do not have the real DSCP values. This leads to all sorts of creative ideas about how to pass more granular information about the type of application end-to-end. One proposal in the MMUSIC working group was to update RFC 4598, so that the session description (Session Description Protocol, or SDP) has a more detailed description of the application (for example telepresence, desktop video, personal video, web collaboration, etc.), so that the destination LAN knows what priority to assign to the traffic in that session.

Finally, there is always the topic of end-to-end security. Many obscure mechanisms defined in some RFC and used in some application somewhere in the world turned out to have problems with the relatively new ICE firewall traversal mechanism. ICE stands for Interactive Connectivity Establishment, and it is finally an RFC (RFC 5245 to be precise). As a result, there were numerous presentations at IETF 80 – mostly in the MMUSIC working group – about things that do not work with ICE: some fax scenarios, the relatively new DCCP protocol (which was all the rage back at IETF 74), simulcast streaming scenarios, and media aggregation scenarios. At this point, however, no one is considering changing ICE, and the pretty universal response to such contributions was "sorry, but we cannot help you". Another security issue was discussed in the XMPP group, where the consensus was that Transport Layer Security, or TLS, was not working at all on the interface between XMPP servers, that is, in XMPP federation scenarios. The proposal under discussion was to use DNSSEC to verify SRV records and to define rules for how authorizations and permissions are handled across domains. The simplest explanation is that if you have Gmail with 1000 domains and WebEx with another 1000 domains, trying to establish XMPP federations among all domains would drive the number of connections towards a million, which leads to scalability and performance issues. Instead, the folks in the XMPP group want to establish one connection between Google and WebEx and have all domains use it. There is also a certain time pressure to solve the secure XMPP federation issue because the US government is considering implementing XMPP federation – if the security issues are fixed.

Enabling UC Applications

The second big topic at IETF 80 was standards around UC functions. While there are many different opinions on where UC will happen – in a soft client, in the mobile phone, in the browser, etc. – at IETF 80 the attention was on "UC in the browser". RTC Web, or Real Time Communication on the World Wide Web, was feverishly discussed, starting with the Birds of a Feather event on Tuesday (from the saying "Birds of a feather flock together" – an informal discussion group) and concluding with the first RTC Web working group meeting (although the group has not officially been established yet) on Friday. Web browsers today use incompatible plug-ins to communicate with each other, and there is no interoperability across browser vendors. RTC Web's vision is a standard that allows real-time communication functionality (voice, video, and some associated data) to be exchanged across web browsers. The architecture is still fluid, but it is clear that a group of companies, including Google and to a certain extent Skype, is interested in standardizing the media stream across browsers. As for using standard signaling (which basically means including a SIP stack in the browser), there is no consensus, as browser vendors seem more comfortable with their own proprietary HTTP-based signaling and promise gateways to SIP networks. I guess I understand why Google wants to do it all in the browser (this is the environment they can more or less control), but I am still puzzled by Skype's position – maybe they are willing to give up the Skype client for a strong play in the infrastructure. I am mostly interested in standards, so SIP in the browser sounds more interesting than proprietary protocols that then require gateways to connect to standards-based networks.

Another interesting discussion around UC took place in the XMPP working group, and was about interaction between XMPP and SIP clients. Since UC in its basic form is presence, IM, voice, and video, and since XMPP is used for instant messaging (IM) and presence while SIP is used for voice and video over IP, I would say this particular discussion was really about UC (outside the browser, though). The Nokia team presented a proposal for interworking between XMPP and SIP based on a dual-stack (SIXPAC). We at Polycom love dual-stack implementations – with most new video endpoints and even some high-end business media phones supporting SIP, H.323 and even XMPP while multipoint conferencing servers support many protocol stacks simultaneously. The proposed XMPP-SIP dual stack would make gateways between SIP and XMPP unnecessary, and would allow for richer user experience on the dual-stack client. The key benefit I see is if SIP URIs that are used to place a SIP call can be automatically resolved (by some server) into Jabber Identifiers (JIDs) used in XMPP, and vice versa. This would for example allow the user to check presence, start an IM session, and then seamlessly escalate it to voice and video.
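
In the simplest case, that resolution could be little more than a naming convention, as in the hypothetical sketch below; a real deployment would consult a directory or resolution service rather than assume the user part and domain match on both sides:

    # Hypothetical SIP URI <-> JID mapping for a dual-stack client. Assumes the
    # same user@domain on both sides, which is a simplification for illustration.
    def sip_to_jid(sip_uri):
        # Strip the "sip:" or "sips:" scheme to obtain a JID-style address.
        return sip_uri.split(":", 1)[1] if ":" in sip_uri else sip_uri

    def jid_to_sip(jid):
        return f"sip:{jid}"

    print(sip_to_jid("sip:alice@example.com"))   # alice@example.com
    print(jid_to_sip("alice@example.com"))       # sip:alice@example.com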

Multiple Media Streams

The third big topic at IETF 80 was the handling of multiple streams, or as the IETFers say, multiple m lines in the session description. Applications for multiple streams range from multi-codec telepresence systems to video walls in situation rooms to simply sending multiple media streams for redundancy. The most important of all is of course the CLUE activity. CLUE stands for "ControLling mUltiple streams for tElepresence" (I know – picking random letters to create a catchy group name is an art form that I do not appreciate enough), and the CLUE working group had its first meeting at IETF 80. Mary Barnes moderated the session. With about 80 people in the room, the discussion covered symmetric and asymmetric point-to-point and multipoint use cases. It was encouraging that many people in the room had read the use case description, and 20+ people agreed to review and contribute. Later the group spent quite a lot of time going over assumptions (about 8 in total) and requirements (about 12 originally, but one was later dropped). It became clear that CLUE will only focus on handling multiple media streams and leave architecture, signaling, etc. to other IETF groups. Most questions were about the definition of terms such as "stream", "source", "endpoint", "middle box", "asymmetric", "heterogeneous", and "synchronization". The CLUE group will continue discussions at a virtual meeting in May and possibly a face-to-face meeting in June. Polycom's Mary Barnes is a chair of the CLUE working group and will keep me updated on the progress.

CLUE is very important because the video industry needs consensus on how to handle telepresence and other multi-stream applications. Since Polycom has announced support for the Telepresence Interoperability Protocol (TIP), I frequently get asked how CLUE relates to TIP. The short answer is "It does not". CLUE's charter is to develop standards for describing the spatial relationships between multiple audio and video streams and negotiation methods for SIP-based systems. This new work is completely separate from TIP and will support many use cases that TIP does not. The work was originally proposed by the IMTC Telepresence activity group (which Polycom also co-chairs) and was chartered by the IETF early this year. Participating companies include Polycom, Cisco, HP, Huawei, ZTE, and others.

Note that ITU-T Study Group 16 is also active in this area but has a different charter, which includes multiple-stream signaling for H.32x systems, the creation of services (like accessibility), establishing requirements for telepresence media coding, telepresence control systems, and media quality recommendations for telepresence systems. ITU-T is planning to harmonize the H.32x multi-stream signaling with CLUE but, more importantly, the above-mentioned companies are participating in both IETF and ITU-T, which is the best way to make sure the standards do not contradict each other. As far as the future of TIP goes, it is an interim solution for vendors to interoperate with Cisco telepresence systems. We will have to see how long it lasts in the marketplace – certainly, once solutions are deployed, they tend to stay for a while. As usual, various vendors will provide their own migration paths to the standard; for example, Polycom will continue to support TIP as long as it is necessary for our customers, and gradually migrate telepresence systems to CLUE – once the work in the CLUE working group is completed and the standard is ready.

Back to the broader topic of handling multiple media streams. The topic came up in several presentations in the MMUSIC group. For example, Huawei proposed transmitting 3D video via multiple streams. Since the 3D image can be created from two 2D images – one for each eye – simulcast (i.e., sending the Left and Right views in separate streams) can be used. Other options include frame packing (combining the Left and Right views into a single stream) and 2D+auxiliary (synthesizing the Left and Right views from 2D video using auxiliary data such as depth and parallax maps). The draft introduces a new SDP attribute called "Parallax-Info" with parameters "position" and "parallax". While some IETFers expressed concerns about breaking the Real-time Transport Protocol (RTP), there are interesting elements in the draft and I will keep following it.

Conclusion

IETF 80 was a very productive meeting and a great gathering of technical experts from around the world. They all brought new ideas and very different areas of expertise: networking, voice, video, web, mobile, etc. The meeting provided an excellent opportunity for discussions in and outside the conference rooms. A lot of topics that I knew from IETF 74 have progressed quite well but did not disappear; they just led to additional areas that require standardization.

My takeaway from the meeting is that standardization work is like a marathon – it requires patience and persistence to get to the finish line. The Prague Marathon on Saturday was therefore a fitting metaphor and a great way to conclude a very productive, well-attended, and amazingly versatile IETF 80. (IETF participants were not required to run the marathon!)

Wednesday, February 9, 2011

Video Interview at ITEXPO

On the last day of ITEXPO East 2011, I had a chance to sit with Erik Linask, Group Editorial Director at TMCnet, and talk about my experience during the ITEXPO event, about the key priorities for Polycom in 2011, and about our engagement in the Unified Communications Interoperability Forum.

The link to the 13-minute video interview is here. Let me know what you think!

Friday, January 28, 2011

ITEXPO in Miami Next Week

My next week will be very busy because ITEXPO and other industry events are running in parallel in Miami, Florida. I looked through the schedule and - as of today - I will be presenting in five sessions: three of them are part of ITEXPO while the other two are part of the Ingate Summit.

My first session at ITEXPO is on February 2 at 1:30pm local time and is titled “How Does The Traditional Desktop Phone Fit Into The Evolving Enterprise User Experience?” Frank Stinson from Intellicom Analytics is moderating and I will be sitting next to speakers representing other business phone makers. This session will explore how the trend to Unified Communications impacts the way people access voice services – through telephones but also through soft clients, smart phones, and tablet devices. How should desktop telephones evolve in this environment and how are desktop phone manufacturers planning to increase the value of their products given evolving user expectations?

Then on February 3 at 1pm I will present in the session “Making Telepresence Affordable and Reliable” which will be moderated by TMC Executive Director Paula Bernier and will also include Matt Collier from LifeSize Communications. This session will discuss the perception that telepresence is expensive and will clog your network with HD video traffic. My talk will focus on the reduced network bandwidth consumption (i.e. minimizing or avoiding IP network upgrades) and on the new network architectures that allow for virtualization of conference resources shared within the entire organization or deployed by service providers to serve vast user communities.

My last ITEXPO presentation is in the session “UC Interoperability” on February 3 at 2pm. David Yedwab from Partner Market Strategies and Analytics will moderate and there will be two other panelists: Alan Percy from AudioCodes and Allen Mendelshon from Avaya’s UC Strategy team. The session will address the need for interop and standards in multivendor environments and explore different aspects of multi-vendor UC interoperability.

The Ingate Summit is organized in parallel with ITEXPO, and I have had the pleasure of presenting at previous Summits, the last one being in Los Angeles in October. This time, the Summit starts early with pre-conference service provider workshops, and I will present in the session "Generating Revenue from HD Video" on February 1 at 5:30pm. Joel Maloff of Maloff NetResults is moderating, and I will share the time with Karl Stahl from Intertex Data. Since the audience is service providers, I will focus on managed and hosted telepresence services, and also address ITSPs with current hosted voice offerings that would like to add HD video services without much CAPEX. I will also provide an update on the industry efforts in the areas of telepresence interoperability and B2B video communications.

My second session at the Ingate Summit is the "Town Hall Meeting: Unified Communications" on February 3 at 9am. The list of panelists is fairly long: Chad Krantz from Brodvox, Dan York from VOIPSA, Karl Stahl from Intertex, Jeff Ridley from ShoreTel, David Yedwab from Market Strategy & Analytics Partners, and Gary Mading from Aastra. I will be wearing my UCIF hat in that session, i.e., representing the Unified Communications Interoperability Forum. In such a large panel there will be no time for slides, but we will have a discussion around the scope of Unified Communications and how different vendors approach UC. I will focus on the UCIF philosophy (certification, not standards development) and call for other companies to join and influence the discussions in UCIF.

All in all, next week in Miami will be very hot - although they do tend to set the air conditioning in the Miami Beach Convention Center on “very cold”. I am sure there will be a lot of interesting discussions and I hope to meet some of the blog readers there in person. For the rest, I would recommend watching the video interview that I will give on the last day of the conference (February 4). I will summarize the news from the conference in my answers during this interview. Once the link to the recording becomes available, I will add it to this blog post.

Monday, January 24, 2011

Industry events, speaking engagements, and white papers

In addition to blog posts, Video Networker keeps a complete list of the industry events that I attend and the topics of my speaking engagements. It also has a section with links to my white papers. See below!