Monday, April 27, 2009

Telepresence Interoperability Discussion at Internet2 Meeting

A last-minute change in the Internet2 meeting program led to my participation in the panel 'Telepresence Perspectives and Interoperability'
http://events.internet2.edu/2009/spring-mm/agenda.cfm?go=session&id=10000509&event=909.

The speaker selection will result in some interesting discussions.

This session will be recorded and streamed, so please click on the streaming icon to watch the session.

As a result of this session, a series of telepresence interoperability tests were organized in summer 2009, and the results were presented at the Internet2 conference in October 2009. Read the full story here: http://videonetworker.blogspot.com/2009/10/telepresence-interoperability.html.

Monday, April 20, 2009

New white paper "G.719: The First ITU-T Standard for Full-Band Audio"

Conferencing systems are increasingly used for more elaborate presentations, often including music and sound effects. While speech remains the primary means for communication, content sharing is becoming more important and now includes presentation slides with embedded music and video files. In today’s multimedia presentations, playback of high-quality audio (and video) from DVDs and PCs is becoming a common practice; therefore, both the encoder and decoder must be able to handle this input, transmit the audio across the network, and play it back in sound quality that is true to the original.

New communications and telepresence systems provide High Definition (HD) video and audio quality to the user, and require a corresponding quality of media delivery to fully create the immersive experience. While most people focus on the improved video quality, telepresence experts and users point out that the superior audio is what makes the interaction smooth and natural. In fact, picture quality degradation has a much lower impact on the user experience than degradation of the audio. Since telepresence rooms can seat several dozen people, advanced fidelity and multichannel capabilities are required that allow users to acoustically locate the speaker in the remote room. Unlike conventional teleconference settings, even side conversations and noises have to be transmitted accurately to assure interactivity and a fully immersive experience.

Audio codecs for use in telecommunications face more severe constraints than general-purpose media codecs. Much of this comes from the need for standardized, interoperable algorithms that deliver high sound quality at low latency, while operating with low computational and memory loads to facilitate incorporation in communication devices that span the range from extremely portable, low-cost devices to high-end immersive room systems. In addition, they must have proven performance, and be supported by an international system that assures that they will continue to be openly available worldwide.
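
To put the latency constraint in perspective, here is a tiny back-of-the-envelope sketch of a one-way mouth-to-ear delay budget, written in Python. All of the numbers are illustrative assumptions rather than figures from the white paper or from any particular codec; only the roughly 150 ms guideline comes from ITU-T G.114.

    # Illustrative one-way delay budget for conversational audio (all numbers are assumptions).
    budget_ms = {
        "codec frame + look-ahead": 40,      # algorithmic delay of the encoder/decoder pair
        "packetization + serialization": 5,
        "network propagation + queuing": 60,
        "receiver jitter buffer": 40,
    }
    total = sum(budget_ms.values())
    print(f"one-way delay ~ {total} ms (ITU-T G.114 suggests staying under ~150 ms)")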

Audio codecs that are optimized for the special needs of telecommunications have traditionally been introduced and proven starting at the low end of the audio spectrum. However, as media demands increase in telecommunications, the ITU Telecommunication Standardization Sector (ITU-T) has identified the need for a telecommunications codec that supports the full human auditory bandwidth, that is, all sounds that a human can hear. This has led to the development and standardization of the G.719 audio codec...
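
As a back-of-the-envelope illustration of what "full-band" means in sampling terms, the sketch below applies the Nyquist criterion to the common audio bandwidth tiers (the tier boundaries are general industry conventions, not figures from the white paper). Covering the full audible range up to 20 kHz requires sampling at 40 kHz or more, and G.719 indeed operates at a 48 kHz sampling rate.

    # Rough sketch: why "full-band" audio implies roughly a 48 kHz sampling rate.
    AUDIO_TIERS_HZ = {
        "narrowband (telephone)": (300, 3400),
        "wideband": (50, 7000),
        "super-wideband": (50, 14000),
        "full-band": (20, 20000),   # the full human auditory range
    }
    for name, (low, high) in AUDIO_TIERS_HZ.items():
        nyquist_rate = 2 * high     # minimum sampling rate to capture the upper band edge
        print(f"{name:24s} up to {high / 1000:g} kHz -> sample at >= {nyquist_rate / 1000:g} kHz")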

The new white paper "G.719 - The First ITU-T Standard for Full-Band Audio" is available here:
http://www.polycom.com/global/documents/whitepapers/g719-the-first-itut-standard-for-full-band-audio.pdf.

Wednesday, April 15, 2009

The Art of Teleworking

My new white paper "The Art of Teleworking" is now available online at
http://www.polycom.com/global/documents/whitepapers/art-of-teleworking.pdf
Comments are welcome.

Summary of the 74th IETF Meeting in San Francisco, March 23-27, 2009

The Internet Engineering Task Force http://www.ietf.org/ meets three times a year (spring, summer, and fall) in different parts of the world to discuss standards (called Requests for Comments, or RFCs) for the Internet. These meetings are the place to discuss everything related to the Internet Protocol (IP), the User Datagram Protocol (UDP), the Transmission Control Protocol (TCP), the Session Initiation Protocol (SIP), etc.

If you have not been to IETF meetings, here are two impressions of working group sessions:

http://www.youtube.com/watch?v=uJHtecw8lcU&feature=channel_page

http://www.youtube.com/watch?v=D4ga3iaU8zU&feature=channel_page

This was the second IETF meeting I attended (the first one was in 1997), and it was quite fascinating to observe the changes. First, many of the 100 or so IETF working groups are now running out of work items. The IETF seems to be losing about five working groups per meeting, or 15 per year; if this trend continues, someone commented, the IETF could disappear by 2015. At the meeting in San Francisco, most groups finished early because they ran out of agenda items.

Another change is the ratio of participants representing vendors to those coming from education and research. It looks like a lot of vendors have flooded the IETF over the years and - some say - made it slower, more competitive, and less efficient. You can look at the list of attendees https://www.ietf.org/registration/ietf74/attendance.py and make up your own mind.

A key topic of the meeting was IPv4-to-IPv6 migration. Service providers are really running out of IPv4 addresses – especially in Asia Pacific – and there was a sense of urgency to help. The Internet Architecture Board (IAB) created a document with their thoughts on the issue http://tools.ietf.org/html/draft-iab-ipv6-nat-00. The firewall folks discussed what functions to put in an IPv6-to-IPv6 firewall. There was a BOF (initial discussion of a new topic) on sharing IPv4 addresses among multiple users – a way to temporarily alleviate the pain of ISPs that are running out of IP addresses. Migration to IPv6 is important for Voice over IP and Video over IP products (basically the entire Polycom product portfolio) because they all have to support IPv6 and run in dual-stack (IPv4 and IPv6) mode for a transition period that can span many years. Note that IPv6 support is not trivial: in addition to supporting the new IP packet header, endpoints also have to support a version of the Dynamic Host Configuration Protocol (DHCP) that supports IPv6, a specification that describes how the Domain Name System (DNS) will support IPv6, etc.
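
For a concrete picture of what dual-stack support means for an endpoint, here is a minimal, generic sketch in Python (my own illustration, not code from any product): the resolver is asked for both IPv6 (AAAA) and IPv4 (A) addresses, and the client simply tries the returned addresses in order, so the same code path works before, during, and after the transition.

    import socket

    def connect_dual_stack(host, port):
        """Try every address the resolver returns (IPv6 and IPv4) until one connects."""
        last_error = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(sockaddr)
                return sock                      # connected over IPv6 or IPv4
            except OSError as err:
                last_error = err
                continue
        raise last_error or OSError("no usable address for %s" % host)

    # Hypothetical signaling server name, for illustration only:
    # conn = connect_dual_stack("sip.example.com", 5061)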

Another new thing at the IETF is the work on new transport protocols that enhance UDP and TCP. The Datagram Congestion Control Protocol (DCCP) is like UDP but with congestion control. While adding congestion control to the datagram transport protocol is not a bad technical idea, the business implications are huge. It looks like today even the two existing transport protocols (UDP and TCP) are one too many, and applications migrate from TCP to UDP because of its simplicity. Even signaling for real-time applications, which is the best fit for TCP, is frequently transported over UDP. There is also an effort to specify Transport Layer Security (TLS) over UDP (Datagram TLS, or DTLS).

Video and telepresence systems – such as Polycom RPX, TPX, and HDX – use UDP for transport of real-time traffic (voice and video packets). Migrating to the DCCP protocol may make sense in the future if the congestion control mechanisms in DCCP are supported end-to-end. This is not the case today.
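
To make the congestion-control point more concrete, here is a rough sketch (my own illustration, not how any Polycom product behaves) of the kind of application-level rate limiting a media sender has to bolt on top of plain UDP today, and which DCCP aims to provide inside the transport layer itself:

    import socket
    import time

    def send_paced(packets, dest, rate_bytes_per_sec):
        """Send UDP datagrams no faster than a fixed byte rate (a crude token bucket)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tokens, last = 0.0, time.monotonic()
        for payload in packets:
            while True:
                now = time.monotonic()
                tokens = min(rate_bytes_per_sec, tokens + (now - last) * rate_bytes_per_sec)
                last = now
                if tokens >= len(payload):
                    break
                time.sleep(0.001)               # wait for the bucket to refill
            tokens -= len(payload)
            sock.sendto(payload, dest)          # UDP itself gives no delivery or rate feedback

    # Example: fifty 1200-byte "RTP-sized" dummy packets at ~64 kB/s to a made-up address.
    # send_paced([b"\x00" * 1200] * 50, ("192.0.2.10", 5004), 64000)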

The Stream Control Transmission Protocol (SCTP) is another new transport protocol, a better version of TCP. I am not sure why SCTP is better than just running Transport Layer Security (TLS) on top of TCP, but the big question is whether there is space for additional transport protocols (beyond UDP and TCP). Video systems today use TLS over TCP for secure transport of signaling messages during call setup and call tear-down; SCTP will therefore have no impact on video equipment, since TLS over TCP and TLS over UDP do a beautiful job of securing the communication.

DIAMETER was originally developed as an authentication protocol (to replace RADIUS) but has now been adopted by many service providers, and the IETF is trying other uses, for example, negotiating Quality of Service (QoS) and even managing Network Address Translation (NAT) functions in carrier-to-carrier firewalls. Video applications require a lot of network resources (bandwidth, low latency and jitter, and low packet loss), and communicating QoS requirements from a video application such as Polycom CMA 5000 to a policy engine controlling the routers and switches in the IP network is a great idea. A standard solution based on DIAMETER would help interoperability among video vendors and IP networking equipment vendors.

As I already wrote in the summary of the International SIP Conference http://videonetworker.blogspot.com/2009/03/summary-of-international-sip-conference.html, SIP is getting too complex - with 140 RFCs and hundreds of Internet drafts. The IETF understands that the complexity of SIP is a problem and wants to create a base SIP specification that includes only the key functionality. The problem is that different people within the IETF want to include different sets of RFCs (subsets of the 140 SIP-related RFCs) in the base specification. Polycom has implemented a great many of the SIP RFCs in its voice and video products and is indeed interested in a simpler version of SIP that ensures robust interoperability across the industry and includes the basic call features that everyone uses, not the fancy ones that are rarely used.
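
To give a feel for what even the "basic" part of SIP looks like on the wire, the sketch below builds a minimal INVITE in the spirit of RFC 3261 and would send it as a single UDP datagram. This is purely illustrative: every name, address, and tag is invented, and a real endpoint also has to handle responses, retransmissions, authentication, and SDP negotiation.

    import socket

    # A bare-bones INVITE modeled on the RFC 3261 examples; all identifiers are made up.
    invite = (
        "INVITE sip:bob@example.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "To: Bob <sip:bob@example.com>\r\n"
        "From: Alice <sip:alice@example.org>;tag=1928301774\r\n"
        "Call-ID: a84b4c76e66710@client.example.org\r\n"
        "CSeq: 314159 INVITE\r\n"
        "Contact: <sip:alice@client.example.org>\r\n"
        "Content-Length: 0\r\n"                  # a real INVITE usually carries an SDP body
        "\r\n"
    )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # sock.sendto(invite.encode("ascii"), ("198.51.100.20", 5060))   # placeholder proxy address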

I attended the meetings of the two IETF working groups that discuss conferencing: Centralized Conferencing (XCON) and Media Server Control. While XCON is focused on conference call setup, the Media Server Control group defines a protocol between a Media Resource Broker (MRB) and a Media Server (MS), which is useful when a conferencing application controls one or more conference servers (MCUs). When completed, this standard could allow the Polycom Distributed Management Application to control non-Polycom MCUs, for example, Scopia MCUs from RadVision.

The IETF is obviously getting into a mature phase and did what I would call a ‘tribute to MPLS’ in Hollywood Oscar-ceremony style. They tried to portray Multi-Protocol Label Switching (MPLS) as a great success of IETF standardization, but many in the audience pointed out that MPLS interoperability among vendors and among operators just was not there.

Session on Telepresence at next BBWF Europe

The session ‘Tele-Presence, Managed Services 2.0, and Reducing Carbon Footprint’ at the next Broadband World Forum Europe 2009 (Paris, September 9, 2009) is taking shape. The idea for the session is to gather video technology and market experts and look at telepresence from different angles.

Marshall Eubanks from Iformata Communications will talk about lessons learned from offering managed telepresence services over many years.

Eric Toperzer from Juniper Networks will talk about the technical and financial challenges of deploying the right resource management capabilities in the IP network to support the telepresence application.

I will cover telepresence from the perspective of a telepresence system manufacturer and focus on the features and functions that make telepresence a unique new way to overcome distance.

I am still looking for an end user - or a European analyst who can cover the end user perspective - for the session. If you have recommendations, please let me know.

The session information is here:
http://www.iec.org/events/2009/bbwf/attendees/schedule_details.asp?sId=2114

Monday, April 13, 2009

Conferencing Service Providers Meet at TeleSpan

This was the first time I was invited to speak at TeleSpan's Future of Conferencing Workshop in Las Vegas. This year’s event gathered participants from Affinity VideoNet, AT Conference, Global Crossing, InterCall, Premiere, Verizon, etc. The vendor community was represented by Compunetix, Polycom, Citrix, RadiSys, etc. There were about 90-100 people on site, and an additional 30 received audio and video over streaming. A very brief impression of the event is here http://www.youtube.com/watch?v=zGxoeHtDMRs&feature=channel_page.

My presentation “HD & Telepresence: Better Quality Audio and Video” was in the morning on Day 1, and I focused on high-definition (super-wideband and full-band) audio and its importance to audio conferencing, video conferencing, and telepresence. I summarized Polycom’s contribution to the development and standardization of new audio codecs (ITU-T G.722.1, G.722.1C, and G.719) and highlighted the main benefits of HD audio: speeding things up (less “what did you say?”), cutting through strong accents, cutting fatigue, and restoring accuracy (“fifty million” or “sixty million?”). I also talked about the benefits of HD video (immersive experience, body language, recognizing people in a large room, less fatigue, and HD content sharing) and then focused on recent advances in video network architecture that allow building scalable and manageable video networks.

Presenting at the beginning of an event is definitely a huge advantage because almost everyone came at some point during Day 1 and Day 2 to ask follow-up questions and discuss industry trends, white papers, and Polycom solutions. The hard copies of the white paper ‘Scalable Infrastructure for Distributed Video’ (http://www.polycom.com/global/documents/whitepapers/wp_scalable_architecture_for_distributed_video.pdf) were quickly gone - which indicates that many CSPs are thinking about rolling out video services - but most of the interest was focused on the Polycom audio technology. Fortunately, we have a solid set of white papers that answer most of the audio questions: ‘The Effect of Bandwidth on Speech Intelligibility’ (http://www.polycom.com/global/documents/whitepapers/effect_of_bandwidth_on_speech_intelligibility_2.pdf), ‘Music Performance and Instruction over High-speed Networks’ (http://www.polycom.com/global/documents/whitepapers/music_performance_and_instruction_over_highspeed_networks.pdf), and ‘G.719 - The First ITU-T Standard for Full-Band Audio’ (to be made publicly available shortly).

The audience also responded to the video network architecture part – CSPs need scalability, redundancy, and failover mechanisms to roll out a ubiquitous video service. All in all, the CSPs were truly excited about both the HD audio and HD video applications.

The president of TeleSpan, Elliot Gold, opened the conference and talked about the increased number of participants compared to 2008. The Sandbox is a new venue for customers to play with new things rather than be shown ready products. TeleSpan hopes that the Sandbox will become a place to announce new conferencing products in the future.

Steven Augustino from Kelley Drye & Warren discussed the implications of the FCC ruling from June 2008 that made audio bridging services subject to the Universal Service Fund (USF). USF is managed by the Universal Service Administrative Company (USAC), http://www.usac.org/. It looks like the impact on the CSP industry is huge because USF fees could be up to 11% of revenues. Web conferencing is reported on a different line of the form and is not subject to USF, while Skype has been very careful to describe its service in a way that does not meet the “interconnected voice” definition, and in this way avoids paying USF.

David Seavers from Aonta talked about security threats. CSPs experience attacks from hackers who try to get control of audio bridges and use their out-dial capabilities to call premium numbers in obscure countries. The service providers offering those premium numbers make money, and often they are the ones who hack into the audio bridges. The CSP industry has to work in concert to combat this problem.

Jonathan Christensen from Skype talked about Skype video and how it fits into existing business conferencing. Skype claims 8% of international calling and estimates its user base at 148M in Europe, 52M in North America, 147M in Asia-Pacific, and 59M in the rest of the world. The higher adoption in Europe and Asia-Pacific is driven by arbitrage even on local calls; adoption in the USA was slower because arbitrage existed mostly on international calls. Jonathan presented statistics on video usage in Skype calls and talked about the newly announced Skype client for iPhone - a native VoIP application in Wi-Fi mode. Even more interestingly, the client will run on the iPod touch – a device that does not make any calls today – and the use case is very compelling.

Emily Magrish from Affinity criticized the CSP industry for not creating ‘cell-phone-like plans for video’ and for not bundling video services with high-speed Internet services – all of that to increase adoption. People want to use video but want to get out of the management business because it is still difficult for them to do it themselves - this is a huge opportunity for CSPs. Toni Alonso supported that: the economic crisis means that companies will need to change the way they work. They have fewer resources. They are also trying to reduce their balance sheets, and managed services are a way to keep conferencing off the balance sheet. Why, she asked, do so few CSPs offer managed services?