Sunday, October 25, 2009

PART 8: ‘TELEPRESENCE INTEROPERABILITY IS HERE!’

The results from the telepresence interoperability demo were discussed on October 7 in the session “Telepresence Interoperability is Here!” http://events.internet2.edu/2009/fall-mm/agenda.cfm?go=session&id=10000758&event=980. Bob Dixon used visual and sound effects (including love songs and Hollywood-style explosions) to explain interoperability to people who are less involved in the topic. His presentation inspired me to write about telepresence interoperability for a less technical and more general audience. (I hope that my series of blog posts has achieved that.) Bob highlighted that this was not only the first multi-vendor telepresence interoperability demonstration but also the first time systems on Internet2, the commodity Internet, and Polycom’s, Tandberg’s, and IBM’s networks successfully connected.

Gabe connected through an HDX video endpoint to the RSS 2000 and played back some key parts of the recording from the interoperability demos on October 6 (http://www.flickr.com/photos/20518315@N00/4015164486/). I was pleasantly surprised by how much information the RSS 2000 captured during the demos. I later found out that Robbie had created a special layout using the MLA application on RMX 2000, and this layout allowed us to see multiple sites in the recording.

Robbie (over video from Ohio State) commented that connecting the telepresence systems was the easier part, while modifying the layouts turned out to be more difficult. He was initially surprised when RMX/MLA automatically associated video rooms 451, 452, and 453 at Ohio State into a telepresence system, but he then used this automation mechanism throughout the interoperability tests.

Jim talked about the need to improve usability.

Gabe talked about monitoring the resources on RMX 2000 during the tests and reported that it never used more than 50% of its resources.

I talked mainly about the challenges to telepresence interoperability (as described in Part 2) and about the need to port some of the unique video functions developed in H.323 into SIP, the protocol used in Unified Communications.

Bill (over video from IBM) explained that his team had been testing video interoperability for a year. The results are used for deployment decisions within IBM as well as for external communication. IBM is interested in more interoperability among vendors.

During the Q&A session, John Chapman spontaneously joined the panel to answer questions about the demo call to Doha and about the modifications of their telepresence rooms to make them feel more like classrooms.

The Q&A session ran over time, and a number of attendees stayed afterwards to continue the discussion with the panelists.

There was a consensus in the room that the telepresence interoperability demo was successful and very impressive. This success proves that standards and interoperability are alive and can connect systems from different vendors running on different networks. The series of tests was also a great teamwork experience in which experts from several independent, sometimes competing, organizations collaborated towards a common goal.

Back to beginning of the article ... http://videonetworker.blogspot.com/2009/10/telepresence-interoperability.html

Friday, October 23, 2009

PART 7: TELEPRESENCE INTEROPERABILITY DEMO

The demo on October 6 was the first immersive telepresence demo at Internet2. Note that Cisco showed their CTS 1000 telepresence system at the previous Internet2 conference; however, that system has only one screen and feels more like an HD video conferencing system than an immersive telepresence system. Also, the Cisco demo was on stage and far away from viewers, while the TPX demo was available for everyone at the conference to experience.

The following multi-codec systems participated in the telepresence interoperability demo:
- Polycom TPX HD 306 three-screen system in the Chula Vista Room, Hyatt Regency Hotel
- Polycom TPX HD 306 three-screen system located in Andover, Massachusetts
- LifeSize Room 100 three-screen system located at OARnet in Columbus, Ohio
- Polycom RPX 200 at iFormata in Dayton, Ohio
- Polycom RPX 400 at IBM Research in Armonk, NY
- Tandberg T3 three-screen system located in Lisbon, Portugal (the afternoon demos were too late for Rui, so Bill connected a T3 system in New York instead)

The systems were connected either to the Polycom RMX 2000 located at Ohio State University in Columbus, Ohio, or to the Tandberg Telepresence Server at IBM Research in Yorktown Heights, NY.

As for the setup in Chula Vista, TPX comes with 6 chairs, and there were an additional 30 chairs arranged in several rows behind the system. There was enough space for people to stand in the back of the room. (http://www.flickr.com/photos/20518315@N00/4014401487/)

I can only share my experience sitting in the TPX system in the Chula Vista Room. I am sure other participants in the demo experienced it a little differently. I was tweeting the step-by-step progress throughout the demos.

The final test plan included both continuous presence scenarios and voice-switching scenarios. Voice switching is a mechanism widely used in video conferencing; the conference server detects who speaks, waits for 2-3 seconds to make sure it is not just noise or a brief comment, and then starts distributing video from that site to all other sites. The twist - when telepresence systems are involved - is that not only one but all 2, 3, or 4 screens that belong to the ‘speaking’ site must be distributed to all other sites. The voice-switched tests worked very well; sites appeared as expected.
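
To make the switching logic concrete, here is a minimal Python sketch of voice-switched distribution for multi-codec sites. The hold-off and the ‘forward every screen of the speaking site’ behavior follow the description above; the class, method, and site names are illustrative assumptions of mine, not any vendor’s API.

```python
# A minimal sketch of voice-switched distribution for multi-codec telepresence
# sites (illustrative names only, not any vendor's API).
import time

HOLD_OFF_SECONDS = 2.5  # wait 2-3 seconds to filter out noise and brief comments


class VoiceSwitcher:
    def __init__(self, sites):
        # sites maps a site name to its list of screen/codec streams,
        # e.g. {"TPX Chula Vista": ["left", "center", "right"], ...}
        self.sites = sites
        self.active_site = None
        self._candidate = None
        self._candidate_since = None

    def on_loudest_speaker(self, site, now=None):
        """Call whenever the audio mixer reports the currently loudest site."""
        now = time.monotonic() if now is None else now
        if site == self.active_site:
            self._candidate = None
            return
        if site != self._candidate:
            self._candidate, self._candidate_since = site, now
            return
        if now - self._candidate_since >= HOLD_OFF_SECONDS:
            self._switch_to(site)

    def _switch_to(self, site):
        self.active_site = site
        self._candidate = None
        # The telepresence twist: all 2, 3, or 4 screens of the speaking site
        # are forwarded to every other site, not just a single stream.
        for receiver in self.sites:
            if receiver != site:
                print(f"forward {self.sites[site]} from '{site}' to '{receiver}'")
```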

Continuous presence – also a technology used in video conferencing – allows the conference server to build customized screen layouts for each site. The layouts can be manipulated by management applications; for example, RMX Manager and MLA manipulate the layouts on RMX 2000. (http://www.flickr.com/photos/20518315@N00/4014401683/)
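
For comparison with the voice-switched sketch above, a continuous-presence server composes a custom layout for each receiving site out of the other sites’ streams. The sketch below uses the same illustrative naming assumptions and only shows the cell selection; real layout control through RMX Manager or MLA is of course much richer.

```python
# A sketch of continuous-presence cell selection: each receiving site gets a
# custom layout built from all remote streams (its own streams excluded).
# Site and stream names are illustrative.
def build_continuous_presence_layouts(sites):
    """sites: dict mapping site name -> list of screen/codec streams."""
    layouts = {}
    for receiver in sites:
        layouts[receiver] = [(sender, stream)
                             for sender, streams in sites.items() if sender != receiver
                             for stream in streams]
    return layouts


demo_sites = {
    "TPX Chula Vista": ["left", "center", "right"],
    "LifeSize Room 100": ["left", "center", "right"],
    "RPX 400 Armonk": ["1", "2", "3", "4"],
}
for receiver, cells in build_continuous_presence_layouts(demo_sites).items():
    print(f"{receiver} sees {len(cells)} cells")
```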

TPX performed flawlessly. On October 5, most calls were at 2 Mbps per screen due to some bottlenecks when crossing networks. This issue was later resolved, and on October 6 TPX connected at 4 Mbps per screen (a total of 12 Mbps). TPX was using the new Polycom EagleEye 1080 HD cameras that support 1080p @30fps and 720p @60fps. We used 720p @60fps, which provides additional motion smoothness.

About quality: The quality of multipoint telepresence calls on RMX 2000 was excellent. A video recorded in Chula Vista is posted at http://www.youtube.com/watch?v=XpfNmJtAtVg. In a few test cases, we connected the TPX systems directly to TTPS, and the quality decreased noticeably.

About reliability: In addition to the failure during the first test (described in Part 5), TTPS failed during the morning demo on October 6 (I was tweeting throughout the demo and have the exact time documented here http://twitter.com/StefanKara/status/4633989195). RMX 2000 performed flawlessly.

About layouts: Since TTPS is advertised as a customized solution for multipoint telepresence, I expected that it would handle telepresence layouts exceptionally well. Throughout the demos, Robbie Nobel used the MLA application to control RMX 2000 while Bill Rippon controlled TTPS. In summary, RMX 2000 handled telepresence layouts better than TTPS. The video http://www.youtube.com/watch?v=XpfNmJtAtVg shows a layout created by RMX 2000 – the T3 system is connected to RMX through TTPS. In comparison, when the telepresence systems were connected directly to TTPS, even the best layout was a patchwork covering a small portion of the TPX screens. (http://www.flickr.com/photos/20518315@N00/4014401367/) I understand that due to the built-in automation in TTPS, the user has limited capability to influence the layouts. While MLA also includes layout automation, it still allows the user to modify layouts and select the best layout for the conference.

About capacity: TTPS is a 16-port box and each codec takes a port, so it can connect a maximum of five 3-screen systems or four 4-screen systems. Bill therefore could not connect all available systems to TTPS – the server simply ran out of ports. In comparison, RMX 2000 had 160 resources and each HD connection took 4 resources, so RMX 2000 could connect a maximum of 40 HD codecs, i.e., thirteen 3-screen systems or ten 4-screen systems. RMX therefore never ran out of capacity during the demo.
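
For reference, the port and resource arithmetic above can be checked with a few lines of Python; the figures (16 TTPS ports, 160 RMX resources, 4 resources per HD connection) are the ones quoted in this post.

```python
# Capacity check using the figures quoted above.
TTPS_PORTS = 16          # one port per codec
RMX_RESOURCES = 160
RESOURCES_PER_HD = 4

rmx_hd_ports = RMX_RESOURCES // RESOURCES_PER_HD   # 40 HD codecs

for screens in (3, 4):
    print(f"{screens}-screen systems: TTPS fits {TTPS_PORTS // screens}, "
          f"RMX 2000 fits {rmx_hd_ports // screens}")
# 3-screen systems: TTPS fits 5, RMX 2000 fits 13
# 4-screen systems: TTPS fits 4, RMX 2000 fits 10
```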

The morning and lunch interoperability demos were recorded on a Polycom RSS 2000 recorder @ IP address 198.109.240.221.

We ran three interoperability demos during the morning, lunch, and afternoon conference breaks. In addition, we managed to squeeze in two additional demos that highlighted topics relevant to Internet2 and the education community. In the first one, we connected the TPX in Chula Vista to the Polycom RPX 218 system at Georgetown University in Doha, Qatar on the Arabian Peninsula, and had a very invigorating discussion about the way Georgetown uses telepresence technology for teaching and learning. John Chapman from Georgetown University and Ardoth Hassler from the National Science Foundation joined us in the Chula Vista room. If you are interested in that topic, check out the joint Georgetown-Polycom presentation at the spring’09 Internet2 conference http://events.internet2.edu/2009/spring-mm/sessionDetails.cfm?session=10000467&event=909. The discussion later turned to using telepresence technology for grant proposal review panels.

Another interesting demo was meeting Scott Stevens from Juniper Networks over telepresence and discussing with him how Juniper’s policy management engine interacts with Polycom video infrastructure to provide high quality of experience for telepresence.

Throughout all interoperability and other demos, the Internet2 network performed flawlessly – we did not notice any packet loss, and jitter was very low.

Stay tuned for Part 8 with a summary of the test and demo results … http://videonetworker.blogspot.com/2009/10/part-8-telepresence-interoperability-is.html

PART 6: TELEPRESENCE INTEROPERABILITY LOGISTICS

Bringing a TPX to San Antonio required a lot of preparation. We had to find a room in the conference hotel, the Hyatt Regency, that had enough space for the system and for additional chairs for conference attendees to see the demo.

Another important consideration for the room selection was how close it was to the loading dock. The TPX comes in 7 large crates, and we did not want to move them all over the hotel. The truck also had to fit the Hyatt’s loading dock.

It was critical to have the IP network on site up and running before the TPX system could be tested. Usually a lot of the work and cost is related to bringing a high-speed network connection to the telepresence system. This was not an issue at the Internet2 conference since I2 brings 10 Gbps to each conference site. We needed only about 12 Mbps (roughly 0.1% of that) for TPX.

Timing was critical too. The Polycom installation team had to do the installation over the weekend, so that everything would work on Monday morning. The room that we identified was Chula Vista on the lobby level. It was close to the loading dock and had enough space. The only issue was that the room was booked for another event on Wednesday, so TPX had to be dismantled on Tuesday, right after the last interoperability demo finished at 4:30pm.

Stay tuned for Part 7 about the telepresence interoperability demo at the Internet2 Conference on October 6, 2009 … http://videonetworker.blogspot.com/2009/10/part-7-telepresence-interoperability.html

Thursday, October 22, 2009

TELEPRESENCE INTEROPERABILITY PART 5: TANDBERG TELEPRESENCE SERVER

At this point, the team was comfortable with the functionality of the RMX, TPX, and Room 100. Adding another infrastructure component – the Tandberg Telepresence Server – to the test bed increased complexity, but that was a risk we had to take in order to evaluate T3’s capabilities. It was also my first opportunity to see TTPS in action, and I was curious to find out what it could do. I knew that TTPS was a 16-port MCU and that it had some additional capabilities to support multi-screen telepresence systems. But I still did not understand what functionality differentiated it from a standard MCU.

The team’s first experiences were not great. The Tandberg Telepresence Server crashed during the first test in which it participated. It also had problems in what is called 'Room Switched Continuous Telepresence' mode: when a T3 site was shown full screen on TPX and someone in the LifeSize Room 100 started talking, LifeSize was not switched to full screen on TPX but remained in a small preview window at the bottom of the screen with its border flashing red. We saw this behavior again during the interoperability demos on October 6 http://www.youtube.com/watch?v=GCwUWfgw9ig.

However, as we worked with it, we found that cascading from an RMX 2000 to TTPS worked quite well. Gabe or Robbie configured TTPS as a three-screen telepresence system on RMX, while Bill configured RMX as a three-screen telepresence system on TTPS. And with every test, interoperability got better…

Stay tuned for Part 6 about the logistics around bringing a telepresence system to an industry event … http://videonetworker.blogspot.com/2009/10/part-6-telepresence-interoperability.html

PART 4: TELEPRESENCE INTEROPERABILITY TESTS

Then things started happening very fast. Tests were scheduled every week, sometimes twice a week, throughout September. Gabe and Robbie learned how to use the Multipoint Layout Application (MLA) that controls telepresence layouts on RMX 2000 and found out that if you name the codecs sequentially, e.g. Room451, Room452, Room453, RMX/MLA automatically recognizes that these codecs belong to the same multi-codec telepresence system.
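
Here is a minimal sketch of how such a sequential-naming heuristic could group codecs into multi-codec rooms. It is only my reading of the behavior described above; the function and the regular expression are illustrative assumptions, not the actual RMX/MLA logic.

```python
# A sketch of the sequential-naming heuristic: codec names that share a prefix
# and end in consecutive numbers are grouped into one multi-codec room.
# This illustrates the idea only, not the actual RMX/MLA implementation.
import re


def group_codecs(names):
    parsed = []
    for name in names:
        m = re.match(r"^(.*?)(\d+)$", name)
        if m:
            parsed.append((m.group(1), int(m.group(2)), name))
    parsed.sort()

    rooms, current = [], []
    for prefix, number, name in parsed:
        if current and (prefix, number) == (current[-1][0], current[-1][1] + 1):
            current.append((prefix, number, name))
        else:
            if len(current) > 1:
                rooms.append([n for _, _, n in current])
            current = [(prefix, number, name)]
    if len(current) > 1:
        rooms.append([n for _, _, n in current])
    return rooms


print(group_codecs(["Room451", "Room452", "Room453", "Lobby1"]))
# [['Room451', 'Room452', 'Room453']]
```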

The only setback was that we could not find a way around the ‘filmstrip’ generated by the Tandberg T3. Whether we connected Rui’s T3 directly to TPX or to Room 100 (point-to-point calls) or connected T3 to RMX 2000, T3 always sent a ‘filmstrip’ to third-party systems (http://www.flickr.com/photos/20518315@N00/4015164378/). The only advice we got was that we needed a Tandberg Telepresence Server (TTPS) to reconstruct the original three images. Leveraging endpoints to sell infrastructure is not a new idea, but with all due respect to Tandberg, forcing customers to buy a Tandberg Telepresence Server just to get the original images generated by each of the three codecs in T3 is borderline proprietary, no matter whether they use H.323 signaling or not.

In my blog post http://videonetworker.blogspot.com/2009/08/curious-story-of-resource-management-in.html I had already argued that a standard conference server (MCU) can handle telepresence calls and there is no need for a separate Telepresence Server. I looked at the comments following the post, and two of them (from Ulli and from Jorg) called for more products similar to the Tandberg Telepresence Server from other vendors. Now that I have some experience with TTPS, I am trying to imagine what would happen if Polycom and LifeSize decided to follow Tandberg’s example and develop TTPS-like servers; let’s call them Polycom Telepresence Server (PTPS) and LifeSize Telepresence Server (LSTPS). In this version of the future, the only way for telepresence systems from Polycom, LifeSize, and Tandberg to talk would be by cascading the corresponding Telepresence Servers. Calls would go TPX-PTPS-TTPS-T3 or TPX-PTPS-LSTPS-LS Room 100, i.e., we would be looking at double transcoding plus endless manual configuration of cascading links. I really believe this separate-server approach represents a step backward on the road to interoperability.

Since we had no access to TTPS, Bob Dixon asked Bill Rippon from IBM Research if they could help. I have known Bill since January 2003. At the time, he was testing SIP telephones for a deployment at IBM Palisades Executive Briefing Center and hotel. I was product manager for SIP telephones at Siemens, and naturally very interested in getting the phones certified… Anyway, it was great to hear from Bill again. It turned out Bill had access not only to TTPS but also to an impressive collection of telepresence and other video systems, including Polycom’s largest telepresence system, a 4-screen RPX 400 in Armonk, NY.

Stay tuned for Part 5 about the Tandberg Telepresence Server … http://videonetworker.blogspot.com/2009/10/telepresence-interoperability-part-5.html

Wednesday, October 21, 2009

TELEPRESENCE INTEROPERABILITY PART 3: HERDING CATS

I hoped that summer’09 would be quieter than the extremely busy spring conference season, and I had great plans to write new white papers. But on June 29, Bob Dixon asked me if Polycom could take the lead and bring a telepresence system to the Internet2 meeting in San Antonio. He needed a real telepresence system on site to run live telepresence interoperability demos. I agreed in principle but asked for time to check if we could pull it off logistically. Installing any of the larger Polycom RealPresence Experience (RPX) systems was out of the question – RPX comes with walls, floor, and ceiling, and it was not feasible to install an RPX for just 2 days of demos. The 3-screen TPX system was much more appropriate. I will discuss logistics in more detail in Part 6.

While I was gathering support for the idea within Polycom, Bob Dixon, Gabe Moulton, and Robbie Nobel (Gabe and Robbie are with Ohio State University) started tests with the LifeSize Room 100 systems and the RMX 2000 they had at OARnet http://www.oar.net/. But they needed a TPX system similar to the one that would be installed in San Antonio. The best candidate was the North Church TPX in the Polycom office in Andover, Massachusetts, and I started looking for ways to support the test out of the Andover office.

In the meantime, Bob continued looking for other participants in the interoperability demo. Teliris declined to participate. That was understandable since they could only connect through a gateway, with all the negative consequences of using a gateway.

Cisco had been making efforts to position themselves as a standards-compliant vendor in the Internet2 community and promised to show up for the test; they even talked about specific plans to upgrade their RadVision OEM gateway to beta software that would allow better interoperability. However, when the tests were about to start in late August, they suddenly withdrew. I guess at this point they had made the decision to acquire Tandberg, and this had an impact on their plans for RadVision.

Tandberg seemed uncertain whether to participate or not. Initially they expressed interest but, in the end, they opted not to participate. Given Tandberg’s history of actively championing interoperability, their decision not to participate in this forum seems inexplicable. Some have speculated that their decision was colored by ongoing acquisition talks with Cisco. That may or may not be true, but it will be interesting to observe whether Tandberg’s enthusiasm for standards compliance dampens once the Cisco acquisition is finalized.

Anyway, we did not get any direct support from Tandberg, and we really needed access to a T3 room to expand the tests. That is when the Megaconference email distribution list came in handy. The list (megacon@lists.acs.ohio-state.edu) is a great tool for finding video resources worldwide, so on August 19, I sent a note asking for people interested in telepresence interoperability. Rui Ribeiro from FCCN in Portugal responded enthusiastically. He had a T3 system in Lisbon and wanted to participate. Due to the 5-hour time difference to the East Coast, including Lisbon in the tests meant testing only in the morning, which is a busy time for both people and telepresence rooms … but we needed Rui.

We scheduled the first three-way test – with Polycom, LifeSize, and Tandberg systems – for the first week of September. Everyone was available and rooms were booked, but it was not meant to happen. On the morning of the test day, my colleague Mark Duckworth, who was scheduled to support the test out of the TPX room in Andover, had a motorcycle accident and ended up in the hospital. The team was in shock and had to reschedule the test for the following week. Mark is doing well, and participated in the interoperability tests between doctors’ visits.

Stay tuned for Part 4 about the telepresence interoperability tests in summer 2009 … http://videonetworker.blogspot.com/2009/10/part-4-telepresence-interoperability.html

Monday, October 19, 2009

PART 2: TELEPRESENCE INTEROPERABILITY CHALLENGES

The challenges around telepresence interoperability are related to both logistics and technology. Logistics are probably the bigger problem. Vendors usually conduct interoperability tests by gathering at an interoperability event, bringing their equipment to a meeting location, and running test plans with each other. This is the way IMTC manages H.323 interoperability tests and also the way SIPit http://www.sipit.net/ manages tests based on the SIP protocol. While developers can pack their new video codec in a suitcase and travel to the meeting site, multi-codec telepresence systems are large and difficult to transport. A full-blown telepresence system comes on a large truck and takes substantial time to build – usually a day or more. Therefore, bringing telepresence systems to interoperability test events is out of the question.

An alternative way to test interoperability is for vendors to purchase each other’s equipment and run tests in their own labs. While this is an acceptable approach for $10K video codecs, it is difficult to replicate with telepresence systems that cost upwards of $200K. One could ask, “Why don’t you just connect the different systems through the Internet for tests?” The issue is that telepresence systems today run on fairly isolated segments of the IP network - mostly to guarantee quality but also due to security concerns - and connecting these systems to the Internet is not trivial. It requires rerouting network traffic and the use of video border proxies to traverse firewalls.

The technology challenges require more detailed explanation. Vendors like HP and Teliris run closed proprietary telepresence networks, and their telepresence systems cannot talk directly to other vendors’ systems. There are of course gateways that can be used for external connectivity, but gateways mean transcoding, i.e., decreased quality, limited capacity, and decreased reliability of end-to-end communication. For those not familiar with the term ‘transcoding’, it is basically translation from one video format into another. Telepresence systems send and receive HD video at 2-10 megabits per second (Mbps) for each screen/codec in the system, and all that information has to go through the gateway and be translated into a format that standards-based systems can understand.

Some telepresence vendors state that they support standards such as H.323 or SIP (Session Initiation Protocol). However, standards compliance is not black-and-white, and telepresence systems can support standards and still not allow good interoperability with other vendors’ systems. When Cisco introduced its three-screen CTS 3000, it made the primary video codec multiplex three video streams – its own and the two captured by the other two codecs – into a single stream that traversed the IP network to the destination’s primary codec. Third-party codecs cannot understand the multiplexed bit stream, and that is basically why you cannot connect a Polycom, LifeSize, or Tandberg telepresence system to Cisco CTS. Note that Cisco uses SIP for signaling and therefore claims standards compliance; however, the net result is that third-party systems cannot connect. If you decide to spend more money and buy a gateway from Cisco, you can connect to third-party systems, but at a decreased video and audio quality that is far from the telepresence promise of immersive communication and replacement of face-to-face meetings. The discrepancy between the ‘standards compliance’ claim and the reality that its systems just do not talk to any other vendor’s has haunted Cisco since it entered the video market.

When Tandberg introduced its three-screen telepresence system T3, it made another technological decision that impacts interoperability. T3 combines the video streams from its three codecs (one per screen) into one stream, and any non-Tandberg system that connects to T3 receives what we call a ‘filmstrip’, i.e., three small images next to each other (http://www.flickr.com/photos/20518315@N00/4015164378/). The ‘filmstrip’ covers maybe one-third of one screen (or one-ninth of the total screen real estate of a three-screen system). So, yes, you can connect to T3, but you lose the immersive, face-to-face feeling that is expected of a telepresence system. Note that T3 uses standard H.323 signaling to communicate with other systems, so it is standards-compliant; however, the result is that if you want to see the three images from T3 on full screens, you have to add an expensive Tandberg Telepresence Server (TTPS). I will discuss TTPS in more detail in Parts 4 and 5.
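
As a quick sanity check of the screen real-estate figures above (assuming, as stated, that the filmstrip occupies roughly a third of one screen on a three-screen system):

```python
# The filmstrip covers roughly a third of one screen on a three-screen system.
screens = 3
strip_share_of_one_screen = 1 / 3
share_of_total = strip_share_of_one_screen / screens
print(f"filmstrip uses about {share_of_total:.0%} of the total display area")  # ~11%, i.e. one-ninth
```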

To come back to my original point, due to a range of logistical and technological issues, establishing telepresence interoperability is quite a feat that requires serious vendor commitment and a lot of work across the industry.

Stay tuned for Part 3 about the organizational issues around telepresence interoperability testing … http://videonetworker.blogspot.com/2009/10/telepresence-interoperability-part-3.html

PART 1: WHY TELEPRESENCE INTEROPERABILITY?

On October 6, 2009, Bob Dixon from OARnet moderated a successful telepresence interoperability demonstration at the Fall Internet2 meeting in San Antonio, Texas. It included systems from Polycom, LifeSize, and Tandberg, and the short version of the story is in the joint press release http://finance.yahoo.com/news/Polycom-Internet2-OARnet-iw-1109370064.html?x=0&.v=1. While the memories from this event are still very fresh, I would like to spend some time and reflect on the long journey that led to this success.

First of all, why is telepresence interoperability so important?

The video industry is built on interoperability among systems from different vendors, and customers enjoy the ability to mix and match elements from Polycom, Tandberg, LifeSize, RadVision and other vendors in their video networks. As a result, video networks today rarely have equipment from only one vendor. It was therefore natural for the video community to strive for interoperability among multi-screen/multi-codec telepresence systems.

Most experts and visionaries in our industry subscribe to the idea that visual communication will become as pervasive as telephony is today, and it has been widely recognized that the success of the good old Public Switched Telephone Network (PSTN) is based on vendors adhering to standards. Lack of interoperability, on the other hand, leads to inefficient network implementations with media gateways that transcode (translate) the digital audio and video information from one format to another, thus increasing delay and decreasing quality. While gateways exist in voice networks, e.g., between the PSTN and Voice over IP networks, their impact on delay and quality is far smaller than the impact of video gateways. Therefore, interoperability of video systems – telepresence and others – is even more important than interoperability of voice systems.

The International Multimedia Teleconferencing Consortium (IMTC) has traditionally driven interoperability based on the H.323 protocol. At the IMTC meeting in November’08 http://www.imtc.org/imwp/download.asp?ContentID=14027, the issue came up in three of the sessions, and there were heated discussions about how to tackle telepresence interoperability. The conclusion was that IMTC had expertise in signaling protocols (H.323) but not in the issues around multi-codec systems.

In February’09, fellow blogger John Bartlett wrote on NoJitter about the need for interoperability to enable business-to-business (B2B) telepresence and I replied on Video Networker http://videonetworker.blogspot.com/2009/03/business-to-business-telepresence.html, basically saying that proprietary mechanisms used in some telepresence systems create obstacles to interoperability.

In April’09, Bob Dixon from Ohio State and OARnet invited all telepresence vendors to the session ‘Telepresence Perspectives and Interoperability’ at the Spring Internet2 conference http://events.internet2.edu/2009/spring-mm/agenda.cfm?go=session&id=10000509&event=909. He chaired the session and, in conclusion, challenged all participating vendors to demonstrate interoperability of generally available products at the next Internet2 event. All vendors but HP were present. Initially, everyone agreed that this was a great idea. Using Internet2 to connect all systems would allow vendors to test without buying each other’s expensive telepresence systems. Bandwidth would not be an issue since Internet2 has so much of it. And since the interoperability effort would be driven by an independent third party, i.e., Bob Dixon, there would be no competitive fighting.

In June’09, I participated in the session ‘Interoperability: Separating Myth from Reality’ at the meeting of the Interactive Multimedia & Collaborative Communications Alliance (IMCCA) during InfoComm in Orlando, Florida http://www.infocommshow.org/infocomm2009/public/Content.aspx?ID=984&sortMenu=105005, and telepresence interoperability was on top of the agenda.

During InfoComm, Tandberg demonstrated a connection between their T3 telepresence system and a Polycom RPX telepresence system through the Tandberg Telepresence Server. The problem with such demos is always that you do not know how much of it is real and how much is what we call ‘smoke and mirrors’. For those not familiar with this term, ‘smoke and mirrors’ refers to demos that are put together by modifying products and using extra wires, duct tape, glue, and other high-tech tools just to make it work for the duration of the demo. The main question I had around this demo was why a separate product like the Tandberg Telepresence Server was necessary. Couldn’t we just use a standard MCU with some additional layout control to achieve the same or even better results? To answer these questions, we needed an independent interoperability test. Ohio State, OARnet, and Internet2 would be the perfect vehicle for such a test; they are independent and have a great reputation in the industry.

Stay tuned for Part 2 about the challenges to telepresence interoperability … http://videonetworker.blogspot.com/2009/10/part-2-telepresence-interoperability.html

Thursday, October 1, 2009

Cisco to Acquire Tandberg

Cisco announced today that it will acquire Tandberg, and this will have a significant impact on the video communications market. It will reduce competition and limit customers’ choices, especially in the telepresence space. It will also hurt RadVision, which now fills the gap in Cisco’s video infrastructure portfolio.

I am, however, more concerned about the standards compliance that has been the pillar of the video communications industry for years. Tandberg and Polycom worked together in international standardization bodies such as ITU-T and in industry consortiums such as IMTC to define standard mechanisms for video systems to communicate.

Cisco, on the other hand, is less interested in standards and considers proprietary extensions a way to gain competitive advantage. The concern of the video communications industry right now should be that the combined company will be so heavily dominated by Cisco that standards will become the last priority, far behind integrating Tandberg products with Cisco Call Manager and WebEx.

It is telling that both Tandberg and Cisco have declined to participate in interoperability events over the last few months.