Cloud computing is defined as ‘virtualization of computing assets delivered on demand over the IP network’. It promises availability and scalability for applications ranging from storage to collaboration. Clouds are particularly popular because the concept is easier to grasp than previous attempts to define similar services through Application Service Providers (ASPs) and Software as a Service (SaaS).
The Cloud is a more general concept that includes not only SaaS but also storage, platform, and infrastructure as a service. Clouds are in a better position to deliver on the promise of interactivity. While slow networks in the past made the user experience with ASPs quite negative, the better networks available today allow for fast response times and increased interactivity.
Analysts are excited about cloud services and see above-average growth: while average IT market growth is expected to be 4% per year until 2013, IT cloud services are expected to grow 25% over the same period. The current bidding war between HP and Dell for cloud storage technology provider 3PAR is a good example of how hot this market segment has become.
Evolution of Service Architectures
In the legacy approach, each enterprise application runs on its own server (or servers) residing in one of the enterprise’s offices. This leads to inefficient use of space and energy (power and cooling) and a substandard user experience. In the next stage of the evolution, all servers are collocated in a data center where they can share space and power. In the third stage, services are provided by the Cloud, which can be within the enterprise (‘private cloud’) or outside the enterprise (‘public cloud’).
The term ‘virtualization’ is often used in relation to Clouds, and it is important to clarify virtualization’s role. Virtualization saves money by increasing server utilization, i.e. reducing the number of servers (hardware) necessary to support applications. Virtualization can be used in a traditional data center or in the Cloud. In both environments, virtualization reduces the hardware necessary to run enterprise applications and has a very strong positive financial and environmental impact.
Unified Collaboration
Unified collaboration combines a variety of communication tools – voice, video, email, presence, IM, etc. – into a seamless user experience and into workflows, through a single user interface. Since UC installations are far past pilot deployments and are now being rolled out across large organizations, scalability is an important requirement.
Global teams span the entire world, and different time zones do not allow everyone on the team to participate live in all collaboration sessions. Audio, video, and shared content must therefore be stored and streamed. This leads to the requirement for efficient and scalable storage.
Accessibility of UC applications has two sides. First, users should be able to access them from anywhere, not just from the office but also from remote locations. Second, any device should provide access, including computers, telephones, appliances such as personal and group video systems, and even immersive telepresence systems.
To meet these UC requirements, UC architectures must follow IT architectures toward the Cloud.
Bandwidth Requirements
UC applications, such as voice and video, require a higher quality of service (QoS) than applications such as email, scheduling, or management. QoS is defined by bandwidth, latency, packet loss, and jitter. While there are mechanisms in place to combat packet loss and jitter, bandwidth remains the most important resource necessary to support voice and video collaboration applications.
If multiple systems have to be connected in a multi-point conference, the traffic quickly grows and may overwhelm the Cloud. Cloud throughput is therefore critical for successful deployment of voice and video collaboration applications. Recent advances in video compression technology, in particular Polycom’s implementation of the H.264 High Profile for real-time video, allow for ‘thinner’ connections between the customer premise and the Cloud without sacrificing the quality of experience.
In general, voice transmission requires less bandwidth than video. Even the highest audio quality does not require more than 128 kbps per channel, and this is usually not an issue for the interface between the customer premise and the cloud service provider.
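To put some rough numbers on this, here is a minimal back-of-the-envelope sketch of the premise-to-Cloud traffic generated by a multipoint conference. The per-stream bit rates and the overhead factor are illustrative assumptions (not Polycom product figures), and the model assumes each local endpoint exchanges a single video and audio stream with a conference bridge hosted in the Cloud.

```python
# Rough estimate of premise-to-Cloud bandwidth for a multipoint conference.
# All bit rates below are illustrative assumptions; actual rates depend on
# codec, profile, resolution, and packetization overhead.

VIDEO_KBPS = 1024    # assumed per-endpoint video stream (e.g. H.264 High Profile HD)
AUDIO_KBPS = 128     # upper bound for a high-quality audio channel (see above)
OVERHEAD = 1.10      # assumed ~10% RTP/UDP/IP header overhead

def premise_to_cloud_kbps(local_endpoints: int) -> float:
    """Bandwidth needed in each direction on the premise-to-Cloud link when the
    conference bridge sits in the Cloud and every local endpoint sends and
    receives one video + one audio stream."""
    per_endpoint = (VIDEO_KBPS + AUDIO_KBPS) * OVERHEAD
    return local_endpoints * per_endpoint

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(f"{n:>2} local endpoints: ~{premise_to_cloud_kbps(n) / 1000:.1f} Mbps each way")
```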
Security Requirements
Numerous surveys of CIOs and IT administrators have shown that security is the leading concern around deploying Cloud services. With data applications, hackers try to capture and copy the customer data. With real-time collaboration applications, such as voice and video, hackers try to redirect and record voice and video calls.
There is currently a fairly robust security framework for user authentication, authorization, and media encryption – in both SIP and H.323 environments – that can be deployed to prevent interception of voice and video calls at the interface between the customer and the service provider (SP). However, this security framework has to be reevaluated and extended to cover the new security threats that come with new cloud service use cases.
Many industry experts believe that cloud services will lead to improved security due to the centralization of data and the increase in security-focused resources. SPs are able to devote resources to solving security issues that many customers cannot afford to solve themselves.
Availability Requirements
In the cloud services scenario, a lot of the infrastructure functionality that today resides on the customer premise will be moved to the Cloud and become a shared resource among enterprises. The availability of these resources is of paramount importance to the success of voice and video services in the Cloud.
One successfully deployed approach to increased availability (and scale) is the use of a redundant resource management application that controls a pool of multipoint conferencing resources in the network.
To make a pool of conference servers behave as one huge conference server, the resource management application tracks incoming calls, routes them to the appropriate resource (based, for instance, on available server resources but also on available bandwidth to that server’s location), and automatically creates cascading links if a conference overflows to another server. If the conference is prescheduled, the resource management application can select a conference server that has sufficient resources to handle the number of participants at the required video quality (bandwidth). Overflow situations are probable with ad-hoc conferences, where participants join spontaneously without any upfront reservation of resources.
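A simplified sketch of this selection logic is shown below; the data fields, capacity checks, and the ‘prefer the least-loaded server’ policy are hypothetical simplifications for illustration, not the actual product algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConferenceServer:
    name: str
    free_ports: int           # remaining participant slots on this server
    free_bandwidth_kbps: int  # available bandwidth toward this server's location
    in_service: bool = True   # False when the server is "busied out" for maintenance

def select_server(servers: List[ConferenceServer],
                  participants: int,
                  kbps_per_participant: int) -> Optional[ConferenceServer]:
    """Pick a server that can host the whole conference at the required quality.
    Returns None if no single server fits, i.e. the resource manager would have
    to cascade the conference across servers."""
    needed_kbps = participants * kbps_per_participant
    candidates = [s for s in servers
                  if s.in_service
                  and s.free_ports >= participants
                  and s.free_bandwidth_kbps >= needed_kbps]
    if not candidates:
        return None  # overflow: create cascading links to another server
    # Simple load-balancing policy: prefer the server with the most free capacity.
    return max(candidates, key=lambda s: s.free_ports)
```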
The resource management application runs on two servers to ensure 100% redundancy and auto-failover. It is designed to provide uninterrupted service by routing calls around failed or busy conference servers. It also allows administrators to “busy out” media servers for maintenance activities while still providing an ‘always available’ experience from the Cloud user’s point of view. The system can grow gradually from small deployments of 1-2 conference servers to large deployments with many geographically dispersed conference servers, based on the dynamic needs of growing organizations. System administrators can monitor daily usage and plan the expansion as necessary. This approach also provides a centralized mechanism to deploy a front-end application that controls and monitors conferencing activities across all conference servers.
The resource management application also serves as a load balancer in this scenario; that is, it distributes the conference load over a group of servers, ensuring that one server is not oversubscribed while another is underutilized. The larger the resource pool, the more efficient the load balancing function becomes. This is very important to Cloud service providers, who can offer conference services globally by using the resource management application and placing conference servers in central points of their networks. More approaches to increased availability and scale are discussed in the paper ‘Polycom UC Intelligent Core: Scalable Infrastructure for Distributed Video’.
Bringing Collaboration and Clouds Closer
The trend towards cloud services is driving both technology and business model changes.
On the technology side, UC technology providers have to make significant changes to the architecture of voice and video applications to better align with the architecture of Clouds. Reducing complexity in the infrastructure by pushing it to the endpoints is a viable approach, although the impact of more complex endpoint implementations on the user experience is still being evaluated.
Cloud service providers face challenges of their own. Clouds today are designed with data processing in mind, and throughput (bandwidth to and from the Cloud) and quality of service (latency, for example) are not at the level required for real-time interaction. Cloud providers will therefore need to increase throughput for real-time applications and develop new service pricing to accommodate the specifics of real-time collaboration.
Wednesday, March 17, 2010
Will the US Stimulus Package Lead to Wider Video Adoption?
When I first heard about the priorities of the US stimulus package (the official name is the American Recovery and Reinvestment Act, or ARRA), I was very hopeful that it would drastically improve the broadband infrastructure and pave the way for wider adoption of video communication across the United States. Video – and to a lesser extent wideband audio – requires quite a lot of bandwidth combined with some quality of service requirements: for example, packet loss should not exceed 5%, jitter should not exceed 40ms, and latency … well, latency is negotiable, but really good real-time interaction calls for a maximum of 250ms end-to-end.
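Expressed as a simple check (using the thresholds quoted above; real deployments may apply stricter criteria), the admission logic looks roughly like this:

```python
# Checks a measured network path against the real-time video thresholds
# mentioned above. Purely illustrative; products use their own criteria.

def path_supports_video(packet_loss_pct: float, jitter_ms: float, latency_ms: float) -> bool:
    return packet_loss_pct <= 5.0 and jitter_ms <= 40.0 and latency_ms <= 250.0

print(path_supports_video(packet_loss_pct=1.2, jitter_ms=15, latency_ms=180))  # True
print(path_supports_video(packet_loss_pct=0.5, jitter_ms=10, latency_ms=400))  # False: latency too high
```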
I live in a large city that is part of a large metropolitan area of 5-6 million people, and I have choices among cable, xDSL, FTTH, etc. for high-speed access to the IP network. In addition, Polycom’s office is not far away, and once I connect to the corporate network, I can use much faster and more predictable links to connect to other Polycom offices around the world. But what if I lived in a remote rural area? What if I could only get a modem or satellite connection, or connect through the packet service of a mobile network? I would not be able to use video communication – at least not at a quality level that makes it useful – and even wideband audio would be a challenge.
A huge part of the US population cannot use video communication because the broadband access network simply does not support this application, and the stimulus money spent on broadband initiatives should improve the situation. Wouldn’t it be great to allow patients at remote locations to reach the best specialists over video, and rural schools to connect to world-class education institutions such as the Manhattan School of Music, which teaches music over advanced audio-video technology?
But how does the stimulus package apply to broadband access? The National Telecommunications and Information Administration (NTIA) established the Broadband Technology Opportunities Program (BTOP) which makes available grants for deploying broadband infrastructure in ‘unserved’ and ‘underserved’ areas in the United States, enhancing broadband capacity at public computer centers, and promoting sustainable broadband adoption projects. The Rural Utilities Service (RUS) has a program called BIP (Broadband Initiatives Program); it extends loans, grants, and loan/grant combinations to facilitate broadband deployment in rural areas. When NTIA or RUS announce a Notice of Funds Availability (NOFA), there is a lot of excitement in the market.
I am actually less interested in the logistics of fund distribution and more concerned about the ‘broadband service’ definition used in all NOFA documents. It originates from the Federal Communications Commission (FCC) and stipulates that ‘broadband service’ is everything above 768 kilobits per second downstream (i.e. from service provider to user) and 200 kilobits per second upstream (i.e. from user to service provider). Real-time video requires symmetric bandwidth, although video systems adjust the audio and video quality depending on the available bandwidth in each direction. At the minimum ‘broadband service’ level defined above, the user could see acceptable video quality coming from the far end but would only be able to send low-quality video to the far end.
I understand that when the broadband service definition was discussed at the FCC, the wire-line companies wanted higher limits, in line with what cable and xDSL technology can support, while wireless companies wanted far lower limits, like the ones adopted, so that they could play in broadband access as well. The FCC decided to set the bar low enough for everyone to be able to play while allowing competition in offering higher speeds. There is a fair amount of skepticism that this model will get us to higher speeds than the defined minimums. Several organizations, including Internet2, proposed a two-tier approach with a lower broadband service limit for households and a higher limit for institutions/organizations; however, the FCC’s final definition did not recognize that broadband for institutions is different from broadband for end users.
At Polycom, we take network bandwidth limitations very seriously and have been working on new compression techniques that reduce bandwidth usage for video communication. This resulted in the implementation of the H.264 High Profile, which I described in detail in my previous post. And while we can now compress Standard Definition video to about 128 kilobits per second, the additional bit rate necessary for good-quality audio and the IP protocol overhead still do not allow us to fit into the very thin 200-kilobits-per-second pipe. Don’t forget that bandwidth is not the only requirement for real-time video; latency, jitter, and packet loss are very important, and none of these parameters is explicitly required or defined in any NTIA or RUS documents.
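A back-of-the-envelope calculation illustrates the point; the audio bit rate and overhead factor below are rough assumptions for illustration only:

```python
# Does a minimal SD video call fit into the 200 kbps upstream 'broadband' limit?
SD_VIDEO_KBPS = 128   # SD video with H.264 High Profile, as mentioned above
AUDIO_KBPS = 48       # assumed wideband audio channel
OVERHEAD = 1.15       # assumed ~15% RTP/UDP/IP header overhead at low bit rates

total_kbps = (SD_VIDEO_KBPS + AUDIO_KBPS) * OVERHEAD
print(f"Estimated upstream need: ~{total_kbps:.0f} kbps vs. 200 kbps available")
# -> roughly 202 kbps, i.e. the call does not comfortably fit in the 200 kbps pipe
```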