FAQs

WebRTC is an open standard that defines a transport protocol and browser API for real-time communications, including audio, video and data streams. The initial use cases were one-to-one and few-to-few browser-based video chat applications. Like other standards such as HTTP, it is a building block designed to be used within a website or application, not a complete solution to any specific use case by itself.

Phenix's central mission has been to provide high-quality video distribution to millions of concurrent viewers in real-time. In order to achieve these high goals and tight tolerances, we have taken a ground-up, end-to-end approach to our system design, video encoding and network architecture. We chose a WebRTC compliant protocol in our overall design because of its widespread browser support, which means viewers don’t have to download or install any sort of software or browser plug-in.

However, as with other video transport standards such as HLS and DASH, WebRTC does not address many of the critical design decisions required to build a complete video experience. These critical components are left out of the standard entirely: encoding configurations, player behavior, bitrate selection and switching logic, startup strategies and more. Even if there were consensus on those player-based topics, still more is required to create a system that can deliver a high-quality video experience to millions of concurrent viewers in real-time: real-time automatic scaling to handle flash crowds, real-time encoding, viewer access control, resiliency in the face of outages, degradation and recovery, analytics, advertising, and other features and challenges.

In summary, WebRTC is a great component on which to build and provides significant benefits in browser compatibility, but it's not enough. Phenix’s patented technology and unique system architecture is a result of our end-to-end approach and dedication to solving the problem of delivering high-quality video to broadcast-sized audiences with real-time (< 500ms) latency.

WebRTC is an open, point-to-point transmission protocol, and in this way it is similar to HTTP. Saying WebRTC is not scalable is like saying HTTP is not scalable. The myth that WebRTC cannot scale derives from its early implementations: chat servers built for one-to-one and few-to-few browser-based video calls, not for one-to-many streaming. The protocol itself, like HTTP, is a building block designed to be used within a website or application, not a complete solution to any specific use case by itself.



The scalability of Phenix’s platform does not come from the protocol itself, but from the systems built and deployed to accept WebRTC connections and deliver content through them. Our platform is built to scale out horizontally. In order to serve millions of concurrent users subscribing to the same stream in a short period of time, resources must be provisioned in a timely fashion or be available up front. We achieve this cost-effective capability by reserving capacity for each viewer and using our autonomous scaling engine to predict spikes in load as early as possible. The platform can provision new infrastructure resources as needed in seconds.

Our distributed provisioning algorithm is able to allocate capacity for flash crowds of viewers in a short time due to its memory-only based design. All data is stored in memory and cached appropriately in individual nodes so that the system is capable of handling requests proportional to the amount of available resources.
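The scaling behavior described above can be sketched in miniature. The following is a deliberately simplified illustration, not Phenix's actual implementation: in-memory nodes track their own viewer reservations, and the pool provisions a new node before utilization reaches a threshold, so capacity stays ahead of a flash crowd. All names, capacities and thresholds are assumptions.

```python
# Illustrative sketch of per-viewer capacity reservation with a proactive
# scale-out trigger. Names and numbers are hypothetical, not Phenix's API.

class Node:
    """An edge node tracking its own capacity entirely in memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0

    def try_reserve(self):
        # Reserve one viewer slot; fail fast if the node is full.
        if self.reserved < self.capacity:
            self.reserved += 1
            return True
        return False

class Pool:
    """A pool that provisions a new node when utilization crosses a threshold."""
    def __init__(self, node_capacity=1000, scale_threshold=0.8):
        self.node_capacity = node_capacity
        self.scale_threshold = scale_threshold
        self.nodes = [Node(node_capacity)]

    def utilization(self):
        total = sum(n.capacity for n in self.nodes)
        used = sum(n.reserved for n in self.nodes)
        return used / total

    def reserve_viewer(self):
        # Provision ahead of demand: add a node before the pool is exhausted.
        if self.utilization() >= self.scale_threshold:
            self.nodes.append(Node(self.node_capacity))
        for node in self.nodes:
            if node.try_reserve():
                return True
        return False
```

Because each node's state is memory-resident, a reservation is a counter increment rather than a database round trip, which is what lets request-handling capacity track the amount of provisioned hardware.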


Feature                   CDN             Phenix
Built For                 File Delivery   Video Streaming
Data Model                Files on Disk   Streaming Data
Infrastructure Model      Static          Dynamic
Fundamental IP Protocol   TCP             UDP

CDNs, or Content Delivery Networks, were built for file downloads over HTTP. Early ones like Akamai focused initially on images, to speed up the delivery of web pages. As the web grew, so too did CDNs and their capabilities, expanding to deliver all sorts of web content: HTML, CSS, images and so on.

Along came internet video, initially in the form of files that you could download and then play back locally. Then came progressive download, where playback could begin before the entire file had been downloaded.

Around this time, streaming platforms from RealNetworks and Microsoft (Windows Media Server) came onto the scene. Their protocols were proprietary and utilized UDP to stream video data rather than download video files.

Flash, from Macromedia (later Adobe), came onto the scene and started to dominate online video streaming with its Real-Time Messaging Protocol, or RTMP. Around this time, CDNs like Akamai began to license technology from all three major streaming vendors, running their services within their networks to enable video. These licenses were not cheap, however, and the third-party proprietary software sitting between the CDN and the user motivated CDNs to push for a new solution that reused their existing HTTP infrastructure investment. This decision was not based on user experience or on an analysis of the problem and choosing the better technology; it was born of cost reduction and the reuse of existing investments.

CDNs collaborated to standardize on HTTP-based streaming, initially pushing Adobe to release HTTP Dynamic Streaming (HDS) so that streams could run over CDN infrastructure. Ultimately, when Apple released its HTTP Live Streaming (HLS) specification, the industry adopted it because it was cheap, standards-based and scalable.

With protocols like HLS and HDS, the video stream is made up of many small file downloads played back one after another. This approach is great for VOD and acceptable for live streaming at high latencies, where large client-side buffers are not a problem. However, chunk/file-based workflows are poorly suited to low latency and make real-time streaming impossible.
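The latency floor of chunked delivery follows directly from the arithmetic: the player must buffer whole segments before it can play. The segment duration and buffer depth below are illustrative assumptions, not measurements of any particular player.

```python
# Back-of-the-envelope latency floor for chunked (HLS/HDS-style) delivery.
segment_duration = 6.0   # seconds per segment; a common HLS default (assumed)
buffered_segments = 3    # players typically buffer several segments (assumed)

# The client cannot start playback until its buffer is filled, so latency
# is at least the buffered media duration:
chunk_latency_floor = segment_duration * buffered_segments  # 18 seconds

# A frame-by-frame pipeline buffers milliseconds instead of whole segments:
frame_interval = 1.0 / 30  # one frame at 30 fps, about 33 ms
```

With these assumed numbers the chunked floor is 18 seconds of buffered media before any network or encoding delay is even counted, which is why chunked workflows cannot reach sub-second latency.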

Overall, CDN infrastructure was built for handling files, storing them, moving them, and serving them. None of it was built with the concept of time as being a critical variable for optimization.

Now here comes WebRTC, or Real-Time Communications for the web: a reboot of earlier protocols like Real, WMS, RTP and RTSP, but sponsored by the W3C and built into browsers.

Phenix built a global infrastructure based on WebRTC around the idea that end-to-end latency should be less than half a second. To do this, you need a significantly different approach and a fundamentally different architecture. Nothing can be written to disk; it takes too long. Everything must be ready to stream, and transcoding must be completed once prior to streaming out to end users; per-user transcoding causes major problems in resource utilization.
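The resource argument against per-user transcoding is simple arithmetic. The viewer and layer counts below are illustrative assumptions used only to show the scale of the difference.

```python
# Illustrative comparison: per-user transcoding vs. transcode-once fan-out.
viewers = 1_000_000   # assumed audience size for one stream
layers = 5            # assumed number of quality renditions per channel

# Transcoding per viewer would need one encode per connection:
per_user_transcodes = viewers           # 1,000,000 concurrent encodes

# Transcoding once per quality layer, then fanning packets out to viewers:
transcode_once = layers                 # 5 concurrent encodes, regardless of audience
```

The encode count in the transcode-once model is independent of audience size; only the (much cheaper) packet fan-out scales with the number of viewers.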

The other industry innovation that Phenix has taken full advantage of is dynamic resource allocation via cloud infrastructure providers. Every Phenix service can be provisioned in real-time to meet spontaneous demand, using only what is needed when it’s needed. This capability did not exist when CDNs were founded, and it allows Phenix to start small and grow with its customers without requiring an expensive infrastructure build-out. Dynamic scaling also allows the licensed version of Phenix to be deployed within existing cloud environments, enabling the creation and sale of profitable services on existing, underutilized resource investments.

Latency

The combination of CTE (Chunked Transfer Encoding) with CMAF (Common Media Application Format) promises to deliver content with lower latency than typical delivery of CMAF-encoded and packaged content. However, the CMAF+CTE examples that have been deployed have all demonstrated 3 or more seconds of latency, significantly more than Phenix’s < 0.5-second latency.

Scale

In addition, no CMAF+CTE deployments have demonstrated the ability to deliver content at scale. Phenix has proven scale both vertically - serving clients with 200,000+ concurrent viewer events every day - and horizontally - serving clients streaming 1,000 channels simultaneously.

Fundamental Technology Stability

The technology fundamentals of the CMAF+CTE approach prevent it from providing a stable solution with high-quality video at < 1 second. CMAF+CTE is built on HTTP and TCP, which are designed for reliable data transfer rather than real-time delivery. The Phenix Interactive Transport Protocol (ITP) is built on top of the W3C standard WebRTC (built on UDP instead of TCP), which is designed for real-time communications. This difference is necessary in order to gain the flexibility to handle network events like packet loss and jitter while maintaining real-time latencies.
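The cost of TCP's in-order delivery can be roughly quantified: a lost packet stalls everything behind it until the retransmission arrives (head-of-line blocking). The RTT and recovery factor below are assumptions in a deliberately simplified model, not measurements.

```python
# Simplified, illustrative model of loss recovery over TCP vs. a UDP transport.
rtt = 0.080  # assumed 80 ms round-trip time on a long-haul path

# TCP: later data cannot be delivered until the lost segment is retransmitted.
# Rough model: ~RTT/2 to detect the loss (duplicate ACKs reaching the sender)
# plus ~RTT for the retransmission to arrive, i.e. about 1.5 RTTs of stall.
tcp_stall = rtt * 1.5  # ~120 ms added to every frame queued behind the loss

# A UDP-based real-time transport can instead keep playing, conceal the loss,
# or request retransmission only when the latency budget allows it - so a
# single lost packet need not stall the whole stream.
```

At a 500 ms end-to-end budget, even one such stall consumes roughly a quarter of the budget, and repeated losses compound, which is why TCP-based delivery struggles to stay real-time under lossy network conditions.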

Synchronization

CMAF+CTE does not provide a mechanism for audience synchronization. The Phenix solution does provide audience synchronization, allowing the audience to interact with one another, since everyone is watching the same thing at the same time.
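One common way to implement audience synchronization is to have every client target the same wall-clock playback position. The sketch below is a hypothetical illustration of that idea, not Phenix's actual mechanism; the function name and target latency are assumptions.

```python
# Hypothetical wall-clock sync: each viewer plays the frame whose stream
# timestamp matches (now - stream_start - target_latency), so all clients
# converge on the same position regardless of when they joined.
import time

def target_position(stream_start_epoch, target_latency=0.5, now=None):
    """Playback position (seconds into the stream) that all viewers aim for."""
    if now is None:
        now = time.time()
    return (now - stream_start_epoch) - target_latency
```

Because every client computes the same target from a shared clock, a viewer who joins late or falls behind can speed up or skip ahead to the common position instead of drifting on its own buffer.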

Feature                    CMAF + CTE      Phenix
Latency                    3+ seconds      < 0.5 seconds
Scalability                Not proven      200,000+ concurrent viewer events every day;
                                           1,000 channels streamed simultaneously
Fundamental Technology     HTTP and TCP    WebRTC (UDP)
Audience Synchronization   Not available   Available

Phenix proprietary Adaptive Bitrate technology (United States Patent Application No. 62/663,182) transcodes streams into multiple resolutions and bitrates, enabling each viewer to dynamically receive the bitrate most suitable to his or her connection speed at any given time. All of this is accomplished while maintaining less than 500ms of end-to-end latency. The Adaptive Bitrate (ABR) capabilities for optimal stream quality are handled by the Phenix system according to each individual viewer’s network conditions.

The default ABR policy uses a resolution bitrate ladder similar to YouTube recommendations. Phenix will automatically transcode to the applicable quality layers below the published quality level. For example, if you publish an HD stream as the top layer, then Phenix automatically creates the appropriate SD, LD, VLD and ULD layers.
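The ladder behavior described above can be sketched as follows. The layer names are the ones used in this document; the ordering logic and function name are illustrative assumptions, not Phenix's actual configuration.

```python
# Illustrative quality ladder, lowest to highest. Labels follow the document
# (ULD, VLD, LD, SD, HD); the mechanics here are an assumption for clarity.
LADDER = ["uld", "vld", "ld", "sd", "hd"]

def layers_below(published):
    """Return the published layer plus every automatically created layer below it."""
    idx = LADDER.index(published)
    return list(reversed(LADDER[:idx + 1]))
```

So publishing an HD stream yields the HD layer plus the SD, LD, VLD and ULD renditions, matching the example in the text; publishing SD as the top layer would yield only SD and below.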

There are no special requirements on the viewer side to enable ABR streaming. Phenix will automatically connect viewers to the highest quality layer that is sustainable for their network connection. This differs from other technologies such as DASH, which requires the viewer side to parse a manifest and choose a presentation based on bitrate and other factors. Viewers will switch between quality layers as needed throughout the duration of their streams.

Phenix real-time streaming is a packet based frame-by-frame approach, in contrast to the chunk-based approaches of HLS and DASH. Phenix uses a proprietary algorithm and architecture for dynamic keyframe generation that scales across large audiences in order to allow switching quality levels at any time for any viewer.
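A server-driven layer selection of the kind described might look like the following sketch: pick the highest rendition whose bitrate fits the viewer's measured throughput, with headroom to absorb jitter. The bitrates, headroom factor and function names are illustrative assumptions, not Phenix's actual values or algorithm.

```python
# Hypothetical server-side layer selection; contrast with DASH, where the
# client parses a manifest and picks a representation itself.
LAYERS = [  # (name, bitrate in kbps) - illustrative ladder, lowest first
    ("uld", 80), ("vld", 350), ("ld", 520), ("sd", 1200), ("hd", 3000),
]

def select_layer(throughput_kbps, headroom=0.8):
    """Pick the highest layer that fits within a fraction of measured throughput."""
    budget = throughput_kbps * headroom  # keep headroom for jitter and bursts
    best = LAYERS[0]                     # always fall back to the lowest layer
    for layer in LAYERS:
        if layer[1] <= budget:
            best = layer
    return best[0]
```

Re-running this selection as throughput estimates change is what produces mid-stream layer switches; dynamic keyframe generation is what makes the switch possible at any moment rather than only at chunk boundaries.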

The kind of latency discussed when comparing cable internet to DSL to 5G is what is referred to as “last mile” latency. Of the overall end-to-end video pipeline latency of typically 30-90 seconds, the last mile accounts for well under one percent.

Therefore, the reduction of last mile latency from even a very high 150ms down to 1ms will have little impact on the 30-90 second video pipeline latency.
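The arithmetic behind this claim is straightforward, using the figures from the text (a deliberately high 150 ms last mile against the low end of the 30-90 second pipeline):

```python
# Share of end-to-end latency attributable to the last mile.
pipeline_latency = 30.0   # seconds; low end of the typical 30-90 s range
last_mile = 0.150         # a deliberately high 150 ms last-mile latency

share = last_mile / pipeline_latency  # 0.005, i.e. 0.5%
```

Even in this worst case, eliminating the last mile entirely removes only half a percent of the pipeline latency; against a 90 second pipeline it is under 0.2%.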

Phenix addresses the major sources of end-to-end video pipeline latency including the encoding, middle mile transport, transformation and player technologies to achieve less than 1/2 second of delay.

In a nutshell, 5G will enable Phenix (and all streaming companies for that matter) to deliver a higher quality signal on a more stable internet connection. However, 5G will not affect end-to-end video latency in any significant way for Phenix or its competitors.

The Phenix platform runs on a combination of cloud infrastructure providers with Google and Oracle as the primary vendors. Each data center is connected with a global private fiber network that enables us to minimize packet loss over long links and maximize throughput.

Clients connect to the Phenix platform to view streams from the nearest Point of Presence (PoP) to minimize last mile connectivity challenges. Cross PoP and cross Cloud provider functionality is built into the Phenix solution.

Phenix encoders are built specifically to meet the demands of real-time streaming and enable Phenix customers to input raw HD-SDI or HDMI signals. The encoders consist of off-the-shelf, rack-mountable servers paired with Phenix real-time encoding software. Unlike traditional encoders, they require no special tuning or configuration to create a multi-bitrate video signal in real-time while maintaining high quality and < 500ms end-to-end latency.

Phenix Encoder Overview:
  • Rackmount chassis
    • Case is designed to 3U EIA-310 standard
    • Case is designed to be cantilevered from the front, no rack rail supports required
  • Signal acquisition & real-time encoding
  • High availability ingest options
    • Active-Active, Primary-Backup
  • Can connect to private internal resources
  • Monitoring and operational control
  • Input capabilities: 3G-SDI (optionally HDMI)

Phenix drives revenue

Find out how we can take your business to the next level by calling +1 (312) 801-5535 or emailing [email protected]