Media Processing and Delivery

Media processing and delivery covers technologies such as video encoding, video decoding, and video streaming. The predominant use cases for media workloads are live streaming, broadcast media, and over-the-top (OTT) media.

Docker - Setup

To find out more about using Docker and the Dockerfiles to build Open Visual Cloud pipelines, visit the Get Started With Docker page.  
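As a minimal sketch of that flow, the snippet below assumes the public Open Visual Cloud Dockerfiles repository and an illustrative platform/OS subdirectory; the exact repository layout and script behavior are described in the Get Started With Docker page and each repo's readme.

  # Clone the Dockerfiles repository (URL and directory layout shown for illustration only).
  git clone https://github.com/OpenVisualCloud/Dockerfiles.git
  cd Dockerfiles

  # Pick a platform/OS/image combination (example path), then build the image
  # and open an interactive shell inside the resulting container.
  cd Xeon/ubuntu-18.04/media/ffmpeg
  ./build.sh
  ./shell.sh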

Reference Pipelines for Media Processing and Delivery 

Reference pipelines are provided to serve as a starting point for end-to-end Open Visual Cloud service creation and innovation, with everything needed to build these sample services. 

Content Delivery Network (CDN) - Transcode Sample

Media transcoding is a key function for live video broadcasting, streaming, and video-on-demand use cases in a CDN network. The CDN Transcode sample (not a finished product) provides a reference pipeline for building an out-of-the-box 1:N CDN streaming transcode service.
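To illustrate the 1:N concept (one input stream transcoded into several renditions), here is a minimal FFmpeg sketch using libx264 and AAC; it is not the sample's actual pipeline, and the resolutions and bitrates are arbitrary.

  # One input, three output renditions (illustrative settings only).
  ffmpeg -i input.mp4 \
    -map 0:v -map 0:a -c:v libx264 -s 1920x1080 -b:v 5000k -c:a aac out_1080p.mp4 \
    -map 0:v -map 0:a -c:v libx264 -s 1280x720  -b:v 3000k -c:a aac out_720p.mp4 \
    -map 0:v -map 0:a -c:v libx264 -s 854x480   -b:v 1500k -c:a aac out_480p.mp4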

Video Conferencing Sample

The video conferencing sample implements a web meeting demo based on the Open WebRTC Toolkit (OWT) media server and client SDK, demonstrating OWT media streaming and processing features in both mix and forward modes. It also provides basic conferencing actions such as screen sharing, instant messaging, and meeting control in the web UI.

Smart City Traffic Management Sample

The smart city traffic management reference pipeline shows how the various media building blocks, including SVT, integrate with analytics powered by the OpenVINO™ Toolkit for smart city use cases, starting with street-corner traffic control (city planning). This sample (not a finished product) can be referenced by developers to ease application development challenges. It enables real-time analytics of live video feeds from IP cameras.

Intelligent Ad-Insertion Sample

The intelligent ad-insertion reference pipeline shows how the various media building blocks, including SVT, work together with analytics powered by the OpenVINO™ Toolkit to accelerate a converged media pipeline. This is a server-side ad-insertion sample (not a finished product) that developers can reference to ease application development challenges. It enables real-time analytics of media content to drive intelligent selection of advertisements, resulting in personalized, higher-affinity targeted ad placements.
 

Media Processing and Delivery - Related Dockerfiles

In addition to the end-to-end reference pipelines above, several other Dockerfile packages are provided that offer just the components needed for your project or custom pipeline. Updates will be posted here on a regular basis.

-------------------------------------------------------------------------------------------------------------------------------------------------

Open Visual Cloud Dockerfile - FFmpeg

Getting Started -> FFmpeg readme file

What's inside the FFmpeg GitHub repo?

  • CMakeLists.txt - CMake build configuration file
  • Dockerfile - Dockerfile Docker uses to build the FFmpeg image
  • Dockerfile.m4 - m4 template from which the Dockerfile is generated
  • build.sh - Script that builds the Docker image
  • shell.sh - Script that opens an interactive shell inside the container

An FFmpeg image optimized for media creation and delivery. Included codecs: AAC, MP3, Opus, Ogg Vorbis, x264, x265, VP8/VP9, AV1, and SVT-HEVC. Optimized software (SW) and hardware (HW) accelerated encode plugins are provided for supported Intel hardware.
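As a quick check inside the container, an SVT-HEVC encode can be run through FFmpeg's SVT-HEVC plugin; this is a minimal sketch that assumes the encoder is exposed as libsvt_hevc in this build (run ffmpeg -encoders to confirm).

  # Verify the SVT-HEVC encoder is available, then transcode a clip to HEVC.
  ffmpeg -encoders | grep -i svt
  ffmpeg -i input.mp4 -c:v libsvt_hevc -b:v 5000k -c:a copy output_hevc.mp4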

Select the platform target, CPU or accelerator, and the desired target OS to go straight to the GitHub repo. An overview of the Intel® Visual Compute Accelerator 2 (Intel® VCA2) is available here: https://www.intel.com/content/www/us/en/products/servers/accelerators.html

Platform: Intel® Xeon® (CPU) - Read Me

Platform: Intel® VCA2 - Read Me

Platform: Intel® Xeon® E3 (GPU) - Read Me


-------------------------------------------------------------------------------------------------------------------------------------------------

Open Visual Cloud Dockerfile - GStreamer

Getting Started -> GStreamer readme file

What's inside the GStreamer GitHub repo?

  • CMakeLists.txt - CMake build configuration file
  • Dockerfile - Dockerfile Docker uses to build the GStreamer image
  • Dockerfile.m4 - m4 template from which the Dockerfile is generated
  • build.sh - Script that builds the Docker image
  • shell.sh - Script that opens an interactive shell inside the container

A GStreamer image optimized for media creation and delivery, most commonly used for streaming applications. Included are the base, good, bad, ugly, and libav plugin sets. Optimized software (SW) and hardware (HW) accelerated encode plugins are provided for supported Intel hardware.
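A quick way to verify the plugin set inside the container is a simple gst-launch pipeline; the sketch below uses the test video source and the x264 encoder, so it only exercises a few of the bundled plugins.

  # Confirm a plugin is present, then encode 300 frames of the test pattern to MP4.
  gst-inspect-1.0 x264enc
  gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1280,height=720 ! \
    x264enc ! h264parse ! mp4mux ! filesink location=test.mp4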

Select the platform target, CPU or accelerator, and the desired target OS to go straight to the GitHub repo. An overview of the Intel® Visual Compute Accelerator 2 (Intel® VCA2) is available here: https://www.intel.com/content/www/us/en/products/servers/accelerators.html

Platform: Intel® Xeon® (CPU) - Read Me

Platform: Intel® VCA2 - Read Me

Platform: Intel® Xeon® E3 (GPU) - Read Me

-------------------------------------------------------------------------------------------------------------------------------------------------

Open Visual Cloud Dockerfile - FFmpeg+GStreamer+dev

Getting Started -> FFmpeg readme file

Getting Started -> GStreamer readme file

What's inside the FFmpeg+GStreamer+dev GitHub repo?

  • CMakeLists.txt - CMake build configuration file
  • Dockerfile - Dockerfile Docker uses to build the FFmpeg+GStreamer+dev image
  • Dockerfile.m4 - m4 template from which the Dockerfile is generated
  • build.sh - Script that builds the Docker image
  • shell.sh - Script that opens an interactive shell inside the container

The FFmpeg+GStreamer+dev package contains FFmpeg, GStreamer, and the C++ development files. Also included is the Model Optimizer for importing and optimizing existing deep learning (DL) models from TensorFlow* or other supported frameworks.
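For the Model Optimizer, converting a frozen TensorFlow model to OpenVINO IR typically looks like the sketch below; the script location inside the image and the model file name are assumptions, so consult the repo readme for the exact paths.

  # Convert a frozen TensorFlow graph to OpenVINO IR (.xml/.bin) files.
  # (mo.py location and the input model name are illustrative.)
  python3 mo.py --input_model frozen_inference_graph.pb \
    --data_type FP16 --output_dir ./ir_model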

Select the platform target, CPU or accelerator, and the desired target OS to go straight to the GitHub repo. An overview of the Intel® Visual Compute Accelerator 2 (Intel® VCA2) is available here: https://www.intel.com/content/www/us/en/products/servers/accelerators.html

Platform: Intel® Xeon® (CPU) - Read Me

Platform: Intel® VCA2 - Read Me

Platform: Intel® Xeon® E3 (GPU) - Read Me

 -------------------------------------------------------------------------------------------------------------------------------------------------

Open Visual Cloud Dockerfile - NGINX+RTMP

Getting Started -> NGINX+RTMP readme file

What's inside the NGINX+RTMP GitHub repo?

  • CMakeLists.txt - CMake build configuration file
  • Dockerfile - Dockerfile Docker uses to build the NGINX+RTMP image
  • Dockerfile.m4 - m4 template from which the Dockerfile is generated
  • build.sh - Script that builds the Docker image
  • nginx.conf - NGINX configuration file
  • nginx.conf.m4 - m4 template from which nginx.conf is generated
  • shell.sh - Script that opens an interactive shell inside the container

An NGINX+RTMP image optimized for web hosting and caching. Based on the FFmpeg image, this package includes NGINX with RTMP support for DASH and HLS streaming.
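To exercise the server, a stream can be pushed to the RTMP ingest with FFmpeg; in this minimal sketch the application name ("live"), stream key, and HTTP/HLS paths are assumptions that depend on the nginx.conf shipped in the repo.

  # Push a local file to the RTMP ingest endpoint (1935 is the default RTMP port).
  ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f flv rtmp://localhost:1935/live/mystream
  # With HLS enabled in nginx.conf, playback is then typically available at a URL such as:
  #   http://localhost:8080/hls/mystream.m3u8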

Select the platform target, CPU or accelerator, and the desired target OS to go straight to the GitHub repo. An overview of the Intel® Visual Compute Accelerator 2 (Intel® VCA2) is available here: https://www.intel.com/content/www/us/en/products/servers/accelerators.html

Platform: Intel® Xeon® (CPU) - Read Me

Platform: Intel® VCA2 - Read Me

Platform: Intel® Xeon® E3 (GPU) - Read Me

 -------------------------------------------------------------------------------------------------------------------------------------------------

Open Visual Cloud Dockerfile - OWT (Open WebRTC Toolkit)

Getting Started -> OWT readme file

What's inside the OWT GitHub repo?
 
  • CMakeLists.txt - CMake build configuration file
  • Dockerfile - Dockerfile Docker uses to build the OWT image
  • Dockerfile.m4 - m4 template from which the Dockerfile is generated
  • build.sh - Script that builds the Docker image
  • shell.sh - Script that opens an interactive shell inside the container
 
An OWT image optimized for video conferencing services based on WebRTC technology. Included are 1:N and N:N conferencing modes with video and audio processing capabilities.
 
Select the platform target, CPU or accelerator, and the desired target OS to go straight to the GitHub repo.
 
Platform: Intel® Xeon® (CPU) - Read Me
 
 
Platform: Intel® Xeon® E3 (GPU) - Read Me