Video Decoder Usage Guide

This article extends the WASM Player Usage Guide with the RenderingMode::kVideoTexture mode of ElementaryMediaStreamSource. It shows how to add Video Decoder functionality to an existing WASM Player application, based on the Tizen WASM Video Decoder Sample.


Overview

The aim of this article is to present how to modify an existing WASM Player application to use the RenderingMode::kVideoTexture functionality of ElementaryMediaStreamSource.

This mode allows the application to fill a requested GL texture with a decoded video frame, instead of rendering frames on an HTMLMediaElement.

Steps to extend WASM Player to WASM Video Decoder

Setting Video Texture rendering mode for WASM Player

To change the ElementaryMediaStreamSource rendering mode from Media Element to Video Texture, replace RenderingMode::kMediaElement with RenderingMode::kVideoTexture:

using LatencyMode = samsung::wasm::ElementaryMediaStreamSource::LatencyMode;
using RenderingMode = samsung::wasm::ElementaryMediaStreamSource::RenderingMode;

auto elementary_media_stream_source = std::make_unique<samsung::wasm::ElementaryMediaStreamSource>(LatencyMode::kNormal, RenderingMode::kVideoTexture);

GL context in Emscripten

Make canvas accessible from Emscripten

A GL context in WASM is associated with a canvas HTML element. To make the canvas accessible from WASM, the following steps need to be applied:

  1. Create a canvas element in the application's HTML file that runs WASM module:

    <canvas id="canvas" width=1600 height=900></canvas>
    
  2. Extend Emscripten Module object with information about the created canvas element:

    Module = {
       ...
    
       canvas: (function() {
          return document.getElementById('canvas');
       })(),
    }
    

Now you can access the canvas HTML element from the WASM module.

  3. Get canvas dimensions from C++ code:

    int width;
    int height;
    emscripten_get_canvas_element_size("#canvas", &width, &height);
    

    These variables will be used later for context initialization.

Using SDL for GL Initialization

To initialize GL using SDL, the following steps need to be performed:

  1. Initialize the SDL video subsystem and set GL attributes. These calls must happen before the window is created. Sample configuration:

    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);  // Indicates GLES version to use
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);            // Indicates depth buffer size
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);     // Turns on multisampling
    
  2. Create an SDL_Window with the desired parameters:

    window_ = SDL_CreateWindow("VideoTexture", SDL_WINDOWPOS_CENTERED,
                               SDL_WINDOWPOS_CENTERED, width, height,
                               SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    
  3. Create an SDL_GLContext from the window and make it the current context:

    gl_context_ = SDL_GL_CreateContext(window_);
    SDL_GL_MakeCurrent(window_, gl_context_);
    

Using EGL for GL Initialization

As an alternative to SDL, initialization can be performed using EGL directly:

  1. Initialize EGL config. Sample configuration:

    const EGLint attrib_list[] = {
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, EGL_DONT_CARE,
        EGL_DEPTH_SIZE, EGL_DONT_CARE,
        EGL_STENCIL_SIZE, EGL_DONT_CARE,
        EGL_SAMPLE_BUFFERS, 0,
        EGL_NONE
    };
    
    const EGLint context_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };
    
    EGLint num_configs;
    EGLint major_version;
    EGLint minor_version;
    EGLConfig config;
    
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, &major_version, &minor_version);
    eglGetConfigs(display, NULL, 0, &num_configs);
    eglChooseConfig(display, attrib_list, &config, 1, &num_configs);
    
  2. Create EGLSurface:

    EGLSurface surface = eglCreateWindowSurface(display, config, NULL, NULL);
    
  3. Create an EGLContext and make it the current context:

    EGLContext context =
        eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);
    eglMakeCurrent(display, surface, surface, context);
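
The EGL steps above can be combined into a single initialization routine with basic error checking. This is a sketch, not code from the sample; the function name InitEGLContext and the output-parameter layout are illustrative:

```cpp
#include <EGL/egl.h>

// Sketch: EGL initialization combined into one function with basic error
// checking. InitEGLContext is an illustrative name, not part of the sample.
bool InitEGLContext(EGLDisplay* out_display, EGLSurface* out_surface,
                    EGLContext* out_context) {
  const EGLint attrib_list[] = {EGL_RED_SIZE,   8, EGL_GREEN_SIZE, 8,
                                EGL_BLUE_SIZE,  8, EGL_NONE};
  const EGLint context_attribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};

  EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
  if (display == EGL_NO_DISPLAY)
    return false;
  if (eglInitialize(display, nullptr, nullptr) != EGL_TRUE)
    return false;

  EGLint num_configs = 0;
  EGLConfig config;
  if (eglChooseConfig(display, attrib_list, &config, 1, &num_configs) !=
          EGL_TRUE ||
      num_configs == 0)
    return false;

  // In Emscripten the native window argument is ignored, so 0 is passed.
  EGLSurface surface = eglCreateWindowSurface(display, config, 0, nullptr);
  if (surface == EGL_NO_SURFACE)
    return false;

  EGLContext context =
      eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);
  if (context == EGL_NO_CONTEXT)
    return false;

  *out_display = display;
  *out_surface = surface;
  *out_context = context;
  return eglMakeCurrent(display, surface, surface, context) == EGL_TRUE;
}
```

Checking every EGL call is worthwhile here, because a silent failure at this stage typically surfaces much later as a blank canvas.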
    

GL Initialization

Create a texture

This texture will be filled with video frames decoded by WASM Player:

glGenTextures(1, &texture_);

Set viewport

Setting the viewport allows GL to scale rendering to the canvas dimensions obtained earlier:

glViewport(0, 0, width, height);
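
Decoded frames are delivered through an external image, so the texture is typically bound to the GL_TEXTURE_EXTERNAL_OES target and given filtering and wrap modes. The following is a sketch under that assumption; the exact parameters used by the sample may differ:

```cpp
// Sketch: one-time setup of the texture that will receive decoded frames.
// GL_TEXTURE_EXTERNAL_OES comes from the GL_OES_EGL_image_external extension.
glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
```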

Compiling shaders and linking program

  1. Define a vertex shader. Sample shader:

    const char kVertexShader[] =
      "varying vec2 v_texCoord;               \n"
      "attribute vec4 a_position;             \n"
      "attribute vec2 a_texCoord;             \n"
      "uniform vec2 v_scale;                  \n"
      "void main()                            \n"
      "{                                      \n"
      "    v_texCoord = v_scale * a_texCoord; \n"
      "    gl_Position = a_position;          \n"
      "}";
    
  2. Define a fragment shader. Sample shader:

    const char kFragmentShaderExternal[] =
      "#extension GL_OES_EGL_image_external : require       \n"
      "precision mediump float;                             \n"
      "varying vec2 v_texCoord;                             \n"
      "uniform samplerExternalOES s_texture;                \n"
      "void main()                                          \n"
      "{                                                    \n"
      "    gl_FragColor = texture2D(s_texture, v_texCoord); \n"
      "}
    
  3. Create a shader compilation helper function:

    void CreateShader(GLuint program, GLenum type, const char* source, int size) {
      GLuint shader = glCreateShader(type);
      glShaderSource(shader, 1, &source, &size);
      glCompileShader(shader);
      glAttachShader(program, shader);
      glDeleteShader(shader);
    }
    
  4. Create a program, compile the shaders and link them into the program:

    program_ = glCreateProgram();
    CreateShader(program_, GL_VERTEX_SHADER, kVertexShader,
                 strlen(kVertexShader));
    CreateShader(program_, GL_FRAGMENT_SHADER, kFragmentShaderExternal,
                 strlen(kFragmentShaderExternal));
    glLinkProgram(program_);
    glUseProgram(program_);
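
The shaders above expect vertex data for a_position and a_texCoord and a value for the v_scale uniform, which the steps so far have not provided. Below is a sketch of binding a full-screen quad; the interleaved layout and the v_scale value of (1.0, 1.0) are assumptions for illustration, not taken from the sample:

```cpp
// Sketch: supply a full-screen quad to the a_position / a_texCoord
// attributes used by the sample shaders. v_scale = (1, 1) is an assumption.
static const GLfloat kQuadVertices[] = {
    // x,     y,    u,    v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};

GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(kQuadVertices), kQuadVertices,
             GL_STATIC_DRAW);

GLint position_loc = glGetAttribLocation(program_, "a_position");
GLint tex_coord_loc = glGetAttribLocation(program_, "a_texCoord");
glEnableVertexAttribArray(position_loc);
glVertexAttribPointer(position_loc, 2, GL_FLOAT, GL_FALSE,
                      4 * sizeof(GLfloat), 0);
glEnableVertexAttribArray(tex_coord_loc);
glVertexAttribPointer(tex_coord_loc, 2, GL_FLOAT, GL_FALSE,
                      4 * sizeof(GLfloat),
                      reinterpret_cast<void*>(2 * sizeof(GLfloat)));

glUniform2f(glGetUniformLocation(program_, "v_scale"), 1.0f, 1.0f);
```

With the data laid out this way, GL_TRIANGLE_STRIP with 4 vertices (used later in the drawing step) covers the whole viewport.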
    

GLES 3 (WebGL 2)

It is possible to use GLES 3 for WASM Video Decoder functionality.

To do so:

  1. Add version information at the beginning of both vertex and fragment shaders:

    #version 300 es
    
  2. Change:

    #extension GL_OES_EGL_image_external : require
    

    to

    #extension GL_OES_EGL_image_external_essl3 : require
    

    in the fragment shader.

  3. Use texture keyword instead of texture2D in the fragment shader definition.

  4. Set GL major version to 3:

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    

Registering GL context for WASM Video Decoder

Inform the ElementaryMediaTrack about the current graphics context:

video_track_.RegisterCurrentGraphicsContext();

Video Decoder rendering loop

Requesting video texture fill

The decoding loop that fills the texture with decoded video frames can be started once the OnTrackOpen event is received or when the HTMLMediaElement::Play callback is called.

Once the texture has been filled with a video frame, drawing should be performed:

void VideoDecoderTrackDataPump::RequestNewVideoTexture() {
  video_track_.FillTextureWithNextFrame(
      texture_, [this](samsung::wasm::OperationResult result) {
        if (result != samsung::wasm::OperationResult::kSuccess) {
          std::cout << "Filling texture with next frame failed" << std::endl;
          return;
        }

        Draw();
      });
}

Drawing

As in any application that renders continuously, a rendering loop is needed.

In the WASM Video Decoder application, it is realized by the following cycle:

    ElementaryMediaTrack::FillTextureWithNextFrame
      -async-> GL rendering operations
      -sync->  emscripten_request_animation_frame
      -async-> CAPIOnDrawTextureCompleted
      -sync->  ElementaryMediaTrack::RecycleTexture
      -sync->  ElementaryMediaTrack::FillTextureWithNextFrame

  1. Prepare GL texture for drawing:

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_);
    
  2. Request drawing the texture:

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
  3. Request an animation frame:

    emscripten_request_animation_frame(&CAPIOnDrawTextureCompleted, this);
    
  4. Define a global callback function for emscripten_request_animation_frame:

    int CAPIOnDrawTextureCompleted(double /* time */, void* thiz) {
      if (thiz)
        static_cast<VideoDecoderTrackDataPump*>(thiz)->OnDrawCompleted();
    
      return 0;
    }
    

Recycling video picture

It is important to always recycle the video picture after it has been drawn.

To do so, call the RecycleTexture method in the emscripten_request_animation_frame callback:

void VideoDecoderTrackDataPump::OnDrawCompleted() {
  video_track_.RecycleTexture(texture_);
  RequestNewVideoTexture();
}

End rendering loop

To properly end the rendering loop when rendering should be stopped, the application should always handle ElementaryMediaTrack::FillTextureWithNextFrame errors, e.g. OperationResult::kAlreadyDestroyed when the track was stopped before this method was called, or OperationResult::kAborted when the track was stopped after this method was called.

Before invalidating a pointer that has been passed to an emscripten_request_animation_frame callback, abort that callback with emscripten_cancel_animation_frame, passing the callback id returned by emscripten_request_animation_frame.
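
For example, the id returned when requesting a frame can be stored and cancelled before the object the callback points to is destroyed. This is a sketch; the member name raf_id_ is illustrative and not part of the sample:

```cpp
#include <emscripten/html5.h>

// Sketch: cancel a pending animation frame callback before the object it
// points to is destroyed. raf_id_ is an illustrative member name.
void VideoDecoderTrackDataPump::Draw() {
  // ... GL rendering operations ...
  raf_id_ = emscripten_request_animation_frame(&CAPIOnDrawTextureCompleted,
                                               this);
}

VideoDecoderTrackDataPump::~VideoDecoderTrackDataPump() {
  if (raf_id_ > 0)
    emscripten_cancel_animation_frame(raf_id_);
}
```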