Using Tizen WASM Video Decoder

This section extends Using Tizen WASM Player with the RenderingMode::kVideoTexture mode of ElementaryMediaStreamSource. It shows how to add Video Decoder functionality to an existing WASM Player application, based on the Tizen WASM Video Decoder Sample.

To enable the WASM Video Decoder in an existing WASM Player application, you must:

  • Set the video rendering mode
  • Configure the GL context
  • Initialize the GL
  • Implement the rendering loop

The WASM Video Decoder allows the application to fill a requested GL texture with a decoded video frame, instead of rendering frames in an HTMLMediaElement.

Setting Video Texture Rendering Mode for WASM Player

To change the ElementaryMediaStreamSource rendering mode from Media Element to Video Texture, RenderingMode::kMediaElement must be replaced by RenderingMode::kVideoTexture:

  using LatencyMode = samsung::wasm::ElementaryMediaStreamSource::LatencyMode;
  using RenderingMode = samsung::wasm::ElementaryMediaStreamSource::RenderingMode;

  auto elementary_media_stream_source =
      std::make_unique<samsung::wasm::ElementaryMediaStreamSource>(
          LatencyMode::kNormal, RenderingMode::kVideoTexture);

Configuring GL Context

This section covers the configuration of GL context using the Samsung Emscripten SDK. You must first make the canvas accessible, and then prepare either SDL or EGL for GL initialization.

Making Canvas Accessible from Emscripten

A GL context in WASM is associated with a <canvas> HTML element. To make it accessible to the WASM module:

  1. Create a <canvas> element in the application's HTML file that runs the WASM module:
   <canvas id="canvas" width=1600 height=900></canvas>
  2. Extend the Emscripten Module object with information about the created canvas element:
   Module = {
      canvas: (function() {
         return document.getElementById('canvas');
      })()
   };
Now you can access the <canvas> HTML element from the WASM module.

  3. For context initialization, get the canvas dimensions from the C++ code:
  int width;
  int height;
  emscripten_get_canvas_element_size("#canvas", &width, &height);

Using SDL for GL Initialization

To initialize GL using SDL:

  1. Set the desired GL attributes. These must be set before the window is created. For example:
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);  // Indicates GLES version to use
  SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);            // Sets the depth buffer size
  SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);     // Turns on multisampling
  2. Create an SDL_Window with the desired parameters:
  window_ = SDL_CreateWindow("VideoTexture", SDL_WINDOWPOS_CENTERED,
                             SDL_WINDOWPOS_CENTERED, width, height,
                             SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
  3. Get an SDL_GLContext context from the window, and make it the current context:
  gl_context_ = SDL_GL_CreateContext(window_);
  SDL_GL_MakeCurrent(window_, gl_context_);
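Put together, the SDL-based setup can be sketched as follows. This is a minimal sketch, not the sample's exact code: error handling is reduced to boolean returns, and the helper name InitGLWithSDL is illustrative.

```cpp
#include <SDL2/SDL.h>
#include <emscripten/html5.h>

// Minimal sketch of SDL-based GL setup for the canvas created earlier.
bool InitGLWithSDL(SDL_Window** window, SDL_GLContext* gl_context) {
  int width;
  int height;
  emscripten_get_canvas_element_size("#canvas", &width, &height);

  // GL attributes must be set before the window is created.
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
  SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
  SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

  *window = SDL_CreateWindow("VideoTexture", SDL_WINDOWPOS_CENTERED,
                             SDL_WINDOWPOS_CENTERED, width, height,
                             SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
  if (!*window)
    return false;

  *gl_context = SDL_GL_CreateContext(*window);
  return *gl_context && SDL_GL_MakeCurrent(*window, *gl_context) == 0;
}
```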

Using EGL for GL Initialization

As an alternative to SDL initialization, GL can also be initialized using the EGL wrapper:

  1. Initialize the EGL config. For example:
  const EGLint attrib_list[] = {
      EGL_RED_SIZE, 8,
      EGL_GREEN_SIZE, 8,
      EGL_BLUE_SIZE, 8,
      EGL_NONE,  // Terminates the attribute list
  };

  const EGLint context_attribs[] = {
      EGL_CONTEXT_CLIENT_VERSION, 2,  // Requests a GLES 2 context
      EGL_NONE,
  };

  EGLint num_configs;
  EGLint major_version;
  EGLint minor_version;
  EGLConfig config;

  EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
  eglInitialize(display, &major_version, &minor_version);
  eglGetConfigs(display, NULL, 0, &num_configs);
  eglChooseConfig(display, attrib_list, &config, 1, &num_configs);
  2. Create an EGLSurface:
  EGLSurface surface = eglCreateWindowSurface(display, config, NULL, NULL);
  3. Create an EGLContext and make it the current context:
  EGLContext context =
      eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);
  eglMakeCurrent(display, surface, surface, context);

Initializing GL

To initialize GL:

  1. Create a texture.

      glGenTextures(1, &texture_);

    The texture is filled with video frames decoded by WASM Player.

  2. Set the viewport, so that GL automatically scales rendering to its dimensions:

      glViewport(0, 0, width, height);
  3. Compile shaders and link them to the program:

    1. Define a vertex shader. For example:
      const char kVertexShader[] =
        "varying vec2 v_texCoord;               \n"
        "attribute vec4 a_position;             \n"
        "attribute vec2 a_texCoord;             \n"
        "uniform vec2 v_scale;                  \n"
        "void main()                            \n"
        "{                                      \n"
        "    v_texCoord = v_scale * a_texCoord; \n"
        "    gl_Position = a_position;          \n"
        "}                                      \n";
    2. Define a fragment shader. For example:
      const char kFragmentShaderExternal[] =
        "#extension GL_OES_EGL_image_external : require       \n"
        "precision mediump float;                             \n"
        "varying vec2 v_texCoord;                             \n"
        "uniform samplerExternalOES s_texture;                \n"
        "void main()                                          \n"
        "{                                                    \n"
        "    gl_FragColor = texture2D(s_texture, v_texCoord); \n"
        "}                                                    \n";

    The #extension GL_OES_EGL_image_external : require directive and the samplerExternalOES sampler uniform are required for Video Decoder functionality.

    3. Create the shader compilation function:
    void CreateShader(GLuint program, GLenum type, const char* source, int size) {
      GLuint shader = glCreateShader(type);
      glShaderSource(shader, 1, &source, &size);
      glCompileShader(shader);
      glAttachShader(program, shader);
    }
    4. Create a program, compile the shaders, and link them into the program:
      program_ = glCreateProgram();
      CreateShader(program_, GL_VERTEX_SHADER, kVertexShader,
                   sizeof(kVertexShader));
      CreateShader(program_, GL_FRAGMENT_SHADER, kFragmentShaderExternal,
                   sizeof(kFragmentShaderExternal));
      glLinkProgram(program_);

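After linking, the program inputs declared in the shaders (a_position, a_texCoord, v_scale) still need to be wired up. The following is a minimal sketch for a full-screen quad; the buffer layout and the 1:1 v_scale value are illustrative assumptions, not taken from the sample:

```cpp
// Full-screen quad: interleaved position (x, y) and texture coordinates (u, v),
// ordered for GL_TRIANGLE_STRIP.
const GLfloat kQuad[] = {
    -1.0f, -1.0f, 0.0f, 1.0f,  // bottom-left
     1.0f, -1.0f, 1.0f, 1.0f,  // bottom-right
    -1.0f,  1.0f, 0.0f, 0.0f,  // top-left
     1.0f,  1.0f, 1.0f, 0.0f,  // top-right
};

glUseProgram(program_);

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(kQuad), kQuad, GL_STATIC_DRAW);

// a_position consumes the first two floats of each vertex.
GLint position = glGetAttribLocation(program_, "a_position");
glEnableVertexAttribArray(position);
glVertexAttribPointer(position, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat),
                      reinterpret_cast<void*>(0));

// a_texCoord consumes the remaining two floats.
GLint tex_coord = glGetAttribLocation(program_, "a_texCoord");
glEnableVertexAttribArray(tex_coord);
glVertexAttribPointer(tex_coord, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat),
                      reinterpret_cast<void*>(2 * sizeof(GLfloat)));

// v_scale scales texture coordinates; 1:1 here as an assumption.
glUniform2f(glGetUniformLocation(program_, "v_scale"), 1.0f, 1.0f);
```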
To use GLES 3 (WebGL 2) for WASM Video Decoder functionality (optional):

  1. Add version information at the beginning of both the vertex and fragment shaders:
  #version 300 es
  2. In the fragment shader, change the following line:
  #extension GL_OES_EGL_image_external : require

    to:

  #extension GL_OES_EGL_image_external_essl3 : require
  3. In the fragment shader definition, use the texture keyword instead of texture2D.
  4. Set the GL major version to 3:
  SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);

Note that the EGL wrapper provided by the Emscripten SDK does not support setting the GL major version, so it is not possible to use GLES 3.0 with the EGL wrapper.

Finally, register the GL context for the WASM Video Decoder by informing ElementaryMediaTrack about the current graphics context:


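For example, a sketch of the registration call; it assumes video_track_ is the video ElementaryMediaTrack obtained when the track was added to the source, and uses the ElementaryMediaTrack::RegisterCurrentGraphicsContext method:

```cpp
// Call on the thread that owns the current GL context, after the context
// has been made current. video_track_ is an illustrative member name.
video_track_.RegisterCurrentGraphicsContext();
```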
Implementing the Rendering Loop

To implement the WASM Video Decoder rendering loop:

  1. Request the video texture fill.
    The decoding loop that fills the texture with a decoded video frame can be started after OnTrackOpen event is received or when the HTMLMediaElement::Play callback is called.
    When the texture is filled with the video frame, draw it:

      void VideoDecoderTrackDataPump::RequestNewVideoTexture() {
        // video_track_ is the video ElementaryMediaTrack (name is illustrative).
        video_track_.FillTextureWithNextFrame(
            texture_, [this](samsung::wasm::OperationResult result) {
              if (result != samsung::wasm::OperationResult::kSuccess) {
                std::cout << "Filling texture with next frame failed" << std::endl;
                return;
              }
              Draw();  // Draw the filled texture, as described in the next step.
            });
      }
  2. Draw the texture.
    Like in any C++ application, you need to provide a rendering loop.
    In the WASM Video Decoder application, the loop is created with the cycle illustrated in the following image:

    Figure 1. WASM Video Decoder Rendering Loop

    1. Prepare the GL texture for drawing:
      glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_);

    The texture target used for Video Decoder functionality must always be set to GL_TEXTURE_EXTERNAL_OES.

    2. Request drawing the texture:
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    3. Request an animation frame:
        emscripten_request_animation_frame(&CAPIOnDrawTextureCompleted, this);
    4. Define a global callback function for emscripten_request_animation_frame:
        int CAPIOnDrawTextureCompleted(double /* time */, void* thiz) {
          if (thiz)
            static_cast<VideoDecoderTrackDataPump*>(thiz)->OnDrawCompleted();
          return 0;
        }
  3. Recycle the video picture.
    It is important to always recycle the video picture after it has been drawn.
    To do so, call the RecycleTexture method in the emscripten_request_animation_frame callback:

      void VideoDecoderTrackDataPump::OnDrawCompleted() {
        // video_track_ is the video ElementaryMediaTrack (name is illustrative).
        video_track_.RecycleTexture(texture_);
        RequestNewVideoTexture();  // Continue the loop with the next frame.
      }
  4. When rendering must be stopped, end the rendering loop.
    To properly end the rendering loop, the application needs to handle ElementaryMediaTrack::FillTextureWithNextFrame errors, such as OperationResult::kAlreadyDestroyed when a track was stopped before this method was called, or OperationResult::kAborted when a track was stopped after it was called.

    Before invalidating the pointer that has already been provided to the emscripten_request_animation_frame callback, cancel that callback with emscripten_cancel_animation_frame, providing the callback ID returned by emscripten_request_animation_frame.
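A sketch of this teardown path, assuming the callback ID returned by emscripten_request_animation_frame is stored in a member; the class members shown here (raf_handle_, ScheduleDraw, Stop) are illustrative, not part of the Samsung API:

```cpp
#include <emscripten/html5.h>

class VideoDecoderTrackDataPump {
 public:
  void ScheduleDraw() {
    // Store the ID so a pending callback can be cancelled later.
    raf_handle_ =
        emscripten_request_animation_frame(&CAPIOnDrawTextureCompleted, this);
  }

  void Stop() {
    // Cancel the pending callback before |this| becomes invalid.
    if (raf_handle_ > 0) {
      emscripten_cancel_animation_frame(raf_handle_);
      raf_handle_ = 0;
    }
  }

 private:
  static int CAPIOnDrawTextureCompleted(double /* time */, void* thiz);

  long raf_handle_ = 0;
};
```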