This topic describes in detail how to develop a media player application using the NaCl Player API. The topic uses the Native Player sample application as an example.
The Native Player sample application implements some of the main NaCl (Native Client) Player API use cases. The sample demonstrates the media data flow: data download, the demuxing process, and displaying the content on the TV.
The sample application is designed in a modular and extensible way. Its source code is released under the permissive MIT license, allowing you to reuse and customize the code to meet your own needs. You can access the latest Native Player source code from the Native Player GitHub repository.
The NaCl Player API is used by the NaCl module. All player logic is implemented in the NaCl module, including user authorization, downloading and demuxing (parsing) media content, decrypting DRM-protected content, and passing demuxed elementary stream packets to the NaCl Player.
The following figure shows the main structural layers, the components within the layers, and the data flow of a typical NaCl Player application:
The sample application supports MPEG4 media playback from a URL data source, with external or internal subtitles. You can show or hide external subtitles, and you can show, hide, and select internal subtitle tracks.
In this scenario, the platform is responsible for downloading and demuxing content. The application is responsible only for playback control, using the controller and communicator components.
In the "inc/player/url_player/url_player_controller.h" file, the UrlPlayerController class manages playback from the URL data source. It aggregates all the necessary objects:
class UrlPlayerController : public PlayerController {
  // ...
 private:
  // ...
  PlayerListeners listeners_;
  std::shared_ptr<Samsung::NaClPlayer::MediaDataSource> data_source_;
  std::shared_ptr<Samsung::NaClPlayer::MediaPlayer> player_;
  std::unique_ptr<Samsung::NaClPlayer::TextTrackInfo> text_track_;
  std::vector<Samsung::NaClPlayer::TextTrackInfo> text_track_list_;
  std::shared_ptr<Communication::MessageSender> message_sender_;
  // ...
};
Define the listeners in the "src/player/player/player_listeners.cc" file:
Implement the SubtitleListener class to handle subtitle events. It forwards the received subtitle text and duration to the JavaScript application component, which is responsible for displaying the subtitle:
void SubtitleListener::OnShowSubtitle(TimeTicks duration, const char* text) {
  Var varText = Var(text);
  LOG("Got subtitle: %s , duration: %f", text, duration);
  if (auto message_sender = message_sender_.lock()) {
    message_sender->ShowSubtitles(duration, varText);
  }
}
Implement the MediaEventsListener class to handle player and playback events. The OnBufferingComplete() event notifies the JavaScript application component that the NaCl Player has buffered enough data to start playback:
void MediaBufferingListener::OnBufferingComplete() {
  LOGM("Event: Buffering complete! Now you may play.");
  if (auto message_sender = message_sender_.lock()) {
    message_sender->BufferingCompleted();
  }
  // ...
}
In the "src/player/url_player/url_player_controller.cc" file, initialize and register the listeners to a MediaPlayer object:
void UrlPlayerController::InitPlayer(const std::string& url,
                                     const std::string& subtitle,
                                     const std::string& encoding) {
  // For external subtitles, the subtitle argument points to the location
  // of the subtitle file.
  // For internal subtitles, the subtitle argument must be an empty string.
  // ...
  player_ = make_shared<MediaPlayer>();

  // Initialize listeners and register them to the player
  listeners_.player_listener =
      make_shared<MediaPlayerListener>(message_sender_);
  listeners_.buffering_listener =
      make_shared<MediaBufferingListener>(message_sender_);
  listeners_.subtitle_listener =
      make_shared<SubtitleListener>(message_sender_);

  player_->SetMediaEventsListener(listeners_.player_listener);
  player_->SetBufferingListener(listeners_.buffering_listener);
  player_->SetSubtitleListener(listeners_.subtitle_listener);
Register external subtitles to the player. Check whether an external subtitle URL has been defined, and pass the address to the player:
  // Register external subtitles source, if defined
  if (!subtitle.empty()) {
    text_track_ = MakeUnique<TextTrackInfo>();
    int32_t ret = player_->AddExternalSubtitles(subtitle, encoding, *text_track_);
    // ...
  }
When all the player objects are set up, initialize and construct a URLDataSource object to handle a given URL address, and attach it to the player:
void UrlPlayerController::InitializeUrlPlayer(
    const std::string& content_container_url) {
  // ...
  data_source_ = make_shared<URLDataSource>(content_container_url);
  player_->AttachDataSource(*data_source_);
  // ...
Retrieve information about the available internal and external text tracks using the UrlPlayerController::PostTextTrackInfo() function, and send the track list to the HTML5 application component for showing in the UI:
void UrlPlayerController::PostTextTrackInfo() {
  int32_t ret = player_->GetTextTracksList(text_track_list_);
  if (ret == ErrorCodes::Success) {
    LOG("GetTextTrackInfo called successfully");
    message_sender_->SetTextTracks(text_track_list_);
  } else {
    LOG_ERROR("GetTextTrackInfo call failed, code: %d", ret);
  }
}
When the player receives the OnBufferingComplete() event, start playback:
void UrlPlayerController::Play() {
  // ...
  int32_t ret = player_->Play();
  // ...
}
During playback, the application receives SubtitleListener::OnShowSubtitle() events containing subtitle text and timing information. To display the subtitles, pass the subtitle event data to the HTML5 application component.
The sample application supports MPEG-DASH media playback from elementary stream data sources, and demonstrates several related use cases.
In these scenarios, the application is responsible for downloading and demuxing content, and controlling the media player.
To play MPEG-DASH content, the DashManifest class uses the libdash library to parse a DASH manifest file and download the appropriate media file or media chunk.
The DashManifest class is also responsible for extracting Content Protection DRM initialization information included in the DASH manifest, through the ContentProtectionVisitor class.
In the sample application, the "src/dash" directory contains the data provider implementation.
In the sample application's "src/demuxer" directory, the FFMpegDemuxer class implements the StreamDemuxer interface. The interface supports both clear and encrypted elementary stream content, and demuxes the media content into elementary stream packets.
The FFMpegDemuxer class uses the ffmpeg library. It extracts the elementary stream configuration and protection data, such as PSSH box and DRM initialization data, from the media content.
To implement the demuxer:
When data must be parsed, call the StreamDemuxer::Parse() function:
void FFMpegDemuxer::Parse(const std::vector<uint8_t>& data) {
  // ...
  parser_thread_.message_loop().PostWork(
      callback_factory_.NewCallback(&FFMpegDemuxer::StartParsing));
  // ...
}
To keep the application responsive, perform the parsing on a separate thread:
void FFMpegDemuxer::StartParsing(int32_t) {
  // ...
  if (!streams_initialized_)
    InitStreamInfo();

  AVPacket pkt;
  av_init_packet(&pkt);
  pkt.data = NULL;
  pkt.size = 0;

  while (!exited_) {
    // ...
    unique_ptr<ElementaryStreamPacket> es_pkt;
    int32_t ret = av_read_frame(format_context_, &pkt);
    if (ret < 0) {
      if (ret == AVERROR_EOF) {
        exited_ = true;
        packet_msg = kEndOfStream;
      }
      // ...
    } else {
      es_pkt = MakeESPacketFromAVPacket(&pkt);
    }
    if (es_data_callback_)
      es_data_callback_(packet_msg, std::move(es_pkt));
    av_free_packet(&pkt);
  }
  // ...
}
Convert the FFMpeg elementary packet (AVPacket) into an elementary stream packet:
unique_ptr<ElementaryStreamPacket> FFMpegDemuxer::MakeESPacketFromAVPacket(
    AVPacket* pkt) {
  auto es_packet = MakeUnique<ElementaryStreamPacket>(pkt->data, pkt->size);

  AVStream* s = format_context_->streams[pkt->stream_index];

  // Set ES packet information
  es_packet->SetPts(ToTimeTicks(pkt->pts, s->time_base) + timestamp_);
  es_packet->SetDts(ToTimeTicks(pkt->dts, s->time_base) + timestamp_);
  es_packet->SetDuration(ToTimeTicks(pkt->duration, s->time_base));
  es_packet->SetKeyFrame(pkt->flags == 1);

  AVEncInfo* enc_info = reinterpret_cast<AVEncInfo*>(
      av_packet_get_side_data(pkt, AV_PKT_DATA_ENCRYPT_INFO, NULL));
  if (!enc_info)
    return es_packet;

  // Packet encrypted
  es_packet->SetKeyId(enc_info->kid, kKidLength);
  es_packet->SetIv(enc_info->iv, enc_info->iv_size);

  for (uint32_t i = 0; i < enc_info->subsample_count; ++i) {
    es_packet->AddSubsample(enc_info->subsamples[i].bytes_of_clear_data,
                            enc_info->subsamples[i].bytes_of_enc_data);
  }
  return es_packet;
}
To implement MPEG-DASH content playback:
In the "inc/player/es_dash_player/es_dash_player_controller.h" file, the EsDashPlayerController class manages playback from the elementary stream data source. It aggregates all the necessary objects:
class EsDashPlayerController : public PlayerController {
  // ...
 private:
  // ...
  PlayerListeners listeners_;
  std::shared_ptr<Samsung::NaClPlayer::MediaDataSource> data_source_;
  std::shared_ptr<Samsung::NaClPlayer::MediaPlayer> player_;
  std::unique_ptr<Samsung::NaClPlayer::TextTrackInfo> text_track_;
  std::shared_ptr<Communication::MessageSender> message_sender_;

  std::unique_ptr<DashManifest> dash_parser_;
  std::array<std::shared_ptr<StreamManager>,
             static_cast<int32_t>(StreamType::MaxStreamTypes)> streams_;
  std::vector<VideoStream> video_representations_;
  std::vector<AudioStream> audio_representations_;
  // ...
};
Create the player and listeners, and assign the listeners to the player object:
void EsDashPlayerController::InitPlayer(const std::string& mpd_file_path,
                                        const std::string& subtitle,
                                        const std::string& encoding) {
  // ...
  player_ = make_shared<MediaPlayer>();
  listeners_.player_listener =
      make_shared<MediaPlayerListener>(message_sender_);
  listeners_.buffering_listener =
      make_shared<MediaBufferingListener>(message_sender_, shared_from_this());
  player_->SetMediaEventsListener(listeners_.player_listener);
  player_->SetBufferingListener(listeners_.buffering_listener);
  // ...
  InitializeSubtitles(subtitle, encoding);
  InitializeDash(mpd_file_path);
}
Register external subtitles to the player. Check whether an external subtitle URL has been defined, and pass the address to the player:
void EsDashPlayerController::InitializeSubtitles(const std::string& subtitle,
                                                 const std::string& encoding) {
  if (subtitle.empty())
    return;

  text_track_ = MakeUnique<TextTrackInfo>();
  int32_t ret = player_->AddExternalSubtitles(subtitle, encoding, *text_track_);
  listeners_.subtitle_listener = make_shared<SubtitleListener>(message_sender_);
  player_->SetSubtitleListener(listeners_.subtitle_listener);
}
In the "src/player/es_dash_player/es_dash_player_controller.cc" file, initialize the DASH parser. The application must handle DRM configuration itself: because DRM-specific data is not part of the DASH standard, the libdash library cannot parse it. The DRM-specific data is stored in the ContentProtection section of the DASH manifest, and the ContentProtectionVisitor class passes it to the DashManifest::ParseMPD() function.
void EsDashPlayerController::InitializeDash(
    const std::string& mpd_file_path) {
  // Native Player only supports PlayReady
  unique_ptr<DrmPlayReadyContentProtectionVisitor> visitor =
      MakeUnique<DrmPlayReadyContentProtectionVisitor>();
  dash_parser_ = DashManifest::ParseMPD(mpd_file_path, visitor.get());
  // ...
  data_source_ = std::make_shared<ESDataSource>();
  // ...
  video_representations_ = dash_parser_->GetVideoStreams();
  audio_representations_ = dash_parser_->GetAudioStreams();
  // ...
}
Initialize a video stream from the DASH manifest:
void EsDashPlayerController::InitializeVideoStream(
    Samsung::NaClPlayer::DRMType drm_type) {
  if (video_representations_.empty())
    return;
  // ...
  VideoStream s = GetHighestBitrateStream(video_representations_);
  // ...
  if (s.description.content_protection) {
    // DRM content detected
    auto drm_listener = make_shared<DrmPlayReadyListener>(instance_, player_);
    drm_listener->SetContentProtectionDescriptor(
        std::static_pointer_cast<DrmPlayReadyContentProtectionDescriptor>(
            s.description.content_protection));
    player_->SetDRMListener(drm_listener);
  }

  // Create stream manager to manage downloading and demuxing
  auto& stream_manager = streams_[static_cast<int32_t>(StreamType::Video)];
  stream_manager = make_shared<StreamManager>(instance_, StreamType::Video);
  bool success = stream_manager->Initialize(
      dash_parser_->GetVideoSequence(s.description.id),
      data_source_,
      std::bind(&EsDashPlayerController::OnStreamConfigured, this, _1),
      drm_type);
  // ...
}
Audio streams are initialized in a similar way.
In the "src/player/es_dash_player/stream_manager.cc" file, initialize the StreamManager object. It manages elementary streams by downloading the appropriate representation, demuxing it into elementary stream packets, and sending the packets to the player for playback.
bool StreamManager::Impl::Initialize(
    unique_ptr<MediaSegmentSequence> segment_sequence,
    shared_ptr<ESDataSource> es_data_source,
    std::function<void(StreamType)> stream_configured_callback,
    DRMType drm_type,
    std::shared_ptr<ElementaryStreamListener> listener) {
  // ...
  // Add a stream to ESDataSource
  if (stream_type_ == StreamType::Video) {
    auto video_stream = make_shared<VideoElementaryStream>();
    result = es_data_source->AddStream(*video_stream, listener);
    elementary_stream_ =
        std::static_pointer_cast<ElementaryStream>(video_stream);
  } else if (stream_type_ == StreamType::Audio) {
    auto audio_stream = make_shared<AudioElementaryStream>();
    result = es_data_source->AddStream(*audio_stream, listener);
    elementary_stream_ =
        std::static_pointer_cast<ElementaryStream>(audio_stream);
  }
  // ...
  // Initialize stream demuxer - create and set stream configs listeners
  if (!InitParser()) {
    LOG_ERROR("Failed to initialize parser or config listeners");
    return false;
  }
  return ParseInitSegment();
}