[Android audio and video development and upgrade: FFmpeg audio and video encoding and decoding articles] 5. Android FFmpeg + OpenGL ES to play video

Statement

First, this series of articles is based on my own understanding and practice; there may be mistakes, and corrections are welcome.
Second, this is an introductory series, and the knowledge covered goes only as deep as necessary. For more advanced topics there are many in-depth posts available online.
Finally, while writing these articles I refer to articles shared by others, which are listed at the end. Thanks to those authors for sharing.

Writing is not easy, so please credit the source when reprinting!

Tutorial code: [ Github Portal ]

Table of Contents

1. Android audio and video hardware decoding series
2. Using OpenGL to render video frames
3. Android FFmpeg audio and video decoding series

In this article you can learn

How to call OpenGL ES from the NDK layer and use it to render the video data decoded by FFmpeg.

1. Introduction to the rendering process

At the Java layer, Android provides GLSurfaceView for OpenGL ES rendering, so we do not need to care about the EGL part of OpenGL ES, nor about driving the rendering process ourselves.

At the NDK layer we are not so lucky: Android does not provide a well-packaged OpenGL ES tool, so to use OpenGL ES everything has to be built from scratch.

But do not worry about using EGL: the previous article [In-depth understanding of OpenGL EGL] gave a detailed introduction, and at the NDK layer it is the same, only implemented again in C/C++.

The following figure shows the entire decoding and rendering flow of this article.

Rendering process

In [Android FFmpeg video decoding and playback], we established the FFmpeg decode thread and output the decoded data to a local window for rendering, using only one thread.

To render video with OpenGL ES, another independent thread must be created and bound to OpenGL ES.

This raises the issue of data synchronization between the two threads: the data decoded by FFmpeg is sent over to the OpenGL ES thread, which waits for it and renders it.

In particular, during rendering the OpenGL thread does not call the drawers directly; it calls them indirectly through a proxy, so the thread does not need to care how many drawers there are: the proxy manages them all.

2. Create the OpenGL ES rendering thread

As at the Java layer, the first step is to encapsulate the EGL-related content.

EglCore wraps the low-level EGL operations, such as:

  • Init: initialization
  • eglCreateWindowSurface/eglCreatePbufferSurface: create a rendering surface
  • MakeCurrent: bind the OpenGL thread
  • SwapBuffers: swap the data buffers
  • ......

EglSurface is a further wrapper over EglCore, mainly managing the EGLSurface created by EglCore and providing more concise methods for external callers.

For the principles of EGL, please read the article "In-depth understanding of OpenGL EGL"; they will not be repeated here.

Encapsulate EGLCore

The header file egl_core.h

//egl_core.h

extern "C" {
#include <EGL/egl.h>
#include <EGL/eglext.h>
};

class EglCore {
private:
    const char *TAG = "EglCore";

    //EGL display
    EGLDisplay m_egl_dsp = EGL_NO_DISPLAY;
    //EGL context
    EGLContext m_egl_cxt = EGL_NO_CONTEXT;
    //EGL configuration
    EGLConfig m_egl_cfg;

    EGLConfig GetEGLConfig();

public:
    EglCore();
    ~EglCore();

    bool Init(EGLContext share_ctx);

    // Create a rendering surface
    EGLSurface CreateWindSurface(ANativeWindow *window);

    EGLSurface CreateOffScreenSurface(int width, int height);

    // Bind the OpenGL thread
    void MakeCurrent(EGLSurface egl_surface);

    // Swap the data buffers
    void SwapBuffers(EGLSurface egl_surface);

    // Destroy a surface
    void DestroySurface(EGLSurface elg_surface);

    // Release EGL
    void Release();
};
 

Specific implementation egl_core.cpp

  • Initialization
//egl_core.cpp

bool EglCore::Init(EGLContext share_ctx) {
    if (m_egl_dsp != EGL_NO_DISPLAY) {
        LOGE(TAG, "EGL already set up")
        return true;
    }

    if (share_ctx == NULL) {
        share_ctx = EGL_NO_CONTEXT;
    }

    m_egl_dsp = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    if (m_egl_dsp == EGL_NO_DISPLAY || eglGetError() != EGL_SUCCESS) {
        LOGE(TAG, "EGL init display fail")
        return false;
    }

    EGLint major_ver, minor_ver;
    EGLBoolean success = eglInitialize(m_egl_dsp, &major_ver, &minor_ver);
    if (success != EGL_TRUE || eglGetError() != EGL_SUCCESS) {
        LOGE(TAG, "EGL init fail")
        return false;
    }

    LOGI(TAG, "EGL version: %d.%d", major_ver, minor_ver)

    m_egl_cfg = GetEGLConfig();

    const EGLint attr[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
    m_egl_cxt = eglCreateContext(m_egl_dsp, m_egl_cfg, share_ctx, attr);
    if (m_egl_cxt == EGL_NO_CONTEXT) {
        LOGE(TAG, "EGL create fail, error is %x", eglGetError());
        return false;
    }

    EGLint egl_format;
    success = eglGetConfigAttrib(m_egl_dsp, m_egl_cfg, EGL_NATIVE_VISUAL_ID, &egl_format);
    if (success != EGL_TRUE || eglGetError() != EGL_SUCCESS) {
        LOGE(TAG, "EGL get config fail")
        return false;
    }

    LOGI(TAG, "EGL init success")
    return true;
}
 
//egl_core.cpp

EGLConfig EglCore::GetEGLConfig() {
    EGLint numConfigs;
    EGLConfig config;
    
    static const EGLint CONFIG_ATTRIBS[] = {
          EGL_BUFFER_SIZE, EGL_DONT_CARE,
          EGL_RED_SIZE, 8,
          EGL_GREEN_SIZE, 8,
          EGL_BLUE_SIZE, 8,
          EGL_ALPHA_SIZE, 8,
          EGL_DEPTH_SIZE, 16,
          EGL_STENCIL_SIZE, EGL_DONT_CARE,
          EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
          EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
          EGL_NONE
    };

    EGLBoolean success = eglChooseConfig(m_egl_dsp, CONFIG_ATTRIBS, &config, 1, &numConfigs);
    if (!success || eglGetError() != EGL_SUCCESS) {
        LOGE(TAG, "EGL config fail")
        return NULL;
    }
    return config;
}
 
  • Create a rendering surface

Note that EGL can create both an on-screen (window) rendering surface and an off-screen rendering surface. Off-screen rendering will mainly be used later when compositing video.

//egl_core.cpp

EGLSurface EglCore::CreateWindSurface(ANativeWindow *window) {
    EGLSurface surface = eglCreateWindowSurface(m_egl_dsp, m_egl_cfg, window, 0);
    if (eglGetError() != EGL_SUCCESS) {
        LOGI(TAG, "EGL create window surface fail")
        return NULL;
    }
    return surface;
}

EGLSurface EglCore::CreateOffScreenSurface(int width, int height) {
    const EGLint CONFIG_ATTRIBS[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_NONE
    };

    EGLSurface surface = eglCreatePbufferSurface(m_egl_dsp, m_egl_cfg, CONFIG_ATTRIBS);
    if (eglGetError() != EGL_SUCCESS) {
        LOGI(TAG, "EGL create off screen surface fail")
        return NULL;
    }
    return surface;
}
 
  • Thread binding and buffered data exchange
//egl_core.cpp

void EglCore::MakeCurrent(EGLSurface egl_surface) {
    if (!eglMakeCurrent(m_egl_dsp, egl_surface, egl_surface, m_egl_cxt)) {
        LOGE(TAG, "EGL make current fail");
    }
}

void EglCore::SwapBuffers(EGLSurface egl_surface) {
    eglSwapBuffers(m_egl_dsp, egl_surface);
}
 
  • Release resources
//egl_core.cpp

void EglCore::DestroySurface(EGLSurface elg_surface) {
    eglMakeCurrent(m_egl_dsp, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroySurface(m_egl_dsp, elg_surface);
}

void EglCore::Release() {
    if (m_egl_dsp != EGL_NO_DISPLAY) {
        eglMakeCurrent(m_egl_dsp, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglDestroyContext(m_egl_dsp, m_egl_cxt);
        eglReleaseThread();
        eglTerminate(m_egl_dsp);
    }
    m_egl_dsp = EGL_NO_DISPLAY;
    m_egl_cxt = EGL_NO_CONTEXT;
    m_egl_cfg = NULL;
}
 

Create EglSurface

The header file egl_surface.h

//egl_surface.h

#include <android/native_window.h>
#include "egl_core.h"

class EglSurface {
private:

    const char *TAG = "EglSurface";

    ANativeWindow *m_native_window = NULL;

    EglCore *m_core;

    EGLSurface m_surface;

public:
    EglSurface();
    ~EglSurface();
    
    bool Init();
    void CreateEglSurface(ANativeWindow *native_window, int width, int height);
    void MakeCurrent();
    void SwapBuffers();
    void DestroyEglSurface();
    void Release();
};
 

Specific implementation egl_surface.cpp

//egl_surface.cpp

EglSurface::EglSurface() {
    m_core = new EglCore();
}

EglSurface::~EglSurface() {
    delete m_core;
}

bool EglSurface::Init() {
    return m_core->Init(NULL);
}

void EglSurface::CreateEglSurface(ANativeWindow *native_window,
                                  int width, int height) {
    if (native_window != NULL) {
        this->m_native_window = native_window;
        m_surface = m_core->CreateWindSurface(m_native_window);
    } else {
        m_surface = m_core->CreateOffScreenSurface(width, height);
    }
    if (m_surface == NULL) {
        LOGE(TAG, "EGL create window surface fail")
        Release();
    }
    MakeCurrent();
}

void EglSurface::SwapBuffers() {
    m_core->SwapBuffers(m_surface);
}

void EglSurface::MakeCurrent() {
    m_core->MakeCurrent(m_surface);
}

void EglSurface::DestroyEglSurface() {
    if (m_surface != NULL) {
        if (m_core != NULL) {
            m_core->DestroySurface(m_surface);
        }
        m_surface = NULL;
    }
}

void EglSurface::Release() {
    DestroyEglSurface();
    if (m_core != NULL) {
        m_core->Release();
    }
}
 

Create OpenGL ES rendering thread

Define member variables

//opengl_render.h

class OpenGLRender {
private:

    const char *TAG = "OpenGLRender";

    //OpenGL render thread states
    enum STATE {
        NO_SURFACE, //no surface available
        FRESH_SURFACE, //a new surface has been set
        RENDERING, //rendering
        SURFACE_DESTROY, //surface destroyed
        STOP //stopped, exit the thread
    };

    JNIEnv *m_env = NULL;

    //JVM, used to attach the render thread
    JavaVM *m_jvm_for_thread = NULL;

    //Surface global reference
    jobject m_surface_ref = NULL;

    //Local native window
    ANativeWindow *m_native_window = NULL;

    //EGL surface wrapper
    EglSurface *m_egl_surface = NULL;

    //Drawer proxy
    DrawerProxy *m_drawer_proxy = NULL;

    int m_window_width = 0;
    int m_window_height = 0;

    STATE m_state = NO_SURFACE;

    // ...
};
 

Besides the EGL-related member variables, two things are worth explaining.

First, the states of the render thread are defined; the OpenGL thread will act according to these states.

enum STATE {
    NO_SURFACE, //no surface available
    FRESH_SURFACE, //a new surface has been set
    RENDERING, //rendering
    SURFACE_DESTROY, //surface destroyed
    STOP //stopped, exit the thread
};
 

Second, a drawer proxy DrawerProxy is included here. The main consideration is that multiple videos may be decoded and rendered at the same time; a single drawer could not handle that, so rendering is delegated to the proxy, which manages the drawers. It will be introduced in detail in the next section.

Define member methods

//opengl_render.h

class OpenGLRender {
private:

    // ...

    // Initialization
    void InitRenderThread();
    bool InitEGL();
    void InitDspWindow(JNIEnv *env);

    // Surface management
    void CreateSurface();
    void DestroySurface();

    // Rendering
    void Render();

    // Release resources
    void ReleaseRender();
    void ReleaseDrawers();
    void ReleaseSurface();
    void ReleaseWindow();

    // Render thread entry function
    static void sRenderThread(std::shared_ptr<OpenGLRender> that);

public:
    OpenGLRender(JNIEnv *env, DrawerProxy *drawer_proxy);
    ~OpenGLRender();

    void SetSurface(jobject surface);
    void SetOffScreenSize(int width, int height);
    void Stop();
};
 

Specific implementation in opengl_render.cpp

  • Start thread
//opengl_render.cpp

OpenGLRender::OpenGLRender(JNIEnv *env, DrawerProxy *drawer_proxy):
m_drawer_proxy(drawer_proxy) {
    this->m_env = env;
    //Get the JVM, for attaching the render thread later
    env->GetJavaVM(&m_jvm_for_thread);
    InitRenderThread();
}

OpenGLRender::~OpenGLRender() {
    delete m_egl_surface;
}

void OpenGLRender::InitRenderThread() {
    //Wrap this in a shared_ptr so the detached thread co-owns the object
    std::shared_ptr<OpenGLRender> that(this);
    std::thread t(sRenderThread, that);
    t.detach();
}
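The shared_ptr trick in InitRenderThread() deserves a note: wrapping `this` in a shared_ptr and handing it to a detached thread keeps the object alive until the thread function returns, at which point the last reference deletes it automatically. A minimal standalone sketch of that ownership pattern (names are illustrative; it assumes the object is heap-allocated and never deleted elsewhere):

```cpp
#include <atomic>
#include <memory>
#include <thread>

// Set by the destructor so we can observe when the object is released
static std::atomic<bool> g_destroyed{false};

class Worker {
public:
    ~Worker() { g_destroyed = true; }

    void Start() {
        // The detached thread co-owns the object through this shared_ptr
        std::shared_ptr<Worker> that(this);
        std::thread t(&Worker::Run, that);
        t.detach();
    }

private:
    static void Run(std::shared_ptr<Worker> that) {
        // ... the render loop would live here ...
    }   // `that` goes out of scope here: Worker is deleted automatically
};
```

This is why GLPlayer never needs to `delete` the render object explicitly: its lifetime ends with its thread.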
 
  • Thread state switching
//opengl_render.cpp

void OpenGLRender::sRenderThread(std::shared_ptr<OpenGLRender> that) {
    JNIEnv * env;

    //Attach the thread to the JVM to obtain a JNIEnv
    if (that->m_jvm_for_thread->AttachCurrentThread(&env, NULL) != JNI_OK) {
        LOGE(that->TAG, "Attach render thread to JVM failed");
        return;
    }

    //Initialize EGL
    if(!that->InitEGL()) {
        //Detach from the JVM before exiting
        that->m_jvm_for_thread->DetachCurrentThread();
        return;
    }

    while (true) {
        switch (that->m_state) {
            case FRESH_SURFACE:
                LOGI(that->TAG, "Loop Render FRESH_SURFACE")
                that->InitDspWindow(env);
                that->CreateSurface();
                that->m_state = RENDERING;
                break;
            case RENDERING:
                that->Render();
                break;
            case SURFACE_DESTROY:
                LOGI(that->TAG, "Loop Render SURFACE_DESTROY")
                that->DestroySurface();
                that->m_state = NO_SURFACE;
                break;
            case STOP:
                LOGI(that->TAG, "Loop Render STOP")
                //Release resources, detach from the JVM and exit the thread
                that->ReleaseRender();
                that->m_jvm_for_thread->DetachCurrentThread();
                return;
            case NO_SURFACE:
            default:
                break;
        }
        usleep(20000);
    }
}

bool OpenGLRender::InitEGL() {
    m_egl_surface = new EglSurface();
    return m_egl_surface->Init();
}
 

Before entering the while(true) render loop, an EglSurface (the top-level EGL wrapper above) is created and its Init method is called for initialization.

Then the loop is entered:

i. When the external SurfaceView arrives, the state becomes FRESH_SURFACE; the window is initialized and bound to EGL.

ii. The state then automatically switches to RENDERING and rendering starts.

iii. Meanwhile, if the player is detected to have exited, the state becomes STOP; resources are released and the thread exits.
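The state switching above can be modeled as a small runnable loop. One caveat: the tutorial stores m_state as a plain enum field, but since the state is written from another thread, std::atomic is used in this sketch (assumed names, simplified states):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Simplified model of the render loop's state machine: another thread
// flips m_state and the loop reacts on its next tick.
enum State { NO_SURFACE, FRESH_SURFACE, RENDERING, SURFACE_DESTROY, STOP };

struct RenderLoopModel {
    std::atomic<State> m_state{NO_SURFACE};
    std::atomic<int> m_frames_rendered{0};

    void Run() {
        while (true) {
            switch (m_state.load()) {
                case FRESH_SURFACE:
                    // window init + EGL binding would happen here
                    m_state = RENDERING;
                    break;
                case RENDERING:
                    ++m_frames_rendered;    // stands in for Render()
                    break;
                case STOP:
                    return;                 // release resources, exit thread
                case NO_SURFACE:
                case SURFACE_DESTROY:
                default:
                    break;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
};
```

Setting FRESH_SURFACE from the UI thread starts rendering on the next tick; setting STOP makes the loop return and the thread end cleanly.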

  • Set up SurfaceView, start rendering
//opengl_render.cpp

void OpenGLRender::SetSurface(jobject surface) {
    if (NULL != surface) {
        m_surface_ref = m_env->NewGlobalRef(surface);
        m_state = FRESH_SURFACE;
    } else {
        m_env->DeleteGlobalRef(m_surface_ref);
        m_state = SURFACE_DESTROY;
    }
}

void OpenGLRender::InitDspWindow(JNIEnv *env) {
    if (m_surface_ref != NULL) {
        //Get the native window from the Surface
        m_native_window = ANativeWindow_fromSurface(env, m_surface_ref);

        //Query the window dimensions
        m_window_width = ANativeWindow_getWidth(m_native_window);
        m_window_height = ANativeWindow_getHeight(m_native_window);

        //Configure the window buffer geometry
        ANativeWindow_setBuffersGeometry(m_native_window, m_window_width,
                                         m_window_height, WINDOW_FORMAT_RGBA_8888);

        LOGD(TAG, "View Port width: %d, height: %d", m_window_width, m_window_height)
    }
}

void OpenGLRender::CreateSurface() {
    m_egl_surface->CreateEglSurface(m_native_window, m_window_width, m_window_height);
    glViewport(0, 0, m_window_width, m_window_height);
}
 

As you can see, the ANativeWindow initialization is the same as when the local window was used directly to display the video in "Android FFmpeg Video Decoding and Playing".

Then CreateSurface binds the window to EGL.

  • Rendering

Rendering is very simple: call the drawer proxy's Draw, then call EGL's SwapBuffers to swap the buffered data.

//opengl_render.cpp

void OpenGLRender::Render() {
    if (RENDERING == m_state) {
        m_drawer_proxy->Draw();
        m_egl_surface->SwapBuffers();
    }
}
 
  • Release resources

When Stop() is called externally, the state changes to STOP and ReleaseRender() is called to release resources.

//opengl_render.cpp

void OpenGLRender::Stop() {
    m_state = STOP;
}

void OpenGLRender::ReleaseRender() {
    ReleaseDrawers();
    ReleaseSurface();
    ReleaseWindow();
}

void OpenGLRender::ReleaseSurface() {
    if (m_egl_surface != NULL) {
        m_egl_surface->Release();
        delete m_egl_surface;
        m_egl_surface = NULL;
    }
}

void OpenGLRender::ReleaseWindow() {
    if (m_native_window != NULL) {
        ANativeWindow_release(m_native_window);
        m_native_window = NULL;
    }
}

void OpenGLRender::ReleaseDrawers() {
    if (m_drawer_proxy != NULL) {
        m_drawer_proxy->Release();
        delete m_drawer_proxy;
        m_drawer_proxy = NULL;
    }
}
 

3. Create the OpenGL ES renderer

The OpenGL rendering process at the NDK layer is exactly the same as at the Java layer, so it will not be repeated here; see "A first look at OpenGL ES" and "Using OpenGL to render video frames". The code is also kept as simple as possible, mainly to show the overall flow; for the full code see the [Demo source code].

Drawer

First, the basic operations are encapsulated in a base class. The code will not be posted in full here; just look at the "skeleton" of the drawing functions.

Header file drawer.h

//drawer.h
class Drawer {
private:
    // ...

    void CreateTextureId();
    void CreateProgram();
    GLuint LoadShader(GLenum type, const GLchar *shader_code);
    void DoDraw();

public:

    void Draw();

    bool IsReadyToDraw();

    void Release();

protected:
    //Data to be drawn (can point to any type)
    void *cst_data = NULL;

    void SetSize(int width, int height);
    //Activate and bind the texture; the sentinel defaults mean
    //"use the drawer's own m_texture_id / m_texture_handler"
    //(C++ forbids member variables as default arguments)
    void ActivateTexture(GLenum type = GL_TEXTURE_2D, GLuint texture = 0,
                         GLenum index = 0, int texture_handler = -1);

    //To be implemented by subclasses
    virtual const char* GetVertexShader() = 0;
    virtual const char* GetFragmentShader() = 0;
    virtual void InitCstShaderHandler() = 0;
    virtual void BindTexture() = 0;
    virtual void PrepareDraw() = 0;
    virtual void DoneDraw() = 0;
};
 

Two key points here:

First, void *cst_data: this member stores the data to be drawn. Its type is void *, so it can hold a pointer to any type of data, which is convenient for storing the picture data decoded by FFmpeg.

Second, the several virtual functions at the end are similar to Java's abstract methods and must be implemented by subclasses.

Concrete implementation drawer.cpp

Mainly look at the Draw() method; for the details see the [Source].

//drawer.cpp
void Drawer::Draw() {
    if (IsReadyToDraw()) {
        CreateTextureId();
        CreateProgram();
        BindTexture();
        PrepareDraw();
        DoDraw();
        DoneDraw();
    }
}
 

The drawing process is the same as the Java-layer OpenGL rendering process:

  • Create texture ID
  • Create GL program
  • Activate and bind texture ID
  • draw

Finally, look at the specific implementation of the subclass.

Video Drawer VideoDrawer

In the previous articles of this series, for extensibility, a renderer interface VideoRender was defined. The video decoder VideoDecoder calls the renderer's Render() method after decoding each frame.

class VideoRender {
public:
    virtual void InitRender(JNIEnv *env, int video_width, int video_height, int *dst_size) = 0;
    virtual void Render(OneFrame *one_frame) = 0;
    virtual void ReleaseRender() = 0;
};
 

Above, although OpenGLRender was defined to drive OpenGL rendering, it does not inherit from VideoRender; as mentioned before, OpenGLRender calls the drawer proxy to do the real drawing.

Therefore, a subclass VideoDrawer is created, inheriting both Drawer and VideoRender. Take a look:

The header file video_render.h

//video_render.h

class VideoDrawer: public Drawer, public VideoRender {
public:

    VideoDrawer();
    ~VideoDrawer();

    //Implement the VideoRender interface
    void InitRender(JNIEnv *env, int video_width, int video_height, int *dst_size) override ;
    void Render(OneFrame *one_frame) override ;
    void ReleaseRender() override ;

    //Implement the Drawer base class
    const char* GetVertexShader() override;
    const char* GetFragmentShader() override;
    void InitCstShaderHandler() override;
    void BindTexture() override;
    void PrepareDraw() override;
    void DoneDraw() override;
};
 

Concrete implementation video_render.cpp

//video_render.cpp

VideoDrawer::VideoDrawer(): Drawer(0, 0) {
}

VideoDrawer::~VideoDrawer() {

}

void VideoDrawer::InitRender(JNIEnv *env, int video_width, int video_height, int *dst_size) {
    SetSize(video_width, video_height);
    dst_size[0] = video_width;
    dst_size[1] = video_height;
}

void VideoDrawer::Render(OneFrame *one_frame) {
    cst_data = one_frame->data;
}

void VideoDrawer::BindTexture() {
    ActivateTexture();
}

void VideoDrawer::PrepareDraw() {
    if (cst_data != NULL) {
        glTexImage2D(GL_TEXTURE_2D, 0, //mipmap level 0
                     GL_RGBA, //internal format
                     origin_width(), origin_height(), //picture width and height
                     0, //border, must be 0
                     GL_RGBA, //pixel data format
                     GL_UNSIGNED_BYTE, //RGBA, one byte per channel
                     cst_data);//picture data
    }
}

const char* VideoDrawer::GetVertexShader() {
    //static: the string must outlive this function, otherwise the
    //returned pointer would dangle
    static const char shader[] = "attribute vec4 aPosition;\n"
                                 "attribute vec2 aCoordinate;\n"
                                 "varying vec2 vCoordinate;\n"
                                 "void main() {\n"
                                 "  gl_Position = aPosition;\n"
                                 "  vCoordinate = aCoordinate;\n"
                                 "}";
    return shader;
}

const char* VideoDrawer::GetFragmentShader() {
    static const char shader[] = "precision mediump float;\n"
                                 "uniform sampler2D uTexture;\n"
                                 "varying vec2 vCoordinate;\n"
                                 "void main() {\n"
                                 "  vec4 color = texture2D(uTexture, vCoordinate);\n"
                                 "  gl_FragColor = color;\n"
                                 "}";
    return shader;
}
}

void VideoDrawer::ReleaseRender() {
}

void VideoDrawer::InitCstShaderHandler() {

}

void VideoDrawer::DoneDraw() {
}
 

The two main methods here are:

Render(OneFrame *one_frame): stores the decoded picture data into cst_data.

PrepareDraw(): before drawing, maps the cst_data data onto an OpenGL 2D texture via glTexImage2D.
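Since the upload uses GL_RGBA / GL_UNSIGNED_BYTE, cst_data must point to a tightly packed buffer of width * height * 4 bytes (FFmpeg's sws_scale can produce padded line sizes, so the conversion must target linesize == width * 4). A small sketch with hypothetical helper names shows the expected shape of that buffer:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Size of a tightly packed RGBA frame as expected by the
// glTexImage2D call above (4 bytes per pixel)
constexpr size_t RgbaBufferSize(int width, int height) {
    return static_cast<size_t>(width) * height * 4;
}

// Fill a solid-color RGBA frame: the exact layout PrepareDraw() uploads
std::vector<uint8_t> MakeSolidFrame(int width, int height,
                                    uint8_t r, uint8_t g,
                                    uint8_t b, uint8_t a) {
    std::vector<uint8_t> frame(RgbaBufferSize(width, height));
    for (size_t i = 0; i < frame.size(); i += 4) {
        frame[i]     = r;   // red
        frame[i + 1] = g;   // green
        frame[i + 2] = b;   // blue
        frame[i + 3] = a;   // alpha
    }
    return frame;
}
```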

The drawer proxy

As mentioned earlier, to support rendering multiple decoded videos at once, a proxy needs to be defined through which the Drawer calls are made. Here is how it is implemented.

Define the painter proxy

//drawer_proxy.h

class DrawerProxy {
public:
    virtual void Draw() = 0;
    virtual void Release() = 0;
    virtual ~DrawerProxy() {}
};
 

Very simple: it exposes only two methods, one for drawing and one for releasing.

Implement the default proxy DefDrawerProxyImpl

  • The header file def_drawer_proxy_impl.h
//def_drawer_proxy_impl.h

class DefDrawerProxyImpl: public DrawerProxy {

private:
    std::vector<Drawer *> m_drawers;

public:
    void AddDrawer(Drawer *drawer);
    void Draw() override;
    void Release() override;
};
 

Here, a container is used to hold multiple Drawer instances.

  • Specific implementation def_drawer_proxy_impl.cpp
//def_drawer_proxy_impl.cpp

void DefDrawerProxyImpl::AddDrawer(Drawer *drawer) {
    m_drawers.push_back(drawer);
}

void DefDrawerProxyImpl::Draw() {
    for (int i = 0; i < m_drawers.size(); ++i) {
        m_drawers[i]->Draw(); 
    }
}

void DefDrawerProxyImpl::Release() {
    for (int i = 0; i < m_drawers.size(); ++i) {
        m_drawers[i]->Release();
        delete m_drawers[i];
    }

    m_drawers.clear();
}
 

The implementation is very simple: the Drawers that need to draw are added to the container, and when OpenGLRender calls Draw(), all Drawers are traversed to do the real drawing.
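The fan-out behavior can be verified with a toy version of the proxy: one Draw() call on the proxy reaches every registered drawer. CountingDrawer below is a stand-in used only to observe the traversal (illustrative names, not from the project):

```cpp
#include <vector>

// Minimal drawer interface, mirroring DrawerProxy's shape
class IDrawer {
public:
    virtual void Draw() = 0;
    virtual ~IDrawer() = default;
};

// Stand-in drawer that just counts how often it was asked to draw
class CountingDrawer : public IDrawer {
public:
    int draw_calls = 0;
    void Draw() override { ++draw_calls; }
};

// Toy proxy: one Draw() call fans out to every registered drawer
class ProxyImpl {
    std::vector<IDrawer *> m_drawers;
public:
    void AddDrawer(IDrawer *drawer) { m_drawers.push_back(drawer); }
    void Draw() {
        for (IDrawer *d : m_drawers) d->Draw();
    }
};
```

This is why OpenGLRender only ever holds one DrawerProxy pointer, no matter how many videos are being rendered.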

4. Integrate playback

With the above, we have completed:

  • Establishing the OpenGL thread
  • Initializing EGL
  • Defining the drawer Drawer and implementing VideoDrawer
  • Defining DrawerProxy and implementing DefDrawerProxyImpl

Finally, it is just a matter of combining them to close the loop of the whole process.

Define GLPlayer

Header file gl_player.h

//gl_player.h

class GLPlayer {

private:
    VideoDecoder *m_v_decoder;
    OpenGLRender *m_gl_render;

    DrawerProxy *m_v_drawer_proxy;
    VideoDrawer *m_v_drawer;

    AudioDecoder *m_a_decoder;
    AudioRender *m_a_render;

public:
    GLPlayer(JNIEnv *jniEnv, jstring path);
    ~GLPlayer();

    void SetSurface(jobject surface);
    void PlayOrPause();
    void Release();
};
 

Implement gl_player.cpp


GLPlayer::GLPlayer(JNIEnv *jniEnv, jstring path) {
    m_v_decoder = new VideoDecoder(jniEnv, path);

    //OpenGL video drawer
    m_v_drawer = new VideoDrawer();
    m_v_decoder->SetRender(m_v_drawer);

    //Create the drawer proxy
    DefDrawerProxyImpl *proxyImpl =  new DefDrawerProxyImpl();
    //Add the video drawer to the proxy
    proxyImpl->AddDrawer(m_v_drawer);

    m_v_drawer_proxy = proxyImpl;

    //Create the OpenGL renderer
    m_gl_render = new OpenGLRender(jniEnv, m_v_drawer_proxy);

    //Audio decoding and playback
    m_a_decoder = new AudioDecoder(jniEnv, path, false);
    m_a_render = new OpenSLRender();
    m_a_decoder->SetRender(m_a_render);
}

GLPlayer::~GLPlayer() {
    //No need to delete the members here:
    //BaseDecoder and OpenGLRender release themselves when their threads exit
}

void GLPlayer::SetSurface(jobject surface) {
    m_gl_render->SetSurface(surface);
}

void GLPlayer::PlayOrPause() {
    if (!m_v_decoder->IsRunning()) {
        m_v_decoder->GoOn();
    } else {
        m_v_decoder->Pause();
    }
    if (!m_a_decoder->IsRunning()) {
        m_a_decoder->GoOn();
    } else {
        m_a_decoder->Pause();
    }
}

void GLPlayer::Release() {
    m_gl_render->Stop();
    m_v_decoder->Stop();
    m_a_decoder->Stop();
}
 

Define the JNI interface

//native-lib.cpp

extern "C" {
    JNIEXPORT jint JNICALL
    Java_com_cxp_learningvideo_FFmpegGLPlayerActivity_createGLPlayer(
        JNIEnv *env,
        jobject  /* this */,
        jstring path,
        jobject surface) {

        GLPlayer *player = new GLPlayer(env, path);
        player->SetSurface(surface);
        //Note: storing the pointer in a jint only works on 32-bit ABIs;
        //production code should use jlong for the native handle
        return (jint) player;
    }

    JNIEXPORT void JNICALL
    Java_com_cxp_learningvideo_FFmpegGLPlayerActivity_playOrPause(
        JNIEnv *env,
        jobject  /* this */,
        jint player) {
        
        GLPlayer *p = (GLPlayer *) player;
        p->PlayOrPause();
    }


    JNIEXPORT void JNICALL
    Java_com_cxp_learningvideo_FFmpegGLPlayerActivity_stop(
        JNIEnv *env,
        jobject  /* this */,
        jint player) {
        
        GLPlayer *p = (GLPlayer *) player;
        p->Release();
    }
}
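One caveat about the JNI interface above: it returns the GLPlayer pointer as a jint, which truncates the pointer on 64-bit Android. A safer handle type is a 64-bit integer (jlong on the JNI side), which can hold a pointer on both ABIs. A minimal sketch of the round-trip, with a hypothetical Player type:

```cpp
#include <cstdint>

// Hypothetical native object referenced from Java by an opaque handle
struct Player { int id; };

// A 64-bit integer is wide enough for a pointer on 32- and 64-bit ABIs
static_assert(sizeof(int64_t) >= sizeof(void *),
              "handle type must be able to hold a pointer");

int64_t ToHandle(Player *p) {
    return reinterpret_cast<int64_t>(p);        // pointer -> handle (jlong)
}

Player *FromHandle(int64_t handle) {
    return reinterpret_cast<Player *>(handle);  // handle -> pointer
}
```

On the Kotlin side this corresponds to declaring the external functions with `Long` instead of `Int` for the player handle.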
 

Start playback in the page

class FFmpegGLPlayerActivity: AppCompatActivity() {

    val path = Environment.getExternalStorageDirectory().absolutePath + "/mvtest.mp4"

    private var player: Int? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_ff_gl_player)
        initSfv()
    }

    private fun initSfv() {
        if (File(path).exists()) {
            sfv.holder.addCallback(object : SurfaceHolder.Callback {
                override fun surfaceChanged(holder: SurfaceHolder, format: Int,
                                            width: Int, height: Int) {}
                override fun surfaceDestroyed(holder: SurfaceHolder) {
                    stop(player!!)
                }

                override fun surfaceCreated(holder: SurfaceHolder) {
                    if (player == null) {
                        player = createGLPlayer(path, holder.surface)
                        playOrPause(player!!)
                    }
                }
            })
        } else {
            Toast.makeText(this, "Please put mvtest.mp4 in the sdcard root directory", Toast.LENGTH_SHORT).show()
        }
    }

    private external fun createGLPlayer(path: String, surface: Surface): Int
    private external fun playOrPause(player: Int)
    private external fun stop(player: Int)

    companion object {
        init {
            System.loadLibrary("native-lib")
        }
    }
}