# Video Encoding

You can call the native APIs provided by the VideoEncoder module to encode a video, that is, to compress video data into video streams.

<!--RP3--><!--RP3End-->

For details about the supported encoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-encoding).

<!--RP1--><!--RP1End-->

The following table lists the video encoding capabilities supported:

<!--RP4-->
|          Capability                      |                              How to Use                                           |
| --------------------------------------- | ---------------------------------------------------------------------------------- |
| Layered encoding<br> Setting the LTR frame and reference frame                     | For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md).       |
<!--RP4End-->

## Restrictions

- The buffer mode does not support 10-bit image data.
- Due to limited hardware encoder resources, you must call **OH_VideoEncoder_Destroy** to destroy every encoder instance when it is no longer needed.
- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed.
- Once **Flush**, **Reset**, or **Stop** is called, the system reclaims the OH_AVBuffer. Therefore, do not continue to operate the OH_AVBuffer obtained through the previous callback function.
- The buffer mode and surface mode use the same APIs. Therefore, this document uses the surface mode as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer object through the callback function **OH_AVCodecOnNeedInputBuffer**, call **OH_VideoEncoder_PushInputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system will proceed with encoding the data contained in the buffer. If the OH_NativeBuffer object is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer object, you must perform data duplication. In this case, you should manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released.

## Surface Input and Buffer Input

- Surface input and buffer input differ in data sources.

- They are applicable to different scenarios.
  - Surface input indicates that the OHNativeWindow is used to transfer passed-in data. It supports connection with other modules, such as the camera module.
  - Buffer input refers to a pre-allocated memory area. The caller needs to copy original data to this memory area. It is more applicable to scenarios such as reading video data from files.

- The two also differ slightly in the API calling modes:
  - In buffer mode, the caller calls **OH_VideoEncoder_PushInputBuffer** to input data. In surface mode, the caller, before the encoder is ready, calls **OH_VideoEncoder_GetSurface** to obtain the OHNativeWindow for video data transmission.
  - In buffer mode, the caller uses **attr** in **OH_AVBuffer** to pass in the End of Stream (EOS) flag, and the encoder stops when it reads the last frame. In surface mode, the caller calls **OH_VideoEncoder_NotifyEndOfStream** to notify the encoder of EOS.

For details about the development procedure, see [Surface Input](#surface-input) and [Buffer Input](#buffer-input).

## State Machine Interaction

The following figure shows the interaction between states.

![Invoking relationship of state](figures/state-invocation.png)

1. An encoder enters the Initialized state in either of the following ways:
   - When an encoder instance is initially created, the encoder enters the Initialized state.
   - When **OH_VideoEncoder_Reset** is called in any state, the encoder returns to the Initialized state.

2. When the encoder is in the Initialized state, you can call **OH_VideoEncoder_Configure** to configure the encoder. After the configuration, the encoder enters the Configured state.
3. When the encoder is in the Configured state, you can call **OH_VideoEncoder_Prepare()** to switch it to the Prepared state.
4. When the encoder is in the Prepared state, you can call **OH_VideoEncoder_Start** to switch it to the Executing state.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the encoder may encounter an error and enter the Error state. If this is the case, an invalid value can be returned or an exception can be thrown through a queue operation.
   - When the encoder is in the Error state, you can either call **OH_VideoEncoder_Reset** to switch it to the Initialized state or call **OH_VideoEncoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoEncoder_Start** is called, the encoder enters the Running substate immediately.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the encoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffers queue. Once this flag is detected, the encoder transits to the End-of-Stream substate. In this state, the encoder does not accept new inputs, but continues to generate outputs until it reaches the tail frame.

7. When the encoder is no longer needed, you must call **OH_VideoEncoder_Destroy** to destroy the encoder instance. Then the encoder enters the Released state.

## How to Develop

Read [VideoEncoder](../../reference/apis-avcodec-kit/_video_encoder.md) for the API reference.

The figure below shows the call relationship of video encoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

![Call relationship of video encoding](figures/video-encode.png)

### Linking the Dynamic Libraries in the CMake Script

```cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_venc.so)
```

> **NOTE**
>
> The word 'sample' in the preceding code snippet is only an example. Use the actual project directory name.
>

### Defining the Basic Structure

The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it.

1. Add the header files.

    ```c++
    #include <chrono>
    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <shared_mutex>
    ```

2. Define the information about the encoder callback buffer.

    ```c++
    struct CodecBufferInfo {
        CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer): index(index), buffer(buffer), isValid(true) {}
        CodecBufferInfo(uint32_t index, OH_AVFormat *parameter): index(index), parameter(parameter), isValid(true) {}
        // Callback buffer.
        OH_AVBuffer *buffer = nullptr;
        // In surface mode, the frame-specific parameter of the callback, which can be used only after the frame-specific parameter callback function is registered.
        OH_AVFormat *parameter = nullptr;
        // Index of the callback buffer.
        uint32_t index = 0;
        // Whether the current buffer information is valid.
        bool isValid = true;
    };
    ```

3. Define the input and output queue for encoding.

    ```c++
    class CodecBufferQueue {
    public:
        // Pass the callback buffer information to the queue.
        void Enqueue(const std::shared_ptr<CodecBufferInfo> bufferInfo)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            bufferQueue_.push(bufferInfo);
            cond_.notify_all();
        }

        // Obtain the information about the callback buffer.
        std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); });
            if (bufferQueue_.empty()) {
                return nullptr;
            }
            std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
            bufferQueue_.pop();
            return bufferInfo;
        }

        // Clear the queue. The previous callback buffer becomes unavailable.
        void Flush()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (!bufferQueue_.empty()) {
                std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
                // After the flush, stop, reset, and destroy operations are performed, the previous callback buffer information is invalid.
                bufferInfo->isValid = false;
                bufferQueue_.pop();
            }
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_;
    };
    ```

4. Define global variables.

    These global variables are for reference only. They can be encapsulated into an object based on service requirements.

    ```c++
    // Video frame width.
    int32_t width = 320;
    // Video frame height.
    int32_t height = 240;
    // Video pixel format.
    OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12;
    // Video width stride.
    int32_t widthStride = 0;
    // Video height stride.
    int32_t heightStride = 0;
    // Pointer to the encoder instance.
    OH_AVCodec *videoEnc = nullptr;
    // Encoder synchronization lock.
    std::shared_mutex codecMutex;
    // Encoder input queue.
    CodecBufferQueue inQueue;
    // Encoder output queue.
    CodecBufferQueue outQueue;
    ```

### Surface Input

The following walks you through how to implement the entire video encoding process in surface mode. In this example, surface data is input and encoded into an H.264 stream.

Currently, the VideoEncoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    #include <string_view>
    ```

2. Create an encoder instance.

    You can create an encoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoEnc**: pointer to the video encoder instance.
    - **capability**: pointer to the encoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    The following is an example:

    ```c++
    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
    // To create a hardware encoder instance, query the capability by category instead:
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true, HARDWARE);
    const char *codecName = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
    ```

    ```c++
    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // Only hardware encoders can be created.
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    ```

3. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required. This callback does not take effect, since you input data through the obtained surface.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that encoding is complete.

    <!--RP2--><!--RP2End-->

    The following is an example:

    <!--RP5-->
    ```c++
    // Set the OH_AVCodecOnError callback function, which is used to report a codec operation error.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```
    <!--RP5End-->

    <!--RP12-->
    ```c++
    // Set the OH_AVCodecOnStreamChanged callback function, which is used to report an encoding stream change.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // In surface mode, this callback function is triggered when the surface resolution changes.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_HEIGHT, &height);
    }
    ```
    <!--RP12End-->

    ```c++
    // Set the OH_AVCodecOnNeedInputBuffer callback function, which is used to send an input frame to the data queue.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // In surface mode, this callback function does not take effect. Data is input through the obtained surface.
        (void)userData;
        (void)index;
        (void)buffer;
    }
    ```

    <!--RP6-->
    ```c++
    // Set the OH_AVCodecOnNewOutputBuffer callback function, which is used to send an encoded frame to the output queue.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)codec;
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```
    <!--RP6End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

4. (Optional) Call **OH_VideoEncoder_RegisterParameterCallback()** to register the frame-specific parameter callback function.

    For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md).

    <!--RP7-->
    ```c++
    // 4.1 Implement the OH_VideoEncoder_OnNeedInputParameter callback function.
    static void OnNeedInputParameter(OH_AVCodec *codec, uint32_t index, OH_AVFormat *parameter, void *userData)
    {
        // The data parameter of the input frame and its index are sent to inQueue.
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, parameter));
    }

    // 4.2 Register the frame-specific parameter callback function.
    OH_VideoEncoder_OnNeedInputParameter inParaCb = OnNeedInputParameter;
    OH_VideoEncoder_RegisterParameterCallback(videoEnc, inParaCb, NULL); // NULL: userData is null.
    ```
    <!--RP7End-->

5. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoEncoder_Configure()](../../reference/apis-avcodec-kit/_video_encoder.md#oh_videoencoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format. In the code snippet below, the following variables are used:

    - **DEFAULT_WIDTH**: 320 pixels
    - **DEFAULT_HEIGHT**: 240 pixels
    - **DEFAULT_PIXELFORMAT**: **AV_PIXEL_FORMAT_NV12** (the pixel format of the YUV file is NV12)

    ```c++
    // Configure the video frame rate.
    double frameRate = 30.0;
    // Configure the video YUV range flag.
    bool rangeFlag = false;
    // Configure the video primary color.
    int32_t primary = static_cast<int32_t>(OH_ColorPrimary::COLOR_PRIMARY_BT709);
    // Configure the transfer characteristics.
    int32_t transfer = static_cast<int32_t>(OH_TransferCharacteristic::TRANSFER_CHARACTERISTIC_BT709);
    // Configure the maximum matrix coefficient.
    int32_t matrix = static_cast<int32_t>(OH_MatrixCoefficient::MATRIX_COEFFICIENT_IDENTITY);
    // Configure the encoding profile.
    int32_t profile = static_cast<int32_t>(OH_AVCProfile::AVC_PROFILE_HIGH);
    // Configure the encoding bit rate mode.
    int32_t rateMode = static_cast<int32_t>(OH_VideoEncodeBitrateMode::VBR);
    // Configure the key frame interval, in milliseconds.
    int32_t iFrameInterval = 1000;
    // Configure the bit rate.
    int64_t bitRate = 5000000;
    // Set the encoding quality.
    int32_t quality = 90;

    OH_AVFormat *format = OH_AVFormat_Create();
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); // Mandatory

    OH_AVFormat_SetDoubleValue(format, OH_MD_KEY_FRAME_RATE, frameRate);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_RANGE_FLAG, rangeFlag);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_COLOR_PRIMARIES, primary);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_TRANSFER_CHARACTERISTICS, transfer);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_MATRIX_COEFFICIENTS, matrix);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_I_FRAME_INTERVAL, iFrameInterval);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PROFILE, profile);
    // Configure OH_MD_KEY_QUALITY only when the bit rate mode is CQ.
    if (rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::CQ)) {
        OH_AVFormat_SetIntValue(format, OH_MD_KEY_QUALITY, quality);
    } else if (rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::CBR) ||
               rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::VBR)) {
        OH_AVFormat_SetLongValue(format, OH_MD_KEY_BITRATE, bitRate);
    }
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENCODE_BITRATE_MODE, rateMode);
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

    > **NOTE**
    >
    > If an optional parameter is incorrectly configured, the error code **AV_ERR_INVALID_VAL** is returned. However, **OH_VideoEncoder_Configure()** does not fail. Instead, its execution continues with the default value.

6. Obtain a surface.

    Obtain the OHNativeWindow in surface mode. The surface must be obtained before the encoder is prepared.

    ```c++
    // Obtain the surface used for data input.
    OHNativeWindow *nativeWindow;
    int32_t ret = OH_VideoEncoder_GetSurface(videoEnc, &nativeWindow);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Use the OHNativeWindow* variable to obtain the address of the data to be filled through the producer interface.
    ```

    For details about how to use the OHNativeWindow* variable, see [OHNativeWindow](../../reference/apis-arkgraphics2d/_native_window.md#ohnativewindow).

7. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    int32_t ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

8. Call **OH_VideoEncoder_Start()** to start the encoder.

    ```c++
    // Configure the path of the output file.
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

9. (Optional) Call **OH_VideoEncoder_SetParameter()** to dynamically configure encoder parameters during running.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    <!--RP8-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request IDR frames.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP8End-->

10. Write the image to encode.

    In step 6, you have obtained the **OHNativeWindow*** variable returned by **OH_VideoEncoder_GetSurface**. The data required for encoding is continuously input through the surface. Therefore, you do not need to process the **OnNeedInputBuffer** callback function or use **OH_VideoEncoder_PushInputBuffer** to input data.
    <!--RP13--><!--RP13End-->

11. (Optional) Call **OH_VideoEncoder_PushInputParameter()** to notify the encoder that the frame-specific parameter configuration is complete.

    In step 4, you have registered the frame-specific parameter callback function.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNeedInputParameter**, which uniquely corresponds to the buffer.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // The value is determined by the caller.
    int32_t isIFrame = 0;
    OH_AVFormat_SetIntValue(bufferInfo->parameter, OH_MD_KEY_REQUEST_I_FRAME, isIFrame);
    int32_t ret = OH_VideoEncoder_PushInputParameter(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

12. Call **OH_VideoEncoder_NotifyEndOfStream()** to notify the encoder of EOS.

    ```c++
    // In surface mode, you only need to call this API to notify the encoder of EOS.
    // In buffer mode, you need to set the AVCODEC_BUFFER_FLAGS_EOS flag and then call OH_VideoEncoder_PushInputBuffer to notify the encoder of EOS.
    int32_t ret = OH_VideoEncoder_NotifyEndOfStream(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

13. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Obtain the encoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
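
    Each released output buffer holds encoded data in H.264 Annex-B format, so the written file is a sequence of NAL units separated by 3- or 4-byte start codes. As a quick sanity check on the output, you can count the NAL units by scanning for those start codes. This helper is a standalone illustration, not part of the AVCodec API:

    ```c++
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Count NAL units in an H.264 Annex-B byte stream by locating
    // 0x000001 (3-byte) and 0x00000001 (4-byte) start codes.
    static size_t CountNalUnits(const std::vector<uint8_t> &stream)
    {
        size_t count = 0;
        for (size_t i = 0; i + 3 <= stream.size(); ++i) {
            if (stream[i] == 0x00 && stream[i + 1] == 0x00) {
                if (stream[i + 2] == 0x01) {
                    ++count;
                    i += 2; // Skip the 3-byte start code.
                } else if (i + 4 <= stream.size() && stream[i + 2] == 0x00 && stream[i + 3] == 0x01) {
                    ++count;
                    i += 3; // Skip the 4-byte start code.
                }
            }
        }
        return count;
    }
    ```

    Typically, a stream produced by the steps above begins with SPS and PPS NAL units followed by an IDR frame.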

14. (Optional) Call **OH_VideoEncoder_Flush()** to refresh the encoder.

    After **OH_VideoEncoder_Flush** is called, the encoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the encoder are cleared.

    To continue encoding, you must call **OH_VideoEncoder_Start** again.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Refresh the encoder.
    int32_t ret = OH_VideoEncoder_Flush(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Start encoding again.
    ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

15. (Optional) Call **OH_VideoEncoder_Reset()** to reset the encoder.

    After **OH_VideoEncoder_Reset** is called, the encoder returns to the Initialized state. To continue, you must call **OH_VideoEncoder_Configure** and **OH_VideoEncoder_Prepare** again.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Reset the encoder.
    int32_t ret = OH_VideoEncoder_Reset(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Reconfigure the encoder.
    ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The encoder is ready again.
    ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

16. (Optional) Call **OH_VideoEncoder_Stop()** to stop the encoder.

    After **OH_VideoEncoder_Stop** is called, the encoder retains the encoding instance and releases the input and output buffers. You can directly call **OH_VideoEncoder_Start** to continue encoding.

    After the restart, the first **buffer** passed must carry the parameter set (such as the H.264 PPS/SPS), starting from the IDR frame.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Stop the encoder.
    int32_t ret = OH_VideoEncoder_Stop(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

17. Call **OH_VideoEncoder_Destroy()** to destroy the encoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback function.
    > After the call, you must set the encoder pointer to NULL to prevent program errors caused by wild pointers.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Release the nativeWindow instance.
    if (nativeWindow != NULL) {
        OH_NativeWindow_DestroyNativeWindow(nativeWindow);
        nativeWindow = NULL;
    }
    // Call OH_VideoEncoder_Destroy to destroy the encoder.
    int32_t ret = AV_ERR_OK;
    if (videoEnc != NULL) {
        ret = OH_VideoEncoder_Destroy(videoEnc);
        videoEnc = NULL;
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

### Buffer Input

The following walks you through how to implement the entire video encoding process in buffer mode. It uses a YUV file as input and H.264 as the encoding format.
Currently, the VideoEncoder module supports only data rotation in asynchronous mode.
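
Because the input is a raw NV12 file with no frame headers, each frame occupies exactly width \* height bytes of Y data followed by width \* height / 2 bytes of interleaved UV data. A minimal standalone sketch of frame-by-frame reading (the helper names are illustrative, not part of the AVCodec API):

```c++
#include <cstdint>
#include <fstream>
#include <vector>

// Size in bytes of one tightly packed NV12 frame: Y plane plus interleaved UV plane.
static size_t Nv12FrameSize(int32_t width, int32_t height)
{
    return static_cast<size_t>(width) * height * 3 / 2;
}

// Read one NV12 frame from the file into the vector. Returns false at end of file.
static bool ReadNv12Frame(std::ifstream &file, int32_t width, int32_t height, std::vector<uint8_t> &frame)
{
    frame.resize(Nv12FrameSize(width, height));
    file.read(reinterpret_cast<char *>(frame.data()), static_cast<std::streamsize>(frame.size()));
    return static_cast<size_t>(file.gcount()) == frame.size();
}
```

For the 320 x 240 input used in this example, each read consumes 115200 bytes.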

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```

2. Create an encoder instance.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
    const char *codecName = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
    ```

    ```c++
    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware encoder instances first. If the hardware resources are insufficient, create software encoder instances.
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    ```

3. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:
    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report that input data is required, which means that the encoder is ready for receiving YUV/RGB data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report that output data has been generated, which means that encoding is complete.

    You need to implement these callback functions to ensure that the encoder runs properly.

    <!--RP2--><!--RP2End-->

    <!--RP9-->
    ```c++
    bool isFirstFrame = true;
    ```
    <!--RP9End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // In buffer mode, this callback function does not take effect.
        (void)codec;
        (void)format;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // Obtain the video width stride and height stride.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoEncoder_GetInputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```

    <!--RP10-->
    ```c++
    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the encoded frame and its index are sent to outQueue.
        (void)codec;
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```
    <!--RP10End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, NULL);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

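    This guide does not define **inQueue**, **outQueue**, or **CodecBufferInfo**; they are application-side helpers. The following is a minimal sketch of what such a thread-safe queue might look like (the class name and fields are illustrative assumptions, not part of the AVCodec API):

    ```c++
    #include <condition_variable>
    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <queue>

    // Illustrative record carrying a buffer index and pointer between threads.
    struct CodecBufferInfo {
        uint32_t index = 0;
        void *buffer = nullptr; // OH_AVBuffer * in the real code.
        bool isValid = true;
        CodecBufferInfo(uint32_t idx, void *buf) : index(idx), buffer(buf) {}
    };

    // A mutex-protected FIFO shared between the codec callback thread and the worker thread.
    class BufferQueue {
    public:
        void Enqueue(const std::shared_ptr<CodecBufferInfo> &info)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(info);
            cond_.notify_one(); // Wake one waiting consumer.
        }

        // Blocks until an element is available.
        std::shared_ptr<CodecBufferInfo> Dequeue()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            auto info = queue_.front();
            queue_.pop();
            return info;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<std::shared_ptr<CodecBufferInfo>> queue_;
    };
    ```

    A production version would also need a way to mark queued entries invalid (clearing **isValid**) when **Flush**, **Reset**, or **Stop** reclaims the buffers, as described in Restrictions.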
4. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); // Mandatory
    // Configure the encoder.
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

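    Beyond the three mandatory keys, you can usually set optional keys such as the bit rate and frame rate on the same **OH_AVFormat** before calling **OH_VideoEncoder_Configure()**. The sketch below is illustrative; whether a given key takes effect depends on the codec capability, so verify the keys against the AVCodec reference:

    ```c++
    // Optional keys, set on the same format object before OH_VideoEncoder_Configure().
    OH_AVFormat_SetDoubleValue(format, OH_MD_KEY_FRAME_RATE, 30.0); // Target frame rate.
    OH_AVFormat_SetLongValue(format, OH_MD_KEY_BITRATE, 3000000);   // Target bit rate, in bit/s.
    ```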
5. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

6. Call **OH_VideoEncoder_Start()** to start the encoder.

    As soon as the encoder starts, the callback functions will be triggered to respond to events. Therefore, you must configure the input file and output file first.

    ```c++
    // Configure the paths of the input and output files.
    std::string_view inputFilePath = "/*yourpath*.yuv";
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

7. (Optional) Dynamically configure encoder parameters while the encoder is running.

    <!--RP11-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request IDR frames.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP11End-->

8. Call **OH_VideoEncoder_PushInputBuffer()** to push the image to the input queue for encoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).
    - **widthStride**: stride of the obtained buffer data.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Write image data.
    int32_t frameSize = width * height * 3 / 2; // Formula for calculating the data size of each frame in NV12 pixel format.
    if (widthStride == width) {
        // Process the file stream and obtain the frame length, and then write the data to encode to the buffer of the specified index.
        inputFile->read(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), frameSize);
    } else {
        // If the stride is not equal to the width, perform the offset based on the stride. For details, see the following example.
    }
    // Configure the buffer information.
    OH_AVCodecBufferAttr info;
    info.size = frameSize;
    info.offset = 0;
    info.pts = 0;
    info.flags = flags;
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Configure the buffer frame-specific information.
    // The value is determined by the caller.
    int32_t isIFrame;
    OH_AVFormat *parameter = OH_AVBuffer_GetParameter(bufferInfo->buffer);
    OH_AVFormat_SetIntValue(parameter, OH_MD_KEY_REQUEST_I_FRAME, isIFrame);
    ret = OH_AVBuffer_SetParameter(bufferInfo->buffer, parameter);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(parameter);
    // Send the data to the input buffer for encoding. index is the index of the buffer.
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
    If the stride is not equal to the width, copy the data row by row with a stride-based offset. The following uses an NV12 image as an example, presenting the image layout of **width**, **height**, **wStride**, and **hStride**.

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    ![copy by line](figures/copy-by-line.png)

    Add the header file.

    ```c++
    #include <string.h>
    ```

    The following is the sample code:

    ```c++
    struct Rect // Width and height of the source buffer. They are set by the caller.
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are obtained by calling OH_VideoEncoder_GetInputDescription.
    {
        int32_t wStride;
        int32_t hStride;
    };

    struct SrcRect // Width stride and height stride of the source buffer. They are set by the caller.
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 256};
    SrcRect srcRect = {320, 250};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the destination memory area.
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstTemp = dst;
    uint8_t* srcTemp = src;

    // Y: Copy the source data in the Y region to the destination region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy one row of data from the source to one row of the destination.
        memcpy_s(dstTemp, rect.width, srcTemp, rect.width);
        // Update the pointers to the source and destination data to copy the next row. The pointers move down by one wStride each time.
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }
    // Padding.
    // Update the pointers to the source and destination data. The pointers move down by the padding rows.
    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1;
    // UV: Copy the source data in the UV region to the destination region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy_s(dstTemp, rect.width, srcTemp, rect.width);
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }

    delete[] dst;
    dst = nullptr;
    delete[] src;
    src = nullptr;
    ```

    When processing buffer data during hardware encoding (before pushing the data), you must copy the image data, aligned to the required width and height, into the AVBuffer provided by the input callback. Generally, you need to pay attention to the width, height, stride, and pixel format of the data to ensure that the data to encode is processed correctly. For details, see step 3 in [Buffer Input](#buffer-input).

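    The row-by-row copy above can be wrapped in a small reusable helper. This is a sketch: the function name is ours, and it uses plain `memcpy` instead of `memcpy_s` to stay self-contained.

    ```c++
    #include <cstdint>
    #include <cstring>

    // Copy a (width x height) plane from a source laid out with srcStride bytes
    // per row into a destination laid out with dstStride bytes per row.
    // Assumes srcStride >= width and dstStride >= width.
    static void CopyPlaneWithStride(uint8_t *dst, int32_t dstStride,
                                    const uint8_t *src, int32_t srcStride,
                                    int32_t width, int32_t height)
    {
        for (int32_t row = 0; row < height; ++row) {
            std::memcpy(dst + row * dstStride, src + row * srcStride, width);
        }
    }
    ```

    For NV12, you would call this once for the Y plane with `height` rows and once for the interleaved UV plane with `height / 2` rows, offsetting each base pointer past the plane's `hStride` padding as in the sample above.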
9. Notify the encoder of EOS.

    In the code snippet below, the following variables are used:
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    The API **OH_VideoEncoder_PushInputBuffer** is used to notify the encoder of EOS. This API is also used in step 8 to push the stream to the input queue for encoding. Therefore, in the current step, you must pass in the **AVCODEC_BUFFER_FLAGS_EOS** flag.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    OH_AVCodecBufferAttr info;
    info.size = 0;
    info.offset = 0;
    info.pts = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_EOS;
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Obtain the encoded frame information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

The subsequent processes (including refreshing, resetting, stopping, and destroying the encoder) are the same as those in surface mode. For details, see steps 14–17 in [Surface Input](#surface-input).