# Video Decoding

You can call the native APIs provided by the VideoDecoder module to decode video, that is, to decode media data into a YUV file or render it.

<!--RP3--><!--RP3End-->

For details about the supported decoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-decoding).

<!--RP1--><!--RP1End-->

Through the VideoDecoder module, your application can implement the following key capabilities.

| Capability                | How to Configure                                                                    |
| ------------------------- | ----------------------------------------------------------------------------------- |
| Variable resolution       | The decoder supports resolution changes in the input stream. When the resolution changes, the **OnStreamChanged()** callback registered through **OH_VideoDecoder_RegisterCallback** is triggered. For details, see step 3 in surface mode or step 3 in buffer mode. |
| Dynamic surface switching | Call **OH_VideoDecoder_SetSurface** to configure this capability. It is supported only in surface mode. For details, see step 6 in surface mode.   |
| Low-latency decoding      | Call **OH_VideoDecoder_Configure** to configure this capability. For details, see step 5 in surface mode or step 5 in buffer mode.     |

## Restrictions

- The buffer mode does not support 10-bit image data.
- After **flush()**, **reset()**, or **stop()** is called, the PPS/SPS must be transferred again in the **start()** call. For details about the example, see step 13 in [Surface Output](#surface-output).
- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed.
- Due to limited hardware decoder resources, you must call **OH_VideoDecoder_Destroy** to destroy every decoder instance when it is no longer needed.
- The input streams for video decoding support only the AnnexB format. Multiple slices are supported, but all slices of the same frame must be sent to the decoder at a time.
- When **flush()**, **reset()**, or **stop()** is called, do not continue to operate the OH_AVBuffer obtained through the previous callback function.
- The DRM decryption capability supports both non-secure and secure video channels in [surface mode](#surface-output), but only non-secure video channels in [buffer mode](#buffer-output).
- The buffer mode and surface mode use the same APIs. Therefore, the surface mode is described as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer object through the callback function **OH_AVCodecOnNewOutputBuffer**, call **OH_VideoDecoder_FreeOutputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system can write the subsequently decoded data to the corresponding location. If the OH_NativeBuffer object is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer object, you must duplicate the data. In this case, you should manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released. A sketch of duplicating decoded data before freeing the buffer follows this list.
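
A minimal sketch (for reference only) of the duplication mentioned above: in buffer mode, copy the decoded data out of the OH_AVBuffer into memory owned by your application before calling **OH_VideoDecoder_FreeOutputBuffer**, so the data stays valid after the buffer is returned to the decoder. The helper function and the std::vector copy are illustrative assumptions; the AVCodec calls are the ones used in the development steps below.

```c++
#include <cstdint>
#include <vector>
#include <multimedia/player_framework/native_avbuffer.h>
#include <multimedia/player_framework/native_avcodec_videodecoder.h>

// Hypothetical helper: duplicate the decoded data, then return the buffer to the decoder.
static std::vector<uint8_t> CopyAndFreeOutput(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer)
{
    std::vector<uint8_t> copy;
    OH_AVCodecBufferAttr attr;
    if (OH_AVBuffer_GetBufferAttr(buffer, &attr) == AV_ERR_OK && attr.size > 0) {
        uint8_t *addr = OH_AVBuffer_GetAddr(buffer);
        copy.assign(addr, addr + attr.size); // Duplicate the data before the buffer becomes invalid.
    }
    (void)OH_VideoDecoder_FreeOutputBuffer(codec, index); // The copied data remains usable afterwards.
    return copy;
}
```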

## Surface Output and Buffer Output

- Surface output and buffer output differ in how the decoded data is output.
- They are applicable to different scenarios.
  - Surface output indicates that the OHNativeWindow is used to transfer output data. It supports connection with other modules, such as the **XComponent**.
  - Buffer output indicates that decoded data is output in shared memory mode.

- The two also differ slightly in the API calling modes:
  - In surface mode, the caller can choose to call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer (without rendering the data). In buffer mode, the caller must call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer.
  - In surface mode, the caller must call **OH_VideoDecoder_SetSurface** to set an OHNativeWindow before the decoder is ready and call **OH_VideoDecoder_RenderOutputBuffer** to render the decoded data after the decoder is started.
  - In buffer mode, an application can obtain the shared memory address and data from the output buffer. In surface mode, an application can obtain only the basic attributes of the output buffer, not the image data.

For details about the development procedure, see [Surface Output](#surface-output) and [Buffer Output](#buffer-output).

## State Machine Interaction

The following figure shows the interaction between states.

![Invoking relationship of state](figures/state-invocation.png)

1. A decoder enters the Initialized state in either of the following ways:
   - When a decoder instance is initially created, the decoder enters the Initialized state.
   - When **OH_VideoDecoder_Reset** is called in any state, the decoder returns to the Initialized state.

2. When the decoder is in the Initialized state, you can call **OH_VideoDecoder_Configure** to configure the decoder. After the configuration, the decoder enters the Configured state.
3. When the decoder is in the Configured state, you can call **OH_VideoDecoder_Prepare** to switch it to the Prepared state.
4. When the decoder is in the Prepared state, you can call **OH_VideoDecoder_Start** to switch it to the Executing state.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the decoder may encounter an error and enter the Error state. If this happens, an invalid value can be returned or an exception can be thrown through a queue operation.
   - When the decoder is in the Error state, you can either call **OH_VideoDecoder_Reset** to switch it to the Initialized state or call **OH_VideoDecoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoDecoder_Start** is called, the decoder enters the Running substate immediately.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the decoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the decoder transitions to the End-of-Stream substate. In this state, the decoder does not accept new input, but continues to generate output until it has processed the final frame.

7. When the decoder is no longer needed, you must call **OH_VideoDecoder_Destroy** to destroy the decoder instance. Then the decoder enters the Released state.
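
The state transitions above map onto the API as shown in the following minimal sketch (for reference only). Error handling is omitted, and **cb**, **format**, and **window** are assumed to be prepared as described in the development steps below.

```c++
OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC); // -> Initialized
OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // Register callbacks while Initialized.
OH_VideoDecoder_Configure(videoDec, format);             // Initialized -> Configured
OH_VideoDecoder_SetSurface(videoDec, window);            // Surface mode only, before Prepare.
OH_VideoDecoder_Prepare(videoDec);                       // Configured -> Prepared
OH_VideoDecoder_Start(videoDec);                         // Prepared -> Executing (Running substate)
// ... push input buffers and consume output buffers here ...
OH_VideoDecoder_Flush(videoDec);                         // Executing -> Flushed substate
OH_VideoDecoder_Start(videoDec);                         // Resume decoding after the flush (retransfer PPS/SPS).
OH_VideoDecoder_Stop(videoDec);                          // Executing -> Prepared
OH_VideoDecoder_Reset(videoDec);                         // Any state -> Initialized
OH_VideoDecoder_Destroy(videoDec);                       // -> Released
videoDec = nullptr;
```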

## How to Develop

Read [VideoDecoder](../../reference/apis-avcodec-kit/_video_decoder.md) for the API reference.

The figure below shows the call relationship of video decoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

![Call relationship of video decoding](figures/video-decode.png)

### Linking the Dynamic Link Libraries in the CMake Script

``` cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_vdec.so)
```

> **NOTE**
>
> The word 'sample' in the preceding code snippet is only an example. Use the actual project directory name.
>

### Defining the Basic Structure

The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it.

1. Add the header files.

    ```c++
    #include <chrono>
    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <shared_mutex>
    #include <multimedia/player_framework/native_avbuffer.h>
    ```

2. Define the information about the decoder callback buffer.

    ```c++
    struct CodecBufferInfo {
        CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer) : buffer(buffer), index(index), isValid(true) {}
        // Callback buffer.
        OH_AVBuffer *buffer = nullptr;
        // Index of the callback buffer.
        uint32_t index = 0;
        // Whether the current buffer information is valid.
        bool isValid = true;
    };
    ```

3. Define the input and output queue for decoding.

    ```c++
    class CodecBufferQueue {
    public:
        // Pass the callback buffer information to the queue.
        void Enqueue(const std::shared_ptr<CodecBufferInfo> &bufferInfo)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            bufferQueue_.push(bufferInfo);
            cond_.notify_all();
        }

        // Obtain the information about the callback buffer.
        std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); });
            if (bufferQueue_.empty()) {
                return nullptr;
            }
            std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
            bufferQueue_.pop();
            return bufferInfo;
        }

        // Clear the queue. The previous callback buffer becomes unavailable.
        void Flush()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (!bufferQueue_.empty()) {
                std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
                // After the flush, stop, reset, and destroy operations are performed, the previous callback buffer information is invalid.
                bufferInfo->isValid = false;
                bufferQueue_.pop();
            }
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_;
    };
    ```

4. Define global variables.

    These global variables are for reference only. They can be encapsulated into an object based on service requirements.

    ```c++
    // Video frame width.
    int32_t width = 320;
    // Video frame height.
    int32_t height = 240;
    // Video pixel format.
    OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12;
    // Video width stride.
    int32_t widthStride = 0;
    // Video height stride.
    int32_t heightStride = 0;
    // Pointer to the decoder instance.
    OH_AVCodec *videoDec = nullptr;
    // Decoder synchronization lock.
    std::shared_mutex codecMutex;
    // Decoder input queue.
    CodecBufferQueue inQueue;
    // Decoder output queue.
    CodecBufferQueue outQueue;
    ```
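
    The queues above are filled by the decoder callbacks and drained by your own worker threads. The following is a minimal sketch (for reference only) of an input-feeding worker, assuming the AVCodec headers added in the next section; how the buffer is filled and its attributes are set is covered in the **OH_VideoDecoder_PushInputBuffer()** step below.

    ```c++
    // Minimal sketch of an input-feeding worker thread (for reference only).
    void InputWorker()
    {
        while (true) {
            std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
            if (bufferInfo == nullptr || !bufferInfo->isValid) {
                break; // The queue was flushed, or dequeuing timed out; stop feeding.
            }
            // Fill bufferInfo->buffer and set its attributes here (see the PushInputBuffer step),
            // then hand the buffer back to the decoder by its index.
            if (OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index) != AV_ERR_OK) {
                break; // Exception handling.
            }
        }
    }
    ```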

### Surface Output

The following walks you through how to implement the entire video decoding process in surface mode. In this example, an H.264 stream file is input, decoded, and rendered.

Currently, the VideoDecoder module supports data exchange only in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```

2. Create a decoder instance.

    You can create a decoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoDec**: pointer to the video decoder instance.
    - **capability**: pointer to the decoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    // Alternatively, query the capability by category to create a hardware decoder instance.
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```
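
    If a specific resolution is required, you can verify it against the queried capability before creating the decoder. The following is a minimal sketch (for reference only); **OH_AVCapability_IsVideoSizeSupported** is assumed to be available in the capability query API (native_avcapability.h) of your SDK version.

    ```c++
    // A minimal sketch (for reference only): check the target resolution before creating the decoder.
    OH_AVCapability *avcCapability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    if (avcCapability != nullptr && OH_AVCapability_IsVideoSizeSupported(avcCapability, width, height)) {
        OH_AVCodec *decoder = OH_VideoDecoder_CreateByName(OH_AVCapability_GetName(avcCapability));
        // Use decoder ...
    } else {
        // Fall back to another decoder or resolution.
    }
    ```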

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Alternatively, create an H.265 decoder for software/hardware decoding.
    // OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

3. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete. (Note: The **buffer** parameter in surface mode is null.)

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // The changed video width and height can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)codec;
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >
    > During video playback, if the SPS of the video stream contains color information, the decoder returns that information (RangeFlag, ColorPrimary, MatrixCoefficient, and TransferCharacteristic) through the **OH_AVFormat** parameter of the **OH_AVCodecOnStreamChanged** callback.
    >
    > In surface mode of video decoding, the internal data is processed with High Efficiency Bandwidth Compression (HEBC) by default, and the values of **widthStride** and **heightStride** cannot be obtained.
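
    A minimal sketch (for reference only) of reading that color information inside **OnStreamChanged**. The color-related keys below are assumed to be those defined in native_avcodec_base.h; check the codec base reference of your SDK version.

    ```c++
    int32_t rangeFlag = 0;
    int32_t colorPrimaries = 0;
    int32_t matrixCoefficients = 0;
    int32_t transferCharacteristics = 0;
    OH_AVFormat_GetIntValue(format, OH_MD_KEY_RANGE_FLAG, &rangeFlag);
    OH_AVFormat_GetIntValue(format, OH_MD_KEY_COLOR_PRIMARIES, &colorPrimaries);
    OH_AVFormat_GetIntValue(format, OH_MD_KEY_MATRIX_COEFFICIENTS, &matrixCoefficients);
    OH_AVFormat_GetIntValue(format, OH_MD_KEY_TRANSFER_CHARACTERISTICS, &transferCharacteristics);
    ```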

4. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information and a media key are obtained, but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demuxing](audio-video-demuxer.md). In surface mode, the DRM decryption capability supports both secure and non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    <!--RP4-->The following is the sample code:<!--RP4End-->

    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a decryption session. If a secure video channel is used, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_HW_CRYPTO or higher.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, check the DRM interface document and logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }

    // Generate a media key request and set the response to the media key request.

    // Set the decryption configuration, that is, set the decryption session and the secure video channel flag to the decoder.
    // If the DRM scheme supports a secure video channel, set secureVideoPath to true and create a secure decoder before using the channel.
    // That is, in step 2, call OH_VideoDecoder_CreateByName, with a decoder name followed by .secure (for example, [CodecName].secure) passed in, to create a secure decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

5. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoDecoder_Configure()](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).
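
    For example, the supported width and height ranges can be checked before configuration. The following is a minimal sketch (for reference only); **OH_AVRange**, **OH_AVCapability_GetVideoWidthRange**, and **OH_AVCapability_GetVideoHeightRange** are assumed to be available in native_avcapability.h of your SDK version.

    ```c++
    // A minimal sketch (for reference only): query the supported width and height ranges before configuring.
    OH_AVRange widthRange = {0, 0};
    OH_AVRange heightRange = {0, 0};
    OH_AVCapability_GetVideoWidthRange(capability, &widthRange);
    OH_AVCapability_GetVideoHeightRange(capability, &heightRange);
    if (width < widthRange.minVal || width > widthRange.maxVal ||
        height < heightRange.minVal || height > heightRange.maxVal) {
        // The target resolution is outside the supported range; fall back or report an error.
    }
    ```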

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
    // (Optional) Configure low-latency decoding.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY, 1);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

6. Set the surface.

    You can obtain the native window in either of the following ways:
    - If the image is directly displayed after being decoded, obtain the native window from the **XComponent**. For details about the operation, see [XComponent](../../reference/apis-arkui/arkui-ts/ts-basic-components-xcomponent.md).
    - If OpenGL post-processing is performed after decoding, obtain the native window from NativeImage. For details about the operation, see [NativeImage](../../graphics/native-image-guidelines.md).

    You can also perform this step during decoding to dynamically switch the surface.

    ```c++
    // Set the window parameters.
    int32_t ret = OH_VideoDecoder_SetSurface(videoDec, window); // Obtain the window from the XComponent.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

7. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the surface parameters of the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Configure the display rotation angle.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_ROTATION, 90);
    // Configure the matching mode (scaling or cropping) between the video and the screen.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_SCALING_MODE, SCALING_MODE_SCALE_CROP);
    int32_t ret = OH_VideoDecoder_SetParameter(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

8. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

9. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the Common Encryption Scheme (CENC) information.

    If the content to play is DRM-encrypted and the application implements media demuxing instead of using the system's [demuxer](audio-video-demuxer.md), you must call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information to the AVBuffer. In this way, the AVBuffer carries the data to be decrypted and the CENC information, so that the media data in the AVBuffer can be decrypted. You do not need to call this API when the application uses the system's [demuxer](audio-video-demuxer.md).

    Add the header files.

    ```c++
    #include <multimedia/player_framework/native_cencinfo.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_media_avcencinfo.so)
    ```

    In the code snippet below, the following variable is used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**.

    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Exception handling.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    ```

11. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the virtual address of the input stream by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **size**, **offset**, **pts**, and **frameData**: size, offset, timestamp, and frame data. For details about how to obtain such information, see [Media Data Demuxing](./audio-video-demuxer.md).
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Write stream data.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (size > capacity) {
        // Exception handling.
    }
    memcpy(addr, frameData, size);
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
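
    After the last frame has been pushed, notify the decoder that the stream has ended by queuing one more input buffer that carries the EOS flag. This is a minimal sketch (for reference only) of the end-of-stream handling described in [State Machine Interaction](#state-machine-interaction).

    ```c++
    // A minimal sketch (for reference only): signal end of stream after the last frame has been pushed.
    std::shared_ptr<CodecBufferInfo> eosBufferInfo = inQueue.Dequeue();
    if (eosBufferInfo == nullptr || !eosBufferInfo->isValid) {
        // Exception handling.
    }
    OH_AVCodecBufferAttr eosInfo;
    eosInfo.size = 0;
    eosInfo.offset = 0;
    eosInfo.pts = 0;
    eosInfo.flags = AVCODEC_BUFFER_FLAGS_EOS; // No more input will follow.
    int32_t eosRet = OH_AVBuffer_SetBufferAttr(eosBufferInfo->buffer, &eosInfo);
    if (eosRet == AV_ERR_OK) {
        eosRet = OH_VideoDecoder_PushInputBuffer(videoDec, eosBufferInfo->index);
    }
    if (eosRet != AV_ERR_OK) {
        // Exception handling.
    }
    ```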

12. Call **OH_VideoDecoder_RenderOutputBuffer()** or **OH_VideoDecoder_RenderOutputBufferAtTime()** to render the data and free the output buffer, or call **OH_VideoDecoder_FreeOutputBuffer()** to directly free the output buffer.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Whether to render the frame and whether to render it at a specified time are determined by the caller.
    bool isRender = true;
    bool isNeedRenderAtTime = false;
    if (isRender) {
        // Render the data and free the output buffer. index is the index of the buffer.
        if (isNeedRenderAtTime) {
            // Obtain the system absolute time and pass it as renderTimestamp based on service requirements.
            int64_t renderTimestamp =
                std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
            ret = OH_VideoDecoder_RenderOutputBufferAtTime(videoDec, bufferInfo->index, renderTimestamp);
        } else {
            ret = OH_VideoDecoder_RenderOutputBuffer(videoDec, bufferInfo->index);
        }
    } else {
        // Free the output buffer.
        ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > To obtain the buffer attributes, such as **pixel_format** and **stride**, call [OH_NativeWindow_NativeWindowHandleOpt](../../reference/apis-arkgraphics2d/_native_window.md#oh_nativewindow_nativewindowhandleopt).
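
    A minimal sketch (for reference only) of such a query; **GET_FORMAT** and **GET_STRIDE** are assumed to be among the operation codes supported by **OH_NativeWindow_NativeWindowHandleOpt** in your SDK version (declared in native_window/external_window.h and linked through libnative_window.so), so check the NativeWindow reference before relying on them.

    ```c++
    // A minimal sketch (for reference only): query attributes of the window backing the decoder surface.
    int32_t windowFormat = 0;
    int32_t windowStride = 0;
    OH_NativeWindow_NativeWindowHandleOpt(window, GET_FORMAT, &windowFormat);
    OH_NativeWindow_NativeWindowHandleOpt(window, GET_STRIDE, &windowStride);
    ```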

13. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder.

    After **OH_VideoDecoder_Flush** is called, the decoder remains in the Executing state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the decoder are cleared.

    To continue decoding, you must call **OH_VideoDecoder_Start** again.

    In the code snippet below, the following variables are used:

    - **xpsData** and **xpsSize**: PPS/SPS information. For details about how to obtain such information, see [Media Data Demuxing](./audio-video-demuxer.md).

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Flush the decoder.
    int32_t ret = OH_VideoDecoder_Flush(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Start decoding again.
    ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }

    // Retransfer PPS/SPS through an input buffer.
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Configure the frame PPS/SPS information.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (xpsSize > capacity) {
        // Exception handling.
    }
    memcpy(addr, xpsData, xpsSize);
    OH_AVCodecBufferAttr info;
    info.size = xpsSize;
    info.offset = 0;
    info.pts = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
    // Write the information to the buffer.
    ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Push the frame data to the decoder. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > When **OH_VideoDecoder_Start** is called again after the flush operation, the PPS/SPS must be retransferred.

14. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder.

    After **OH_VideoDecoder_Reset** is called, the decoder returns to the Initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure**, **OH_VideoDecoder_SetSurface**, and **OH_VideoDecoder_Prepare** in sequence.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Reset the decoder.
    int32_t ret = OH_VideoDecoder_Reset(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Reconfigure the decoder.
    ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Reconfigure the surface in surface mode. This is not required in buffer mode.
    ret = OH_VideoDecoder_SetSurface(videoDec, window);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The decoder is ready again.
    ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

15. (Optional) Call **OH_VideoDecoder_Stop()** to stop the decoder.

    After **OH_VideoDecoder_Stop()** is called, the decoder retains the decoding instance and releases the input and output buffers. You can directly call **OH_VideoDecoder_Start** to continue decoding. The first input buffer must carry the parameter set, starting from the IDR frame.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Stop the decoder.
    int32_t ret = OH_VideoDecoder_Stop(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

16. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback function.
    >
    > After the call, you must set the decoder pointer to NULL to prevent program errors caused by dangling pointers.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Call OH_VideoDecoder_Destroy to destroy the decoder.
    int32_t ret = AV_ERR_OK;
    if (videoDec != NULL) {
        ret = OH_VideoDecoder_Destroy(videoDec);
        videoDec = NULL;
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```
### Buffer Output

The following walks you through how to implement the entire video decoding process in buffer mode. In this example, an H.264 file is input and decoded into a YUV file.

Currently, the VideoDecoder module supports data exchange only in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <native_buffer/native_buffer.h>
    #include <fstream>
    ```

2. Create a decoder instance.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Alternatively, create an H.265 decoder for hardware decoding.
    // OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

3. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete.

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    int32_t cropTop = 0;
    int32_t cropBottom = 0;
    int32_t cropLeft = 0;
    int32_t cropRight = 0;
    bool isFirstFrame = true;
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // Optional: process format only if you need the video width, height, and stride.
        // The changed video width, height, and stride can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
        // (Optional) Obtain the cropped rectangle information.
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // Optional: obtain the video width, height, and stride if needed.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoDecoder_GetOutputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            // (Optional) Obtain the cropped rectangle information.
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

4. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information and a media key are obtained, but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demuxing](audio-video-demuxer.md). In buffer mode, the DRM decryption capability supports only non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    The following is the sample code:

    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a media key session.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, check the DRM interface document and logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }
    // Generate a media key request and set the response to the media key request.
    // Set the decryption configuration, that is, set the decryption session and the secure video channel flag to the decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

5. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

6. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

7. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    outputFile->open("/*yourpath*.yuv", std::ios::out | std::ios::binary | std::ios::ate);
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

8. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information.

    The procedure is the same as that in surface mode and is not described here.

    The following is the sample code:

    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Exception handling.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    ```

9. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Write stream data.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (size > capacity) {
        // Exception handling.
    }
    memcpy(addr, frameData, size);
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. Call **OH_VideoDecoder_FreeOutputBuffer()** to release decoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the virtual address of an image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Exception handling.
    }
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the decoded data to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the buffer that stores the output data. index is the index of the buffer.
    ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
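
    In the snippet above, the buffer flags can also be used to detect the end of the stream. The following is a minimal sketch (for reference only) that would follow the **OH_AVBuffer_GetBufferAttr** call:

    ```c++
    // A minimal sketch (for reference only): stop draining output once the EOS flag arrives.
    if (info.flags & AVCODEC_BUFFER_FLAGS_EOS) {
        // The last frame has been received; close the output file and exit the output loop.
        outputFile->close();
    }
    ```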

    To copy the Y, U, and V components of an NV12 or NV21 image to another buffer in sequence (an NV12 image is used as an example below), you need to understand the image layout described by **width**, **height**, **wStride**, and **hStride**:

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    ![copy by line](figures/copy-by-line.png)

    Add the header files.

    ```c++
    #include <string.h>
    ```

    The following is the sample code:

    ```c++
    // Obtain the width and height of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
    struct Rect
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are set by the caller.
    {
        int32_t wStride;
        int32_t hStride;
    };

    // Obtain the width stride and height stride of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
    struct SrcRect
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 240};
    SrcRect srcRect = {320, 256};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area.
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstTemp = dst;
    uint8_t* srcTemp = src;

    // Y: Copy the source data in the Y region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy a row of data from the source to a row of the target.
        memcpy(dstTemp, srcTemp, rect.width);
        // Update the pointers to the source and target data to copy the next row. Each update moves the pointers down by one wStride.
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }
    // Padding: move the pointers past the padding rows between the Y region and the UV region.
    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1;
    // UV: Copy the source data in the UV region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy(dstTemp, srcTemp, rect.width);
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }

    delete[] dst;
    dst = nullptr;
    delete[] src;
    src = nullptr;
    ```

    When processing buffer data (before freeing the buffer) during hardware decoding, the AVBuffer received in the output callback contains image data that is aligned in width and height. Generally, obtain the image width, height, stride, and pixel format to ensure that the decoded data is processed correctly. For details, see step 3 in [Buffer Output](#buffer-output).

The subsequent processes (including refreshing, resetting, stopping, and destroying the decoder) are basically the same as those in surface mode. For details, see steps 13-16 in [Surface Output](#surface-output).

<!--RP5-->
<!--RP5End-->