# Audio

## Introduction

The audio framework is used to implement audio-related features, including audio playback, audio recording, volume management, and device management.

**Figure 1** Architecture of the audio framework

### Basic Concepts

- **Sampling**

  Sampling is a process to obtain discrete-time signals by extracting samples from analog signals in a continuous time domain at a specific interval.

- **Sampling rate**

  Sampling rate is the number of samples extracted from a continuous signal per second to form a discrete signal. It is measured in Hz. Generally, the human hearing range is from 20 Hz to 20 kHz. Common audio sampling rates include 8 kHz, 11.025 kHz, 16 kHz, 22.05 kHz, 37.8 kHz, 44.1 kHz, 48 kHz, 96 kHz, and 192 kHz.

- **Channel**

  Channels refer to different spatial positions where independent audio signals are recorded or played. The number of channels is the number of audio sources used during audio recording, or the number of speakers used for audio playback.

- **Audio frame**

  Audio data is in stream form. For the convenience of audio algorithm processing and transmission, it is generally agreed that a data amount covering 2.5 to 60 milliseconds constitutes one audio frame. This duration is called the sampling time, and its length depends on the codec and the application requirements.

- **PCM**

  Pulse code modulation (PCM) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
## Directory Structure

The structure of the repository directory is as follows:

```
/foundation/multimedia/audio_standard  # Service code of the audio framework
├── frameworks                         # Framework code
│   ├── native                         # Internal native API implementation
│   └── js                             # External JS API implementation
│       └── napi                       # External native API implementation
├── interfaces                         # API code
│   ├── inner_api                      # Internal APIs
│   └── kits                           # External APIs
├── sa_profile                         # Service configuration profile
├── services                           # Service code
├── LICENSE                            # License file
└── bundle.json                        # Build file
```

## Usage Guidelines

### Audio Playback

You can use the APIs provided in the current repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following describes how to use the **AudioRenderer** class to develop the audio playback feature:

1. Call **Create()** with the required stream type to create an **AudioRenderer** instance.

    ```
    AudioStreamType streamType = STREAM_MUSIC; // Stream type example.
    std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
    ```

2. (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported values of the parameters.

3. Prepare the device and call **SetParams()** to set the parameters.

    ```
    AudioRendererParams rendererParams;
    rendererParams.sampleFormat = SAMPLE_S16LE;
    rendererParams.sampleRate = SAMPLE_RATE_44100;
    rendererParams.channelCount = STEREO;
    rendererParams.encodingType = ENCODING_PCM;

    audioRenderer->SetParams(rendererParams);
    ```

4. (Optional) Call **GetParams(rendererParams)** to obtain the parameters that have been set.

5. Call **Start()** to start an audio playback task.

6. Call **GetBufferSize()** to obtain the length of the buffer to be written.
    ```
    audioRenderer->GetBufferSize(bufferLen);
    ```

7. Call **Write()** to read audio data from the source (such as an audio file) and write it to the output byte stream. You can call this API repeatedly to write rendering data.

    ```
    bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t retBytes = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
        if (retBytes < 0) { // The write failed; stop rendering.
            break;
        }
        bytesWritten += retBytes;
    }
    ```

8. Call **Drain()** to drain the playback stream.

9. Call **Stop()** to stop the output.

10. After the playback task is complete, call **Release()** to release resources.

The preceding steps describe the basic development scenario of audio playback.

11. Call **SetVolume(float)** and **GetVolume()** to set and obtain the audio stream volume, which ranges from 0.0 to 1.0.

For details, see [**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiorenderer/include/audio_renderer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

### Audio Recording

You can use the APIs provided in the current repository to record audio via an input device, convert the audio into audio data, and manage recording tasks. The following describes how to use the **AudioCapturer** class to develop the audio recording feature:

1. Call **Create()** with the required stream type to create an **AudioCapturer** instance.

    ```
    AudioStreamType streamType = STREAM_MUSIC;
    std::unique_ptr<AudioCapturer> audioCapturer = AudioCapturer::Create(streamType);
    ```
2. (Optional) Call the static APIs **GetSupportedFormats()**, **GetSupportedChannels()**, **GetSupportedEncodingTypes()**, and **GetSupportedSamplingRates()** to obtain the supported values of the parameters.

3. Prepare the device and call **SetParams()** to set the parameters.

    ```
    AudioCapturerParams capturerParams;
    capturerParams.sampleFormat = SAMPLE_S16LE;
    capturerParams.sampleRate = SAMPLE_RATE_44100;
    capturerParams.channelCount = STEREO;
    capturerParams.encodingType = ENCODING_PCM;

    audioCapturer->SetParams(capturerParams);
    ```

4. (Optional) Call **GetParams(capturerParams)** to obtain the parameters that have been set.

5. Call **Start()** to start an audio recording task.

6. Call **GetBufferSize()** to obtain the length of the buffer to be read.

    ```
    audioCapturer->GetBufferSize(bufferLen);
    ```

7. Call **Read()** to read the captured audio data and convert it to a byte stream. The application calls this API repeatedly to read data until the recording is stopped.

    ```
    // Set isBlockingRead to true for a blocking read or false for a non-blocking read.
    while (numBuffersToCapture) {
        bytesRead = audioCapturer->Read(*buffer, bufferLen, isBlockingRead);
        if (bytesRead < 0) {
            break;
        } else if (bytesRead > 0) {
            fwrite(buffer, size, bytesRead, recFile); // This example writes the recorded data into a file.
            numBuffersToCapture--;
        }
    }
    ```

8. (Optional) Call **Flush()** to clear the recording stream buffer.

9. Call **Stop()** to stop recording.

10. After the recording task is complete, call **Release()** to release resources.
For details, see [**audio_capturer.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocapturer/include/audio_capturer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

### Audio Management

You can use the APIs provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) to control the volume and devices.

1. Call **GetInstance()** to obtain an **AudioSystemManager** instance.

    ```
    AudioSystemManager *audioSystemMgr = AudioSystemManager::GetInstance();
    ```

#### Volume Control

2. Call **GetMaxVolume()** and **GetMinVolume()** to obtain the maximum volume and minimum volume allowed for an audio stream.

    ```
    AudioVolumeType streamType = AudioVolumeType::STREAM_MUSIC;
    int32_t maxVol = audioSystemMgr->GetMaxVolume(streamType);
    int32_t minVol = audioSystemMgr->GetMinVolume(streamType);
    ```

3. Call **SetVolume()** and **GetVolume()** to set and obtain the volume of the audio stream, respectively.

    ```
    int32_t result = audioSystemMgr->SetVolume(streamType, 10);
    int32_t vol = audioSystemMgr->GetVolume(streamType);
    ```

4. Call **SetMute()** and **IsStreamMute()** to set and obtain the mute status of the audio stream, respectively.

    ```
    int32_t result = audioSystemMgr->SetMute(streamType, true);
    bool isMute = audioSystemMgr->IsStreamMute(streamType);
    ```

5. Call **SetRingerMode()** and **GetRingerMode()** to set and obtain the ringer mode, respectively. The supported ringer modes are the enumerated values of **AudioRingerMode** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).
    ```
    int32_t result = audioSystemMgr->SetRingerMode(RINGER_MODE_SILENT);
    AudioRingerMode ringMode = audioSystemMgr->GetRingerMode();
    ```

6. Call **SetMicrophoneMute()** and **IsMicrophoneMute()** to set and obtain the mute status of the microphone, respectively.

    ```
    int32_t result = audioSystemMgr->SetMicrophoneMute(true);
    bool isMicMute = audioSystemMgr->IsMicrophoneMute();
    ```

#### Device Control

7. Call **GetDevices()** and read the **deviceType_** and **deviceRole_** members to obtain information about the audio input and output devices. For details, see the enumerated values of **DeviceFlag**, **DeviceType**, and **DeviceRole** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

    ```
    DeviceFlag deviceFlag = OUTPUT_DEVICES_FLAG;
    vector<sptr<AudioDeviceDescriptor>> audioDeviceDescriptors
        = audioSystemMgr->GetDevices(deviceFlag);
    sptr<AudioDeviceDescriptor> audioDeviceDescriptor = audioDeviceDescriptors[0];
    cout << audioDeviceDescriptor->deviceType_;
    cout << audioDeviceDescriptor->deviceRole_;
    ```

8. Call **SetDeviceActive()** and **IsDeviceActive()** to activate or deactivate an audio device and obtain the device activation status, respectively.

    ```
    ActiveDeviceType deviceType = SPEAKER;
    int32_t result = audioSystemMgr->SetDeviceActive(deviceType, true);
    bool isDevActive = audioSystemMgr->IsDeviceActive(deviceType);
    ```

9. (Optional) Call other APIs, such as **IsStreamActive()**, **SetAudioParameter()**, and **GetAudioParameter()**, provided in [**audio_system_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_system_manager.h) if required.

10. Call **AudioManagerNapi::On** to subscribe to system volume changes.
    When a system volume change occurs, the following parameters are used to notify the application:

    - **volumeType**: type of the system volume that changed.
    - **volume**: current volume level.
    - **updateUi**: whether to show the change on the UI. (**updateUi** is set to **true** for a volume increase or decrease event, and to **false** for other changes.)

    ```
    const audioManager = audio.getAudioManager();

    export default {
      onCreate() {
        audioManager.on('volumeChange', (volumeChange) => {
          console.info('volumeType = ' + volumeChange.volumeType);
          console.info('volume = ' + volumeChange.volume);
          console.info('updateUi = ' + volumeChange.updateUi);
        });
      }
    }
    ```

#### Audio Scene

11. Call **SetAudioScene()** and **GetAudioScene()** to set and obtain the audio scene, respectively.

    ```
    int32_t result = audioSystemMgr->SetAudioScene(AUDIO_SCENE_PHONE_CALL);
    AudioScene audioScene = audioSystemMgr->GetAudioScene();
    ```

    For details about the supported audio scenes, see the enumerated values of **AudioScene** defined in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_framework/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

#### Audio Stream Management

You can use the APIs provided in [**audio_stream_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiomanager/include/audio_stream_manager.h) to implement stream management.

1. Call **GetInstance()** to obtain an **AudioStreamManager** instance.

    ```
    AudioStreamManager *audioStreamMgr = AudioStreamManager::GetInstance();
    ```

2. Call **RegisterAudioRendererEventListener()** to register a listener for renderer state changes. A callback is invoked when the renderer state changes. You can override **OnRendererStateChange()** in the **AudioRendererStateChangeCallback** class.
    ```
    const int32_t clientPid = getpid();

    class RendererStateChangeCallback : public AudioRendererStateChangeCallback {
    public:
        RendererStateChangeCallback() = default;
        ~RendererStateChangeCallback() = default;
        void OnRendererStateChange(
            const std::vector<std::unique_ptr<AudioRendererChangeInfo>> &audioRendererChangeInfos) override
        {
            cout << "OnRendererStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioRendererStateChangeCallback> callback = std::make_shared<RendererStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioRendererEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioRendererEventListener(clientPid);
    ```

3. Call **RegisterAudioCapturerEventListener()** to register a listener for capturer state changes. A callback is invoked when the capturer state changes. You can override **OnCapturerStateChange()** in the **AudioCapturerStateChangeCallback** class.

    ```
    const int32_t clientPid = getpid();

    class CapturerStateChangeCallback : public AudioCapturerStateChangeCallback {
    public:
        CapturerStateChangeCallback() = default;
        ~CapturerStateChangeCallback() = default;
        void OnCapturerStateChange(
            const std::vector<std::unique_ptr<AudioCapturerChangeInfo>> &audioCapturerChangeInfos) override
        {
            cout << "OnCapturerStateChange entered" << endl;
        }
    };

    std::shared_ptr<AudioCapturerStateChangeCallback> callback = std::make_shared<CapturerStateChangeCallback>();
    int32_t state = audioStreamMgr->RegisterAudioCapturerEventListener(clientPid, callback);
    int32_t result = audioStreamMgr->UnregisterAudioCapturerEventListener(clientPid);
    ```

4. Call **GetCurrentRendererChangeInfos()** to obtain information about all running renderers, including the client UID, session ID, renderer information, renderer state, and output device details.
    ```
    std::vector<std::unique_ptr<AudioRendererChangeInfo>> audioRendererChangeInfos;
    int32_t status = audioStreamMgr->GetCurrentRendererChangeInfos(audioRendererChangeInfos);
    ```

5. Call **GetCurrentCapturerChangeInfos()** to obtain information about all running capturers, including the client UID, session ID, capturer information, capturer state, and input device details.

    ```
    std::vector<std::unique_ptr<AudioCapturerChangeInfo>> audioCapturerChangeInfos;
    int32_t status = audioStreamMgr->GetCurrentCapturerChangeInfos(audioCapturerChangeInfos);
    ```

    For details, see **audioRendererChangeInfos** and **audioCapturerChangeInfos** in [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/interfaces/inner_api/native/audiocommon/include/audio_info.h).

6. Call **IsAudioRendererLowLatencySupported()** to check whether low latency is supported for a given stream configuration.

    ```
    AudioStreamInfo audioStreamInfo;
    bool isLatencySupport = audioStreamMgr->IsAudioRendererLowLatencySupported(audioStreamInfo);
    ```

#### Using JavaScript APIs

JavaScript applications can call the audio management APIs to control the volume and devices.
For details, see [**js-apis-audio.md**](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/apis/js-apis-audio.md#audiomanager).

### Bluetooth SCO Call

You can use the APIs provided in [**audio_bluetooth_manager.h**](https://gitee.com/openharmony/multimedia_audio_standard/blob/master/services/include/audio_bluetooth/client/audio_bluetooth_manager.h) to implement Bluetooth calls over synchronous connection-oriented (SCO) links.

1. Implement **OnScoStateChanged()** to listen for SCO link state changes.

    ```
    void OnScoStateChanged(const BluetoothRemoteDevice &device, int state);
    ```
2. (Optional) Call the static API **RegisterBluetoothScoAgListener()** to register a Bluetooth SCO listener, and call **UnregisterBluetoothScoAgListener()** to unregister the listener when it is no longer required.

## Supported Devices

The following lists the device types supported by the audio framework.

1. **USB Type-C headset**

    A digital headset that contains its own digital-to-analog converter (DAC) and amplifier functioning as part of the headset.

2. **Wired headset**

    An analog headset that does not contain a DAC. It can have a 3.5 mm jack or a USB-C socket without a DAC.

3. **Bluetooth headset**

    A Bluetooth Advanced Audio Distribution Profile (A2DP) headset for wireless audio transmission.

4. **Internal speaker and microphone**

    A device with a built-in speaker and microphone, which are used as the default devices for playback and recording, respectively.

## Repositories Involved

[multimedia\_audio\_framework](https://gitee.com/openharmony/multimedia_audio_framework)