

# IVS Broadcast SDK: Custom Audio Sources (Real-Time Streaming)
<a name="broadcast-custom-audio-sources"></a>

**Note:** This guide applies only to the IVS real-time streaming Android broadcast SDK. Information for the iOS and Web SDKs will be published in the future.

Custom audio input sources allow your application to provide its own audio input to the broadcast SDK, instead of being limited to the device's built-in microphones. With a custom audio source, an application can stream processed audio with effects applied, mix multiple audio streams, or integrate with third-party audio-processing libraries.

When you use a custom audio input source, the broadcast SDK no longer manages the microphone directly. Instead, your application is responsible for capturing and processing audio data and submitting it to the custom source.

The custom audio source workflow is as follows:

1. Audio input: Create a custom audio source with a specified audio format (sample rate, channels, format).

1. Your processing: Capture or generate audio data from your audio-processing pipeline.

1. Custom audio source: Submit audio buffers to the custom source using `appendBuffer()`.

1. Stage: Wrap the source in a `LocalStageStream` and publish it to the stage through your `StageStrategy`.

1. Participants: Stage participants receive the processed audio in real time.

## Android
<a name="custom-audio-sources-android"></a>

### Creating a Custom Audio Source
<a name="custom-audio-sources-android-creating-a-custom-audio-source"></a>

After creating a `DeviceDiscovery` session, create a custom audio input source:

```
DeviceDiscovery deviceDiscovery = new DeviceDiscovery(context); 
 
// Create custom audio source with specific format 
CustomAudioSource customAudioSource = deviceDiscovery.createAudioInputSource( 
   2,  // Number of channels (1 = mono, 2 = stereo) 
   BroadcastConfiguration.AudioSampleRate.RATE_48000,  // Sample rate 
   AudioDevice.Format.INT16  // Audio format (16-bit PCM) 
);
```

This method returns a `CustomAudioSource` that accepts raw PCM audio data. The audio format configured on the custom audio source must match the format produced by your audio-processing pipeline.
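As a point of reference, the buffer size implied by a given format follows directly from its parameters. The sketch below is illustrative only; these helper names are not part of the SDK:

```java
public class AudioFormatMath {
    // Bytes occupied by one PCM frame: one sample per channel.
    static int bytesPerFrame(int channels, int bytesPerSample) {
        return channels * bytesPerSample;
    }

    // Buffer size in bytes for a given duration at a given sample rate.
    static int bufferSizeBytes(int sampleRate, int durationMs,
                               int channels, int bytesPerSample) {
        int frames = sampleRate * durationMs / 1000;
        return frames * bytesPerFrame(channels, bytesPerSample);
    }

    public static void main(String[] args) {
        // 10 ms of 48 kHz stereo INT16 audio = 480 frames * 4 bytes = 1920 bytes
        System.out.println(bufferSizeBytes(48000, 10, 2, 2));
    }
}
```

For example, a 10 ms buffer of 48 kHz stereo INT16 audio is 1,920 bytes; sizing your `ByteBuffer` this way avoids partial frames.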

#### Supported Audio Formats
<a name="custom-audio-sources-android-submitting-audio-data-supportedi-audio-formats"></a>


| Parameter | Options | Description | 
| --- | --- | --- | 
| Channels | 1 (mono), 2 (stereo) | Number of audio channels. | 
| Sample rate | RATE\_16000, RATE\_44100, RATE\_48000 | Audio sample rate in Hz. 48 kHz is recommended for high quality. | 
| Format | INT16, FLOAT32 | Audio sample format. INT16 is 16-bit fixed-point PCM; FLOAT32 is 32-bit floating-point PCM. Both interleaved and planar layouts are supported. | 
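If your pipeline produces FLOAT32 samples but the source was created with INT16, convert before submitting. A minimal sketch of interleaved FLOAT32-to-INT16 conversion (the helper name is hypothetical, not an SDK API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConvert {
    // Convert interleaved 32-bit float samples in [-1.0, 1.0] to 16-bit PCM.
    static ByteBuffer floatToInt16(float[] samples) {
        ByteBuffer out = ByteBuffer.allocateDirect(samples.length * 2)
                                   .order(ByteOrder.nativeOrder());
        for (float s : samples) {
            // Clamp to avoid wraparound on out-of-range samples.
            float clamped = Math.max(-1.0f, Math.min(1.0f, s));
            out.putShort((short) (clamped * 32767f));
        }
        out.flip();
        return out;
    }
}
```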

### Submitting Audio Data
<a name="custom-audio-sources-android-submitting-audio-data"></a>

To submit audio data to the custom source, use the `appendBuffer()` method:

```
// Prepare audio data in a ByteBuffer 
ByteBuffer audioBuffer = ByteBuffer.allocateDirect(bufferSize); 
audioBuffer.put(pcmAudioData);  // Your processed audio data 
 
// Calculate the number of bytes 
long byteCount = pcmAudioData.length; 
 
// Submit audio to the custom source 
// presentationTimeUs should be generated by and come from your audio source
int samplesProcessed = customAudioSource.appendBuffer( 
   audioBuffer, 
   byteCount, 
   presentationTimeUs 
); 
 
if (samplesProcessed > 0) { 
   Log.d(TAG, "Successfully submitted " + samplesProcessed + " samples"); 
} else { 
   Log.w(TAG, "Failed to submit audio samples"); 
} 
 
// Clear buffer for reuse 
audioBuffer.clear();
```

**Important notes:**
+ Audio data must be in the format specified when the custom source was created.
+ Timestamps should increase monotonically and be supplied by your audio source, to ensure smooth audio playback.
+ Submit audio at regular intervals to avoid gaps in the stream.
+ The method returns the number of samples processed (0 indicates failure).
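The note about regular submission can be illustrated by slicing a larger PCM block into fixed-duration chunks before submitting each one. This is a sketch only; the chunk size shown assumes 10 ms of 48 kHz stereo INT16 audio:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split interleaved PCM bytes into fixed-size chunks.
    // At 48 kHz stereo INT16, 10 ms = 480 frames * 4 bytes = 1920 bytes.
    static List<byte[]> chunk(byte[] pcm, int chunkBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off + chunkBytes <= pcm.length; off += chunkBytes) {
            byte[] c = new byte[chunkBytes];
            System.arraycopy(pcm, off, c, 0, chunkBytes);
            chunks.add(c);
        }
        return chunks;
    }
}
```

Each chunk would then be wrapped in a direct `ByteBuffer` and passed to `appendBuffer()` with its own timestamp.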

### Publishing to a Stage
<a name="custom-audio-sources-android-publishing-to-a-stage"></a>

Wrap the `CustomAudioSource` in an `AudioLocalStageStream`, then return it from your `StageStrategy`:

```
// Create the audio stream from custom source 
AudioLocalStageStream audioStream = new AudioLocalStageStream(customAudioSource); 
 
// Define your stage strategy 
Strategy stageStrategy = new Strategy() { 
   @NonNull 
   @Override 
   public List<LocalStageStream> stageStreamsToPublishForParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      List<LocalStageStream> streams = new ArrayList<>(); 
      streams.add(audioStream);  // Publish custom audio 
      return streams; 
   } 
 
   @Override 
   public boolean shouldPublishFromParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return true;  // Control when to publish 
   } 
 
   @Override 
   public Stage.SubscribeType shouldSubscribeToParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return Stage.SubscribeType.AUDIO_VIDEO; 
   } 
}; 
 
// Create and join the stage 
Stage stage = new Stage(context, stageToken, stageStrategy);
```

### Complete Example: Audio-Processing Integration
<a name="custom-audio-sources-android-complete-example"></a>

The following is a complete example of integrating with an audio-processing SDK:

```
public class AudioStreamingActivity extends AppCompatActivity { 
   private DeviceDiscovery deviceDiscovery; 
   private CustomAudioSource customAudioSource; 
   private AudioLocalStageStream audioStream; 
   private Stage stage; 
 
   @Override 
   protected void onCreate(Bundle savedInstanceState) { 
      super.onCreate(savedInstanceState); 
 
      // Configure audio manager 
      StageAudioManager.getInstance(this) 
         .setPreset(StageAudioManager.UseCasePreset.VIDEO_CHAT); 
 
      // Initialize IVS components 
      initializeIVSStage(); 
 
      // Initialize your audio processing SDK 
      initializeAudioProcessing(); 
   } 
 
   private void initializeIVSStage() { 
      deviceDiscovery = new DeviceDiscovery(this); 
 
      // Create custom audio source (48kHz stereo, 16-bit) 
      customAudioSource = deviceDiscovery.createAudioInputSource( 
         2,  // Stereo 
         BroadcastConfiguration.AudioSampleRate.RATE_48000, 
         AudioDevice.Format.INT16 
      ); 
 
      // Create audio stream 
      audioStream = new AudioLocalStageStream(customAudioSource); 
 
      // Create stage with strategy 
      Strategy strategy = new Strategy() { 
         @NonNull 
         @Override 
         public List<LocalStageStream> stageStreamsToPublishForParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Collections.singletonList(audioStream); 
         } 
 
         @Override 
         public boolean shouldPublishFromParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return true; 
         } 
 
         @Override 
         public Stage.SubscribeType shouldSubscribeToParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Stage.SubscribeType.AUDIO_VIDEO; 
         } 
      }; 
 
      stage = new Stage(this, getStageToken(), strategy); 
   } 
 
   private void initializeAudioProcessing() { 
      // Initialize your audio processing SDK 
      // Set up callback to receive processed audio 
      yourAudioSDK.setAudioCallback(new AudioCallback() { 
         @Override 
         public void onProcessedAudio(byte[] audioData, int sampleRate, 
                                     int channels, long timestamp) { 
            // Submit processed audio to IVS Stage 
            submitAudioToStage(audioData, timestamp); 
         } 
      }); 
   } 
 
   // The timestamp is required to come from your audio source and you  
   // should not be generating one on your own, unless your audio source 
   // does not provide one. If that is the case, create your own epoch  
   // timestamp and manually calculate the duration between each sample  
   // using the number of frames and frame size. 

   private void submitAudioToStage(byte[] audioData, long timestamp) { 
      try { 
         // Allocate direct buffer 
         ByteBuffer buffer = ByteBuffer.allocateDirect(audioData.length); 
         buffer.put(audioData); 
 
         // Submit to custom audio source 
         int samplesProcessed = customAudioSource.appendBuffer( 
            buffer, 
            audioData.length, 
            timestamp > 0 ? timestamp : System.nanoTime() / 1000 
         ); 
 
         if (samplesProcessed <= 0) { 
            Log.w(TAG, "Failed to submit audio samples"); 
         } 
 
         buffer.clear(); 
      } catch (Exception e) { 
         Log.e(TAG, "Error submitting audio: " + e.getMessage(), e); 
      } 
   } 
 
   @Override 
   protected void onDestroy() { 
      super.onDestroy(); 
      if (stage != null) { 
          stage.release(); 
      } 
   } 
}
```

### Best Practices
<a name="custom-audio-sources-android-best-practices"></a>

#### Audio Format Consistency
<a name="custom-audio-sources-android-best-practices-audio-format-consistency"></a>

Make sure the audio you submit matches the format specified when you created the custom source:

```
// If you create with 48kHz stereo INT16 
customAudioSource = deviceDiscovery.createAudioInputSource( 
   2, RATE_48000, INT16 
); 
 
// Your audio data must be: 
// - 2 channels (stereo) 
// - 48000 Hz sample rate 
// - 16-bit interleaved PCM format
```
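If your pipeline produces mono audio for a source configured as stereo, you would need to upmix before submitting. A minimal sketch (the helper name is hypothetical) that duplicates each mono sample into both channels:

```java
public class ChannelUpmix {
    // Duplicate each mono 16-bit sample into interleaved left/right channels.
    static short[] monoToStereo(short[] mono) {
        short[] stereo = new short[mono.length * 2];
        for (int i = 0; i < mono.length; i++) {
            stereo[2 * i] = mono[i];     // left
            stereo[2 * i + 1] = mono[i]; // right
        }
        return stereo;
    }
}
```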

#### Buffer Management
<a name="custom-audio-sources-android-best-practices-buffer-managemetn"></a>

Use direct `ByteBuffer`s and reuse them to minimize garbage collection:

```
// Allocate once 
private ByteBuffer audioBuffer = ByteBuffer.allocateDirect(BUFFER_SIZE); 
 
// Reuse in callback 
public void onAudioData(byte[] data) { 
   audioBuffer.clear(); 
   audioBuffer.put(data); 
   customAudioSource.appendBuffer(audioBuffer, data.length, getTimestamp()); 
   audioBuffer.clear(); 
}
```

#### Timing and Synchronization
<a name="custom-audio-sources-android-best-practices-timing-and-synchronization"></a>

You must use the timestamps provided by your audio source to ensure smooth audio playback. If your source does not provide its own timestamps, create your own epoch timestamp and manually calculate the duration between samples from the frame count and frame size.

```
// "audioFrameTimestamp" should be generated by your audio source
// Consult your audio source’s documentation for information on how to get this 
long timestamp = audioFrameTimestamp;
```
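The fallback described above, deriving timestamps from a running frame count, can be sketched as follows (the class and method names are illustrative, not SDK APIs):

```java
public class FrameClock {
    private final int sampleRate;
    private final long epochUs;      // starting timestamp, in microseconds
    private long framesSubmitted = 0;

    FrameClock(int sampleRate, long epochUs) {
        this.sampleRate = sampleRate;
        this.epochUs = epochUs;
    }

    // Presentation time of the next buffer, in microseconds.
    // Monotonically increasing by construction, since it advances
    // by the exact duration of each submitted buffer.
    long nextTimestampUs(int framesInBuffer) {
        long ts = epochUs + framesSubmitted * 1_000_000L / sampleRate;
        framesSubmitted += framesInBuffer;
        return ts;
    }
}
```

For example, at 48 kHz each 480-frame buffer advances the clock by exactly 10,000 µs, regardless of when the callback actually fires.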

#### Error Handling
<a name="custom-audio-sources-android-best-practices-error-handling"></a>

Always check the return value of `appendBuffer()`:

```
int samplesProcessed = customAudioSource.appendBuffer(buffer, count, timestamp); 
 
if (samplesProcessed <= 0) { 
   Log.w(TAG, "Audio submission failed - buffer may be full or format mismatch"); 
   // Handle error: check format, reduce submission rate, etc. 
}
```