

# IVS Broadcast SDK: Custom Audio Sources for Real-Time Streaming
<a name="broadcast-custom-audio-sources"></a>

**Note:** This guide applies only to the IVS Real-Time Streaming Android broadcast SDK. Information for the iOS and Web SDKs will be published in the future.

Custom audio input sources let your application supply its own audio input to the broadcast SDK, rather than being limited to the device's built-in microphone. With a custom audio source, an application can stream processed audio with effects, mix multiple audio streams, or integrate with third-party audio-processing libraries.

When you use a custom audio input source, the broadcast SDK no longer manages the microphone directly. Instead, your application is responsible for capturing audio data, processing it, and submitting it to the custom source.

The custom-audio-source workflow follows these steps:

1. Audio input — Create a custom audio source with the specified audio format (sample rate, channels, format).

1. Your processing — Capture or generate audio data from your audio-processing pipeline.

1. Custom audio source — Submit audio buffers to the custom source using `appendBuffer()`.

1. Stage — Wrap it in a `LocalStageStream` and publish to a stage through your `StageStrategy`.

1. Participants — Stage participants receive the processed audio in real time.

## Android
<a name="custom-audio-sources-android"></a>

### Creating a Custom Audio Source
<a name="custom-audio-sources-android-creating-a-custom-audio-source"></a>

After creating a `DeviceDiscovery` session, create an audio input source:

```
DeviceDiscovery deviceDiscovery = new DeviceDiscovery(context); 
 
// Create custom audio source with specific format 
CustomAudioSource customAudioSource = deviceDiscovery.createAudioInputSource( 
   2,  // Number of channels (1 = mono, 2 = stereo) 
   BroadcastConfiguration.AudioSampleRate.RATE_48000,  // Sample rate 
   AudioDevice.Format.INT16  // Audio format (16-bit PCM) 
);
```

This method returns a `CustomAudioSource` that accepts raw PCM audio data. The custom audio source must be configured with the same audio format that your audio-processing pipeline produces.

#### Supported Audio Formats
<a name="custom-audio-sources-android-submitting-audio-data-supportedi-audio-formats"></a>


| Parameter | Options | Description | 
| --- | --- | --- | 
| Channels | 1 (mono), 2 (stereo) | Number of audio channels. | 
| Sample rate | RATE_16000, RATE_44100, RATE_48000 | Audio sample rate in Hz. 48 kHz is recommended for high quality. | 
| Format | INT16, FLOAT32 | Audio sample format. INT16 is 16-bit fixed-point PCM; FLOAT32 is 32-bit floating-point PCM. Both interleaved and planar layouts are available. | 
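
As a rough guide to sizing buffers for these formats, the bytes per frame and per submission interval follow directly from the channel count and sample size. A minimal sketch (the `PcmMath` class and its methods are illustrative helpers, not part of the SDK):

```java
// Illustrative PCM buffer-sizing helpers; not part of the IVS SDK.
public class PcmMath {

    // Bytes per interleaved frame: channels * bytes per sample (INT16 = 2, FLOAT32 = 4).
    public static int frameSizeBytes(int channels, int bytesPerSample) {
        return channels * bytesPerSample;
    }

    // Buffer size in bytes for a given duration of interleaved PCM.
    public static int bufferSizeBytes(int sampleRate, int channels,
                                      int bytesPerSample, int durationMs) {
        int frames = sampleRate * durationMs / 1000;
        return frames * frameSizeBytes(channels, bytesPerSample);
    }

    public static void main(String[] args) {
        // 10 ms of 48 kHz stereo INT16: 480 frames * 4 bytes = 1920 bytes.
        System.out.println(bufferSizeBytes(48000, 2, 2, 10));
    }
}
```

For example, a 10 ms buffer of 48 kHz stereo INT16 audio is 1,920 bytes; sizing `ByteBuffer.allocateDirect()` this way up front avoids reallocating on every callback.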

### Submitting Audio Data
<a name="custom-audio-sources-android-submitting-audio-data"></a>

To submit audio data to the custom source, use the `appendBuffer()` method:

```
// Prepare audio data in a ByteBuffer 
ByteBuffer audioBuffer = ByteBuffer.allocateDirect(bufferSize); 
audioBuffer.put(pcmAudioData);  // Your processed audio data 
 
// Calculate the number of bytes 
long byteCount = pcmAudioData.length; 
 
// Submit audio to the custom source 
// presentationTimeUs should be generated by and come from your audio source
int samplesProcessed = customAudioSource.appendBuffer( 
   audioBuffer, 
   byteCount, 
   presentationTimeUs 
); 
 
if (samplesProcessed > 0) { 
   Log.d(TAG, "Successfully submitted " + samplesProcessed + " samples"); 
} else { 
   Log.w(TAG, "Failed to submit audio samples"); 
} 
 
// Clear buffer for reuse 
audioBuffer.clear();
```

**Important considerations:**
+ Audio data must be in the format specified when the custom source was created.
+ Timestamps should increase monotonically and be provided by your audio source, to ensure smooth playback.
+ Submit audio at regular intervals to avoid gaps in the stream.
+ The method returns the number of samples processed (0 indicates failure).

### Publishing to a Stage
<a name="custom-audio-sources-android-publishing-to-a-stage"></a>

Wrap the `CustomAudioSource` in an `AudioLocalStageStream` and return it from your `StageStrategy`:

```
// Create the audio stream from custom source 
AudioLocalStageStream audioStream = new AudioLocalStageStream(customAudioSource); 
 
// Define your stage strategy 
Strategy stageStrategy = new Strategy() { 
   @NonNull 
   @Override 
   public List<LocalStageStream> stageStreamsToPublishForParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      List<LocalStageStream> streams = new ArrayList<>(); 
      streams.add(audioStream);  // Publish custom audio 
      return streams; 
   } 
 
   @Override 
   public boolean shouldPublishFromParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return true;  // Control when to publish 
   } 
 
   @Override 
   public Stage.SubscribeType shouldSubscribeToParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return Stage.SubscribeType.AUDIO_VIDEO; 
   } 
}; 
 
// Create and join the stage 
Stage stage = new Stage(context, stageToken, stageStrategy);
```

### Complete Example: Audio Processing Integration
<a name="custom-audio-sources-android-complete-example"></a>

The following is a complete example showing integration with an audio-processing SDK:

```
public class AudioStreamingActivity extends AppCompatActivity { 
   private DeviceDiscovery deviceDiscovery; 
   private CustomAudioSource customAudioSource; 
   private AudioLocalStageStream audioStream; 
   private Stage stage; 
 
   @Override 
   protected void onCreate(Bundle savedInstanceState) { 
      super.onCreate(savedInstanceState); 
 
      // Configure audio manager 
      StageAudioManager.getInstance(this) 
         .setPreset(StageAudioManager.UseCasePreset.VIDEO_CHAT); 
 
      // Initialize IVS components 
      initializeIVSStage(); 
 
      // Initialize your audio processing SDK 
      initializeAudioProcessing(); 
   } 
 
   private void initializeIVSStage() { 
      deviceDiscovery = new DeviceDiscovery(this); 
 
      // Create custom audio source (48kHz stereo, 16-bit) 
      customAudioSource = deviceDiscovery.createAudioInputSource( 
         2,  // Stereo 
         BroadcastConfiguration.AudioSampleRate.RATE_48000, 
         AudioDevice.Format.INT16 
      ); 
 
      // Create audio stream 
      audioStream = new AudioLocalStageStream(customAudioSource); 
 
      // Create stage with strategy 
      Strategy strategy = new Strategy() { 
         @NonNull 
         @Override 
         public List<LocalStageStream> stageStreamsToPublishForParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Collections.singletonList(audioStream); 
         } 
 
         @Override 
         public boolean shouldPublishFromParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return true; 
         } 
 
         @Override 
         public Stage.SubscribeType shouldSubscribeToParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Stage.SubscribeType.AUDIO_VIDEO; 
         } 
      }; 
 
      stage = new Stage(this, getStageToken(), strategy); 
   } 
 
   private void initializeAudioProcessing() { 
      // Initialize your audio processing SDK 
      // Set up callback to receive processed audio 
      yourAudioSDK.setAudioCallback(new AudioCallback() { 
         @Override 
         public void onProcessedAudio(byte[] audioData, int sampleRate, 
                                     int channels, long timestamp) { 
            // Submit processed audio to IVS Stage 
            submitAudioToStage(audioData, timestamp); 
         } 
      }); 
   } 
 
   // The timestamp is required to come from your audio source and you  
   // should not be generating one on your own, unless your audio source 
   // does not provide one. If that is the case, create your own epoch  
   // timestamp and manually calculate the duration between each sample  
   // using the number of frames and frame size. 

   private void submitAudioToStage(byte[] audioData, long timestamp) { 
      try { 
         // Allocate direct buffer 
         ByteBuffer buffer = ByteBuffer.allocateDirect(audioData.length); 
         buffer.put(audioData); 
 
         // Submit to custom audio source 
         int samplesProcessed = customAudioSource.appendBuffer( 
            buffer, 
            audioData.length, 
            timestamp > 0 ? timestamp : System.nanoTime() / 1000 
         ); 
 
         if (samplesProcessed <= 0) { 
            Log.w(TAG, "Failed to submit audio samples"); 
         } 
 
         buffer.clear(); 
      } catch (Exception e) { 
         Log.e(TAG, "Error submitting audio: " + e.getMessage(), e); 
      } 
   } 
 
   @Override 
   protected void onDestroy() { 
      super.onDestroy(); 
      if (stage != null) { 
          stage.release(); 
      } 
   } 
}
```

### Best Practices
<a name="custom-audio-sources-android-best-practices"></a>

#### Audio Format Consistency
<a name="custom-audio-sources-android-best-practices-audio-format-consistency"></a>

Ensure the audio you submit matches the format specified when you created the custom source:

```
// If you create with 48kHz stereo INT16 
customAudioSource = deviceDiscovery.createAudioInputSource( 
   2, RATE_48000, INT16 
); 
 
// Your audio data must be: 
// - 2 channels (stereo) 
// - 48000 Hz sample rate 
// - 16-bit interleaved PCM format
```
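
If your pipeline produces normalized float samples but the source was created with INT16, convert the samples before submission. A minimal conversion sketch (the `PcmConvert` class is illustrative, not an SDK API):

```java
// Illustrative FLOAT32-to-INT16 conversion; not part of the IVS SDK.
public class PcmConvert {

    // Convert normalized [-1.0, 1.0] float samples to interleaved 16-bit PCM,
    // clamping out-of-range values to avoid wraparound distortion.
    public static short[] floatToInt16(float[] samples) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            float clamped = Math.max(-1.0f, Math.min(1.0f, samples[i]));
            out[i] = (short) Math.round(clamped * 32767.0f);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] out = floatToInt16(new float[]{0.0f, 1.0f, -1.0f});
        System.out.println(out[0] + " " + out[1] + " " + out[2]);
    }
}
```

Clamping before scaling matters: a float sample slightly above 1.0 would otherwise overflow the 16-bit range and produce an audible click.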

#### Buffer Management
<a name="custom-audio-sources-android-best-practices-buffer-managemetn"></a>

Use direct `ByteBuffer`s and reuse them to minimize garbage collection:

```
// Allocate once 
private ByteBuffer audioBuffer = ByteBuffer.allocateDirect(BUFFER_SIZE); 
 
// Reuse in callback 
public void onAudioData(byte[] data) { 
   audioBuffer.clear(); 
   audioBuffer.put(data); 
   customAudioSource.appendBuffer(audioBuffer, data.length, getTimestamp()); 
   audioBuffer.clear(); 
}
```

#### Timing and Synchronization
<a name="custom-audio-sources-android-best-practices-timing-and-synchronization"></a>

You must use the timestamps provided by your audio source to achieve smooth playback. If your audio source does not provide its own timestamps, create your own epoch timestamp and manually calculate the duration between samples using the number of frames and the frame size.

```
// "audioFrameTimestamp" should be generated by your audio source
// Consult your audio source’s documentation for information on how to get this 
long timestamp = audioFrameTimestamp;
```
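
Sketching that fallback: derive each buffer's presentation time from a fixed epoch plus the number of frames already submitted, so timestamps stay monotonic and gap-free regardless of callback jitter (the `AudioTimestamper` class is illustrative, not an SDK API):

```java
// Illustrative fallback timestamp generator; not part of the IVS SDK.
// Use only when your audio source provides no timestamps of its own.
public class AudioTimestamper {
    private final long epochUs;      // chosen start time, in microseconds
    private final int sampleRate;    // frames per second
    private long framesSubmitted = 0;

    public AudioTimestamper(long epochUs, int sampleRate) {
        this.epochUs = epochUs;
        this.sampleRate = sampleRate;
    }

    // Presentation time for the next buffer, derived from frames already sent,
    // so successive timestamps advance by exactly the buffer duration.
    public long nextTimestampUs(int framesInBuffer) {
        long ts = epochUs + framesSubmitted * 1_000_000L / sampleRate;
        framesSubmitted += framesInBuffer;
        return ts;
    }
}
```

Counting frames rather than reading the wall clock per callback keeps the timestamp spacing exact: a 480-frame buffer at 48 kHz always advances the clock by precisely 10 ms.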

#### Error Handling
<a name="custom-audio-sources-android-best-practices-error-handling"></a>

Always check the return value of `appendBuffer()`:

```
int samplesProcessed = customAudioSource.appendBuffer(buffer, count, timestamp); 
 
if (samplesProcessed <= 0) { 
   Log.w(TAG, "Audio submission failed - buffer may be full or format mismatch"); 
   // Handle error: check format, reduce submission rate, etc. 
}
```