

# IVS Broadcast SDK: Third-Party Camera Filters (Real-Time Streaming)
<a name="broadcast-3p-camera-filters"></a>

This guide assumes you are already familiar with [custom image sources](broadcast-custom-image-sources.md) and that you have integrated the [IVS real-time streaming broadcast SDK](broadcast.md) into your application.

Camera filters let live-stream creators enhance or alter the appearance of their face or background. This can increase audience engagement, attract viewers, and enrich the live-streaming experience.

# Integrating Third-Party Camera Filters
<a name="broadcast-3p-camera-filters-integrating"></a>

You can integrate a third-party camera filter SDK with the IVS broadcast SDK by feeding the filter SDK's output to a [custom image input source](broadcast-custom-image-sources.md). A custom image input source allows an application to provide its own image input to the broadcast SDK. The third-party filter provider's SDK may manage the camera lifecycle to process images from the camera, apply filter effects, and output them in a format that can be passed to a custom image source.

![\[Integrate a third-party camera filter SDK with the IVS broadcast SDK by providing the filter SDK's output to a custom image input source.\]](http://docs.aws.amazon.com/zh_tw/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Integrating.png)


Consult your third-party filter provider's documentation for built-in methods to convert camera frames with a filter effect applied into a format that can be passed to a [custom image input source](broadcast-custom-image-sources.md). The process varies depending on which IVS broadcast SDK you use:
+ **Web**: The filter provider must be able to render its output to a canvas element. The [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method can then be used to return a MediaStream of the canvas contents. The MediaStream can then be converted into an instance of [LocalStageStream](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/classes/LocalStageStream) and published to a stage (a minimal sketch follows this list).
+ **Android**: The filter provider's SDK can either render frames to an Android `Surface` provided by the IVS broadcast SDK, or convert frames to a bitmap. If using a bitmap, it can be rendered to the underlying `Surface` provided by the custom image source by locking and writing to a canvas.
+ **iOS**: The third-party filter provider's SDK must provide camera frames with the filter effect applied as a `CMSampleBuffer`. Consult the third-party filter vendor's SDK documentation for how to obtain a `CMSampleBuffer` as the final output after camera images are processed.
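
To make the Web hand-off concrete, here is a minimal sketch. The `filter-canvas` element, the 30-fps capture rate, and the publish-strategy wiring are illustrative assumptions, not part of any particular filter SDK:

```
// Minimal sketch: publish a third-party filter SDK's canvas output with the Web broadcast SDK.
// Assumes the filter SDK renders each processed frame into the canvas below.
const { LocalStageStream, SubscribeType } = IVSBroadcastClient;

const filterCanvas = document.getElementById('filter-canvas'); // hypothetical canvas element
const filteredStream = filterCanvas.captureStream(30);         // MediaStream of the canvas contents

// Wrap the canvas video track so it can be published to a stage
const filteredStageStream = new LocalStageStream(filteredStream.getVideoTracks()[0]);

const strategy = {
  stageStreamsToPublish: () => [filteredStageStream],
  shouldPublishParticipant: () => true,
  shouldSubscribeToParticipant: () => SubscribeType.AUDIO_VIDEO,
};
```

A `LocalStageStream` produced this way is published like any other camera track, so the rest of the stage setup described in the sections below is unchanged.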

# Using BytePlus with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-byteplus"></a>

This document explains how to use the BytePlus Effects SDK with the IVS broadcast SDK.

## Android
<a name="integrating-byteplus-android"></a>

### Install and Set Up the BytePlus Effects SDK
<a name="integrating-byteplus-android-install-effects-sdk"></a>

For details on how to install, initialize, and set up the BytePlus Effects SDK, see the BytePlus [Android Access Guide](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide).

### Set Up a Custom Image Source
<a name="integrating-byteplus-android-setup-image-source"></a>

After initializing the SDK, provide processed camera frames with a filter effect applied to a custom image input source. To do this, create an instance of a `DeviceDiscovery` object and create a custom image source. Note that when you use a custom image input source for custom camera control, the broadcast SDK is no longer responsible for managing the camera; instead, the application is responsible for handling the camera's lifecycle correctly.

#### Kotlin
<a name="integrating-byteplus-android-setup-image-source-code"></a>

```
var deviceDiscovery = DeviceDiscovery(applicationContext)
var customSource = deviceDiscovery.createImageInputSource( BroadcastConfiguration.Vec2(
720F, 1280F
))
var surface: Surface = customSource.inputSurface
var filterStream = ImageLocalStageStream(customSource)
```

### Convert Output to a Bitmap and Provide It to the Custom Image Input Source
<a name="integrating-byteplus-android-convert-to-bitmap"></a>

To route camera frames with a filter effect applied from the BytePlus Effects SDK directly to the IVS broadcast SDK, convert the BytePlus Effects SDK's texture output to a bitmap. When an image is processed, the SDK invokes the `onDrawFrame()` method, a public method of the Android [GLSurfaceView.Renderer](https://developer.android.com/reference/android/opengl/GLSurfaceView.Renderer) interface. In the Android sample app provided by BytePlus, this method is called for every camera frame and outputs a texture. You can supplement the `onDrawFrame()` method with logic that converts this texture to a bitmap and provides it to the custom image input source. As shown in the following code sample, use the `transferTextureToBitmap` method, provided by the [com.bytedance.labcv.core.util.ImageUtil](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide#Appendix:%20convert%20input%20texture%20to%202D%20texture%20with%20upright%20face) library from the BytePlus Effects SDK, to perform this conversion. Then render the resulting bitmap to the underlying Android `Surface` of the `CustomImageSource` by writing it to the Surface's Canvas. Many successive invocations of `onDrawFrame()` produce a sequence of bitmaps which, combined, form a video stream.

#### Java
<a name="integrating-byteplus-android-convert-to-bitmap-code"></a>

```
import com.bytedance.labcv.core.util.ImageUtil;
...
protected ImageUtil imageUtility;
...


@Override
public void onDrawFrame(GL10 gl10) {
  ...	
  // Convert BytePlus output to a Bitmap
  Bitmap outputBt = imageUtility.transferTextureToBitmap(output.getTexture(),
      ByteEffectConstants.TextureFormat.Texture2D, output.getWidth(), output.getHeight());

  canvas = surface.lockCanvas(null);
  canvas.drawBitmap(outputBt, 0f, 0f, null);
  surface.unlockCanvasAndPost(canvas);
}
```

# Using DeepAR with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-deepar"></a>

This document explains how to use the DeepAR SDK with the IVS broadcast SDK.

## Android
<a name="integrating-deepar-android"></a>

For details on how to integrate the DeepAR SDK with the IVS Android broadcast SDK, see the [Android Integration Guide from DeepAR](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/android/).

## iOS
<a name="integrating-deepar-ios"></a>

For details on how to integrate the DeepAR SDK with the IVS iOS broadcast SDK, see the [iOS Integration Guide from DeepAR](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/ios/).

# Using Snap with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-snap"></a>

This document explains how to use Snap's Camera Kit SDK with the IVS broadcast SDK.

## Web
<a name="integrating-snap-web"></a>

This section assumes you are already familiar with [publishing and subscribing to video using the Web broadcast SDK](getting-started-pub-sub-web.md).

To integrate Snap's Camera Kit SDK with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install the Camera Kit SDK and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add setup elements.

1. Create `index.css`.

1. Display and set up participants.

1. Display connected cameras and microphones.

1. Create a Camera Kit session.

1. Fetch Lenses and populate the Lens selector.

1. Render the output from the Camera Kit session to a canvas.

1. Create a function to populate the Lens dropdown.

1. Provide Camera Kit with a media source for rendering and publish a `LocalStageStream`.

1. Create `package.json`.

1. Create a Webpack configuration file.

1. Set up an HTTPS server and test.

Each of these steps is described below.

### Install the Camera Kit SDK and Webpack
<a name="integrating-snap-web-install-camera-kit"></a>

In this example, we use Webpack as our bundler; however, you can use any bundler.

```
npm i @snap/camera-kit webpack webpack-cli
```

### Create index.html
<a name="integrating-snap-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version that you are using.

#### HTML
<a name="integrating-snap-web-create-index-code"></a>

```
<!--
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */
-->
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <title>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</title>

  <!-- Fonts and Styling -->
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.css" />
  <link rel="stylesheet" href="./index.css" />

  <!-- Stages in Broadcast SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>
  <!-- Introduction -->
  <header>
    <h1>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</h1>

    <p>This sample is used to demonstrate basic HTML / JS usage. <b><a href="https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multiple-hosts.html">Use the AWS CLI</a></b> to create a <b>Stage</b> and a corresponding <b>ParticipantToken</b>. Multiple participants can load this page and put in their own tokens. You can <b><a href="https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#glossary" target="_blank">read more about stages in our public docs.</a></b></p>
  </header>
  <hr />
  
  <!-- Setup Controls -->
 
  <!-- Display Local Participants -->
  
  <!-- Lens Selector -->

  <!-- Display Remote Participants -->

  <!-- Load All Desired Scripts -->
```

### Add Setup Elements
<a name="integrating-snap-web-add-setup-elements"></a>

Create the HTML for selecting a camera, microphone, and Lens, and for specifying a participant token:

#### HTML
<a name="integrating-snap-web-setup-controls-code"></a>

```
<!-- Setup Controls -->
  <div class="row">
    <div class="column">
      <label for="video-devices">Select Camera</label>
      <select disabled id="video-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="audio-devices">Select Microphone</label>
      <select disabled id="audio-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="token">Participant Token</label>
      <input type="text" id="token" name="token" />
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="join-button">Join Stage</button>
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="leave-button">Leave Stage</button>
    </div>
  </div>
```

Below that, add additional HTML to display camera feeds from local and remote participants:

#### HTML
<a name="integrating-snap-web-local-remote-participants-code"></a>

```
 <!-- Local Participant -->
<div class="row local-container">
    <canvas id="canvas"></canvas>

    <div class="column" id="local-media"></div>
    <div class="static-controls hidden" id="local-controls">
      <button class="button" id="mic-control">Mute Mic</button>
      <button class="button" id="camera-control">Mute Camera</button>
    </div>
  </div>

  
  <hr style="margin-top: 5rem"/>
  
  <!-- Remote Participants -->
  <div class="row">
    <div id="remote-media"></div>
  </div>
```

Load the additional logic, including helper methods for setting up the camera and the bundled JavaScript file. (Later in this section, you will create these JavaScript files and bundle them into a single file so you can import Camera Kit as a module. The bundled JavaScript file will contain the logic for setting up Camera Kit, applying a Lens, and publishing the camera feed with a Lens applied to a stage.) Add closing tags for the `body` and `html` elements to complete the creation of `index.html`.

#### HTML
<a name="integrating-snap-web-load-all-scripts-code"></a>

```
<!-- Load all Desired Scripts -->
  <script src="./helpers.js"></script>
  <script src="./media-devices.js"></script>
  <!-- <script type="module" src="./stages-simple.js"></script> -->
  <script src="./dist/bundle.js"></script>
</body>
</html>
```

### Create index.css
<a name="integrating-snap-web-create-index-css"></a>

Create a CSS source file to style the page. We do not go over this code, so we can focus on the logic for managing the stage and integrating with Snap's Camera Kit SDK.

#### CSS
<a name="integrating-snap-web-create-index-css-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

html,
body {
  margin: 2rem;
  box-sizing: border-box;
  height: 100vh;
  max-height: 100vh;
  display: flex;
  flex-direction: column;
}

hr {
  margin: 1rem 0;
}

table {
  display: table;
}

canvas {
  margin-bottom: 1rem;
  background: green;
}

video {
  margin-bottom: 1rem;
  background: black;
  max-width: 100%;
  max-height: 150px;
}

.log {
  flex: none;
  height: 300px;
}

.content {
  flex: 1 0 auto;
}

.button {
  display: block;
  margin: 0 auto;
}

.local-container {
  position: relative;
}

.static-controls {
  position: absolute;
  margin-left: auto;
  margin-right: auto;
  left: 0;
  right: 0;
  bottom: -4rem;
  text-align: center;
}

.static-controls button {
  display: inline-block;
}

.hidden {
  display: none;
}

.participant-container {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  margin: 1rem;
}

video {
  border: 0.5rem solid #555;
  border-radius: 0.5rem;
}
.placeholder {
  background-color: #333333;
  display: flex;
  text-align: center;
  margin-bottom: 1rem;
}
.placeholder span {
  margin: auto;
  color: white;
}
#local-media {
  display: inline-block;
  width: 100vw;
}

#local-media video {
  max-height: 300px;
}

#remote-media {
  display: flex;
  justify-content: center;
  align-items: center;
  flex-direction: row;
  width: 100%;
}

#lens-selector {
  width: 100%;
  margin-bottom: 1rem;
}
```

### Display and Set Up Participants
<a name="integrating-snap-web-setup-participants"></a>

Next, create `helpers.js`, which contains the helper methods you will use to display and set up participants:

#### JavaScript
<a name="integrating-snap-web-setup-participants-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

function setupParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);

  const participantContainerId = isLocal ? 'local' : id;
  const participantContainer = createContainer(participantContainerId);
  const videoEl = createVideoEl(participantContainerId);

  participantContainer.appendChild(videoEl);
  groupContainer.appendChild(participantContainer);

  return videoEl;
}

function teardownParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);
  const participantContainerId = isLocal ? 'local' : id;

  const participantDiv = document.getElementById(
    participantContainerId + '-container'
  );
  if (!participantDiv) {
    return;
  }
  groupContainer.removeChild(participantDiv);
}

function createVideoEl(id) {
  const videoEl = document.createElement('video');
  videoEl.id = id;
  videoEl.autoplay = true;
  videoEl.playsInline = true;
  videoEl.srcObject = new MediaStream();
  return videoEl;
}

function createContainer(id) {
  const participantContainer = document.createElement('div');
  participantContainer.classList = 'participant-container';
  participantContainer.id = id + '-container';

  return participantContainer;
}
```

### Display Connected Cameras and Microphones
<a name="integrating-snap-web-display-cameras-microphones"></a>

Next, create `media-devices.js`, which contains helper methods for displaying the cameras and microphones connected to your device:

#### JavaScript
<a name="integrating-snap-web-display-cameras-microphones-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

/**
 * Returns an initial list of devices populated on the page selects
 */
async function initializeDeviceSelect() {
  const videoSelectEl = document.getElementById('video-devices');
  videoSelectEl.disabled = false;

  const { videoDevices, audioDevices } = await getDevices();
  videoDevices.forEach((device, index) => {
    videoSelectEl.options[index] = new Option(device.label, device.deviceId);
  });

  const audioSelectEl = document.getElementById('audio-devices');

  audioSelectEl.disabled = false;
  audioDevices.forEach((device, index) => {
    audioSelectEl.options[index] = new Option(device.label, device.deviceId);
  });
}

/**
 * Returns all devices available on the current device
 */
async function getDevices() {
  // Prevents issues on Safari/FF so devices are not blank
  await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  // Get all video devices
  const videoDevices = devices.filter((d) => d.kind === 'videoinput');
  if (!videoDevices.length) {
    console.error('No video devices found.');
  }

  // Get all audio devices
  const audioDevices = devices.filter((d) => d.kind === 'audioinput');
  if (!audioDevices.length) {
    console.error('No audio devices found.');
  }

  return { videoDevices, audioDevices };
}

async function getCamera(deviceId) {
  // Use Max Width and Height
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
    audio: false,
  });
}

async function getMic(deviceId) {
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
  });
}
```

### Create a Camera Kit Session
<a name="integrating-snap-web-camera-kit-session"></a>

Create `stages.js`, which contains the logic for applying a Lens to the camera feed and publishing the feed to a stage. We recommend copying and pasting the following code block into `stages.js`. You can then review the code piece by piece to understand what is happening in the sections that follow.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);

  const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};

async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);

  // Create StageStreams for Audio and Video
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
      lensSelector.disabled = false;
    } else {
      controls.classList.add('hidden');
      lensSelector.disabled = true;
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

In the first part of this file, we import the broadcast SDK and the Camera Kit Web SDK and initialize the variables we will use with each SDK. We create a Camera Kit session by calling `createSession` after [bootstrapping the Camera Kit Web SDK](https://kit.snapchat.com/reference/CameraKit/web/0.7.0/index.html#bootstrapping-the-sdk). Note that the canvas element object is passed to the session; this tells Camera Kit to render into that canvas.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code-2"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
```

### Fetch Lenses and Populate the Lens Selector
<a name="integrating-snap-web-fetch-apply-lens"></a>

To fetch your Lenses, replace the Lens group ID placeholder with your own ID, which can be found in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/). Populate the Lens selection dropdown using the `populateLensSelector()` function that we create later.

#### JavaScript
<a name="integrating-snap-web-fetch-apply-lens-code"></a>

```
session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);
```

### Render the Output of a Camera Kit Session to a Canvas
<a name="integrating-snap-web-render-output-to-canvas"></a>

Use the [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method to return a `MediaStream` of the canvas contents. The canvas will contain a video stream of the camera feed with a Lens applied. Also, add event listeners for the buttons that mute the camera and microphone, as well as event listeners for joining and leaving a stage. In the event listener for joining a stage, we pass in the Camera Kit session and the `MediaStream` from the canvas so that it can be published to the stage.

#### JavaScript
<a name="integrating-snap-web-render-output-to-canvas-code"></a>

```
const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};
```

### Create a Function to Populate the Lens Dropdown
<a name="integrating-snap-web-populate-lens-dropdown"></a>

Create the following function to populate the **Lens** selector with the Lenses fetched earlier. The **Lens** selector is a UI element on the page that lets you select which Lens from a list of Lenses to apply to the camera feed. Also, create the `handleLensChange` callback function to apply the specified Lens when it is selected from the **Lens** dropdown.

#### JavaScript
<a name="integrating-snap-web-populate-lens-dropdown-code"></a>

```
const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};
```

### Provide Camera Kit with a Media Source for Rendering and Publish a LocalStageStream
<a name="integrating-snap-web-publish-localstagestream"></a>

To publish a video stream with a Lens applied, create a function called `setCameraKitSource` that takes the `MediaStream` captured earlier from the canvas. The `MediaStream` from the canvas is not doing anything at the moment, because we have not yet incorporated the local camera feed. We can incorporate the local camera feed by calling the `getCamera` helper method and assigning the result to `localCamera`. We can then pass the local camera feed (via `localCamera`) and the session object to `setCameraKitSource`. The `setCameraKitSource` function converts the local camera feed into a [CameraKit media source](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#creating-a-camerakitsource) by calling `createMediaStreamSource`. The `CameraKit` media source is then [transformed](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#2d-transforms) to mirror the front-facing camera. The Lens effect is then applied to the media source and rendered to the output canvas by calling `session.play()`.

With the Lens applied to the `MediaStream` captured from the canvas, we can proceed to publish it to a stage. This is done by creating a `LocalStageStream` with the video track from that `MediaStream`. An instance of `LocalStageStream` can then be passed in to the `StageStrategy` to be published.

#### JavaScript
<a name="integrating-snap-web-publish-localstagestream-code"></a>

```
async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);
  // Create StageStreams for Audio and Video
  // cameraStageStream = new LocalStageStream(localCamera.getVideoTracks()[0]);
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };
```

The remaining code below creates and manages our stage:

#### JavaScript
<a name="integrating-snap-web-create-manage-stage-code"></a>

```
stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events

  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
    } else {
      controls.classList.add('hidden');
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

### Create package.json
<a name="integrating-snap-web-package-json"></a>

Create `package.json` and add the following JSON configuration. This file defines our dependencies and includes a script command for bundling our code.

#### JSON Configuration
<a name="integrating-snap-web-package-json-code"></a>

```
{
  "dependencies": {
    "@snap/camera-kit": "^0.10.0"
  },
  "name": "ivs-stages-with-snap-camerakit",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "build": "webpack"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "webpack": "^5.95.0",
    "webpack-cli": "^5.1.4"
  }
}
```

### Create a Webpack Configuration File
<a name="integrating-snap-web-webpack-config"></a>

Create `webpack.config.js` and add the following code. This bundles the code we have created so far, so that we can use the import statement to use Camera Kit.

#### JavaScript
<a name="integrating-snap-web-webpack-config-code"></a>

```
const path = require('path');
module.exports = {
  entry: ['./stages.js'],
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```

Finally, run `npm run build` to bundle your JavaScript, as defined in the Webpack configuration file. For testing purposes, you can serve the HTML and JavaScript from your local computer. In this example, we use Python's `http.server` module.

### Set Up an HTTPS Server and Test
<a name="integrating-snap-web-https-server-test"></a>

To test our code, we need to set up an HTTPS server. Using an HTTPS server for local development and testing of your web application's integration with Snap's Camera Kit SDK helps avoid CORS (Cross-Origin Resource Sharing) issues.

Open a terminal and navigate to the directory where you have created all the code so far. Run the following command to generate a self-signed SSL/TLS certificate and private key:

```
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This creates two files: `key.pem` (the private key) and `cert.pem` (the self-signed certificate). Create a new Python file named `https_server.py` and add the following code:

#### Python
<a name="integrating-snap-web-https-server-test-code"></a>

```
import http.server
import ssl

# Set the directory to serve files from
DIRECTORY = '.'

# Create the HTTPS server
server_address = ('', 4443)
httpd = http.server.HTTPServer(
    server_address, http.server.SimpleHTTPRequestHandler)

# Wrap the socket with SSL/TLS
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('cert.pem', 'key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print(f'Starting HTTPS server on https://localhost:4443, serving {DIRECTORY}')
httpd.serve_forever()
```

Open a terminal, navigate to the directory where you created the `https_server.py` file, and run the following command:

```
python3 https_server.py
```

This starts the HTTPS server at https://localhost:4443, serving files from the current directory. Make sure the `cert.pem` and `key.pem` files are in the same directory as the `https_server.py` file.

Open your browser and navigate to https://localhost:4443. Because this is a self-signed SSL/TLS certificate, your web browser will not trust it, so you will get a warning. Since this is only for testing purposes, you can bypass the warning. You should then see the Snap Lens AR effect you specified earlier applied to your camera feed on screen.

Note that this setup, which uses Python's built-in `http.server` and `ssl` modules, is suitable for local development and testing but is not recommended for a production environment. The self-signed SSL/TLS certificate used in this setup is not trusted by web browsers and other clients, which means users will encounter security warnings when accessing the server. Also, although we use Python's built-in `http.server` and `ssl` modules in this example, you can choose another HTTPS server solution.
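
For instance, a minimal Node.js alternative, assuming the same `cert.pem` and `key.pem` files generated above, could look like the following sketch (illustrative only, not part of the official sample):

```
// https_server.js - minimal Node.js HTTPS static-file server, for local testing only.
// Assumes cert.pem and key.pem are in the current directory.
const https = require('https');
const fs = require('fs');
const path = require('path');

const server = https.createServer(
  {
    cert: fs.readFileSync('cert.pem'),
    key: fs.readFileSync('key.pem'),
  },
  (req, res) => {
    // Serve files relative to the current directory; default to index.html
    const requestedPath = req.url === '/' ? '/index.html' : req.url;

    fs.readFile(path.join(process.cwd(), requestedPath), (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end('Not found');
        return;
      }
      res.writeHead(200);
      res.end(data);
    });
  }
);

server.listen(4443, () => {
  console.log('HTTPS server running at https://localhost:4443');
});
```

Run it with `node https_server.js` and browse to https://localhost:4443, the same way as with the Python server above.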

## Android
<a name="integrating-snap-android"></a>

To integrate Snap's Camera Kit SDK with the IVS Android broadcast SDK, you must install the Camera Kit SDK, initialize a Camera Kit session, apply a Lens, and then feed the Camera Kit session's output to a custom image input source.

To install the Camera Kit SDK, add the following to your module's `build.gradle` file. Replace `$cameraKitVersion` with the [latest version of the Camera Kit SDK](https://docs.snap.com/camera-kit/integrate-sdk/mobile/changelog-mobile).

### Java
<a name="integrating-snap-android-install-camerakit-sdk-code"></a>

```
implementation "com.snap.camerakit:camerakit:$cameraKitVersion"
```

Initialize and obtain a `cameraKitSession`. Camera Kit also provides a convenient wrapper for Android's [CameraX](https://developer.android.com/media/camera/camerax) API, so you can use CameraX together with Camera Kit without writing complicated logic. You can use the `CameraXImageProcessorSource` object as a [Source](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-source/index.html) for the [ImageProcessor](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-image-processor/index.html), which lets you start the camera preview streaming frames.

### Java
<a name="integrating-snap-android-initialize-camerakitsession-code"></a>

```
 protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_main);

        // Camera Kit support implementation of ImageProcessor that is backed by CameraX library:
        // https://developer.android.com/training/camerax
        CameraXImageProcessorSource imageProcessorSource = new CameraXImageProcessorSource( 
            this /*context*/, this /*lifecycleOwner*/
        );
        imageProcessorSource.startPreview(true /*cameraFacingFront*/);

        cameraKitSession = Sessions.newBuilder(this)
                .imageProcessorSource(imageProcessorSource)
                .attachTo(findViewById(R.id.camerakit_stub))
                .build();
    }
```

### Fetch and Apply a Lens
<a name="integrating-snap-android-fetch-apply-lenses"></a>

You can configure Lenses and their ordering in a carousel in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/):

#### Java
<a name="integrating-snap-android-configure-lenses-code"></a>

```
// Fetch lenses from repository and apply them
 // Replace LENS_GROUP_ID with Lens Group ID from https://camera-kit.snapchat.com
cameraKitSession.getLenses().getRepository().get(new Available(LENS_GROUP_ID), available -> {
     Log.d(TAG, "Available lenses: " + available);
     Lenses.whenHasFirst(available, lens -> cameraKitSession.getLenses().getProcessor().apply(lens, result -> {
          Log.d(TAG,  "Apply lens [" + lens + "] success: " + result);
      }));
});
```

To broadcast, send the processed frames to the underlying `Surface` of a custom image source. Use a `DeviceDiscovery` object and create a `CustomImageSource` to return a `SurfaceSource`. You can then render the output of a `CameraKit` session to the underlying `Surface` provided by the `SurfaceSource`.

#### Kotlin
<a name="integrating-snap-android-broadcast-code"></a>

```
val publishStreams = ArrayList<LocalStageStream>()

val deviceDiscovery = DeviceDiscovery(applicationContext)
val customSource = deviceDiscovery.createImageInputSource(BroadcastConfiguration.Vec2(720f, 1280f))

cameraKitSession.processor.connectOutput(outputFrom(customSource.inputSurface))

// After rendering the output from a Camera Kit session to the Surface, you can
// then return it as an ImageLocalStageStream to be published by the Broadcast SDK
val customStream = ImageLocalStageStream(customSource)
publishStreams.add(customStream)

override fun stageStreamsToPublishForParticipant(stage: Stage, participantInfo: ParticipantInfo): List<LocalStageStream> = publishStreams
```

# Using Background Replacement with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-background-replacement"></a>

Background replacement is a camera filter that enables live-stream creators to change their backgrounds. As shown in the diagram below, replacing the background involves:

1. Getting a camera image from the live camera feed.

1. Segmenting it into foreground and background components using Google ML Kit.

1. Combining the resulting segmentation mask with a custom background image.

1. Passing that to a custom image source for broadcast.

![\[Workflow for implementing background replacement.\]](http://docs.aws.amazon.com/zh_tw/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Background_Replacement.png)


## Web
<a name="background-replacement-web"></a>

This section assumes you are already familiar with [publishing and subscribing to video using the Web broadcast SDK](getting-started-pub-sub-web.md).

To replace the background of a live stream with a custom image, use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model) with the [MediaPipe Image Segmenter](https://developers.google.com/mediapipe/solutions/vision/image_segmenter). This is a machine-learning model that identifies which pixels in a video frame are in the foreground and which are in the background. You can then use the model's results to replace the background of a live stream by copying the foreground pixels from the video feed onto a custom image that represents the new background.

To integrate background replacement with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install MediaPipe and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add media elements.

1. Add a script tag.

1. Create `app.js`.

1. Load a custom background image.

1. Create an instance of `ImageSegmenter`.

1. Render the video feed to a canvas.

1. Create the background replacement logic.

1. Create a Webpack configuration file.

1. Bundle your JavaScript file.

### Install MediaPipe and Webpack
<a name="background-replacement-web-install-mediapipe-webpack"></a>

To start, install the `@mediapipe/tasks-vision` and `webpack` npm packages. The example below uses Webpack as the JavaScript bundler; you can use a different bundler if you prefer.

#### JavaScript
<a name="background-replacement-web-install-mediapipe-webpack-code"></a>

```
npm i @mediapipe/tasks-vision webpack webpack-cli
```

Be sure to update your `package.json` to specify `webpack` as your build script:

#### JavaScript
<a name="background-replacement-web-update-package-json-code"></a>

```
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack"
  },
```

### Create index.html
<a name="background-replacement-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version that you are using.

#### HTML
<a name="background-replacement-web-create-index-code"></a>

```
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <!-- Import the SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>

</body>
</html>
```

### Add Media Elements
<a name="background-replacement-web-add-media-elements"></a>

Next, within the body tag, add one video element and two canvas elements. The video element will contain your live camera feed and will be used as input to the MediaPipe Image Segmenter. The first canvas element will be used to render a preview of the feed that will be broadcast. The second canvas element will be used to render the custom image that serves as the background. Since the second canvas with the custom image is used only as a source for programmatically copying pixels to the final canvas, it is hidden from view.

#### HTML
<a name="background-replacement-web-add-media-elements-code"></a>

```
<div class="row local-container">
      <video id="webcam" autoplay style="display: none"></video>
    </div>
    <div class="row local-container">
      <canvas id="canvas" width="640px" height="480px"></canvas>

      <div class="column" id="local-media"></div>
      <div class="static-controls hidden" id="local-controls">
        <button class="button" id="mic-control">Mute Mic</button>
        <button class="button" id="camera-control">Mute Camera</button>
      </div>
    </div>
    <div class="row local-container">
      <canvas id="background" width="640px" height="480px" style="display: none"></canvas>
    </div>
```

### Add a Script Tag
<a name="background-replacement-web-add-script-tag"></a>

Add a script tag to load the bundled JavaScript file, which will contain the code that performs the background replacement and publishes it to a stage:

```
<script src="./dist/bundle.js"></script>
```

### Create app.js
<a name="background-replacement-web-create-appjs"></a>

Next, create a JavaScript file that gets the element objects for the canvas and video elements created in the HTML page. Import the `ImageSegmenter` and `FilesetResolver` modules. The `ImageSegmenter` module will be used to perform the segmentation task.

#### JavaScript
<a name="create-appjs-import-imagesegmenter-fileresolver-code"></a>

```
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";
```

Next, create a function called `init()` to retrieve a `MediaStream` from the user's camera and invoke a callback function each time a camera frame finishes loading. Add event listeners for the buttons to join and leave a stage.

Note that when joining a stage, we pass in a variable named `segmentationStream`. This is the video stream captured from the canvas element, containing the foreground image overlaid on the custom image representing the background. Later, this custom stream will be used to create an instance of `LocalStageStream`, which can be published to a stage.

#### JavaScript
<a name="create-appjs-create-init-code"></a>

```
const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });
};
```

### Load a Custom Background Image
<a name="background-replacement-web-background-image"></a>

At the bottom of the `init` function, add code to call a function named `initBackgroundCanvas`, which loads a custom image from a local file and renders it onto a canvas. We will define this function in the next step. Assign the `MediaStream` retrieved from the user's camera to the video object. Later, this video object will be passed to the Image Segmenter. Also, set a callback function named `renderVideoToCanvas` to be invoked whenever a video frame finishes loading. We will define this function in a later step.

#### JavaScript
<a name="background-replacement-web-load-background-image-code"></a>

```
initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
```

Let's implement the `initBackgroundCanvas` function, which loads an image from a local file. This example uses an image of a beach as the custom background. The canvas containing the custom image is hidden from display, because you will merge it with the foreground pixels from the canvas element containing the camera feed.

#### JavaScript
<a name="background-replacement-web-implement-initBackgroundCanvas-code"></a>

```
const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, canvas.width, canvas.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};
```

### Create an Instance of ImageSegmenter
<a name="background-replacement-web-imagesegmenter"></a>

Next, create an instance of `ImageSegmenter`, which segments the image and returns the result as a mask. When creating an instance of `ImageSegmenter`, use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model).

#### JavaScript
<a name="background-replacement-web-imagesegmenter-code"></a>

```
const createImageSegmenter = async () => {
  const audio = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(audio, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};
```

### Render the Video Feed to a Canvas
<a name="background-replacement-web-render-video-to-canvas"></a>

Next, create a function that renders the video feed to the other canvas element. We need to render the video feed to a canvas so we can extract its foreground pixels using the Canvas 2D API. While doing this, we also pass the video frame to our `ImageSegmenter` instance, using the [segmentForVideo](https://developers.google.com/mediapipe/api/solutions/js/tasks-vision.imagesegmenter#imagesegmentersegmentforvideo) method to segment the foreground from the background in the video frame. When the `segmentForVideo` method returns, it invokes our custom callback function, `replaceBackground`, to perform the background replacement.

#### JavaScript
<a name="background-replacement-web-render-video-to-canvas-code"></a>

```
const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};
```

### Create the Background Replacement Logic
<a name="background-replacement-web-logic"></a>

Create the `replaceBackground` function, which merges the custom background image with the foreground of the camera feed to replace the background. The function first retrieves the underlying pixel data of the custom background image and of the video feed from the two canvas elements created earlier. It then iterates through the mask provided by `ImageSegmenter`, which indicates which pixels are in the foreground. As it iterates through the mask, it selectively copies the pixels that contain the user's camera feed into the corresponding background pixel data. Once that is done, it converts the final pixel data, with the foreground copied onto the background, and draws it to the canvas.

#### JavaScript
<a name="background-replacement-web-logic-create-replacebackground-code"></a>

```
function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);
    // Each mask value maps to one RGBA pixel (4 bytes) in the image data
    const j = i * 4;
  // Only copy pixels on to the background image if the mask indicates they are in the foreground
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
  }

 // Convert the pixel data to a format suitable to be drawn to a canvas
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}
```

For reference, here is the complete `app.js` file containing all of the logic above:

#### JavaScript
<a name="background-replacement-web-logic-app-js-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

// All helpers are exposed in 'media-devices.js' and 'dom.js'
const { setupParticipant } = window;

const { Stage, LocalStageStream, SubscribeType, StageEvents, ConnectionState, StreamType } = IVSBroadcastClient;
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";

let cameraButton = document.getElementById("camera-control");
let micButton = document.getElementById("mic-control");
let joinButton = document.getElementById("join-button");
let leaveButton = document.getElementById("leave-button");

let controls = document.getElementById("local-controls");
let audioDevicesList = document.getElementById("audio-devices");
let videoDevicesList = document.getElementById("video-devices");

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;
let imageSegmenter;
let lastWebcamTime = -1;

const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });

  initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
};

const joinStage = async (segmentationStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById("token").value;

  if (!token) {
    window.alert("Please enter a participant token");
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localMic = await getMic(audioDevicesList.value);

  cameraStageStream = new LocalStageStream(segmentationStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove("hidden");
    } else {
      controls.classList.add("hidden");
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log("Participant Joined:", participant);
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
    console.log("Participant Media Added: ", participant, streams);

    let streamsToDisplay = streams;

    if (participant.isLocal) {
      // Ensure to exclude local audio streams, otherwise echo will occur
      streamsToDisplay = streams.filter((stream) => stream.streamType === StreamType.VIDEO);
    }

    const videoEl = setupParticipant(participant);
    streamsToDisplay.forEach((stream) => videoEl.srcObject.addTrack(stream.mediaStreamTrack));
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log("Participant Left: ", participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = "Hide Camera";
  micButton.innerText = "Mute Mic";
  controls.classList.add("hidden");
};

function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);
    // Each mask value maps to one RGBA pixel (4 bytes) in the image data
    const j = i * 4;
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
  }
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}

const createImageSegmenter = async () => {
  const audio = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(audio, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};

const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};

const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, canvas.width, canvas.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};

createImageSegmenter();
init();
```

### Create a Webpack Configuration File
<a name="background-replacement-web-webpack-config"></a>

Add this configuration to your Webpack configuration file to bundle `app.js`, so the import calls work:

#### JavaScript
<a name="background-replacement-web-webpack-config-code"></a>

```
const path = require("path");
module.exports = {
  entry: ["./app.js"],
  output: {
    filename: "bundle.js",
    path: path.resolve(__dirname, "dist"),
  },
};
```

### Bundle Your JavaScript File
<a name="background-replacement-web-bundle-javascript"></a>

```
npm run build
```

Start a simple HTTP server from the directory containing `index.html` and open `localhost:8000` to see the result:

```
python3 -m http.server -d ./
```

## Android
<a name="background-replacement-android"></a>

To replace the background in a live stream, you can use the selfie segmentation API from [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation). The selfie segmentation API accepts a camera image as input and returns a mask that provides a confidence score for each pixel of the image, indicating whether the pixel is in the foreground or the background. Based on the confidence score, you then retrieve the corresponding pixel color from either the background image or the foreground image. This process continues until all confidence scores in the mask have been examined. The result is a new array of pixel colors containing foreground pixels combined with pixels from the background image.

To integrate background replacement with the IVS real-time streaming Android broadcast SDK, you need to:

1. Install the CameraX library and Google ML Kit.

1. Initialize boilerplate variables.

1. Create a custom image source.

1. Manage camera frames.

1. Pass camera frames to Google ML Kit.

1. Overlay the camera frame foreground onto your custom background.

1. Feed the new image to the custom image source.

### Install the CameraX Library and Google ML Kit
<a name="background-replacement-android-install-camerax-googleml"></a>

To extract images from your live camera feed, use Android's CameraX library. To install the CameraX library and Google ML Kit, add the following to your module's `build.gradle` file. Replace `${camerax_version}` and `${google_ml_kit_version}` with the latest versions of the [CameraX](https://developer.android.com/jetpack/androidx/releases/camera) and [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation/android) libraries, respectively.

#### Java
<a name="background-replacement-android-install-camerax-googleml-code"></a>

```
implementation "com.google.mlkit:segmentation-selfie:${google_ml_kit_version}"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
```

Import the following libraries:

#### Kotlin
<a name="background-replacement-android-import-libraries-code"></a>

```
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.camera.lifecycle.ProcessCameraProvider
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions
```

### Initialize Boilerplate Variables
<a name="background-replacement-android-initialize-variables"></a>

Initialize an instance of `ImageAnalysis` and an instance of an `ExecutorService`:

#### Kotlin
<a name="background-replacement-android-initialize-imageanalysis-executorservice-code"></a>

```
private lateinit var binding: ActivityMainBinding
private lateinit var cameraExecutor: ExecutorService
private var analysisUseCase: ImageAnalysis? = null
```

Initialize a Segmenter instance in [STREAM_MODE](https://developers.google.com/ml-kit/vision/selfie-segmentation/android#detector_mode):

#### Kotlin
<a name="background-replacement-android-initialize-segmenter-code"></a>

```
private val options =
        SelfieSegmenterOptions.Builder()
            .setDetectorMode(SelfieSegmenterOptions.STREAM_MODE)
            .build()

private val segmenter = Segmentation.getClient(options)
```

### Create a Custom Image Source
<a name="background-replacement-android-create-image-source"></a>

In your activity's `onCreate` method, create an instance of a `DeviceDiscovery` object and create a custom image source. The `Surface` provided by the custom image source will receive the final image, with the foreground overlaid on the custom background image. You then create an instance of `ImageLocalStageStream` using the custom image source. That instance of `ImageLocalStageStream` (named `filterStream` in this example) can then be published to a stage. See the [IVS Android Broadcast SDK Guide](broadcast-android.md) for instructions on setting up a stage. Finally, also create a thread that will be used to manage the camera.

#### Kotlin
<a name="background-replacement-android-create-image-source-code"></a>

```
var deviceDiscovery = DeviceDiscovery(applicationContext)
var customSource = deviceDiscovery.createImageInputSource( BroadcastConfiguration.Vec2(
720F, 1280F
))
var surface: Surface = customSource.inputSurface
var filterStream = ImageLocalStageStream(customSource)

cameraExecutor = Executors.newSingleThreadExecutor()
```

### Manage Camera Frames
<a name="background-replacement-android-camera-frames"></a>

Next, create a function to initialize the camera. This function uses the CameraX library to extract images from the live camera feed. First, you create an instance of a `ProcessCameraProvider` named `cameraProviderFuture`. This object represents a future result of obtaining a camera provider. Then you load an image from your project as a bitmap. This example uses an image of a beach as the background, but you can use any image you want.

You then add a listener to `cameraProviderFuture`. The listener is notified when the camera becomes available or if an error occurs while obtaining a camera provider.

#### Kotlin
<a name="background-replacement-android-initialize-camera-code"></a>

```
private fun startCamera(surface: Surface) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    val imageResource = R.drawable.beach
    val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)
    var resultBitmap: Bitmap

    cameraProviderFuture.addListener({
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Set up the ImageAnalysis use case and its analyzer here (shown in the next
        // snippets). Inside the analyzer's success listener, the segmentation result is
        // composited with the custom background and drawn to the custom image source's
        // Surface:
        //
        //     resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
        //     canvas = surface.lockCanvas(null)
        //     canvas.drawBitmap(resultBitmap, 0f, 0f, null)
        //     surface.unlockCanvasAndPost(canvas)
        //
        // The segmenter call also adds failure and completion listeners:
        //
        //     .addOnFailureListener { exception -> Log.d("App", exception.message!!) }
        //     .addOnCompleteListener { imageProxy.close() }

        val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
```

In the listener, create an `ImageAnalysis.Builder` to access each individual frame from the live camera feed. Set the backpressure strategy to `STRATEGY_KEEP_ONLY_LATEST`. This guarantees that only one camera frame at a time is delivered for processing. Convert each camera frame to a bitmap so that you can extract its pixels and later combine them with the custom background image.

#### Kotlin
<a name="background-replacement-android-create-imageanalysisbuilder-code"></a>

```
val imageAnalyzer = ImageAnalysis.Builder()
analysisUseCase = imageAnalyzer
    .setTargetResolution(Size(360, 640))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
    val mediaImage = imageProxy.image
    val tempBitmap = imageProxy.toBitmap();
    // rotate() is a Bitmap helper, not part of the Android SDK; see the sketch below
    val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())
```
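
The `rotate` call above is not part of the Android `Bitmap` API, and the original walkthrough does not show its definition. A minimal extension function along the following lines (a hypothetical helper, shown here only so the snippets compile) is enough to satisfy the calls in this guide:

```
import android.graphics.Bitmap
import android.graphics.Matrix

// Hypothetical helper: rotates a Bitmap by the given number of degrees.
fun Bitmap.rotate(degrees: Float): Bitmap {
    val matrix = Matrix().apply { postRotate(degrees) }
    return Bitmap.createBitmap(this, 0, 0, width, height, matrix, true)
}
```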

### Pass Camera Frames to Google ML Kit
<a name="background-replacement-android-frames-to-mlkit"></a>

Next, create an `InputImage` and pass it to the instance of the segmenter for processing. The `InputImage` can be created from the `ImageProxy` provided by the instance of `ImageAnalysis`. Once an `InputImage` is provided to the segmenter, it returns a mask with confidence scores indicating the likelihood of each pixel being in the foreground or the background. This mask also provides width and height properties, which you use to create a new array containing the background pixels from the custom background image loaded earlier.

#### Kotlin
<a name="background-replacement-android-frames-to-mlkit-code"></a>

```
if (mediaImage != null) {
    val inputImage =
        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

    segmenter.process(inputImage)
        .addOnSuccessListener { segmentationMask ->
            val mask = segmentationMask.buffer
            val maskWidth = segmentationMask.width
            val maskHeight = segmentationMask.height
            val backgroundPixels = IntArray(maskWidth * maskHeight)
            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)
```

### Overlay the Camera Frame Foreground onto the Custom Background
<a name="background-replacement-android-overlay-frame-foreground"></a>

With the mask containing the confidence scores, the camera frame as a bitmap, and the color pixels from the custom background image, you have everything you need to overlay the foreground onto the custom background. The `overlayForeground` function is then called with the following parameters:

#### Kotlin
<a name="background-replacement-android-call-overlayforeground-code"></a>

```
resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
```

This function iterates through the mask and checks the confidence values to decide whether to take the corresponding pixel color from the background image or from the camera frame. If the confidence value indicates that a pixel in the mask is most likely part of the background, the function takes the corresponding pixel color from the background image; otherwise, it takes the corresponding pixel color from the camera frame to build the foreground. Once it finishes iterating through the mask, the function creates and returns a new bitmap from the new array of color pixels. This new bitmap contains the foreground overlaid onto the custom background.

#### Kotlin
<a name="background-replacement-android-run-overlayforeground-code"></a>

```
private fun overlayForeground(
        byteBuffer: ByteBuffer,
        maskWidth: Int,
        maskHeight: Int,
        cameraBitmap: Bitmap,
        backgroundPixels: IntArray
    ): Bitmap {
        @ColorInt val colors = IntArray(maskWidth * maskHeight)
        val cameraPixels = IntArray(maskWidth * maskHeight)

        cameraBitmap.getPixels(cameraPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

        for (i in 0 until maskWidth * maskHeight) {
            val backgroundLikelihood: Float = 1 - byteBuffer.getFloat()

            // Apply the virtual background to the color if it's not part of the foreground
            if (backgroundLikelihood > 0.9) {
                // Get the corresponding pixel color from the background image
                // Set the color in the mask based on the background image pixel color
                colors[i] = backgroundPixels.get(i)
            } else {
                // Get the corresponding pixel color from the camera frame
                // Set the color in the mask based on the camera image pixel color
                colors[i] = cameraPixels.get(i)
            }
        }

        return Bitmap.createBitmap(
            colors, maskWidth, maskHeight, Bitmap.Config.ARGB_8888
        )
    }
```

### Feed the New Image to the Custom Image Source
<a name="background-replacement-android-custom-image-source"></a>

You can then write the new bitmap to the `Surface` provided by the custom image source by locking its `Canvas`, drawing the bitmap, and posting the canvas. This broadcasts it to your stage.

#### Kotlin
<a name="background-replacement-android-custom-image-source-code"></a>

```
resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
canvas = surface.lockCanvas(null);
canvas.drawBitmap(resultBitmap, 0f, 0f, null)
surface.unlockCanvasAndPost(canvas);
```

Here is the complete function for getting a camera frame, passing it to the segmenter, and overlaying it onto the background:

#### Kotlin
<a name="background-replacement-android-custom-image-source-startcamera-code"></a>

```
@androidx.annotation.OptIn(androidx.camera.core.ExperimentalGetImage::class)
    private fun startCamera(surface: Surface) {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
        val imageResource = R.drawable.beach
        val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)
        var resultBitmap: Bitmap;

        cameraProviderFuture.addListener({
            // Used to bind the lifecycle of cameras to the lifecycle owner
            val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

            val imageAnalyzer = ImageAnalysis.Builder()
            analysisUseCase = imageAnalyzer
                .setTargetResolution(Size(720, 1280))
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build()

            analysisUseCase!!.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
                val mediaImage = imageProxy.image
                val tempBitmap = imageProxy.toBitmap();
                val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())

                if (mediaImage != null) {
                    val inputImage =
                        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

                    segmenter.process(inputImage)
                        .addOnSuccessListener { segmentationMask ->
                            val mask = segmentationMask.buffer
                            val maskWidth = segmentationMask.width
                            val maskHeight = segmentationMask.height
                            val backgroundPixels = IntArray(maskWidth * maskHeight)
                            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

                            resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
                            canvas = surface.lockCanvas(null);
                            canvas.drawBitmap(resultBitmap, 0f, 0f, null)

                            surface.unlockCanvasAndPost(canvas);

                        }
                        .addOnFailureListener { exception ->
                            Log.d("App", exception.message!!)
                        }
                        .addOnCompleteListener {
                            imageProxy.close()
                        }

                }
            };

            val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

            try {
                // Unbind use cases before rebinding
                cameraProvider.unbindAll()

                // Bind use cases to camera
                cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)

            } catch(exc: Exception) {
                Log.e(TAG, "Use case binding failed", exc)
            }

        }, ContextCompat.getMainExecutor(this))
    }
```
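
Because the application, not the broadcast SDK, owns the camera in this setup, remember to release resources when the activity is destroyed. A minimal sketch, assuming the variables declared in the boilerplate section above, might look like the following; use cases bound with `bindToLifecycle` are released automatically when the lifecycle owner is destroyed, so only the analyzer and the camera thread need explicit cleanup here.

```
override fun onDestroy() {
    super.onDestroy()
    // Stop delivering frames to the analyzer and release the camera thread created in onCreate
    analysisUseCase?.clearAnalyzer()
    cameraExecutor.shutdown()
}
```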