

# IVS Broadcast SDK: Third-Party Camera Filters (Real-Time Streaming)
<a name="broadcast-3p-camera-filters"></a>

This guide assumes that you are already familiar with [custom image sources](broadcast-custom-image-sources.md) and with integrating the [IVS real-time streaming broadcast SDK](broadcast.md) into your application.

Camera filters enable live-stream creators to enhance or alter the appearance of their faces or backgrounds. This can increase viewer engagement, attract audiences, and enhance the live-streaming experience.

# Integrating Third-Party Camera Filters
<a name="broadcast-3p-camera-filters-integrating"></a>

You can integrate a third-party camera-filter SDK with the IVS broadcast SDK by feeding the filter SDK's output into a [custom image input source](broadcast-custom-image-sources.md). A custom image input source allows an application to provide its own image input to the broadcast SDK. An SDK from a third-party filter provider can manage the camera lifecycle, processing images from the camera, applying a filter effect, and outputting the result in a format that can be passed to a custom image source.

![Integrating a third-party camera filter SDK with the IVS broadcast SDK by feeding the filter SDK's output into a custom image input source.](http://docs.aws.amazon.com/zh_cn/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Integrating.png)


See your third-party filter provider's documentation for built-in methods to convert camera frames with a filter effect applied into a format that can be passed to a [custom image input source](broadcast-custom-image-sources.md). The process varies depending on which IVS broadcast SDK you are using:
+ **Web** — The filter provider must be able to render its output to a canvas element. The [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method can then be used to return a MediaStream of the canvas contents. The MediaStream can then be converted into an instance of a [LocalStageStream](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/classes/LocalStageStream) and published to a stage (see the sketch after this list).
+ **Android** — The filter provider's SDK can either render frames to an Android `Surface` provided by the IVS broadcast SDK or convert frames into bitmaps. If using bitmaps, they can be rendered to the underlying `Surface` provided by the custom image source by locking the `Surface`'s canvas, drawing the bitmap to it, and then unlocking and posting the canvas.
+ **iOS** — The third-party filter provider's SDK must provide camera frames with a filter effect applied as a `CMSampleBuffer`. See the third-party filter provider SDK's documentation for how to get a `CMSampleBuffer` as the final output after camera images are processed.
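
To make the Web flow concrete, here is a minimal sketch of publishing a filter SDK's canvas output to a stage. It assumes the filter SDK renders into a canvas element with the hypothetical ID `filter-output`, that a participant token is already available, and that the broadcast SDK has been loaded as a script tag; the element ID and frame rate are illustrative only, not part of any specific filter SDK.

```
const { Stage, LocalStageStream, SubscribeType } = IVSBroadcastClient;

async function publishFilteredCanvas(participantToken) {
  // The third-party filter SDK is assumed to render processed camera frames into this canvas.
  const filterCanvas = document.getElementById('filter-output');

  // Capture the canvas contents as a MediaStream (30 fps here is illustrative).
  const filteredStream = filterCanvas.captureStream(30);

  // Wrap the filtered video track in a LocalStageStream so the broadcast SDK can publish it.
  const filteredStageStream = new LocalStageStream(filteredStream.getVideoTracks()[0]);

  const strategy = {
    stageStreamsToPublish: () => [filteredStageStream],
    shouldPublishParticipant: () => true,
    shouldSubscribeToParticipant: () => SubscribeType.AUDIO_VIDEO,
  };

  const stage = new Stage(participantToken, strategy);
  await stage.join();
}
```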

# Using BytePlus with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-byteplus"></a>

This document describes how to use the BytePlus Effects SDK with the IVS broadcast SDK.

## Android
<a name="integrating-byteplus-android"></a>

### Install and Set Up the BytePlus Effects SDK
<a name="integrating-byteplus-android-install-effects-sdk"></a>

For details on installing, initializing, and setting up the BytePlus Effects SDK, see the BytePlus [Android Access Guide](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide).

### Set Up a Custom Image Source
<a name="integrating-byteplus-android-setup-image-source"></a>

After the SDK is initialized, feed processed camera frames with a filter effect applied to a custom image input source. To do that, create an instance of a `DeviceDiscovery` object and create a custom image source. Note that when you use a custom image input source for custom control of the camera, the broadcast SDK is no longer responsible for managing the camera. Instead, the application is responsible for handling the camera's lifecycle correctly.

#### Kotlin
<a name="integrating-byteplus-android-setup-image-source-code"></a>

```
var deviceDiscovery = DeviceDiscovery(applicationContext)
var customSource = deviceDiscovery.createImageInputSource(
    BroadcastConfiguration.Vec2(720F, 1280F)
)
var surface: Surface = customSource.inputSurface
var filterStream = ImageLocalStageStream(customSource)
```

### Convert Output to a Bitmap and Feed to the Custom Image Input Source
<a name="integrating-byteplus-android-convert-to-bitmap"></a>

To forward camera frames with a filter effect applied from the BytePlus Effects SDK directly to the IVS broadcast SDK, convert the BytePlus Effects SDK's texture output to a bitmap. When an image is processed, the SDK invokes the `onDrawFrame()` method, a public method of Android's [GLSurfaceView.Renderer](https://developer.android.com/reference/android/opengl/GLSurfaceView.Renderer) interface. In the Android sample app provided by BytePlus, this method is invoked on every camera frame and outputs a texture. You can supplement the `onDrawFrame()` method with logic that converts this texture to a bitmap and feeds it to the custom image input source. As shown in the following code sample, use the `transferTextureToBitmap` method for this conversion; it is provided by the [com.bytedance.labcv.core.util.ImageUtil](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide#Appendix:%20convert%20input%20texture%20to%202D%20texture%20with%20upright%20face) library from the BytePlus Effects SDK. You can then render the resulting bitmap to the underlying Android `Surface` of the `CustomImageSource` by writing it to the `Surface`'s canvas. Successive invocations of `onDrawFrame()` produce a sequence of bitmaps that, when combined, form a video stream.

#### Java
<a name="integrating-byteplus-android-convert-to-bitmap-code"></a>

```
import com.bytedance.labcv.core.util.ImageUtil;
...
protected ImageUtil imageUtility;
...


@Override
public void onDrawFrame(GL10 gl10) {
  ...	
  // Convert BytePlus output to a Bitmap
  Bitmap outputBt = imageUtility.transferTextureToBitmap(
      output.getTexture(),
      ByteEffectConstants.TextureFormat.Texture2D,
      output.getWidth(),
      output.getHeight());

  // Draw the bitmap onto the canvas of the Surface provided by the custom image source
  canvas = surface.lockCanvas(null);
  canvas.drawBitmap(outputBt, 0f, 0f, null);
  surface.unlockCanvasAndPost(canvas);
}
```

# Using DeepAR with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-deepar"></a>

This document describes how to use the DeepAR SDK with the IVS broadcast SDK.

## Android
<a name="integrating-deepar-android"></a>

For details on integrating the DeepAR SDK with the Android IVS broadcast SDK, see [DeepAR's Android integration guide](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/android/).

## iOS
<a name="integrating-deepar-ios"></a>

For details on integrating the DeepAR SDK with the iOS IVS broadcast SDK, see [DeepAR's iOS integration guide](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/ios/).

# Using Snap with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-snap"></a>

This document describes how to use Snap's Camera Kit SDK with the IVS broadcast SDK.

## Web
<a name="integrating-snap-web"></a>

This section assumes that you are already familiar with [publishing and subscribing to video using the Web Broadcast SDK](getting-started-pub-sub-web.md).

To integrate Snap's Camera Kit SDK with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install the Camera Kit SDK and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add setup elements.

1. Create `index.css`.

1. Display and set up participants.

1. Display connected cameras and microphones.

1. Create a Camera Kit session.

1. Fetch a Lens and populate the Lens selector.

1. Render the output from the Camera Kit session to a canvas.

1. Create a function to populate the Lens dropdown.

1. Provide Camera Kit with a media source for rendering and publish a `LocalStageStream`.

1. Create `package.json`.

1. Create a Webpack configuration file.

1. Set up an HTTPS server and test.

Each of these steps is described below.

### Install the Camera Kit SDK and Webpack
<a name="integrating-snap-web-install-camera-kit"></a>

In this example, we use Webpack as our bundler; however, you can use any bundler.

```
npm i @snap/camera-kit webpack webpack-cli
```

### Create index.html
<a name="integrating-snap-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version you are using.

#### HTML
<a name="integrating-snap-web-create-index-code"></a>

```
<!--
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */
-->
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <title>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</title>

  <!-- Fonts and Styling -->
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.css" />
  <link rel="stylesheet" href="./index.css" />

  <!-- Stages in Broadcast SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>
  <!-- Introduction -->
  <header>
    <h1>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</h1>

    <p>This sample is used to demonstrate basic HTML / JS usage. <b><a href="https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multiple-hosts.html">Use the AWS CLI</a></b> to create a <b>Stage</b> and a corresponding <b>ParticipantToken</b>. Multiple participants can load this page and put in their own tokens. You can <b><a href="https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#glossary" target="_blank">read more about stages in our public docs.</a></b></p>
  </header>
  <hr />
  
  <!-- Setup Controls -->
 
  <!-- Display Local Participants -->
  
  <!-- Lens Selector -->

  <!-- Display Remote Participants -->

  <!-- Load All Desired Scripts -->
```

### Add Setup Elements
<a name="integrating-snap-web-add-setup-elements"></a>

Create the HTML for selecting a camera, microphone, and Lens, as well as for specifying a participant token:

#### HTML
<a name="integrating-snap-web-setup-controls-code"></a>

```
<!-- Setup Controls -->
  <div class="row">
    <div class="column">
      <label for="video-devices">Select Camera</label>
      <select disabled id="video-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="audio-devices">Select Microphone</label>
      <select disabled id="audio-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="token">Participant Token</label>
      <input type="text" id="token" name="token" />
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="join-button">Join Stage</button>
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="leave-button">Leave Stage</button>
    </div>
  </div>
```

Below that, add more HTML to display camera feeds from local and remote participants:

#### HTML
<a name="integrating-snap-web-local-remote-participants-code"></a>

```
 <!-- Local Participant -->
<div class="row local-container">
    <canvas id="canvas"></canvas>

    <div class="column" id="local-media"></div>
    <div class="static-controls hidden" id="local-controls">
      <button class="button" id="mic-control">Mute Mic</button>
      <button class="button" id="camera-control">Mute Camera</button>
    </div>
  </div>

  
  <hr style="margin-top: 5rem"/>
  
  <!-- Remote Participants -->
  <div class="row">
    <div id="remote-media"></div>
  </div>
```

Load additional logic, including helper methods for setting up the camera, and the bundled JavaScript file. (Later in this section, you will create these JavaScript files and bundle them into a single file, so you can import Camera Kit as a module. The bundled JavaScript file will contain the logic for setting up Camera Kit, applying a Lens, and publishing the camera feed with a Lens applied to a stage.) Add closing tags for the `body` and `html` elements to complete the creation of `index.html`.

#### HTML
<a name="integrating-snap-web-load-all-scripts-code"></a>

```
<!-- Load all Desired Scripts -->
  <script src="./helpers.js"></script>
  <script src="./media-devices.js"></script>
  <!-- <script type="module" src="./stages-simple.js"></script> -->
  <script src="./dist/bundle.js"></script>
</body>
</html>
```

### Create index.css
<a name="integrating-snap-web-create-index-css"></a>

Create a CSS source file to style the page. We won't go over this code in detail, so we can focus on the logic for managing a stage and integrating with Snap's Camera Kit SDK.

#### CSS
<a name="integrating-snap-web-create-index-css-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

html,
body {
  margin: 2rem;
  box-sizing: border-box;
  height: 100vh;
  max-height: 100vh;
  display: flex;
  flex-direction: column;
}

hr {
  margin: 1rem 0;
}

table {
  display: table;
}

canvas {
  margin-bottom: 1rem;
  background: green;
}

video {
  margin-bottom: 1rem;
  background: black;
  max-width: 100%;
  max-height: 150px;
}

.log {
  flex: none;
  height: 300px;
}

.content {
  flex: 1 0 auto;
}

.button {
  display: block;
  margin: 0 auto;
}

.local-container {
  position: relative;
}

.static-controls {
  position: absolute;
  margin-left: auto;
  margin-right: auto;
  left: 0;
  right: 0;
  bottom: -4rem;
  text-align: center;
}

.static-controls button {
  display: inline-block;
}

.hidden {
  display: none;
}

.participant-container {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  margin: 1rem;
}

video {
  border: 0.5rem solid #555;
  border-radius: 0.5rem;
}
.placeholder {
  background-color: #333333;
  display: flex;
  text-align: center;
  margin-bottom: 1rem;
}
.placeholder span {
  margin: auto;
  color: white;
}
#local-media {
  display: inline-block;
  width: 100vw;
}

#local-media video {
  max-height: 300px;
}

#remote-media {
  display: flex;
  justify-content: center;
  align-items: center;
  flex-direction: row;
  width: 100%;
}

#lens-selector {
  width: 100%;
  margin-bottom: 1rem;
}
```

### Display and Set Up Participants
<a name="integrating-snap-web-setup-participants"></a>

Next, create `helpers.js`, which contains helper methods for displaying and setting up participants:

#### JavaScript
<a name="integrating-snap-web-setup-participants-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

function setupParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);

  const participantContainerId = isLocal ? 'local' : id;
  const participantContainer = createContainer(participantContainerId);
  const videoEl = createVideoEl(participantContainerId);

  participantContainer.appendChild(videoEl);
  groupContainer.appendChild(participantContainer);

  return videoEl;
}

function teardownParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);
  const participantContainerId = isLocal ? 'local' : id;

  const participantDiv = document.getElementById(
    participantContainerId + '-container'
  );
  if (!participantDiv) {
    return;
  }
  groupContainer.removeChild(participantDiv);
}

function createVideoEl(id) {
  const videoEl = document.createElement('video');
  videoEl.id = id;
  videoEl.autoplay = true;
  videoEl.playsInline = true;
  videoEl.srcObject = new MediaStream();
  return videoEl;
}

function createContainer(id) {
  const participantContainer = document.createElement('div');
  participantContainer.classList = 'participant-container';
  participantContainer.id = id + '-container';

  return participantContainer;
}
```

### Display Connected Cameras and Microphones
<a name="integrating-snap-web-display-cameras-microphones"></a>

Next, create `media-devices.js`, which contains helper methods for displaying the cameras and microphones connected to your device:

#### JavaScript
<a name="integrating-snap-web-display-cameras-microphones-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

/**
 * Returns an initial list of devices populated on the page selects
 */
async function initializeDeviceSelect() {
  const videoSelectEl = document.getElementById('video-devices');
  videoSelectEl.disabled = false;

  const { videoDevices, audioDevices } = await getDevices();
  videoDevices.forEach((device, index) => {
    videoSelectEl.options[index] = new Option(device.label, device.deviceId);
  });

  const audioSelectEl = document.getElementById('audio-devices');

  audioSelectEl.disabled = false;
  audioDevices.forEach((device, index) => {
    audioSelectEl.options[index] = new Option(device.label, device.deviceId);
  });
}

/**
 * Returns all devices available on the current device
 */
async function getDevices() {
  // Prevents issues on Safari/FF so devices are not blank
  await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  // Get all video devices
  const videoDevices = devices.filter((d) => d.kind === 'videoinput');
  if (!videoDevices.length) {
    console.error('No video devices found.');
  }

  // Get all audio devices
  const audioDevices = devices.filter((d) => d.kind === 'audioinput');
  if (!audioDevices.length) {
    console.error('No audio devices found.');
  }

  return { videoDevices, audioDevices };
}

async function getCamera(deviceId) {
  // Use Max Width and Height
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
    audio: false,
  });
}

async function getMic(deviceId) {
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
  });
}
```

### Create a Camera Kit Session
<a name="integrating-snap-web-camera-kit-session"></a>

Create `stages.js`, which contains the logic for applying a Lens to the camera feed and publishing that feed to a stage. We recommend copying and pasting the following code block into `stages.js`. You can then review the code piece by piece to understand what's happening, as explained in the following sections.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);

  const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};

async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);

  // Create StageStreams for Audio and Video
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
      lensSelector.disabled = false;
    } else {
      controls.classList.add('hidden');
      lensSelector.disabled = true;
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

In the first part of this file, we import the broadcast SDK and the Camera Kit Web SDK and initialize the variables we will use with each SDK. We create a Camera Kit session by calling `createSession` after [bootstrapping the Camera Kit Web SDK](https://kit.snapchat.com/reference/CameraKit/web/0.7.0/index.html#bootstrapping-the-sdk). Note that a canvas element object is passed to the session; this tells Camera Kit to render into that canvas.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code-2"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
```

### Fetch a Lens and Populate the Lens Selector
<a name="integrating-snap-web-fetch-apply-lens"></a>

To fetch a Lens, replace the Lens group ID placeholder with your own, which can be found in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/). Populate the Lens selection dropdown using the `populateLensSelector()` function, which we will create later.

#### JavaScript
<a name="integrating-snap-web-fetch-apply-lens-code"></a>

```
session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);
```

### Render the Output of a Camera Kit Session to a Canvas
<a name="integrating-snap-web-render-output-to-canvas"></a>

Use the [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method to return a `MediaStream` of the contents of the canvas. The canvas will contain a video stream of the camera feed with a Lens applied. Also, add event listeners to the buttons for muting the camera and microphone, as well as event listeners for joining and leaving a stage. In the event listener for joining a stage, we pass in the Camera Kit session and the `MediaStream` from the canvas so that it can be published to a stage.

#### JavaScript
<a name="integrating-snap-web-render-output-to-canvas-code"></a>

```
const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};
```

### Create a Function to Populate the Lens Dropdown
<a name="integrating-snap-web-populate-lens-dropdown"></a>

Create the following function to populate the **Lens** selector with the Lenses fetched earlier. The **Lens** selector is a UI element on the page that lets you pick, from a list of Lenses, the one to apply to the camera feed. Also, create the `handleLensChange` callback function to apply the specified Lens when it is selected from the **Lens** dropdown.

#### JavaScript
<a name="integrating-snap-web-populate-lens-dropdown-code"></a>

```
const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};
```

### Provide Camera Kit with a Media Source for Rendering and Publish a LocalStageStream
<a name="integrating-snap-web-publish-localstagestream"></a>

To publish a video stream with a Lens applied, create a function called `setCameraKitSource` and pass in the `MediaStream` captured earlier from the canvas. The `MediaStream` from the canvas doesn't do anything at the moment, because we haven't incorporated our local camera feed yet. We can incorporate our local camera feed by calling the `getCamera` helper method and assigning the result to `localCamera`. We can then pass our local camera feed (via `localCamera`) and the session object to `setCameraKitSource`. The `setCameraKitSource` function converts our local camera feed to a [media source for CameraKit](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#creating-a-camerakitsource) by calling `createMediaStreamSource`. The media source for `CameraKit` is then [transformed](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#2d-transforms) to mirror the front-facing camera. The Lens effect is then applied to the media source and rendered to the output canvas by a call to `session.play()`.

Now, with the Lens applied to the `MediaStream` captured from the canvas, we can proceed to publish it to a stage. We do this by creating a `LocalStageStream` with the video tracks from the `MediaStream`. An instance of `LocalStageStream` can then be passed into a `StageStrategy` to be published.

#### JavaScript
<a name="integrating-snap-web-publish-localstagestream-code"></a>

```
async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);
  // Create StageStreams for Audio and Video
  // cameraStageStream = new LocalStageStream(localCamera.getVideoTracks()[0]);
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };
```

The rest of the code below creates and manages our stage:

#### JavaScript
<a name="integrating-snap-web-create-manage-stage-code"></a>

```
stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events

  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
    } else {
      controls.classList.add('hidden');
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

### Create package.json
<a name="integrating-snap-web-package-json"></a>

Create `package.json` and add the following JSON configuration. This file defines our dependencies and includes a script command for bundling our code.

#### JSON Configuration
<a name="integrating-snap-web-package-json-code"></a>

```
{
  "dependencies": {
    "@snap/camera-kit": "^0.10.0"
  },
  "name": "ivs-stages-with-snap-camerakit",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "build": "webpack"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "webpack": "^5.95.0",
    "webpack-cli": "^5.1.4"
  }
}
```

### Create a Webpack Configuration File
<a name="integrating-snap-web-webpack-config"></a>

Create `webpack.config.js` and add the following code. This bundles the code we've created so far, enabling us to use the import statement to use Camera Kit.

#### JavaScript
<a name="integrating-snap-web-webpack-config-code"></a>

```
const path = require('path');
module.exports = {
  entry: ['./stages.js'],
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```

Finally, run `npm run build` to bundle your JavaScript as defined in the Webpack configuration file. For testing purposes, you can then serve the HTML and JavaScript from your local computer. In this example, we use Python's `http.server` module.

### Set Up an HTTPS Server and Test
<a name="integrating-snap-web-https-server-test"></a>

To test our code, we need to set up an HTTPS server. Using an HTTPS server for local development and testing of your web application's integration with Snap's Camera Kit SDK helps avoid CORS (Cross-Origin Resource Sharing) issues.

Open a terminal and navigate to the directory where you created all the code up to this point. Run the following command to generate a self-signed SSL/TLS certificate and private key:

```
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This creates two files: `key.pem` (the private key) and `cert.pem` (the self-signed certificate). Create a new Python file named `https_server.py` and add the following code:

#### Python
<a name="integrating-snap-web-https-server-test-code"></a>

```
import http.server
import ssl

# Set the directory to serve files from
DIRECTORY = '.'

# Create the HTTPS server
server_address = ('', 4443)
httpd = http.server.HTTPServer(
    server_address, http.server.SimpleHTTPRequestHandler)

# Wrap the socket with SSL/TLS
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('cert.pem', 'key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print(f'Starting HTTPS server on https://localhost:4443, serving {DIRECTORY}')
httpd.serve_forever()
```

Open a terminal, navigate to the directory where you created the `https_server.py` file, and run the following command:

```
python3 https_server.py
```

This starts the HTTPS server at https://localhost:4443, serving files from the current directory. Make sure the `cert.pem` and `key.pem` files are in the same directory as the `https_server.py` file.

Open your browser and navigate to https://localhost:4443. Because this is a self-signed SSL/TLS certificate, your web browser will not trust it, so you will get a warning. Since this is only for testing purposes, you can bypass the warning. You should then see the AR effect of the Snap Lens you specified earlier applied to your camera feed on screen.

Note that this setup, which uses Python's built-in `http.server` and `ssl` modules, is fine for local development and testing purposes, but is not recommended for a production environment. The self-signed SSL/TLS certificate used in this setup is not trusted by web browsers and other clients, which means users will encounter security warnings when accessing the server. Also, while we use Python's built-in `http.server` and `ssl` modules in this example, you can choose to use another HTTPS server solution.

## Android
<a name="integrating-snap-android"></a>

要将 Snap 的 Camera Kit SDK 与 IVS Android 广播 SDK 集成，您必须安装 Camera Kit SDK，初始化 Camera Kit 会话，应用镜头并将 Camera Kit 会话的输出馈送到自定义图像输入源。

要安装 Camera Kit SDK，请将以下内容添加到您的模块的 `build.gradle` 文件中。将 `$cameraKitVersion` 替换为[最新的 Camera Kit SDK 版本](https://docs.snap.com/camera-kit/integrate-sdk/mobile/changelog-mobile)。

### Java
<a name="integrating-snap-android-install-camerakit-sdk-code"></a>

```
implementation "com.snap.camerakit:camerakit:$cameraKitVersion"
```

Initialize and get a `cameraKitSession`. Camera Kit also provides a convenient wrapper for Android's [CameraX](https://developer.android.com/media/camera/camerax) API, so you don't have to write complicated logic to use CameraX with Camera Kit. You can use the `CameraXImageProcessorSource` object as a [Source](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-source/index.html) for the [ImageProcessor](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-image-processor/index.html), which lets you start the camera preview streaming frames.

### Java
<a name="integrating-snap-android-initialize-camerakitsession-code"></a>

```
 protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_main);

        // Camera Kit support implementation of ImageProcessor that is backed by CameraX library:
        // https://developer.android.com/training/camerax
        CameraXImageProcessorSource imageProcessorSource = new CameraXImageProcessorSource( 
            this /*context*/, this /*lifecycleOwner*/
        );
        imageProcessorSource.startPreview(true /*cameraFacingFront*/);

        cameraKitSession = Sessions.newBuilder(this)
                .imageProcessorSource(imageProcessorSource)
                .attachTo(findViewById(R.id.camerakit_stub))
                .build();
    }
```

### Fetch and Apply a Lens
<a name="integrating-snap-android-fetch-apply-lenses"></a>

You can configure Lenses and their ordering in the carousel in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/):

#### Java
<a name="integrating-snap-android-configure-lenses-code"></a>

```
// Fetch lenses from repository and apply them
 // Replace LENS_GROUP_ID with Lens Group ID from https://camera-kit.snapchat.com
cameraKitSession.getLenses().getRepository().get(new Available(LENS_GROUP_ID), available -> {
     Log.d(TAG, "Available lenses: " + available);
     Lenses.whenHasFirst(available, lens -> cameraKitSession.getLenses().getProcessor().apply(lens, result -> {
          Log.d(TAG,  "Apply lens [" + lens + "] success: " + result);
      }));
});
```

To broadcast, send the processed frames to the underlying `Surface` of a custom image source. Use a `DeviceDiscovery` object and create a `CustomImageSource` to return a `SurfaceSource`. You can then render the output from a `CameraKit` session to the underlying `Surface` provided by the `SurfaceSource`.

#### Kotlin
<a name="integrating-snap-android-broadcast-code"></a>

```
val publishStreams = ArrayList<LocalStageStream>()

val deviceDiscovery = DeviceDiscovery(applicationContext)
val customSource = deviceDiscovery.createImageInputSource(BroadcastConfiguration.Vec2(720f, 1280f))

// Render the output from the Camera Kit session to the Surface provided by the custom image source
cameraKitSession.processor.connectOutput(outputFrom(customSource.inputSurface))

// After rendering the output from a Camera Kit session to the Surface, you can
// then return it as a LocalStageStream to be published by the Broadcast SDK
val customStream = ImageLocalStageStream(customSource)
publishStreams.add(customStream)

override fun stageStreamsToPublishForParticipant(stage: Stage, participantInfo: ParticipantInfo): List<LocalStageStream> = publishStreams

# Using Background Replacement with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-background-replacement"></a>

Background replacement is a camera filter that enables live-stream creators to change their backgrounds. As shown in the following diagram, replacing the background involves:

1. Getting a camera image from the live camera feed.

1. Segmenting it into foreground and background components using Google ML Kit.

1. Combining the resulting segmentation mask with a custom background image.

1. Passing the result to a custom image source for broadcast.

![Workflow for implementing background replacement.](http://docs.aws.amazon.com/zh_cn/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Background_Replacement.png)


## Web
<a name="background-replacement-web"></a>

This section assumes that you are already familiar with [publishing and subscribing to video using the Web Broadcast SDK](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/getting-started-pub-sub-web.html).

To replace the background of your live stream with a custom image, use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model) with the [MediaPipe Image Segmenter](https://developers.google.com/mediapipe/solutions/vision/image_segmenter). This is a machine-learning model that identifies which pixels in a video frame are in the foreground or the background. You can then use the model's results to replace the background of a live stream, by copying the foreground pixels from the video feed onto a custom image that represents the new background.

To integrate background replacement with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install MediaPipe and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add media elements.

1. Add a script tag.

1. Create `app.js`.

1. Load a custom background image.

1. Create an instance of `ImageSegmenter`.

1. Render the video feed to a canvas.

1. Create background replacement logic.

1. Create a Webpack configuration file.

1. Bundle your JavaScript files.

### Install MediaPipe and Webpack
<a name="background-replacement-web-install-mediapipe-webpack"></a>

To start, install the `@mediapipe/tasks-vision` and `webpack` npm packages. The example below uses Webpack as the JavaScript bundler; you can use a different bundler if you prefer.

#### JavaScript
<a name="background-replacement-web-install-mediapipe-webpack-code"></a>

```
npm i @mediapipe/tasks-vision webpack webpack-cli
```

Be sure to update your `package.json` to specify `webpack` as your build script:

#### JavaScript
<a name="background-replacement-web-update-package-json-code"></a>

```
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack"
  },
```

### Create index.html
<a name="background-replacement-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version you are using.

#### HTML
<a name="background-replacement-web-create-index-code"></a>

```
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <!-- Import the SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>

</body>
</html>
```

### Add Media Elements
<a name="background-replacement-web-add-media-elements"></a>

Next, add a video element and two canvas elements within the body tag. The video element will contain your live camera feed and will be used as input to the MediaPipe Image Segmenter. The first canvas element will be used to render a preview of the feed that will be broadcast. The second canvas element will be used to render the custom image that will serve as the background. Since the second canvas, with the custom image, is used only as a source for programmatically copying pixels from it to the final canvas, it is hidden from view.

#### HTML
<a name="background-replacement-web-add-media-elements-code"></a>

```
<div class="row local-container">
      <video id="webcam" autoplay style="display: none"></video>
    </div>
    <div class="row local-container">
      <canvas id="canvas" width="640px" height="480px"></canvas>

      <div class="column" id="local-media"></div>
      <div class="static-controls hidden" id="local-controls">
        <button class="button" id="mic-control">Mute Mic</button>
        <button class="button" id="camera-control">Mute Camera</button>
      </div>
    </div>
    <div class="row local-container">
      <canvas id="background" width="640px" height="480px" style="display: none"></canvas>
    </div>
```

### Add a Script Tag
<a name="background-replacement-web-add-script-tag"></a>

Add a script tag to load the bundled JavaScript file, which will contain the code that does the background replacement and publishes it to a stage:

```
<script src="./dist/bundle.js"></script>
```

### Create app.js
<a name="background-replacement-web-create-appjs"></a>

Next, create a JavaScript file to get the element objects for the canvas and video elements created in the HTML page. Import the `ImageSegmenter` and `FilesetResolver` modules. The `ImageSegmenter` module will be used to perform the segmentation task.

#### JavaScript
<a name="create-appjs-import-imagesegmenter-fileresolver-code"></a>

```
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";
```

Next, create a function called `init()` to retrieve a MediaStream from the user's camera and invoke a callback function each time a camera frame finishes loading. Add event listeners to the buttons for joining and leaving a stage.

Note that when joining a stage, we pass in a variable named `segmentationStream`. This is the video stream captured from the canvas element, containing a foreground image overlaid on the custom image representing the background. Later, this custom stream will be used to create an instance of `LocalStageStream`, which can be published to a stage.

#### JavaScript
<a name="create-appjs-create-init-code"></a>

```
const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });
};
```

### Load a Custom Background Image
<a name="background-replacement-web-background-image"></a>

At the bottom of the `init` function, add code to call a function named `initBackgroundCanvas`, which loads a custom image from a local file and renders it to a canvas. We will define this function in the next step. Assign the `MediaStream` retrieved from the user's camera to the video object. Later, this video object will be passed to the Image Segmenter. Also, set a function named `renderVideoToCanvas` as the callback function to invoke whenever a video frame has finished loading. We will define this function in a later step.

#### JavaScript
<a name="background-replacement-web-load-background-image-code"></a>

```
initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
```

Let's implement the `initBackgroundCanvas` function, which loads an image from a local file. In this example, we use an image of a beach as the custom background. The canvas containing the custom image is hidden from display, as you will merge it with the foreground pixels from the canvas element that contains the camera feed.

#### JavaScript
<a name="background-replacement-web-implement-initBackgroundCanvas-code"></a>

```
const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, canvas.width, canvas.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};
```

### Create an Instance of ImageSegmenter
<a name="background-replacement-web-imagesegmenter"></a>

Next, create an instance of `ImageSegmenter`, which will segment the image and return the result as a mask. When creating an instance of `ImageSegmenter`, use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model).

#### JavaScript
<a name="background-replacement-web-imagesegmenter-code"></a>

```
const createImageSegmenter = async () => {
  const audio = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(audio, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};
```

### Render the Video Feed to a Canvas
<a name="background-replacement-web-render-video-to-canvas"></a>

Next, create a function that renders the video feed to the other canvas element. We need to render the video feed to a canvas so that we can extract the foreground pixels from it using the Canvas 2D API. While doing this, we also pass each video frame to our instance of `ImageSegmenter`, using the [segmentForVideo](https://developers.google.com/mediapipe/api/solutions/js/tasks-vision.imagesegmenter#imagesegmentersegmentforvideo) method to segment the foreground from the background in the video frame. When the `segmentForVideo` method returns, it invokes our custom callback function, `replaceBackground`, which does the background replacement.

#### JavaScript
<a name="background-replacement-web-render-video-to-canvas-code"></a>

```
const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};
```

### Create Background Replacement Logic
<a name="background-replacement-web-logic"></a>

Create the `replaceBackground` function, which merges the custom background image with the foreground from the camera feed to replace the background. The function first retrieves the underlying pixel data of the custom background image and the video feed from the two canvas elements created earlier. It then iterates through the mask provided by `ImageSegmenter`, which indicates which pixels are in the foreground. As it iterates through the mask, it selectively copies the pixels that contain the user's camera feed into the corresponding background pixel data. Once that is done, it converts the final pixel data, with the foreground copied onto the background, and draws it to the canvas.

#### JavaScript
<a name="background-replacement-web-logic-create-replacebackground-code"></a>

```
function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  let j = 0;

  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);

    // Only copy pixels onto the background image if the mask indicates they are in the foreground
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
    j += 4;
  }

 // Convert the pixel data to a format suitable to be drawn to a canvas
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}
```

For reference, here is the complete `app.js` file containing all of the logic above:

#### JavaScript
<a name="background-replacement-web-logic-app-js-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

// All helpers are exposed on 'media-devices.js' and 'dom.js'
const { setupParticipant } = window;

const { Stage, LocalStageStream, SubscribeType, StageEvents, ConnectionState, StreamType } = IVSBroadcastClient;
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";

let cameraButton = document.getElementById("camera-control");
let micButton = document.getElementById("mic-control");
let joinButton = document.getElementById("join-button");
let leaveButton = document.getElementById("leave-button");

let controls = document.getElementById("local-controls");
let audioDevicesList = document.getElementById("audio-devices");
let videoDevicesList = document.getElementById("video-devices");

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;
let imageSegmenter;
let lastWebcamTime = -1;

const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });

  initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
};

const joinStage = async (segmentationStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById("token").value;

  if (!token) {
    window.alert("Please enter a participant token");
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localMic = await getMic(audioDevicesList.value);

  cameraStageStream = new LocalStageStream(segmentationStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove("hidden");
    } else {
      controls.classList.add("hidden");
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log("Participant Joined:", participant);
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
    console.log("Participant Media Added: ", participant, streams);

    let streamsToDisplay = streams;

    if (participant.isLocal) {
      // Ensure to exclude local audio streams, otherwise echo will occur
      streamsToDisplay = streams.filter((stream) => stream.streamType === StreamType.VIDEO);
    }

    const videoEl = setupParticipant(participant);
    streamsToDisplay.forEach((stream) => videoEl.srcObject.addTrack(stream.mediaStreamTrack));
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log("Participant Left: ", participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = "Hide Camera";
  micButton.innerText = "Mute Mic";
  controls.classList.add("hidden");
};

function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  let j = 0;

  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);

    // Only copy pixels onto the background image if the mask indicates they are in the foreground
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
    j += 4;
  }
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}

const createImageSegmenter = async () => {
  const audio = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(audio, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};

const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};

const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, canvas.width, canvas.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};

createImageSegmenter();
init();
```

### Create a Webpack Configuration File
<a name="background-replacement-web-webpack-config"></a>

Add this configuration to a Webpack configuration file to bundle `app.js`, so the import calls work:

#### JavaScript
<a name="background-replacement-web-webpack-config-code"></a>

```
const path = require("path");
module.exports = {
  entry: ["./app.js"],
  output: {
    filename: "bundle.js",
    path: path.resolve(__dirname, "dist"),
  },
};
```

### Bundle Your JavaScript Files
<a name="background-replacement-web-bundle-javascript"></a>

```
npm run build
```

Start a simple HTTP server from the directory containing `index.html` and open `localhost:8000` to see the result:

```
python3 -m http.server -d ./
```

## Android
<a name="background-replacement-android"></a>

To replace the background in your live stream, you can use the selfie segmentation API from [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation). The selfie segmentation API accepts a camera image as input and returns a mask that provides a confidence score for each pixel of the image, indicating whether it is in the foreground or the background. Based on the confidence score, you retrieve the corresponding pixel color from either the background image or the foreground image. This process continues until all the confidence scores in the mask have been examined. The result is a new array of pixel colors containing the foreground pixels combined with pixels from the background image.

To integrate background replacement with the IVS real-time streaming Android broadcast SDK, you need to:

1. Install CameraX libraries and Google ML Kit.

1. Initialize boilerplate variables.

1. Create a custom image source.

1. Manage camera frames.

1. Pass camera frames to Google ML Kit.

1. Overlay the camera-frame foreground onto your custom background.

1. Feed the new image to the custom image source.

### Install CameraX Libraries and Google ML Kit
<a name="background-replacement-android-install-camerax-googleml"></a>

To extract images from your live camera feed, use Android's CameraX library. To install the CameraX library and Google ML Kit, add the following to your module's `build.gradle` file. Replace `${camerax_version}` and `${google_ml_kit_version}` with the latest versions of the [CameraX](https://developer.android.com/jetpack/androidx/releases/camera) and [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation/android) libraries, respectively.

#### Java
<a name="background-replacement-android-install-camerax-googleml-code"></a>

```
implementation "com.google.mlkit:segmentation-selfie:${google_ml_kit_version}"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
```

Import the following libraries:

#### Kotlin
<a name="background-replacement-android-import-libraries-code"></a>

```
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.camera.lifecycle.ProcessCameraProvider
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions
```

### Initialize Boilerplate Variables
<a name="background-replacement-android-initialize-variables"></a>

Initialize an instance of `ImageAnalysis` and an instance of an `ExecutorService`:

#### Kotlin
<a name="background-replacement-android-initialize-imageanalysis-executorservice-code"></a>

```
private lateinit var binding: ActivityMainBinding
private lateinit var cameraExecutor: ExecutorService
private var analysisUseCase: ImageAnalysis? = null
```

Initialize a Segmenter instance in [STREAM_MODE](https://developers.google.com/ml-kit/vision/selfie-segmentation/android#detector_mode):

#### Kotlin
<a name="background-replacement-android-initialize-segmenter-code"></a>

```
private val options =
        SelfieSegmenterOptions.Builder()
            .setDetectorMode(SelfieSegmenterOptions.STREAM_MODE)
            .build()

private val segmenter = Segmentation.getClient(options)
```

### Create a Custom Image Source
<a name="background-replacement-android-create-image-source"></a>

In the `onCreate` method of your activity, create an instance of a `DeviceDiscovery` object and create a custom image source. The `Surface` provided by the custom image source will receive the final image, with the foreground overlaid on the custom background image. You then use the custom image source to create an instance of `ImageLocalStageStream`. That instance (named `filterStream` in this example) can then be published to a stage. See the [IVS Android Broadcast SDK Guide](broadcast-android.md) for instructions on setting up a stage. Finally, also create a thread that will be used to manage the camera.

#### Kotlin
<a name="background-replacement-android-create-image-source-code"></a>

```
var deviceDiscovery = DeviceDiscovery(applicationContext)
var customSource = deviceDiscovery.createImageInputSource(
    BroadcastConfiguration.Vec2(720F, 1280F)
)
var surface: Surface = customSource.inputSurface
var filterStream = ImageLocalStageStream(customSource)

cameraExecutor = Executors.newSingleThreadExecutor()
```

### Manage Camera Frames
<a name="background-replacement-android-camera-frames"></a>

Next, create a function to initialize the camera. This function uses the CameraX library to extract images from the live camera feed. First, create an instance of `ProcessCameraProvider` named `cameraProviderFuture`. This object represents a future result of obtaining a camera provider. Also load an image from your project as a bitmap. This example uses an image of a beach as the background, but it can be any image you want.

Then, add a listener to `cameraProviderFuture`. The listener is notified when the camera becomes available or if an error occurs while getting a camera provider.

#### Kotlin
<a name="background-replacement-android-initialize-camera-code"></a>

```
private fun startCamera(surface: Surface) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    val imageResource = R.drawable.beach
    val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)
    var resultBitmap: Bitmap

    cameraProviderFuture.addListener({
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Analyze each frame from the live camera feed, keeping only the latest frame
        val imageAnalyzer = ImageAnalysis.Builder()
        analysisUseCase = imageAnalyzer
            .setTargetResolution(Size(360, 640))
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()

        analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
            val mediaImage = imageProxy.image
            val tempBitmap = imageProxy.toBitmap()
            val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())

            if (mediaImage != null) {
                val inputImage =
                    InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

                // Pass the camera frame to ML Kit's selfie segmenter
                segmenter.process(inputImage)
                    .addOnSuccessListener { segmentationMask ->
                        val mask = segmentationMask.buffer
                        val maskWidth = segmentationMask.width
                        val maskHeight = segmentationMask.height
                        val backgroundPixels = IntArray(maskWidth * maskHeight)
                        bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

                        // Overlay the foreground onto the custom background and draw the
                        // result to the Surface provided by the custom image source
                        resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
                        val canvas = surface.lockCanvas(null)
                        canvas.drawBitmap(resultBitmap, 0f, 0f, null)
                        surface.unlockCanvasAndPost(canvas)
                    }
                    .addOnFailureListener { exception ->
                        Log.d("App", exception.message!!)
                    }
                    .addOnCompleteListener {
                        imageProxy.close()
                    }
            }
        }

        val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
```

Within the listener, create an `ImageAnalysis.Builder` to access each frame from the live camera feed. Set the backpressure strategy to `STRATEGY_KEEP_ONLY_LATEST`; this guarantees that only one camera frame at a time is delivered for processing. Convert each camera frame to a bitmap so that you can extract its pixels and later combine them with the custom background image.

#### Java
<a name="background-replacement-android-create-imageanalysisbuilder-code"></a>

```
val imageAnalyzer = ImageAnalysis.Builder()
analysisUseCase = imageAnalyzer
    .setTargetResolution(Size(360, 640))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
    val mediaImage = imageProxy.image
    val tempBitmap = imageProxy.toBitmap()
    val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())
```
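
Note that `rotate` is not a standard `Bitmap` method; it is assumed here to be a small extension function defined elsewhere in your app that rotates the frame to an upright orientation, roughly along these lines:

```
import android.graphics.Bitmap
import android.graphics.Matrix

// Hypothetical helper: rotate a bitmap by the given number of degrees.
fun Bitmap.rotate(degrees: Float): Bitmap {
    val matrix = Matrix().apply { postRotate(degrees) }
    return Bitmap.createBitmap(this, 0, 0, width, height, matrix, true)
}
```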

### Pass Camera Frames to Google ML Kit
<a name="background-replacement-android-frames-to-mlkit"></a>

Next, create an `InputImage` and pass it to an instance of the Segmenter for processing. An `InputImage` can be created from the `ImageProxy` provided by the instance of `ImageAnalysis`. Once the `InputImage` is provided to the Segmenter, it returns a mask with confidence scores indicating the likelihood of each pixel being in the foreground or the background. The mask also provides width and height properties, which you use to create a new array containing the background pixels from the custom background image loaded earlier.

#### Java
<a name="background-replacement-android-frames-to-mlkit-code"></a>

```
if (mediaImage != null) {
    val inputImage =
        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

    segmenter.process(inputImage)
        .addOnSuccessListener { segmentationMask ->
            val mask = segmentationMask.buffer
            val maskWidth = segmentationMask.width
            val maskHeight = segmentationMask.height
            val backgroundPixels = IntArray(maskWidth * maskHeight)
            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)
```

### Overlay the Camera-Frame Foreground onto Your Custom Background
<a name="background-replacement-android-overlay-frame-foreground"></a>

With the mask containing the confidence scores, the camera frame as a bitmap, and the color pixels from the custom background image, you have everything you need to overlay the foreground onto your custom background. The `overlayForeground` function is then called with the following parameters:

#### Java
<a name="background-replacement-android-call-overlayforeground-code"></a>

```
val resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
```

This function iterates through the mask and checks the confidence values to determine whether to take the corresponding pixel color from the background image or from the camera frame. If the confidence value indicates that a pixel in the mask is most likely in the background, the corresponding pixel color is taken from the background image; otherwise, it is taken from the camera frame to build the foreground. Once the function finishes iterating through the mask, a new bitmap is created from the new array of color pixels and returned. This new bitmap contains the foreground overlaid on the custom background.

#### Java
<a name="background-replacement-android-run-overlayforeground-code"></a>

```
private fun overlayForeground(
        byteBuffer: ByteBuffer,
        maskWidth: Int,
        maskHeight: Int,
        cameraBitmap: Bitmap,
        backgroundPixels: IntArray
    ): Bitmap {
        @ColorInt val colors = IntArray(maskWidth * maskHeight)
        val cameraPixels = IntArray(maskWidth * maskHeight)

        cameraBitmap.getPixels(cameraPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

        for (i in 0 until maskWidth * maskHeight) {
            // The ML Kit mask stores a foreground confidence per pixel, so
            // 1 - confidence is the likelihood that the pixel is background.
            val backgroundLikelihood: Float = 1 - byteBuffer.getFloat()

            // Apply the virtual background to the color if it's not part of the foreground
            if (backgroundLikelihood > 0.9) {
                // Get the corresponding pixel color from the background image
                // Set the color in the mask based on the background image pixel color
                colors[i] = backgroundPixels[i]
            } else {
                // Get the corresponding pixel color from the camera frame
                // Set the color in the mask based on the camera image pixel color
                colors[i] = cameraPixels[i]
            }
        }

        return Bitmap.createBitmap(
            colors, maskWidth, maskHeight, Bitmap.Config.ARGB_8888
        )
    }
```

### Feed the New Image to the Custom Image Source
<a name="background-replacement-android-custom-image-source"></a>

You can then write the new bitmap to the `Surface` provided by the custom image source; this broadcasts it to your stage. Lock the canvas of the `Surface`, draw the result bitmap, and then unlock and post the canvas:

#### Java
<a name="background-replacement-android-custom-image-source-code"></a>

```
val resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
val canvas = surface.lockCanvas(null)
canvas.drawBitmap(resultBitmap, 0f, 0f, null)
surface.unlockCanvasAndPost(canvas)
```

Below is the complete function that gets the camera frames, passes them to the Segmenter, and overlays them onto the background:

#### Java
<a name="background-replacement-android-custom-image-source-startcamera-code"></a>

```
@androidx.annotation.OptIn(androidx.camera.core.ExperimentalGetImage::class)
    private fun startCamera(surface: Surface) {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
        val imageResource = R.drawable.beach
        val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)

        cameraProviderFuture.addListener({
            // Used to bind the lifecycle of cameras to the lifecycle owner
            val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

            val imageAnalyzer = ImageAnalysis.Builder()
            analysisUseCase = imageAnalyzer
                .setTargetResolution(Size(720, 1280))
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build()

            analysisUseCase!!.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
                val mediaImage = imageProxy.image
                val tempBitmap = imageProxy.toBitmap()
                val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())

                if (mediaImage != null) {
                    val inputImage =
                        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

                    segmenter.process(inputImage)
                        .addOnSuccessListener { segmentationMask ->
                            val mask = segmentationMask.buffer
                            val maskWidth = segmentationMask.width
                            val maskHeight = segmentationMask.height
                            val backgroundPixels = IntArray(maskWidth * maskHeight)
                            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

                            val resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
                            val canvas = surface.lockCanvas(null)
                            canvas.drawBitmap(resultBitmap, 0f, 0f, null)

                            surface.unlockCanvasAndPost(canvas)

                        }
                        .addOnFailureListener { exception ->
                            Log.d("App", exception.message!!)
                        }
                        .addOnCompleteListener {
                            imageProxy.close()
                        }

                }
            };

            val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

            try {
                // Unbind use cases before rebinding
                cameraProvider.unbindAll()

                // Bind use cases to camera
                cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)

            } catch(exc: Exception) {
                Log.e(TAG, "Use case binding failed", exc)
            }

        }, ContextCompat.getMainExecutor(this))
    }
```
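
To tie the pieces together, it is assumed that `startCamera` is called from `onCreate` after the custom image source and camera executor have been created, passing the `Surface` obtained from `customSource.inputSurface`. A minimal sketch follows; camera-permission handling and view setup are omitted and assumed to be done beforehand:

```
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // ... setContentView / view binding and CAMERA permission checks as usual ...

    // Create the custom image source and the stream to publish
    // (see "Create a Custom Image Source" above).
    val deviceDiscovery = DeviceDiscovery(applicationContext)
    val customSource = deviceDiscovery.createImageInputSource(
        BroadcastConfiguration.Vec2(720F, 1280F)
    )
    val surface: Surface = customSource.inputSurface
    val filterStream = ImageLocalStageStream(customSource)
    cameraExecutor = Executors.newSingleThreadExecutor()

    // Start writing segmented frames to the custom image source;
    // filterStream can then be published to your stage.
    startCamera(surface)
}
```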