diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_createdatastream.dita b/en-US/dita/RTC-NG/API/api_irtcengine_createdatastream.dita index a45f7a9b212..3c9344e1473 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_createdatastream.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_createdatastream.dita @@ -22,14 +22,14 @@ public abstract int CreateDataStream(ref int streamId, bool reliable, bool ordered); Future<int> createDataStream(DataStreamConfig config); -

+

-

Each user can create up to five data streams during the lifecycle of .

+

Each user can create up to five data streams during the lifecycle of . The data stream is destroyed when you leave the channel, and needs to be recreated if you want to use it again.

  • Call this method after joining a channel.
  • -
  • Agora does not support setting reliable as and ordered as .
  • +
  • Agora does not support setting reliable as and ordered as .
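The lifecycle rules above (at most five streams per user, streams destroyed on leaving the channel) can be modeled as a minimal sketch; the class and ID scheme are hypothetical, since real stream IDs are assigned by the SDK:

```python
class DataStreamAllocator:
    """Hypothetical model of the documented per-user limit:
    up to five data streams per engine lifecycle."""
    MAX_STREAMS = 5

    def __init__(self):
        self._stream_ids = []

    def create_data_stream(self):
        # The SDK rejects a sixth stream for the same user.
        if len(self._stream_ids) >= self.MAX_STREAMS:
            raise RuntimeError("Each user can create up to five data streams")
        stream_id = len(self._stream_ids)  # placeholder ID scheme
        self._stream_ids.append(stream_id)
        return stream_id

    def leave_channel(self):
        # Data streams are destroyed when leaving the channel;
        # recreate them if needed after rejoining.
        self._stream_ids.clear()
```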
Parameters @@ -50,7 +50,7 @@ ordered -

Whether or not the recipients receive the data stream in the sent order:

    +

Whether the recipients receive the data stream in the sent order:

    • : The recipients receive the data in the sent order.
    • : The recipients do not receive the data in the sent order.

    diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring.dita b/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring.dita index 75dcf7549c7..bd2120d55b1 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring.dita @@ -1,12 +1,12 @@ - <ph keyref="enableInEarMonitoring" /> + <ph keyref="enableInEarMonitoring"/> Enables in-ear monitoring. - + @@ -15,15 +15,15 @@

    public abstract int enableInEarMonitoring(boolean enabled); - (int)enableInEarMonitoring:(BOOL)enabled; - - + + public abstract int EnableInEarMonitoring(bool enabled); - - -

    + + +

-
+
Since
v4.0.1
@@ -33,7 +33,7 @@
  • This method applies to Android and iOS only.
  • -
  • Users must use wired earphones to hear their own voices.
  • +
  • Users must use wired or Bluetooth headphones to hear the in-ear monitoring effect.
  • You can call this method either before or after joining a channel.
@@ -41,9 +41,9 @@ enabled - Enables in-ear monitoring.
    -
  • : Enables in-ear monitoring.
  • -
  • : (Default) Disables in-ear monitoring.
  • + Enables or disables in-ear monitoring.
      +
    • : Enables in-ear monitoring.
    • +
    • : (Default) Disables in-ear monitoring.
@@ -54,4 +54,4 @@
  • < 0: Failure.
  • - \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring2.dita b/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring2.dita index e30c4b8eba4..bd2562d4bd4 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring2.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_enableinearmonitoring2.dita @@ -1,12 +1,12 @@ - <ph keyref="enableInEarMonitoring2" /> + <ph keyref="enableInEarMonitoring2"/> Enables in-ear monitoring. - + @@ -25,16 +25,16 @@ enabled: boolean, includeAudioFilters: EarMonitoringFilterType ): number;
    - Future<void> enableInEarMonitoring( + Future<void> enableInEarMonitoring( {required bool enabled, required EarMonitoringFilterType includeAudioFilters}); -

    +

    This method enables or disables in-ear monitoring.

      -
    • Users must use wired earphones to hear their own voices.
    • +
    • Users must use wired or Bluetooth headphones to hear the in-ear monitoring effect.
    • You can call this method either before or after joining a channel.
    @@ -43,17 +43,17 @@ enabled Enables or disables in-ear monitoring.
      -
    • : Enables in-ear monitoring.
    • -
    • : (Default) Disables in-ear monitoring.
    • +
    • : Enables in-ear monitoring.
    • +
    • : (Default) Disables in-ear monitoring.
    includeAudioFilters - The audio filter of in-ear monitoring: See . + The audio filter of in-ear monitoring: See . The audio filter of in-ear monitoring:
      -
    • (1 << 0): Do not add an audio filter to the in-ear monitor.
    • -
    • (1 << 1): Add an audio filter to the in-ear monitor. If you implement functions such as voice beautifier and audio effect, users can hear the voice after adding these effects.
    • -
    • (1 << 2): Enable noise suppression to the in-ear monitor.

      You can use the bitwise OR operator (|) to specify multiple audio filters.If you set the enabled parameter to , you do not need to set the includeAudioFilters parameter.

      +
    • (1 << 0): Do not add an audio filter to the in-ear monitor.
    • +
    • (1 << 1): Add an audio filter to the in-ear monitor. If you implement functions such as voice beautifier and audio effect, users can hear the voice after adding these effects.
    • +
    • (1 << 2): Enable noise suppression to the in-ear monitor.

  You can use the bitwise OR operator (|) to specify multiple audio filters. If you set the enabled parameter to , you do not need to set the includeAudioFilters parameter.
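The bitmask combination described above can be sketched in Python. The constant names below are hypothetical; only the bit values (1 << 0, 1 << 1, 1 << 2) come from the documentation:

```python
# Hypothetical names for the documented EarMonitoringFilterType bit values.
EAR_MONITORING_FILTER_NONE = 1 << 0               # no audio filter
EAR_MONITORING_FILTER_BUILT_IN = 1 << 1           # voice beautifier / audio effects
EAR_MONITORING_FILTER_NOISE_SUPPRESSION = 1 << 2  # noise suppression

# Combine multiple filters with the bitwise OR operator (|).
include_audio_filters = (EAR_MONITORING_FILTER_BUILT_IN
                         | EAR_MONITORING_FILTER_NOISE_SUPPRESSION)

def has_filter(mask: int, flag: int) -> bool:
    """Check whether a given filter bit is set in the mask."""
    return (mask & flag) != 0
```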

    @@ -65,4 +65,4 @@
  • < 0: Failure.
  • - \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_getextensionproperty2.dita b/en-US/dita/RTC-NG/API/api_irtcengine_getextensionproperty2.dita index 934f3fba177..d887a0f82d2 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_getextensionproperty2.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_getextensionproperty2.dita @@ -1,12 +1,12 @@ - <ph keyref="getExtensionProperty2" /> + <ph keyref="getExtensionProperty2"/> Gets detailed information on the extensions. - + @@ -25,7 +25,7 @@ const char* provider, const char* extension, const char* key, char* value, int buf_len, agora::media::MEDIA_SOURCE_TYPE type = agora::media::UNKNOWN_MEDIA_SOURCE) = 0;
    - + abstract getExtensionProperty( provider: string, extension: string, @@ -41,16 +41,16 @@ bufLen: number, type?: MediaSourceType ): string; - Future<String> getExtensionProperty( + Future<String> getExtensionProperty( {required String provider, required String extension, required String key, required int bufLen, MediaSourceType type = MediaSourceType.unknownMediaSource}); -

    +

    -

    +

    Parameters @@ -96,4 +96,4 @@
  • An empty string, if the method call fails.
  • - \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_setaudioscenario.dita b/en-US/dita/RTC-NG/API/api_irtcengine_setaudioscenario.dita index 2c07220e245..4977aa6eb7b 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_setaudioscenario.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_setaudioscenario.dita @@ -21,7 +21,7 @@ public abstract int SetAudioScenario(AUDIO_SCENARIO_TYPE scenario); abstract setAudioScenario(scenario: AudioScenarioType): number; Future<void> setAudioScenario(AudioScenarioType scenario); -

    +

    You can call this method either before or after joining a channel.
    @@ -30,15 +30,15 @@ scenario - The audio scenarios. See . Under different audio scenarios, the device uses different volume types. + The audio scenarios. See . Under different audio scenarios, the device uses different volume types. scenario - The audio scenarios:
    • (0): (Default) Automatic scenario, where the SDK chooses the appropriate audio quality according to the user role and audio route.
    • -
    • (3): High-quality audio scenario, where users mainly play music.
    • -
    • (5): Chatroom scenario, where users need to frequently switch the user role or mute and unmute the microphone. In this scenario, audience members receive a pop-up window to request permission of using microphones.
    • -
    • (7): Real-time chorus scenario, where users have good network conditions and require extremely low latency.Before using this enumeration, you need to call to see whether the audio device supports ultra-low-latency capture and playback. To experience ultra-low latency, you need to ensure that your audio device supports ultra-low latency (isLowLatencyAudioSupported = ).
    • -
    • (8): Meeting scenario that mainly involves the human voice.
    + The audio scenarios:
    • (0): (Default) Automatic scenario, where the SDK chooses the appropriate audio quality according to the user role and audio route.
    • +
    • (3): High-quality audio scenario, where users mainly play music.
    • +
    • (5): Chatroom scenario, where users need to frequently switch the user role or mute and unmute the microphone. In this scenario, audience members receive a pop-up window requesting permission to use the microphone.
    • +
    • (7): Real-time chorus scenario, where users have good network conditions and require extremely low latency. Before using this enumeration, you need to call to see whether the audio device supports ultra-low-latency capture and playback. To experience ultra-low latency, you need to ensure that your audio device supports ultra-low latency (isLowLatencyAudioSupported = ).
    • +
    • (8): Meeting scenario that mainly involves the human voice.
    @@ -49,4 +49,4 @@
  • < 0: Failure.
  • - \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/api_irtcengine_setearmonitoringaudioframeparameters.dita b/en-US/dita/RTC-NG/API/api_irtcengine_setearmonitoringaudioframeparameters.dita index e8a42e38464..61c9f698d5c 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengine_setearmonitoringaudioframeparameters.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengine_setearmonitoringaudioframeparameters.dita @@ -1,12 +1,12 @@ - <ph keyref="setEarMonitoringAudioFrameParameters" /> + <ph keyref="setEarMonitoringAudioFrameParameters"/> Sets the format of the in-ear monitoring raw audio data. - + @@ -35,26 +35,26 @@ mode: RawAudioFrameOpModeType, samplesPerCall: number ): number;
    -

    +

    -

    This method is used to set the in-ear monitoring audio data format reported by the callback.

    +

    This method is used to set the in-ear monitoring audio data format reported by the callback.

      -
    • Before calling this method, you need to call , and set includeAudioFilters to or .
    • -
    • The SDK calculates the sampling interval based on the samplesPerCall, sampleRate and channel parameters set in this method.Sample interval (sec) = samplePerCall/(sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the callback according to the sampling interval.
    • +
    • Before calling this method, you need to call , and set includeAudioFilters to or .
    • +
    • The SDK calculates the sampling interval based on the samplesPerCall, sampleRate, and channel parameters set in this method. Sample interval = samplesPerCall/(sampleRate × channel). Ensure that the sample interval ≥ 0.01 (s). The SDK triggers the callback according to the sampling interval.
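The sampling-interval constraint above can be checked numerically; this is a minimal sketch of the documented formula, not SDK code:

```python
def sample_interval(samples_per_call: int, sample_rate: int, channels: int) -> float:
    """Sample interval (s) = samplesPerCall / (sampleRate x channels)."""
    return samples_per_call / (sample_rate * channels)

def is_valid_ear_monitoring_config(samples_per_call: int,
                                   sample_rate: int,
                                   channels: int) -> bool:
    # The documentation requires the interval to be at least 0.01 s.
    return sample_interval(samples_per_call, sample_rate, channels) >= 0.01
```

For example, 1024 samples at 44,100 Hz mono gives an interval of about 0.023 s, which satisfies the constraint, while 256 samples at 48,000 Hz stereo gives about 0.0027 s, which does not.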
    Parameters sampleRate - The sample rate of the audio data reported in the callback, which can be set as 8,000, 16,000, 32,000, 44,100, or 48,000 Hz. + The sample rate of the audio data reported in the callback, which can be set as 8,000, 16,000, 32,000, 44,100, or 48,000 Hz. channel -

    The number of audio channels reported in the callback.

      +

      The number of audio channels reported in the callback.

      • 1: Mono.
      • 2: Stereo.

      @@ -63,13 +63,13 @@ mode -

      The use mode of the audio frame. See .

      -

      +

      The use mode of the audio frame. See .

      +

      samplesPerCall - The number of data samples reported in the callback, such as 1,024 for the Media Push. + The number of data samples reported in the callback, such as 1,024 for the Media Push.

    @@ -79,4 +79,4 @@
  • < 0: Failure.
  • - \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/api_irtcengineex_resumeallchannelmediarelayex.dita b/en-US/dita/RTC-NG/API/api_irtcengineex_resumeallchannelmediarelayex.dita index d08e9012c2c..7e1720ad4bf 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengineex_resumeallchannelmediarelayex.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengineex_resumeallchannelmediarelayex.dita @@ -23,9 +23,9 @@

    -

    After calling the method, you can call this method to resume relaying media streams to all destination channels.

    -

    After a successful method call, the SDK triggers the callback to report whether the media stream relay is successfully resumed.

    - Call this method after . +

    After calling the method, you can call this method to resume relaying media streams to all destination channels.

    +

    After a successful method call, the SDK triggers the callback to report whether the media stream relay is successfully resumed.

    + Call this method after .
    Parameters diff --git a/en-US/dita/RTC-NG/API/api_irtcengineex_setdualstreammodeex.dita b/en-US/dita/RTC-NG/API/api_irtcengineex_setdualstreammodeex.dita index 539f233be56..7bab3be2c36 100644 --- a/en-US/dita/RTC-NG/API/api_irtcengineex_setdualstreammodeex.dita +++ b/en-US/dita/RTC-NG/API/api_irtcengineex_setdualstreammodeex.dita @@ -33,11 +33,11 @@ streamConfig: SimulcastStreamConfig, connection: RtcConnection ): number; - Future<void> setDualStreamModeEx( + Future<void> setDualStreamModeEx( {required SimulcastStreamMode mode, required SimulcastStreamConfig streamConfig, required RtcConnection connection}); -

    +

    diff --git a/en-US/dita/RTC-NG/API/class_channelmediaoptions.dita b/en-US/dita/RTC-NG/API/class_channelmediaoptions.dita index db625d97ea1..6bf415f807e 100644 --- a/en-US/dita/RTC-NG/API/class_channelmediaoptions.dita +++ b/en-US/dita/RTC-NG/API/class_channelmediaoptions.dita @@ -1,7 +1,7 @@ - <ph keyref="ChannelMediaOptions" /> + <ph keyref="ChannelMediaOptions"/> The channel media options.
    @@ -423,26 +423,26 @@ factory ChannelMediaOptions.fromJson(Map<String, dynamic> json) => _$ChannelMediaOptionsFromJson(json); - Map<String, dynamic> toJson() => _$ChannelMediaOptionsToJson(this); + Map<String, dynamic> toJson() => _$ChannelMediaOptionsToJson(this); } -

    +

    -
    Agora supports publishing multiple audio streams and one video stream at the same time and in the same . For example, publishMicrophoneTrack, publishAudioTrack, publishCustomAudioTrack, and publishMediaPlayerAudioTrack can be set as at the same time, but only one of publishCameraTrack, publishScreenCaptureVideo, publishScreenTrack, publishCustomVideoTrack, or publishEncodedVideoTrack can be set as .
    +
    Agora supports publishing multiple audio streams and one video stream at the same time and in the same . For example, publishMicrophoneTrack, publishAudioTrack, publishCustomAudioTrack, and publishMediaPlayerAudioTrack can be set as at the same time, but only one of publishCameraTrack, publishScreenCaptureVideo, publishScreenTrack, publishCustomVideoTrack, or publishEncodedVideoTrack can be set as .
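The publishing rule above (multiple audio tracks allowed, at most one video track) can be sketched as a validation helper. The option names are taken from the text; the helper itself is hypothetical:

```python
# Video-track options listed in the documentation; at most one may be enabled.
VIDEO_TRACK_OPTIONS = (
    "publishCameraTrack",
    "publishScreenCaptureVideo",
    "publishScreenTrack",
    "publishCustomVideoTrack",
    "publishEncodedVideoTrack",
)

def validate_channel_media_options(options: dict) -> bool:
    """Return True if at most one video-track option is enabled.
    Multiple audio-track options may be enabled simultaneously."""
    enabled_video = [k for k in VIDEO_TRACK_OPTIONS if options.get(k)]
    return len(enabled_video) <= 1
```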
    <text conref="../conref/conref_api_metadata.dita#conref_api_metadata/property"/> publishCameraTrack Whether to publish the video captured by the camera:
      -
    • : (Default) Publish the video captured by the camera.
    • -
    • : Do not publish the video captured by the camera.
    • +
    • : (Default) Publish the video captured by the camera.
    • +
    • : Do not publish the video captured by the camera.
    publishMicrophoneTrack Whether to publish the audio captured by the microphone:
      -
    • : (Default) Publish the audio captured by the microphone.
    • -
    • : Do not publish the audio captured by the microphone.
    • +
    • : (Default) Publish the audio captured by the microphone.
    • +
    • : Do not publish the audio captured by the microphone.
    As of v4.0.0, the parameter name is changed from publishAudioTrack to publishMicrophoneTrack. @@ -450,19 +450,18 @@
    publishSecondaryCameraTrack -

    Whether to publish the video captured by the second camera:

    -

      -
    • : Publish the video captured by the second camera.
    • -
    • : (Default) Do not publish the video captured by the second camera.
    • -

    + Whether to publish the video captured by the second camera:
      +
    • : Publish the video captured by the second camera.
    • +
    • : (Default) Do not publish the video captured by the second camera.
    • +
    publishScreenCaptureVideo publishScreenTrack

    Whether to publish the video captured from the screen:

      -
    • : Publish the video captured from the screen.
    • -
    • : (Default) Do not publish the captured video from the screen.
    • +
    • : Publish the video captured from the screen.
    • +
    • : (Default) Do not publish the captured video from the screen.

    As of v4.0.0, the parameter name has been changed from publishScreenTrack to publishScreenCaptureVideo.

    @@ -471,8 +470,8 @@ publishScreenCaptureVideo

    Whether to publish the video captured from the screen:

      -
    • : Publish the video captured from the screen.
    • -
    • : (Default) Do not publish the captured video from the screen.
    • +
    • : Publish the video captured from the screen.
    • +
    • : (Default) Do not publish the captured video from the screen.

    This parameter applies to Android and iOS only.

    @@ -480,8 +479,8 @@ publishScreenCaptureAudio

    Whether to publish the audio captured from the screen:

      -
    • : Publish the audio captured from the screen.
    • -
    • : (Default) Do not publish the audio captured from the screen.
    • +
    • : Publish the audio captured from the screen.
    • +
    • : (Default) Do not publish the audio captured from the screen.

    This parameter applies to Android and iOS only.

    @@ -490,63 +489,63 @@ publishSecondaryScreenTrack Whether to publish the video captured from the second screen:
      -
    • : Publish the captured video from the second screen.
    • -
    • : (Default) Do not publish the video captured from the second screen.
    • +
    • : Publish the captured video from the second screen.
    • +
    • : (Default) Do not publish the video captured from the second screen.
    publishTrancodedVideoTrack Whether to publish the local transcoded video:
      -
    • : Publish the local transcoded video.
    • -
    • : (Default) Do not publish the local transcoded video.
    • +
    • : Publish the local transcoded video.
    • +
    • : (Default) Do not publish the local transcoded video.
    publishCustomAudioTrack Whether to publish the audio captured from a custom source:
      -
    • : Publish the captured audio from a custom source.
    • -
    • : (Default) Do not publish the audio captured from the custom source.
    • +
    • : Publish the captured audio from a custom source.
    • +
    • : (Default) Do not publish the audio captured from the custom source.
    publishCustomAudioSourceId - The ID of the custom audio source to publish. The default value is 0.

    If you have set the value of sourceNumber greater than 1 in , the SDK creates the corresponding number of custom audio tracks and assigns an ID to each audio track starting from 0.

    + The ID of the custom audio source to publish. The default value is 0.

    If you have set the value of sourceNumber greater than 1 in , the SDK creates the corresponding number of custom audio tracks and assigns an ID to each audio track starting from 0.

    publishCustomAudioTrackEnableAec Whether to enable AEC when publishing the audio captured from a custom source:
      -
    • : Enable AEC when publishing the captured audio from a custom source.
    • -
    • : (Default) Do not enable AEC when publishing the audio captured from the custom source.
    • +
    • : Enable AEC when publishing the captured audio from a custom source.
    • +
    • : (Default) Do not enable AEC when publishing the audio captured from the custom source.
    publishCustomVideoTrack Whether to publish the video captured from a custom source:
      -
    • : Publish the captured video from a custom source.
    • -
    • : (Default) Do not publish the video captured from the custom source.
    • +
    • : Publish the captured video from a custom source.
    • +
    • : (Default) Do not publish the video captured from the custom source.
    publishEncodedVideoTrack Whether to publish the encoded video:
      -
    • : Publish the encoded video.
    • -
    • : (Default) Do not publish the encoded video.
    • +
    • : Publish the encoded video.
    • +
    • : (Default) Do not publish the encoded video.
    publishMediaPlayerAudioTrack Whether to publish the audio from the media player:
      -
    • : Publish the audio from the media player.
    • -
    • : (Default) Do not publish the audio from the media player.
    • +
    • : Publish the audio from the media player.
    • +
    • : (Default) Do not publish the audio from the media player.
    publishMediaPlayerVideoTrack Whether to publish the video from the media player:
      -
    • : Publish the video from the media player.
    • -
    • : (Default) Do not publish the video from the media player.
    • +
    • : Publish the video from the media player.
    • +
    • : (Default) Do not publish the video from the media player.
    @@ -566,8 +565,8 @@ enableAudioRecordingOrPlayout Whether to enable audio capturing or playback:
      -
    • : (Default) Enable audio capturing or playback.
    • -
    • : Do not enable audio capturing or playback.
    • +
    • : (Default) Enable audio capturing or playback.
    • +
    • : Do not enable audio capturing or playback.
    @@ -580,7 +579,7 @@ clientRoleType - The user role. See . + The user role. See . @@ -604,8 +603,8 @@ publishCustomAudioTrackAec Whether to publish audio frames processed by an external echo cancellation module.
      -
    • : Publish audio frames processed by the external echo cancellation module.
    • -
    • : Do not publish to publish audio frames processed by the external echo cancellation module.
    • +
    • : Publish audio frames processed by the external echo cancellation module.
    • +
    • : Do not publish audio frames processed by the external echo cancellation module.
    @@ -614,34 +613,34 @@

    (Optional) The token generated on your server for authentication. See Authenticate Your Users with Token.

      -
    • This parameter takes effect only when calling or .
    • -
    • Ensure that the App ID, channel name, and user name used for creating the token are the same as those used by the method for initializing the RTC engine, and those used by the and methods for joining the channel.
    • +
    • This parameter takes effect only when calling or .
    • +
    • Ensure that the App ID, channel name, and user name used for creating the token are the same as those used by the method for initializing the RTC engine, and those used by the and methods for joining the channel.
    startPreview Whether to automatically start the preview when joining a channel:
      -
    • : (Default) Automatically start preview. Ensure that you have called the method to set the local video property; otherwise, the preview is not enabled.
    • -
    • : Do not automatically start the preview.
    • +
    • : (Default) Automatically start preview. Ensure that you have called the method to set the local video property; otherwise, the preview is not enabled.
    • +
    • : Do not automatically start the preview.
    publishRhythmPlayerTrack Whether to publish the sound of a metronome to remote users:
      -
    • : (Default) Publish the sound of the metronome. Both the local user and remote users can hear the metronome.
    • -
    • : Do not publish the sound of the metronome. Only the local user can hear the metronome.
    • +
    • : (Default) Publish the sound of the metronome. Both the local user and remote users can hear the metronome.
    • +
    • : Do not publish the sound of the metronome. Only the local user can hear the metronome.
    isInteractiveAudience Whether to enable interactive mode:
      -
    • : Enable interactive mode. Once this mode is enabled and the user role is set as audience, the user can receive remote video streams with low latency.
    • -
    • : (Default) Do not enable interactive mode. If this mode is disabled, the user receives the remote video streams in default settings.
    • +
    • : Enable interactive mode. Once this mode is enabled and the user role is set as audience, the user can receive remote video streams with low latency.
    • +
    • : (Default) Do not enable interactive mode. If this mode is disabled, the user receives the remote video streams in default settings.
      -
    • This parameter only applies to scenarios involving cohosting across channels. The cohosts need to call the method to join the other host's channel as an audience member, and set isInteractiveAudience to .
    • -
    • This parameter takes effect only when the user role is .
    • +
    • This parameter only applies to scenarios involving cohosting across channels. The cohosts need to call the method to join the other host's channel as an audience member, and set isInteractiveAudience to .
    • +
    • This parameter takes effect only when the user role is .
    @@ -651,10 +650,10 @@ isAudioFilterable Whether the audio stream being published is filtered according to the volume algorithm:
      -
    • : (Default) The audio stream is filtered. If the audio stream filter is not enabled, this setting does not takes effect.
    • -
    • : The audio stream is not filtered.
    • +
    • : (Default) The audio stream is filtered. If the audio stream filter is not enabled, this setting does not take effect.
    • +
    • : The audio stream is not filtered.
    - If you need to enable this function, contact .
    + If you need to enable this function, contact .
    diff --git a/en-US/dita/RTC-NG/API/class_localvideostats.dita b/en-US/dita/RTC-NG/API/class_localvideostats.dita index d8e0abfae16..2ec283ad344 100644 --- a/en-US/dita/RTC-NG/API/class_localvideostats.dita +++ b/en-US/dita/RTC-NG/API/class_localvideostats.dita @@ -1,7 +1,7 @@ - <ph keyref="LocalVideoStats" /> + <ph keyref="LocalVideoStats"/> The statistics of the local video stream.
    @@ -296,15 +296,17 @@ @JsonKey(name: 'hwEncoderAccelerating') final int? hwEncoderAccelerating; - factory LocalVideoStats.fromJson(Map<String, dynamic> json) => + factory LocalVideoStats.fromJson(Map<String, dynamic> json) => _$LocalVideoStatsFromJson(json); - Map<String, dynamic> toJson() => _$LocalVideoStatsToJson(this); + Map<String, dynamic> toJson() => _$LocalVideoStatsToJson(this); } -

    +

    - <text conref="../conref/conref_api_metadata.dita#conref_api_metadata/property" /> + <text + conref="../conref/conref_api_metadata.dita#conref_api_metadata/property" + /> uid @@ -362,11 +364,11 @@ qualityAdaptIndication - The quality adaptation of the local video stream in the reported interval (based on the target frame rate and target bitrate). See .

    + The quality adaptation of the local video stream in the reported interval (based on the target frame rate and target bitrate). See .

      -
    • (0): The local video quality stays the same.
    • -
    • (1): The local video quality improves because the network bandwidth increases.
    • -
    • (2): The local video quality deteriorates because the network bandwidth decreases.
    • +
    • (0): The local video quality stays the same.
    • +
    • (1): The local video quality improves because the network bandwidth increases.
    • +
    • (2): The local video quality deteriorates because the network bandwidth decreases.

    @@ -390,10 +392,10 @@ codecType - The codec type of the local video. See .

    + The codec type of the local video. See .

      -
    • (1): VP8.
    • -
    • (2): (Default) H.264.
    • +
    • (1): VP8.
    • +
    • (2): (Default) H.264.

    @@ -412,19 +414,19 @@ captureBrightnessLevel The brightness level of the video image captured by the local camera.
      -
    • (-1): The SDK does not detect the brightness level of the video image. Wait a few seconds to get the brightness level from captureBrightnessLevel in the next callback.
    • -
    • (0): The brightness level of the video image is normal.
    • -
    • (1): The brightness level of the video image is too bright.
    • -
    • (2): The brightness level of the video image is too dark.
    • +
    • (-1): The SDK does not detect the brightness level of the video image. Wait a few seconds to get the brightness level from captureBrightnessLevel in the next callback.
    • +
    • (0): The brightness level of the video image is normal.
    • +
    • (1): The brightness level of the video image is too bright.
    • +
    • (2): The brightness level of the video image is too dark.
    hwEncoderAccelerating - The local video encoding acceleration type. See .
      + The local video encoding acceleration type. See .
      • 0: Software encoding is applied without acceleration.
      • 1: Hardware encoding is applied for acceleration.
    -
    \ No newline at end of file +
    diff --git a/en-US/dita/RTC-NG/API/class_screencaptureparameters.dita b/en-US/dita/RTC-NG/API/class_screencaptureparameters.dita index 8236d863a30..fadd43d2015 100644 --- a/en-US/dita/RTC-NG/API/class_screencaptureparameters.dita +++ b/en-US/dita/RTC-NG/API/class_screencaptureparameters.dita @@ -1,12 +1,12 @@ - <ph keyref="ScreenCaptureParameters" /> + <ph keyref="ScreenCaptureParameters"/> Screen sharing configurations.

    - + __attribute__((visibility("default"))) @interface AgoraScreenCaptureParameters: NSObject @property (assign, nonatomic) CGSize dimensions; @property (assign, nonatomic) NSInteger frameRate; @@ -206,7 +206,7 @@ final bool? windowFocus; @JsonKey(name: 'excludeWindowList') - final List<int>? excludeWindowList; + final List<int>? excludeWindowList; @JsonKey(name: 'excludeWindowCount') final int? excludeWindowCount; @@ -220,24 +220,24 @@ @JsonKey(name: 'enableHighLight') final bool? enableHighLight; - factory ScreenCaptureParameters.fromJson(Map<String, dynamic> json) => + factory ScreenCaptureParameters.fromJson(Map<String, dynamic> json) => _$ScreenCaptureParametersFromJson(json); - Map<String, dynamic> toJson() => _$ScreenCaptureParametersToJson(this); + Map<String, dynamic> toJson() => _$ScreenCaptureParametersToJson(this); } -

    +

    - <text conref="../conref/conref_api_metadata.dita#conref_api_metadata/property" /> - The video profiles of the shared screen stream are only set by , independent of . + <text conref="../conref/conref_api_metadata.dita#conref_api_metadata/property"/> + The video profiles of the shared video stream are only set by , independent of . dimensions -

    The maximum dimensions to encode the shared region. The video encoding resolution of the shared screen stream. On Windows and macOS, this represents the video encoding resolution of the shared screen stream. See . The default value is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges.

    +

The maximum dimensions to encode the shared region. On Windows and macOS, this represents the video encoding resolution of the shared screen stream. See . The default value is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges.

    If the screen dimensions are different from the value of this parameter, Agora applies the following strategies for encoding. Suppose dimensions is set to 1920 × 1080:

      -
    • If the value of the screen dimensions is lower than that of dimensions, for example, 1000 × 1000 pixels, the SDK uses the screen dimensions, that is, 1000 × 1000 pixels, for encoding.
    • -
    • If the value of the screen dimensions is higher than that of dimensions, for example, 2000 × 1500, the SDK uses the maximum value under dimensions with the aspect ratio of the screen dimension (4:3) for encoding, that is, 1440 × 1080.
    • +
    • If the value of the screen dimensions is lower than that of dimensions, for example, 1000 × 1000 pixels, the SDK uses 1000 × 1000 pixels for encoding.
    • +
    • If the value of the screen dimensions is higher than that of dimensions, for example, 2000 × 1500, the SDK uses the maximum value under dimensions with the aspect ratio of the screen dimension (4:3) for encoding, that is, 1440 × 1080.

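The encoding-size selection described above can be sketched as follows. This is a minimal illustration of the documented strategy only; `pickEncodingDimensions` is a hypothetical helper, not an SDK method:

```java
// Minimal sketch of the documented strategy for choosing the encoding size.
// If the screen fits inside the configured `dimensions`, the screen size is
// used as-is; otherwise the screen size is scaled down, keeping its aspect
// ratio, to the largest size that fits inside `dimensions`.
public final class DimensionsStrategy {
    // Hypothetical helper, not part of the Agora SDK.
    public static int[] pickEncodingDimensions(int screenW, int screenH,
                                               int maxW, int maxH) {
        if (screenW <= maxW && screenH <= maxH) {
            // e.g. a 1000 x 1000 screen with dimensions set to 1920 x 1080
            // is encoded at 1000 x 1000.
            return new int[] { screenW, screenH };
        }
        // e.g. a 2000 x 1500 (4:3) screen with dimensions set to 1920 x 1080
        // is scaled to the largest 4:3 size that fits, 1440 x 1080.
        double scale = Math.min((double) maxW / screenW, (double) maxH / screenH);
        return new int[] { (int) Math.round(screenW * scale),
                           (int) Math.round(screenH * scale) };
    }
}
```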
    @@ -253,47 +253,48 @@ captureMouseCursor

    Whether to capture the mouse in screen sharing:

      -
    • : (Default) Capture the mouse.
    • -
    • : Do not capture the mouse.
    • +
    • : (Default) Capture the mouse.
    • +
    • : Do not capture the mouse.

    windowFocus -

    Whether to bring the window to the front when calling the method to share it:

      -
    • : Bring the window to the front.
    • -
    • : (Default) Do not bring the window to the front.
    • +

      Whether to bring the window to the front when calling the method to share it:

        +
      • : Bring the window to the front.
      • +
      • : (Default) Do not bring the window to the front.

      excludeWindowList - The ID list of the windows to be blocked. When calling to start screen sharing, you can use this parameter to block a specified window. When calling to update screen sharing configurations, you can use this parameter to dynamically block a specified window. + The ID list of the windows to be blocked. When calling to start screen sharing, you can use this parameter to block a specified window. When calling to update screen sharing configurations, you can use this parameter to dynamically block a specified window. highLighted enableHighLight (For macOS and Windows only) Whether to place a border around the shared window or screen:
        -
      • : Place a border.
      • -
      • : (Default) Do not place a border.
      • +
      • : Place a border.
      • +
      • : (Default) Do not place a border.
      - When you share a part of a window or screen, the SDK places a border around the entire window or screen if you set this parameter to .
      + When you share a part of a window or screen, the SDK places a border around the entire window or screen if you set this parameter to .
      highLightColor - (For macOS and Windows only) On Windows platforms, the color of the border in ARGB format. The default value is 0xFF8CBF26. + (For macOS and Windows only) On Windows platforms, the color of the border in ARGB format. The default value is 0xFF8CBF26. On macOS, COLOR_CLASS refers to NSColor. highLightWidth - (For macOS and Windows only) The width (px) of the border. The default value is 5, and the value range is (0, 50].This parameter only takes effect when highLighted is set to . + (For macOS and Windows only) The width (px) of the border. The default value is 5, and the value range is (0, 50]. This parameter only takes effect when highLighted is set to . excludeWindowCount - The number of windows to be excluded.On the Windows platform, the maximum value of this parameter is 24; if this value is exceeded, excluding the window fails. + The number of windows to be excluded. On the Windows platform, the maximum value of this parameter is 24; if this value is exceeded, excluding the window fails.
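The numeric constraints documented above (a border width in (0, 50] when highlighting is enabled, and at most 24 excluded windows on Windows) can be expressed as a small validation sketch. The class and method names here are hypothetical, not SDK APIs:

```java
// Hypothetical validation sketch of the documented constraints; not an SDK class.
public final class HighlightConfigCheck {
    public static final int DEFAULT_BORDER_COLOR = 0xFF8CBF26; // ARGB, documented default
    public static final int DEFAULT_BORDER_WIDTH = 5;          // px, documented default

    // highLightWidth only takes effect when highlighting is enabled,
    // and must then fall in the range (0, 50].
    public static boolean isValidHighlightWidth(boolean enableHighLight, int width) {
        return !enableHighLight || (width > 0 && width <= 50);
    }

    // On the Windows platform, at most 24 windows can be excluded from
    // capture; exceeding this makes the exclusion fail.
    public static boolean isValidExcludeWindowCount(int count) {
        return count >= 0 && count <= 24;
    }
}
```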
    -
    \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/class_videocanvas.dita b/en-US/dita/RTC-NG/API/class_videocanvas.dita index 4fdf2f2ef4e..5160ce46503 100644 --- a/en-US/dita/RTC-NG/API/class_videocanvas.dita +++ b/en-US/dita/RTC-NG/API/class_videocanvas.dita @@ -1,7 +1,7 @@ - <ph keyref="VideoCanvas" /> + <ph keyref="VideoCanvas"/> Attributes of video canvas object.
    @@ -198,16 +198,18 @@ @JsonKey(name: 'cropArea') final Rectangle? cropArea; - factory VideoCanvas.fromJson(Map<String, dynamic> json) => + factory VideoCanvas.fromJson(Map<String, dynamic> json) => _$VideoCanvasFromJson(json); - Map<String, dynamic> toJson() => _$VideoCanvasToJson(this); + Map<String, dynamic> toJson() => _$VideoCanvasToJson(this); } -

    +

    - <text conref="../conref/conref_api_metadata.dita#conref_api_metadata/property" /> + <text + conref="../conref/conref_api_metadata.dita#conref_api_metadata/property" + /> view @@ -216,20 +218,22 @@ renderMode -

    The rendering mode of the video. See .

    +

    The rendering mode of the video. See .

      -
    • +

    mirrorMode -

    The mirror mode of the view. See .

    +

    The mirror mode of the view. See .

      -
    • +

      @@ -243,7 +247,7 @@ sourceType - The type of the video frame, see . + The type of the video frame, see . sourceId @@ -256,19 +260,20 @@ setupMode Setting mode of the view.
        -
      • (0): (Default) Replaces a view.
      • -
      • (1): Adds a view.
      • -
      • (2): Delete a view.
      • +
      • (0): (Default) Replaces a view.
      • +
      • (1): Adds a view.
      • +
      • (2): Deletes a view.
      mediaPlayerId - The ID of the media player. You can get the media player ID by calling . - This parameter is required when sourceType is . + The ID of the media player. You can get the media player ID by calling . + This parameter is required when sourceType is . cropArea - (Android and iOS only) (Optional) The display area for the video frame. See . width and height represent the video pixel width and height of the area. The default value is null (width or height is 0), which means that the actual resolution of the video frame is displayed. + (Android and iOS only) (Optional) The display area for the video frame. See . width and height represent the video pixel width and height of the area. The default value is null (width or height is 0), which means that the actual resolution of the video frame is displayed. cropArea @@ -281,4 +286,4 @@
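The three setupMode values listed above map to the integers 0, 1, and 2. A minimal sketch of that mapping, using an illustrative enum rather than the SDK's own type:

```java
// Illustrative mapping of the documented setupMode integer values;
// this enum is a sketch, not the SDK's own type.
public enum ViewSetupMode {
    REPLACE(0), // (Default) Replaces the view.
    ADD(1),     // Adds a view.
    REMOVE(2);  // Deletes a view.

    public final int value;

    ViewSetupMode(int value) { this.value = value; }

    public static ViewSetupMode fromValue(int v) {
        for (ViewSetupMode m : values()) {
            if (m.value == v) return m;
        }
        throw new IllegalArgumentException("Unknown setupMode: " + v);
    }
}
```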
    -
    \ No newline at end of file + diff --git a/en-US/dita/RTC-NG/API/rtc_api_overview_ng.dita b/en-US/dita/RTC-NG/API/rtc_api_overview_ng.dita index ebe9c8d4ba2..2538d619753 100644 --- a/en-US/dita/RTC-NG/API/rtc_api_overview_ng.dita +++ b/en-US/dita/RTC-NG/API/rtc_api_overview_ng.dita @@ -420,7 +420,7 @@
    Media player - To see more about the media player method, see . + To see more about the media player methods, see . Method @@ -603,7 +603,7 @@
    Audio pre-process and post-process -

    This method is for Android and iOS only.

    +

    This group of methods is for Android and iOS only.

    Method @@ -648,7 +648,7 @@
    Face detection - This method is for Android and iOS only. + This group of methods is for Android and iOS only. Method Description @@ -857,10 +857,10 @@
    -
    +
    DRM-protected music -

    This method is for Android and iOS only.

    +

    This group of methods is for Android and iOS only.

    @@ -911,7 +911,7 @@
    Virtual metronome -

    This method is for Android and iOS only.

    +

    This group of methods is for Android and iOS only.

    Method @@ -1198,7 +1198,7 @@
    Spatial audio effect -

    This feature is in experimental status. To enable it, contact , contact if needed.

    +

    This feature is in experimental status. To enable it, contact ; contact if needed.

    Cloud server calculation methods @@ -1954,7 +1954,7 @@ (For Android and iOS only) - + (For Android and iOS only) @@ -2026,7 +2026,7 @@
    Audio route -

    This method is for Android and iOS only.

    +

    This group of methods is for Android and iOS only.

    Method @@ -2444,7 +2444,7 @@
    Miscellaneous audio control -

    This method is for Windows and macOS only.

    +

    This group of methods is for Windows and macOS only.

    Method