Main class that is responsible for connecting to the RTC Engine, sending and receiving media.

Hierarchy

Constructors

Properties

connection?: RTCPeerConnection
disabledTrackEncodings: Map<string, TrackEncoding[]> = ...
idToEndpoint: Map<String, Endpoint> = ...
localEndpoint: Endpoint = ...
localTrackIdToTrack: Map<string, TrackContextImpl> = ...
localTracksWithStreams: {
    stream: MediaStream;
    track: MediaStreamTrack;
}[] = []
midToTrackId: Map<string, string> = ...
rtcConfig: RTCConfiguration = ...
trackIdToTrack: Map<string, TrackContextImpl> = ...

Methods

  • Adds a track that will be sent to the RTC Engine.

    Returns

    Returns the id of the added track

    Example

    let localStream: MediaStream = new MediaStream();
    try {
      const localAudioStream = await navigator.mediaDevices.getUserMedia(
        AUDIO_CONSTRAINTS
      );
      localAudioStream
        .getTracks()
        .forEach((track) => localStream.addTrack(track));
    } catch (error) {
      console.error("Couldn't get microphone permission:", error);
    }

    try {
      const localVideoStream = await navigator.mediaDevices.getUserMedia(
        VIDEO_CONSTRAINTS
      );
      localVideoStream
        .getTracks()
        .forEach((track) => localStream.addTrack(track));
    } catch (error) {
      console.error("Couldn't get camera permission:", error);
    }

    localStream
      .getTracks()
      .forEach((track) => webrtc.addTrack(track, localStream));

    Parameters

    • track: MediaStreamTrack

      Audio or video track e.g. from your microphone or camera.

    • stream: MediaStream

      Stream that this track belongs to.

    • trackMetadata: any = ...

      Any information about this track that other endpoints will receive in endpointAdded. E.g. this can be the source of the track - whether it's screensharing, a webcam or some other media device.

    • simulcastConfig: SimulcastConfig = ...

      Simulcast configuration. By default simulcast is disabled. For more information refer to SimulcastConfig.

    • maxBandwidth: TrackBandwidthLimit = 0

      Maximal bandwidth this track can use. Defaults to 0, which is unlimited. This option has no effect for simulcast and audio tracks. For simulcast tracks use setTrackBandwidth.

    Returns string

  • Parameters

    • serverTracks: Map<string, number>

    Returns void

  • Tries to connect to the RTC Engine. If the user is successfully connected, connected will be emitted.

    Example

    let webrtc = new WebRTCEndpoint();
    webrtc.connect({displayName: "Bob"});

    Parameters

    • metadata: any

      Any information that other endpoints will receive in endpointAdded after accepting this endpoint

    Returns void

  • Disables a track encoding so that it will no longer be sent to the server.

    Example

    const trackId = webrtc.addTrack(track, stream, {}, {enabled: true, activeEncodings: ["l", "m", "h"]});
    webrtc.disableTrackEncoding(trackId, "l");

    Parameters

    • trackId: string

      id of track

    • encoding: TrackEncoding

      encoding that will be disabled

    Returns void

  • Disconnects from the room. This function should be called when the user disconnects from the room in a clean way, e.g. by clicking a dedicated disconnect button. As a result, one more media event will be generated and should be sent to the RTC Engine. Thanks to it, every other endpoint will be notified that this endpoint was removed in endpointRemoved.

    Returns void

  • Type Parameters

    Parameters

    Returns boolean

  • Enables a track encoding so that it will be sent to the server.

    Example

    const trackId = webrtc.addTrack(track, stream, {}, {enabled: true, activeEncodings: ["l", "m", "h"]});
    webrtc.disableTrackEncoding(trackId, "l");
    // wait some time
    webrtc.enableTrackEncoding(trackId, "l");

    Parameters

    • trackId: string

      id of track

    • encoding: TrackEncoding

      encoding that will be enabled

    Returns void

  • Parameters

    • trackId: string
    • endpointId: string

    Returns void

  • Returns (string | symbol)[]

  • Returns number

  • Returns a snapshot of currently received remote tracks.

    Example

    if (webRTCEndpoint.getRemoteTracks()[trackId]?.simulcastConfig?.enabled) {
      webRTCEndpoint.setTargetTrackEncoding(trackId, encoding);
    }

    Returns Record<string, TrackContext>

  • Type Parameters

    Parameters

    • event: E

    Returns number

  • Type Parameters

    Parameters

    • event: E

    Returns Required<WebRTCEndpointEvents>[E][]

  • Parameters

    • tracks: [string, any][]
    • endpoint: Endpoint

    Returns Map<string, TrackContextImpl>

  • Parameters

    • answer: RTCSessionDescriptionInit

    Returns Promise<void>

  • Parameters

    • event: RTCPeerConnectionIceErrorEvent

    Returns void

  • Returns ((event: RTCPeerConnectionIceEvent) => void)

      • (event: RTCPeerConnectionIceEvent): void
      • Parameters

        • event: RTCPeerConnectionIceEvent

        Returns void

  • Parameters

    • offerData: Map<string, number>

    Returns Promise<void>

  • Parameters

    • candidate: RTCIceCandidate

    Returns void

  • Returns ((event: RTCTrackEvent) => void)

      • (event: RTCTrackEvent): void
      • Parameters

        • event: RTCTrackEvent

        Returns void

  • Currently, this function only works when DisplayManager in RTC Engine is enabled and simulcast is disabled.

    Prioritizes a track in the connection so that it is always sent to the browser.

    Parameters

    • trackId: string

      Id of video track to prioritize.

    Returns void

  • Type Parameters

    Parameters

    • event: E

    Returns Required<WebRTCEndpointEvents>[E][]

  • Feeds a media event received from the RTC Engine to the WebRTCEndpoint. This function should be called whenever a media event from the RTC Engine is received; it can result in the WebRTCEndpoint generating other media events.

    Example

    This example assumes Phoenix Channels as the signalling layer. As Phoenix Channels require objects, the RTC Engine encapsulates binary data in a map with one field, which is converted to an object with one field on the TS side.

    webrtcChannel.on("mediaEvent", (event) => webrtc.receiveMediaEvent(event.data));
    

    Parameters

    • mediaEvent: string

      String data received over custom signalling layer.

    Returns void
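    The wrapping described above can be sketched as follows. This is an illustration only; the serialized payload shown is made up, and only the single-field shape and the "data" field name (taken from the handler above) are assumed.

    ```typescript
    // Hypothetical shape of a wrapped media event: the engine serializes the
    // event to a string and nests it under a single field. The field name
    // "data" matches the example handler above.
    const serializedEvent: string = JSON.stringify({ type: "connected" });
    const wrapped = { data: serializedEvent };

    // On the client side, unwrap the single field and pass the string on,
    // as the handler does with webrtc.receiveMediaEvent(event.data):
    const mediaEvent: string = wrapped.data;
    ```

    Whatever the signalling layer, receiveMediaEvent always takes the raw string, so the unwrapping stays on the application side.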

  • Type Parameters

    Parameters

    • Optional event: E

    Returns WebRTCEndpoint

  • Removes a track from connection that was sent to the RTC Engine.

    Example

    // setup camera
    let localStream: MediaStream = new MediaStream();
    try {
      const localVideoStream = await navigator.mediaDevices.getUserMedia(
        VIDEO_CONSTRAINTS
      );
      localVideoStream
        .getTracks()
        .forEach((track) => localStream.addTrack(track));
    } catch (error) {
      console.error("Couldn't get camera permission:", error);
    }

    let trackId;
    localStream
      .getTracks()
      .forEach((track) => (trackId = webrtc.addTrack(track, localStream)));

    // remove track
    webrtc.removeTrack(trackId);

    Parameters

    • trackId: string

      Id of audio or video track to remove.

    Returns void

  • Replaces a track that is being sent to the RTC Engine.

    Returns

    success

    Example

    // setup camera
    let localStream: MediaStream = new MediaStream();
    try {
      const localVideoStream = await navigator.mediaDevices.getUserMedia(
        VIDEO_CONSTRAINTS
      );
      localVideoStream
        .getTracks()
        .forEach((track) => localStream.addTrack(track));
    } catch (error) {
      console.error("Couldn't get camera permission:", error);
    }

    let oldTrackId;
    localStream
      .getTracks()
      .forEach((track) => (oldTrackId = webrtc.addTrack(track, localStream)));

    // change camera
    const videoDeviceId = "abcd-1234";
    navigator.mediaDevices
      .getUserMedia({
        video: {
          ...(VIDEO_CONSTRAINTS as {}),
          deviceId: {
            exact: videoDeviceId,
          },
        },
      })
      .then((stream) => {
        const videoTrack = stream.getVideoTracks()[0];
        webrtc.replaceTrack(oldTrackId, videoTrack);
      })
      .catch((error) => {
        console.error("Error switching camera", error);
      });

    Parameters

    • trackId: string

      Id of audio or video track to replace.

    • newTrack: MediaStreamTrack
    • Optional newTrackMetadata: any

    Returns Promise<boolean>

  • Updates maximum bandwidth for the given simulcast encoding of the given track.

    Returns

    success

    Parameters

    • trackId: string

      id of the track

    • rid: string

      rid of the encoding

    • bandwidth: number

      desired max bandwidth used by the encoding (in kbps)

    Returns Promise<boolean>

  • Parameters

    • maxListeners: number

    Returns WebRTCEndpoint

  • Currently this function has no effect.

    This function allows adjusting the resolution and number of video tracks sent by an SFU to a client.

    Parameters

    • bigScreens: number

      number of screens with big size (if simulcast is used this will limit number of tracks sent with highest quality).

    • smallScreens: number

      number of screens with small size (if simulcast is used this will limit number of tracks sent with lowest quality).

    • mediumScreens: number = 0

      number of screens with medium size (if simulcast is used this will limit number of tracks sent with medium quality).

    • allSameSize: boolean = false

      flag that indicates whether all screens should use the same quality

    Returns void

  • Sets track encoding that server should send to the client library.

    The encoding will be sent whenever it is available. If the chosen encoding is temporarily unavailable, another encoding will be sent until the chosen one becomes active again.

    Example

    webrtc.setTargetTrackEncoding(incomingTrackCtx.trackId, "l")
    

    Parameters

    • trackId: string

      id of track

    • encoding: TrackEncoding

      encoding to receive

    Returns void

  • Updates maximum bandwidth for the track identified by trackId. This value directly translates to the quality of the stream and, in the case of video, to the amount of RTP packets being sent. If trackId points at a simulcast track, the bandwidth is split between all of the variant streams proportionally to their resolution.

    Returns

    success

    Parameters

    • trackId: string
    • bandwidth: number

      in kbps

    Returns Promise<boolean>
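    The proportional split described above can be illustrated with a short sketch. This is a simplified model of the documented behaviour, not the library's actual implementation, and the variant resolutions are made up for the example.

    ```typescript
    // Illustration of splitting a track's bandwidth limit across simulcast
    // variants proportionally to their resolution (pixel count).
    type Variant = { rid: string; width: number; height: number };

    function splitBandwidth(variants: Variant[], totalKbps: number): Map<string, number> {
      const pixels = (v: Variant) => v.width * v.height;
      const totalPixels = variants.reduce((sum, v) => sum + pixels(v), 0);
      const result = new Map<string, number>();
      for (const v of variants) {
        // each variant gets a share of the total proportional to its area
        result.set(v.rid, Math.round(totalKbps * (pixels(v) / totalPixels)));
      }
      return result;
    }

    // With "l"/"m"/"h" variants at quarter, half and full resolution,
    // a 1500 kbps limit is dominated by the highest-resolution variant:
    const split = splitBandwidth(
      [
        { rid: "l", width: 320, height: 180 },
        { rid: "m", width: 640, height: 360 },
        { rid: "h", width: 1280, height: 720 },
      ],
      1500,
    );
    ```

    Because the split is by pixel count, halving both dimensions of a variant reduces its share to a quarter.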

  • Parameters

    • encodings: RTCRtpEncodingParameters[]
    • bandwidth: number

    Returns void

  • Currently, this function only works when DisplayManager in RTC Engine is enabled and simulcast is disabled.

    Unprioritizes a track.

    Parameters

    • trackId: string

      Id of video track to unprioritize.

    Returns void

  • Updates the metadata for the current endpoint.

    Parameters

    • metadata: any

      Data about this endpoint that other endpoints will receive upon being added.

      If the metadata is different from what is already tracked in the room, the optional event endpointUpdated will be emitted for the other endpoints in the room.

    Returns void

  • Updates the metadata for a specific track.

    Parameters

    • trackId: string

      trackId (generated in addTrack) of audio or video track.

    • trackMetadata: any

      Data about this track that other endpoints will receive upon being added.

      If the metadata is different from what is already tracked in the room, the optional event trackUpdated will be emitted for other endpoints in the room.

    Returns void
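    The "only when the metadata is different" rule above can be sketched as a structural comparison. This is an illustration of the described behaviour, not the library's actual code, and the metadata shapes are made up.

    ```typescript
    // Illustrative check for whether new metadata differs from what is
    // already tracked in the room; only then would trackUpdated (or
    // endpointUpdated) be emitted for the other endpoints.
    function metadataChanged(previous: unknown, next: unknown): boolean {
      return JSON.stringify(previous) !== JSON.stringify(next);
    }

    const unchanged = metadataChanged({ type: "camera" }, { type: "camera" });
    const changed = metadataChanged({ type: "camera" }, { type: "screensharing" });
    ```

    Sending structurally identical metadata is therefore a no-op for the other endpoints.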

Generated using TypeDoc