addTrack
Adds a track that will be sent to the RTC Engine.
Returns the id of the added track.
let localStream: MediaStream = new MediaStream();

try {
  const localAudioStream = await navigator.mediaDevices.getUserMedia(
    AUDIO_CONSTRAINTS
  );
  localAudioStream
    .getTracks()
    .forEach((track) => localStream.addTrack(track));
} catch (error) {
  console.error("Couldn't get microphone permission:", error);
}

try {
  const localVideoStream = await navigator.mediaDevices.getUserMedia(
    VIDEO_CONSTRAINTS
  );
  localVideoStream
    .getTracks()
    .forEach((track) => localStream.addTrack(track));
} catch (error) {
  console.error("Couldn't get camera permission:", error);
}

localStream
  .getTracks()
  .forEach((track) => webrtc.addTrack(track, localStream));
Audio or video track, e.g. from your microphone or camera.
Stream that this track belongs to.
Any information about this track that other endpoints will receive in endpointAdded. E.g. this can be the source of the track: whether it's screensharing, a webcam or some other media device.
Simulcast configuration. By default simulcast is disabled. For more information refer to SimulcastConfig.
Maximal bandwidth this track can use. Defaults to 0, which is unlimited. This option has no effect for simulcast and audio tracks. For simulcast tracks use setTrackBandwidth.
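The maxBandwidth convention above (0 meaning "no limit", any positive value a cap in kbps) can be illustrated with a tiny helper. This is a sketch for illustration only; the function name is hypothetical and not part of the API.

```typescript
// Hypothetical helper illustrating the maxBandwidth convention described above:
// 0 means "unlimited", any positive value is a cap in kbps.
function describeMaxBandwidth(maxBandwidth: number): string {
  if (maxBandwidth < 0) throw new Error("maxBandwidth must be non-negative");
  return maxBandwidth === 0 ? "unlimited" : `${maxBandwidth} kbps`;
}
```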
cleanUp
Cleans up the WebRTCEndpoint instance.
connect
Tries to connect to the RTC Engine. If the user is successfully connected, the connected event will be emitted.
let webrtc = new WebRTCEndpoint();
webrtc.connect({displayName: "Bob"});
Any information that other endpoints will receive in endpointAdded after accepting this endpoint.
disableTrackEncoding
Disables a track encoding so that it will no longer be sent to the server.
const trackId = webrtc.addTrack(track, stream, {}, {enabled: true, activeEncodings: ["l", "m", "h"]});
webrtc.disableTrackEncoding(trackId, "l");
Id of the track.
Encoding that will be disabled.
disconnect
Disconnects from the room. This function should be called when the user disconnects from the room in a clean way, e.g. by clicking a dedicated "disconnect" button.
As a result, one more media event will be generated that should be sent to the RTC Engine. Thanks to it, every other endpoint will be notified that this endpoint was removed in endpointRemoved.
enableTrackEncoding
Enables a track encoding so that it will be sent to the server.
const trackId = webrtc.addTrack(track, stream, {}, {enabled: true, activeEncodings: ["l", "m", "h"]});
webrtc.disableTrackEncoding(trackId, "l");
// wait some time
webrtc.enableTrackEncoding(trackId, "l");
Id of the track.
Encoding that will be enabled.
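Applications often keep a local record of which encodings they have enabled or disabled so the UI can reflect it. A minimal sketch of such bookkeeping around enableTrackEncoding / disableTrackEncoding; the class and its names are illustrative, not part of the WebRTCEndpoint API.

```typescript
type Encoding = "l" | "m" | "h";

// Illustrative local bookkeeping for calls to enableTrackEncoding /
// disableTrackEncoding; not part of the WebRTCEndpoint API.
class EncodingTracker {
  private active: Set<Encoding>;

  constructor(initial: Encoding[] = ["l", "m", "h"]) {
    this.active = new Set(initial);
  }

  disable(encoding: Encoding): void {
    this.active.delete(encoding);
  }

  enable(encoding: Encoding): void {
    this.active.add(encoding);
  }

  isActive(encoding: Encoding): boolean {
    return this.active.has(encoding);
  }
}
```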
getRemoteTracks
Returns a snapshot of the currently received remote tracks.
if (webRTCEndpoint.getRemoteTracks()[trackId]?.simulcastConfig?.enabled) {
webRTCEndpoint.setTargetTrackEncoding(trackId, encoding);
}
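The snippet above guards on simulcastConfig before requesting a target encoding; that check can be factored into a small predicate. A sketch using simplified, assumed shapes for the track context (the real context object carries more fields).

```typescript
// Simplified, assumed shapes; the real track context carries more fields.
interface SimulcastConfig {
  enabled: boolean;
  activeEncodings: string[];
}

interface TrackContextLike {
  simulcastConfig?: SimulcastConfig;
}

// Returns true only when the track exists and has simulcast enabled,
// i.e. when setTargetTrackEncoding is meaningful for it.
function canSetTargetEncoding(
  tracks: Record<string, TrackContextLike>,
  trackId: string
): boolean {
  return tracks[trackId]?.simulcastConfig?.enabled ?? false;
}
```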
receiveMediaEvent
Feeds a media event received from the RTC Engine to the WebRTCEndpoint. This function should be called whenever a media event from the RTC Engine is received, as it can result in the WebRTCEndpoint generating other media events.
This example assumes Phoenix channels as the signalling layer. As Phoenix channels require objects, the RTC Engine encapsulates binary data in a map with one field, which is converted to an object with one field on the TS side.
webrtcChannel.on("mediaEvent", (event) => webrtc.receiveMediaEvent(event.data));
String data received over custom signalling layer.
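The single-field encapsulation described above can be sketched as a pair of trivial helpers. The names and the envelope interface are hypothetical, for illustration only; on the wire, the media event simply travels as an object with one string field.

```typescript
// Hypothetical helpers mirroring the single-field wrapping described above.
interface SignallingEnvelope {
  data: string;
}

function wrapMediaEvent(mediaEvent: string): SignallingEnvelope {
  return { data: mediaEvent };
}

function unwrapMediaEvent(envelope: SignallingEnvelope): string {
  return envelope.data;
}
```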
removeTrack
Removes a track from the connection that was sent to the RTC Engine.
// setup camera
let localStream: MediaStream = new MediaStream();

try {
  const localVideoStream = await navigator.mediaDevices.getUserMedia(
    VIDEO_CONSTRAINTS
  );
  localVideoStream
    .getTracks()
    .forEach((track) => localStream.addTrack(track));
} catch (error) {
  console.error("Couldn't get camera permission:", error);
}

let trackId;
localStream
  .getTracks()
  .forEach((track) => (trackId = webrtc.addTrack(track, localStream)));

// remove track
webrtc.removeTrack(trackId);
Id of audio or video track to remove.
replaceTrack
Replaces a track that is being sent to the RTC Engine.
Returns success.
// setup camera
let localStream: MediaStream = new MediaStream();

try {
  const localVideoStream = await navigator.mediaDevices.getUserMedia(
    VIDEO_CONSTRAINTS
  );
  localVideoStream
    .getTracks()
    .forEach((track) => localStream.addTrack(track));
} catch (error) {
  console.error("Couldn't get camera permission:", error);
}

let oldTrackId;
localStream
  .getTracks()
  .forEach((track) => (oldTrackId = webrtc.addTrack(track, localStream)));

// change camera
let videoDeviceId = "abcd-1234";
navigator.mediaDevices
  .getUserMedia({
    video: {
      ...(VIDEO_CONSTRAINTS as {}),
      deviceId: {
        exact: videoDeviceId,
      },
    },
  })
  .then((stream) => {
    const videoTrack = stream.getVideoTracks()[0];
    webrtc.replaceTrack(oldTrackId, videoTrack);
  })
  .catch((error) => {
    console.error("Error switching camera", error);
  });
Id of audio or video track to replace.
Optional newTrackMetadata: any
setEncodingBandwidth
Updates the maximum bandwidth for the given simulcast encoding of the given track.
Id of the track.
Rid of the encoding.
Desired max bandwidth used by the encoding (in kbps).
setPreferedVideoSizes
Currently this function has no effect.
This function allows adjusting the resolution and the number of video tracks sent by an SFU to a client.
Number of screens with big size (if simulcast is used, this will limit the number of tracks sent with the highest quality).
Number of screens with small size (if simulcast is used, this will limit the number of tracks sent with the lowest quality).
Number of screens with medium size (if simulcast is used, this will limit the number of tracks sent with medium quality).
Flag that indicates whether all screens should use the same quality.
setTargetTrackEncoding
Sets the track encoding that the server should send to the client library.
The encoding will be sent whenever it is available. If the chosen encoding is temporarily unavailable, some other encoding will be sent until the chosen encoding becomes active again.
webrtc.setTargetTrackEncoding(incomingTrackCtx.trackId, "l")
Id of the track.
Encoding to receive.
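The fallback behaviour described above (send some other encoding until the target becomes active again) can be sketched as a pure selection function. This is an illustration of the idea only, not the server's actual selection algorithm.

```typescript
type Encoding = "l" | "m" | "h";

// Illustrative only: prefer the target encoding, otherwise fall back to the
// best currently-active one; undefined when nothing is active.
const ENCODING_RANK: Record<Encoding, number> = { l: 0, m: 1, h: 2 };

function selectEncoding(
  target: Encoding,
  active: Encoding[]
): Encoding | undefined {
  if (active.includes(target)) return target;
  return [...active].sort((a, b) => ENCODING_RANK[b] - ENCODING_RANK[a])[0];
}
```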
setTrackBandwidth
Updates the maximum bandwidth for the track identified by trackId. This value directly translates to the quality of the stream and, in the case of video, to the number of RTP packets being sent. If trackId points to a simulcast track, the bandwidth is split between all of the variant streams proportionally to their resolution.
Returns success.
Bandwidth in kbps.
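The proportional split mentioned above can be illustrated with a small pure function. This is a sketch of the idea using explicit pixel counts, not the library's actual code.

```typescript
interface VariantResolution {
  rid: string;
  width: number;
  height: number;
}

// Illustration of splitting a total bandwidth budget (kbps) between simulcast
// variants proportionally to their pixel counts; not the library's exact code.
function splitBandwidth(
  totalKbps: number,
  variants: VariantResolution[]
): Record<string, number> {
  const pixels = variants.map((v) => v.width * v.height);
  const totalPixels = pixels.reduce((sum, p) => sum + p, 0);
  const shares: Record<string, number> = {};
  variants.forEach((v, i) => {
    shares[v.rid] = Math.round((totalKbps * pixels[i]) / totalPixels);
  });
  return shares;
}
```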
updateEndpointMetadata
Updates the metadata for the current endpoint.
Data about this endpoint that other endpoints will receive upon being added.
If the metadata is different from what is already tracked in the room, the event endpointUpdated will be emitted for other endpoints in the room.
updateTrackMetadata
Updates the metadata for a specific track.
trackId (generated in addTrack) of audio or video track.
Data about this track that other endpoint will receive upon being added.
If the metadata is different from what is already tracked in the room, the event trackUpdated will be emitted for other endpoints in the room.
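The "only when different" behaviour described for both metadata updates can be approximated with a structural comparison. A sketch: JSON-based deep equality is an assumption for illustration, not necessarily what the library does.

```typescript
// Sketch: decide whether a metadata update would actually change anything.
// JSON-based deep equality is an assumption for illustration; it requires
// metadata to be JSON-serializable and key order to be stable.
function metadataChanged(previous: unknown, next: unknown): boolean {
  return JSON.stringify(previous) !== JSON.stringify(next);
}
```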
Generated using TypeDoc
WebRTCEndpoint is the main class responsible for connecting to the RTC Engine and for sending and receiving media.