livekit-react-native

Official React Native SDK for LiveKit.

Use this SDK to add realtime video, audio and data features to your React Native app. By connecting to LiveKit Cloud or a self-hosted server, you can quickly build applications such as multi-modal AI, live streaming, or video calls with just a few lines of code.

Note

This is v2 of the React Native SDK. When migrating from v1.x to v2.x, you may encounter a small set of breaking changes. Read the migration guide for a detailed overview of what has changed.

Installation

NPM

npm install @livekit/react-native @livekit/react-native-webrtc

Yarn

yarn add @livekit/react-native @livekit/react-native-webrtc

This library depends on @livekit/react-native-webrtc, which has additional installation instructions of its own; follow that package's install guide as well.

Once the @livekit/react-native-webrtc dependency is installed, one last step is needed to finish the installation:

Android

In your MainApplication.java file:

Java

import com.livekit.reactnative.LiveKitReactNative;
import com.livekit.reactnative.audio.AudioType;

public class MainApplication extends Application implements ReactApplication {

  @Override
  public void onCreate() {
    // Place this above any other RN related initialization
    // When AudioType is omitted, it'll default to CommunicationAudioType.
    // Use MediaAudioType if the user is only consuming audio and not publishing.
    LiveKitReactNative.setup(this, new AudioType.CommunicationAudioType());

    //...
  }
}

Or in your MainApplication.kt file if you are using RN 0.73+:

Kotlin

import com.livekit.reactnative.LiveKitReactNative
import com.livekit.reactnative.audio.AudioType

class MainApplication : Application(), ReactApplication {
  override fun onCreate() {
    // Place this above any other RN related initialization
    // When AudioType is omitted, it'll default to CommunicationAudioType.
    // Use MediaAudioType if the user is only consuming audio and not publishing.
    LiveKitReactNative.setup(this, AudioType.CommunicationAudioType())

    //...
  }
}

iOS

In your AppDelegate.m file:

#import "LivekitReactNative.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  // Place this above any other RN related initialization
  [LivekitReactNative setup];

  //...
  return YES;
}

@end

Expo

LiveKit is available on Expo through development builds. You can find our Expo plugin and setup instructions here.
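As a rough sketch, the development-build setup typically comes down to adding the config plugins to your app.json (the plugin package names below are assumptions; confirm them against the setup instructions):

{
  "expo": {
    "plugins": [
      "@livekit/react-native-expo-plugin",
      "@config-plugins/react-native-webrtc"
    ]
  }
}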

Example app

You can try our standalone example app here.

Usage

In your index.js file, set up the LiveKit SDK by calling registerGlobals(). This sets up the required WebRTC libraries for use in JavaScript, and is needed for LiveKit to work.

import { registerGlobals } from '@livekit/react-native';

// ...

registerGlobals();

In your app, wrap your component in a LiveKitRoom component, which manages a Room object and allows you to use our hooks to create your own real-time video/audio app.

import * as React from 'react';
import {
  StyleSheet,
  View,
  FlatList,
  ListRenderItem,
} from 'react-native';
import { useEffect } from 'react';
import {
  AudioSession,
  LiveKitRoom,
  useTracks,
  TrackReferenceOrPlaceholder,
  VideoTrack,
  isTrackReference,
  registerGlobals,
} from '@livekit/react-native';
import { Track } from 'livekit-client';

const wsURL = 'wss://example.com';
const token = 'your-token-here';

export default function App() {
  // Start the audio session first.
  useEffect(() => {
    let start = async () => {
      await AudioSession.startAudioSession();
    };

    start();
    return () => {
      AudioSession.stopAudioSession();
    };
  }, []);

  return (
    <LiveKitRoom
      serverUrl={wsURL}
      token={token}
      connect={true}
      options={{
        // Use screen pixel density to handle screens with differing densities.
        adaptiveStream: { pixelDensity: 'screen' },
      }}
      audio={true}
      video={true}
    >
      <RoomView />
    </LiveKitRoom>
  );
}

const RoomView = () => {
  // Get all camera tracks.
  // The useTracks hook grabs the tracks from LiveKitRoom component
  // providing the context for the Room object.
  const tracks = useTracks([Track.Source.Camera]);

  const renderTrack: ListRenderItem<TrackReferenceOrPlaceholder> = ({item}) => {
    // Render using the VideoTrack component.
    if(isTrackReference(item)) {
      return (<VideoTrack trackRef={item} style={styles.participantView} />)
    } else {
      return (<View style={styles.participantView} />)
    }
  };

  return (
    <View style={styles.container}>
      <FlatList
        data={tracks}
        renderItem={renderTrack}
      />
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'stretch',
    justifyContent: 'center',
  },
  participantView: {
    height: 300,
  },
});

API documentation is located here.

Additional documentation for the LiveKit SDK can be found at https://docs.livekit.io/

Audio sessions

As seen in the example above, we've introduced an AudioSession class that helps manage the audio session on native platforms. It wraps AudioManager on Android and AVAudioSession on iOS.

You can customize the configuration of the audio session with configureAudio.
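For example, a minimal sketch (the iOS defaultOutput option shown here is an assumption; check the API documentation for the full set of options):

await AudioSession.configureAudio({
  ios: {
    // Route audio to the speaker by default (assumed option name).
    defaultOutput: 'speaker',
  },
});
await AudioSession.startAudioSession();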

Android

Media playback

By default, the audio session is set up for bidirectional communication. In this mode, the audio framework exhibits the following behaviors:

  • The volume cannot be reduced to 0.
  • Echo cancellation is available and is enabled by default.
  • A microphone indicator can be displayed, depending on the platform.

If you're using LiveKit primarily for media playback, you can reconfigure the audio session to better suit that use case. Here's how:

useEffect(() => {
  let connect = async () => {
    // `room`, `url`, and `token` are assumed to be defined in the enclosing scope.
    // Configure the audio session prior to starting it.
    await AudioSession.configureAudio({
      android: {
        // currently supports .media and .communication presets
        audioTypeOptions: AndroidAudioTypePresets.media,
      },
    });
    await AudioSession.startAudioSession();
    await room.connect(url, token, {});
  };
  connect();
  return () => {
    room.disconnect();
    AudioSession.stopAudioSession();
  };
}, [url, token, room]);

Customizing audio session

Instead of using our presets, you can further customize the audio session to suit your specific needs.

await AudioSession.configureAudio({
  android: {
    preferredOutputList: ['earpiece'],
    // See [AudioManager](https://developer.android.com/reference/android/media/AudioManager)
    // for details on audio and focus modes.
    audioTypeOptions: {
      manageAudioFocus: true,
      audioMode: 'normal',
      audioFocusMode: 'gain',
      audioStreamType: 'music',
      audioAttributesUsageType: 'media',
      audioAttributesContentType: 'unknown',
    },
  },
});
await AudioSession.startAudioSession();

iOS

For iOS, the most appropriate audio configuration may change over time as local and remote audio tracks are published and unpublished in the room. To adapt to this, the useIOSAudioManagement hook is recommended over configuring the audio session just once for the lifetime of the room.
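A minimal sketch of wiring the hook up, assuming useIOSAudioManagement takes the Room instance (verify the exact signature against the API docs):

import { useIOSAudioManagement } from '@livekit/react-native';
import type { Room } from 'livekit-client';

// Hypothetical helper: render this anywhere you have access to the Room.
function IOSAudioManager({ room }: { room: Room }) {
  // Re-configures AVAudioSession as audio tracks publish and unpublish.
  useIOSAudioManagement(room);
  return null;
}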

Screenshare

Enabling screenshare requires extra installation steps:

Android

Android screenshare requires a foreground service with type mediaProjection to be present.

From version 2.4.0 onwards, the foreground service is handled internally, but you must declare the permission yourself in your app's AndroidManifest.xml file.

<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION" />
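With the permission declared, Android screenshare is toggled from JavaScript like any other track:

// `room` is assumed to be a connected Room instance from livekit-client.
await room.localParticipant.setScreenShareEnabled(true);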

iOS

iOS screenshare requires adding a Broadcast Extension to your iOS project. Follow the integration instructions here:

https://jitsi.github.io/handbook/docs/dev-guide/dev-guide-ios-sdk/#screen-sharing-integration

It involves copying the files found in this sample project to your iOS project, and registering a Broadcast Extension in Xcode.

It's also recommended to use CallKeep to register the call with CallKit (as well as turning on the voip background mode). Due to background app processing limitations, screen recording may be interrupted if the app is restricted in the background; registering with CallKit allows the app to continue processing for the duration of the call.
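A rough sketch of registering a call with react-native-callkeep (the setup fields below are trimmed and partly assumptions; consult the callkeep docs for the authoritative options):

import RNCallKeep from 'react-native-callkeep';

// One-time setup; iOS needs at least an appName.
RNCallKeep.setup({
  ios: { appName: 'MyApp' },
  android: {
    alertTitle: 'Permissions required',
    alertDescription: 'This app needs access to your phone accounts',
    cancelButton: 'Cancel',
    okButton: 'OK',
  },
});

// Register an (outgoing) call so iOS keeps the app active for its duration.
RNCallKeep.startCall('unique-call-uuid', 'livekit-room', 'LiveKit Call');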

Once set up, iOS screenshare can be initiated like so:

import * as React from 'react';
import {
  findNodeHandle,
  NativeModules,
  Platform,
  View,
} from 'react-native';
import { ScreenCapturePickerView } from '@livekit/react-native-webrtc';

// `room` is assumed to be your connected Room instance.
const screenCaptureRef = React.useRef(null);
const screenCapturePickerView = Platform.OS === 'ios' && (
  <ScreenCapturePickerView ref={screenCaptureRef} />
);
const startBroadcast = async () => {
  if (Platform.OS === 'ios') {
    // On iOS, show the system broadcast picker before enabling screenshare.
    const reactTag = findNodeHandle(screenCaptureRef.current);
    await NativeModules.ScreenCapturePickerViewManager.show(reactTag);
    room.localParticipant.setScreenShareEnabled(true);
  } else {
    room.localParticipant.setScreenShareEnabled(true);
  }
};

return (
  <View style={styles.container}>
    {/* Make sure the ScreenCapturePickerView exists in the view tree. */}
    {screenCapturePickerView}
  </View>
);

Note

You will not be able to publish camera or microphone tracks on iOS Simulator.

Background Processing

Android

To support staying connected to LiveKit in the background, you will need a foreground service on Android.

The example app uses @supersami/rn-foreground-service for this.

Add the following permissions to your AndroidManifest.xml file:

<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_CAMERA" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK" />

Declare the service and ensure it's labelled with the appropriate foreground service types, like so:

<service android:name="com.supersami.foregroundservice.ForegroundService" android:foregroundServiceType="camera|microphone|mediaPlayback" />
<service android:name="com.supersami.foregroundservice.ForegroundServiceTask" />

The camera and microphone permissions/foreground service types can be omitted if you are not using those.

Once set up, start the foreground service to keep the app alive in the background.
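As a sketch, starting the service with @supersami/rn-foreground-service looks roughly like this (the field names are assumptions; check that package's README for the exact API):

import ReactNativeForegroundService from '@supersami/rn-foreground-service';

// Start a foreground service so Android keeps the call alive in the background.
ReactNativeForegroundService.start({
  id: 1,
  title: 'LiveKit',
  message: 'Connected to a room',
});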

iOS

By default, simple background processing can be enabled by selecting the audio and voip UIBackgroundModes in your Xcode project. In Xcode, select your app target -> Signing & Capabilities -> Add Capability -> Background Modes.
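This is equivalent to adding the modes directly to your Info.plist:

<key>UIBackgroundModes</key>
<array>
  <string>audio</string>
  <string>voip</string>
</array>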

These background modes will keep the app alive in the background as long as a mic or audio track is playing.

For more robust background processing that isn't sensitive to the above conditions, we suggest using CallKit to maintain the connection while in the background. The example app uses react-native-callkeep for simple integration with CallKit.

Our example code can be found here.

For apps planning to use CallKit to handle incoming calls in the background, it is important to call RTCAudioSession.audioSessionDidActivate/Deactivate when the call provider activates/deactivates the audio session.

Troubleshooting

Cannot read properties of undefined (reading 'split')

This error can happen if you are using Yarn and have dependency versions that are incompatible with livekit-client.

To fix this, you can either:

  • use another package manager, like npm
  • use yarn-deduplicate to deduplicate dependencies
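For example, with the yarn-deduplicate CLI:

npx yarn-deduplicate yarn.lock && yarn install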

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

Apache License 2.0


LiveKit Ecosystem

Realtime SDKs: React Components · Browser · Swift Components · iOS/macOS/visionOS · Android · Flutter · React Native · Rust · Node.js · Python · Unity (web) · Unity (beta)
Server APIs: Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community)
Agents Frameworks: Python · Playground
Services: LiveKit server · Egress · Ingress · SIP
Resources: Docs · Example apps · Cloud · Self-hosting · CLI