Build a Video Chat App with ConnectyCube Flutter SDK
Valentyn Tereshchenko
Posted on January 9, 2024
Preface
Integrating calling functionality into an app can bring several significant benefits. Here are some reasons why it's worth adding calling features to your app:
- Enhanced Communication: Calls provide a more immediate and personal means of communication compared to text messages. Users can convey emotions, tone, and nuances better through voice or video calls, leading to more meaningful and effective conversations.
- Complete Communication Platform: Adding calling capabilities can transform your app into a comprehensive communication platform. Users can conduct both asynchronous (messaging) and synchronous (calling) conversations within a single app, reducing the need to switch between different communication tools.
- Personalization: Voice and video calls allow for more personalized interactions. This can be crucial for apps that aim to offer personalized coaching, consulting, or support services.
- User Preferences: Some users prefer voice or video calls over text messages, depending on their communication style or the context of the conversation. Offering calling options caters to a wider range of user preferences.
- Improved Customer Support: If your app provides customer support or assistance, offering voice or video calls can lead to more effective and satisfying customer interactions.
While adding calling functionality can be beneficial, it's important to carefully consider the implementation, user experience, and data privacy and security concerns. Additionally, ensure that adding calling features aligns with your app's overall purpose and the needs of your target audience.
The ConnectyCube platform provides its main call features using WebRTC technology. The platform supports two main schemes for organizing connections between users in a call:
- Mesh: each user connects directly to every other user participating in the call;
- SFU: each user connects to a central server and the central server relays media between users;
Each scheme has its own pros and cons, but briefly: Mesh is more suitable for private calls and small group calls (up to 4 users), while SFU is better suited for group calls, meetings, and streaming. This is, of course, a rough comparison; you can find a more detailed one at this link.
In our environment, P2P Calls use the Mesh connection scheme and Conference Calls use the SFU scheme. In this article, we will use the P2P Calls implementation to cover the minimum requirements for integrating call functionality into an app.
Let's begin!
Get started
The integration flow includes quite a few steps, but they are all easy, and we will cover each in detail in this article. They are:
- Before we start;
- Set Up ConnectyCube Account;
- Integrate ConnectyCube Flutter SDK;
- Initialize ConnectyCube;
- User Authentication;
- Implement Calling Functionality.
Before we start
The complete source code of the final app can be found at https://github.com/ConnectyCube/connectycube-flutter-samples/tree/master/p2p_call_sample. Feel free to refer to it while reading this integration guide.
Set Up ConnectyCube Account
First, you need to log in on the homepage of the ConnectyCube dashboard. If you don't yet have an account, sign up for one now. After accessing the admin panel, create a new project by selecting the New app button, so that you get the app credentials.
These credentials will be used to identify your app.
All users within the same ConnectyCube app can communicate by chat or video chat with each other, across all platforms - iOS, Android, Web, Flutter, React Native etc.
Integrate ConnectyCube Flutter SDK
Prerequisites: You have an existing Flutter app where you want to integrate calling functionality. If you don't have one yet, you can create your first Flutter app using the following guide.
Depend on it
Run this command:
flutter pub add connectycube_sdk
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
connectycube_sdk: ^x.x.x
where x.x.x is the latest version of connectycube_sdk in the pub.dev repository.
Import it
Now in your Dart code, you can use:
import 'package:connectycube_sdk/connectycube_sdk.dart';
Initialize ConnectyCube
Having your app you can initialize the framework with your ConnectyCube application credentials. You can access your application credentials in ConnectyCube Dashboard:
String appId = '';
String authKey = '';
String authSecret = '';
init(appId, authKey, authSecret);
CubeSettings.instance.isDebugEnabled = true; // to enable ConnectyCube SDK logs;
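Typically, this initialization is done once at app startup, for example in main(). Here is a minimal sketch; MyApp and the credential values are placeholders:
Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // Placeholder credentials: copy the real values from your ConnectyCube Dashboard.
  init('your_app_id', 'your_auth_key', 'your_auth_secret');
  CubeSettings.instance.isDebugEnabled = true;
  runApp(const MyApp()); // `MyApp` is a placeholder for your root widget
}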
User Authentication
P2P calling requires a ConnectyCube user, and this user needs to be connected to the ConnectyCube chat for call signalling to work.
The simplest way is to create users using the ConnectyCube Admin panel. Just go to the ConnectyCube Admin panel Home -> Your app -> Users -> Users list and press the ADD NEW USER button. Fill in the required fields and press the SAVE CHANGES button at the bottom of the page. This works well for testing purposes.
In a real-world scenario, you will most likely allow users to create an account in your app, which means a ConnectyCube user should be created at the app code level. In this case, use the ConnectyCube Flutter SDK. To create a user via the ConnectyCube Flutter SDK, use the following code snippet:
var userToCreate = CubeUser(login: 'some_login', password: 'some_password');
var createdUser = await signUp(userToCreate);
Note: You can use not only the Login/Password approach but also E-mail/Password, Phone number, Facebook account, etc. See the link on how to use them in your app.
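For example, a sketch of the E-mail/Password approach could look like this (the email value is a placeholder):
var userToCreate = CubeUser(email: 'user@example.com', password: 'some_password');
var createdUser = await signUp(userToCreate);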
Then you can use the created users to log in to the chat. Use the following code snippet:
var cubeUser = CubeUser(id: 0, password: ''); // replace the `id` and `password` with the real user's data
CubeChatConnection.instance.login(cubeUser).then((loggedUser) {
// the user was successfully logged in to the chat
}).catchError((exception) {
// something went wrong during login to the chat
});
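Putting it together, a minimal end-to-end sketch of the authentication flow could look like this (the id, login, and password are placeholders; createSession creates the REST session used for API calls):
var user = CubeUser(id: 123456, login: 'some_login', password: 'some_password');
await createSession(user); // REST session for ConnectyCube API calls
var loggedUser = await CubeChatConnection.instance.login(user); // chat connection for call signalling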
Note: You need at least two users logged in to the chat to establish a call connection.
Implement Calling Functionality
This is the biggest part of this article, but it is not too hard to implement.
We have separated it into a few sequential subparts to make the calling flow easier to understand and implement.
Initialize the P2PClient
After successful authorization, we need to initialize the P2PClient. It is required to initialize the underlying signalling processor. Just call the following method to do this:
P2PClient.instance.init();
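Right after initialization is also a good place to subscribe to incoming calls (covered in detail in the Accept the call section below); a minimal sketch:
P2PClient.instance.onReceiveNewSession = (incomingCallSession) {
  // e.g. show your incoming-call UI here
};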
Create the call session and start the call
After initializing the P2PClient, we can create a P2PSession with the selected opponents and the required call type. The ConnectyCube platform supports two call types: CallType.VIDEO_CALL and CallType.AUDIO_CALL. You also need at least one opponent to create the call session. Use the id(s) of the user(s) you created earlier in the ConnectyCube Admin panel. Now let's create the new call session:
var callType = CallType.VIDEO_CALL;
var opponents = <int>{}; // the `Set<int>` of the ids of the users you want to call
P2PSession callSession = P2PClient.instance.createCallSession(callType, opponents);
Starting from this point, we can start the call. We just need to call:
callSession.startCall();
But how can we see the opponent? Before starting the call, we should add the corresponding listeners and UI elements.
Video Conversation Screen
As described in the Before we start part, there is a ready-to-use code sample, so you can reuse most of its parts.
But, to make it even simpler, we prepared a simplified version of the ConversationCallScreen that contains the minimal set of features required for a video calling app. You can copy the following code and paste it entirely into the lib/src/conversation_screen.dart file from the code sample:
conversation_screen.dart
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:universal_io/io.dart';
import 'package:web_browser_detect/web_browser_detect.dart';
import 'package:connectycube_sdk/connectycube_sdk.dart';
import 'login_screen.dart';
class ConversationCallScreen extends StatefulWidget {
final P2PSession _callSession;
final bool _isIncoming;
@override
State<StatefulWidget> createState() {
return _ConversationCallScreenState(_callSession, _isIncoming);
}
ConversationCallScreen(this._callSession, this._isIncoming);
}
class _ConversationCallScreenState extends State<ConversationCallScreen>
implements RTCSessionStateCallback<P2PSession> {
static const String TAG = "ConversationCallScreenState";
final P2PSession _callSession;
final bool _isIncoming;
bool _isCameraEnabled = true;
bool _isMicMute = false;
bool _isFrontCameraUsed = true;
bool _isSpeakerEnabled = Platform.isIOS ? false : true;
RTCVideoRenderer? _localVideoRenderer;
RTCVideoRenderer? _remoteVideoRenderer;
_ConversationCallScreenState(this._callSession, this._isIncoming);
@override
void initState() {
super.initState();
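// Wire up the media stream and session lifecycle listeners before starting or accepting the call.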
_callSession.onLocalStreamReceived = _addLocalMediaStream;
_callSession.onRemoteStreamReceived = _addRemoteMediaStream;
_callSession.onSessionClosed = _onSessionClosed;
_callSession.setSessionCallbacksListener(this);
if (_isIncoming) {
_callSession.acceptCall();
} else {
_callSession.startCall();
}
}
@override
void dispose() {
super.dispose();
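// Release the video renderers and dispose the remote media stream.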
_localVideoRenderer?.srcObject = null;
_localVideoRenderer?.dispose();
_remoteVideoRenderer?.srcObject?.dispose().then((_) {
_remoteVideoRenderer?.srcObject = null;
_remoteVideoRenderer?.dispose();
});
}
Future<void> _addLocalMediaStream(MediaStream stream) async {
var localVideoRenderer = RTCVideoRenderer();
await localVideoRenderer.initialize();
localVideoRenderer.srcObject = stream;
setState(() {
_localVideoRenderer = localVideoRenderer;
});
}
Future<void> _addRemoteMediaStream(
session, int userId, MediaStream stream) async {
var remoteVideoRenderer = RTCVideoRenderer();
await remoteVideoRenderer.initialize();
remoteVideoRenderer.srcObject = stream;
setState(() {
_remoteVideoRenderer = remoteVideoRenderer;
});
}
void _onSessionClosed(session) {
_callSession.removeSessionCallbacksListener();
Navigator.pushReplacement(
context,
MaterialPageRoute(
builder: (context) => LoginScreen(),
),
);
}
@override
Widget build(BuildContext context) {
return WillPopScope(
onWillPop: () => _onBackPressed(context),
child: Scaffold(
backgroundColor: Colors.grey,
body: Stack(
fit: StackFit.loose,
clipBehavior: Clip.none,
children: [
if (_remoteVideoRenderer != null)
RTCVideoView(
_remoteVideoRenderer!,
objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
),
if (_localVideoRenderer != null)
Align(
alignment: Alignment.topRight,
child: Padding(
padding: EdgeInsets.only(
top: MediaQuery.of(context).padding.top + 10,
right: MediaQuery.of(context).padding.right + 10),
child: SizedBox(
width: MediaQuery.of(context).size.width / 3,
height: MediaQuery.of(context).size.height / 4,
child: ClipRRect(
borderRadius: BorderRadius.circular(6.0),
child: RTCVideoView(
_localVideoRenderer!,
objectFit:
RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
mirror: _isFrontCameraUsed,
),
),
),
),
),
Align(
alignment: Alignment.bottomCenter,
child: _getActionsPanel(),
),
],
),
),
);
}
Widget _getActionsPanel() {
return Container(
margin: EdgeInsets.only(
bottom: MediaQuery.of(context).padding.bottom + 8,
left: MediaQuery.of(context).padding.left + 8,
right: MediaQuery.of(context).padding.right + 8),
child: ClipRRect(
borderRadius: BorderRadius.only(
bottomLeft: Radius.circular(32),
bottomRight: Radius.circular(32),
topLeft: Radius.circular(32),
topRight: Radius.circular(32)),
child: Container(
padding: EdgeInsets.all(4),
color: Colors.black26,
child: Row(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Padding(
padding: EdgeInsets.only(right: 4),
child: FloatingActionButton(
elevation: 0,
heroTag: "Mute",
child: Icon(
_isMicMute ? Icons.mic_off : Icons.mic,
color: _isMicMute ? Colors.grey : Colors.white,
),
onPressed: () => _muteMic(),
backgroundColor: Colors.black38,
),
),
Padding(
padding: EdgeInsets.only(right: 4),
child: FloatingActionButton(
elevation: 0,
heroTag: "ToggleCamera",
child: Icon(
_isVideoEnabled() ? Icons.videocam : Icons.videocam_off,
color: _isVideoEnabled() ? Colors.white : Colors.grey,
),
onPressed: () => _toggleCamera(),
backgroundColor: Colors.black38,
),
),
Padding(
padding: EdgeInsets.only(right: 4),
child: FloatingActionButton(
elevation: 0,
heroTag: "SwitchCamera",
child: Icon(
Icons.cameraswitch,
color: _isVideoEnabled() ? Colors.white : Colors.grey,
),
onPressed: () => _switchCamera(),
backgroundColor: Colors.black38,
),
),
Visibility(
visible: !(kIsWeb &&
(Browser().browserAgent == BrowserAgent.Safari ||
Browser().browserAgent == BrowserAgent.Firefox)),
child: FloatingActionButton(
elevation: 0,
heroTag: "SwitchAudioOutput",
child: Icon(
kIsWeb || WebRTC.platformIsDesktop
? Icons.surround_sound
: _isSpeakerEnabled
? Icons.volume_up
: Icons.volume_off,
color: _isSpeakerEnabled ? Colors.white : Colors.grey,
),
onPressed: () => _switchSpeaker(),
backgroundColor: Colors.black38,
),
),
Expanded(
child: SizedBox(),
flex: 1,
),
Padding(
padding: EdgeInsets.only(left: 0),
child: FloatingActionButton(
child: Icon(
Icons.call_end,
color: Colors.white,
),
backgroundColor: Colors.red,
onPressed: () => _endCall(),
),
),
],
),
),
),
);
}
_endCall() {
_callSession.hungUp();
}
Future<bool> _onBackPressed(BuildContext context) {
return Future.value(false);
}
_muteMic() {
setState(() {
_isMicMute = !_isMicMute;
_callSession.setMicrophoneMute(_isMicMute);
});
}
_switchCamera() {
if (!_isVideoEnabled()) return;
if (!kIsWeb && (Platform.isAndroid || Platform.isIOS)) {
_callSession.switchCamera().then((isFrontCameraUsed) {
setState(() {
_isFrontCameraUsed = isFrontCameraUsed;
});
});
} else {
showDialog(
context: context,
builder: (BuildContext context) {
return FutureBuilder<List<MediaDeviceInfo>>(
future: _callSession.getCameras(),
builder: (context, snapshot) {
if (!snapshot.hasData || snapshot.data!.isEmpty) {
return AlertDialog(
content: const Text('No cameras found'),
actions: <Widget>[
TextButton(
style: TextButton.styleFrom(
textStyle: Theme.of(context).textTheme.labelLarge,
),
child: const Text('Ok'),
onPressed: () {
Navigator.of(context).pop();
},
)
],
);
} else {
return SimpleDialog(
title: const Text('Select camera'),
children: snapshot.data?.map(
(mediaDeviceInfo) {
return SimpleDialogOption(
onPressed: () {
Navigator.pop(context, mediaDeviceInfo.deviceId);
},
child: Text(mediaDeviceInfo.label),
);
},
).toList(),
);
}
},
);
},
).then((deviceId) {
log("onCameraSelected deviceId: $deviceId", TAG);
if (deviceId != null) _callSession.switchCamera(deviceId: deviceId);
});
}
}
_toggleCamera() {
if (!_isVideoCall()) return;
setState(() {
_isCameraEnabled = !_isCameraEnabled;
_callSession.setVideoEnabled(_isCameraEnabled);
});
}
_switchSpeaker() {
if (kIsWeb || WebRTC.platformIsDesktop) {
showDialog(
context: context,
builder: (BuildContext context) {
return FutureBuilder<List<MediaDeviceInfo>>(
future: _callSession.getAudioOutputs(),
builder: (context, snapshot) {
if (!snapshot.hasData || snapshot.data!.isEmpty) {
return AlertDialog(
content: const Text('No Audio output devices found'),
actions: <Widget>[
TextButton(
style: TextButton.styleFrom(
textStyle: Theme.of(context).textTheme.labelLarge,
),
child: const Text('Ok'),
onPressed: () {
Navigator.of(context).pop();
},
)
],
);
} else {
return SimpleDialog(
title: const Text('Select Audio output device'),
children: snapshot.data?.map(
(mediaDeviceInfo) {
return SimpleDialogOption(
onPressed: () {
Navigator.pop(context, mediaDeviceInfo.deviceId);
},
child: Text(mediaDeviceInfo.label),
);
},
).toList(),
);
}
},
);
},
).then((deviceId) {
log("onAudioOutputSelected deviceId: $deviceId", TAG);
if (deviceId != null) {
setState(() {
if (kIsWeb) {
_localVideoRenderer?.audioOutput(deviceId);
_remoteVideoRenderer?.audioOutput(deviceId);
} else {
_callSession.selectAudioOutput(deviceId);
}
});
}
});
} else {
setState(() {
_isSpeakerEnabled = !_isSpeakerEnabled;
_callSession.enableSpeakerphone(_isSpeakerEnabled);
});
}
}
bool _isVideoEnabled() {
return _isVideoCall() && _isCameraEnabled;
}
bool _isVideoCall() {
return CallType.VIDEO_CALL == _callSession.callType;
}
@override
void onConnectedToUser(P2PSession session, int userId) {
log("onConnectedToUser userId= $userId");
}
@override
void onConnectionClosedForUser(P2PSession session, int userId) {
log("onConnectionClosedForUser userId= $userId");
// _removeMediaStream(session, userId);
}
@override
void onDisconnectedFromUser(P2PSession session, int userId) {
log("onDisconnectedFromUser userId= $userId");
}
}
In the steps below, we will describe the main features provided in this file.
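First, a quick note on how to get to this screen: typically you push it when starting or receiving a call. A minimal sketch, assuming a valid BuildContext and an existing callSession:
Navigator.push(
  context,
  MaterialPageRoute(
    builder: (context) => ConversationCallScreen(callSession, false), // `false` = outgoing call, `true` = incoming
  ),
);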
Display the members' videos on the UI
To display the users' videos, the P2PSession provides special listeners. The onLocalStreamReceived listener is used to listen for changes to the local (the current user's) media stream, and onRemoteStreamReceived listens for the opponents' media streams.
Use these listeners to update your UI. The code for building the local video view widget could look like the following:
callSession.onLocalStreamReceived = (localMediaStream) async {
var localVideoRenderer = RTCVideoRenderer();
await localVideoRenderer.initialize();
localVideoRenderer.srcObject = localMediaStream;
var localVideoView = Expanded(
child: RTCVideoView(localVideoRenderer,
objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
mirror: true,
),
);
};
Building the remote video view widget is similar and could look like the following:
callSession.onRemoteStreamReceived = (callSession, userId, remoteMediaStream) async {
var remoteVideoRenderer = RTCVideoRenderer();
await remoteVideoRenderer.initialize();
remoteVideoRenderer.srcObject = remoteMediaStream;
var remoteVideoView = Expanded(
child: RTCVideoView(
remoteVideoRenderer,
objectFit: RTCVideoViewObjectFit.RTCVideoViewObjectFitCover,
),
);
};
These localVideoView and remoteVideoView widgets can be used in your main widget anywhere you need them.
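For example, in a stateful widget you could store these views in nullable fields (updated via setState from the listeners above) and render them together; a minimal sketch, assuming localVideoView and remoteVideoView are such fields:
Widget? localVideoView;
Widget? remoteVideoView;

@override
Widget build(BuildContext context) {
  return Column(
    children: <Widget>[
      if (remoteVideoView != null) remoteVideoView!, // set in `onRemoteStreamReceived`
      if (localVideoView != null) localVideoView!, // set in `onLocalStreamReceived`
    ],
  );
}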
After establishing the call connection, the call screen should look similar to the following picture (the screenshot was taken on an emulator):
Accept the call
Once one user initiates a call, the others can accept it. It is easy: after receiving the call in the onReceiveNewSession callback, your opponent just needs to call the acceptCall() method:
P2PClient.instance.onReceiveNewSession = (callSession){
callSession.acceptCall();
};
This is enough to start the call connection and begin exchanging media streams between users on the same call.
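In a real app, you will usually let the callee decide. Here is a sketch of a simple accept/reject prompt; the dialog layout is illustrative and assumes a valid BuildContext:
P2PClient.instance.onReceiveNewSession = (callSession) {
  showDialog(
    context: context,
    builder: (dialogContext) => AlertDialog(
      title: const Text('Incoming call'),
      actions: <Widget>[
        TextButton(
          onPressed: () {
            Navigator.pop(dialogContext);
            callSession.reject(); // decline the call
          },
          child: const Text('Reject'),
        ),
        TextButton(
          onPressed: () {
            Navigator.pop(dialogContext);
            callSession.acceptCall(); // answer the call
          },
          child: const Text('Accept'),
        ),
      ],
    ),
  );
};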
Manage media sources during the call
A typical calling app has minimal functionality for managing the user's media during the call, such as enabling/disabling the microphone or camera, and a mobile app can usually route the sound to the speakerphone or the headphone. Your users may also want to switch between the front and back cameras. You can do all of this very easily using the ConnectyCube Flutter SDK API.
Mute/Unmute mic
Use the setMicrophoneMute method to disable/enable the microphone:
callSession.setMicrophoneMute(true); // `true` for muting, `false` - for unmuting
Enable/Disable the camera
Use the setVideoEnabled method to disable/enable the camera:
callSession.setVideoEnabled(false); // `false` for disabling, `true` - for enabling
Switching between the Speakerphone and Headphone
A common case is when a call member wants to switch between the speakerphone and the headphone. Just use the enableSpeakerphone method to switch between them: pass true to enable the speakerphone and false to disable it. In code it looks like this:
callSession.enableSpeakerphone(true); // `true` for enabling Speakerphone, `false` - for disabling
Considering that the Flutter framework also supports the Web and Desktop platforms, a requirement to manage the audio output device may appear on these platforms. The ConnectyCube Flutter SDK provides this feature; just unwrap the spoiler below:
Change the Audio Output Device on the WEB (Chrome-based browsers) and Desktop
if (kIsWeb || WebRTC.platformIsDesktop) {
showDialog(
context: context,
builder: (BuildContext context) {
return FutureBuilder<List<MediaDeviceInfo>>(
future: callSession.getAudioOutputs(),
builder: (context, snapshot) {
if (!snapshot.hasData || snapshot.data!.isEmpty) {
return AlertDialog(
content: const Text('No Audio output devices found'),
actions: <Widget>[
TextButton(
style: TextButton.styleFrom(
textStyle: Theme.of(context).textTheme.labelLarge,
),
child: const Text('Ok'),
onPressed: () {
Navigator.of(context).pop();
},
)
],
);
} else {
return SimpleDialog(
title: const Text('Select Audio output device'),
children: snapshot.data?.map((mediaDeviceInfo) {
return SimpleDialogOption(
onPressed: () {
Navigator.pop(context, mediaDeviceInfo.deviceId);
},
child: Text(mediaDeviceInfo.label),
);
}).toList(),
);
}
},
);
},
).then((deviceId) {
if (deviceId != null) {
setState(() {
if (kIsWeb) {
_localVideoRenderer?.audioOutput(deviceId);
_remoteVideoRenderer?.audioOutput(deviceId);
} else {
callSession.selectAudioOutput(deviceId);
}
});
}
});
}
Switching between the Front and Back cameras
Almost every modern device has two cameras: the front and the back one. It is a normal case for a user to want to switch between them during a call. The ConnectyCube Flutter SDK has a convenient API for doing this: you just need to call the switchCamera() method. This method returns a Future with a boolean value, where true means the front camera was selected and false means the back camera was selected. Please see the snippet below for how it can be used in code:
callSession.switchCamera().then((isFrontCameraUsed) {
if (isFrontCameraUsed){
// the Front camera was selected
} else {
// the Back camera was selected
}
});
Also, let's not forget that a Flutter app works not only on Android/iOS, but also on the Web and Desktop platforms. And of course, the ConnectyCube Flutter SDK has an API for switching to any camera connected to your desktop. Just unwrap the spoiler below to see how to implement it in an app that supports Web and Desktop:
Select a Camera
if (kIsWeb || WebRTC.platformIsDesktop) {
showDialog(
context: context,
builder: (BuildContext context) {
return FutureBuilder<List<MediaDeviceInfo>>(
future: _callSession.getCameras(),
builder: (context, snapshot) {
if (!snapshot.hasData || snapshot.data!.isEmpty) {
return AlertDialog(
content: const Text('No cameras found'),
actions: <Widget>[
TextButton(
style: TextButton.styleFrom(
textStyle: Theme.of(context).textTheme.labelLarge,
),
child: const Text('Ok'),
onPressed: () {
Navigator.of(context).pop();
},
)
],
);
} else {
return SimpleDialog(
title: const Text('Select camera'),
children: snapshot.data?.map((mediaDeviceInfo) {
return SimpleDialogOption(
onPressed: () {
Navigator.pop(context, mediaDeviceInfo.deviceId);
},
child: Text(mediaDeviceInfo.label),
);
},).toList(),
);
}
},
);
},
).then((deviceId) {
if (deviceId != null) _callSession.switchCamera(deviceId: deviceId);
});
}
Ending the call
There are two stages of the calling flow at which a call can be ended: a new call has just been received, or the call is in progress. If you want to end the call at the first stage, you need to call the reject method:
callSession.reject();
If you want to end the call after accepting it, you need to call the hungUp method:
callSession.hungUp();
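Whichever side ends the call, both parties receive the onSessionClosed callback, which is the right place to release the renderers and leave the call screen; a minimal sketch:
callSession.onSessionClosed = (session) {
  // dispose video renderers, navigate away from the call screen, etc.
};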
Conclusion
The complete source code of the final app can be found at https://github.com/ConnectyCube/connectycube-flutter-samples/tree/master/p2p_call_sample.
In the ever-evolving landscape of mobile, desktop, and web applications, the integration of calling functionality is no longer just an option; it's a vital element that can elevate your app's user experience and expand its capabilities.
Incorporating voice and video calls into your app can bring forth a multitude of benefits. It enhances real-time communication, fosters user engagement, and provides a competitive edge. Whether your app is for personal communication, business collaboration, or specialized services like telehealth and dating, integrating calls opens up a world of possibilities.
In the end, call integration is more than a feature; it's a conduit for forging connections, facilitating collaboration, and enriching the user journey. It's a testament to the evolving nature of technology and its capacity to transform the way we interact. So, as you embark on your journey to integrate calls into your app, remember that the potential for meaningful and impactful communication is now in your hands.
Where to go next
We have just described the bare minimum functionality you could integrate into your calling app. There are more advanced features you might be interested in adding as well. Follow the links below to the ConnectyCube documentation for more information on how to add them:
- Microphone selection functionality;
- Screen Sharing;
- Monitoring mic level and video bitrate;
- CallKit integration;
- etc.