Deep Learning with React Native (iOS only)
Thomas Dittmar
Posted on August 9, 2020
Intro
In this tutorial, I will cover all the steps needed to build a mobile application and train a deep learning model, so that you can predict a handwritten digit between 0 and 9 using your phone's camera.
Cool, hey?
But before we start building the mobile app we need to come up with a high-level strategy. Let's go through the thought process:
- Do we build a bare RN or an Expo app?
- Which camera library do we want to use?
- Do we have to crop the images, and which library do we want to use for that?
- How do we train a deep learning model?
- How do we use that model against the photo?
- How do we show the result?
Note: This tutorial assumes some prerequisites and a good understanding of RN and JavaScript in general. If you are an absolute beginner, I would suggest following a good course on YouTube, Udemy or Egghead before continuing with this tutorial.
Let's get started
I'm going to divide this tutorial into 3 sections:
- Section 1: Create the RN application
- Section 2: Train the deep learning model
- Section 3: Implement the model, predict and show the result
Section 1 - Create the RN application
Remember the first point of our thought process: whether to create a bare or an Expo boilerplate app?
After some research, I decided to load the trained model locally. It's the easiest way, as it avoids fetching the model from a cloud server, but you could do that too.
In this tutorial, we will use bundleResourceIO from @tensorflow/tfjs-react-native, which is unfortunately not compatible with Expo.
Also, since we want to use the camera, we have to use a physical phone and not a simulator. For that, you must have an Apple Developer account to sign your app; otherwise, you will not be able to run the app on the device.
Let's create the app with the following command:
$ react-native init MyFirstMLApp
After the installation process has completed, make sure all your pods are installed too!
$ cd MyFirstMLApp
$ npx pod-install
Let's run the app for the first time on your physical iPhone. Open the MyFirstMLApp.xcworkspace in Xcode, connect your iPhone to your Mac with the Lightning cable and select your phone as the build target. Press the play button to build and run the app for the first time. You should see the Welcome to React screen on your iPhone.
Awesome
Let's add some packages needed for this app:
yarn add @react-native-community/async-storage @react-native-community/cameraroll @tensorflow/tfjs @tensorflow/tfjs-react-native expo-camera expo-gl expo-gl-cpp expo-image-manipulator react-native-fs react-native-svg react-native-unimodules victory-native
and finally, install the navigation library.
yarn add react-native-navigation && npx rnn-link
The latter command will link the navigation package for iOS and Android. But we are not quite done yet.
Because we use the bare RN framework, react-native-unimodules needs to be installed manually.
Please click on the link and modify the Podfile as described in the iOS section of the installation guide. After that, run
$ npx pod-install
and build the Xcode project to see if everything has been installed correctly.
Then continue by adding the unimodules code to the AppDelegate.m, as described in the same guide, and build the project again.
Because we want to use the camera to take pictures, we also need to add a few privacy keys to the Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<!-- Required for iOS 10 and higher -->
<key>NSCameraUsageDescription</key>
<string>We need to use the camera for taking pictures of the digits</string>
<!-- Required for iOS 11 and higher: include this only if you are planning to use the camera roll -->
<key>NSPhotoLibraryAddUsageDescription</key>
<string>We need to access the photo library to upload the images</string>
<!-- Include this only if you are planning to use the camera roll -->
<key>NSPhotoLibraryUsageDescription</key>
<string>We need to access the photo library to upload the images</string>
<!-- Include this only if you are planning to use the microphone for video recording -->
<key>NSMicrophoneUsageDescription</key>
<string>We need to access the microphone</string>
<key>CFBundleDevelopmentRegion</key>
<string>en</string>
If Xcode builds fine, you can either continue running the app from Xcode or just use the terminal.
If you decide to run the app from the command line from now on, like me, please add --device to the ios script in your package.json file and run
yarn ios
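For reference, the scripts section of your package.json could then look something like this (the exact scripts generated by react-native init may differ slightly):
"scripts": {
  "android": "react-native run-android",
  "ios": "react-native run-ios --device",
  "start": "react-native start",
  "test": "jest",
  "lint": "eslint ."
}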
Once the app starts on your iPhone, don't be surprised that you don't see the welcome page anymore. That's because we use react-native-navigation. Instead, you should see the MyFirstMLApp loading screen.
Now it's time to create our only two screens and add the navigation for them to our project.
Please create the src/screens/CameraView and src/screens/EvaluationView directories in the root of our project.
Inside src/screens/CameraView create an index.js file and add this code:
import React, { useState, useRef, useEffect } from 'react';
import {
SafeAreaView,
TouchableOpacity,
View,
Text,
StatusBar,
} from 'react-native';
import { Navigation } from 'react-native-navigation';
import { Camera } from 'expo-camera';
const MASK_DIMENSION = 100;
export const CameraView = (props) => {
const [hasPermission, setHasPermission] = useState(null);
const [showShutterButton, setShowShutterButton] = useState(false);
const cameraRef = useRef();
useEffect(() => {
(async () => {
const { status } = await Camera.requestPermissionsAsync();
setHasPermission(status === 'granted');
})();
}, []);
const handlePictureProcessing = async () => {
goToEvaluationView();
};
const goToEvaluationView = () => {
Navigation.push(props.componentId, {
component: {
name: 'evaluationView',
options: {
topBar: {
title: {
text: 'Evaluating ML result',
color: 'white',
},
background: {
color: '#4d089a',
},
backButton: {
color: 'white',
showTitle: false,
},
},
},
passProps: {},
},
});
};
if (hasPermission === null) {
return <View />;
}
if (hasPermission === false) {
return <Text>No access to camera</Text>;
}
return (
<React.Fragment>
<StatusBar barStyle="light-content" />
<SafeAreaView style={styles.safeArea}>
<Camera
ref={cameraRef}
type={Camera.Constants.Type.back}
whiteBalance={Camera.Constants.WhiteBalance.auto}
onCameraReady={() => setShowShutterButton(true)}>
<View style={styles.cameraView}>
<View style={styles.mask} />
{showShutterButton && (
<TouchableOpacity
style={styles.shutterButton}
onPress={handlePictureProcessing}>
<Text style={styles.shutterButtonText}>Take a picture</Text>
</TouchableOpacity>
)}
</View>
</Camera>
</SafeAreaView>
</React.Fragment>
);
};
const styles = {
safeArea: {
backgroundColor: '#4d089a',
},
cameraView: {
height: '100%',
justifyContent: 'center',
alignItems: 'center',
backgroundColor: 'transparent',
},
mask: {
height: MASK_DIMENSION,
width: MASK_DIMENSION,
borderWidth: 3,
borderColor: 'white',
borderStyle: 'dotted',
borderRadius: 15,
},
shutterButton: {
position: 'absolute',
bottom: 0,
width: 150,
height: 40,
justifyContent: 'center',
alignItems: 'center',
borderWidth: 1,
borderColor: 'white',
borderRadius: 15,
marginBottom: 20,
},
shutterButtonText: {
fontSize: 18,
color: 'white',
},
};
CameraView.options = {
statusBar: {
backgroundColor: null,
},
topBar: {
title: {
text: 'Take a picture',
color: 'white',
},
background: {
color: '#4d089a',
},
},
tapBar: {
background: {
color: '#4d089a',
},
},
};
Inside src/screens/EvaluationView create an index.js file and add this code:
import React from 'react';
import { SafeAreaView, View, Text, StatusBar } from 'react-native';
export const EvaluationView = (props) => {
return (
<React.Fragment>
<StatusBar barStyle="light-content" />
<SafeAreaView style={styles.safeArea}>
<View style={styles.container}>
<Text style={styles.headerText}>ANALYSIS</Text>
</View>
</SafeAreaView>
</React.Fragment>
);
};
const styles = {
safeArea: {
backgroundColor: '#4d089a',
},
container: {
height: '100%',
alignItems: 'center',
backgroundColor: 'white',
},
headerText: {
fontSize: 20,
fontWeight: '500',
color: '#4d089a',
margin: 20,
},
};
Then override the index.js file in your root with the following code:
import { Navigation } from 'react-native-navigation';
import { CameraView } from './src/screens/CameraView';
import { EvaluationView } from './src/screens/EvaluationView';
Navigation.registerComponent('cameraView', () => CameraView);
Navigation.registerComponent('evaluationView', () => EvaluationView);
Navigation.setDefaultOptions({
statusBar: {
style: 'light',
backgroundColor: '#4d089a',
},
topBar: {
title: {
color: 'white',
},
background: {
color: '#4d089a',
},
backButton: {
color: 'white',
showTitle: false,
},
},
});
Navigation.events().registerAppLaunchedListener(() => {
Navigation.setRoot({
root: {
stack: {
children: [
{
component: {
name: 'cameraView',
},
},
],
},
},
});
});
Finally, you can remove the App.js file as it's not needed anymore.
Restart your Metro bundler and you should see the app running with the two screens and the new navigation.
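A quick way to do that, with a cleared cache (assuming the standard React Native CLI):
$ npx react-native start --reset-cache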
Congratulations! You have created the base app, which doesn't take pictures yet but can already navigate from one screen to the other.
Section 2 - Train the deep learning model
Initially, I used this pre-trained model from Kaggle, but the effort to make the app work with it was huge.
I had to create an AWS EC2 Deep Learning AMI (Amazon Linux 2) Version 30.1 instance with SSH access because my MacBook doesn't support CUDA. (GPU support is needed to train the model.)
Then I had to copy the Jupyter notebook from Kaggle, run it on the AWS instance to train the model (it ran for 3 hours) and move the trained model back into my project.
Furthermore, I had to install OpenGL to modify the taken image and write a quite complex script to reshape the base64 string into a tensor matching the expected input of the model ([1, 28, 28, 1]).
All that made me rethink how to write this tutorial. After all, this tutorial should be for people who just want to play around with a machine learning model without having to learn Python, Jupyter, TensorFlow and Keras beforehand. Also, the tutorial would have been five times its current length.
Note: If you want to learn how to use TensorFlow & Keras, I found a good YouTube channel about deep learning by deeplizard, which is very informative and in line with what we want to do in this tutorial. There is also this course on Udemy, which is comprehensive as well, but it's not free 😉.
Anyway, for this tutorial, I decided to use Google's Teachable Machine to train the model on our images.
The idea is to take 28 x 28 pixel images with the app we just built, upload them to the Teachable Machine and download the trained model back into our project.
Just in case you were wondering why I use 28 x 28 pixel images: that was the input size of the model I used first, so I stuck with it.
That also means we have to crop the taken images and save them to the photo library. In order to do that, we need to modify our code a little.
Please create a helpers.js file inside the CameraView folder and paste the following code:
import { Dimensions } from 'react-native';
import * as ImageManipulator from 'expo-image-manipulator';
import CameraRoll from '@react-native-community/cameraroll';
const { height: DEVICE_HEIGHT, width: DEVICE_WIDTH } = Dimensions.get('window');
// the Teachable Machine trains on 224 x 224 pixel images; 28 x 8 = 224, hence the 8x pixel resolution conversion
export const BITMAP_DIMENSION = 224;
export const cropPicture = async (imageData, maskDimension) => {
try {
const { uri, width, height } = imageData;
const cropWidth = maskDimension * (width / DEVICE_WIDTH);
const cropHeight = maskDimension * (height / DEVICE_HEIGHT);
const actions = [
{
crop: {
originX: width / 2 - cropWidth / 2,
originY: height / 2 - cropHeight / 2,
width: cropWidth,
height: cropHeight,
},
},
{
resize: {
width: BITMAP_DIMENSION,
height: BITMAP_DIMENSION,
},
},
];
const saveOptions = {
compress: 1,
format: ImageManipulator.SaveFormat.JPEG,
base64: false,
};
return await ImageManipulator.manipulateAsync(uri, actions, saveOptions);
} catch (error) {
console.log('Could not crop & resize photo', error);
}
};
export const saveToCameraRoll = async (uri) => {
try {
return await CameraRoll.save(uri, 'photo');
} catch (error) {
console.log('Could not save the image', error);
}
};
Add the import to the src/screens/CameraView/index.js file:
import { cropPicture, saveToCameraRoll } from './helpers';
Then add the takePicture function and modify the handlePictureProcessing function like this:
const handlePictureProcessing = async () => {
const imageData = await takePicture();
const croppedData = await cropPicture(imageData, MASK_DIMENSION);
await saveToCameraRoll(croppedData.uri);
// we don't want to go to the evaluation view now
//goToEvaluationView();
};
const takePicture = async () => {
const options = {
quality: 0.1,
fixOrientation: true,
};
try {
return await cameraRef.current.takePictureAsync(options);
} catch (error) {
console.log('Could not take photo', error);
}
};
As you can see, we commented out the line goToEvaluationView(); so we don't navigate to the other screen yet. That means you can take as many pictures in a row as you want. All the pictures will now be saved to the photo library.
Our next task is to write as many variations of numbers between 0 and 9 as possible on a piece of paper. The more numbers, colours and pen shapes we use, the better the prediction will be.
I was lazy and ended up with roughly 10 variations per number, but the prediction was a bit off for a few digits such as 4 and 8. So it's up to you how many images you let the Teachable Machine train on.
When you have finished taking the images, AirDrop them all to your Mac, upload them from there to the Teachable Machine and start the training.
Once the training is finished, you can take additional pictures with your app and upload them too, to test them against the trained model.
If you are happy with the result, click on Export Model -> Tensorflow.js -> Download -> Download my model, which will download a ZIP file.
Unzip it, create a model folder in the src directory (src/model) and copy the model.json and the weights.bin into that folder.
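Assuming the folder names used so far in this tutorial, the relevant part of the project should now look like this:
src/
├── model/
│   ├── model.json
│   └── weights.bin
└── screens/
    ├── CameraView/
    │   ├── helpers.js
    │   └── index.js
    └── EvaluationView/
        └── index.js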
We also need to tell Metro how to deal with the new file format, *.bin. So please modify the metro.config.js like this:
const { getDefaultConfig } = require('metro-config');
module.exports = (async () => {
const {
resolver: { assetExts },
} = await getDefaultConfig();
return {
transformer: {
getTransformOptions: async () => ({
transform: {
experimentalImportSupport: false,
inlineRequires: false,
},
}),
},
resolver: {
assetExts: [...assetExts, 'bin'],
},
};
})();
Great! Now that our model is in the project, let's start using it to predict the number.
Section 3 - Implement the model, predict and show the result
Firstly, we don't want to save the photos we use for the prediction to the photo library anymore (unless you want to). So comment out the line await saveToCameraRoll(croppedData.uri);.
We also need the base64 string of the cropped image from the returned data, and lastly, we want to pass that base64 string to the EvaluationView via props.
Let's modify our src/screens/CameraView/index.js file again like this:
const handlePictureProcessing = async () => {
const imageData = await takePicture();
const croppedData = await cropPicture(imageData, MASK_DIMENSION);
// await saveToCameraRoll(croppedData.uri);
goToEvaluationView(croppedData);
};
const goToEvaluationView = (croppedData) => {
Navigation.push(props.componentId, {
component: {
name: 'evaluationView',
options: {
topBar: {
title: {
text: 'Evaluating ML result',
color: 'white',
},
background: {
color: '#4d089a',
},
backButton: {
color: 'white',
showTitle: false,
},
},
},
passProps: {
base64: croppedData.base64 || null,
},
},
});
};
Sweet! Let's display the image in the EvaluationView. Import Image from react-native and add the Image component within the View container like this:
<View style={styles.container}>
<Text style={styles.headerText}>ANALYSIS</Text>
<Image
style={styles.imageContainer}
source={{ uri: `data:image/gif;base64,${props.base64}` }}
resizeMethod="scale"/>
</View>
and add the style for the imageContainer underneath the headerText style:
imageContainer: {
height: 300,
width: 300,
},
The last step is to go to the src/screens/CameraView/helpers.js file and change the saveOptions to base64: true.
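The updated saveOptions object in helpers.js should then look like this:
const saveOptions = {
  compress: 1,
  format: ImageManipulator.SaveFormat.JPEG,
  base64: true, // we now need the base64 string for the prediction
};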
Voilà! You should now see the taken image in the EvaluationView below the ANALYSIS text.
Let's add the Victory chart to the EvaluationView, together with a few more react-native imports:
import React from 'react';
import {
Dimensions,
ActivityIndicator,
SafeAreaView,
View,
Image,
Text,
StatusBar,
} from 'react-native';
import {
VictoryChart,
VictoryAxis,
VictoryBar,
VictoryTheme,
} from 'victory-native';
const { width: DEVICE_WIDTH } = Dimensions.get('window');
To get the width of the device (needed for the VictoryChart), we use the Dimensions API from react-native.
Then add the Victory chart container. Since we only want to show the chart once we have a prediction, we add a condition based on the length of graphData, plus some fake graph data for now:
import React from 'react';
import {
Dimensions,
ActivityIndicator,
SafeAreaView,
View,
Image,
Text,
StatusBar,
} from 'react-native';
import {
VictoryChart,
VictoryAxis,
VictoryBar,
VictoryTheme,
} from 'victory-native';
const { width: DEVICE_WIDTH } = Dimensions.get('window');
export const EvaluationView = (props) => {
const graphData = [
{ number: 0, prediction: 0.04 },
{ number: 1, prediction: 0.02 },
{ number: 2, prediction: 0.02 },
{ number: 3, prediction: 0.1 },
{ number: 4, prediction: 0.85 },
{ number: 5, prediction: 0.04 },
{ number: 6, prediction: 0.2 },
{ number: 7, prediction: 0.12 },
{ number: 8, prediction: 0.0 },
{ number: 9, prediction: 0.0 },
];
return (
<React.Fragment>
<StatusBar barStyle="light-content" />
<SafeAreaView style={styles.safeArea}>
<View style={styles.container}>
<Text style={styles.headerText}>ANALYSIS</Text>
<Image
style={styles.imageContainer}
source={{ uri: `data:image/gif;base64,${props.base64}` }}
resizeMethod="scale"
/>
<View style={styles.resultContainer}>
{graphData.length ? (
<VictoryChart
width={DEVICE_WIDTH - 20}
padding={{ top: 30, bottom: 70, left: 50, right: 30 }}
theme={VictoryTheme.material}>
<VictoryAxis
tickValues={[1, 2, 3, 4, 5, 6, 7, 8, 9]}
tickFormat={[1, 2, 3, 4, 5, 6, 7, 8, 9]}
/>
<VictoryAxis dependentAxis tickFormat={(tick) => tick} />
<VictoryBar
style={{ data: { fill: '#c43a31' } }}
barRatio={0.8}
alignment="start"
data={graphData}
x="number"
y="prediction"
/>
</VictoryChart>
) : (
<ActivityIndicator size="large" color="#4d089a" />
)}
</View>
</View>
</SafeAreaView>
</React.Fragment>
);
};
You should now have a screen showing the cropped image and the (fake) prediction chart.
Wonderful.
Now we are coming to the final part of the tutorial, where we will load the model and run the taken photo against it.
Please create a util.js file in the src directory and paste the following code:
/* eslint-disable no-bitwise */
/*
Copyright (c) 2011, Daniel Guerrero
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL DANIEL GUERRERO BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* Uses the new array typed in javascript to binary base64 encode/decode
* at the moment just decodes a binary base64 encoded
* into either an ArrayBuffer (decodeArrayBuffer)
* or into an Uint8Array (decode)
*
* References:
* https://developer.mozilla.org/en/JavaScript_typed_arrays/ArrayBuffer
* https://developer.mozilla.org/en/JavaScript_typed_arrays/Uint8Array
*/
export const Base64Binary = {
_keyStr: 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=',
/* will return a Uint8Array type */
decodeArrayBuffer: function (input) {
var bytes = (input.length / 4) * 3;
var ab = new ArrayBuffer(bytes);
this.decode(input, ab);
return ab;
},
removePaddingChars: function (input) {
var lkey = this._keyStr.indexOf(input.charAt(input.length - 1));
if (lkey === 64) {
return input.substring(0, input.length - 1);
}
return input;
},
decode: function (input, arrayBuffer) {
//get last chars to see if are valid
input = this.removePaddingChars(input);
input = this.removePaddingChars(input);
var bytes = parseInt((input.length / 4) * 3, 10);
var uarray;
var chr1, chr2, chr3;
var enc1, enc2, enc3, enc4;
var i = 0;
var j = 0;
if (arrayBuffer) {
uarray = new Uint8Array(arrayBuffer);
} else {
uarray = new Uint8Array(bytes);
}
input = input.replace(/[^A-Za-z0-9\+\/\=]/g, '');
for (i = 0; i < bytes; i += 3) {
//get the 3 octects in 4 ascii chars
enc1 = this._keyStr.indexOf(input.charAt(j++));
enc2 = this._keyStr.indexOf(input.charAt(j++));
enc3 = this._keyStr.indexOf(input.charAt(j++));
enc4 = this._keyStr.indexOf(input.charAt(j++));
chr1 = (enc1 << 2) | (enc2 >> 4);
chr2 = ((enc2 & 15) << 4) | (enc3 >> 2);
chr3 = ((enc3 & 3) << 6) | enc4;
uarray[i] = chr1;
if (enc3 !== 64) {
uarray[i + 1] = chr2;
}
if (enc4 !== 64) {
uarray[i + 2] = chr3;
}
}
return uarray;
},
};
Out of respect for the developer, please don't remove the copyright disclaimer 😃
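Just to illustrate what this utility does: Base64Binary.decode takes a base64 string and returns a Uint8Array of raw bytes, which is exactly the input format decodeJpeg expects later on. A quick throwaway example (the string is just a placeholder value):
import { Base64Binary } from './src/util'; // adjust the relative path to wherever you import it from

// 'SGVsbG8=' is the base64 encoding of 'Hello'
const bytes = Base64Binary.decode('SGVsbG8=');
console.log(bytes); // Uint8Array [72, 101, 108, 108, 111]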
Now create another helpers.js file, but this time in the EvaluationView directory (src/screens/EvaluationView/helpers.js), and copy in this code:
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native';
import { bundleResourceIO, decodeJpeg } from '@tensorflow/tfjs-react-native';
import { Base64Binary } from '../../util';
import { BITMAP_DIMENSION } from '../CameraView/helpers';
const modelJson = require('../../model/model.json');
const modelWeights = require('../../model/weights.bin');
// 0: use the number of channels in the JPEG-encoded image
// 1: gray scale
// 3: RGB image
const TENSORFLOW_CHANNEL = 3;
export const getModel = async () => {
try {
// wait until tensorflow is ready
await tf.ready();
// load the trained model
return await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
} catch (error) {
console.log('Could not load model', error);
}
};
export const convertBase64ToTensor = async (props) => {
try {
const uIntArray = Base64Binary.decode(props.base64);
// decode the JPEG-encoded image into a 3D tensor
const decodedImage = decodeJpeg(uIntArray, TENSORFLOW_CHANNEL);
// reshape Tensor into a 4D array
return decodedImage.reshape([
1,
BITMAP_DIMENSION,
BITMAP_DIMENSION,
TENSORFLOW_CHANNEL,
]);
} catch (error) {
console.log('Could not convert base64 string to tensor', error);
}
};
export const startPrediction = async (model, tensor) => {
try {
// predict against the model
const output = await model.predict(tensor);
// return typed array
return output.dataSync();
} catch (error) {
console.log('Error predicting from tensor image', error);
}
};
export const populateData = (typedArray) => {
const predictions = Array.from(typedArray);
return predictions.map((item, index) => {
return {
number: index,
prediction: item,
};
});
};
These are our functions to load the model, convert the base64 string to a tensor, predict the digit and populate the data for the victory chart.
Last but not least, we call these functions in a useEffect hook (the functional equivalent of componentDidMount) in src/screens/EvaluationView/index.js. Here is the complete code of that component:
import React, { useState, useEffect } from 'react';
import {
Dimensions,
ActivityIndicator,
SafeAreaView,
View,
Image,
Text,
StatusBar,
} from 'react-native';
import {
VictoryChart,
VictoryAxis,
VictoryBar,
VictoryTheme,
} from 'victory-native';
import {
getModel,
convertBase64ToTensor,
startPrediction,
populateData,
} from './helpers';
const { width: DEVICE_WIDTH } = Dimensions.get('window');
export const EvaluationView = (props) => {
const [graphData, setGraphData] = useState([]);
useEffect(() => {
const predictDigits = async () => {
const model = await getModel();
const tensor = await convertBase64ToTensor(props);
const typedArray = await startPrediction(model, tensor);
setGraphData(populateData(typedArray));
};
predictDigits();
}, [props]);
return (
<React.Fragment>
<StatusBar barStyle="light-content" />
<SafeAreaView style={styles.safeArea}>
<View style={styles.container}>
<Text style={styles.headerText}>ANALYSIS</Text>
<Image
style={styles.imageContainer}
source={{ uri: `data:image/gif;base64,${props.base64}` }}
resizeMethod="scale"
/>
<View style={styles.resultContainer}>
{graphData.length ? (
<VictoryChart
width={DEVICE_WIDTH - 20}
padding={{ top: 30, bottom: 70, left: 50, right: 30 }}
theme={VictoryTheme.material}>
<VictoryAxis
tickValues={[1, 2, 3, 4, 5, 6, 7, 8, 9]}
tickFormat={[1, 2, 3, 4, 5, 6, 7, 8, 9]}
/>
<VictoryAxis dependentAxis tickFormat={(tick) => tick} />
<VictoryBar
style={{ data: { fill: '#c43a31' } }}
barRatio={0.8}
alignment="start"
data={graphData}
x="number"
y="prediction"
/>
</VictoryChart>
) : (
<ActivityIndicator size="large" color="#4d089a" />
)}
</View>
</View>
</SafeAreaView>
</React.Fragment>
);
};
const styles = {
safeArea: {
backgroundColor: '#4d089a',
},
container: {
height: '100%',
alignItems: 'center',
backgroundColor: 'white',
},
headerText: {
fontSize: 20,
fontWeight: '500',
color: '#4d089a',
margin: 20,
},
imageContainer: {
height: 300,
width: 300,
},
resultContainer: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
},
};
As I mentioned before, the trained model will only be as good as the data you train it with.
In a real-world scenario, a data engineer would use tens of thousands of variations of handwritten digits to train the model, then use another (validation) set to tweak it, and a completely new (test) set to check the model's performance.
That was obviously out of scope for this fun little project, but I hope you learned something.
On a side note before I close this tutorial: if you are a seasoned React Native developer, you will have realised by now that, with a few manual imports, especially for react-native-unimodules and expo-camera, and the right permission settings, the project will work on Android out of the box too. 🤓
Please leave a comment if I could have done something differently or if you liked this tutorial. After all, we are all here to learn, right? 👨🏼🎓