TensorFlow with Interactive Example
Benjamin Blouin
Posted on February 1, 2021
I taught myself TensorFlow and used Jupyter Notebooks for part of my Capstone project in Electrical and Computer Engineering, training a model that decides whether an image contains fire and/or smoke. I've included a link to the Binder notebook, where you can run each cell and play around to see what happens.
Every cell can be run in the Notebook, so you don't even need your own computer to do the model training. I think this means you should be able to train from any browser.
I will try to explain as best I can what is happening.
Modules
The line with the percent prefix is a magic command that helps display the plots used later.
The imports should be self explanatory.
Setting TensorFlow's logging level reduces how much it logs, which speeds up using the API.
The last variable is used later.
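Here's a minimal sketch of that cell; the exact module list and variable names are my reconstruction, not the notebook verbatim.

```python
# Sketch of the imports cell; module list and variable names are assumed.
%matplotlib inline

import os

# Quiet TensorFlow's C++ logging before the import so it takes effect.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# The last variable: the folder the dataset is unpacked into, used later.
data_dir = 'dataset'
```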
Download Dataset
This dataset is what I used for my Capstone, downloaded to the local notebook.
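Something like the following fetches the archive into the notebook's environment; the URL below is a placeholder, since the post doesn't reproduce the real one.

```python
# Placeholder URL: the actual dataset address isn't given in this post.
archive_path = tf.keras.utils.get_file(
    'fire_dataset.tar.gz',
    origin='https://example.com/fire_dataset.tar.gz',
    extract=False)
```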
Unpack Dataset
We import the modules needed to unpack the dataset and try to make a folder for decompression. This could probably be more efficient, but I didn't actually need this step for the Capstone project.
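One way that cell might look, assuming the archive is a gzipped tarball:

```python
import os
import tarfile

# Try to make a folder for decompression, then extract into it.
os.makedirs(data_dir, exist_ok=True)
with tarfile.open(archive_path) as tar:
    tar.extractall(path=data_dir)
```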
Create Dataset Objects
First, let's set some variables; no magic numbers!
We unpacked the dataset; now we have to turn it into something TensorFlow can use. The API has a very nice function that can create a dataset from a folder. The data directory, in this case, has two sub-folders: 'fire_smoke' and 'no-fire'. These are the classes, or categories.
In this function's arguments we must give (see the sketch after this list):
- The data directory path, we made this in cell 1.
- The subset: 'training' and 'validation' are our only choices. This works because we've set up the dataset directory in an orderly manner.
- The validation split is the fraction of each directory's images held out for validation, with the rest used for training. This might be confusing: the 0.21 passed to the training call means 79% of the images go to training, while the same 0.21 in the validation call gives the held-out 21%. Using the same value in both calls keeps the split consistent.
- The seed feeds the pseudorandom shuffle used to pick the images. Using the same seed in both calls keeps the splits aligned, so the validation and training subsets don't overlap.
- image_size resizes the images to standardize the matrix sizes: 160px × 120px. We cannot do the matrix multiplication if the sizes differ, so we resize everything.
- color_mode converts the images to grayscale, which makes sure the matrix for each image is (160, 120, 1). Now all our pictures are exactly the same size, and grayscale.
- batch_size is a little arbitrary; the value is chosen so the computer running the training can actually finish. If it's too big, you run out of memory.
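Putting those arguments together, the two calls look roughly like this; the function is tf.keras.preprocessing.image_dataset_from_directory, and the variable names and seed value are mine:

```python
batch_size = 32        # assumed value; pick what fits in memory
img_height, img_width = 160, 120
val_split = 0.21
seed = 42              # assumed seed value

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=val_split,   # hold out 21%, train on the other 79%
    subset='training',
    seed=seed,
    image_size=(img_height, img_width),
    color_mode='grayscale',
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=val_split,   # the held-out 21%
    subset='validation',
    seed=seed,
    image_size=(img_height, img_width),
    color_mode='grayscale',
    batch_size=batch_size)
```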
Now we save the class names (fire_smoke, no-fire) into a variable and plot a few example images from the dataset; the plotting is not unlike how MATLAB works. Finally, AUTOTUNE lets TensorFlow tune the dataset pipeline so it runs better on this computer.
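In code, that part is roughly the following; the subplot layout is a sketch, not the exact cell:

```python
# Save the class names inferred from the sub-folder names.
class_names = train_ds.class_names
print(class_names)  # ['fire_smoke', 'no-fire']

# Plot a few examples, MATLAB-style subplots.
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
    for i in range(9):
        plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().squeeze(), cmap='gray')
        plt.title(class_names[labels[i]])
        plt.axis('off')

# Let TensorFlow tune the prefetch buffer for this machine.
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```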
Create Model
We only have two choices, either fire_smoke or no-fire; therefore, we have 2 classes.
The sequential model means that each layer has exactly one input and one output, stacked in order.
The first layer normalizes the input to values between 0 and 1. These numbers are easier for TensorFlow to work with.
Dense layers
Every value in the domain is connected to every value in the range. Here the layer has 64 units, giving the next layer a greater number of learned features to draw from.
Conv2D Layers
Extracts features from the image, or parts of the image, by doing convolution. Explaining convolution is beyond the scope of this article.
Dropout
This throws away a random fraction of the learned values during training, which helps make sure our model is actually learning rather than just getting better at this particular dataset. We want models to be generalized so they can be used in many different scenarios.
MaxPooling2D
This is a different way to avoid overfitting: instead of randomly throwing learned values away, it reduces each region of the feature map to its maximum value.
Flatten
Turns the learned values into a vector, so the following math becomes matrix multiplication instead of some harder operation.
Output
The output layer is our last layer, which has the same number of neurons as classes. This is where a decision is made, fire_smoke or no-fire.
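Here's a sketch assembling the layers described above; the exact filter counts, dropout rate, and layer ordering in my notebook may differ a bit.

```python
num_classes = 2

model = tf.keras.Sequential([
    # Normalize pixel values from [0, 255] down to [0, 1].
    tf.keras.layers.experimental.preprocessing.Rescaling(
        1. / 255, input_shape=(160, 120, 1)),
    # Convolution layers extract features; pooling shrinks the maps.
    tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    # Randomly drop 20% of values to discourage memorization.
    tf.keras.layers.Dropout(0.2),
    # Flatten the feature maps into a vector for the dense layers.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    # Output layer: one neuron per class.
    tf.keras.layers.Dense(num_classes),
])
```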
The next set of settings requires a delve into gradient-descent optimizers. Basically: are my guesses getting better? If yes, save these values for my next guesses; if not, go a different way. The learning rate is how far from my current guess the next guess will be. Adam is a gradient-descent optimizer that includes momentum.
The loss variable tells the model what to minimize during training, which is what drives the guesses to get more accurate. We use the metrics variable to record the actual values of our learning progress as we go through each round of learning.
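In code, with an assumed loss: sparse categorical cross-entropy fits integer labels and a logit output layer, but the notebook may use a different one.

```python
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    # Assumed loss choice for integer labels and raw logits.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
```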
Train Model
This is where the training actually starts. Epochs is the number of passes over the dataset, each one starting from the newly learned variables. The other arguments are self-explanatory, and we want to see the results in real time, so we set verbose. The summary call gives you a better idea of the shapes and sizes of each layer we made earlier.
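A sketch of the training cell; the epoch count here is an assumption, since the post doesn't state it.

```python
epochs = 10  # assumed; tune to taste

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    verbose=1)  # print progress in real time

model.summary()  # shapes and sizes of each layer
```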
Plot Model
The next cell is really more about plotting the results, so I'll include the most recent run, which I ran while writing this post.
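A sketch of plotting the accuracy curves from the history object returned by fit; the notebook's actual plot may be styled differently.

```python
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs_range = range(epochs)

plt.figure()
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```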
Success! Our model got better each epoch, the improvement is fairly linear, and it gets really good at guessing: over 90%.
Prediction
I won't explain this cell in detail; the output is self-explanatory. We asked the model to make a prediction on an image that isn't part of the dataset, and it guesses correctly, at least on these two images.
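For completeness, a sketch of what such a cell can look like; the filename is a placeholder, and it builds on the variables from the earlier cells.

```python
# Placeholder filename: point this at any image outside the dataset.
img = tf.keras.preprocessing.image.load_img(
    'new_image.jpg', color_mode='grayscale', target_size=(160, 120))
img_array = tf.keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # make a batch of one

predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])  # logits -> probabilities
print('Most likely {} ({:.1f}% confidence)'.format(
    class_names[np.argmax(score)], 100 * np.max(score)))
```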
Hope this helps.