Tallan Groberg
Posted on July 30, 2023
Deploying to AWS can be difficult to learn. I hope to make this easy.
Credits: Professor Andrew Awdeorio
What you will get out of this
You will have a very simple web app live on the internet. Click here and here to see the end result.
Possible limitations
To make sure you don't waste your time: SQLite might not be the best option if the application you hope to build on top of this is very data intensive.
While this article sets things up so that image upload functionality is possible, uploads themselves are not covered in this tutorial. You can store references to images very easily with the functionality I use here. Checking out my tutorial on image uploads with Firebase is a great option until I release a tutorial on uploads for this stack.
Important:
This tutorial does not go over best practices for securing your AWS account upon account creation. If you plan on building applications that hold sensitive user data hosted through AWS and not just this tutorial, I recommend learning about how to secure your AWS account before you start building them.
Table of Contents
Part 1: Creating the database
Part 2: Creating the Flask app
Part 3: Creating the React app
Part 4: Setting up an AWS account
Part 5: Deploying to AWS
Introduction
In this tutorial we are going to walk through how to deploy to AWS with a simple SQLite database, how to make a Flask app that interacts with that database to store and retrieve data, and how to make a React app hosted through Flask.
The application itself will be simple, but you will be able to host powerful web applications after completing this tutorial.
Why this is useful
Completing this tutorial is a great introduction to hosting on AWS. Even if this isn't your typical stack, it will show you the concepts that make your next deploy easier.
It's also a perfect introduction to scripting. I use command-line scripting in almost every type of project that I do because it saves me so much time in my workflows.
This setup can also be used to turn Python apps into something that can be shared on the web. That matters for anyone who has a great Python app but doesn't have a way to share it with the world.
What you will need.
A text editor. I will be using VS Code.
A command-line interface. I will be using macOS with zsh. The exact shell shouldn't matter as long as you know (or look up) the corresponding commands for your operating system.
Python 3.10 or greater installed on your host machine.
A GitHub account. We will use a repo to move our code base to an Ubuntu instance on AWS, since that instance type is currently on the free tier.
Prerequisite skills.
- Basic understanding of Python.
- Basic knowledge of Flask.
- Basic understanding of React.
- Good understanding of the Unix command line.
- We will use shell scripting in order to test things. If you understand bash this should be straightforward. We will also use many Linux commands, and I believe I have done a good job of keeping this as beginner friendly as possible.
- Basic understanding of git.
- Basic understanding of emacs is a plus but not necessary.
Disclaimer
While I have put many hours of due diligence into this article, mistakes may still be in here. If you notice anything that isn't right, I am very open to constructive criticism and feedback. I'm also very willing to help anyone who is having trouble completing this tutorial. I want to see you succeed in deploying a web app if you are willing to give your time to learn this, and I take not wasting your time very seriously.
Part 1: Making the database.
We are going to start by making a simple project directory.
Go to the directory where you want this project in the terminal and paste this code snippet.
mkdir webapp && cd webapp
Next, if you are on macOS, install SQLite via Homebrew.
brew install sqlite3
You can make sure you have the right version by running this line in the terminal.
sqlite3 --version
output:
-- Loading resources from /Users/tallan/.sqliterc
3.32.3 2020-06-18 14:16:19 02c34......
Next we are making a very simple database with one table so that we can test if the database works properly.
To do this, we want to make a folder to keep 2 files. A small amount of dummy data and a schema.
From the terminal, inside the project directory, we make a new folder called sql.
mkdir sql && cd sql
From the terminal, now inside the /sql directory, we make 2 files to host each of these.
touch data.sql && touch schema.sql
Now we make a folder for stock images.
I won't provide the images, but I took screenshots from an AI photo generator, added them to the following folder, and named them after the fake people (https://this-person-does-not-exist.com/en) that I'm about to add to the database.
mkdir uploads
When you run this command you should have all the same filenames.
tree uploads
output:
uploads
├── alicewilliams.png
├── bobjohnson.jpeg
├── charliebrown.jpg
├── janesmith.jpg
└── johndoe.png
Let's add the simple table to the schema.sql file.
Even though we don't have any foreign keys, this will be good to remember in case we decide to do anything else with the database after the tutorial.
PRAGMA foreign_keys=ON;
CREATE TABLE developer (
fullname TEXT CHECK(length(fullname) <= 40),
email TEXT CHECK(length(email) <= 40),
picture TEXT CHECK(length(picture) <= 64),
password TEXT CHECK(length(password) <= 256),
created DATETIME DEFAULT CURRENT_TIMESTAMP
);
Add some simple entries to the data.sql file
INSERT INTO developer(fullname, email, picture, password)
VALUES('John Doe', 'john_doe@gmail.com', 'johndoe.png', 'password');
INSERT INTO developer(fullname, email, picture, password)
VALUES('Jane Smith', 'jane_smith@yahoo.com', 'janesmith.jpg', 'password');
INSERT INTO developer(fullname, email, picture, password)
VALUES('Bob Johnson', 'bob_johnson@outlook.com', 'bobjohnson.jpeg', 'password');
INSERT INTO developer(fullname, email, picture, password)
VALUES('Alice Williams', 'alice_williams@hotmail.com', 'alicewilliams.png', 'password');
INSERT INTO developer(fullname, email, picture, password)
VALUES('Charlie Brown', 'charlie_brown@gmail.com', 'charliebrown.jpg', 'password');
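If you are curious what those CHECK constraints actually do, here is a quick standalone Python sketch (not part of the project files) that loads the schema into an in-memory database and shows an over-length fullname being rejected. It assumes you run it from inside the sql folder:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(open("schema.sql").read())  # load the schema shown above
try:
    conn.execute(
        "INSERT INTO developer(fullname, email, picture, password) VALUES (?, ?, ?, ?)",
        ("x" * 50, "too_long@example.com", "x.png", "password"),
    )
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # the 50-character fullname fails the 40-character CHECK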
Now we move on to the bash scripting portion so that we can make setting up, taking down, and resetting the database easy.
First, double-check that your files match. From inside the sql folder (where we still are), run:
tree .
output:
├── data.sql
├── schema.sql
└── uploads
├── alicewilliams.png
├── bobjohnson.jpeg
├── charliebrown.jpg
├── janesmith.jpg
└── johndoe.png
Move back to the project's root directory.
cd ..
Make a new folder for your bash script and the bash script file while staying in this project directory.
mkdir bin && touch bin/db
I'm going to add these in parts, with an explanation of the code, and will post the whole file at the end.
At the very top of the db file, paste this line.
#!/bin/bash
This lets the computer know that this is a bash script, so when you run the file, it will be like running many terminal commands at once.
Below that line we will add strict error handling (set -Eeuo pipefail), which stops the script as soon as a command fails, and command echoing (set -x), which prints each command as it runs so it's easy to spot errors in these commands.
set -Eeuo pipefail
set -x
Let's add a friendly usage message for when we don't pass an argument.
usage() {
echo "Usage: $0 (create|destroy|reset|dump)"
}
if [ $# -ne 1 ]; then
usage
exit 1
fi
The first block defines a usage function that prints the usage text to the terminal.
The second block says that if the number of arguments passed to the script is not exactly 1 (for example, if we just run ./bin/db with nothing after it), call the usage function and exit with status 1. Notice that the script name itself (./bin/db) is not counted as an argument.
We have to run this line to make the db file executable.
chmod +x bin/db
Now you can run this script with this command.
./bin/db
If you don't run chmod +x bin/db before you call this script, you will get this error.
output:
bash: ./bin/db: Permission denied
If you run it after entering chmod +x bin/db
into the terminal then you will get the following output.
output:
+ '[' 0 -ne 1 ']'
+ usage
+ echo 'Usage: ./bin/db (create|destroy|reset|dump)'
Usage: ./bin/db (create|destroy|reset|dump)
+ exit 1
Now we get to the commands themselves.
To create a database, we start an if statement that runs a block of commands based on the argument given when we call the script (we will add elif branches for the other commands next).
The command will be ./bin/db create, and the code that runs when we call it will be:
if [ "$1" = "create" ]; then
mkdir -p var/uploads
sqlite3 var/App.sqlite3 < sql/schema.sql
sqlite3 var/App.sqlite3 < sql/data.sql
cp sql/uploads/* var/uploads/
fi
Then in the terminal run.
./bin/db create
If everything was coded correctly, and you ran chmod +x bin/db and then ./bin/db create, you should have a new folder called var with a file called App.sqlite3 inside it.
To check, run this command in the terminal
tree var
output:
var
├── App.sqlite3
└── uploads
├── alicewilliams.png
├── bobjohnson.jpeg
├── charliebrown.jpg
├── janesmith.jpg
└── johndoe.png
We want to make a command for each functionality stated by the usage message.
To do that we will add elif branches after the create block and before the closing fi.
The rest of the elif statements will look like this.
elif [ "$1" = "destroy" ]; then
rm -rf var
elif [ "$1" = "reset" ]; then
./bin/db destroy
./bin/db create
elif [ "$1" = "dump" ]; then
sqlite3 var/App.sqlite3 .dump
The full db file will look like this.
#!/bin/bash
set -Eeuo pipefail
set -x
usage() {
echo "Usage: $0 (create|destroy|reset|dump)"
}
if [ $# -ne 1 ]; then
usage
exit 1
fi
if [ "$1" = "create" ]; then
mkdir -p var/uploads
sqlite3 var/App.sqlite3 < sql/schema.sql
sqlite3 var/App.sqlite3 < sql/data.sql
cp sql/uploads/* var/uploads/
elif [ "$1" = "destroy" ]; then
rm -rf var
elif [ "$1" = "reset" ]; then
./bin/db destroy
./bin/db create
elif [ "$1" = "dump" ]; then
sqlite3 var/App.sqlite3 .dump
fi
Make sure that when you run the following terminal commands you get the same outputs before moving on to the Flask portion. You will be using these commands often.
From the project directory with no database created
./bin/db create
output:
+ '[' 1 -ne 1 ']'
+ '[' create = create ']'
+ mkdir -p var/uploads
+ sqlite3 var/App.sqlite3
+ sqlite3 var/App.sqlite3
+ cp sql/uploads/alicewilliams.png sql/uploads/bobjohnson.jpeg sql/uploads/charliebrown.jpg sql/uploads/janesmith.jpg sql/uploads/johndoe.png var/uploads/
./bin/db destroy
output:
+ '[' 1 -ne 1 ']'
+ '[' destroy = create ']'
+ '[' destroy = destroy ']'
+ rm -rf var
./bin/db reset
output:
+ '[' 1 -ne 1 ']'
+ '[' reset = create ']'
+ '[' reset = destroy ']'
+ '[' reset = reset ']'
+ ./bin/db destroy
+ '[' 1 -ne 1 ']'
+ '[' destroy = create ']'
+ '[' destroy = destroy ']'
+ rm -rf var
+ ./bin/db create
+ '[' 1 -ne 1 ']'
+ '[' create = create ']'
+ mkdir -p var/uploads
+ sqlite3 var/App.sqlite3
+ sqlite3 var/App.sqlite3
+ cp sql/uploads/alicewilliams.png sql/uploads/bobjohnson.jpeg sql/uploads/charliebrown.jpg sql/uploads/janesmith.jpg sql/uploads/johndoe.png var/uploads/
./bin/db dump
output:
+ '[' 1 -ne 1 ']'
+ '[' dump = create ']'
+ '[' dump = destroy ']'
+ '[' dump = reset ']'
+ '[' dump = dump ']'
+ sqlite3 var/App.sqlite3 .dump
-- Loading resources from /Users/tallan/.sqliterc
PRAGMA foreign_keys=ON;
BEGIN TRANSACTION;
CREATE TABLE developer (
fullname TEXT CHECK(length(fullname) <= 40),
email TEXT CHECK(length(email) <= 40),
picture TEXT CHECK(length(picture) <= 64),
password TEXT CHECK(length(password) <= 256),
created DATETIME DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO developer VALUES('John Doe','john_doe@gmail.com','johndoe.png','password','2023-07-23 22:43:59');
INSERT INTO developer VALUES('Jane Smith','jane_smith@yahoo.com','janesmith.jpg','password','2023-07-23 22:43:59');
INSERT INTO developer VALUES('Bob Johnson','bob_johnson@outlook.com','bobjohnson.jpeg','password','2023-07-23 22:43:59');
INSERT INTO developer VALUES('Alice Williams','alice_williams@hotmail.com','alicewilliams.png','password','2023-07-23 22:43:59');
INSERT INTO developer VALUES('Charlie Brown','charlie_brown@gmail.com','charliebrown.jpg','password','2023-07-23 22:43:59');
COMMIT;
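As one last sanity check before Part 2, you can read the seeded rows back with a few lines of Python. This is just a sketch; run it from the project root after ./bin/db create (or reset):

import sqlite3

conn = sqlite3.connect("var/App.sqlite3")
for fullname, email in conn.execute("SELECT fullname, email FROM developer"):
    print(fullname, email)  # prints the five developers inserted by data.sql
conn.close()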
Part 2: Making the Flask App.
Ensure part 1 works properly before continuing to part 2.
We are going to make a custom Python environment that is specific to this directory.
This means we need a requirements.txt file that defines all the packages.
If you mess up at any point in this section, run this command in the project directory and it will remove the Python env folder that we are about to create.
rm -rf env
To ensure that you have python3 on your machine, run this command.
python3 --version
output:
Python 3.11.4
You may have a different version, but as long as it is 3.10 or greater you don't have to update to succeed in this tutorial.
This also assumes that you are not using a python version from Anaconda.
Another gotcha: when we run this command, the output should be blank.
printenv PYTHONPATH
If you do have output when you run this, you can clear it with the following command, but you will have to do so every time you restart the terminal.
unset PYTHONPATH
If all the above checks passed, we can run this command.
python3 -m venv env
This creates a Python environment local to this directory. You only need to create it once, but you will need to activate it (next command) every time you restart the terminal, regardless of whether you had output on the PYTHONPATH check above. If you don't, the Flask App will fail to start.
To activate the environment we run this command.
source env/bin/activate
You should see (env) at the far left of your terminal prompt.
To double check, run this command and it should output the env directory first.
which -a python
output:
/Users/tallan/Desktop/article-writing/tutorials/flask-app/project/env/bin/python
/usr/local/bin/python
/usr/bin/python
Now we want to install Jinja2, since this is what will let us mount the React app onto the DOM through Flask. It is also very useful for making simple webpages, like forms, in addition to React; when a page is a simple form or needs little to no database interaction, I prefer to use it because it can be quicker to build than a full React component.
./env/bin/pip install jinja2
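If you have never used Jinja before, this tiny standalone sketch (not part of the app) shows the kind of substitution it does. Run it with the environment's Python (./env/bin/python):

import jinja2

template = jinja2.Template("Hello {{ name }}!")
print(template.render(name="Flask"))  # prints: Hello Flask!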
We want to add additional packages to our starter files. To do this we will use a requirements.txt file.
From the terminal, make a requirements.txt file.
touch requirements.txt
Copy and paste the following code snippet into the file.
requirements.txt
arrow==1.2.3
astroid==2.15.0
attrs==22.2.0
beautifulsoup4==4.11.1
bs4==0.0.1
certifi==2022.12.7
charset-normalizer==3.0.1
click==8.1.3
dill==0.3.6
exceptiongroup==1.1.0
Flask==2.2.2
html5validator==0.4.2
idna==3.4
iniconfig==2.0.0
isort==5.12.0
itsdangerous==2.1.2
Jinja2==3.1.2
lazy-object-proxy==1.9.0
MarkupSafe==2.1.2
mccabe==0.7.0
packaging==23.0
platformdirs==2.6.2
pluggy==1.0.0
pycodestyle==2.10.0
pydocstyle==6.3.0
pylint==2.16.2
pytest==7.2.1
python-dateutil==2.8.2
PyYAML==6.0
requests==2.28.2
six==1.16.0
snowballstemmer==2.2.0
soupsieve==2.3.2.post1
tomli==2.0.1
tomlkit==0.11.6
typing_extensions==4.4.0
urllib3==1.26.14
Werkzeug==2.2.2
wrapt==1.14.1
With our env activated and the requirements.txt file containing all the packages above, the following command will install all of those packages into the virtual environment.
pip install -r requirements.txt
We may get some deprecation warnings; that shouldn't be an issue.
Run the env activate command again.
source env/bin/activate
We are ready to make the flask App itself.
Lowercase app has a special meaning in Flask (it's the conventional name for the Flask application object), and I didn't want to call this package tutorial, so I settled for uppercase App. Be mindful that there is a distinction between the two.
We are going to house most of the App in a folder of the same name.
mkdir App
We need an __init__.py file to initialize the package.
touch App/__init__.py
__init__.py
"""App package initializer."""
import flask
# app is a single object used by all the code modules in this package
app = flask.Flask(__name__) # pylint: disable=invalid-name
# Read settings from config module (App/config.py)
app.config.from_object('App.config')
# Overlay settings read from a Python file whose path is set in the environment
# variable APP_SETTINGS. Setting this environment variable is optional.
# Docs: http://flask.pocoo.org/docs/latest/config/
#
# EXAMPLE:
# $ export APP_SETTINGS=secret_key_config.py
app.config.from_envvar('APP_SETTINGS', silent=True)
# Tell our App about views and model. This is dangerously close to a
# circular import, but Flask was designed that way.
# (Reference http://flask.pocoo.org/docs/patterns/packages/)
import App.views
import App.model
Next we make a config.py file for environment variables specific to the flask app.
Inside the App directory make a file called config.py
touch App/config.py
config.py
"""App development configuration."""
import pathlib
# Root of this application, useful if it doesn't occupy an entire domain
APPLICATION_ROOT = '/'
# Secret key for encrypting cookies
SECRET_KEY = b'FIXME SET WITH: $ python3 -c "import os; print(os.urandom(24))"'
SESSION_COOKIE_NAME = 'login'
# File Upload to var/uploads/
APP_ROOT = pathlib.Path(__file__).resolve().parent.parent
UPLOAD_FOLDER = APP_ROOT/'var'/'uploads'
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])
MAX_CONTENT_LENGTH = 16 * 1024 * 1024
# Database file is var/App.sqlite3
DATABASE_FILENAME = APP_ROOT/'var'/'App.sqlite3'
As the FIXME comment suggests, run the following command and replace the SECRET_KEY assignment with its output.
python3 -c "import os; print(os.urandom(24))"
example output:
b'\x9a\xbb4\x86\x1d7\xac\x1ad\x14\xd9:\xcc\xf3\xf4\r\xf3\xd7\xd3cd\xfc$\xae'
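So, for example, the SECRET_KEY line in config.py would end up looking something like this (your random bytes will be different; paste in whatever your own command printed):

SECRET_KEY = b'\x9a\xbb4\x86\x1d7\xac\x1ad\x14\xd9:\xcc\xf3\xf4\r\xf3\xd7\xd3cd\xfc$\xae'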
Now we have to make a static directory to hold CSS and placeholder images. I won't be providing any code for this, but this folder is where you can house those assets in your own project if you use any styling in your React application or the Jinja templates. This is the command.
mkdir App/static
Now let's make a folder for templates inside the App directory as a place to put HTML and Jinja template files.
We will also make an index.html in the same command.
mkdir App/templates && touch App/templates/index.html
We'll add simple HTML to test whether our Flask app is working.
<!DOCTYPE html>
<html lang="en">
Hello world!
</html>
Now we need to make a views package. This is where you will keep the Python functions that are called when a user visits a URL.
mkdir App/views
Next we need to make an __init__.py to access our views functions app-wide, and an index.py file to hold the function specific to the '/' route.
touch App/views/index.py && touch App/views/__init__.py
Inside this index.py we will make our first flask function.
"""
index (main) view.
URLs include:
/
"""
import flask
import App
@App.app.route('/')
def show_index():
"""Display / route."""
context = {}
return flask.render_template("index.html", **context)
Inside the views/__init__.py we add the import.
"""Views, one for each App's page."""
from App.views.index import show_index
We will add a placeholder file, model.py. Even though we won't add any code to it for the moment, eventually this file will house the code for the database connection.
touch App/model.py
We still don't have a way to install App as a package into the virtual environment so that Flask can find and use it. To fix this we will create a pyproject.toml file.
In the project directory add this file.
touch pyproject.toml
Add the following text to the pyproject.toml.
[build-system]
requires = ["setuptools>=64.0.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "App"
version = "0.1.0"
dependencies = [
"arrow",
"bs4",
"Flask",
"requests",
]
requires-python = ">=3.10"
[tool.setuptools]
packages = ["App"]
Double check that you are in your Python virtual environment; now we install App into it.
source env/bin/activate
and then
pip install -r requirements.txt && pip install -e .
You should receive the following output.
output:
a bunch of build info..
...
Successfully built App
Installing collected packages: App
Successfully installed App-0.1.0
If everything was done correctly, you should be able to see Hello world! by running the following command.
flask --app App --debug run --host 0.0.0.0 --port 4000
and going to localhost:4000
Stop the Flask development server by pressing Control + C when you are finished checking that you see Hello world! at localhost.
In preparation to move on to the part where we connect the database to the App, we are going to make a run script so we don't have to type a long command every time we want to start the server.
touch ./bin/run
And add the following code to the script.
#!/bin/bash
set -Eeuo pipefail
set -x
export FLASK_ENV=development
export FLASK_APP=App
export FLASK_DEBUG=1
usage() {
echo "Usage: $0"
}
if [ $# -ne 0 ]; then
usage
exit 1
fi
# If var/App.sqlite3 does not exist, print an error and exit non-zero.
if [ ! -f var/App.sqlite3 ]; then
echo "Error: var/App.sqlite3 does not exist. Run ./bin/db create."
exit 1
fi
flask --app App --debug run --host 0.0.0.0 --port 4000
Enable the script.
chmod +x bin/run
Run the script to ensure no errors were introduced.
./bin/run
You should now be able to visit localhost:4000 the same as before.
Let's start by resetting the database.
./bin/db reset
Ensure that you can interact with the database from the terminal first.
sqlite3 var/App.sqlite3 "SELECT fullname, email FROM developer;"
output:
-- Loading resources from /Users/tallan/.sqliterc
fullname email
---------- ------------------
John Doe john_doe@gmail.com
Jane Smith jane_smith@yahoo.c
Bob Johnso bob_johnson@outloo
Alice Will alice_williams@hot
Charlie Br charlie_brown@gmai
Copy and paste the following code into the blank model.py file.
"""App model (database) API."""
import sqlite3
import flask
import App
def dict_factory(cursor, row):
"""Convert database row objects to a dictionary keyed on column name.
This is useful for building dictionaries which are then used to render a
template. Note that this would be inefficient for large queries.
"""
return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}
def get_db():
"""Open a new database connection."""
if 'sqlite_db' not in flask.g:
db_filename = App.app.config['DATABASE_FILENAME']
flask.g.sqlite_db = sqlite3.connect(str(db_filename))
flask.g.sqlite_db.row_factory = dict_factory
# Foreign keys have to be enabled per-connection.
flask.g.sqlite_db.execute("PRAGMA foreign_keys = ON")
return flask.g.sqlite_db
@App.app.teardown_appcontext
def close_db(error):
"""Close the database at the end of a request."""
sqlite_db = flask.g.pop('sqlite_db', None)
if sqlite_db is not None:
sqlite_db.commit()
sqlite_db.close()
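If you want to try the model out by hand, something like this works from a Python shell. It is just a sketch; run it from the project root with the virtual environment active and the database created, since get_db() needs an application context:

import App

with App.app.app_context():
    rows = App.model.get_db().execute(
        "SELECT fullname, email FROM developer"
    ).fetchall()
print(rows[0])  # a dict thanks to dict_factory, e.g. {'fullname': 'John Doe', 'email': 'john_doe@gmail.com'}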
Before we can display anything to the browser, we have to account for how to handle images.
Let's make a file in the views folder called images.py
touch App/views/images.py
Add the following code to this file.
import flask
from flask import send_from_directory
import App
@App.app.route('/uploads/<path:filename>', methods=['GET'])
def download_file(filename):
"""Download a file."""
return send_from_directory(App.app.config['UPLOAD_FOLDER'],
filename, as_attachment=True)
Now add this function to views/__init__.py so that we can find files in the uploads folder.
App/views/__init__.py
"""Views, one for each app's page."""
from App.views.index import show_index
from App.views.images import download_file
With this in place, Flask can serve the reference images based on the filename strings stored in the database.
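You can verify the route before moving on. With the development server running in another terminal (./bin/run), a quick sketch using the requests package (already in requirements.txt) looks like this:

import requests

resp = requests.get("http://localhost:4000/uploads/johndoe.png")
print(resp.status_code)                  # expect 200
print(resp.headers.get("Content-Type"))  # expect image/png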
Now change views/index.py to query the database for all developers except one.
"""
index (main) view.
URLs include:
/
"""
import flask
import App
@App.app.route('/')
def show_index():
"""Display / route."""
# Connect to database
connection = App.model.get_db()
# Query database
logname = "John Doe"
cur = connection.execute(
"SELECT fullname, email, picture "
"FROM developer "
"WHERE fullname != ?",
(logname, )
)
devs = cur.fetchall()
context = {"devs": devs}
return flask.render_template("index.html", **context)
And change the index.html to display that information with jinja.
<!DOCTYPE html>
<html lang="en">
<h1>developer</h1>
{% for dev in devs %}
<p>{{dev.fullname}}, </p>
<p>{{dev.email}} </p>
<img src="{{ url_for('download_file', filename=dev.picture) }}" alt="image" width="100" height="100">
<br />
{% endfor %}
</html>
If our development server is running (./bin/run), when we visit localhost:4000 we should see the names of the developers from the database, except John Doe.
This completes the Flask portion of this tutorial.
Part 3: Making the React app.
We are going to make our React app in a slightly different way than normal so that instantiating it on AWS is easier. Doing it this way lets us run the front-end and back-end with a single process.
Let's jump in.
Make sure that you have a version of node greater than 19.8.1
node --version
and an npm version greater than 9.5.1. npm should have been installed as part of your node install.
npm --version
Now let's make an App/js folder to keep react components.
mkdir App/js
And an Index.js file to hold our DOM root mount.
touch App/js/Index.js
And add the following code to this file.
import React from "react";
import { createRoot } from "react-dom/client";
import App from "./App";
const root = createRoot(document.getElementById("root"));
root.render(<App />);
And the App component, which will be the starting point of all the React code.
touch App/js/App.js
Add the following code to the App.js file.
import React from "react";
const App = () => {
return (
<div>
Hello React!
</div>
);
};
export default App;
Next we need a package.json to define all of the packages for webpack and React to use.
touch package.json
Add this code to the package.json file.
{
"name": "App",
"version": "1.0.0",
"description": "App front end",
"main": "App/js/Index.jsx",
"author": "awdeorio",
"license": "MIT",
"repository": {},
"devDependencies": {
"@babel/core": ">=7.21.3",
"@babel/plugin-transform-runtime": ">=7.21.0",
"@babel/preset-env": ">=7.20.2",
"@babel/preset-react": ">=7.18.6",
"@babel/runtime": ">=7.21.0",
"@cypress/grep": "^3.1.5",
"@types/react": ">=18.0.28",
"@types/react-dom": ">=18.0.11",
"babel-loader": "^9.1.2",
"eslint": ">=8.36.0",
"start-server-and-test": "^2.0.0",
"tmp": "^0.2.1",
"webpack": ">=5.76.2",
"webpack-cli": ">=5.0.1"
},
"dependencies": {
"latest-version": "^7.0.0",
"moment": ">=2.29.4",
"prop-types": ">=15.8.1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"ts-loader": ">=9.4.2",
"typescript": ">=5.0.2"
},
"engines": {
"node": ">=18.0.0"
}
}
Next, we want to make a package-lock.json from the package.json.
To do this run the following command.
npm i --package-lock-only
Now run this command to make a node_modules folder.
npm ci .
Now we want to make sure that we have webpack with the right version.
npx webpack --version
The output should look something like this, but results may vary slightly.
output:
System:
OS: macOS 11.7.6
CPU: (8) x64 Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
Memory: 276.08 MB / 16.00 GB
Binaries:
Node: 20.3.0 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.6.7 - /usr/local/bin/npm
Browsers:
Chrome: 115.0.5790.102
Safari: 16.4.1
Packages:
babel-loader: ^9.1.2 => 9.1.3
ts-loader: >=9.4.2 => 9.4.4
webpack: >=5.76.2 => 5.88.2
webpack-cli: >=5.0.1 => 5.1.4
We want to create a root.html with a root div for react-dom to mount onto.
touch App/templates/root.html
App/templates/root.html
<!DOCTYPE html>
<html lang="en">
<body>
<!-- Plain old HTML and jinja2 nav bar goes here -->
<div id="root">
Loading ...
</div>
<!-- Load JavaScript -->
<script type="text/javascript" src="{{ url_for('static', filename='js/bundle.js') }}"></script>
</body>
</html>
This tells the page to load bundle.js, which mounts whatever React renders onto the root div.
We want to make a webpack.config.js to keep our build configurations for react.
In the project directory run the following command.
touch webpack.config.js
And add this code to the webpack.config.js.
const path = require("path");
const { existsSync } = require("fs");
// Set the entry point to Index.js by default, but Index.ts if using TypeScript.
let entry = "./App/js/Index.js";
if (existsSync("./App/js/Index.ts")) {
entry = "./App/js/Index.ts";
}
module.exports = {
mode: "development",
entry,
output: {
path: path.join(__dirname, "/App/static/js/"),
filename: "bundle.js",
},
devtool: "source-map",
module: {
rules: [
{
// Test for js or jsx files
test: /\.jsx?$/,
// Exclude external modules from loader tests
exclude: /node_modules/,
loader: "babel-loader",
options: {
presets: ["@babel/preset-env", "@babel/preset-react"],
plugins: ["@babel/transform-runtime"],
},
},
{
// Support for TypeScript in optional .ts or .tsx files
test: /\.tsx?$/,
use: "ts-loader",
exclude: /node_modules/,
},
],
},
resolve: {
extensions: [".js", ".jsx", ".ts", ".tsx"],
},
};
Compile the React code in preparation for a test run.
npx webpack
In the same way that we made an index.py, we want to make a root.py to display the React app.
touch App/views/root.py
Add the following code.
"""
root of react app.
URLs include:
/root/
"""
import flask
import App
@App.app.route('/root/')
def show_root():
"""Display / root for react application."""
# Connect to database
context = {}
return flask.render_template("root.html", **context)
Now add this function to the __init__.py within the views folder. The whole file will look like this.
App/views/__init__.py
"""Views, one for each app's page."""
from App.views.index import show_index
from App.views.root import show_root
from App.views.images import download_file
You should not receive any errors but might get warnings.
Use the script we made earlier to run the flask app.
./bin/run
You should not get an error when you visit localhost:4000/root/.
Also ensure that our Jinja index page at localhost:4000 still shows our developer names.
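If you prefer checking from the command line, the same kind of requests sketch works here too (with ./bin/run active):

import requests

print(requests.get("http://localhost:4000/root/").status_code)  # React page, expect 200
page = requests.get("http://localhost:4000/").text
print("Jane Smith" in page, "John Doe" in page)  # expect True False, since John Doe is the excluded logname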
When you are developing and making changes to the React app and don't want to type npx webpack every time, you can use the --watch flag. This works much like npm start.
After we are finished checking the browser, let's shut down the Flask app by clicking on the terminal and pressing Control + C.
npx webpack --watch
npm packages can be installed as normal with npm i <package-name>.
The only major difference between how you interact with this React app and one made with create-react-app is that the front-end is essentially served through the back-end.
We are now ready to develop a front-end with react and a back-end with flask.
We can also make Jinja templates in this app when the situation fits.
This completes the React portion of this tutorial.
Part 4: Setting up an AWS account.
It's a good idea to check out the best practices for setting up AWS accounts, since it is easy to forget about account security after you have had an account for a while.
Before we can deploy to AWS, we first have to set up an AWS account. If you already have an account set up, sign in to your AWS account and skip ahead to Part 5.
Go to this url to create an account.
They will send a verification code to your email.
After entering the code, you will have to make a password.
Then you will be sent to this page.
You will need to add a credit card to your account even though the setup we are making is free.
Confirm your Identity.
After you confirm your phone number, choose which tier. We will use the free tier. Any tier you use will not affect the rest of this tutorial.
This will redirect you to your AWS sign-in.
Log into your account and you will be directed to this page.
Click on the settings icon in the top left corner of the page.
Now click to launch a new instance.
We are going to use an Ubuntu instance. Click on Ubuntu where you see all the different OS images.
Scroll down to where it says add a key pair (login) and name it.
If you start typing in the text box, a modal will pop up where you can enter the name of the key.
Whether you added the key pair in the modal or typed directly into the textbox does not matter. As long as we choose RSA and .pem, this will be fine.
Click add key pair once you have named it.
This will download an aws-tutorial.pem file to your computer.
Scroll down to network settings.
Click on Allow HTTP traffic. This part is important: without it you can do everything else in this tutorial correctly and still never be able to see the website!
You can also allow HTTPS traffic, but I have not tested this; you will need an SSL certificate too. If someone gets this working with HTTPS traffic allowed, please let us know how in the comments.
Now click launch instance.
Scroll to the bottom and click view all instances.
You will be forwarded to a page with information about all of your instances. When you click the checkbox for the instance you just created, information about that instance will be displayed on the bottom of the page.
When you go to the address, it will try to load for a while, but no website will come up.
In the next section we will add the code from the previous parts of this tutorial so it becomes the website served at that address.
This completes the AWS console setup portion of this tutorial.
Part 5: Deploying to AWS.
This is the most complex portion of the tutorial. Be sure to read carefully, because a small error in this section can cause hours of headache if you don't go step by step. At the end of it you will have a website ready for development, live on the internet, and the ability to deploy other websites the same way.
If you struggled with earlier parts of this tutorial, you can copy my repo to make sure that none of the errors come from previous sections, which makes debugging easier.
I recommend trying to do this on your own first.
The first thing is to make a GitHub repo.
For extra safety, make a .gitignore file and add the .pem file to it.
touch .gitignore
We want to add several folders to the .gitignore, since most of these are boilerplate or generated files rather than files we created.
*.pem
node_modules
env
App.egg-info
__pycache__
This will prevent any private keys from being published to github.
Make the initial commit.
git init && git add . && git commit -m "initialize"
Create a remote origin and push to GitHub (you will have to do this on your own). Leave the repo public for now, at least until you pull it into your Ubuntu instance, which we will do shortly. Then you can make it private if you wish.
If you completed the previous sections, you should have an aws-tutorial.pem file in your Downloads folder.
Ensure you are in your project directory of your local repo, then run the following command.
mv ~/Downloads/aws-tutorial.pem .
You can also drag and drop the file into the project directory. As long as it is in the root of your local repo, you are fine.
Now we set permission 400 on the aws-tutorial.pem file (read-only for the owner) so that we don't modify our key to the AWS instance; ssh also refuses to use keys with looser permissions.
Run this command.
chmod 400 aws-tutorial.pem
We are now going to SSH into our AWS instance, which will take us to an Ubuntu terminal on AWS that we control from our host machine.
To do this you want to run the following command.
Do not include the <>'s in the command but do include the ubuntu@
ssh -i aws-tutorial.pem ubuntu@<your public ipv4 DNS from the AWS cosole>
We can copy and paste the public IPv4 DNS from our AWS console like so.
ssh -i aws-tutorial.pem ubuntu@<paste what you just copied here>
Type yes and enter to finish connecting.
You should now have a working SSH connection to your Ubuntu instance. It should look like this.
If it didn't work, make sure that the aws-tutorial.pem file is in the project directory, that you ran chmod 400 aws-tutorial.pem, and that the command ssh -i aws-tutorial.pem ubuntu@<public IPv4 DNS> is correct and without the angle brackets. Note that every time you shut down and restart your instance you will have to re-copy the public IPv4 DNS.
Since we are now working with a Linux command line, the commands will be a bit different.
Now we want to install nginx, which gives our Ubuntu server a way to host a website.
sudo apt-get update
sudo apt-get install nginx
Type Y to confirm the nginx install.
When you visit the site, remember to make sure the address starts with http:// and not https://.
When you do, you should see this page now.
We are going to have to change some files to get a configuration for our app to work correctly with nginx.
To accomplish this we will use emacs to alter files.
sudo apt-get install emacs-nox
Accept this install like last time.
Also run this command.
export EDITOR=emacs
This command will give no output.
Even with emacs installed, we will use nano for the edits below.
Check that we have it.
which nano
output:
/usr/bin/nano
Now we will rewrite the nginx.conf file to define how nginx will interact with our web app.
sudo nano /etc/nginx/nginx.conf
For reference, when I write control + k, the plus indicates that you press the control and k keys at the same time.
Hold control + k to delete all the lines in the file, line by line.
After the file is completely blank, paste in the following text using your terminal's paste shortcut (Command + V on macOS).
# Run nginx worker processes using user www-data, which should have been created after installing Nginx.
user www-data;
# Start as many workers as there are CPU cores.
worker_processes auto;
# Configure connection processing.
events {
# Maximum number of simultaneous connections per worker process.
worker_connections 1000;
}
# Configure the HTTP server.
http {
# Directly copy data between file descriptors instead of storing it in a buffer.
sendfile on;
# Send the response headers and the beginning of a file in a single packet.
tcp_nopush on;
# Map file extensions to MIME types.
include /etc/nginx/mime.types;
# Specify where logs should be written.
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Compress HTML data before sending it in responses.
gzip on;
# Include site-specific config files.
include /etc/nginx/conf.d/*.conf;
}
Then press control + x, then y to save the file, and then enter.
This will bring you back to the Ubuntu terminal with the green prompt.
Now we want to add special configuration details specific to our App itself.
This is a similar process to what we just did, but this time we are creating a new file.
Type this command into the terminal.
sudo nano /etc/nginx/conf.d/App.conf
Copy and paste this content into the file.
# Configure a virtual server. We only need one, because this machine will only host a single website.
server {
# Configure all requests to the path /uploads.
location /uploads {
# Send a subrequest to /accounts/auth/. If the response is 200, proceed; if it's 403, don't
# serve content, and just return a 403 status.
#auth_request /accounts/auth/;
# Serve the requested file from /var/www/uploads/<filename>.
root /var/www;
}
# Configure all requests to the specific path /accounts/auth/.
location = /accounts/auth {
# Forward the request to the Flask app running at localhost:8000.
proxy_pass http://localhost:8000;
# Don't include the body of the request if any in the proxied request.
proxy_pass_request_body off;
# Set some headers that Nginx wants us to use for authentication subrequests.
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
}
# Configure all other requests to the server besides the ones that match above.
location / {
# Forward the request to http://localhost:8000 and return its response to the client.
proxy_pass http://localhost:8000;
# Make sure the proxied request's Host header is set to what the client intended.
proxy_set_header Host $host;
# Add a header to the proxied request indicating whether the protocol is HTTP or HTTPS.
proxy_set_header X-Forwarded-Proto $scheme;
# Add a header to the proxied request specifying the IP address of the original client.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Some of the configuration here assumes the app will eventually be much more heavy duty than what is built in this tutorial. Try uncommenting the auth_request line when you implement auth.
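For context, if you do implement auth later and uncomment that auth_request line, Flask would need a matching view that returns 200 or 403. This is a purely hypothetical sketch, not something to add now; the route, session check, and login flow are not built anywhere in this tutorial:

# Hypothetical App/views/auth.py -- not part of this tutorial's code.
import flask
import App

@App.app.route('/accounts/auth/')
def auth():
    """Tell nginx whether the current session is logged in."""
    if 'username' in flask.session:
        return '', 200  # nginx serves the protected file
    flask.abort(403)    # nginx returns 403 to the client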
Press control + x, then y, then enter.
Now we want to restart the nginx server.
sudo systemctl restart nginx
Install Python, the venv module, and SQLite.
sudo apt-get install python3 python3-venv sqlite3
Now clone your remote repository. Alternatively you can use my repo
git clone https://github.com/TallanGroberg/aws-deploy-tutorial.git
Now we set up a virtual environment for the project on the instance. First, check the Python version.
python3 --version
output:
Python 3.10.6
It's fine if this is different from your local machine, but it still has to be 3.10 or greater.
cd into the cloned folder (the name will match your repo; for my repo it is aws-deploy-tutorial).
cd aws-deploy-tutorial
We have one more file to change. Since the uploads live in a different location in production, we have to alter our App's config.py to reference /var/www/uploads instead of var/uploads.
nano App/config.py
You can move to the line you need with the arrow keys, and you should be able to erase and type like in more common text editors.
We should change UPLOAD_FOLDER to be this now.
UPLOAD_FOLDER = pathlib.Path('/var/www/uploads')
Then press control + x, then y, and finally enter.
Create the virtual environment.
python3 -m venv env
And activate.
source env/bin/activate
(env) should appear at the left side of the green prompt.
Run each of these commands one by one.
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
pip install -e .
This next command installs gunicorn, which lets us run a local server like in development, but suitable for production.
pip install gunicorn
Now we initialize the database.
./bin/db create
Copy the uploads folder to /var/www, where nginx can use it.
sudo cp -r var/uploads /var/www
Give the ubuntu user ownership of this new directory.
sudo chown ubuntu:www-data /var/www/uploads
Now we install the React-related packages, starting with node and npm.
sudo apt-get install nodejs npm
You will notice that this does not give us the same node version as on our local machine.
When you see pink screens during the package downloads, just press enter to get through them. They didn't cause me any problems when I picked the default every time.
node --version
output:
v12.22.9
Now we want to install everything from our package.json
npm install .
Just to be sure, check that your package-lock.json is there.
npm i --package-lock-only
While this download is happening, we can see that we get a lot of versioning warnings. This will be addressed by installing nvm, the node version manager; you can check out its GitHub repo (nvm).
Run an update for Ubuntu.
sudo apt update
Install the version manager directly from its GitHub repo.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
Now we have to source our shell configuration so that the nvm command is available.
source ~/.bashrc
This deactivates our Python environment, so we have to reactivate it and reinstall the package.
source env/bin/activate
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
pip install -e .
Now we just install the node version that we had our React app running on in development.
nvm install v20.3.0
Make sure that the node version updated.
node -v
output:
v20.3.0
This should ensure that webpack runs without any errors.
npx webpack
You should see "compiled successfully" in the last line of the output.
We want to install javascript-obfuscator to make our JavaScript harder for attackers to read.
npm install javascript-obfuscator
Now we run the next few commands to compile, obfuscate and replace our original build.
npx webpack
npx javascript-obfuscator App/static/js/bundle.js --reserved-strings '\s*'
Press y to proceed.
mv App/static/js/bundle-obfuscated.js App/static/js/bundle.js
Now we are going to get the server running.
Before we try to run it, make sure that no other gunicorn processes are running.
pkill -f gunicorn
When you run the next command, there should be no output.
pgrep -af gunicorn
Start the server in the background with this command.
gunicorn -b localhost:8000 -w 2 -D App:app
Now when you visit your AWS site's public address, you should be able to see the app running.
If it's not, we can troubleshoot by restarting gunicorn in the foreground with debug-level logging.
First:
pkill -f gunicorn
Then:
gunicorn -b localhost:8000 -w 2 App:app --log-level debug
If you get an error about missing Python packages, redo the virtual environment installation. You can do the same for the React/node packages, and that should do the trick.
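Another quick way to narrow things down is to test from inside the instance itself. A small sketch using requests (already installed in the virtual environment) checks both gunicorn directly and nginx's proxy:

import requests

print(requests.get("http://localhost:8000/").status_code)  # Flask through gunicorn, expect 200
print(requests.get("http://localhost/").status_code)       # the same app through nginx on port 80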
Conclusion
I have not tried to develop anything beyond a single-page React application with this configuration. I encourage comments about any React packages that will not work with this configuration, or from anyone who achieves the same functionality with create-react-app.
There is a way to write a script so that this deploy takes far fewer commands in the terminal. A great exercise would be to write that script on your own.
Let me know what I could add to this article in the comments.