Authorized Resources and Database Migrations with Strongloop's Loopback
Cole Morrison
Posted on February 3, 2017
This post is going to cover the following:
- Setting up a Strongloop Loopback and MySQL local environment with Docker
- Hooking up our environment with docker-compose
- Scaffolding out some base models
- Automating Database Migrations and Updates via Loopback
- Protecting REST endpoints with Authorization and Authentication
The main focus will be on Database Migrations / Updates and Authentication/Authorization. There's a hefty chunk in here regarding creating a standalone MySQL image that won't clobber existing versions on our local machine. The reason I felt it necessary to include the first few parts is that I personally can't stand it when a guide/tip/tutorial just starts in and assumes everything is already set up.
If you're just here to learn about Database Migrations, you can skip to that part of the guide. The scripts to do so are reusable, just swap out your models for the ones within.
The code for this repository can be found here:
https://github.com/jcolemorrison/strongloop-automigration-demo
Table of Contents
- Preface
- Setting Up The Development Environment
- Setting up a Stand Alone MySQL DB
- Scaffolding Out Our Models
- Automated Database Migrations and Updates
- Authenticating and Authorizing Resources
- Final Thoughts
Preface
Yes. Strongloop's Loopback. That's right. And yeah, I actually like it. After doing many, many projects in plain ExpressJS, it is massively refreshing to not have to:
a) dig through the npm package soup kitchen
b) identify packages that are well maintained
c) wire packages together into my own homemade soup
d) maintain / customize packages
e) reinvent the wheel
Does Strongloop Loopback solve everything? I don't know why I even asked that, because we all know the answer. No. Nothing does. However, spinning up solid REST APIs, dealing with authentication/authorization, having MULTIPLE datasources (one model to mongo, one to sql), routing, docs...
...all the little things that are no-brainers and yet simultaneously timesinks.
I'd say there are only two reasons it's not more ubiquitous:
1) Pretty Terrible Documentation
2) Geared Towards Creating APIs, not necessarily with Front Ends
3) Terrible Documentation
The first one is a usual suspect for most frameworks and is generally the bane of most great dev tools out there. It's like some teams don't want us to use their stuff... or they're hiding something...
The second always seems to be an issue with selection. Most developers want all-in-one frameworks to handle front-end, back-end, heroku deploy and free money. I personally love that it's specialized in APIs and view it as a benefit vs. an issue. It also makes it a much easier player in service-style architecture conversations.
And third. Terrible Documentation. I'm serious: if a developer releases a framework, but no one knows what it does, did a developer release a framework?
This may raise the question of: "Well, you seem to like it enough." And I do, because the pain of digging through git issues, learning via experience and peeling through their docs is less than the pain of configuring a full Express application for an API.
Additionally, once the basic concepts are understood, it's very productive.
That was all an aside, but is here for everyone who may or may not lose their minds at the thought of using something other than Express. Oh by the way, Strongloop is the organization that maintains Express. IBM owns Strongloop. Therefore it's a pretty safe bet that Strongloop Loopback isn't going anywhere.
Enough of that, let's dig in.
Setting Up The Development Environment
We'll do this real quick with Docker (if you've read any of my other posts, I tend to use it. A lot.). Make sure you have it installed and that you also have an account at https://hub.docker.com/ (and make sure to `docker login` on the command line with those credentials).
Get started with it here: https://www.docker.com/products/docker
While it's perfectly fine to just use a local version of Strongloop and MySQL, I'm segmenting it out in this tutorial so that it's completely separate and won't affect our other installations.
1) Create a `code` directory and navigate to it in your command line
$ mkdir code && cd code
Probably didn't need to mention how to do that.
2) Create a folder inside of `code` called `dev-images` and another within that called `strongloop`
$ mkdir -p dev-images/strongloop
We'll house the `Dockerfile` that will build out our development Docker image here.
If you're unfamiliar, this will allow us to run our code within a segmented box (docker container) without having to install any of the dependencies directly.
3) Create the Dockerfile inside of `code/dev-images/strongloop`
If we're in `code`:
$ touch dev-images/strongloop/Dockerfile
and open it in our text editor.
4) Input the following:
```dockerfile
FROM node:6.9.4

# Yarn please
RUN curl -o- -L https://yarnpkg.com/install.sh | bash
ENV PATH="/root/.yarn/bin:${PATH}"

# Installs these globally WITHIN the container, not our local machine
RUN yarn global add loopback-cli && yarn global add nodemon

# Any commands start from this directory IN the container
WORKDIR /usr/src/api
```
This allows us to use Strongloop's CLI, Yarn and Nodemon. A couple of notes:
a) Yarn instead of NPM every time (speed, performance, fewer dupes, yarn.lock for consistency)
b) `loopback-cli` is the "new" CLI for Strongloop. It's what Strongloop would like everyone to move to vs. `strongloop` and `slc`.
5) Build the Docker Image
In the `code` directory, build the image:
$ docker build -t <yourusername>/strongloop-dev dev-images/strongloop/
Where `<yourusername>` is your username.
If you've used any of these intermediary images/layers before, you can pass the `--no-cache=true` flag to make sure it installs and executes fresh.
6) Create the `docker-compose` file
In the `code` directory create a `docker-compose.yml` file. This will be the convenience file for us to up our MySQL database and Strongloop container simultaneously, watch their logs and manage / run commands.
$ touch docker-compose.yml
Inside of the `docker-compose.yml` file, input the following:
```yaml
# The standard now
version: '2'

# All of the images/containers compose will deal with
services:
  # our strongloop service shall be known as 'api'
  api:
    # use your user name
    image: <yourusername>/strongloop-dev
    # map the container's port of 3000 to our local 3002
    ports:
      - 3002:3000
    # mount our current directory (code) to the container's /usr/src/api
    volumes:
      - .:/usr/src/api
    # the default command unless we pass it one
    command: nodemon .
```
The only thing to note that's not in the comments is probably our choice to use port `3002` instead of `3000`. `3000` is just fine, however whenever I'm developing an API, there's generally another container up somewhere that also wants port `3000`. Obviously we can't map both to the same port.
The `command` is what will be run, unless we specify otherwise. The default will be to start the application using Nodemon, so that if we make changes to the files, we don't have to restart the app manually.
make sure to switch out `<yourusername>` with your username
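As an optional sanity check, we can have compose validate and print the resolved file before using it:
```
$ docker-compose config
```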
7) Scaffold out the Strongloop Application
From our `code` directory we can now begin using `docker-compose` to manage our commands. Run the following:
$ docker-compose run api lb
This will begin the application scaffolding. Use the following settings:
What's the name of your application? Press enter to use the current directory name
Which version of LoopBack would you like to use? Use 3.x
What kind of application do you have in mind? api-server
Now it will scaffold out the application and install dependencies. It'll use NPM, but we'll yarn-ify that as soon as it's done.
8) Once the NPM install is done...
run:
$ docker-compose run api yarn
This will link dependencies, create a yarn.lock file and much more. This will create consistency in dependencies' dependencies across development environments. What I mean by that is, if someone on another machine `yarn`'s this project, they'll definitely get all the correct versions of all the packages every single time. It won't accidentally upgrade one or anything like that.
Also, if you're tired of typing `docker-compose` 100 times, just open up your `.bashrc` and input the following:
alias dco="docker-compose"
alias dcor="docker-compose run"
And then in your current terminal session run
$ source ~/.bashrc
Now we'd be able to run yarn like so:
$ dcor api yarn
note: you only need to source your current terminal window, any new session from this point on will include those aliases
9) Test out your new loopback app
In our `code` directory, run
$ docker-compose up
And after it's all set up, navigate to localhost:3002/explorer to see your shiny new api.
note: even though the container will say it's on localhost:3000, that's not where it is on our local machine. Remember, we mapped 3000 -> 3002.
If you're interested in learning more about docker, I have an entire guide dedicated to setting up an environment on AWS:
Guide to Fault Tolerant and Load Balanced AWS Docker Deployment on ECS
Setting up a Stand Alone MySQL DB
Now we need to set up the MySQL docker image, container and compose service. Honestly, this is a pretty useful pattern for any area of development where you need a local database. It will allow you to safely configure a variety of versions of MySQL without fear of clobbering any MySQL setups you may or may not have locally.
In order to be able to pull down the official `mysql` image, as stated at the beginning, you'll need an account for https://hub.docker.com/. With that created you'll then need to run:
$ docker login
And use your hub account credentials.
10) Open up our `docker-compose.yml` file and modify it to reflect the following:
```yaml
# The standard now
version: '2'

# All of the images/containers compose will deal with
services:
  # our strongloop service shall be known as 'api'
  api:
    # use your user name
    image: jcolemorrison/strongloop-dev
    # map the container's port of 3000 to our local 3002
    ports:
      - 3002:3000
    # mount our current directory (code) to the container's /usr/src/api
    volumes:
      - .:/usr/src/api
    # the default command unless we pass it one
    command: nodemon .

  # ADD HERE. This is what our MySQL service shall be known as
  mysqlDb:
    # This is the official MySQL 5.6 docker image
    image: mysql:5.6
    # These are required variables for the official MySQL image
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT}"
      MYSQL_DATABASE: "${DB_NAME}"
      MYSQL_USER: "${DB_USER}"
      MYSQL_PASSWORD: "${DB_PWD}"
    # Keep it mapped to the usual MySQL port
    ports:
      - 3306:3306
    # Create a separate volume on our machine to map to the container's default mysql data directory
    volumes:
      - strongloopDev:/var/lib/mysql

# These must be declared to be used above
volumes:
  strongloopDev:
```
There are 3 major differences here from the previous service (`api`) that we defined:
a) We're using an `environment` field. It declares values that are required by the MySQL image if we want the database to go up and work without a ton of extra work. You can read more about the official MySQL image here.
MYSQL_ROOT_PASSWORD: Password to our `root` user
MYSQL_DATABASE: Our DB name
MYSQL_USER: Our `user` that's not `root`
MYSQL_PASSWORD: Our `user` password
Where do we get the interpolated values in the actual file though? docker-compose will look for a `.env` file in the same directory and make those values available inside of the file. We'll make that next.
b) We're creating and mapping a volume called `strongloopDev` to our container's mysql data directory. This is exactly like what we did above with mounting our current directory to the container's. However, instead of the current directory, Docker has an area on our machine in which it will create and mount a directory for us.
Just think: when we define a volume like so, docker creates a folder (`strongloopDev`) on our machine where its files are located. It mounts that folder to the path we hand it, which in our case was `/var/lib/mysql`.
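If you're curious where that volume actually lives on disk, you can list and inspect it (compose prefixes the volume name with the project/directory name, so yours will likely be `code_strongloopDev`):
```
$ docker volume ls
$ docker volume inspect code_strongloopDev
```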
Before we make our `.env` file, why MySQL 5.6? Simple: because in production I use Amazon Aurora DB, which is drop-in compatible with 5.6.
11) In the `code` directory create a new file `.env` and input the following:
```
DB_NAME=strongdevdb
DB_USER=strongdevuser
DB_PWD=strongdevpwd
DB_ROOT=strongroot
```
Great, now those values in our `docker-compose` file will fill in correctly.
12) In our `code` directory, run the following to up the api server and mysql service:
$ docker-compose up
we can also run `docker-compose up -d` to have the services start in the background, and then `docker-compose logs -f` to view the logs
Let's confirm that our MySQL db is indeed alive. Run the following in another tab (in the same `code` directory of course):
$ docker-compose run mysqlDb mysql -h <yourlocalip> -P 3306 -u strongdevuser -p
Where `<yourlocalip>` is the IPv4 address (e.g. 10.0.0.100) on your local network. To find it, run:
ifconfig | grep 'inet '
and it should be the second of the two addresses.
After running the mysql command, we'll be prompted for the password to our `strongdevuser`, which is `strongdevpwd`.
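As an aside, if the services are already up, we can skip finding our local IP entirely and exec straight into the running MySQL container instead:
```
$ docker-compose exec mysqlDb mysql -u strongdevuser -p
```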
Once inside run:
show databases;
And we'll see our DB has been created. Then run:
use strongdevdb;
13) Install the `loopback-connector-mysql` package
In our `code` directory, run the following (either in yet another new tab, or you can stop our service or the mysql db tab and run it there):
$ docker-compose run api yarn add loopback-connector-mysql
This package allows us to hook up our loopback application to MySQL.
Once it's completed installing, open up `server/datasources.json` in our text editor. Modify it to reflect the following:
```json
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "mysql": {
    "name": "mysql",
    "connector": "mysql",
    "database": "strongdevdb",
    "password": "strongdevpwd",
    "user": "strongdevuser",
    "port": 3306,
    "host": "mysqlDb"
  }
}
```
The top-level key `mysql` is just a reference for loopback (as is its `name` property). All but the `host` property should be pretty self-explanatory. Generally, if this were a local db, we'd input something like `localhost` or a specific IP. But since these are docker containers, we get to reference them by their service name! When `docker-compose` ups our containers together, it makes each service reachable by the others, with its service name as the hostname.
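As a hedged aside: since we already have a `.env` file holding these credentials, we could avoid hardcoding them in `datasources.json`. Loopback merges a `server/datasources.local.js` file (if present) over the json config, so a sketch like the following would work - assuming the same `DB_*` variable names, and assuming we also pass them into the api service (e.g. via an `env_file: .env` entry in `docker-compose.yml`):
```js
// server/datasources.local.js
// Sketch: pull MySQL credentials from the environment instead of
// committing them. Assumes the api container receives the DB_* vars.
module.exports = {
  mysql: {
    name: 'mysql',
    connector: 'mysql',
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PWD,
    host: 'mysqlDb',
    port: 3306
  }
}
```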
Excellent, now our MySQL and Loopback service are ready to work together.
Scaffolding Out Our Models
Now we're going to create two models. One will be our own type of user called `Client`, and the other will be a luxurious, exotic type called `Widget`. We'll be using these to demonstrate DB Migration, Authentication and Authorization.
Let's begin with the client.
14) In the `code` directory, run the following:
$ docker-compose run api lb model Client
(seriously, if you work with docker a lot, use those aliases I mentioned)
This will begin the model scaffolder. Use the following settings:
Enter the model name: press enter here to use Client
Select the data-source to attach Client to: Use mysql
Select model's base class: Scroll down and select User
Expose Client via the REST API? press `y` and enter
Custom plural form (used to build REST URL) just press enter, it will default to clients
Common model or server only? use server
After that, press enter again on properties. We don't want to add any extras. We'll get all of the properties that the built in loopback user gets.
So real quick aside. Why are we making a brand new User? Because in Strongloop's infinite wisdom they decided two things:
a) The built in user shall be called User
b) The only way to extend its functionality is to extend it with your own model
This is probably one of the most annoying things, and yet so small. They could've easily called it `BaseUser` so that we could call ours `User`. Support the change here: https://github.com/strongloop/loopback/issues/3028
15) Create the `Widget` model by running the following:
$ docker-compose run api lb model Widget
Just like before, we'll walk through this process and create some settings.
Enter the model name: press enter here to use Widget
Select the data-source to attach Widget to: Use mysql
Select model's base class: Scroll down and select PersistedModel
Expose Widget via the REST API? press `y` and enter
Custom plural form (used to build REST URL): just press enter, it will default to widgets
Common model or server only? use server
For Properties, for the first one:
Property Name: name
Property Type: string
Required: n
Default Value: leave blank for none
For the second:
Property Name: description
Property Type: string
Required: n
Default Value: leave blank for none
After those two, just press enter again on the third property with nothing entered and it will exit you out.
16) Relate the `Widget` and `Client` via a `hasMany` relation:
This is an awesome, and very Rails-y, feature. We can easily associate models and automatically have the associated REST endpoints created. In our case here, we're going to make it so that a `Client` `hasMany` `Widget`s via the endpoint:
/clients/:id/widgets
Which again, while pretty "straightforward", would be a file-scaffolding timesink in raw ExpressJS. Let's do this by running:
$ docker-compose run api lb relation
Use the following settings:
Select the model to create the relationship from: select Client
Relation type: select hasMany
Choose a model to create a relationship with select Widget
Enter the property name for the relation: press enter to accept widgets
Optionally enter a custom foreign key: press enter and it will default to clientId
Require a through model? type `n` and press enter
and our relation is created.
We can view this in our code by navigating to `server/models/client.json`, and we'll see the relation and all of our properties have been scaffolded out.
That's also the really neat thing with loopback. We define our models by simply creating a json file. All the scaffolding tool did was create this and the accompanying `.js` file.
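For reference, the accompanying `.js` file starts out as little more than a stub - it's the extension point where we'd later attach remote methods or hooks. Freshly scaffolded, it looks roughly like:
```js
// server/models/client.js
'use strict'

// empty extension point generated by the scaffolder
module.exports = function (Client) {
}
```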
It also adds the new models to our `server/model-config.json` file, which is basically the master config file for all loopback models. Go ahead and open that now. Yours should look like:
```json
{
  "_meta": {
    "sources": [
      "loopback/common/models",
      "loopback/server/models",
      "../common/models",
      "./models"
    ],
    "mixins": [
      "loopback/common/mixins",
      "loopback/server/mixins",
      "../common/mixins",
      "./mixins"
    ]
  },
  "User": {
    "dataSource": "db"
  },
  "AccessToken": {
    "dataSource": "db",
    "public": false
  },
  "ACL": {
    "dataSource": "db",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "db",
    "public": false
  },
  "Role": {
    "dataSource": "db",
    "public": false
  },
  "Client": {
    "dataSource": "mysql",
    "public": true
  },
  "Widget": {
    "dataSource": "mysql",
    "public": true
  }
}
```
Immediately, we should notice a problem. Everything except our `Client` and `Widget` models uses the `db` in-memory store. Change all of those datasources to `mysql`, and also set the `User` model to have a property of `public: false`, since we have to use our extended `Client` model. The `model-config.json` file should now look like this:
```json
{
  "_meta": {
    "sources": [
      "loopback/common/models",
      "loopback/server/models",
      "../common/models",
      "./models"
    ],
    "mixins": [
      "loopback/common/mixins",
      "loopback/server/mixins",
      "../common/mixins",
      "./mixins"
    ]
  },
  "User": {
    "dataSource": "mysql",
    "public": false
  },
  "AccessToken": {
    "dataSource": "mysql",
    "public": false
  },
  "ACL": {
    "dataSource": "mysql",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "mysql",
    "public": false
  },
  "Role": {
    "dataSource": "mysql",
    "public": false
  },
  "Client": {
    "dataSource": "mysql",
    "public": true
  },
  "Widget": {
    "dataSource": "mysql",
    "public": true
  }
}
```
Excellent
17) Head back over to localhost:3002/explorer
a) Click on the `Widget` option to see a list of endpoints that have been created.
b) Click on `GET /Widgets`
And we'll see that it's failed. Even though we've set up our application logic to deal with models and relations, we have not informed our DB of the change. Let's do that now.
Just as a note, we're doing this via the UI console instead of `curl` simply for fewer steps and brevity. We can create requests to the API by simply doing something akin to:
```
curl -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -X POST -d "{\"email\": \"user@name.com\", \"password\": \"password\"}" \
     localhost:3002/api/clients/login
```
The above would grab your access token, and then to grab the widgets authenticated we'd do:
```
curl -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -H "Authorization: TOKEN_WE_JUST_GOT" \
     localhost:3002/api/widgets
```
Really, the important part there is how to set the AUTH header. Other than that it's straightforward.
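If you're scripting this, a small convenience (assuming you have `jq` installed) is to pluck the token's `id` straight out of the login response:
```
TOKEN=$(curl -s -H "Content-Type: application/json" \
  -X POST -d '{"email": "user@name.com", "password": "password"}' \
  localhost:3002/api/clients/login | jq -r .id)

curl -H "Authorization: $TOKEN" localhost:3002/api/widgets
```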
Automated Database Migrations and Updates
A recurring problem in any type of application that develops around ANY type of database is changing schemas, tables and data structures. Most application stacks, specifically Rails, have a great way to handle this (well, or at least a way). In the world of Node however, good luck. Sequelize has some migration support, but as is classic for dev teams - the documentation is bad. Knex and Bookshelf are pretty awesome, but that of course requires configuring Express. Sails.js and friends have Waterline, but last I looked into Sails.js, they had split and now I have no idea if it's Sails, Trails or whatever.
And let's not get started on Mongo. The number of developers that just pick mongo because it looks like JSON is hilarious. And inevitably, as is the case with MOST data in MOST apps, they require relations. And as soon as all the data starts getting super relation-heavy, all the benefits of NoSQL start disappearing (quickly).
Back on topic here. Strongloop's Loopback actually has a pretty great migration/update system. However, you'd think they'd want you to not know about it. It's not that it's undocumented, it's just worded very oddly. There are two functions:
`automigrate` - updates your tables but drops all the data in existing ones. Ouch.
`autoupdate` - updates tables.
When first reading it, and maybe it's just me, I assumed that `autoupdate` was only something one could perform if the table was already in existence. So of course that led to this weird conundrum of looking for a way to create the table if it does not exist, and update it if it does, but only if it needs to be updated.
THANKFULLY, despite it being terribly documented, we can achieve this.
What we're going to do is twofold:
a) Create a migration script that will create our tables and drop current ones. We can run this when we need to refresh our local dev environment or add seed data.
b) Create a set of auto-update scripts that will keep our database in sync with all of our `models/model.json` files!
18) Create a new folder `bin` in our `code` directory. Create a file inside of `bin` called `migrate.js`, so the full file path to this in our `code` directory is `bin/migrate.js`.
Inside put the following:
```js
'use strict'

const path = require('path')

// import our app for one time usage
const server = require(path.resolve(__dirname, '../server/server.js'))

// reference to our datasource that we named 'mysql'
const mysql = server.dataSources.mysql

// the basic loopback model tables
const base = ['User', 'AccessToken', 'ACL', 'RoleMapping', 'Role']

// our custom models
const custom = ['Widget', 'Client']
const lbTables = [].concat(base, custom)

// Run through and create all of them
mysql.automigrate(lbTables, function (err) {
  if (err) throw err
  console.log(' ')
  console.log('Tables [' + lbTables + '] reset in ' + mysql.adapter.name)
  console.log(' ')
  mysql.disconnect()
  process.exit(0)
})
```
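Optionally, to save some keystrokes, we could wire this up as a script in `package.json` (a hypothetical `migrate` entry) and run it through compose:
```json
"scripts": {
  "migrate": "node bin/migrate.js"
}
```
Then it's just `$ docker-compose run api yarn migrate`.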
optional aside: I hate semicolons and long lines, so if your editor is complaining, just modify the `.eslintrc` file in your `code` directory to reflect the following:
```json
{
  "extends": "loopback",
  "parserOptions": {
    "ecmaVersion": 6
  },
  "rules": {
    "semi": ["error", "never"],
    "space-before-function-paren": ["error", "always"],
    "max-len": ["error", 100]
  }
}
```
/end optional aside
19) Run the migration script
In our `code` directory run the following:
docker-compose run api node bin/migrate.js
Once done, hop over into your mysql DB command line and run
show tables;
And we'll see that all of our tables now exist.
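Given the table list in our migrate script, the output should look something like:
```
+-----------------------+
| Tables_in_strongdevdb |
+-----------------------+
| ACL                   |
| AccessToken           |
| Client                |
| Role                  |
| RoleMapping           |
| User                  |
| Widget                |
+-----------------------+
```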
20) Create a `Widget`
Hop back over to localhost:3002/explorer
a) Find POST /Widgets
b) Create {"name": "amazing widget", "description": "so good"}
c) Click Try it out! and the Widget will be created.
Now to solve updating tables with new schemas.
21) Navigate to `server/models/widget.json` and add the following property:
```json
{
  "properties": {
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    },
    "size": {
      "type": "number"
    }
  }
}
```
Where `size` is our new property.
22) Head back to localhost:3002/explorer and attempt the following `Widget`:
a) Find POST /Widgets
b) Create {"name": "huge widget", "description": "huge", "size": 10}
c) Click Try it out!
And it will fail with:
Unknown column 'size' in 'field list'
Let's create those autoupdate scripts now.
23) Create a new file at server/boot/base.migration.js
Inside of this file, we'll create the auto-updating of Loopback's built-in models. Input the following:
```js
'use strict'

// the base loopback models
const models = ['User', 'AccessToken', 'ACL', 'RoleMapping', 'Role']

module.exports = function updateBaseModels (app, next) {
  // reference to our datasource
  const mysql = app.dataSources.mysql

  // check to see if the model is out of sync with DB
  mysql.isActual(models, (err, actual) => {
    if (err) {
      throw err
    }

    let syncStatus = actual ? 'in sync' : 'out of sync'
    console.log('')
    console.log(`Base models are ${syncStatus}`)
    console.log('')

    // if the models are in sync, move along
    if (actual) return next()

    console.log('Migrating Base Models...')

    // update the models
    mysql.autoupdate(models, (err, result) => {
      if (err) throw err
      console.log('Base models migration successful!')
      console.log('')
      next()
    })
  })
}
```
After saving this file, if we head back over to our logs, we'll see the message that they're in sync. We haven't changed them, and honestly probably won't ever change the base models, but this is here just in case we ever need to finagle them.
24) Create a new file at server/boot/custom.migration.js
Finally, for our custom models. Even though these scripts are basically identical, it's convenient to keep them separate, since we might have to change the way they update in the future in ways that differ from the base models.
```js
'use strict'

const models = ['Widget', 'Client']

module.exports = function updateCustomModels (app, next) {
  const mysql = app.dataSources.mysql

  mysql.isActual(models, (err, actual) => {
    if (err) {
      throw err
    }

    let syncStatus = actual ? 'in sync' : 'out of sync'
    console.log('')
    console.log(`Custom models are ${syncStatus}`)
    console.log('')

    if (actual) return next()

    console.log('Migrating Custom Models...')

    mysql.autoupdate(models, (err, result) => {
      if (err) throw err
      console.log('Custom models migration successful!')
      console.log('')
      next()
    })
  })
}
```
No comments for this one since it's the same.
One aside though is `boot`. This directory, as its name suggests, includes scripts that are run every time the loopback app is booted up (in alphabetical order by file name). So in this case, when our app is restarted, it will always seek to ensure that our models are in sync with our database, based on our `model.json` files.
After saving this, back in the console we should see the message that our custom models have been migrated successfully! Let's go back over and create that huge widget now.
25) Head back to localhost:3002/explorer and create the huge widget
a) Find POST /Widgets
b) Create {"name": "huge widget", "description": "huge", "size": 10}
c) Click Try it out!
And everything should work as planned. From now on if we update a model's json file and reboot, the MySQL DB will automatically update.
If you'd like to verify these do indeed exist, just head back over to the MySQL DB and do a `select * from Widget;` and you'll see our beautiful widgets. Of course they're missing `clientId`, because we haven't created any through a relation yet, which we'll do next.
Authenticating and Authorizing Resources
Strongloop has a very brilliant (and fun) and yet terribly documented and confusing concept for authorization. It's known as ACLs, or 'access control lists'. They have a bit of a learning curve, but once over that, they're incredibly useful. Not to mention better than most of the other package-soup authorization libraries out there.
In a `model.json` file there is a property called `acls`. It's an array and accepts a set of objects that follow the pattern of:
```
{
  "accessType": READ, WRITE, EXECUTE,
  "principalType": USER, APP, ROLE,
  "principalId": if `Role` then one of a few we'll mention below,
  "permission": ALLOW or DENY,
  "property": an array of methods or a single one this applies to
}
```
The most common setup we'll use is a `principalType: ROLE`, which then allows us to use a `principalId` of:
- `$owner` - only the resource owner may access
- `$everyone` - anyone may access
- `$authenticated` - only logged in users may access
- `$unauthenticated` - logged out users
- custom - we can define our own roles! (see the sketch after this list)
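To give a taste of that custom option, here's a hedged sketch (a hypothetical `server/boot/create-admin-role.js`) using Loopback's built-in `Role` and `RoleMapping` models:
```js
// server/boot/create-admin-role.js (hypothetical)
'use strict'

module.exports = function (app) {
  const Role = app.models.Role
  const RoleMapping = app.models.RoleMapping

  // create an 'admin' role we could then reference as a principalId in acls
  Role.findOrCreate({ where: { name: 'admin' } }, { name: 'admin' }, (err, role) => {
    if (err) throw err

    // map an existing client into the role (id 1 here, purely for illustration)
    role.principals.create({
      principalType: RoleMapping.USER,
      principalId: 1
    }, (err) => {
      if (err) throw err
      console.log('admin role ready')
    })
  })
}
```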
These ACLs have an order of precedence in which they apply. In simple terms, that means if you apply 3 different ACLs, there is a set order by which loopback will determine the final permission. This is actually made pretty clear at the end of their docs
http://loopback.io/doc/en/lb3/Controlling-data-access.html#acl-rule-precedence
The way I like to think about it is using a visual.
I have a resource. In our case a Widget. And it's huge and green.
There's a road to it that's letting everyone in.
In order to filter out only the traffic I want, I'll put security guard posts along the road to the Widget.
The guard posts in this case are ACLs. They each have their own set of rules to let traffic in.
Anyhow..
Before anything, let's create our first related widget.
26) Head over to localhost:3002/explorer
a) Under `Client`, find `POST /Clients` and let's create a user with the following:
{"email": "test@widget.com", "password": "test"}
b) After our user has been created, find `POST /Clients/login` and use the following (the same as what you signed up with):
{"email": "test@widget.com", "password": "test"}
After this is posted, it will return an instance of an `AccessToken`.
From this, grab the `id` property of the returned token, paste it into the Set Access Token field in the navigation bar, and set it.
All this does is add our access token to each request from this point on.
Also note our userId.
c) Find `POST /Clients/:id/widgets`, enter your userId for id and post the following widget:
{"name": "user widget", "description": "user awesome", "size": 5}
{"name": "user widget", "description": "user awesome", "size": 5}
We'll receive an Authorization error here. That's because, by default, related resources are not allowed to be executed/read from by their related model.
27) Hop over to `server/models/client.json` and add the following object in the `acls` array:
```json
{
  "accessType": "EXECUTE",
  "principalType": "ROLE",
  "principalId": "$authenticated",
  "permission": "ALLOW",
  "property": ["__create__widgets"]
}
```
The above ACL says: allow a `Client` to create a `Widget` via the related method `__create__widgets`, IF the `Client` is authenticated.
All related model methods follow the pattern of `__action__relatedModelPluralName`.
However, just because we can `POST` them does not mean we can fetch them. Add one more ACL:
```json
{
  "accessType": "READ",
  "principalType": "ROLE",
  "principalId": "$owner",
  "permission": "ALLOW",
  "property": ["__get__widgets", "__findById__widgets"]
}
```
The above states that if our `Client` is the owner, meaning their `clientId` is present as a foreign key on the widget, allow them to fetch the widget via either a full get list or an individual find by id.
For a list of some of the related model methods - see this doc: http://loopback.io/doc/en/lb3/Accessing-related-models.html
I say some, because I keep finding methods and aliases that aren't documented anywhere.
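With both ACLs in place, an owner-scoped fetch from the command line would look something like this (using the token id from the login step, and assuming your client's id is 1):
```
curl -H "Authorization: TOKEN_WE_JUST_GOT" \
  localhost:3002/api/clients/1/widgets
```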
The final `client.json` should look like:
```json
{
  "name": "Client",
  "base": "User",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {},
  "validations": [],
  "relations": {
    "widgets": {
      "type": "hasMany",
      "model": "Widget",
      "foreignKey": ""
    }
  },
  "acls": [
    {
      "accessType": "EXECUTE",
      "principalType": "ROLE",
      "principalId": "$authenticated",
      "permission": "ALLOW",
      "property": ["__create__widgets"]
    },
    {
      "accessType": "READ",
      "principalType": "ROLE",
      "principalId": "$owner",
      "permission": "ALLOW",
      "property": ["__get__widgets", "__findById__widgets"]
    }
  ],
  "methods": {}
}
```
28) Head back to localhost:3002/explorer and `POST` the widget
Find `POST /Clients/:id/widgets`, enter your userId for id and post the following widget:
{"name": "user widget", "description": "user awesome", "size": 5}
Now it will work. Fabulous. One more problem though. We can still `POST` directly to the `Widgets` API. That means Widgets can be created without owners, which may or may not be what we want. In order to lock down the `Widget` api...
29) Open up `server/models/widget.json` and add the following ACL:
```json
{
  "accessType": "*",
  "principalType": "ROLE",
  "principalId": "$everyone",
  "permission": "DENY"
}
```
This just straight up denies anyone from accessing widgets directly. Access via the client will still work though. When no `property` is supplied, it assumes ALL. The final `widget.json` should look like:
```json
{
  "name": "Widget",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    },
    "size": {
      "type": "number"
    }
  },
  "validations": [],
  "relations": {},
  "acls": [
    {
      "accessType": "*",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "DENY"
    }
  ],
  "methods": {}
}
```
The alternative to this would just be to go to our `model-config.json` and change `public: true` to `public: false` for `Widget`.
Final Thoughts
As with most things within the Node community, Strongloop Loopback has a ton of major advantages and powerful features... however its documentation is incredibly lacking. I'm still a huge proponent of it though, simply because of how productive one can be in such a short amount of time. So many REST APIs have SO many things in common, why do them all again?
Setting up custom REST methods, roles and hooking up to Passport oAuth is pretty straightforward. Not to mention integrating with almost any Express package is simple, since it's just an extension of Express. And with a nice and simple migration system, it takes a lot of headache out of the process.
I've got a video series in the works that should be out in the next couple of months that will include a super deep dive into Strongloop's Loopback, using it with Docker and deploying it to hook up with a separate react webservice all inside of AWS!
If the video series sounds like something of interest, or if you'd like to subscribe and get all of my weekly guides in your inbox, signup for my mailing list!
As always, please leave me a comment or drop a line if there's any technical glitches or problems.
This was originally posted on J Cole Morrison: Tech Guides and Thoughts
Check out some of my other guides:
- Guide to Fault Tolerant and Load Balanced AWS Docker Deployment on ECS
- Create React App with SASS, Storybook and Yarn in a Docker Environment