Wednesday, August 29, 2018
How to Use the Symfony Filesystem Component
In this article, we're going to explore the Symfony Filesystem component, which provides useful methods to interact with a file system. After installation and configuration, we'll create a few real-world examples of how to use it.
The Symfony Filesystem Component
More often than not, you'll need to interact with a file system if you're dealing with PHP applications. In most cases, you either end up using the core PHP functions or create your own custom wrapper class to achieve the desired functionality. Either way, it's difficult to maintain over a longer period of time. So what you need is a library which is well maintained and easy to use. That's where the Symfony Filesystem component comes in.
The Symfony Filesystem component provides useful wrapper methods that make the file system interaction a breeze and a fun experience. Let's quickly look at what it's capable of:
- creating a directory
- creating a file
- editing file contents
- changing the owner and group of a file or directory
- creating a symlink
- copying a file or directory
- removing a file or directory
- and more
In this article, I'll show you how to unleash the power of the Symfony Filesystem component. As usual, we'll start with installation and configuration instructions, and then we'll implement a few real-world examples to demonstrate the key concepts.
Installation and Configuration
In this section, we're going to install the Symfony Filesystem component. I assume that you've already installed Composer in your system as we'll need it to install the Filesystem component available at Packagist.
So go ahead and install the Filesystem component using the following command.
composer require symfony/filesystem
That should have created a composer.json file, which should look like this:
{ "require": { "symfony/filesystem": "^4.1" } }
So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
<?php
require_once './vendor/autoload.php';

// application code
?>
A Real-World Example
In this section, we'll create an example which demonstrates how you could use the Filesystem component in your applications to perform various filesystem operations.
To start with, let's go ahead and create the index.php file with the following contents.
<?php
require_once './vendor/autoload.php';

use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\Filesystem\Exception\IOExceptionInterface;

// init file system
$fsObject = new Filesystem();
$current_dir_path = getcwd();

// make a new directory

// create a new file and add contents

// copy a directory

// remove a directory
Here, we've initialized the Filesystem object to $fsObject and saved the current directory to $current_dir_path. In the upcoming sections, we'll use $fsObject to perform different operations.
Make a New Directory
First, we'll create a new directory.
// make a new directory
try {
    $new_dir_path = $current_dir_path . "/foo";

    if (!$fsObject->exists($new_dir_path)) {
        $old = umask(0);
        $fsObject->mkdir($new_dir_path, 0775);
        $fsObject->chown($new_dir_path, "www-data");
        $fsObject->chgrp($new_dir_path, "www-data");
        umask($old);
    }
} catch (IOExceptionInterface $exception) {
    echo "Error creating directory at " . $exception->getPath();
}
Here, we've used the exists method to check if the foo directory already exists before creating it. Next, we used the mkdir method to create the foo directory with the 0775 permissions, which means readable and executable by all, but only writable by the file owner and their group. (This is the octal notation for filesystem permissions—to learn more, check out this breakdown of octal notation.) Further, we've used the chown and chgrp methods to change the owner and group of the foo directory.
Create a New File and Add Contents
In this section, we'll create a new file and add contents to that file.
// create a new file and add contents
try {
    $new_file_path = $current_dir_path . "/foo/bar.txt";

    if (!$fsObject->exists($new_file_path)) {
        $fsObject->touch($new_file_path);
        $fsObject->chmod($new_file_path, 0777);

        $fsObject->dumpFile($new_file_path, "Adding dummy content to bar.txt file.\n");
        $fsObject->appendToFile($new_file_path, "This should be added to the end of the file.\n");
    }
} catch (IOExceptionInterface $exception) {
    echo "Error creating file at " . $exception->getPath();
}
Here, we've used the touch method to create a new file and then used chmod to set its permissions to 0777—globally readable, writable, and executable. Once the file is created, you can use the dumpFile method to add contents to that file. On the other hand, if you want to add contents to an already existing file, you can use the appendToFile method, as shown in the above example.
Copy a Directory
So far, we've created the foo directory and the bar.txt file using the $fsObject object. In this section, we'll see how to copy a directory along with its contents.
// copy a directory
try {
    $src_dir_path = $current_dir_path . "/foo";
    $dest_dir_path = $current_dir_path . "/foo_copy";

    if (!$fsObject->exists($dest_dir_path)) {
        $fsObject->mirror($src_dir_path, $dest_dir_path);
    }
} catch (IOExceptionInterface $exception) {
    echo "Error copying directory at " . $exception->getPath();
}
As you can see, first we built the path names with string concatenation. Then, once we made sure the directory didn't already exist using the exists method, we used the mirror method to copy the foo directory into the foo_copy directory.
Remove a Directory
Finally, let's see how to remove a directory.
// remove a directory
try {
    $arr_dirs = array(
        $current_dir_path . "/foo",
        $current_dir_path . "/foo_copy"
    );

    $fsObject->remove($arr_dirs);
} catch (IOExceptionInterface $exception) {
    echo "Error deleting directory at " . $exception->getPath();
}
Again, it's pretty straightforward—to delete a directory, you just use the remove method.
You can find the complete code to index.php in our GitHub repo.
Conclusion
So that's a brief introduction to the Symfony Filesystem component. The Symfony Filesystem component provides methods that make interaction with a file system a breeze. We looked at how to install the component, and we created a handful of examples to demonstrate various aspects of the component.
I hope that you've enjoyed this article, and feel free to post your thoughts in the comments below!
Tuesday, August 28, 2018
Build a REST API With Node.js and Express: Connecting a Database
In the first tutorial, Understanding RESTful APIs, we learned what the REST architecture is, what HTTP request methods and responses are, and how to understand a RESTful API endpoint. In the second tutorial, How to Set Up an Express API Server, we learned how to build servers with both Node's built-in http module and the Express framework, and how to route the app we created to different URL endpoints.
Currently, we're using static data to display user information in the form of a JSON feed when the API endpoint is hit with a GET request. In this tutorial, we're going to set up a MySQL database to store all the data, connect to the database from our Node.js app, and allow the API to use the GET, POST, PUT, and DELETE methods to create a complete API.
Installation
Up to this point, we have not used a database to store or manipulate any data, so we're going to set one up. This tutorial will be using MySQL, and if you already have MySQL installed on your computer, you'll be ready to go on to the next step.
If you don't have MySQL installed, you can download MAMP for macOS and Windows, which provides a free, local server environment and database. Once you have this downloaded, open the program and click Start Servers to start MySQL.
In addition to setting up MySQL itself, we'll want GUI software to view the database and tables. For Mac, download Sequel Pro, and for Windows, download SQLyog. Once you have MySQL downloaded and running, you can use Sequel Pro or SQLyog to connect to localhost with the username root and password root on port 3306.
Once everything is set up here, we can move on to setting up the database for our API.
Setting Up the Database
In your database viewing software, add a new database and call it api. Make sure MySQL is running, or you won't be able to connect to localhost.
When you have the api database created, move into it and run the following query to create a new table.
CREATE TABLE `users` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(30) DEFAULT '',
  `email` varchar(50) DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This SQL query will create the structure of our users table. Each user will have an auto-incrementing id, a name, and an email address.
We can also fill the database with the same data that we're currently displaying through a static JSON array by running an INSERT query.
INSERT INTO users (name, email) VALUES
  ('Richard Hendricks', 'richard@piedpiper.com'),
  ('Bertram Gilfoyle', 'gilfoyle@piedpiper.com');
There is no need to input the id field, as it is auto-incrementing. At this point, we have the structure of our table as well as some sample data to work with.
Connecting to MySQL
Back in our app, we have to connect to MySQL from Node.js to begin working with the data. Earlier, we installed the mysql npm module, and now we're going to use it.
Create a new directory called data and make a config.js file.
We'll begin by requiring the mysql module in data/config.js.
const mysql = require('mysql');
Let's create a config object that contains the host, user, password, and database. This should refer to the api database we made and use the default localhost settings.
// Set database connection credentials
const config = {
  host: 'localhost',
  user: 'root',
  password: 'root',
  database: 'api',
};
For efficiency, we're going to create a MySQL pool, which allows us to use multiple connections at once instead of having to manually open and close multiple connections.
// Create a MySQL pool
const pool = mysql.createPool(config);
Finally, we'll export the MySQL pool so the app can use it.
// Export the pool
module.exports = pool;
You can see the completed database configuration file in our GitHub repo.
Now that we're connecting to MySQL and our settings are complete, we can move on to interacting with the database from the API.
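Before moving on, it can help to sanity-check the pool with a quick one-off script. This is a hypothetical helper file, not part of the tutorial's code, that simply runs a trivial query through the exported pool:

// test-connection.js - a minimal sketch to verify the pool works
const pool = require('./data/config');

pool.query('SELECT 1 + 1 AS result', (error, rows) => {
  if (error) throw error;

  console.log(rows[0].result); // should print 2
  pool.end(); // close the pool's connections so the script can exit
});

Run it with node test-connection.js from the project root; if it prints 2, the credentials and the pool are working.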
Getting API Data From MySQL
Currently, our routes.js file is manually creating a JSON array of users, which looks like this.
const users = [{ ...
Since we're no longer going to be using static data, we can delete that entire array and replace it with a link to our MySQL pool.
// Load the MySQL pool connection
const pool = require('../data/config');
Previously, the GET route for the /users path was sending the static users data. Our updated code is going to query the database for that data instead. We're going to use a SQL query to SELECT all from the users table, which looks like this.
SELECT * FROM users
Here is what our new /users GET route will look like, using the pool.query() method.
// Display all users
app.get('/users', (request, response) => {
  pool.query('SELECT * FROM users', (error, result) => {
    if (error) throw error;

    response.send(result);
  });
});
Here, we're running the SELECT query and then sending the result as JSON to the client via the /users endpoint. If you restart the server and navigate to the /users page, you'll see the same data as before, but now it's dynamic.
Using URL Parameters
So far, our endpoints have been static paths—either the / root or /users—but what about when we want to see data only about a specific user? We'll need to use a variable endpoint. For our users, we might want to retrieve information about each individual user based on their unique id. To do that, we would use a colon (:) to denote that it's a route parameter.
// Display a single user by ID
app.get('/users/:id', (request, response) => {
  // ...
});
We can retrieve the parameter for this path with the request.params property. Since ours is named id, that will be how we refer to it.
const id = request.params.id;
Now we'll add a WHERE clause to our SELECT statement to only get results that have the specified id.
We'll use ? as a placeholder to avoid SQL injection and pass the id through as a parameter, instead of building a concatenated string, which would be less secure.
pool.query('SELECT * FROM users WHERE id = ?', id, (error, result) => {
  if (error) throw error;

  response.send(result);
});
The full code for our individual user resource now looks like this:
// Display a single user by ID app.get('/users/:id', (request, response) => { const id = request.params.id; pool.query('SELECT * FROM users WHERE id = ?', id, (error, result) => { if (error) throw error; response.send(result); }); });
Now you can restart the server and navigate to http://localhost:3002/users/2 to see only the information for Gilfoyle. If you get an error like Cannot GET /users/2, it means you need to restart the server.
Going to this URL should return a single result.
[{
  id: 2,
  name: "Bertram Gilfoyle",
  email: "gilfoyle@piedpiper.com"
}]
If that's what you see, congratulations: you've successfully set up a dynamic route parameter!
Sending a POST Request
So far, everything we've been doing has used GET requests. These requests are safe, meaning they do not alter the state of the server. We've simply been viewing JSON data.
Now we're going to begin to make the API truly dynamic by using a POST request to add new data.
I mentioned earlier in the Understanding REST article that we don't use verbs like add or delete in the URL for performing actions. In order to add a new user to the database, we'll POST to the same URL we view them from, but just set up a separate route for it.
// Add a new user
app.post('/users', (request, response) => {
  // ...
});
Note that we're using app.post() instead of app.get() now.
Since we're creating instead of reading, we'll use an INSERT query here, much like we did at the initialization of the database. We'll send the entire request.body through to the SQL query.
pool.query('INSERT INTO users SET ?', request.body, (error, result) => {
  if (error) throw error;
We're also going to specify the status of the response as 201, which stands for Created. In order to get the id of the last inserted item, we'll use the insertId property.
response.status(201).send(`User added with ID: ${result.insertId}`);
Our entire POST route will look like this.
// Add a new user
app.post('/users', (request, response) => {
  pool.query('INSERT INTO users SET ?', request.body, (error, result) => {
    if (error) throw error;

    response.status(201).send(`User added with ID: ${result.insertId}`);
  });
});
Now we can send a POST request through. Most of the time when you send a POST request, you're doing it through a web form. We'll learn how to set that up by the end of this article, but the fastest and easiest way to send a test POST is with cURL, using the -d (--data) flag.
We'll run curl -d, followed by a query string containing all the key/value pairs and the request endpoint.
curl -d "name=Dinesh Chugtai&email=dinesh@piedpiper.com" http://localhost:3002/users
Once you send this request through, you should get a response from the server.
User added with ID: 3
If you navigate to http://localhost:3002/users, you'll see the latest entry added to the list.
Sending a PUT Request
POST is useful for adding a new user, but we'll want to use PUT to modify an existing user. PUT is idempotent, meaning you can send the same request through multiple times and only one action will be performed. This is different than POST, because if we sent our new user request through more than once, it would keep creating new users.
For our API, we're going to set up PUT to be able to handle editing a single user, so we're going to use the :id route parameter this time.
Let's create an UPDATE query and make sure it only applies to the requested id with the WHERE clause. We're using two ? placeholders, and the values we pass will go in sequential order.
// Update an existing user
app.put('/users/:id', (request, response) => {
  const id = request.params.id;

  pool.query('UPDATE users SET ? WHERE id = ?', [request.body, id], (error, result) => {
    if (error) throw error;

    response.send('User updated successfully.');
  });
});
For our test, we'll edit user 2 and update the email address from gilfoyle@piedpiper.com to bertram@piedpiper.com. We can use cURL again, with the -X (--request) flag, to explicitly specify that we're sending a PUT request through.
curl -X PUT -d "name=Bertram Gilfoyle" -d "email=bertram@piedpiper.com" http://localhost:3002/users/2
Make sure to restart the server before sending the request, or else you'll get the Cannot PUT /users/2 error.
You should see this:
User updated successfully.
The user data with id 2 should now be updated.
Sending a DELETE Request
Our last task to complete the CRUD functionality of the API is to make an option for deleting a user from the database. This request will use the DELETE SQL query with WHERE, and it will delete an individual user specified by a route parameter.
// Delete a user
app.delete('/users/:id', (request, response) => {
  const id = request.params.id;

  pool.query('DELETE FROM users WHERE id = ?', id, (error, result) => {
    if (error) throw error;

    response.send('User deleted.');
  });
});
We can use -X again with cURL to send the delete through. Let's delete the latest user we created.
curl -X DELETE http://localhost:3002/users/3
You'll see the success message.
User deleted.
Navigate to http://localhost:3002/users, and you'll see that there are only two users now.
Congratulations! At this point, the API is complete. Visit the GitHub repo to see the complete code for routes.js.
Sending Requests Through the request Module
At the beginning of this article, we installed four dependencies, and one of them was the request module. Instead of using cURL requests, you could make a new file with all the data and send it through. I'll create a file called post.js that will create a new user via POST.
const request = require('request');

const json = {
  "name": "Dinesh Chugtai",
  "email": "dinesh@piedpiper.com",
};

request.post({
  url: 'http://localhost:3002/users',
  body: json,
  json: true,
}, function (error, response, body) {
  console.log(body);
});
We can call this using node post.js in a new terminal window while the server is running, and it will have the same effect as using cURL. If something is not working with cURL, the request module is useful as we can view the error, response, and body.
Sending Requests Through a Web Form
Usually, POST and other HTTP methods that alter the state of the server are sent using HTML forms. In this very simple example, we can create an index.html file anywhere, and make a field for a name and email address. The form's action will point to the resource, in this case http://localhost:3002/users, and we'll specify the method as post.
Create index.html and add the following code to it:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Node.js Express REST API</title>
</head>
<body>
  <form action="http://localhost:3002/users" method="post">
    <label for="name">Name</label>
    <input type="text" name="name">

    <label for="email">Email</label>
    <input type="email" name="email">

    <input type="submit">
  </form>
</body>
</html>
Open this static HTML file in your browser, fill it out, and send it while the server is running in the terminal. You should see the response of User added with ID: 4, and you should be able to view the new list of users.
Conclusion
In this tutorial, we learned how to hook up an Express server to a MySQL database and set up routes that correspond to the GET, POST, PUT, and DELETE methods for paths and dynamic route parameters. We also learned how to send HTTP requests to an API server using cURL, the Node.js request module, and HTML forms.
At this point, you should have a very good understanding of how RESTful APIs work, and you can now create your own full-fledged API in Node.js with Express and MySQL!
Friday, August 24, 2018
A Beginner's Guide to Drawing 2D Graphics With Two.js
Two.js is an API that makes it easy to create 2D shapes with code. Follow along and you'll learn how to create and animate shapes from JavaScript.
Two.js is renderer agnostic, so you can rely on the same API to draw with Canvas, SVG, or WebGL. The library has a lot of methods which can be used to control how different shapes appear on the screen or how they are animated.
Installation
The uncompressed version of the library has a size of around 128 KB, while the compressed version is 50 KB. If you are using the latest version, you can further reduce the size of the library using a custom build.
You can either download the minified version of the library from GitHub or you can link directly to the CDN hosted version. Once you have added the library to your webpage, you can start drawing and animating different shapes or objects.
Creating Basic Shapes
First, you need to tell Two.js about the element on which you want to draw and animate your shapes. You can pass some parameters to the Two constructor to set things up.
Set the type of renderer using the type property. You can specify a value like svg, webgl, canvas, etc. The type is set to svg by default. The width and height of the drawing space can be specified using the width and height parameters. You can also set the drawing space to the full available screen using the fullscreen parameter. When fullscreen is set to true, the values of width and height will be disregarded.
Finally, you can tell Two.js to automatically start an animation with the help of the Boolean autostart parameter.
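As a quick illustration, a setup using these parameters might look like the sketch below. The container element and dimensions are arbitrary placeholders, and note that in code the renderer type is typically given through the Two.Types constants:

// Grab the element that will host the drawing surface
var elem = document.getElementById("drawing");

// An SVG renderer, 600x400 pixels, animating automatically
var params = { type: Two.Types.svg, width: 600, height: 400, autostart: true };
var two = new Two(params).appendTo(elem);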
After passing all the desired parameters to the constructor, you can start drawing lines, rectangles, circles, and ellipses.
You can draw a line using two.makeLine(x1, y1, x2, y2). Here, (x1, y1) are the coordinates of the first end point, and (x2, y2) are the coordinates of the second end point. This function will return a Two.Line object, which can be stored in a variable for further manipulation at a later point.
In a similar manner, you can draw normal and rounded rectangles using two.makeRectangle(x, y, width, height) and two.makeRoundedRectangle(x, y, width, height, radius) respectively. Remember that x and y determine the center of the rectangle, instead of its top left coordinates like many other libraries. The width and height parameters will determine the size of the rectangle. The radius parameter is used to specify the value of the radius for the rounded corner.
You can also render circles and ellipses on a webpage using two.makeCircle(x, y, radius) and two.makeEllipse(x, y, width, height) respectively. Just like the rectangles, the x and y parameters specify the center of the circle or ellipse. Setting the width and height to the same value in the case of an ellipse will render it like a circle.
One useful method in Two.js that you will use frequently is two.makeGroup(objects). You can either pass a list of different objects or pass an array of objects, paths, or groups as the parameter to this method. It will return a Two.Group object.
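Putting these methods together, a short sketch with arbitrary coordinates might draw a few shapes and collect them into a group:

// Assumes two was created as shown earlier
var line = two.makeLine(10, 10, 110, 10);
var rect = two.makeRectangle(160, 60, 100, 60); // (x, y) is the center
var circle = two.makeCircle(300, 60, 40);
var ellipse = two.makeEllipse(430, 60, 60, 35);

// Collect everything into a single group for joint manipulation
var shapes = two.makeGroup([line, rect, circle, ellipse]);

// Render the scene (only needed when autostart is false)
two.update();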
Manipulating Objects in a Group
After you have created a group, you can manipulate all its children at once using properties that the group makes available to you.
The stroke and fill properties can be used to set the stroke and fill color for all children in a group. They will accept all valid forms in which you can represent a color in CSS. This means that you are free to use RGB, HSL, or hex notation. You can also simply use the name of the color, like orange, red, or blue. Similarly, you can set values for all other properties like linewidth, opacity, miter, and cap. It is possible to remove the fill and stroke from all children in a group using the noFill() and noStroke() methods.
You can also apply other physical transformations like scale, rotation, and translation. These transformations will be applied on individual objects. Adding new objects to a group and removing them is easy with methods like add() and remove().
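Continuing the sketch from the previous section, the group-level properties and methods could be used like this, with all values purely illustrative:

// Style every child of the group at once
shapes.fill = "orange";
shapes.stroke = "#333333";
shapes.linewidth = 3;

// Transform the group
shapes.rotation = Math.PI / 8; // in radians
shapes.translation.set(50, 20);

// Remove a shape from the group, then re-render
shapes.remove(ellipse);
two.update();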
Defining Gradients and Writing Text
You can define both linear and radial gradients in Two.js. Defining a gradient does not mean that it will be rendered automatically on the screen, but it will be available for you to use when setting the fill or stroke values of various objects.
You can define a linear gradient using two.makeLinearGradient(x1, y1, x2, y2, stops). The values x1 and y1 determine the coordinates of the start of the gradient. Similarly, the values x2 and y2 determine the coordinates of the end of the gradient. The stops parameter is an array of Two.Stop instances. These define the colors of each part of the gradient and where each color transitions into the next. They can be defined using new Two.Stop(offset, color, opacity), where offset determines the point on the gradient where that particular color has to be fully rendered. The color parameter determines the color of the gradient at that particular point. You can use any valid CSS color representation as its value. Finally, the opacity parameter determines the opacity of the color. The opacity is optional, and it can have any value between 0 and 1.
You can define radial gradients in a similar manner using two.makeRadialGradient(x, y, radius, stops, fx, fy). In this case, the values x and y determine the center of the gradient. The radius parameter specifies how far the gradient should extend. You can also pass an array of stops to this method in order to set the color composition of the gradient. The parameters fx and fy are optional, and they can be used to specify the focal position for the gradient.
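As a concrete sketch with arbitrary offsets and colors, a radial gradient could be built and applied like this (using the variadic stop style that also appears in the project code below):

// Define a radial gradient around the shape's local origin (0, 0)
var radial = two.makeRadialGradient(
  0, 0, 40,
  new Two.Stop(0, "gold", 1),
  new Two.Stop(1, "darkorange", 1)
);

// The gradient renders nothing by itself; use it as a fill
var sun = two.makeCircle(200, 200, 40);
sun.fill = radial;
two.update();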
Check out some of the types of gradient and their code in the CodePen below.
Remember that the x and y positions of the gradients are with respect to the origin of the shape they are trying to fill. For instance, a radial gradient which is supposed to fill a shape from the center will always have x and y set to zero.
Two.js also allows you to write text on the drawing area and update it later according to your needs. This requires the use of the method two.makeText(message, x, y, styles). It might be evident from the name of the parameters that message is the actual text that you want to write. The parameters x and y are the coordinates of the point which will act as the center for writing the text. The styles parameter is an object which can be used to set the values of a large set of properties.
You can use styles to set the values of properties like font family, size, and alignment. You can also specify the value of properties like fill, stroke, opacity, rotation, scale, and translation.
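For instance, here is a hedged sketch of a styled label that uses property names from the lists above:

// Write a centered label; the style values are arbitrary
var label = two.makeText("Hello, Two.js!", 300, 40, {
  family: "Lato",
  size: 28,
  alignment: "center",
  fill: "steelblue"
});

// The text object can be updated later through its value property
label.value = "Updated label";
two.update();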
Creating a Two.js Project
After learning about all these methods and properties, it is time to apply them to a project. In this tutorial, I will show you how we can use Two.js to render the first ten elements of the periodic table with electrons rotating around the nucleus. The nucleus will also have some slight movement to improve the visual appeal of our representation.
We begin by defining some variables and functions which will be used later.
var centerX = window.innerWidth / 2;
var centerY = window.innerHeight / 2;
var elem = document.getElementById("atoms");

var elementNames = [
  "",
  "Hydrogen", "Helium", "Lithium", "Beryllium", "Boron",
  "Carbon", "Nitrogen", "Oxygen", "Fluorine", "Neon"
];

var styles = {
  alignment: "center",
  size: 36,
  family: "Lato"
};

var nucleusCount = 10;
var nucleusArray = Array();

var electronCount = 10;
var electronArray = Array();

function intRange(min, max) {
  return Math.random() * (max - min) + min;
}
The above code stores the coordinates of the center of our window in the variables centerX and centerY. These will be used later to place our atom in the center. The elementNames array contains the names of the first ten elements of the periodic table. The index of each name corresponds to the number of electrons and protons of that element, and it begins with an empty string. The styles object contains properties for styling the text object. We have also defined a function intRange() to get a random value within the given extremes.
var two = new Two({ fullscreen: true }).appendTo(elem);

var protonColor = two.makeRadialGradient(
  0, 0, 15,
  new Two.Stop(0, "red", 1),
  new Two.Stop(1, "black", 1)
);

var neutronColor = two.makeRadialGradient(
  0, 0, 15,
  new Two.Stop(0, "blue", 1),
  new Two.Stop(1, "black", 1)
);

for (var i = 0; i < nucleusCount; i++) {
  nucleusArray.push(two.makeCircle(intRange(-10, 10), intRange(-10, 10), 8));
}

nucleusArray.forEach(function(nucleus, index) {
  if (index % 2 == 0) {
    nucleus.fill = protonColor;
  }
  if (index % 2 == 1) {
    nucleus.fill = neutronColor;
  }
  nucleus.noStroke();
});
This creates an instance of Two and defines two radial gradients. The red/black radial gradients will represent protons, and the blue/black gradients will represent neutrons. We have used the intRange() function to place all these neutrons and protons within 20 pixels of each other. The makeCircle() method also sets the radius of these protons and neutrons to 8 pixels. After that, we iterate over nucleusArray and fill each circle with a different gradient alternately.
for (var i = 0; i < 10; i++) {
  if (i < 2) {
    var shellRadius = 50;
    var angle = i * Math.PI;

    electronArray.push(
      two.makeCircle(
        Math.cos(angle) * shellRadius,
        Math.sin(angle) * shellRadius,
        5
      )
    );
  }

  if (i >= 2 && i < 10) {
    var shellRadius = 80;
    var angle = (i - 2) * Math.PI / 4;

    electronArray.push(
      two.makeCircle(
        Math.cos(angle) * shellRadius,
        Math.sin(angle) * shellRadius,
        5
      )
    );
  }
}
Placing neutrons and protons inside the nucleus was easy. However, properly placing the electrons at a uniform distance will require a little maths. We use the shellRadius variable to specify the distance of different electron shells from the nucleus. A whole circle covers an angle equal to 2 PI radians. We can place different electrons uniformly by distributing the 2 PI radians between them equally.
The Math.cos() and Math.sin() functions are used to separate the vertical and horizontal components of the position vector of different electrons based on their angle.
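To make the pattern explicit: for n electrons on a shell of radius r, electron i sits at the angle i * (2 * Math.PI / n). A small helper, hypothetical and not part of the original project, captures the computation performed inline in the loop above:

// Position of point i out of n, evenly spaced on a circle of radius r
function pointOnShell(i, n, r) {
  var angle = i * (2 * Math.PI / n);
  return {
    x: Math.cos(angle) * r,
    y: Math.sin(angle) * r
  };
}

// The inner shell holds 2 electrons: pointOnShell(i, 2, 50) yields the
// angles 0 and PI, matching the i * Math.PI expression in the loop above.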
var orbitA = two.makeCircle(centerX, centerY, 50);
orbitA.fill = "transparent";
orbitA.linewidth = 2;
orbitA.stroke = "rgba(0, 0, 0, 0.1)";

var orbitB = two.makeCircle(centerX, centerY, 80);
orbitB.fill = "transparent";
orbitB.linewidth = 2;
orbitB.stroke = "rgba(0, 0, 0, 0.1)";

var groupElectronA = two.makeGroup(electronArray.slice(0, 2));
groupElectronA.translation.set(centerX, centerY);
groupElectronA.fill = "orange";
groupElectronA.linewidth = 1;

var groupElectronB = two.makeGroup(electronArray.slice(2, 10));
groupElectronB.translation.set(centerX, centerY);
groupElectronB.fill = "yellow";
groupElectronB.linewidth = 1;

var groupNucleus = two.makeGroup(nucleusArray);
groupNucleus.translation.set(centerX, centerY);
This part of the code puts electrons from different shells as well as neutrons and protons in their own separate groups. It also sets the fill colors for all electrons in a specific orbit at once.
two
  .bind("update", function(frameCount) {
    groupElectronA.rotation += 0.025 * Math.PI;
    groupElectronB.rotation += 0.005 * Math.PI;
    groupNucleus.rotation -= 0.05;
  })
  .play();

var text = two.makeText("", centerX, 100, styles);

nucleusArray.forEach(function(nucleus, index) {
  nucleus.opacity = 0;
});

electronArray.forEach(function(electron, index) {
  electron.opacity = 0;
});
This part of the code sets the opacity of the individual electrons and nucleus particles to zero. It also tells Two.js to rotate the electron groups and the nucleus at specific speeds on every animation frame.
var visible = 0;

document.addEventListener("click", function(event) {
  if (visible < nucleusArray.length) {
    nucleusArray[visible].opacity = 1;
    electronArray[visible].opacity = 1;
    visible++;
    text.value = elementNames[visible];
  } else {
    nucleusArray.forEach(el => el.opacity = 0);
    electronArray.forEach(el => el.opacity = 0);
    visible = 0;
    text.value = elementNames[0];
  }
});
The final part of the code allows us to iterate through the elements by clicking the mouse or tapping. To load the next element, we make one more electron and one more proton or neutron visible and update the element name. After clicking on the last element, all the particles are hidden again so we can start over. The visible variable keeps track of the number of atomic particles that are currently visible so that we can show or hide them accordingly.
Try clicking or tapping in the following CodePen demo to see the first ten elements of the periodic table.
Final Thoughts
We began this tutorial with a brief introduction to the Two.js library and how it can be used to draw shapes like rectangles, circles, and ellipses. After that, we discussed how we can group different objects together to manipulate them all at once. We used this ability to group elements to translate and rotate them in synchronization. These tools all came together in our animation of the atoms of the first ten elements in the periodic table.
As you can see, creating animated 2D graphics is very easy using Two.js. The focus of this post was to help you get started quickly, so we only covered the basics. However, you should read the official documentation to learn more about the library!
Thursday, August 23, 2018
How to Set Up an Express API Server in Node.js
In the previous tutorial, we learned what the REST architecture is, the six guiding constraints of REST, how to understand HTTP request methods and their response codes, and the anatomy of a RESTful API endpoint.
In this tutorial, we'll set up a server for our API to live on. You can build an API with any programming language and server software, but we will use Node.js, which is the back-end implementation of JavaScript, and Express, a popular, minimal framework for Node.
Installation
Our first prerequisite is making sure Node.js and npm are installed globally on the computer. We can test both using the -v flag, which will display the version. Open up your command prompt and type the following.
node -v && npm -v
v10.8.0
6.2.0
Your versions may be slightly different than mine, but as long as both are there, we can get started.
Let's create a project directory called express-api and move to it.
mkdir express-api && cd express-api
Now that we're in our new directory, we can initialize our project with the init command.
npm init
This command will prompt you to answer some questions about the project, which you can choose to fill out or not. Once the setup is complete, you'll have a package.json file that looks like this:
{ "name": "express-api", "version": "1.0.0", "description": "Node.js and Express REST API", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "Tania Rascia", "license": "MIT" }
Now that we have our package.json, we can install the dependencies required for our project. Fortunately we don't require too many dependencies, just these four listed below.
- body-parser: Body parsing middleware.
- express: A minimalist web framework we'll use for our server.
- mysql: A MySQL driver.
- request (optional): A simple way to make HTTP calls.
We'll use the install command followed by each dependency to finish setting up our project.
npm install body-parser express mysql request
This will create a package-lock.json file and a node_modules directory, and our package.json will be updated to look something like this:
{ "name": "express-api", "version": "1.0.0", "description": "Node.js and Express REST API", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "Tania Rascia", "license": "MIT", "dependencies": { "dependencies": { "body-parser": "^1.18.3", "express": "^4.16.3", "mysql": "^2.16.0", "request": "^2.88.0" } }
Setting Up an HTTP Server
Before we get started on setting up an Express server, we will quickly set up an HTTP server with Node's built-in http module, to get an idea of how a simple server works.
Create a file called hello-server.js. Load in the http module, set a port number (I chose 3001), and create the server with the createServer() method.
// Build a server with Node's HTTP module
const http = require('http');
const port = 3001;

const server = http.createServer();
In the introductory REST article, we discussed what requests and responses are with regards to an HTTP server. We're going to set our server to handle a request and display the URL requested on the server side, and display a Hello, server! message to the client on the response side.
server.on('request', (request, response) => {
  console.log(`URL: ${request.url}`);

  response.end('Hello, server!');
});
Finally, we will tell the server which port to listen on, and display an error if there is one.
// Start the server
server.listen(port, (error) => {
  if (error) return console.log(`Error: ${error}`);

  console.log(`Server is listening on port ${port}`);
});
Now, we can start our server with node followed by the filename.
node hello-server.js
You will see this response in the terminal:
Server is listening on port 3001
To check that the server is actually running, go to http://localhost:3001/ in your browser's address bar. If all is working properly, you should see Hello, server! on the page. In your terminal, you'll also see the URLs that were requested.
URL: /
URL: /favicon.ico
If you were to navigate to http://localhost:3001/hello, you would see URL: /hello.
We can also use cURL on our local server, which will show us the exact headers and body that are being returned.
curl -i http://localhost:3001
HTTP/1.1 200 OK
Date: Wed, 15 Aug 2018 22:14:23 GMT
Connection: keep-alive
Content-Length: 14

Hello, server!
If you close the terminal window at any time, the server will go away.
Now that we have an idea of how the server, request, and response all work together, we can rewrite this in Express, which has an even simpler interface and extended features.
Setting Up an Express Server
We're going to create a new file, app.js, which will be the entry point to our actual project. Just like with the original http server, we'll require a module and set a port to start.
Create an app.js file and put the following code in it.
// Require packages and set the port
const express = require('express');
const port = 3002;

const app = express();
Now, instead of looking for all requests, we will explicitly state that we are looking for a GET request on the root of the server (/). When / receives a request, we will display the URL requested and the "Hello, Server!" message.
app.get('/', (request, response) => {
  console.log(`URL: ${request.url}`);

  response.send('Hello, Server!');
});
Finally, we'll start the server on port 3002 with the listen() method.
// Start the server
const server = app.listen(port, (error) => {
  if (error) return console.log(`Error: ${error}`);

  console.log(`Server listening on port ${server.address().port}`);
});
We can start the server with node app.js as we did before, but we can also modify the scripts property in our package.json file to automatically run this specific command.
"scripts": { "start": "node app.js" },
Now we can use npm start to start the server, and we'll see our server message in the terminal.
Server listening on port 3002
If we run a curl -i on the URL, we will see that it is powered by Express now, and there are some additional headers such as Content-Type.
curl -i http://localhost:3002
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 14
ETag: W/"e-gaHDsc0MZK+LfDiTM4ruVL4pUqI"
Date: Wed, 15 Aug 2018 22:38:45 GMT
Connection: keep-alive

Hello, Server!
Add Body Parsing Middleware
In order to easily deal with POST and PUT requests to our API, we will add body parsing middleware. This is where our body-parser module comes in. body-parser will extract the entire body of an incoming request and parse it into a JSON object that we can work with.
We'll simply require the module at the top of our file. Add the following require statement to the top of your app.js file.
const bodyParser = require('body-parser'); ...
Then we'll tell our Express app to use body-parser, and look for JSON.
// Use Node.js body parsing middleware
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({
  extended: true,
}));
Also, let's change our message to send a JSON object as a response instead of plain text.
response.send({message: 'Node.js and Express REST API'});
Following is our full app.js file as it stands now.
// Require packages and set the port
const express = require('express');
const port = 3002;
const bodyParser = require('body-parser');

const app = express();

// Use Node.js body parsing middleware
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({
  extended: true,
}));

app.get('/', (request, response) => {
  response.send({ message: 'Node.js and Express REST API' });
});

// Start the server
const server = app.listen(port, (error) => {
  if (error) return console.log(`Error: ${error}`);

  console.log(`Server listening on port ${server.address().port}`);
});
If you send a curl -i to the server, you'll see that the header now returns Content-Type: application/json; charset=utf-8.
Set Up Routes
So far, we only have a GET route to the root (/), but our API should be able to handle all four major HTTP request methods on multiple URLs. We're going to set up a router and make some fake data to display.
Let's create a new directory called routes, and a file within called routes.js. We'll link to it at the top of app.js.
const routes = require('./routes/routes');
Note that the .js extension is not necessary in the require. Now we'll move our app's GET listener to routes.js. Enter the following code in routes.js.
const router = app => {
  app.get('/', (request, response) => {
    response.send({ message: 'Node.js and Express REST API' });
  });
}
Finally, export the router so we can use it in our app.js file.
// Export the router
module.exports = router;
In app.js, replace the app.get() code you had before with a call to routes():
routes(app);
You should now be able to go to http://localhost:3002 and see the same thing as before. (Don't forget to restart the server!)
Once that is all set up and working properly, we'll serve some JSON data with another route. We'll just use fake data for now, since our database is not yet set up.
Let's create a users variable in routes.js, with some fake user data in JSON format.
const users = [{
  id: 1,
  name: "Richard Hendricks",
  email: "richard@piedpiper.com",
}, {
  id: 2,
  name: "Bertram Gilfoyle",
  email: "gilfoyle@piedpiper.com",
}];
We'll add another GET route to our router, /users, and send the user data through.
app.get('/users', (request, response) => {
  response.send(users);
});
After restarting the server, you can now navigate to http://localhost:3002/users and see all our data displayed.
Note: If you do not have a JSON viewer extension on your browser, I highly recommend you download one, such as JSONView for Chrome. This will make the data much easier to read!
Visit our GitHub Repo to see the completed code for this post and compare it to your own.
Conclusion
In this tutorial, we learned how to set up a built-in HTTP server and an Express server in Node, route requests and URLs, and serve JSON data with GET requests.
In the final installment of the RESTful API series, we will hook up our Express server to MySQL to create, view, update, and delete users in a database, finalizing our API's functionality.
Wednesday, August 22, 2018
How Secure Are Your JavaScript Open-Source Dependencies?
Modern-day JavaScript developers love npm. GitHub and the npm registry are a developer’s first choice place for finding a particular package. Open-source modules add to the productivity and efficiency by providing developers with a host of functionalities that you can reuse in your project. It is fair to say that if it were not for these open-source packages, most of the frameworks today would not exist in their current form.
A full-fledged enterprise-level application, for instance, might rely on hundreds if not thousands of packages. The usual dependencies include direct dependencies, development dependencies, bundled dependencies, production dependencies, and optional dependencies. That’s great because everyone’s getting the best out of the open-source ecosystem.
However, one of the factors that get overlooked is the amount of risk involved. Although these third-party modules are particularly useful in their domain, they also introduce some security risks into your application.
Are Open-Source Libraries Vulnerable?
OSS dependencies are indeed vulnerable to exploits and compromises. Let's have a look at a few examples:
A vulnerability was discovered recently in a package called eslint-scope which is a dependency of several popular JavaScript packages such as babel-eslint and webpack. The account of the package maintainer was compromised, and the hackers added some malicious code into it. Fortunately, someone found out the exploit soon enough that the damage was reportedly limited to a few users.
Moment.js, which is one of the most-used libraries for parsing and displaying dates in JavaScript, was recently found to have a vulnerability with a severity score of 7.5. The exploit made it vulnerable to ReDoS attacks. Patches were quickly released, and they were able to fix the issue rather quickly.
But that's not all. A lot of new exploits get unearthed every week. Some of them get disclosed to the public, but others make headlines only after a serious breach.
So how do we mitigate these risks? In this article, I'll explain some of the industry-standard best practices that you can use to secure your open-source dependencies.
1. Keep Track of Your Application’s Dependencies
Logically speaking, as the number of dependencies increase, the risk of ending up with a vulnerable package can also increase. This holds true equally for direct and indirect dependencies. Although there’s no reason that you should stop using open-source packages, it’s always a good idea to keep track of them.
These dependencies are easily discoverable and can be as simple as running npm ls in the root directory of your application. You can use the --prod argument, which displays all production dependencies, and the --long argument for a summary of each package description.
Furthermore, you can use a service to automate the dependency management process that offers real-time monitoring and automatic update testing for your dependencies. Some of the familiar tools include GreenKeeper, Libraries.io, etc. These tools collate a list of the dependencies that you are currently using and track relevant information regarding them.
2. Get Rid of Packages That You Do Not Need
With the passage of time and changes in your code, it is likely that you'll stop using some packages altogether and instead add in new ones. However, developers tend not to remove old packages as they go along.
Over time, your project might accumulate a lot of unused dependencies. Although this is not a direct security risk, these dependencies almost certainly add to your project’s attack surface and lead to unnecessary clutter in the code. An attacker may be able to find a loophole by loading an old but installed package that has a higher vulnerability quotient, thereby increasing the potential damage it can cause.
How do you check for such unused dependencies? You can do this with the help of the depcheck tool. Depcheck scans your entire code for require and import statements. It then correlates these with either installed packages or those mentioned in your package.json and provides you with a report. The command can also be modified using different command flags, thereby making it simpler to automate the checking of unused dependencies.
Install depcheck with:
npm install -g depcheck
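Besides the command line, depcheck can also be invoked programmatically from Node. The following is a rough sketch based on depcheck's documented callback API at the time of writing; treat the exact option names as assumptions and check the project's README:

// check-deps.js - a sketch of depcheck's programmatic usage
const depcheck = require('depcheck');

const options = {
  ignoreDirs: ['dist', 'build'], // directories not to scan
  ignoreMatches: ['eslint*'],    // package name patterns to skip
};

depcheck('/path/to/my/project', options, (unused) => {
  console.log(unused.dependencies);    // unused production dependencies
  console.log(unused.devDependencies); // unused dev dependencies
  console.log(unused.missing);         // used but not listed in package.json
});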
3. Find and Fix Crucial Security Vulnerabilities
Almost all of the points discussed above are primarily concerned with the potential problems that you might encounter. But what about the dependencies that you’re using right now?
Based on a recent study, almost 15% of current packages include a known vulnerability, either in the components or dependencies. However, the good news is that there are many tools that you can use to analyze your code and find open-source security risks within your project.
The most convenient tool is npm's npm audit command. Audit is a script that was released with version 6 of npm. Node Security Platform initially developed npm audit, and npm later acquired it. If you're curious to know what npm audit is all about, here's a quote from the official blog:
A security audit is an assessment of package dependencies for security vulnerabilities. Security audits help you protect your package's users by enabling you to find and fix known vulnerabilities in dependencies. The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities.
The report generated usually comprises the following details: the affected package name, vulnerability severity and description, path, and other information, and, if available, commands to apply patches to resolve vulnerabilities. You can even get the audit report in JSON by running npm audit --json.
Apart from that, npm also offers assistance on how to act based on the report. You can use npm audit fix to fix issues that have already been found. These fixes are commonly accomplished using guided upgrades or via open-source patches.
Feel free to refer to npm's documentation for more information.
4. Replace Expired Libraries With In-House Alternatives
The concept of open-source security is heavily reliant on the number of eyes that are watching over that particular library. Packages that are actively used are more closely watched. Therefore, there is a higher chance that the developer might have addressed all the known security issues in that particular package.
Let’s take an example. On GitHub, there are many JSON web token implementations that you can use with your Node.js library. However, the ones that are not in active development could have critical vulnerabilities. One such vulnerability, which was reported by Auth0, lets anyone create their own "signed" tokens with whatever payload they want.
If a reasonably popular or well-used package had this flaw, the odds of a developer finding and patching the fault would be higher. But what about an inactive/abandoned project? We’ll talk about that in the next point.
5. Always Choose a Library That’s in Active Development
Perhaps the quickest and most efficient way to determine the activity of a specific package is to check its download rate on npm. You can find this in the Stats section of npm’s package page. It is also possible to extract these figures automatically using the npm stats API or by browsing historic stats on npm-stat.com. For packages with GitHub repositories, you should check out the commit history, the issue tracker, and any relevant pull requests for the library.
6. Update the Dependencies Frequently
There are many bugs, including a large number of security bugs that are continually unearthed and, in most cases, immediately patched. It is not uncommon to see recently reported vulnerabilities being fixed solely on the most recent branch/version of a given project.
For example, let's take the Regular Expression Denial of Service (ReDoS) vulnerability reported on the HMAC package ‘hawk’ in early 2016. This bug in hawk was quickly resolved, but only in the latest major version, 4.x. Older versions like 3.x were patched a lot later even though they were equally at risk.
Therefore, as a general rule, your dependencies are less likely to have any security bugs if they use the latest available version.
The easiest way to confirm that you're using the latest version is by using the npm outdated command. This command supports the --prod flag to ignore any dev dependencies and --json to make automation simpler.
Regularly inspect the packages you use to verify their modification date. You can do this in two ways: via the npm UI, or by running npm view <package> time.modified.
Conclusion
The key to securing your application is to have a security-first culture from the start. In this post, we’ve covered some of the standard practices for improving the security of your JavaScript components.
- Use open-source dependencies that are in active development.
- Update and monitor your components.
- Review your code and write tests.
- Remove unwanted dependencies or use alternatives.
- Use security tools like npm audit to analyze your dependencies.
If you have any thoughts about JavaScript security, feel free to share them in the comments.
Tuesday, August 21, 2018
Understanding REST and RESTful APIs
If you've spent any amount of time with modern web development, you will have come across terms like REST and API. If you've heard of these terms or work with APIs but don't have a complete understanding of how they work or how to build your own API, this series is for you.
In this tutorial series, we will start with an overview of REST principles and concepts. Then we will go on to create our own full-fledged API that runs on a Node.js Express server and connects to a MySQL database. After finishing this series, you should feel confident building your own API or delving into the documentation of an existing API.
Prerequisites
In order to get the most out of this tutorial, you should already have some basic command line knowledge, know the fundamentals of JavaScript, and have Node.js installed globally.
What Are REST and RESTful APIs?
Representational State Transfer, or REST, describes an architectural style for web services. REST consists of a set of standards or constraints for sharing data between different systems, and systems that implement REST are known as RESTful. REST is an abstract concept, not a language, framework, or type of software.
A loose analogy for REST would be keeping a collection of vinyl vs. using a streaming music service. With the physical vinyl collection, each record must be duplicated in its entirety to share and distribute copies. With a streaming service, however, the same music can be shared in perpetuity via a reference to some data such as a song title. In this case, the streaming music is a RESTful service, and the vinyl collection is a non-RESTful service.
An API is an Application Programming Interface, which is an interface that allows software programs to communicate with each other. A RESTful API is simply an API that adheres to the principles and constraints of REST. In a Web API, a server receives a request through a URL endpoint and sends a response in return, which is often data in a format such as JSON.
REST Principles
Six guiding constraints define the REST architecture, outlined below.
- Uniform Interface: The interface of components must be the same. This means using the URI standard to identify resources—in other words, paths that could be entered into the browser's location bar.
- Client-Server: There is a separation of concerns between the server, which stores and manipulates data, and the client, which requests and displays the response.
- Stateless Interactions: All information about each request is contained in each individual request and does not depend on session state.
- Cacheable: The client and server can cache resources.
- Layered System: The client can be connected to the end server, or an intermediate layer such as a load-balancer.
- Code on Demand (Optional): A client can download code, which reduces visibility from the outside.
Request and Response
You will already be familiar with the fact that all websites have URLs that begin with http (or https for the secure version). HyperText Transfer Protocol, or HTTP, is the method of communication between clients and servers on the internet.
We see it most obviously in the URL bar of our browsers, but HTTP can be used for more than just requesting websites from servers. When you go to a URL on the web, you are actually doing a GET request on that specific resource, and the website you see is the body of the response. We will go over GET and other types of requests shortly.
HTTP works by opening a TCP (Transmission Control Protocol) connection to a server port (`80` for `http`, `443` for `https`) to make a request, and the listening server responds with a status and a body.
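To make that concrete, here is a bare-bones PHP sketch (purely illustrative, not part of the Express project built later in this series) that opens a TCP connection to port 80 and speaks HTTP by hand.
<?php

// Open a raw TCP socket to the server's HTTP port with a 5-second timeout.
$socket = fsockopen('www.google.com', 80, $errno, $errstr, 5);

if ($socket === false) {
    die("Connection failed: $errstr\n");
}

// An HTTP/1.1 request is just plain text terminated by a blank line.
fwrite($socket, "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n");

// The first line of the response carries the status, e.g. "HTTP/1.1 200 OK".
echo fgets($socket);

fclose($socket);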
A request consists of a URL, a method, header information, and optionally a body.
Request Methods
There are four major HTTP methods, also referred to as HTTP verbs, that are commonly used to interact with web APIs. These methods define the action that will be performed with any given resource.
HTTP request methods loosely correspond to the paradigm of CRUD, which stands for Create, Update, Read, Delete. Although CRUD refers to functions used in database operations, we can apply those design principles to HTTP verbs in a RESTful API.
- Read (`GET`): Retrieves a resource
- Create (`POST`): Creates a new resource
- Update (`PUT`): Updates an existing resource
- Delete (`DELETE`): Deletes a resource
`GET` is a safe, read-only operation that will not alter the state of a server. Every time you hit a URL in your browser, such as `https://www.google.com`, you are sending a `GET` request to Google's servers.
`POST` is used to create a new resource. A familiar example of `POST` is signing up as a user on a website or app. After submitting the form, a `POST` request with the user data might be sent to the server, which will then write that information into a database.
`PUT` updates an existing resource, which might be used to edit the settings of an existing user. Unlike `POST`, `PUT` is idempotent, meaning the same call can be made multiple times without producing a different result. For example, if you sent the same `POST` request to create a new user in a database multiple times, it would create a new user with the same data for each request you made. However, sending the same `PUT` request to the same user would continuously produce the same result.
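Here is a small PHP sketch of that difference, using the cURL extension. The `https://api.example.com` endpoint and the `sendJson()` helper are hypothetical, used only to illustrate the idea.
<?php

// Send a JSON request with the given method and return the response status code.
function sendJson(string $method, string $url, array $data): int
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $method);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
    curl_close($ch);

    return $status;
}

// Sending the same PUT twice leaves user 5 in exactly the same state...
sendJson('PUT', 'https://api.example.com/users/5', ['name' => 'Jane']);
sendJson('PUT', 'https://api.example.com/users/5', ['name' => 'Jane']);

// ...whereas repeating this POST would create a duplicate user each time.
sendJson('POST', 'https://api.example.com/users', ['name' => 'Jane']);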
`DELETE`, as the name suggests, will simply delete an existing resource.
Response Codes
Once a request goes through from the client to the server, the server will send back an HTTP response, which will include metadata about the response known as headers, as well as the body. The first and most important part of the response is the status code, which indicates if a request was successful, if there was an error, or if another action must be taken.
The most well-known response code you will be familiar with is `404`, which means Not Found. `404` is part of the `4xx` class of status codes, which indicate client errors. There are five classes of status codes that each contain a range of responses.
- `1xx`: Information
- `2xx`: Success
- `3xx`: Redirection
- `4xx`: Client Error
- `5xx`: Server Error
Other common responses you may be familiar with are `301 Moved Permanently`, which is used to redirect websites to new URLs, and `500 Internal Server Error`, which comes up when something unexpected has happened on a server, making it impossible to fulfil the intended request.
With regard to RESTful APIs and their corresponding HTTP verbs, all successful responses should be in the `2xx` range.
- `GET`: `200` (OK)
- `POST`: `201` (Created)
- `PUT`: `200` (OK)
- `DELETE`: `200` (OK), `202` (Accepted), or `204` (No Content)
`200 OK` is the response that indicates that a request is successful. It is used as a response when sending a `GET` or `PUT` request. `POST` will return a `201 Created` to indicate that a new resource has been created, and `DELETE` has a few acceptable responses, which convey that either the request has been accepted (`202`) or there is no content to return because the resource no longer exists (`204`).
We can test the status code of a resource request using cURL, which is a command-line tool used for transferring data via URLs. Using `curl`, followed by the `-i` or `--include` flag, will send a `GET` request to a URL and display the headers and body. We can test this by opening the command-line program and testing cURL with Google.
curl -i https://www.google.com
Google's server will respond with the following.
HTTP/2 200
date: Tue, 14 Aug 2018 05:15:40 GMT
expires: -1
cache-control: private, max-age=0
content-type: text/html; charset=ISO-8859-1
...
As we can see, the `curl` request returns multiple headers and the entire HTML body of the response. Since the request went through successfully, the first part of the response is the `200` status code, along with the version of HTTP (this will either be HTTP/1.1 or HTTP/2).
Since this particular request is returning a website, the `content-type` (MIME type) being returned is `text/html`. In a RESTful API, you will likely see `application/json` to indicate the response is JSON.
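For instance, here is a minimal sketch (not taken from any particular API, just an illustration with a made-up payload) of how a PHP endpoint might label its response as JSON.
<?php

// Tell the client that the body is JSON rather than HTML.
header('Content-Type: application/json');

// Encode and emit the (hypothetical) resource.
echo json_encode(['id' => 5, 'name' => 'Jane']);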
Interestingly, we can see another type of response by inputting a slightly different URL. Do a `curl` on Google without the `www`.
curl -i https://google.com
HTTP/2 301
location: https://www.google.com/
content-type: text/html; charset=UTF-8
Google redirects `google.com` to `www.google.com`, and uses a `301` response to indicate that the resource should be redirected.
REST API Endpoints
When an API is created on a server, the data it contains is accessible via endpoints. An endpoint is the URL of the request that can accept and process the `GET`, `POST`, `PUT`, or `DELETE` request.
An API URL will consist of the root, path, and optional query string.
- Root, e.g. `https://api.example.com` or `https://api.example.com/v2`: The root of the API, which may consist of the protocol, domain, and version.
- Path, e.g. `/users/` or `/users/5`: The unique location of the specific resource.
- Query parameters (optional), e.g. `?location=chicago&age=29`: Optional key-value pairs used for sorting, filtering, and pagination.
We can put them all together to implement something such as the example below, which would return a list of all users and use a query parameter of `limit` to filter the responses to only include ten results.
https://api.example.com/users?limit=10
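As a small sketch, PHP's built-in `http_build_query()` can assemble the query string from the URL's parts. The root and parameter values below are hypothetical.
<?php

$root  = 'https://api.example.com';
$path  = '/users';
$query = http_build_query(['location' => 'chicago', 'age' => 29, 'limit' => 10]);

// Compose root + path + query string into a full endpoint URL.
echo $root . $path . '?' . $query;
// https://api.example.com/users?location=chicago&age=29&limit=10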
Generally, when people refer to an API as a RESTful API, they are referring to the naming conventions that go into building API URL endpoints. A few important conventions for a standard RESTful API are as follows:
- Paths should be plural: For example, to get the user with an id of `5`, we would use `/users/5`, not `/user/5`.
- Endpoints should not display the file extension: Although an API will most likely be returning JSON, the URL should not end in `.json`.
- Endpoints should use nouns, not verbs: Words like `add` and `delete` should not appear in a REST URL. In order to add a new user, you would simply send a `POST` request to `/users`, not something like `/users/add`. The API should be developed to handle multiple types of requests to the same URL.
- Paths are case sensitive, and should be written in lowercase with hyphens as opposed to underscores.
All of these conventions are guidelines, as there are no strict REST standards to follow. However, using these guidelines will make your API consistent, familiar, and easy to read and understand.
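To see those guidelines side by side, here is a purely illustrative sketch of the route table such a users API might expose; the endpoints and the `{id}` placeholder are hypothetical.
<?php

// Conventional RESTful endpoints for a hypothetical users resource.
$routes = [
    'GET    /users'      => 'list all users',
    'GET    /users/{id}' => 'fetch one user',
    'POST   /users'      => 'create a user',
    'PUT    /users/{id}' => 'update a user',
    'DELETE /users/{id}' => 'delete a user',
];

foreach ($routes as $endpoint => $action) {
    echo $endpoint . ' => ' . $action . "\n";
}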
Conclusion
In this article, we learned what REST and RESTful APIs are, how HTTP request methods and response codes work, the structure of an API URL, and common RESTful API conventions. In the next tutorial, we will learn how to put all this theory to use by setting up an Express server with Node.js and building our own API.
Monday, August 20, 2018
Sunday, August 19, 2018
Saturday, August 18, 2018
Friday, August 17, 2018
Thursday, August 16, 2018
How to Do User Authentication With the Symfony Security Component
In this article, you'll learn how to set up user authentication in PHP using the Symfony Security component. As well as authentication, I'll show you how to use its role-based authorization, which you can extend according to your needs.
The Symfony Security Component
The Symfony Security component makes it easy to set up security features like authentication, role-based authorization, and CSRF protection. In fact, it's further divided into four sub-components which you can choose from according to your needs.
The Security component has the following sub-components:
- symfony/security-core
- symfony/security-http
- symfony/security-csrf
- symfony/security-acl
In this article, we are going to explore the authentication feature provided by the symfony/security-core component.
As usual, we'll start with the installation and configuration instructions, and then we'll explore a few real-world examples to demonstrate the key concepts.
Installation and Configuration
In this section, we are going to install the Symfony Security component. I assume that you have already installed Composer on your system—we'll need it to install the Security component available at Packagist.
So go ahead and install the Security component using the following command.
$composer require symfony/security
We are going to load users from the MySQL database in our example, so we'll also need a database abstraction layer. Let's install one of the most popular database abstraction layers: Doctrine DBAL.
$composer require doctrine/dbal
That should have created the composer.json file, which should look like this:
{ "require": { "symfony/security": "^4.1", "doctrine/dbal": "^2.7" } }
Let's modify the composer.json file to look like the following one.
{ "require": { "symfony/security": "^4.1", "doctrine/dbal": "^2.7" }, "autoload": { "psr-4": { "Sfauth\\": "src" }, "classmap": ["src"] } }
As we have added a new `classmap` entry, let's go ahead and update the Composer autoloader by running the following command.
$composer dump-autoload -o
Now, you can use the `Sfauth` namespace to autoload classes under the src directory.
So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
<?php
require_once './vendor/autoload.php';

// application code
?>
A Real-World Example
Firstly, let's go through the usual authentication flow provided by the Symfony Security component.
- The first thing is to retrieve the user credentials and create an unauthenticated token.
- Next, we'll pass an unauthenticated token to the authentication manager for validation.
- The authentication manager may contain different authentication providers, and one of them will be used to authenticate the current user request. The logic of how the user is authenticated is defined in the authentication provider.
- The authentication provider contacts the user provider to retrieve the user. It's the responsibility of the user provider to load users from the respective back-end.
- The user provider tries to load the user using the credentials provided by the authentication provider. In most cases, the user provider returns a user object that implements the `UserInterface` interface.
- If the user is found, the authentication provider returns an authenticated token, and you can store this token for the subsequent requests.
In our example, we are going to match the user credentials against the MySQL database, thus we'll need to create the database user provider. We'll also create the database authentication provider that handles the authentication logic. And finally, we'll create the User class, which implements the `UserInterface` interface.
The User Class
In this section, we'll create the User class which represents the user entity in the authentication process.
Go ahead and create the src/User/User.php file with the following contents.
<?php
namespace Sfauth\User;

use Symfony\Component\Security\Core\User\UserInterface;

class User implements UserInterface
{
    private $username;
    private $password;
    private $roles;

    public function __construct(string $username, string $password, string $roles)
    {
        if (empty($username)) {
            throw new \InvalidArgumentException('No username provided.');
        }

        $this->username = $username;
        $this->password = $password;
        $this->roles = $roles;
    }

    public function getUsername()
    {
        return $this->username;
    }

    public function getPassword()
    {
        return $this->password;
    }

    public function getRoles()
    {
        return explode(",", $this->roles);
    }

    public function getSalt()
    {
        return '';
    }

    public function eraseCredentials() {}
}
The important thing is that the User class must implement the Symfony Security `UserInterface` interface. Apart from that, there's nothing out of the ordinary here.
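Before wiring it into the component, you could instantiate the class directly to see what it returns. A quick sanity-check sketch with made-up values (the roles string simply demonstrates the comma-separated format):
<?php
require_once './vendor/autoload.php';

use Sfauth\User\User;

// Hypothetical values; the roles string is split on commas by getRoles().
$user = new User('admin', md5('admin'), 'registered,admin');

echo $user->getUsername() . "\n"; // admin
print_r($user->getRoles());       // Array ( [0] => registered [1] => admin )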
The Database Provider Class
It's the responsibility of the user provider to load users from the back-end. In this section, we'll create the database user provider, which loads the user from the MySQL database.
Let's create the src/User/DatabaseUserProvider.php file with the following contents.
<?php
namespace Sfauth\User;

use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\Exception\UsernameNotFoundException;
use Symfony\Component\Security\Core\Exception\UnsupportedUserException;
use Doctrine\DBAL\Connection;
use Sfauth\User\User;

class DatabaseUserProvider implements UserProviderInterface
{
    private $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function loadUserByUsername($username)
    {
        return $this->getUser($username);
    }

    private function getUser($username)
    {
        $sql = "SELECT * FROM sf_users WHERE username = :name";
        $stmt = $this->connection->prepare($sql);
        $stmt->bindValue("name", $username);
        $stmt->execute();
        $row = $stmt->fetch();

        // fetch() returns false when no matching row exists
        if (!$row) {
            $exception = new UsernameNotFoundException(sprintf('Username "%s" not found in the database.', $username));
            $exception->setUsername($username);
            throw $exception;
        }

        return new User($row['username'], $row['password'], $row['roles']);
    }

    public function refreshUser(UserInterface $user)
    {
        if (!$user instanceof User) {
            throw new UnsupportedUserException(sprintf('Instances of "%s" are not supported.', get_class($user)));
        }

        return $this->getUser($user->getUsername());
    }

    public function supportsClass($class)
    {
        return 'Sfauth\User\User' === $class;
    }
}
The user provider must implement the `UserProviderInterface` interface. We are using Doctrine DBAL to perform the database-related operations. As we have implemented the `UserProviderInterface` interface, we must implement the `loadUserByUsername`, `refreshUser`, and `supportsClass` methods.
The `loadUserByUsername` method should load the user by the username, and that's done in the `getUser` method. If the user is found, we return the corresponding `Sfauth\User\User` object, which implements the `UserInterface` interface.
On the other hand, the `refreshUser` method refreshes the supplied `User` object by fetching the latest information from the database.
And finally, the `supportsClass` method checks if the `DatabaseUserProvider` provider supports the supplied user class.
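If you'd like to exercise the provider on its own before plugging it into the authentication manager, a minimal sketch looks like this; replace the `{PLACEHOLDER}` values with your actual database credentials.
<?php
require_once './vendor/autoload.php';

use Sfauth\User\DatabaseUserProvider;

// Build a DBAL connection from a URL; the placeholders must be filled in.
$connection = \Doctrine\DBAL\DriverManager::getConnection(
    array('url' => 'mysql://{USERNAME}:{PASSWORD}@{HOSTNAME}/{DATABASE_NAME}')
);

$provider = new DatabaseUserProvider($connection);

// Throws UsernameNotFoundException if there's no matching row.
$user = $provider->loadUserByUsername('admin');

echo implode(',', $user->getRoles()) . "\n";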
The Database Authentication Provider Class
Finally, we need to implement the user authentication provider, which defines the authentication logic—how a user is authenticated. In our case, we need to match the user credentials against the MySQL database, and thus we need to define the authentication logic accordingly.
Go ahead and create the src/User/DatabaseAuthenticationProvider.php file with the following contents.
<?php
namespace Sfauth\User;

use Symfony\Component\Security\Core\Authentication\Provider\UserAuthenticationProvider;
use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Core\User\UserCheckerInterface;
use Symfony\Component\Security\Core\Exception\UsernameNotFoundException;
use Symfony\Component\Security\Core\Exception\AuthenticationServiceException;
use Symfony\Component\Security\Core\Authentication\Token\UsernamePasswordToken;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;

class DatabaseAuthenticationProvider extends UserAuthenticationProvider
{
    private $userProvider;

    public function __construct(UserProviderInterface $userProvider, UserCheckerInterface $userChecker, string $providerKey, bool $hideUserNotFoundExceptions = true)
    {
        parent::__construct($userChecker, $providerKey, $hideUserNotFoundExceptions);

        $this->userProvider = $userProvider;
    }

    protected function retrieveUser($username, UsernamePasswordToken $token)
    {
        $user = $token->getUser();

        if ($user instanceof UserInterface) {
            return $user;
        }

        try {
            $user = $this->userProvider->loadUserByUsername($username);

            if (!$user instanceof UserInterface) {
                throw new AuthenticationServiceException('The user provider must return a UserInterface object.');
            }

            return $user;
        } catch (UsernameNotFoundException $e) {
            $e->setUsername($username);
            throw $e;
        } catch (\Exception $e) {
            $e = new AuthenticationServiceException($e->getMessage(), 0, $e);
            $e->setToken($token);
            throw $e;
        }
    }

    protected function checkAuthentication(UserInterface $user, UsernamePasswordToken $token)
    {
        $currentUser = $token->getUser();

        if ($currentUser instanceof UserInterface) {
            if ($currentUser->getPassword() !== $user->getPassword()) {
                throw new AuthenticationException('Credentials were changed from another session.');
            }
        } else {
            $password = $token->getCredentials();

            if (empty($password)) {
                throw new AuthenticationException('Password can not be empty.');
            }

            if ($user->getPassword() != md5($password)) {
                throw new AuthenticationException('Password is invalid.');
            }
        }
    }
}
The `DatabaseAuthenticationProvider` authentication provider extends the `UserAuthenticationProvider` abstract class. Hence, we need to implement the `retrieveUser` and `checkAuthentication` abstract methods.
The job of the `retrieveUser` method is to load the user from the corresponding user provider. In our case, it will use the `DatabaseUserProvider` user provider to load the user from the MySQL database.
On the other hand, the `checkAuthentication` method performs the necessary checks in order to authenticate the current user. Please note that I've used MD5 hashing for passwords purely to keep the example short. Of course, you should use a stronger hashing algorithm to store user passwords, as shown in the sketch below.
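For instance, PHP's native password hashing API is a much safer choice. If the sf_users table stored hashes created like this, `checkAuthentication` could call `password_verify()` instead of comparing `md5()` digests.
<?php

// Create a salted bcrypt (or stronger) hash; store this in the password column.
$hash = password_hash('admin', PASSWORD_DEFAULT);

// Verify a submitted password against the stored hash.
if (password_verify('admin', $hash)) {
    echo "Password is valid.\n";
}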
How It Works Altogether
So far, we have created all the necessary elements for authentication. In this section, we'll see how to put it all together to set up the authentication functionality.
Go ahead and create the db_auth.php file and populate it with the following contents.
<?php
require_once './vendor/autoload.php';

use Sfauth\User\DatabaseUserProvider;
use Symfony\Component\Security\Core\User\UserChecker;
use Sfauth\User\DatabaseAuthenticationProvider;
use Symfony\Component\Security\Core\Authentication\AuthenticationProviderManager;
use Symfony\Component\Security\Core\Authentication\Token\UsernamePasswordToken;
use Symfony\Component\Security\Core\Exception\AuthenticationException;

// init doctrine db connection
$doctrineConnection = \Doctrine\DBAL\DriverManager::getConnection(
    array('url' => 'mysql://{USERNAME}:{PASSWORD}@{HOSTNAME}/{DATABASE_NAME}'),
    new \Doctrine\DBAL\Configuration()
);

// init our custom db user provider
$userProvider = new DatabaseUserProvider($doctrineConnection);

// we'll use the default UserChecker; it runs additional checks like account locked/expired etc.
// you can implement your own by implementing the UserCheckerInterface interface
$userChecker = new UserChecker();

// init our custom db authentication provider
$dbProvider = new DatabaseAuthenticationProvider(
    $userProvider,
    $userChecker,
    'frontend'
);

// init authentication provider manager
$authenticationManager = new AuthenticationProviderManager(array($dbProvider));

try {
    // init un/pw; usually you'll get these from the $_POST variable, submitted by the end user
    $username = 'admin';
    $password = 'admin';

    // get unauthenticated token
    $unauthenticatedToken = new UsernamePasswordToken(
        $username,
        $password,
        'frontend'
    );

    // authenticate user & get authenticated token
    $authenticatedToken = $authenticationManager->authenticate($unauthenticatedToken);

    // we have got the authenticated token (user is logged in now); it can be stored in a session for later use
    echo $authenticatedToken;
    echo "\n";
} catch (AuthenticationException $e) {
    echo $e->getMessage();
    echo "\n";
}
Recall the authentication flow which was discussed in the beginning of this article—the above code reflects that sequence.
The first thing was to retrieve the user credentials and create an unauthenticated token.
$unauthenticatedToken = new UsernamePasswordToken(
    $username,
    $password,
    'frontend'
);
Next, we have passed that token to the authentication manager for validation.
// authenticate user & get authenticated token
$authenticatedToken = $authenticationManager->authenticate($unauthenticatedToken);
When the `authenticate` method is called, a lot happens behind the scenes.
Firstly, the authentication manager selects an appropriate authentication provider. In our case, it's the `DatabaseAuthenticationProvider` authentication provider that will be selected for authentication.
Next, it retrieves the user by the username from the `DatabaseUserProvider` user provider. Finally, the `checkAuthentication` method performs the necessary checks to authenticate the current user request.
Should you wish to test the db_auth.php script, you'll need to create the `sf_users` table in your MySQL database.
CREATE TABLE `sf_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(255) NOT NULL,
  `password` varchar(255) NOT NULL,
  `roles` enum('registered','moderator','admin') DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

-- the stored password is the MD5 hash of 'admin'
INSERT INTO `sf_users` VALUES (1,'admin','21232f297a57a5a743894a0e4a801fc3','admin');
Go ahead and run the db_auth.php script to see how it goes. Upon successful completion, you should receive an authenticated token, as shown in the following snippet.
$php db_auth.php
UsernamePasswordToken(user="admin", authenticated=true, roles="admin")
Once the user is authenticated, you can store the authenticated token in the session for the subsequent requests.
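One possible approach (a sketch, not the component's only mechanism) is to serialize the token into the PHP session and restore it on a later request.
<?php

session_start();

// after a successful call to $authenticationManager->authenticate()
$_SESSION['token'] = serialize($authenticatedToken);

// ...and on a subsequent request, restore the token from the session
$token = unserialize($_SESSION['token']);

echo $token->getUsername();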
And with that, we've completed our simple authentication demo!
Conclusion
Today, we looked at the Symfony Security component, which allows you to integrate security features in your PHP applications. Specifically, we discussed the authentication feature provided by the symfony/security-core sub-component, and I showed you an example of how this functionality can be implemented in your own app.
Feel free to post your thoughts using the feed below!