Wednesday, August 29, 2018

International Artist Feature: Israel

How to Make an Attractive PDF Photography Portfolio

10 Best Photoshop Effects to Add Beautiful Bokeh to Photos

How to Use the Symfony Filesystem Component

In this article, we're going to explore the Symfony Filesystem component, which provides useful methods to interact with a file system. After installation and configuration, we'll create a few real-world examples of how to use it.

The Symfony Filesystem Component

More often than not, you'll need to interact with a file system if you're dealing with PHP applications. In most cases, you either end up using the core PHP functions or create your own custom wrapper class to achieve the desired functionality. Either way, it's difficult to maintain over a longer period of time. So what you need is a library which is well maintained and easy to use. That's where the Symfony Filesystem component comes in.

The Symfony Filesystem component provides useful wrapper methods that make the file system interaction a breeze and a fun experience. Let's quickly look at what it's capable of:

  • creating a directory
  • creating a file
  • editing file contents
  • changing the owner and group of a file or directory
  • creating a symlink
  • copying a file or directory
  • removing a file or directory
  • and more

In this article, I'll show you how to unleash the power of the Symfony Filesystem component. As usual, we'll start with installation and configuration instructions, and then we'll implement a few real-world examples to demonstrate the key concepts.

Installation and Configuration

In this section, we're going to install the Symfony Filesystem component. I assume that you've already installed Composer on your system, as we'll need it to install the Filesystem component, which is available on Packagist.

So go ahead and install the Filesystem component using the following command.
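With Composer available, a single command pulls the component into your project:

```shell
composer require symfony/filesystem
```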

That should have created a composer.json file, which should look like this:
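The generated file will look roughly like the following; the exact version constraint depends on when you run the command:

```json
{
    "require": {
        "symfony/filesystem": "^4.1"
    }
}
```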

So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
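The snippet is just a require statement plus a use statement for the class we'll need (the path assumes your script lives in the project root):

```php
<?php
require_once './vendor/autoload.php';

use Symfony\Component\Filesystem\Filesystem;
```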

A Real-World Example

In this section, we'll create an example which demonstrates how you could use the Filesystem component in your applications to perform various filesystem operations.

To start with, let's go ahead and create the index.php file with the following contents.
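A sketch of that bootstrap code, using the variable names referenced in the rest of this article:

```php
<?php
require_once './vendor/autoload.php';

use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\Filesystem\Exception\IOExceptionInterface;

// The Filesystem object we'll use for all operations below.
$fsObject = new Filesystem();

// The directory the script is running from.
$current_dir_path = getcwd();
```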

Here, we've initialized the Filesystem object to $fsObject and saved the current directory to $current_dir_path. In the upcoming sections, we'll use $fsObject to perform different operations.

Make a New Directory

First, we'll create a new directory.
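A sketch of that step (the owner and group values passed to chown and chgrp are illustrative placeholders):

```php
$new_dir_path = $current_dir_path . "/foo";

if (!$fsObject->exists($new_dir_path)) {
    try {
        // Create the directory with 0775 permissions.
        $fsObject->mkdir($new_dir_path, 0775);

        // Change the owner and group of the new directory.
        $fsObject->chown($new_dir_path, get_current_user());
        $fsObject->chgrp($new_dir_path, "www");
    } catch (IOExceptionInterface $exception) {
        echo "Error creating directory at " . $exception->getPath();
    }
}
```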

Here, we've used the exists method to check if the foo directory already exists before creating it.

Next, we used the mkdir method to create the foo directory with 0775 permissions, which means readable and executable by all, but only writable by the file owner and their group. (This is the octal notation for filesystem permissions—to learn more, check out a breakdown of octal notation.) Further, we've used the chown and chgrp methods to change the owner and group of the foo directory.

Create a New File and Add Contents

In this section, we'll create a new file and add contents to that file.
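Here's a sketch of that code; the file contents are placeholders:

```php
$new_file_path = $current_dir_path . "/foo/bar.txt";

if (!$fsObject->exists($new_file_path)) {
    try {
        // Create an empty file and make it globally readable,
        // writable, and executable.
        $fsObject->touch($new_file_path);
        $fsObject->chmod($new_file_path, 0777);

        // Write initial contents, then append more.
        $fsObject->dumpFile($new_file_path, "Adding dummy contents to bar.txt.\n");
        $fsObject->appendToFile($new_file_path, "This should be appended to the end of the file.\n");
    } catch (IOExceptionInterface $exception) {
        echo "Error creating file at " . $exception->getPath();
    }
}
```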

Here, we've used the touch method to create a new file and then used chmod to set its permissions to 0777—globally readable, writable, and executable.

Once the file is created, you can use the dumpFile method to add contents to that file. On the other hand, if you want to append contents to an already existing file, you can use the appendToFile method, as shown in the above example.

Copy a Directory

So far, we've created the foo directory and the bar.txt file using the $fsObject object. In this section, we'll see how to copy a directory along with the contents.
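A sketch of the copy operation, following the naming used in this article:

```php
// Build the source and destination paths with string concatenation.
$source_dir = $current_dir_path . "/foo";
$target_dir = $current_dir_path . "/foo_copy";

if (!$fsObject->exists($target_dir)) {
    try {
        // Copy the foo directory, along with its contents, into foo_copy.
        $fsObject->mirror($source_dir, $target_dir);
    } catch (IOExceptionInterface $exception) {
        echo "Error copying directory at " . $exception->getPath();
    }
}
```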

As you can see, first we built the path names with string concatenation. Then, once we made sure the directory didn't already exist using the exists method, we used the mirror method to copy the foo directory into the foo_copy directory.

Remove a Directory

Finally, let's see how to remove a directory.
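A sketch of the cleanup code; note that remove accepts a single path string or an array of paths:

```php
$arr_dirs = [
    $current_dir_path . "/foo",
    $current_dir_path . "/foo_copy"
];

try {
    // Delete both directories created earlier.
    $fsObject->remove($arr_dirs);
} catch (IOExceptionInterface $exception) {
    echo "Error deleting directory at " . $exception->getPath();
}
```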

Again, it's pretty straightforward—to delete a directory, you just use the remove method.

You can find the complete code for index.php in our GitHub repo.

Conclusion

So that's a brief introduction to the Symfony Filesystem component. The Symfony Filesystem component provides methods that make interaction with a file system a breeze. We looked at how to install the component, and we created a handful of examples to demonstrate various aspects of the component.

I hope that you've enjoyed this article, and feel free to post your thoughts using the feed below!


How to Quickly Edit Creative Chart Template Designs in PowerPoint

How to Create a Surreal Dance Scene in the Rain in Affinity Photo

Tuesday, August 28, 2018

How to Create a Stunning Photography Portfolio for Your Tablet

How to Draw People

Ideation in Design Thinking: The Zone of Infinite Creative Possibilities

Code Your First API With Node.js and Express: Connect a Database

Build a REST API With Node.js and Express: Connecting a Database

In the first tutorial, Understanding RESTful APIs, we learned what the REST architecture is, what HTTP request methods and responses are, and how to understand a RESTful API endpoint. In the second tutorial, How to Set Up an Express API Server, we learned how to build servers with both Node's built-in http module and the Express framework, and how to route the app we created to different URL endpoints.

Currently, we're using static data to display user information in the form of a JSON feed when the API endpoint is hit with a GET request. In this tutorial, we're going to set up a MySQL database to store all the data, connect to the database from our Node.js app, and allow the API to use the GET, POST, PUT, and DELETE methods to create a complete API.

Installation

Up to this point, we have not used a database to store or manipulate any data, so we're going to set one up. This tutorial will be using MySQL, and if you already have MySQL installed on your computer, you'll be ready to go on to the next step.

If you don't have MySQL installed, you can download MAMP for macOS and Windows, which provides a free, local server environment and database. Once you have this downloaded, open the program and click Start Servers to start MySQL.

In addition to setting up MySQL itself, we'll want GUI software to view the database and tables. For Mac, download SequelPro, and for Windows download SQLyog. Once you have MySQL downloaded and running, you can use SequelPro or SQLyog to connect to localhost with the username root and password root on port 3306.

Once everything is set up here, we can move on to setting up the database for our API.

Setting Up the Database

In your database viewing software, add a new database and call it api. Make sure MySQL is running, or you won't be able to connect to localhost.

When you have the api database created, move into it and run the following query to create a new table.
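The query below is a sketch of that structure; the column sizes are illustrative:

```sql
CREATE TABLE users (
    id INT(11) NOT NULL AUTO_INCREMENT,
    name VARCHAR(30) NOT NULL,
    email VARCHAR(50) NOT NULL,
    PRIMARY KEY (id)
);
```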

This SQL query will create the structure of our users table. Each user will have an auto-incrementing id, a name, and an email address.

We can also fill the database with the same data that we're currently displaying through a static JSON array by running an INSERT query.
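Something like the following works; the first row's name and email are placeholders, while the second matches the Gilfoyle example used later in this tutorial:

```sql
INSERT INTO users (name, email) VALUES
    ('Richard Hendricks', 'richard@piedpiper.com'),
    ('Bertram Gilfoyle', 'gilfoyle@piedpiper.com');
```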

There is no need to input the id field, as it is auto-incrementing. At this point, we have the structure of our table as well as some sample data to work with.

Connecting to MySQL

Back in our app, we have to connect to MySQL from Node.js to begin working with the data. Earlier, we installed the mysql npm module, and now we're going to use it.

Create a new directory called data and make a config.js file.

We'll begin by requiring the mysql module in data/config.js.

Let's create a config object that contains the host, user, password, and database. This should refer to the api database we made and use the default localhost settings.

For efficiency, we're going to create a MySQL pool, which allows us to use multiple connections at once instead of having to manually open and close multiple connections.

Finally, we'll export the MySQL pool so the app can use it.
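Putting those steps together, data/config.js might look like this (the credentials follow the MAMP defaults of root/root mentioned earlier):

```javascript
// data/config.js
const mysql = require('mysql');

// Connection settings for the local "api" database.
const config = {
    host: 'localhost',
    user: 'root',
    password: 'root',
    database: 'api',
};

// A pool lets us reuse multiple connections instead of
// opening and closing them manually.
const pool = mysql.createPool(config);

module.exports = pool;
```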

You can see the completed database configuration file in our GitHub repo.

Now that we're connecting to MySQL and our settings are complete, we can move on to interacting with the database from the API.

Getting API Data From MySQL

Currently, our routes.js file is manually creating a JSON array of users, which looks like this.
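The static array might have looked something like this (the exact entries were not preserved, so these are illustrative; user 2 matches the Gilfoyle example used below):

```javascript
const users = [
    {
        id: 1,
        name: 'Richard Hendricks', // placeholder entry
        email: 'richard@piedpiper.com',
    },
    {
        id: 2,
        name: 'Bertram Gilfoyle', // referenced later in this tutorial
        email: 'gilfoyle@piedpiper.com',
    },
];
```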

Since we're no longer going to be using static data, we can delete that entire array and replace it with a link to our MySQL pool.
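Assuming the directory layout from earlier, that link is a single require (the relative path assumes routes.js lives in the routes directory, alongside data):

```javascript
// routes/routes.js
const pool = require('../data/config');
```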

Previously, the GET for the /users path was sending the static users data. Our updated code is going to query the database for that data instead. We're going to use a SQL query to SELECT all from the users table, which looks like this.
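In SQL, that's a one-liner:

```sql
SELECT * FROM users;
```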

Here is what our new /users get route will look like, using the pool.query() method.
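Sketched out, assuming the pool exported from data/config.js has been required as pool:

```javascript
// GET all users
app.get('/users', (request, response) => {
    pool.query('SELECT * FROM users', (error, result) => {
        if (error) throw error;

        // Send the rows back to the client as JSON.
        response.send(result);
    });
});
```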

Here, we're running the SELECT query and then sending the result as JSON to the client via the /users endpoint. If you restart the server and navigate to the /users page, you'll see the same data as before, but now it's dynamic.

Using URL Parameters

So far, our endpoints have been static paths—either the / root or /users—but what about when we want to see data only about a specific user? We'll need to use a variable endpoint.

For our users, we might want to retrieve information about each individual user based on their unique id. To do that, we would use a colon (:) to denote that it's a route parameter.

We can retrieve the parameter for this path with the request.params property. Since ours is named id, that will be how we refer to it.

Now we'll add a WHERE clause to our SELECT statement to only get results that have the specified id.

We'll use ? as a placeholder to avoid SQL injection and pass the id through as a parameter, instead of building a concatenated string, which would be less secure.

The full code for our individual user resource now looks like this:
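A sketch of that route, with the id passed as a parameter rather than concatenated into the query:

```javascript
// GET a single user by id
app.get('/users/:id', (request, response) => {
    pool.query(
        'SELECT * FROM users WHERE id = ?',
        [request.params.id], // fills in the ? placeholder safely
        (error, result) => {
            if (error) throw error;

            response.send(result);
        }
    );
});
```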

Now you can restart the server and navigate to http://localhost:3002/users/2 to see only the information for Gilfoyle. If you get an error like Cannot GET /users/2, it means you need to restart the server.

Going to this URL should return a single result.

If that's what you see, congratulations: you've successfully set up a dynamic route parameter!

Sending a POST Request

So far, everything we've been doing has used GET requests. These requests are safe, meaning they do not alter the state of the server. We've simply been viewing JSON data.

Now we're going to begin to make the API truly dynamic by using a POST request to add new data.

I mentioned earlier in the Understanding REST article that we don't use verbs like add or delete in the URL for performing actions. In order to add a new user to the database, we'll POST to the same URL we view them from, but just set up a separate route for it.

Note that we're using app.post() instead of app.get() now.

Since we're creating instead of reading, we'll use an INSERT query here, much like we did at the initialization of the database. We'll send the entire request.body through to the SQL query.

We're also going to specify the status of the response as 201, which stands for Created. In order to get the id of the last inserted item, we'll use the insertId property.

Our entire POST receive code will look like this.
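Here's a sketch of it; INSERT INTO users SET ? is the mysql module's shorthand for inserting an object's key/value pairs, and the response message is illustrative:

```javascript
// POST a new user
app.post('/users', (request, response) => {
    pool.query('INSERT INTO users SET ?', request.body, (error, result) => {
        if (error) throw error;

        // 201 Created, plus the auto-incremented id of the new row.
        response.status(201).send(`User added with ID: ${result.insertId}`);
    });
});
```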

Now we can send a POST request through. Most of the time when you send a POST request, you're doing it through a web form. We'll learn how to set that up by the end of this article, but the fastest and easiest way to send a test POST is with cURL, using the -d (--data) flag.

We'll run curl -d, followed by a query string containing all the key/value pairs and the request endpoint.
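For example (the name and email here are placeholders):

```shell
curl -d "name=Dinesh Chugtai&email=dinesh@piedpiper.com" http://localhost:3002/users
```

Note that -d sends an ordinary form-encoded body, so the server needs body-parser's urlencoded parser (or a JSON body with the appropriate Content-Type header) to read it.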

Once you send this request through, you should get a response from the server.

If you navigate to http://localhost:3002/users, you'll see the latest entry added to the list.

Sending a PUT Request

POST is useful for adding a new user, but we'll want to use PUT to modify an existing user. PUT is idempotent, meaning you can send the same request through multiple times and only one action will be performed. This is different than POST, because if we sent our new user request through more than once, it would keep creating new users.

For our API, we're going to set up PUT to be able to handle editing a single user, so we're going to use the :id route parameter this time.

Let's create an UPDATE query and make sure it only applies to the requested id with the WHERE clause. We're using two ? placeholders, and the values we pass will go in sequential order.
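A sketch of the PUT route; the success message is illustrative:

```javascript
// PUT - update a single user's email by id
app.put('/users/:id', (request, response) => {
    pool.query(
        'UPDATE users SET email = ? WHERE id = ?',
        // Values fill the ? placeholders in order.
        [request.body.email, request.params.id],
        (error, result) => {
            if (error) throw error;

            response.send('User updated successfully.');
        }
    );
});
```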

For our test, we'll edit user 2 and update the email address from gilfoyle@piedpiper.com to bertram@piedpiper.com. We can use cURL again, with the -X (--request) flag, to explicitly specify that we're sending a PUT request through.
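The command looks like this:

```shell
curl -X PUT -d "email=bertram@piedpiper.com" http://localhost:3002/users/2
```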

Make sure to restart the server before sending the request, or else you'll get the Cannot PUT /users/2 error.

You should see this:

The user data with id 2 should now be updated.

Sending a DELETE Request

Our last task to complete the CRUD functionality of the API is to make an option for deleting a user from the database. This request will use the DELETE SQL query with WHERE, and it will delete an individual user specified by a route parameter.
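A sketch of the DELETE route; the success message is illustrative:

```javascript
// DELETE a single user by id
app.delete('/users/:id', (request, response) => {
    pool.query(
        'DELETE FROM users WHERE id = ?',
        [request.params.id],
        (error, result) => {
            if (error) throw error;

            response.send('User deleted.');
        }
    );
});
```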

We can use -X again with cURL to send the delete through. Let's delete the latest user we created.
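Assuming the user we created earlier received id 3, the command would be:

```shell
curl -X DELETE http://localhost:3002/users/3
```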

You'll see the success message.

Navigate to http://localhost:3002/users, and you'll see that there are only two users now.

Congratulations! At this point, the API is complete. Visit the GitHub repo to see the complete code for routes.js.

Sending Requests Through the request Module

At the beginning of this article, we installed four dependencies, and one of them was the request module. Instead of using cURL requests, you could make a new file with all the data and send it through. I'll create a file called post.js that will create a new user via POST.
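A sketch of post.js; the user data is a placeholder:

```javascript
// post.js - send a POST request with the request module
const request = require('request');

const json = {
    name: 'Dinesh Chugtai', // placeholder data
    email: 'dinesh@piedpiper.com',
};

request.post(
    {
        url: 'http://localhost:3002/users',
        json, // send (and parse) the body as JSON
    },
    (error, response, body) => {
        // Log everything so we can debug if something goes wrong.
        console.log(error);
        console.log(response && response.statusCode);
        console.log(body);
    }
);
```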

We can call this using node post.js in a new terminal window while the server is running, and it will have the same effect as using cURL. If something is not working with cURL, the request module is useful as we can view the error, response, and body.

Sending Requests Through a Web Form

Usually, POST and other HTTP methods that alter the state of the server are sent using HTML forms. In this very simple example, we can create an index.html file anywhere, and make a field for a name and email address. The form's action will point to the resource, in this case http://localhost:3002/users, and we'll specify the method as post.

Create index.html and add the following code to it:
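Here's a minimal version of that form (labels and layout are up to you):

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Add a user</title>
</head>
<body>
    <form action="http://localhost:3002/users" method="post">
        <label for="name">Name</label>
        <input type="text" id="name" name="name">

        <label for="email">Email</label>
        <input type="email" id="email" name="email">

        <button type="submit">Add user</button>
    </form>
</body>
</html>
```

Since a plain HTML form submits form-encoded data, the server needs body-parser's urlencoded parser alongside the JSON one to read it.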

Open this static HTML file in your browser, fill it out, and send it while the server is running in the terminal. You should see the response of User added with ID: 4, and you should be able to view the new list of users.

Conclusion

In this tutorial, we learned how to hook up an Express server to a MySQL database and set up routes that correspond to the GET, POST, PUT, and DELETE methods for paths and dynamic route parameters. We also learned how to send HTTP requests to an API server using cURL, the Node.js request module, and HTML forms.

At this point, you should have a very good understanding of how RESTful APIs work, and you can now create your own full-fledged API in Node.js with Express and MySQL!


20 Free Creative Resume Templates (Word & PSD Downloads)

How to Create Transparent Water Droplets With Gradient Mesh in Adobe Illustrator

Monday, August 27, 2018

5 Amazing Assets for Amazing Autumnal Photos and Video

The 3 Best Templates for Adobe After Effects to Promote Your App

10 Fun Photo Effects and Look Presets for Photoshop

Google Flutter From Scratch: Animating Widgets

Kickstarting a Web Design Using an Image as a Base

Get Started With Node.js Express in Our New Course

How to Set Up & Run a Professional Online Webinar

20+ Best Classic Typewriter Fonts With Old (Vintage) Machine Styles

How to Create an inFamous Inspired Text Effect in Adobe Photoshop

Friday, August 24, 2018

How to Create a Punk-Rock Portrait in Procreate

A Beginner's Guide to Drawing 2D Graphics With Two.js

Two.js is an API that makes it easy to create 2D shapes with code. Follow along, and you'll learn how to create and animate shapes from JavaScript.

Two.js is renderer agnostic, so you can rely on the same API to draw with Canvas, SVG, or WebGL. The library has a lot of methods which can be used to control how different shapes appear on the screen or how they are animated.

Installation

The uncompressed version of the library has a size of around 128 KB, while the compressed version is 50 KB. If you are using the latest version, you can further reduce the size of the library using a custom build.

You can either download the minified version of the library from GitHub or you can link directly to the CDN hosted version. Once you have added the library to your webpage, you can start drawing and animating different shapes or objects.

Creating Basic Shapes

First, you need to tell Two.js about the element on which you want to draw and animate your shapes. You can pass some parameters to the Two constructor to set things up.

Set the type of renderer using the type property. You can specify a value like svg, webgl, canvas, etc. The type is set to svg by default. The width and height of the drawing space can be specified using the width and height parameters. You can also set the drawing space to the full available screen using the fullscreen parameter. When fullscreen is set to true, the values of width and height will be disregarded.

Finally, you can tell Two.js to automatically start an animation with the help of the Boolean autostart parameter.
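For instance, a sketch of a typical setup (the width, height, and target element are illustrative):

```javascript
var params = {
    type: Two.Types.svg, // or Two.Types.canvas / Two.Types.webgl
    width: 600,
    height: 400,
    autostart: true      // start the animation loop automatically
};

// Attach the drawing space to the page.
var two = new Two(params).appendTo(document.body);
```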

After passing all the desired parameters to the constructor, you can start drawing lines, rectangles, circles, and ellipses.

You can draw a line using two.makeLine(x1, y1, x2, y2). Here, (x1, y1) are the coordinates of the first end point, and (x2, y2) are the coordinates of the second end point. This function will return a Two.Line object, which can be stored in a variable for further manipulation at a later point.

In a similar manner, you can draw normal and rounded rectangles using two.makeRectangle(x, y, width, height) and two.makeRoundedRectangle(x, y, width, height, radius) respectively. Remember that x and y determine the center of the rectangle, instead of its top left coordinates like many other libraries. The width and height parameters will determine the size of the rectangle. The radius parameter is used to specify the value of the radius for the rounded corner.

You can also render circles and ellipses on a webpage using two.makeCircle(x, y, radius) and two.makeEllipse(x, y, width, height) respectively. Just like the rectangles, the x and y parameters specify the center of the circle or ellipse. Setting the width and height to the same value in the case of an ellipse will render it like a circle.
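Here's a quick sketch of all five shape methods in action (coordinates and sizes are arbitrary):

```javascript
var line = two.makeLine(50, 50, 250, 50);

// Remember: x and y specify the center, not the top-left corner.
var rect = two.makeRectangle(150, 150, 100, 60);
var roundedRect = two.makeRoundedRectangle(300, 150, 100, 60, 10);

var circle = two.makeCircle(150, 300, 40);
var ellipse = two.makeEllipse(300, 300, 60, 40);

// Render everything to the screen.
two.update();
```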

One useful method in Two.js that you will use frequently is two.makeGroup(objects). You can pass it either a list of different objects or an array of objects, paths, or groups. It returns a Two.Group object.

Manipulating Objects in a Group

After you have created a group, you can manipulate all its children at once using properties that the group makes available to you.

The stroke and fill properties can be used to set the stroke and fill color for all children in a group. They will accept all valid forms in which you can represent a color in CSS. This means that you are free to use RGB, HSL, or hex notation. You can also simply use the name of the color, like orange, red, or blue. Similarly, you can set values for all other properties like linewidth, opacity, miter, and cap. It is possible to remove the fill and stroke from all children in a group using the noFill() and noStroke() methods.

You can also apply other physical transformations like scale, rotation, and translation. These transformations will be applied on individual objects. Adding new objects to a group and removing them is easy with methods like add() and remove().
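A small sketch of group manipulation (shapes and values are illustrative):

```javascript
var circleA = two.makeCircle(100, 100, 30);
var circleB = two.makeCircle(200, 100, 30);

var group = two.makeGroup(circleA, circleB);

// Styles set on the group apply to all of its children.
group.fill = 'orange';
group.stroke = '#333';
group.linewidth = 4;

// Transformations are applied to the individual objects.
group.rotation = Math.PI / 6;

two.update();
```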

Defining Gradients and Writing Text

You can define both linear and radial gradients in Two.js. Defining a gradient does not mean that it will be rendered automatically on the screen, but it will be available for you to use when setting the fill or stroke values of various objects.

You can define a linear gradient using two.makeLinearGradient(x1, y1, x2, y2, stops). The values x1 and y1 determine the coordinates of the start of the gradient. Similarly, the values x2 and y2 determine the coordinates of the end of the gradient. The stops parameter is a list of Two.Stop instances. These define the colors of each part of the gradient and where each color transitions into the next. They can be defined using new Two.Stop(offset, color, opacity), where offset determines the point on the gradient where that particular color has to be fully rendered. The color parameter determines the color of the gradient at the particular point. You can use any valid CSS color representations as its value. Finally, the opacity parameter determines the opacity of the color. The opacity is optional, and it can have any value between 0 and 1.

You can define radial gradients in a similar manner using two.makeRadialGradient(x, y, radius, stops, fx, fy). In this case, the values x and y determine the center of the gradient. The radius parameter specifies how far the gradient should extend. You can also pass an array of stops to this method in order to set the color composition of the gradients. The parameters fx and fy are optional, and they can be used to specify the focal position for the gradient.
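A sketch of both gradient types, with the stops listed as individual arguments as in the library's examples:

```javascript
var linearGradient = two.makeLinearGradient(
    -50, 0, 50, 0,
    new Two.Stop(0, 'tomato'),
    new Two.Stop(1, 'black')
);

var radialGradient = two.makeRadialGradient(
    0, 0, 50,
    new Two.Stop(0, 'red'),
    new Two.Stop(1, 'black')
);

// Gradients are used as fill or stroke values.
var circle = two.makeCircle(150, 150, 50);
circle.fill = radialGradient;

two.update();
```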

Check out some of the types of gradient and their code in the CodePen below.

Remember that the x and y position of the gradients are with respect to the origin of the shape they are trying to fill. For instance, a radial gradient which is supposed to fill a shape from the center will always have x and y set to zero.

Two.js also allows you to write text on the drawing area and update it later according to your needs. This requires the use of the method two.makeText(message, x, y, styles). It might be evident from the name of the parameters that message is the actual text that you want to write. The parameters x and y are the coordinates of the point which will act as the center for writing the text. The styles parameter is an object which can be used to set the values of a large set of properties.

You can use styles to set the values of properties like font family, size, and alignment. You can also specify the value of properties like fill, stroke, opacity, rotation, scale, and translation.
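For example (the style values here are illustrative):

```javascript
var styles = {
    family: 'sans-serif',
    size: 24,
    alignment: 'center',
    fill: 'tomato'
};

var text = two.makeText('Hello, Two.js!', 150, 100, styles);
two.update();

// The text object can be updated later.
text.value = 'Updated text';
```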

Creating a Two.js Project

After learning about all these methods and properties, it is time to apply them to a project. In this tutorial, I will show you how we can use Two.js to render the first ten elements of the periodic table with electrons rotating around the nucleus. The nucleus will also have some slight movement to improve the visual appeal of our representation.

We begin by defining some variables and functions which will be used later.
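A sketch of that setup code (browser-only, since it reads the window size; the text styles are illustrative):

```javascript
// Center of the browser window, used to place the atom.
var centerX = window.innerWidth / 2;
var centerY = window.innerHeight / 2;

// Index 0 is an empty string so that elementNames[n] is the
// element with n protons and electrons.
var elementNames = [
    '', 'Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron',
    'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon'
];

// Styling for the element name text object.
var styles = {
    family: 'sans-serif',
    size: 30,
    alignment: 'center'
};

// Random integer between min and max, inclusive.
function intRange(min, max) {
    return min + Math.floor(Math.random() * (max - min + 1));
}
```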

The above code stores the coordinates of the center of our window in the variables centerX and centerY. These will be used later to place our atom in the center. The elementNames array contains the names of the first ten elements of the periodic table. The index of each name corresponds to the number of electrons and protons of that element, and it begins with an empty string. The styles object contains properties for styling the text object.

We have also defined a function intRange() to get a random integer value within given extremes.
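Next comes the Two instance and the gradients; a sketch might look like this:

```javascript
var two = new Two({ fullscreen: true, autostart: true }).appendTo(document.body);

// Red/black gradient for protons.
var protonColor = two.makeRadialGradient(
    0, 0, 10,
    new Two.Stop(0, 'red'),
    new Two.Stop(1, 'black')
);

// Blue/black gradient for neutrons.
var neutronColor = two.makeRadialGradient(
    0, 0, 10,
    new Two.Stop(0, 'blue'),
    new Two.Stop(1, 'black')
);
```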

This creates an instance of Two and defines two radial gradients. The red/black radial gradients will represent protons, and blue/black gradients will represent neutrons.
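Building the nucleus could then be sketched as follows, with intRange and the gradients named as in the surrounding description:

```javascript
var nucleusArray = [];

// Scatter ten particles within 20 pixels of each other,
// each with a radius of 10 pixels.
for (var i = 0; i < 10; i++) {
    nucleusArray.push(
        two.makeCircle(intRange(-10, 10), intRange(-10, 10), 10)
    );
}

// Alternate the fills so we get both protons and neutrons.
nucleusArray.forEach(function (particle, index) {
    particle.fill = index % 2 === 0 ? protonColor : neutronColor;
});
```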

We have used the intRange() function to place all these neutrons and protons within 20 pixels of each other. The makeCircle() method also sets the radius of these protons and neutrons to 10 pixels. After that, we iterate over nucleusArray and fill each circle with a different gradient alternately.

Placing neutrons and protons inside the nucleus was easy. However, properly placing the electrons at a uniform distance will require a little maths. We use the shellRadius variable to specify the distance of different electron shells from the nucleus. A whole circle covers an angle equal to 2 PI radians. We can place different electrons uniformly by distributing the 2 PI radians between them equally.

The Math.cos() and Math.sin() functions are used to separate the vertical and horizontal components of the position vector of different electrons based on their angle.
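A self-contained sketch of that placement math, with the nucleus assumed to be at the origin and the electron count and shell radius chosen for illustration:

```javascript
var n = 4;            // number of electrons in this shell
var shellRadius = 50; // distance of the shell from the nucleus
var positions = [];

for (var i = 0; i < n; i++) {
    // Give each electron an equal share of the full 2*PI radians.
    var angle = i * (2 * Math.PI / n);
    positions.push({
        x: shellRadius * Math.cos(angle), // horizontal component
        y: shellRadius * Math.sin(angle)  // vertical component
    });
}
```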

This part of the code puts electrons from different shells as well as neutrons and protons in their own separate groups. It also sets the fill colors for all electrons in a specific orbit at once.

This part of the code sets the opacity of individual electrons and protons to zero. It also tells Two.js to rotate the electrons and protons at specific speeds.

The final part of the code allows us to iterate through the elements by clicking the mouse or tapping. To load the next element, we make one more electron and one more proton or neutron visible and update the element name. After clicking on the last element, all the particles are hidden again so we can start over. The visible variable keeps track of the number of atomic particles that are currently visible so that we can show or hide them accordingly.

Try clicking or tapping in the following CodePen demo to see the first ten elements of the periodic table.

Final Thoughts

We began this tutorial with a brief introduction to the Two.js library and how it can be used to draw shapes like rectangles, circles, and ellipses. After that, we discussed how we can group different objects together to manipulate them all at once. We used this ability to group elements to translate and rotate them in synchronization. These tools all came together in our animation of the atoms of the first ten elements in the periodic table.

As you can see, creating animated 2D graphics is very easy using Two.js. The focus of this post was to help you get started quickly, so we only covered the basics. However, you should read the official documentation to learn more about the library!


How to Write (Perfectly Tailor) a Resume to a Job Posting

How to Get the Most From the Transport Window in Pro Tools

20 Best Fonts for Making Monograms & Logo Designs in 2018

How to Achieve Realistic Glow in After Effects With Deep Glow

Thursday, August 23, 2018

Code Your First API With Node.js and Express: Set Up the Server

How to Set Up an Express API Server in Node.js

In the previous tutorial, we learned what the REST architecture is, the six guiding constraints of REST, how to understand HTTP request methods and their response codes, and the anatomy of a RESTful API endpoint.

In this tutorial, we'll set up a server for our API to live on. You can build an API with any programming language and server software, but we will use Node.js, which is the back-end implementation of JavaScript, and Express, a popular, minimal framework for Node.

Installation

Our first prerequisite is making sure Node.js and npm are installed globally on the computer. We can test both using the -v flag, which will display the version. Open up your command prompt and type the following.
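Run both version checks:

```shell
node -v
npm -v
```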

Your versions may be slightly different than mine, but as long as both are there, we can get started.

Let's create a project directory called express-api and move to it.
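In the terminal, that's:

```shell
mkdir express-api
cd express-api
```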

Now that we're in our new directory, we can initialize our project with the init command.
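Run it from inside the project directory:

```shell
npm init
```

If you'd rather skip the questions, the -y flag accepts all the defaults.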

This command will prompt you to answer some questions about the project, which you can choose to fill out or not. Once the setup is complete, you'll have a package.json file that looks like this:
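With the default answers, the file will resemble this (your name, description, and other fields will reflect whatever you entered):

```json
{
  "name": "express-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```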

Now that we have our package.json, we can install the dependencies required for our project. Fortunately we don't require too many dependencies, just these four listed below.

  • body-parser: Body parsing middleware.
  • express: A minimalist web framework we'll use for our server.
  • mysql: A MySQL driver.
  • request (optional): A simple way to make HTTP calls.

We'll use the install command followed by each dependency to finish setting up our project.
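All four can be installed in one command:

```shell
npm install body-parser express mysql request
```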

This will create a package-lock.json file and a node_modules directory, and our package.json will be updated to look something like this:

Setting Up an HTTP Server

Before we get started on setting up an Express server, we will quickly set up an HTTP server with Node's built-in http module, to get an idea of how a simple server works.

Create a file called hello-server.js. Load in the http module, set a port number (I chose 3001), and create the server with the createServer() method.

In the introductory REST article, we discussed what requests and responses are with regards to an HTTP server. We're going to set our server to handle a request and display the URL requested on the server side, and display a Hello, server! message to the client on the response side.

Finally, we will tell the server which port to listen on, and display an error if there is one.
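Putting those pieces together, hello-server.js might look like this:

```javascript
// hello-server.js
const http = require('http');

const port = 3001;

// Log each requested URL on the server side, and greet the client.
const server = http.createServer((request, response) => {
    console.log(`URL: ${request.url}`);
    response.end('Hello, server!');
});

server.listen(port, (error) => {
    if (error) return console.log(`Error: ${error}`);

    console.log(`Server is listening on port ${port}`);
});
```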

Now, we can start our server with node followed by the filename.

You will see this response in the terminal:

To check that the server is actually running, go to http://localhost:3001/ in your browser's address bar. If all is working properly, you should see Hello, server! on the page. In your terminal, you'll also see the URLs that were requested.

If you were to navigate to http://localhost:3001/hello, you would see URL: /hello.

We can also use cURL on our local server, which will show us the exact headers and body that are being returned.

If you close the terminal window at any time, the server will go away.

Now that we have an idea of how the server, request, and response all work together, we can rewrite this in Express, which has an even simpler interface and extended features.

Setting Up an Express Server

We're going to create a new file, app.js, which will be the entry point to our actual project. Just like with the original http server, we'll require a module and set a port to start.

Create an app.js file and put the following code in it.

Now, instead of looking for all requests, we will explicitly state that we are looking for a GET request on the root of the server (/). When / receives a request, we will display the URL requested and the "Hello, Server!" message.

Finally, we'll start the server on port 3002 with the listen() method.
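Assembled, app.js might read:

```javascript
// app.js
const express = require('express');

const app = express();
const port = 3002;

// Respond only to GET requests on the root of the server.
app.get('/', (request, response) => {
    console.log(`URL: ${request.url}`);
    response.send('Hello, Server!');
});

app.listen(port, (error) => {
    if (error) return console.log(`Error: ${error}`);

    console.log(`Server is listening on port ${port}`);
});
```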

We can start the server with node app.js as we did before, but we can also modify the scripts property in our package.json file to automatically run this specific command.
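The relevant fragment of package.json then becomes:

```json
"scripts": {
  "start": "node app.js"
}
```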

Now we can use npm start to start the server, and we'll see our server message in the terminal.

If we run a curl -i on the URL, we will see that it is powered by Express now, and there are some additional headers such as Content-Type.

Add Body Parsing Middleware

In order to easily deal with POST and PUT requests to our API, we will add body parsing middleware. This is where our body-parser module comes in. body-parser will extract the entire body of an incoming request and parse it into a JSON object that we can work with.

We'll simply require the module at the top of our file. Add the following require statement to the top of your app.js file.
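It's a single line:

```javascript
const bodyParser = require('body-parser');
```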

Then we'll tell our Express app to use body-parser, and look for JSON.

Also, let's change our message to send a JSON object as a response instead of plain text.

Following is our full app.js file as it stands now.
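As a rough sketch (the exact response message is an assumption), the complete file might now look like this:

```javascript
// app.js
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
const port = 3002;

// Tell Express to use body-parser and look for JSON in request bodies
app.use(bodyParser.json());

// Respond with a JSON object instead of plain text
app.get('/', (request, response) => {
  response.json({ message: 'Hello, Server!' });
});

app.listen(port, () => console.log(`Listening on port ${port}...`));
```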

If you send a curl -i to the server, you'll see that the header now returns Content-Type: application/json; charset=utf-8.

Set Up Routes

So far, we only have a GET route to the root (/), but our API should be able to handle all four major HTTP request methods on multiple URLs. We're going to set up a router and make some fake data to display.

Let's create a new directory called routes, and a file within called routes.js. We'll link to it at the top of app.js.

Note that the .js extension is not necessary in the require. Now we'll move our app's GET listener to routes.js. Enter the following code in routes.js.

Finally, export the router so we can use it in our app.js file.
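A minimal routes.js, sketched under the assumption that it exports a function receiving the Express app, might look like this:

```javascript
// routes/routes.js
const router = (app) => {
  // The GET listener moved over from app.js
  app.get('/', (request, response) => {
    response.json({ message: 'Hello, Server!' });
  });
};

// Export the router so app.js can use it
module.exports = router;
```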

In app.js, replace the app.get() code you had before with a call to routes():
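Assuming routes.js exports a function that receives the Express app, the change in app.js might look like this:

```javascript
// app.js
const routes = require('./routes/routes');

// Register all of the app's routes
routes(app);
```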

You should now be able to go to http://localhost:3002 and see the same thing as before. (Don't forget to restart the server!)

Once that is all set up and working properly, we'll serve some JSON data with another route. We'll just use fake data for now, since our database is not yet set up.

Let's create a users variable in routes.js, with some fake user data in JSON format.

We'll add another GET route to our router, /users, and send the user data through.
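Pulling that together, routes.js might now look like the following. The user names and fields are invented placeholder data:

```javascript
// routes/routes.js
// Fake user data to stand in for the database (placeholder values)
const users = [
  { id: 1, name: 'Richard', email: 'richard@example.com' },
  { id: 2, name: 'Sherlock', email: 'sherlock@example.com' },
];

const router = (app) => {
  app.get('/', (request, response) => {
    response.json({ message: 'Hello, Server!' });
  });

  // Send the fake user data through on /users
  app.get('/users', (request, response) => {
    response.json(users);
  });
};

module.exports = router;
```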

After restarting the server, you can now navigate to http://localhost:3002/users and see all our data displayed.

Note: If you do not have a JSON viewer extension on your browser, I highly recommend you download one, such as JSONView for Chrome. This will make the data much easier to read!

Visit our GitHub Repo to see the completed code for this post and compare it to your own.

Conclusion

In this tutorial, we learned how to set up a built-in HTTP server and an Express server in Node, route requests and URLs, and serve JSON data in response to GET requests.

In the final installment of the RESTful API series, we will hook up our Express server to MySQL to create, view, update, and delete users in a database, finalizing our API's functionality.



Wednesday, August 22, 2018


How Secure Are Your JavaScript Open-Source Dependencies?

Modern-day JavaScript developers love npm. GitHub and the npm registry are a developer's first-choice places for finding a particular package. Open-source modules boost productivity and efficiency by providing developers with a host of functionality to reuse in their projects. It's fair to say that if it were not for these open-source packages, most of today's frameworks would not exist in their current form.

A full-fledged enterprise-level application, for instance, might rely on hundreds if not thousands of packages, spanning direct, development, bundled, production, and optional dependencies. That's great, because everyone is getting the best out of the open-source ecosystem.

However, one factor that often gets overlooked is the amount of risk involved. Although these third-party modules are particularly useful in their domain, they also introduce security risks into your application.

Are Open-Source Libraries Vulnerable?

OSS dependencies are indeed vulnerable to exploits and compromises. Let's have a look at a few examples: 

A vulnerability was recently discovered in a package called eslint-scope, which is a dependency of several popular JavaScript packages such as babel-eslint and webpack. The package maintainer's account was compromised, and the attackers injected malicious code into it. Fortunately, the exploit was discovered quickly, and the damage was reportedly limited to a few users.

Moment.js, one of the most widely used libraries for parsing and displaying dates in JavaScript, was recently found to have a vulnerability with a severity score of 7.5, making it susceptible to ReDoS attacks. Patches were released, and the issue was fixed rather quickly.

But that's not all. A lot of new exploits get unearthed every week. Some of them get disclosed to the public, but others make headlines only after a serious breach. 

So how do we mitigate these risks? In this article, I'll explain some of the industry-standard best practices that you can use to secure your open-source dependencies.

1. Keep Track of Your Application’s Dependencies

Logically speaking, as the number of dependencies increases, the risk of ending up with a vulnerable package also increases. This holds equally true for direct and indirect dependencies. Although there's no reason to stop using open-source packages, it's always a good idea to keep track of them.

Discovering these dependencies can be as simple as running npm ls in the root directory of your application. You can use the --prod argument to display all production dependencies and the --long argument for a summary of each package description.
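For example, run from your project's root (output will vary by project):

```shell
# List the project's dependency tree
npm ls

# Production dependencies only, with extended details for each package
npm ls --prod --long
```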

Furthermore, you can automate the dependency management process with a service that offers real-time monitoring and automatic update testing for your dependencies. Familiar tools include Greenkeeper and Libraries.io. These tools collate a list of the dependencies you're currently using and track relevant information about them.

2. Get Rid of Packages That You Do Not Need

With the passage of time and changes in your code, it is likely that you'll stop using some packages altogether and instead add in new ones. However, developers tend not to remove old packages as they go along.

Over time, your project might accumulate a lot of unused dependencies. Although this is not a direct security risk, these dependencies add to your project's attack surface and lead to unnecessary clutter in the code. An attacker may be able to find a loophole in an old but still-installed package with known vulnerabilities, increasing the potential damage.

How do you check for such unused dependencies? You can do this with the help of the depcheck tool. Depcheck scans your entire code for requires and import commands. It then correlates these commands with either installed packages or those mentioned in your package.json and provides you with a report. The command can also be modified using different command flags, thereby making it simpler to automate the checking of unused dependencies.

Install depcheck with:
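For example, you might install it globally and then run it from the project root:

```shell
npm install -g depcheck

# Report unused and missing dependencies for the current project
depcheck
```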

3. Find and Fix Crucial Security Vulnerabilities

Almost all of the points discussed above are primarily concerned with the potential problems that you might encounter. But what about the dependencies that you’re using right now?

Based on a recent study, almost 15% of current packages include a known vulnerability, either in the components or dependencies. However, the good news is that there are many tools that you can use to analyze your code and find open-source security risks within your project.

The most convenient tool is npm's own npm audit. Audit is a script that was released with npm version 6. The Node Security Platform initially developed the underlying technology, and npm later acquired it. If you're curious about what npm audit is all about, here's a quote from the official blog:

A security audit is an assessment of package dependencies for security vulnerabilities. Security audits help you protect your package's users by enabling you to find and fix known vulnerabilities in dependencies. The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities. 

The generated report usually comprises the following details: the affected package name, vulnerability severity and description, dependency path, and, if available, commands to apply patches that resolve the vulnerabilities. You can even get the audit report in JSON by running npm audit --json.

Apart from that, npm also offers assistance on how to act based on the report. You can use npm audit fix to fix issues that have already been found. These fixes are commonly accomplished using guided upgrades or via open-source patches. 
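The relevant commands look like this:

```shell
# Audit the project's dependencies against known vulnerabilities
npm audit

# The same report, as machine-readable JSON
npm audit --json

# Automatically apply compatible updates that resolve the reported issues
npm audit fix
```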

Feel free to refer to npm's documentation for more information.

4. Replace Expired Libraries With In-House Alternatives 

The concept of open-source security relies heavily on the number of eyes watching over a particular library. Packages that are actively used are more closely watched, so there's a higher chance that known security issues in them have already been addressed.

Let’s take an example. On GitHub, there are many JSON web token implementations that you can use with your Node.js library. However, the ones that are not in active development could have critical vulnerabilities. One such vulnerability, which was reported by Auth0, lets anyone create their own "signed" tokens with whatever payload they want. 

If a reasonably popular or well-used package had this flaw, the odds of a developer finding and patching the fault would be higher. But what about an inactive/abandoned project? We’ll talk about that in the next point.

5. Always Choose a Library That’s in Active Development

Perhaps the quickest and most efficient way to determine the activity of a specific package is to check its download rate on npm. You can find this in the Stats section of npm’s package page. It is also possible to extract these figures automatically using the npm stats API or by browsing historic stats on npm-stat.com. For packages with GitHub repositories, you should check out the commit history, the issue tracker, and any relevant pull requests for the library.

6. Update the Dependencies Frequently

Bugs, including a large number of security bugs, are continually unearthed and, in most cases, immediately patched. It's not uncommon to see recently reported vulnerabilities being fixed only on the most recent branch or version of a given project.

For example, let's take the Regular Expression Denial of Service (ReDoS) vulnerability reported on the HMAC package ‘hawk’ in early 2016. This bug in hawk was quickly resolved, but only in the latest major version, 4.x. Older versions like 3.x were patched a lot later even though they were equally at risk. 

Therefore, as a general rule, your dependencies are less likely to have any security bugs if they use the latest available version. 

The easiest way to confirm whether you're using the latest version is with the npm outdated command. This command supports the --prod flag to ignore any dev dependencies and --json to make automation simpler.

Regularly inspect the packages you use to verify their modification date. You can do this in two ways: via the npm UI, or by running npm view <package> time.modified.
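For example (lodash is used purely as an illustrative package name):

```shell
# List dependencies with newer versions available, ignoring dev dependencies
npm outdated --prod

# See when a specific package was last modified
npm view lodash time.modified
```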

Conclusion

The key to securing your application is to have a security-first culture from the start. In this post, we’ve covered some of the standard practices for improving the security of your JavaScript components. 

  1. Use open-source dependencies that are in active development.
  2. Update and monitor your components.
  3. Review your code and write tests.
  4. Remove unwanted dependencies or use alternatives.
  5. Use security tools like npm audit to analyze your dependencies.

If you have any thoughts about JavaScript security, feel free to share them in the comments.



Tuesday, August 21, 2018

Code Your First API With Node.js and Express: Understanding REST APIs

Understanding REST and RESTful APIs

If you've spent any amount of time with modern web development, you will have come across terms like REST and API. If you've heard of these terms or work with APIs but don't have a complete understanding of how they work or how to build your own API, this series is for you.

In this tutorial series, we will start with an overview of REST principles and concepts. Then we will go on to create our own full-fledged API that runs on a Node.js Express server and connects to a MySQL database. After finishing this series, you should feel confident building your own API or delving into the documentation of an existing API.

Prerequisites

In order to get the most out of this tutorial, you should already have some basic command line knowledge, know the fundamentals of JavaScript, and have Node.js installed globally.

What Are REST and RESTful APIs?

Representational State Transfer, or REST, describes an architectural style for web services. REST consists of a set of standards or constraints for sharing data between different systems, and systems that implement REST are known as RESTful. REST is an abstract concept, not a language, framework, or type of software.

A loose analogy for REST would be keeping a collection of vinyl vs. using a streaming music service. With the physical vinyl collection, each record must be duplicated in its entirety to share and distribute copies. With a streaming service, however, the same music can be shared in perpetuity via a reference to some data such as a song title. In this case, the streaming music is a RESTful service, and the vinyl collection is a non-RESTful service.

An API is an Application Programming Interface, which is an interface that allows software programs to communicate with each other. A RESTful API is simply an API that adheres to the principles and constraints of REST. In a Web API, a server receives a request through a URL endpoint and sends a response in return, which is often data in a format such as JSON.

REST Principles

Six guiding constraints define the REST architecture, outlined below.

  1. Uniform Interface: The interface of components must be the same. This means using the URI standard to identify resources—in other words, paths that could be entered into the browser's location bar.
  2. Client-Server: There is a separation of concerns between the server, which stores and manipulates data, and the client, which requests and displays the response.
  3. Stateless Interactions: All information about each request is contained in each individual request and does not depend on session state.
  4. Cacheable: The client and server can cache resources.
  5. Layered System: The client can be connected to the end server, or an intermediate layer such as a load-balancer.
  6. Code on Demand (Optional): A client can download code, which reduces visibility from the outside.

Request and Response

You will already be familiar with the fact that all websites have URLs that begin with http (or https for the secure version). HyperText Transfer Protocol, or HTTP, is the method of communication between clients and servers on the internet.

We see it most obviously in the URL bar of our browsers, but HTTP can be used for more than just requesting websites from servers. When you go to a URL on the web, you are actually doing a GET request on that specific resource, and the website you see is the body of the response. We will go over GET and other types of requests shortly.

HTTP works by opening a TCP (Transmission Control Protocol) connection to a server port (80 for http, 443 for https) to make a request, and the listening server responds with a status and a body.

A request consists of a URL, a method, header information, and, optionally, a body.

Request Methods

There are four major HTTP methods, also referred to as HTTP verbs, that are commonly used to interact with web APIs. These methods define the action that will be performed with any given resource.

HTTP request methods loosely correspond to the paradigm of CRUD, which stands for Create, Update, Read, Delete. Although CRUD refers to functions used in database operations, we can apply those design principles to HTTP verbs in a RESTful API.

  • Read—GET: Retrieves a resource
  • Create—POST: Creates a new resource
  • Update—PUT: Updates an existing resource
  • Delete—DELETE: Deletes a resource

GET is a safe, read-only operation that will not alter the state of a server. Every time you hit a URL in your browser, such as https://www.google.com, you are sending a GET request to Google's servers.

POST is used to create a new resource. A familiar example of POST is signing up as a user on a website or app. After submitting the form, a POST request with the user data might be sent to the server, which will then write that information into a database.

PUT updates an existing resource, which might be used to edit the settings of an existing user. Unlike POST, PUT is idempotent, meaning the same call can be made multiple times without producing a different result. For example, if you sent the same POST request to create a new user in a database multiple times, it would create a new user with the same data for each request you made. However, using the same PUT request on the same user would continuously produce the same result.

DELETE, as the name suggests, will simply delete an existing resource.

Response Codes

Once a request goes through from the client to the server, the server will send back an HTTP response, which will include metadata about the response known as headers, as well as the body. The first and most important part of the response is the status code, which indicates if a request was successful, if there was an error, or if another action must be taken.

The most well-known response code you will be familiar with is 404, which means Not Found. 404 is part of the 4xx class of status codes, which indicate client errors. There are five classes of status codes that each contain a range of responses.

  • 1xx: Information
  • 2xx: Success
  • 3xx: Redirection
  • 4xx: Client Error
  • 5xx: Server Error

Other common responses you may be familiar with are 301 Moved Permanently, which is used to redirect websites to new URLs, and 500 Internal Server Error, which is an error that comes up frequently when something unexpected has happened on a server that makes it impossible to fulfil the intended request.

With regard to RESTful APIs and their corresponding HTTP verbs, successful responses should be in the 2xx range.

GET: 200 (OK)
POST: 201 (Created)
PUT: 200 (OK)
DELETE: 200 (OK), 202 (Accepted), or 204 (No Content)

200 OK is the response that indicates that a request is successful. It is used as a response when sending a GET or PUT request. POST will return a 201 Created to indicate that a new resource has been created, and DELETE has a few acceptable responses, which convey that either the request has been accepted (202), or there is no content to return because the resource no longer exists (204).

We can test the status code of a resource request using cURL, which is a command-line tool used for transferring data via URLs. Using curl, followed by the -i or --include flag, will send a GET request to a URL and display the headers and body. We can test this by opening the command-line program and testing cURL with Google.
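For example:

```shell
curl -i https://www.google.com
```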

Google's server will respond with the following.

As we can see, the curl request returns multiple headers and the entire HTML body of the response. Since the request went through successfully, the first part of the response is the 200 status code, along with the version of HTTP (this will either be HTTP/1.1 or HTTP/2).

Since this particular request is returning a website, the content-type (MIME type) being returned is text/html. In a RESTful API, you will likely see application/json to indicate the response is JSON.

Interestingly, we can see another type of response by inputting a slightly different URL. Do a curl on Google without the www.
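That is:

```shell
curl -i https://google.com
```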

Google redirects google.com to www.google.com, and uses a 301 response to indicate that the resource should be redirected.

REST API Endpoints

When an API is created on a server, the data it contains is accessible via endpoints. An endpoint is the URL of the request that can accept and process the GET, POST, PUT, or DELETE request.

An API URL will consist of the root, path, and optional query string.

  • Root e.g. https://api.example.com or https://api.example.com/v2: The root of the API, which may consist of the protocol, domain, and version.
  • Path e.g. /users or /users/5: Unique location of the specific resource.
  • Query Parameters (optional) e.g. ?location=chicago&age=29: Optional key-value pairs used for sorting, filtering, and pagination.

We can put them all together to implement something such as the example below, which would return a list of all users and use a query parameter of limit to filter the responses to only include ten results.

https://api.example.com/users?limit=10

Generally, when people refer to an API as a RESTful API, they are referring to the naming conventions that go into building API URL endpoints. A few important conventions for a standard RESTful API are as follows:

  • Paths should be plural: For example, to get the user with an id of 5, we would use /users/5, not /user/5.
  • Endpoints should not display the file extension: Although an API will most likely be returning JSON, the URL should not end in .json.
  • Endpoints should use nouns, not verbs: Words like add and delete should not appear in a REST URL. In order to add a new user, you would simply send a POST request to /users, not something like /users/add. The API should be developed to handle multiple types of requests to the same URL.
  • Paths are case sensitive, and should be written in lowercase with hyphens as opposed to underscores.

All of these conventions are guidelines, as there are no strict REST standards to follow. However, using these guidelines will make your API consistent, familiar, and easy to read and understand.

Conclusion

In this article, we learned what REST and RESTful APIs are, how HTTP request methods and response codes work, the structure of an API URL, and common RESTful API conventions. In the next tutorial, we will learn how to put all this theory to use by setting up an Express server with Node.js and building our own API.



Thursday, August 16, 2018

How to Do User Authentication With the Symfony Security Component

In this article, you'll learn how to set up user authentication in PHP using the Symfony Security component. As well as authentication, I'll show you how to use its role-based authorization, which you can extend according to your needs.

The Symfony Security Component

The Symfony Security Component allows you to set up security features like authentication, role-based authorization, CSRF tokens and more very easily. In fact, it's further divided into four sub-components which you can choose from according to your needs.

The Security component has the following sub-components:

  • symfony/security-core
  • symfony/security-http
  • symfony/security-csrf
  • symfony/security-acl

In this article, we are going to explore the authentication feature provided by the symfony/security-core component.

As usual, we'll start with the installation and configuration instructions, and then we'll explore a few real-world examples to demonstrate the key concepts.

Installation and Configuration

In this section, we are going to install the Symfony Security component. I assume that you have already installed Composer on your system—we'll need it to install the Security component available at Packagist.

So go ahead and install the Security component using the following command.
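Assuming you want only the core sub-component discussed in this article, the command would be:

```shell
composer require symfony/security-core
```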

We are going to load users from the MySQL database in our example, so we'll also need a database abstraction layer. Let's install one of the most popular database abstraction layers: Doctrine DBAL.
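Doctrine DBAL can be installed the same way:

```shell
composer require doctrine/dbal
```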

That should have created the composer.json file, which should look like this:

Let's modify the composer.json file to look like the following one.
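Something like the following. The version constraints shown are assumptions and will reflect whatever Composer installed for you; the new part is the autoload section with its classmap entry for the src directory:

```json
{
    "require": {
        "symfony/security-core": "^4.1",
        "doctrine/dbal": "^2.8"
    },
    "autoload": {
        "classmap": ["src"]
    }
}
```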

As we have added a new classmap entry, let's go ahead and update the composer autoloader by running the following command.
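```shell
composer dump-autoload
```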

Now, you can use the Sfauth namespace to autoload classes under the src directory.

So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
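That is, at the top of your script:

```php
<?php
require_once './vendor/autoload.php';

// Your application code goes here...
```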

A Real-World Example

Firstly, let's go through the usual authentication flow provided by the Symfony Security component.

  • The first thing is to retrieve the user credentials and create an unauthenticated token.
  • Next, we'll pass an unauthenticated token to the authentication manager for validation.
  • The authentication manager may contain different authentication providers, and one of them will be used to authenticate the current user request. The logic of how the user is authenticated is defined in the authentication provider.
  • The authentication provider contacts the user provider to retrieve the user. It's the responsibility of the user provider to load users from the respective back-end.
  • The user provider tries to load the user using the credentials provided by the authentication provider. In most cases, the user provider returns the user object that implements the UserInterface interface.
  • If the user is found, the authentication provider returns an authenticated token, and you can store this token for the subsequent requests.

In our example, we are going to match the user credentials against the MySQL database, thus we'll need to create the database user provider. We'll also create the database authentication provider that handles the authentication logic. And finally, we'll create the User class, which implements the UserInterface interface.

The User Class

In this section, we'll create the User class which represents the user entity in the authentication process.

Go ahead and create the src/User/User.php file with the following contents.
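A sketch of such a class follows. The exact properties and constructor are assumptions, but the methods shown are the ones UserInterface requires:

```php
<?php
// src/User/User.php
namespace Sfauth\User;

use Symfony\Component\Security\Core\User\UserInterface;

class User implements UserInterface
{
    private $username;
    private $password;
    private $roles;

    public function __construct(string $username, string $password, array $roles)
    {
        $this->username = $username;
        $this->password = $password;
        $this->roles = $roles;
    }

    public function getUsername()
    {
        return $this->username;
    }

    public function getPassword()
    {
        return $this->password;
    }

    public function getRoles()
    {
        return $this->roles;
    }

    public function getSalt()
    {
        // No per-user salt is used in this demo
        return null;
    }

    public function eraseCredentials()
    {
        // Nothing sensitive is stored beyond the hashed password
    }
}
```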

The important thing is that the User class must implement the Symfony Security UserInterface interface. Apart from that, there's nothing out of the ordinary here.

The Database Provider Class

It's the responsibility of the user provider to load users from the back-end. In this section, we'll create the database user provider, which loads the user from the MySQL database.

Let's create the src/User/DatabaseUserProvider.php file with the following contents.
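The following is a sketch of what such a provider might look like. The table and column names (sf_users, username, password, roles) are assumptions:

```php
<?php
// src/User/DatabaseUserProvider.php
namespace Sfauth\User;

use Doctrine\DBAL\Connection;
use Symfony\Component\Security\Core\Exception\UnsupportedUserException;
use Symfony\Component\Security\Core\Exception\UsernameNotFoundException;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;

class DatabaseUserProvider implements UserProviderInterface
{
    private $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function loadUserByUsername($username)
    {
        return $this->getUser($username);
    }

    private function getUser($username)
    {
        // Fetch the user row from the MySQL database
        $row = $this->connection->fetchAssoc(
            'SELECT * FROM sf_users WHERE username = ?',
            [$username]
        );

        if (!$row) {
            throw new UsernameNotFoundException(
                sprintf('Username "%s" does not exist.', $username)
            );
        }

        return new User($row['username'], $row['password'], explode(',', $row['roles']));
    }

    public function refreshUser(UserInterface $user)
    {
        if (!$user instanceof User) {
            throw new UnsupportedUserException(
                sprintf('Instances of "%s" are not supported.', get_class($user))
            );
        }

        // Reload the latest information from the database
        return $this->getUser($user->getUsername());
    }

    public function supportsClass($class)
    {
        return User::class === $class;
    }
}
```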

The user provider must implement the UserProviderInterface interface. We're using Doctrine DBAL to perform the database-related operations. Since we've implemented the UserProviderInterface interface, we must implement the loadUserByUsername, refreshUser, and supportsClass methods.

The loadUserByUsername method should load the user by the username, and that's done in the getUser method. If the user is found, we return the corresponding Sfauth\User\User object, which implements the UserInterface interface.

On the other hand, the refreshUser method refreshes the supplied User object by fetching the latest information from the database.

And finally, the supportsClass method checks if the DatabaseUserProvider provider supports the supplied user class.

The Database Authentication Provider Class

Finally, we need to implement the user authentication provider, which defines the authentication logic—how a user is authenticated. In our case, we need to match the user credentials against the MySQL database, and thus we need to define the authentication logic accordingly.

Go ahead and create the src/User/DatabaseAuthenticationProvider.php file with the following contents.
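A sketch of the provider might look like this. The constructor wiring is an assumption based on the UserAuthenticationProvider base class, and MD5 is used only to match the article's demo:

```php
<?php
// src/User/DatabaseAuthenticationProvider.php
namespace Sfauth\User;

use Symfony\Component\Security\Core\Authentication\Provider\UserAuthenticationProvider;
use Symfony\Component\Security\Core\Authentication\Token\UsernamePasswordToken;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Core\Exception\BadCredentialsException;
use Symfony\Component\Security\Core\User\UserCheckerInterface;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;

class DatabaseAuthenticationProvider extends UserAuthenticationProvider
{
    private $userProvider;

    public function __construct(UserProviderInterface $userProvider, UserCheckerInterface $userChecker, string $providerKey)
    {
        parent::__construct($userChecker, $providerKey, true);
        $this->userProvider = $userProvider;
    }

    protected function retrieveUser($username, UsernamePasswordToken $token)
    {
        try {
            // Delegate the user lookup to the database user provider
            return $this->userProvider->loadUserByUsername($username);
        } catch (\Exception $e) {
            throw new AuthenticationException('Invalid username or password');
        }
    }

    protected function checkAuthentication(UserInterface $user, UsernamePasswordToken $token)
    {
        // Compare the stored hash with the hash of the submitted password.
        // MD5 is for demonstration only; use password_hash/bcrypt in production.
        if ($user->getPassword() !== md5($token->getCredentials())) {
            throw new BadCredentialsException('Invalid username or password');
        }
    }
}
```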

The DatabaseAuthenticationProvider authentication provider extends the UserAuthenticationProvider abstract class. Hence, we need to implement the retrieveUser and checkAuthentication abstract methods.

The job of the retrieveUser method is to load the user from the corresponding user provider. In our case, it will use the DatabaseUserProvider user provider to load the user from the MySQL database.

On the other hand, the checkAuthentication method performs the necessary checks in order to authenticate the current user. Please note that I've used MD5 hashing for passwords here for simplicity. Of course, you should use a stronger, salted hashing algorithm such as bcrypt (via PHP's password_hash function) to store user passwords.

How It Works Altogether

So far, we have created all the necessary elements for authentication. In this section, we'll see how to put it all together to set up the authentication functionality.

Go ahead and create the db_auth.php file and populate it with the following contents.
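A sketch of db_auth.php follows. The database credentials, the hard-coded admin/admin login, and the 'frontend' provider key are placeholder assumptions:

```php
<?php
// db_auth.php
require_once './vendor/autoload.php';

use Doctrine\DBAL\DriverManager;
use Sfauth\User\DatabaseAuthenticationProvider;
use Sfauth\User\DatabaseUserProvider;
use Symfony\Component\Security\Core\Authentication\AuthenticationProviderManager;
use Symfony\Component\Security\Core\Authentication\Token\UsernamePasswordToken;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Core\User\UserChecker;

// Hypothetical database credentials: adjust to your environment
$connection = DriverManager::getConnection([
    'dbname' => 'your_db_name',
    'user' => 'your_db_user',
    'password' => 'your_db_password',
    'host' => 'localhost',
    'driver' => 'pdo_mysql',
]);

$userProvider = new DatabaseUserProvider($connection);
$userChecker = new UserChecker();
$authProvider = new DatabaseAuthenticationProvider($userProvider, $userChecker, 'frontend');
$authenticationManager = new AuthenticationProviderManager([$authProvider]);

try {
    // Retrieve the user credentials and create an unauthenticated token
    $unauthenticatedToken = new UsernamePasswordToken('admin', 'admin', 'frontend');

    // Pass the unauthenticated token to the authentication manager
    $authenticatedToken = $authenticationManager->authenticate($unauthenticatedToken);

    echo "Authentication successful!\n";
    var_dump($authenticatedToken);
} catch (AuthenticationException $e) {
    echo $e->getMessage();
}
```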

Recall the authentication flow discussed at the beginning of this article—the above code reflects that sequence.

The first thing was to retrieve the user credentials and create an unauthenticated token.

Next, we have passed that token to the authentication manager for validation.

When the authenticate method is called, a lot of things are happening behind the scenes.

Firstly, the authentication manager selects an appropriate authentication provider. In our case, it's the DatabaseAuthenticationProvider authentication provider, which will be selected for authentication.

Next, it retrieves the user by the username from the DatabaseUserProvider user provider. Finally, the checkAuthentication method performs the necessary checks to authenticate the current user request.

Should you wish to test the db_auth.php script, you'll need to create the sf_users table in your MySQL database.
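A minimal schema along these lines would work; the column layout (username, password, roles) is an assumption:

```sql
CREATE TABLE `sf_users` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `username` VARCHAR(255) NOT NULL,
  `password` VARCHAR(255) NOT NULL,
  `roles` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`id`)
);

-- Example user: the password column stores an MD5 hash, per this article's demo
INSERT INTO `sf_users` (`username`, `password`, `roles`)
VALUES ('admin', MD5('admin'), 'ROLE_ADMIN');
```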

Go ahead and run the db_auth.php script to see how it goes. Upon successful completion, you should receive an authenticated token, as shown in the following snippet.

Once the user is authenticated, you can store the authenticated token in the session for the subsequent requests.

And with that, we've completed our simple authentication demo!

Conclusion

Today, we looked at the Symfony Security component, which allows you to integrate security features in your PHP applications. Specifically, we discussed the authentication feature provided by the symfony/security-core sub-component, and I showed you an example of how this functionality can be implemented in your own app.

Feel free to post your thoughts using the feed below!