Monday, October 29, 2018
Capturing Lifelike Guitar Sounds Without Microphones: Part 2
In the previous tutorial I showed you how Impulse Responses (IRs) have revolutionised the world of recording over the past 20 years. What started out as a way of creating authentic reverbs by modelling physical spaces has diversified significantly.
They're increasingly relevant to guitarists, as IRs are used in studios to replicate the sounds of speaker cabinets, and indeed, whole recording chains. They're quick, simple to use, readily available and significantly cheaper than their hardware counterparts.
Let’s look at how this is typically achieved.
Hardware or Software
Hardware
By this I’m referring to using an amp, or perhaps even just a pedal, as there are an increasing number of ‘amp-in-a-box’ pedals available. Perhaps it’s an advanced modelling unit, such as the Line 6 Helix, or Fractal Audio’s Axe-Fx.
If it’s the amp, check to see if it can handle silent recording. This is an amp with a built-in loading facility so it doesn’t require speakers. If the amp doesn’t do this, you must either connect some speakers to it, or use a reactive load box. Failure to do either could result in permanent damage to the amp.
If it’s anything other than an amp, you can connect it straight to your DAW’s interface.
Software
If you don’t have an amp, or don’t want to use it, everything can be done ‘in the box’. Some DAWs, such as Logic Pro X, come with a built-in amp simulator. Failing that, there are a number of great plugins, such as Positive Grid’s BIAS FX. I use some of the amps from Brainworx, as they’re highly detailed.
Some software, such as the amp sims from Kuassa, come with the facility to load IRs. If it doesn’t, make sure that any cabinet emulation can be disabled, as you’re going to use an IR loader.
IR Loader
This is either hardware or software to house and run the IRs accordingly. One of the most ubiquitous pieces of hardware currently is the Torpedo Studio from Two Notes.
If you’re working ‘in the box’, software IR loaders are available. There are some free ones but I chose to buy Impulsive from 3Sigma Audio, as it has a greater range of controls, plus the ability to load more than one IR simultaneously. This latter feature allows you to blend cabinets.
Once you’re all set up, you can audition sounds before or after recording. This is especially useful as your mix evolves, and will avoid having to re-record parts just to make them fit in.
A Choice Of Speakers
Some IR loaders come with free IRs, and you can also find collections via an internet search.
But, if you’re prepared to spend a relatively small amount of money, you can purchase some high quality IRs, and that’ll really make a difference to your recordings.
Celestion is a world-famous manufacturer of speakers, particularly when it comes to guitar. In an extremely forward-thinking move, they’re now offering their speakers as IRs.
Lots of companies also offer models of Celestion speakers, as well as those of other manufacturers. I’ve found the IRs from 3Sigma Audio and Ownhammer to be particularly good.
So IRs are great for recording, but the really exciting aspect is their usage in the live environment.
Going Electric
Whether you’re playing to thousands all over the world, or just the occasional pub gig of a weekend, your set-up’s always a balancing act between the gear you’ve got, the sounds you want, and physically transporting it to and from the venue.
Unless you’re a touring juggernaut like U2, you’re unlikely to take large amounts of equipment with you, so you have to design a portable rig.
Consequently, more guitarists are starting to come around to the idea of using modelling equipment and IRs, especially if the equipment’s expensive, rare or even vintage.
Many have embraced Kemper.
Just as IRs capture speakers, Kemper developed a way of capturing the characteristics of any physical amp with extraordinary clarity. Touring guitarists often model their favourite amps, allowing them to take their cherished sounds on the road whilst leaving the amps safely at home. Kemper now even offer an amp head version, meaning that no additional power amp is required.
Kemper, and others like it, allow you to load IRs or use the onboard ones. These are great pieces of kit but, unsurprisingly, are far from cheap.
Pedalboard
Thankfully, pedalboard-friendly products are starting to appear, some costing less than a tenth of the price of pro kit such as the Kemper.
For example, the Ampli-Firebox from Atomic is a preamp that can sit at the end of the pedalboard and plug straight into a PA via XLR connections. It not only emulates amplifiers, but hosts its own IRs, as well as third-party ones.
An even cheaper but no less exciting option is the Mooer Radar. This is an IR loader in a compact pedal format. Like the Ampli-Firebox, it comes with its own IRs, and will host others. This pedal would therefore allow you to connect your preamp of choice.
You could therefore have a complete rig on your pedalboard without any need for a physical guitar amp. You would of course need some sort of monitoring, however, in order to hear yourself. In any case, this is an ideal set-up for the travelling guitarist.
Acoustic
In the previous tutorial I showed you the difficulties of recording an acoustic guitar with microphones, in terms of consistency of tone, and so on. Thankfully, IRs now give us a very usable solution.
3Sigma Audio offer IRs of acoustic instruments, so not just guitars, but mandolins and even strings such as the cello. You can therefore record your guitar direct, using its onboard piezo pickup, and then overlay the IR of a mic’d guitar. Furthermore, thanks to a pedal such as Mooer’s Radar, you could do that live. No more quacky piezo.
Conclusion
The world of IRs represents some real advantages to both recording and touring guitarists. If you’ve not tried them, I encourage you to do so, as they allow you to:
- Leave your gear at home
- Access sounds from equipment at a fraction of the cost
- Audition tones before and after recording
- Get great reliable sound without microphones
Friday, October 19, 2018
How to Zip and Unzip Files in PHP
Compressing files when transferring them over the internet has a lot of advantages. In most cases, the combined total size of all the files in the compressed format comes down by a nice margin. This means that you will save some of your bandwidth, and users will also get faster download speeds. Once the users have downloaded a file, they can decompress it whenever they want. In short, compression can make serving files over the internet a lot easier for you as well as your visitors.
One factor that can discourage you from compressing files or make the process very tiresome is the fact that you might be doing it manually. Luckily, PHP comes with a lot of extensions that deal specifically with file compression and extraction. You can use the functions available in these extensions to automatically compress files in PHP.
This tutorial will teach you how to zip and unzip (compress and extract) files to and from a zip archive in PHP. You will also learn how to delete or rename files in an archive without extracting them first.
Compressing Files in PHP
The PHP ZipArchive class has a lot of properties and methods which can help you compress and decompress all your files.
Compress Individual Files
You can add files to your zip archive one at a time or add the whole directory at once. In either case, the first step is creating a new ZipArchive instance and then calling its open($filename, [$flags]) method. This method will open a new zip archive for reading, writing, or other modifications. There are four valid values for the optional $flags parameter, which determine how different situations are handled.
- ZipArchive::OVERWRITE — This flag will overwrite the contents of the specified archive if it already exists.
- ZipArchive::CREATE — This flag will create a new archive if it does not already exist.
- ZipArchive::EXCL — This flag will result in an error if the archive already exists.
- ZipArchive::CHECKCONS — This flag will tell PHP to perform additional consistency checks on the archive and give an error if they fail.
You can check the documentation of this method to learn about the different error codes returned in case of failure to open the file. If the zip file was opened or created successfully, the method will return true.
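These flags can also be combined with the bitwise OR operator. Here's a minimal sketch showing the combination and the return-value check; the archive path is made up for illustration:

```php
<?php

// Sketch: combine flags so the archive is created if missing,
// or replaced if it already exists. The path is hypothetical.
$path = sys_get_temp_dir() . '/demo_archive.zip';

$zip = new ZipArchive();
$result = $zip->open($path, ZipArchive::CREATE | ZipArchive::OVERWRITE);

if ($result === true) {
    // The archive was opened; add something and save it.
    $zip->addFromString('example.txt', 'hello');
    $zip->close();
    echo "Archive written to $path\n";
} else {
    // On failure, open() returns an error code instead of true.
    echo "Could not open archive, error code: $result\n";
}
```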
Once you have opened the archive successfully, you can use the addFile($filename, $localname, $start, $length) method to add any file from a given path to your archive. The $filename parameter is the path of the file that you want to add to the archive. The $localname parameter is used to assign a name to the file to store it inside the archive. You can call addFile() every time you want to add a new file to your archive.
After adding all the necessary files to the archive, you can simply call the close() method to close it and save the changes.
Let's say you have a website which allows users to download font files for different fonts along with the licensing information to use them. Files like these will be perfect examples of automated archiving using PHP. The following code shows you how to do exactly that.
<?php

$zip = new ZipArchive();
$zip->open('compressed/font_files.zip', ZipArchive::CREATE);

$zip->addFile('fonts/Monoton/Monoton-Regular.ttf', 'Monoton-Regular.ttf');
$zip->addFile('fonts/Monoton/OFL.txt', 'license.txt');

$zip->close();
?>
We begin by creating a ZipArchive instance and then using the open() method to create our archive. The addFile() method adds our actual .ttf font file and the .txt license file to the archive.
You should note that the original files were inside the fonts/Monoton directory. However, the PHP code places them directly inside the root of our archive. You can change the directory structure as well as the names of the files going into the archive.
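As a quick illustration (the paths here are hypothetical), the second argument to addFile() can itself contain directory names, and those directories will be created inside the archive:

```php
<?php

// Sketch with hypothetical paths: the local name passed to addFile()
// controls where the file ends up inside the archive, including
// any subdirectories in it.
$source = tempnam(sys_get_temp_dir(), 'font');
file_put_contents($source, 'fake font data');

$zip = new ZipArchive();
$zip->open(sys_get_temp_dir() . '/fonts.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

// Stored as fonts/regular/Monoton-Regular.ttf inside the archive.
$zip->addFile($source, 'fonts/regular/Monoton-Regular.ttf');

$zip->close();
```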
Compressing Multiple Files From a Directory
Adding individual files to your archive can get tiring after a while. For example, you might want to create an archive of all .pdf or .png files in a directory. The addGlob($pattern, $flags, $options) method will prove very helpful in this case. The only disadvantage of this method is that you lose control over the location of individual files in the archive. However, you can still influence the directory structure inside the archive using the $options parameter. The options are passed in the form of an associative array.
- add_path — The value you assign to add_path is prefixed to the local path of the file within the archive.
- remove_path — The value you assign to remove_path is used to remove a matching prefix from the path of different files which are added to the archive.
- remove_all_path — Setting the value of remove_all_path to true will remove everything from the path of the file besides its name. In this case, the files are added to the root of the archive.
It's important to remember that removal of a path is done before prefixing the value specified in add_path.
The following code snippet will make the use of addGlob() and all these options clearer.
$zip = new ZipArchive();
$zip->open('compressed/user_archive.zip', ZipArchive::CREATE);

$options = array('add_path' => 'light_wallpapers/', 'remove_all_path' => TRUE);
$zip->addGlob('lights/*.jpg', 0, $options);

$options = array('add_path' => 'font_files/', 'remove_all_path' => TRUE);
$zip->addGlob('documents/*.ttf', 0, $options);

$options = array('add_path' => 'pdf_books/', 'remove_all_path' => TRUE);
$zip->addGlob('documents/*.pdf', 0, $options);

$options = array('add_path' => 'images/', 'remove_all_path' => TRUE);
$zip->addGlob('documents/*.{jpg,png}', GLOB_BRACE, $options);

$zip->close();
As usual, we begin by creating a ZipArchive instance and then use the open() method to create our archive. We also specify different values for the add_path key in the $options array each time before calling the addGlob() method. This way, we can deal with one specific set of files at a time and provide archiving options accordingly.
In the first case, we iterate over all .jpg files in the lights directory and place them in the light_wallpapers directory in the archive. Similarly, we iterate over all the .ttf files in the documents directory and then put them inside a folder called font_files in our archive. Finally, we iterate over all the .jpg and .png files in the documents directory at once and put them all together in the images directory.
As you can see, the values in the $options parameter are useful in organizing the content inside the archive.
Extracting Content From an Archive
The ZipArchive class has a method called extractTo($destination, $entries) to extract the contents of an archive. You can use it to either extract everything inside the archive or just some specific files. The $entries parameter can be used to specify a single file name which is to be extracted, or you can use it to pass an array of files.
One important point to remember is that you need to specify the proper path of the file inside the archive in order to extract it. For example, we archived a font file called AlegreyaSans-Light.ttf in the previous section. The file was stored within the archive in a directory called font_files. This means that the path you need to specify in the $entries parameter would be font_files/AlegreyaSans-Light.ttf and not simply AlegreyaSans-Light.ttf.
The directory and file structure will be preserved during the extraction process, and files will be extracted in their respective directories.
<?php

$zip = new ZipArchive();
$zip->open('compressed/user_archive.zip', ZipArchive::CREATE);

$zip->extractTo('uncompressed/', 'font_files/AlegreyaSans-Light.ttf');

$zip->close();
?>
If you omit the second parameter, the method will extract all files in the archive.
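Here's a small sketch of both variants; the archive and entry names are assumed from the earlier examples, so adjust them to match your own files:

```php
<?php

// Sketch: the archive and entry names below are assumed from the
// earlier examples and are hypothetical.
$zip = new ZipArchive();

if ($zip->open('compressed/user_archive.zip') === true) {
    // Only these two entries are extracted; the paths are the
    // in-archive paths, not the original ones on disk.
    $zip->extractTo('uncompressed/', array(
        'font_files/AlegreyaSans-Light.ttf',
        'pdf_books/sample.pdf',
    ));

    // Omitting the second argument extracts the whole archive.
    $zip->extractTo('uncompressed_all/');

    $zip->close();
}
```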
Get More Control Over the Archives
The ZipArchive class also has a lot of other methods and properties to help you get more information about the archive before extracting all its contents.
You can count the number of files in an archive using the count() method. Another option is to use the numFiles property. They can be used to iterate over all the files in the archive and only extract the ones you need—or you can do something else with them, like removing them from the archive.
In the following example, we are deleting all files in the archive which contain the word Italic. Similar code could be used to delete all files which don't contain a specific word. You could also iterate over these files and replace a particular word with something else.
<?php

$zip = new ZipArchive();
$zip->open('compressed/user_archive.zip', ZipArchive::CREATE);

$file_count = $zip->count();

for ($i = 0; $i < $file_count; $i++) {
    $file_name = $zip->getNameIndex($i);
    if (stripos($file_name, 'Italic') !== false) {
        $zip->deleteName($file_name);
    }
}

$zip->close();
?>
In the above code, we are using deleteName() to delete an individual file. However, you can also use it to delete an entire directory.
A similar function, renameName($oldname, $newname), can be used to change the name of any file in the archive. You will get an error if a file titled $newname already exists.
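A minimal sketch of renaming an entry might look like this (the entry names are hypothetical):

```php
<?php

// Sketch: rename an entry in place, without extracting it.
// The archive path and entry names here are hypothetical.
$zip = new ZipArchive();

if ($zip->open('compressed/user_archive.zip') === true) {
    // renameName() returns false if 'font_files/license.txt'
    // already exists in the archive.
    $zip->renameName('font_files/OFL.txt', 'font_files/license.txt');
    $zip->close();
}
```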
Final Thoughts
We have covered a bunch of very useful methods of the ZipArchive class which will make automated compression and extraction of files in PHP a breeze. You should now be able to compress individual files or a group of them at once, based on your own criteria. Similarly, you should be able to extract any particular file from the archive without affecting other content.
With the help of count() and numFiles, you will get more control over the individual files, and renaming or deleting them will be super easy. You should go through the documentation at least once to read about more such functions.
Thursday, October 18, 2018
How to Use the Symfony Event Dispatcher for PHP
Today, we're going to learn how to use the Symfony event dispatcher component, which allows you to create events and listeners in your PHP applications. Thus, different components of your application can talk to each other with loosely coupled code.
What Is the Symfony Event Dispatcher Component?
You may be familiar with the event-observer pattern, which allows you to define listeners for system-generated events so that they are executed when the event is triggered. Similarly, the Symfony EventDispatcher component allows you to set up a system in which you could create custom events and listeners. In that way, you allow components in your application to react if something happens in a system.
In fact, the event dispatcher component provides three elements that you could build your app architecture around: event, listener, and dispatcher. The whole system is orchestrated by the dispatcher class, which raises events at appropriate points in an application and calls listeners associated with those events.
Let's assume that you want to allow other components in your application to react when the cache is cleared. In that case, you need to define the clear cache event in the first place. After the cache is cleared, you can use the dispatcher to raise the clear cache event, and that notifies all listeners that are listening to this event. This gives listeners the opportunity to purge component-specific caches.
In this article, we'll explore the basics of the event dispatcher component. We'll start with installation and configuration, and we'll also create a few real-world examples to demonstrate all the concepts mentioned above.
Installing and Configuring the Event Dispatcher
In this section, we're going to install the event dispatcher component. I assume that you've already installed Composer on your system, because we'll need it to install the EventDispatcher component.
Once you've installed Composer, go ahead and install the EventDispatcher component using the following command.
$ composer require symfony/event-dispatcher
That should have created the composer.json file, which should look like this:
{
    "require": {
        "symfony/event-dispatcher": "^4.1"
    }
}
Let's further edit the composer.json file to look like the following:
{
    "require": {
        "symfony/event-dispatcher": "^4.1"
    },
    "autoload": {
        "psr-4": {
            "EventDispatchers\\": "src"
        },
        "classmap": ["src"]
    }
}
As we've added a new classmap entry, go ahead and update the Composer autoloader by running the following command.
$ composer dump-autoload -o
Now, you can use the EventDispatchers namespace to autoload classes under the src directory.
So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
<?php

require_once './vendor/autoload.php';

// application code
?>
How to Create, Dispatch, and Listen to Events
In this section, we'll go through an example which demonstrates how you could create a custom event and set up a listener for that event.
The Event Class
To start with, go ahead and create the src/Events/DemoEvent.php file with the following contents.
<?php

namespace EventDispatchers\Events;

use Symfony\Component\EventDispatcher\Event;

class DemoEvent extends Event
{
    const NAME = 'demo.event';

    protected $foo;

    public function __construct()
    {
        $this->foo = 'bar';
    }

    public function getFoo()
    {
        return $this->foo;
    }
}
Our custom DemoEvent class extends the core Event class of the EventDispatcher component. The NAME constant holds the name of our custom event—demo.event. It's used when you want to set up a listener for this event.
The Listener Class
Next, let's create the listener class src/Listeners/DemoListener.php with the following contents.
<?php

namespace EventDispatchers\Listeners;

use Symfony\Component\EventDispatcher\Event;

class DemoListener
{
    public function onDemoEvent(Event $event)
    {
        // fetch event information here
        echo "DemoListener is called!\n";
        echo "The value of the foo is: ".$event->getFoo()."\n";
    }
}
The DemoListener class implements the onDemoEvent method, which is triggered when the system dispatches the DemoEvent event. Of course, it won't happen automatically yet, as we need to register the DemoListener listener to listen to the demo.event event using the EventDispatcher class.
So far, we've created event and listener classes. Next, we'll see how to tie all these pieces together.
An Example File
Let's create the basic_example.php file with the following contents.
<?php

// basic_example.php
require_once './vendor/autoload.php';

use Symfony\Component\EventDispatcher\EventDispatcher;
use EventDispatchers\Events\DemoEvent;
use EventDispatchers\Listeners\DemoListener;

// init event dispatcher
$dispatcher = new EventDispatcher();

// register listener for the 'demo.event' event
$listener = new DemoListener();
$dispatcher->addListener('demo.event', array($listener, 'onDemoEvent'));

// dispatch
$dispatcher->dispatch(DemoEvent::NAME, new DemoEvent());
The EventDispatcher class is the most important element in the EventDispatcher component—it allows you to bind listeners to events they want to listen to. We've used the addListener method of the EventDispatcher class to listen to the demo.event event.
The first argument of the addListener method is an event name, and the second argument is a PHP callable which is triggered when the registered event is dispatched. In our case, we've provided the DemoListener object as a listener, along with the onDemoEvent method.
$dispatcher->addListener('demo.event', array($listener, 'onDemoEvent'));
Finally, we've used the dispatch method of the EventDispatcher class to dispatch the demo.event event.
$dispatcher->dispatch(DemoEvent::NAME, new DemoEvent());
When you run the basic_example.php file, it should produce the following output.
$ php basic_example.php
DemoListener is called!
The value of the foo is: bar
As expected, the onDemoEvent method of the DemoListener class is called, and that in turn calls the getFoo method of the DemoEvent class to fetch the event-related information.
What Is an Event Subscriber?
In the previous section, we built an example which demonstrated how to create a custom event and a custom listener. We also discussed how to bind a listener to a specific event using the EventDispatcher class.
That was a simple example, as we only wanted to set up a listener for a single event. On the other hand, if you want to set up listeners for multiple events or you want to logically group event handling logic in a single class, you should consider using event subscribers because they allow you to keep everything in one place.
In this section, we'll revise the example which was created in the previous section.
The Subscriber Class
The first thing that we need to do is create a subscriber class which implements the EventSubscriberInterface interface. Go ahead and create the src/Subscribers/DemoSubscriber.php class as shown in the following snippet.
<?php

namespace EventDispatchers\Subscribers;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use EventDispatchers\Events\DemoEvent;

class DemoSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return array(
            DemoEvent::NAME => 'onDemoEvent',
        );
    }

    public function onDemoEvent(DemoEvent $event)
    {
        // fetch event information here
        echo "DemoListener is called!\n";
        echo "The value of the foo is: ".$event->getFoo()."\n";
    }
}
Since the DemoSubscriber class implements the EventSubscriberInterface interface, it must implement the getSubscribedEvents method. The getSubscribedEvents method should return an array of the events that you want to subscribe to. You provide the event name as the array key and, as the array value, the name of the method which is called when the event is triggered.
The last thing is to implement the listener method in the same class. In our case, we need to implement the onDemoEvent method, and we've already done that.
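To make the expected array shape concrete, here's a small sketch; the second event and its method name are hypothetical, and the optional priority form is a Symfony feature where higher numbers run earlier:

```php
<?php

// Sketch of the array shapes getSubscribedEvents() may return.
// 'other.event' and 'onOtherEvent' are made-up names for illustration.
$subscribedEvents = array(
    // Simple form: event name => method name.
    'demo.event' => 'onDemoEvent',

    // With a priority: higher numbers are called earlier.
    'other.event' => array('onOtherEvent', 10),
);
```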
An Example File
It's time to test our subscriber! Let's quickly create the subscriber_example.php file with the following contents.
<?php

require_once './vendor/autoload.php';

use Symfony\Component\EventDispatcher\EventDispatcher;
use EventDispatchers\Subscribers\DemoSubscriber;
use EventDispatchers\Events\DemoEvent;

// init event dispatcher
$dispatcher = new EventDispatcher();

// register subscriber
$subscriber = new DemoSubscriber();
$dispatcher->addSubscriber($subscriber);

// dispatch
$dispatcher->dispatch(DemoEvent::NAME, new DemoEvent());
You need to use the addSubscriber method of the EventDispatcher class to subscribe your custom subscriber, and the EventDispatcher class handles the rest. It fetches the events to be subscribed to from the getSubscribedEvents method and sets up listeners for those events. Apart from that, everything is the same, and it should work as expected with no surprises.
Let's test it!
$ php subscriber_example.php
DemoListener is called!
The value of the foo is: bar
And that was an event subscriber at your disposal! That also brings us to the end of this article.
Conclusion
Today, we explored the Symfony event dispatcher component, which allows you to set up events and listeners in your PHP applications. By using this library, you can create a loosely coupled system which allows components of your application to communicate with each other effortlessly.
Feel free to share your thoughts and queries using the form below!
Wednesday, October 17, 2018
Practical Test-Driven Development
What Is Test-Driven Development?
Test-driven development (TDD) simply means that you write your tests first. You set the expectations for correct code up front, before you have even written a single line of business logic. Not only does TDD help make sure that your code is correct, but it also helps you write smaller functions, refactor your code without breaking functionality, and understand your problem better.
In this article, I'll introduce some of the concepts of TDD by building a small utility. We will also cover some of the practical scenarios where TDD will make your life simple.
Building an HTTP Client With TDD
What We'll Be Building
We'll be incrementally building a simple HTTP client that abstracts various HTTP verbs. To make the refactors smooth, we will follow TDD practices. We will be using Jasmine, Sinon, and Karma for testing. To get started, copy package.json, karma.conf.js, and webpack.test.js from the sample project, or just clone the sample project from the GitHub repo.
It helps if you understand how the new Fetch API works, but the examples should be easy to follow. For the uninitiated, the Fetch API is a better alternative to XMLHttpRequest. It simplifies network interactions and works well with Promises.
A Wrapper Over GET
First, create an empty file at src/http.js and an accompanying test file under src/__tests__/http-test.js.
Let's set up a test environment for this service.
import * as http from "../http.js";
import sinon from "sinon";
import * as fetch from "isomorphic-fetch";

describe("TestHttpService", () => {
  describe("Test success scenarios", () => {
    let stubedFetch;

    beforeEach(() => {
      stubedFetch = sinon.stub(window, "fetch");
      window.fetch.returns(Promise.resolve(mockApiResponse()));

      function mockApiResponse(body = {}) {
        return new window.Response(JSON.stringify(body), {
          status: 200,
          headers: { "Content-type": "application/json" }
        });
      }
    });
  });
});
We're using both Jasmine and Sinon here—Jasmine to define the test scenarios and Sinon to assert and spy on objects. (Jasmine has its own way to spy and stub on tests, but I like Sinon's API better.)
The above code is self-explanatory. Before every test run, we hijack the call to the Fetch API, as there is no server available, and return a mock promise object. The goal here is to unit test if the Fetch API is called with the right params and see if the wrapper is able to handle any network errors properly.
Let's start with a failing test case:
describe("Test get requests", () => {
  it("should make a GET request", done => {
    http.get(url).then(response => {
      expect(stubedFetch.calledWith(`${url}`)).toBeTruthy();
      expect(response).toEqual({});
      done();
    });
  });
});
Start your test runner by calling karma start. The tests will obviously fail now, since there is no get method in http. Let's rectify that.
const status = response => {
  if (response.ok) {
    return Promise.resolve(response);
  }
  return Promise.reject(new Error(response.statusText));
};

export const get = (url, params = {}) => {
  return fetch(url)
    .then(status);
};
If you run your tests now, you'll see a failed response saying Expected [object Response] to equal Object({ }). The response is a Stream object. Stream objects, as the name suggests, are each a stream of data. To get the data from a stream, you need to read the stream first, using some of its helper methods. For now, we can assume that the stream will be JSON and deserialize it by calling response.json().
const deserialize = response => response.json();

export const get = (url, params = {}) => {
  return fetch(url)
    .then(status)
    .then(deserialize)
    .catch(error => Promise.reject(new Error(error)));
};
Our test suite should be green now.
Adding Query Parameters
So far, the get method just makes a simple call without any query params. Let's write a failing test to see how it should work with query parameters. If we pass { users: [1, 2], limit: 50, isDetailed: false } as query params, our HTTP client should make a network call to /api/v1/users/?isDetailed=false&limit=50&users=1&users=2 (the query-string library sorts the keys alphabetically).
it("should serialize array parameter", done => {
  const users = [1, 2];
  const limit = 50;
  const isDetailed = false;
  const params = { users, limit, isDetailed };

  http.get(url, params).then(response => {
    expect(
      stubedFetch.calledWith(`${url}?isDetailed=false&limit=50&users=1&users=2/`)
    ).toBeTruthy();
    done();
  });
});
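Before making the test pass, it can help to see what serialization the test expects. Here's a rough hand-rolled sketch (for illustration only; our actual client will delegate this to a library):

```javascript
// Sketch: serialize query params the way the test above expects,
// sorting keys alphabetically and repeating the key for array values.
const serialize = params =>
  Object.keys(params)
    .sort()
    .flatMap(key =>
      Array.isArray(params[key])
        ? params[key].map(v => `${key}=${encodeURIComponent(v)}`)
        : [`${key}=${encodeURIComponent(params[key])}`]
    )
    .join("&");

console.log(serialize({ users: [1, 2], limit: 50, isDetailed: false }));
// → isDetailed=false&limit=50&users=1&users=2
```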
Now that we have our test set up, let's extend our get
method to handle query params.
import { stringify } from "query-string";

export const get = (url, params) => {
  const prefix = url.endsWith('/') ? url : `${url}/`;
  const queryString = params ? `?${stringify(params)}/` : '';

  return fetch(`${prefix}${queryString}`)
    .then(status)
    .then(deserialize)
    .catch(error => Promise.reject(new Error(error)));
};
If the params are present, we construct a query string and append it to the URL.
Here I've used the query-string library—it's a nice little helper library that helps in handling various query params scenarios.
Handling Mutations
GET is perhaps the simplest of HTTP methods to implement. GET is idempotent, and it should not be used for any mutations. POST is typically meant to update some records in the server. This means that POST requests need some guardrails in place by default, like a CSRF token. More on that in the next section.
Let's start by constructing a test for a basic POST request:
describe(`Test post requests`, () => {
  it("should send request with custom headers", done => {
    const postParams = { users: [1, 2] };

    http
      .post(url, postParams, { contentType: http.HTTP_HEADER_TYPES.text })
      .then(response => {
        const [uri, params] = [...stubedFetch.getCall(0).args];
        expect(stubedFetch.calledWith(`${url}`)).toBeTruthy();
        expect(params.body).toEqual(JSON.stringify(postParams));
        expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.text);
        done();
      });
  });
});
The signature for POST is very similar to GET. It takes an options property, where you can define the headers, the body and, most importantly, the method. The method describes the HTTP verb—in this case, "post".
For now, let's assume that the content type is JSON and start our implementation of the POST request.
export const HTTP_HEADER_TYPES = {
  json: "application/json",
  text: "application/text",
  form: "application/x-www-form-urlencoded",
  multipart: "multipart/form-data"
};

export const post = (url, params) => {
  const headers = new Headers();
  headers.append("Content-Type", HTTP_HEADER_TYPES.json);

  return fetch(url, {
    headers,
    method: "post",
    body: JSON.stringify(params),
  });
};
At this point, our `post` method is very primitive. It doesn't support anything other than a JSON request.
Alternate Content Types and CSRF Tokens
Let's allow the caller to decide the content type, and throw the CSRF token into the fray. Depending on your requirements, you can make CSRF optional. In our use case, we will assume that this is an opt-in feature and let the caller determine whether a CSRF token needs to be set in the header.
To do this, start by passing an options object as the third parameter to our method.
it("should send request with CSRF", done => { const postParams = { users: [1, 2 ] }; http.post(url, postParams, { contentType: http.HTTP_HEADER_TYPES.text, includeCsrf: true }).then(response => { const [uri, params] = [...stubedFetch.getCall(0).args]; expect(stubedFetch.calledWith(`${url}`)).toBeTruthy(); expect(params.body).toEqual(JSON.stringify(postParams)); expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.text); expect(params.headers.get("X-CSRF-Token")).toEqual(csrf); done(); }); });
When we supply `options` with `{ contentType: http.HTTP_HEADER_TYPES.text, includeCsrf: true }`, it should set the content header and the CSRF header accordingly. Let's update the `post` function to support these new options.
export const post = (url, params, options = {}) => {
  const { contentType, includeCsrf } = options;
  const headers = new Headers();
  headers.append("Content-Type", contentType || HTTP_HEADER_TYPES.json);
  if (includeCsrf) {
    headers.append("X-CSRF-Token", getCSRFToken());
  }

  return fetch(url, {
    headers,
    method: "post",
    body: JSON.stringify(params),
  });
};

const getCSRFToken = () => {
  // This depends on your implementation.
  // Usually the token is part of your session cookie.
  return "csrf";
};
Note that getting the CSRF token is an implementation detail. Usually, it's part of your session cookie, and you can extract it from there. I won't cover it further in this article.
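For illustration only, here is a hedged sketch of one common approach: parsing the token out of a raw cookie string. The cookie name `XSRF-TOKEN` is an assumption and depends entirely on what your backend sets; in a browser you would pass `document.cookie`.

```javascript
// Hypothetical helper: pull a CSRF token out of a raw cookie string.
// The cookie name "XSRF-TOKEN" is an assumption — use whatever your
// backend actually sets. Returns null if the cookie is absent.
const getCsrfFromCookie = (cookieString, name = "XSRF-TOKEN") => {
  const entry = cookieString
    .split("; ")
    .find(part => part.startsWith(`${name}=`));
  return entry ? decodeURIComponent(entry.slice(name.length + 1)) : null;
};

const token = getCsrfFromCookie("session=abc123; XSRF-TOKEN=csrf-token-value");
console.log(token); // "csrf-token-value"
```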
Your test suite should be happy now.
Encoding Forms
Our `post` method is taking shape now, but it's still simplistic when it comes to sending the body. You'll have to massage your data differently for each content type. When dealing with forms, we should encode the data as a string before sending it across the wire.
it("should send a form-encoded request", done => { const users = [1, 2]; const limit = 50; const isDetailed = false; const postParams = { users, limit, isDetailed }; http.post(url, postParams, { contentType: http.HTTP_HEADER_TYPES.form, includeCsrf: true }).then(response => { const [uri, params] = [...stubedFetch.getCall(0).args]; expect(stubedFetch.calledWith(`${url}`)).toBeTruthy(); expect(params.body).toEqual("isDetailed=false&limit=50&users=1&users=2"); expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.form); expect(params.headers.get("X-CSRF-Token")).toEqual(csrf); done(); }); });
Let's extract a small helper method to do this heavy lifting. Based on the `contentType`, it processes the data differently.
const encodeRequests = (params, contentType) => {
  switch (contentType) {
    case HTTP_HEADER_TYPES.form: {
      return stringify(params);
    }
    default:
      return JSON.stringify(params);
  }
};

export const post = (url, params, options = {}) => {
  const { includeCsrf, contentType } = options;
  const headers = new Headers();
  headers.append("Content-Type", contentType || HTTP_HEADER_TYPES.json);
  if (includeCsrf) {
    headers.append("X-CSRF-Token", getCSRFToken());
  }

  return fetch(url, {
    headers,
    method: "post",
    body: encodeRequests(params, contentType || HTTP_HEADER_TYPES.json)
  }).then(deserializeResponse)
    .catch(error => Promise.reject(new Error(error)));
};
Look at that! Our tests are still passing even after refactoring a core component.
Handling PATCH Requests
Another commonly used HTTP verb is PATCH. PATCH is also a mutative call, so its signature is very similar to POST's; the only difference is the HTTP verb. This means we can reuse all the tests that we wrote for POST with a simple tweak.
['post', 'patch'].map(verb => {
  describe(`Test ${verb} requests`, () => {
    let stubCSRF, csrf;

    beforeEach(() => {
      csrf = "CSRF";
      stub(http, "getCSRFToken").returns(csrf);
    });

    afterEach(() => {
      http.getCSRFToken.restore();
    });

    it("should send request with custom headers", done => {
      const postParams = { users: [1, 2] };

      http[verb](url, postParams, { contentType: http.HTTP_HEADER_TYPES.text })
        .then(response => {
          const [uri, params] = [...stubedFetch.getCall(0).args];
          expect(stubedFetch.calledWith(`${url}`)).toBeTruthy();
          expect(params.body).toEqual(JSON.stringify(postParams));
          expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.text);
          done();
        });
    });

    it("should send request with CSRF", done => {
      const postParams = { users: [1, 2] };

      http[verb](url, postParams, { contentType: http.HTTP_HEADER_TYPES.text, includeCsrf: true })
        .then(response => {
          const [uri, params] = [...stubedFetch.getCall(0).args];
          expect(stubedFetch.calledWith(`${url}`)).toBeTruthy();
          expect(params.body).toEqual(JSON.stringify(postParams));
          expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.text);
          expect(params.headers.get("X-CSRF-Token")).toEqual(csrf);
          done();
        });
    });

    it("should send a form-encoded request", done => {
      const users = [1, 2];
      const limit = 50;
      const isDetailed = false;
      const postParams = { users, limit, isDetailed };

      http[verb](url, postParams, { contentType: http.HTTP_HEADER_TYPES.form, includeCsrf: true })
        .then(response => {
          const [uri, params] = [...stubedFetch.getCall(0).args];
          expect(stubedFetch.calledWith(`${url}`)).toBeTruthy();
          expect(params.body).toEqual("isDetailed=false&limit=50&users=1&users=2");
          expect(params.headers.get("Content-Type")).toEqual(http.HTTP_HEADER_TYPES.form);
          expect(params.headers.get("X-CSRF-Token")).toEqual(csrf);
          done();
        });
    });
  });
});
Similarly, we can reuse the current `post` method by making the verb configurable and renaming the method to something more generic.
const request = (url, params, options = {}, method = "post") => {
  const { includeCsrf, contentType } = options;
  const headers = new Headers();
  headers.append("Content-Type", contentType || HTTP_HEADER_TYPES.json);
  if (includeCsrf) {
    headers.append("X-CSRF-Token", getCSRFToken());
  }

  return fetch(url, {
    headers,
    method,
    body: encodeRequests(params, contentType)
  }).then(deserializeResponse)
    .catch(error => Promise.reject(new Error(error)));
};

export const post = (url, params, options = {}) => request(url, params, options, 'post');
Now that all our POST tests are passing, all that's left is to add another method for `patch`.
export const patch = (url, params, options = {}) => request(url, params, options, 'patch');
Simple, right? As an exercise, try adding a PUT or DELETE request on your own. If you're stuck, feel free to refer to the repo.
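If you want to check your answer to the exercise, one possible shape looks like this. The `request` definition below is only a stub standing in for the generic helper built above (the real one wraps fetch), so the sketch is self-contained.

```javascript
// One possible solution sketch. This stub stands in for the generic
// `request` helper from earlier, which wraps fetch.
const request = (url, params, options = {}, method = "post") =>
  ({ url, params, options, method });

const put = (url, params, options = {}) =>
  request(url, params, options, "put");

// "delete" is a reserved word in JavaScript, so the function is
// often named something like destroy or del instead.
const destroy = (url, params, options = {}) =>
  request(url, params, options, "delete");

console.log(put("/users/1", { name: "Jo" }).method); // "put"
console.log(destroy("/users/1").method);             // "delete"
```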
When to TDD?
The community is divided on this. Some programmers run and hide the moment they hear the word TDD, while others live by it. You can achieve some of the beneficial effects of TDD simply by having a good test suite. There is no right answer here. It's all about how comfortable you and your team are with your approach.
As a rule of thumb, I use TDD for complex, unstructured problems that I need more clarity on. While evaluating an approach or comparing multiple approaches, I find it helpful to define the problem statement and the boundaries up front. It helps in crystallizing the requirements and the edge cases that your function needs to handle. If the number of cases is too high, it suggests that your program may be doing too many things and maybe it's time to split it into smaller units. If the requirements are straightforward, I skip TDD and add the tests later.
Wrapping Up
There is a lot of noise on this topic, and it's easy to get lost. If I can leave you with some parting advice: don't worry too much about TDD itself, but focus on the underlying principles. It's all about writing clean, easy-to-understand, maintainable code. TDD is a useful skill in a programmer's tool belt, and over time you'll develop an intuition for when to apply it.
Thanks for reading, and do let us know your thoughts in the comments section.
Hands-on With ARIA: Accessibility Recipes for Web Apps
In the confusing world of web applications, ARIA can help improve accessibility and ease of use for your creations. HTML isn't able to handle many types of relationship between elements on the page, but ARIA is ideal for almost any kind of setup you can come up with. Let’s take a look at what ARIA is, how it can apply to your web app, and some quick recipes you can use for your own sites.
Basics of ARIA
ARIA, also called WAI-ARIA, stands for the Web Accessibility Initiative–Accessible Rich Internet Applications. This initiative, maintained by the W3C, gives developers a set of roles and attributes for making their creations more accessible. It specifically aims to cover the inherent gaps left by HTML. If you’re not familiar with what it does already, you should take a look at our primer on ARIA. You might also be interested in our pieces on ARIA for the Homepage, and ARIA for eCommerce.
Briefly though, ARIA has three main features that we'll be focusing on:
- Creating relationships outside of the parent-child association: HTML only allows for relationships between parent and child elements, but the associations we want to define aren't always nested within each other. ARIA lets us define element relationships outside of this constraint.
- Defining advanced controls and interactivity: HTML covers many basic UI elements, but there are many more advanced controls that are used around the web that are hard to define outside of their visual component. ARIA helps with that.
- Providing access to "live" area update attributes: the
aria-live
attribute gives screen readers and other devices a listener for when content on the page changes. This allows for easier communication of when on-screen content changes.
ARIA and Web Applications
Previously, we looked at adding ARIA to the common elements of eCommerce pages and site homepages. Web apps, however, differ drastically from one another. Forms and functions shift between each app, and often even between versions of the same app. Because of this, we’ll treat our implementations here more like recipes in a cookbook rather than a wholesale conversion of a page.
When it comes to web apps, a user’s intent is more difficult to discern in a generalized sense. With eCommerce, no matter which site you are on, it is likely that the visitors are looking to purchase a product or service. Web apps serve a variety of purposes, so instead, we’ll focus on creating nuanced controls that are accessible and user friendly.
Let’s get into some of these control types.
Controlling Live Updates with Buttons
The first control we’re going to look at is a displayed value updated by a button press. These types of controls are commonly seen where an element is displaying a quantity that may be adjusted by buttons labelled ‘+’ and ‘-’, but can take many forms, such as arrow buttons that let you cycle through predefined statuses.
A standard implementation can leave some gaps in understanding for the user. It is unclear what elements the buttons affect, how they affect them, and when the element’s value changes.
Below, we’ll use ARIA to create a connection between the buttons and the value display element using the `aria-controls` attribute. Then we’ll make the purpose of the buttons clear using `aria-label` and the HTML `<label>` element. Finally, we’ll utilize the ARIA `alert` role and the `aria-live` attribute to let our user know when the value is being updated.
Let’s take a look at what that code looks like:
<form action=""> <fieldset> <legend>Adjust Quantity</legend> <div> <label for="qty-element">Current Quantity</label> <input type="text" role="alert" aria-live="assertive" value="0" id="qty-element" /> <button type="button" aria-label='Add to Quantity' aria-controls="qty-element">+</button> <button type="button" aria-label='Subtract from Quantity' title="subtract 10" aria-controls="qty-element">=</button> </div> </fieldset> </form>
ARIA Popups and Hover Tooltips
When outfitting a site with ARIA, it is common to use "progressive accessibility". The idea behind this term is that taking a site or web app from its basic form to fully accessible is a daunting task. To deal with this in a way that still makes forward movement, you can implement new features progressively and iteratively.
For a tooltip with a related popup or modal, this means that we can break the problem into two steps, rolling each out as we can. In this case, the tooltip we’re talking about is the common image of a small question mark that opens additional information when hovered over.
To let users know that the question mark image is actually a tooltip, we’ve defined it before using an appropriate role, like this:
<img src="question-mark.jpg" role='tooltip' />
There are a few issues with this implementation, though. Users may still not be aware that hovering over the tooltip initiates a popup with further information. Here’s how we can add that to our code:
<img src="question-mark.jpg" role='tooltip' aria-haspopup='true' aria-controls='tooltip-popup' /> <div id='tooltip-popup' aria-hidden='true'> Tooltip text </div>
Accessible Input Tooltips
Instead of a hover-based tooltip, it’s also common for a web app to utilize forms where each input has its own associated tooltip.
Without additional ARIA markup, it can be difficult for a user to tell which tooltip applies to which input. Not having this relation in place can render your helper text useless in some cases.
To correct for this, we’ll wrap our tooltips within their own elements. Each of these can be nested near their related input, have their relations established with ARIA, and then can be triggered with JavaScript (or just CSS if you’re crafty).
Here’s how that could look:
<form action=""> <fieldset> <legend>User Login</legend> <div> <input type="text" id="user" aria-describedby="user-tip" /> <label for="user">Your Username</label> <div role="tooltip" id="user-tip">Tooltip about their username</div> </div> <div> <input type="password" id="password" aria-describedby="password-tip" /> <label for="password">Your Username</label> <div role="tooltip" id="password-tip">Tooltip about their password</div> </div> </fieldset> </form>
Status Alerts
“Our service is currently down”, “Your account is suspended”, and related status alerts are commonly used among web apps, and display important information for users. Without ARIA, they can get buried within the information on a page and cause a variety of issues.
Utilizing the ARIA `alert` role and the `aria-live` attribute, we can make sure that our users become aware of any issues quickly once they arrive on a page.
We can set this type of status alert up like this:
<div id="system-status" role="alert" aria-live="assertive"> <p>The system is offline!</p> </div>
Creating a Toolbar
Finally, let’s take a look at another common control element used within web apps: the toolbar. For our purposes, we’re going to be marking up a toolbar that works like this: our web app shows a large amount of data, oriented in a table. Above this table, our toolbar has several buttons that allow users to sort the table in various ways. These buttons include classic sort options such as A to Z and Z to A.
Relationally, these leave some problems concerning accessibility. First, it isn't clear that those buttons affect the table; we'll solve this using the `aria-controls` attribute. It also isn't clear that the buttons are associated with each other, which may be a useful piece of information for our users. To define this, we'll use the `toolbar` role. Finally, a user doesn't necessarily know which button was pressed last. To correct this, we'll use the `aria-pressed` attribute.
When using the `aria-pressed` attribute, it's important to note that you'll have to update these elements as the user interacts with them. This will likely require changing the attributes through JavaScript or jQuery.
Here’s what our toolbar code looks like:
<div role="toolbar" aria-label="Sorting Toolbs" aria-controls="data-table"> <button type="button" aria-pressed="false" aria-label='Sort Alphabetically, A to Z'>A to Z</button> <button type="button" aria-pressed="true" aria-label='Sort Alphabetically, Z to A'>Z to A</button> <button type="button" aria-pressed="false" aria-label='Sort Numerically'>Numerical</button> </div> <table id='data-table'> ... </table>
Adding ARIA to Your Own Web Apps
With this handful of new control schemes and relations under your belt, you’re well on your way to making your own web app fully accessible! After you’ve added these new markups in, think about how you could apply these attributes to other parts of your user interface to maximize the usability of your creation.
Are there attributes, roles, or other features of ARIA that you’d like to know about? Or maybe you have some questions about your own implementations, or corrections for this article? Get in contact using the comment section below, or by tagging kylejspeaker on Twitter!