Wednesday, February 28, 2018
A Gentle Introduction to Higher-Order Components in React: Best Practices
This is the third part of the series on Higher-Order Components. In the first tutorial, we started from ground zero. We learned the basics of ES6 syntax, higher-order functions, and higher-order components.
The higher-order component pattern is useful for creating abstract components—you can use them to share data (state and behavior) with your existing components. In the second part of the series, I demonstrated practical examples of code using this pattern. This includes protected routes, creating a configurable generic container, attaching a loading indicator to a component, etc.
In this tutorial, we will have a look at some best practices and dos and don'ts that you should look into while writing HOCs.
Introduction
React previously had something called Mixins, which worked great with the React.createClass method. Mixins allowed developers to share code between components. However, they had some drawbacks, and the idea was eventually dropped. Mixins were not upgraded to support ES6 classes, and Dan Abramov even wrote an in-depth post on why Mixins are considered harmful.
Higher-order components emerged as an alternative to Mixins, and they support ES6 classes. Moreover, HOCs aren't tied to the React API at all; they are a generic pattern that works well with React. However, HOCs have flaws too. Although the downsides of higher-order components might not be evident in smaller projects, you could end up with multiple higher-order components chained to a single component, just like below.
const SomeNewComponent = withRouter(RequireAuth(LoaderDemo(GenericContainer(CustomForm(Form)))))
You shouldn't let the chaining grow to the point where you find yourself asking: "Where did that prop come from?" This tutorial addresses some of the common issues with the higher-order component pattern and the solutions to get them right.
The Problems With HOC
Some of the common problems with HOCs have less to do with HOCs themselves than with your implementation of them.
As you already know, HOCs are great for code abstraction and creating reusable code. However, when you have multiple HOCs stacked up, if something looks out of place or some props are not showing up, it's painful to debug, because the React DevTools give you only limited clues about what might have gone wrong.
A Real-World HOC Problem
To understand the drawbacks of HOCs, I've created an example demo that nests some of the HOCs that we created in the previous tutorial. We have four higher-order functions wrapping that single ContactList component. If the code doesn't make sense or if you haven't followed my previous tutorial, here is a brief summary of how it works.
- withRouter is a HOC that's part of the react-router package. It gives you access to the history object's properties and passes them down as props.
- withAuth looks for an authenticated prop and, if authenticated is true, renders the WrappedComponent. If authenticated is false, it pushes '/login' to the history object.
- withGenericContainer accepts an object as an input in addition to the WrappedComponent. The GenericContainer makes API calls, stores the result in the state, and then sends the data to the wrapped component as props.
- withLoader is a HOC that attaches a loading indicator. The indicator spins until the fetched data reaches the state.
BestPracticesDemo.jsx
class BestPracticesDemo extends Component {
    render() {
        return(
            <div className="contactApp">
                <ExtendedContactList authenticated={true} {...this.props} />
            </div>
        )
    }
}

const ContactList = ({contacts}) => {
    return(
        <div>
            <ul>
                {contacts.map((contact) =>
                    <li key={contact.email}>
                        <img src={contact.photo} width="100px" height="100px" alt="presentation" />
                        <div className="contactData">
                            <h4>{contact.name}</h4>
                            <small>{contact.email}</small>
                            <br/><small>{contact.phone}</small>
                        </div>
                    </li>
                )}
            </ul>
        </div>
    )
}

const reqAPI = {reqUrl: 'https://demo1443058.mockable.io/users/', reqMethod: 'GET', resName: 'contacts'}

const ExtendedContactList = withRouter(
    withAuth(
        withGenericContainer(reqAPI)(
            withLoader('contacts')(ContactList))));

export default BestPracticesDemo;
Now you can see for yourself some of the common pitfalls of higher-order components. Let's discuss some of them in detail.
Basic Dos and Don'ts
Don't Forget to Spread the Props in Your HOC
Assume that we have an authenticated = { this.state.authenticated } prop at the top of the composition hierarchy. We know that this is an important prop and that it should make it all the way to the presentational component. However, imagine that an intermediate HOC, such as withGenericContainer, decided to ignore all its props.
//render method of withGenericContainer
render() {
    return(
        <WrappedComponent />
    )
}
This is a very common mistake that you should try to avoid while writing higher-order components. Someone who isn't acquainted with HOCs might find it hard to figure out why all the props are missing because it would be hard to isolate the problem. So, always remember to spread the props in your HOC.
//The right way
render() {
    return(
        <WrappedComponent {...this.props} {...this.state} />
    )
}
Don't Pass Down Props That Have No Existence Beyond the Scope of the HOC
A HOC might introduce new props that the WrappedComponent might not have any use for. In such cases, it's a good practice to pass down props that are only relevant to the composed components.
A higher-order component can accept data in two ways: either as the function's argument or as the component's prop. For instance, authenticated = { this.state.authenticated } is an example of a prop, whereas in withGenericContainer(reqAPI)(ContactList), we are passing the data as arguments.
Because withGenericContainer is a function, you can pass in as few or as many arguments as you like. In the example above, a config object is used to specify a component's data dependency. However, the contract between an enhanced component and the wrapped component is strictly through props.
So I recommend filling in the static-time data dependencies via the function parameters and passing dynamic data as props. The authenticated prop is dynamic, because a user can be either authenticated or not depending on whether they are logged in, but we can be sure that the contents of the reqAPI object are not going to change dynamically.
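To see that distinction without any React machinery, here is a minimal, React-free sketch. Note that withGreeting, Name, and salutation are hypothetical names invented for this illustration; a "component" here is just a plain render function.

```javascript
// A hypothetical HOC factory: static config is fixed once via the
// function argument, while dynamic data keeps flowing through as props.
const withGreeting = (config) => (render) => (props) =>
  render({ ...props, greeting: `${config.salutation}, ${props.name}!` });

// A "component" modelled as a plain render function.
const Name = (props) => props.greeting;

// Static config is baked in at composition time...
const Enhanced = withGreeting({ salutation: 'Hello' })(Name);

// ...while dynamic data arrives per call, like props per render.
console.log(Enhanced({ name: 'Ada' })); // "Hello, Ada!"
```

The config object never changes after composition, but each call can carry different props, which mirrors the reqAPI-versus-authenticated split above.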
Don’t Use HOCs Inside the Render Method
Here is an example that you should avoid at all costs.
var OriginalComponent = () => <p>Hello world.</p>;

class App extends React.Component {
    render() {
        return React.createElement(enhanceComponent(OriginalComponent));
    }
};
Apart from the performance issues, you will lose the state of the OriginalComponent and all of its children on each render. To solve this problem, move the HOC declaration outside the render method so that it is only created once and the render method always returns the same EnhancedComponent.
var OriginalComponent = () => <p>Hello world.</p>;
var EnhancedComponent = enhanceComponent(OriginalComponent);

class App extends React.Component {
    render() {
        return React.createElement(EnhancedComponent);
    }
};
Don't Mutate the Wrapped Component
Mutating the wrapped component inside a HOC makes it impossible to use the wrapped component outside the HOC. If your HOC returns the WrappedComponent itself, you can almost always be sure that you're doing it wrong. The example below demonstrates the difference between mutation and composition.
function logger(WrappedComponent) {
    WrappedComponent.prototype.componentWillReceiveProps = function(nextProps) {
        console.log('Current props: ', this.props);
        console.log('Next props: ', nextProps);
    };
    // We're returning the WrappedComponent rather than composing it
    return WrappedComponent;
}
Composition is one of React's fundamental characteristics. You can have a component wrapped inside another component in its render function, and that's what you call composition.
function logger(WrappedComponent) {
    return class extends Component {
        componentWillReceiveProps(nextProps) {
            console.log('Current props: ', this.props);
            console.log('Next props: ', nextProps);
        }
        render() {
            // Wraps the input component in a container, without mutating it. Good!
            return <WrappedComponent {...this.props} />;
        }
    }
}
Moreover, if you mutate the WrappedComponent inside a HOC and then wrap the enhanced component using another HOC, the changes made by the first HOC will be overridden. To avoid such scenarios, you should stick to composing components rather than mutating them.
Namespace Generic Prop Names
The importance of namespacing prop names is evident when you have multiple HOCs stacked up. A HOC might push a prop into the WrappedComponent whose name is already being used by another higher-order component.
import React, { Component } from 'react';

const withMouse = (WrappedComponent) => {
    return class withMouse extends Component {
        constructor(props) {
            super(props);
            this.state = {
                name: 'Mouse'
            }
        }
        render() {
            return(
                <WrappedComponent {...this.props} name={this.state.name} />
            );
        }
    }
}

const withCat = (WrappedComponent) => {
    return class withCat extends Component {
        render() {
            return(
                <WrappedComponent {...this.props} name="Cat" />
            )
        }
    }
}

const NameComponent = ({name}) => {
    return(
        <div> {name} </div>
    )
}

const App = () => {
    const EnhancedComponent = withMouse(withCat(NameComponent));
    return(
        <div>
            <EnhancedComponent />
        </div>
    )
}

export default App;
Both withMouse and withCat are trying to push their own version of the name prop. What if the EnhancedComponent also had to share some props with the same name?
<EnhancedComponent name="This is important" />
Wouldn't it be a source of confusion and misdirection for the end developer? The React Devtools don't report any name conflicts, and you will have to look into the HOC implementation details to understand what went wrong.
This can be solved by scoping prop names, as a convention, via the HOC that provides them. So you would have withCat_name and withMouse_name instead of a generic name prop.
Another interesting thing to note here is that the ordering of your properties matters in React. When the same property is declared multiple times, resulting in a name conflict, the last declaration always wins. In the above example, the Cat wins, since its name is placed after { ...this.props }.
If you would prefer to resolve the name conflict some other way, you can reorder the properties and spread this.props last. This way, you can set sensible defaults that suit your project.
Make Debugging Easier Using a Meaningful Display Name
The components created by a HOC show up in the React DevTools as normal components, and it's hard to distinguish between the two. You can ease debugging by providing a meaningful displayName for the higher-order component. Wouldn't it be sensible to have something like this in the React DevTools?
<withMouse(withCat(NameComponent)) > ... </withMouse(withCat(NameComponent))>
So what is displayName? Each component has a displayName property that you can use for debugging purposes. The most popular technique is to wrap the display name of the WrappedComponent. If withCat is the HOC and NameComponent is the WrappedComponent, then the displayName will be withCat(NameComponent).
const withMouse = (WrappedComponent) => {
    class withMouse extends Component {
        /* */
    }
    withMouse.displayName = `withMouse(${getDisplayName(WrappedComponent)})`;
    return withMouse;
}

const withCat = (WrappedComponent) => {
    class withCat extends Component {
        /* */
    }
    withCat.displayName = `withCat(${getDisplayName(WrappedComponent)})`;
    return withCat;
}

function getDisplayName(WrappedComponent) {
    return WrappedComponent.displayName || WrappedComponent.name || 'Component';
}
An Alternative to Higher-Order Components
Although Mixins are gone, it would be misleading to say higher-order components are the only pattern out there that allows code sharing and abstraction. Another alternative pattern has emerged, and I've heard some say it's better than HOCs. It's beyond the scope of this tutorial to cover the concept in depth, but I will introduce you to render props and some basic examples that demonstrate why they are useful.
Render props are referred to by a number of different names:
- render prop
- children prop
- function as a child
- render callback
Here is a quick example that should explain how a render prop works.
class Mouse extends Component {
    constructor() {
        super();
        this.state = {
            name: "Nibbles"
        }
    }
    render() {
        return(
            <div>
                {this.props.children(this.state)}
            </div>
        )
    }
}

class App extends Component {
    render() {
        return(
            <Mouse>
                {(mouse) => <div> The name of the mouse is {mouse.name} </div>}
            </Mouse>
        )
    }
}
As you can see, we've gotten rid of the higher-order functions. We have a regular component called Mouse. Instead of rendering a wrapped component in its render method, it renders this.props.children() and passes in the state as an argument. So we are giving Mouse a render prop, and the render prop decides what should be rendered.
In other words, the Mouse component accepts a function as the value of the children prop. When Mouse renders, it calls that function with its state, and the render prop function can use it however it pleases.
There are a few things I like about this pattern:
- From a readability perspective, it's more evident where a prop is coming from.
- This pattern is dynamic and flexible. HOCs are composed statically; although I've never found that to be a limitation, render props are composed dynamically, which makes them more flexible.
- Simplified component composition. You could say goodbye to nesting multiple HOCs.
Conclusion
Higher-order components are a pattern that you can use to build robust, reusable components in React. If you're going to use HOCs, there are a few ground rules you should follow so that you don't regret the decision of using them later on. I've summarized most of the best practices in this tutorial.
HOCs are not the only patterns that are popular today. Towards the end of the tutorial, I've introduced you to another pattern called render props that is gaining ground among React developers.
I won't judge a pattern and say that this one is better than another. As React grows, and the ecosystem that surrounds it matures, more and more patterns will emerge. In my opinion, you should learn them all and stick with the one that suits your style and that you're comfortable with.
This also marks the end of the tutorial series on higher-order components. We've gone from ground zero to mastering an advanced technique called HOC. If I missed anything or if you have suggestions/thoughts, I would love to hear them. You can post them in the comments.
Tuesday, February 27, 2018
Storing Data Securely on Android
An app's credibility today highly depends on how the user's private data is managed. The Android stack has many powerful APIs surrounding credential and key storage, with specific features only available in certain versions. This short series will start off with a simple approach to get up and running by looking at the storage system and how to encrypt and store sensitive data via a user-supplied passcode. In the second tutorial, we will look at more complex ways of protecting keys and credentials.
The Basics
The first question to think about is how much data you actually need to acquire. A good approach is to avoid storing private data if you don't really have to.
For data that you must store, the Android architecture is ready to help. Since 6.0 Marshmallow, full-disk encryption is enabled by default for devices with the capability. Files and SharedPreferences that are saved by the app are automatically set with the MODE_PRIVATE constant. This means the data can be accessed only by your own app.
It's a good idea to stick to this default. You can set it explicitly when saving a shared preference.
SharedPreferences.Editor editor = getSharedPreferences("preferenceName", MODE_PRIVATE).edit();
editor.putString("key", "value");
editor.commit();
Or when saving a file.
FileOutputStream fos = openFileOutput(filenameString, Context.MODE_PRIVATE);
fos.write(data);
fos.close();
Avoid storing data on external storage, as the data is then visible to other apps and users. In fact, to make it harder for people to copy your app binary and data, you can prevent users from installing the app on external storage. Adding android:installLocation with a value of internalOnly to the manifest file will accomplish that.
You can also prevent the app and its data from being backed up. This also prevents the contents of an app's private data directory from being downloaded using adb backup. To do so, set the android:allowBackup attribute to false in the manifest file. By default, this attribute is set to true.
These are best practices, but they won't work for a compromised or rooted device, and disk encryption is only useful when the device is secured with a lock screen. This is where having an app-side password that protects its data with encryption is beneficial.
Securing User Data With a Password
Conceal is a great choice for an encryption library because it gets you up and running very quickly without having to worry about the underlying details. However, an exploit targeted for a popular framework will simultaneously affect all of the apps that rely on it.
It's also important to be knowledgeable about how encryption systems work in order to be able to tell if you're using a particular framework securely. So, for this post we are going to get our hands dirty by looking at the cryptography provider directly.
AES and Password-Based Key Derivation
We will use the recommended AES standard, which encrypts data given a key. The same key used to encrypt the data is used to decrypt the data, which is called symmetric encryption. There are different key sizes, and AES256 (256 bits) is the preferred length for use with sensitive data.
While the user experience of your app should force a user to use a strong passcode, there is a chance that the same passcode will also be chosen by another user. Putting the security of our encrypted data in the hands of the user is not safe. Our data needs to be secured instead with a key that is random and large enough (i.e. one that has enough entropy) to be considered strong. This is why it's never recommended to use a password directly to encrypt data. That is where a function called a Password-Based Key Derivation Function (PBKDF2) comes into play.
PBKDF2 derives a key from a password by hashing it many times over with a salt. This is called key stretching. The salt is just a random sequence of data, and it makes the derived key unique even if the same password was used by someone else. Let's start by generating that salt.
SecureRandom random = new SecureRandom();
byte[] salt = new byte[256];
random.nextBytes(salt);
The SecureRandom class guarantees that the generated output will be hard to predict: it is a "cryptographically strong random number generator". We can now put the salt and password into a password-based encryption object: PBEKeySpec. The object's constructor also takes an iteration count that makes the key stronger. This is because increasing the number of iterations expands the time it would take to operate on a set of keys during a brute-force attack. The PBEKeySpec then gets passed into the SecretKeyFactory, which finally generates the key as a byte[] array. We will wrap that raw byte[] array in a SecretKeySpec object.
char[] passwordChar = passwordString.toCharArray(); //Turn password into char[] array
PBEKeySpec pbKeySpec = new PBEKeySpec(passwordChar, salt, 1324, 256); //1324 iterations
SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
byte[] keyBytes = secretKeyFactory.generateSecret(pbKeySpec).getEncoded();
SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");
Note that the password is passed as a char[] array, and the PBEKeySpec class stores it as a char[] array as well. char[] arrays are usually used for encryption functions because, while the String class is immutable, a char[] array containing sensitive information can be overwritten, thus removing the sensitive data entirely from the device's physical RAM.
Initialization Vectors
We are now ready to encrypt the data, but we have one more thing to do. There are different modes of encryption with AES, but we'll be using the recommended one: cipher block chaining (CBC). This operates on our data one block at a time. The great thing about this mode is that each next unencrypted block of data is XOR’d with the previous encrypted block to make the encryption stronger. However, that means the first block is never as unique as all the others!
If a message to be encrypted were to start off the same as another message to be encrypted, the beginning encrypted output would be the same, and that would give an attacker a clue to figuring out what the message might be. The solution is to use an initialization vector (IV).
An IV is just a block of random bytes that will be XOR'd with the first block of user data. Since each block depends on all blocks processed up until that point, the entire message will be encrypted uniquely; identical messages encrypted with the same key will not produce identical results. Let's create an IV now.
SecureRandom ivRandom = new SecureRandom(); //not caching previous seeded instance of SecureRandom
byte[] iv = new byte[16];
ivRandom.nextBytes(iv);
IvParameterSpec ivSpec = new IvParameterSpec(iv);
A note about SecureRandom: on Android versions 4.3 and under, the Java Cryptography Architecture had a vulnerability due to improper initialization of the underlying pseudorandom number generator (PRNG). If you are targeting versions 4.3 and under, a fix is available.
Encrypting the Data
Armed with an IvParameterSpec, we can now do the actual encryption.
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding");
cipher.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
byte[] encrypted = cipher.doFinal(plainTextBytes);
Here we pass in the string "AES/CBC/PKCS7Padding". This specifies AES encryption with cipher block chaining. The last part of the string refers to PKCS7, which is an established standard for padding data that doesn't fit perfectly into the block size. (Blocks are 128 bits, and padding is done before encryption.)
To complete our example, we will put this code in an encrypt method that packages the result into a HashMap containing the encrypted data, along with the salt and initialization vector necessary for decryption.
private HashMap<String, byte[]> encryptBytes(byte[] plainTextBytes, String passwordString) {
    HashMap<String, byte[]> map = new HashMap<String, byte[]>();
    try {
        //Random salt for next step
        SecureRandom random = new SecureRandom();
        byte[] salt = new byte[256];
        random.nextBytes(salt);

        //PBKDF2 - derive the key from the password, don't use passwords directly
        char[] passwordChar = passwordString.toCharArray(); //Turn password into char[] array
        PBEKeySpec pbKeySpec = new PBEKeySpec(passwordChar, salt, 1324, 256); //1324 iterations
        SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] keyBytes = secretKeyFactory.generateSecret(pbKeySpec).getEncoded();
        SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");

        //Create initialization vector for AES
        SecureRandom ivRandom = new SecureRandom(); //not caching previous seeded instance of SecureRandom
        byte[] iv = new byte[16];
        ivRandom.nextBytes(iv);
        IvParameterSpec ivSpec = new IvParameterSpec(iv);

        //Encrypt
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding");
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
        byte[] encrypted = cipher.doFinal(plainTextBytes);

        map.put("salt", salt);
        map.put("iv", iv);
        map.put("encrypted", encrypted);
    } catch(Exception e) {
        Log.e("MYAPP", "encryption exception", e);
    }
    return map;
}
The Decryption Method
You only need to store the IV and salt with your data. While salts and IVs are considered public, make sure they are not sequentially incremented or reused. To decrypt the data, all we need to do is change the mode passed to Cipher.init() from ENCRYPT_MODE to DECRYPT_MODE. The decrypt method will take a HashMap that contains the same required information (encrypted data, salt, and IV) and return a decrypted byte[] array, given the correct password. The decrypt method will regenerate the encryption key from the password. The key should never be stored!
private byte[] decryptData(HashMap<String, byte[]> map, String passwordString) {
    byte[] decrypted = null;
    try {
        byte[] salt = map.get("salt");
        byte[] iv = map.get("iv");
        byte[] encrypted = map.get("encrypted");

        //regenerate key from password
        char[] passwordChar = passwordString.toCharArray();
        PBEKeySpec pbKeySpec = new PBEKeySpec(passwordChar, salt, 1324, 256);
        SecretKeyFactory secretKeyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] keyBytes = secretKeyFactory.generateSecret(pbKeySpec).getEncoded();
        SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");

        //Decrypt
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding");
        IvParameterSpec ivSpec = new IvParameterSpec(iv);
        cipher.init(Cipher.DECRYPT_MODE, keySpec, ivSpec);
        decrypted = cipher.doFinal(encrypted);
    } catch(Exception e) {
        Log.e("MYAPP", "decryption exception", e);
    }
    return decrypted;
}
Testing the Encryption and Decryption
To keep the example simple, we are omitting error checking that would make sure the HashMap contains the required key-value pairs. We can now test our methods to ensure that the data is decrypted correctly after encryption.
//Encryption test
String string = "My sensitive string that I want to encrypt";
byte[] bytes = string.getBytes();
HashMap<String, byte[]> map = encryptBytes(bytes, "UserSuppliedPassword");

//Decryption test
byte[] decrypted = decryptData(map, "UserSuppliedPassword");
if (decrypted != null) {
    String decryptedString = new String(decrypted);
    Log.e("MYAPP", "Decrypted String is : " + decryptedString);
}
The methods use a byte[] array so that you can encrypt arbitrary data instead of only String objects.
Saving Encrypted Data
Now that we have an encrypted byte[] array, we can save it to storage.
FileOutputStream fos = openFileOutput("test.dat", Context.MODE_PRIVATE);
fos.write(encrypted);
fos.close();
If you don't want to save the IV and salt separately, HashMap is serializable with the ObjectInputStream and ObjectOutputStream classes.
FileOutputStream fos = openFileOutput("map.dat", Context.MODE_PRIVATE);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(map);
oos.close();
Saving Secure Data to SharedPreferences
You can also save secure data to your app's SharedPreferences.
SharedPreferences.Editor editor = getSharedPreferences("prefs", Context.MODE_PRIVATE).edit();
String keyBase64String = Base64.encodeToString(encryptedKey, Base64.NO_WRAP);
String valueBase64String = Base64.encodeToString(encryptedValue, Base64.NO_WRAP);
editor.putString(keyBase64String, valueBase64String);
editor.commit();
Since SharedPreferences is an XML system that accepts only specific primitives and objects as values, we need to convert our data into a compatible format, such as a String object. Base64 lets us convert the raw data into a String representation that contains only the characters allowed by the XML format. Encrypt both the key and the value so an attacker can't figure out what a value might be for. In the example above, encryptedKey and encryptedValue are both encrypted byte[] arrays returned from our encryptBytes() method. The IV and salt can be saved into the preferences file or as a separate file. To get back the encrypted bytes from the SharedPreferences, we apply a Base64 decode on the stored String.
SharedPreferences preferences = getSharedPreferences("prefs", Context.MODE_PRIVATE);
String base64EncryptedString = preferences.getString(keyBase64String, "default");
byte[] encryptedBytes = Base64.decode(base64EncryptedString, Base64.NO_WRAP);
Wiping Insecure Data From Old Versions
Now that the stored data is secure, it may be the case that a previous version of your app stored the same data insecurely. On an upgrade, the old data can be wiped and re-encrypted.

In theory, you can just delete your shared preferences by removing the /data/data/com.your.package.name/shared_prefs/your_prefs_name.xml and your_prefs_name.bak files, and clearing the in-memory preferences with the following code:

getSharedPreferences("prefs", Context.MODE_PRIVATE).edit().clear().commit();

However, instead of attempting to wipe the old data and hoping that it works, it's better to encrypt it in the first place! This is especially true for solid-state drives, which often spread data writes across different regions to prevent wear. That means that even if you overwrite a file in the filesystem, the physical solid-state memory might preserve your data in its original location. The following code wipes over a file using random data.
public static void secureWipeFile(File file) throws IOException {
    if (file != null && file.exists()) {
        final long length = file.length();
        final SecureRandom random = new SecureRandom();
        final RandomAccessFile randomAccessFile = new RandomAccessFile(file, "rws");
        randomAccessFile.seek(0);
        byte[] data = new byte[64];
        int position = 0;
        while (position < length) {
            random.nextBytes(data);
            randomAccessFile.write(data);
            position += data.length;
        }
        randomAccessFile.close();
        file.delete();
    }
}
Conclusion
That wraps up our tutorial on storing encrypted data. In this post, you learned how to securely encrypt and decrypt sensitive data with a user-supplied password. It's easy to do when you know how, but it's important to follow all the best practices to ensure your users' data is truly secure.
In the next post, we will take a look at how to leverage the KeyStore and other credential-related APIs to store items safely. In the meantime, check out some of our other great articles on Android app development.
- Android SDK: Showing Material Design Dialogs in an Android App (Chike Mgbemena)
- Android SDK: Sending Data With Retrofit 2 HTTP Client for Android (Chike Mgbemena)
- Android SDK: How to Create an Android Chat App Using Firebase (Ashraff Hathibelagal)
Eloquent Mutators and Accessors in Laravel
In this article, we'll go through mutators and accessors of the Eloquent ORM in the Laravel web framework. After the introduction, we'll go through a handful of examples to understand these concepts.
In Laravel, mutators and accessors allow you to alter data before it's saved to and fetched from a database. To be specific, the mutator allows you to alter data before it's saved to a database. On the other hand, the accessor allows you to alter data after it's fetched from a database.
In fact, the Laravel model is the central place where you can create mutator and accessor methods. And of course, it's nice to have all your modifications in a single place rather than scattered over different places.
Create Accessors and Mutators in a Model Class
As you're familiar with the basic concept of mutators and accessors now, we'll go ahead and develop a real-world example to demonstrate it.
I assume that you're aware of the Eloquent model in Laravel, and we'll use the Post model as the starting point of our example. If you haven't created the Post model yet, let's use the artisan command to create it.
php artisan make:model Post --migration
That should create a model file at app/Post.php, as shown below.
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    //
}
Let's replace the contents of that file with the following.
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    /**
     * The attributes that should be mutated to dates.
     *
     * @var array
     */
    protected $dates = [
        'created_at',
        'updated_at',
        'published_at'
    ];

    /**
     * Get the post title.
     *
     * @param  string  $value
     * @return string
     */
    public function getNameAttribute($value)
    {
        return ucfirst($value);
    }

    /**
     * Set the post title.
     *
     * @param  string  $value
     * @return string
     */
    public function setNameAttribute($value)
    {
        $this->attributes['name'] = strtolower($value);
    }
}
As we've used the --migration option, it should also create an associated database migration. Just in case you are not aware, you can run the following command so that it actually creates a table in the database.
php artisan migrate
In order to run the examples in this article, you need to create name and published_at columns in the posts table. We won't go into the details of migrations, as that's out of the scope of this article, so let's get back to the methods we're interested in.
Firstly, let's go through the mutator method.
/**
 * Set the post title.
 *
 * @param string $value
 * @return void
 */
public function setNameAttribute($value)
{
    $this->attributes['name'] = strtolower($value);
}
As we discussed earlier, mutators are used to alter data before it's saved to the database. As you can see, the naming convention for a mutator method is set{AttributeName}Attribute, where {AttributeName} is the studly-cased name of the column.

The setNameAttribute method is called before the value of the name attribute is saved to the database. To keep things simple, we've just used the strtolower function to convert the post title to lowercase before it's saved.
In this way, you could create mutator methods on all columns of your table. Next, let's go through the accessor method.
If mutators are used to alter data before it's saved to a database, the accessor method is used to alter data after it's fetched from a database. The naming convention of the accessor method is the same as that of the mutator, except that it begins with the get prefix instead of set.
Let's go through the accessor method getNameAttribute.
/**
 * Get the post title.
 *
 * @param string $value
 * @return string
 */
public function getNameAttribute($value)
{
    return ucfirst($value);
}
The getNameAttribute method is called after the value of the name attribute is fetched from the database. In our case, we've just used the ucfirst function to capitalize the post title.
And that's the way you are supposed to use accessors in your models. So far, we've just created mutator and accessor methods, and we'll test those in the upcoming section.
Mutators in Action
Let's create a controller at app/Http/Controllers/MutatorController.php so that we can test the mutator method we created in the earlier section.
<?php
// app/Http/Controllers/MutatorController.php

namespace App\Http\Controllers;

use App\Post;
use App\Http\Controllers\Controller;

class MutatorController extends Controller
{
    public function index()
    {
        // create a new post object
        $post = new Post;
        $post->setAttribute('name', 'Post title');
        $post->save();
    }
}
Also, you need to create an associated route in the routes/web.php file to access it.
Route::get('mutator/index', 'MutatorController@index');
In the index method, we're creating a new post using the Post model. The value of the name column should be saved in lowercase as "post title", since the setNameAttribute mutator method runs it through the strtolower function.
Date Mutators
In addition to the mutators we discussed earlier, the Eloquent model provides a couple of special mutators that allow you to alter data. For example, the Eloquent model in Laravel comes with a special $dates property that allows you to automatically convert the desired columns to Carbon date instances.

At the beginning of this article, we created the Post model, and the following code was part of that class.
...
...
/**
 * The attributes that should be mutated to dates.
 *
 * @var array
 */
protected $dates = [
    'created_at',
    'updated_at',
    'published_at'
];
...
...
As you probably know, Laravel migrations create two date-related fields, created_at and updated_at, by default, and Eloquent converts those values to Carbon date instances as well.

Let's assume that you have a couple of other fields in a table that you would like to treat as date fields. In that case, you just need to add the column names to the $dates array.

As you can see in the code above, we've added the published_at column to the $dates array, which ensures that its value will be converted to a Carbon date instance.
Accessors in Action
To see accessors in action, let's go ahead and create a controller file app/Http/Controllers/AccessorController.php with the following contents.
<?php

namespace App\Http\Controllers;

use App\Post;
use App\Http\Controllers\Controller;

class AccessorController extends Controller
{
    public function index()
    {
        // load post
        $post = Post::find(1);

        // check the name property
        echo $post->name;

        // check the date property
        echo $post->published_at;

        // as we've mutated the published_at column to a Carbon date, we can use the following as well
        echo $post->published_at->getTimestamp();

        exit;
    }
}
Also, let's create an associated route in the routes/web.php file to access it.
Route::get('accessor/index', 'AccessorController@index');
In the index method, we've used the Post model to load an example post in the first place.

Next, we inspect the value of the name column; it should start with an uppercase letter, as we've already defined the accessor method getNameAttribute for that column.

Moving further, we inspect the value of the published_at column, which should be treated as a date. Laravel converts it to a Carbon instance, so you can use all the utility methods provided by that library. In our case, we've used the getTimestamp method to convert the date into a Unix timestamp.
And that brings us to the end of this article!
Conclusion
Today, we've explored the concepts of mutators and accessors of the Eloquent ORM in Laravel. It provides a nice way to alter data before it's saved to and fetched from a database.
For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.
Don't hesitate to share your thoughts using the feedback form below!
Wednesday, February 21, 2018
JSON Serialization With Golang
Overview
JSON is one of the most popular serialization formats. It is human readable, reasonably concise, and can be parsed easily by any web application using JavaScript. Go as a modern programming language has first-class support for JSON serialization in its standard library.
But there are some nooks and crannies. In this tutorial you'll learn how to effectively serialize and deserialize arbitrary as well as structured data to/from JSON. You will also learn how to deal with advanced scenarios such as serializing enums.
The json Package
Go supports several serialization formats in the encoding package of its standard library. One of these is the popular JSON format. You serialize Go values into a slice of bytes using the Marshal() function. You deserialize a slice of bytes into a Go value using the Unmarshal() function. It's that simple. The following terms are equivalent in the context of this article:
- Serialization/Encoding/Marshalling
- Deserialization/Decoding/Unmarshalling
I prefer serialization because it reflects the fact that you convert a potentially hierarchical data structure to/from a stream of bytes.
Marshal
The Marshal() function can take anything, which in Go means the empty interface, and it returns a slice of bytes and an error. Here is the signature:
func Marshal(v interface{}) ([]byte, error)
If Marshal() fails to serialize the input value, it will return a non-nil error. Marshal() has some strict limitations (we'll see later how to overcome them with custom marshallers):
- Map keys must be strings.
- Map values must be types serializable by the json package.
- The following types are not supported: Channel, complex, and function.
- Cyclic data structures are not supported.
- Pointers will be encoded (and later decoded) as the values they point to (or 'null' if the pointer is nil).
Unmarshal
The Unmarshal() function takes a byte slice that hopefully represents valid JSON and a destination interface, which is typically a pointer to a struct or basic type. It deserializes the JSON into the interface in a generic way. If deserialization fails, it returns an error. Here is the signature:
func Unmarshal(data []byte, v interface{}) error
Serializing Simple Types
You can easily serialize simple types like integers using the json package. The result will not be a full-fledged JSON object, but a simple string. Here the int 5 is serialized to the byte slice [53], which corresponds to the string "5".
// Serialize int
var x = 5
bytes, err := json.Marshal(x)
if err != nil {
    fmt.Println("Can't serialize", x)
}
fmt.Printf("%v => %v, '%v'\n", x, bytes, string(bytes))

// Deserialize int
var r int
err = json.Unmarshal(bytes, &r)
if err != nil {
    fmt.Println("Can't deserialize", bytes)
}
fmt.Printf("%v => %v\n", bytes, r)

Output:

5 => [53], '5'
[53] => 5
If you try to serialize unsupported types like a function, you'll get an error:
// Trying to serialize a function
foo := func() {
    fmt.Println("foo() here")
}
bytes, err = json.Marshal(foo)
if err != nil {
    fmt.Println(err)
}

Output:

json: unsupported type: func()
Serializing Arbitrary Data With Maps
The power of JSON is that it can represent arbitrary hierarchical data very well. The JSON package supports it and utilizes the generic empty interface (interface{}) to represent any JSON hierarchy. Here is an example of deserializing and later serializing a binary tree where each node has an int value and two branches, left and right, which may contain another node or be null.
The JSON null is equivalent to the Go nil. As you can see in the output, the json.Unmarshal() function successfully converted the JSON blob to a Go data structure consisting of nested maps of interfaces (note that in this generic mode, JSON numbers are decoded as float64). The json.Marshal() function then successfully serialized the resulting nested object back to the same JSON representation.
// Arbitrary nested JSON
dd := `
{
  "value": 3,
  "left": {
    "value": 1,
    "left": null,
    "right": {
      "value": 2,
      "left": null,
      "right": null
    }
  },
  "right": {
    "value": 4,
    "left": null,
    "right": null
  }
}`

var obj interface{}
err = json.Unmarshal([]byte(dd), &obj)
if err != nil {
    fmt.Println(err)
} else {
    fmt.Println("--------\n", obj)
}

data, err = json.Marshal(obj)
if err != nil {
    fmt.Println(err)
} else {
    fmt.Println("--------\n", string(data))
}

Output:

--------
map[right:map[value:4 left:<nil> right:<nil>] value:3 left:map[left:<nil> right:map[value:2 left:<nil> right:<nil>] value:1]]
--------
{"left":{"left":null,"right":{"left":null,"right":null,"value":2},"value":1},"right":{"left":null,"right":null,"value":4},"value":3}
To traverse the generic maps of interfaces, you'll need to use type assertions. For example:
func dump(obj interface{}) error {
    if obj == nil {
        fmt.Println("nil")
        return nil
    }
    switch obj.(type) {
    case bool:
        fmt.Println(obj.(bool))
    case int:
        fmt.Println(obj.(int))
    case float64:
        fmt.Println(obj.(float64))
    case string:
        fmt.Println(obj.(string))
    case map[string]interface{}:
        for k, v := range obj.(map[string]interface{}) {
            fmt.Printf("%s: ", k)
            err := dump(v)
            if err != nil {
                return err
            }
        }
    default:
        return errors.New(fmt.Sprintf("Unsupported type: %v", obj))
    }
    return nil
}
Serializing Structured Data
Working with structured data is often the better choice. Go provides excellent support for serializing JSON to/from structs via struct tags. Let's create a struct that corresponds to our JSON tree and a smarter Dump() function that prints it:
type Tree struct {
    value int
    left  *Tree
    right *Tree
}

func (t *Tree) Dump(indent string) {
    fmt.Println(indent+"value:", t.value)
    fmt.Print(indent + "left: ")
    if t.left == nil {
        fmt.Println(nil)
    } else {
        fmt.Println()
        t.left.Dump(indent + "  ")
    }
    fmt.Print(indent + "right: ")
    if t.right == nil {
        fmt.Println(nil)
    } else {
        fmt.Println()
        t.right.Dump(indent + "  ")
    }
}
This is great and much cleaner than the arbitrary JSON approach. But does it work? Not really. There is no error, but our tree object is not getting populated by the JSON.
jsonTree := `
{
  "value": 3,
  "left": {
    "value": 1,
    "left": null,
    "right": {
      "value": 2,
      "left": null,
      "right": null
    }
  },
  "right": {
    "value": 4,
    "left": null,
    "right": null
  }
}`

var tree Tree
err = json.Unmarshal([]byte(jsonTree), &tree)
if err != nil {
    fmt.Printf("- Can't deserialize tree, error: %v\n", err)
} else {
    tree.Dump("")
}

Output:

value: 0
left: <nil>
right: <nil>
The problem is that the Tree fields are private. JSON serialization works on public (exported) fields only. So let's make the struct fields public. The json package is smart enough to transparently match the lowercase keys "value", "left", and "right" to their corresponding uppercase field names.
type Tree struct {
    Value int   `json:"value"`
    Left  *Tree `json:"left"`
    Right *Tree `json:"right"`
}

Output:

value: 3
left:
  value: 1
  left: <nil>
  right:
    value: 2
    left: <nil>
    right: <nil>
right:
  value: 4
  left: <nil>
  right: <nil>
The json package will silently ignore unmapped keys in the JSON as well as private fields in your struct. But sometimes you may want to map a specific key in the JSON to a field with a different name in your struct. You can use struct tags for that. For example, suppose we add another field called "label" to the JSON, but we need to map it to a field called "Tag" in our struct.
type Tree struct {
    Value int
    Tag   string `json:"label"`
    Left  *Tree
    Right *Tree
}

func (t *Tree) Dump(indent string) {
    fmt.Println(indent+"value:", t.Value)
    if t.Tag != "" {
        fmt.Println(indent+"tag:", t.Tag)
    }
    fmt.Print(indent + "left: ")
    if t.Left == nil {
        fmt.Println(nil)
    } else {
        fmt.Println()
        t.Left.Dump(indent + "  ")
    }
    fmt.Print(indent + "right: ")
    if t.Right == nil {
        fmt.Println(nil)
    } else {
        fmt.Println()
        t.Right.Dump(indent + "  ")
    }
}
Here is the new JSON with the root node of the tree labeled as "root", serialized properly into the Tag field and printed in the output:
dd := `
{
  "label": "root",
  "value": 3,
  "left": {
    "value": 1,
    "left": null,
    "right": {
      "value": 2,
      "left": null,
      "right": null
    }
  },
  "right": {
    "value": 4,
    "left": null,
    "right": null
  }
}`

var tree Tree
err = json.Unmarshal([]byte(dd), &tree)
if err != nil {
    fmt.Printf("- Can't deserialize tree, error: %v\n", err)
} else {
    tree.Dump("")
}

Output:

value: 3
tag: root
left:
  value: 1
  left: <nil>
  right:
    value: 2
    left: <nil>
    right: <nil>
right:
  value: 4
  left: <nil>
  right: <nil>
Writing a Custom Marshaller
You will often want to serialize objects that don't conform to the strict requirements of the Marshal() function. For example, you may want to serialize a map with int keys. In these cases, you can write a custom marshaller/unmarshaller by implementing the Marshaler and Unmarshaler interfaces.
A note about spelling: In Go, the convention is to name an interface with a single method by appending the "er" suffix to the method name. So, even though the more common spelling is "Marshaller" (with double L), the interface name is just "Marshaler" (single L).
Here are the Marshaler and Unmarshaler interfaces:
type Marshaler interface { MarshalJSON() ([]byte, error) } type Unmarshaler interface { UnmarshalJSON([]byte) error }
You must create a named type when doing custom serialization, even if you want to serialize a built-in type or a composition of built-in types like map[int]string. Here I define a type called IntStringMap and implement the Marshaler and Unmarshaler interfaces for this type.

The MarshalJSON() method creates a map[string]string, converts each of its own int keys to a string, and serializes the map with string keys using the standard json.Marshal() function.
type IntStringMap map[int]string

func (m *IntStringMap) MarshalJSON() ([]byte, error) {
    ss := map[string]string{}
    for k, v := range *m {
        i := strconv.Itoa(k)
        ss[i] = v
    }
    return json.Marshal(ss)
}
The UnmarshalJSON() method does the exact opposite. It deserializes the data byte slice into a map[string]string and then converts each string key to an int and populates itself.
func (m *IntStringMap) UnmarshalJSON(data []byte) error {
    ss := map[string]string{}
    err := json.Unmarshal(data, &ss)
    if err != nil {
        return err
    }

    // make sure the map is initialized before populating it
    if *m == nil {
        *m = IntStringMap{}
    }

    for k, v := range ss {
        i, err := strconv.Atoi(k)
        if err != nil {
            return err
        }
        (*m)[i] = v
    }
    return nil
}
Here is how to use it in a program:
m := IntStringMap{4: "four", 5: "five"}
data, err := m.MarshalJSON()
if err != nil {
    fmt.Println(err)
}
fmt.Println("IntStringMap to JSON: ", string(data))

m = IntStringMap{}
jsonString := []byte("{\"1\": \"one\", \"2\": \"two\"}")
m.UnmarshalJSON(jsonString)
fmt.Printf("IntStringMap from JSON: %v\n", m)
fmt.Println("m[1]:", m[1], "m[2]:", m[2])

Output:

IntStringMap to JSON:  {"4":"four","5":"five"}
IntStringMap from JSON: map[2:two 1:one]
m[1]: one m[2]: two
Serializing Enums
Go enums can be pretty vexing to serialize. The idea to write an article about Go JSON serialization came out of a question a colleague asked me about how to serialize enums. Here is a Go enum: the constants Zero and One are equal to the ints 0 and 1.
type EnumType int

const (
    Zero EnumType = iota
    One
)
While you may think of it as an int, and in many respects it is, serializing it directly just produces a bare number, not a meaningful name. For that, you must write a custom marshaler/unmarshaler. That's not a problem after the last section. The following MarshalJSON() and UnmarshalJSON() methods will serialize/deserialize the constants Zero and One to/from the corresponding strings "Zero" and "One".
func (e *EnumType) UnmarshalJSON(data []byte) error {
    var s string
    err := json.Unmarshal(data, &s)
    if err != nil {
        return err
    }

    value, ok := map[string]EnumType{"Zero": Zero, "One": One}[s]
    if !ok {
        return errors.New("Invalid EnumType value")
    }
    *e = value
    return nil
}

func (e *EnumType) MarshalJSON() ([]byte, error) {
    value, ok := map[EnumType]string{Zero: "Zero", One: "One"}[*e]
    if !ok {
        return nil, errors.New("Invalid EnumType value")
    }
    return json.Marshal(value)
}
Let's try to embed this EnumType in a struct and serialize it. The main function creates an EnumContainer and initializes it with a name of "Uno" and a value of our enum constant One, which is equal to the int 1.
type EnumContainer struct {
    Name  string
    Value EnumType
}

func main() {
    x := One
    ec := EnumContainer{
        "Uno",
        x,
    }
    s, err := json.Marshal(ec)
    if err != nil {
        fmt.Printf("fail!")
    }

    var ec2 EnumContainer
    err = json.Unmarshal(s, &ec2)
    fmt.Println(ec2.Name, ":", ec2.Value)
}

Output:

Uno : 0
The expected output is "Uno : 1", but instead it's "Uno : 0". What happened? There is no bug in the marshal/unmarshal code. Because the custom methods are defined on the pointer receiver *EnumType, they are bypassed when the enum is embedded by value, so you can't embed enums by value if you want to serialize them this way. You must embed a pointer to the enum. Here is a modified version that works as expected:
type EnumContainer struct {
    Name  string
    Value *EnumType
}

func main() {
    x := One
    ec := EnumContainer{
        "Uno",
        &x,
    }
    s, err := json.Marshal(ec)
    if err != nil {
        fmt.Printf("fail!")
    }

    var ec2 EnumContainer
    err = json.Unmarshal(s, &ec2)
    fmt.Println(ec2.Name, ":", *ec2.Value)
}

Output:

Uno : 1
Conclusion
Go provides many options for serializing and deserializing JSON. It's important to understand the ins and outs of the encoding/json package to take advantage of the power.
This tutorial put all the power in your hands, including how to serialize the elusive Go enums.
Go serialize some objects!
Tuesday, February 20, 2018
Introduction to Multiprocessing in Python
The multiprocessing package supports spawning processes using an API similar to the threading module. It also offers both local and remote concurrency. This tutorial will discuss multiprocessing in Python and how to use multiprocessing to communicate between processes and perform synchronization between processes, as well as logging.
Introduction to Multiprocessing
Multiprocessing works by creating a Process object and then calling its start() method, as shown below.
from multiprocessing import Process

def greeting():
    print('hello world')

if __name__ == '__main__':
    p = Process(target=greeting)
    p.start()
    p.join()
In the example code above, we first import the Process class and then instantiate the Process object with the greeting function which we want to run.
We then tell the process to begin using the start() method, and we finally complete the process with the join() method.
Additionally, you can also pass arguments to the function by providing the args keyword argument, like so:
from multiprocessing import Process

def greeting(name):
    print('hello' + ' ' + name)

if __name__ == '__main__':
    p = Process(target=greeting, args=('world',))
    p.start()
    p.join()
Example
Let's look at a more detailed example that covers all the concepts we have discussed above.
In this example, we are going to create a process that calculates the square of numbers and prints the results to the console.
from multiprocessing import Process

def square(numbers):
    for x in numbers:
        print('%s squared is %s' % (x, x**2))

if __name__ == '__main__':
    numbers = [43, 50, 5, 98, 34, 35]
    p = Process(target=square, args=(numbers,))
    p.start()
    p.join()
    print('Done')

#result
43 squared is 1849
50 squared is 2500
5 squared is 25
98 squared is 9604
34 squared is 1156
35 squared is 1225
Done
You can also create more than one process at the same time, as shown in the example below, in which process p1 gets the results of numbers squared, while the second process p2 checks if the given numbers are even.
from multiprocessing import Process

def square(numbers):
    for x in numbers:
        print('%s squared is %s' % (x, x**2))

def is_even(numbers):
    for x in numbers:
        if x % 2 == 0:
            print('%s is an even number ' % (x))

if __name__ == '__main__':
    numbers = [43, 50, 5, 98, 34, 35]
    p1 = Process(target=square, args=(numbers,))
    p2 = Process(target=is_even, args=(numbers,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('Done')

#result
43 squared is 1849
50 squared is 2500
5 squared is 25
98 squared is 9604
34 squared is 1156
35 squared is 1225
50 is an even number
98 is an even number
34 is an even number
Done
Communication Between Processes
Multiprocessing supports two types of communication channels between processes:
- Pipes
- Queues
Queues
Queue objects are used to pass data between processes. They can store any pickleable Python object, and you can use them as shown in the example below:
import multiprocessing

def is_even(numbers, q):
    for n in numbers:
        if n % 2 == 0:
            q.put(n)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=is_even, args=(range(20), q))
    p.start()
    p.join()

    while not q.empty():
        print(q.get())
In the above example, we first create a function that checks if a number is even and then put the result at the end of the queue. We then instantiate a queue object and a process object and begin the process.
Finally, we check if the queue is empty, and if not, we get the values from the front of the queue and print them to the console.
We have shown how to share data between two processes using a queue, and the result is as shown below.
# result 0 2 4 6 8 10 12 14 16 18
It's also important to note that Python has a separate queue module (Queue in Python 2), which is used to share data between threads within a single process, unlike the multiprocessing queue, which is used to share data between separate processes.
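To make that distinction concrete, here is a short sketch contrasting the two (Python 3 names; the worker function is illustrative):

```python
import queue
import threading
import multiprocessing

def worker(q):
    # works with both queue types: each exposes put()/get()
    q.put('from worker')

if __name__ == '__main__':
    # queue.Queue: shared between threads of one process
    tq = queue.Queue()
    t = threading.Thread(target=worker, args=(tq,))
    t.start()
    t.join()
    print(tq.get())  # from worker

    # multiprocessing.Queue: shared between separate processes
    mq = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(mq,))
    p.start()
    p.join()
    print(mq.get())  # from worker
```

The APIs look the same; the difference is that the multiprocessing queue pickles objects and moves them across process boundaries.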
Pipes
Pipes in multiprocessing are primarily used for communication between processes. Usage is as simple as:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send(['hello world'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())
    p.join()
Pipe() returns two connection objects which represent the two ends of the pipe. Each connection object has send() and recv() methods. Here the child process sends the list ['hello world'] through its end of the pipe, and the parent receives the data and prints it.
Result
# result ['hello world']
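Connections returned by Pipe() are duplex (two-way) by default, so the parent can also send messages to the child. A small sketch of my own, not from the original tutorial (the echo function is illustrative):

```python
from multiprocessing import Process, Pipe

def echo(conn):
    # receive a message from the parent and send back a reply
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()  # duplex by default
    p = Process(target=echo, args=(child_conn,))
    p.start()
    parent_conn.send('hello')
    print(parent_conn.recv())  # HELLO
    p.join()
```

If you only need one-way communication, you can create the pipe with Pipe(duplex=False) instead.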
Locks
Locks work by ensuring that only one process at a time executes a guarded section of code, blocking other processes from executing it. The process holding the lock can complete its work, and only then is the lock released.

The example below shows a pretty straightforward usage of the Lock class.
from multiprocessing import Process, Lock

def greeting(l, i):
    l.acquire()
    print('hello', i)
    l.release()

if __name__ == '__main__':
    lock = Lock()
    names = ['Alex', 'sam', 'Bernard', 'Patrick', 'Jude', 'Williams']
    for name in names:
        Process(target=greeting, args=(lock, name)).start()

#result
hello Alex
hello sam
hello Bernard
hello Patrick
hello Jude
hello Williams
In this code, we first import the Lock class; each process then acquires the lock, executes the print function, and releases the lock.
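Locks matter most when processes update shared state rather than just printing. Here is a sketch, assuming a shared counter built with multiprocessing.Value (the add_100 helper is illustrative, not from the tutorial):

```python
from multiprocessing import Process, Value, Lock

def add_100(counter, lock):
    for _ in range(100):
        with lock:  # acquire/release happen automatically
            counter.value += 1

if __name__ == '__main__':
    lock = Lock()
    counter = Value('i', 0)  # shared integer, initially 0
    workers = [Process(target=add_100, args=(counter, lock)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)  # 400
```

Without the lock, the read-modify-write on counter.value could interleave between processes and lose increments.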
Logging
The multiprocessing module also provides support for logging, although the logging package doesn't use process-shared locks, so messages from different processes might end up mixed together during execution.
Usage of logging is as simple as:
import multiprocessing, logging logger = multiprocessing.log_to_stderr() logger.setLevel(logging.INFO) logger.warning('Error has occurred')
Here we first import the logging and multiprocessing modules, and we then call the multiprocessing.log_to_stderr() method, which performs a call to get_logger() and adds a handler that sends output to sys.stderr. Finally, we set the logger level and log the message we want to convey.
Conclusion
This tutorial has covered what is necessary to get started with multiprocessing in Python. Multiprocessing overcomes the problem of GIL (Global Interpreter Lock) since it leverages the use of subprocesses instead of threads.
There is much more in the Python documentation that isn’t covered in this tutorial, so feel free to visit the Python multiprocessing docs and utilize the full power of this module.
Monday, February 19, 2018
Custom Events in Laravel
In this article, we are going to explore the basics of event management in Laravel. It's one of the important features that you, as a developer, should have in your arsenal in your desired framework. As we move on, we'll also grab this opportunity to create a real-world example of a custom event and listener, and that's the ultimate goal of this article as well.
The concept of events in Laravel is based on a very popular software design pattern—the observer pattern. In this pattern, the system is supposed to raise events when something happens, and you could define listeners that listen to these events and react accordingly. It's a really useful feature in a way that allows you to decouple components in a system that otherwise would have resulted in tightly coupled code.
For example, say you want to notify all modules in a system when someone logs into your site, allowing each of them to react to the login event, whether that's sending an email, an in-app notification, or anything else that needs to respond to a login.
Basics of Events and Listeners
In this section, we'll explore Laravel's way of implementing events and listeners in the core framework. If you're familiar with the architecture of Laravel, you probably know that Laravel implements the concept of a service provider that allows you to inject different services into an application.
Similarly, Laravel provides a built-in EventServiceProvider.php class that allows us to define event listener mappings for an application.

Go ahead and pull in the app/Providers/EventServiceProvider.php file.
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    /**
     * The event listener mappings for the application.
     *
     * @var array
     */
    protected $listen = [
        'App\Events\SomeEvent' => [
            'App\Listeners\EventListener',
        ],
    ];

    /**
     * Register any events for your application.
     *
     * @return void
     */
    public function boot()
    {
        parent::boot();

        //
    }
}
Let's have a close look at the $listen property, which allows you to define an array of events and associated listeners. The array keys correspond to events in a system, and their values correspond to listeners that will be triggered when the corresponding event is raised.
I prefer to go through a real-world example to demonstrate it further. As you probably know, Laravel provides a built-in authentication system that facilitates features like login, register, and the like.
Assume that you want to send the email notification, as a security measure, when someone logs into the application. If Laravel didn't support the event listener feature, you might have ended up editing the core class or some other way to plug in your code that sends an email.
In fact, you're on the luckier side, as Laravel helps you solve this problem with an event listener. Let's revise the app/Providers/EventServiceProvider.php file to look like the following.
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    /**
     * The event listener mappings for the application.
     *
     * @var array
     */
    protected $listen = [
        'Illuminate\Auth\Events\Login' => [
            'App\Listeners\SendEmailNotification',
        ],
    ];

    /**
     * Register any events for your application.
     *
     * @return void
     */
    public function boot()
    {
        parent::boot();

        //
    }
}
Illuminate\Auth\Events\Login is an event that's raised by the Auth component when someone logs into an application. We've bound that event to the App\Listeners\SendEmailNotification listener, so it'll be triggered on the login event.
Of course, you need to define the App\Listeners\SendEmailNotification listener class in the first place. As always, Laravel allows you to create the template code of a listener using the artisan command.
php artisan event:generate
This command generates the event and listener classes listed under the $listen property.

In our case, the Illuminate\Auth\Events\Login event already exists, so only the App\Listeners\SendEmailNotification listener class is created. In fact, the command would have created the Illuminate\Auth\Events\Login event class too if it didn't exist in the first place.
Let's have a look at the listener class created at app/Listeners/SendEmailNotification.php.
<?php

namespace App\Listeners;

use Illuminate\Auth\Events\Login;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;

class SendEmailNotification
{
    /**
     * Create the event listener.
     *
     * @return void
     */
    public function __construct()
    {
        //
    }

    /**
     * Handle the event.
     *
     * @param Login $event
     * @return void
     */
    public function handle(Login $event)
    {
    }
}
It's the handle method that will be invoked with appropriate dependencies whenever the listener is triggered. In our case, the $event argument contains contextual information about the login event, such as the logged-in user.

We can use the $event object to carry out further processing in the handle method. In our case, we want to send an email notification to the logged-in user.
The revised handle method may look something like this:
public function handle(Login $event)
{
    // get the logged-in user's email and username
    $email = $event->user->email;
    $username = $event->user->name;

    // send email notification about the login
}
So that's how you're supposed to use the events feature in Laravel. From the next section onwards, we'll go ahead and create a custom event and associated listener class.
Create a Custom Event
The example scenario that we're going to use for our example is something like this:
- An application needs to clear caches in a system at certain points. We'll raise the ClearCache event, along with the contextual information, when that happens: we'll pass the cache group keys that were cleared along with the event.
- Other modules in the system may listen to the ClearCache event and would like to implement code that warms up the related caches.
Let's revisit the app/Providers/EventServiceProvider.php file and register our custom event and listener mappings.
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    /**
     * The event listener mappings for the application.
     *
     * @var array
     */
    protected $listen = [
        'App\Events\ClearCache' => [
            'App\Listeners\WarmUpCache',
        ],
    ];

    /**
     * Register any events for your application.
     *
     * @return void
     */
    public function boot()
    {
        parent::boot();

        //
    }
}
As you can see, we've defined the App\Events\ClearCache event and the associated listener class App\Listeners\WarmUpCache under the $listen property.
Next, we need to create the associated class files. Recall that you can always use the artisan command to generate the base template code.
php artisan event:generate
That should have created the event class at app/Events/ClearCache.php and the listener class at app/Listeners/WarmUpCache.php.
With a few changes, the app/Events/ClearCache.php class should look like this:
<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Queue\SerializesModels;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class ClearCache
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public $cache_keys = [];

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(array $cache_keys)
    {
        $this->cache_keys = $cache_keys;
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return Channel|array
     */
    public function broadcastOn()
    {
        return new PrivateChannel('channel-name');
    }
}
As you've probably noticed, we've added a new property, $cache_keys, that will hold the information passed along with the event. In our case, that's the cache groups that were flushed.
Next, let's have a look at the listener class, with an updated handle method, at app/Listeners/WarmUpCache.php.
<?php

namespace App\Listeners;

use App\Events\ClearCache;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;

class WarmUpCache
{
    /**
     * Create the event listener.
     *
     * @return void
     */
    public function __construct()
    {
        //
    }

    /**
     * Handle the event.
     *
     * @param  ClearCache  $event
     * @return void
     */
    public function handle(ClearCache $event)
    {
        if (isset($event->cache_keys) && count($event->cache_keys)) {
            foreach ($event->cache_keys as $cache_key) {
                // generate cache for this key
                // warm_up_cache($cache_key)
            }
        }
    }
}
When the listener is invoked, the handle method receives an instance of the associated event. In our case, an instance of the ClearCache event is passed as the first argument to the handle method.
Next, it's just a matter of iterating through each cache key and warming up associated caches.
Now, we have everything in place to test things against. Let's quickly create a controller file at app/Http/Controllers/EventController.php
to demonstrate how you could raise an event.
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Events\ClearCache;

class EventController extends Controller
{
    public function index()
    {
        // ...

        // you clear specific caches at this stage
        $arr_caches = ['categories', 'products'];

        // raise the ClearCache event
        event(new ClearCache($arr_caches));

        // ...
    }
}
Firstly, we've passed an array of cache keys as the first argument while creating an instance of the ClearCache event.
The event helper function is used to raise an event from anywhere within an application. When the event is raised, Laravel calls all listeners listening to that particular event.
In our case, the App\Listeners\WarmUpCache listener is set to listen to the App\Events\ClearCache event. Thus, the handle method of the App\Listeners\WarmUpCache listener is invoked when the event is raised from the controller. All that's left is to warm up the caches that were cleared!
So that's how you can create custom events in your application and work with them.
What Is an Event Subscriber?
An event subscriber allows you to register multiple event listeners in a single place. Whether you want to group listeners logically or keep a growing number of them together, an event subscriber is what you're looking for.
If we had implemented the examples discussed so far in this article using the event subscriber, it might look like this.
<?php
// app/Listeners/ExampleEventSubscriber.php

namespace App\Listeners;

class ExampleEventSubscriber
{
    /**
     * Handle user login events.
     */
    public function sendEmailNotification($event)
    {
        // get the logged-in user's email and username
        $email = $event->user->email;
        $username = $event->user->name;

        // send email notification about login...
    }

    /**
     * Handle cache clear events.
     */
    public function warmUpCache($event)
    {
        if (isset($event->cache_keys) && count($event->cache_keys)) {
            foreach ($event->cache_keys as $cache_key) {
                // generate cache for this key
                // warm_up_cache($cache_key)
            }
        }
    }

    /**
     * Register the listeners for the subscriber.
     *
     * @param  Illuminate\Events\Dispatcher  $events
     */
    public function subscribe($events)
    {
        $events->listen(
            'Illuminate\Auth\Events\Login',
            'App\Listeners\ExampleEventSubscriber@sendEmailNotification'
        );

        $events->listen(
            'App\Events\ClearCache',
            'App\Listeners\ExampleEventSubscriber@warmUpCache'
        );
    }
}
It's the subscribe method that is responsible for registering listeners. Its first argument is an instance of the Illuminate\Events\Dispatcher class, which you can use to bind events to listeners via the listen method. The first argument of the listen method is the event you want to listen to, and the second argument is the listener that will be called when the event is raised.
In this way, you can define multiple events and listeners in the subscriber class itself.
The event subscriber class won't be picked up automatically. You need to register it in the EventServiceProvider.php class under the $subscribe property, as shown in the following snippet.
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    /**
     * The subscriber classes to register.
     *
     * @var array
     */
    protected $subscribe = [
        'App\Listeners\ExampleEventSubscriber',
    ];

    /**
     * Register any events for your application.
     *
     * @return void
     */
    public function boot()
    {
        parent::boot();

        //
    }
}
So that was the event subscriber class at your disposal, and with that we've reached the end of this article as well.
Conclusion
Today we've discussed a couple of exciting Laravel features: events and listeners. They're based on the observer design pattern, which allows you to raise application-wide events and lets other modules listen to those events and react accordingly.
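To see the observer pattern itself, stripped of Laravel specifics, here is a minimal, language-agnostic sketch in Python. The Dispatcher class below is purely illustrative (it is not how Laravel implements its dispatcher), but it captures the core idea: listeners register interest in an event name, and raising the event calls each of them with the payload.

```python
class Dispatcher:
    """Minimal observer-pattern dispatcher: event names map to listener callables."""

    def __init__(self):
        self.listeners = {}

    def listen(self, event_name, listener):
        # register a listener callable for the given event name
        self.listeners.setdefault(event_name, []).append(listener)

    def dispatch(self, event_name, payload=None):
        # call every listener registered for this event, in order
        for listener in self.listeners.get(event_name, []):
            listener(payload)


# Usage: register a listener, then raise the event with a payload,
# mirroring the ClearCache / WarmUpCache flow above.
dispatcher = Dispatcher()
warmed = []
dispatcher.listen('cache.cleared', lambda keys: warmed.extend(keys))
dispatcher.dispatch('cache.cleared', ['categories', 'products'])
assert warmed == ['categories', 'products']
```

The raising side never needs to know who is listening, which is exactly why multiple modules can react to the same event independently.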
Just getting up to speed in Laravel or looking to expand your knowledge, site, or application with extensions? We have a variety of things you can study in Envato Market.
Feel free to express your thoughts using the feed below!
Wednesday, February 14, 2018
Introduction to Mocking in Python
Mocking is a technique for testing in Python: it lets you replace parts of your system under test with mock objects and make assertions about how they have been used. This tutorial will discuss in detail what mocking is and how to use it in Python applications.
What Is Mocking?
Mocking is a testing technique, supported in Python by the mock library, which allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.
In Python, mocking is accomplished by replacing parts of your system with mock objects using the unittest.mock module. This module contains a number of useful classes and functions, namely the patch function (as decorator and context manager) and the MagicMock class. These two components are very important in achieving mocking in Python.
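As a quick illustration of the difference between the two classes, MagicMock comes with Python's magic methods (such as __len__) preconfigured, while a plain Mock does not:

```python
from unittest.mock import Mock, MagicMock

# MagicMock supports magic methods out of the box.
magic = MagicMock()
magic.__len__.return_value = 3
assert len(magic) == 3

# A plain Mock does not define __len__, so len() fails.
plain = Mock()
try:
    len(plain)
except TypeError:
    print("plain Mock does not support len()")
```

In practice you'll often reach for MagicMock (which patch uses by default) unless you specifically want the stricter behavior of Mock.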
A mock function call usually returns a predefined value immediately. A mock object's attributes and methods are defined in the test as well, without creating the real object.
Mocking also allows you to return predefined values to each function call when writing tests. This allows you to have more control when testing.
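For instance, a mock's attributes and method return values can be configured directly, without ever constructing the real object. The user object below is purely illustrative:

```python
from unittest.mock import Mock

# Stand in for a real "user" object without creating one.
user = Mock()
user.name = 'alice'                # plain attribute
user.get_id.return_value = 42     # method return value

assert user.name == 'alice'
assert user.get_id() == 42
user.get_id.assert_called_once()  # and we can assert how it was used
```

This is the control the paragraph above refers to: both the data the mock exposes and the answers its methods give are decided entirely inside the test.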
Prerequisites
Mock is part of the Python 3 standard library as unittest.mock from version 3.3 onwards. If you are using an earlier Python version, you can still get the same functionality by installing mock as a separate library, like so.
$ pip install mock
Benefits of Mocking
Some of the benefits of mocking include:
- Avoiding overdependence. Mocking reduces the coupling between the code under test and its dependencies. For instance, if you have a function A that depends on a function B, you would otherwise need to write unit tests that also cover the features provided by B. Say the code grows and you end up with a longer chain, where A depends on B, B depends on C, and C depends on D: a fault introduced in D would then make all your unit tests fail. Mocking the dependencies isolates each function's tests from that chain.
- Reduced overload. This applies to resource-intensive functions. A mock of that function would cut down on unnecessary resource usage during testing, therefore reducing test run time.
- Bypass time constraints in functions. This applies to scheduled activities. Imagine a process that has been scheduled to execute every hour. In such a situation, mocking the time source lets you actually unit test such logic so that your test doesn't have to run for hours, waiting for the time to pass.
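As a sketch of that last point, you can freeze the clock by patching time.time instead of waiting for real time to pass. The hourly_job_due helper below is hypothetical, just to give the patch something to act on:

```python
import time
from unittest.mock import patch

def hourly_job_due(last_run, interval=3600):
    """Return True when at least `interval` seconds have passed since last_run."""
    return time.time() - last_run >= interval

# Freeze the clock at t=10000 instead of waiting an hour.
with patch('time.time', return_value=10000.0):
    assert hourly_job_due(last_run=5000.0) is True    # 5000s elapsed
    assert hourly_job_due(last_run=9000.0) is False   # only 1000s elapsed
```

The scheduling logic is exercised in milliseconds, with the "current time" entirely under the test's control.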
Usage
Usage of mock is as simple as:
>>> from mock import Mock
>>> mock = Mock(return_value=10)
>>> mock(1, 4, foo='bar')
10
>>> mock.return_value
10
Here, we import the mock module, create a mock object, and specify its return value. When the mock object is called, we want it to return a value of 10. If we call the mock object with the arguments (1, 4, foo='bar'), the result will be 10, which was defined as the return value.
You can also raise exceptions inside mocks as follows:
>>> mock = Mock(side_effect=KeyError('foobar')) >>> mock() Traceback (most recent call last): ... KeyError: 'foobar'
The side_effect argument allows you to perform certain actions, like raising an exception, when the mock is called.
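Beyond raising exceptions, side_effect also accepts an iterable, in which case each call to the mock yields the next value in sequence. A quick sketch:

```python
from unittest.mock import Mock

# Each call returns the next item; exhausting the list raises StopIteration.
mock = Mock(side_effect=[1, 2, 3])
assert mock() == 1
assert mock() == 2
assert mock() == 3
```

This is handy for simulating, say, a retry loop where a dependency fails a few times before succeeding.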
Example
Consider this simple function:
import requests

def api():
    response = requests.get('https://www.google.com/')
    return response.status_code
This function performs an API request to the Google webpage and returns the response's status code.
The corresponding simple test case will be as follows:
import unittest
from main import api

class TestApi(unittest.TestCase):
    def test_api(self):
        assert api() == 200
Running the above test should give an output like so:
---------------------------------------------------------------------- Ran 1 test in 3.997s OK
Let's introduce mocking to this example, and the resulting test with the Mock module will be as shown below:
import unittest
import requests
from mock import Mock, patch
from main import api

class TestApi(unittest.TestCase):
    def test_api(self):
        with patch.object(requests, 'get') as get_mock:
            get_mock.return_value = mock_response = Mock()
            mock_response.status_code = 200
            assert api() == 200
Running the above test should give an output like so:
---------------------------------------------------------------------- Ran 1 test in 0.001s OK
As seen above, the mocked test runs almost instantly, because it never makes the real network call.
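The example above uses patch as a context manager; patch also works as a decorator, which scopes the replacement to a single test function and injects the mock as an extra argument. A minimal sketch, using os.getcwd as a stand-in for the expensive call (the where_am_i helper is hypothetical):

```python
import os
from unittest.mock import patch

def where_am_i():
    # stands in for a call we don't want to make for real in tests
    return "cwd is " + os.getcwd()

@patch('os.getcwd', return_value='/fake/dir')
def test_where_am_i(getcwd_mock):
    # the patched os.getcwd is injected as an argument by the decorator
    assert where_am_i() == 'cwd is /fake/dir'
    getcwd_mock.assert_called_once()

test_where_am_i()
```

Outside the decorated function, os.getcwd behaves normally again, so the patch cannot leak into other tests.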
Larger Example
Let's assume you have a script that interacts with an external API and makes calls to that API whenever a certain function is called. In this example, we are going to use the Twitter API to implement a Python script which will post to the Twitter profile page.
We don't want to post messages on Twitter every time we test the script, and that's where Mocking comes in.
Let's get started. We will be using the python-twitter library, and the first thing we will do is create a folder python_mock
and, inside the folder, create two files, namely tweet.py
and mock_test.py
.
Write the following code to the file tweet.py
.
# pip install python-twitter
import twitter

# define authentication credentials
consumer_key = 'iYD2sKY4NC8teRb9BUM8UguRa'
consumer_secret = 'uW3tHdH6UAqlxA7yxmcr8FSMSzQIBIpcC4NNS7jrvkxREdJ15m'
access_token_key = '314746354-Ucq36TRDnfGAxpOVtnK1qZxMfRKzFHFhyRqzNpTx7wZ1qHS0qycy0aNjoMDpKhcfzuLm6uAbhB2LilxZzST8w'
access_token_secret = '7wZ1qHS0qycy0aNjoMDpKhcfzuLm6uAbhB2LilxZzST8w'

def post_tweet(api, tweet):
    # post tweet
    status = api.PostUpdate(tweet)
    return status

def main():
    api = twitter.Api(consumer_key=consumer_key,
                      consumer_secret=consumer_secret,
                      access_token_key=access_token_key,
                      access_token_secret=access_token_secret)
    message = raw_input("Enter your tweet :")
    post_tweet(api, message)

if __name__ == '__main__':
    main()
In the code above, we first import the Twitter library and then define the authentication credentials, which you can easily get from the Twitter Apps page.
The Twitter API is exposed via the twitter.Api
class, so we create the class by passing our tokens and secret keys.
The post_tweet
function takes in an authentication object and the message and then posts the tweet to the Twitter profile.
We then go ahead and mock the API call to Twitter so that the API doesn't post to Twitter every time it is called. Go ahead and open the mock_test.py
file and add the following code.
# mock_test.py
#!/usr/bin/env python
import unittest
from mock import Mock
import tweet

class TweetTest(unittest.TestCase):
    def test_example(self):
        mock_twitter = Mock()
        tweet.post_tweet(
            mock_twitter, "Creating a Task Manager App Using Ionic: Part 1")
        mock_twitter.PostUpdate.assert_called_with(
            "Creating a Task Manager App Using Ionic: Part 1")

if __name__ == '__main__':
    unittest.main()
Running the above test should give an output like so:
---------------------------------------------------------------------- Ran 1 test in 0.001s OK
Conclusion
This tutorial has covered most of the fundamentals of mocking and how to use mocking to perform external API calls. For more information, visit the official Python mocking documentation. You can also find additional resources on authentication with the Twitter API in this tutorial.
Additionally, don’t hesitate to see what we have available for sale and for study in the Envato Market, and please go ahead and ask any questions and provide your valuable feedback using the feed below.