Friday, September 28, 2018

Hands-on With ARIA: Accessibility for eCommerce

5 Amazing Assets for Fitness Fanatics' Vlogs

The Portrait Photographer's Quick Guide to Lighting Ratios

15+ Best Business Proposal Templates: For New Client Projects

How to Use Grids in Photoshop to Create a Typographic Poster

What Is Art Nouveau?

10 Best Photo Effects to Create Blurs, Cracks, and Shattered Glass in Photoshop

How to Set an Out of Office Message in Outlook (Automatic Away Reply)

How to Define, Analyze, & Seize a Market Opportunity

20 Best Corporate Stock Videos for Motion Graphics Projects

Build an HTML Email Template From Scratch

Thursday, September 27, 2018

15 Best eCommerce Android App Templates

Top 3 Slideshow Video Templates for Final Cut Pro

5 Amazing Assets for Fantastic Fashion Vlogs

10 Best Collage and Mosaic Templates to Combine Pictures in Photoshop (for Social Media)

How to Make a Terrifying Zombie Portrait in Photoshop

Create Custom Maps With the MapSVG Plugin

How to Create a Hand-Lettered T-Shirt Design in Adobe Illustrator

How to Group Objects, Items, & Pictures in PowerPoint (In 60 Seconds)

How to Build Serverless GraphQL and REST APIs Using AWS Amplify

How to Create a Tous-Les-Memes-Inspired, Isometric Scene Effect in Photoshop

How to Use the Animation Inspector in Chrome Developer Tools

New Short Course: A Designer’s Guide to Cookies

Tuesday, September 25, 2018

Create a Database Cluster in the Cloud With MongoDB Atlas

How to Digitally Paint Dimension and Texture in Adobe Photoshop

35+ Awesome French Design Tutorials and Articles on Envato Tuts+

How to Record a Webinar on Your Mac or PC (In 2018)

Best in 2018: Professional Resume Design Templates (Cool + Modern)

How to Build Serverless Web Apps With React and AWS Amplify

New Course on Kotlin Android Services

Create Your Own WordPress Theme Option Panel With Redux Framework

How to Create Floral Typography With Ink

Monday, September 24, 2018

5 Amazing Assets for Elegant Real Estate Videos

How to Create a Surreal Poster Design in Adobe Photoshop

10 Best Print-Inspired Halftone Effects (Color and Black-and-White) for Photoshop

Render Text and Shapes on Images in PHP

25+ Inspirational PowerPoint Presentation Design Examples (2018)

How to Build a Login UI with Angular and Material Design

Prototyping in Design Thinking: Fail Fast, Fail Often

Typography: The Anatomy of a Letter

How to Create a Letter Characters Text Effect in Adobe Illustrator

Friday, September 21, 2018

20 Different Fonts to Make Stylish Graphic Design Projects in 2018

50 Free Black and White Conversion Presets for Adobe Photoshop Lightroom

Securing Communications on Android

Securing Communications on Android

With all the recent data breaches, privacy has become an important topic. Almost every app communicates over the network, so it's important to consider the security of user information. In this post, you'll learn the current best practices for securing the communications of your Android app.

Use HTTPS

As you develop your app, it's best practice to limit your network requests to those that are essential. For the essential ones, make sure that they're made over HTTPS instead of HTTP. HTTPS is a protocol that encrypts traffic so that it can't easily be intercepted by eavesdroppers. The good thing about Android is that migrating is as simple as changing the URL from http to https.

In fact, Android N and higher versions can enforce HTTPS using Android’s Network Security Configuration.

In Android Studio, select the app/res/xml directory for your project. Create the xml directory if it doesn't already exist. Select it and click File > New File. Call it network_security_config.xml. The format for the file is as follows:
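
A minimal sketch of what the file might contain (example.com is a placeholder domain; the exact policy is up to you):

    <?xml version="1.0" encoding="utf-8"?>
    <network-security-config>
        <!-- Disallow cleartext (HTTP) traffic for this domain and its subdomains -->
        <domain-config cleartextTrafficPermitted="false">
            <domain includeSubdomains="true">example.com</domain>
        </domain-config>
    </network-security-config>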

To tell Android to use this file, add the name of the file to the application tag in the AndroidManifest.xml file:
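
Assuming the file above was saved as res/xml/network_security_config.xml, the entry looks something like this:

    <application
        android:networkSecurityConfig="@xml/network_security_config"
        android:label="@string/app_name">
        <!-- activities, services, and so on -->
    </application>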

Update Crypto Providers

The HTTPS protocol has been exploited several times over the years. When security researchers report vulnerabilities, the defects are often patched. Applying the patches ensures that your app's network connections are using the most updated industry standard protocols. The most recent versions of the protocols contain fewer weaknesses than the previous ones. 

To update the crypto providers, you will need to include Google Play Services. In your module file of build.gradle, add the following line to the dependencies section:
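
For example (the artifact and version number below are assumptions; use whichever Play Services release you target):

    dependencies {
        implementation 'com.google.android.gms:play-services-safetynet:15.0.1'
    }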

The SafetyNet services API has many more features, including the Safe Browsing API that checks URLs to see if they have been marked as a known threat, and a reCAPTCHA API to protect your app from spammers and other malicious traffic.

After you sync Gradle, you can call the ProviderInstaller's installIfNeededAsync method:
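
A Kotlin sketch, assuming the call is made from an Activity (the class name MainActivity is hypothetical):

    import android.app.Activity
    import android.content.Intent
    import android.os.Bundle
    import com.google.android.gms.common.GoogleApiAvailability
    import com.google.android.gms.security.ProviderInstaller

    class MainActivity : Activity(), ProviderInstaller.ProviderInstallListener {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Asynchronously update the device's security provider if it's out of date.
            ProviderInstaller.installIfNeededAsync(this, this)
        }

        override fun onProviderInstalled() {
            // The provider is up to date (or was just updated); safe to open secure connections.
        }

        override fun onProviderInstallFailed(errorCode: Int, recoveryIntent: Intent?) {
            // Google Play Services is missing or outdated; show Google's recovery notification.
            GoogleApiAvailability.getInstance().showErrorNotification(this, errorCode)
        }
    }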

The onProviderInstalled() method is called when the provider is successfully updated, or already up to date. Otherwise, onProviderInstallFailed(int errorCode, Intent recoveryIntent) is called. 

Certificate and Public Key Pinning

When you make an HTTPS connection to a server, the server presents a digital certificate, which Android validates to make sure the connection is secure. That certificate may be signed by an intermediate certificate authority, whose own certificate might in turn be signed by another intermediate authority, and so on. The chain is trustworthy as long as the last certificate is signed by a root certificate authority that the Android OS already trusts.

If any of the certificates in the chain of trust are not valid, then the connection is not secure. While this is a good system, it's not foolproof. It's possible for an attacker to instruct the Android OS to accept custom certificates. Interception proxies can possess a certificate that is trusted, and if the device is managed by a company, that company may have configured the device to accept its own certificate. These scenarios allow for a “man in the middle” attack, in which the HTTPS traffic can be decrypted and read.

Certificate pinning comes to the rescue by checking the server's certificate that is presented against a copy of the expected certificate. This prevents connections from being made when the certificate is different from the expected one.

In order to implement pinning on Android N and higher, you need to add a hash (called pins) of the certificate into the network_security_config.xml file. Here is an example implementation:
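
A sketch of a pin-set entry (the domain, expiry date, and pin values below are placeholders, not real hashes):

    <network-security-config>
        <domain-config>
            <domain includeSubdomains="true">example.com</domain>
            <pin-set expiration="2019-09-28">
                <!-- Replace with the base64-encoded SHA-256 hash of your server's public key -->
                <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
                <!-- Always include a backup pin -->
                <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
            </pin-set>
        </domain-config>
    </network-security-config>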

To find the pins for a specific site, you can go to SSL Labs, enter the site, and click Submit. Or, if you're developing an app for a company, you can ask the company for it.

Note: If you need to support devices running an OS version older than Android N, you can use the TrustKit library. It utilizes the Network Security Configuration file in exactly the same way.

Sanitization and Validation

With all of the protections so far, your connections should be quite secure. Even so, don't forget about regular programming validation. Blindly trusting data received from the network is not safe. A good programming practice is “design by contract”, where the inputs and outputs of your methods satisfy a contract that defines specific interface expectations. 

For example, if your server is expecting a string of 48 characters or less, make sure that the interface will only return up to and including 48 characters.

If you're only expecting numbers from the server, your inputs should check for this. While this helps to prevent innocent errors, it also reduces the likelihood of injection and memory corruption attacks. This is especially true when that data gets passed to NDK or JNI—native C and C++ code.

The same is true for sending data to the server. Don't blindly send out data, especially if it's user-generated. For example, it's good practice to limit the length of user input, especially if it will be executed by an SQL server or any technology that will run code. 

While securing a server against attacks is beyond the scope of this article, as a mobile developer you can do your part by removing characters that have special meaning in the language the server is using, so that the input is less susceptible to injection attacks. Some examples are stripping quotes, semicolons, and slashes when they're not essential for the user input:
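
A minimal Kotlin sketch (the exact character list depends on what the server-side language treats as special):

    fun sanitize(input: String): String {
        // Strip double quotes, single quotes, semicolons, backslashes, and forward slashes.
        return input.replace(Regex("[\"';\\\\/]"), "")
    }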

If you know exactly the format that is expected, you should check for this. A good example is email validation:
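
For example, using the pattern that Android ships with:

    fun isValidEmail(email: String): Boolean {
        return android.util.Patterns.EMAIL_ADDRESS.matcher(email).matches()
    }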

Files can be checked as well. If you're sending a photo to your server, you can check it's a valid photo. The first two bytes and last two bytes are always FF D8 and FF D9 for the JPEG format.
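
A quick sketch of that check:

    fun looksLikeJpeg(bytes: ByteArray): Boolean {
        if (bytes.size < 4) return false
        return bytes[0] == 0xFF.toByte() && bytes[1] == 0xD8.toByte() &&
                bytes[bytes.size - 2] == 0xFF.toByte() && bytes[bytes.size - 1] == 0xD9.toByte()
    }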

Be careful when displaying an error alert that directly shows a message from the server. Error messages could disclose private debugging or security-related information. The solution is to have the server send an error code that the client looks up to show a predefined message.

Communication With Other Apps

While you're protecting communication to and from the device, it's important to protect IPC as well. There have been cases where developers have left shared files or have implemented sockets to exchange sensitive information. This is not secure. It is better to use Intents. You can send data using an Intent by providing the package name like this:
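
A Kotlin sketch, called from an Activity or other Context (the action string, package name, and extra key are hypothetical):

    val intent = Intent("com.example.app.ACTION_SHARE_DATA")
    // Restrict delivery to a single, known package.
    intent.setPackage("com.example.trustedapp")
    intent.putExtra("message", "sensitive data")
    startActivity(intent)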

To broadcast data to more than one app, you should enforce that only apps signed with your signing key will get the data. Otherwise, the information you send can be read by any app that registers to receive the broadcast. Likewise, a malicious app can send a broadcast to your app if you have registered to receive the broadcast. You can use a permission when sending and receiving broadcasts where signature is used as the protectionLevel. You can define a custom permission in the manifest file like this:
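
For example (the permission name is hypothetical):

    <permission
        android:name="com.example.app.MY_SECURE_PERMISSION"
        android:protectionLevel="signature" />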

Then you can grant the permission like this: 
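
In each app's manifest, something like:

    <uses-permission android:name="com.example.app.MY_SECURE_PERMISSION" />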

Both apps need to have the permissions in the manifest file for it to work. To send the broadcast:
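
A Kotlin sketch (the action string and extras are placeholders):

    val intent = Intent("com.example.app.ACTION_SECURE_BROADCAST")
    intent.putExtra("message", "sensitive data")
    // Only apps holding the signature-level permission will receive this broadcast.
    sendBroadcast(intent, "com.example.app.MY_SECURE_PERMISSION")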

Alternatively, you can use setPackage(String) when sending a broadcast to restrict it to apps matching the specified package. Setting android:exported to false on your receiver in the manifest file prevents it from receiving broadcasts sent from outside your app.

End-to-End Encryption

It's important to understand the limits of HTTPS for protecting network communications. In most HTTPS implementations, the encryption is terminated at the server. For example, your connection to a corporation's server may be over HTTPS, but once that traffic hits the server, it is unencrypted. It may then be forwarded to other servers, either by establishing another HTTPS session or by sending it unencrypted. The corporation is able to see the information that has been sent, and in most cases that's a requirement for business operations. However, it also means that the company could pass the information out to third parties unencrypted.

There is a recent trend called "end-to-end encryption" where only the two end communicating devices can read the traffic. A good example is an encrypted chat app where two mobile devices are communicating with each other through a server; only the sender and receiver can read each other's messages.

An analogy to help you understand end-to-end encryption is to imagine that you want someone to send you a message that only you can read. To do this, you provide them with a box with an open padlock on it (the public key) while you keep the padlock key (private key). The user writes a message, puts it in the box, locks the padlock, and sends it back to you. Only you can read the message because you're the only one with the key to unlock the padlock.

With end-to-end encryption, both users exchange their public keys while keeping their private keys secret. The server only provides a service for communication, but it can't read the content of the communication. While the implementation details are beyond the scope of this article, it's a powerful technology. If you want to learn more about this approach, a great place to start is the GitHub repo for the open-sourced Signal project.

Conclusion

With all the new privacy laws such as the GDPR, security is more important than ever, yet it's often a neglected aspect of mobile app development.

In this tutorial, you've learned about security best practices, including using HTTPS, certificate pinning, data sanitization, and end-to-end encryption. These best practices should serve as a foundation for security when developing your mobile app. If you have any questions, feel free to leave them below, and while you're here, check out some of my other tutorials about Android app security!

  • Keys, Credentials and Storage on Android (Collin Stuart)
  • Storing Data Securely on Android (Collin Stuart)

How to Properly List Promotions & Certifications on a Resume

New Course: Adobe Illustrator for Beginners

How to Create an Eco Bulb and Butterfly Illustration in Adobe Illustrator

Thursday, September 20, 2018

How to Use Colour Balance for Better Black and White Conversions in Lightroom

18+ Cool PowerPoint Templates (To Make Presentations in 2018)

How to Create a Stylish Magazine Layout in Affinity Publisher

How to Edit a Brain Infographic PowerPoint Template in 60 Seconds

Quick Tip: 10 Audio Mastering Tips Every Engineer Should Know

Quick Tip: 10 Audio Mastering Tips Every Engineer Should Know

Mastering is considered to be a dark art, and one that has seen an increasing number of mastering engineers enter the fray.

In this tutorial, I'll shed light on how you can become a better mastering engineer with ten straightforward tips.  

1. Room Treatment  

Room treatment is vital. I'd choose budget monitors in a well-treated room over high-end monitors in a gymnasium.

However you treat the room, the goal is always the same: a neutral listening environment with an even frequency response.

2. Multiple Monitors 

One pair of monitors isn't enough. Monitors present their own individual view of a master.

You should invest in a good pair of monitors, but also a budget pair like the Avantone MixCubes, found in both home and pro studios. These are an excellent substitute for checking in the car, due to their limited bandwidth. 

You should also check the master on a laptop, on headphones and even whilst wearing earbuds. 

3. References

Listen to a lot of well mastered music from various genres and engineers. Create a collection of diverse references and put them into a playlist. My rock/pop playlist includes:

  • Dance-Pop: Don't Stop the Music by Rihanna 
  • Bass Heavy Pop: Drunk in Love by Beyoncé and Jay Z 
  • High Fidelity Pop Rock: Sledgehammer by Peter Gabriel
  • Classic Rock: Back in Black by AC/DC 
  • Acoustic Rock: Back on Your Side by Chris Isaak 
  • Alternative Light Rock: Here It Comes by Doves
  • Alternative Heavy Rock: Drunken Butterfly by Sonic Youth

You may want to follow the complete playlist to use as your own reference material.

4. Know the Gear

Know every aspect of the equipment you use and how it interacts with other equipment. 

If you're just starting out, try working with software before moving to hardware. The reason for this is that it is quicker to experiment and to check A/B comparisons using software. 

5. Know the Sequence

It helps if you know your track order before you begin. This can help with consistency in sound, ensuring the songs sit side by side sonically and dynamically. That doesn't mean that you should start work on track one first. 

My preference is to start work on the track that sounds the best sonically to me. This will become the benchmark track, the one that the remaining tracks will be judged against. 

6. Work in the Present and the Future

Consider the formats the master might get converted to and the platforms it might get uploaded to. Consider the effect that converting from 24-bit to MP3 may have on the audio.

Consider the uploading of music to iTunes, YouTube and SoundCloud and how each affects the sound. 

7. Less is More

The general rule of thumb for mastering is that you shouldn't boost or attenuate by more than 3 dB at any given frequency. This might seem like very little; however, when you do the math, it really isn't.

For example, attenuating 3dB on a complete mix in the low mids is the same as attenuating 3dB on each individual instrument that lives in that frequency range. 

This might include the kick drum, snare drum, electric guitar, bass guitar and whatever else that often resides in the low-mids.    

8. Trust Through Respect

For me, trust is vital, more so than the equipment or skills an engineer might possess.

The client needs to trust the mastering engineer to do their research into the artist, to be honest about the mix, and to feed back to the mix engineer if there are any problems. If a mastering engineer says to themselves, "This is the way they have sent it, so this is the way they must want it", then work should not proceed without communication about the mix, or whatever aspect is making the mastering engineer feel that it's not ready for mastering.

9. Don't Master Your Own Mixes

If budget allows, don't master your own mixes. Even if you're a mastering engineer. 

Doing so goes against one of the principles of what makes a great master: 'fresh ears' in a different environment from the one in which the track was mixed.

10. Sleep on it

This isn't one that's employed by all mastering engineers.

Whenever a mastering session feels complete, I listen again the next day with fresh ears and a fresh mind.

Bear in mind that when a listener hears the master, they most likely haven't already played it five times that day, as you have. You need to hear it the way a new listener would before approving the final master.

Conclusion 

In this Quick Tip tutorial, I've given you ten tips to improve your craft as a mastering engineer and, in addition, encouraged you to seek out more information.


Art for All: Celebrate Diversity in Design—Volume 13

Data Science and Analytics for Business: Challenges and Solutions

Data Science and Analytics for Business: Challenges and Solutions

As more companies discover the importance of data science and advanced analytics for their bottom line, a clash of cultures has begun. How can these quickly growing fields become part of a company’s ecosystem, especially for established companies that have been around for a decade or longer? 

Data scientists and IT professionals have vastly different needs when it comes to infrastructure. Here, I’ll lay out some of those requirements and discuss how to move beyond them—and evolve together.

Department Perspectives

When starting up data science programs within companies, the biggest issues often arise not from the technology itself, but from simple miscommunication. Interdepartmental misconceptions can result in a lot of grudge-holding between fledgling data science teams and IT departments. 

To combat this, we’ll examine both perspectives and take each of their needs into account. We'll start by defining what an IT professional requires to maintain a successful workflow, and then we'll look at what a data scientist needs for maximum efficiency. Finally, we'll find the common ground: how to use it to implement a healthy infrastructure for both to flourish.

IT Needs

Let’s start by taking a look at a typical data infrastructure for IT and Software Development.

Regarding data, there are three essential prerequisites that any IT department will focus on: 

  • data that is secure
  • data that is efficient
  • data that is consistent

Because of this, much of IT utilizes table-based schemas, and often uses SQL (Structured Query Language) or one of its variants.

This setup means that there are a large number of tables for every purpose. Each of these tables is separate from one another, with foreign keys connecting them. Because of this setup, queries can be executed quickly, efficiently, and with security in mind. This is important for software development, where data needs to remain intact and reliable.

With this structure, the required hardware is often minimal when compared to the needs of data science. The stored data is well defined, and it evolves at a predictable pace. Little of the data repeats, and the querying process reduces the amount of processing resources required. 

Let’s see how data science differs.

Data Science Needs

On the other end, data science has a different set of needs. Data scientists need freedom of movement with their data—and flexibility to modify their data quickly. They need to be able to move data in non-standard ways and process large amounts at a time.

These needs are hard to implement using highly structured databases. Data science requires a different infrastructure, relying instead upon unstructured data and table-less schemas.

When referring to unstructured data, we’re talking about data with no intrinsic definition. It’s nebulous until given form by a data scientist. For most development, each field needs to be of a defined type—such as an integer or a string. For data science, however, it’s about supporting data points that are ill defined.

Table-less schemas add more versatility to this quasi-chaotic setup, allowing all the information to live in one place. It’s especially useful for data scientists who regularly need to merge data in creative and unstructured ways. Popular choices include NoSQL variants or structures that allow several dimensions, such as OLAP cubes.

As a result, the hardware required for data science is often more substantial. It will need to hold the entirety of the data used, as well as subsets of that data (though this is often spread out among multiple structures or services). The hardware can also require considerable processing resources as large amounts of data are moved and aggregated.

Distilling Needs Into Action

With these two sets of needs in mind, we can now see how miscommunication can occur. Let’s take these perspectives and use them to define what changes we’re looking for and how. What problems need to be solved when bringing data science into a traditional IT setting?

Ease of Data Manipulation

In a traditional IT setting, any given business’s databases likely follow a rigid structure, with tables divided to fit specific needs, an appropriate schema to define each piece of data, and foreign keys to tie it all together. This makes for an efficient system of querying data. The exploratory nature of some data science methods can push this to its limits.

When a common task might require joining a dozen or more tables, the benefits of table-based structures become less apparent. A popular method to handle this is to implement a secondary NoSQL or multi-dimensional database. This secondary database uses regular ETLs (Extract, Transform, Load) to keep its information fresh. This adds the cost of additional hardware or cloud service usage, but minimizes any other drawbacks.

Keep in mind that in some cases, adding a separate database for data science can be more affordable than using the same database (especially when complicated licensing issues come into play).

Ease of Data Scaling

This specific problem covers two mentioned mismatches:

  1. regular increases in data from procedures
  2. a need for unstructured data types

In traditional IT, the size of your database is well defined, either staying the same size or growing at a modest pace. When using a database for data science, that growth can be exponential. It is common to add gigabytes of data each day (or more). With the sheer size of this kind of data, a business will need to incorporate a plan for scaling internal architecture or use an appropriate cloud solution.

As for unstructured data, it can take up a lot of resources in terms of storage and processing power, depending on your specific uses. Because of this, it's often inefficient to keep it all in a database that might be used for other purposes. The solution is similar to scaling in general. We’ll either need a plan for scaling our internal architecture to meet these needs or we'll have to find an appropriate cloud solution.

Resource Usage

The last major difference we’ll talk about is the use of resources. For IT, the usage of resources is typically efficient, well defined, and consistent. If a database powers an eCommerce site, there are known constraints. An IT professional will know roughly how many users there will be over a given period of time, so they can plan their hardware provisioning based on how much information is needed for each user.

With traditional IT infrastructure, there won’t be any problems encountered if a project uses only a few hundred rows from a handful of tables. But a project that requires every row from two dozen tables can quickly become a problem. In data science, the needs in terms of processing and storage change from project to project—and that kind of unpredictability can be difficult to support.

In traditional IT, resources may be shared with other parties, which might be a live production site or an internal dev team. The risk here is that running a large-scale data science project could potentially lock those other users out for a period of time. Another risk is that the servers holding a database may not be able to handle the sheer amount of processing necessary. Calling 200,000 rows from 15 tables, and asking for data aggregation on top, becomes a problem. This magnitude of queries can be extremely taxing on a server that might normally handle a thousand or so simultaneous users.

The ideal solution comes down to cloud processing. This addresses two key factors. The first is that it moves heavy query processing away from any important databases. The second is that it provides scaling resources that can fit each project.

So What’s the Final List of Requirements for Both?

Now that we’ve talked about the needs in depth, let’s sum them up. An IT and data science department will need the following for long-term success:

  • a separate database to reduce the impact on other stakeholders
  • a scaling storage solution to accommodate changes in data
  • a scaling processing solution to accommodate varying project types
  • an unstructured database to provide efficient retrieval and storage of highly varying data

Building a Case for Data Science

Let’s break everything down into specifications so we can put together a mutually beneficial solution. Now we’ll take a look at how to define the specific resources needed for an organization:

Researching Specifications

From the IT side, there are three main definitions needed to create the necessary infrastructure. These are: 

  1. the amount of data
  2. to what extent it needs processing
  3. how the data will get to the storage solution

Here’s how you can determine each.

Data Storage Needs

It all starts with the initial data size needed and the estimated ongoing data additions.

For your initial data needs, take the defined size of your current database. Now subtract any columns or tables that you won't need in your data science projects. Take this number and add in the data size of any new sources that you’ll be introducing. New sources might include Google Analytics data or information from your Point of Sale system. This total will be the data storage we’ll be looking to attain upfront.

While the initial storage needs are useful upfront, you’ll still have to consider ongoing data needs—as you’ll likely be adding more information to the database over time. To find this information out, you can calculate your daily added data from your currently available data. Take a look at the amount of information that has been added to your database in the last 30 days, and then divide that by 30. Then repeat that for each information source that you’ll be using, and add them together. 

While this isn’t precise, there’s an old development mantra that you should double your estimate, and we’re going to use that here. Why? We want to account for unpredictable changes that might affect your data storage needs, like company growth, per-project needs, or other factors that are hard to predict up front.

With that number now defined, multiply it by 365. This is now your projected data growth for one year, which, when added to your initial amount, will determine how much storage you should look at obtaining.
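
As a quick illustration, with made-up numbers: if the trimmed copy of your current database is 200 GB and new sources add another 50 GB upfront, your initial need is 250 GB. If those sources together grew by 60 GB over the last 30 days, that's 2 GB per day, doubled to 4 GB per day for safety; multiplied by 365, that's roughly 1.5 TB of projected growth, for a first-year total of about 1.7 TB.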

Processing Resource Needs

Unlike data storage needs, processing needs are a lot more difficult to calculate exactly. The main goal here is to decide whether you want to put the heavy lifting on queries or on a local machine (or cloud instance). Keep in mind here that when I talk about a local machine, I don’t mean just the computer you normally use—you’ll likely need some kind of optimized workstation for the more intensive calculations.

To make this choice, it helps to think about the biggest data science project that you might run within the next year. Can your data solution handle a query of that size without becoming inaccessible to all other stakeholders? If it can, then you’re good to go with no additional resources needed. If not, then you’ll need to plan on getting an appropriately sized workstation or scaling cloud instances.

ETL (Extract, Transform, Load) Processes

After deciding where to store and process your data, the next decision is how. Creating an ETL process will keep your data science database orderly and updated and prevent it from using unnecessary resources from elsewhere.

Here’s what you should have in your ETL documentation:

  • any backup procedures that should take place
  • where data will be coming from and where it will be going
  • the exact dimensions that should be moved
  • how often the transfer should occur
  • whether the transfer needs to be complete (rewrite the whole database) or can be additive (only move over new things)

Preparing a Solution

With all the data points in hand, it’s time to pick out a solution. This part will take a bit of research and will rely heavily on your specific needs, as on the surface they tend to have a lot of similarities.

Three of the biggest cloud solutions—Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure—offer some of the best prices and features. All three have relatively similar costs, though AWS is notably more difficult to calculate costs for (due to its a la carte pricing structure).

Beyond price, each offers scalable data storage and the ability to add processing instances, though each calls its ‘instances’ by a different name. When researching which to use for your own infrastructure, take into account which types of projects you’ll be utilizing the most, as that can shift the value of each one’s pricing and feature set.

However, many companies simply select whichever aligns with their existing technology stack.

You may also want to set up your own infrastructure in-house, although this is significantly more complex and not for the faint of heart.

Extra Tips for Smooth Implementation

With all of your ducks in a row, you can start implementation! To help out, here are some hard-earned tips on making your project easier—from pitch to execution.

Test Your ETL Process

When you first put together your ETL process, don’t test the entire thing all at once! This can put some serious strain on your resources and increase your cloud costs drastically if there’s a mistake, or if you have to attempt the process several times.

Instead, it’s a good idea to run your process using just the first 100 rows or so of your origin tables at first. Then run the full transfer once you know it will work.

Test Your Queries Too

The same goes for any large query you run on a cloud instance. Making a mistake that pulls in millions of pieces of data is much harder on a system than one that only pulls in a few—especially when you’re paying per GB.

Create a Warehousing Backup Strategy

Most cloud operators offer this as a feature, so you may not have to worry about it. Your team should still discuss whether they would like to create their own regular backups of the data, though, or if it might be more effective to reconstruct the data should the need arise.

Security and Privacy Concerns

When moving customer data to the cloud, make sure that everyone involved is aware of your company’s data governance policies in order to prevent problems down the road. This can also help you save some money on the amount of data being stored in the cloud.

Dimension Naming During ETL

When performing your ETL from a table-based database to an unstructured one, be careful about naming procedures. If names are just wholesale transferred over, you’ll likely have a lot of fields from different tables sharing the same name. An easy way to overcome this at first is to name your new dimensions in the unstructured database as {oldtablename}_{columnname} and then rename them from there.

Get Your Motor Running!

Now you can plan the basics of your analytics and data science infrastructure. With many of the key questions and answers defined, the process of implementation and getting managerial buy-in should go much more smoothly. 

Having difficulty coming up with answers for your own company? Did I gloss over something important? Let me know in the comments!


New Course: Code-Friendly Design With Adobe XD

Wednesday, September 19, 2018

How to Draw a Kangaroo

Formatting the Current Date and Time in PHP

Formatting the Current Date and Time in PHP

You'll often want to work with dates and times when developing websites. For example, you might need to show the last modified date on a post or mention how long ago a reader wrote some comment. You might also have to show a countdown of the days until a special event.

Luckily, PHP comes with some built-in date and time functions which will help us do all that and much more quite easily.

This tutorial will teach you how to format the current date and time in PHP. You will also learn how to get the timestamp from a date string and how to add and subtract different dates.

Getting the Date and Time in String Format

date($format, $timestamp) is one of the most commonly used date and time functions available in PHP. It takes the desired output format for the date as the first parameter and an integer timestamp value which needs to be converted to the given date format. The second parameter is optional, and omitting it will output the current date and time in string format, based on the value of $format.

The $format parameter accepts a series of characters as valid values. Some of these characters have straightforward meanings: Y gives you the full numeric representation of the year with 4 digits (2018), and y only gives you the last two digits of the current year (18). Similarly, H will give you the hour in 24-hour format with leading zeros, but h will give you the hour in 12-hour format with leading zeros. 

Here are some of the most common date format characters and their values.

  • d: day of the month with leading zeros (03 or 17)
  • j: day of the month without leading zeros (3 or 17)
  • D: day of the week as a three-letter abbreviation (Mon)
  • l: full day of the week (Monday)
  • m: month as a number with leading zeros (09 or 12)
  • n: month as a number without leading zeros (9 or 12)
  • M: month as a three-letter abbreviation (Sep)
  • F: full month (September)
  • y: two-digit year (18)
  • Y: full year (2018)

There are many other special characters to specify the output for the date() function. It is best to consult the format characters table in the date() function documentation for more information about special cases.

Let's see some practical examples of the date() function now. We can use it to get the current year, current month, current hour, etc., or we can use it to get a complete date string.
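
A few sketches of what such calls might look like (the outputs shown in the comments depend on when the script runs):

    <?php
    echo date('Y');          // the current year, e.g. 2018
    echo date('F');          // the current month, e.g. September
    echo date('l, d F Y');   // a complete date string, e.g. Wednesday, 19 September 2018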

You can also use the date() function to output the time. Here are some of the most commonly used time format characters:

  • g: hours in 12-hour format without leading zeros (1 or 12)
  • h: hours in 12-hour format with leading zeros (01 or 12)
  • G: hours in 24-hour format without leading zeros (1 or 13)
  • H: hours in 24-hour format with leading zeros (01 or 13)
  • a: am/pm in lowercase (am)
  • A: am/pm in uppercase (AM)
  • i: minutes with leading zeros (09 or 15)
  • s: seconds with leading zeros (05 or 30)

And here are some examples of outputting formatted time strings.
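
For instance (again, the outputs depend on the current time):

    echo date('g:i a');          // e.g. 3:27 pm
    echo date('H:i:s');          // e.g. 15:27:05
    echo date('d F Y, H:i:s');   // e.g. 19 September 2018, 15:27:05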

It is also important that you escape these special characters if you want to use them inside your date string.
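
For example, to print literal words inside the format string, escape each of their letters with a backslash:

    // e.g. "Wednesday the 19th of September"; "the" and "of" are escaped letter by letter
    echo date('l \t\h\e jS \o\f F');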

Get the Unix Timestamp

Sometimes, you will need to get the value of the current Unix timestamp in PHP. This is very easy with the help of the time() function. It returns an integer value which describes the number of seconds that have passed since 1 January 1970 at midnight (00:00:00) GMT.

You can also use this function to go back and forth in time. To do so, all you have to do is subtract the right number of seconds from the current value of time() and then change the resulting value into the desired date string. Here are two examples:
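
Two quick sketches:

    // The date a week ago
    echo date('d F Y', time() - 7 * 24 * 60 * 60);

    // The date and time 36 hours from now
    echo date('d F Y H:i', time() + 36 * 60 * 60);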

One important thing you should remember is that the timestamp value returned by time() is time-zone agnostic and gets the number of seconds since 1 January 1970 at 00:00:00 UTC. This means that at a particular point in time, this function will return the same value in the US, Europe, India, or Japan.

Another way to get the timestamp for a particular date would be to use the mktime($hour, $minute, $second, $month, $day, $year) function. When all the parameters are omitted, this function just uses the current local date and time to calculate the timestamp value. This function can also be used with date() to generate useful date and time strings.
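
A minimal sketch, using an arbitrary date:

    // Timestamp for 18 January 2024 at 06:30:00, local time
    $timestamp = mktime(6, 30, 0, 1, 18, 2024);
    echo date('l, d F Y', $timestamp); // Thursday, 18 January 2024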

Basically, time() can be used to move back and forth by a period of time relative to now, while mktime() is useful when you want the timestamp of a particular point in time.

Convert a Datetime String to a Timestamp

The strtotime($time, [$now = time()]) function will be incredibly helpful when you want to convert different date and time values in string format to a timestamp. The function can parse almost all kinds of datetime strings into timestamps.

You should definitely check the valid time formats, date formats, compound datetime formats, and relative datetime formats.

With relative datetime formats, this function can easily convert commonly used strings into valid timestamp values. The following examples should make it clear:
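
A few sketches:

    echo date('d F Y', strtotime('next Monday'));
    echo date('d F Y', strtotime('+1 week'));
    echo date('d F Y H:i', strtotime('10 September 2018 13:30'));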

Adding, Subtracting and Comparing Dates

It's possible to add and subtract specific periods of time to and from a date. This can be done with the help of the date_add() and date_sub() functions. You can also use the date_diff() function to subtract two dates and output the difference between them in terms of years, months, and days, or something else.

Generally, it's easier to do any such date and time related arithmetic in object-oriented style with the DateTime class instead of doing it procedurally. We'll try both these styles here, and you can choose whichever you like the most.

When using DateTime::diff(), the DateTime object on which the diff() method is called is subtracted from the DateTime object which is passed to the diff() method. When you are writing procedural style code, the first date parameter is subtracted from the second date parameter.

Both the function and the method return a DateInterval() object representing the difference between two dates. This interval can be formatted to give a specific output using all the characters listed in the format() method documentation.
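
A sketch of both styles (the dates and format strings are just examples):

    // Object-oriented style
    $now = new DateTime('now');
    $christmas = new DateTime('25 December 2018');
    $interval = $now->diff($christmas);
    echo $interval->format('%m months and %d days');

    // Procedural style: the first date parameter is subtracted from the second
    $interval = date_diff($now, $christmas);
    echo $interval->format('%a total days');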

The difference between object-oriented style and procedural style becomes more obvious when subtracting or adding a time interval.

You can instantiate a new DateTime object using the DateTime() constructor. Similarly, you can instantiate a DateInterval object using the  DateInterval() constructor. It accepts a string as its parameter. The interval string starts with P, which signifies period. After that, you can specify each period using an integer value and the character assigned to a particular period. You should check the DateInterval documentation for more details.

Here is an example that illustrates how easy it is to add or subtract dates and times in PHP.
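
A minimal sketch of both styles:

    // Object-oriented style: add 2 days and 6 hours
    $date = new DateTime('now');
    $date->add(new DateInterval('P2DT6H'));
    echo $date->format('d F Y H:i');

    // Procedural style: subtract 10 days
    $date = date_sub(date_create('now'), date_interval_create_from_date_string('10 days'));
    echo date_format($date, 'd F Y');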

You can also compare dates in PHP using comparison operators. This can come in handy every now and then. Let's create a Christmas day counter using the comparison operators and other DateTime methods.
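
A sketch of how such a counter might look:

    $now = new DateTime('now');
    $christmas = new DateTime('25 December 2018');

    // If Christmas has already passed, move on to next year's date.
    while ($now > $christmas) {
        $christmas->add(new DateInterval('P1Y'));
    }

    $interval = $now->diff($christmas);
    echo 'Only ' . $interval->format('%a') . ' days until Christmas!';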

We began by creating two DateTime objects to store the present time and the date of this year's Christmas. After that, we run a while loop to keep adding 1 year to the Christmas date of 2018 until the present date is less than the Christmas date. This will be helpful when the code runs at a later date, say on 18 January 2024: the while loop will keep incrementing the Christmas date as long as it is less than the present date at the time of running the script.

Our Christmas day counter will now work for decades to come without any problems.

Final Thoughts

In this tutorial, we learned how to output the current date and time in a desired format using the date() function. We also saw that date() can also be used to get only the current year, month, and so on. After that, we learned how to get the current timestamp or convert a valid DateTime string into a timestamp. Finally, we learned how to add or subtract a period of time from different dates.

I've tried to cover the key DateTime functions and methods here. You should definitely take a look at the documentation to read about the functions not covered in the tutorial. If you have any questions, feel free to let me know in the comments.


Keynote Magic Move: How to Use Slide Transition Effects

10 Best Multi-Purpose Android App Templates

Your First eCommerce Website Prototype With Adobe XD

New eBooks Available for Subscribers

How to Create a 3D Black and Gold Text and Logo Mockup

Quick Tip: How to Fill Text With an Image in Adobe InDesign

How to Create a Multi-Layered Text Effect in Adobe Illustrator

Monday, September 17, 2018

5 Amazing Assets for Wonderful Wedding Albums

Top 3 Slideshow Templates for Adobe Premiere

How to Draw Disney Animals

Easily Create Sideways Text Using the “writing-mode” CSS Property

14 Best Web Video Conferencing Software for Small Business (Free + Paid)

Testing Android User Interfaces With Espresso

How to Create a Photo-Realistic Wax Seal Mockup With Adobe Photoshop

The Do's and Don'ts of Creating Line Icons

Wednesday, September 12, 2018

Set Up Caching in PHP With the Symfony Cache Component

Set Up Caching in PHP With the Symfony Cache Component

Today, I'll show you the Symfony Cache component, an easy way to add caching to your PHP applications. This helps improve the overall performance of your application by reducing the page load time.

The Symfony Cache Component

The Symfony Cache component allows you to set up caching in your PHP applications. The component itself is very easy to install and configure and allows you to get started quickly. Also, it provides a variety of adapters to choose from, as shown in the following list:

  • database adapter
  • filesystem adapter
  • memcached adapter
  • Redis adapter
  • APCu adapter
  • and more

When it comes to caching using the Symfony Cache component, there are a couple of terms that you should get familiar with.

To start with, the cache item refers to the content which is stored. Each item is stored as a key-value pair. The cache items are managed by the cache pool, which groups them logically. In fact, you need to use the cache pool to manipulate cache values. Finally, it's the cache adapter which does all the heavy lifting to store items in the cache back-end.

In this article, we'll explore how you can unleash the power of the Symfony Cache component. As usual, we'll start with installation and configuration, and then we'll go on to explore a few real-world examples in the latter half of the article.

Installation and Configuration

In this section, we're going to install the Cache component. I assume that you have already installed Composer in your system—you'll need it to install the Cache component available at Packagist.

Once you have installed Composer, go ahead and install the Cache component using the following command.
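
Something like this should do it:

    composer require symfony/cache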

That should create (or update) a composer.json file that looks something like this:
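
A sketch of the relevant entry (the version constraint depends on when you install it):

    {
        "require": {
            "symfony/cache": "^4.1"
        }
    }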

That's it for installation, but how are you supposed to add it to your application? It's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.
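
For example:

    <?php
    require_once './vendor/autoload.php';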

A Real-World Example

In this section, we'll go through an example which demonstrates how you could use the Cache component in your applications to cache content.

To start with, let's go ahead and create the index.php file with the following contents.

Let's go through the main parts of the index.php file to understand their purpose.

Create the Cache Pool

As we discussed earlier, cached items are stored in a cache pool. Furthermore, each cache pool is backed by a specific cache back-end and adapter. If you want to store items in the file system cache, for example, you need to initialize the cache pool of the file system adapter.

You can provide three optional arguments to the FilesystemAdapter constructor, as shown in the sketch after this list:

  • the namespace in which you would like to create cache entries 
  • a lifetime in seconds for cache items
  • the directory in which the cache will be stored.
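
A minimal sketch (the namespace, lifetime, and directory values here are arbitrary):

    // Cache items in the ./cache directory, under the "app.cache" namespace,
    // with a default lifetime of one hour.
    $cachePool = new \Symfony\Component\Cache\Adapter\FilesystemAdapter('app.cache', 3600, './cache');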

How to Store String Values

Since we've already created the cache pool, we can use it to store cache items.
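
For example (the stored value is arbitrary):

    $demoString = $cachePool->getItem('demo_string');

    if (!$demoString->isHit()) {
        $demoString->set('Hello World!');
        $cachePool->save($demoString);
    }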

Firstly, we use the getItem method to fetch the cache item with the demo_string key. Next, we use the isHit method to check if the value we're looking for is already present in the cache item $demoString.

Since this is the first time we're fetching the demo_string cache item, the isHit method should return false. Next, we use the set method of the $demoString object to set the cache value. Finally, we save the $demoString cache item into the $cachePool cache pool using the save method.

Now that we've stored the item in the cache, let's see how to fetch it from the cache.
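
A sketch:

    if ($cachePool->hasItem('demo_string')) {
        $demoString = $cachePool->getItem('demo_string');
        echo $demoString->get();
    }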

Here, we use the hasItem method to check the existence of the cache item in the cache pool before retrieving it. 

Next, let's see how to delete all cache items from the cache pool:
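
That's a one-liner:

    // Remove every item stored in this cache pool.
    $cachePool->clear();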

How to Store Array Values

In the previous section, we discussed how to store basic values in the cache pool. Storing array values is much the same, as you can see in the following example.
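
For example:

    $demoArray = $cachePool->getItem('demo_array');

    if (!$demoArray->isHit()) {
        $demoArray->set(array('PHP', 'Symfony', 'Cache'));
        $cachePool->save($demoArray);
    }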

As you can see, we can simply set the cache item with an array value, just the same as we did for a string. 

Next, let's see how to delete the specific cache item from the cache pool.
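
A sketch:

    $cachePool->deleteItem('demo_array');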

Here, we use the deleteItem method to delete the demo_array item from the cache pool.

How to Set an Expiry Date for Cached Items

So far, we've cached items into the pool without an expiry date. However, you don't typically want to store items in the cache permanently. For example, you might like to refresh cache items periodically, so you need a mechanism which purges expired cache items.

In this section, we'll discuss how to store items in the cache along with an expiry date.
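
A sketch (the key, value, and lifetimes here are arbitrary):

    $foo = $cachePool->getItem('foo_string');
    $foo->set('bar');
    $foo->expiresAfter(60); // cache this item for 60 seconds
    $cachePool->save($foo);

    sleep(30);
    var_dump($cachePool->hasItem('foo_string')); // bool(true): still within its lifetime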

As you can see in the above snippet, you can use the expiresAfter method to set an expiry date for the cached item. You can pass the number of seconds you would like to cache an item for in the first argument of the expiresAfter method.

In our example, we use the sleep() function to test whether the cached item is still available in the cache pool.

Go ahead and test it to see how it works!

Conclusion

Today, we had a brief look at the Symfony Cache component, which allows you to set up caching in your PHP applications. It also supports a variety of caching adapters that together give you the flexibility to choose the kind of back-end you want to use.

Feel free to express your thoughts and queries using the form below.


How to Create a Leaf-Covered Text Effect Action in Adobe Photoshop

International Artist Feature: Iran

How to Insert a Footnote in a PowerPoint Presentation in 60 Seconds

How to Create a Vector Autumn Background in Adobe Illustrator

New Course: How to Create an Email Template With Envato Elements

Design Considerations for Multiple Email Clients and Devices

Tuesday, September 11, 2018

How to Convert Your Images to Black and White in Photoshop

How to Create a Honey Bee Themed Photo Manipulation in Photoshop

18 Great Campaign Monitor Templates for Email and Newsletters

20 Best Science & Technology PowerPoint Templates With High-Tech Designs

New Course: Hand Lettering for Beginners

How to Weave a Bedouin Sadu Fabric Pattern Using Adobe Illustrator

How to Create a Retro Game Boy in 3D: Part 1

A Beginner’s Guide to Email Accessibility (Checklist + Resources)

Tuesday, September 4, 2018

Site Accessibility: Getting Started With ARIA

Destructing Elements in Maya With PullDownIt: Part 5

Capturing Lifelike Guitar Sounds Without Microphones: Part 1

Capturing Lifelike Guitar Sounds Without Microphones: Part 1

If you're recording anything acoustically, or playing live through a PA, microphones remain the usual and accepted route for capturing guitar sounds. From studio exotica costing thousands of pounds down to handheld and USB devices, the microphone reigns supreme.

As popular as this is, however, there are some drawbacks.

The relationship between the mic and its source needs to remain the same. The slightest movement in any direction can change both the tone and volume of what’s captured. Anyone who's tried overdubbing an acoustic guitar part will know the challenge of creating a consistent recording.

Then there's the question of noise. Whether it comes from the recording space or stage, the player, the instrument, or even the recording equipment, all this can diminish the resultant quality.

You may believe that there's no alternative.

DI

This stands for Direct Injection, and refers to connecting an instrument electronically to an amplifier or recording device. For guitarists, whether recording or playing live, DI means capturing sound via the instrument’s onboard pickups.

For acoustic guitars, the most common pickup is the piezo. A crystal located under the guitar’s saddle, it translates vibrations from the strings into a small electrical signal.

Coupled to an onboard or external preamp, this is the sound audiences hear.

It certainly gets around the issues of using mics, and is ideal for a noisy live environment. However, because the piezo’s at the bridge, it captures the sound of the strings, but very little of the guitar’s body. It's okay for live work, especially in the context of a larger band sound. For recording, however, definitely not.

As for electric guitars, electromagnetic pickups are the norm. You can DI them very easily, but as with an acoustic guitar, you won’t get the sound you expect to hear, as you’re missing crucial elements of the overall sound, such as an amp and its speakers.

What is required, therefore, is something that sounds as good as a microphone recording, but with all the benefits of DI.

Thankfully, there’s a way to achieve this.

Impulse Responses (IRs)

I first came across these nearly 20 years ago. I’d just finished recording with a band, and the engineer was creating rough mixes. He knew I was interested in recording and production, so he clicked on a plugin, and said, “Listen to this”.

The sound of my band was suddenly given a really live, cohesive feel, despite having tracked each part separately in a sound-deadened room. The plugin was imprinting the recording with the acoustics of a rural church somewhere in the middle of the American Mid-West.

I was fascinated both by how lifelike it sounded, and the possibilities it represented.

How Impulse Responses Work

An impulse response is a measurement taken of anything that deals with acoustics, be it a space, a speaker, or an instrument. Particular audio signals are either played through it or into it, and the resultant response is captured. Whatever this response is applied to then exhibits the characteristics of the original.

To start with, IRs were usually limited to reverb plugins, with Altiverb being one of the most well-known.

Reverbs still remain the most ubiquitous use of IRs, to the point where some DAWs have IR-capable plugins as standard. Logic’s Space Designer is one such example.

But as technology and processing power advanced, IRs moved beyond just reverbs, and this is where guitarists get involved.

In the next tutorial I’ll examine acoustic guitars, but for now, I'll show you how IRs assist the recording of electric guitar amplifiers.

Electrickery

To this day, recording the sound of an amp and its associated speakers usually involves one or more mics being placed strategically around it. Even live, most sound engineers expect to mic up a guitarist’s combo or cabinet.

Some amplifiers have an emulated speaker output. This allows a DI connection between the amp and either a DAW interface or PA system. They’re very useful, but they’re slightly misleading. 

The emulated speaker is in fact the application of EQ to create a reasonable facsimile of a guitar speaker. It does the job, but doesn’t fully capture how a speaker works, or indeed, the influence of the speaker cabinet on the resultant sound.

An increasingly common occurrence in studios is to DI the amp and then use IRs to apply the sound of speaker cabinets. 

There are several reasons why this is a good idea.

Record Silently

Valve amps in particular really come alive as volume increases, but this isn’t always appropriate to every recording situation. Using DI means you can crank the amp without the resultant high levels of volume.

A word of caution, however.

Amps are designed to supply certain levels of current in order to drive speakers. If the speakers aren’t there, the current has nowhere to go, which can cause permanent damage to the output transformer.

If you wish to record silently, and thus cannot attach speakers, you’ll need to use a reactive load box.

You Don’t Need Expensive Mics

Nor do you need mic preamps, an acoustically-treated space, or even knowledge of mic placement.

You Don’t Need a Selection of Speakers

Instead of spending thousands on different speakers and cabs, you can select from a huge range of IRs.

You Can Change the Tone

If you're not happy with the sound, just change the IR. This also means you can audition different speakers until you’re happy.

Conclusion

IRs are becoming more and more popular, and with good reason. They represent sounds of spaces and equipment at your fingertips that you wouldn’t otherwise have access to. 

Going direct means:

  • You get a consistent tone
  • Noise becomes less of an issue
  • The recording environment is less critical
  • You can change the sound during or after the recording
  • It’s a lot cheaper than owning lots of physical equipment

In the next tutorial I’ll show how IRs benefit acoustic guitars and their usage in the live environment.


Envato Tuts+ Community Challenge: Created by You, September 2018 Edition

Understand Arrays in PHP

Understand Arrays in PHP

In this post, you'll learn the basics of arrays in PHP. You'll learn how to create an array and how to use associative and multidimensional arrays, and you'll see lots of examples of arrays in action.

What Is an Array?

In PHP, an array is a data structure which allows you to store multiple elements in a single variable. These elements are stored as key-value pairs. In fact, you can use an array whenever there’s a need to store a list of elements. More often than not, all the items in an array have similar data types.

For example, let’s say you want to store fruit names. Without an array, you would end up creating multiple variables to store the different fruit names. On the other hand, if you use an array to store fruit names, it might look like this:
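
Something like this (the fruit names are just placeholders):

    <?php
    $array_fruits = array('Apple', 'Banana', 'Orange', 'Grapes');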

As you can see, we’ve used the $array_fruits variable to store the different fruit names. One great thing about this approach is that you can add more elements to the $array_fruits array variable later on.

There are plenty of ways to manipulate values in the array variable—we’ll explore these in the later part of this article.

How to Initialize an Array

In this section, we’ll explore how to initialize an array variable and add values in that variable.

When it comes to array initialization, there are a few different ways. In most cases, it’s the array() language construct which is used to initialize an array.
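
For example:

    $array = array();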

In the above snippet, the $array variable is initialized with a blank array.

As of PHP 5.4, you can also use the following syntax to initialize an array.
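
The short array syntax:

    $array = [];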

Now, let’s see how to add elements to an array.
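
A quick sketch (the values are arbitrary):

    $array = [];
    $array[] = 'one';
    $array[] = 'two';
    $array[] = 'three';

    print_r($array);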

The above snippet should produce the following output:
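
Assuming the sketch above:

    Array
    (
        [0] => one
        [1] => two
        [2] => three
    )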

The important thing to note here is that an array index starts with 0. Whenever you add a new element to an array without specifying an index, the array assigns an index automatically.

Of course, you can also create an array already initialized with values. This is the most concise way to declare an array if you already know what values it will have.
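
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];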

How to Access Array Elements

In the previous section, we discussed how to initialize an array variable. In this section, we’ll explore a few different ways to access array elements.

The first obvious way to access array elements is to fetch them by the array key or index.
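
For example, reusing the placeholder fruit array:

    $array_fruits = ['Apple', 'Banana', 'Orange'];

    echo $array_fruits[0] . "\n";
    echo $array_fruits[1] . "\n";
    echo $array_fruits[2] . "\n";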

The above snippet should produce the following output:
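
Assuming the sketch above:

    Apple
    Banana
    Orange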

A cleaner way to write the code above is to use a foreach loop to iterate through the array elements.
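
For example:

    foreach ($array_fruits as $fruit) {
        echo $fruit . "\n";
    }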

The above snippet should produce the same output, and it takes much less code.

Similarly, you can also use the for loop to go through the array elements.
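
For example:

    for ($i = 0; $i < count($array_fruits); $i++) {
        echo $array_fruits[$i] . "\n";
    }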

Here, we're using the for loop to go through each index in the array and then echoing the value stored in that index. In this snippet, we’ve introduced one of the most important functions you’ll end up using while working with arrays: count. It’s used to count how many elements are in an array.

Types of Arrays in PHP

In this section, we’ll discuss the different types of array you can use in PHP.

Numerically Indexed Arrays

An array with numeric indexes falls into the category of indexed arrays. In fact, the examples we’ve discussed so far in this article are indexed arrays.

The numeric index is assigned automatically when you don’t specify it explicitly.
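
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];
    // Equivalent to [0 => 'Apple', 1 => 'Banana', 2 => 'Orange']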

In the above example, we don't specify an index for each item explicitly, so it'll be initialized with the numeric index automatically.

Of course, you can also create an indexed array by using the numeric index, as shown in the following snippet.
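
For example:

    $array_fruits = [0 => 'Apple', 1 => 'Banana', 2 => 'Orange'];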

Associative Arrays

An associative array is similar to an indexed array, but you can use string values for array keys.

Let’s see how to define an associative array.
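
A sketch (the employee details are made up):

    $employee = array(
        'name'  => 'John Smith',
        'age'   => 30,
        'email' => 'john@example.com'
    );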

Alternatively, you can use the following syntax as well.
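
Namely, assigning each key directly:

    $employee = array();
    $employee['name']  = 'John Smith';
    $employee['age']   = 30;
    $employee['email'] = 'john@example.com';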

To access values of an associative array, you can use either the index or the foreach loop.
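
For example:

    // Query a single value directly by its key...
    echo $employee['name'] . "\n";

    // ...or loop over every key-value pair.
    foreach ($employee as $key => $value) {
        echo $key . ': ' . $value . "\n";
    }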

As you can see, here we got the name by querying it directly, and then we used the foreach loop to get all the key-value pairs in the array.

Multidimensional Arrays

In the examples we’ve discussed so far, we’ve used scalar values as array elements. In fact, you can even store arrays as elements within other arrays—this is a multidimensional array.

Let’s look at an example.
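
A sketch (the hobby names and profile URLs are made up):

    $employee = array(
        'name'     => 'John Smith',
        'hobbies'  => array('Reading', 'Cycling', 'Chess'),
        'profiles' => array(
            'twitter'  => 'https://twitter.com/johnsmith',
            'linkedin' => 'https://www.linkedin.com/in/johnsmith'
        )
    );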

As you can see, the hobbies key in the $employee array holds an array of hobbies. In the same way, the profiles key holds an associative array of the different profiles.

Let’s see how to access values of a multidimensional array.
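
For example, assuming the sketch above:

    echo $employee['hobbies'][0] . "\n";          // Reading
    echo $employee['profiles']['twitter'] . "\n"; // the Twitter URL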

As you can see, the elements of a multidimensional array can be accessed with the index or key of that element in each array part.

Some Useful Array Functions

In this section, we'll go through a handful of useful array functions that are used frequently for array operations.

The count Function

The count function is used to count the number of elements in an array. This is often useful if you want to iterate an array with a for loop.
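
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];
    echo count($array_fruits); // 3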

The is_array Function

This is one of the most useful functions for dealing with arrays. It's used to check if a variable is an array or some other data type.

You should always use this function before you perform any array operation if you're uncertain of the data type.
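
For example:

    var_dump(is_array(['Apple', 'Banana'])); // bool(true)
    var_dump(is_array('Apple'));             // bool(false)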

The in_array Function

If you want to check if an element exists in the array, it's the in_array function which comes to the rescue.

The first argument of the in_array function is an element which you want to check, and the second argument is the array itself.
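
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];
    var_dump(in_array('Banana', $array_fruits)); // bool(true)
    var_dump(in_array('Mango', $array_fruits));  // bool(false)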

The explode Function

The explode function splits a string into multiple parts and returns it as an array. For example, let's say you have a comma-separated string and you want to split it at the commas.

The first argument of the explode function is a delimiter string (the string you're splitting on), and the second argument is the string itself.
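
For example:

    $csv = 'Apple,Banana,Orange';
    print_r(explode(',', $csv)); // [0] => Apple, [1] => Banana, [2] => Orange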

The implode Function

This is the opposite of the explode function—given an array and a glue string, the implode function can generate a string by joining all the elements of an array with a glue string between them.

The first argument of the implode function is a glue string, and the second argument is the array to implode.
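
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];
    echo implode(', ', $array_fruits); // Apple, Banana, Orange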

The array_push Function

The array_push function is used to add new elements to the end of an array.

The first argument is an array, and the subsequent arguments are elements that will be added to the end of an array.
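
For example:

    $array_fruits = ['Apple', 'Banana'];
    array_push($array_fruits, 'Orange', 'Grapes');
    print_r($array_fruits); // Apple, Banana, Orange, Grapes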

The array_pop Function

The array_pop function removes an element from the end of an array.

The array_pop function returns the element which is removed from an array, so you can pull it into the variable. Along with array_push, this function is useful for implementing data structures like stacks.
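
For example:

    $array_fruits = ['Apple', 'Banana', 'Orange'];
    $last = array_pop($array_fruits);

    echo $last;             // Orange
    print_r($array_fruits); // Apple, Banana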

Conclusion

That's all you need to get started coding with arrays in PHP.  You saw how to create arrays and how to retrieve elements from them. You learned the different types of arrays in PHP, and you got a look at some of the most useful built-in PHP functions for working with arrays.


16+ Best Free Keynote Presentation Templates Designs (Download Now)

How to Get Started With Product Packaging Design

Quick Tip: Remove the White Background From Line Art in Adobe Photoshop

New Short Course on Kotlin Android Intents