2018.05.26 Training

Today’s lifting just didn’t feel good – it’s been a long week at work so I wanted to just get something done.

Snatch Pull (Straight Leg) + Snatch + Snatch Balance (1+1+1)

  • 70 x 3
  • 75 x 3
  • 75 x 3
  • 75 x 3

Was programmed for 85 for 3 sets – but struggled with 75 so I stayed there to get it right.

Pull-ups

  • 6 reps
  • 4 sets of 5

2018.05.21 Training

Reverse Lunges

  • 10/10 x 20
  • 10/10 x 50
  • 10/10 x 50
  • 10/10 x 60
  • 10/10 x 70

Really tough. These are definitely a weakness.

Bulgarian Split Squat

  • 10/10 x BW
  • 10/10 x 20
  • 10/10 x 20
  • 10/10 x 20
  • 10/10 x 20

Pull-ups

  • 5 x 5

WordCamp Detroit 2018 Recap

2018.05.11 EDIT:  Here are the speaker slides provided by the group

This weekend, I attended WordCamp Detroit, which was revived after roughly six years of dormancy.  Held at Grand Circus, it was a good way to learn a few things related to WordPress, especially about what’s available from a technical perspective.

I mostly stuck to the developer track, and below are some of the things I took away from each of those sessions.

In particular, my most recent interest is understanding the WP REST API so I can integrate it into an Angular SPA – building a web app with CMS capabilities that allows end customers to customize their data.

WordPress + React

Amit Rathi

Adding React to a WordPress website comes in two forms:

  • Brainless CMS
    • Using WordPress to serve the front-end and React libraries, and specifically using wp_enqueue_scripts to serve resources.
    • Allows for adding smaller functionality to an existing application.
  • Headless CMS
    • Removing the WordPress front-end, and serving data in WordPress through the WP-API.
    • Useful for the SPA design, and can be easily tailored to use any other front-end application.

This talk makes me think about the popularity of React, and how, from my perspective, it seems to be ahead of the other major frameworks, Angular and Vue.js.  I’m curious how that came to be, and what it is about React that makes it stand above the others.

How to Build Themes using the WordPress Rest API

Anthony Montalbano

For this talk, we went through what the WordPress REST API is and what its capabilities are.  Like Amit’s talk above, it covered using a headless WordPress setup, which is becoming a best practice for decoupled architectures.

We specifically went through building a mini-theme using the WordPress REST API, then related this to moving to a headless WordPress design, where the API can be customized to add fields to different data types.
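As an illustration of what that decoupled consumption looks like, here’s a small TypeScript sketch – the site URL and the post shape are placeholders, but the /wp-json/wp/v2/ routes are the standard WordPress REST API endpoints:

```typescript
// Minimal shape of a post returned by the WordPress REST API (illustrative).
interface WpPost {
  id: number;
  title: { rendered: string };
}

// Build a WP REST API endpoint URL for a given resource and query parameters.
function wpApiUrl(site: string, resource: string, params: Record<string, string> = {}): string {
  const query = new URLSearchParams(params).toString();
  return `${site}/wp-json/wp/v2/${resource}${query ? '?' + query : ''}`;
}

// Usage: a front-end app (Angular, React, etc.) would fetch posts like so:
// fetch(wpApiUrl('https://example.com', 'posts', { per_page: '5' }))
//   .then(res => res.json())
//   .then((posts: WpPost[]) => posts.map(p => p.title.rendered));
```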

Rebrand Cities: Crushing the Digital Divide One Website at a Time

Hajj Flemings

Hajj’s talk was about the goal of providing websites for small businesses that do not have one.  Statistics show that 46% of small businesses don’t have a website.

Ultimately, this talk was geared towards entrepreneurs, specifically in bringing cost-effective websites to businesses with limited funds.  For $300, Rebrand Cities provides a full end-to-end product including photography, branding, web design, and so on.

One of the mentions was about securing a brand throughout a business’s tenure – the key is to focus on the social media outlets the business will actually use, and to reserve its handles on those outlets early.

Site Configuration In Code: Configuration Management

Tessa Krissel

Tessa talked about the scenario where a group of developers each work on their own features, which result in database changes inside of WordPress.  This is where configuration management – managing your site configuration in code through version control – comes into play.

The wp-cfm plugin was mentioned, which allows for pushing and pulling configuration between the database and the filesystem.  Some of the perks include:

  • Less need to copy the full database.
  • No need to manually apply database setting changes.
  • Track and migrate configuration files using version control.

It tracks everything inside the wp_options table, which covers most of the configuration for a WordPress application.

Some of the terminology that wp-cfm includes:

  • Bundle – A group of settings to track.
  • Push – Export configuration from your database to the filesystem.
  • Pull – Import configuration from the filesystem to your database.

While going through this talk, the tool Pantheon was brought up, which serves as a hub for hosting and automated workflows for a WordPress website.  Another tool mentioned was MergeBot, which allows for syncing content between environments.

And finally, a security concern – make sure to restrict file permissions on the /config folder, as the configuration info will be stored in this directory.  This is especially important when storing API keys.

From Discovery to Design: A Collaborative Approach to a User-Centered Website

Katy Hinz

This talk stressed the concept of using collaboration to deliver a project that provides the most value to the user.

The user-centric design process goes as follows:

  • Discovery
    • Creating a common understanding of what the goal is.
    • Surveying interested parties (user, stakeholders, client design).
    • Analytics review.
    • Determining pain points and goals.
    • Creating a written website plan, potentially with sign-off.
  • User Mapping
    • Who are we designing for, and what do they need?
    • Empathy maps
    • Journey maps (Discover, Engage, Convert, Retain, Advocate)
    • Suggestions:
      • Advocate for the user’s needs.
      • Put user preference over personal preference.
      • Document findings and get sign off.
  • Information Architecture
    • Creating a sitemap for web site flow.
    • Suggestions:
      • Leave room for iteration.
      • Create page templates (not specific pages).
  • UX (wire-framing)
    • Work out major content, feature and functionality issues with wires to make design phase more seamless.
  • Design
    • Consistency in design patterns
    • Patterns
    • Interactions & animations
    • Usability & accessibility

After handing off the design to the development team, the design should communicate the user’s needs.  A potential option is to use InVision and provide annotations for what should happen, which helps keep the intent of the design explicit.

Don’t Shard Your Pants!  A WordPress / Amazon Domain Sharding Solution for Images

Kim Kaiser (more information on talk)

The site used in all demos is http://www.enchantedcelebrations.com/

Domain sharding is a technique where resources for one website are served from multiple domains, improving performance (browsers limit simultaneous connections to a single domain, typically to six).

Pros:

  • Significant increase in page loading speed.
  • Ideal for 1000s of small images.

Cons:

  • Time required for extra DNS lookups.
  • The HTTP/2 protocol already parallelizes transfers over a single connection, reducing the benefit.
  • Website modifications.
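Despite the cons, the core mechanic is simple.  Here’s a hypothetical TypeScript sketch of deterministic sharding – the shard hostnames are made up, and in practice they’d all be aliases pointing at the same media store:

```typescript
// Illustrative shard domains; real ones would be CNAMEs (or S3/CDN aliases)
// that all serve the same files.
const SHARD_DOMAINS = [
  'https://img1.example.com',
  'https://img2.example.com',
  'https://img3.example.com',
];

// Deterministically map an image path to one of the shard domains, so the
// same image always resolves to the same URL (keeping browser caches warm).
function shardedImageUrl(path: string): string {
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return SHARD_DOMAINS[hash % SHARD_DOMAINS.length] + path;
}
```

A plugin or theme filter would run every media URL through a function like this at render time, spreading image requests across the shards.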

There are WordPress plugins available for offloading all media into Amazon S3 or Azure.

Converting Images with Amazon Lambda and S3

How to Be an A11y

Aisha Blake

This talk defines accessibility as making it easy for everyone to access web content, regardless of how they approach the web.

Approximately 1/5 of people on the web have some form of disability.

How to meet WCAG 2.0 Guidelines

A couple general guidelines:

  • Do your images have an ‘alt’ tag?
  • Is your markup organized semantically?
  • Is your color contrast acceptable?
  • Are your headers and buttons descriptive?
  • Is there padding around clickables?
  • Can a user navigate through all content using only a keyboard?
  • Does focus shift to your dynamic elements appropriately?
  • Do videos have captions, or transcripts for audio?

There are also a few automated tools for auditing accessibility.
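Many of the checklist items above can also be checked programmatically.  As a toy example, here’s a hand-rolled TypeScript check for missing alt text – the ImgInfo shape is illustrative, and a real tool would walk the DOM:

```typescript
// Minimal description of an image element (illustrative shape).
interface ImgInfo {
  src: string;
  alt?: string;
}

// Return the src of every image that is missing meaningful alt text.
function missingAltText(images: ImgInfo[]): string[] {
  return images
    .filter(img => !img.alt || img.alt.trim() === '')
    .map(img => img.src);
}
```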

Doing A Search & Replace Using SQL Server

Had a long weekend of work this week, so this post is going to be short (I didn’t have enough time to flesh something out – I’ll be better next week!).

I was working with a client and found I needed a way to do a search and replace on a SQL database. In particular I was changing a URL over from an old domain to a new domain.

Check the link below to see an easy way to apply it to your own database.

https://stackoverflow.com/questions/59044/how-do-i-do-a-simple-find-and-replace-in-mssql
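For reference, the linked answer boils down to T-SQL’s REPLACE() function inside an UPDATE statement.  Here’s the same find-and-replace idea expressed in TypeScript for clarity:

```typescript
// Replace every occurrence of the old domain in a piece of stored content
// (split/join replaces all occurrences, not just the first).
function migrateDomain(content: string, oldDomain: string, newDomain: string): string {
  return content.split(oldDomain).join(newDomain);
}
```

One caveat worth knowing for WordPress specifically: some options are stored as serialized PHP, where a naive string replacement can corrupt the serialized length prefixes – tools like WP-CLI’s search-replace command handle this correctly.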

Setting up Continuous Integration and Deployment with WordPress using DeployHQ

To get this blog started, I wanted to set it up correctly and use a good Continuous Integration and Delivery workflow.  To do this, I’ve found that DeployHQ does a good job of simplifying the process.  Simply put, DeployHQ monitors a source control repository and deploys changes to servers that you specify.

By automating this process, you’ll no longer need to worry about issues that can come from manual deployment, and you’ll simplify the steps for getting your code into a production environment.  In addition, you can implement build steps, allowing you to do things such as compile code, run unit tests, add configuration files, and more.

This post is specifically going to focus on migrating a series of files, so it works well with WordPress.  In a later post in this series, we will learn how to use build steps for applications that require a build beforehand, such as a React or an Angular application.

If you’re developing using a local WordPress environment (and you should!), this will allow for the changes being made in your repository to automatically be transferred to the server.  No more needing to connect to the server and make sure your files are in sync!

Before you get started, you’ll need the following:

  • You’ll need to have your WordPress source code checked into a repository – the easiest to get started with is Github, but you can also use Bitbucket, Gitlab, or use a private Git/Subversion/Mercurial server.
  • You’ll need somewhere to host your website.  I personally use NearlyFreeSpeech.NET, but you can use any type of hosting that allows for an FTP connection.
  • You’ll need a DeployHQ account (you can get a 10-day free trial, and then you can use the free account indefinitely for a single project.  For about $8 a month, you get 10 projects you can deploy).

Step 1: Creating a New Project

After creating a DeployHQ account, the first thing to do is log in to the DeployHQ website and get to the dashboard:

From here, you can add your first project, connect to your repository, and specify the servers to deploy the code to.  First, you’ll set up a name for the project and specify where the source code is located (I’m using Github here, but you can choose from a few others such as Gitlab, Bitbucket, or your own Git/Subversion/Mercurial server):

If you are using a service such as Github, you will need to log in and approve DeployHQ to access the available repositories.  Once that is done, you should see a list of repositories you can select from:

Step 2: Creating a Server

The next step is to create a server for the project to deploy your code base to.  You will need at least one server (for your Production environment), but may want to create multiples based on the different environments that your project uses.

You’ll also be able to select the default branch that the project will deploy from, alongside setting up notification settings.  Personally, I suggest changing to allow for receiving a notification after all deployments, whether they are successful or not.

If you are using Github to host your repository, you can add the webhook created to the repository afterwards to be able to kick off automatic deployments, fulfilling the continuous delivery capability.

Step 3: Creating Configuration Files

Next, you’ll want to create multiple config files with the deployment to allow for setting the configuration for the application correctly:

  1. Set permalinks to work with an .htaccess file.
  2. Create a wp-config.php file for WordPress configuration (one for each environment).

If you are planning to use post names for permalinks, you can use the following for .htaccess:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

and you can use the generate wp-config tool to create your wp-config.php file.

Step 4: Deployment

Once all of the steps above are complete, you are ready to deploy your web site!  Just click on the ‘New Deployment’ button on the top right, and you’ll be brought to this screen:

You’ll be able to change a few things on this screen if desired, such as selecting which server to deploy to or scheduling the deployment for a later time (useful if you don’t want these changes to take effect until off-peak traffic hours).

You can click on ‘Preview’ to check which files are going to be transferred over (good to make sure you’ve set which revisions to deploy correctly), and you can click ‘Deploy’ to actually start the deployment.  Once you start a deployment, you’ll be brought to this screen:

This shows the actual process of the deployment – once this screen shows that it’s finished, you’ve completed your first deployment!

Ripping a DVD To MP4 Using Handbrake

Disclaimer – my quick research tells me the legality of making copies of your digital media is a gray area.  The rule of thumb is that you are allowed to make a backup copy of physical media that you own, but you assume any risks that may come from copying your physical media to video files.

My wife watches a series of DVDs she uses for her fitness, and I wanted to get them off of the DVDs and onto her computer, so she doesn’t have to keep lugging around an external DVD drive and the actual discs.

There are a few options I’ve found in ripping DVDs to your computer for later use, such as:

  • HandBrake – an open-source video transcoder that can rip all kinds of media.  A little complex to use and requires some setup to get in place.
  • WinX DVD Ripper Platinum is another application I’ve found that you can use for a quick download.  To rip a full DVD, you’ll have to purchase the full version (around $39.95).

I decided to go through and work on configuring Handbrake just to learn a bit more about the process and have something I can come back to anytime I need to rip a movie or other data from a DVD.

First, download and install HandBrake.  After installing HandBrake, you’ll want to install libdvdcss (32bit, 64bit) in order to bypass any copy protection the DVD may have.  Without this file, you may end up with a video that comes back scrambled and unviewable.

Once you download the libdvdcss file, you’ll move it to the HandBrake installation location (usually C:/Program Files/HandBrake).  After this is done, start HandBrake and you’ll see the following:

Assuming you already have your DVD in your laptop (or connected via an external drive), you can click on the DVD on the left side and allow it to load the source.  This may take 15-20 minutes to load everything into the application.

Once everything is loaded, you’ll just need to do the following:

  • Select the title that matches the length of your video (usually one of the first few titles).
  • Choose a filename and a destination for the DVD rip output (the movie).

Once this is done, click on ‘Start Encode’, and allow the DVD to start ripping.  At the bottom left, you should see the encode process begin.  Once that’s all set, check the video and see if the encode worked!

Remember, if you end up with something that is scrambled, make sure you set up the libdvdcss file in your HandBrake directory correctly.

Sources:

Lifehacker

Fixing the scrambled video issue on YouTube

Setting Up My New Windows 10 PC

Whenever I set up a new machine, I go off of this list to make sure I’ve got all the right software, along with getting everything configured.  That way, everything is taken care of, and nothing will surprise me once I’m up and working.

Configuration

The first thing to do is run Windows Update and make sure I have the latest version of the operating system.

Once that is done, I go in and set up Windows 10 Mail to use my email accounts and calendar.  Some of the things I do include:

  • Adding all email accounts.
  • Linking inboxes together.
  • Creating signatures for each account.
  • Setting up the calendar with the following:
    • Coloring calendars appropriately.
    • Setting the first day of the week to Monday.
    • Setting working hours.

Software

The first thing I install is Microsoft Office 2016 Pro Plus.  I make sure to log into OneNote as well so I can start syncing my notes into the computer.

For everything else, I like to use Chocolatey to expedite the process and get everything set up quickly.  Ninite is also a good choice to speed things up, but doesn’t have quite the same selection of applications.

Install the following from an elevated CLI (the Chocolatey package name is in parentheses), adding or removing any applications as desired:

  • Remote Desktop Software (teamviewer)
    • Log in to account, set up machine for easy access.
  • Internet Browser (google-chrome-x64)
    • Log into account to sync bookmarks, extensions.
  • Cloud Storage (dropbox)
    • Log in and start data sync.
  • Anti-Theft (prey)
    • Log in to configure machine, clean up any unused machines.
  • Version Control (git)
  • Visual Studio Code (visualstudiocode)
    • Install the following extensions:
      • TSLint
      • Path Intellisense
      • vscode-icons
      • One Dark Pro
  • Visual Studio Community 2017 (visualstudio2017community)
    • Open the Visual Studio Installer and install the desired workloads.
  • Resharper (resharper)
    • Run Visual Studio and register product.
  • Code Comparison Tool (beyondcompare)
    • Run and register product.
  • Divvy (divvy)
    • Register product
    • Start Divvy at login, Set Ctrl-Shift-Z as shortcut
  • Torrent Client (qbittorrent)
  • WAMP Server (wamp-server)
  • Disk Usage Viewer (windirstat)
  • Chat Client (franz --pre)
    • Log into Franz account and all active accounts.
  • Node.JS (long term support) (nodejs-lts)
  • File Archiver (7zip)
  • FTP Client (filezilla)
  • Cleanup Tool (ccleaner)

After the above list is installed, I finish by globally installing any tools I use for JavaScript development using npm.  These tools are:

  • @angular/cli
  • npm-check-updates

Aggregating Data Into a Single SQL Server Column

2018.04.13 EDIT:  I made a slight change to this to fix the filtering for the Start Date and End Date for the Promotions in the query below.

This week, I was working with a client using nopCommerce to try and fix a performance issue relating to syncing data between their legacy product management system and their nopCommerce system used for their front-facing website.  A plugin was built in nopCommerce that would utilize a SQL Server staging database to stage data from the legacy system and import into nopCommerce on a daily basis.

Specifically, this issue relates to trying to add a promotion to a product, in which promotions and products have a many-to-many relationship with each other, and each product has a one-to-many relationship with a series of product tabs that are used to display information on the product.  The entity relationship looks like this:

When the client attempted to add a promotion to 3000 products, it slowed the sync process from about 30 minutes to 8 hours – causing those products to not reflect the new promotion until mid-day and making for an unhappy client.

Digging into the code, the major slowness was related to creating the product tab page – the system searched for an existing tab page for the product, then either created a new one or added the appropriate data to the existing one, forcing iteration through a large dataset and giving a poorly scaling solution.

Creating a stored procedure for this made the most sense, since it’s just copying data from a few tables into another.  But in set-based SQL I can’t simply iterate through each mapping to create the Promotions product tab, so I needed to use aggregation to build the product tab content for each product.

I found out about the STUFF() function and came up with this:

SELECT TOP 1 WITH TIES
    ppm.Product_Id AS productId,
    'Promotions' AS title,
    STUFF(
        (SELECT '<br />' + p2.Name
         FROM Promotions p2
         JOIN ProductPromoMappings ppm2 ON ppm2.Promo_Id = p2.Id
             AND p2.StartDate <= GETDATE() AND GETDATE() < p2.EndDate
         WHERE ppm2.Product_Id = ppm.Product_Id
         FOR XML PATH(''), TYPE)
             .value('.', 'varchar(max)'), 1, 6, '') AS content
FROM ProductPromoMappings ppm
JOIN Promotions p ON p.Id = ppm.Promo_Id
    AND p.StartDate <= GETDATE() AND GETDATE() < p.EndDate
ORDER BY ROW_NUMBER() OVER (PARTITION BY ppm.Product_Id ORDER BY ppm.Product_Id DESC);

Here’s what is happening – we go through ProductPromoMappings and return one product tab row per product.  For the tab content, an inner query collects the name of each active promotion for the product, then concatenates them into a single value for use by the outer query.

Here are a few more details on some of the commands that may be new to you:

  • WITH TIES – Addition to the ‘TOP’ keyword that allows for returning ‘ties’, or multiple rows in last place with the limited data set.
  • STUFF() – Inserts a string into another string.  Specifically, this allows us to insert the ‘<br />’ element into each subsequent string without inserting the line break in the first element.
  • FOR XML PATH('') – Outputs the results of a query as XML – for us, this means taking the contents and combining them into a single string.
  • .value() – Converts the XML result from FOR XML PATH into a varchar, which un-escapes characters that XML conversion would otherwise encode (such as '<' and '>').
  • ROW_NUMBER() – Numbers the result rows – this is used with the OVER and PARTITION BY keywords so that, combined with TOP 1 WITH TIES, we keep one product tab row per Product ID.
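To make the set-based logic above more concrete, here’s roughly the same aggregation expressed in TypeScript – the data shapes are illustrative, not the actual nopCommerce schema:

```typescript
// One row of the mapping table joined to an active promotion (illustrative).
interface PromoMapping {
  productId: number;
  promoName: string;
}

// For each product, concatenate its promotion names into one '<br />'-joined
// string – the same result the STUFF()/FOR XML PATH query produces per row.
function buildPromoTabs(mappings: PromoMapping[]): Map<number, string> {
  const tabs = new Map<number, string>();
  for (const m of mappings) {
    const existing = tabs.get(m.productId);
    // STUFF() strips the leading '<br />' in the SQL; here we simply only
    // insert the separator between elements.
    tabs.set(m.productId, existing ? existing + '<br />' + m.promoName : m.promoName);
  }
  return tabs;
}
```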

Putting this command into a stored procedure reduced the runtime from 8 hours to about 15 seconds, as the data is now just copying directly into the database.


Reference:

https://stackoverflow.com/questions/5031204/does-t-sql-have-an-aggregate-function-to-concatenate-strings/5031297#5031297


Adding Google Analytics To Your Angular CLI App

As you’re working through your Angular CLI application, you may want to set up Google Analytics to track the traffic coming in to your website.  This guide covers the basics of viewing the active users on your web application, along with seeing more analytical data on how it is being used.

Pre-reqs:

Set up a Google Analytics account and property for your web site.  You should get a code that looks like the following for your web application: UA-XXXXXXXXX-XX.

Adding the Tracking Code to Your Application

Once you have the tracking code from your Google Analytics account, you’ll want to add it to the application somewhere that it can be referenced in the application.  The best place to do this is in the environment.ts file meant for environment-specific variables.  Add the following to your environment.ts file:

export const environment = {
    // ..any existing environment variables stay here.
    googleAnalyticsTrackingId: 'UA-XXXXXXXXX-XX' // add the tracking ID from your GA account
};

Once this is done, you’ll need to add the actual tracking code to your application.  One way to do this is to write it into the code that bootstraps your Angular App in the main.ts file.  Add the following to your main.ts file:

..
import { environment } from './environments/environment';
..

if (environment.googleAnalyticsTrackingId !== '') {
 document.write('<script>' +
 '(function(i,s,o,g,r,a,m){i[\'GoogleAnalyticsObject\']=r;i[r]=i[r]||' +
 'function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();' +
 'a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;' +
 'm.parentNode.insertBefore(a,m)})(window,document,\'script\',' +
 '\'https://www.google-analytics.com/analytics.js\',\'ga\');' +
 'ga(\'create\', \'' + environment.googleAnalyticsTrackingId + '\', \'auto\');</script>');
}

Modifying The App Component

Google Analytics works by receiving page events from the web application, such as page loads.  With a single page application, you’ll need to send an event for each time the user routes to a different part of the application.  The easiest way to do this is to edit the App Component, which will affect all routes.

Make the following additions to your app.component.ts file:


..
import { Router, NavigationEnd } from '@angular/router';
import { environment } from '../environments/environment';

declare let ga: Function;
..

constructor(private router: Router) {
    if (!this.hasGoogleAnalyticsTrackingId()) { return; }

    this.router.events.subscribe(event => {
        if (event instanceof NavigationEnd) {
            ga('set', 'page', event.urlAfterRedirects);
            ga('send', 'pageview');
        }
    });
}
...

private hasGoogleAnalyticsTrackingId() {
    return environment.googleAnalyticsTrackingId !== null &&
           environment.googleAnalyticsTrackingId !== '';
}

Verify Working Functionality

Once this is all set, you’ll want to make sure your changes are working – you can actually determine this using your local environment.  Open the Google Analytics dashboard and go to the view for the property you’ve created for your application, and access the ‘Real-time’ Reports.  You should see the following:

Start your Angular application locally and access the home page.  Once you do this, you should see the following change in the Google Analytics dashboard:

and just to see how the system acts on routing changes:

Congratulations!  You’ve set up Google Analytics for your Angular app.

Setting Up For Multiple Environments

Depending on how you’re deploying your Angular CLI app, you’re likely using multiple environments (such as a Staging and a Production environment).  A common practice is to set up a separate property inside Google Analytics for each environment, allowing you to view statistics for each site.  This is especially useful for verifying that changes are actually making it through each environment.

The optimal way to set this up is using the environment specific files in Angular.  You’ll include the property-specific code for each environment, and build the application for each environment.

For this example, let’s imagine we have 3 different environments:

  • My local environment I use for development – we do not want the Google Analytics code running on this.
  • A staging environment used for testing changes before promoting changes to production.
  • A production environment that will be used by the end user.

Angular CLI comes with two environment files (environment.ts and environment.prod.ts), so let’s create a third one called environment.staging.ts.  Once that’s done, add the following content to the new file and to environment.prod.ts:

export const environment = {
    production: true, // allows for simulating production configuration
    googleAnalyticsTrackingId: 'UA-XXXXXXXXX-X'  // your tracking ID for the specific environment goes here
    // ..and any other environment variables you may have.
};

Notice that we’re leaving environment.ts as is – since we want the codebase to not use Google Analytics when working locally.

We will make one more change to .angular-cli.json to allow for Angular CLI to recognize the new environment and allow for the Staging environment build:

"environments": {
    // dev and prod statements are here, add this line below
    "staging": "environments/environment.staging.ts",
}

Once this is done, you’ll be able to produce different builds that associate with different tracking IDs by passing the environment name to the build command.


References:

Tutorial: Angular 2 & Google Analytics with Event tracking

https://codeburst.io/using-google-analytics-with-angular-25c93bffaa18

https://github.com/angular/angular-cli/issues/4451