Setting Up A Windows 10 Workstation in 2018

2018.10.24 EDIT: Added a few new applications when setting up my Surface Pro 4.
I recently built a new PC for myself to use for work and wanted to document the process of getting it set up. Each year I reinstall every system I have, both to make sure I have this process down and to audit what I’m actually using so things stay clean.
Here’s the process I use when setting up a Windows 10 workstation:


First, I go to the Microsoft website and create a bootable thumb drive containing Windows 10.
I restart my computer and boot to the thumb drive I created earlier. Once Windows Setup loads, I select the Custom: Install Windows only (advanced) option.
For my PC, I have two drives running in it:

  • An SSD that’s responsible for storing the Windows 10 OS.
  • An HDD that’s responsible for holding all My Documents data, as well as everything for Dropbox.

I’ll select the SSD as the target for the OS and kick off the installation. After some time, the installation will finish and the computer will restart.

Display Configuration

Once I have the Windows desktop running, I like to set up my displays before anything else. I currently run three 4K monitors on a GeForce 1080 Ti, so I set up scaling and make sure the monitors are configured as desired.
Once that’s done, I set up Windows to display everything as desired, including:

  • Shifting the Taskbar over to the right side, and only showing one taskbar.
  • Turning off Cortana, Task View, and People.
  • Switching the default app mode to Dark.

Essentials: Mail

Once I have the display set up correctly, the next step for setting up a Windows 10 workstation is to set up some of the essential software on the system. I like to use the Windows 10 Mail app for both my mail and calendar. I’ll access that and add all relevant accounts, and then make the following configurations:

  • Creating a linked inbox.
  • Turning off email notifications.
  • Setting swipe right to Delete.
  • Creating a signature for all accounts.
  • Setting the calendar to start on Monday.

Software Installation

Next up is downloading a host of software for the PC. I use Chocolatey for as much as I can, so after setting it up through PowerShell (run as administrator), I install the following applications using Chocolatey:

  • googlechrome (sign in to account, configure extensions)
  • divvy (register application, start Divvy at login, hide from tray, set Ctrl-Shift-Z as shortcut)
  • 7zip
  • ccleaner (set up to check automatically)
  • dropbox (log in and download all files, point to HDD)
  • filezilla
  • git
  • nodejs
  • libreoffice-fresh
  • vscode
  • rescuetime (log in)
  • bitnami-xampp (LAMP server instructions, including setting up a full local WordPress installation)
  • mysql.workbench
  • postman
  • qbittorrent
  • steam
  • sql-server-management-studio
  • teamviewer
  • visualstudio2017community
  • azure-cli

You can do this with one command, like so:

choco install -y googlechrome ...
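For reference, the sketch below assembles the full one-liner from the package list above and just prints it (nothing is installed here); run the printed line in an elevated PowerShell prompt:

```shell
# Build and print the full `choco install` command.
# Package names are copied verbatim from the list above.
packages="googlechrome divvy 7zip ccleaner dropbox filezilla git nodejs libreoffice-fresh vscode rescuetime bitnami-xampp mysql.workbench postman qbittorrent steam sql-server-management-studio teamviewer visualstudio2017community azure-cli"
echo "choco install -y $packages"
```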

In addition, there are a few applications I use that aren’t in the Chocolatey package manager:

  • Franz – web application allowing me to combine Hangouts, Messenger and Slack into one application.
  • Manta – open source application I use for invoicing (looking for a replacement for this, one that can store data in the cloud).

After everything is installed, I go back through and configure each application that needs it (the configuration notes are in the list above).

Visual Studio Code Setup

Once all of my software is set up, I take a look at Visual Studio Code and set it up, since that’s where most of my development occurs.
I install the following plugins:

  • TSLint
  • Angular Language Service
  • EditorConfig for VS Code
  • Path Intellisense
  • Prettier – Code formatter
  • Terraform
  • PowerShell
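If you prefer scripting this, the extensions above can also be installed with VS Code’s `code` command-line interface. The Marketplace IDs below are from memory and may have changed, so treat them as assumptions and verify each one in the Extensions view:

```shell
# Print the install command for each extension; drop the `echo` to actually run them.
# The extension IDs are assumed Marketplace identifiers -- verify before use.
for ext in eg2.tslint Angular.ng-template EditorConfig.EditorConfig \
           christian-kohler.path-intellisense esbenp.prettier-vscode \
           mauve.terraform ms-vscode.PowerShell; do
  echo "code --install-extension $ext"
done
```

You can check which IDs are actually installed afterwards with `code --list-extensions`.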

Once this is done, I install Fira Code and set it as the editor font.
In terms of configuration, I copy and paste the current settings I like to use:

    "editor.rulers": [80],
    "editor.minimap.enabled": false,
    "window.zoomLevel": 0,
    "explorer.confirmDelete": false,
    "explorer.confirmDragAndDrop": false,
    "workbench.startupEditor": "newUntitledFile",
    "editor.formatOnSave": true,
    "editor.fontFamily": "Fira Code",
    "editor.fontLigatures": true,
    "typescript.updateImportsOnFileMove.enabled": "always",
    "prettier.singleQuote": true,
    "prettier.printWidth": 80

Configuring PowerShell

Once Visual Studio Code is set up, the next step is to configure Windows PowerShell. There are a few things to do here: build a profile, then set up the Azure CLI.
I run the following commands in administrator PowerShell:

Set-ExecutionPolicy Unrestricted
New-Item -Path $profile -Type File -Force

Once that’s done, I should have a profile file created. I add the following to it:

Set-Location D:

Once that’s done, I’ll restart PowerShell and confirm that it starts in the D: drive.
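As a side note, profile changes can also be applied to the current session without restarting by dot-sourcing the file:

```powershell
# Show where PowerShell expects the profile file
$PROFILE
# Re-run (dot-source) the profile in the current session
. $PROFILE
```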

Final Touches

Once I’m all set with most things, there are a few more things I like to do:

  • Hide all desktop apps
  • Unpin everything from the Taskbar

All Set And Ready To Go

Once all of that is done, the Windows 10 workstation is ready to go. I’ll take a look at this again in 2019 to see what changes in a year. Perhaps I’ll switch over to using Linux?

Upgrading Jenkins On Azure

This guide covers upgrading a Jenkins instance that was created through the Jenkins Azure Marketplace.  It assumes you’ve already created a Jenkins virtual machine on Azure and that the initial setup is complete.  If not, you can find more information in the Microsoft documentation.

SSH into the Virtual Machine

If you’ve been using this instance, you should be familiar with SSH forwarding and being able to access Jenkins using localhost.  If this is new, you can read this Jenkins article on the security setup for the Azure Jenkins Marketplace instance.
Use this command in a command line interface to log in to your virtual machine:

ssh -L 127.0.0.1:8080:localhost:8080 <username>@<domain_name>

Replace the following:

  • Username – the server admin username created when creating the Azure VM.
  • Domain Name – the domain name to access the virtual machine.

You’ll be asked for a password – use the password provided alongside the username when creating the virtual machine.
After logging in, you’ll see a screen like the one below, verifying you are connected to the virtual machine.
[Screenshot: logged into the virtual machine]
Open http://localhost:8080 and verify you can access the Jenkins interface:
[Screenshot: Jenkins interface via localhost]

Download the Newest WAR

After logging into the server, you’ll use the following command to download the latest version of Jenkins:

sudo wget

Assuming all is successful, you should see the following:
[Screenshot: downloaded the newest Jenkins WAR]
After downloading the .war file, copy it to the Jenkins directory:

sudo mv jenkins.war /usr/share/jenkins/jenkins.war

(if you need to know the location of the WAR file on the server, you can check it in Manage Jenkins -> System Information)
And finally, with the new .war in place, restart Jenkins:

sudo systemctl restart jenkins
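Putting the steps together, here’s a sketch of the whole upgrade as a dry run that just prints each command for review before you run anything. The download URL and the backup step are my additions — the generic “latest” mirror URL is an assumption, so check jenkins.io for the exact version you want; the other paths come from the steps above.

```shell
# Dry-run sketch of the upgrade: prints each step so you can review it.
# The wget URL is an assumed "latest" mirror; the cp backup is an added precaution.
WAR=/usr/share/jenkins/jenkins.war
for step in \
  "sudo wget https://updates.jenkins.io/latest/jenkins.war" \
  "sudo cp $WAR $WAR.bak" \
  "sudo mv jenkins.war $WAR" \
  "sudo systemctl restart jenkins"
do
  echo "$step"
done
```

Keeping the `.bak` copy means a bad upgrade can be rolled back with a single `mv` and a restart.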

Verifying the installation is simple: access Jenkins and check the bottom right for the updated version number:
[Screenshot: Jenkins showing the updated version number]



Living In A DevOps World (Part 2: Implementation)

This post was originally featured at
Part one of this article focused on some of the more behind-the-scenes benefits of an Agile DevOps approach. In part two the focus turns to some of the other traditional problems that a well-executed DevOps approach can address, and how doing so can benefit an organization in more ways than just a technical perspective.
By way of quick review, DevOps was born out of the Lean and Agile software development methodologies when it became clear that, while those methodologies did indeed speed up the development process, a bottleneck still occurred when push came to shove and new code had to be moved to quality assurance and production environments.
DevOps was created to more seamlessly connect the development and operations processes, making code migration, testing and promotion to production a more efficient process. To accomplish this, the DevOps approach had to find solutions for some of the issues that caused operational delays, and create new ways to organize, implement and continuously optimize the operations process.

Overproduction and Overprocessing

For those who have been in development and/or operations for any length of time, it quickly becomes clear that there is a multitude of operational safety checks that serve to protect a production environment. While that is vitally important, it also became clear that an “over” problem had grown around many operational procedures, and in many cases it manifested itself in the development process. That includes overproduction: making or requesting more than is needed, from a requirements or operations perspective, just to clear arbitrary operations process hurdles.
There is also overprocessing: development and operations resources doing more work than required (as opposed to just enough, as Lean and Agile would suggest) to smooth the transition of code and functions from development to operations. This created waste in time, resources and budget that was not proportional to the benefits derived from following the operations process.

Motion and Transportation

Similarly, DevOps also sought to solve the operational problems of both motion and transportation. That is, the amount of excess work required to deliver new code to meet the operational requirements for code migration. The friction caused by such requirements slowed the motion and momentum of the development process. The same is true of transportation, or the difficulty in moving code between environments such as testing, quality assurance and production.
In both cases, development and project momentum was sacrificed for what often turned out to be a series of artificial hurdles that had long since become less effective or even obsolete parts of the operations process.

Correction and Inventory

In most instances, all of the above resulted in the final maladies of the pre-DevOps development and operational ways. The first was the number of in-flight corrections required when timelines were squeezed, and the rush was on to get to production. Unfortunately, this went hand in hand with the ultimate problem of good code being sacrificed for expedient delivery, often resulting in inadequate functionality, system outages and, in the end, lost market opportunity and revenue.

3 Keys to DevOps Success

Any successful DevOps implementation must address three critical factors in this order: culture, organization and tools.

Culture

It’s critically important to connect an organization’s values to the DevOps process. Valuing quality, timeliness and organizational alignment of goals and objectives is the first step toward DevOps success. Such cultural values translate directly into a DevOps organization.
Providing empowerment and accountability to DevOps team members helps to build ownership among the team, and trust from their customers in the rest of the organization. It also helps to provide a physical environment that fosters collaboration, teamwork and continued learning. Co-working spaces and collaboration tools such as Slack are a good start. Attending external conferences to broaden perspectives and to bring new ideas back to the team is often beneficial. From there, brown bag lunch sessions where ideas and experiences can be shared, frequent post-mortems on implementations to hone best practices, and even internal mini-conferences where several departments come together for a day to discuss DevOps practices are all effective ways to build a strong DevOps culture.

Organization

Any good DevOps organization is two-sided; that is, it has to work from the top down and from the bottom up at the same time.
The top-down part is in the ability to “see the system” from a macro level, allowing for process understanding and insights from a business workflow perspective. This helps to identify the pain points and bottlenecks in the current process that can be optimized through the DevOps process.
Once that’s accomplished, the bottom-up work begins. Identifying inconsistencies in code deployment environments that cause delivery issues, eliminating manual and custom-built deployment processes, and quarantining inefficient or poorly written code until it can be redone or eliminated are all part of optimizing the time, quality, resources and success factors for deploying production systems on schedule. It’s also important to continually audit the current processes with an eye toward eliminating those that are no longer required or useful but have been kept in place out of the fear of “breaking something we don’t understand.” If nobody understands it, then it shouldn’t be in production software.

Automation Tools

The final factor for DevOps success is to have the right toolset.
Communication: Any DevOps team requires the ability to quickly and directly communicate with other team members sans meetings. For this purpose, tools such as Slack (real-time chat), Skype (video chat), and Confluence (for storing persistent information) are pretty good options.
Planning, Monitoring & Consistency: For the team’s planning needs, a tool such as Trello that can provide Kanban board functionality is worth a look. For issue tracking and monitoring of any system’s overall health, tools such as Jira and NewRelic respectively provide some good functionality. Likewise, consistency is vital in a DevOps world, and using automation to ensure that all systems are configured as desired across different environments is a crucial best practice. For this, a tool such as Ansible is worth a review.
Integration & Deployment: For continuous integration of systems in development, and as a way to tighten the feedback loop for developers to determine whether the central build used for deployment to production is working as intended, the Jenkins toolset might be a good fit. And finally, when it comes to making any deployment process as painless as possible, a tool such as Docker, which can create containers for an application that include all of its dependencies, reducing the complexity of deploying to multiple environments, is a solid way to go.
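To make that last point concrete, here is a minimal Dockerfile sketch for packaging an application together with its dependencies — the base image, port and file names are illustrative choices, not from the article:

```dockerfile
# Example: package a Node.js app and its dependencies into a single image.
FROM node:8-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package.json package-lock.json ./
RUN npm install --production
# Copy the application code itself
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this once (`docker build -t myapp .`) produces an artifact that deploys identically to every environment.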
The point of all of this is to create an environment—culturally, technically and physically—where DevOps can succeed, grow and thrive. Organizations that can create an effective and efficient DevOps environment have also created a competitive advantage for themselves.

Living In A DevOps World (Part 1: Recognition)

This post was originally featured on
DevOps has evolved into a conceptual and working model for more effective software implementation. However, there are differences of opinion on the real-world value of any approach to date, including the question of the best way to implement a real-world DevOps environment. This two-part article will focus on what an Agile DevOps approach is and is not meant to address.
DevOps sits at the nexus of three essential business technology functions: software development, quality assurance and operations. A short and concise definition of DevOps proposed in 2015 seems as appropriate as any:

DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into regular production while ensuring high quality.

The definition was suggested in the book, “DevOps: A Software Architect’s Perspective,” and the authors have hit upon the essence of the practice. The key, of course, is how to put that concept into practice.
The first step on the journey to effective DevOps is the recognition that the concept is the result of the rise of the Lean and Agile software development methodologies. Those methodologies, among other things, emphasize the following:

  • A focus on customer value.
  • The elimination of waste.
  • Reduced cycle time (accomplishing work faster, releasing faster).
  • Shared learning.
  • Avoiding batching (don’t do things until required).
  • Theory of constraints (break things up, focus on individual issues).
  • Continuous integration, testing and delivery.
  • Faster time to market.

DevOps in Practice

Something had to be invented to adhere to the principles above; that something was DevOps.  Over time, an effective practice should address any number of business technology pain points. The following short list of those pain points and their respective solutions should prove instructive.

System Downtime

System outages are inevitable as long as systems are designed, tested and implemented—even with increased automation—by imperfect beings. DevOps changes the focus from trying to create applications that never fail to designing systems that can recover quickly.  This decreases aggregate systems outage time over the life cycle of any application or system.

Long Release Cycles

This was a staple of traditional systems development and is most closely associated with the waterfall methodology for systems development. After requirements were created, the development team would be locked away for weeks, months or even years before emerging with “fully” working software.  This software inevitably no longer satisfied rapidly evolving business requirements. DevOps fits hand-in-glove with the Agile practice of short windows of incremental changes instead of long release cycles, providing value to customers as quickly as possible.

Team Conflict

Having been born from the cultural combination of Agile and Lean, DevOps has taken on the problem of the functional silos often erected between development, operations and the business customers. It follows the methodological approaches of collaboration and teamwork: first understand what others know, then leverage the best of it to solve business problems more rapidly. There is also a cultural bent toward experimentation, continual learning and constant improvement. This leads to blameless post-mortems, where collaborative discussions replace finger-pointing and blame.

Knowledge Silos

Functional silos have led to compartmentalized knowledge. If the old game was that knowledge is power, the new game in the DevOps world is that knowledge is freely exchanged as an enabler to solving business problems. DevOps addresses the problem of information being lost in translation between the development and operations functions by eliminating the functional barricades and making knowledge sharing the highest form of collaboration.

Wait Times

Waiting for things to happen used to be a standard operating procedure in the pre-DevOps world. Project plans were created and managed to account for the time it takes for new code to be moved into a testing, quality or even production environment. This was a momentum killer for projects and at times a morale killer for developers waiting to see what changes they might need to make to their code.
The combined Agile and DevOps approach rewrites the traditional approach to code migration, smoothing and eliminating wait times.  Projects can flow more seamlessly from start to finish. This also keeps business resources—testers, approvers, etc.—more engaged as a result of a constant flow of new functions and features to test and use.  There is a financial aspect to this as well. Reducing time to market for new functionality, reducing or eliminating idle hands—be they technical or business—and delighting customers with a steady stream of enhancements and features all go directly to an organization’s top and bottom lines.
That, after all, is in many ways what the DevOps approach is all about. All of these critical areas become the means to accomplish it. Part two of this article will focus on some more of the benefits of a DevOps approach, and how to achieve them.

Setting Up a Local WordPress Development Environment

A local WordPress development environment is useful for a number of scenarios, including:

  • Setting up the website and adding content.
  • Installing plugins to test their functionality.
  • Viewing a functional website without a shared environment.
  • Theme or plugin development.

This guide will walk you through the following steps:

    1. Installing a LAMP Development Environment
    2. Creating a database
    3. Downloading the WordPress core files, and setting them up on the server.
    4. Creating a local domain for use.

    Installing A LAMP Development Environment

    XAMPP is a cross-platform LAMP (Linux, Apache, MySQL, PHP) environment used for running WordPress on your local machine.  A few platform-specific alternatives are WampServer for Windows and MAMP for OS X.  All of these options provide the same functionality for running a local WordPress environment, but I chose XAMPP for the following reasons:

    • Cross-platform compatibility: the setup and configuration for this tool stay the same between platforms.
    • Personal preference for the XAMPP Control Panel over the other tools.
    • The use of MariaDB over MySQL**.

    **Note that XAMPP actually uses MariaDB as its database server.  This is different from the standard MySQL database server used for WordPress.   I’ve provided a reference to a few articles at the end of this post if you’re interested in learning more.

    Starting with XAMPP

    When opening XAMPP, you’ll be presented with the Control Panel:
    To get the environment started, start both the Apache and MySQL services.  Check that the server is running correctly by accessing http://localhost on your machine.  If the XAMPP dashboard appears, you’re good to move to the next step.

    Creating a Database

    The next step is creating or importing a database in your local MySQL server.  This can be done using phpMyAdmin (set up with your XAMPP installation) or by using MySQL Workbench to connect.  I’ll use MySQL Workbench, since it gives a more friendly UI to work with.
    Right-click ‘Schemas’ on the left side to create a database for WordPress to use.  Later, you’ll use this schema name when running the WordPress installer.

    Linking/Installing WordPress

    After the server is running and the database created, the next step is to get the WordPress source code onto the server.  Download the files for WordPress and copy them to a directory on your computer, something like:

    C:\code\<your-project-name>
    You’ll likely want to store the source code for the site somewhere other than the /htdocs folder.  This way, you can have a central repository on your machine for all projects.  You can use a symlink to accomplish this.  Open a command line terminal as an admin and run the following:

    mklink /D C:\xampp\htdocs\<your-project-name> C:\code\<your-project-name>

    After this, check that you can access the WordPress installation at http://localhost/<your-project-name>.  Assuming you can see the WordPress installation screen, you have everything ready to get started.  You can progress from here if you so desire, but let’s also create a local domain for use as well, to make access to the site a little easier.

    Creating a Local Domain

    Creating a local domain will allow you to access your local running website at something such as http://local.<your-project-name>.com.  There are two parts:

    1. Editing the Apache server configuration to serve the local domain.
    2. Editing the hosts file to point the new domain from your machine to your server.

    First, open the Apache Virtual Hosts configuration file at C:\xampp\apache\conf\extra\httpd-vhosts.conf and add the following:

    <VirtualHost *:80>
     DocumentRoot "C:\xampp\htdocs\<your-project-name>"
     ServerName local.<your-project-name>.com
     ErrorLog "logs/<your-project-name>.error.log"
     CustomLog "logs/<your-project-name>.custom.log" common
    </VirtualHost>

    Next, open C:\Windows\System32\drivers\etc\hosts as an administrator and add the following line:

     <your-local-domain>

    Restart Apache using the XAMPP Control Panel and attempt to access the site using the local domain.  If you’re able to reach the WordPress installer screen at the local domain, you’re all set to start the 5-minute installation.

    Complete the 5-minute WordPress Installation

    Installing WordPress is very simple.  Select the language for installation, and then fill out the database connection details like below.  You’ll just need to change the database name to the schema created:
    You should now be all set with a local WordPress environment, useful for plugin development, trying out different themes, and other configuration changes without affecting a live website.


    MySQL Face-off: MySQL or MariaDB?

    Installing WordPress in Azure In 15 Minutes

Before starting, a few notes on cost and assumptions:

  • ~$9 a month for a custom domain, which does not support SSL.
    • If you’re interested in having SSL support, you’ll need to jump to the B1 tier.  This costs ~$32 a month.
  • ~$25 a month for the lowest tier of Azure Database for MySQL.
  • This assumes a fresh installation of WordPress, with no existing databases available on your current Azure instance.  If you already have a MySQL server set up in Azure, you’ll change the directions a bit to just use the existing instance.

    Step 1: Create a WordPress Resource

    After logging into the Azure portal, click on ‘Create a resource.’  Search for ‘WordPress’ and select the image published by WordPress.
    You’ll be asked to provide some information, including:

    • App Name: a unique name you would like to use for the app.
    • Resource Group: type in the name of the new resource group to include the resources in.  You can also add these resources to an existing resource group.
    • App Service plan/Location: select the app service plan to use for the WordPress instance.  This is what drives the overall cost for all running resources – you can read more here.
    • Database: to use WordPress, Azure needs to create a MySQL server, which it provides through the ‘Azure Database for MySQL’ service.  Here you’ll set the server admin credentials and the pricing tier to be used.  Azure will also generate a database name for immediate use, which you can change to something more friendly.

    After about 5 minutes, the WordPress instance should be created.  You can test it by accessing https://{your-app-name}.azurewebsites.net.  If you can see the WordPress installation screen, you’re good to go.

    Step 2: Complete WordPress Installation

    Once the database user has been created, run through the standard WordPress installation.  If you’re familiar with this process, you’ll notice the wp-config screen is skipped.  That’s because the file is already generated for you when installing the system in Azure.
    Although this is the fastest way to get started, there are a few issues that you’ll want to address as you begin development:

    • Setting up a database user that only has access to the newly created database.  This ensures the user accessing the database has only the rights needed for WordPress.  If the wp-config.php file is compromised, you don’t want the server admin credentials exposed.
    • Securing the Azure Database for MySQL Server to only allow access for particular IP addresses.  The current setup allows for all IP addresses to connect in.  Convenient, but not very secure.
    • Adding an SSL connection requirement.  This ensures a secure connection to the database server.
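For the first bullet, the SQL itself is short. The sketch below just prints the statements — run them against the Azure MySQL server; the database name, user name and password are placeholders to substitute with your own values:

```shell
# Prints SQL for a least-privilege WordPress database user.
# `wordpress`, `wpuser`, and the password are placeholders.
cat <<'SQL'
CREATE USER 'wpuser'@'%' IDENTIFIED BY '<strong-password>';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'%';
FLUSH PRIVILEGES;
SQL
```

Afterwards, swap these credentials into wp-config.php in place of the server admin ones.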

    I’ll write more on the above in the upcoming weeks, covering how to secure your WordPress installation.