Setting up OpenVPN on Azure From Scratch

Why do this? One of the major benefits is being able to use the internet with a specified IP address. If you’re going to be working with systems that whitelist specific IP addresses, you can use this solution to gain access regardless of machine and location.

This guide assumes that you:

  • Have an Azure subscription in place.
  • Have a means of SSHing into a virtual machine, such as OpenSSH.

Creating the Virtual Machine

First, access the Azure Marketplace image for OpenVPN.

Next, fill out the form to create a virtual machine with OpenVPN pre-installed.

Once you’re finished, it should look like this:

After creating the virtual machine, access it via the Azure portal and take note of the Public IP address – you’ll use this to SSH into the server and complete the installation:
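As a sketch, the connection looks something like the following (both placeholders stand in for the admin username you chose during VM creation and the public IP noted in the portal):

```shell
# Replace both placeholders with the admin username and public IP
# shown in the Azure portal
ssh <admin-username>@<YOUR_PUBLIC_IP>
```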

Creating and Securing the Domain Using Let’s Encrypt

You may want to run OpenVPN using a domain. You have two options:

  1. Use the domain provided by Azure.
  2. Add a CNAME record to a domain you already own.

In the Azure portal, open the VM’s Public IP resource and create a DNS name label.

Verify you can access the OpenVPN server using the domain provided by Azure (making sure to use HTTPS).

If desired, you can also create a subdomain (such as vpn.<your-domain>.com). Create a CNAME record that looks like the following:

  • Host: vpn
  • Value: use the Azure domain provided above.

Let the DNS record propagate (this can take a few minutes), then try accessing the server with your own domain.
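To check propagation from your own machine, something like this should work (vpn.example.com stands in for your subdomain):

```shell
# Show what the CNAME currently resolves to; an empty result means
# the record hasn't propagated yet
dig +short CNAME vpn.example.com

# nslookup works as an alternative if dig isn't installed
nslookup vpn.example.com
```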

Now, whether you use Azure’s domain or your own, the next step is to set up a certificate with Let’s Encrypt:

SSH into the server using OpenSSH and the admin credentials created when creating the virtual machine:


Use the Certbot site to generate installation instructions, selecting ‘None of the above’ for the software. The following command helps determine your OS:

lsb_release -a  # determine the OS
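If the image turns out to be Ubuntu- or Debian-based (an assumption; confirm with lsb_release first), the installation is roughly:

```shell
# Install Certbot from the distribution's package repositories
sudo apt-get update
sudo apt-get install -y certbot
```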

Before starting, you may need to configure a DNS resolver so the server can resolve domain names (replace <DNS_SERVER_IP> with the resolver you want the server to use):

echo "nameserver <DNS_SERVER_IP>" | sudo tee /etc/resolv.conf > /dev/null

Run the commands provided by Certbot to obtain your cert (Certbot’s standalone mode spins up a temporary web server). When doing this, you will need to open port 80 so Let’s Encrypt can connect.
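As a rough sketch, opening the port and requesting the certificate might look like this (the resource group, VM name, and domain are all placeholders; run the az command from your local machine and the certbot command on the server):

```shell
# Open port 80 in the VM's network security group via the Azure CLI
az vm open-port --resource-group <RESOURCE_GROUP> --name <VM_NAME> --port 80

# On the server: request a certificate in standalone mode, which spins up
# a temporary web server on port 80 for the Let's Encrypt challenge
sudo certbot certonly --standalone -d vpn.<your-domain>.com
```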

Verify that accessing the site using HTTPS will result in a secure connection.

Running The OpenVPN Installation

Once SSHed into the virtual machine, you’ll be asked to complete the OpenVPN installation. For most settings, you can accept the default options. After the installation is complete, you’ll need to reset the admin password:

sudo su -
passwd openvpn

Once this is done, verify the installation is complete and access the OpenVPN admin section at the following URL: https://<YOUR_PUBLIC_IP>/admin. Log in using the following credentials:

  • Username: openvpn
  • Password: the password generated in the step above.

Once this is done, you’ll want to change the server name to the public IP address generated by Azure. Go to ‘Network Settings’ and change the IP address to reflect your public IP, alongside making the following changes:

Setting up OpenVPN Connect

To verify that VPN is working and that you can connect, access the URL https://<PUBLIC_IP>. Log in using the same credentials above and download the OpenVPN Connect application for your operating system.

Once this is done, you can import the profile from the server and log in using the credentials provided above.

Finally, you can verify everything is working by checking your IP again. If your IP address matches the VPN server, you’ve successfully set up the VPN!

Next Steps

  • Remove SSH access to the server via the Azure network security group, re-whitelisting your IP from Azure only when you need to make configuration changes.


How I Set Up My iPhone

Recently, I had to purchase an iPhone XS after my iPhone 7 broke, which meant setting up my phone again after years of being used to my configuration. I decided to write this down in case I need to do it again in the future.

Initial Setup

The first thing to do when setting up the iPhone is to go through Settings and change a variety of options to my preferences. This includes:

  • Connecting to my home WiFi.
  • Connecting to my Bluetooth devices (my headphones and my car).
  • Scheduling Do Not Disturb from 10pm to 7am.
  • Setting up Downtime from 10pm to 7am.
  • Turning on Automatic Updates.
  • Turning on the Screen Recording shortcut in Control Center.
  • Scheduling Night Shift from 10pm to 7am.
  • Setting Wallpaper (both lock and home screens).
  • Setting MMS message history to 30 days.
  • Recording video at 4K, 60 FPS.

Once that’s complete, most of the basic configuration for the phone is set. The next step is adding and configuring apps – let’s move on.

App Installation and Configuration

My preference is to keep things as simple as possible, so I only keep three rows of apps available at any point on the home screen. I start by deleting all of the apps possible on the phone I’m not planning to use. Generally, that’s everything except the Notes app.

After that, I install my primary applications:

  • Google Maps (log in)
  • Hulu (log in)
  • Kindle (log in)
  • Slack (connect to workspaces)
  • WordPress (connect to all relevant websites)
  • Yelp
  • YNAB (log in)
  • YouTube (should auto log in from Google Maps)
  • Outlook (add email addresses)
  • Messenger (log in)
  • RTM (use badge count for tasks due today)

And then a few of the apps I leave in the ‘Utilities’ folder:

  • Dark Sky (turn on precipitation warnings, severe weather alerts)
  • Discord (log in)
  • Dropbox (log in, set up camera uploads)
  • Facebook (log in)
  • Fitbit (log in, connect Fitbit)
  • Fly Delta (log in)
  • Genisys (log in)
  • Instagram (log in)
  • Life Cycle (activate)
  • Pluralsight (log in)
  • Google Sheets (log in)
  • WhatsApp (log in)

Most of these apps require configuration, so I go through each one and make sure everything is in place (logged in, etc.).

Final Steps

Once I have most everything set up, there are a few more things I do to finish the process:

  • Set up widgets to display calendar and weather.
  • Set a 15 minute time limit for Facebook and Instagram.

API Vs. Microservices: A Microservice Is More Than Just an API

When writing software, consider both the implementation and the architecture of the code. The software you write is most effective when written in a way that logically makes sense. In addition to being architecturally sound, software design should also account for the interaction the user will have with it and the interface the user will experience.

Both the concept of an API and the concept of a microservice involve the structure and interactions of software. A microservice can be misconstrued as simply an endpoint that provides an API. But microservices have much more flexibility and many more capabilities than that. This article will discuss the differences between APIs and microservices and detail some of the benefits a microservice can provide.

To get started, let’s define our terms.

What Is an API?

First, let’s define what an API is. According to Wikipedia, an API (application programming interface) is:

a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication between various components.

An easy way to think about an API is as a contract of actions you can request from a particular service. APIs are in use today in a multitude of web applications, such as social media, banking software, and much more. The standardized contract allows external applications to interface with one another.

For instance, let’s say you’re building an application that’s going to integrate with Facebook. You would be able to use the Facebook Graph API to access data inside Facebook, such as users, posts, comments, and more. The API hides the complexity of working with the data inside Facebook and provides an easy-to-use way for the developer to access that data.

Common API Actions

In today’s world, APIs are usually developed in a RESTful style. These APIs have a series of verbs corresponding to HTTP actions, like the following:

  • GET (get a single item or a collection)
  • POST (add an item to a collection)
  • PUT (edit an item that already exists in a collection)
  • DELETE (delete an item in a collection)

The advantage of this consistency across different applications is having a standard for performing various actions. The four HTTP verbs above correlate with the common CRUD capabilities that many applications use today. When working with different APIs in one application, this makes for a recognizable way to understand the implications of the actions taken across different interfaces.

If you’re interested in working with an interactive example, take a look at Reqres. Reqres provides mock data for interfacing with a RESTful API and the actions you can take when interacting with an API.
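For example, the four verbs above map to curl calls along these lines (the endpoints follow the Reqres documentation at the time of writing; check the site for the current paths):

```shell
# GET a collection of users
curl https://reqres.in/api/users

# POST a new item to the collection
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "morpheus", "job": "leader"}' \
     https://reqres.in/api/users

# PUT an edit to an existing item
curl -X PUT -H "Content-Type: application/json" \
     -d '{"name": "morpheus", "job": "zion resident"}' \
     https://reqres.in/api/users/2

# DELETE an item from the collection
curl -X DELETE https://reqres.in/api/users/2
```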

Okay, now that we have that covered, let’s take a look at microservices.

What Is a Microservice?

Wikipedia defines a microservice as:

software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.

But before we dig deeper into what microservices are and how they can be useful, let’s take a quick look into the monolith. Understanding how microservices differ from monoliths will give you a better sense of the benefits of moving to a microservices architecture.

The Precursor to Microservices: Monoliths

In the early days of software development (and continuing in many large enterprise environments today), there was the concept of a monolith: a single application that holds a full collection of functionality, serving as one place to store everything. Architecturally, it looks like this:

All of the components of the application reside in one area, including the UI layer, the business logic layer, and the data access layer. Building applications in a monolith is an easy and natural process, and most projects start this way. But adding functionality to the codebase causes an increase in both the size and complexity of the monolith, and allowing a monolith to grow large comes with disadvantages over time. Some of these include:

  • Risk of falling into the big ball of mud anti-pattern: an architecture without rhyme or reason that is difficult to understand from a high level.
  • Restriction of the technology stack inside the monolith. Especially as the application grows, the ability to move to a different technology stack becomes more and more difficult, even when the technology proves to no longer be the best choice.
  • Any change to the codebase, no matter how small, affects the entire application. For example, if just one of the business logic sections receives constant changes, this forces redeployment of the entire application, wasting time and increasing risk.

So what’s the alternative to building a monolith? Taking the monolith and breaking it up into microservices.

Enter the Microservice

Let’s take the monolith example from above and convert it to use microservices. In that case, the application architecture would change to look like this:

There are a few key takeaways from this re-architecture:

  • The broken-out sections of the business logic, each encompassing a microservice. Instead of having a single boundary for the entire application, the application is broken into pieces. The complexity of the application is reduced, as the different services have well-defined interactions with each other. For example, this makes it possible to align teams to individual services, encapsulating responsibility in an abstracted piece.
  • The UI layer from before only needs to interface with the customer and event microservices, removing any dependency between the UI and the billing microservice.
  • The billing microservice does not need to store data, so it has no data access layer or database. Instead, it processes data pulled directly from both the customer and event microservices.

With this kind of architecture comes a whole host of advantages:

  • It’s easier to separate concerns. These boundaries between areas help with development (you only need to concern yourself with your microservice, not the entire application) and with understanding the architecture of the application.
  • Unlike with a monolith, a microservice can use a different tech stack as needed. Considering rewriting everything in a new language? Just change one microservice to use the new tech stack, assess the benefits gained, and determine whether to proceed.
  • Deployments of the application as a whole become more focused. Microservices give you the flexibility to deploy different services as needed.

In the example above, notice the API sitting alongside the other portions of the microservice? We’ll get into that. It’s finally time to talk about the differences between APIs and microservices.

The Difference Between APIs and Microservices

Here are the main differences between APIs and microservices:

  • An API is a contract that provides guidance for a consumer to use the underlying service.
  • A microservice is an architectural design that separates portions of a (usually monolithic) application into small, self-contained services.

By definition, this means an API is usually a portion of a microservice, allowing for interaction with the microservice itself. Another way to think about this is that the API serves as a contract for interactions within the microservice, presenting the options available for interacting with the microservice.

However, if we look at the microservices diagram above, we can see that each microservice is built slightly differently based on its needs. Here are a few examples of different functions a microservice can have:

  • Providing CRUD operations for a particular entity type, such as a customer, event, etc. This service would be able to persist data in a database.
  • Providing a means to accept parameters and return results based on (potentially intense) computations. The billing microservice above may take information on an event or customer and return the billing information required, without needing to store data.

With the above example, you can probably see that a microservice is capable of being more than just an API for a system. An entire application can encompass a series of microservices that use their own APIs for communication with each other. In addition, each of these microservices can abstract its own functionality, drawing logical boundaries for responsibility in the application and separating concerns to make for a more maintainable codebase.


Hopefully now you have a better understanding of what both APIs and microservices are. Code maintainability and quality are both key parts of a successful IT strategy. Microservices help you stay true to them. They keep your teams agile and help you meet customer demands by producing high-quality, maintainable code.

Are you working in a monolith codebase? Think about taking a portion of that monolith and moving it into a microservice of its own. Once you do that, you should be able to see the benefits of using microservices. In fact, you might even decide to convert the entire thing.


The Benefits of Test-driven Development

This post was originally featured at

It sounds like an oxymoron: Create test cases before coding. In the process of writing and using test cases before writing functionality, a developer can produce higher-quality software. That is the concept behind test-driven development (TDD), and it works.

TDD Snapshot

TDD has been around since at least 1999, and it is part of the emerging test-first approach to development associated with extreme programming. In 2003, American software developer Kent Beck “rediscovered” TDD and applied it as a way to create simple designs and inspire developer confidence. Fast-forward to today’s agile development world, and TDD is a software development process with a very short feedback loop in which developers do the following:

  1. Create a test that will fail immediately.
  2. Write the necessary code to pass the test as quickly as possible.
  3. Refactor the code written in step two, using the created test as a reference.

TDD Benefits

So why might a long-in-the-tooth software development approach have tangible benefits in an agile development world? Because put in its simplest terms, TDD is a way to develop highly usable software.

Following the TDD process, a developer must focus on the test cases before actually coding anything. That means a developer is thinking more about the use of the software and the design of user interfaces to achieve that. As a result, the developer is more interested in the interface than in the implementation—and that leads to more usable software.

There are several other tangible benefits for taking a TDD approach, including:

  • Creating a quick and easy metric for code quality.
  • Allowing for a quick visualization to determine whether the code base has any functional issues.
  • Providing living documentation of functionality as new code is written.
  • Allowing for safe refactors of code, whether driven by attempts to improve code quality or by a changed requirement.

That final point is worth dwelling on for a moment. The TDD approach requires that the evolving code base is cleaned up frequently so that new tests and code are easy to introduce. That usually means that code moves from its current place to where it more logically belongs in the software. That has the effect of reducing any extraneous duplication of code and enforces a strong discipline around objects, classes, modules, etc. In this way, the overall maintainability of the software increases incrementally.

The improvement in readability and maintainability will pay huge dividends over the expected life cycle of the software. Following a TDD methodology requires developers to focus on writing smaller units of testable code, which leads to more modularized, flexible, and extensible software.

Where Can TDD Fit?

A TDD approach is appropriate for both new greenfield software and legacy systems. For a development team dealing with existing legacy software, the key is to start small and begin with bug fixes. A good practice is, for each bug reported, to create a test that reproduces the bug and then fix the functionality. After several iterations of this, the development team has built a repeatable, working process for addressing bug fixes. When applying this methodology to new software applications, make a point of understanding the testing tools used for the technology stack.

For example, Angular applications generally use the Jasmine framework for unit testing, and when code is scaffolded with the Angular CLI, unit tests are created alongside each module of code. Using a TDD methodology, the approach would be to:

  1. Determine a portion of the functionality to be created with this component.
  2. Create a unit test that will fail immediately targeting this portion of functionality.
  3. Run the test runner to confirm the failing test (it may be useful to leave the test runner watching for changes so tests re-run after each source file save, which speeds up the process).
  4. Write code in the Angular component that will make the written test pass.
  5. Make any refactoring changes for the Angular component after confirming passing, using the test as a guideline to ensure code refactoring does not break functionality.
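With the Angular CLI, the loop above might look like the following (the component name is hypothetical):

```shell
# Scaffold a component; the CLI generates a matching .spec.ts file
# where the failing test from step two is written first
ng generate component billing-summary

# Start the Karma/Jasmine test runner; by default it watches for
# changes and re-runs the tests after every save
ng test
```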

Measuring Testability with Code Coverage

Another essential consideration when improving testability in code is to use a code coverage tool. Code coverage is a metric showing the percentage of code that has a unit test written for it. Angular applications use Istanbul to calculate code coverage throughout the application. Running code coverage a single time on an existing project produces a report summarizing coverage across the codebase.
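Assuming a default Angular CLI setup, one way to produce that report is:

```shell
# Run the test suite once with Istanbul instrumentation enabled;
# a summary prints to the console and a full HTML report is written
# to the coverage/ directory
ng test --watch=false --code-coverage
```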

The output provided by Istanbul gives a metric on the overall test coverage and the areas of code that need improvement in testing. Code coverage can be useful in a few ways:

  • Provides a view of overall test coverage, allowing you to set a threshold so that coverage does not drop below a certain point.
  • Identifies areas of the code base with poor testing, marking them as opportunities for refactoring.

However, as appealing as code coverage can sound, it’s important to understand that it’s just a metric. Writing good unit tests is a matter of verifying what the code should do, and metrics such as these should not drive significant decisions on their own.

Considerations When Using TDD

It’s important to note that TDD does not solve all problems. Many different types of testing are required to create a fully comprehensive testing strategy, including acceptance testing. In TDD, the focus is on a single unit of code at a time. A sophisticated software application may have many thousands of units of code and corresponding tests. That’s why it’s critical to make sure that testing quality stays high when following the TDD methodology. Tests cannot become something bypassed in the pursuit of more functionality or expediency; neglecting them creates the risk that test creation comes to be seen as a hindrance by developers.

For example, ignoring failing tests makes it difficult to determine the actual state of the application. It’s also important to have buy-in for a TDD approach from all the teams involved in the effort. This is particularly true on the business side. Time must be spent up front discussing the nature and benefits of a TDD approach, and building the belief that using TDD will improve the resulting software. Otherwise, business management will see writing tests as an activity that doesn’t contribute to the bottom line.


TDD emphasizes the importance of an effective and sustainable testing approach, and it contributes directly to the overall quality of software. That’s a truth of small and large system development alike, one that often goes missing in the day-to-day hustle to get new functionality into production. Quality software gets built when there’s an acknowledgment that quality test code should receive the same attention and resources as quality production code, as they are equally essential in development.


9 Top DevOps Conferences — A Developer's Picks

This post was originally featured at Sonatype.

With the DevOps movement continuing to grow, plenty of conferences have sprung up around the topic. These are a great way to get acquainted with the DevOps methodology and the capabilities it can bring to your organization.

Conferences also let you do the following:

  1. Ramp up your knowledge on DevOps technologies, whether you’re new to the concept or have plenty of experience.
  2. Network with other like-minded people.
  3. Take a trip somewhere fun (though that isn’t required to accomplish the first two).

So DevOps conferences are great. But which one(s) should you attend? A person only has so much free time and money.

In this post, I want to share a developer’s perspective on what I think are the best DevOps conferences currently offered. I’ll let you know why I think they’re worth attending, and I’ll cover the target audience, upcoming dates, and costs. Hopefully, this will help you choose the conference that will benefit you the most.

1. AllDayDevOps

First off, we have AllDayDevOps, a DevOps and DevSecOps conference that houses over 100 sessions spread across five tracks and 24 hours, including:

  • CI/CD
  • Cloud-native infrastructure
  • DevSecOps
  • Cultural transformations
  • Site reliability engineering

AllDayDevOps is unique from the other conferences on this list in that it has a live Q&A session on Slack, it’s completely online, and it’s free!

You read that correctly: you’ll get around the same session count as the other large conferences on this list (if not more), with the convenience of not needing to travel or incur any expenses. If you’re curious as to why AllDayDevOps works this way, take a look at the FAQ provided.

AllDayDevOps takes place October 17, 2018. After the conference, all sessions will be sent to registered attendees—another big advantage of a fully remote conference.

2. DevOpsDays

Next up, there’s DevOpsDays, one of my favorites on this list. It’s a series of worldwide conferences run by local organizers. It’s similar to WordCamp for WordPress, which is a collection of conferences organized by local meetup folks interested in the technology.

DevOpsDays focuses on general DevOps topics and includes curated talks. It also features a unique spin on the concept of open space, with discussions on software development and IT infrastructure.

DevOpsDays tends to draw smaller crowds than some of the bigger conferences. That means networking can be easier for people intimidated by large crowds.

Many, many DevOpsDays conferences take place each year. (As of Labor Day, there are still 40 left in 2018!) This means you’ll likely find one you can attend. Here are some of the upcoming locations:

  • Cairo, Egypt
  • Chicago, IL, US
  • Columbus, OH, US
  • Detroit, MI, US
  • Paris, France
  • Portland, OR, US
  • Salvador, Brazil
  • And many others!

These events tend to cost less than other, more mainstream conferences. That’s because of their local nature. For example, I’m based in Detroit. So I could attend DevOpsDays there for only about $150. That would get me a two-day, single-track conference pass to around 10 curated talks and five open-space sessions.

3. DockerCon

DockerCon is an American and European conference focused on the Docker technology. It’s a very good DevOps conference to attend, as Docker is considered the most popular container technology available. The event is suitable for many types of professionals, including software developers and IT staff members. Even non-technical folks such as business analysts and executives would enjoy it.

As DockerCon is a large-scale conference for a major technology, you can expect the following:

  • 6,000+ attendees
  • 5+ tracks and 100 sessions
  • Insight into the latest happenings with the Docker technology, much like you’d get from Microsoft’s Build or Apple’s WWDC
  • The Docker Hallway Track, a system to help find others, network, and communicate in a one-on-one fashion
  • Opportunity to become a Docker Certified Associate with an on-venue test

You can attend DockerCon at these upcoming dates and locations:

  • Barcelona, Spain: December 3–5, 2018
  • San Francisco, CA: April 29–May 2, 2019

Pricing for DockerCon runs from around $990 for early-bird reservations to $1,395 for standard tickets. Lodging ranges from about $290–$360 a night.

4. ChefConf

Chef is a software company that provides multiple DevOps products and solutions to help businesses improve their software delivery processes. ChefConf, as its name suggests, is a conference run by Chef. It provides hands-on workshops. There are also sessions explaining the Chef software itself and how it works in real-world scenarios.

For companies already using Chef products for software delivery solutions, this conference can help them understand the software’s capabilities. For a company not using Chef, this conference offers a demonstration of the technology’s real-life capabilities and what it can do for an organization. In addition, ChefConf offers a chance to network with 1,500 attendees.

The next conference is scheduled for May 20–24, 2019, in Seattle, WA. With early-bird pricing, it generally costs less than $1,000.

To get a feel for what it would be like to attend, you can watch footage from past conferences right on the ChefConf website.

5. KubeCon/CloudNativeCon

KubeCon (and CloudNativeCon) is a Linux Foundation event focused on the Kubernetes technology. Kubernetes is currently the leading container orchestration system used in conjunction with Docker. Therefore, this conference is appropriate for those working with Kubernetes, as well as application developers, IT professionals, and anyone else interested in learning about Cloud Native.

Similar to DockerCon, KubeCon provides a method for gaining credentials in the Kubernetes technology space. The conference serves as a venue for attending the Kubernetes Fundamentals course. You can also earn the Certified Kubernetes Administrator certification.

KubeCon will take place in Seattle from December 10–13, 2018. Prices vary, depending on the type of ticket:

  • Corporate (your company pays for your attendance): $1,100–$1,200
  • Individual (you pay for yourself): $550–$650
  • Academic (you’re a student currently enrolled in college): $150

This conference is a decent deal if you’re paying out of pocket, especially if you’re a student.

A quick note: KubeCon has an interesting registration process. If you want to attend as an individual (at your own expense), you can email the event organizers with the intent to register in that category. You’ll receive a registration code, which will let you purchase a ticket for the conference.

6. PowerShell + DevOps Global Summit

The PowerShell + DevOps Global Summit is a DevOps conference focused on the Windows tool PowerShell and how to integrate it with the DevOps methodology. PowerShell is a popular tool in the automation world. So, attending this conference makes sense if your organization uses PowerShell in its DevOps solutions.

Unlike other conferences described in this post, this event provides an OnRamp track. OnRamp coaches entry-level IT professionals to understand DevOps and PowerShell. But the track accepts only 40 attendees, based on applications completed during registration. So this summit might be one of the only selective DevOps conferences around.

The PowerShell + DevOps Global Summit conference usually takes place in Bellevue, WA, and costs $1,650 per person. Check out the brochure for the 2019 event for more information.

7. DevOps World | Jenkins World

DevOps World | Jenkins World is a Jenkins-supported DevOps conference. Predictably, it focuses on the automation software Jenkins and its capabilities for IT teams. This conference welcomes about 2,500 attendees, making it the largest Jenkins-focused DevOps conference around. In addition, it provides 100+ sessions for attendees to enjoy. If your team uses Jenkins in your software delivery pipeline, this conference is a must. While the US event just took place two weeks ago, you can still attend its counterpart in Nice, France from October 22-25, 2018 (schedule).

Prices range from $499 for early-bird tickets purchased four months in advance, up to $1,199 for the standard rate. So it’s a good move to register early.

Potential attendees should also check out this post, which provides more information and a discount code for 30 percent off.

8. DevOps Enterprise Summit

DevOps Enterprise Summit (or DOES) is a three-day DevOps conference focused on working in the enterprise space. Whereas most of the conferences above stick to a mix of high-level concepts, this conference features speakers with real-world experience working in enterprise environments.

In terms of locations, the European DOES (in London) has already passed, but you can still attend the US DOES in Las Vegas on October 22-24, 2018. A full-price ticket runs around $1,975.

In past years, the conference has sold out, so if it sounds interesting, you may want to act quickly!

9. CodeMash

Although not directly a DevOps-focused conference, CodeMash is a two-day event (or four-day, if you attend workshop sessions). It focuses on multiple facets of technology. This includes software development, DevOps, technical leadership, and much more. The event takes place at the beginning of each year in Sandusky, OH, at the Kalahari Resort.

CodeMash offers a great ROI for individuals who want to learn all things DevOps while also choosing from a variety of other workshops and sessions. In addition, CodeMash is perfect for those attending DevOps conferences with their family, as there are dedicated events for children called KidzMash.

The next CodeMash will take place from January 8–11, 2019. The first two days include workshop sessions, also called pre-compiler sessions. The next two days are the main conference and involve plenty of curated talks.

In terms of cost, tickets go as follows:

  • two-day conference (all talks): $350
  • four-day conference (pre-compiler sessions and all talks): $650

CodeMash also provides a deeply discounted rate for lodging at the Kalahari. That’s another reason this event is a great choice for those traveling with family. In addition, kids can play in the water park and enjoy other activities at the venue.

For more information, take a look at last year’s schedule and sessions or watch recorded sessions from the CodeMash YouTube channel.

Get Your DevOps Conference On

These conferences all have one thing in common—they are pertinent to current DevOps practices, providing value for practitioners at every level, from complete beginners to veterans. Hopefully, this list helps you decide which DevOps conferences are right for you to attend, whatever your experience level.

What’s your experience with the DevOps conferences above? Have something to say on one of them? Did I miss an important conference? Let me know in the comments!


TDD vs. BDD: What Are They and How Do They Differ?

Improving the testability of software is extremely important, and a difficult goal in its own right. To achieve it, engineers and others interested in software testability have created many strategies, tools, and methodologies for making software solutions more testable.

With so many methodologies and techniques available, which one should you choose? Two of the most prominent methodologies I’ve encountered are test-driven development (TDD) and behavior-driven development (BDD). In this post, I’ll describe what TDD and BDD are, discuss their real-world applications, and explain how they differ. In addition, I’ll talk about considerations between TDD vs. BDD.

Here are some facts to consider when thinking about TDD vs. BDD.


What Is TDD?

Test-driven development (TDD) is one of the most popular methodologies today for adding testability to a software project. Test-driven development became popular around 2003 via Kent Beck, alongside the test-first extreme programming principle. TDD aims to provide 100 percent test coverage by working with a very short feedback cycle when writing code. The short feedback cycle drives implementation, verifying functionality with automated tests. In other words, you write tests beforehand and then write functionality so that the software passes those tests.

The TDD methodology and its short feedback loop are defined in Kent Beck’s book Test-Driven Development: By Example with the following steps:

1. Write a failing test (red).

2. Make the test work ASAP (green).

3. Refactor the code.

Step 1: Write a Failing Test

The first step with TDD is to create a “red” test that fails. This test doesn’t need to pass—or even compile—at first. The primary purpose of this step is to define the functionality to test.

Step 2: Make the Test Pass ASAP

Once you’ve established a failing test, the next step is to make it pass as quickly as possible. There are no rules about how elegant the implementation is at this point; anything goes to turn the test “green.”

Step 3: Refactor the Code

Once you have a passing test, you have the means to test the functionality created in step two. The next step is to refactor the code to a good standard, such as by removing any duplication created during the previous steps. While refactoring, you’ll have the ability to use the test you just created to ensure that the code is always working as intended.
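To make the loop concrete, here’s a minimal sketch in Python of one red-green-refactor pass (the `slugify` function is a made-up example, not from any particular codebase):

```python
# Step 1 (red): write the test first. It fails -- slugify() doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  TDD vs BDD ") == "tdd-vs-bdd"

# Step 2 (green): the quickest implementation that makes the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3 (refactor): with a green test in hand, the implementation can be
# cleaned up freely -- rerunning test_slugify() after each change catches
# any regression immediately.
test_slugify()
```

In a real project each step would be a separate test-runner invocation; the point is that the test exists before the code it exercises.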

TDD: Benefits and Drawbacks

Now that you know the steps involved in test-driven development, let’s look at some of the benefits and drawbacks. Some of the benefits of practicing TDD include:

  • Improved code quality, by virtue of writing code with testability in mind—this generally results in well-defined, decoupled code that is easy to maintain over time.
  • A framework of automated tests that provides a metric of code quality at any point. This is great for understanding regression effects when changing functionality in the codebase.
  • Test creation alongside functionality creation. Instead of tests being an afterthought, they become as critical as writing functionality.

Of course, there are some disadvantages as well:

  • When considering the business case for writing with testability in mind, it can be difficult to explain how testing can contribute to the bottom line. This is especially difficult for teams working under unrealistic deadlines.
  • To follow TDD correctly, you need both the knowledge to actually follow the TDD methodology correctly and the discipline to commit to following the rules of the methodology. Especially as timelines tighten and deadlines loom closer, it becomes easy to push off testing as something to “address later.” 

Now that we’ve covered the basics of TDD, let’s talk about the next testing methodology in our discussion of TDD vs. BDD, behavior-driven development.

What Is BDD?

Behavior-driven development (BDD) combines the TDD methodology with core principles from object-oriented programming and domain-driven design. BDD allows both technical and non-technical team members to be involved in software testing efforts: writing tests in a way that explicitly defines the expected behavior of the software lets everyone participate in defining the tests.

Behavior-driven development focuses on creating user stories explaining the desired functionality. This is opposed to test-driven development, which focuses on having a feedback loop driven by written tests. In 2007, Dan North wrote a guideline for the different steps of defining a user story—let’s take a look.

Step 1: Given 

First, a behavior-driven test will establish the initial context for the test. This step is called “given,” which describes a situation in which functionality will be determined. This step is important in establishing the state of the application before testing for functionality. Since a behavior-driven test should still look to isolate functionality, it’s important to be specific with the context of the user story when determining functionality in the application.

Step 2: When

The next step in writing a BDD test is to establish the “when” criteria. The “when” step describes the action occurring within the “given” context. Generally, this step is a verb describing something happening in the system, such as clicking a page or registering a user.

Step 3: Then 

Finally, the last step is to define a “then” statement for the behavior-driven test. This statement describes the final result that occurs after performing an action with the “when” statement.

BDD: A Quick Example

Let’s look at an example. If I’m building an application that collects registrations for an event, I’d think about the different functionalities this application can have. A core function to test would be the capability to add an attendee to an already existing event. My “given” statement would look something like: given that an event already exists and is selected.

Next, I would create a “when” statement to provide an action to perform for this test. Since my goal is to add an attendee to an event, my “when” statement would be: when a user registers for the given event.

Lastly, I would finish the test by including a “then” statement. Once the user registers for an event, I need to confirm the user was actually registered. My statement could look like: then confirm the user is registered for the event.
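That given/when/then story translates almost line for line into a test. A minimal Python sketch, where the `Event` class and its methods are hypothetical stand-ins for the real application code:

```python
class Event:
    """Hypothetical event that tracks its registered attendees."""
    def __init__(self, name):
        self.name = name
        self.attendees = set()

    def register(self, user):
        self.attendees.add(user)

def test_user_can_register_for_existing_event():
    # Given: an event already exists and is selected.
    event = Event("Annual Meetup")

    # When: a user registers for the given event.
    event.register("alice@example.com")

    # Then: confirm the user is registered for the event.
    assert "alice@example.com" in event.attendees

test_user_can_register_for_existing_event()
```

BDD frameworks such as Cucumber or SpecFlow let non-technical members write the given/when/then lines in plain English, with glue code underneath that looks much like this.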

BDD: Some of the Benefits

Let’s go over a few benefits of using behavior-driven development:

  • Good at establishing strong business requirements for the application. Behavior-driven tests provide a framework of tests focusing on the business functionality of the application since they are created with desired behavior in mind.
  • High collaboration between domain expert and developers. Building tests in this way helps prevent miscommunication between requirements and implementation.
  • Provides repeatable high-level tests for regression throughout the application. If you’ve ever run into a case where business users forget the functionality they’re asking for, this benefit will work really well for you.

Now that we have a better understanding of behavior-driven testing, let’s look at the differences between TDD vs. BDD.

TDD vs. BDD: Understanding the Difference

When looking at the difference between TDD vs. BDD, it’s important to understand that you don’t have to choose just one or the other—you can use TDD and BDD together to cover your organization’s testing needs. By using aspects of both TDD and BDD, you’ll be able to have low-level tests for the details of the codebase, and higher-level tests that touch on the direct functionality desired by the customer. Additionally, the different sectors of your organization can have different involvement in the testing process. A different workflow in test creation can involve different team members, depending on their technical capability.

First, consider an environment where the development team is driving internal functionality. Test-driven development centers around tests defined by the development team. TDD’s short feedback cycle works well within a development team since the tests created with this methodology tend to be small and quick.

Second, an environment with more collaboration between the domain experts and the development team will benefit from a stronger behavior-driven development process. BDD allows for tests written in simple English that make sense to a non-technical user. In addition, behavior-driven tests make great acceptance tests. These tests define the desired functionality without considering implementation details, which allows non-technical users to take part in creating them.

Using TDD and BDD in Harmony

This post should have given you a good understanding of what test-driven development and behavior-driven development are. When considering whether to implement TDD vs. BDD in your workflow, both can create an effective test creation strategy. If you aren’t using one (or both!) of these methodologies, give them each a try and see if they help you with creating better tests for your software.


Server Performance Metrics: 8 You Should Be Considering

This post was originally featured at Raygun.

With the DevOps movement entering the spotlight, more and more developers concern themselves with the end-to-end delivery of web applications. This includes the deployment, performance, and maintenance of the application.

As an application gains more users in a production environment, it’s increasingly critical that you understand the role of the server. To determine the health of your applications, you may find it useful to gather performance metrics for the servers running your web applications.

All different types of web servers (like Apache, IIS, Azure, AWS, and NGINX, for example) have similar server performance metrics. Most of my experience in this realm lies in Microsoft Azure, which provides an easy-to-use interface for finding and collecting data. Working with Microsoft Azure gives you the capability to host applications in either Azure App Services (PaaS) or Azure Virtual Machines (IaaS). Either setup gives you a view of the different metrics for the application or server running.

Because of all this experience over the last few months, I’ve found what I think are the eight most useful server performance metrics. These metrics can be divided into two categories: app performance metrics and user experience metrics.

Let’s start by looking at the metrics under the app performance umbrella.

App performance metrics

App performance metrics are specific to the speed of the web applications that are running. If you’re having issues with an application performing slowly, these metrics are a good place to start.

Metric 1: Requests per second

Requests per second (also called throughput) is just like it sounds—it’s the number of requests your server receives every second. This is a fundamental metric that measures the main purpose of a web server, which is receiving and processing requests. Large-scale applications can reach up to about 2,000 requests per second.

Given enough load, any server can fall over. When considering the impact, remember that a request is just that: a single request to the server. This metric doesn’t consider what’s happening in each of those requests.

This brings us to our next metric.

Metric 2: Data in and data out

The next metric I suggest you look at is your data in and data out. Data in is the size of the request payloads going to the web server. For this metric, lower is better (meaning small payloads being sent to the server). A high data-in measurement can indicate that the application is requesting more information than it needs.

Data out is the response payload being sent to clients. As websites have grown larger over time, bloated response payloads have become a problem, especially for users with slower network connections. Bloated payloads lead to slow websites, and slow websites dissatisfy your users. With enough slowness, those users abandon the website and move on. Google suggests that mobile pages taking three or more seconds to load have about a 53% chance of being abandoned before they finish loading.

Metric 3: Average response time

Defined directly, the average response time (ART) is the average time the server takes to respond to all requests given to it. This metric is a strong indicator of the overall performance of the application, giving an impression of the application usability. In general, the lower this number is, the better. But there are studies showing that the ceiling limit for a user navigating through an application is around one second.

When considering ART, remember what the acronym stands for—it’s just an average. Like any metric based on an average, high outliers can throw the number off completely and make the system seem slower than it is. ART is most helpful when used alongside our next metric on the list.

Metric 4: Peak response time

Similar to the average response time, the peak response time (PRT) is the measurement of the longest responses for all requests coming through the server. This is a good indicator of performance pain points in the application.

PRT will not only give you an idea of which portions of your applications are causing hangups; it will also help you find the root cause of these hangups. For example, if there’s a certain slow web page or a particularly slow call, this metric can give you an idea of where to look.
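The interplay between the two metrics is easy to see with numbers. A quick sketch with made-up response times (in milliseconds):

```python
# Nine healthy requests and one pathological outlier, in milliseconds.
response_times = [100, 110, 95, 105, 98, 102, 97, 103, 99, 4000]

art = sum(response_times) / len(response_times)  # average response time
prt = max(response_times)                        # peak response time

print(f"ART: {art:.1f} ms")  # the single outlier drags the average to ~491 ms
print(f"PRT: {prt} ms")      # the peak points straight at the slow request
```

The average alone suggests every request is slow; the peak shows that one request is the real problem.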

Metric 5: Hardware utilization

Next, let’s talk about overall hardware utilization. Any application or server running is limited by the resources allocated to it. Therefore, keeping track of the utilization of resources is key, primarily to determine if a resource bottleneck exists. You have three major aspects of a server to consider:

  • the processor
  • the RAM (memory)
  • the disk space and usage

When considering these, you’re looking for what can become a bottleneck for the whole system. As any physical (or virtual!) computer running with these components will show, performance is only as strong as its weakest link. This metric can tell you what the bottleneck is and what physical component can be updated to improve performance.

For example, you may run into issues when trying to render data from a physical hard drive. That will cause a bottleneck in the I/O interactions between gathering files and presenting them to the user. While the hard drive spins and gathers data, the other physical components do nothing. An upgrade to a solid-state drive would improve the performance of the entire application because the bottleneck will disappear.
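Parts of this are visible straight from the Python standard library. A sketch that reports core count and disk usage (CPU and memory utilization percentages need a platform-specific tool or a library such as psutil, so they’re omitted here):

```python
import os
import shutil

# Logical CPU cores available on this machine.
print(f"CPU cores: {os.cpu_count()}")

# Disk usage for the root filesystem: total, used, and free bytes.
usage = shutil.disk_usage("/")
pct_used = usage.used / usage.total * 100
print(f"Disk: {pct_used:.1f}% used, {usage.free // 2**30} GiB free")
```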

Metric 6: Thread count

The next metric—the thread count of a server—tells you how many concurrent requests are happening in the server at a particular time. This metric will help you understand what the general load of a server looks like from a request level. It will also give you an idea of the load placed on the server when running multiple threads.

A server can generally be configured with a maximum thread count allowed. By doing this, you’re setting a max limit of requests that can happen at one time. If the thread count passes this maximum value, all remaining requests will be deferred until there’s space available in the queue to process them. If these deferred requests take too long, they’ll generally time out.

It’s worth noting that increasing the max thread count generally relies on having the appropriate resources available for use.
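The deferral behavior can be sketched with a bounded worker pool (the thread cap and request count are made up):

```python
from concurrent.futures import ThreadPoolExecutor
import time

MAX_THREADS = 4  # the server's configured maximum thread count

def handle_request(request_id):
    time.sleep(0.05)  # simulate the work a request performs
    return f"handled {request_id}"

# Only MAX_THREADS requests run concurrently; the rest wait in the
# executor's internal queue until a worker thread frees up.
with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    results = list(pool.map(handle_request, range(10)))

print(f"{len(results)} requests handled")
```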

User experience metrics

Now that we’ve covered the app performance metrics, let’s discuss a few that are user experience centered. These server performance metrics can measure your users’ overall satisfaction when using your web applications.

Metric 7: Uptime

Although not directly related to its performance, the uptime of the server is a critical metric. Uptime is the percentage of time the server is available for use. Ideally, you’re aiming for 100% uptime, and you’ll see many cases of 99.9% uptime (or more) when looking at web hosting packages. It’s not uncommon for software projects to abide by a service level agreement that dictates a particular server uptime rate.

If uptime checking isn’t something your server provides built in, there are plenty of third-party services that can do it for you. These services can even give you a visual depiction of their report:

And here’s an interesting fact: calculating the allowed monthly downtime shows

  • 99%: ~7 hours
  • 99.9%: ~43 minutes
  • 99.999%: ~26 seconds
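Those figures fall straight out of the uptime percentage. A quick check, assuming a 30-day month:

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

for uptime_pct in (99.0, 99.9, 99.999):
    allowed = MINUTES_PER_MONTH * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows {allowed:.1f} minutes of downtime/month")
# 432.0, 43.2, and 0.4 minutes respectively
```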

Metric 8: HTTP server error rate

The HTTP server error rate is a performance metric that doesn’t directly relate to application performance, but it’s a very critical one. It returns the count of internal server errors (HTTP 5xx codes) being returned to clients. These errors come from malfunctioning applications when an exception or other error isn’t handled correctly.

A good practice is to set up an alert whenever these kinds of errors occur. Because 500 errors are almost completely preventable, being notified of every HTTP server error lets you stay on top of them and keep the application robust. It also prevents errors from quietly building up in the application over time.
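The alerting logic itself is simple. A sketch that scans a window of recent status codes (the sample codes and the 1% threshold are made up):

```python
# Status codes returned by the server in the most recent window (sample data).
recent_statuses = [200, 200, 301, 500, 200, 404, 503, 200, 200, 500]

server_errors = [code for code in recent_statuses if 500 <= code <= 599]
error_rate = len(server_errors) / len(recent_statuses)

print(f"HTTP 5xx error rate: {error_rate:.0%}")
if error_rate > 0.01:  # alert when more than 1% of responses are 5xx
    print(f"ALERT: {len(server_errors)} server errors in the last window")
```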

How to measure server performance

Measuring server performance with an Application Performance Monitoring (APM) tool such as Raygun APM is the easiest and most accurate way of measuring the health of your software. A good APM gives your team greater context and diagnostic tools for your biggest application performance questions, letting you discover and pinpoint the root cause of performance issues with greater speed and accuracy than traditional solutions.

Keep your finger on the pulse

These are the server performance metrics I’ve personally found to be the most valuable. If you collect and monitor this kind of data on both your users’ experience and your app performance, very little will fall through the cracks.

Did I mention any metrics that you’re not currently using? Consider trying them out. After all, metrics are your best way to keep your eye on your server performance—and, by extension, your application’s health.


Setting Up A Windows 10 Workstation in 2018

2018.10.24 EDIT: Added a few new applications when setting up my Surface Pro 4.
I recently built a new PC to use for work and wanted to document the process of getting it set up. Each year, I reinstall every system I have, both to make sure I have this process down and to audit what I’m actually using so things stay clean.
Here’s the process I use when setting up a Windows 10 workstation:


First, I go to the Microsoft website and create a bootable thumb drive containing Windows 10.
I restart my computer and boot to the thumb drive I created earlier. Once Windows setup loads, I end up selecting the option to Custom: Install Windows only (advanced).
For my PC, I have two drives running in it:

  • An SSD that’s responsible for storing the Windows 10 OS.
  • An HDD that’s responsible for holding all My Documents data, as well as everything for Dropbox.

I’ll select the SSD to install the OS on and kick off the installation process. After some time, the installation will finish and the computer will restart.

Display Configuration

Once I have the Windows desktop running, I like to set up my display before anything else. I currently run three 4K monitors with a GeForce 1080 Ti, so I set up scaling and make sure the monitors are configured as desired.
Once that’s done, I set up Windows to display everything as desired, including:

  • Shifting the Taskbar over to the right side, and only showing one taskbar.
  • Turning off Cortana, Tasks, and People.
  • Changing default apps to use Dark Mode.

Essentials: Mail

Once I have the display set up correctly, the next step for setting up a Windows 10 workstation is to set up some of the essential software on the system. I like to use the Windows 10 Mail app for both my mail and calendar. I’ll access that and add all relevant accounts, and then make the following configurations:

  • Create a linked inbox
  • Turn off email notifications
  • Set Swipe Right to Delete
  • Create a signature for all accounts
  • Set the calendar to start on Monday

Software Installation

Next up is downloading a host of software for the PC. I use Chocolatey for as much as I can, so after setting it up through PowerShell (as administrator), I install the following applications using Chocolatey:

  • googlechrome (sign in to account, configure extensions)
  • divvy (register application, start Divvy at login, hide from tray, set Ctrl-Shift-Z as shortcut)
  • 7zip
  • ccleaner (set up to check automatically)
  • dropbox (log in and download all files, point to HDD)
  • filezilla
  • git
  • nodejs
  • libreoffice-fresh
  • vscode
  • rescuetime (log in)
  • bitnami-xampp (LAMP server instructions, including setting up a full local WordPress installation)
  • mysql.workbench
  • postman
  • qbittorrent
  • steam
  • sql-server-management-studio
  • teamviewer
  • visualstudio2017community
  • azure-cli

You can do this with one command, like so:

choco install -y googlechrome ...

In addition, there are a few applications I use that aren’t in the Chocolatey package manager:

  • Franz – web application allowing me to combine Hangouts, Messenger and Slack into one application.
  • Manta – open source application I use for invoicing (looking for a replacement for this, one that can store data in the cloud).

After everything is installed, I go through each application that needs configuring (the notes above mark the applications that do).

Visual Studio Code Setup

Once all of my software is set up, I take a look at Visual Studio Code and set it up, since that’s where most of my development occurs.
I install the following plugins:

  • TSLint
  • Angular Language Service
  • EditorConfig for VS Code
  • Path Intellisense
  • Prettier – Code formatter
  • Terraform
  • PowerShell

Once this is done, I install Fira Code as the font to be used.
In terms of configuration, I copy and paste the current settings I like to use:

    "editor.rulers": [],
    "editor.minimap.enabled": false,
    "window.zoomLevel": 0,
    "explorer.confirmDelete": false,
    "explorer.confirmDragAndDrop": false,
    "workbench.startupEditor": "newUntitledFile",
    "editor.formatOnSave": true,
    "editor.fontFamily": "Fira Code",
    "editor.fontLigatures": true,
    "typescript.updateImportsOnFileMove.enabled": "always",
    "prettier.singleQuote": true,
    "prettier.printWidth": 80

Configuring PowerShell

Once Visual Studio Code is set up, the next step is to configure PowerShell. There are a few things to do here: build a profile, and then set up the Azure CLI.
I run the following commands in administrator PowerShell:

set-executionpolicy unrestricted
New-Item -path $profile -type file -force

Once that’s done, I should have a profile file created. I add the following to it:

set-location D:

Once that’s done, I’ll restart PowerShell to confirm that it moves to the D: drive on startup.

Final Touches

Once I’m all set with most things, there are a few more things I like to do:

  • Hide all desktop apps
  • Unpin everything from the Taskbar

All Set And Ready To Go

Once all of that is done, we’re all set with setting up a Windows 10 workstation. I’ll take a look at this again in 2019 to see what changes in a year. Perhaps I’ll switch over to using Linux?

Upgrading Jenkins On Azure

This guide assumes you used the Jenkins Azure Marketplace image to create a Jenkins virtual machine on Azure and that the initial setup is complete. If not, you can check out more information in the Microsoft documentation.

SSH into the Virtual Machine

If you’ve been using this instance, you should be familiar with SSH forwarding and being able to access Jenkins using localhost.  If this is new, you can read this Jenkins article on the security setup for the Azure Jenkins Marketplace instance.
Use this command in a command line interface to log in to your virtual machine:

ssh -L 8080:localhost:8080 <username>@<domain_name>

Replace the following:

  • Username – the server admin username created when creating the Azure VM.
  • Domain Name – the domain name to access the virtual machine.

You’ll be asked for a password – use the password provided alongside the username when creating the virtual machine.
After logging in you’ll see a screen like below.  This verifies you are connected to the virtual machine.
logged into the virtual machine
Access http://localhost:8080 and verify you can access the Jenkins interface:
Jenkins interface via localhost

Download Newest JAR

After logging into the server, you’ll use the following command to download the latest version of Jenkins:

sudo wget

Assuming all is successful, you should see the following:
Downloaded newest Jenkins JAR
After downloading the .war file, copy it to the Jenkins directory:

sudo mv jenkins.war /usr/share/jenkins/jenkins.war

(if you need to know the location of the WAR file on the server, you can check it in Manage Jenkins -> System Information)
And finally, with the new .war in place, restart Jenkins:

sudo systemctl restart jenkins

Verification of the installation is simple. Access Jenkins and check the bottom right to see the updated version number:
Checked for latest Jenkins installation



Living In A DevOps World (Part 2: Implementation)

This post was originally featured at
Part one of this article focused on some of the more behind-the-scenes benefits of an Agile DevOps approach. In part two the focus turns to some of the other traditional problems that a well-executed DevOps approach can address, and how doing so can benefit an organization in more ways than just a technical perspective.
By way of quick review, DevOps was born out of the Lean and Agile software development methodologies when it became clear that, while those methodologies did indeed speed up the development process, a bottleneck still occurred when push came to shove and new code had to be moved to quality assurance and production environments.
DevOps was created to more seamlessly connect the development and operations processes, making code migration, testing and promotion to production a more efficient process. To accomplish this, the DevOps approach had to find solutions for some of the issues that caused operational delays, and create new ways to organize, implement and continuously optimize the operations process.


For those who have been in development and/or operations for any length of time, it quickly becomes clear that a multitude of operational safety checks serve to protect a production environment. While that is vitally important, it also became clear that an “over” problem had grown around many operational procedures, and in many cases it manifested itself in the development process. That includes overproduction: making or requesting more than was needed from a requirements and/or operations perspective in order to clear arbitrary operations process hurdles.
It also includes overprocessing: development and operations resources doing more work than required (as opposed to just enough, as Lean and Agile would suggest) to smooth the transition of code and functions from development to operations. This created waste in time, resources, and budget out of proportion to the benefits derived from following the operations process.

Motion and Transportation

Similarly, DevOps also sought to solve the operational problems of both motion and transportation. That is, the amount of excess work required to deliver new code to meet the operational requirements for code migration. The friction caused by such requirements slowed the motion and momentum of the development process. The same is true of transportation, or the difficulty in moving code between environments such as testing, quality assurance and production.
In both cases, development and project momentum was sacrificed for what often turned out to be a series of artificial hurdles that had long since become less effective or even obsolete parts of the operations process.

Correction and Inventory

In most instances, all of the above resulted in the final maladies of the pre-DevOps development and operational ways. The first was the number of in-flight corrections required when timelines were squeezed, and the rush was on to get to production. Unfortunately, this went hand in hand with the ultimate problem of good code being sacrificed for expedient delivery, often resulting in inadequate functionality, system outages and, in the end, lost market opportunity and revenue.

3 Keys to DevOps Success

Any successful DevOps implementation must address three critical factors in this order: culture, organization and tools.


It’s critically important to connect an organization’s values to the DevOps process. Valuing quality, timeliness and organizational alignment of goals and objectives is the first step toward DevOps success. Such cultural values translate directly into a DevOps organization.
Providing empowerment and accountability to DevOps team members helps to build ownership among the team, and trust from their customers in the rest of the organization. It also helps to provide a physical environment that fosters collaboration, teamwork and continued learning. Co-working spaces and collaboration tools such as Slack are a good start. Attending external conferences to broaden perspectives and to bring new ideas back to the team is often beneficial. From there, brown bag lunch sessions where ideas and experiences can be shared, frequent post-mortems on implementations to hone best practices, and even internal mini-conferences where several departments come together for a day to discuss DevOps practices are all effective ways to build a strong DevOps culture.


Any good DevOps organization is two-sided; that is, it has to work from the top down and from the bottom up at the same time.
The top-down part is in the ability to “see the system” from a macro level, allowing for process understanding and insights from a business workflow perspective. This helps to identify the pain points and bottlenecks in the current process that can be optimized through the DevOps process.
Once that’s accomplished, the bottom-up work begins: identifying inconsistencies in code deployment environments that cause delivery issues, eliminating manual and custom-built deployment processes, and quarantining inefficient and poorly written code until it can be redone or eliminated. These are all part of optimizing the time, quality, resources and success factors for deploying production systems on schedule. It’s also important here to continually audit the current processes with an eye toward eliminating those that are no longer required or useful but have been kept in place out of the fear of “breaking something we don’t understand.” If nobody understands it, then it shouldn’t be in production software.

Automation Tools

The final factor for DevOps success is to have the right toolset.
Communication: Any DevOps team requires the ability to quickly and directly communicate with other team members sans meetings. For this purpose, tools such as Slack (real-time chat), Skype (video chat), and Confluence (for storing persistent information) are pretty good options.
Planning, Monitoring & Consistency: For the team’s planning needs, a tool such as Trello that can provide Kanban board functionality is worth a look. For issue tracking and monitoring of any system’s overall health, tools such as Jira and New Relic, respectively, provide some good functionality. Likewise, consistency is vital in a DevOps world, and using automation to ensure that all systems are configured as desired across different environments is a crucial best practice. For this, a tool such as Ansible is worth a review.
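To make the consistency idea concrete, here is a minimal sketch of an Ansible playbook that enforces the same NTP setup on every host. It assumes Debian/Ubuntu targets and a hypothetical inventory; the package and service names would vary by OS:

```yaml
# playbook.yml — hypothetical example: enforce identical NTP configuration
# across all environments (assumes Debian/Ubuntu hosts)
- hosts: all
  become: true
  tasks:
    - name: Ensure ntp is installed
      apt:
        name: ntp
        state: present

    - name: Ensure ntp service is running and enabled at boot
      service:
        name: ntp
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory playbook.yml` against development, staging and production inventories keeps every environment converged on the same configuration, rather than relying on manual, per-machine setup.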
Integration & Deployment: For continuous integration of systems in development and as a way to tighten the feedback loop for developers to determine if the central build used for deployment to production is working as intended, the Jenkins toolset might be a good fit. And finally, when it comes to making any deployment process as painless as possible, a tool such as Docker, which can create containers that package an application together with all of its dependencies, reducing the complexity of deployment to multiple environments, is a solid way to go.
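As a sketch of that packaging idea, a minimal Dockerfile for a hypothetical Python application (the `app.py` and `requirements.txt` names are assumptions, not from the original) might look like:

```dockerfile
# Dockerfile — hypothetical example: bundle an app with all its dependencies
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

CMD ["python", "app.py"]
```

Once built with `docker build -t myapp .`, the same image runs identically on a developer laptop, a CI server or production, which is precisely what removes the “works on my machine” class of deployment problems.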
The point of all of this is to create an environment—culturally, technically and physically—where DevOps can succeed, grow and thrive. Organizations that can create an effective and efficient DevOps environment have also created a competitive advantage for themselves.