The Benefits of Test-driven Development

This post was originally featured at DevOps.com.


It sounds like an oxymoron: Create test cases before coding. In the process of writing and using test cases before writing functionality, a developer can produce higher-quality software. That is the concept behind test-driven development (TDD), and it works.

TDD Snapshot

TDD has been around since at least 1999, and it is part of the test-first approach to development associated with extreme programming. In 2003, American software developer Kent Beck “rediscovered” TDD and applied it as a way to create simple designs and inspire developer confidence. Fast forward to today’s agile development world: TDD is a software development process built around a very short feedback loop in which developers do the following:

  1. Create a test that will fail immediately.
  2. Write the necessary code to pass the test as quickly as possible.
  3. Refactor the code written in step two, using the created test as a reference.

TDD Benefits

So why might a long-in-the-tooth software development approach have tangible benefits in an agile development world? Because, put in its simplest terms, TDD is a way to develop highly usable software.

Following the TDD process, a developer must focus on the test cases before actually coding anything. That means a developer is thinking more about the use of the software and the design of user interfaces to achieve that. As a result, the developer is more interested in the interface than in the implementation—and that leads to more usable software.

There are several other tangible benefits for taking a TDD approach, including:

  • Creating a quick and easy metric for code quality.
  • Allowing for a quick visualization of whether the code base has any functional issues.
  • Providing living documentation of functionality as new code is written.
  • Allowing for safe refactors of code, whether driven by attempts to improve code quality or by a changed requirement.

That final point is worth dwelling on for a moment. The TDD approach requires that the evolving code base is cleaned up frequently so that new tests and code are easy to introduce. That usually means that code moves from its current place to where it more logically belongs in the software. That has the effect of reducing any extraneous duplication of code and enforces a strong discipline around objects, classes, modules, etc. In this way, the overall maintainability of the software increases incrementally.

The improvement in readability and maintainability will pay huge dividends during the expected life cycle of the software. Following a TDD methodology requires developers to focus on writing smaller units of testable code, which leads to more modularized, flexible and extensible software.

Where Can TDD Fit?

A TDD approach is appropriate for both new greenfield software and legacy systems. For a development team that has to deal with existing legacy software, the key is to start small and begin with bug fixes. A good practice is that for each bug reported, the team creates a test that reproduces the broken behavior and then fixes the functionality. After several iterations of this, the development team has built up a repeatable set of working tests around its bug fixes. When applying this methodology to new software applications, make a point of understanding the testing tools used for the technology stack.

For example, an Angular application generally uses the Jasmine testing framework for unit testing, and when code is generated with the Angular CLI, unit test files are created alongside each module of code. Using a TDD methodology, the approach would be the following (a small sketch of the resulting spec appears after the list):

  1. Determine a portion of the functionality to be created with this component.
  2. Create a unit test targeting this portion of functionality; it will fail immediately.
  3. Run the test runner to confirm the failing test (it may be useful here to leave the test runner watching files so it reruns after each source file save, which speeds up the process).
  4. Write code in the Angular component that will make the written test pass.
  5. Make any refactoring changes to the Angular component after confirming the test passes, using the test as a guideline to ensure the refactoring does not break functionality.
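
To make steps 2 and 4 concrete, here is a minimal sketch of what such a spec might look like. The RegistrationFormComponent and its isEmailValid() method are hypothetical names invented for this illustration, and the component is constructed directly rather than through the Angular TestBed to keep the sketch short:

// registration-form.component.spec.ts (hypothetical example)
import { RegistrationFormComponent } from './registration-form.component';

describe('RegistrationFormComponent', () => {
  let component: RegistrationFormComponent;

  beforeEach(() => {
    // A real spec would usually configure TestBed; direct construction keeps this short.
    component = new RegistrationFormComponent();
  });

  // Step 2: this test fails until isEmailValid() is implemented in step 4.
  it('should reject an empty email address', () => {
    component.email = '';
    expect(component.isEmailValid()).toBe(false);
  });

  it('should accept a well-formed email address', () => {
    component.email = 'attendee@example.com';
    expect(component.isEmailValid()).toBe(true);
  });
});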

Measuring Testability with Code Coverage

Another essential consideration when improving testability in code is to use a code coverage tool. Code coverage is a metric showing the percentage of code that is exercised by unit tests. Angular CLI applications use Istanbul to calculate code coverage across the application; running a single coverage pass (for example, with ng test --code-coverage) produces a summary report for the project.

The output provided by Istanbul gives a metric on the overall test coverage and the areas of code that need improvement in testing. Code coverage can be useful in a few ways:

  • Provides an idea of the overall testability, allowing for a threshold to ensure the total software testability does not drop past a certain point.
  • Identifies areas in the code base with poor testing, making them opportunities for refactoring.

However, as effective as code coverage can sound, it’s important to understand that it’s just a metric. Writing good unit tests is a matter of testing what the code is actually meant to do, and metrics such as these should not drive significant decisions on their own.

Considerations When Using TDD

It’s important to note that TDD does not solve all problems. Many different types of testing are required to create a fully comprehensive testing strategy, including acceptance testing. In TDD, the focus is on a single unit of code at a time. A sophisticated software application may have many thousands of units of code and their corresponding tests. That’s why it’s critical to make sure that testing quality stays high when following the TDD methodology. Tests cannot become something bypassed in the pursuit of more functionality or expediency; once testing is routinely skipped, test creation starts to feel like a hindrance to developers rather than a help.

For example, ignoring failing tests makes it difficult to determine the actual state of the application. It’s also important to have buy-in for a TDD approach from all the teams involved in the effort, particularly on the business side. Time must be spent up front discussing the nature and benefits of a TDD approach and building confidence that using TDD will improve the resulting software. Otherwise, business management will see writing tests as an activity that doesn’t contribute to the bottom line.

Conclusion

TDD emphasizes the importance of an effective and sustainable testing approach, and it contributes directly to the overall quality of software. That’s a truism of both small and large system development that often goes missing in the day-to-day hustle to get new functionality into production. Quality software gets built when there’s an acknowledgment that quality test code deserves the same attention and resources as quality production code, because the two are equally essential in development.


9 Top DevOps Conferences — A Developer's Picks

This post was originally featured at Sonatype.

With the DevOps movement only continuing to grow, plenty of conferences have sprung  up around the topic. These are a great way to get acquainted with DevOps methodology and the capabilities it can bring to your organization.

Conferences also let you do the following:

  1. Ramp up your knowledge on DevOps technologies, whether you’re new to the concept or have plenty of experience.
  2. Network with other like-minded people.
  3. Take a trip somewhere fun (though this isn’t always needed to accomplish the first two).

So DevOps conferences are great. But which one(s) should you attend? A person only has so much free time and money.

In this post, I want to share a developer’s perspective on what I think are the best DevOps conferences currently offered. I’ll let you know why I think they’re worth attending, and I’ll cover the target audience, upcoming dates, and costs. Hopefully, this will help you choose the conference that will benefit you the most.


1. AllDayDevOps

First off, we have AllDayDevOps, a DevOps and DevSecOps conference that offers over 100 sessions across five tracks during its 24 hours, including:

  • CI/CD
  • Cloud-native infrastructure
  • DevSecOps
  • Cultural transformations
  • Site reliability engineering

AllDayDevOps is unique from the other conferences on this list in that it has a live Q&A session on Slack, it’s completely online, and it’s free!

You read that correctly—you’ll get around the same session count as other large conferences on this list (if not more), with the convenience of no travel and no expenses. If you’re curious as to why AllDayDevOps works this way, take a look at the FAQ provided.

AllDayDevOps takes place October 17, 2018. After the conference, all sessions will be sent to registered attendees—another big advantage of a fully remote conference.


2. DevOpsDays

Next up, there’s DevOpsDays, one of my favorites on this list. It’s a series of worldwide conferences run by local organizers. It’s similar to WordCamp for WordPress, which is a collection of conferences organized by local meetup folks interested in the technology.

DevOpsDays focuses on general DevOps topics and includes curated talks. It also features a unique spin on the concept of open space, with discussions on software development and IT infrastructure.

DevOpsDays tends to draw smaller crowds than some of the bigger conferences. That means networking can be easier for people intimidated by large crowds.

Many, many DevOpsDays conferences take place each year. (As of Labor Day, there are still 40 left in 2018!) This means you’ll likely find one you can attend. Here are some of the upcoming locations:

  • Cairo, Egypt
  • Chicago, IL, US
  • Columbus, OH, US
  • Detroit, MI, US
  • Paris, France
  • Portland, OR, US
  • Salvador, Brazil
  • And many others!

These events tend to cost less than other, more mainstream conferences. That’s because of their local nature. For example, I’m based in Detroit. So I could attend DevOpsDays there for only about $150. That would get me a two-day, single-track conference pass to around 10 curated talks and five open-space sessions.


3. DockerCon

DockerCon is an American and European conference focused on the Docker technology. It’s a very good DevOps conference to attend, as Docker is considered the most popular container technology available. The event is suitable for many types of professionals, including software developers and IT staff members. Even non-technical folks such as business analysts and executives would enjoy it.

As DockerCon is a large-scale conference for a major technology, you can expect the following:

  • 6,000+ attendees
  • 5+ tracks and 100 sessions
  • Insight into the latest happenings with the Docker technology, much like you’d get from Microsoft’s Build or Apple’s WWDC
  • The Docker Hallway Track, a system to help find others, network, and communicate in a one-on-one fashion
  • Opportunity to become a Docker Certified Associate with an on-venue test

You can attend DockerCon at these upcoming dates and locations:

  • Barcelona, Spain: December 3–5, 2018
  • San Francisco, CA: April 29–May 2, 2019

Pricing for DockerCon runs from around $990 for early-bird reservations to $1,395 for standard tickets. Lodging ranges from about $290–$360 a night.


4. ChefConf

Chef is a software company that provides multiple DevOps products and solutions to help businesses improve their software delivery processes. ChefConf, as its name suggests, is a conference run by Chef. It provides hands-on workshops. There are also sessions explaining the Chef software itself and how it works in real-world scenarios.

For companies already using Chef products for software delivery solutions, this conference can help them understand the software’s capabilities. For a company not using Chef, this conference offers a demonstration of the technology’s real-life capabilities and what it can do for an organization. In addition, ChefConf offers a chance to network with 1,500 attendees.

The next conference is scheduled for May 20–24, 2019, in Seattle, WA. With early-bird pricing, it generally costs less than $1,000.

To get a feel for what it would be like to attend, you can watch footage from past conferences right on the ChefConf website.


5. KubeCon/CloudNativeCon

KubeCon (and CloudNativeCon) is a Linux Foundation event focused on the Kubernetes technology. Kubernetes is currently the leading container orchestration system used in conjunction with Docker. Therefore, this conference is appropriate for those working with Kubernetes, as well as application developers, IT professionals, and anyone else interested in learning about Cloud Native.

Similar to DockerCon, KubeCon provides a method for gaining credentials in the Kubernetes technology space. The conference serves as a venue for attending the Kubernetes Fundamentals course. You can also earn the Certified Kubernetes Administrator certification.

KubeCon will take place in Seattle from December 10–13, 2018. Prices vary, depending on the type of ticket:

  • Corporate (your company pays for your attendance): $1,100–$1,200
  • Individual (you pay for yourself): $550–$650
  • Academic (you’re a student currently enrolled in college): $150

This conference is a decent deal if you’re paying out of pocket, especially if you’re a student.

A quick note: KubeCon has an interesting registration process. If you want to attend as an individual (at your own expense), you can email the event organizers with the intent to register in that category. You’ll receive a registration code, which will let you purchase a ticket for the conference.


6. PowerShell + DevOps Global Summit

The PowerShell + DevOps Global Summit is a DevOps conference focused on the Windows tool PowerShell and how to integrate it with the DevOps methodology. PowerShell is a popular tool in the automation world. So, attending this conference makes sense if your organization uses PowerShell in its DevOps solutions.

Unlike other conferences described in this post, this event provides an OnRamp track. OnRamp coaches entry-level IT professionals to understand DevOps and PowerShell. But the track accepts only 40 attendees, based on applications completed during registration. So this summit might be one of the only selective DevOps conferences around.

The PowerShell + DevOps Global Summit conference usually takes place in Bellevue, WA, and costs $1,650 per person. Check out the brochure for the 2019 event for more information.


7. DevOps World | Jenkins World

DevOps World | Jenkins World is a Jenkins-supported DevOps conference. Predictably, it focuses on the automation software Jenkins and its capabilities for IT teams. This conference welcomes about 2,500 attendees, making it the largest Jenkins-focused DevOps conference around. In addition, it provides 100+ sessions for attendees to enjoy. If your team uses Jenkins in your software delivery pipeline, this conference is a must. While the US event just took place two weeks ago, you can still attend its counterpart in Nice, France from October 22-25, 2018 (schedule).

Prices range from $499 for early-bird tickets purchased four months in advance, up to $1,199 for the standard rate. So it’s a good move to register early.

Potential attendees should also check out this post, which provides more information and a discount code for 30 percent off.


8. DevOps Enterprise Summit

DevOps Enterprise Summit (or DOES) is a three-day DevOps conference focused on working in the enterprise space. Whereas most of the conferences above mix in more high-level concepts, this conference features speakers with real-world experience working in enterprise environments.

In terms of locations, the European DOES (in London) has already passed, but you can still attend the US DOES in Las Vegas on October 22-24, 2018. A full-price ticket runs around $1,975.

In past years the conference has sold out, so if it sounds interesting, you may want to act quickly!


9. CodeMash

Although not directly a DevOps-focused conference, CodeMash is a two-day event (or four-day, if you attend workshop sessions). It focuses on multiple facets of technology. This includes software development, DevOps, technical leadership, and much more. The event takes place at the beginning of each year in Sandusky, OH, at the Kalahari Resort.

CodeMash offers a great ROI for individuals who want to learn all things related to DevOps and to choose from a variety of other types of workshops and sessions. In addition, CodeMash is perfect for travelers attending DevOps conferences with their family, as there are specific events for children called KidzMash.

The next CodeMash will take place from January 8–11, 2019. The first two days include workshop sessions, also called pre-compiler sessions. The next two days are the main conference and involve plenty of curated talks.

In terms of cost, tickets go as follows:

  • two-day conference (all talks): $350
  • four-day conference (pre-compiler sessions and all talks): $650

CodeMash also provides a deeply discounted rate for lodging at the Kalahari. That’s another reason this event is a great choice for those traveling with family. In addition, kids can play in the water park and enjoy other activities at the venue.

For more information, take a look at last year’s schedule and sessions or watch recorded sessions from the CodeMash YouTube channel.


Get Your DevOps Conference On

These conferences all have one thing in common—they are pertinent to current DevOps practices, providing value for practitioners at all levels, from complete beginners to veterans. Hopefully, this list helps you better decide which DevOps conferences are right for you to attend—whatever your experience level.

What’s your experience with the DevOps conferences above? Have something to say on one of them? Did I miss an important conference? Let me know in the comments!


TDD vs. BDD: What Are They and How Do They Differ?

Improving the testability of software is extremely important, and a difficult ideal in its own right. To achieve it, engineers and others interested in software testability have created many strategies, tools, and methodologies.

With so many methodologies and techniques available, which one should you choose? Two of the most prominent methodologies I’ve encountered are test-driven development (TDD) and behavior-driven development (BDD). In this post, I’ll describe what TDD and BDD are, discuss their real-world applications, and explain how they differ. In addition, I’ll talk about considerations between TDD vs. BDD.

Here are some facts to consider when thinking about TDD vs. BDD.


What Is TDD?

Test-driven development (TDD) is one of the most popular methodologies today for adding testability to a software project. Test-driven development became popular around 2003 via Kent Beck, alongside the test-first extreme programming principle. TDD aims to provide 100 percent test coverage by working with a very short feedback cycle when writing code. The short feedback cycle drives implementation, verifying functionality with automated testing. In other words, you write tests beforehand and then write functionality so that the software passes those tests.

The TDD methodology and its short feedback loop are defined in Kent Beck’s book Test-Driven Development: By Example with the following steps:

  1. Write a failing test (red).
  2. Make the test work ASAP (green).
  3. Refactor the code.

Step 1: Write a Failing Test

The first step with TDD is to create a “red” test that fails. This test doesn’t need to pass, or even compile at all. The primary purpose of this step is to define the functionality to test.

Step 2: Make the Test Pass ASAP

Once you’ve established a failing test, the next step is to make it pass as quickly as possible. There are no rules in getting the implementation correct to turn the test “green.”

Step 3: Refactor the Code

Once you have a passing test, you have the means to test the functionality created in step two. The next step is to refactor the code to a good standard, such as by removing any duplication created during the previous steps. While refactoring, you’ll have the ability to use the test you just created to ensure that the code is always working as intended.
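
As a minimal sketch of those three steps, here is a hypothetical example in TypeScript with Jasmine-style assertions; the calculateTotal function is invented purely for illustration:

// Step 1 (red): the test is written first and fails until calculateTotal is implemented.
describe('calculateTotal', () => {
  it('sums the prices of all line items', () => {
    expect(calculateTotal([{ price: 5 }, { price: 10 }])).toBe(15);
  });
});

// Step 2 (green): the quickest implementation that makes the test pass.
function calculateTotal(items: { price: number }[]): number {
  let total = 0;
  for (const item of items) {
    total += item.price;
  }
  return total;
}

// Step 3 (refactor): clean up while the test keeps guarding the behavior,
// for example by replacing the loop with a single reduce call:
// const calculateTotal = (items: { price: number }[]) =>
//   items.reduce((sum, item) => sum + item.price, 0);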

TDD: Benefits and Drawbacks

Now that you know the steps involved in test-driven development, let’s look at some of the benefits and drawbacks. Some of the benefits of practicing TDD include:

  • Improve code quality by virtue of writing code with testability in mind—this generally results in well-defined and decoupled code that is easy to maintain over time.
  • Provide a framework of automated tests that can provide a metric of the code quality at any point. This is great for understanding regression effects when changing functionality in the codebase.
  • Test creation alongside functionality creation. Instead of tests being an afterthought, they become as critical as writing functionality.

Of course, there are some disadvantages as well:

  • When considering the business case for writing with testability in mind, it can be difficult to explain how testing can contribute to the bottom line. This is especially difficult for teams working under unrealistic deadlines.
  • To follow TDD correctly, you need both knowledge of the methodology and the discipline to commit to its rules. Especially as timelines tighten and deadlines loom closer, it becomes easy to push off testing as something to “address later.”

Now that we’ve covered the basics of TDD, let’s talk about the next testing methodology in our discussion of TDD vs. BDD, behavior-driven development.

What Is BDD?

Behavior-driven development (BDD) combines the TDD methodology with core principles from object-oriented programming and domain-driven design. BDD allows both technical and non-technical teams to be involved in software development testing efforts: tests are written in a way that explicitly defines the behavior expected from the software, which lets both non-technical and technical members take part in defining them.

Behavior-driven development focuses on creating user stories explaining the desired functionality. This is opposed to test-driven development, which focuses on having a feedback loop driven by written tests. In 2007, Dan North wrote a guideline for the different steps of defining a user story—let’s take a look.

Step 1: Given 

First, a behavior-driven test will establish the initial context for the test. This step is called “given,” which describes a situation in which functionality will be determined. This step is important in establishing the state of the application before testing for functionality. Since a behavior-driven test should still look to isolate functionality, it’s important to be specific with the context of the user story when determining functionality in the application.

Step 2: When

The next step in writing a BDD test is to establish the “when” criteria. The “when” step describes the action occurring with the “given” content. Generally, this step is a verb describing something happening in a system, such as clicking a page or registering a user.

Step 3: Then 

Finally, the last step is to define a “then” statement for the behavior-driven test. This statement describes the final result that occurs after performing an action with the “when” statement.

BDD: A Quick Example

Let’s look at an example. If I’m building an application that collects registrations for an event, I’d think about the different functionalities this application can have. A core function to test would be the capability to add an attendee to an already existing event. My “given” statement would look something like: given that an event already exists and is selected.

Next, I would create a “when” statement to provide an action to perform for this test. Since my goal is to add an attendee to an event, my “when” statement would be: when a user registers for the given event.

Lastly, I would finish the test by including a “then” statement. Once the user registers for an event, I need to confirm the user was actually registered. My statement could look like: then confirm the user is registered for the event.
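
Expressed as code, that user story might look like the following hypothetical Jasmine-style test. EventService and its methods are invented names, and a tiny in-memory implementation is included only so the example is self-contained:

// A minimal in-memory service invented for this example.
class EventService {
  private events = new Map<number, { name: string; attendees: string[] }>();
  private nextId = 1;

  createEvent(name: string): { id: number; name: string } {
    const id = this.nextId++;
    this.events.set(id, { name, attendees: [] });
    return { id, name };
  }

  registerAttendee(eventId: number, email: string): void {
    this.events.get(eventId)?.attendees.push(email);
  }

  getAttendees(eventId: number): string[] {
    return this.events.get(eventId)?.attendees ?? [];
  }
}

describe('Event registration', () => {
  it('registers a user for an existing event', () => {
    // Given: an event already exists and is selected.
    const service = new EventService();
    const event = service.createEvent('Annual Conference');

    // When: a user registers for the given event.
    service.registerAttendee(event.id, 'attendee@example.com');

    // Then: confirm the user is registered for the event.
    expect(service.getAttendees(event.id)).toContain('attendee@example.com');
  });
});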

BDD: Some of the Benefits

Let’s go over a few benefits of using behavior-driven development:

  • Establishes strong business requirements for the application. Behavior-driven tests provide a framework of tests focused on the business functionality of the application, since they are created with the desired behavior in mind.
  • Encourages close collaboration between domain experts and developers. Building tests in this way helps prevent miscommunication between requirements and implementation.
  • Provides repeatable, high-level tests for regression throughout the application. If you’ve ever run into a case where business users forget the functionality they asked for, this benefit will serve you well.

Now that we have a better understanding of behavior-driven testing, let’s look at the differences between TDD vs. BDD.

TDD vs. BDD: Understanding the Difference

When looking at the difference between TDD vs. BDD, it’s important to understand that you don’t have to choose just one or the other—you can use TDD and BDD together to cover your organization’s testing needs. By using aspects of both TDD and BDD, you’ll be able to have low-level tests for the details of the codebase, and higher-level tests that touch on the direct functionality desired by the customer. Additionally, the different sectors of your organization can have different involvement in the testing process. A different workflow in test creation can involve different team members, depending on their technical capability.

First, consider an environment where the development team is driving internal functionality. Test-driven development centers around tests defined by the development team. TDD’s short feedback cycle works well within a development team since the tests created with this methodology tend to be small and quick.

Second, an environment with more collaboration between the domain experts and the development team will benefit from a stronger behavior-driven development process. This allows for tests written in simple English that make sense to a non-technical user. In addition, behavior-driven tests make for great acceptance tests: they define the desired functionality without considering implementation details, which allows non-technical users to be included in their creation.

Using TDD and BDD in Harmony

This post should have given you a good understanding of what test-driven development and behavior-driven development are. When considering whether to implement TDD vs. BDD in your workflow, both can create an effective test creation strategy. If you aren’t using one (or both!) of these methodologies, give them each a try and see if they help you with creating better tests for your software.


Server Performance Metrics: 8 You Should Be Considering

This post was originally featured at Raygun.

With the DevOps movement entering the spotlight, more and more developers concern themselves with the end-to-end delivery of web applications. This includes the deployment, performance, and maintenance of the application.

As an application gains more users in a production environment, it’s increasingly critical that you understand the role of the server. To determine the health of your applications, you may find it useful to gather performance metrics for the servers running your web applications.

All different types of web servers (Apache, IIS, Azure, AWS, and NGINX, for example) have similar server performance metrics. Most of my experience in this realm lies in Microsoft Azure, which provides an easy-to-use interface for finding and collecting data. Microsoft Azure gives you the capability to host applications in either Azure App Services (PaaS) or Azure Virtual Machines (IaaS), and either setup gives you a view of the different metrics for the application or server you’re running.

Because of all this experience I’ve had over the last few months, I’ve found what I think are the eight most useful server performance metrics. These metrics can be divided into two categories: app performance metrics and user experience metrics.

Let’s start by looking at the metrics under the app performance umbrella.

App performance metrics

App performance metrics are specific to the speed of the web applications that are running. If you’re having issues with an application performing slowly, these metrics are a good place to start.

Metric 1: Requests per second

Requests per second (also called throughput) is just like it sounds—it’s the number of requests your server receives every second. This is a fundamental metric that measures the main purpose of a web server, which is receiving and processing requests. Large-scale applications can reach up to about 2,000 requests per second.

Given enough load, any server can fall. When considering the impact, remember that requests are just that: a single request to the server. This metric doesn’t consider what’s happening in each of these requests.

This brings us to our next metric.

Metric 2: Data in and data out

The next metric I suggest you look at is your data in and data out. The data in metric is the size of the request payloads going to the web server. For this metric, a lower rate is better (lower meaning smaller payloads being sent to the server). A high data-in measurement can indicate the application is requesting more information than it needs.

Data out is the response payload being sent to clients. As websites have grown larger over time, this has become an issue, especially for users with slower network connections. Bloated response payloads lead to slow websites, and slow websites dissatisfy your users. With enough slowness, those users abandon the website and move on. Google suggests that mobile pages taking three seconds or more to load have about a 53% chance of being abandoned before the load completes.

Metric 3: Average response time

Defined directly, the average response time (ART) is the average time the server takes to respond to all requests given to it. This metric is a strong indicator of the overall performance of the application, giving an impression of the application usability. In general, the lower this number is, the better. But there are studies showing that the ceiling limit for a user navigating through an application is around one second.

When considering ART, remember what the acronym stands for—it’s just an average. Like any metric based on an average, high outliers can throw the number off completely and make the system seem slower than it is. ART is most helpful when used alongside our next metric on the list.

Metric 4: Peak response time

Similar to the average response time, the peak response time (PRT) is the measurement of the longest responses for all requests coming through the server. This is a good indicator of performance pain points in the application.

PRT will not only give you an idea of which portions of your applications are causing hangups; it will also help you find the root cause of these hangups. For example, if there’s a certain slow web page or a particularly slow call, this metric can give you an idea of where to look.
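
As a quick sketch of how the two metrics relate, here is a small TypeScript example with made-up response times:

// Response times in milliseconds for a sample batch of requests (invented data).
const responseTimes = [120, 135, 110, 142, 4800, 125];

// Average response time (ART): total time divided by the number of requests.
const art = responseTimes.reduce((sum, t) => sum + t, 0) / responseTimes.length;

// Peak response time (PRT): the single slowest request.
const prt = Math.max(...responseTimes);

console.log(`ART: ${art.toFixed(0)} ms, PRT: ${prt} ms`);
// A single 4.8-second outlier drags the average up to roughly 905 ms even though
// most requests finish in about 125 ms, which is why ART and PRT are best read together.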

Metric 5: Hardware utilization

Next, let’s talk about overall hardware utilization. Any application or server running is limited by the resources allocated to it. Therefore, keeping track of the utilization of resources is key, primarily to determine if a resource bottleneck exists. You have three major aspects of a server to consider:

  • the processor
  • the RAM (memory)
  • the disk space and usage

When considering these, you’re looking for what can become a bottleneck for the whole system. As any physical (or virtual!) computer running with these components will show, performance is only as strong as its weakest link. This metric can tell you what the bottleneck is and what physical component can be updated to improve performance.

For example, you may run into issues when trying to render data from a physical hard drive. That will cause a bottleneck in the I/O interactions between gathering files and presenting them to the user. While the hard drive spins and gathers data, the other physical components do nothing. An upgrade to a solid-state drive would improve the performance of the entire application because the bottleneck will disappear.

Metric 6: Thread count

The next metric—the thread count of a server—tells you how many concurrent requests are happening in the server at a particular time. This metric will help you understand what the general load of a server looks like from a request level. It will also give you an idea of the load placed on the server when running multiple threads.

A server can generally be configured with a maximum thread count allowed. By doing this, you’re setting a max limit of requests that can happen at one time. If the thread count passes this maximum value, all remaining requests will be deferred until there’s space available in the queue to process them. If these deferred requests take too long, they’ll generally time out.

It’s worth noting that increasing the max thread count generally relies on having the appropriate resources available for use.

User experience metrics

Now that we’ve covered the app performance metrics, let’s discuss a few that are user experience centered. These server performance metrics can measure your users’ overall satisfaction when using your web applications.

Metric 7: Uptime

Although not directly related to performance, the uptime of the server is a critical metric. Uptime is the percentage of time the server is available for use. Ideally, you’re aiming for 100% uptime, and you’ll see many cases of 99.9% uptime (or more) when looking at web hosting packages. It’s not uncommon for software projects to abide by a service level agreement that dictates a particular server uptime rate.

If uptime monitoring isn’t something your server provides built in, there are plenty of third-party services, such as Updown.io, that can do it for you. These services can even give you a visual depiction of their reports.

And here’s an interesting fact. Calculating the monthly allowed downtime shows:

  • 99%: ~7 hours
  • 99.9%: ~45 minutes
  • 99.999%: ~30 seconds
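
Those figures follow from a simple formula: allowed downtime equals (1 − uptime) × the length of the period. For a 30-day month, a 99.9 percent target works out to (1 − 0.999) × 30 × 24 × 60 ≈ 43 minutes, and a 99.999 percent target to under half a minute.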

Metric 8: HTTP server error rate

The HTTP server error rate is a performance metric that doesn’t directly relate to application performance, but it’s a critical one nonetheless. It counts the internal server errors (HTTP 5xx codes) returned to clients. These errors come from malfunctioning applications when an exception or other error is not handled correctly.

A good practice is to set up an alert whenever these kinds of errors occur. Because 500 errors are almost completely preventable, being notified of every HTTP server error lets you stay on top of problems as they happen and gives you confidence that your application remains robust, rather than letting errors quietly build up over time.
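
If your platform doesn’t expose this count directly, it’s straightforward to capture yourself. Here’s a rough sketch assuming a Node.js/Express application; the alert itself is left as a placeholder:

import express from 'express';

const app = express();
let serverErrorCount = 0;

// Middleware that counts 5xx responses as they finish, so the total can feed an alert.
app.use((req, res, next) => {
  res.on('finish', () => {
    if (res.statusCode >= 500) {
      serverErrorCount++;
      // Replace this with a call to your monitoring or alerting tool of choice.
      console.warn(`HTTP ${res.statusCode} on ${req.method} ${req.originalUrl}`);
    }
  });
  next();
});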

How to measure server performance

Measuring server performance with an Application Performance Monitoring (APM) tool like [Raygun APM](https://raygun.com/platform/apm) is the easiest and most accurate way of measuring the health of your software. A good APM tool gives your team greater context and diagnostics for your biggest application performance questions, helping you discover and pinpoint the root cause of performance issues with greater speed and accuracy than traditional solutions.

Keep your finger on the pulse

These are the server performance metrics I’ve personally found to be the most valuable. If you collect and monitor this kind of data on both your users’ experience and your app performance, very little will fall through the cracks.

Did I mention any metrics that you’re not currently using? Consider trying them out. After all, metrics are your best way to keep your eye on your server performance—and, by extension, your application’s health.


Setting Up A Windows 10 Workstation in 2018

2018.10.24 EDIT: Added a few new applications when setting up my Surface Pro 4.
I recently built a new PC for myself to use for work and wanted to document the process of getting it set up for use. Each year, I go through and reinstall any system I have as a means to make sure I have this process down, alongside doing an audit to see what I’m actually using, to make sure I keep things clean.
Here’s the process I use when setting up a Windows 10 workstation:

Installation

First, I go to the Microsoft website and create a bootable thumb drive containing Windows 10.
I restart my computer and boot to the thumb drive I created earlier. Once Windows setup loads, I end up selecting the option to Custom: Install Windows only (advanced).
For my PC, I have two drives running in it:

  • An SSD that’s responsible for storing the Windows 10 OS.
  • An HDD that’s responsible for holding all My Documents data, as well as everything for Dropbox.

I’ll select to install the OS on the SSD and kick off the installation process. After some time, the installation will finish and the computer will restart.

Display Configuration

Once I have the Windows desktop running, I like to set up my display before I do anything else. I currently run three 4K monitors with a GeForce 1080 Ti, so I will set up scaling and make sure the monitors are configured as desired.
Once that’s done, I set up Windows to display everything as desired, including:

  • Shifting the Taskbar over to the right side, and only showing one taskbar.
  • Turning off Cortana, Tasks, and People.
  • Changing default apps to use Dark Mode.

Essentials: Mail

Once I have the display set up correctly, the next step for setting up a Windows 10 workstation is to set up some of the essential software on the system. I like to use the Windows 10 Mail app for both my mail and calendar. I’ll access that and add all relevant accounts, and then make the following configurations:

  • Creating a linked inbox
  • Turning off email notifications
  • Setting Swipe Right to Delete
  • Creating a signature for all accounts
  • Setting the calendar to start on Monday

Software Installation

Next up is downloading a host of software to use for the PC. I use Chocolatey for as much as I can, so after setting it up through PowerShell (as administrator), I install the following applications using Chocolatey:

  • googlechrome (sign in to account, configure extensions)
  • divvy (register application, start Divvy at login, hide from tray, set Ctrl-Shift-Z as shortcut)
  • 7zip
  • ccleaner (set up to check automatically)
  • dropbox (log in and download all files, point to HDD)
  • filezilla
  • git
  • nodejs
  • libreoffice-fresh
  • vscode
  • rescuetime (log in)
  • bitnami-xampp (LAMP server instructions, including setting up a full local WordPress installation)
  • mysql.workbench
  • postman
  • qbittorrent
  • steam
  • sql-server-management-studio
  • teamviewer
  • visualstudio2017community
  • azure-cli

You can do this with one command, like so:

choco install -y googlechrome ...

In addition, there are a few applications I use that aren’t in the Chocolatey package manager:

  • Franz – web application allowing me to combine Hangouts, Messenger and Slack into one application.
  • Manta – open source application I use for invoicing (looking for a replacement for this, one that can store data in the cloud).

After everything is installed, I go through each application that needs configuring (notes are included above for those that do).

Visual Studio Code Setup

Once all of my software is set up, I take a look at Visual Studio Code and set it up, since that’s where most of my development occurs.
I install the following plugins:

  • TSLint
  • Angular Language Service
  • EditorConfig for VS Code
  • Path Intellisense
  • Prettier – Code formatter
  • Terraform
  • PowerShell

Once this is done, I install Fira Code as the font to be used.
In terms of configuration, I copy and paste the current settings I like to use:

{
    "editor.rulers": [
        80
    ],
    "editor.minimap.enabled": false,
    "window.zoomLevel": 0,
    "explorer.confirmDelete": false,
    "explorer.confirmDragAndDrop": false,
    "workbench.startupEditor": "newUntitledFile",
    "editor.formatOnSave": true,
    "editor.fontFamily": "Fira Code",
    "editor.fontLigatures": true,
    "typescript.updateImportsOnFileMove.enabled": "always",
    "prettier.singleQuote": true,
    "prettier.printWidth": 80
}

Configuring PowerShell

Once Visual Studio Code is set up, the next step is to configure Windows PowerShell correctly. There are a few things to do here: set the execution policy, build a profile, and then set up the Azure CLI.
I run the following commands in administrator PowerShell:

Set-ExecutionPolicy Unrestricted
New-Item -Path $profile -Type File -Force

Once that’s done, I should have a profile file created. I add the following to it:

set-location D:

Once that’s done, I’ll restart PowerShell to confirm that when it starts up, it moves to the D: drive.

Final Touches

Once I’m all set with most things, there are a few more things I like to do:

  • Hide all desktop apps
  • Unpin everything from the Taskbar

All Set And Ready To Go

Once all of that is done, we’re all set with setting up a Windows 10 workstation. I’ll take a look at this again in 2019 to see what changes in a year. Perhaps I’ll switch over to using Linux?

Upgrading Jenkins On Azure

I used the Jenkins Azure Marketplace image to create the Jenkins instance. This post assumes you’ve already created a Jenkins virtual machine on Azure and that the initial setup is complete. If not, you can check out some more information within the Microsoft documentation.

SSH into the Virtual Machine

If you’ve been using this instance, you should be familiar with SSH forwarding and being able to access Jenkins using localhost.  If this is new, you can read this Jenkins article on the security setup for the Azure Jenkins Marketplace instance.
Use this command in a command line interface to log in to your virtual machine:

ssh -L 127.0.0.1:8080:localhost:8080 <username>@<domain_name>

Replace the following:

  • Username – the server admin username created when creating the Azure VM.
  • Domain Name – the domain name to access the virtual machine.

You’ll be asked for a password – use the password provided alongside the username when creating the virtual machine.
After logging in, you’ll see output confirming you are connected to the virtual machine.
Access http://localhost:8080 and verify you can reach the Jenkins interface.

Download the Newest WAR

After logging into the server, you’ll use the following command to download the latest version of Jenkins:

sudo wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war

Assuming all is successful, wget will report that the latest jenkins.war has been downloaded.
After downloading the .war file, copy it to the Jenkins directory:

sudo mv jenkins.war /usr/share/jenkins/jenkins.war

(if you need to know the location of the WAR file on the server, you can check it in Manage Jenkins -> System Information)
And finally, with the new .war in place, restart Jenkins:

sudo systemctl restart jenkins

Verification of the installation is simple: access Jenkins and check the bottom right of the page to see the updated version number.

Reference

Upgrade Jenkins server to a new version
https://wiki.jenkins.io/display/JENKINS/Automated+Upgrade

Living In A DevOps World (Part 2: Implementation)

This post was originally featured at DevOps.com.
Part one of this article focused on some of the more behind-the-scenes benefits of an Agile DevOps approach. In part two the focus turns to some of the other traditional problems that a well-executed DevOps approach can address, and how doing so can benefit an organization in more ways than just a technical perspective.
By way of quick review, DevOps was born out of the Lean and Agile software development methodologies when it became clear that, while those methodologies did indeed speed up the development process, a bottleneck still occurred when push came to shove and new code had to be moved to quality assurance and production environments.
DevOps was created to more seamlessly connect the development and operations processes, making code migration, testing and promotion to production a more efficient process. To accomplish this, the DevOps approach had to find solutions for some of the issues that caused operational delays, and create new ways to organize, implement and continuously optimize the operations process.

Overproduction/Overprocessing

For those who have been in development and/or operations for any length of time, it quickly becomes clear that there is a multitude of operational safety checks that serve to protect a production environment. While that is vitally important, an “over” problem had also grown up around many operational procedures, and in many cases it manifested itself in the development process. That includes overproduction: making or requesting more than is needed, from a requirements and/or operations perspective, to clear arbitrary operations process hurdles.
The flip side is overprocessing: development and operations resources doing more work than required (as opposed to just enough, as Lean and Agile would suggest) to smooth the transition of code and functions from development to operations. This created waste in time, resources and budgets that was not proportional to the benefits derived from following the operations process.

Motion and Transportation

Similarly, DevOps also sought to solve the operational problems of both motion and transportation. That is, the amount of excess work required to deliver new code to meet the operational requirements for code migration. The friction caused by such requirements slowed the motion and momentum of the development process. The same is true of transportation, or the difficulty in moving code between environments such as testing, quality assurance and production.

In both cases, development and project momentum was sacrificed for what often turned out to be a series of artificial hurdles that had long since become less effective or even obsolete parts of the operations process.

Correction and Inventory

In most instances, all of the above resulted in the final maladies of the pre-DevOps development and operational ways. The first was the number of in-flight corrections required when timelines were squeezed, and the rush was on to get to production. Unfortunately, this went hand in hand with the ultimate problem of good code being sacrificed for expedient delivery, often resulting in inadequate functionality, system outages and, in the end, lost market opportunity and revenue.

3 Keys to DevOps Success

Any successful DevOps implementation must address three critical factors in this order: culture, organization and tools.

Culture

It’s critically important to connect an organization’s values to the DevOps process. Valuing quality, timeliness and organizational alignment of goals and objectives is the first step toward DevOps success. Such cultural values translate directly into a DevOps organization.
Providing empowerment and accountability to DevOps team members helps to build ownership among the team, and trust from their customers in the rest of the organization. It also helps to provide a physical environment that fosters collaboration, teamwork and continued learning. Co-working spaces and collaboration tools such as Slack are a good start. Attending external conferences to broaden perspectives and to bring new ideas back to the team is often beneficial. From there, brown bag lunch sessions where ideas and experiences can be shared, frequent post-mortems on implementations to hone best practices, and even internal mini-conferences where several departments come together for a day to discuss DevOps practices are all effective ways to build a strong DevOps culture.

Organization

Any good DevOps organization is two-sided; that is, it has to work from the top down and from the bottom up at the same time.
The top-down part is in the ability to “see the system” from a macro level, allowing for process understanding and insights from a business workflow perspective. This helps to identify the pain points and bottlenecks in the current process that can be optimized through the DevOps process.
Once that’s accomplished, the bottom-up work begins. Identifying things such as inconsistencies in code deployment environments that cause delivery issues, elimination of manual and custom built deployment processes and quarantining inefficient and poorly written code until it can be redone or eliminated are all part of optimizing the time, quality, resources and success factors for deploying production systems on schedule. It’s also important here to continually audit the current processes with an eye toward eliminating the processes that are no longer required or useful but have been kept in place out of the fear of  “breaking something we don’t understand.” If nobody understands it, then it shouldn’t be in production software.

Automation Tools

The final factor for DevOps success is to have the right toolset.
Communication: Any DevOps team requires the ability to quickly and directly communicate with other team members sans meetings. For this purpose, tools such as Slack (real-time chat), Skype (video chat), and Confluence (for storing persistent information) are pretty good options.
Planning, Monitoring & Consistency: For the team’s planning needs, a tool such as Trello that can provide Kanban board functionality is worth a look. For issue tracking and monitoring of any system’s overall health, tools such as Jira and NewRelic respectively provide some good functionality. Likewise, consistency is vital in a DevOps world, and using automation to ensure that all systems are configured as desired across different environments is a crucial best practice. For this, a tool such as Ansible is worth a review.
Integration & Deployment: For continuous integration of systems in development, and as a way to tighten the feedback loop for developers to determine whether the central build used for deployment to production is working as intended, the Jenkins toolset might be a good fit. And finally, when it comes to making any deployment process as painless as possible, a tool such as Docker, which creates containers for an application that include all dependencies and reduce the complexity of deployment to multiple environments, is a solid way to go.
The point of all of this is to create an environment—culturally, technically and physically—where DevOps can succeed, grow and thrive. Organizations that can create an effective and efficient DevOps environment have also created a competitive advantage for themselves.

Living In A DevOps World (Part 1: Recognition)

This post was originally featured on DevOps.com.
DevOps has evolved into a conceptual and working model for more effective software implementation. However, there are differences of opinion on the real-world value of any approach to date.  This includes the question of the best way to implement a real-world DevOps environment. This two-part article will focus on what an agile DevOps approach is meant and not meant to address.
DevOps sits at the nexus of three essential business technology functions: software development, quality assurance and operations. A short and concise definition of DevOps proposed in 2015 seems as appropriate as any:

DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into regular production while ensuring high quality.

The definition was suggested in the book, “DevOps: A Software Architect’s Perspective,” and the authors have hit upon the essence of the practice. The key, of course, is how to put that concept into practice.
The first step on the journey to effective DevOps is the recognition that the concept is the result of the rise of the Lean and Agile software development methodologies. Those methodologies, among other things, emphasize the following:

  • A focus on customer value.
  • The elimination of waste.
  • Reduced cycle time (accomplishing work faster, releasing faster).
  • Shared learning.
  • Avoiding batching (don’t do things until required).
  • Theory of constraints (break things up, focus on individual issues).
  • Continuous integration, testing and delivery.
  • Faster time to market.

DevOps in Practice

Something had to be invented to adhere to the principles above; that something was DevOps.  Over time, an effective practice should address any number of business technology pain points. The following short list of those pain points and their respective solutions should prove instructive.

System Downtime

System outages are inevitable as long as systems are designed, tested and implemented—even with increased automation—by imperfect beings. DevOps changes the focus from trying to create applications that never fail to designing systems that can recover quickly.  This decreases aggregate systems outage time over the life cycle of any application or system.

Stagnation

This was a staple of traditional systems development and is most closely associated with the waterfall methodology for systems development. After requirements were created, the development team would be locked away for weeks, months or even years before emerging with “fully” working software.  This software inevitably no longer satisfied rapidly evolving business requirements. DevOps fits hand-in-glove with the Agile practice of short windows of incremental changes instead of long release cycles, providing value to customers as quickly as possible.

Team Conflict

Having been born from the cultural combination of Agile and Lean, DevOps has taken on the problem of functional silos that are often erected between development, operations and the business customers. It follows the methodological approaches of collaboration and teamwork, first to understand what others know and then to leverage the best of it to solve business problems more rapidly. There is also a cultural bent toward experimentation, continual learning and constant improvement. This leads to blameless post-mortems, where collaborative discussion replaces finger pointing and blame.

Knowledge Silos

Functional silos have led to compartmentalized knowledge. If the old game was that knowledge is power, the new game in the DevOps world is that knowledge is freely exchanged as an enabler to solving business problems. DevOps addresses the problem of information being lost in translation between the development and operations functions by eliminating the functional barricades and making knowledge sharing the highest form of collaboration.

Inefficiency

Waiting for things to happen used to be a standard operating procedure in the pre-DevOps world. Project plans were created and managed to account for the time it takes for new code to be moved into a testing, quality or even production environment. This was a momentum killer for projects and at times a morale killer for developers waiting to see what changes they might need to make to their code.
The combined Agile and DevOps approach rewrites the traditional approach to code migration, smoothing and eliminating wait times. Projects can flow more seamlessly from start to finish. This also keeps business resources—testers, approvers, etc.—more engaged as a result of a constant flow of new functions and features to test and use. There is a financial aspect to this as well. Reducing time to market for new functionality, reducing or eliminating idle hands—be they technical or business—and delighting customers with a steady stream of enhancements and features all go directly to an organization’s top and bottom lines.
That, after all, is in many ways what the DevOps approach is all about. All of these critical areas become the means to accomplish it. Part two of this article will focus on some more of the benefits of a DevOps approach, and how to achieve them.

Setting Up a Local WordPress Development Environment

  A local WordPress development environment is useful for:

  • Setting up the website and adding content.
  • Installing plugins to test their functionality.
  • Viewing a functional website without needing a shared environment.
  • Theme or plugin development.

  This guide will walk you through the following steps:

    1. Installing a LAMP Development Environment
    2. Creating a database
    3. Downloading the WordPress core files and setting them up on the server.
    4. Creating a local domain.

    Installing A LAMP Development Environment

    XAMPP is a cross-platform LAMP-style stack (Apache, MySQL/MariaDB, PHP) used for running WordPress on your local machine.  A few platform-specific alternatives are WampServer for Windows and MAMP for OS X.  All of these options provide the same basic functionality for running a local WordPress environment, but I chose XAMPP for the following reasons:

    • Cross-platform compatibility: the setup and configuration stay the same across platforms.
    • Personal preference for the XAMPP Control Panel over the other tools.
    • The use of MariaDB over MySQL**.

    **Note that XAMPP actually uses MariaDB as its database server.  This is different from the standard MySQL database server used for WordPress.  I’ve provided a reference to a few articles at the end of this post if you’re interested in learning more.
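
    If you ever want to confirm which engine your local stack is actually running, you can ask the server directly from phpMyAdmin or any MySQL client; on XAMPP the version string will include the MariaDB name:

    SELECT VERSION();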

    Starting with XAMPP

    When opening XAMPP, you’ll be presented with the Control Panel.
    To get the environment started, start both the Apache and MySQL services.  Check that the server is running correctly by accessing http://localhost on your machine.  If the XAMPP dashboard appears, you’re good to move to the next step.
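
    As a side note, if you prefer a terminal over the Control Panel, XAMPP for Windows also ships small start scripts in its install directory (names can vary slightly by version; the paths below assume the default C:\xampp location):

    C:\xampp\apache_start.bat
    C:\xampp\mysql_start.bat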

    Creating a Database

    The next step is creating or importing a database on your local MySQL server.  This can be done using phpMyAdmin (set up with your XAMPP installation) or by connecting with MySQL Workbench.  I’ll use MySQL Workbench, since it provides a friendlier UI to work with.
    Right-click ‘Schemas’ on the left side to create a database for WordPress to use.  Later, you’ll use this schema name when running the WordPress installer.
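
    If you’d rather create the schema with a query than through the Workbench UI, something like the following works from a Workbench query tab (or the mysql command line).  The name wordpress_local is just an example; use whatever name you plan to give the WordPress installer:

    -- Create an empty schema for WordPress with a UTF-8 character set
    CREATE DATABASE wordpress_local
      CHARACTER SET utf8mb4
      COLLATE utf8mb4_general_ci;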

    Linking/Installing WordPress

    After the server is running and the database is created, the next step is to get the WordPress source code onto the server.  Download the WordPress files and copy them to a directory on your computer, something like:

    C:\code\<your-project-name>

    You’ll likely want to store the source code for the site somewhere other than the /htdocs folder.  This way, you can have a central repository on your machine for all projects.  You can use a symlink to accomplish this.  Open a command line terminal as an admin and run the following:

    mklink /D C:\xampp\htdocs\<your-project-name> C:\code\<your-project-name>

    After this, check that you can access the WordPress installation at http://localhost/<your-project-name>.  Assuming you can see the WordPress installation screen, you have everything ready to get started.  You could proceed from here, but let’s also create a local domain to make accessing the site a little easier.

    Creating a Local Domain

    Creating a local domain will allow you to access your locally running website at an address such as http://local.<your-project-name>.com.  There are two parts:

    1. Editing the Apache server configuration to serve the local domain.
    2. Editing the hosts file to point the new domain from your machine to your server.

    First, open the Apache Virtual Hosts configuration file at C:\xampp\apache\conf\extra\httpd-vhosts.conf and add the following:

    <VirtualHost *:80>
     DocumentRoot "C:\xampp\htdocs\<your-project-name>"
     ServerName local.<your-project-name>.com
     ErrorLog "logs/<your-project-name>.error.log"
     # CustomLog requires a log format (or format nickname) as its second argument
     CustomLog "logs/<your-project-name>.custom.log" combined
    </VirtualHost>

    Next, open the hosts file at C:\Windows\System32\drivers\etc\hosts as an administrator and add the following, using the same domain as the ServerName above:

    127.0.0.1 local.<your-project-name>.com

    Restart Apache using the XAMPP Control Panel and attempt to access the site using the local domain.  If you’re able to reach the WordPress installer screen at the local domain, you’re all set to start the 5-minute installation.
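
    If the local domain doesn’t come up as expected, a quick way to check which virtual hosts Apache actually loaded is to dump its parsed configuration (the path assumes a default XAMPP install):

    C:\xampp\apache\bin\httpd.exe -S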

    Complete the 5-minute WordPress Installation

    Installing WordPress is very simple.  Select the language for the installation, then fill out the database connection details.  You’ll just need to change the database name to the schema created earlier:
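
    As a rough guide, assuming the XAMPP defaults (a root user with an empty password) and an example schema name of wordpress_local, the installer fields would look something like this:

    Database Name: wordpress_local
    Username:      root
    Password:      (empty on a default XAMPP install)
    Database Host: localhost
    Table Prefix:  wp_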
    You should now be all set with a local WordPress environment, useful for plugin development, trying out different themes, and other configuration changes without affecting a live website.

    References

    MySQL Face-off: MySQL or MariaDB?

    Installing WordPress in Azure in 15 Minutes

  Before you begin, a few notes on cost and assumptions:

  • ~$9 a month for a custom domain that does not support SSL.
    • If you’re interested in having SSL support, you’ll need to jump to the B1 level, which costs ~$32 a month.
  • ~$25 a month for the lowest pricing tier of Azure Database for MySQL.
  • This assumes a fresh installation of WordPress, with no existing databases available in Azure.  If you already have a MySQL server set up in Azure, you’ll need to adjust the directions a bit to use that existing instance.

    Step 1: Create a WordPress Resource

    After logging into the Azure portal, click on ‘Create a resource.’  Search for ‘WordPress’ and select the image published by WordPress.
    You’ll be asked to provide some information, including:

    • App Name: a unique name you would like to use for the app.
    • Resource Group: type in a name for the new resource group that will contain these resources.  You can also add them to an existing resource group.
    • App Service plan/Location: select the App Service plan to use for the WordPress instance.  This is what drives the overall cost of the running resources.
    • Database:  To use WordPress, Azure needs to create a MySQL server, which it provides through the ‘Azure Database for MySQL’ service.  Here you’ll set the server admin credentials and the pricing tier.  Azure will also generate a database name for immediate use, which you can change to something friendlier.

    After about 5 minutes, the WordPress instance should be created.  You can test it by accessing the URL https://{your-app-name}.azurewebsites.net.  If you can see the WordPress installation screen, you’re good to go.

    Step 2: Complete WordPress Installation

    Once the database user has been created, run through the standard WordPress installation.  If you’re familiar with this process, you’ll notice the wp-config screen is skipped.  That’s because the file is already generated for you when installing through Azure.
    Although this is the fastest way to get started, there are a few issues that you’ll want to address as you begin development:

    • Setting up a database user that only has access to the newly created database.  This ensures the account WordPress uses has only the rights it needs.  If the wp-config.php file is compromised, you don’t want the server admin credentials exposed.  (A sketch of this follows the list.)
    • Securing the Azure Database for MySQL server to only allow access from particular IP addresses.  The current setup allows any IP address to connect, which is convenient but not very secure.
    • Adding an SSL connection requirement.  This ensures a secure connection to the database server.
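
    As a sketch of that first item, and assuming a database named wordpress plus a hypothetical application user named wp_app, the restricted account could be created with standard MySQL statements along these lines (run against the Azure MySQL server with the admin credentials):

    -- Create an application user and limit it to the WordPress database
    CREATE USER 'wp_app'@'%' IDENTIFIED BY 'use-a-strong-password-here';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX
      ON wordpress.* TO 'wp_app'@'%';
    FLUSH PRIVILEGES;

    You’d then update DB_USER and DB_PASSWORD in wp-config.php to point at this account instead of the server admin.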

    I’ll write more on the above in the upcoming weeks as part of securing your WordPress installation.