Setting up Remote State with Terraform and Azure

Create the Azure Storage Account

The first step is creating a storage account that will host the state information for the entire tenant. Create the following in order:

  1. Resource group (specifically for the Storage Account)
  2. Storage Account
  3. A container/blob for the Storage Account

In terms of working with multiple projects, I personally like to use a separate storage account for each major project being built out. This gives a separate access key for each project.

Setting up Backend Config

First, create a file called backend.tf and populate it with the following:
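
As a minimal sketch, backend.tf can declare an empty azurerm backend, with the actual values supplied at init time from the tfvars file created below:

terraform {
  backend "azurerm" {}
}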

Second, create a file called beconf.tfvars, and populate it with the following:

resource_group_name = ""
storage_account_name = ""
container_name = ""
key = ""
access_key = ""

Once this is done, initialize the backend with the following command:

terraform init -backend-config="beconf.tfvars"

Verify everything is set up with the following:
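
One simple option is listing the state, which should now be read from the Azure backend rather than a local terraform.tfstate file:

terraform state list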

Running Terraform

Once the backend configuration is set, the next step is running Terraform itself. I suggest running it alongside a tfvars file that defines environment-specific information:

terraform apply -var-file="dev.tfvars"
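
For illustration, a dev.tfvars might hold values like the following (the variable names here are hypothetical):

environment = "dev"
location    = "eastus"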

Checking into Source Code

Now that you’re using remote state for Terraform, you don’t have to worry about checking your state file into source control. Just in case you’re doing this on a fresh Terraform setup, add the following to your .gitignore file:
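
A typical set of entries (an example – the key is keeping state files and the secrets-bearing beconf.tfvars out of the repo):

.terraform/
*.tfstate
*.tfstate.backup
beconf.tfvars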

Reference

https://medium.com/@trstringer/use-azure-blob-storage-for-remote-terraform-state-5f809b95215a

Git Repo Best Practices

When creating a repository for a project, here are a few best practices to keep everything in a good working state.

Forcing Pull Request Merges to master

When creating a repo, it’s a good idea to keep the master branch deployable and clean at all times. A good way to do this is to disallow direct commits to the branch. In turn, if you have a CI/CD process in place, you can be more confident that merging into master will result in a safe deployment. In a system using trunk-based development, you may decide to automatically deploy all changes from master into production.

You can do this in GitHub with the following steps:

  1. Go to the repo, and go to ‘Settings’.
  2. Click on ‘Branches’.
  3. Create the rule:

Once this is done, if you try to push a commit directly to master, you’ll see an error like the following (the exact wording varies with the rule settings):
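
remote: error: GH006: Protected branch update failed for refs/heads/master.
remote: error: Changes must be made through a pull request.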

Creating a nopCommerce Plugin in version 3.90 and below

I recently worked with a client using an older version of nopCommerce that needed a plugin built for some custom functionality. In particular, this plugin integrates with updown.io to allow for turning checks on and off during scheduled maintenance.

To do this, you’ll need to have the nopCommerce source code available for use.

Creating the Plugin

To get started, create a Class Library in the Plugins folder:

  • Make sure to change the location to the Plugins folder (the field below the project name).
  • Use the naming convention Nop.Plugin.{category}.{name}
  • The categories you can use include:
    • ExternalAuth
    • Widgets
    • ExchangeRate
    • DiscountRules
    • Payments
    • Shipping
    • Tax
    • Misc

Next, go into the project properties and change the output path for all configurations to the following:
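
Following the standard nopCommerce plugin convention (the plugin name here is hypothetical), that path is:

..\..\Presentation\Nop.Web\Plugins\Misc.UpdownIntegration\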

This will build the output of the plugin into the Web project, which will ultimately deploy to the nopCommerce application.

After that, copy a web.config file from another plugin into the root of the project.

Next, create a Description.txt file with the following content:
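
Something along these lines, matching the fields other 3.x plugins use (the values are placeholders for this updown.io plugin):

Group: Misc
FriendlyName: Updown Integration
SystemName: Misc.UpdownIntegration
Version: 1.00
SupportedVersions: 3.90
Author: <your name>
DisplayOrder: 1
FileName: Nop.Plugin.Misc.UpdownIntegration.dll
Description: Toggles updown.io checks during scheduled maintenance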

Set both the web.config and Description.txt files to ‘Copy if newer’.

Finally, there is a Class.cs file that was created when creating the Class Library. Rename the file to {name}Plugin.cs and use the following code snippet:
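
A minimal sketch of what the class can look like (the plugin name is hypothetical; the key parts are inheriting BasePlugin and implementing the marker interface for your chosen category – IMiscPlugin for Misc):

using Nop.Core.Plugins;
using Nop.Services.Common;

namespace Nop.Plugin.Misc.UpdownIntegration
{
    public class UpdownIntegrationPlugin : BasePlugin, IMiscPlugin
    {
        // Runs when the plugin is installed from the admin backend
        public override void Install()
        {
            base.Install();
        }

        // Runs when the plugin is uninstalled
        public override void Uninstall()
        {
            base.Uninstall();
        }
    }
}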

Once this is done, you should be able to clean, build, and publish the project. Check the plugins list of the admin backend to see your plugin listed:

Next Steps

After the initial plugin is created, next steps include:

  • Creating a logo named logo.png and uploading it to the plugin project. Set the image to ‘Copy if newer’.

Reference: 
http://docs.nopcommerce.com/pages/viewpage.action?pageId=22675509

SSHing into an Azure Kubernetes Service node

First, a few reasons why you might do this:

  • If you’re looking to view and/or retrieve files from the node itself, perhaps generated by a pod.
  • If you want to get more information on the OS running on the node.

Generate an SSH Key

First, generate an SSH key to connect to the server with, noting where you save it:
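
For example:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/aks-node-key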

Add Public SSH Key to Node

Next, add your generated public SSH key to the node, alongside a user:
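
Assuming the node is a standalone VM (not a scale set instance), az vm user update handles both at once. The resource group is the cluster’s node resource group (usually prefixed MC_), and the names below are placeholders:

az vm user update \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-12345678-0 \
  --username azureuser \
  --ssh-key-value "$(cat ~/.ssh/aks-node-key.pub)"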

Setting up to SSH

To actually SSH into the node, there are two ways you can do this:

  1. Open the node via a public IP address and SSH into it directly from your machine. This is the easiest and my preferred way, but it requires the ability to make changes to the network interface of the node.
  2. Create a pod with OpenSSH installed and SSH into the node from within the cluster.

Method 1: Public IP Address

First, open the network interface of the node you’re trying to reach. Go to IP Configurations, and click the ‘ipconfig1’ configuration. Add a public IP:

Next, create an NSG rule for the cluster with the following info:

  • Source: your local IP address (https://canihazip.com/s)
  • Destination: the private IP address of the node.
  • Destination port ranges: 22
  • Protocol: TCP

Method 2: Creating OpenSSH Pod

I’ll fill this out at some point.

SSHing into the Node

After following either of the methods above, the final step is actually SSHing into the node. If you’ve been using PowerShell up to this point, I’d switch to WSL:
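
Using the key and user added earlier (adjust names to yours):

ssh -i ~/.ssh/aks-node-key azureuser@<node-public-ip>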

Clean Up

Once you’ve finished, I recommend:

  • Removing the SSH access rule from the NSG.
  • Removing the public IP address from the AKS node network interface.

Setting up Jenkins in Azure

Before getting started, you’ll need to have:

  • An Azure tenant and subscription.
  • OpenSSH (installation for Windows 10)

Installing Jenkins via Azure Marketplace

The easiest way to install Jenkins is to use the Azure Marketplace link. You’ll likely want to change the size of the VM to something smaller when testing things out – you can always increase the size later.

Accessing and Logging Into Jenkins

Once the deployment completes, access the Jenkins instance at:

http://<DNS-NAME>.<LOCATION>.cloudapp.azure.com

When logging in, you’ll get a screen saying that you need to use SSH port forwarding to connect:

Run the command provided, using the admin username set at VM provisioning. It generally looks something like the following (with your values substituted):
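
ssh -L 127.0.0.1:8080:localhost:8080 <admin-user>@<DNS-NAME>.<LOCATION>.cloudapp.azure.com

Provide the admin password, and you should come to a screen that looks like the following: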

If you get this screen, you’re successfully using SSH port forwarding. Try accessing the Jenkins instance using http://localhost:8080.

Performing Initial Install

You should get to a screen asking you to “Unlock Jenkins”. Run the following command on the server (over your SSH session) to get the code for this screen:
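
The initial admin password lives in the standard Jenkins location:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword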

Next, you’ll get a request to either install suggested plugins or select plugins as desired. I recommend going through and selecting only the plugins you need to keep the installation minimal. Remove anything from the list that you may not need (such as Subversion). You can always add plugins later if you find you need them.

After that, create an admin user for yourself, and you’ll be ready to get started!

Next Steps

After you’ve finished setting up Jenkins, a few next steps would be:

  • Setting up a CI/CD job
  • Setting up Azure AD Authentication
  • Adjusting to allow for HTTPS access

Keeping Track of Apparel in NopCommerce by Size

Setting the Product to Use Attributes For Inventory

  • View the product using the “Advanced” view
  • Set the Inventory Method for the product to be “Track inventory by product attributes”
  • Optionally, set “Display availability” and “Display stock quantity”

Adding Inventory for the Product

  • Go to “Product Attributes” -> “Attribute Combinations”
  • You should see a listing of the different sizes available – if not, click the “Generate all possible combinations” button.
  • Change the stock count for each item as needed:

Setting up a cheap LAMP Server in Azure with HTTPS and Let’s Encrypt

Setting up a LAMP server using a small virtual machine size is a good way to host low-traffic PHP sites (such as those running WordPress) without having to use an expensive App Service. To give some perspective on the cost of using App Services for a small WordPress site:

  • Basic plan (allowing for Always On and SSL): ~$50/month
  • Azure Database for MySQL: ~$25/month

With this solution, you will create everything on one virtual machine, allowing you to scale the machine dynamically as needed. This does of course come with some downsides:

  • You will need to handle backing up of both the files on the server and the data in the database.

Creating the VM

First, create the VM and Resource Group:

  • RG name: <app>-<env>-<location>-rg
  • VM name: <app>-<env>-<location>-vm
  • Image: Ubuntu 18.04 LTS
  • VM size: B1s
  • VNet name: <app>-<env>-<location>-vnet
  • Diagnostics Storage Account: <app><env><location>vmdiag
  • Allow Inbound Port Access for HTTP, HTTPS, SSH
  • Login access through Azure Active Directory

Once the VM is created, access the NSG and add a rule restricting SSH access to your local IP address only:

Access VM and Install LAMP Server

Retrieve the public IP address and SSH into the server:
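
For example, with the admin user created during VM setup:

ssh <admin-user>@<public-ip-address>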

Install LAMP Server:
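
On Ubuntu, the tasksel meta-package pulls in Apache, MySQL, and PHP in one go (note the trailing caret):

sudo apt update
sudo apt install lamp-server^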

To ensure the installation happened successfully, run the following commands:
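
Checking each component’s version is a quick way to do this:

apache2 -v
mysql --version
php -v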

Once LAMP server is installed, verify that you can connect to HTTP using the public IP address – you should see the Apache2 Ubuntu Default Page:


Set up MySQL

Once the web server is running, the next step is configuring MySQL. Run the following command, installing the Validate Password Plugin and using the “Medium” policy:
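
That is the standard hardening script:

sudo mysql_secure_installation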

When prompted, use medium strength, default yes to all options except “Disallow root login remotely?” (answer no there, since you’ll be connecting remotely), and generate a strong password.

The next step is configuring access to MySQL from external machines (such as from a VPN). This assumes you’ll be using the Azure NSG to restrict access to the desired IP addresses.

Run a query to allow access:
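
A sketch of what that can look like – the user name is a placeholder, and you may prefer to scope the grant to just the WordPress database instead of *.*:

CREATE USER 'dbadmin'@'%' IDENTIFIED BY '<strong-password>';
GRANT ALL PRIVILEGES ON *.* TO 'dbadmin'@'%';
FLUSH PRIVILEGES;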

Edit MySQL configuration:
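
On Ubuntu 18.04 the file is typically /etc/mysql/mysql.conf.d/mysqld.cnf:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf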

Comment out the line that says ‘bind-address’.

After making that change, restart MySQL:
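
sudo systemctl restart mysql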

Finally, create an NSG rule that allows for external access to port 3306:

Once the installation is done, let’s verify that the MySQL server can be accessed. I usually use MySQL Workbench and connect to the server using the following information:

  • Hostname: public IP

After MySQL is set up, create any databases that may be needed. In this case, that’s the WordPress database, whether you’re starting fresh or migrating from an old instance.

If running into an issue with packet size, run the following command in MySQL and restart:
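
For example, to raise the limit to 256 MB (pick a size that fits your import; to make it survive restarts, also set max_allowed_packet = 256M under [mysqld] in mysqld.cnf):

SET GLOBAL max_allowed_packet = 268435456;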

Setting up SSL

Once the web server can be reached and LAMP is installed, the next step is securing the site using SSL. Run the following to enable SSL, enable the default SSL site, and restart Apache:
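
With the stock Apache layout on Ubuntu, that amounts to:

sudo a2enmod ssl
sudo a2ensite default-ssl
sudo systemctl restart apache2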

Once that’s done, access the public IP using HTTPS – you should get an insecure cert warning.

Now that we’ve determined the port is listening, let’s set up Let’s Encrypt. Using CertBot usually makes this much easier. Since we’re using Apache on Ubuntu 18.04, we just need to populate those values on the CertBot site and run the commands it provides:
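
Those came out to roughly the following in the Ubuntu 18.04 era:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-apache
sudo certbot --apache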

With these commands, you’ll also need to set up DNS for the domain to use. With the public IP address, create the following:

  • Create an A record with the host @ pointing to the web server’s public IP address.

After this finishes, enable the redirect to HTTPS when the CertBot plugin offers it.

Installing WordPress

After finishing setting up the LAMP server, the next step is installing WordPress. Before doing the famous 5-minute install, run the following:
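
One common approach (a sketch – adjust paths to your setup) is pulling WordPress into the Apache web root and handing ownership to www-data:

cd /tmp
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
sudo cp -r wordpress/. /var/www/html/
sudo chown -R www-data:www-data /var/www/html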


Setting up WordPress to force SSL in the Admin Area

Recently, I ran into an issue when trying to migrate an existing WordPress website to start using HTTPS, especially in the admin area. Even after migrating everything in the database over to use HTTPS, I ran into a 302 redirect issue when trying to log in.

NOTE: make sure an SSL certificate is configured for the site, or you’ll receive an invalid certificate error.

Add the following to your wp-config.php file at the very top (if you don’t add it at the top, you’ll run into yet another issue where your users will not have permission to access anything in the admin area):
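
Per the codex page referenced below, the key line is FORCE_SSL_ADMIN; the X-Forwarded-Proto check applies if the site sits behind a reverse proxy or load balancer:

define('FORCE_SSL_ADMIN', true);

// if behind a reverse proxy, detect HTTPS from the forwarded header
if (strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}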

With this in place, whenever you go into the administration area, you’ll always be redirected to HTTPS.

Reference: https://codex.wordpress.org/Administration_Over_SSL

Create a Java Web App with Spring Boot Starter, Gradle, and Azure Active Directory

Working on a project that would eventually use Azure Active Directory for authentication, I wanted to use Microsoft’s guide to get started with an app that could authenticate. However, the guide is written using Maven – and in our case, we are using Gradle for builds across the ecosystem.

I was able to mostly follow the guide provided, but ran into a gotcha: having to add an extra dependency to get the system working. In addition, I added a secondary API endpoint to allow for testing both with and without group authentication.

To get started, you’ll need the following (roughly the prerequisites from Microsoft’s guide, adapted for Gradle):
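
  • An Azure subscription with rights to create an Azure Active Directory app registration
  • JDK 8 or above, plus a Java IDE
  • Gradle (or just the wrapper that Spring Initializr generates)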

Create an App With Spring Initializr

To get started, create an app at https://start.spring.io/ with the following information:

  • Project: Gradle
  • Project Metadata Artifact: oauth2-demo-gradle
  • Dependencies: Web, Security, Azure Active Directory

After clicking ‘Generate Project’, a ZIP file will be created. Extract this file somewhere on your local machine, as you’ll be configuring it later.

Set Up Azure Active Directory

After the app is created, the next step is setting up Azure Active Directory for authentication. Follow the Create Azure Active Directory instance section, taking note of the App Registration information provided.

Once you get to the section involving configuration of the app, come back to this guide – you’ll make specific changes to make the testing experience both compatible with Gradle and a little more user-friendly.

Configure and Compile the App

Once Azure Active Directory is configured, open your Java IDE and import the build.gradle file as a project. After that’s done, there are a few files to configure/create.

Add the following dependencies to the build.gradle file (notice the third dependency – if it is not added, you’ll run into an exception when testing):
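
A sketch of the dependencies block – the first two come from the Initializr selections, and the third is the Azure AD starter. The version is intentionally a placeholder; check Maven Central for the one matching your Spring Boot version:

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    implementation 'com.microsoft.azure:azure-active-directory-spring-boot-starter:<version>'
}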

Edit /src/main/resources/application.properties to look like the following:
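
The shape of the file, using values from your App Registration (all placeholders; property names varied between versions of the starter, so check the version you pulled in):

# App Registration credentials
spring.security.oauth2.client.registration.azure.client-id=<client-id>
spring.security.oauth2.client.registration.azure.client-secret=<client-secret>

# Tenant plus the AD group(s) allowed on the secured endpoint
azure.activedirectory.tenant-id=<tenant-id>
azure.activedirectory.user-group.allowed-groups=group1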

Create a file called /controller/HelloController.java and add the following:
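
A minimal sketch – one open endpoint and one restricted to an AD group (the package and group name, group1, are placeholders matching whatever you configured above):

package com.example.oauth2demogradle.controller;

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    // Any authenticated user can hit this endpoint
    @GetMapping("/")
    public String hello() {
        return "Hello, logged-in user!";
    }

    // Only members of the configured AD group can hit this one
    @GetMapping("/secure")
    @PreAuthorize("hasRole('ROLE_group1')")
    public String secure() {
        return "Hello from the secured endpoint!";
    }
}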

Create another file called security/WebSecurityConfig.java and add the following:
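
Again a sketch – the key pieces are enabling method-level security (so @PreAuthorize works) and requiring OAuth2 login for everything:

package com.example.oauth2demogradle.security;

import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .oauth2Login();
    }
}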

Build and Verify App

Now that the configuration is complete, the next step is building and testing. Run the following Gradle commands (an easy way to do this is through the IDE):
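
./gradlew build
./gradlew bootRun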

You should end up with a log like the following:

Once this is confirmed, access http://localhost:8080/. You should receive a request to log in to Azure services. Log in as the user created in the step above, and you should see the following:

Assuming the general API is working, now confirm that the secure endpoint is also working. This endpoint will verify that the logged-in user is part of a verified group.

Reference: https://docs.microsoft.com/en-us/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-azure-active-directory?view=azure-java-stable

Delete Recovery Containers in Azure

While doing some clean-up of a collection of Azure resources, I ran into difficulty trying to delete Azure Site Recovery resources. Specifically, when trying to delete the Recovery Services vault, I’d receive this ‘Vault deletion error’:

Vault deletion error message

Furthermore, when trying to use the portal to delete the existing containers, I would end up with a delay in the actual deletion of the containers. I ended up looking through the Azure CLI docs and finding a way to programmatically delete the containers.

First, find the subscription ID of your Azure tenant, and use the following command:
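
Assuming the standard CLI flow, that means pointing the CLI at that subscription (the ID is a placeholder):

az account set --subscription "<subscription-id>"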

Next, connect to the Azure Recovery Vault using the Resource Group name and Vault name:
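
With the CLI there’s no persistent connection as such – you reference the vault on each call – but confirming it resolves is a reasonable stand-in (names are placeholders):

az backup vault show --resource-group "<rg-name>" --name "<vault-name>"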

Finally, you can run the following command to delete a recovery container from the vault. Use this for each of the containers:
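
For registered backup containers, az backup container unregister is the relevant command (shown here as an assumption of the approach; list the containers first to get their names):

az backup container list --resource-group "<rg-name>" --vault-name "<vault-name>" --backup-management-type AzureIaasVM
az backup container unregister --resource-group "<rg-name>" --vault-name "<vault-name>" --container-name "<container-name>" --backup-management-type AzureIaasVM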

After deleting all the containers, you should be able to delete the Recovery vault without any issues.