Viewing Traffic to Azure VMs using NSG Flow Logs

Setting up NSG flow logs allows you to view the traffic passing through a network security group. This can be useful for a few things:

  • Troubleshooting access issues (maybe something shouldn’t have access, or vice versa).
  • Providing logging on the traffic accessing a server.

You’ll need the following to get started with this guide:

  • A Network Watcher configured in the NSG’s region.
  • An Azure subscription with the Microsoft.Insights resource provider registered.
  • An existing network security group.
  • A storage account (ideally in the same resource group) that will hold the log data.
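If the Microsoft.Insights provider isn’t registered on the subscription yet, it can be registered through the Azure CLI (this is a sketch; run it against the subscription holding the NSG):

```shell
# Registers the Microsoft.Insights resource provider, required for flow logs
az provider register --namespace Microsoft.Insights
```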


Go into Network Watcher and click on ‘NSG Flow Logs’:

Turn on Flow logs, and select the storage account to store logs in. A few notes here:

  • If retention is kept at 0, all logs will stay in the storage account forever. Useful for audits, but it will end up costing more in the long run. (I personally set it to 7 days.)

Accessing Logs

To view the logs, you can use either the Azure Portal or the Microsoft Azure Storage Explorer.

Open the insights-logs-networksecuritygroupflowevent container in the configured storage account.

Navigate down to the PT1H.json file for the hour you want to inspect.
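The logs are partitioned by NSG and by hour, so the full blob path will look something like the following (the IDs in braces are placeholders):

```
resourceId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y=2018/m=11/d=13/h=14/m=00/PT1H.json
```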

Each flowTuples entry in the JSON is a comma-separated string; the fields, in order, are:

  1. Timestamp (Unix epoch)
  2. Source IP (for inbound traffic, the address coming in from the Internet)
  3. Destination IP (the address behind the NSG)
  4. Source Port
  5. Destination Port
  6. Protocol (T – TCP, U – UDP)
  7. Traffic Flow (I – Inbound, O – Outbound)
  8. Decision (A – Allowed, D – Denied)
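As a quick sketch, a single flow tuple (the values below are made up) can be pulled apart with cut; fields 7 and 8 give the direction and the allow/deny decision:

```shell
# A made-up flow tuple in the comma-separated format described above
tuple="1542110377,203.0.113.7,10.0.0.4,55960,443,T,I,A"
# Fields 7 and 8: traffic flow (I/O) and decision (A/D)
echo "$tuple" | cut -d',' -f7,8
```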


Adding SSL to an AKS Ingress Controller (without using Let’s Encrypt)

I previously wrote about adding HTTPS to an AKS cluster using Let’s Encrypt, but recently ran into a case where I needed to add a cert from a specific CA to the cluster.

To do this, you need the following:

  • An AKS cluster deployed in an Azure tenant.
  • A certificate (should start with -----BEGIN CERTIFICATE-----)
  • The private key associated with the certificate above (used when creating the CSR for the cert; it will start with -----BEGIN RSA PRIVATE KEY-----)

Import the cert into the cluster:

kubectl create secret tls tls-ca-secret --key certificate.key --cert certificate.crt
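Before creating the secret, it’s worth confirming the cert and key are actually a pair. A minimal sketch – a throwaway self-signed pair is generated here so the commands are runnable as-is; point the last two commands at your real certificate.crt and certificate.key:

```shell
# Generate a throwaway self-signed pair purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
  -keyout certificate.key -out certificate.crt 2>/dev/null
# The two modulus digests must match; a mismatch means the key
# does not belong to the certificate
openssl x509 -in certificate.crt -noout -modulus | openssl md5
openssl rsa -in certificate.key -noout -modulus | openssl md5
```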

After the cert is imported, create an Ingress resource that references the secret:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: '10m'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
    - hosts:
        - YOUR_DOMAIN
      secretName: tls-ca-secret
  rules:
    - host: YOUR_DOMAIN
      http:
        paths:
          - path: /some/endpoint
            backend:
              serviceName: some-service
              servicePort: 80

Afterwards, check to ensure your cert is coming through using the endpoint defined in the Ingress Controller.
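One way to check, with YOUR_DOMAIN being the host from the Ingress above:

```shell
# Show the subject, issuer, and validity window of the cert being served
echo | openssl s_client -connect YOUR_DOMAIN:443 -servername YOUR_DOMAIN 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

The issuer should be your CA rather than the default self-signed certificate from the ingress controller.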

Fixing “unable to get credential storage lock: File exists” when Deploying Function App via Jenkins

When working with deploying Azure Function Apps with Jenkins, I ran into an issue when trying to rebuild a Function App from scratch with the same name. I was unable to deploy the codebase via Pipeline due to the following error:

unable to get credential storage lock: File exists

I was able to fix this by doing the following:

SSH into the server.

Log in as the user that Jenkins jobs run as (for example):

sudo su -s /bin/bash jenkins

Open the user’s .git-credentials file, and remove the reference to the pre-existing Function App SCM.

Delete the .git-credentials.lock file.

After doing this, try running the job again and ensure the issue has been solved.
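The cleanup amounts to filtering one line out of the credentials file. A sketch on a scratch copy (the Function App SCM hostname is a hypothetical example; on the server the file is ~/.git-credentials for the jenkins user):

```shell
# Scratch copy standing in for the jenkins user's ~/.git-credentials
printf '%s\n' \
  'https://user:token@yourfunctionapp.scm.azurewebsites.net' \
  'https://user:token@github.com' > git-credentials
# Drop the stale Function App SCM entry, then remove the lock file if present
sed -i '/yourfunctionapp.scm.azurewebsites.net/d' git-credentials
rm -f git-credentials.lock
cat git-credentials
```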

Setting up a Configuration Page for Your 3.90 or Below NopCommerce Plugin

After writing your NopCommerce plugin, a common next step is to create a ‘Configure’ page to allow for configuring the settings of the plugin.

First, create a model that will represent the values to configure in the plugin – Models/YourPluginModel.cs

Next, create a controller Controllers/YourPluginController.cs, which will look something like this:

public class YourPluginController : BasePluginController
{
        private readonly ILocalizationService _localizationService;

        public YourPluginController(ILocalizationService localizationService)
        {
            _localizationService = localizationService;
        }

        public ActionResult Configure()
        {
            var model = new YourPluginModel();
            return View("~/Plugins/Misc.YourPlugin/Views/YourPlugin/Configure.cshtml", model);
        }

        [HttpPost]
        public ActionResult Configure(YourPluginModel model)
        {
            if (!ModelState.IsValid)
                return Configure();

            // Save the submitted settings here.
            return Configure();
        }
}

Finally, create a view Views/YourPlugin/Configure.cshtml:

@{
    Layout = "";
}
@using Nop.Web.Framework
@model YourPluginModel

@using (Html.BeginForm())
{
    <div class="panel-group">
        <div class="panel panel-default">
            <div class="panel-body">
                <div class="form-group">
                    <div class="col-md-3">
                        @* Label for your setting goes here *@
                    </div>
                    <div class="col-md-9">
                        <input type="submit" name="save" class="btn bg-blue" value="@T("Admin.Common.Save")" />
                    </div>
                </div>
            </div>
        </div>
    </div>
}

Make sure the newly created view’s ‘Copy to Output Directory’ property is set to ‘Copy if newer’.

Now when running your store, you should be able to enter the Configure page for your plugin, assuming the plugin is installed:

Setting up Jenkins to Run Angular Unit Tests

To be able to run Angular unit tests on a Linux-based Jenkins instance, Karma needs a browser available. SSH into the Jenkins instance, then download and install Google Chrome:

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

When installing, you may run into a dependency issue; if so, run:

sudo apt-get install -f
sudo dpkg -i google-chrome-stable_current_amd64.deb

After Google Chrome is installed, you should be able to run npm test, which means Jenkins can run your unit tests as part of the CI process.

Suggested Jenkins Plugins

Here’s a list of Jenkins plugins I tend to use frequently.

Global Slack Notification Plugin

Allows for sending messages via pipeline to Slack channels.

Azure AD Plugin

Allows for authentication to Jenkins using Azure Active Directory.


Checkmarx Plugin

If you are using Checkmarx to scan for vulnerabilities in your codebases, this plugin will allow for connecting to a Checkmarx server automatically to generate a report.

After installing, make sure to set up a server to allow for connection in Configure System -> Checkmarx:

If you need to add the Checkmarx step to the pipeline, this can be generated using the Pipeline Syntax feature.


NodeJS Plugin

This allows for using npm steps in your project, great for running processes for JavaScript-based projects.

After installing, make sure to set up a NodeJS installation in Global Tool Configuration:

Build with Parameters Plugin

If you have a job that takes a series of parameters to run (such as a job that performs a deployment), this plugin will allow for programmatically filling in the parameters via a GET call.

Once installed, you can call any job with parameters like such:
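For example (the job and parameter names here are hypothetical, and the exact endpoint may vary by plugin version):

```
https://YOUR_JENKINS_HOST/job/deploy-app/parambuild/?Env=uat&Service=api
```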


Windows Azure Storage Plugin

This plugin allows for programmatic interaction with different Azure storage accounts, allowing for both uploading and downloading.

After installing, you’ll need to set up credentials for use:

To use in pipeline, the syntax will look like:

azureUpload blobProperties: [
        cacheControl: '',
        contentEncoding: '',
        contentLanguage: '',
        contentType: '',
        detectContentType: true],
    containerName: 'ui-dist',
    fileShareName: '',
    filesPath: 'dist/**',
    storageCredentialId: 'YOUR-SP',
    storageType: 'blobstorage'

Active Choices Plugin

This allows for more dynamic functionality with parameters (selecting one parameter allows a set of options for the other).

Setting this up in a Pipeline takes some work; you’ll need to declare the parameters in the following format (the choice values returned by the scripts below are illustrative):

properties([
    parameters([
        [$class: 'ChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            description: 'Select the context from the Dropdown List',
            filterLength: 1,
            filterable: true,
            name: 'context',
            randomName: 'choice-parameter-5631314439613978',
            script: [
                $class: 'GroovyScript',
                fallbackScript: [
                    classpath: [],
                    sandbox: false,
                    script: 'return [\'Could not get Env\']'
                ],
                script: [
                    classpath: [],
                    sandbox: false,
                    script: 'return [\'dev\', \'uat\']'
                ]
            ]
        ],
        [$class: 'CascadeChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            description: 'Select the Server from the Dropdown List',
            filterLength: 1,
            filterable: true,
            name: 'service',
            randomName: 'choice-parameter-5631314456178619',
            referencedParameters: 'context',
            script: [
                $class: 'GroovyScript',
                fallbackScript: [
                    classpath: [],
                    sandbox: false,
                    script: 'return [\'Could not get Environment from Env Param\']'
                ],
                script: [
                    classpath: [],
                    sandbox: false,
                    script: '''
                        if (context.equals("uat")) {
                            return ['uat-server']
                        }
                        else if (context.equals("dev")) {
                            return ['dev-server']
                        }
                    '''
                ]
            ]
        ]
    ])
])

Copying Jenkins Jobs from Server to Server

In the case where you want to copy a collection of jobs from one Jenkins server to another, here’s a process you can use for the migration. This guide assumes both the source and destination are remote Jenkins instances.

SSH into the source server to determine where the jobs directory is for Jenkins – typically /var/lib/jenkins/jobs.

Use scp (on your local machine) to copy the jobs from the remote Jenkins instance with the jobs:

scp -r USER@DNS:/var/lib/jenkins/jobs C:\Users\d\Downloads\jobs

After everything is downloaded, upload the files to the new server (you will need to upload them to the home directory of the user, and then move them via SSH):

scp -r C:\Users\d\Downloads\jobs DEST_USER@DEST_DNS:~

After that completes, SSH into the server again and move the files:

sudo mv jobs /var/lib/jenkins

Also, set the ownership of the jobs directory over to the Jenkins service user:

sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs

After this is complete, go into Manage Jenkins and select ‘Reload Configuration from Disk’. Ensure that all of your previous jobs are in place.

Setting up Jenkins with Azure AD Authentication

Configuring Azure AD

Run the following command in the Azure CLI to generate a service principal:

az ad sp create-for-rbac --name="{NAME}" --role="Contributor" --scope="/subscriptions/{SUBSCRIPTION_ID}" --years=100

Save the output generated, as you’ll use it for configuration in Jenkins.

Create a Redirect URL of https://YOURHOST/securityRealm/finishLogin in the app registration.

Set Required Permissions in Azure Active Directory to:

  • Application Permissions (Read Directory Data)
  • Delegated Permissions (Read Directory Data)

Click on ‘Grant permissions’.

If planning to use an Azure AD group for authorization, create one now.

Configuring Jenkins

Download the ‘Azure AD’ plugin, and restart after installation.

Go to Manage Jenkins → Configure Global Security.

Select ‘Enable Security’ if it isn’t already selected.

Under ‘Security Realm’, select ‘Azure Active Directory’, and fill the information:

  • Client ID – ‘appId’
  • Client Secret – ‘password’
  • Tenant – ‘tenant’

Use the ‘Verify Application’ button to confirm the configuration.

Set Authorization to ‘Azure Active Directory Matrix-based security’.

Set the Group to be the newly created one, and assign the appropriate permissions.

Verify by logging out and logging back in as an Azure AD user.


If you accidentally lock yourself out after enabling Azure AD, do the following:

SSH into the server.

Modify the config.xml file:

sudo nano /var/lib/jenkins/config.xml
  • For the useSecurity element, change the value to false.
  • Remove the authorizationStrategy and securityRealm sections.

Restart Jenkins:

sudo systemctl restart jenkins

Jenkins is now completely unprotected, so pick back up on whatever security strategy you were working on.

Upgrading Ubuntu

To upgrade Ubuntu, use the following procedure:

Upgrade all of your current packages:

sudo apt update && sudo apt upgrade

Upgrade packages whose dependencies have changed, and remove packages that are no longer needed:

sudo apt dist-upgrade && sudo apt-get autoremove

Perform the OS upgrade:

sudo apt install update-manager-core && sudo do-release-upgrade
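It’s worth noting which release you’re on before starting (and checking again afterwards to confirm the upgrade took):

```shell
# Print the distribution name and version before/after the upgrade
grep -E '^(NAME|VERSION)=' /etc/os-release
```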

Setting up WordPress to serve next-gen WebP images

Why do this?

If you are looking to improve performance on your website, this will automatically convert the images on your site to the more efficient WebP format. This will help with Google Lighthouse scores (especially in solving the “Serve images in next-gen format” issue).


Install the WebP Express Plugin (and buy the developer a coffee!)

Go into the WebP Express settings, accept the defaults, and click ‘Save settings’.

Verify this is working by running Google’s Lighthouse audit. You should no longer see the ‘Serve images in next-gen formats’ item in your listing.

Installing Dependencies

Depending on the server configuration you have, you may need to do more work after installing the WebP Express Plugin. If the plugin reports errors converting images, you’ll need to install the missing dependencies.

SSH into the server and install the PHP GD extension (matching your PHP version):

sudo apt-get install php7.2-gd

In addition, you may need to install mod_headers:

sudo a2enmod headers
sudo systemctl restart apache2
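To spot-check the result, you can request an image while advertising WebP support; the returned content type should be image/webp (the URL below is a placeholder):

```shell
curl -s -o /dev/null -w '%{content_type}\n' \
  -H 'Accept: image/webp' https://YOUR_SITE/wp-content/uploads/example.jpg
```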