Creating a Function App With a Full CI/CD Pipeline with VSCode and Jenkins

Before starting this, you’ll need to have a few things on your machine:

  • Azure Functions VSCode Extension
  • Azure Functions Core Tools (choco install azure-functions-core-tools)
  • .NET Core Build Tools (choco install visualstudio2017-workload-netcorebuildtools)
  • An Azure subscription
  • A Function app created inside of the Azure subscription
  • A Jenkins server with the following set up:
    • Azure Function plugin installed
    • A service principal configured (use az ad sp create-for-rbac -n "jenkins" --role contributor --scopes /subscriptions/{SubID} and then add to Jenkins Credentials)

Creating and Locally Running the Function App

Inside VSCode, create a function app project with the following:

  • Create a new folder for use
  • Language: C#
  • Template: HttpTrigger
  • Security: Anonymous

After creating, you’ll need to resolve a few dependencies, which VSCode should prompt for.

I ran into issues getting the function app to debug locally, but I was able to run it without issue by running the following in PowerShell:

dotnet clean
dotnet build
func host start

Once you get it running, you can use the URL provided to test the function in a browser.

Locally Running a TimerTrigger Function

To use a TimerTrigger function, there are just a few changes:

  • You will need to have a Storage Account available to allow for running locally.
  • When running, you can invoke the function by making a POST to http://localhost:7071/admin/functions/{FUNCTION_NAME} (make sure to provide an empty JSON body in the payload)
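The admin invocation above can be sketched as a small shell snippet (the function name here is a placeholder for your own TimerTrigger's name):

```shell
# Placeholder name; substitute the name of your TimerTrigger function.
FUNCTION_NAME="TimerExample"
URL="http://localhost:7071/admin/functions/${FUNCTION_NAME}"

# With `func host start` running, trigger the function with an empty JSON body:
#   curl -X POST "$URL" -H "Content-Type: application/json" -d "{}"
echo "$URL"
```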

Adding to Git and Deploying via Jenkins

The next step is checking in the example code to Git, so you have a place to get the codebase from for deployment.

After checking in the codebase, create a Multibranch Pipeline project in Jenkins.

Use the following Jenkinsfile as a reference:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'dotnet clean'
        sh 'dotnet build'
      }
    }
    stage('Deploy to Function App') {
      when { branch 'master' }
      steps {
        azureFunctionAppPublish appName: "fa-poc-123",
          azureCredentialsId: 'jenkins-sp',
          resourceGroup: "fa-poc-ue-rg",
          sourceDirectory: '',
          targetDirectory: '',
          filePath: ''
      }
    }
  }
}

After that, push your changes, and you should have a deployed Function App with the codebase provided. Verify it using the Function App URL.
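As a quick sanity check, you can hit the deployed endpoint directly. A sketch follows: the app name matches the Jenkinsfile above, while the `/api/HttpTrigger` route is an assumption based on the default template function.

```shell
# App name from the Jenkinsfile; the function route is an assumption from the template.
APP_URL="https://fa-poc-123.azurewebsites.net/api/HttpTrigger?name=World"

# Run this once the pipeline finishes:
#   curl -s "$APP_URL"
echo "$APP_URL"
```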

Copying a Database in Azure with Always Encrypted Data

When copying a database that contains Always Encrypted data (say, to a different environment), you’ll generally want to rotate the Column Master Key so it points to a key in the vault stored in the destination Azure resource group. This takes a little bit of work to do:


You’ll need to have the following software installed:

  • SSMS
  • Azure CLI

You’ll also need to check which Column Master Key the database references. Run the following query on your newly copied database:

select * from sys.column_master_keys

And then check to see if the key exists in the appropriate vault:

az keyvault key show --id KEY_PATH

If it exists, you’ll be able to copy the database over without issue.


If the key doesn’t exist, create a new one in the destination Key Vault:

az keyvault key create --name Always-Encrypted-Auto1 --vault-name VAULT_NAME

Next, create a new Column Master Key, using the created key above.
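SSMS can generate this step for you, but the underlying T-SQL looks roughly like the following (the CMK name is a placeholder, and the vault name and key version segment are assumptions based on the key created above):

```sql
-- Sketch: a Column Master Key backed by the Key Vault key created above.
-- VAULT_NAME and KEY_VERSION are placeholders.
CREATE COLUMN MASTER KEY [CMK-Destination]
WITH (
    KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
    KEY_PATH = N'https://VAULT_NAME.vault.azure.net/keys/Always-Encrypted-Auto1/KEY_VERSION'
);
```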

With two CMKs, rotate the initial CMK (using credentials from the source key vault, and then the destination key vault).

Finally, once rotation is complete, clean up by deleting the previous CMK.

Setting up HTTPS on an AKS Cluster


The following is required:

  • An ingress controller should already be installed.
  • The public IP of the Ingress controller should have a DNS name.
  • Helm needs to be running at 2.13.1.
  • The Kubernetes cluster should be publicly accessible (to allow cert creation)

Helm needs to be running at version 2.13.1 (there is a bug that prevents 2.14+ from working). To check, run the following:

helm version

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

If the Client version above isn’t 2.13.1, downgrade Helm:

# only needed if Client above is not 2.13.1
choco uninstall kubernetes-helm
choco install kubernetes-helm --version 2.13.1

helm init --upgrade --force-upgrade


Run the following commands:

# Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install --name cert-manager --namespace cert-manager --version v0.8.0 jetstack/cert-manager

Create the following YAML file cluster-issuer.yml (replace YOUR_EMAIL with the email to register with Let’s Encrypt):

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: ingress-nginx
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: YOUR_EMAIL
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

Apply the changes to the cluster:

kubectl apply -f cluster-issuer.yml

Create the certificate.yml file (replace YOUR_DNS_NAME with the DNS name assigned to your ingress controller’s public IP):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
  namespace: ingress-nginx
spec:
  secretName: tls-secret-staging
  dnsNames:
    - YOUR_DNS_NAME
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - YOUR_DNS_NAME
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

Apply those changes:

kubectl apply -f certificate.yml


Confirm creation of the certificate:

kubectl describe certificate tls-secret --namespace ingress-nginx

You should see events like the following:

Normal   OrderCreated        27s                cert-manager  Created Order resource "tls-secret-3300974441"
Normal   CertIssued          3s (x2 over 20m)   cert-manager  Certificate issued successfully
Normal   OrderComplete       3s                 cert-manager  Order "tls-secret-3300974441" completed successfully

Verify HTTPS can be accessed.
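A quick way to verify is to request the site over HTTPS (the DNS name below is a placeholder for the name on your ingress IP; with the staging issuer, expect an untrusted certificate, hence the -k flag):

```shell
# Placeholder; use the DNS name assigned to your ingress controller's public IP.
DNS_NAME="myapp.eastus.cloudapp.azure.com"

# Inspect the served certificate once DNS resolves:
#   curl -vkI "https://${DNS_NAME}"
echo "https://${DNS_NAME}"
```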

Applying IP Restrictions to a Large Set of Azure Resources

To do this, use PowerShell and the Azure CLI. First, set the subscription and get all of the NSGs in it:

az account set -s <SUB_ID>
$nsgs = az network nsg list | ConvertFrom-Json

Then go through every NSG and create the rule:

$nsgs | ForEach-Object -Process { az network nsg rule create --name NAME --nsg-name $_.name --priority PRIORITY --resource-group $_.resourceGroup --protocol Tcp --destination-port-ranges 80 443 --source-address-prefixes IP_ADDRESSES }

Next, get a list of the App Services:

$webapps = az webapp list | ConvertFrom-Json

And go through and add the list of IPs required (must use individual IPs):

$webapps | ForEach-Object -Process { $WebAppConfig = Get-AzResource -ResourceName "$($_.name)/web" -ResourceType Microsoft.Web/sites/config -ResourceGroupName $_.resourceGroup -ApiVersion 2018-11-01; $WebAppConfig.Properties.ipSecurityRestrictions = @([PSCustomObject]@{ ipAddress = '' }, [PSCustomObject]@{ ipAddress = '' }); Set-AzResource -ResourceId $WebAppConfig.ResourceId -Properties $WebAppConfig.Properties -ApiVersion 2018-11-01 }
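For reference, each entry in ipSecurityRestrictions is an object shaped roughly like this (the addresses, priorities, and names here are illustrative placeholders):

```json
[
  { "ipAddress": "203.0.113.10/32", "action": "Allow", "priority": 100, "name": "AllowOffice" },
  { "ipAddress": "203.0.113.11/32", "action": "Allow", "priority": 110, "name": "AllowVpn" }
]
```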

Removing Access

To delete the same list of rules from the NSGs, use the same name:

$nsgs | ForEach-Object -Process { az network nsg rule delete -g $_.resourceGroup --nsg-name $_.name -n NAME }


Creating a Buy One Get One Half Off Discount In NopCommerce

I recently worked with a client on trying to set up a Buy One Get One Half Off deal on NopCommerce – here’s what I did to get it working:


Before starting, you should have the following:


Go to the Discounts page and create a new Discount with the following information:

Next, go to the ‘Restrictions’ tab and add a restriction for the Product, using the ‘Add Product’ functionality. Make sure you add the :2 to the end to force purchase of two:

Deploying web.config with an Angular Project

When deploying an Angular project out to Azure, you’ll need to include a web.config file to allow for things such as the following:

  • Getting routing to work.
  • Serving static content.

First, create a web.config file in src/. Here’s an example of what it might look like:

<?xml version="1.0"?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Angular Routing" stopProcessing="true">
                    <match url=".*" />
                    <conditions logicalGrouping="MatchAll">
                        <add input="{REQUEST_FILENAME}" matchType="IsFile"
                             negate="true" />
                        <add input="{REQUEST_FILENAME}" matchType="IsDirectory"
                             negate="true" />
                    </conditions>
                    <action type="Rewrite" url="/" />
                </rule>
            </rules>
        </rewrite>
        <staticContent>
            <mimeMap fileExtension="woff" mimeType="application/font-woff" />
            <mimeMap fileExtension="json" mimeType="application/json" />
        </staticContent>
    </system.webServer>
</configuration>

After this is done, make a change to angular.json to bundle the web.config file in the build:

"assets": [
  "src/favicon.ico",
  "src/assets",
  "src/web.config"
],

Now verify by running ng build --prod and confirming that web.config appears in the dist output.

Adding Bootstrap To An Angular CLI Project


Before setting this up, you’ll need to have an Angular project to add Bootstrap to. If you’re starting fresh, you can create one easily with ng new <name>.


First, install bootstrap from npm:

npm install bootstrap

Then add the following to the top of your styles.css file:

@import '~bootstrap/dist/css/bootstrap.min.css';

That’s it! Bootstrap is now enabled for your Angular CLI application.
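Alternatively, instead of the @import, you can reference the stylesheet from the styles array in angular.json (a sketch, assuming the default CLI project layout):

```json
"styles": [
  "node_modules/bootstrap/dist/css/bootstrap.min.css",
  "src/styles.css"
]
```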

Setting up WordPress in Azure with AKS and Helm

To get started, you’ll need the following:

  • kubectl
  • helm
  • Azure subscription

Installing WordPress in AKS with Helm

Set up an Azure Kubernetes Service with the following:

  • Node size – desired VM size (cheapest available is B2s)
  • Node count – a minimum of 1 is required.

After AKS is set up, connect to it with the following:

az aks get-credentials --resource-group AKS_RG --name AKS_NAME

Install Helm:

helm init

Create a values.yml file:

wordpressUsername: sammy
wordpressFirstName: Sammy
wordpressLastName: the Shark
wordpressBlogName: Sammy's Blog!

Create a second file values.secret.yml:

wordpressPassword: password

Now install WordPress with the following command:

helm install --name blog -f values.yml -f values.secret.yml stable/wordpress

After installation, run the following command to get the external IP:

kubectl get svc --namespace default -w blog-wordpress

Once the IP is ready, you’ll likely need to wait for the pods to finish loading. You can check their status with:

kubectl get pods

Once ready, try accessing the IP address to get the WordPress installation.


Setting Up SonarQube in Azure with Azure AD Integration

Installing SonarQube

Go to the marketplace and install SonarQube from Bitnami, with the following:

  • resize to D1_v2
  • if desired, configure any NSG restrictions

After creation, set a DNS name label for the IP address.

Access SonarQube using the domain name, and follow Bitnami’s directions to retrieve the credentials and log in.

Setting up Azure AD Integration

Setting up HTTPS

To be able to use Azure AD, you have to use HTTPS.

Set up the Server base URL to be the configured HTTPS address in Administration → Configuration → General → Server base URL:

SSH into the server, then set up a Let’s Encrypt script and enforce redirection to HTTPS (make sure the server is reachable from the internet so that Let’s Encrypt can validate the domain).

Restart Apache:

sudo /opt/bitnami/ctlscript.sh restart apache

Verify that trying to access the HTTP version of SonarQube automatically redirects you to the HTTPS version.

Creating App Registration in Azure

Run the following command in CLI to generate a service principal:

az ad sp create-for-rbac --name="{NAME}" --role="Contributor" --scopes="/subscriptions/{SUBSCRIPTION_ID}" --years=100

You should receive some output data, which you will use when filling out info in SonarQube:
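The output of az ad sp create-for-rbac looks roughly like this (values redacted); the appId, password, and tenant fields are the ones used in the plug-in configuration below:

```json
{
  "appId": "<GUID>",
  "displayName": "{NAME}",
  "password": "<GENERATED_SECRET>",
  "tenant": "<TENANT_GUID>"
}
```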

Set the Reply URL of the application to https://<URL>/oauth2/callback/aad:

Set the required permissions:

  • Sign in and read user profile
  • Read all users’ basic profiles

Installing Plug-in

Install the Azure Active Directory (AAD) Authentication Plug-in for SonarQube (restarting when requested):

Configure in Administration → General Settings → Azure Active Directory:

  1. Activate Azure AD
  2. Set the client ID to the “appId” value from the output
  3. Set the client secret to the “password” value
  4. Set the tenant ID to the “tenant” value
  5. Allow users to sign up.

Log out as admin, and verify the ability to log in as an Azure AD user.