Suggested Jenkins Plugins

Here’s a list of Jenkins plugins I tend to use frequently.

Global Slack Notification Plugin

Allows for sending messages via pipeline to Slack channels.

Azure AD Plugin

Allows for authentication to Jenkins using Azure Active Directory.

Checkmarx

If you are using Checkmarx to scan your codebases for vulnerabilities, this plugin connects to a Checkmarx server and generates a report automatically.

After installing, make sure to set up a server connection under Configure System -> Checkmarx.

If you need to add the Checkmarx step to the pipeline, this can be generated using the Pipeline Syntax feature.

NodeJS

This allows for using npm steps in your pipeline, which is great for building JavaScript-based projects.

After installing, make sure to set up a NodeJS installation in Global Tool Configuration.

Build with Parameters Plugin

If you have a job that takes a series of parameters to run (such as a job that performs a deployment), this plugin will allow for programmatically filling in the parameters via a GET call.

Once installed, you can call any job with parameters like so:

https://server/job/$JOB/buildwithparam?PARAMS
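
Following the URL pattern above, a hypothetical example for a job named deploy-app that takes an ENV parameter would look like this (the job name and parameter are placeholders):

# Trigger the deploy-app job with ENV set to uat
curl "https://server/job/deploy-app/buildwithparam?ENV=uat"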

Windows Azure Storage Plugin

This plugin allows for programmatic interaction with different Azure storage accounts, allowing for both uploading and downloading.

After installing, you’ll need to set up credentials for use.

To use it in a pipeline, the syntax will look like this:

azureUpload blobProperties: [
        cacheControl: '',
        contentEncoding: '',
        contentLanguage: '',
        contentType: '',
        detectContentType: true
    ],
    containerName: 'ui-dist',
    fileShareName: '',
    filesPath: 'dist/**',
    storageCredentialId: 'YOUR-SP',
    storageType: 'blobstorage'

Active Choices Plugin

This allows for more dynamic behavior with parameters: selecting a value for one parameter determines the set of options available for another.

Setting this up in a pipeline takes some work; you’ll need to add the parameters using the following format:

properties([
    parameters([
        [$class: 'ChoiceParameter', 
            choiceType: 'PT_SINGLE_SELECT', 
            description: 'Select the context from the Dropdown List', 
            filterLength: 1, 
            filterable: true, 
            name: 'context', 
            randomName: 'choice-parameter-5631314439613978', 
            script: [
                $class: 'GroovyScript', 
                fallbackScript: [
                    classpath: [], 
                    sandbox: false, 
                    script: 
                        'return[\'Could not get Env\']'
                ], 
                script: [
                    classpath: [], 
                    sandbox: false, 
                    script: 
                        'return["uat","dev"]'
                ]
            ]
        ], 
        [$class: 'CascadeChoiceParameter', 
            choiceType: 'PT_SINGLE_SELECT', 
            description: 'Select the Server from the Dropdown List', 
            filterLength: 1, 
            filterable: true, 
            name: 'service', 
            randomName: 'choice-parameter-5631314456178619', 
            referencedParameters: 'context', 
            script: [
                $class: 'GroovyScript', 
                fallbackScript: [
                    classpath: [], 
                    sandbox: false, 
                    script: 
                        'return[\'Could not get Environment from Env Param\']'
                ], 
                script: [
                    classpath: [], 
                    sandbox: false, 
                    script: 
                        ''' if (context.equals("uat")){
                                return["uat-1","uat-2"]
                            }
                            else if(context.equals("dev")){
                                return["dev-1","dev-2"]
                            }
                        '''
                ]
            ]
        ]
    ])
])

Copying Jenkins Jobs from Server to Server

In the case where you want to copy a collection of jobs from one Jenkins server to another, here’s a process you can use to make the migration. This guide assumes you have two remote Jenkins instances: a source to copy jobs from and a destination to copy them to.

SSH into the source server to determine where the jobs directory for Jenkins is (typically /var/lib/jenkins/jobs).

Use scp (on your local machine) to copy the jobs from the source Jenkins instance down to your machine:

scp -r USER@DNS:~JENKINS/jobs C:\Users\d\Downloads\jobs

After everything is downloaded, upload the files to the new server (you will need to upload them to the home directory of the user, and then move them via SSH):

scp -r C:\Users\d\Downloads\jobs DEST_USER@DEST_DNS:~

After that completes, SSH into the server again and move the files:

sudo mv jobs /var/lib/jenkins

Also, change ownership of the jobs directory to the Jenkins service user:

sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs

After this is complete, go into Manage Jenkins and use Reload Configuration from Disk. Ensure that all of your previous jobs are in place.
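
If you prefer the command line, the reload can also be triggered through Jenkins’ REST endpoint; a minimal sketch, assuming an admin user and API token (all values are placeholders):

# Ask Jenkins to reload its configuration from disk
curl -X POST -u USER:API_TOKEN https://JENKINS_URL/reload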

Setting up Jenkins with Azure AD Authentication

Configuring Azure AD

Run the following command in the Azure CLI to generate a service principal:

az ad sp create-for-rbac --name="{NAME}" --role="Contributor" --scope="/subscriptions/{SUBSCRIPTION_ID}" --years=100

Save the output generated, as you’ll use it for configuration in Jenkins.

Add a redirect URL of https://YOURHOST/securityRealm/finishLogin to the app registration.

Set Required Permissions in Azure Active Directory to:

  • Application Permissions (Read Directory Data)
  • Delegated Permissions (Read Directory Data)

Click on ‘Grant permissions’.

If planning to use an Azure AD group for authorization, create one now.
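
If you’d rather create the group from the CLI instead of the portal, something like the following works (the group name is a placeholder):

# Create an Azure AD group to use for Jenkins authorization
az ad group create --display-name "jenkins-admins" --mail-nickname "jenkins-admins"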

Configuring Jenkins

Download the ‘Azure AD’ plugin, and restart after installation.

Go to Manage Jenkins → Configure Global Security.

Select ‘Enable Security’ if it isn’t already selected.

Under ‘Security Realm’, select ‘Azure Active Directory’, and fill the information:

  • Client ID – ‘appId’
  • Client Secret – ‘password’
  • Tenant – ‘tenant’

Use the button to verify the application.

Set Authorization to ‘Azure Active Directory Matrix-based security’.

Add the newly created group to the matrix and assign it the appropriate permissions.

Verify by logging out and logging back in as an Azure AD user.

Troubleshooting

If you accidentally lock yourself out after enabling Azure AD, do the following:

SSH into the server.

Modify the config.xml file:

sudo nano /var/lib/jenkins/config.xml
  • For the useSecurity item, change to ‘false’
  • Remove authorizationStrategy and securityRealm sections.

Restart Jenkins:

sudo systemctl restart jenkins

Jenkins is now completely unprotected, so get back to whatever security strategy you were working on as soon as possible.

Upgrading Ubuntu

To upgrade Ubuntu, use the following procedure:

Upgrade all of your current dependencies:

sudo apt update && sudo apt upgrade

Upgrade packages whose dependencies have changed and remove obsolete packages:

sudo apt dist-upgrade && sudo apt-get autoremove

Perform the OS upgrade:

sudo apt install update-manager-core && sudo do-release-upgrade
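
Once the release upgrade finishes and the server comes back up, you can confirm which version you ended up on:

# Show the current Ubuntu release
lsb_release -a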

Setting up WordPress to serve next-gen WebP images

Why do this?

If you are looking to improve performance on your website, this will automatically convert the images on your site to the more efficient WebP format. This will help with Google Lighthouse scores (especially in solving the “Serve images in next-gen formats” issue).

Procedure

Install the WebP Express Plugin (and buy the developer a coffee!)

Go into the WebP Express settings, accept the defaults, and click ‘Save settings’.

Verify this is working by running a Google Lighthouse audit. You should no longer see the ‘Serve images in next-gen formats’ item in the results.

Installing Dependencies

Depending on the server configuration you have, you may need to do more work after installing the WebP Express plugin. If the plugin reports an error instead of converting images, you’ll need to install a few dependencies on the server.

SSH into the server and install the PHP GD extension:

sudo apt-get install php7.2-gd

In addition, you may need to install mod_headers:

sudo a2enmod headers
sudo systemctl restart apache2
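
To confirm both pieces are in place afterwards (assuming a standard Apache/PHP setup), you can check the loaded modules:

# Verify the GD extension is loaded and the headers module is enabled
php -m | grep -i gd
apache2ctl -M | grep headers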

Setting up Jenkins in Azure

Before getting started, you’ll need to have:

  • An Azure tenant and subscription.
  • OpenSSH (installation for Windows 10)

Installing Jenkins via Azure Marketplace

The easiest way to install Jenkins is to use the Azure Marketplace link. A couple suggestions when setting up:

  • I recommend using an SSH Public Key to sign in. If you haven’t yet, generate one using ssh-keygen and then get it using cat ~/.ssh/id_rsa.pub
  • Set up a domain name label, especially if you aren’t planning to put this behind a different domain.
  • Start with a B1ms VM size; you can upgrade later as the system gets more use.

After creation, modify the NSG that was created so that SSH access is restricted to your public IP.
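
A sketch of doing this with the Azure CLI, assuming the marketplace deployment created an SSH rule named default-allow-ssh (check your NSG for the actual rule name; the other values are placeholders):

# Limit the SSH rule to your own public IP
az network nsg rule update -g RG_NAME --nsg-name NSG_NAME -n default-allow-ssh --source-address-prefixes YOUR_PUBLIC_IP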

Next, SSH into the server using the IP and check to see if you can update the OS (as of this writing, the image ships with Ubuntu 16.04 LTS, and can be upgraded to 18.04 LTS).

Connecting a Domain

If you’re planning to use a different domain to host Jenkins (as opposed to the provided domain-name-label.location.cloudapp.azure.com), set the following DNS record:

  • CNAME
  • Host: desired subdomain (ex. jenkins -> jenkins.dfar.io)
  • Value: DNS record from Azure.

Since DNS propagation can take a little while, verify you can reach the server at the new name before continuing.
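
A quick way to check propagation from your machine (the domain is a placeholder):

# Confirm the CNAME resolves to the Azure DNS name
nslookup jenkins.example.com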

Setting up SSL using Let’s Encrypt

The next step is setting up SSL using Let’s Encrypt to allow for an HTTPS connection. First, open the 443 port on the VM:

az network nsg rule update -g RG_NAME --nsg-name NSG_NAME -n http-rule --destination-port-ranges 80 443

Now SSH into the server and configure Nginx to handle SSL termination:

sudo nano /etc/nginx/sites-available/default

Use the following configuration:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name CUSTOMDOMAIN;
    ssl_certificate /etc/letsencrypt/live/CUSTOMDOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/CUSTOMDOMAIN/privkey.pem;
    location / {
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;


        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass          http://localhost:8080;
        proxy_read_timeout  90;
    }
}

Then run the following commands:

sudo service nginx stop
git clone https://github.com/letsencrypt/letsencrypt
./letsencrypt/letsencrypt-auto certonly
sudo service nginx restart

Accessing and Logging Into Jenkins

After that completes, access the Jenkins instance at your domain. Verify that the SSL connection is valid and that you land on the ‘Unlock Jenkins’ page.

Run the following command on the server (over SSH) to get the unlock code:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Next, you’ll be asked to either install the suggested plugins or select plugins yourself. I recommend going through and selecting only the plugins you want, to keep the installation minimal. Remove anything from the list that you may not need (such as Subversion); you can always add plugins later if you find you need them.

After that, create an admin user for yourself, and you’ll be ready to get started!

Next Steps

After you’ve finished setting up Jenkins, a few good next steps are installing the plugins you need (see the suggested plugins above) and setting up authentication (for example, with Azure AD).

Creating a Function App With a Full CI/CD Pipeline with VSCode and Jenkins

Before starting this, you’ll need to have a few things on your machine:

  • Azure Functions VSCode Extension
  • Azure Functions Core Tools (choco install azure-functions-core-tools)
  • .NET Core Build Tools (choco install visualstudio2017-workload-netcorebuildtools)
  • An Azure subscription
  • A Function app created inside of the Azure subscription
  • A Jenkins server with the following set up:
    • Azure Function plugin installed
    • A service principal configured (use az ad sp create-for-rbac -n "jenkins" --role contributor --scopes /subscriptions/{SubID} and then add to Jenkins Credentials)

Creating and Locally Running the Function App

Inside VSCode, create a function app project with the following:

  • Create a new folder for use
  • Language: C#
  • Template: HttpTrigger
  • Security: Anonymous

After creating, you’ll need to resolve a few dependencies, which VSCode should prompt for.

I ran into issues getting the function app to debug locally, but I was able to run it by executing the following in PowerShell:

dotnet clean
dotnet build
func host start

Once you get it running, you can use the URL provided by func host start to test the function.
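
For example, if you kept the default HttpTrigger template, a quick test looks something like this (the function name is a placeholder for whatever you named yours):

# Call the locally running HTTP-triggered function
curl "http://localhost:7071/api/HttpTriggerCSharp?name=World"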

Locally Running a TimerTrigger Function

To use a TimerTrigger function, there are just a few changes:

  • You will need to have a Storage Account available to allow for running locally.
  • When running, you can invoke the function by making a POST to http://localhost:7071/admin/functions/{FUNCTION_NAME} (see the example below)
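
A minimal sketch of that call, assuming a function named MyTimerFunction (the name is a placeholder):

# Manually invoke the timer-triggered function via the local admin endpoint
curl -X POST -H "Content-Type: application/json" -d '{"input":""}' http://localhost:7071/admin/functions/MyTimerFunction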

Adding to Git and Deploying via Jenkins

The next step is checking in the example code to Git, so you have a place to get the codebase from for deployment.

After checking in the codebase, create a Multibranch Pipeline project in Jenkins.

Use the following Jenkinsfile as a reference:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'dotnet clean'
        sh 'dotnet build'
      }
    }
    stage('Deploy to Function App') {
      when { branch 'master' }
      steps {
        azureFunctionAppPublish appName: "fa-poc-123",
          azureCredentialsId: 'jenkins-sp',
          resourceGroup: "fa-poc-ue-rg",
          sourceDirectory: '',
          targetDirectory: '',
          filePath: ''
      }
    }
  }
}

After that, push your changes and the pipeline should deploy the code to the Function App. Verify the deployment using the Function App URL.

Copying a Database in Azure with Always Encrypted Data

When copying a database with Always Encrypted data (say, to a different environment), you’ll generally want to rotate the Column Master Key so that it points to a key in the Key Vault in the same Azure resource group as the copied database. This takes a little bit of work:

Pre-Reqs

You’ll need to have the following software installed:

  • SSMS
  • Azure CLI

You’ll also need to make sure the Column Master Key referenced by the database points to a key that actually exists. Run the following query on your newly copied database:

select * from sys.column_master_keys

And then check to see if the key exists in the appropriate vault:

az keyvault key show --id KEY_PATH

If it exists, you’ll be able to copy the database over without issue.
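
For reference, the copy itself can be done with the Azure CLI; a sketch with placeholder names:

# Copy the source database to a server in the target resource group
az sql db copy --name SOURCE_DB --server SOURCE_SERVER --resource-group SOURCE_RG --dest-name TARGET_DB --dest-server TARGET_SERVER --dest-resource-group TARGET_RG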

Procedure

Create a new key in the Key Vault:

az keyvault key create --name Always-Encrypted-Auto1 --vault-name VAULT_NAME

Next, create a new Column Master Key, using the created key above.

With two CMKs in place, rotate from the original CMK to the new one, then run the cleanup step.

Once cleanup completes, you can delete the old CMK.

Setting up HTTPS on an AKS Cluster

Pre-Reqs

The following is required:

  • An ingress controller should already be installed.
  • The public IP of the Ingress controller should have a DNS name.
  • Helm needs to be running at 2.13.1.
  • The Kubernetes cluster should be publicly accessible (to allow cert creation)

There is a bug that prevents Helm 2.14+ from working with this process, so verify you are on 2.13.1 by running the following:

helm version

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

If the output doesn’t match the above, you need to downgrade Helm:

# only needed if Client above is not 2.13.1
choco uninstall kubernetes-helm
choco install kubernetes-helm --version 2.13.1

helm init --upgrade --force-upgrade

Procedure

Run the following commands:

# Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install --name cert-manager --namespace cert-manager --version v0.8.0 jetstack/cert-manager

Create the following YAML file, cluster-issuer.yml:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: ingress-nginx
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: UPDATE_THIS_EMAIL
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

Apply the changes to the cluster:

kubectl apply -f cluster-issuer.yml

Create the certificate.yml file:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
  namespace: ingress-nginx
spec:
  secretName: tls-secret-staging
  dnsNames:
  - YOUR_DOMAIN
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - YOUR_DOMAIN
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

Apply those changes:

kubectl apply -f certificate.yml

Verification

Confirm creation of the certificate:

kubectl describe certificate tls-secret --namespace ingress-nginx

You should see the following:

...
Normal   OrderCreated        27s                cert-manager  Created Order resource "tls-secret-3300974441"
Normal   CertIssued          3s (x2 over 20m)   cert-manager  Certificate issued successfully
Normal   OrderComplete       3s                 cert-manager  Order "tls-secret-3300974441" completed successfully

Verify HTTPS can be accessed.
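
A quick check from the command line (the staging issuer configured above produces a certificate browsers won’t trust, so -k skips verification; the domain is a placeholder):

# Request the site over HTTPS and show the response headers
curl -kI https://YOUR_DOMAIN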

Adding Font Awesome to an Angular CLI Project

To add Font Awesome to your Angular CLI project, do the following:

Install Font Awesome:

npm i @fortawesome/fontawesome-free

Then add the following to your styles.css:

@import '~@fortawesome/fontawesome-free/css/all.min.css';

Once that’s done, use the following markup to add an icon to your project:

<i class="far fa-copy"></i>