Automatically Upgrading Azure Kubernetes Services

Azure provides a means to upgrade Kubernetes clusters using the AKS service. This guide will walk you through using a Function App to upgrade those clusters on a regular schedule, making the process something you don't need to worry about.

Note that you may want to hold off on doing this for production systems – if an upgrade were to break current functionality, there is no way to revert a cluster back to its original version.

Create a PowerShell Core Function App

First, create a function app that runs on PowerShell Core:

After creating the function app, you'll also want to increase the default timeout, since this call can take some time to process. Change host.json to include the following:

{
    "functionTimeout": "00:10:00"
}

If you have a large number of clusters you’ll be checking regularly, you should use a standard App Service plan instead, to remove the timeout entirely.

Import Azure CLI into the Function App

Next, you’ll want to import Azure CLI into the Function App, to allow for calling the az command.

First, you'll need to install Azure CLI on your local machine. You'll be copying the resulting installation directory into the function app, so after installing, locate the Azure CLI files at C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2.

Connect to FTP using the publish profile for the Function App (access this through the portal) and copy the entire CLI2 folder into the /home directory. Make sure that all of the files are copied successfully.

To verify everything is working, run the following command:

D:/home/CLI2/wbin/az.cmd

If you get a successful call back, you’ve imported Azure CLI correctly and can now write the code to programmatically upgrade the AKS clusters.

Create Service Principal

Next, you'll create a service principal that has access to the clusters in question, so you have an account that can log in and perform the upgrades. Run the following command locally while logged in to the desired subscription:

az ad sp create-for-rbac -n "aks-autoupgrade-sp" --role contributor --scopes /subscriptions/{SubID}

After this is done, you should receive output showing the appId, password, and tenant. Add these three as configuration values (application settings) for the Function App, as shown in the CLI example after this list:

  • AZ_USER – appId
  • AZ_PASS – password
  • AZ_TENANT – tenant
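If you prefer to set these from the command line rather than the portal, a minimal sketch using the Azure CLI (the function app and resource group names here are placeholders):

# placeholder names – replace with your function app and resource group
az functionapp config appsettings set \
  --name YOUR_FUNCTION_APP \
  --resource-group YOUR_RESOURCE_GROUP \
  --settings "AZ_USER=<appId>" "AZ_PASS=<password>" "AZ_TENANT=<tenant>"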

Create Timer Function

Next, create a timer function that runs every day, let’s say at noon:

0 0 12 * * *

Use the following code:

param($Timer)

Write-Output "Logging in as Service Principal $env:AZ_USER"
D:/home/CLI2/wbin/az.cmd login --service-principal -u $env:AZ_USER -p $env:AZ_PASS --tenant $env:AZ_TENANT | Out-Null

if (!($?)) {
    Write-Error "Login failed, exiting script."
    exit 1;
}

Write-Output "Getting list of AKS clusters...";
$aksClusters = D:/home/CLI2/wbin/az.cmd aks list | ConvertFrom-Json;

if ($aksClusters.length -le 0) {
  Write-Output "No AKS clusters found, exiting...";
  exit 1;
}

$aksClusters | ForEach-Object -Process {
  $clusterName = $_.name
  Write-Output "$clusterName : checking for upgrades..."
  $upgrades = D:/home/CLI2/wbin/az.cmd aks get-upgrades `
    -g $_.resourceGroup -n $_.name | ConvertFrom-Json
  $agentPoolProfiles = $upgrades.agentPoolProfiles

  if ($agentPoolProfiles.upgrades -eq $null) {
    Write-Output "No upgrades available."
    return;
  }

  $latestUpgrade = $agentPoolProfiles.upgrades[-1];

  $currentKubernetesVersion = $upgrades.agentPoolProfiles.kubernetesVersion;
  $newKubernetesVersion = $latestUpgrade.kubernetesVersion

  Write-Output `
    "Upgrade available: $currentKubernetesVersion => $newKubernetesVersion"

  Write-Output "Upgrading to $newKubernetesVersion automatically."
  D:/home/CLI2/wbin/az.cmd aks upgrade -k $newKubernetesVersion -g $_.resourceGroup -n $_.name --no-wait --yes
}

Write-Output "Complete."
exit 0;

You can run the function with the az aks upgrade line commented out to verify everything works as intended without actually triggering any upgrades.
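If you'd like to preview what the function will find for a given cluster, the same check can be run locally with the Azure CLI:

# placeholder resource group and cluster name
az aks get-upgrades -g YOUR_RESOURCE_GROUP -n YOUR_CLUSTER_NAME -o table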

Setting up Failure Alerts

The final (optional) step is setting up a means to alert in case of failure. When creating the Function App, an Application Insights resource should have been created as well. Go to the 'Alerts' section in the Application Insights resource and create an alert:

Add your email as an action group to notify if there is an issue with failures.

Reference

https://stackoverflow.com/questions/56544059/azure-cli-commands-not-working-inside-azure-function-apps-portal

Setting up SFTP File Migration to Azure Storage Account

This guide assumes you are using a Linux SFTP server for receiving files, and then uploading those files into an Azure storage account.

Initial Setup

Install Azure CLI on the SFTP server with the following command:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

Then login using an account with access to the storage account:

az login
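If you'd rather not depend on a cached interactive login for the automated job set up later in this guide, you could instead log in with a service principal that has access to the storage account – a sketch, using placeholder credentials:

# placeholder service principal credentials
az login --service-principal -u <appId> -p <password> --tenant <tenant>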

Perform File Migration

The next step is actually sending the files over to a storage account. For this example, we'll upload all of the files in a particular folder.

az storage blob upload-batch -d YOUR_CONTAINER --account-name STORAGE_ACCOUNT_NAME -s DIRECTORY

Once those files are uploaded, move them into a processed/ directory so they aren't picked up again:

sudo mkdir -p /data/ftpuser/processed

sudo mv /data/ftpuser/upload/* /data/ftpuser/processed/

Automate Migration Process

The final step is making sure to automate the process to allow for migrating files on a regular interval.

First, create a file named /bin/migrate.sh:

#!/bin/bash

echo "Starting file -> AZ storage upload"
sudo mkdir -p /data/ftpuser/queued/
sudo mv /data/ftpuser/upload/* /data/ftpuser/queued/
az storage blob upload-batch -d CONTAINER_NAME --account-name STORAGE_ACCOUNT_NAME -s /data/ftpuser/queued/
if [ $? -ne 0 ]; then
    echo FAIL
    exit 1;
fi
sudo mkdir -p /data/ftpuser/processed
sudo mv /data/ftpuser/queued/* /data/ftpuser/processed/

Allow the script to be executable:

chmod u+x /bin/migrate.sh

Then edit the crontab so the script runs every 15 minutes:

sudo crontab -e

*/15 * * * * /bin/migrate.sh >> /var/log/migrate_output.log 2>&1

You can check the logs of the process here:

/var/log/migrate_output.log
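For example, to follow the log live while the cron job runs:

tail -f /var/log/migrate_output.log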

Reference

https://kvz.io/schedule-tasks-on-linux-using-crontab.html

Manually Renew A Let’s Encrypt Certificate from Kubernetes

I work with a series of Kubernetes clusters that are restricted from public access, but are still encrypted using Let's Encrypt certificates. Normally, when renewal is required, the process happens automatically.

In the case above, I have to kick off the renewal process directly. This can be done easily by deleting the TLS secret generated for the certificate.

First, make sure port 80 is open so the Let's Encrypt servers can reach the cluster for the HTTP challenge.

This can be done through the UI:

Or via CLI:

kubectl delete secret tls-secret --namespace ingress-nginx

Once this is done, the cert should automatically attempt a renewal. You can check the status of this with:

kubectl describe certificate tls-secret --namespace ingress-nginx
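If the cluster uses cert-manager (typically what re-issues the certificate after the secret is deleted), you can also inspect the intermediate objects it creates during renewal – a sketch, assuming the same namespace and that the ACME CRDs are installed:

# assumes cert-manager's ACME CRDs (orders, challenges) are present
kubectl get orders,challenges --namespace ingress-nginx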

You should see the following:

Setting up Slack Notifications with Jenkins

Setting up Slack notifications for build results with Jenkins is pretty simple. You’ll need the following:

  • A Jenkins server that you have admin rights to.
  • Slack Notification Plugin
  • A Slack workspace, ideally with a channel dedicated to Jenkins alerts.

Setup

To set up Slack, go to <your-workspace-name>.slack.com/apps and search for Jenkins CI:

Add the configuration and set the desired channel where alerts will appear.

After this, you’ll see some directions on setting up Slack on the Jenkins side.

When you’ve finished the setup, make sure to test the connection – you should see a message in Slack like the following:

Adding Jenkins Calls to Pipeline

Once the setup is complete, you can use the following examples to call Slack during the pipeline jobs:

stages {
  stage('Build') {
    steps {
      echo 'build'
    }
  }
  stage('Deploy') {
    steps {
      echo 'Deploy'
      slackSend color: 'good', message: "deployment successful => https://yoursite.com"
    }
  }
}
...
post {
  failure {
    slackSend color: 'danger', message: "deployment failed (<${env.BUILD_URL}|Open>)"
  }
  always {
    cleanWs()
  }
}

This example sends a success message when all stages complete, and a failure message if the build as a whole fails.

Setting up Jenkins to Auto-Update

When setting up Jenkins, you’ll have to perform upgrades on the system yourself. You can do this pretty easily, but let’s automate the process so that each night, Jenkins will download and install the latest version.

To do this you’ll need to have SSH access to the server.

Creating Upgrade Script

First, SSH into the server.

Create a bin/ folder in the user’s directory:

cd ~
mkdir bin

If it doesn’t already exist, create a .bash_profile file and add the following:

export PATH=$PATH:/home/<YOUR_USER>/bin
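Reload the profile so the new PATH takes effect in your current session:

source ~/.bash_profile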

Now create the script file for use:

cd bin
touch upgrade-jenkins

Edit the file with the following contents:

#!/bin/bash

sudo wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
sudo mv jenkins.war /usr/share/jenkins/jenkins.war
sudo systemctl restart jenkins

And make the script executable:

chmod u+x upgrade-jenkins

You can finally test the script by calling ./upgrade-jenkins. When it runs, Jenkins should restart with the latest version.
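If you want to confirm which version is now running, Jenkins reports it in an HTTP response header – a quick check, assuming Jenkins listens on the default port 8080:

# assumes Jenkins is on the default port 8080
curl -sI http://localhost:8080 | grep -i x-jenkins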

Setting up for Automatic Updates

The next step is setting up the script to run each night at some point to promote automatic upgrades, so you don’t have to worry about updating the system. To do this, we’ll create a cron job that runs the above script once a day.

Set up a crontab file using sudo crontab -e, and add the following to run every day at 3am (run date to check whether the system is using UTC):

0 3 * * * /home/YOUR_USER/bin/upgrade-jenkins >> /var/log/upgrade_jenkins.log 2>&1

Once this is in place, the script will run once a day, downloading and installing the latest version of Jenkins automatically.

If you’re running into issues, you can check the logs at:

/var/log/upgrade_jenkins.log


Reference:

https://www.taniarascia.com/how-to-create-and-use-bash-scripts/

Setting up a Linux SFTP Server

To get started, you’ll need to use the following:

  • A virtual machine with a public IP address.
  • SSH access to the above virtual machine.

Configuring the User

First, SSH into the server and install OpenSSH:

sudo apt install -y ssh openssh-server

Now create a group for SFTP access, along with a user that belongs to it:

sudo groupadd sftpg
sudo useradd -g sftpg ftpuser

Now set up the directories to allow for the following:

  • The user can view the contents of their own folder.
  • The user can upload files to the upload/ folder.

sudo mkdir -p /data/ftpuser/upload
sudo chown -R root:sftpg /data/ftpuser
sudo chown -R ftpuser:sftpg /data/ftpuser/upload
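One thing to watch for: OpenSSH requires a ChrootDirectory (configured later in this guide) to be owned by root and not writable by any other user, so it's worth making the permissions on the top-level folder explicit:

# the chroot target must be root-owned and not group/world-writable
sudo chmod 755 /data/ftpuser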

Setting up User Access

There are two ways for the user to authenticate – by key or by password. A key is preferred in production scenarios as it's more secure overall.

By Key

Basically, you will store the public key on the FTP server, and the client connecting to the server will use the associated private key. The easiest way is to have the client generate a public/private key pair and then add the public key to /home/ftpuser/.ssh/authorized_keys on the server.

Once that’s done, make sure the ftpuser user has ownership of the authorized_keys file.
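A minimal sketch of that setup on the server, assuming the client's public key has already been pasted into the authorized_keys file:

# paste the client's public key into authorized_keys before locking down permissions
sudo mkdir -p /home/ftpuser/.ssh
sudo chown -R ftpuser:sftpg /home/ftpuser/.ssh
sudo chmod 700 /home/ftpuser/.ssh
sudo chmod 600 /home/ftpuser/.ssh/authorized_keys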

By Password

To allow password authentication, just set a password for the user:

sudo passwd ftpuser

Setting up SFTP

To set up SFTP, append the following to the end of /etc/ssh/sshd_config:

Match Group sftpg
     ChrootDirectory /data/%u
     ForceCommand internal-sftp

If you're using a password to authenticate the above user, make sure the following is in the file as well:

PasswordAuthentication yes
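Before restarting, it's a good idea to check the file for syntax errors so you don't lock yourself out of SSH:

sudo sshd -t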

Restart the service:

sudo systemctl restart sshd

Verification

To verify everything is in place, try logging into the server as the new user:

sftp ftpuser@YOUR_DOMAIN

Once connected in, try uploading a file:

put D:/local-file.png file.png

And then check that the file is there:

get file.png D:/downloaded-file.png

Setting up Artifactory to use a root domain with a Reverse Proxy

When installing Artifactory, you'll be able to start using it at http://YOUR_DOMAIN:8081/artifactory. To change this to a simpler domain, follow this guide. You'll need the following:

  • Artifactory administrative access
  • SSH access to the server.

Setting up Reverse Proxy to Simplify Domain

First, SSH into the server and install Apache:

sudo apt install apache2 -y

This should create a default page you can access by hitting the root URL:

Now configure the appropriate modules:

sudo a2enmod proxy_http proxy_ajp rewrite deflate headers proxy_balancer proxy_connect proxy_html ssl lbmethod_byrequests slotmem_shm proxy

While logged in as an admin in Artifactory, access Admin -> HTTP Settings and make the following changes:

After saving, you'll see the ability to View/Download the configuration on the right side of the page. Download the configuration and replace the contents of /etc/apache2/sites-enabled/000-default.conf on the Artifactory server with it.
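Before restarting, you can check that the downloaded configuration is syntactically valid:

sudo apachectl configtest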

Restart Apache with sudo systemctl restart apache2.

Now confirm that you can request Artifactory using http://YOUR_DOMAIN.

Setting up SSL using Let’s Encrypt

Now that you can access the domain, let’s secure the domain using Let’s Encrypt.

Use the following to check which version of Linux you’re running: lsb_release -a

Most likely you'll be using Ubuntu 18.04; if so, you can use these steps. If not, check the Certbot page for the appropriate directions.

You should be able to generate and configure the cert automatically by following the directions above. Assuming this is successful, choose the option to redirect HTTP traffic to HTTPS, removing plain HTTP access.
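For reference, the heart of that process is a single certbot run against Apache – a sketch, assuming certbot and its Apache plugin are installed per the Certbot directions:

# assumes the certbot Apache plugin is installed; replace the domain placeholder
sudo certbot --apache -d YOUR_DOMAIN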

Now confirm that you can both:

  1. Access the HTTPS version of your domain above.
  2. When trying to access HTTP, it automatically redirects to HTTPS.

Reference: https://www.jfrog.com/confluence/display/RTF/Configuring+Apache

Resetting Artifactory Admin Password

If you've lost access to the administrator account in your on-premises Artifactory instance, this guide will walk you through setting it back up. You'll need SSH access to the server in order to complete this guide.

To get started, SSH into the server and open $ARTIFACTORY_HOME/etc/security/access/bootstrap.creds. Enter the following information:

access-admin@127.0.0.1=YOUR_PASSWORD

Now restrict the file to owner read/write permissions:

chmod 600 bootstrap.creds

And finally, restart Artifactory.
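For example, if Artifactory was installed as a systemd service (the service name can vary by install method):

# service name may differ depending on how Artifactory was installed
sudo systemctl restart artifactory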

Once that’s done, you’ll need to generate the Basic authorization header:

echo -n access-admin:YOUR_PASSWORD | base64

Then make the following PATCH call while still SSHed into the server:

curl --request PATCH \
  --url http://localhost:8081/artifactory/api/access/api/v1/users/admin \
  --header 'authorization: Basic <USE_CODE_FROM_ABOVE>' \
  --header 'content-type: application/json' \
  --data '{"password":"NEW_PASSWORD"}'

If you receive a response like the following, you’re all set:

{
  "username" : "admin",
  "realm" : "internal",
  "status" : "enabled",
  "allowed_ips" : [ "*" ],
  "created" : "2019-08-01T13:47:16.959Z",
  "modified" : "2019-09-04T15:25:02.398Z",
  "last_login_time" : "2019-08-21T13:46:55.061Z",
  "last_login_ip" : "204.194.141.30",
  "custom_data" : {
    "updatable_profile" : "true",
    "artifactory_admin" : "true"
  },
  "password_expired" : false,
  "password_last_modified" : 1567610702390,
  "groups" : [ ]
}

Adding Entity Framework/SQL Server to a Function App

When working with function apps, you may want to connect to SQL Server so your API has data persistence.

Setting up Project to Use Entity Framework

Add the following to your .csproj file, in the section with PackageReference entries:

<ItemGroup>
    ...
    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.2.3" />
</ItemGroup>

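If you'd rather add these from the command line, the equivalent dotnet CLI calls are:

dotnet add package Microsoft.Azure.Functions.Extensions --version 1.0.0
dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 2.2.3
dotnet add package Microsoft.EntityFrameworkCore.Design --version 2.2.3
dotnet add package Microsoft.EntityFrameworkCore.Tools --version 2.2.3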
And then add the following to allow for copying the project DLL (supports builds in both Windows and Linux):

...
  <Target Name="PostBuild" AfterTargets="PostBuildEvent">
    <Exec Command="copy /Y &quot;$(TargetDir)bin\$(ProjectName).dll&quot; &quot;$(TargetDir)$(ProjectName).dll&quot;" Condition=" '$(OS)' == 'Windows_NT'" />
    <Exec Command="cp &quot;$(TargetDir)bin\$(ProjectName).dll&quot; &quot;$(TargetDir)$(ProjectName).dll&quot;" Condition=" '$(OS)' != 'Windows_NT'" />
  </Target>
</Project>

For setting this up, I’d recommend having at least one Model ready for use. In our case, let’s use a simple example with Entry.cs:

using System;
using System.ComponentModel.DataAnnotations;

public class Entry
{
  [Key]
  public int EntryId { get; set; }

  public DateTime Date { get; set; }

  public string Content { get; set; }
}

Now create the Context.cs and ContextFactory.cs files in the Models folder:

// Context.cs
using Microsoft.EntityFrameworkCore;

namespace YOUR_NAMESPACE
{
  public class Context : DbContext
  {
    public Context(DbContextOptions<Context> options)
      : base(options)
    { }

    public DbSet<Entry> Entries { get; set; }
  }
}

// ContextFactory.cs
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace YOUR_NAMESPACE
{
  public class ContextFactory : IDesignTimeDbContextFactory<Context>
  {
    public Context CreateDbContext(string[] args)
    {
      string SqlConnection =
        Environment.GetEnvironmentVariable("CONN_STR");

      if (SqlConnection == null)
      {
        throw new ArgumentNullException(
          $"Environment variable CONN_STR not set.");
      }

      var optionsBuilder = new DbContextOptionsBuilder<Context>();
      optionsBuilder.UseSqlServer(SqlConnection);

      return new Context(optionsBuilder.Options);
    }
  }
}

Now create a Startup.cs file in root:

using System;
using YOUR_NAMESPACE.Models;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(YOUR_NAMESPACE.Startup))]

namespace YOUR_NAMESPACE
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      ConfigureDatabase(builder);
    }

    private void ConfigureDatabase(IFunctionsHostBuilder builder)
    {
      string SqlConnection =
                    Environment.GetEnvironmentVariable(
                      "CONN_STR");

      if (SqlConnection == null)
      {
        throw new ArgumentNullException(
          "Environment variable CONN_STR not set.");
      }

      builder.Services.AddDbContext<Context>(
        options => options.UseSqlServer(SqlConnection));

      var optionsBuilder = new DbContextOptionsBuilder<Context>();
      optionsBuilder.UseSqlServer(SqlConnection);

      using (var context = new Context(optionsBuilder.Options))
      {
        try
        {
          context.Database.Migrate();
        }
        catch (Exception e)
        {
          throw new Exception(
            $"Error when migrating database: {e.Message}");
        }
      }
    }
  }
}

Setting up Database

Finally, you’ll need to set up a database for use. Create the database and use the connection string – let’s use SQL Server locally and set this in the Azure Function project’s local.settings.json:

"Values": {
  ...
  "CONN_STR": "Data Source=localhost\\SQLEXPRESS;Initial Catalog=DB_NAME;Integrated Security=SSPI;"
},

Once this is set, setting up migrations is the next step. Set the environment variable on your PC:

$env:CONN_STR="CONN_STRING"
dotnet ef migrations add InitialCreate

With this step done, start the application and check to see if your database was updated correctly, and that you can see the tables desired.
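When you deploy, remember that CONN_STR also needs to exist as an application setting on the Function App itself – for example, via the CLI (names are placeholders):

# placeholder function app and resource group names
az functionapp config appsettings set -n YOUR_FUNCTION_APP -g YOUR_RESOURCE_GROUP --settings "CONN_STR=<connection string>"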

Setting Up a Root Domain with Azure DNS

When setting up Azure CDN, you may want to use a naked domain (yourdomain.com) to access it. It's a little tricky, so here's how I set it up:

Creation and Verification

Create the CDN profile and endpoint in Azure.

Next set up a custom domain:

To do this, you'll need to add the value highlighted as a CNAME record in your domain's DNS (you can also add it with the CLI, as shown after this list). It will look like this:

  • Host: cdnverify.<yourdomain>
  • Value: cdnverify.<endpoint>.azureedge.net
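If the domain's DNS zone is hosted in Azure DNS (as this section assumes), the verification record can be added with the CLI – a sketch with placeholder names:

# placeholder resource group, zone, and endpoint names
az network dns record-set cname set-record \
  --resource-group YOUR_DNS_RESOURCE_GROUP \
  --zone-name yourdomain.com \
  --record-set-name cdnverify \
  --cname cdnverify.YOUR_ENDPOINT.azureedge.net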

Setting up SSL and HTTPS Redirect

With the domain working, you’ll notice that you only have an insecure connection in place, so let’s set that up.

One of the disadvantages of using a root domain is that you cannot use the Azure CDN self-generated certificates. This means you’ll have to bring your own certificate in. There are three options immediately available:

  1. Purchase a certificate from Azure (easy and reliable, but starts at ~$70).
  2. Purchase a certificate from a reputable CA (such as Namecheap, where a cert can cost as little as ~$8 a year, although these are typically basic domain-validated certificates).
  3. Use the manual process at Let’s Encrypt to generate a certificate (free, but will need to be renewed regularly).

Obtaining a Let’s Encrypt Certificate Using Windows

To use the manual process, you’ll need to start with the following:

Install WSL (Ubuntu 18.04) onto your computer, and open a CLI.

Install certbot with the following commands:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

And then run certbot in manual mode:

sudo certbot certonly --manual --preferred-challenges dns-01

You'll need to create a TXT record with the data provided by certbot. After creating the TXT record, use https://dnschecker.org to verify it has propagated before continuing (this may take 5-10 minutes).
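You can also check propagation from the command line in WSL (you may need the dnsutils package); certbot's dns-01 challenge places the TXT record under _acme-challenge:

# replace with your domain
dig +short TXT _acme-challenge.yourdomain.com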

Once verification is complete (and you've deleted the TXT record), certbot will have generated both a certificate chain and a private key.

Next, you’ll add these to an Azure Key Vault to finish setting up HTTPS.

Adding Certificate to Key Vault, Configuring Certificate

To add the certificate to the keyvault, you need to first convert it to PFX (using WSL):

openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out cert.pfx -passout pass:password

Now back to using Azure CLI, import the certificate:

az keyvault certificate import --vault-name VAULT_NAME -n cert -f cert.pfx --password password

Once the certificate is added, set up Custom HTTPS for the endpoint:

The final step is setting up a service principal for Key Vault access with the command provided above (make sure to use Connect-AzAccount first). Once this is done, allow some time (around 6-8 hours) for the certificate import and CDN provisioning to complete.

Verification

To make sure everything is in place, first check to ensure the status shows Custom HTTPS being enabled:

Afterwards, try accessing your site using HTTPS to confirm everything working.

Redirecting all non-HTTPS/root Traffic

The last step is redirecting anything not going to https://<yourdomain> to the correct place. This can be done on the Azure CDN Premium from Verizon plan by changing the traffic rules in the custom rules engine:

This will take some time to propagate; you'll know it's complete when you see "Active XML" next to the rule.

Once this is done, you can validate by trying to access the site using HTTP, and seeing it redirect to HTTPS (make sure to use an Incognito tab if using Chrome).