Keep A Consumption-Based Function App Warm With A Scheduled Task

Azure Function Apps on a Consumption plan are unloaded after roughly 20 minutes of inactivity, which means the next request pays a cold-start penalty. If you’re serving an API from a Function App, you’ll want to keep it warm so users don’t hit that delay.

A note on this solution: it works well for low-traffic APIs where the goal is to serve an API from a Consumption plan at minimal cost. For heavier traffic you may be better off on a dedicated App Service plan, which avoids the cold-start issue entirely; on the Consumption plan, cold starts will still occur when the app scales out.

To follow this guide, I’ll assume you already have a Function app in place. Create a new function with the following:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace YOUR_NAMESPACE
{
  public static class KeepWarm
  {
    [FunctionName("KeepWarm")]
    public static void Run([TimerTrigger("0 */15 * * * *")]TimerInfo myTimer, ILogger log)
    {
      log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    }
  }
}

That’s it! Once this is deployed, the timer will fire every 15 minutes, keeping the Function App loaded so your API no longer needs a cold start.
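If you want to sanity-check the difference, you can time a request to your API before and after deploying the timer. This is just a quick check, and the URL below is a placeholder for one of your own endpoints:

# Measure the total response time for a request to your API
curl -s -o /dev/null -w 'total: %{time_total}s\n' https://<your-function-app>.azurewebsites.net/api/<your-endpoint>

Run it after the app has sat idle and again once the timer has been running, and the cold-start difference should be obvious.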

Further Reading

https://mikhail.io/2018/05/azure-functions-cold-starts-beyond-first-load/

Adding a Scheduled Task to NOPCommerce through the database

When working in NOPCommerce, you may want to create a scheduled task without having to go through plugin install and uninstall.

First, you’ll need the task class itself implemented, ideally in a plugin.

Then register it by adding the following to the ScheduleTask table in the NOPCommerce database:

INSERT INTO [dbo].[ScheduleTask]
           ([Name]
           ,[Seconds]
           ,[Type]
           ,[Enabled]
           ,[StopOnError]
           ,[LeasedByMachineName]
           ,[LeasedUntilUtc]
           ,[LastStartUtc]
           ,[LastEndUtc]
           ,[LastSuccessUtc])
     VALUES
           (
           'NAME_OF_TASK',
           SCHEDULE_IN_SECONDS,
           'NAMESPACE.ClassName, NAMESPACE',
           IS_ENABLED, -- 1-yes, 0-no
           SHOULD_STOP_ON_ERROR, -- 1-yes, 0-no
           NULL,
           NULL,
           NULL,
           NULL,
           NULL
           )
GO

After that’s done, you should be able to immediately run the task.

Adding Swagger UI Documentation to Azure Function APIs

You can set up Swagger UI in your Azure Function app fairly easily to provide documentation for your serverless API.

Initial Configuration

First, add the Swashbuckle package to your project via the <project>.csproj file:

<ItemGroup>
    <PackageReference ... />
    <PackageReference Include="AzureFunctions.Extensions.Swashbuckle" Version="1.4.1" />
    ...
</ItemGroup>

Next, set up the Swashbuckle startup code in SwashBuckleStartup.cs:

using System.Reflection;
using AzureFunctions.Extensions.Swashbuckle;
using YOUR_NAMESPACE;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;

[assembly: WebJobsStartup(typeof(SwashBuckleStartup))]
namespace YOUR_NAMESPACE
{
  internal class SwashBuckleStartup : IWebJobsStartup
  {
    public void Configure(IWebJobsBuilder builder)
    {
      builder.AddSwashBuckle(Assembly.GetExecutingAssembly());
    }
  }
}

Now create two HTTP-triggered functions, one serving the Swagger JSON document and one serving the Swagger UI (if they share a class, give the methods distinct names). First, the JSON document:

[FunctionName("SwaggerJson")]
[SwaggerIgnore]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "json")]
        HttpRequestMessage req,
    ILogger log,
    [SwashBuckleClient] ISwashBuckleClient swashBuckleClient)
{
  return Task.FromResult(swashBuckleClient.CreateSwaggerDocumentResponse(req));
}

And the Swagger UI endpoint:

[FunctionName("SwaggerUI")]
[SwaggerIgnore]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "ui")]
        HttpRequestMessage req,
    ILogger log,
    [SwashBuckleClient] ISwashBuckleClient swashBuckleClient)
{
  return Task.FromResult(swashBuckleClient.CreateSwaggerUIResponse(req, "json"));
}

Running this locally will expose two endpoints: the raw Swagger document at /api/json and the Swagger UI at /api/ui (assuming the default api route prefix).
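You can hit the JSON endpoint from the command line while the Functions host is running locally (assuming the default local port 7071):

# Fetch the generated Swagger document
curl http://localhost:7071/api/json

# The Swagger UI page is served at:
# http://localhost:7071/api/ui (open this in a browser)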

The final step of the initial configuration is setting the title, version, and description shown on the documentation page. Add the following to host.json:

{
  "version": "2.0",
  "extensions": {
    ...

    "Swashbuckle": {
      "Documents": [
        {
          "Title": "YOUR_TITLE",
          "Version": "v1",
          "Description": "YOUR_DESCRIPTION"
        }
      ]
    }
  }
}

This gives the generated documentation page your title, version, and description.

Further Reading:

https://medium.com/@yuka1984/open-api-swagger-and-swagger-ui-on-azure-functions-v2-c-a4a460b34b55

Cleaning up Hard Drive Space in Linux

If you run into a situation where a Linux machine is running out of space, here’s a way to check what is taking up so much space and clean the server up.

Install ncdu:

sudo apt-get install ncdu

Afterwards, start ncdu and you’ll be presented with an interactive interface that walks you through the directories taking the most space.
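For example, to scan the whole filesystem while staying on the current mount (adjust the path to suit):

# Scan from the root, without crossing into other mounted filesystems (-x)
sudo ncdu -x /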

Troubleshooting – My Drive is 100% Full

If you’re having trouble installing the above, your drive is likely so full that nothing else can be written. Here are a few steps that may help:

First, see if there are any individual files you can delete to clear some space (such as in the user directory). If that isn’t an option, go into /var/tmp and delete the files there (note that this is not always a safe choice).

After that, run the following:

# Clear the local package cache
sudo apt-get clean
# byobu provides the purge-old-kernels helper used below
sudo apt install byobu
# Remove old kernel packages that are no longer needed
sudo purge-old-kernels
# Remove packages that are no longer required
sudo apt autoremove
sudo update-grub

With that done, you should be able to install ncdu and dig into what is taking up so much space.
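To confirm how much space you’ve recovered (and which partition was full in the first place), df is enough:

# Show used and available space per mounted filesystem, human-readable
df -h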

Getting Slack Notifications with Azure Monitor

One option for alerts coming from Azure Monitor (web tests, metric alerts, and more) is to route them to a Slack channel as an alternative to receiving them via email.

Setting up Slack

To start, you’ll need a Slack workspace in which you can install an app. If needed, create the channel that will receive the notifications.

Create a new Slack app named “Azure Notifications” and turn on the “Incoming Webhooks” capability, which will provide a URL (webhook) that you’ll use later to receive the notifications from Azure.
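Before wiring anything up in Azure, it’s worth confirming the webhook works on its own. A quick check from any machine with curl, with the URL below standing in for the webhook Slack generated for you:

# Post a test message straight to the Slack incoming webhook
curl -X POST -H 'Content-Type: application/json' \
  -d '{"text": "Hello from the Azure Notifications app"}' \
  https://hooks.slack.com/services/XXXX/YYYY/ZZZZ

If the message shows up in your channel, the Slack side is done.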

Creating Conversion Logic App in Azure

Next, you’ll need to create a logic app in Azure that takes the alert provided from Azure and converts the message into something that can be displayed in Slack.

Create a logic app, open Code View, and paste in the following (replacing YOUR_WEBHOOK_HERE with the Slack webhook URL from above):

{
    "definition": {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "actions": {
            "Http": {
                "inputs": {
                    "body": {
                        "text": "@{if(equals(triggerBody()?['data']?['essentials']?['monitorCondition'],'Resolved'), ':green_heart:', ':red_circle:')} @{triggerBody()?['data']?['essentials']?['alertRule']}"
                    },
                    "headers": {
                        "Content-Type": "application/json"
                    },
                    "method": "POST",
                    "uri": "YOUR_WEBHOOK_HERE"
                },
                "runAfter": {},
                "type": "Http"
            }
        },
        "contentVersion": "1.0.0.0",
        "outputs": {},
        "parameters": {},
        "triggers": {
            "manual": {
                "inputs": {
                    "schema": {
                        "properties": {
                            "data": {
                                "properties": {
                                    "alertContext": {
                                        "properties": {
                                            "condition": {
                                                "properties": {
                                                    "allOf": {
                                                        "items": {
                                                            "properties": {
                                                                "dimensions": {
                                                                    "items": {
                                                                        "properties": {
                                                                            "name": {
                                                                                "type": "string"
                                                                            },
                                                                            "value": {
                                                                                "type": "string"
                                                                            }
                                                                        },
                                                                        "required": [
                                                                            "name",
                                                                            "value"
                                                                        ],
                                                                        "type": "object"
                                                                    },
                                                                    "type": "array"
                                                                },
                                                                "metricName": {
                                                                    "type": "string"
                                                                },
                                                                "metricNamespace": {
                                                                    "type": "string"
                                                                },
                                                                "metricValue": {
                                                                    "type": "number"
                                                                },
                                                                "operator": {
                                                                    "type": "string"
                                                                },
                                                                "threshold": {
                                                                    "type": "string"
                                                                },
                                                                "timeAggregation": {
                                                                    "type": "string"
                                                                }
                                                            },
                                                            "required": [
                                                                "metricName",
                                                                "metricNamespace",
                                                                "operator",
                                                                "threshold",
                                                                "timeAggregation",
                                                                "dimensions",
                                                                "metricValue"
                                                            ],
                                                            "type": "object"
                                                        },
                                                        "type": "array"
                                                    },
                                                    "windowSize": {
                                                        "type": "string"
                                                    }
                                                },
                                                "type": "object"
                                            },
                                            "conditionType": {
                                                "type": "string"
                                            },
                                            "properties": {}
                                        },
                                        "type": "object"
                                    },
                                    "essentials": {
                                        "properties": {
                                            "alertContextVersion": {
                                                "type": "string"
                                            },
                                            "alertId": {
                                                "type": "string"
                                            },
                                            "alertRule": {
                                                "type": "string"
                                            },
                                            "alertTargetIDs": {
                                                "items": {
                                                    "type": "string"
                                                },
                                                "type": "array"
                                            },
                                            "description": {
                                                "type": "string"
                                            },
                                            "essentialsVersion": {
                                                "type": "string"
                                            },
                                            "firedDateTime": {
                                                "type": "string"
                                            },
                                            "monitorCondition": {
                                                "type": "string"
                                            },
                                            "monitoringService": {
                                                "type": "string"
                                            },
                                            "originAlertId": {
                                                "type": "string"
                                            },
                                            "resolvedDateTime": {
                                                "type": "string"
                                            },
                                            "severity": {
                                                "type": "string"
                                            },
                                            "signalType": {
                                                "type": "string"
                                            }
                                        },
                                        "type": "object"
                                    }
                                },
                                "type": "object"
                            },
                            "schemaId": {
                                "type": "string"
                            }
                        },
                        "type": "object"
                    }
                },
                "kind": "Http",
                "type": "Request"
            }
        }
    }
}

You can test by running the Logic App directly and checking that a red circle message shows up in the Slack channel.

With this in place, the final step is creating an action group so that alerts can be sent on to the Slack channel.

Before moving on, take note of the HTTP POST URL generated for the Logic App’s request trigger; this is the webhook the action group will call.
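You can also exercise the Logic App from outside the designer by POSTing a minimal payload shaped like the schema above to that URL. The URL below is a placeholder for your Logic App’s HTTP POST URL, and only the fields the Slack message actually uses are included:

# Simulate a fired alert against the Logic App's request trigger
curl -X POST -H 'Content-Type: application/json' \
  -d '{"data": {"essentials": {"alertRule": "Test alert", "monitorCondition": "Fired"}}}' \
  'https://prod-00.<region>.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?...'

A red circle should appear in Slack; swap monitorCondition to "Resolved" to see the green heart instead.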

Setting up Action Group

The final step is setting up an action group with a webhook action pointing at the Logic App URL, making sure the Common Alert Schema is enabled.

Automatically Upgrading Azure Kubernetes Services

Azure provides a means to upgrade Kubernetes clusters using the AKS service. This guide will walk you through using an automation account to upgrade the services on a regular basis, making the process something you don’t need to worry about.

Note that you may want to hold off on doing this for production systems: if an upgrade were to break current functionality, there is no way to revert a cluster back to its previous version.

Create a Powershell Core Function App

First, create a function app that runs on PowerShell Core.

After creating the function app, you’ll also want to increase the default function timeout, since these calls can take a while to complete. Change host.json to include the following:

{
    "functionTimeout": "00:10:00"
}

If you have a large number of clusters to check regularly, consider a dedicated App Service plan instead, which lets you remove the timeout entirely.

Import Azure CLI into the Function App

Next, you’ll want to import Azure CLI into the Function App, to allow for calling the az command.

First, install Azure CLI on your local machine; you’ll be copying its install directory into the Function App. After installing, locate the Azure CLI files at C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2.

Connect to FTP using the publish profile for the Function App (access this through the portal) and copy the entire CLI2 folder into the /home directory. Make sure that all of the files are copied successfully.

To verify everything is working, run the following command (for example, from the Function App’s Kudu console):

D:/home/CLI2/wbin/az.cmd

If you get a successful call back, you’ve imported Azure CLI correctly and can now write the code to programmatically upgrade the AKS clusters.

Create Service Principal

Next, create a service principal with access to the clusters in question, so the function has an account that can log in and perform the upgrades. Run the following locally while logged in to the desired subscription:

az ad sp create-for-rbac -n "aks-autoupgrade-sp" --role contributor --scopes /subscriptions/{SubID}
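If you’d rather not grant contributor over the whole subscription, the same command accepts a narrower scope, for example the resource group that holds your clusters (the resource group name below is a placeholder):

# Same command, scoped to a single resource group instead of the subscription
az ad sp create-for-rbac -n "aks-autoupgrade-sp" --role contributor --scopes /subscriptions/{SubID}/resourceGroups/{ResourceGroupName}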

When this completes, you’ll receive output that includes the appId, password, and tenant. Add these three as configuration values (application settings) on the Function App, either through the portal or with the CLI shown after the list:

  • AZ_USER – appId
  • AZ_PASS – password
  • AZ_TENANT – tenant
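If you prefer the CLI to the portal for this step, something like the following should work (the app and resource group names are placeholders):

# Store the service principal credentials as Function App settings
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings AZ_USER=<appId> AZ_PASS=<password> AZ_TENANT=<tenant>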

Create Timer Function

Next, create a timer function that runs every day, let’s say at noon:

0 0 12 * * *

Use the following code for the function:

param($Timer)

Write-Output "Logging in as Service Principal $env:AZ_USER"
D:/home/CLI2/wbin/az.cmd login --service-principal -u $env:AZ_USER -p $env:AZ_PASS --tenant $env:AZ_TENANT | Out-Null

if (!($?)) {
    Write-Error "Login failed, exiting script."
    exit 1;
}

Write-Output "Getting list of AKS clusters...";
$aksClusters = D:/home/CLI2/wbin/az.cmd aks list | ConvertFrom-Json;

if ($aksClusters.length -le 0) {
  Write-Output "No AKS clusters found, exiting...";
  exit 1;
}

$aksClusters | ForEach-Object -Process {
  $clusterName = $_.name
  Write-Output "$clusterName : checking for upgrades..."
  $upgrades = D:/home/CLI2/wbin/az.cmd aks get-upgrades `
    -g $_.resourceGroup -n $_.name | ConvertFrom-Json
  $agentPoolProfiles = $upgrades.agentPoolProfiles

  if ($agentPoolProfiles.upgrades -eq $null) {
    Write-Output "No upgrades available."
    return;
  }

  $latestUpgrade = $agentPoolProfiles.upgrades[-1];

  $currentKubernetesVersion = $upgrades.agentPoolProfiles.kubernetesVersion;
  $newKubernetesVersion = $latestUpgrade.kubernetesVersion

  Write-Output `
    "Upgrade available: $currentKubernetesVersion => $newKubernetesVersion"

  Write-Output "Upgrading to $newKubernetesversion automatically."
  D:/home/CLI2/wbin/az.cmd aks upgrade -k $newKubernetesVersion -g $_.resourceGroup -n $_.name --no-wait --yes
}

Write-Output "Complete."
exit 0;

To be safe, you can comment out the az aks upgrade line and run the function once to confirm everything works before letting it perform real upgrades.

Setting up Failure Alerts

The final (optional) step is setting up an alert in case of failure. When you created the Function App, an Application Insights resource should have been created alongside it. Go to the ‘Alerts’ section of that Application Insights resource and create an alert for failed runs.

Add your email to an action group so you’re notified when the function fails.
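If you’d rather script the alert than click through the portal, a rough sketch with the Azure CLI is below. Treat the metric name and condition syntax as assumptions to verify against your Application Insights resource:

# Sketch only: alert whenever a failed request/invocation is recorded.
# Verify the metric name ("requests/failed") and fill in the real IDs.
az monitor metrics alert create \
  --name aks-autoupgrade-failures \
  --resource-group <resource-group> \
  --scopes <application-insights-resource-id> \
  --condition "count requests/failed > 0" \
  --action <action-group-id>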

Reference

https://stackoverflow.com/questions/56544059/azure-cli-commands-not-working-inside-azure-function-apps-portal

Setting up SFTP File Migration to Azure Storage Account

This guide assumes you’re running a Linux SFTP server that receives files, and that you want to upload those files into an Azure storage account.

Initial Setup

Install Azure CLI on the SFTP server with the following command:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

Then log in using an account with access to the storage account:

az login

Perform File Migration

The next step is actually sending the files over to a storage account. For this example, we’ll send all of the files in a particular folder to the storage account.

az storage blob upload-batch -d YOUR_CONTAINER --account-name STORAGE_ACCOUNT_NAME -s DIRECTORY

Once the files have been uploaded, move them into a processed directory so they aren’t uploaded again on the next run:

sudo mv DIRECTORY/* /processed/
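Putting the two steps together, a small script along these lines (paths, container, and account names are placeholders) can be dropped into cron on the SFTP server so the upload and archive happen on a schedule:

#!/bin/bash
# Upload everything in the incoming directory to blob storage,
# then archive the files locally so they aren't uploaded again.
INCOMING=/home/sftpuser/incoming
PROCESSED=/processed

az storage blob upload-batch -d YOUR_CONTAINER --account-name STORAGE_ACCOUNT_NAME -s "$INCOMING" \
  && mv "$INCOMING"/* "$PROCESSED"/

The && guard means the files are only moved if the upload succeeds.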

Manually Renew A Let’s Encrypt Certificate from Kubernetes

I work with a series of Kubernetes clusters that are restricted from public access but are still encrypted using Let’s Encrypt certificates. Normally, when renewal is required, the process happens automatically.

In the case above, I have to kick off the renewal process myself. The easiest way is to delete the TLS secret generated for the certificate, which can be done through the Kubernetes dashboard or via the CLI:

kubectl delete secret tls-secret

Once this is done, cert-manager (or whichever controller issued the certificate) should automatically attempt a renewal; if the secret lives in a non-default namespace, pass --namespace to the delete command as well. You can check the status with:

kubectl describe certificate tls-secret --namespace ingress-nginx

You should see the certificate’s status and events update as the new certificate is requested and issued.

Setting up Slack Notifications with Jenkins

Setting up Slack notifications for build results with Jenkins is pretty simple. You’ll need the following:

  • A Jenkins server that you have admin rights to.
  • Slack Notification Plugin
  • A Slack workspace, ideally with a channel dedicated to Jenkins alerts.

Setup

To set up Slack, go to <your-workspace-name>.slack.com/apps and search for the Jenkins CI app.

Add the configuration and set the desired channel where alerts will appear.

After this, you’ll see some directions on setting up Slack on the Jenkins side.

When you’ve finished the setup, make sure to test the connection; you should see a test message appear in the configured Slack channel.

Adding Jenkins Calls to Pipeline

Once the setup is complete, you can use the following examples to call Slack during the pipeline jobs:

stages {
  stage('Build') {
    steps {
      echo 'build'
    }
  }
  stage('Deploy') {
    steps {
      echo 'Deploy'
      slackSend color: 'good', message: "deployment successful => https://yoursite.com"
    }
  }
}
...
post {
  failure {
    slackSend color: 'danger', message: "deployment failed (<${env.BUILD_URL}|Open>)"
  }
  always {
    cleanWs()
  }
}

This example sends a success message once the Deploy stage runs its slackSend step, and a failure message if the build as a whole fails.

Setting up Jenkins to Auto-Update

When setting up Jenkins, you’ll have to perform upgrades on the system yourself. You can do this pretty easily, but let’s automate the process so that each night, Jenkins will download and install the latest version.

To do this you’ll need to have SSH access to the server.

Creating Upgrade Script

First, SSH into the server.

Create a bin/ folder in the user’s directory:

cd ~
mkdir bin

If it doesn’t already exist, create a .bash_profile file and add the following:

 export PATH=$PATH:/home/<YOUR_USER>/bin

Now create the script file for use:

cd bin
touch upgrade-jenkins

Create a script with the following contents:

#!/bin/bash

# Download the latest Jenkins WAR
sudo wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
# Replace the installed WAR with the new version
sudo mv jenkins.war /usr/share/jenkins/jenkins.war
# Restart Jenkins so it picks up the new WAR
sudo systemctl restart jenkins

And make the script executable:

chmod u+x upgrade-jenkins

You can finally test the script by calling upgrade-jenkins. When it runs, Jenkins should restart with the latest version.

Setting up for Automatic Updates

The next step is setting up the script to run each night at some point to promote automatic upgrades, so you don’t have to worry about updating the system. To do this, we’ll create a cron job that runs the above script once a day.

Set up a crontab entry using crontab -e and add the following to run the script every day at 3am (run date first to check whether the system clock is UTC). Note that cron does not read your .bash_profile, so reference the script by its full path:

0 3 * * * /home/<YOUR_USER>/bin/upgrade-jenkins

Once this is in place, the script will run once a day, keeping Jenkins on the latest version without any manual work.
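You can confirm the job is registered (and see the schedule cron will use) with:

# List the current user's cron jobs
crontab -l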


Reference:

https://www.taniarascia.com/how-to-create-and-use-bash-scripts/