Setting up a Linux SFTP Server

To get started, you’ll need the following:

  • A virtual machine with a public IP address.
  • SSH access to the above virtual machine.

Configuring the User

First, SSH into the server and install OpenSSH:

sudo apt install -y ssh openssh-server

Now create the group that will have SFTP access, along with a user belonging to it:

sudo groupadd sftpg
sudo useradd -g sftpg ftpuser
sudo passwd ftpuser

Now set up the directories so that:

  • The user can view the contents of their own folder.
  • The user can upload files to the upload/ folder.

sudo mkdir -p /data/ftpuser/upload
sudo chown -R root:sftpg /data/ftpuser
sudo chown -R ftpuser:sftpg /data/ftpuser/upload

Setting up SFTP

To set up SFTP, append the following to the end of /etc/ssh/sshd_config:

Match Group sftpg
     ChrootDirectory /data/%u
     ForceCommand internal-sftp
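A commonly hardened variant of this Match block also disables forwarding for SFTP-only users. The extra directives below are optional additions on top of the steps above, not something the guide requires:

```
Match Group sftpg
     ChrootDirectory /data/%u
     ForceCommand internal-sftp
     AllowTcpForwarding no
     X11Forwarding no
```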

As you’re using a password to authenticate the above user, make sure the following is in the file as well:

PasswordAuthentication yes

Restart the service (you can check the config for syntax errors first with sudo sshd -t):

sudo systemctl restart sshd


To verify everything is in place, try logging into the server as the new user:

sftp ftpuser@YOUR_DOMAIN

Once connected, try uploading a file:

put D:/local-file.png file.png

And then verify the file is there by downloading it:

get file.png D:/downloaded-file.png
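If you want to script this check, sftp’s batch mode can run the same commands non-interactively. The batch file name and paths below are illustrative:

```
# check.batch — run with: sftp -b check.batch ftpuser@YOUR_DOMAIN
cd upload
put D:/local-file.png file.png
get file.png D:/downloaded-file.png
```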


Setting up Artifactory to use a root domain with a Reverse Proxy

When installing Artifactory, you’ll be able to start using it at http://YOUR_DOMAIN:8081/artifactory. To change this to a simpler domain, follow this guide. You’ll need the following:

  • Artifactory administrative access
  • SSH access to the server.

Setting up Reverse Proxy to Simplify Domain

First, SSH into the server and install Apache:

sudo apt install apache2 -y

This should create a default page you can access by hitting the root URL.

Now configure the appropriate modules:

sudo a2enmod proxy_http proxy_ajp rewrite deflate headers proxy_balancer proxy_connect proxy_html ssl lbmethod_byrequests slotmem_shm proxy

While logged in as an admin in Artifactory, access Admin -> HTTP Settings and configure the reverse proxy settings for your domain.

After saving, you’ll see the ability to View/Download the configuration on the right side of the page. Download the contents and replace the contents of /etc/apache2/sites-enabled/000-default.conf on the Artifactory server with them.
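For reference, the generated configuration is roughly shaped like the following. This is a simplified sketch, not the exact file Artifactory produces:

```
<VirtualHost *:80>
    ServerName YOUR_DOMAIN
    ProxyPreserveHost on
    ProxyPass / http://localhost:8081/artifactory/
    ProxyPassReverse / http://localhost:8081/artifactory/
</VirtualHost>
```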

Restart Apache with sudo systemctl restart apache2.

Now confirm that you can reach Artifactory at http://YOUR_DOMAIN.

Setting up SSL using Let’s Encrypt

Now that you can access the domain, let’s secure the domain using Let’s Encrypt.

Use the following to check which version of Linux you’re running: lsb_release -a

Most likely, you’ll be running Ubuntu 18.04; if so, you can use these steps. If not, check the Certbot page for the appropriate directions.

You should be able to generate and configure the cert automatically by following the directions above. Assuming this succeeds, choose to redirect HTTP traffic to HTTPS, removing plain HTTP access.
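When you pick the redirect option, certbot typically appends a rewrite like this to the HTTP VirtualHost in your Apache config (shown with YOUR_DOMAIN as a placeholder):

```
RewriteEngine on
RewriteCond %{SERVER_NAME} =YOUR_DOMAIN
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
```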

Now confirm both of the following:

  1. You can access the HTTPS version of your domain.
  2. HTTP requests automatically redirect to HTTPS.



Resetting Artifactory Admin Password

If you’ve lost access to the administrator account in your on-premises Artifactory, this guide will walk you through setting it back up. You’ll need SSH access to the server in order to complete this guide.

To get started, SSH into the server and open $ARTIFACTORY_HOME/etc/security/access/bootstrap.creds. Enter the following information:
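Per JFrog’s password-reset flow, the file contains a single line granting a temporary password to the access-admin user (PASSWORD is a placeholder you choose, reused in the curl call below):

```
access-admin@127.0.0.1=PASSWORD
```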


Now restrict the permissions so only the owner can read and write the file:

chmod 600 bootstrap.creds

And finally, restart the Artifactory service.

Once that’s done, make the following PATCH call while still SSHed into the server:

curl --request PATCH \
  --url http://localhost:8081/artifactory/api/access/api/v1/users/admin \
  --header 'authorization: Basic <ENCODE access-admin:PASSWORD>' \
  --header 'content-type: application/json' \
  --data '{"password":"NEW_PASSWORD"}'
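The <ENCODE access-admin:PASSWORD> placeholder is the Base64 encoding of the credentials. One way to generate it, with PASSWORD being the temporary password from bootstrap.creds:

```shell
# Base64-encode the access-admin credentials for the Authorization header
printf 'access-admin:PASSWORD' | base64
# → YWNjZXNzLWFkbWluOlBBU1NXT1JE
```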

If you receive a response like the following, you’re all set:

{
  "username" : "admin",
  "realm" : "internal",
  "status" : "enabled",
  "allowed_ips" : [ "*" ],
  "created" : "2019-08-01T13:47:16.959Z",
  "modified" : "2019-09-04T15:25:02.398Z",
  "last_login_time" : "2019-08-21T13:46:55.061Z",
  "last_login_ip" : "",
  "custom_data" : {
    "updatable_profile" : "true",
    "artifactory_admin" : "true"
  },
  "password_expired" : false,
  "password_last_modified" : 1567610702390,
  "groups" : [ ]
}

Adding Entity Framework/SQL Server to a Function App

When working with function apps, you may want to connect SQL Server to allow for using an API with data persistence.

Setting up Project to Use Entity Framework

Add the following to your .csproj file, in the section with PackageReference entries:

    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.2.3" />

And then add the following target to allow for copying the project DLL (supporting builds on both Windows and Linux):

  <Target Name="PostBuild" AfterTargets="PostBuildEvent">
    <Exec Command="copy /Y &quot;$(TargetDir)bin\$(ProjectName).dll&quot; &quot;$(TargetDir)$(ProjectName).dll&quot;" Condition=" '$(OS)' == 'Windows_NT'" />
    <Exec Command="cp &quot;$(TargetDir)bin/$(ProjectName).dll&quot; &quot;$(TargetDir)$(ProjectName).dll&quot;" Condition=" '$(OS)' != 'Windows_NT'" />
  </Target>

For setting this up, I’d recommend having at least one model ready for use. In our case, let’s use a simple example with Models/Entry.cs:

using System;

namespace YOUR_NAMESPACE.Models
{
    public class Entry
    {
        public int EntryId { get; set; }

        public DateTime Date { get; set; }

        public string Content { get; set; }
    }
}

Now create the Context.cs and ContextFactory.cs files in the Models folder:

using Microsoft.EntityFrameworkCore;

namespace YOUR_NAMESPACE.Models
{
    public class Context : DbContext
    {
        public Context(DbContextOptions<Context> options)
            : base(options)
        { }

        public DbSet<Entry> Entries { get; set; }
    }
}

And ContextFactory.cs:

using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace YOUR_NAMESPACE.Models
{
    public class ContextFactory : IDesignTimeDbContextFactory<Context>
    {
        public Context CreateDbContext(string[] args)
        {
            string SqlConnection =
                Environment.GetEnvironmentVariable("CONN_STR");

            if (SqlConnection == null)
                throw new ArgumentNullException(
                    "Environment variable CONN_STR not set.");

            var optionsBuilder = new DbContextOptionsBuilder<Context>();
            optionsBuilder.UseSqlServer(SqlConnection);

            return new Context(optionsBuilder.Options);
        }
    }
}

Now create a Startup.cs file in root:

using System;
using YOUR_NAMESPACE.Models;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(YOUR_NAMESPACE.Startup))]

namespace YOUR_NAMESPACE
{
    class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            ConfigureDatabase(builder);
        }

        private void ConfigureDatabase(IFunctionsHostBuilder builder)
        {
            string SqlConnection =
                Environment.GetEnvironmentVariable("CONN_STR");

            if (SqlConnection == null)
                throw new ArgumentNullException(
                    "Environment variable CONN_STR not set.");

            builder.Services.AddDbContext<Context>(
                options => options.UseSqlServer(SqlConnection));

            var optionsBuilder = new DbContextOptionsBuilder<Context>();
            optionsBuilder.UseSqlServer(SqlConnection);

            using (var context = new Context(optionsBuilder.Options))
            {
                try
                {
                    context.Database.Migrate();
                }
                catch (Exception e)
                {
                    throw new Exception(
                        $"Error when migrating database: {e.Message}");
                }
            }
        }
    }
}

Setting up Database

Finally, you’ll need to set up a database for use. Create the database and note its connection string – let’s use a local SQL Server instance and set it in the Azure Function project’s local.settings.json:

{
  "Values": {
    "CONN_STR": "Data Source=localhost\\SQLEXPRESS;Initial Catalog=DB_NAME;Integrated Security=SSPI;"
  }
}

Once this is set, setting up migrations is the next step. The design-time factory reads CONN_STR, so set the same connection string as an environment variable on your PC (e.g. with set CONN_STR=... in a Windows command prompt), then create the initial migration:

dotnet ef migrations add InitialCreate

With this step done, start the application and check to see if your database was updated correctly, and that you can see the tables desired.


Setting Up a Root Domain with Azure DNS

When setting up Azure CDN, you may want to use a naked domain (e.g. yourdomain.com, with no www) to access it. It’s a little tricky, so here’s how I set it up:

Creation and Verification

Create the CDN profile and endpoint in Azure.

Next, set up a custom domain.

To do this, you’ll need to add a cdnverify CNAME record to your domain’s DNS. It will look like this:

  • Host: cdnverify.<yourdomain>
  • Value: cdnverify.<endpoint>

Setting up SSL and HTTPS Redirect

With the domain working, you’ll notice that you only have an insecure connection in place, so let’s set that up.

One of the disadvantages of using a root domain is that you cannot use the Azure CDN self-generated certificates. This means you’ll have to bring your own certificate in. There are three options immediately available:

  1. Purchase a certificate through Azure (easy and reliable, but starts at ~$70).
  2. Purchase a certificate from a reputable CA (from Namecheap, for example, you can get a cert for as low as ~$8 a year, although these are typically basic domain-validated certs).
  3. Use the manual process at Let’s Encrypt to generate a certificate (free, but will need to be renewed regularly).

Obtaining a Let’s Encrypt Certificate Using Windows

To use the manual process, you’ll need to start with the following:

Install WSL (Ubuntu 18.04) onto your computer, and open a CLI.

Install certbot with the following commands:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

And then run certbot in manual mode:

sudo certbot certonly --manual --preferred-challenges dns-01

You’ll need to create a TXT record with the data provided by certbot. After creating the TXT record, use a DNS lookup tool (such as dig or nslookup) to verify the record has propagated before progressing (this may take 5-10 minutes).

Once you’ve verified (and deleted the generated TXT record), certbot will produce both a certificate (fullchain.pem) and a private key (privkey.pem).

Next, you’ll add these to an Azure Key Vault to finish setting up HTTPS.

Adding Certificate to Key Vault, Configuring Certificate

To add the certificate to the Key Vault, you first need to convert it to PFX format (using WSL):

openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out cert.pfx -passout pass:password
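You can sanity-check the resulting PFX bundle before importing it. This sketch builds a throwaway self-signed cert to stand in for your real fullchain.pem/privkey.pem, then round-trips it through the same conversion:

```shell
cd "$(mktemp -d)"
# Throwaway self-signed cert standing in for fullchain.pem / privkey.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.com" \
  -keyout privkey.pem -out fullchain.pem -days 1 2>/dev/null
# Bundle to PFX as in the guide, then confirm the cert inside it
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
  -out cert.pfx -passout pass:password
openssl pkcs12 -in cert.pfx -passin pass:password -nokeys 2>/dev/null |
  openssl x509 -noout -subject
```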

Now, back in the Azure CLI, import the certificate:

az keyvault certificate import --vault-name VAULT_NAME -n cert -f cert.pfx --password password

Once the certificate is added, set up Custom HTTPS for the endpoint:

The final steps are setting up a service principal for access, using the command provided (make sure to use Connect-AzAccount). Once this is done, you’ll need to allow some time (around 6-8 hours) for certificate import and CDN provisioning.


To make sure everything is in place, first check that the endpoint’s status shows Custom HTTPS as enabled.

Afterwards, try accessing your site using HTTPS to confirm everything is working.

Redirecting All Non-HTTPS/Non-root Traffic

The last step is redirecting anything not going to https://<yourdomain>.com to the correct place. This can be done on the Premium CDN from Verizon plan by changing the traffic rules in the Custom rules engine:

This will take some time to propagate; you’ll know it’s complete when you see “Active XML” next to the rule.

Once this is done, you can validate by trying to access the site using HTTP, and seeing it redirect to HTTPS (make sure to use an Incognito tab if using Chrome).


Viewing Traffic to Azure VMs using NSG Flow Logs

Setting up NSG flow logs allows for viewing the traffic coming in through a network security group. This can be useful for a few things:

  • Troubleshooting access issues (maybe something shouldn’t have access, or vice versa).
  • Providing logging on the traffic accessing a server.

You’ll need the following to get started with this guide:

  • A configured network watcher.
  • An Azure subscription with the Insights provider installed.
  • An existing Network Security Group.
  • A storage account (ideally stored in the same resource group) that will hold the log data.


Go into Network Watcher and click on ‘NSG Flow Logs’:

Turn on flow logs, and select the storage account to store logs in. A few notes here:

  • If retention is kept at 0, all logs will stay in the storage account forever. Useful for audits, but this will cost more in the long run (I personally set it to 7 days).

Accessing Logs

For viewing the logs, you can either use the Azure Portal or use the Microsoft Azure Storage Explorer.

View the insights-logs-networksecuritygroupflowevent container in the configured storage account.

Access the PT1H.json file.

Each comma-separated value in a flow tuple corresponds to the following:

  1. Timestamp
  2. Inbound IP (coming in from Internet)
  3. Outbound IP (going through the NSG)
  4. Inbound Port
  5. Outbound Port
  6. Protocol
  7. Traffic Flow (I – Inbound, O – Outbound)
  8. Acceptance (A – Allowed, D – Denied)
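A quick way to see this mapping is to split a tuple on the commas. The tuple below is made up for illustration:

```shell
# Hypothetical NSG flow tuple (fields in the order listed above)
tuple="1542110377,203.0.113.7,10.0.0.4,49152,443,T,I,A"
IFS=',' read -r ts src dst sport dport proto dir dec <<EOF
$tuple
EOF
echo "time=$ts $src:$sport -> $dst:$dport proto=$proto dir=$dir decision=$dec"
```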



Adding SSL to an AKS Ingress Controller (without using Let’s Encrypt)

I wrote about the ability to add HTTPS to an AKS cluster using Let’s Encrypt, but recently ran into a case where I needed to add a cert from a specific CA to the cluster.

To do this, you need the following:

  • An AKS cluster deployed in an Azure tenant.
  • A certificate (should start with -----BEGIN CERTIFICATE-----)
  • A private key associated with the certificate above (used when creating the CSR for the cert; it will start with -----BEGIN RSA PRIVATE KEY-----)

Import the cert into the cluster:

kubectl create secret tls tls-ca-secret --key certificate.key --cert certificate.crt

After the cert is imported, create an Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # Annotation keys reconstructed here; adjust to match your nginx ingress setup
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: '10m'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
    - hosts:
        - YOUR_DOMAIN
      secretName: tls-ca-secret
  rules:
    - host: YOUR_DOMAIN
      http:
        paths:
          - path: /some/endpoint
            backend:
              serviceName: some-service
              servicePort: 80

Afterwards, check that your cert is being served on the endpoint defined in the Ingress.


Fixing “unable to get credential storage lock: File exists” when Deploying Function App via Jenkins

When working with deploying Azure Function Apps with Jenkins, I ran into an issue when trying to rebuild a Function App from scratch with the same name. I was unable to deploy the codebase via Pipeline due to the following error:

unable to get credential storage lock: File exists

I was able to fix it by doing the following:

SSH into the server.

Log in as the user that Jenkins jobs run under (for example):

sudo su -s /bin/bash jenkins

Open the user’s .git-credentials file, and remove the reference to the pre-existing Function App SCM.

Delete the .git-credentials.lock file.
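These two steps can be sketched as follows, using a throwaway copy of the credentials file (the Function App SCM hostname below is hypothetical):

```shell
# Reproduce the cleanup against a throwaway copy of the credentials file
cd "$(mktemp -d)"
cat > .git-credentials <<'EOF'
https://user:token@myfunctionapp.scm.azurewebsites.net
https://user:token@github.com
EOF
# Drop the stale SCM entry, then remove the lock file
grep -v 'myfunctionapp.scm.azurewebsites.net' .git-credentials > cleaned
mv cleaned .git-credentials
rm -f .git-credentials.lock
cat .git-credentials
# → https://user:token@github.com
```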

After doing this, try running the job again and ensure the issue has been solved.


Setting up a Configuration Page for Your 3.90 or Below NopCommerce Plugin

After writing your NopCommerce plugin, a common next step is to create a ‘Configure’ page to allow for configuring the settings of the plugin.

First, create a model that will represent the values to configure in the plugin – Models/YourPluginModel.cs

Next, create a controller Controllers/YourPluginController.cs, which will look something like this:

public class YourPluginController : BasePluginController
{
        private readonly ILocalizationService _localizationService;

        public YourPluginController(ILocalizationService localizationService)
        {
            _localizationService = localizationService;
        }

        public ActionResult Configure()
        {
            var model = new YourPluginModel();
            return View("~/Plugins/Misc.YourPlugin/Views/YourPlugin/Configure.cshtml", model);
        }

        [HttpPost]
        public ActionResult Configure(YourPluginModel model)
        {
            if (!ModelState.IsValid)
                return Configure();

            // Persist the model's values to your plugin settings here.

            return Configure();
        }
}

Finally, create a view Views/YourPlugin/Configure.cshtml:

    Layout = "";
@using Nop.Web.Framework

@using (Html.BeginForm())
    <div class="panel-group">
        <div class="panel panel-default">
            <div class="panel-body">
                <div class="form-group">
                    <div class="col-md-3">
                    <div class="col-md-9">
                        <input type="submit" name="save" class="btn bg-blue" value="@T("Admin.Common.Save")" />

Make sure the newly created view’s Copy to Output Directory property is set to ‘Copy if newer’.

Now when running your store, you should be able to access the Configure page for your plugin, assuming the plugin is installed.


Setting up Jenkins to Run Angular Unit Tests

To be able to run unit tests in a Linux-based Jenkins instance, you need Google Chrome installed for the test runner to use. SSH into the Jenkins instance, then download and install the package:

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

When installing, you may run into a dependency issue; if so, run:

sudo apt-get install -f
sudo dpkg -i google-chrome-stable_current_amd64.deb
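Since a Jenkins agent usually has no display, the tests may also need to run against headless Chrome. Whether this applies depends on your Angular project’s setup, but the usual tweak is in karma.conf.js:

```
// karma.conf.js (fragment)
browsers: ['ChromeHeadless'],
singleRun: true,
```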

After Google Chrome is installed, you should be able to run npm test, meaning Jenkins can run your unit tests as part of the CI process.