Setting up Jenkins to Auto-Update

When setting up Jenkins, you’ll have to perform upgrades on the system yourself. You can do this pretty easily, but let’s automate the process so that each night, Jenkins will download and install the latest version.

To do this you’ll need to have SSH access to the server.

Creating Upgrade Script

First, SSH into the server.

Create a bin/ folder in your user’s home directory:

cd ~
mkdir bin

If it doesn’t already exist, create a .bash_profile file in the home directory and add the following (then run source ~/.bash_profile, or log out and back in, so the new PATH takes effect):

 export PATH=$PATH:/home/<YOUR_USER>/bin

Now create the script file for use:

cd bin
touch upgrade-jenkins

Open upgrade-jenkins and add the following contents:

#!/bin/bash

sudo wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
sudo mv jenkins.war /usr/share/jenkins/jenkins.war
sudo systemctl restart jenkins

And make the script executable:

chmod u+x upgrade-jenkins

You can finally test the script by calling upgrade-jenkins. When it runs, Jenkins should restart with the latest version.
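If you want to confirm which version is now running, Jenkins reports it in a response header; here’s a quick check (a minimal sketch, assuming Jenkins listens on the default port 8080):

# the X-Jenkins response header contains the running version
curl -sI http://localhost:8080 | grep -i x-jenkins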

Setting up for Automatic Updates

The next step is setting up the script to run each night so upgrades happen automatically and you don’t have to worry about updating the system yourself. To do this, we’ll create a cron job that runs the above script once a day.

Set up a crontab entry using crontab -e, and add the following to run the script every day at 3 AM (check date to see whether the system clock is UTC). Note that cron runs with a minimal PATH, so reference the script by its full path:

0 3 * * * /home/<YOUR_USER>/bin/upgrade-jenkins

Once this is in place, the script will run once a day, downloading and installing the latest version of Jenkins on a regular basis.


Reference:

https://www.taniarascia.com/how-to-create-and-use-bash-scripts/

Setting up a Linux SFTP Server

To get started, you’ll need the following:

  • A virtual machine with a public IP address.
  • SSH access to the above virtual machine.

Configuring the User

First, SSH into the server and install OpenSSH:

sudo apt install ssh openssh-server

Now create the group that will have SFTP access, along with a user in that group:

sudo groupadd sftpg
sudo useradd -g sftpg ftpuser
sudo passwd ftpuser

Now set up the directories to allow for the following:

  • The user can view the contents of their own folder.
  • The user can upload files to the upload/ folder.

sudo mkdir -p /data/ftpuser/upload
sudo chown -R root:sftpg /data/ftpuser
sudo chown -R ftpuser:sftpg /data/ftpuser/upload

Setting up SFTP

To set up SFTP, append the following to the end of /etc/ssh/sshd_config:

Match Group sftpg
     ChrootDirectory /data/%u
     ForceCommand internal-sftp

As you’re using a password to authenticate the above user, make sure the following is in the file as well:

PasswordAuthentication yes
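Before restarting, it’s worth validating the new configuration, since a bad sshd_config can lock you out of SSH entirely:

# test mode: checks the validity of sshd_config and exits
sudo sshd -t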

Restart the service:

sudo systemctl restart sshd

Verification

To verify everything is in place, try logging into the server as the new user:

sftp ftpuser@YOUR_DOMAIN

Once connected, try uploading a file:

put D:/local-file.png file.png

And then download it to check that the file is there:

get file.png D:/downloaded-file.png

Setting up Artifactory to use a root domain with a Reverse Proxy

When installing Artifactory, you’ll be able to start using it at http://YOUR_DOMAIN:8081/artifactory. To change this to a simpler domain, follow this guide. You’ll need the following:

  • Artifactory administrative access
  • SSH access to the server.

Setting up Reverse Proxy to Simplify Domain

First, SSH into the server and install Apache:

sudo apt install apache2

This should create a default page you can access by hitting the root URL of the server.

Now configure the appropriate modules:

sudo a2enmod proxy_http proxy_ajp rewrite deflate headers proxy_balancer proxy_connect proxy_html ssl lbmethod_byrequests slotmem_shm proxy

While logged in as an admin in Artifactory, access Admin -> HTTP Settings and fill in the reverse proxy settings for Apache (server provider, public server name, and ports).

After saving, you’ll see the ability to View/Download the generated configuration on the right side of the page. Download it and replace the contents of /etc/apache2/sites-enabled/000-default.conf on the Artifactory server with it.

Restart Apache with sudo systemctl restart apache2.

Now confirm that you can request Artifactory using http://YOUR_DOMAIN.
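A quick way to check this from a CLI (assuming DNS for YOUR_DOMAIN already points at the server):

# the response should come from Artifactory rather than the default Apache landing page
curl -I http://YOUR_DOMAIN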

Setting up SSL using Let’s Encrypt

Now that you can access the domain, let’s secure the domain using Let’s Encrypt.

Use the following to check which version of Linux you’re running: lsb_release -a

Most likely you’ll be running Ubuntu 18.04; if so, follow the Certbot directions for Apache on Ubuntu 18.04. If not, check the Certbot page for the appropriate directions.

You should be able to generate and configure the cert automatically by following those directions. Assuming this is successful, choose the option to redirect HTTP traffic to HTTPS, removing plain HTTP access.
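For reference, with the Apache plugin installed the whole run is a single command (a sketch; substitute your actual domain for YOUR_DOMAIN):

# requests a cert, updates the Apache config, and prompts about HTTP-to-HTTPS redirection
sudo certbot --apache -d YOUR_DOMAIN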

Now confirm that you can both:

  1. Access the HTTPS version of your domain above.
  2. When trying to access HTTP, it automatically redirects to HTTPS.

Reference: https://www.jfrog.com/confluence/display/RTF/Configuring+Apache

Resetting Artifactory Admin Password

If you’ve lost access to the administrator account in your on-premises Artifactory instance, this guide will walk you through setting it back up. You’ll need SSH access to the server in order to complete this guide.

To get started, SSH into the server and open $ARTIFACTORY_HOME/etc/security/access/bootstrap.creds. Add the following line (replacing PASSWORD with a temporary password):

access-admin@127.0.0.1=PASSWORD

Now restrict the file’s permissions so only the owner can read and write it:

chmod 600 bootstrap.creds

And finally, restart Artifactory.

Once that’s done, make the following PATCH call while still SSHed into the server:

curl --request PATCH \
  --url http://localhost:8081/artifactory/api/access/api/v1/users/admin \
  --header 'authorization: Basic <ENCODE access-admin:PASSWORD>' \
  --header 'content-type: application/json' \
  --data '{"password":"NEW_PASSWORD"}'
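The authorization header is the Base64 encoding of access-admin:PASSWORD, where PASSWORD is the value you put in bootstrap.creds. One way to generate it:

# -n keeps echo from appending a newline before encoding
echo -n 'access-admin:PASSWORD' | base64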

If you receive a response like the following, you’re all set:

{
  "username" : "admin",
  "realm" : "internal",
  "status" : "enabled",
  "allowed_ips" : [ "*" ],
  "created" : "2019-08-01T13:47:16.959Z",
  "modified" : "2019-09-04T15:25:02.398Z",
  "last_login_time" : "2019-08-21T13:46:55.061Z",
  "last_login_ip" : "204.194.141.30",
  "custom_data" : {
    "updatable_profile" : "true",
    "artifactory_admin" : "true"
  },
  "password_expired" : false,
  "password_last_modified" : 1567610702390,
  "groups" : [ ]
}

Adding Entity Framework/SQL Server to a Function App

When working with function apps, you may want to connect to SQL Server so that your API has data persistence.

Setting up Project to Use Entity Framework

Add the following to your .csproj file, in the section with PackageReference entries:

<ItemGroup>
    ...
    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.2.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.2.3" />
</ItemGroup>

And then add the following to allow for copying the project DLL:

...
  <Target Name="PostBuild" AfterTargets="PostBuildEvent">
    <Exec Command="copy /Y &quot;$(TargetDir)bin\$(ProjectName).dll&quot; &quot;$(TargetDir)$(ProjectName).dll&quot;" />
  </Target>
</Project>

For setting this up, I’d recommend having at least one model ready for use. In our case, let’s use a simple example, Models/Entry.cs:

using System;
using System.ComponentModel.DataAnnotations;

namespace YOUR_NAMESPACE.Models
{
  public class Entry
  {
    [Key]
    public int EntryId { get; set; }
    public DateTime Date { get; set; }
    public string Content { get; set; }
  }
}

Now create the Context.cs and ContextFactory.cs files in the Models folder:

// Context.cs
using Microsoft.EntityFrameworkCore;

namespace YOUR_NAMESPACE.Models
{
  public class Context : DbContext
  {
    public Context(DbContextOptions<Context> options)
      : base(options)
    { }

    public DbSet<Entry> Entries { get; set; }
  }
}

// ContextFactory.cs
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace YOUR_NAMESPACE.Models
{
  public class ContextFactory : IDesignTimeDbContextFactory<Context>
  {
    public Context CreateDbContext(string[] args)
    {
      var optionsBuilder = new DbContextOptionsBuilder<Context>();
      optionsBuilder.UseSqlServer(Environment.GetEnvironmentVariable("SqlConnectionString"));

      return new Context(optionsBuilder.Options);
    }
  }
}

Now create a Startup.cs file in root:

using System;
using YOUR_NAMESPACE.Models;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(YOUR_NAMESPACE.Startup))]

namespace YOUR_NAMESPACE
{
  class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      string SqlConnection =
        Environment.GetEnvironmentVariable("SqlConnectionString");

      builder.Services.AddDbContext<Context>(
        options => options.UseSqlServer(SqlConnection)
      );
    }
  }
}

Setting up Database

Finally, you’ll need to set up a database for use. Create the database, then set the connection string – here we’ll use SQL Server locally and put the connection string in the Azure Function project’s local.settings.json:

"Values": {
  ...
  "SqlConnectionString": "Data Source=localhost\\SQLEXPRESS;Initial Catalog=DB_NAME;Integrated Security=SSPI;"
},

Once this is set, run the initial migration (the first line below sets the connection string for the current PowerShell session so the design-time factory can pick it up):

$env:SqlConnectionString="CONN_STRING"
dotnet ef migrations add InitialCreate
dotnet ef database update

With this step done, check that your database was updated correctly and that the expected tables exist.
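One quick way to check from a terminal (a sketch, assuming the local SQLEXPRESS instance and Windows authentication from the connection string above):

# lists the user tables created by the migration
sqlcmd -S localhost\SQLEXPRESS -d DB_NAME -E -Q "SELECT name FROM sys.tables;"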

Setting Up a Root Domain with Azure DNS

When setting up Azure CDN, you may want to use a naked domain (yourdomain.com) to access it. It’s a little tricky, so here’s how I set it up.

Creation and Verification

Create the CDN profile and endpoint in Azure.

Next, set up a custom domain on the CDN endpoint.

To verify ownership of the domain first, you’ll need to add a cdnverify CNAME record to your domain’s DNS. It will look like this:

  • Host: cdnverify.<yourdomain>
  • Value: cdnverify.<endpoint>.azureedge.net

Creating Root Domain Link

Next, ping the endpoint hostname above using a CLI:

ping <endpoint_hostname>

And add the IP address it resolves to as an A record on the root (@) of your DNS zone.
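If the zone is hosted in Azure DNS, the record can also be added from the Azure CLI (a sketch; RESOURCE_GROUP and ENDPOINT_IP are placeholders for your zone’s resource group and the IP returned by the ping above):

# adds an A record at the zone apex (@) pointing at the CDN endpoint's IP
az network dns record-set a add-record \
  --resource-group RESOURCE_GROUP \
  --zone-name yourdomain.com \
  --record-set-name "@" \
  --ipv4-address ENDPOINT_IP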

Once this validates, check and see that the domain works as intended.

Setting up SSL and HTTPS Redirect

With the domain working, you’ll notice that you only have an insecure connection in place, so let’s set up SSL.

One of the disadvantages of using a root domain is that you cannot use the Azure CDN self-generated certificates. This means you’ll have to bring your own certificate in. There are three options immediately available:

  1. Purchase a certificate from Azure (easy and reliable, but starts at ~$70).
  2. Purchase a certificate from a reputable CA (such as Namecheap, where a basic domain-validation cert can be as low as ~$8 a year).
  3. Use the manual process at Let’s Encrypt to generate a certificate (free, but will need to be renewed regularly).

Obtaining a Let’s Encrypt Certificate Using Windows

To use the manual process, you’ll need to start with the following:

Install WSL (Ubuntu 18.04) onto your computer, and open a CLI.

Install certbot with the following commands:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

And then run certbot in manual mode:

sudo certbot certonly --manual --preferred-challenges dns-01

You’ll need to create a TXT record on the _acme-challenge subdomain of your domain with the value certbot provides.

Once you’ve completed verification (and deleted the TXT record afterwards), certbot will have generated a certificate chain and a private key.
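By default, certbot places the generated files under /etc/letsencrypt/live/<yourdomain>/:

sudo ls /etc/letsencrypt/live/<yourdomain>/
# cert.pem  chain.pem  fullchain.pem  privkey.pem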

Next, you’ll add these to an Azure Key Vault to finish setting up HTTPS.

Adding Certificate to Key Vault, Configuring Certificate

To add the certificate to the Key Vault, you first need to convert it to PFX format (using WSL):

openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out cert.pfx -passout pass:password

Now, back in the Azure CLI, import the certificate:

az keyvault certificate import --vault-name VAULT_NAME -n cert -f cert.pfx --password password

Once the certificate is added, enable Custom HTTPS on the CDN endpoint’s custom domain, choosing to use your own certificate from the Key Vault.

The final step is granting a service principal access to the Key Vault using the command provided on that page (make sure to use Connect-AzAccount first). Once this is done, you’ll need to allow some time (around 6-8 hours) for certificate import and CDN provisioning.

Verification

To make sure everything is in place, first check that the endpoint shows Custom HTTPS as enabled.

Afterwards, try accessing your site using HTTPS to confirm everything is working.

Redirecting all non-HTTPS/root Traffic

The last step to getting this working is redirecting anything not going to https://<yourdomain>.com to the correct place. This can be done on the Premium CDN from Verizon plan by adding rules in the Custom rules engine.

This will take some time to propagate; you’ll know it’s complete when you see “Active XML” next to the rule.

Once this is done, you can validate by trying to access the site using HTTP, and seeing it redirect to HTTPS (make sure to use an Incognito tab if using Chrome).
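You can also check the redirect from a CLI, which avoids any browser caching (a minimal sketch):

# expect a 301/302 response with a Location header pointing at the HTTPS site
curl -sI http://<yourdomain>.com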

Viewing Traffic to Azure VMs using NSG Flow Logs

Setting up NSG flow logs allows for viewing the traffic coming in through a network security group. This can be useful for a few things:

  • Troubleshooting access issues (maybe something shouldn’t have access, or vice versa).
  • Providing logging on the traffic accessing a server.

You’ll need the following to get started with this guide:

  • A Network Watcher configured for the relevant region.
  • An Azure subscription with the Microsoft.Insights resource provider registered.
  • An existing Network Security Group.
  • A storage account (ideally stored in the same resource group) that will hold the log data.

Configuration

Go into Network Watcher, click on ‘NSG Flow Logs’, and select the NSG you want to configure.

Turn on Flow logs, and select the storage account to store logs in. A few notes here:

  • If retention is kept at 0, all logs will stay in the storage account forever. Useful for audits, but it will end up costing more in the long run. (I personally set it to 7 days.)

Accessing Logs

For viewing the logs, you can either use the Azure Portal or use the Microsoft Azure Storage Explorer.

View the insights-logs-networksecuritygroupflowevent container in the configured storage account.

Access the PT1H.json file.

Each flow tuple in the file is a comma-separated string; the fields map to the following:

  1. Timestamp (Unix epoch)
  2. Source IP
  3. Destination IP
  4. Source Port
  5. Destination Port
  6. Protocol (T – TCP, U – UDP)
  7. Traffic Flow (I – Inbound, O – Outbound)
  8. Acceptance (A – Allowed, D – Denied)
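If you’d rather work from a CLI, the raw tuples listed above can be pulled out of a downloaded PT1H.json (a sketch, assuming jq is installed):

# prints one comma-separated flow tuple per line
jq -r '.records[].properties.flows[].flows[].flowTuples[]' PT1H.json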

Reference

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-portal

Adding SSL to an AKS Ingress Controller (without using Let’s Encrypt)

I wrote about the ability to add HTTPS to an AKS cluster using Let’s Encrypt, but recently ran into a case where I needed to add a cert from a specific CA to the cluster.

To do this, you need the following:

  • An AKS cluster deployed in an Azure tenant.
  • A certificate (should start with -----BEGIN CERTIFICATE-----)
  • A private key associated with the certificate above (used when creating the CSR for the cert, and will start with -----BEGIN RSA PRIVATE KEY-----)

Import the cert into the cluster:

kubectl create secret tls tls-ca-secret --key certificate.key --cert certificate.crt

After the cert is imported, create an Ingress resource that references it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: '10m'
    nginx.ingress.kubernetes.io/enable-cors: 'true'
spec:
  tls:
    - hosts:
        - YOUR_DOMAIN
      secretName: tls-ca-secret
  rules:
  - host: YOUR_DOMAIN
    http:
      paths:
      - path: /some/endpoint
        backend:
          serviceName: some-service
          servicePort: 80

Afterwards, check to ensure your cert is coming through using the endpoint defined in the Ingress Controller.
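One way to do that check from a CLI (a sketch, assuming YOUR_DOMAIN already resolves to the ingress controller’s public IP):

# prints the subject, issuer, and validity dates of the cert the ingress serves
openssl s_client -connect YOUR_DOMAIN:443 -servername YOUR_DOMAIN </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates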

Fixing “unable to get credential storage lock: File exists” when Deploying Function App via Jenkins

When working with deploying Azure Function Apps with Jenkins, I ran into an issue when trying to rebuild a Function App from scratch with the same name. I was unable to deploy the codebase via Pipeline due to the following error:

unable to get credential storage lock: File exists

I was able to fix it by doing the following:

SSH into the server.

Log in as the user that Jenkins jobs run under (for example):

sudo su -s /bin/bash jenkins

Access the user’s .git-credentials file, and remove the reference to the pre-existing Function App SCM.

Delete the .git-credentials.lock file.
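Put together, the cleanup looks roughly like this from the jenkins user’s shell (FUNCTION_APP_NAME is a placeholder for the SCM hostname of the old Function App):

# drop the stale credential entry, then remove the lock file
sed -i '/FUNCTION_APP_NAME.scm.azurewebsites.net/d' ~/.git-credentials
rm ~/.git-credentials.lock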

After doing this, try running the job again and ensure the issue has been solved.

Setting up a Configuration Page for Your 3.90 or Below NopCommerce Plugin

After writing your NopCommerce plugin, a common next step is to create a ‘Configure’ page to allow for configuring the settings of the plugin.

First, create a model that will represent the values to configure in the plugin – Models/YourPluginModel.cs

Next, create a controller Controllers/YourPluginController.cs, which will look something like this:

...
[AdminAuthorize]
public class YourPluginController : BasePluginController
{
    private readonly ILocalizationService _localizationService;

    public YourPluginController(ILocalizationService localizationService)
    {
        _localizationService = localizationService;
    }

    [ChildActionOnly]
    public ActionResult Configure()
    {
        var model = new YourPluginModel();

        return View("~/Plugins/Misc.YourPlugin/Views/YourPlugin/Configure.cshtml", model);
    }

    [HttpPost]
    [ChildActionOnly]
    public ActionResult Configure(YourPluginModel model)
    {
        if (!ModelState.IsValid)
        {
            return Configure();
        }

        // save the submitted settings here before notifying the admin

        SuccessNotification(_localizationService.GetResource("Admin.Plugins.Saved"));

        return Configure();
    }
}
...

Finally, create a view Views/YourPlugin/Configure.cshtml:

@{
    Layout = "";
}
@using Nop.Web.Framework

@using (Html.BeginForm())
{
    @Html.AntiForgeryToken()
    <div class="panel-group">
        <div class="panel panel-default">
            <div class="panel-body">
                <div class="form-group">
                    <div class="col-md-3">
                         
                    </div>
                    <div class="col-md-9">
                        <input type="submit" name="save" class="btn bg-blue" value="@T("Admin.Common.Save")" />
                    </div>
                </div>
            </div>
        </div>
    </div>
}

Make sure the newly created View’s ‘Copy to Output Directory’ property is set to ‘Copy if newer’.

Now when running your store, you should be able to access the Configure page for your plugin, assuming the plugin is installed.