Creating an Angular and Azure Function API App with Azure Static Web App

Recently, Azure released Azure Static Web Apps, which offers an easy way to host static websites. Some of the perks I see immediately are:

  • Works well with SPA technologies (Angular, React, Vue)
  • Serve an API using Azure Functions
  • Automatic integration with GitHub and GitHub Actions to deploy immediately
  • Currently costs nothing (while this is in preview)

Pre-requisites

To get started, you’ll need:

  • Angular CLI
  • Azure Functions Core Tools
  • An Azure account

Create a GitHub Repo with Angular and Azure Function Apps

First, create a repo in your GitHub account, and clone that repo to your local PC.

Now create an Angular app with the CLI:

ng new NAME --directory app

Next, create an Azure Functions API (currently, there is a limitation that only allows JavaScript as the runtime):

func init NAME --javascript
mv NAME api
cd api
func new --name TestFunction --language javascript --template "HTTP trigger"

Commit the changes made above, and then let’s move onto creating the Azure Static Web App.

Creating Azure Static Web App

Next, create an Azure Static Web App in your Azure account. When doing this, do the following:

  • Sign in to your GitHub account and select the correct repository and branch.
  • For build details, use the following information (replacing APPNAME with the name of your Angular app):
    • App location: app
    • Api location: api
    • App artifact location: dist/APPNAME

Automatic Deployment

After creating the Static Web App, a GitHub Actions workflow file will be created and committed to your repo. In turn, your skeleton application should be built and deployed automatically.
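The workflow Azure generates will look roughly like the sketch below. Treat this as representative only – the action version, job name, and secret name are whatever Azure commits for you, not values to copy verbatim:

```yaml
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - master

jobs:
  build_and_deploy_job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build And Deploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "app"
          api_location: "api"
          app_artifact_location: "dist/APPNAME"
```

Note that the app location, API location, and artifact location entered in the portal end up as the `app_location`, `api_location`, and `app_artifact_location` inputs here.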

With the deployment completed, you can view the deployed UI and API by checking the URL of the Static Web App in the Azure portal:

  • UI – check the URL provided.
  • API – check the URL, plus /api/TestFunction

Further Reading

Microsoft’s Guide on Static Web Apps: https://docs.microsoft.com/en-us/azure/static-web-apps/overview

Adding Settings to a Plugin in NopCommerce (pre-4.00)

Once you’ve started creating a plugin for NopCommerce, you’ll likely want to add the ability to configure settings inside the plugin for reference later.

When adding this capability, we’re going to try to keep the settings object as immutable as possible, following functional programming as best we can – it makes things a little cleaner and puts all of the conversion between the configuration model and the settings object in one place.

First, create an ISettings implementation in the root of the plugin.

public class MyPluginSettings : ISettings
{
    public MyPluginSettings(string mySetting)
    {
        MySetting = mySetting;
    }

    public string MySetting { get; private set; }
}

We use private set here to make sure we initialize the settings object from a configuration model (when saving) and disallow the ability to change the settings object – making it immutable.

Next, create the configuration model to be used on the configuration page – this will usually have the same values as the Settings object above:

public class ConfigurationModel
{
    [NopResourceDisplayName("MySetting")]
    public string MySetting { get; set; }
}

Next, create the base plugin controller, which will hold the implementation for the configure page:

public class YourPluginController : BasePluginController
{
    [ChildActionOnly]
    [AdminAuthorize]
    [HttpGet]
    public ActionResult Configure()
    {
        // implementation..
    }

    [ChildActionOnly]
    [AdminAuthorize]
    [HttpPost]
    public ActionResult Configure(ConfigurationModel model)
    {
        // implementation..
    }
}

Finally, create the View.

Recording HTTP Request Body with Java, Spring Boot and Application Insights

Building off of my previous post about integrating App Insights into Spring Boot, I also wanted to record the request body in each trace sent to Azure. This is especially useful when looking up failures, since you’ll be able to see the request body that caused the failure.

Important Note Regarding Privacy

Before getting started, something to consider is the issue of privacy – by activating this, you’ll be storing request body information into Azure, which can be an issue if you’re dealing with sensitive information.

If that’s the case, be sure to process the body extracted by this implementation and remove the sensitive information from the payload before adding it to the request telemetry.

Bypassing the HttpServletRequest issue

Java servlets do not allow reading a request body multiple times – if you try by calling getReader() more than once, you’ll get an IllegalStateException. To fix this, we’ll create a custom implementation of HttpServletRequest that caches the request body, allowing us to read the body ourselves and then pass the request further down the Spring Boot filter chain.
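Stripped of the servlet API, the caching idea is simple: drain the one-shot source stream into a byte array once, then hand out a fresh reader over those bytes for every read. A minimal standalone sketch of just that idea (the class and method names here are mine, not part of any library):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Standalone sketch – not part of the servlet API.
class CachedBody {
    private final byte[] cachedBody;

    // Drain the one-shot source stream exactly once.
    CachedBody(InputStream source) throws IOException {
        this.cachedBody = source.readAllBytes();
    }

    // Each call returns an independent reader over the cached bytes,
    // so the body can be read any number of times.
    BufferedReader newReader() {
        return new BufferedReader(new InputStreamReader(
                new ByteArrayInputStream(cachedBody), StandardCharsets.UTF_8));
    }
}
```

The wrapper classes that follow apply this same pattern behind the standard HttpServletRequestWrapper and ServletInputStream interfaces.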

Create the CachedBodyHttpServletRequest class:

package com.example.demo;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

import org.springframework.util.StreamUtils;

public class CachedBodyHttpServletRequest extends HttpServletRequestWrapper {
    private byte[] cachedBody;

    public CachedBodyHttpServletRequest(HttpServletRequest request) throws IOException {
        super(request);
        InputStream requestInputStream = request.getInputStream();
        this.cachedBody = StreamUtils.copyToByteArray(requestInputStream);
    }

    @Override
    public ServletInputStream getInputStream() throws IOException {
        return new CachedBodyServletInputStream(this.cachedBody);
    }

    @Override
    public BufferedReader getReader() throws IOException {
        // Create a reader from cachedContent
        // and return it
        ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(this.cachedBody);
        return new BufferedReader(new InputStreamReader(byteArrayInputStream));
    }

    public String getBody() throws IOException {
        return getReader().lines().collect(Collectors.joining(System.lineSeparator()));
    }
}

Next, create the CachedBodyServletInputStream class:

package com.example.demo;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;

public class CachedBodyServletInputStream extends ServletInputStream {
    private InputStream cachedBodyInputStream;

    public CachedBodyServletInputStream(byte[] cachedBody) {
        this.cachedBodyInputStream = new ByteArrayInputStream(cachedBody);
    }

    @Override
    public boolean isFinished() {
        try {
            return cachedBodyInputStream.available() == 0;
        } catch (IOException e) {
            // Treat an unreadable stream as not finished.
            return false;
        }
    }

    @Override
    public boolean isReady() {
        return true;
    }

    @Override
    public void setReadListener(ReadListener readListener) {
        throw new UnsupportedOperationException();
    }

    @Override
    public int read() throws IOException {
        return cachedBodyInputStream.read();
    }
}

Adding CachedBodyHttpServletRequest to Spring Boot Filter

To use this, you’ll create a filter that runs before the request is processed, adding the request body to the request telemetry for POST and PUT requests.

Create the CachedHttpServletRequestFilter class:

package com.example.demo;

import com.microsoft.applicationinsights.telemetry.RequestTelemetry;
import com.microsoft.applicationinsights.web.internal.RequestTelemetryContext;
import com.microsoft.applicationinsights.web.internal.ThreadContext;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@Order(value = Ordered.HIGHEST_PRECEDENCE)
@Component
@WebFilter(filterName = "ContentCachingFilter", urlPatterns = "/*")
public class CachedHttpServletRequestFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        // Skip all processing if not a POST or PUT request to improve efficiency
        if (!isPost(request) && !isPut(request)) {
            filterChain.doFilter(request, response);
            return;
        }

        RequestTelemetryContext context = ThreadContext.getRequestTelemetryContext();
        RequestTelemetry requestTelemetry = context.getHttpRequestTelemetry();

        CachedBodyHttpServletRequest cachedRequest = new CachedBodyHttpServletRequest(request);
        String body = cachedRequest.getBody();
        requestTelemetry.getProperties().put("Request Body", body);

        filterChain.doFilter(cachedRequest, response);
    }

    private boolean isPost(HttpServletRequest request) {
        return request.getMethod().equalsIgnoreCase(HttpMethod.POST.toString());
    }

    private boolean isPut(HttpServletRequest request) {
        return request.getMethod().equalsIgnoreCase(HttpMethod.PUT.toString());
    }
}

Verification

To verify, start the application and make a few calls. When observing the requests in App Insights, you should be able to see the recorded Request Body on all POST and PUT calls.

Integrate Application Insights into a Spring Boot Application

To get started, we’ll set up a basic Spring Boot application, and then add Application Insights in the next step.

Creating a Skeleton Spring Boot App

To get started, go to the Spring Initializr and create an app with the following selected:

  • Create a Gradle project
  • Add the Spring Web dependency

Once that’s done, extract the archive file given and open it in your Java IDE of choice (IntelliJ, for example).

Add the following controller, ApiController.java, to create an API endpoint:

package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ApiController {

    @GetMapping("/")
    public String doGet() {
        return "Hello World!";
    }
}

Start the application with the Gradle tasks clean, build, and bootRun. You can verify the endpoint is working with a tool such as Insomnia by calling http://localhost:8080 and seeing “Hello World!” displayed.

Once you have this working, you’re ready to start integrating Application Insights to provide analytics.

Integrating Application Insights

First, add the Application Insights dependencies to your build.gradle file:

dependencies {
    ...  // other dependencies
    compile group: 'com.microsoft.azure', name: 'applicationinsights-web-auto', version: '2.5.0'
}

Then add the ApplicationInsights.xml file under src/main/resources:

<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
   <!-- HTTP request component (not required for bare API) -->
   <TelemetryModules>
      <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebSessionTrackingTelemetryModule"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebUserTrackingTelemetryModule"/>
   </TelemetryModules>

   <!-- Events correlation (not required for bare API) -->
   <!-- These initializers add context data to each event -->
   <TelemetryInitializers>
      <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationIdTelemetryInitializer"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationNameTelemetryInitializer"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebSessionTelemetryInitializer"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserTelemetryInitializer"/>
      <Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserAgentTelemetryInitializer"/>
   </TelemetryInitializers>

</ApplicationInsights>

Finally, add code at startup that sets the App Insights Instrumentation Key, linking the application to the resource it sends data to. A quick note on this – you can add the key to the .xml file, but I prefer to read it from an environment variable, so it can be changed across different environments when deployed.

Add the following to the DemoApplication.java file:

private static final Logger log = LoggerFactory.getLogger(DemoApplication.class);

...

@PostConstruct
private void init() {
    String appInsightsKey = System.getenv("AppInsightsKey");
    if (appInsightsKey == null) {
        log.warn("App Insights Key not provided, no analytics will report.");
        return;
    }

    TelemetryConfiguration.getActive().setInstrumentationKey(appInsightsKey);
}

Verifying in App Insights

With the changes made, the last step is verifying everything works. To start the application with App Insights enabled:

  • Add the App Insights Instrumentation Key to an environment variable called AppInsightsKey.
  • Start the application.
  • Call the endpoint at http://localhost:8080
  • View the results at the App Insights screen.

Further Reading

https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-get-started?tabs=maven

Developing with nopCommerce using VSCode and Linux

Pre-requisites

To get started, you’ll need to set up:

  • VSCode with the C# and vscode-solution-explorer extensions installed
  • SQL Server

Download and Build nopCommerce Source Code

To get started, get a copy of the nopCommerce source code from their GitHub page.

After downloading the source code, open the /src folder using VSCode.

For cleaning and building the project, you have two choices:

  1. Run dotnet clean and dotnet build while in the src/ directory.
  2. Use the vscode-solution-explorer extension to clean and build the solution.

After cleaning and building, you can run the site using either:

  1. dotnet run in the src/Presentation/Nop.Web directory.
  2. The VSCode solution explorer.

Once running, you can access nopCommerce in your browser at the URL shown in the console output.

With this, you are able to run locally and perform all tasks related to administering nopCommerce, including running the initial installation.

Setting up A Jitsi Meet Server on Azure

With the COVID-19 pandemic going on, video chat is on the rise, with people using tools like Zoom, WebEx, and others. One option is to use an open-source solution like Jitsi Meet, which provides both a public cloud version you can use easily, and a version you can host on your own.

Why host on your own? You should consider hosting on your own if:

  • You’re concerned with security – hosting the software yourself ensures no third party is involved.
  • You want hands-on control over the server’s specs, so you can tune performance around when you’re actually using the video software.
  • You can control the location of the hardware, which, depending on where everyone is located, may improve performance.

Server Creation and Jitsi Installation

First, create an Azure VM with the following:

  • Running Ubuntu
  • Open inbound ports 80, 443, and 22.
  • SSH key access (recommended)

Once the VM is created, SSH into the server and install the Jitsi full suite:

wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add -

sudo sh -c "echo 'deb https://download.jitsi.org stable/' > /etc/apt/sources.list.d/jitsi-stable.list"

sudo apt-get -y update

sudo apt-get -y install jitsi-meet

When installing, you’ll be asked to configure jitsi-videobridge2 – use the URL you plan to use for this Jitsi instance. Afterwards, generate a self-signed certificate (we’ll replace it with a Let’s Encrypt certificate later).

To verify installation worked successfully, visit the IP address in a browser, making sure you use HTTPS.

Set up SSL with Let’s Encrypt

To set up SSL, you’ll need a domain name for the server in place – you can either:

  • Configure the Public IP assigned by Azure to create a domain name.
  • Use the process above, but also create an A or CNAME record for a purchased domain pointing to the server IP.

Once this is done, you can run the following to automatically create a cert:

sudo /usr/share/jitsi-meet/scripts/install-letsencrypt-cert.sh

Reference

https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md#advanced-configuration

Adding a Custom Method to a nopCommerce Core Service in a Plugin

In nopCommerce, you can extend a core service to include a new method and new functionality using both dependency injection and inheritance. For this example, we’ll use TaxService provided in Nop.Services.

Implementation

First, create a new interface in your plugin that inherits from the core service interface:

// ICustomTaxService.cs

using Nop.Services.Tax;

namespace YourNamespace
{
    public interface ICustomTaxService : ITaxService
    {
        void YourCustomMethod(); 
    }
}

After creating the interface, create the implementation class, inheriting from both the newly created interface and the base service:

// CustomTaxService.cs

namespace YourNamespace
{
    public class CustomTaxService : TaxService, ICustomTaxService
    {
        // constructor passing in all dependencies using base()
        public CustomTaxService(...) : base(...)
        {
        }

        public void YourCustomMethod() {
            // implementation here
        }
    }
}

Finally, make sure to connect the interface to the implementation in your plugin’s DependencyRegistrar.cs file:

class DependencyRegistrar : IDependencyRegistrar
{
    public int Order { get { return 1; } }

    public void Register(ContainerBuilder builder, ITypeFinder typeFinder, NopConfig config)
    {
        builder.RegisterType<CustomTaxService>().As<ICustomTaxService>();
    }
}

Usage

Once the above is created, you can reference the new service using the interface created (usually, this is done in a controller or a service):

private readonly ICustomTaxService _customTaxService;

public YourService(
    ICustomTaxService customTaxService
){
    _customTaxService = customTaxService;
}

// now use the service for anything you need!

Book Notes – Digital Minimalism

Digital Minimalism by Cal Newport lays out the benefits of, and then the steps toward, a more deliberate approach to selecting technology that benefits oneself. The book shares the same writing tone as Deep Work, providing a lot of examples for the different principles explained.

Digital Minimalism is defined as “a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.”

Overall, a pretty good read. For someone that generally follows Cal’s line of thinking, it was nice to read some of the examples (gives some inspiration) alongside the different ideas provided to balance between using technology for your own benefit and getting sucked into overuse.

Outline

The book provides a collection of steps that can be outlined below:

Digital Minimalism Principles:

  • Clutter is costly.
  • Optimization is important.
  • Intentionality is satisfying (intention trumps convenience)

Digital Declutter process – set aside all optional technologies for 30 days, then reintroduce them slowly into life after the habit for checking regularly disappears.

Digital Minimalism Practices:

  • Leave your phone at home
  • Take long walks (without headphones, use this time to think intentionally)
  • Write letters to yourself (process of writing more important than outcome)
  • Spend time alone (provides solitude)
  • Don’t click “Like” (prioritize high-value conversation vs. low-value communication, more below)
  • Consolidate texting
  • Fix or build something every week
  • Schedule your low-quality leisure
  • Join something
  • Follow leisure plans
  • Delete social media from your phone (just use on desktop/laptop)
  • Turn your devices into single-purpose computers (limit options available for general-purpose device at certain times)
  • Use social media like a professional (remember that social media companies spend millions to keep you invested as long as possible, get just the basic needs you need from it)
  • Embrace slow media
  • Dumb down your smartphone

Conversation vs. Communication

In her book Reclaiming Conversation, Turkle makes a distinction between:

  • High-value conversation, committed face to face conversation with others
  • Low-value communication, asynchronous communication with others, usually via digital means

This ties into the idea of staying away from something like “Likes” on social media – focus on dedicated conversations with others, which makes for a more fulfilling social life as a whole over trying to spread communication thin.

Personal thought: it’s interesting that something like social media makes you think you’d miss out on what people are doing – but you can always reach out individually to the people you care about to check in!

Book Notes – The Shallows (What the Internet is Doing to Our Brains)

The premise of the book is that as we move into integrating the Internet into our lives more and more, we are shifting from thinking deeply and creatively on topics, to learning quickly on a surface level.

Brain Plasticity

The brain is capable of being rewired and redeveloped throughout the entirety of a human life – even if that ability slows with age. This applies in many cases, such as the loss of capabilities (blindness, loss of a limb, etc.) – the gray matter will be rerouted in the brain to other functions.

Plasticity, however, means that as changes occur in a person’s thinking and interaction, they can have lasting effects on how the brain processes in the future. In a sense, we become how we think and practice.

Connection to Medium

In writing and creation, the medium used to express shapes the work itself. For instance, the way a book is written will change depending on whether it is written online (where hyperlinking is available) or as a paperback.

A prime example: when using a Kindle (which I usually use to read), you have the ability to go to different websites and look up words right on the device – diverting from the general process of reading the book from start to finish (I took a look while writing this – no way to turn that off on the Kindle, unfortunately).

In the comparison of digital vs. print, there’s also the connection between mutable (digital) and immutable (print). This pushes speed over quality, as publishing becomes more of an ongoing process. It could also remove the pressure of perfection seen in writing an immutable novel.

Personal note: I don’t see the above as an issue, although I’m sure I’m biased being in software. In this case, I see it as more of a “perfect is the enemy of good”.

Internet as a Medium of Distraction

The Internet serves as a medium that promotes distraction, which comes from sources such as:

  • Social media (engineered to provide stimulus rewards for checking regularly, much like a rat pulling a lever)
  • Page layout (advertisements)
  • Pop-ups

This does potentially come with some benefit – because browsing the Internet promotes a shallow and quick level of thinking, it can keep minds sharp. However, it can also mean we become good at identifying information while losing the ability to think creatively (especially to think our own thoughts) on particular topics. Neuropsychologist Jordan Grafman says, “The more you multitask, the less deliberative you become; the less able to think and reason out a problem.”

The major issue is the difficulty of learning and processing when our cognitive load – the information flowing into our working memory at any given moment – is increased by use of the Internet. Essentially, we can only handle so much information (two to four items at most) at a time before our ability to absorb it is diminished, and the information just rolls off.

Memory Collection, Digital vs. Biological Memory

When making memories, we have two types:

  • Short-term – like RAM, in working thoughts
  • Long-term – like HDD, stored away in brain

Short-term memories become long-term memories after a period of time and reinforcement, gaining new context on each subsequent retrieval and storage.

This changes with technology – memories are stored with no context, and as they are, which can limit their usefulness.

Personal note: This is another area where I don’t necessarily see a negative. It’s been proven that human memory is flawed – our long-term memories drift away from reality over time, becoming what we want to believe. Using technology to store memories is a way to ensure those memories can be viewed intact. Something like writing a journal can show the thoughts of a particular experience at the time, for comparison against the long-term memory stored.

Tool Usage and Dependence

Using tools to amplify output (a calculator for simple math, or physical tools for labor) has a numbing effect on our ability to perform the task ourselves.

This can especially apply to the software we use when interacting with a computer. A study of users given both helpful and unhelpful software showed that “the subjects using the bare-bones software consistently demonstrated ‘more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge.’ The more that people depended on explicit guidance from software programs, the less engaged they were in the task and the less they ended up learning.”

This reflects in the way we search for information – we use tools that provide easy results without contemplation on our end, limiting the amount of creativity we can apply to our work.