Book Notes – The Shallows (What the Internet is Doing to Our Brains)

The premise of the book is that as we integrate the Internet into our lives more and more, we shift from thinking deeply and creatively about topics to learning quickly at a surface level.

Brain Plasticity

The brain is capable of being rewired and redeveloped throughout the entirety of a human life – even if that ability slows down with age. This applies in many cases, such as the loss of capabilities (blindness, loss of a limb, etc.) – the associated gray matter will be rerouted to other functions in the brain.

Plasticity, however, means that as changes occur in a person’s thinking and interaction, they can have lasting effects on how the brain processes in the future. In a sense, we become how we think and practice.

Connection to Medium

In writing and creation, there is a tie-in to the medium being used for expression. For instance, the way a book is written will change depending on whether it is written online (where hyper-linking is available) or as a paperback.

A prime example of this: when using a Kindle (which I usually use to read), you have the ability to go to different websites and look up words right on the device – diverting from the general process of reading the book from start to finish (I took a look while writing this – there’s no way to turn that off on the Kindle, unfortunately).

In the comparison of digital vs. print, there’s also the contrast between mutable (digital) and immutable (print). This pushes toward speed over quality, as publishing becomes more of an ongoing process. It could also remove the pressure of perfection that comes with writing an immutable novel.

Personal note: I don’t see the above as an issue, although I’m sure I’m biased being in software. In this case, I see it as more of a “perfect is the enemy of good”.

Internet as a Medium of Distraction

The Internet serves as a medium that promotes distraction, which comes from sources such as:

  • Social media (engineered to provide stimulus rewards for checking regularly, much like a rat pulling a lever)
  • Page layout (advertisements)
  • Pop-ups

This does potentially come with some benefit – because browsing the Internet promotes a shallow and quick level of thinking, it can keep minds sharp. However, it can also mean we become good at identifying information while losing the ability to think creatively (especially thinking our own thoughts) on particular topics. Neuropsychologist Jordan Grafman says, “The more you multitask, the less deliberative you become; the less able to think and reason out a problem.”

The major issue is the difficulty of learning and processing when our cognitive load – the information flowing into our working memory at any given moment – is increased by use of the Internet. Essentially, we can only handle so much information (2-4 things max) at a time before our ability to absorb information is diminished and the information just rolls off.

Memory Collection, Digital vs. Biological Memory

When making memories, we have two types:

  • Short-term – like RAM, in working thoughts
  • Long-term – like HDD, stored away in brain

Short-term memories become long-term memories after a period of time and reinforcement, gaining new context on each subsequent retrieval and storage.

This changes with technology – memories are stored without context, exactly as they are, which can limit their usefulness.

Personal note: This is another area where I don’t necessarily see a negative. It’s been proven that human memory is flawed – our long-term memory is tarnished over time, moving away from reality and becoming what we want to believe. Using technology to store memories is a way to ensure those memories can be viewed intact. Something like writing a journal can show the thoughts of a particular experience at the time, and let you see how they compare to the long-term memory stored.

Tool Usage and Dependence

Using tools to amplify output (a calculator to perform simple math, or physical tools to perform labor) has a numbing effect on our ability to perform the task ourselves.

This can especially apply to the software we use when interacting with a computer. A study comparing users of helpful and bare-bones software showed: “The subjects using the bare-bones software consistently demonstrated ‘more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge.’ The more that people depended on explicit guidance from software programs, the less engaged they were in the task and the less they ended up learning.”

This is reflected in the way we search for information – we use tools that provide easy results without contemplation on our end, limiting the amount of creativity we can apply to our work.

Book Notes – Deep Work by Cal Newport

Pretty good read overall – a lot of this is preaching to the choir for me in terms of working deeply on things and applying focus. For someone not familiar with the benefits of deep work, it makes for a good explanation, and for someone already familiar, it provides some exercises to figure out the important things to work on.

By the time I read this, I was already doing a few of these things, like scheduling all of my time on a calendar and weighing everything on a pro/con list. Some of the exercises (memorizing a deck of cards) seem extreme, and I think you can get a lot of the benefit with the calendar scheduling tactic alone.

I think the main theme I get from this book is the concept of deliberation, or being specific on how you use your time, alongside how you design your lifestyle.

The main thing I’m going to try is to cut down on social media even more than I already do. I already don’t use a lot of social media, but I’m going to try removing it completely for a month or two and see how I feel after.

Importance of Deep Work

We’re in a society full of grabs for shallow attention, and being able to work deeply will become a defining trait that makes one valuable in the economy.

There is more focus on high-skilled workers being valuable (especially with automation on the horizon), and talent pools are increasing with remote work capabilities.

Learning complex systems quickly is a deep work trait, which will become valuable over time.

Deliberation in Activity Selection

The book describes two approaches when selecting a new tool/process:

  • Any-Benefit – use the tool if any benefit can be found in its use. This often gets applied even in cases where the negatives of a behavior outweigh a smaller positive.
  • Craftsman – carefully deciding whether to incorporate a tool based on pros and cons.

To work with this, an exercise is provided to determine those activities:

  1. Identify the main high-level goals in life (family, career, friendships, etc.)
  2. Identify the 2-3 most important activities for achieving said goals.

The important activities tend to pay off in the 80/20 sense, in that they are much more productive uses of time and should be focused on. Since your time and energy are a zero-sum game, it makes more sense to put as much time and effort as possible into those activities.

Planning Ahead

The book promotes a methodology of planning ahead – scheduling every minute of your day (including leisure time) on your calendar in blocks.

This reinforces the zero-sum idea above: you can really only do one thing at a time, and doing one thing brings an opportunity cost for other activities. It’s also recommended to do this with leisure time, meaning you choose ahead of time how you’ll spend your off-time.

Other Points of Interest

  • Attention residue – when switching between tasks, your mind will linger on the original task for some time. Optimize for this by having a “shut-down” process and giving yourself uninterrupted blocks of time that are as long as possible.
  • Deep work generally involves getting into the “flow” state – losing track of time while the effort put towards a task feels effortless. The human mind thrives in this state.
  • Good intentions to work deeply are not enough – it’s important to set yourself up for success. “It’s not the will to win that matters – everyone has that. It’s the will to prepare to win that matters.”
  • It’s important to expose yourself to hubs to gain new information, but then have spokes where you can perform deep work as required.
  • To be able to perform deep work well, you’ll also need moments of disconnection – when you’re working, work; when you’re done, be done completely.
    • This issue is heightened by our ability to access stimuli at any moment, making sure we never get bored. Put away your phone and just sit during moments of idleness.
  • Productive meditation – a period where you’re occupied physically but not mentally (e.g., walking), giving the opportunity to focus on a single problem.

Set up nopCommerce 4.2 on a Linux Azure VM

Infrastructure Setup

First, create an Ubuntu-based VM, opening ports 80, 443, and 22.

Afterwards, create an Azure SQL Database.
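
If you prefer the command line, a minimal sketch with the Azure CLI looks roughly like this (the resource group, VM, and SQL server names below are placeholders I made up – adjust everything to your environment):

az group create --name nop-rg --location eastus

# Ubuntu VM with SSH key auth
az vm create --resource-group nop-rg --name nop-vm \
  --image UbuntuLTS --admin-username azureuser --generate-ssh-keys

# Open the required ports (each rule needs its own priority)
az vm open-port --resource-group nop-rg --name nop-vm --port 22 --priority 100
az vm open-port --resource-group nop-rg --name nop-vm --port 80 --priority 110
az vm open-port --resource-group nop-rg --name nop-vm --port 443 --priority 120

# Azure SQL logical server and database
az sql server create --resource-group nop-rg --name nop-sql-server --location eastus \
  --admin-user nopadmin --admin-password 'YOUR_STRONG_PASSWORD'
az sql db create --resource-group nop-rg --server nop-sql-server --name nopcommerce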

VM Configuration

Once all of the infrastructure is created, set up a DNS record for the public IP created for the VM to make access easier.
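
One way to do this from the CLI (a sketch – the public IP resource name is whatever the VM creation generated, assumed here to be nop-vmPublicIP):

# Adds a DNS label, giving you <label>.<region>.cloudapp.azure.com
az network public-ip update --resource-group nop-rg --name nop-vmPublicIP \
  --dns-name your-store-name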

Next, SSH into the VM using the domain name created above. Run the following commands to install the .NET Core runtime and SDK:

wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install aspnetcore-runtime-2.2 dotnet-sdk-2.2

Next, install and enable nginx:

sudo apt-get install nginx
sudo systemctl enable nginx
sudo systemctl start nginx

Edit /etc/nginx/sites-available/default:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name   YOUR_DOMAIN.com;

    location / {
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}

Finally, restart NGINX:

sudo systemctl restart nginx

Set up NopCommerce

This step will lay out the process of downloading the source version of nopCommerce, building it, and getting it running.

If you have a custom copy of nopCommerce from source, you would publish it and use the output here instead.

If starting from scratch, run the following:

sudo wget https://github.com/nopSolutions/nopCommerce/releases/download/release-4.20/nopCommerce_4.20_Source.rar

sudo apt-get install unrar -y
mkdir nopSource
unrar x nopCommerce_4.20_Source.rar nopSource/

cd nopSource/src
dotnet clean -c Release
dotnet build -c Release

cd Presentation/Nop.Web
sudo mkdir /var/www/nop
sudo dotnet publish -c Release -o /var/www/nop

cd /var/www/nop
sudo mkdir bin
sudo mkdir logs

cd ..

sudo chown -R www-data:www-data nop/

Now create the nopCommerce service by creating the file /etc/systemd/system/nop.service:

[Unit]
Description=nopCommerce

[Service]
WorkingDirectory=/var/www/nop
ExecStart=/usr/bin/dotnet /var/www/nop/Nop.Web.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=nop
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false

[Install]
WantedBy=multi-user.target

Enable the above service:

sudo systemctl enable nop.service
sudo systemctl start nop.service

Once this is all set, you should be able to install the store, using the Azure SQL Database created above. When installing, you may want to use the connection string provided by Azure:
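
For reference, the ADO.NET connection string shown in the Azure portal looks roughly like the following (server name, database name, and credentials are placeholders):

Server=tcp:nop-sql-server.database.windows.net,1433;Initial Catalog=nopcommerce;Persist Security Info=False;User ID=nopadmin;Password=YOUR_PASSWORD;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;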

While installing, you’ll likely see a 504 timeout from NGINX – this is okay; just give the installation some time to finish, and you should eventually be able to see the storefront.

Set up SSL using Let’s Encrypt

To set up Let’s Encrypt, follow the Certbot directions, making sure to enable auto-redirect.
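
On Ubuntu 18.04 with NGINX, the Certbot flow looks roughly like this (a sketch – the Certbot site has the authoritative, current instructions, and package names may differ by distro):

sudo apt-get install certbot python3-certbot-nginx -y
# Prompts for your domain and whether to redirect HTTP to HTTPS (choose redirect)
sudo certbot --nginx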

Afterwards, make the following change to the appsettings.json file:

"Hosting": {
    //Set the settings below to "true" if your hosting uses a load balancer
    "UseHttpClusterHttps": false,
    "UseHttpXForwardedProto": true,

Restart the NopCommerce service and ensure you can view the front page without an SSL warning.

Set up Auto-Renewing Let’s Encrypt Certificates on App Service Linux

Set up the ability to assign auto-renewing Let’s Encrypt certs to an Azure App Service Linux instance with shibayan’s appservice-acmebot. This will create a Function App that automatically updates the App Service as needed.

Before getting started, you’ll need to have a few things:

  • An Azure App Service (Linux-based).
  • DNS records being served using Azure DNS.

Procedure

First, deploy the solution to your Azure subscription. Use the following for configuration:

  • Resource group – certbot-rg
  • App Name Prefix – certbot-fa
  • Mail Address – your email address.

Next, set up the ability for the Certbot function app to access your Linux App Service:

  • Set ‘App Service Authentication’ to On.
  • Set ‘Log in with Azure Active Directory’
  • Set up the ‘Azure Active Directory’ Auth Provider
    • Set Management mode to ‘Express’
    • Create a new AD app
  • Save changes.

Now, assign the Website Contributor and Web Plan Contributor roles to the Certbot function app:
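
This can be done from the portal’s IAM blade, or with the Azure CLI along these lines (a sketch – FUNCTION_APP_PRINCIPAL_ID is the object ID of the function app’s identity, and YOUR_APP_RG is the resource group holding the Linux App Service):

az role assignment create --assignee FUNCTION_APP_PRINCIPAL_ID \
  --role "Website Contributor" --resource-group YOUR_APP_RG
az role assignment create --assignee FUNCTION_APP_PRINCIPAL_ID \
  --role "Web Plan Contributor" --resource-group YOUR_APP_RG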

Finally, restart the function app and access the Function App URL (https://your-function-app.azurewebsites.net/add-certificate). You should see the UI listing the available resource groups and App Service names (access can take 30-60 minutes to reflect in Azure, so if you’re seeing a 401, wait a while for things to resolve).

Once the certificate is added, you’re all set! Renewals will automatically occur, as the app will scan regularly to check when to renew the certs.

Quickly Create Graylog Streams

When using Graylog, you may run into a case where you’re creating a large number of streams to organize data. A quick way to do this is to use the API to add the streams programmatically.

Use the GET endpoint to get the stream payload:

https://YOUR_GRAYLOG_ENDPOINT/api/streams

Grab one of the streams and format the payload to use for the POST endpoint, and then insert the streams using the API.
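
A rough sketch with curl (endpoint, credentials, title, and index set ID are placeholders – copy the real field values from the GET response above):

# List existing streams to grab a payload to use as a template
curl -u ADMIN_USER:PASSWORD https://YOUR_GRAYLOG_ENDPOINT/api/streams

# Create a new stream from the modified payload
curl -u ADMIN_USER:PASSWORD -X POST https://YOUR_GRAYLOG_ENDPOINT/api/streams \
  -H 'Content-Type: application/json' -H 'X-Requested-By: cli' \
  -d '{"title":"EXAMPLE_STREAM","description":"Created via API","index_set_id":"YOUR_INDEX_SET_ID","rules":[],"remove_matches_from_default_stream":false}'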

Setting up my Android Device

Initial Setup

First, go through the default setup, connecting to Wi-Fi and Google.

Turn off backup to Google Drive.

Set up all of the methods of accessing phone (face recognition, etc.)

Configuration

  • Turn on night mode.
  • Set resolution to max.
  • Set navigation bar to full screen gestures
  • Show battery percentage
  • Set sound mode to vibrate

Turn off Bixby home.

Install the following apps from the Play Store:

  • Outlook (login, turn off notifs, swipe right delete, turn off focused inbox, week start Monday, set calendar widget)
  • Firefox (sign in)
  • Messenger (sign in, turn on dark mode, turn off chat bubbles)
  • Dropbox (sign in)
  • Instagram (sign in)
  • Netflix (sign in)
  • YNAB (sign in)
  • Slack (sign in)
  • Fly Delta (sign in)
  • Ecobee (sign in)
  • Fi (sign in)
  • Fitbit (sign in)
  • WhatsApp (sign in, turn off notifs)

Configure the following:

  • Youtube (turn on dark mode)

Uninstall the following:

  • Duo
  • Drive
  • Galaxy Wearable
  • Google Play Movies & TV
  • Kids Home
  • Photos
  • Samsung Global Goals
  • Samsung Health
  • Samsung Pay
  • Smart Switch
  • SmartThings
  • Spotify

Moving Resources to a Different Azure Tenant

Let’s say you have some resources in your Azure tenant that you want to move to a different tenant (such as if you’re going to transfer ownership of a particular set of resources to another company). You can do that with the following steps:

Move Existing Resources to a New Subscription

  1. Create a new subscription in your current tenant as the current account admin (use your own information for now – you will change it over later – so you can act as the Account Admin during the move).
  2. Rename the subscription to something more meaningful to avoid confusion.
  3. Sign in to your tenant and move the desired resources over to the new subscription (you may need to recreate certain resources, as some resources cannot be moved). A CLI sketch for the move follows this list.
    1. If running any VM backups, you’ll need to delete all images before you can move a VM over. After moving over, make sure to turn backups back on.
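
For step 3, the move itself can also be scripted; a minimal sketch with the Azure CLI (the resource group names and subscription ID are placeholders):

# Collect the IDs of the resources to move
resourceIds=$(az resource list --resource-group OLD_RG --query "[].id" -o tsv)

# Move them into a resource group under the new subscription
az resource move --ids $resourceIds \
  --destination-subscription-id NEW_SUBSCRIPTION_ID \
  --destination-group NEW_RG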

Transfer Subscription

Next, create a new Microsoft account, and create an Azure account to transfer the subscription to – using the phone and credit card of the responsible party. When going through this process, you’ll either need the person with you when setting up the account, or have them set it up on their own.

Once the account is created, go back to the newly created subscription from above and transfer billing ownership using the email address of the Azure account from above.

Setting up the transfer request will send an email to that address with directions to accept the transfer.

Additional Steps

Once the transfer is done, they will own the resources. Here are a few more steps you may want to take:

Set up Custom Domain for Azure AD

To set up the ability to use a domain for login to the tenant, you’ll need DNS access to a preferred domain.

Create an Administrative User

If you want to keep the ability to administer the resources, create a user for yourself and give them the following:

  • Global Admin rights in Azure AD
  • Owner rights to the subscription

Upgrading nopCommerce

Upgrading nopCommerce is a manual process that requires a decent amount of work to get in place. When upgrading, you’ll need to upgrade sequentially (3.80 -> 3.90 -> 4.00 -> etc.).

NOTE: When performing this process on a live site, make sure to close the store during the upgrade process so no changes occur on the site during upgrade.

Upgrade Database

First, you’ll need to upgrade the database using a script provided by nopCommerce. Download the script here based on the version and apply the changes in the SQL script to a copied version of the database.
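
If you’d rather apply the script from the command line than from a GUI tool, a sketch with sqlcmd (server, database, credentials, and the script file name are placeholders):

# Run the version-specific upgrade script against a copy of the store database
sqlcmd -S your-server.database.windows.net -d nopcommerce_copy \
  -U your_sql_user -P 'YOUR_PASSWORD' -i upgrade.sql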

Upgrade Site Files

The script above will have a README.txt file that describes the next steps, which are essentially:

  • Download the source code for the upgrade version
  • Port over the specific configuration files (Settings.txt, InstalledPlugins.txt, etc.), making sure to point to the newly created database above.

If storing images on the filesystem, make sure to copy the images over to /Content/images as well.

At this point, you should be able to see the front page of the site.

Restore Theme

If you were using a theme in the previous version, you’ll need to get the correct version of it and restore it.

First, if there are any SQL scripts for the theme, run those on the upgraded database.

Upload the theme to the nopCommerce installation, alongside any plugins required by the theme.

After this is done, check the site again and confirm the new theme is being used.

Restore Plugins

Finally, go through the plugins from the original site and add the appropriate version. If you’re using the source code version of nopCommerce, you’ll want to make sure the plugin is set up to use the latest version of nopCommerce, and that it compiles.

Some plugins will require database upgrades as well.

Setting up Graylog in Azure

To get started with installing Graylog, do the following:

Create a VM using the following:

  • Image: Ubuntu 18.04 LTS
  • Minimum Size: B2s (~$30/month)
  • Open ports 80,443,22

SSH into the server and follow this guide to get Graylog installed.

To set up public access, set the following variables in the Graylog config file:

http_bind_address = PRIVATE_IP
http_external_uri = http://<PUBLIC_IP>

Once fully installed, set up an Apache reverse proxy:

sudo apt-get install apache2 -y
sudo a2enmod proxy_http proxy_ajp rewrite deflate headers proxy_balancer proxy_connect proxy_html ssl lbmethod_byrequests slotmem_shm proxy

Edit /etc/apache2/sites-enabled/000-default.conf:

ProxyPass "/"  "http://PRIVATE_IP:9000/"
ProxyPassReverse "/"  "http://PRIVATE_IP:9000/"

Then restart both servers:

sudo systemctl restart graylog-server.service
sudo systemctl restart apache2

To verify the installation, browse to <IP_ADDRESS>. If you see the Graylog login screen, you’ve successfully set up the server.

Finish by setting the SSH networking rule to a trusted IP to improve security.

Set up HTTPS using Let’s Encrypt

To set up HTTPS using Let’s Encrypt, use the Certbot directions.
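
With Apache on Ubuntu 18.04, that looks roughly like the following (a sketch – defer to the Certbot site for the current steps):

sudo apt-get install certbot python3-certbot-apache -y
# Prompts for your domain and whether to redirect HTTP to HTTPS
sudo certbot --apache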

Once that’s done, make sure to change http_external_uri in the Graylog config file and restart Graylog.

Send Kubernetes Logs to Graylog

First, SSH into the server and configure Elasticsearch (/etc/elasticsearch/elasticsearch.yml) to bind to the private IP of the VM:

network.host: PRIVATE_IP

Restart Elasticsearch, then configure Graylog to listen to the new Elasticsearch host:

elasticsearch_hosts = http://PUBLIC_IP:9200

Restart Graylog, then open the firewall so that port 9200 is accessible from the cluster IP. Confirm access by trying to hit port 9200.
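
A quick check from a machine on the cluster side (placeholder IP) – a healthy node returns a small JSON document with the cluster name and version:

curl http://PUBLIC_IP:9200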

Next, set up an input in Graylog.

After that, create the RBAC role for the cluster (fluentd-rbac.yml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system

Then create the daemonset, changing the container environment variables as needed (fluentd-daemonset.yml):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "YOUR_GRAYLOG_HOST"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          # Custom values for each cluster
          - name: FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX
            value: "YOUR_LOG_PREFIX"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Deploy both of these out:

kubectl create -f fluentd-rbac.yml
kubectl create -f fluentd-daemonset.yml

Then check the logs of the daemonset to confirm correct connection:

kubectl get pods -n kube-system
kubectl logs FLUENTD_POD -n kube-system

Increase Heap Size

To increase the heap size, edit /etc/default/graylog-server, then restart graylog-server.service.
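
The relevant setting is the Java options variable in that file; for example, to raise the heap to 2 GB, change the -Xms/-Xmx values while leaving any other flags your install already has in place (values below are illustrative):

# /etc/default/graylog-server (keep any existing flags alongside the heap settings)
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g"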

References

https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/#fluentd